Compare commits

...

10 Commits

Author SHA1 Message Date
1f3f111d88 Add metrics monitoring stack with VictoriaMetrics, Grafana, and node_exporter
Implement complete monitoring infrastructure following rick-infra principles:

Components:
- VictoriaMetrics: Prometheus-compatible TSDB (7x less RAM usage)
- Grafana: Visualization dashboard with Authentik OAuth/OIDC integration
- node_exporter: System metrics collection (CPU, memory, disk, network)

Architecture:
- All services run as native systemd binaries (no containers)
- localhost-only binding for security
- Grafana uses native OAuth integration with Authentik (not forward_auth)
- Full systemd security hardening enabled
- Proxied via Caddy at metrics.jnss.me with HTTPS

Role Features:
- Unified metrics role (single role for complete stack)
- Automatic role mapping via Authentik groups:
  - authentik Admins OR grafana-admins -> Admin access
  - grafana-editors -> Editor access
  - All others -> Viewer access
- VictoriaMetrics auto-provisioned as default Grafana datasource
- 12-month metrics retention by default
- Comprehensive documentation included

Security:
- OAuth/OIDC SSO via Authentik
- All metrics services bind to 127.0.0.1 only
- systemd hardening (NoNewPrivileges, ProtectSystem, etc.)
- Grafana accessible only via Caddy HTTPS proxy

Documentation:
- roles/metrics/README.md: Complete role documentation
- docs/metrics-deployment-guide.md: Step-by-step deployment guide

Configuration:
- Updated rick-infra.yml to include metrics deployment
- Grafana port set to 3001 (Gitea uses 3000)
- Ready for multi-host expansion (designed for future node_exporter deployment to production hosts)
2025-12-28 19:18:30 +01:00
eca3b4a726 removing backups of caddy configs 2025-12-28 18:19:05 +01:00
9bb9405b90 added jnss-web 2025-12-23 23:36:51 +01:00
bfd6f22f0e Add Vaultwarden password manager role with PostgreSQL and SSO support
- Implement complete Vaultwarden deployment using Podman Quadlet
- PostgreSQL backend via Unix socket with 777 permissions
- Caddy reverse proxy with WebSocket support for live sync
- Control-node admin token hashing using argon2 (OWASP preset)
- Idempotent token hashing with deterministic salt generation
- Full Authentik SSO integration following official guide
- SMTP email configuration support (optional)
- Invitation-only user registration by default
- Comprehensive documentation with setup and troubleshooting guides

Technical Details:
- Container: vaultwarden/server:latest from Docker Hub
- Database: PostgreSQL via /var/run/postgresql socket
- Port: 8080 (localhost only, proxied by Caddy)
- Domain: vault.jnss.me
- Admin token: Hashed on control node with argon2id
- SSO: OpenID Connect with offline_access scope support

Role includes automatic argon2 installation on control node if needed.
2025-12-22 21:33:27 +01:00
89b43180fc Refactor Nextcloud configuration to use OCC script approach and add email/OIDC support
Major architectural changes:
- Replace config file templating with unified OCC command script
- Remove custom_apps mount overlay that caused Caddy serving issues
- Implement script-based configuration for idempotency and clarity

Configuration improvements:
- Add email/SMTP support with master switch (nextcloud_email_enabled)
- Add OIDC/SSO integration with Authentik support
- Add apps installation (user_oidc, calendar, contacts)
- Enable group provisioning and quota management from OIDC
- Set nextcloud_oidc_unique_uid to false per Authentik docs

Files removed:
- nextcloud.config.php.j2 (replaced by OCC commands)
- redis.config.php.j2 (replaced by OCC commands)
- optimization.yml (merged into configure.yml)

Files added:
- configure-nextcloud.sh.j2 (single source of truth for config)
- configure.yml (deploys and runs configuration script)

Documentation:
- Add comprehensive OIDC setup guide with Authentik integration
- Document custom scope mapping and group provisioning
- Add email configuration examples for common providers
- Update vault variables documentation
- Explain two-phase deployment approach

Host configuration:
- Change admin user from 'admin' to 'joakim'
- Add admin email configuration
2025-12-21 14:54:44 +01:00
846ab74f87 Fix Nextcloud DNS resolution and implement systemd cron for background jobs
- Enable IP forwarding in security playbook (net.ipv4.ip_forward = 1)
- Add podman network firewall rules to fix container DNS/HTTPS access
- Implement systemd timer for reliable Nextcloud background job execution
- Add database optimization tasks (indices, bigint conversion, mimetypes)
- Configure maintenance window (04:00 UTC) and phone region (NO)
- Add security headers (X-Robots-Tag, X-Permitted-Cross-Domain-Policies)
- Create Nextcloud removal playbook for clean uninstall
- Fix nftables interface matching (podman0 vs podman+)

Root cause: nftables FORWARD chain blocked container egress traffic
Solution: Explicit firewall rules for podman0 bridge interface
2025-12-20 19:51:26 +01:00
90bbcd97b1 Add Gitea email configuration and document SMTP authentication troubleshooting
Changes:
- Configure Gitea mailer with Titan Email SMTP settings
- Add SMTP_AUTH = PLAIN for authentication method specification
- Update SMTP password in vault (vault_gitea_smtp_password)

Email Status:
Currently non-functional due to SMTP authentication rejection by Titan Email
servers. Error: 535 5.7.8 authentication failed

Troubleshooting Performed:
- Tested both port 587 (STARTTLS) and 465 (SSL/TLS)
- Verified credentials work in webmail
- Tested AUTH PLAIN and AUTH LOGIN methods
- Removed conflicting TLS settings
- Both authentication methods rejected despite correct credentials

Root Cause:
The issue is NOT a Gitea configuration problem. Titan Email SMTP server
is rejecting all authentication attempts from the VPS (69.62.119.31)
despite credentials being correct and working in webmail.

Possible causes:
- SMTP access may need to be enabled in Hostinger control panel
- VPS IP may require whitelisting
- Account may need additional verification for SMTP access
- Titan Email plan may not include external SMTP access

Documentation:
Created comprehensive troubleshooting guide at:
docs/gitea-email-troubleshooting.md

Files Modified:
- roles/gitea/templates/app.ini.j2 (+1 line: SMTP_AUTH = PLAIN)
- docs/gitea-email-troubleshooting.md (new file, complete troubleshooting log)
- host_vars/arch-vps/vault.yml (updated SMTP password - not committed)

Next Steps:
- Check Hostinger control panel for SMTP/IMAP access toggle
- Test SMTP from different IP to rule out IP blocking
- Contact Hostinger/Titan support for SMTP access verification
- Consider alternative email providers if Titan SMTP unavailable
2025-12-19 21:25:14 +01:00
1be7122251 Update task list - Gitea OAuth and registration configuration complete 2025-12-18 21:09:47 +01:00
467e79c84b Configure Gitea as private OAuth-enabled Git server with email support
Major Changes:
- Configure private Git server with OAuth-preferred authentication
- Integrate Titan Email for notifications and OAuth workflows
- Enable CI/CD Actions and repository mirroring
- Implement enhanced security hardening

Authentication & Access Control:
- Require sign-in for all access (unauthorized users blocked)
- OAuth via Authentik as primary login method (password form hidden)
- Password authentication still functional as backup via direct URL
- Registration disabled (admin-only user creation)
- Auto-registration for OAuth users with account linking support

Email Configuration (Titan Email):
- SMTP: smtp.titan.email:587 (STARTTLS)
- From address: hello@jnss.me
- Used for: OAuth account linking, notifications, confirmations
- Subject prefix: [Gitea]

Repository Privacy & Features:
- Private repositories by default (public repos allowed)
- Unauthorized users cannot view any content (must sign in)
- External integrations disabled (ext_issues, ext_wiki)
- Manual repository creation required (no push-to-create)
- LFS enabled for large file storage

Features Enabled:
- CI/CD Actions with GitHub actions support
- Repository mirroring (pull/push mirrors enabled)
- User organization creation
- Webhook security (restricted to private/loopback)

Security Enhancements:
- HTTPS-only session cookies with strict SameSite policy
- CSRF cookie HTTP-only protection
- Password breach checking (HaveIBeenPwned)
- 1-hour session timeout (reduced from 24h)
- Reverse proxy trust limited to Caddy only
- API Swagger docs disabled in production

Configuration Sections Added:
- [oauth2_client] - OAuth integration settings
- [mailer] - Email/SMTP configuration
- [session] - Enhanced session security
- [actions] - CI/CD workflow configuration
- [mirror] - Repository mirroring settings
- [api] - API access configuration
- [webhook] - Webhook security restrictions
- [service.explore] - Public content settings

Files Changed:
- roles/gitea/defaults/main.yml: +97 lines (OAuth, email, security vars)
- roles/gitea/templates/app.ini.j2: +94 lines (config sections)
- host_vars/arch-vps/vault.yml: +1 line (SMTP password - not committed)

Deployment Status:
- Successfully deployed to arch-vps
- Service running and healthy
- Ready for OAuth provider configuration in Authentik
- Tested: HTTP access, configuration generation, service health
2025-12-18 21:09:31 +01:00
cf71fb3a8d Implement SSH passthrough mode and refactor Gitea domain configuration
Major Changes:
- Add dual SSH mode system (passthrough default, dedicated fallback)
- Refactor domain configuration to use direct specification pattern
- Fix critical fail2ban security gap in dedicated mode
- Separate HTTP and SSH domains for cleaner Git URLs
2025-12-17 21:51:24 +01:00
69 changed files with 5994 additions and 260 deletions

View File

@@ -0,0 +1,211 @@
# Gitea Email Configuration Troubleshooting
## Summary
Attempted to configure Gitea email functionality using Titan Email (Hostinger) SMTP service. Email sending is currently **non-functional** due to SMTP authentication rejection by Titan Email servers.
## Configuration Details
### Email Provider
- **Provider:** Titan Email (by Hostinger)
- **Account:** hello@jnss.me
- **SMTP Server:** smtp.titan.email
- **Ports Tested:** 587 (STARTTLS), 465 (SSL/TLS)
### Gitea Configuration
```ini
[mailer]
ENABLED = true
PROTOCOL = smtp+starttls
SMTP_ADDR = smtp.titan.email
SMTP_PORT = 587
FROM = hello@jnss.me
USER = hello@jnss.me
PASSWD = <vault_gitea_smtp_password>
SUBJECT_PREFIX = [Gitea]
SEND_AS_PLAIN_TEXT = false
SMTP_AUTH = PLAIN
```
## Issue Description
Gitea fails to send emails with the following error:
```
Failed to send emails: failed to authenticate SMTP: 535 5.7.8 Error: authentication failed
```
## Troubleshooting Performed
### 1. Credential Verification
- **Webmail access:** Successfully logged into https://mail.titan.email/ with credentials
- **Send/Receive:** Can send and receive emails through webmail interface
- **Password confirmed:** Tested multiple times, credentials are correct
### 2. SMTP Connectivity Tests
- **Port 587 (STARTTLS):** Connection successful, TLS upgrade successful
- **Port 465 (SSL/TLS):** Connection successful with implicit TLS
- **DNS Resolution:** smtp.titan.email resolves correctly to multiple IPs
### 3. Authentication Method Testing
**Manual SMTP tests from VPS (69.62.119.31):**
```python
# Test Results:
AUTH PLAIN: 535 5.7.8 Error: authentication failed
AUTH LOGIN: 535 5.7.8 Error: authentication failed
```
**Both authentication methods rejected by server despite correct credentials.**
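For reference, the same authenticated session can be reproduced outside Gitea with curl's built-in SMTP support (the password placeholder is hypothetical; a 535 reply appears in the verbose output when authentication is rejected):
```bash
# Attempt an authenticated STARTTLS session against Titan Email, sending an empty message
curl -v --url 'smtp://smtp.titan.email:587' --ssl-reqd \
     --user 'hello@jnss.me:REPLACE_WITH_PASSWORD' \
     --mail-from 'hello@jnss.me' --mail-rcpt 'hello@jnss.me' \
     --upload-file /dev/null
```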
### 4. Configuration Iterations Tested
#### Iteration 1: Port 465 with smtps
```ini
PROTOCOL = smtps
SMTP_PORT = 465
```
**Result:** Authentication failed (535)
#### Iteration 2: Port 587 with smtp+starttls
```ini
PROTOCOL = smtp+starttls
SMTP_PORT = 587
```
**Result:** Authentication failed (535)
#### Iteration 3: Explicit AUTH PLAIN
```ini
PROTOCOL = smtp+starttls
SMTP_PORT = 587
SMTP_AUTH = PLAIN
```
**Result:** Authentication failed (535)
#### Iteration 4: Removed conflicting TLS settings
Removed:
- `ENABLE_TLS = true` (conflicted with PROTOCOL)
- `SKIP_VERIFY = false` (deprecated)
**Result:** Authentication still failed (535)
### 5. Debug Output Analysis
SMTP conversation debug output revealed:
```
send: 'AUTH PLAIN AGhlbGxvQGpuc3MubWUASGVsbG8xMjMh\r\n'
reply: b'535 5.7.8 Error: authentication failed: \r\n'
send: 'AUTH LOGIN aGVsbG8Aam5zcy5tZQ==\r\n'
reply: b'334 UGFzc3dvcmQ6\r\n'
send: 'SGVsbG8xMjMh\r\n'
reply: b'535 5.7.8 Error: authentication failed: UGFzc3dvcmQ6\r\n'
```
**Analysis:** Server accepts both AUTH PLAIN and AUTH LOGIN in EHLO response but rejects actual authentication attempts for both methods.
## Root Cause Analysis
### What Works
- ✅ SMTP server connectivity (both ports)
- ✅ TLS/STARTTLS negotiation
- ✅ Webmail authentication with same credentials
- ✅ Email sending through webmail
### What Doesn't Work
- ❌ SMTP AUTH PLAIN from VPS
- ❌ SMTP AUTH LOGIN from VPS
- ❌ Both fail with identical error: 535 5.7.8
### Conclusion
**The issue is NOT a Gitea configuration problem.** The SMTP server is actively rejecting authentication attempts despite:
- Correct credentials (verified in webmail)
- Proper TLS establishment
- Correct authentication protocol usage
## Possible Causes
1. **SMTP Access Disabled:** Titan Email may require SMTP/IMAP access to be explicitly enabled in Hostinger control panel or Titan settings
2. **IP-Based Restrictions:** VPS IP (69.62.119.31) may be blocked or require whitelisting
3. **Account Verification Required:** Account may need additional verification for SMTP access
4. **Service-Level Restriction:** Titan Email plan may not include SMTP access for external applications
5. **Missing Activation:** SMTP feature may require separate activation from webmail access
## Attempted Solutions
### Configuration Changes
- [x] Tested both port 587 (STARTTLS) and 465 (SSL/TLS)
- [x] Tried AUTH PLAIN and AUTH LOGIN methods
- [x] Removed conflicting TLS settings (ENABLE_TLS, SKIP_VERIFY)
- [x] Updated password in vault and redeployed
- [x] Verified minimal clean configuration
### External Tests
- [ ] Test SMTP from different IP (local machine vs VPS)
- [ ] Check Hostinger control panel for SMTP toggle
- [ ] Contact Hostinger/Titan support
- [ ] Verify account has SMTP privileges
## Recommendations
### Immediate Next Steps
1. **Check Hostinger Control Panel:**
- Log into hpanel.hostinger.com
- Navigate to Emails → hello@jnss.me
- Look for SMTP/IMAP access toggle or settings
2. **Test from Different IP:**
- Test SMTP authentication from local machine
- If successful: IP blocking issue (request VPS IP whitelist)
- If failed: Account-level restriction
3. **Contact Support:**
- Provide error: "535 5.7.8 authentication failed"
- Request SMTP access verification for hello@jnss.me
- Ask if SMTP requires separate activation
### Alternative Email Solutions
If Titan Email SMTP cannot be resolved:
1. **Use Different Email Provider:**
- Gmail (with App Passwords)
- SendGrid (free tier: 100 emails/day)
- Mailgun (free tier: 5,000 emails/month)
- AWS SES (free tier: 62,000 emails/month)
2. **Use Local Mail Server:**
- Install Postfix on VPS
- Configure as relay
- More complex but full control
3. **Disable Email Features:**
- Set `ENABLED = false` in [mailer]
- OAuth account linking won't work
- Password reset requires admin intervention
- No email notifications
## Current Status
**Email functionality: DISABLED**
Configuration is correct but non-functional due to SMTP authentication rejection by Titan Email servers.
## Files Modified
- `roles/gitea/defaults/main.yml` - Email configuration variables
- `roles/gitea/templates/app.ini.j2` - Mailer section configuration
- `host_vars/arch-vps/vault.yml` - SMTP password
## References
- Gitea Mailer Documentation: https://docs.gitea.com/administration/config-cheat-sheet#mailer-mailer
- SMTP Error Codes: https://www.greenend.org.uk/rjk/tech/smtpreplies.html
- Titan Email Settings: https://support.hostinger.com/en/collections/3363865-titan-email
---
**Date:** 2025-12-19
**Investigated by:** OpenCode AI Assistant
**Status:** Unresolved - Awaiting Titan Email SMTP access verification

View File

@@ -0,0 +1,245 @@
# Gitea SSH Migration Guide
Guide for migrating between Gitea SSH modes and updating Git remote URLs.
## SSH Modes Overview
### Passthrough Mode (Default)
- **Port**: 22 (standard SSH)
- **URL Format**: `git@git.jnss.me:user/repo.git`
- **Security**: System fail2ban protects all SSH traffic
- **Recommended**: ✅ For production use
### Dedicated Mode (Fallback)
- **Port**: 2222 (Gitea SSH server)
- **URL Format**: `ssh://git@git.jnss.me:2222/user/repo.git`
- **Security**: Separate fail2ban jail for port 2222
- **Use Case**: Debugging or when passthrough has issues
---
## Migration: Dedicated → Passthrough (Default)
When you deploy the new code, Gitea will automatically switch to passthrough mode.
### What Happens Automatically
1. ✅ Gitea's SSH server stops listening on port 2222
2. ✅ Port 2222 firewall rule removed
3. ✅ System SSH configured for Git passthrough
4. ✅ AuthorizedKeysCommand script deployed (see the sketch after this list)
5. ✅ fail2ban switches to system `sshd` jail
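A minimal sketch of the sshd wiring behind steps 3 and 4, assuming Gitea's documented AuthorizedKeysCommand integration and a binary at `/usr/local/bin/gitea`; the real stanza is deployed by the role's template and may differ:
```bash
# Hypothetical sketch only: the role templates this for you.
cat >> /etc/ssh/sshd_config <<'EOF'
Match User git
    AuthorizedKeysCommandUser git
    AuthorizedKeysCommand /usr/local/bin/gitea keys -e git -u %u -t %t -k %k
EOF
systemctl reload sshd
```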
### What You Need to Do
**Update your Git remote URLs** in each repository:
```bash
# Check current remote URL
git remote -v
# Update to new format (no port number)
git remote set-url origin git@git.jnss.me:username/repo.git
# Verify new URL
git remote -v
# Test connection
git fetch
```
### Bulk Update Script
If you have many repositories, use this script:
```bash
#!/bin/bash
# migrate-git-urls.sh - Update all Git remotes from dedicated to passthrough
# Find all git repositories in current directory and subdirectories
find . -type d -name '.git' | while read gitdir; do
  repo=$(dirname "$gitdir")
  echo "Processing: $repo"
  cd "$repo"
  # Get current origin URL
  current_url=$(git remote get-url origin 2>/dev/null)
  # Check if it's the old format (with :2222)
  if [[ $current_url == *":2222/"* ]]; then
    # Convert to new format
    new_url=$(echo "$current_url" | sed 's|ssh://git@git.jnss.me:2222/|git@git.jnss.me:|')
    echo " Old: $current_url"
    echo " New: $new_url"
    git remote set-url origin "$new_url"
    echo " ✅ Updated"
  else
    echo " Already using correct format or not Gitea"
  fi
  cd - > /dev/null
  echo ""
done
echo "Migration complete!"
```
**Usage:**
```bash
chmod +x migrate-git-urls.sh
./migrate-git-urls.sh
```
---
## Migration: Passthrough → Dedicated
If you need to switch back to dedicated mode:
### 1. Update Configuration
Edit `host_vars/arch-vps/main.yml`:
```yaml
gitea_ssh_mode: "dedicated"
```
### 2. Deploy
```bash
ansible-playbook -i inventory/hosts.yml rick-infra.yml --limit arch-vps
```
### 3. Update Git Remotes
```bash
# Update to dedicated format (with :2222 port)
git remote set-url origin ssh://git@git.jnss.me:2222/username/repo.git
# Test connection
ssh -T -p 2222 git@git.jnss.me
git fetch
```
---
## URL Format Reference
### Passthrough Mode (Port 22)
```bash
# Clone
git clone git@git.jnss.me:username/repo.git
# Add remote
git remote add origin git@git.jnss.me:username/repo.git
# SSH test
ssh -T git@git.jnss.me
```
### Dedicated Mode (Port 2222)
```bash
# Clone
git clone ssh://git@git.jnss.me:2222/username/repo.git
# Add remote
git remote add origin ssh://git@git.jnss.me:2222/username/repo.git
# SSH test
ssh -T -p 2222 git@git.jnss.me
```
---
## Troubleshooting
### After Migration, Git Operations Fail
**Symptom**: `git push` fails with "Permission denied" or "Connection refused"
**Solution**:
1. Check your remote URL format:
```bash
git remote -v
```
2. Update if needed:
```bash
# For passthrough (no port)
git remote set-url origin git@git.jnss.me:username/repo.git
# For dedicated (with port)
git remote set-url origin ssh://git@git.jnss.me:2222/username/repo.git
```
3. Test SSH connection:
```bash
# Passthrough
ssh -T git@git.jnss.me
# Dedicated
ssh -T -p 2222 git@git.jnss.me
```
### SSH Key Not Recognized After Migration
**Symptom**: "Permission denied (publickey)"
**Cause**: SSH keys are stored in Gitea's database, not affected by mode change.
**Solution**:
1. Verify your SSH key is in Gitea:
- Log into Gitea web interface
- Go to Settings → SSH/GPG Keys
- Check your key is listed
2. Test key locally:
```bash
ssh-add -l # List loaded keys
```
3. Try with explicit key:
```bash
ssh -T -i ~/.ssh/id_ed25519 git@git.jnss.me
```
### Port 2222 Still Open After Switching to Passthrough
**Symptom**: `nc -zv git.jnss.me 2222` succeeds
**Cause**: Gitea service may still be running on port 2222
**Solution**:
```bash
# On the server
systemctl restart gitea
ss -tlnp | grep 2222 # Should show nothing
```
---
## Verification Checklist
After migration, verify:
- [ ] SSH connection works: `ssh -T git@git.jnss.me` (passthrough) or `ssh -T -p 2222 git@git.jnss.me` (dedicated)
- [ ] Can clone repository with new URL format
- [ ] Can push commits to repository
- [ ] fail2ban is active: `fail2ban-client status sshd` (passthrough) or `fail2ban-client status gitea-ssh` (dedicated)
- [ ] Firewall configured correctly: `nft list ruleset | grep 2222` (should show nothing in passthrough)
---
## Notes
- **Both modes are fully supported** - choose what works best for your setup
- **No data loss** - repositories, users, and SSH keys are unaffected by mode changes
- **Gradual migration** - you can update remote URLs at your own pace (old URLs may still work for a short time)
- **Team coordination** - if you're in a team, coordinate the migration so everyone updates their URLs
---
**Rick-Infra Gitea SSH Migration Guide**
Switch between passthrough and dedicated SSH modes safely.

docs/jnss-web-deployment.md
View File

@@ -0,0 +1,326 @@
# jnss-web Static Site Deployment Guide
## Overview
This document describes the deployment process for jnss-web, a SvelteKit-based static website serving as the primary entrypoint for jnss.me.
## Architecture
- **Technology**: SvelteKit v2 with static adapter
- **Build Tool**: Bun
- **Deployment**: Ansible playbook with git-based artifact deployment
- **Web Server**: Caddy (direct file serving)
- **Repository**: git.jnss.me/joakim/jnss-web
- **Branch Strategy**:
- `main` - Source code
- `deploy` - Build artifacts only
## Server Architecture
```
/opt/jnss-web-repo/ # Git clone of deploy branch
/var/www/jnss-web/ # Synced build artifacts (served by Caddy)
/etc/caddy/sites-enabled/
└── jnss-web.caddy # Site config with www redirect
/var/log/caddy/jnss-web.log # Access logs
```
## Development Workflow
### 1. Local Development (main branch)
```bash
cd ~/dev/jnss-web
git checkout main
# Make changes to source code
# ...
# Test locally
bun run dev
# Build to verify
bun run build
```
### 2. Prepare Deploy Branch
```bash
# Build production version
bun run build
# Switch to deploy branch
git checkout deploy
# Merge changes from main
git merge main --no-commit
# Rebuild to ensure fresh build
bun run build
# Add only build artifacts
git add build/
# Commit with descriptive message
git commit -m "Deploy: Add new feature X"
# Push to Gitea
git push origin deploy
```
### 3. Deploy to Server
```bash
cd ~/rick-infra
# Run deployment playbook
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
# Watch logs (optional)
ssh root@arch-vps 'tail -f /var/log/caddy/jnss-web.log'
```
## Playbook Details
### Variables
Located in `playbooks/deploy-jnss-web.yml`:
- `jnss_web_repo_owner`: Your Gitea username (default: "joakim")
- `jnss_web_branch`: Branch to deploy (default: "deploy")
- `jnss_web_domain`: Primary domain (default: "jnss.me")
### Tags
- `jnss-web` - All jnss-web tasks
- `deploy` - Git clone/sync tasks
- `caddy` - Caddy configuration tasks
### Example Usage
```bash
# Full deployment
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
# Only update Caddy config (no git pull)
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml --tags caddy
# Only sync files (skip Caddy reload)
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml --tags deploy
```
## Initial Setup
### jnss-web Project Configuration
The following changes need to be made in the jnss-web project:
#### 1. Install Static Adapter
```bash
cd ~/dev/jnss-web
bun add -D @sveltejs/adapter-static
```
#### 2. Update svelte.config.js
```javascript
import adapter from '@sveltejs/adapter-static';
/** @type {import('@sveltejs/kit').Config} */
const config = {
  kit: {
    adapter: adapter({
      pages: 'build',
      assets: 'build',
      fallback: undefined,
      precompress: false,
      strict: true
    })
  }
};
export default config;
```
#### 3. Create src/routes/+layout.js
```javascript
export const prerender = true;
```
#### 4. Update .gitignore
Comment out or remove the build directory from .gitignore:
```gitignore
# build/ <- Comment this out or remove it
```
#### 5. Create Deploy Branch
```bash
# Build the site first
bun run build
# Create and switch to deploy branch
git checkout -b deploy
# Remove source files (keep only build artifacts)
git rm -r src static *.config.js package.json bun.lock jsconfig.json .npmrc
# Keep only build/ and documentation
git add build/ README.md
# Commit
git commit -m "Initial deploy branch with build artifacts"
# Push to Gitea
git push -u origin deploy
# Switch back to main
git checkout main
```
## Troubleshooting
### Build Directory Not Found
**Error**: "Build directory not found in repository"
**Solution**: Ensure you pushed the build artifacts to the deploy branch:
```bash
git checkout deploy
ls -la build/ # Should exist
git log --oneline -1 # Verify latest commit
```
### Caddy Not Serving New Content
**Solution**:
```bash
# SSH to server
ssh root@arch-vps
# Check Caddy status
systemctl status caddy
# Reload Caddy manually
systemctl reload caddy
# Check logs
journalctl -u caddy -f
```
### Permission Issues
**Solution**:
```bash
# SSH to server
ssh root@arch-vps
# Fix ownership
chown -R caddy:caddy /var/www/jnss-web
# Verify
ls -la /var/www/jnss-web
```
### Git Clone Fails
**Error**: Unable to clone from git.jnss.me
**Solution**:
- Verify repository is public in Gitea
- Test clone manually: `git clone https://git.jnss.me/joakim/jnss-web.git /tmp/test`
- Check Gitea service status
## Rollback Procedure
### Option 1: Rollback Deploy Branch
```bash
# In jnss-web repository
git checkout deploy
# Find previous commit
git log --oneline
# Reset to previous commit
git reset --hard <commit-hash>
# Force push
git push -f origin deploy
# Re-run deployment
cd ~/rick-infra
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
```
### Option 2: Quick Server-Side Rollback
```bash
# SSH to server
ssh root@arch-vps
# Go to repo directory
cd /opt/jnss-web-repo
# Reset to previous commit
git reset --hard HEAD~1
# Re-sync
rsync -av --delete build/ /var/www/jnss-web/
# Reload Caddy
systemctl reload caddy
```
## Security Considerations
- Repository is public (contains only build artifacts)
- No secrets in build output
- Caddy serves only /var/www/jnss-web (no parent directory access)
- Security headers configured in Caddy
- HTTPS enforced via Caddy with Let's Encrypt
## Performance Optimizations
- Static assets cached for 1 year (immutable; see the Caddy sketch after this list)
- HTML cached for 1 hour with revalidation
- Gzip compression enabled
- No server-side processing required
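Purely as an illustration of the rules above (the deployed config is rendered from `playbooks/templates/jnss-web.caddy.j2`; the matchers and paths here are assumptions, not copied from the template):
```bash
# Illustration only: write an example site block to a scratch path, not the live config.
cat > /tmp/jnss-web.caddy.example <<'EOF'
jnss.me {
    root * /var/www/jnss-web
    encode gzip

    # Hashed SvelteKit assets can safely be cached for a year
    @immutable path /_app/* *.css *.js *.woff2 *.svg *.png *.ico
    header @immutable Cache-Control "public, max-age=31536000, immutable"

    # HTML revalidates hourly so new deploys show up quickly
    @html path / *.html
    header @html Cache-Control "public, max-age=3600, must-revalidate"

    file_server
}
EOF
```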
## Monitoring
### Check Deployment Status
```bash
# From control machine
ansible homelab -i inventory/hosts.yml -m shell -a "ls -lh /var/www/jnss-web"
```
### View Access Logs
```bash
ssh root@arch-vps 'tail -100 /var/log/caddy/jnss-web.log | jq .'
```
### Check Site Health
```bash
curl -I https://jnss.me
curl -I https://www.jnss.me # Should redirect to jnss.me
```
## Files Created
This deployment adds the following files to rick-infra:
- `playbooks/deploy-jnss-web.yml` - Main deployment playbook
- `playbooks/templates/jnss-web.caddy.j2` - Caddy configuration template
- `docs/jnss-web-deployment.md` - This documentation
And modifies:
- `roles/caddy/templates/Caddyfile.j2` - Removed default site section

View File

@@ -0,0 +1,311 @@
# Metrics Stack Deployment Guide
Complete guide to deploying the monitoring stack (VictoriaMetrics, Grafana, node_exporter) on rick-infra.
## Overview
The metrics stack provides:
- **System monitoring**: CPU, memory, disk, network via node_exporter
- **Time-series storage**: VictoriaMetrics (Prometheus-compatible, 7x less RAM)
- **Visualization**: Grafana with Authentik SSO integration
- **Access**: `https://metrics.jnss.me` with role-based permissions
## Architecture
```
User → metrics.jnss.me (HTTPS)
        ↓
Caddy (Reverse Proxy)
        ↓
Grafana (OAuth → Authentik for SSO)
        ↓
VictoriaMetrics (Time-series DB)
        ↓
node_exporter (System Metrics)
```
All services run on localhost only, following rick-infra security principles.
## Prerequisites
### 1. Caddy Deployed
```bash
ansible-playbook rick-infra.yml --tags caddy
```
### 2. Authentik Deployed
```bash
ansible-playbook rick-infra.yml --tags authentik
```
### 3. DNS Configuration
Ensure `metrics.jnss.me` points to arch-vps IP:
```bash
dig metrics.jnss.me # Should return 69.62.119.31
```
## Step 1: Configure Authentik OAuth Provider
### Create OAuth2/OIDC Provider
1. Login to Authentik at `https://auth.jnss.me`
2. Navigate to **Applications → Providers** → **Create**
3. Configure provider:
- **Name**: `Grafana`
- **Type**: `OAuth2/OpenID Provider`
- **Authentication flow**: `default-authentication-flow`
- **Authorization flow**: `default-provider-authorization-explicit-consent`
- **Client type**: `Confidential`
- **Client ID**: `grafana`
- **Client Secret**: Click **Generate** and **copy the secret**
- **Redirect URIs**: `https://metrics.jnss.me/login/generic_oauth`
- **Signing Key**: Select auto-generated key
- **Scopes**: `openid`, `profile`, `email`, `groups`
4. Click **Finish**
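For orientation, the provider values above map onto Grafana's generic OAuth settings. A minimal sketch of what the metrics role presumably renders into `grafana.ini` (the Authentik endpoint paths follow its standard URL scheme and are assumptions, not copied from the role):
```bash
# Illustration only: this section is normally rendered by the Ansible role, not appended by hand.
cat >> /etc/grafana/grafana.ini <<'EOF'
[auth.generic_oauth]
enabled = true
name = Authentik
client_id = grafana
client_secret = <vault_grafana_oauth_client_secret>
scopes = openid profile email groups
auth_url = https://auth.jnss.me/application/o/authorize/
token_url = https://auth.jnss.me/application/o/token/
api_url = https://auth.jnss.me/application/o/userinfo/
role_attribute_path = contains(groups, 'grafana-admins') && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'
EOF
```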
### Create Application
1. Navigate to **Applications** → **Create**
2. Configure application:
- **Name**: `Grafana`
- **Slug**: `grafana`
- **Provider**: Select `Grafana` provider created above
- **Launch URL**: `https://metrics.jnss.me`
3. Click **Create**
### Create Groups (Optional)
For role-based access control:
1. Navigate to **Directory → Groups** → **Create**
2. Create groups:
- **grafana-admins**: Full admin access to Grafana
- **grafana-editors**: Can create/edit dashboards
- All other users get Viewer access
3. Add users to groups as needed
## Step 2: Configure Vault Variables
Edit vault file:
```bash
ansible-vault edit host_vars/arch-vps/vault.yml
```
Add these variables:
```yaml
# Grafana admin password (for emergency local login)
vault_grafana_admin_password: "your-secure-admin-password"
# Grafana secret key (generate with: openssl rand -base64 32)
vault_grafana_secret_key: "your-random-32-char-secret-key"
# OAuth credentials from Authentik
vault_grafana_oauth_client_id: "grafana"
vault_grafana_oauth_client_secret: "paste-secret-from-authentik-here"
```
Save and close (`:wq` in vim).
## Step 3: Deploy Metrics Stack
Deploy all components:
```bash
ansible-playbook rick-infra.yml --tags metrics
```
This will:
1. Install and configure VictoriaMetrics
2. Install and configure node_exporter
3. Install and configure Grafana with OAuth
4. Deploy Caddy configuration for `metrics.jnss.me`
Expected output:
```
PLAY RECAP *******************************************************
arch-vps : ok=25 changed=15 unreachable=0 failed=0 skipped=0
```
## Step 4: Verify Deployment
### Check Services
SSH to arch-vps and verify services:
```bash
# Check all services are running
systemctl status victoriametrics grafana node_exporter
# Check service health
curl http://127.0.0.1:8428/health # VictoriaMetrics
curl http://127.0.0.1:9100/metrics # node_exporter
curl http://127.0.0.1:3001/api/health # Grafana (port 3001; Gitea uses 3000)
```
### Check HTTPS Access
```bash
curl -I https://metrics.jnss.me
# Should return 200 or 302 (redirect to Authentik)
```
### Check Metrics Collection
```bash
# Check VictoriaMetrics scrape targets
curl http://127.0.0.1:8428/api/v1/targets
# Should show node_exporter as "up"
```
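As an optional spot check, the same data can be confirmed with an instant query against VictoriaMetrics' Prometheus-compatible query API (the `jq` filter is only for readability):
```bash
# Instant PromQL query: every scraped target should report up == 1
curl -s 'http://127.0.0.1:8428/api/v1/query?query=up' \
  | jq '.data.result[] | {job: .metric.job, instance: .metric.instance, up: .value[1]}'
```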
## Step 5: Access Grafana
1. Navigate to `https://metrics.jnss.me`
2. Click **"Sign in with Authentik"**
3. Login with your Authentik credentials
4. You should be redirected to Grafana dashboard
First login will:
- Auto-create your Grafana user
- Assign role based on Authentik group membership
- Grant access to default organization
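To double-check which role the mapping assigned, Grafana's org-users API can be queried locally; this sketch assumes the built-in `admin` account (password from the vault) and the Grafana port 3001 used in this setup:
```bash
# List org members and the role each OAuth user was mapped to (hypothetical check)
curl -s -u "admin:${GRAFANA_ADMIN_PASSWORD}" http://127.0.0.1:3001/api/org/users \
  | jq '.[] | {login, role}'
```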
## Step 6: Verify Data Source
1. In Grafana, navigate to **Connections → Data sources**
2. Verify **VictoriaMetrics** is listed and default
3. Click on VictoriaMetrics → **Save & test**
4. Should show green "Data source is working" message
## Step 7: Create First Dashboard
### Option 1: Import Community Dashboard (Recommended)
1. Navigate to **Dashboards → Import**
2. Enter dashboard ID: `1860` (Node Exporter Full)
3. Click **Load**
4. Select **VictoriaMetrics** as data source
5. Click **Import**
You now have a comprehensive system monitoring dashboard!
### Option 2: Create Custom Dashboard
1. Navigate to **Dashboards → New → New Dashboard**
2. Click **Add visualization**
3. Select **VictoriaMetrics** data source
4. Enter PromQL query:
```promql
# CPU usage
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```
5. Click **Apply**
## Step 8: Configure Alerting (Optional)
Grafana supports alerting on metrics. Configure via **Alerting → Alert rules**.
Example alert for high CPU:
```promql
avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100 < 20
```
## Troubleshooting
### OAuth Login Fails
**Symptom**: Redirect to Authentik, but returns error after login
**Solution**:
1. Verify redirect URI in Authentik matches exactly: `https://metrics.jnss.me/login/generic_oauth`
2. Check Grafana logs: `journalctl -u grafana -f`
3. Verify OAuth credentials in vault match Authentik
### No Metrics in Grafana
**Symptom**: Data source working, but no data in dashboards
**Solution**:
1. Check VictoriaMetrics targets: `curl http://127.0.0.1:8428/api/v1/targets`
2. Verify node_exporter is up: `systemctl status node_exporter`
3. Check time range in Grafana (top right) - try "Last 5 minutes"
### Can't Access metrics.jnss.me
**Symptom**: Connection timeout or SSL error
**Solution**:
1. Verify DNS: `dig metrics.jnss.me`
2. Check Caddy is running: `systemctl status caddy`
3. Check Caddy logs: `journalctl -u caddy -f`
4. Verify Caddy config loaded: `ls /etc/caddy/sites/grafana.caddy`
### Wrong Grafana Role
**Symptom**: User has wrong permissions (e.g., Viewer instead of Admin)
**Solution**:
1. Verify user is in correct Authentik group (`grafana-admins` or `grafana-editors`)
2. Logout of Grafana and login again
3. Check role mapping expression in `roles/metrics/defaults/main.yml`:
```yaml
grafana_oauth_role_attribute_path: "contains(groups, 'grafana-admins') && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"
```
## Next Steps
### Add More Hosts
To monitor additional hosts (e.g., mini-vps):
1. Deploy node_exporter to target host
2. Update VictoriaMetrics scrape config to include remote targets
3. Configure remote_write or federation
### Add Service Metrics
To monitor containerized services:
1. Expose `/metrics` endpoint in application (port 8080)
2. Add scrape config in `roles/metrics/templates/scrape.yml.j2`:
```yaml
- job_name: 'myservice'
  static_configs:
    - targets: ['127.0.0.1:8080']
```
3. Redeploy metrics role
### Set Up Alerting
1. Configure notification channels in Grafana (Email, Slack, etc.)
2. Create alert rules for critical metrics
3. Set up on-call rotation if needed
## Security Notes
- All metrics services run on localhost only
- Grafana is the only internet-facing component (via Caddy HTTPS)
- OAuth provides SSO with Authentik (no separate Grafana passwords)
- systemd hardening enabled on all services
- Default admin account should only be used for emergencies
## Resources
- **VictoriaMetrics Docs**: https://docs.victoriametrics.com/
- **Grafana Docs**: https://grafana.com/docs/
- **PromQL Guide**: https://prometheus.io/docs/prometheus/latest/querying/basics/
- **Dashboard Library**: https://grafana.com/grafana/dashboards/
- **Authentik OAuth**: https://goauthentik.io/docs/providers/oauth2/
## Support
For issues specific to rick-infra metrics deployment:
1. Check service logs: `journalctl -u <service> -f`
2. Review role README: `roles/metrics/README.md`
3. Verify vault variables are correctly set
4. Ensure Authentik OAuth provider is properly configured

View File

@@ -0,0 +1,367 @@
# Service Domain Configuration Standard
Standard pattern for domain configuration in rick-infra service roles.
## Architecture Philosophy
Rick-infra follows a **direct domain specification** pattern for service configuration:
```yaml
# Direct and explicit
service_domain: "subdomain.jnss.me"
# NOT this (complex and inflexible)
service_subdomain: "subdomain"
service_domain: "{{ caddy_domain }}"
service_full_domain: "{{ service_subdomain }}.{{ service_domain }}"
```
## Benefits
1. **Simplicity**: One variable instead of three
2. **Flexibility**: Can use any domain (subdomain, root, or completely different)
3. **Explicitness**: Clear what domain the service uses
4. **No Forced Inheritance**: Not tied to infrastructure `caddy_domain`
5. **Consistency**: All services follow the same pattern
---
## Standard Pattern
### Basic Service (Single Domain)
For services that only need one domain:
```yaml
# roles/service/defaults/main.yml
service_domain: "service.jnss.me"
# host_vars/host/main.yml (explicit override)
service_domain: "service.jnss.me"
```
**Examples:**
- Authentik: `authentik_domain: "auth.jnss.me"`
- Nextcloud: `nextcloud_domain: "cloud.jnss.me"`
### Advanced Service (Multiple Domains)
For services that need separate domains for different purposes:
```yaml
# roles/service/defaults/main.yml
service_http_domain: "service.jnss.me" # Web interface
service_api_domain: "api.jnss.me" # API endpoint
service_ssh_domain: "jnss.me" # SSH/CLI operations
# host_vars/host/main.yml (explicit override)
service_http_domain: "service.jnss.me"
service_api_domain: "api.jnss.me"
service_ssh_domain: "jnss.me"
```
**Example:**
- Gitea:
- `gitea_http_domain: "git.jnss.me"` (web interface)
- `gitea_ssh_domain: "jnss.me"` (Git operations)
---
## Usage in Templates
### Caddy Configuration
```jinja
# roles/service/templates/service.caddy.j2
{{ service_domain }} {
reverse_proxy 127.0.0.1:{{ service_port }}
}
```
### Application Configuration
```jinja
# roles/service/templates/service.conf.j2
[server]
DOMAIN = {{ service_domain }}
ROOT_URL = https://{{ service_domain }}/
```
### Task Display Messages
```yaml
# roles/service/tasks/main.yml
- name: Display service information
  debug:
    msg: |
      🌐 Web Interface: https://{{ service_domain }}
      📍 Access your service at the domain above
```
---
## Domain Selection Guidelines
### Use Root Domain When:
- Service is the primary purpose of the infrastructure
- You want cleaner URLs (e.g., SSH: `git@jnss.me` vs `git@git.jnss.me`)
- Industry standard uses root domain (e.g., GitHub uses `github.com` for SSH)
### Use Subdomain When:
- Service is one of many
- You want explicit service identification
- You need clear separation between services
### Use Different Domain When:
- Service needs to be on a different apex domain
- External service integration requires specific domain
- Multi-domain setup for geographical distribution
---
## Examples by Service Type
### Identity/Auth Service
```yaml
authentik_domain: "auth.jnss.me"
```
**Rationale**: Auth subdomain is an industry standard
### Storage Service
```yaml
nextcloud_domain: "cloud.jnss.me"
```
**Rationale**: "cloud" clearly indicates storage/sync service
### Git Service
```yaml
gitea_http_domain: "git.jnss.me" # Web UI
gitea_ssh_domain: "jnss.me" # SSH operations
```
**Rationale**:
- HTTP uses `git.` for clarity
- SSH uses root domain to avoid `git@git.jnss.me` redundancy
- Matches GitHub/GitLab pattern
### Monitoring Service
```yaml
grafana_domain: "monitor.jnss.me"
prometheus_domain: "metrics.jnss.me"
```
**Rationale**: Different subdomains for different monitoring tools
---
## Configuration Layers
### 1. Role Defaults (`roles/service/defaults/main.yml`)
Provide sensible defaults:
```yaml
# Option A: Use specific domain (explicit)
service_domain: "service.jnss.me"
# Option B: Use caddy_domain if it makes sense (flexible)
service_domain: "service.{{ caddy_domain | default('localhost') }}"
# Recommendation: Use Option A for clarity
```
### 2. Host Variables (`host_vars/hostname/main.yml`)
**Always explicitly set** in production:
```yaml
# =================================================================
# Service Configuration
# =================================================================
service_domain: "service.jnss.me"
```
**Why explicit?**
- Clear what domain is configured
- Easy to change without understanding defaults
- Easier to audit configuration
- Documentation in configuration itself
### 3. Group Variables (`group_vars/production/main.yml`)
For settings shared across production hosts:
```yaml
# Common production settings
service_enable_ssl: true
service_require_auth: true
# Generally avoid setting domains in group_vars
# (domains are usually host-specific)
```
---
## Anti-Patterns to Avoid
### ❌ Subdomain Composition
```yaml
# DON'T DO THIS
service_subdomain: "service"
service_domain: "{{ caddy_domain }}"
service_full_domain: "{{ service_subdomain }}.{{ service_domain }}"
```
**Problems:**
- Complex (3 variables for 1 domain)
- Inflexible (can't use root or different domains)
- Forces inheritance from infrastructure variable
- Inconsistent with other services
### ❌ Implicit Inheritance
```yaml
# DON'T DO THIS
service_domain: "{{ caddy_domain }}"
```
**Problems:**
- Not explicit what domain is used
- Harder to change
- Hides actual configuration
- Requires understanding of infrastructure variables
### ❌ Mixed Patterns
```yaml
# DON'T DO THIS
authentik_domain: "auth.jnss.me" # Direct
nextcloud_subdomain: "cloud" # Composition
service_domain: "{{ caddy_domain }}" # Inheritance
```
**Problems:**
- Inconsistent
- Confusing for maintainers
- Different patterns for same purpose
---
## Migration from Old Pattern
If you have services using the old subdomain composition pattern:
### Step 1: Identify Current Variables
```yaml
# Old pattern
service_subdomain: "service"
service_domain: "{{ caddy_domain }}"
service_full_domain: "{{ service_subdomain }}.{{ service_domain }}"
```
### Step 2: Replace with Direct Domain
```yaml
# New pattern
service_domain: "service.jnss.me"
```
### Step 3: Update Template References
```jinja
# Old
{{ service_full_domain }}
# New
{{ service_domain }}
```
### Step 4: Remove Unused Variables
Delete `service_subdomain` and `service_full_domain` from defaults.
### Step 5: Add Explicit Host Configuration
```yaml
# host_vars/arch-vps/main.yml
service_domain: "service.jnss.me"
```
---
## Testing Domain Configuration
### Verify Caddy Configuration
```bash
# Check generated Caddy config
cat /etc/caddy/sites-enabled/service.caddy
# Test Caddy configuration syntax
caddy validate --config /etc/caddy/Caddyfile
# Check TLS certificate
curl -I https://service.jnss.me
```
### Verify Application Configuration
```bash
# Check service configuration
cat /etc/service/config.ini | grep -i domain
# Test service accessibility
curl https://service.jnss.me
```
### Verify DNS Resolution
```bash
# Check DNS resolution
dig service.jnss.me
# Test connectivity
nc -zv service.jnss.me 443
```
---
## Checklist for New Services
When creating a new service role:
- [ ] Use direct domain specification (not subdomain composition)
- [ ] Define domain(s) in `roles/service/defaults/main.yml`
- [ ] Add explicit domain(s) to host_vars
- [ ] Update all templates to use domain variable(s)
- [ ] Document domain configuration in role README
- [ ] Follow naming convention: `service_domain` or `service_[type]_domain`
- [ ] Test with different domain configurations
---
## Summary
**Standard Pattern:**
```yaml
# Defaults: Provide reasonable default
service_domain: "service.jnss.me"
# Host vars: Always explicit in production
service_domain: "service.jnss.me"
# Templates: Use variable directly
{{ service_domain }}
```
**Key Principles:**
1. Direct and explicit
2. One variable per domain
3. No forced inheritance
4. Consistent across all services
5. Flexible for any domain pattern
---
**Rick-Infra Domain Configuration Standard**
Simple, flexible, and consistent domain configuration for all services.

View File

@@ -0,0 +1,307 @@
# Vaultwarden SSO Feature Status and Configuration
**Document Date:** December 21, 2025
**Last Updated:** December 21, 2025
**Status:** SSO Configured, Waiting for Stable Release
---
## Executive Summary
Vaultwarden has been successfully deployed with **SSO integration pre-configured** for Authentik. However, SSO functionality is currently **only available in testing images** and not yet in the stable release. This document explains the current state, our decision to wait for stable, and how to activate SSO when it becomes available.
## Current Deployment Status
### What's Working
- ✅ Vaultwarden deployed successfully at `https://vault.jnss.me`
- ✅ PostgreSQL backend via Unix socket
- ✅ Admin panel accessible and working
- ✅ Email/password authentication working
- ✅ SMTP notifications configured
- ✅ All SSO environment variables correctly configured
- ✅ Authentik OAuth2 provider created and ready
### What's Not Working (By Design)
- ❌ SSO login option not appearing on login page
- ❌ "Use single sign-on" button missing
**Reason:** Using stable image (`vaultwarden/server:latest` v1.34.3) which does not include SSO code.
## Investigation Summary (Dec 21, 2025)
### Problem Reported
User deployed Vaultwarden with `vaultwarden_sso_enabled: true` and configured Authentik integration following official guides, but no SSO option appeared on the login page.
### Root Cause Identified
After investigation including:
- Service status check (healthy, running normally)
- Environment variable verification (all SSO vars present and correct)
- Configuration review (matches Authentik integration guide perfectly)
- API endpoint inspection (`/api/config` returns `"sso":""`)
- Official documentation review
**Finding:** SSO feature is only compiled into `vaultwarden/server:testing` images, not stable releases.
### Evidence
From [Vaultwarden Official Wiki](https://github.com/dani-garcia/vaultwarden/wiki):
> **Testing features**
>
> Features available in the `testing` docker image:
> - Single Sign-On (SSO), see [Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
From [SSO Documentation Page](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect):
> ⚠️ **Important**
>
> ‼️ ‼️ ‼️
> SSO is currently only available in the `:testing` tagged images!
> The current stable `v1.34.3` **does not** contain the SSO feature.
> ‼️ ‼️ ‼️
### API Response Analysis
```bash
# API config endpoint shows SSO not available
curl -s http://127.0.0.1:8080/api/config | grep -o '"sso":"[^"]*"'
# Returns: "sso":"" ← Empty = feature not compiled in
# Environment variables are set correctly
podman exec vaultwarden env | grep -i sso
SSO_ENABLED=true
SSO_ONLY=false
SSO_CLIENT_ID=DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr
SSO_CLIENT_SECRET=02D308Sle2w2NPsi7UaXb3bvKKK4punFDT2LiVqKpzvEFqgPpyLysA8Z5yS4g8t4LYmsI9txLE02l5MtWP5R2RBavLhYHjNFHcwEmvYB94bOJw45YmgiGePaW4NHKcfY
SSO_AUTHORITY=https://auth.jnss.me/application/o/vaultwarden/
SSO_SCOPES="openid email profile offline_access"
# ... etc (all correct)
```
**Conclusion:** Environment configured correctly, feature simply not available in stable release.
## Decision: Wait for Stable Release
### Rationale
**Why NOT switch to testing image:**
1. **Production stability** - This is a password manager handling sensitive credentials
2. **Testing images are volatile** - Frequent updates, potential bugs
3. **No ETA for stable** - SSO marked as "testing feature" indefinitely
4. **Current auth works fine** - Email/password login is secure and functional
5. **Configuration is ready** - When SSO reaches stable, it will work immediately
**Why keep SSO configured:**
1. **Future-ready** - No additional work needed when SSO stabilizes
2. **No harm** - Environment variables are ignored by stable image
3. **Documentation** - Clear record of SSO setup for future reference
4. **Authentik provider ready** - Already created and configured
### Alternatives Considered
| Option | Pros | Cons | Decision |
|--------|------|------|----------|
| Use `testing` image | SSO available now | Unstable, potential data loss, frequent breaking changes | ❌ Rejected |
| Wait for stable | Stable, reliable, secure | No SSO until unknown future date | ✅ **Selected** |
| Remove SSO config | Cleaner config | Requires reconfiguration later | ❌ Rejected |
| Dual deployment | Test SSO separately | Resource waste, complexity | ❌ Rejected |
## Current Configuration
### Ansible Role Variables
Location: `roles/vaultwarden/defaults/main.yml`
```yaml
# Container version (stable, no SSO)
vaultwarden_version: "latest"
# SSO enabled (ready for when feature reaches stable)
vaultwarden_sso_enabled: true
vaultwarden_sso_only: false
vaultwarden_sso_client_id: "{{ vault_vaultwarden_sso_client_id }}"
vaultwarden_sso_client_secret: "{{ vault_vaultwarden_sso_client_secret }}"
vaultwarden_sso_authority: "https://auth.jnss.me/application/o/vaultwarden/"
vaultwarden_sso_scopes: "openid email profile offline_access"
vaultwarden_sso_signups_match_email: true
vaultwarden_sso_allow_unknown_email_verification: false
vaultwarden_sso_client_cache_expiration: 0
```
### Vault Variables (Encrypted)
Location: `group_vars/homelab/vault.yml`
```yaml
# SSO credentials from Authentik
vault_vaultwarden_sso_client_id: "DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr"
vault_vaultwarden_sso_client_secret: "02D308Sle2w2NPsi7UaXb3bvKKK4punFDT2LiVqKpzvEFqgPpyLysA8Z5yS4g8t4LYmsI9txLE02l5MtWP5R2RBavLhYHjNFHcwEmvYB94bOJw45YmgiGePaW4NHKcfY"
```
### Authentik Provider Configuration
**Provider Details:**
- **Name:** Vaultwarden
- **Type:** OAuth2/OpenID Connect Provider
- **Client Type:** Confidential
- **Client ID:** `DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr`
- **Application Slug:** `vaultwarden`
- **Redirect URI:** `https://vault.jnss.me/identity/connect/oidc-signin`
**Scopes Configured:**
- `authentik default OAuth Mapping: OpenID 'openid'`
- `authentik default OAuth Mapping: OpenID 'email'`
- `authentik default OAuth Mapping: OpenID 'profile'`
- `authentik default OAuth Mapping: OpenID 'offline_access'`
**Token Settings:**
- Access token validity: > 5 minutes
- Refresh token enabled via `offline_access` scope
**Authority URL:** `https://auth.jnss.me/application/o/vaultwarden/`
### Deployed Environment (VPS)
```bash
# Deployed environment file
Location: /opt/vaultwarden/.env
# All SSO variables present and correct
SSO_ENABLED=true
SSO_AUTHORITY=https://auth.jnss.me/application/o/vaultwarden/
SSO_SCOPES="openid email profile offline_access"
# ... etc
```
## How to Activate SSO (Future)
When Vaultwarden SSO reaches stable release:
### Automatic Activation (Recommended)
1. **Monitor Vaultwarden releases** for SSO in stable:
- Watch: https://github.com/dani-garcia/vaultwarden/releases
- Look for: SSO feature in stable image changelog
2. **Update container** (standard maintenance):
```bash
ansible-playbook rick-infra.yml --tags vaultwarden --ask-vault-pass
```
3. **Verify SSO is available**:
```bash
ssh root@69.62.119.31 "curl -s http://127.0.0.1:8080/api/config | grep sso"
# Should return: "sso":"https://vault.jnss.me" or similar
```
4. **Test SSO**:
- Navigate to: https://vault.jnss.me
- Log out if logged in
- Enter verified email address
- Click "Use single sign-on" button
- Should redirect to Authentik login
**No configuration changes needed** - Everything is already set up correctly.
### Manual Testing (Use Testing Image)
If you want to test SSO before stable release:
1. **Backup current deployment**:
```bash
ansible-playbook playbooks/backup-vaultwarden.yml # Create if needed
```
2. **Change to testing image** in `roles/vaultwarden/defaults/main.yml`:
```yaml
vaultwarden_version: "testing"
```
3. **Deploy**:
```bash
ansible-playbook rick-infra.yml --tags vaultwarden --ask-vault-pass
```
4. **Test SSO** (same as above)
5. **Revert to stable** when testing complete:
```yaml
vaultwarden_version: "latest"
```
**Warning:** Testing images may contain bugs, data corruption risks, or breaking changes. Not recommended for production password manager.
## Verification Commands
### Check Current Image Version
```bash
ssh root@69.62.119.31 "podman inspect vaultwarden --format '{{.ImageName}}'"
# Expected: docker.io/vaultwarden/server:latest
```
### Check SSO API Status
```bash
ssh root@69.62.119.31 "curl -s http://127.0.0.1:8080/api/config | grep -o '\"sso\":\"[^\"]*\"'"
# Current: "sso":"" (empty = not available)
# Future: "sso":"https://vault.jnss.me" (URL = available)
```
### Check SSO Environment Variables
```bash
ssh root@69.62.119.31 "podman exec vaultwarden env | grep -i sso | sort"
# Should show all SSO_* variables configured correctly
```
### Check Vaultwarden Version
```bash
ssh root@69.62.119.31 "podman exec vaultwarden /vaultwarden --version"
# Current: Vaultwarden 1.34.3
```
## Documentation Updates
The following files have been updated to document this finding:
1. **`roles/vaultwarden/README.md`**
- Added warning banner in SSO configuration section
- Updated troubleshooting section with SSO status checks
- Documented stable vs testing image behavior
2. **`roles/vaultwarden/VAULT_VARIABLES.md`**
- Added SSO feature status warning
- Documented that credentials are ready but inactive
3. **`roles/vaultwarden/defaults/main.yml`**
- Added comments explaining SSO availability
4. **`docs/vaultwarden-sso-status.md`** (this document)
- Complete investigation findings
- Configuration reference
- Activation procedures
## Timeline
- **2025-07-30:** Vaultwarden v1.34.3 released (current stable)
- **2025-12-21:** Vaultwarden deployed with SSO pre-configured
- **2025-12-21:** Investigation completed, SSO status documented
- **TBD:** SSO feature reaches stable release (no ETA)
- **Future:** Automatic SSO activation on next deployment
## Related Documentation
- [Vaultwarden Official Wiki](https://github.com/dani-garcia/vaultwarden/wiki)
- [SSO Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
- [Authentik Integration Guide](https://integrations.goauthentik.io/security/vaultwarden/)
- [Vaultwarden Testing Features](https://github.com/dani-garcia/vaultwarden/wiki#testing-features)
## Contact
For questions about this deployment:
- Infrastructure repo: `/home/fitz/rick-infra`
- Role location: `roles/vaultwarden/`
- Service: `vault.jnss.me`
- Host: `arch-vps` (69.62.119.31)
---
**Status:** SSO configured and ready, waiting for upstream stable release. No action required.

View File

@@ -65,13 +65,20 @@ nextcloud_db_password: "{{ vault_nextcloud_db_password }}"
nextcloud_valkey_db: 2 # Authentik uses 1
# Admin configuration
nextcloud_admin_user: "admin"
nextcloud_admin_user: "joakim"
nextcloud_admin_email: "joakim@jnss.me"
nextcloud_admin_password: "{{ vault_nextcloud_admin_password }}"
# Service configuration
nextcloud_service_enabled: true
nextcloud_service_state: "started"
# =================================================================
# Gitea Configuration
# =================================================================
gitea_http_domain: "git.jnss.me"
gitea_ssh_domain: "jnss.me"
# =================================================================
# Security & Logging
# =================================================================

View File

@@ -5,15 +5,21 @@
- [ ] What gets served on jnss.me?
- [ ] Backups
- [x] Titan email provider support. For smtp access to hello@jnss.me
- [ ] Vaultwarden
- [ ] Configure and set up Nextcloud
- [ ] OAuth
- [ ] Settings
- [ ] Contacts and calendars
- [ ] Storage bucket integration?
- [x] SMTP setup for email sending
- [ ] Gitea
- [ ] SSH setup
- [x] Gitea
- [x] SSH passthrough setup
- [x] Figure out how to disable registration and local password
- [x] SMTP setup for email sending
- [ ] Authentik Invitations for users?

View File

@@ -0,0 +1,133 @@
---
# ================================================================
# jnss-web Static Site Deployment Playbook
# ================================================================
# Deploys the jnss-web SvelteKit static site to jnss.me
#
# Usage:
# ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
#
# This playbook:
# - Clones the jnss-web repository (deploy branch) to a temp directory
# - Syncs build artifacts to /var/www/jnss-web
# - Deploys Caddy configuration for jnss.me with www redirect
# - Reloads Caddy to serve the new site
# ================================================================
- name: Deploy jnss-web static site
hosts: homelab
become: true
vars:
# Git repository configuration
jnss_web_repo_url: "https://git.jnss.me/joakim/jnss-web.git"
jnss_web_branch: "deploy"
# Server paths
jnss_web_root: "/var/www/jnss-web"
# Domain configuration
jnss_web_domain: "jnss.me"
# Caddy configuration
caddy_user: "caddy"
caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"
tasks:
# ============================================================
# Git Repository Management
# ============================================================
- name: Create temporary directory for git clone
tempfile:
state: directory
suffix: -jnss-web
register: temp_clone_dir
tags: [jnss-web, deploy]
- name: Clone jnss-web repository to temp directory
git:
repo: "{{ jnss_web_repo_url }}"
dest: "{{ temp_clone_dir.path }}"
version: "{{ jnss_web_branch }}"
depth: 1
tags: [jnss-web, deploy]
- name: Verify build output (index.html) exists in repository
stat:
path: "{{ temp_clone_dir.path }}/index.html"
register: build_dir
tags: [jnss-web, deploy]
- name: Fail if index.html not found
fail:
msg: "Build index.html not found in repository root. Ensure the deploy branch contains the built artifacts."
when: not build_dir.stat.exists
tags: [jnss-web, deploy]
# ============================================================
# Web Root Deployment
# ============================================================
- name: Remove old web root
file:
path: "{{ jnss_web_root }}"
state: absent
tags: [jnss-web, deploy]
- name: Create fresh web root directory
file:
path: "{{ jnss_web_root }}"
state: directory
owner: "{{ caddy_user }}"
group: "{{ caddy_user }}"
mode: '0755'
tags: [jnss-web, deploy]
- name: Copy build files to web root
copy:
src: "{{ temp_clone_dir.path }}/"
dest: "{{ jnss_web_root }}/"
owner: "{{ caddy_user }}"
group: "{{ caddy_user }}"
mode: '0755'
remote_src: true
tags: [jnss-web, deploy]
- name: Clean up temporary clone directory
file:
path: "{{ temp_clone_dir.path }}"
state: absent
tags: [jnss-web, deploy]
# ============================================================
# Caddy Configuration
# ============================================================
- name: Deploy Caddy configuration for jnss-web
template:
src: templates/jnss-web.caddy.j2
dest: "{{ caddy_sites_enabled_dir }}/jnss-web.caddy"
owner: root
group: "{{ caddy_user }}"
mode: '0644'
notify: reload caddy
tags: [jnss-web, caddy]
- name: Validate Caddy configuration
command: caddy validate --config /etc/caddy/Caddyfile
register: caddy_validate
changed_when: false
tags: [jnss-web, caddy]
- name: Display Caddy validation result
debug:
msg: "Caddy configuration is valid"
when: caddy_validate.rc == 0
tags: [jnss-web, caddy]
handlers:
- name: reload caddy
systemd:
name: caddy
state: reloaded

View File

@@ -0,0 +1,203 @@
---
# =================================================================
# Nextcloud Removal Playbook
# =================================================================
# Rick-Infra - Clean removal of Nextcloud installation
#
# This playbook removes all Nextcloud components:
# - Systemd services and timers
# - Container and images
# - Data directories
# - Database and user
# - Caddy configuration
# - System user and groups
#
# Usage: ansible-playbook playbooks/remove-nextcloud.yml -i inventory/hosts.yml
- name: Remove Nextcloud Installation
hosts: arch-vps
become: yes
gather_facts: yes
vars:
nextcloud_user: nextcloud
nextcloud_group: nextcloud
nextcloud_home: /opt/nextcloud
nextcloud_db_name: nextcloud
nextcloud_db_user: nextcloud
caddy_sites_enabled_dir: /etc/caddy/sites-enabled
tasks:
# ============================================
# Stop and Disable Services
# ============================================
- name: Stop and disable nextcloud-cron timer
systemd:
name: nextcloud-cron.timer
state: stopped
enabled: no
failed_when: false
- name: Stop and disable nextcloud-cron service
systemd:
name: nextcloud-cron.service
state: stopped
enabled: no
failed_when: false
- name: Stop and disable nextcloud service
systemd:
name: nextcloud.service
state: stopped
enabled: no
failed_when: false
# ============================================
# Remove Container and Images
# ============================================
- name: Remove nextcloud container (if running)
command: podman rm -f nextcloud
register: container_remove
changed_when: container_remove.rc == 0
failed_when: false
# ============================================
# Remove Systemd Units
# ============================================
- name: Remove nextcloud-cron systemd units
file:
path: "{{ item }}"
state: absent
loop:
- /etc/systemd/system/nextcloud-cron.timer
- /etc/systemd/system/nextcloud-cron.service
- name: Remove nextcloud quadlet file
file:
path: /etc/containers/systemd/nextcloud.container
state: absent
- name: Reload systemd daemon
systemd:
daemon_reload: yes
# ============================================
# Remove Database
# ============================================
- name: Drop nextcloud database
become_user: postgres
postgresql_db:
name: "{{ nextcloud_db_name }}"
state: absent
failed_when: false
- name: Drop nextcloud database user
become_user: postgres
postgresql_user:
name: "{{ nextcloud_db_user }}"
state: absent
failed_when: false
# ============================================
# Remove Caddy Configuration
# ============================================
- name: Remove nextcloud Caddy configuration
file:
path: "{{ caddy_sites_enabled_dir }}/nextcloud.caddy"
state: absent
notify: reload caddy
# ============================================
# Remove Data Directories
# ============================================
- name: Remove nextcloud home directory (including all data)
file:
path: "{{ nextcloud_home }}"
state: absent
# ============================================
# Remove User and Groups
# ============================================
- name: Remove nextcloud user
user:
name: "{{ nextcloud_user }}"
state: absent
remove: yes
force: yes
- name: Remove nextcloud group
group:
name: "{{ nextcloud_group }}"
state: absent
# ============================================
# Clean Up Remaining Files
# ============================================
- name: Find nextcloud-related files in /tmp
find:
paths: /tmp
patterns: "nextcloud*,nc_*"
file_type: any
register: tmp_files
- name: Remove nextcloud temp files
file:
path: "{{ item.path }}"
state: absent
loop: "{{ tmp_files.files }}"
when: tmp_files.files | length > 0
failed_when: false
- name: Remove caddy logs for nextcloud
file:
path: /var/log/caddy/nextcloud.log
state: absent
failed_when: false
# ============================================
# Verification
# ============================================
- name: Verify nextcloud service is removed
command: systemctl list-units --all nextcloud*
register: units_check
changed_when: false
- name: Verify nextcloud container is removed
command: podman ps -a --filter name=nextcloud
register: container_check
changed_when: false
- name: Display removal status
debug:
msg: |
✅ Nextcloud removal complete!
Removed components:
- ⏹️ Nextcloud service and cron timer
- 🐳 Container: {{ 'Removed' if container_remove.rc == 0 else 'Not found' }}
- 🗄️ Database: {{ nextcloud_db_name }}
- 📁 Data directory: {{ nextcloud_home }}
- 👤 System user: {{ nextcloud_user }}
- 🌐 Caddy configuration
Remaining services:
{{ units_check.stdout }}
Containers:
{{ container_check.stdout }}
handlers:
- name: reload caddy
systemd:
name: caddy
state: reloaded
failed_when: false

View File

@@ -322,8 +322,8 @@
sysctl_file: /etc/sysctl.d/99-security.conf
reload: yes
loop:
# Disable IP forwarding
- { name: 'net.ipv4.ip_forward', value: '0' }
# Enable IP forwarding (required for container networking)
- { name: 'net.ipv4.ip_forward', value: '1' }
- { name: 'net.ipv6.conf.all.forwarding', value: '0' }
# Disable source routing

View File

@@ -0,0 +1,57 @@
# jnss-web Static Site Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY
# WWW Redirect - apex is primary
www.{{ jnss_web_domain }} {
redir https://{{ jnss_web_domain }}{uri} permanent
}
# Primary Domain
{{ jnss_web_domain }} {
root * {{ jnss_web_root }}
file_server
# SPA routing - serve index.html for all routes
try_files {path} /index.html
# Security headers
header {
X-Frame-Options SAMEORIGIN
X-Content-Type-Options nosniff
X-XSS-Protection "1; mode=block"
Referrer-Policy strict-origin-when-cross-origin
Permissions-Policy "geolocation=(), microphone=(), camera=()"
}
# Cache static assets aggressively
@static {
path /_app/* /assets/* /icons/* *.ico *.png *.jpg *.jpeg *.svg *.webp *.woff *.woff2 *.css *.js
}
header @static {
Cache-Control "public, max-age=31536000, immutable"
Vary "Accept-Encoding"
}
# Cache HTML with shorter duration
@html {
path *.html /
}
header @html {
Cache-Control "public, max-age=3600, must-revalidate"
}
# Enable compression
encode gzip
# Logging
log {
output file /var/log/caddy/jnss-web.log {
roll_size 100mb
roll_keep 5
}
format json {
time_format "2006-01-02T15:04:05.000Z07:00"
}
level INFO
}
}

View File

@@ -9,29 +9,47 @@
# - Nextcloud cloud storage
# - Authentik SSO/authentication
# - Gitea git hosting
# - Vaultwarden password manager
# - Metrics (VictoriaMetrics, Grafana, node_exporter)
#
# Usage:
# ansible-playbook playbooks/homelab.yml
# ansible-playbook rick-infra.yml
# ansible-playbook rick-infra.yml --tags metrics
- import_playbook: playbooks/security.yml
# - import_playbook: playbooks/security.yml
- name: Deploy Homelab Infrastructure
hosts: homelab
become: true
gather_facts: true
tasks:
# - name: Deploy Caddy
# include_role:
# name: caddy
# tags: ['caddy']
- name: Deploy Metrics Stack
include_role:
name: metrics
tags: ['metrics', 'monitoring', 'grafana', 'victoriametrics']
# - name: Deploy Authentik
# include_role:
# name: authentik
# tags: ['authentik', 'sso', 'auth']
- name: Deploy Gitea
include_role:
name: gitea
tags: ['gitea', 'git', 'development']
# - name: Deploy Gitea
# include_role:
# name: gitea
# tags: ['gitea', 'git', 'development']
# - name: Deploy Nextcloud
# include_role:
# name: nextcloud
# tags: ['nextcloud', 'cloud', 'storage']
# - name: Deploy Vaultwarden
# include_role:
# name: vaultwarden
# tags: ['vaultwarden', 'vault', 'password-manager', 'security']

View File

@@ -81,7 +81,6 @@
owner: root
group: "{{ caddy_user }}"
mode: '0644'
backup: true
notify: reload caddy
tags: [caddy, reverse-proxy]

View File

@@ -120,7 +120,7 @@
owner: root
group: "{{ caddy_user }}"
mode: '0640'
backup: yes
backup: no
notify: reload caddy
- name: Check Caddyfile syntax (basic check)

View File

@@ -21,43 +21,3 @@
# Import service configurations
import {{ caddy_sites_enabled_dir }}/*
# Primary domain: {{ caddy_domain }}
{{ caddy_domain }} {
{% if caddy_tls_enabled %}
{% if caddy_dns_provider == "cloudflare" and cloudflare_api_token %}
# DNS challenge for automatic TLS
tls {
dns cloudflare {{ cloudflare_api_token }}
resolvers {{ caddy_dns_resolvers | join(' ') }}
}
{% elif caddy_tls_email %}
# HTTP challenge for automatic TLS
tls {{ caddy_tls_email }}
{% endif %}
{% endif %}
# Serve static content
root * {{ caddy_default_site_root }}
file_server
# Logging
log {
{% if caddy_log_format == "json" %}
output file {{ caddy_log_dir }}/{{ caddy_domain | replace('.', '_') }}.log {
roll_size 100mb
roll_keep 5
}
format json {
time_format "2006-01-02T15:04:05.000Z07:00"
}
level {{ caddy_log_level }}
{% else %}
output file {{ caddy_log_dir }}/{{ caddy_domain | replace('.', '_') }}.log {
roll_size 100mb
roll_keep 5
}
level {{ caddy_log_level }}
{% endif %}
}
}

View File

@@ -8,8 +8,10 @@ Self-contained Gitea Git service for rick-infra following the established archit
-**Native Arch installation**: Uses pacman packages
-**PostgreSQL integration**: Uses shared PostgreSQL infrastructure
-**Caddy integration**: Deploys reverse proxy configuration
-**Dual SSH modes**: Passthrough (default) or dedicated SSH server
-**Flexible domains**: Separate HTTP and SSH domains
-**Security hardened**: SystemD restrictions and secure defaults
-**Firewall management**: Automatically configures nftables for SSH access
-**Firewall management**: Automatic nftables configuration per mode
-**fail2ban protection**: Brute force protection for SSH authentication
-**Production ready**: HTTPS, SSH access, LFS support
@@ -17,11 +19,11 @@ Self-contained Gitea Git service for rick-infra following the established archit
- **Dependencies**: PostgreSQL infrastructure role
- **Database**: Self-managed gitea database and user
- **Network**: HTTP on :3000 (localhost), SSH on :2222 (public)
- **Web access**: https://git.domain.com (via Caddy reverse proxy)
- **SSH access**: ssh://git@git.domain.com:2222
- **Firewall**: Port 2222 automatically opened via nftables
- **Security**: fail2ban monitors and blocks SSH brute force attempts
- **Network**: HTTP on :3000 (localhost), SSH via system SSH (port 22) or dedicated (port 2222)
- **Web access**: https://git.jnss.me (via Caddy reverse proxy)
- **SSH access**: git@jnss.me:user/repo.git (passthrough mode, default)
- **Firewall**: Managed per SSH mode (no extra ports in passthrough)
- **Security**: fail2ban protects SSH authentication (system or dedicated jail)
## Configuration
@@ -31,11 +33,13 @@ Key variables (defaults in `defaults/main.yml`):
# Service
gitea_service_enabled: true
gitea_http_port: 3000
gitea_ssh_port: 2222
# Domain
gitea_subdomain: "git"
gitea_domain: "{{ caddy_domain }}"
# Domain Configuration
gitea_http_domain: "git.jnss.me" # Web interface
gitea_ssh_domain: "jnss.me" # SSH/Git operations
# SSH Mode
gitea_ssh_mode: "passthrough" # or "dedicated"
# Database (self-managed)
gitea_db_name: "gitea"
@@ -46,36 +50,108 @@ gitea_db_password: "{{ vault_gitea_db_password }}"
gitea_app_name: "Gitea: Git with a cup of tea"
gitea_disable_registration: false
gitea_enable_lfs: true
# Firewall and Security
gitea_manage_firewall: true # Automatically manage nftables rules
```
## Domain Configuration
Gitea uses **separate domains** for HTTP and SSH access, providing flexibility and cleaner URLs:
### HTTP Domain (`gitea_http_domain`)
- **Purpose**: Web interface access
- **Default**: `git.jnss.me`
- **Example**: `https://git.jnss.me`
- **Used for**: Browsing repos, managing settings, viewing commits
### SSH Domain (`gitea_ssh_domain`)
- **Purpose**: Git clone/push/pull operations
- **Default**: `jnss.me` (root domain)
- **Example**: `git@jnss.me:user/repo.git`
- **Used for**: Git operations over SSH
**Why Separate Domains?**
- ✅ Cleaner SSH URLs (no redundant "git" subdomain)
- ✅ Flexibility to use completely different domains
- ✅ Matches industry standards (GitHub, GitLab use root domain for SSH)
- ✅ Professional appearance
**Configuration Examples:**
```yaml
# Standard setup (recommended)
gitea_http_domain: "git.jnss.me"
gitea_ssh_domain: "jnss.me"
# Result: git@jnss.me:user/repo.git
# Same domain for both
gitea_http_domain: "git.jnss.me"
gitea_ssh_domain: "git.jnss.me"
# Result: git@git.jnss.me:user/repo.git
# Completely custom
gitea_http_domain: "code.jnss.me"
gitea_ssh_domain: "git.example.com"
# Result: git@git.example.com:user/repo.git
```
## SSH Modes
### Passthrough Mode (Default - Recommended)
System SSH handles Git operations via AuthorizedKeysCommand:
```bash
# Clone repository (standard Git URL, no port number)
git clone git@jnss.me:username/repository.git
# Add as remote
git remote add origin git@jnss.me:username/repository.git
# Test SSH connection
ssh -T git@jnss.me
```
**Features:**
- ✅ Standard Git URLs (no :2222 port)
- ✅ Single SSH daemon (smaller attack surface)
- ✅ System fail2ban protects everything
- ✅ Port 22 only (no extra firewall rules)
- ✅ Matches GitHub/GitLab pattern
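Under the hood, the role adds a `Match` block to `/etc/ssh/sshd_config` (templated in `tasks/ssh_passthrough.yml`, shown later in this diff) so the Gitea user's keys are looked up in Gitea instead of a static `authorized_keys` file:
```text
# BEGIN ANSIBLE MANAGED BLOCK - Gitea SSH Passthrough
Match User {{ gitea_user }}
    AuthorizedKeysCommandUser {{ gitea_user }}
    AuthorizedKeysCommand /usr/local/bin/gitea-keys %u %t %k
    AllowTcpForwarding no
    AllowAgentForwarding no
    X11Forwarding no
    PermitTTY no
```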
### Dedicated Mode (Fallback)
Gitea runs its own SSH server on port 2222:
```bash
# Clone repository (with port number)
git clone ssh://git@jnss.me:2222/username/repository.git
# Add as remote
git remote add origin ssh://git@jnss.me:2222/username/repository.git
# Test SSH connection
ssh -T -p 2222 git@jnss.me
```
**Features:**
- ✅ Complete isolation from system SSH
- ✅ Independent configuration
- ⚠️ Requires port 2222 open in firewall
- ⚠️ Non-standard URLs (requires :2222)
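In dedicated mode the role enables Gitea's built-in SSH server through the `[server]` section of `app.ini` (rendered from the role's app.ini template); the relevant rendered lines:
```ini
# [server] section - dedicated mode (values from defaults/main.yml)
DISABLE_SSH = false
START_SSH_SERVER = true
SSH_DOMAIN = jnss.me
SSH_PORT = 2222
SSH_LISTEN_PORT = 2222
```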
**To switch modes:**
```yaml
# host_vars/arch-vps/main.yml
gitea_ssh_mode: "dedicated" # or "passthrough"
```
Then re-run the playbook.
## Usage
1. **Add vault password**: Set `vault_gitea_db_password` in host_vars vault
2. **Deploy**: `ansible-playbook site.yml --tags gitea`
3. **Access**: Visit https://git.yourdomain.com to set up admin account
2. **Configure domains** (optional): Override `gitea_http_domain` and `gitea_ssh_domain` in host_vars
3. **Deploy**: `ansible-playbook site.yml --tags gitea`
4. **Access**: Visit https://git.jnss.me to set up admin account
## Dependencies
- PostgreSQL infrastructure role (auto-included)
- Caddy web server (for HTTPS access)
- Vault password: `vault_gitea_db_password`
## SSH Access
Gitea provides Git repository access via SSH on port 2222:
```bash
# Clone a repository
git clone ssh://git@git.jnss.me:2222/username/repository.git
# Or add as remote
git remote add origin ssh://git@git.jnss.me:2222/username/repository.git
```
### SSH Key Setup
## SSH Key Setup
1. **Generate SSH key** (if you don't have one):
```bash
@@ -95,40 +171,52 @@ git remote add origin ssh://git@git.jnss.me:2222/username/repository.git
4. **Test SSH connection**:
```bash
ssh -T -p 2222 git@git.jnss.me
# Passthrough mode
ssh -T git@jnss.me
# Dedicated mode
ssh -T -p 2222 git@jnss.me
```
## Firewall and Security
### Automatic Firewall Management
The Gitea role automatically manages firewall rules via nftables:
Firewall configuration is mode-aware:
- **Port 2222** is opened automatically when Gitea is deployed
- Firewall rules are stored in `/etc/nftables.d/gitea.nft`
- Rules are integrated with the main security playbook configuration
- To disable automatic firewall management, set `gitea_manage_firewall: false`
**Passthrough Mode:**
- No extra firewall rules needed (uses port 22)
- System SSH already configured in security playbook
**Dedicated Mode:**
- Port 2222 automatically opened via nftables
- Firewall rules stored in `/etc/nftables.d/50-gitea.nft`
- Rules integrated with main security playbook
- Automatically removed when switching to passthrough
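The drop-in is generated by `tasks/firewall.yml` (not shown in this diff); conceptually it only needs to accept TCP on the Gitea SSH port, assuming the main ruleset includes `/etc/nftables.d/*.nft` in its input chain. A sketch:
```text
# /etc/nftables.d/50-gitea.nft (dedicated mode only) - sketch
tcp dport 2222 accept comment "Gitea SSH (dedicated mode)"
```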
### fail2ban Protection
SSH brute force protection is automatically configured:
Protection is mode-aware:
- **Jail**: `gitea-ssh` monitors Gitea SSH authentication attempts
- **Max retries**: 5 failed attempts
- **Find time**: 10 minutes (600 seconds)
- **Ban time**: 1 hour (3600 seconds)
- **Action**: IP banned via nftables
**Passthrough Mode:**
- System `sshd` jail protects all SSH traffic (port 22)
- Covers admin SSH + Git operations automatically
- No separate Gitea jail needed
**Dedicated Mode:**
- `gitea-ssh` jail monitors Gitea logs (port 2222)
- Max retries: 5 failed attempts
- Find time: 10 minutes (600 seconds)
- Ban time: 1 hour (3600 seconds)
- Action: IP banned via nftables
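Rendered for dedicated mode, the managed block in `/etc/fail2ban/jail.local` looks roughly like this (thresholds match the defaults above; `gitea_home` defaults to `/var/lib/gitea`):
```ini
[gitea-ssh]
enabled  = true
port     = 2222
filter   = gitea-ssh
logpath  = /var/lib/gitea/log/gitea.log
maxretry = 5
findtime = 600
bantime  = 3600
```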
Check fail2ban status:
```bash
# Check Gitea SSH jail status
# Passthrough mode
fail2ban-client status sshd
# Dedicated mode
fail2ban-client status gitea-ssh
# View banned IPs
fail2ban-client get gitea-ssh banned
# Unban an IP if needed
fail2ban-client set gitea-ssh unbanip 203.0.113.100
```
### Firewall Verification
@@ -137,13 +225,20 @@ fail2ban-client set gitea-ssh unbanip 203.0.113.100
# List active nftables rules
nft list ruleset
# Check if Gitea SSH port is open
# Check for Gitea SSH port (should be empty in passthrough)
nft list ruleset | grep 2222
# Verify from external machine
nc -zv git.jnss.me 2222
# Verify SSH connectivity
nc -zv jnss.me 22 # Passthrough
nc -zv jnss.me 2222 # Dedicated
```
## Dependencies
- PostgreSQL infrastructure role (auto-included)
- Caddy web server (for HTTPS access)
- Vault password: `vault_gitea_db_password`
## Self-Contained Design
This role follows rick-infra's self-contained service pattern:
@@ -151,9 +246,18 @@ This role follows rick-infra's self-contained service pattern:
- Manages its own configuration and data
- Deploys its own Caddy reverse proxy config
- Manages its own firewall rules and security (nftables, fail2ban)
- Flexible domain configuration (not tied to infrastructure variables)
- Independent lifecycle from other services
## Migration Guide
See `docs/gitea-ssh-migration-guide.md` for:
- Switching between SSH modes
- Updating Git remote URLs
- Bulk migration scripts
- Troubleshooting
---
**Rick-Infra Gitea Service**
Git repository management with integrated CI/CD capabilities.
Git repository management with flexible SSH modes and domain configuration.

View File

@@ -20,16 +20,14 @@ gitea_home: "/var/lib/gitea"
# Network Configuration
gitea_http_port: 3000
gitea_ssh_port: 2222
# =================================================================
# Domain and Caddy Integration
# =================================================================
# Domain setup (follows rick-infra pattern)
gitea_subdomain: "git"
gitea_domain: "{{ caddy_domain | default('localhost') }}"
gitea_full_domain: "{{ gitea_subdomain }}.{{ gitea_domain }}"
gitea_http_domain: "git.jnss.me"
gitea_ssh_domain: "jnss.me"
# Caddy integration
caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"
@@ -59,19 +57,129 @@ gitea_run_mode: "prod"
gitea_default_branch: "main"
gitea_enable_lfs: true
# Security settings
gitea_disable_registration: false
gitea_require_signin: false
# =================================================================
# Private Git Server & OAuth Configuration
# =================================================================
# SSH settings
gitea_start_ssh_server: true
# Access Control - Private server with public repos allowed
gitea_disable_registration: true # No public registration (admin only)
gitea_require_signin: false # Do not force sign-in to view (anonymous users can browse public repos)
gitea_show_registration_button: false # Hide registration UI
# OAuth Configuration - Preferred but not forced
gitea_enable_password_signin: false # Hide password login form
gitea_enable_basic_auth: true # Keep password API auth as backup
gitea_oauth2_auto_registration: true # Auto-create OAuth users
gitea_oauth2_account_linking: "login" # Show account linking page
gitea_oauth2_username_source: "preferred_username"
gitea_oauth2_update_avatar: true
gitea_oauth2_scopes: "profile,email,groups"
gitea_oauth2_register_email_confirm: false
# =================================================================
# Email Configuration (Titan Email via Hostinger)
# =================================================================
gitea_mailer_enabled: true
gitea_mailer_protocol: "smtp+starttls" # Port 587 with STARTTLS
gitea_smtp_addr: "smtp.titan.email"
gitea_smtp_port: 587
gitea_mailer_from: "hello@jnss.me"
gitea_mailer_user: "hello@jnss.me"
gitea_mailer_password: "{{ vault_smtp_password }}"
gitea_mailer_subject_prefix: "[Gitea]"
# =================================================================
# Enhanced Security Settings
# =================================================================
# Session Security
gitea_session_provider: "file"
gitea_session_cookie_name: "gitea_session"
gitea_session_life_time: 3600 # 1 hour
gitea_cookie_secure: true # HTTPS-only cookies
gitea_session_same_site: "strict" # Strict CSRF protection
# Security Hardening
gitea_csrf_cookie_httponly: true # Prevent XSS on CSRF token
gitea_password_check_pwn: true # Check password breach database
gitea_reverse_proxy_limit: 1 # Trust only one proxy (Caddy)
gitea_reverse_proxy_trusted_proxies: "127.0.0.0/8,::1/128"
# =================================================================
# Repository Configuration
# =================================================================
# Privacy Defaults (private by default, public allowed)
gitea_default_private: "private" # New repos are private
gitea_default_push_create_private: true # Push-created repos are private
# Note: NOT setting gitea_force_private - allows public repos
# Repository Features
gitea_disabled_repo_units: "repo.ext_issues,repo.ext_wiki"
gitea_enable_push_create_user: false # Require manual repo creation
gitea_enable_push_create_org: false
# =================================================================
# Features & Capabilities
# =================================================================
# CI/CD Actions
gitea_actions_enabled: true # Enable Gitea Actions
gitea_actions_default_url: "github" # Use GitHub actions
gitea_actions_log_retention_days: 90
gitea_actions_artifact_retention_days: 30
# Repository Mirroring
gitea_mirror_enabled: true
gitea_mirror_default_interval: "8h"
gitea_mirror_min_interval: "1h"
# Organization & User Management
gitea_allow_create_org: true # Users can create orgs
# API Configuration
gitea_api_swagger_enabled: false # Disable API docs
# Webhook Security
gitea_webhook_allowed_hosts: "private,loopback"
gitea_webhook_skip_tls_verify: false
gitea_webhook_deliver_timeout: 5
# =================================================================
# Service Explore Configuration
# =================================================================
gitea_explore_require_signin: false # Allow browsing public content
# =================================================================
# SSH Mode Configuration
# =================================================================
# SSH Mode: 'passthrough' or 'dedicated'
# - passthrough (default): Use system SSH on port 22
# * More secure (single SSH daemon, smaller attack surface)
# * Standard Git URLs (no :2222 port number needed)
# * System fail2ban automatically protects Git operations
# * Recommended for production use
#
# - dedicated (fallback): Run Gitea's built-in SSH server on port 2222
# * Complete isolation from system SSH
# * Independent configuration and restarts
# * Requires opening port 2222 in firewall
# * Useful for debugging or when passthrough causes issues
gitea_ssh_mode: "passthrough"
# Dynamic SSH configuration based on mode
gitea_ssh_port: "{{ 22 if gitea_ssh_mode == 'passthrough' else 2222 }}"
gitea_start_ssh_server: "{{ false if gitea_ssh_mode == 'passthrough' else true }}"
# =================================================================
# Firewall Configuration
# =================================================================
# Firewall management
gitea_manage_firewall: true # Set to false if firewall is managed externally
# Firewall management (only opens port in dedicated mode)
gitea_manage_firewall: "{{ true if gitea_ssh_mode == 'dedicated' else false }}"
# =================================================================
# Infrastructure Dependencies (Read-only)
@@ -91,4 +199,4 @@ postgresql_port: 5432
# - Creates its own database and user
# - Deploys Caddy configuration to sites-enabled
# - Uses native Arch Linux Gitea package
# - Follows self-contained service pattern
# - Follows self-contained service pattern

View File

@@ -27,4 +27,11 @@
- name: restart fail2ban
systemd:
name: fail2ban
state: restarted
state: restarted
- name: restart sshd
systemd:
name: sshd
state: restarted
# Safety: only restart if not running locally
when: ansible_connection != 'local'

View File

@@ -1,7 +1,7 @@
---
# Gitea fail2ban Configuration - Rick-Infra
# Protects Gitea SSH from brute force attacks
# Integrates with system fail2ban service
# Mode-aware: Only protects dedicated mode (port 2222)
# In passthrough mode, system 'sshd' jail protects port 22
- name: Install fail2ban
pacman:
@@ -13,6 +13,7 @@
content: |
# Fail2ban filter for Gitea SSH authentication failures
# Rick-Infra: Gitea role
# Only used in dedicated mode (port {{ gitea_ssh_port }})
[Definition]
# Match failed authentication attempts in Gitea logs
@@ -33,14 +34,17 @@
modification_time: preserve
access_time: preserve
- name: Add Gitea SSH jail to fail2ban
- name: Add Gitea SSH jail to fail2ban (mode-aware)
blockinfile:
path: /etc/fail2ban/jail.local
marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH"
block: |
# Gitea SSH Protection - Rick-Infra
# Mode: {{ gitea_ssh_mode }}
# - dedicated: Monitors Gitea logs on port {{ gitea_ssh_port }}
# - passthrough: Disabled (system 'sshd' jail protects port 22)
[gitea-ssh]
enabled = true
enabled = {{ 'true' if gitea_ssh_mode == 'dedicated' else 'false' }}
port = {{ gitea_ssh_port }}
filter = gitea-ssh
logpath = {{ gitea_home }}/log/gitea.log
@@ -57,19 +61,61 @@
enabled: yes
state: started
- name: Add fail2ban restart handler
- name: Flush handlers to ensure fail2ban restarts
meta: flush_handlers
- name: Display fail2ban status for Gitea
- name: Wait for fail2ban to be ready
pause:
seconds: 2
- name: Verify gitea-ssh jail status (dedicated mode only)
command: fail2ban-client status gitea-ssh
register: gitea_jail_verify
changed_when: false
failed_when: false
when: gitea_ssh_mode == 'dedicated'
- name: Verify sshd jail status (passthrough mode)
command: fail2ban-client status sshd
register: sshd_jail_verify
changed_when: false
failed_when: false
when: gitea_ssh_mode == 'passthrough'
- name: Display fail2ban configuration status
debug:
msg: |
🛡️ fail2ban configured for Gitea SSH
📍 Filter: /etc/fail2ban/filter.d/gitea-ssh.conf
📍 Jail: gitea-ssh (in /etc/fail2ban/jail.local)
🔒 Protection: Port {{ gitea_ssh_port }}
⏱️ Ban time: 1 hour (3600 seconds)
🔢 Max retries: 5 attempts in 10 minutes
🛡️ fail2ban Protection for Gitea SSH
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📍 Mode: {{ gitea_ssh_mode | upper }}
Check status: fail2ban-client status gitea-ssh
{% if gitea_ssh_mode == 'dedicated' %}
📍 Jail: gitea-ssh
📍 Port: {{ gitea_ssh_port }}
📍 Status: {{ 'Active ✅' if gitea_jail_verify.rc == 0 else 'Not Active ⚠️' }}
📍 Filter: /etc/fail2ban/filter.d/gitea-ssh.conf
📍 Logfile: {{ gitea_home }}/log/gitea.log
Protection Settings:
• Max retries: 5 attempts
• Find time: 10 minutes (600 seconds)
• Ban time: 1 hour (3600 seconds)
Check status:
fail2ban-client status gitea-ssh
{% else %}
📍 Jail: sshd (system jail)
📍 Port: 22
📍 Status: {{ 'Active ✅' if sshd_jail_verify.rc == 0 else 'Not Active ⚠️' }}
📍 Coverage: All SSH traffic including Gitea Git operations
Note: In passthrough mode, the system 'sshd' jail automatically
protects all SSH traffic on port 22, including Gitea Git
operations. No separate gitea-ssh jail is needed.
Check status:
fail2ban-client status sshd
{% endif %}
# Rick-Infra: Self-contained fail2ban protection per role

View File

@@ -16,23 +16,30 @@
name: gitea
state: present
# Firewall configuration - self-managed by Gitea role
- name: Configure firewall for Gitea SSH
import_tasks: firewall.yml
tags: ['firewall']
when: gitea_manage_firewall | default(true)
# SSH Mode Configuration - Conditional based on gitea_ssh_mode
# Mode determines how Git SSH operations are handled
# fail2ban protection - self-managed by Gitea role
- name: Configure fail2ban for Gitea SSH
import_tasks: fail2ban.yml
tags: ['fail2ban', 'security']
when: gitea_manage_firewall | default(true)
- name: Configure SSH passthrough mode (default)
import_tasks: ssh_passthrough.yml
when: gitea_ssh_mode == "passthrough"
tags: ['ssh', 'passthrough']
- name: Configure SSH dedicated mode (fallback)
import_tasks: ssh_dedicated.yml
when: gitea_ssh_mode == "dedicated"
tags: ['ssh', 'dedicated']
- name: Install Git
pacman:
name: git
state: present
- name: Create Gitea group
group:
name: "{{ gitea_group }}"
system: yes
state: present
- name: Create Gitea user and group
user:
name: "{{ gitea_user }}"
@@ -156,8 +163,8 @@
msg: |
✅ Gitea Git service deployed successfully!
🌐 Web Interface: https://{{ gitea_full_domain }}
🔗 SSH Clone: ssh://git@{{ gitea_full_domain }}:{{ gitea_ssh_port }}
🌐 Web Interface: https://{{ gitea_http_domain }}
🔗 SSH Clone: ssh://git@{{ gitea_ssh_domain }}:{{ gitea_ssh_port }}
📦 Local HTTP: http://127.0.0.1:{{ gitea_http_port }}
🗄️ Database: {{ gitea_db_name }} (self-managed)

View File

@@ -0,0 +1,74 @@
---
# Gitea Dedicated SSH Server Configuration - Rick-Infra
# Configures Gitea to run its own SSH server on port 2222
# This is the fallback mode when passthrough is not desired
- name: Configure firewall for Gitea SSH (dedicated mode)
import_tasks: firewall.yml
tags: ['firewall']
- name: Configure fail2ban for Gitea SSH (dedicated mode)
import_tasks: fail2ban.yml
tags: ['fail2ban', 'security']
- name: Wait for fail2ban to be ready
pause:
seconds: 2
- name: Verify gitea-ssh jail is active
command: fail2ban-client status gitea-ssh
register: gitea_jail_status
changed_when: false
failed_when: false
- name: Display fail2ban protection status
debug:
msg: |
🛡️ Gitea SSH fail2ban protection:
{% if gitea_jail_status.rc == 0 %}
✅ gitea-ssh jail is ACTIVE
{{ gitea_jail_status.stdout }}
{% else %}
⚠️ WARNING: gitea-ssh jail not active!
This is a security risk - port {{ gitea_ssh_port }} is vulnerable to brute force attacks.
{% endif %}
- name: Fail if gitea-ssh jail is not running (security critical)
fail:
msg: |
SECURITY ERROR: gitea-ssh fail2ban jail is not active!
Port {{ gitea_ssh_port }} is exposed but not protected.
Check fail2ban configuration and logs.
when: gitea_jail_status.rc != 0
- name: Remove SSH passthrough configuration if present
blockinfile:
path: /etc/ssh/sshd_config
marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH Passthrough"
state: absent
backup: yes
register: sshd_config_cleaned
notify: restart sshd
- name: Remove AuthorizedKeysCommand script if present
file:
path: /usr/local/bin/gitea-keys
state: absent
- name: Display dedicated mode configuration
debug:
msg: |
🔧 Gitea SSH Mode: DEDICATED
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📍 SSH Server: Gitea built-in (port {{ gitea_ssh_port }})
🔗 Clone URL: ssh://git@{{ gitea_ssh_domain }}:{{ gitea_ssh_port }}/user/repo.git
🔥 Firewall: Port {{ gitea_ssh_port }} opened (nftables)
🛡️ fail2ban: gitea-ssh jail protecting port {{ gitea_ssh_port }}
Test connection:
ssh -T -p {{ gitea_ssh_port }} git@{{ gitea_ssh_domain }}
Clone repository:
git clone ssh://git@{{ gitea_ssh_domain }}:{{ gitea_ssh_port }}/username/repo.git
# Rick-Infra: Self-contained dedicated SSH mode with full security

View File

@@ -0,0 +1,128 @@
---
# Gitea SSH Passthrough Configuration - Rick-Infra
# Configures system SSH to handle Gitea Git authentication
# This is the default mode: more secure, standard Git URLs
- name: Ensure OpenSSH server is installed
pacman:
name: openssh
state: present
- name: Create Gitea AuthorizedKeysCommand script
template:
src: gitea-keys.sh.j2
dest: /usr/local/bin/gitea-keys
mode: '0755'
owner: root
group: root
register: gitea_keys_script
- name: Configure SSH for Gitea passthrough
blockinfile:
path: /etc/ssh/sshd_config
marker: "# {mark} ANSIBLE MANAGED BLOCK - Gitea SSH Passthrough"
block: |
# Gitea SSH Passthrough - Rick-Infra
# System SSH delegates git user authentication to Gitea
# This allows standard Git URLs: git@{{ gitea_ssh_domain }}:user/repo.git
Match User {{ gitea_user }}
AuthorizedKeysCommandUser {{ gitea_user }}
AuthorizedKeysCommand /usr/local/bin/gitea-keys %u %t %k
AllowTcpForwarding no
AllowAgentForwarding no
X11Forwarding no
PermitTTY no
backup: yes
validate: '/usr/bin/sshd -t -f %s'
register: sshd_config_changed
notify: restart sshd
- name: Verify SSH configuration syntax
command: sshd -t
changed_when: false
register: sshd_test
- name: Display SSH validation result
debug:
msg: |
{% if sshd_test.rc == 0 %}
✅ SSH configuration is valid
{% else %}
⚠️ SSH configuration test failed:
{{ sshd_test.stderr }}
{% endif %}
- name: Fail if SSH configuration is invalid
fail:
msg: "SSH configuration test failed. Rolling back changes."
when: sshd_test.rc != 0
- name: Remove Gitea firewall rule (passthrough uses port 22)
file:
path: /etc/nftables.d/50-gitea.nft
state: absent
notify: reload nftables
register: firewall_cleaned
- name: Reload nftables if firewall rule was removed
systemd:
name: nftables
state: reloaded
when: firewall_cleaned.changed
- name: Configure fail2ban for passthrough mode
import_tasks: fail2ban.yml
tags: ['fail2ban', 'security']
- name: Flush handlers to ensure sshd restarts if needed
meta: flush_handlers
- name: Wait for SSH service to be available after restart
wait_for:
port: 22
host: "{{ ansible_host | default('127.0.0.1') }}"
timeout: 30
delegate_to: localhost
become: false
when: sshd_config_changed.changed
- name: Test SSH connection after configuration
ping:
when: sshd_config_changed.changed
- name: Verify passthrough is working
command: sudo -u {{ gitea_user }} /usr/local/bin/gitea-keys {{ gitea_user }} ssh-rsa test
register: gitea_keys_test
changed_when: false
failed_when: false
- name: Display passthrough mode configuration
debug:
msg: |
🔧 Gitea SSH Mode: PASSTHROUGH (Default)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📍 SSH Server: System SSH (port 22)
🔗 Clone URL: git@{{ gitea_ssh_domain }}:user/repo.git
🔥 Firewall: Port 2222 closed (not needed)
🛡️ fail2ban: System 'sshd' jail protects all SSH traffic
🔑 AuthorizedKeysCommand: /usr/local/bin/gitea-keys
How it works:
1. User connects: ssh git@{{ gitea_ssh_domain }}
2. System SSH checks: /usr/local/bin/gitea-keys
3. Script queries Gitea database for SSH key
4. If authorized, Gitea handles Git operation
Test connection:
ssh -T git@{{ gitea_ssh_domain }}
Clone repository:
git clone git@{{ gitea_ssh_domain }}:username/repo.git
Benefits:
✅ Standard Git URLs (no :2222 port number)
✅ Single SSH daemon (smaller attack surface)
✅ System fail2ban protects everything
✅ One port to manage and monitor
# Rick-Infra: Self-contained SSH passthrough mode with enhanced security

View File

@@ -6,19 +6,32 @@ APP_NAME = {{ gitea_app_name }}
RUN_MODE = {{ gitea_run_mode }}
[repository]
# === Repository Storage ===
ROOT = {{ gitea_home }}/repositories
DEFAULT_BRANCH = {{ gitea_default_branch }}
# === Privacy Defaults ===
DEFAULT_PRIVATE = {{ gitea_default_private }}
DEFAULT_PUSH_CREATE_PRIVATE = {{ gitea_default_push_create_private | lower }}
# === Repository Features ===
DISABLED_REPO_UNITS = {{ gitea_disabled_repo_units }}
ENABLE_PUSH_CREATE_USER = {{ gitea_enable_push_create_user | lower }}
ENABLE_PUSH_CREATE_ORG = {{ gitea_enable_push_create_org | lower }}
[server]
PROTOCOL = http
DOMAIN = {{ gitea_full_domain }}
DOMAIN = {{ gitea_http_domain }}
HTTP_PORT = {{ gitea_http_port }}
ROOT_URL = https://{{ gitea_full_domain }}/
ROOT_URL = https://{{ gitea_http_domain }}/
DISABLE_SSH = false
# SSH Mode: {{ gitea_ssh_mode }}
START_SSH_SERVER = {{ gitea_start_ssh_server | lower }}
SSH_DOMAIN = {{ gitea_full_domain }}
SSH_DOMAIN = {{ gitea_ssh_domain }}
SSH_PORT = {{ gitea_ssh_port }}
{% if gitea_ssh_mode == 'dedicated' %}
SSH_LISTEN_PORT = {{ gitea_ssh_port }}
{% endif %}
LOCAL_ROOT_URL = http://127.0.0.1:{{ gitea_http_port }}/
APP_DATA_PATH = {{ gitea_home }}/data
@@ -38,16 +51,62 @@ SSL_MODE = disable
CHARSET = utf8
[security]
# === Core Security ===
INSTALL_LOCK = true
SECRET_KEY = {{ ansible_machine_id }}{{ gitea_db_password | hash('sha256') }}
INTERNAL_TOKEN = {{ (ansible_machine_id + gitea_db_password) | hash('sha256') }}
# === Enhanced Security ===
CSRF_COOKIE_HTTP_ONLY = {{ gitea_csrf_cookie_httponly | lower }}
PASSWORD_CHECK_PWN = {{ gitea_password_check_pwn | lower }}
REVERSE_PROXY_LIMIT = {{ gitea_reverse_proxy_limit }}
REVERSE_PROXY_TRUSTED_PROXIES = {{ gitea_reverse_proxy_trusted_proxies }}
[service]
# === Access Control ===
DISABLE_REGISTRATION = {{ gitea_disable_registration | lower }}
REQUIRE_SIGNIN_VIEW = {{ gitea_require_signin | lower }}
SHOW_REGISTRATION_BUTTON = {{ gitea_show_registration_button | lower }}
# === OAuth Configuration ===
ENABLE_PASSWORD_SIGNIN_FORM = {{ gitea_enable_password_signin | lower }}
ENABLE_BASIC_AUTHENTICATION = {{ gitea_enable_basic_auth | lower }}
# === Defaults ===
DEFAULT_KEEP_EMAIL_PRIVATE = true
DEFAULT_ALLOW_CREATE_ORGANIZATION = true
NO_REPLY_ADDRESS = noreply.{{ gitea_domain }}
DEFAULT_ALLOW_CREATE_ORGANIZATION = {{ gitea_allow_create_org | lower }}
NO_REPLY_ADDRESS = noreply@{{ gitea_http_domain }}
[oauth2_client]
# === Authentik OAuth Integration ===
ENABLE_AUTO_REGISTRATION = {{ gitea_oauth2_auto_registration | lower }}
ACCOUNT_LINKING = {{ gitea_oauth2_account_linking }}
USERNAME = {{ gitea_oauth2_username_source }}
UPDATE_AVATAR = {{ gitea_oauth2_update_avatar | lower }}
OPENID_CONNECT_SCOPES = {{ gitea_oauth2_scopes }}
REGISTER_EMAIL_CONFIRM = {{ gitea_oauth2_register_email_confirm | lower }}
[mailer]
ENABLED = {{ gitea_mailer_enabled | lower }}
{% if gitea_mailer_enabled %}
PROTOCOL = {{ gitea_mailer_protocol }}
SMTP_ADDR = {{ gitea_smtp_addr }}
SMTP_PORT = {{ gitea_smtp_port }}
FROM = {{ gitea_mailer_from }}
USER = {{ gitea_mailer_user }}
PASSWD = {{ gitea_mailer_password }}
SUBJECT_PREFIX = {{ gitea_mailer_subject_prefix }}
SEND_AS_PLAIN_TEXT = false
SMTP_AUTH = PLAIN
{% endif %}
[session]
# === Session Security ===
PROVIDER = {{ gitea_session_provider }}
COOKIE_NAME = {{ gitea_session_cookie_name }}
COOKIE_SECURE = {{ gitea_cookie_secure | lower }}
SESSION_LIFE_TIME = {{ gitea_session_life_time }}
SAME_SITE = {{ gitea_session_same_site }}
[log]
MODE = console
@@ -63,4 +122,37 @@ CONTENT_PATH = {{ gitea_home }}/data/lfs
[git]
PATH = /usr/bin/git
# Rick-Infra: Simplified Gitea configuration for self-contained service
[actions]
# === CI/CD Configuration ===
ENABLED = {{ gitea_actions_enabled | lower }}
{% if gitea_actions_enabled %}
DEFAULT_ACTIONS_URL = {{ gitea_actions_default_url }}
LOG_RETENTION_DAYS = {{ gitea_actions_log_retention_days }}
ARTIFACT_RETENTION_DAYS = {{ gitea_actions_artifact_retention_days }}
{% endif %}
[mirror]
# === Repository Mirroring ===
ENABLED = {{ gitea_mirror_enabled | lower }}
DISABLE_NEW_PULL = false
DISABLE_NEW_PUSH = false
DEFAULT_INTERVAL = {{ gitea_mirror_default_interval }}
MIN_INTERVAL = {{ gitea_mirror_min_interval }}
[api]
# === API Configuration ===
ENABLE_SWAGGER = {{ gitea_api_swagger_enabled | lower }}
MAX_RESPONSE_ITEMS = 50
DEFAULT_PAGING_NUM = 30
[webhook]
# === Webhook Security ===
ALLOWED_HOST_LIST = {{ gitea_webhook_allowed_hosts }}
SKIP_TLS_VERIFY = {{ gitea_webhook_skip_tls_verify | lower }}
DELIVER_TIMEOUT = {{ gitea_webhook_deliver_timeout }}
[service.explore]
# === Public Content Exploration ===
REQUIRE_SIGNIN_VIEW = {{ gitea_explore_require_signin | lower }}
# Rick-Infra: Private Gitea configuration with OAuth and email support

View File

@@ -0,0 +1,28 @@
#!/bin/bash
# Gitea SSH Keys AuthorizedKeysCommand - Rick-Infra
# Generated by Ansible Gitea role
#
# This script is called by OpenSSH's AuthorizedKeysCommand to query
# Gitea's database for SSH public keys when the 'git' user connects.
#
# Called by SSH with parameters:
# %u = username (should be "git")
# %t = key type (ssh-rsa, ssh-ed25519, ecdsa-sha2-nistp256, etc.)
# %k = base64 encoded public key content
#
# The script returns authorized_keys format entries that include
# forced commands to execute Gitea's Git server.
set -euo pipefail
# Gitea keys command queries the database and returns authorized_keys format
# If the key is found, it returns a line like:
# command="/usr/bin/gitea serv key-123",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAA...
exec /usr/bin/gitea keys \
--config /etc/gitea/app.ini \
--username "$1" \
--type "$2" \
--content "$3"
# Rick-Infra: AuthorizedKeysCommand for Gitea SSH passthrough mode

View File

@@ -2,7 +2,7 @@
# Generated by Ansible Gitea role
# Deployed to {{ caddy_sites_enabled_dir }}/gitea.caddy
{{ gitea_full_domain }} {
{{ gitea_http_domain }} {
# Reverse proxy to Gitea
reverse_proxy 127.0.0.1:{{ gitea_http_port }}
@@ -29,4 +29,4 @@
}
}
# Rick-Infra: Self-contained Gitea service with Caddy reverse proxy
# Rick-Infra: Self-contained Gitea service with Caddy reverse proxy

roles/metrics/README.md Normal file
View File

@@ -0,0 +1,325 @@
# Metrics Role
Complete monitoring stack for rick-infra providing system metrics collection, storage, and visualization with SSO integration.
## Components
### VictoriaMetrics
- **Purpose**: Time-series database for metrics storage
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:8428` (localhost only)
- **Features**:
- Prometheus-compatible API and PromQL
- 7x less RAM usage than Prometheus
- Single binary deployment
- 12-month data retention by default
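In practice this is a single binary launched by systemd; a sketch of the flags the unit passes (flag names are upstream VictoriaMetrics options, values from `defaults/main.yml`; the binary path is illustrative):
```bash
# Illustrative ExecStart line - the actual unit file is templated by the role
/usr/local/bin/victoria-metrics \
  -httpListenAddr=127.0.0.1:8428 \
  -storageDataPath=/var/lib/victoriametrics \
  -retentionPeriod=12 \
  -promscrape.config=/etc/victoriametrics/scrape.yml
```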
### Grafana
- **Purpose**: Metrics visualization and dashboarding
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:3000` (localhost only, proxied via Caddy)
- **Domain**: `metrics.jnss.me`
- **Features**:
- OAuth/OIDC integration with Authentik
- Role-based access control via Authentik groups
- VictoriaMetrics as default data source
### node_exporter
- **Purpose**: System metrics collection
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:9100` (localhost only)
- **Metrics**: CPU, memory, disk, network, systemd units
## Architecture
```
┌─────────────────────────────────────────────────────┐
│ metrics.jnss.me (Grafana Dashboard) │
│ ┌─────────────────────────────────────────────────┐ │
│ │ Caddy (HTTPS) │ │
│ │ ↓ │ │
│ │ Grafana (OAuth → Authentik) │ │
│ │ ↓ │ │
│ │ VictoriaMetrics (Prometheus-compatible) │ │
│ │ ↑ │ │
│ │ node_exporter (System Metrics) │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
```
## Deployment
### Prerequisites
1. **Caddy role deployed** - Required for HTTPS proxy
2. **Authentik deployed** - Required for OAuth/SSO
3. **Vault variables configured**:
```yaml
# In host_vars/arch-vps/vault.yml
vault_grafana_admin_password: "secure-admin-password"
vault_grafana_secret_key: "random-secret-key-32-chars"
vault_grafana_oauth_client_id: "grafana"
vault_grafana_oauth_client_secret: "oauth-client-secret-from-authentik"
```
### Authentik Configuration
Before deployment, create OAuth2/OIDC provider in Authentik:
1. **Create Provider**:
- Name: `Grafana`
- Type: `OAuth2/OpenID Provider`
- Client ID: `grafana`
- Client Secret: Generate and save to vault
- Redirect URIs: `https://metrics.jnss.me/login/generic_oauth`
- Signing Key: Auto-generated
2. **Create Application**:
- Name: `Grafana`
- Slug: `grafana`
- Provider: Select Grafana provider created above
3. **Create Groups** (optional, for role mapping):
- `grafana-admins` - Full admin access
- `grafana-editors` - Can create/edit dashboards
- Users without these groups get Viewer access
### Deploy
```bash
# Deploy complete metrics stack
ansible-playbook rick-infra.yml --tags metrics
# Deploy individual components
ansible-playbook rick-infra.yml --tags victoriametrics
ansible-playbook rick-infra.yml --tags grafana
ansible-playbook rick-infra.yml --tags node_exporter
```
### Verify Deployment
```bash
# Check service status
ansible homelab -a "systemctl status victoriametrics grafana node_exporter"
# Check metrics collection
curl http://127.0.0.1:9100/metrics # node_exporter metrics
curl http://127.0.0.1:8428/metrics # VictoriaMetrics metrics
curl http://127.0.0.1:8428/api/v1/targets # Scrape targets
# Access Grafana
curl -I https://metrics.jnss.me/ # Should redirect to Authentik login
```
## Usage
### Access Dashboard
1. Navigate to `https://metrics.jnss.me`
2. Click "Sign in with Authentik"
3. Authenticate via Authentik SSO
4. Access granted based on Authentik group membership
### Role Mapping
Grafana roles are automatically assigned based on Authentik groups:
- **Admin**: Members of `grafana-admins` group
- Full administrative access
- Can manage users, data sources, plugins
- Can create/edit/delete all dashboards
- **Editor**: Members of `grafana-editors` group
- Can create and edit dashboards
- Cannot manage users or data sources
- **Viewer**: All other authenticated users
- Read-only access to dashboards
- Cannot create or edit dashboards
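The mapping is wired through Grafana's generic OAuth settings; a sketch of the `[auth.generic_oauth]` block the role renders into `grafana.ini` (values from `defaults/main.yml`, secrets from the vault):
```ini
[auth.generic_oauth]
enabled = true
name = Authentik
client_id = {{ vault_grafana_oauth_client_id }}
client_secret = {{ vault_grafana_oauth_client_secret }}
scopes = openid profile email groups
auth_url = https://{{ authentik_domain }}/application/o/authorize/
token_url = https://{{ authentik_domain }}/application/o/token/
api_url = https://{{ authentik_domain }}/application/o/userinfo/
allow_sign_up = true
role_attribute_path = (contains(groups, 'authentik Admins') || contains(groups, 'grafana-admins')) && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'
```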
### Creating Dashboards
Grafana comes with VictoriaMetrics pre-configured as the default data source. Use PromQL queries:
```promql
# CPU usage
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory usage
node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes
# Disk usage
100 - ((node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100)
# Network traffic
irate(node_network_receive_bytes_total[5m])
```
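The default data source comes from Grafana's file-based provisioning; a minimal sketch of what the role deploys to `/etc/grafana/provisioning/datasources/victoriametrics.yml`:
```yaml
apiVersion: 1
datasources:
  - name: VictoriaMetrics
    type: prometheus   # VictoriaMetrics speaks the Prometheus HTTP API
    access: proxy
    url: http://127.0.0.1:8428
    isDefault: true
```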
### Import Community Dashboards
1. Browse dashboards at https://grafana.com/grafana/dashboards/
2. Recommended for node_exporter:
- Dashboard ID: 1860 (Node Exporter Full)
- Dashboard ID: 11074 (Node Exporter for Prometheus)
3. Import via Grafana UI: Dashboards → Import → Enter ID
## Configuration
### Customization
Key configuration options in `roles/metrics/defaults/main.yml`:
```yaml
# Data retention
victoriametrics_retention_period: "12" # months
# Scrape interval
victoriametrics_scrape_interval: "15s"
# OAuth role mapping (JMESPath expression)
grafana_oauth_role_attribute_path: "contains(groups, 'grafana-admins') && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"
# Memory limits
victoriametrics_memory_allowed_percent: "60"
```
### Adding Scrape Targets
Edit `roles/metrics/templates/scrape.yml.j2`:
```yaml
scrape_configs:
# Add custom application metrics
- job_name: 'myapp'
static_configs:
- targets: ['127.0.0.1:8080']
labels:
service: 'myapp'
```
## Operations
### Service Management
```bash
# VictoriaMetrics
systemctl status victoriametrics
systemctl restart victoriametrics
journalctl -u victoriametrics -f
# Grafana
systemctl status grafana
systemctl restart grafana
journalctl -u grafana -f
# node_exporter
systemctl status node_exporter
systemctl restart node_exporter
journalctl -u node_exporter -f
```
### Data Locations
```
/var/lib/victoriametrics/ # Time-series data
/var/lib/grafana/ # Grafana database and dashboards
/var/log/grafana/ # Grafana logs
/etc/victoriametrics/ # VictoriaMetrics config
/etc/grafana/ # Grafana config
```
### Backup
VictoriaMetrics data is stored in `/var/lib/victoriametrics`:
```bash
# Stop service
systemctl stop victoriametrics
# Backup data
tar -czf victoriametrics-backup-$(date +%Y%m%d).tar.gz /var/lib/victoriametrics
# Start service
systemctl start victoriametrics
```
Grafana dashboards are stored in SQLite database at `/var/lib/grafana/grafana.db`:
```bash
# Backup Grafana
systemctl stop grafana
tar -czf grafana-backup-$(date +%Y%m%d).tar.gz /var/lib/grafana /etc/grafana
systemctl start grafana
```
## Security
### Authentication
- Grafana protected by Authentik OAuth/OIDC
- Local admin account available for emergency access
- All services bind to localhost only
### Network Security
- VictoriaMetrics: `127.0.0.1:8428` (no external access)
- Grafana: `127.0.0.1:3000` (proxied via Caddy with HTTPS)
- node_exporter: `127.0.0.1:9100` (no external access)
### systemd Hardening
All services run with security restrictions:
- `NoNewPrivileges=true`
- `ProtectSystem=strict`
- `ProtectHome=true`
- `PrivateTmp=true`
- Read-only filesystem (except data directories)
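As a sketch, each templated unit carries a `[Service]` hardening block along these lines (user and writable paths vary per service):
```ini
[Service]
User=victoriametrics
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
PrivateTmp=true
ReadWritePaths=/var/lib/victoriametrics
```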
## Troubleshooting
### Grafana OAuth Not Working
1. Check Authentik provider configuration:
```bash
# Verify redirect URI matches
# https://metrics.jnss.me/login/generic_oauth
```
2. Check Grafana logs:
```bash
journalctl -u grafana -f
```
3. Verify OAuth credentials in vault match Authentik
### No Metrics in Grafana
1. Check VictoriaMetrics scrape targets:
```bash
curl http://127.0.0.1:8428/api/v1/targets
```
2. Check node_exporter is running:
```bash
systemctl status node_exporter
curl http://127.0.0.1:9100/metrics
```
3. Check VictoriaMetrics logs:
```bash
journalctl -u victoriametrics -f
```
### High Memory Usage
VictoriaMetrics is configured to use max 60% of available memory. Adjust if needed:
```yaml
# In roles/metrics/defaults/main.yml
victoriametrics_memory_allowed_percent: "40" # Reduce to 40%
```
## See Also
- [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
- [Grafana Documentation](https://grafana.com/docs/)
- [node_exporter GitHub](https://github.com/prometheus/node_exporter)
- [PromQL Documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/)
- [Authentik OAuth Integration](https://goauthentik.io/docs/providers/oauth2/)

View File

@@ -0,0 +1,178 @@
---
# =================================================================
# Metrics Infrastructure Role - Complete Monitoring Stack
# =================================================================
# Provides VictoriaMetrics, Grafana, and node_exporter as unified stack
# =================================================================
# VictoriaMetrics Configuration
# =================================================================
# Service Management
victoriametrics_service_enabled: true
victoriametrics_service_state: "started"
# Version
victoriametrics_version: "1.105.0"
# Network Security (localhost only)
victoriametrics_listen_address: "127.0.0.1:8428"
# Storage Configuration
victoriametrics_data_dir: "/var/lib/victoriametrics"
victoriametrics_retention_period: "12" # months
# User/Group
victoriametrics_user: "victoriametrics"
victoriametrics_group: "victoriametrics"
# Performance Settings
victoriametrics_memory_allowed_percent: "30"
victoriametrics_storage_min_free_disk_space_bytes: "10GB"
# Scrape Configuration
victoriametrics_scrape_config_dir: "/etc/victoriametrics"
victoriametrics_scrape_config_file: "{{ victoriametrics_scrape_config_dir }}/scrape.yml"
victoriametrics_scrape_interval: "15s"
victoriametrics_scrape_timeout: "10s"
# systemd security
victoriametrics_systemd_security: true
# =================================================================
# Grafana Configuration
# =================================================================
# Service Management
grafana_service_enabled: true
grafana_service_state: "started"
# Version
grafana_version: "11.4.0"
# Network Security (localhost only - proxied via Caddy)
grafana_listen_address: "127.0.0.1"
grafana_listen_port: 3420
# User/Group
grafana_user: "grafana"
grafana_group: "grafana"
# Directories
grafana_data_dir: "/var/lib/grafana"
grafana_logs_dir: "/var/log/grafana"
grafana_plugins_dir: "/var/lib/grafana/plugins"
grafana_provisioning_dir: "/etc/grafana/provisioning"
# Domain Configuration
grafana_domain: "metrics.{{ caddy_domain }}"
grafana_root_url: "https://{{ grafana_domain }}"
# Default admin (used only for initial setup)
grafana_admin_user: "admin"
grafana_admin_password: "{{ vault_grafana_admin_password }}"
# Disable registration (OAuth only)
grafana_allow_signup: false
grafana_disable_login_form: false # Keep fallback login
# OAuth/OIDC Configuration (Authentik)
grafana_oauth_enabled: true
grafana_oauth_name: "Authentik"
grafana_oauth_client_id: "{{ vault_grafana_oauth_client_id }}"
grafana_oauth_client_secret: "{{ vault_grafana_oauth_client_secret }}"
# Authentik OAuth endpoints
grafana_oauth_auth_url: "https://{{ authentik_domain }}/application/o/authorize/"
grafana_oauth_token_url: "https://{{ authentik_domain }}/application/o/token/"
grafana_oauth_api_url: "https://{{ authentik_domain }}/application/o/userinfo/"
# OAuth role mapping
grafana_oauth_role_attribute_path: "(contains(groups, 'authentik Admins') || contains(groups, 'grafana-admins')) && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"
grafana_oauth_allow_sign_up: true # Auto-create users from OAuth
grafana_oauth_scopes: "openid profile email groups"
# Data Source Configuration
grafana_datasource_vm_enabled: true
grafana_datasource_vm_url: "http://{{ victoriametrics_listen_address }}"
grafana_datasource_vm_name: "VictoriaMetrics"
# Security
grafana_systemd_security: true
grafana_cookie_secure: true
grafana_cookie_samesite: "lax"
# Database (SQLite by default)
grafana_database_type: "sqlite3"
grafana_database_path: "{{ grafana_data_dir }}/grafana.db"
# =================================================================
# Node Exporter Configuration
# =================================================================
# Service Management
node_exporter_service_enabled: true
node_exporter_service_state: "started"
# Version
node_exporter_version: "1.8.2"
# Network Security (localhost only)
node_exporter_listen_address: "127.0.0.1:9100"
# User/Group
node_exporter_user: "node_exporter"
node_exporter_group: "node_exporter"
# Enabled collectors
node_exporter_enabled_collectors:
- cpu
- diskstats
- filesystem
- loadavg
- meminfo
- netdev
- netstat
- stat
- time
- uname
- vmstat
- systemd
# Disabled collectors
node_exporter_disabled_collectors:
- mdadm
# Filesystem collector configuration
node_exporter_filesystem_ignored_fs_types:
- tmpfs
- devtmpfs
- devfs
- iso9660
- overlay
- aufs
- squashfs
node_exporter_filesystem_ignored_mount_points:
- /var/lib/containers/storage/.*
- /run/.*
- /sys/.*
- /proc/.*
# systemd security
node_exporter_systemd_security: true
# =================================================================
# Infrastructure Notes
# =================================================================
# Complete monitoring stack:
# - VictoriaMetrics: Time-series database (Prometheus-compatible)
# - Grafana: Visualization with Authentik OAuth integration
# - node_exporter: System metrics collection
#
# Role mapping via Authentik groups:
# - grafana-admins: Full admin access
# - grafana-editors: Can create/edit dashboards
# - Default: Viewer access
#
# All services run on localhost only, proxied via Caddy

View File

@@ -0,0 +1,23 @@
---
- name: restart victoriametrics
ansible.builtin.systemd:
name: victoriametrics
state: restarted
daemon_reload: true
- name: restart node_exporter
ansible.builtin.systemd:
name: node_exporter
state: restarted
daemon_reload: true
- name: restart grafana
ansible.builtin.systemd:
name: grafana
state: restarted
daemon_reload: true
- name: reload caddy
ansible.builtin.systemd:
name: caddy
state: reloaded

View File

@@ -0,0 +1,3 @@
---
dependencies:
- role: caddy

View File

@@ -0,0 +1,9 @@
---
- name: Deploy Grafana Caddy configuration
ansible.builtin.template:
src: grafana.caddy.j2
dest: /etc/caddy/sites-enabled/grafana.caddy
owner: caddy
group: caddy
mode: '0644'
notify: reload caddy

View File

@@ -0,0 +1,90 @@
---
- name: Create Grafana system user
ansible.builtin.user:
name: "{{ grafana_user }}"
system: true
create_home: false
shell: /usr/sbin/nologin
state: present
- name: Create Grafana directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ grafana_user }}"
group: "{{ grafana_group }}"
mode: '0755'
loop:
- "{{ grafana_data_dir }}"
- "{{ grafana_logs_dir }}"
- "{{ grafana_plugins_dir }}"
- "{{ grafana_provisioning_dir }}"
- "{{ grafana_provisioning_dir }}/datasources"
- "{{ grafana_provisioning_dir }}/dashboards"
- "{{ grafana_data_dir }}/dashboards"
- /etc/grafana
- name: Download Grafana binary
ansible.builtin.get_url:
url: "https://dl.grafana.com/oss/release/grafana-{{ grafana_version }}.linux-amd64.tar.gz"
dest: "/tmp/grafana-{{ grafana_version }}.tar.gz"
mode: '0644'
register: grafana_download
- name: Extract Grafana
ansible.builtin.unarchive:
src: "/tmp/grafana-{{ grafana_version }}.tar.gz"
dest: /opt
remote_src: true
creates: "/opt/grafana-v{{ grafana_version }}"
when: grafana_download.changed
- name: Create Grafana symlink
ansible.builtin.file:
src: "/opt/grafana-v{{ grafana_version }}"
dest: /opt/grafana
state: link
- name: Deploy Grafana configuration
ansible.builtin.template:
src: grafana.ini.j2
dest: /etc/grafana/grafana.ini
owner: "{{ grafana_user }}"
group: "{{ grafana_group }}"
mode: '0640'
notify: restart grafana
- name: Deploy VictoriaMetrics datasource provisioning
ansible.builtin.template:
src: datasource-victoriametrics.yml.j2
dest: "{{ grafana_provisioning_dir }}/datasources/victoriametrics.yml"
owner: "{{ grafana_user }}"
group: "{{ grafana_group }}"
mode: '0644'
notify: restart grafana
when: grafana_datasource_vm_enabled
- name: Deploy dashboard provisioning
ansible.builtin.template:
src: dashboards.yml.j2
dest: "{{ grafana_provisioning_dir }}/dashboards/default.yml"
owner: "{{ grafana_user }}"
group: "{{ grafana_group }}"
mode: '0644'
notify: restart grafana
- name: Deploy Grafana systemd service
ansible.builtin.template:
src: grafana.service.j2
dest: /etc/systemd/system/grafana.service
owner: root
group: root
mode: '0644'
notify: restart grafana
- name: Enable and start Grafana service
ansible.builtin.systemd:
name: grafana
enabled: "{{ grafana_service_enabled }}"
state: "{{ grafana_service_state }}"
daemon_reload: true

View File

@@ -0,0 +1,20 @@
---
# =================================================================
# Metrics Stack Deployment
# =================================================================
- name: Deploy VictoriaMetrics
ansible.builtin.include_tasks: victoriametrics.yml
tags: [metrics, victoriametrics]
- name: Deploy node_exporter
ansible.builtin.include_tasks: node_exporter.yml
tags: [metrics, node_exporter]
- name: Deploy Grafana
ansible.builtin.include_tasks: grafana.yml
tags: [metrics, grafana]
- name: Deploy Caddy configuration for Grafana
ansible.builtin.include_tasks: caddy.yml
tags: [metrics, caddy]

View File

@@ -0,0 +1,49 @@
---
- name: Create node_exporter system user
ansible.builtin.user:
name: "{{ node_exporter_user }}"
system: true
create_home: false
shell: /usr/sbin/nologin
state: present
- name: Download node_exporter binary
ansible.builtin.get_url:
url: "https://github.com/prometheus/node_exporter/releases/download/v{{ node_exporter_version }}/node_exporter-{{ node_exporter_version }}.linux-amd64.tar.gz"
dest: "/tmp/node_exporter-{{ node_exporter_version }}.tar.gz"
mode: '0644'
register: node_exporter_download
- name: Extract node_exporter binary
ansible.builtin.unarchive:
src: "/tmp/node_exporter-{{ node_exporter_version }}.tar.gz"
dest: /tmp
remote_src: true
creates: "/tmp/node_exporter-{{ node_exporter_version }}.linux-amd64"
when: node_exporter_download.changed
- name: Copy node_exporter binary to /usr/local/bin
ansible.builtin.copy:
src: "/tmp/node_exporter-{{ node_exporter_version }}.linux-amd64/node_exporter"
dest: /usr/local/bin/node_exporter
owner: root
group: root
mode: '0755'
remote_src: true
when: node_exporter_download.changed
- name: Deploy node_exporter systemd service
ansible.builtin.template:
src: node_exporter.service.j2
dest: /etc/systemd/system/node_exporter.service
owner: root
group: root
mode: '0644'
notify: restart node_exporter
- name: Enable and start node_exporter service
ansible.builtin.systemd:
name: node_exporter
enabled: "{{ node_exporter_service_enabled }}"
state: "{{ node_exporter_service_state }}"
daemon_reload: true

View File

@@ -0,0 +1,66 @@
---
- name: Create VictoriaMetrics system user
ansible.builtin.user:
name: "{{ victoriametrics_user }}"
system: true
create_home: false
shell: /usr/sbin/nologin
state: present
- name: Create VictoriaMetrics directories
ansible.builtin.file:
path: "{{ item }}"
state: directory
owner: "{{ victoriametrics_user }}"
group: "{{ victoriametrics_group }}"
mode: '0755'
loop:
- "{{ victoriametrics_data_dir }}"
- "{{ victoriametrics_scrape_config_dir }}"
- name: Download VictoriaMetrics binary
ansible.builtin.get_url:
url: "https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v{{ victoriametrics_version }}/victoria-metrics-linux-amd64-v{{ victoriametrics_version }}.tar.gz"
dest: "/tmp/victoria-metrics-v{{ victoriametrics_version }}.tar.gz"
mode: '0644'
register: victoriametrics_download
- name: Extract VictoriaMetrics binary
ansible.builtin.unarchive:
src: "/tmp/victoria-metrics-v{{ victoriametrics_version }}.tar.gz"
dest: /usr/local/bin
remote_src: true
creates: /usr/local/bin/victoria-metrics-prod
when: victoriametrics_download.changed
- name: Set VictoriaMetrics binary permissions
ansible.builtin.file:
path: /usr/local/bin/victoria-metrics-prod
owner: root
group: root
mode: '0755'
- name: Deploy VictoriaMetrics scrape configuration
ansible.builtin.template:
src: scrape.yml.j2
dest: "{{ victoriametrics_scrape_config_file }}"
owner: "{{ victoriametrics_user }}"
group: "{{ victoriametrics_group }}"
mode: '0644'
notify: restart victoriametrics
- name: Deploy VictoriaMetrics systemd service
ansible.builtin.template:
src: victoriametrics.service.j2
dest: /etc/systemd/system/victoriametrics.service
owner: root
group: root
mode: '0644'
notify: restart victoriametrics
- name: Enable and start VictoriaMetrics service
ansible.builtin.systemd:
name: victoriametrics
enabled: "{{ victoriametrics_service_enabled }}"
state: "{{ victoriametrics_service_state }}"
daemon_reload: true

View File

@@ -0,0 +1,12 @@
apiVersion: 1
providers:
- name: 'default'
orgId: 1
folder: ''
type: file
disableDeletion: false
updateIntervalSeconds: 10
allowUiUpdates: true
options:
path: {{ grafana_data_dir }}/dashboards

View File

@@ -0,0 +1,12 @@
apiVersion: 1
datasources:
- name: {{ grafana_datasource_vm_name }}
type: prometheus
access: proxy
url: {{ grafana_datasource_vm_url }}
isDefault: true
editable: true
jsonData:
httpMethod: POST
timeInterval: 15s

View File

@@ -0,0 +1,26 @@
# Grafana Metrics Dashboard
{{ grafana_domain }} {
reverse_proxy http://{{ grafana_listen_address }}:{{ grafana_listen_port }} {
header_up Host {host}
header_up X-Real-IP {remote_host}
header_up X-Forwarded-Proto https
header_up X-Forwarded-For {remote_host}
header_up X-Forwarded-Host {host}
}
# Security headers
header {
X-Frame-Options SAMEORIGIN
X-Content-Type-Options nosniff
X-XSS-Protection "1; mode=block"
Referrer-Policy strict-origin-when-cross-origin
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
}
# Logging
log {
output file {{ caddy_log_dir }}/grafana.log
level INFO
format json
}
}

View File

@@ -0,0 +1,68 @@
# Grafana Configuration
# Managed by Ansible - DO NOT EDIT MANUALLY
[paths]
data = {{ grafana_data_dir }}
logs = {{ grafana_logs_dir }}
plugins = {{ grafana_plugins_dir }}
provisioning = {{ grafana_provisioning_dir }}
[server]
http_addr = {{ grafana_listen_address }}
http_port = {{ grafana_listen_port }}
domain = {{ grafana_domain }}
root_url = {{ grafana_root_url }}
enforce_domain = true
enable_gzip = true
[database]
type = {{ grafana_database_type }}
{% if grafana_database_type == 'sqlite3' %}
path = {{ grafana_database_path }}
{% endif %}
[security]
admin_user = {{ grafana_admin_user }}
admin_password = {{ grafana_admin_password }}
secret_key = {{ vault_grafana_secret_key }}
cookie_secure = {{ grafana_cookie_secure | lower }}
cookie_samesite = {{ grafana_cookie_samesite }}
disable_gravatar = true
disable_initial_admin_creation = false
[users]
allow_sign_up = {{ grafana_allow_signup | lower }}
allow_org_create = false
auto_assign_org = true
auto_assign_org_role = Viewer
[auth]
disable_login_form = {{ grafana_disable_login_form | lower }}
oauth_auto_login = false
{% if grafana_oauth_enabled %}
[auth.generic_oauth]
enabled = true
name = {{ grafana_oauth_name }}
client_id = {{ grafana_oauth_client_id }}
client_secret = {{ grafana_oauth_client_secret }}
scopes = {{ grafana_oauth_scopes }}
auth_url = {{ grafana_oauth_auth_url }}
token_url = {{ grafana_oauth_token_url }}
api_url = {{ grafana_oauth_api_url }}
allow_sign_up = {{ grafana_oauth_allow_sign_up | lower }}
role_attribute_path = {{ grafana_oauth_role_attribute_path }}
use_pkce = true
{% endif %}
[log]
mode = console
level = info
[analytics]
reporting_enabled = false
check_for_updates = false
check_for_plugin_updates = false
[snapshots]
external_enabled = false

View File

@@ -0,0 +1,36 @@
[Unit]
Description=Grafana visualization platform
Documentation=https://grafana.com/docs/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ grafana_user }}
Group={{ grafana_group }}
WorkingDirectory=/opt/grafana
ExecStart=/opt/grafana/bin/grafana-server \
--config=/etc/grafana/grafana.ini \
--homepath=/opt/grafana
Restart=on-failure
RestartSec=5s
# Security hardening
{% if grafana_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ grafana_data_dir }} {{ grafana_logs_dir }}
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
{% endif %}
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,42 @@
[Unit]
Description=Prometheus Node Exporter
Documentation=https://github.com/prometheus/node_exporter
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ node_exporter_user }}
Group={{ node_exporter_group }}
ExecStart=/usr/local/bin/node_exporter \
--web.listen-address={{ node_exporter_listen_address }} \
{% for collector in node_exporter_enabled_collectors %}
--collector.{{ collector }} \
{% endfor %}
{% for collector in node_exporter_disabled_collectors %}
--no-collector.{{ collector }} \
{% endfor %}
--collector.filesystem.fs-types-exclude="{{ node_exporter_filesystem_ignored_fs_types | join('|') }}" \
--collector.filesystem.mount-points-exclude="{{ node_exporter_filesystem_ignored_mount_points | join('|') }}"
Restart=on-failure
RestartSec=5s
# Security hardening
{% if node_exporter_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
ReadOnlyPaths=/
{% endif %}
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,22 @@
global:
scrape_interval: {{ victoriametrics_scrape_interval }}
scrape_timeout: {{ victoriametrics_scrape_timeout }}
external_labels:
environment: '{{ "homelab" if inventory_hostname in groups["homelab"] else "production" }}'
host: '{{ inventory_hostname }}'
scrape_configs:
# VictoriaMetrics self-monitoring
- job_name: 'victoriametrics'
static_configs:
- targets: ['{{ victoriametrics_listen_address }}']
labels:
service: 'victoriametrics'
# Node exporter for system metrics
- job_name: 'node'
static_configs:
- targets: ['{{ node_exporter_listen_address }}']
labels:
service: 'node_exporter'
instance: '{{ inventory_hostname }}'

View File

@@ -0,0 +1,41 @@
[Unit]
Description=VictoriaMetrics time-series database
Documentation=https://docs.victoriametrics.com/
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User={{ victoriametrics_user }}
Group={{ victoriametrics_group }}
ExecStart=/usr/local/bin/victoria-metrics-prod \
-storageDataPath={{ victoriametrics_data_dir }} \
-retentionPeriod={{ victoriametrics_retention_period }} \
-httpListenAddr={{ victoriametrics_listen_address }} \
-promscrape.config={{ victoriametrics_scrape_config_file }} \
-memory.allowedPercent={{ victoriametrics_memory_allowed_percent }} \
-storage.minFreeDiskSpaceBytes={{ victoriametrics_storage_min_free_disk_space_bytes }}
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
RestartSec=5s
# Security hardening
{% if victoriametrics_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ victoriametrics_data_dir }}
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
{% endif %}
[Install]
WantedBy=multi-user.target

View File

@@ -52,9 +52,17 @@ See `defaults/main.yml` for all configurable variables.
Define these in your `host_vars/` with `ansible-vault`:
```yaml
# Core credentials (required)
vault_nextcloud_db_password: "secure-database-password"
vault_nextcloud_admin_password: "secure-admin-password"
vault_valkey_password: "secure-valkey-password"
# Email credentials (optional - only if email enabled)
vault_nextcloud_smtp_password: "secure-smtp-password"
# OIDC credentials (optional - only if OIDC enabled)
vault_nextcloud_oidc_client_id: "nextcloud-client-id-from-authentik"
vault_nextcloud_oidc_client_secret: "nextcloud-client-secret-from-authentik"
```
### Key Variables
@@ -91,27 +99,449 @@ This role uses a **two-phase deployment** approach to work correctly with the Ne
6. Container runs `occ maintenance:install` with PostgreSQL
7. Installation creates `config.php` with database credentials
### Phase 2: Custom Configuration (automatic)
### Phase 2: Configuration via OCC Script (automatic)
8. Ansible waits for `occ status` to report `installed: true`
9. Ansible deploys custom `redis.config.php` (overwrites default)
10. Container restart applies custom configuration
9. Ansible deploys and runs configuration script inside container
10. Script configures system settings via OCC commands:
- Redis caching (without sessions)
- Maintenance window and phone region
- Database optimizations (indices, bigint, mimetypes)
**Why this order?**
The Nextcloud container's entrypoint uses `version.php` as a marker to determine if installation is needed. If you deploy any files into `/opt/nextcloud/config/` before the container starts, the initialization process fails:
The Nextcloud container's entrypoint uses `version.php` as a marker to determine if installation is needed. We must wait for the container's auto-installation to complete before running configuration commands:
- Container copies files including `version.php`
- Entrypoint sees `version.php` exists → assumes already installed
- Skips running `occ maintenance:install`
- Result: Empty `config.php`, 503 errors
- Container must complete first-time setup (copy files, run `occ maintenance:install`)
- OCC commands require a fully initialized Nextcloud installation
- Running configuration after installation avoids conflicts with the entrypoint script
By deploying custom configs **after** installation completes, we:
- ✅ Allow the container's auto-installation to run properly
- ✅ Override specific configs (like Redis) after the fact
- ✅ Maintain idempotency (subsequent runs just update configs)
**Configuration Method:**
This role uses **OCC commands via a script** rather than config files because:
- ✅ **Explicit and verifiable** - Run `occ config:list system` to see exact state
- ✅ **No file conflicts** - Avoids issues with Docker image's built-in config files
- ✅ **Fully idempotent** - Safe to re-run during updates
- ✅ **Single source of truth** - All configuration in one script template
See the official [Nextcloud Docker documentation](https://github.com/nextcloud/docker#auto-configuration-via-environment-variables) for more details on the auto-configuration process.
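For instance, once the role has run, the applied system configuration can be inspected directly (a quick spot-check, using the same container name the role's tasks use):

```bash
# Dump the effective Nextcloud system configuration set by the script
podman exec --user www-data nextcloud php occ config:list system
```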
## Installed Apps
This role automatically installs and enables the following apps:
- **user_oidc** - OpenID Connect authentication backend for SSO integration
- **calendar** - Calendar and scheduling application (CalDAV)
- **contacts** - Contact management application (CardDAV)
To customize the app list, override these variables in your `host_vars`:
```yaml
nextcloud_apps_install:
- user_oidc
- calendar
- contacts
- tasks # Add more apps as needed
- deck
- mail
```
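After deployment, you can verify that the apps were installed and enabled, for example:

```bash
# List installed apps and filter for the ones this role manages
podman exec --user www-data nextcloud php occ app:list | grep -E 'user_oidc|calendar|contacts'
```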
## OIDC/SSO Integration
### Prerequisites
Before enabling OIDC, you must create an OIDC application/provider in your identity provider (e.g., Authentik):
**For Authentik:**
1. Navigate to **Applications → Providers**
2. Click **Create** → **OAuth2/OpenID Provider**
3. Configure:
- **Name**: `Nextcloud`
- **Authorization flow**: `default-authentication-flow` (or your preferred flow)
- **Client type**: `Confidential`
- **Client ID**: Generate or specify (save this)
- **Client Secret**: Generate or specify (save this)
- **Redirect URIs**: `https://cloud.jnss.me/apps/user_oidc/code`
- **Signing Key**: Select your signing certificate
- **Scopes**: Add `openid`, `profile`, `email`
4. Create **Application**:
- Navigate to **Applications → Applications**
- Click **Create**
- **Name**: `Nextcloud`
- **Slug**: `nextcloud`
- **Provider**: Select the provider created above
- **Launch URL**: `https://cloud.jnss.me`
5. Note the **Discovery URL**: `https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration`
### Configuration
Enable OIDC in your `host_vars/arch-vps/main.yml`:
```yaml
# OIDC Configuration
nextcloud_oidc_enabled: true
nextcloud_oidc_provider_id: "authentik" # Provider identifier (slug)
nextcloud_oidc_provider_name: "Authentik SSO" # Display name on login button
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"
# Security settings (recommended defaults)
nextcloud_oidc_unique_uid: true # Prevents account takeover between providers
nextcloud_oidc_check_bearer: false
nextcloud_oidc_send_id_token_hint: true
# Attribute mappings (defaults work for most providers)
nextcloud_oidc_mapping_display_name: "name"
nextcloud_oidc_mapping_email: "email"
nextcloud_oidc_mapping_uid: "preferred_username" # Or "sub" for UUID
# Optional: Enable single login (auto-redirect to SSO)
nextcloud_oidc_single_login: false # Set to true to force SSO login
```
Add credentials to your vault file `host_vars/arch-vps/vault.yml`:
```yaml
vault_nextcloud_oidc_client_id: "nextcloud-client-id-from-authentik"
vault_nextcloud_oidc_client_secret: "nextcloud-client-secret-from-authentik"
```
### OIDC Scopes
The following scopes are requested from your OIDC provider by default:
```yaml
nextcloud_oidc_scope: "email profile nextcloud openid"
```
**Standard scopes:**
- `openid` - Required for OpenID Connect (contains no claims itself)
- `email` - User's email address (`email` and `email_verified` claims)
- `profile` - User's profile information (`name`, `given_name`, `preferred_username`, `picture`, etc.)
**Custom scope for Authentik:**
- `nextcloud` - Custom scope mapping you create in Authentik (contains `groups`, `quota`, `user_id`)
#### Creating the Nextcloud Scope Mapping in Authentik
The `nextcloud` scope must be created as a custom property mapping in Authentik:
1. Log in to Authentik as administrator
2. Navigate to **Customization** → **Property mappings** → **Create**
3. Select type: **Scope mapping**
4. Configure:
- **Name**: `Nextcloud Profile`
- **Scope name**: `nextcloud`
- **Expression**:
```python
# Extract all groups the user is a member of
groups = [group.name for group in user.ak_groups.all()]
# In Nextcloud, administrators must be members of a fixed group called "admin"
# If a user is an admin in authentik, ensure that "admin" is appended to their group list
if user.is_superuser and "admin" not in groups:
groups.append("admin")
return {
"name": request.user.name,
"groups": groups,
# Set a quota by using the "nextcloud_quota" property in the user's attributes
"quota": user.group_attributes().get("nextcloud_quota", None),
# To connect an existing Nextcloud user, set "nextcloud_user_id" to the Nextcloud username
"user_id": user.attributes.get("nextcloud_user_id", str(user.uuid)),
}
```
5. Click **Finish**
6. Navigate to your Nextcloud provider → **Advanced protocol settings**
7. Add `Nextcloud Profile` to **Scopes** (in addition to the default scopes)
### Group Provisioning and Synchronization
Automatically sync user group membership from Authentik to Nextcloud.
**Default configuration:**
```yaml
nextcloud_oidc_group_provisioning: true # Auto-create groups from Authentik
nextcloud_oidc_mapping_groups: "groups" # Claim containing group list
```
**How it works:**
1. User logs in via OIDC
2. Authentik sends group membership in the `groups` claim (from the custom scope)
3. Nextcloud automatically:
- Creates groups that don't exist in Nextcloud
- Adds the user to those groups
- Removes the user from groups they're no longer a member of in Authentik
**Example: Making a user an admin**
Nextcloud requires admins to be in a group literally named `admin`. The custom scope mapping (above) automatically adds `"admin"` to the groups list for Authentik superusers.
Alternatively, manually create a group in Authentik called `admin` and add users to it.
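To confirm that group provisioning worked after a first SSO login, you can list the groups Nextcloud now knows about (an illustrative check):

```bash
# Groups created via OIDC provisioning show up alongside locally created ones
podman exec --user www-data nextcloud php occ group:list
```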
**Quota management:**
Set storage quotas by adding the `nextcloud_quota` attribute to Authentik groups or users:
1. In Authentik, navigate to **Directory** → **Groups** → select your group
2. Under **Attributes**, add:
```json
{
"nextcloud_quota": "15 GB"
}
```
3. Users in this group will have a 15 GB quota in Nextcloud
4. If not set, quota is unlimited
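You can verify the resulting quota for a provisioned user with `occ user:info` (replace `alice` with an actual user ID; shown as an illustrative check):

```bash
# Shows display name, email, groups, and the effective quota for the user
podman exec --user www-data nextcloud php occ user:info alice
```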
### Complete Authentik Setup Guide
Follow these steps to set up OIDC authentication with Authentik:
**Step 1: Create the Custom Scope Mapping**
See [Creating the Nextcloud Scope Mapping in Authentik](#creating-the-nextcloud-scope-mapping-in-authentik) above.
**Step 2: Create the OAuth2/OpenID Provider**
1. In Authentik, navigate to **Applications** → **Providers**
2. Click **Create** → **OAuth2/OpenID Provider**
3. Configure:
- **Name**: `Nextcloud`
- **Authorization flow**: `default-authentication-flow` (or your preferred flow)
- **Client type**: `Confidential`
- **Client ID**: Generate or specify (save this for later)
- **Client Secret**: Generate or specify (save this for later)
- **Redirect URIs**: `https://cloud.jnss.me/apps/user_oidc/code`
- **Signing Key**: Select your signing certificate
- Under **Advanced protocol settings**:
- **Scopes**: Add `openid`, `email`, `profile`, and `Nextcloud Profile` (the custom scope created in Step 1)
- **Subject mode**: `Based on the User's UUID` (or `Based on the User's username` if you prefer usernames)
**Step 3: Create the Application**
1. Navigate to **Applications** → **Applications**
2. Click **Create**
3. Configure:
- **Name**: `Nextcloud`
- **Slug**: `nextcloud`
- **Provider**: Select the provider created in Step 2
- **Launch URL**: `https://cloud.jnss.me` (optional)
**Step 4: Note the Discovery URL**
The discovery URL follows this pattern:
```
https://auth.jnss.me/application/o/<slug>/.well-known/openid-configuration
```
For the application slug `nextcloud`, it will be:
```
https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration
```
**Step 5: Configure Nextcloud Role Variables**
In your `host_vars/arch-vps/main.yml`:
```yaml
nextcloud_oidc_enabled: true
nextcloud_oidc_provider_id: "authentik"
nextcloud_oidc_provider_name: "Authentik"
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"
nextcloud_oidc_scope: "email profile nextcloud openid"
nextcloud_oidc_mapping_uid: "preferred_username" # Or "sub" for UUID-based IDs
nextcloud_oidc_mapping_display_name: "name"
nextcloud_oidc_mapping_email: "email"
nextcloud_oidc_mapping_groups: "groups"
nextcloud_oidc_mapping_quota: "quota"
nextcloud_oidc_group_provisioning: true
```
In your `host_vars/arch-vps/vault.yml`:
```yaml
vault_nextcloud_oidc_client_id: "nextcloud" # Client ID from Authentik
vault_nextcloud_oidc_client_secret: "very-long-secret-from-authentik" # Client Secret from Authentik
```
**Step 6: Deploy and Test**
Run the Nextcloud playbook:
```bash
ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --ask-vault-pass
```
### Supported OIDC Providers
The `user_oidc` app supports any **OpenID Connect 1.0** compliant provider:
- **Authentik** (recommended for self-hosted)
- **Keycloak**
- **Auth0**
- **Okta**
- **Azure AD / Microsoft Entra ID**
- **Google Identity Platform**
- **GitHub** (via OIDC)
- **GitLab**
- **Authelia**
- **Kanidm**
- Any other OIDC 1.0 compliant provider
The `nextcloud_oidc_provider_id` is just an identifier slug - you can use any value like `authentik`, `keycloak`, `auth0`, `mycompany-sso`, etc.
### Verification
After deployment:
1. **Check provider configuration:**
```bash
podman exec --user www-data nextcloud php occ user_oidc:provider
podman exec --user www-data nextcloud php occ user_oidc:provider authentik
```
2. **Test login:**
- Visit `https://cloud.jnss.me`
- You should see a "Log in with Authentik SSO" button
- Click it to test SSO flow
- User account should be auto-created on first login
3. **Check user mapping:**
```bash
podman exec --user www-data nextcloud php occ user:list
```
### Troubleshooting OIDC
**Login button doesn't appear:**
```bash
# Check if user_oidc app is enabled
podman exec --user www-data nextcloud php occ app:list | grep user_oidc
# Enable if needed
podman exec --user www-data nextcloud php occ app:enable user_oidc
```
**Discovery URL errors:**
```bash
# Test discovery URL is accessible from container
podman exec nextcloud curl -k https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration
```
**JWKS cache issues:**
```bash
# Clear JWKS cache
podman exec --user www-data nextcloud php occ user_oidc:provider authentik \
--clientid='your-client-id'
```
## Email Configuration
Configure Nextcloud to send emails for password resets, notifications, and sharing.
### Configuration
Enable email in your `host_vars/arch-vps/main.yml`:
```yaml
# Email Configuration
nextcloud_email_enabled: true
nextcloud_smtp_host: "smtp.fastmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls" # tls, ssl, or empty
nextcloud_smtp_username: "nextcloud@jnss.me"
nextcloud_mail_from_address: "nextcloud"
nextcloud_mail_domain: "jnss.me"
# Set admin user's email address
nextcloud_admin_email: "admin@jnss.me"
```
Add SMTP password to vault `host_vars/arch-vps/vault.yml`:
```yaml
vault_nextcloud_smtp_password: "your-smtp-app-password"
```
### Common SMTP Providers
**Fastmail:**
```yaml
nextcloud_smtp_host: "smtp.fastmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```
**Gmail (App Password required):**
```yaml
nextcloud_smtp_host: "smtp.gmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```
**Office 365:**
```yaml
nextcloud_smtp_host: "smtp.office365.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```
**SMTP2GO:**
```yaml
nextcloud_smtp_host: "mail.smtp2go.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```
### Verification
After deployment:
1. **Check SMTP configuration:**
```bash
podman exec --user www-data nextcloud php occ config:list system | grep mail
```
2. **Check admin email:**
```bash
podman exec --user www-data nextcloud php occ user:setting admin settings email
```
3. **Send test email via Web UI:**
- Log in as admin
- Settings → Administration → Basic settings
- Scroll to "Email server"
- Click "Send email" button
- Check recipient inbox
### Troubleshooting Email
**Test SMTP connection from container:**
```bash
# Install swaks if needed (for testing)
podman exec nextcloud apk add --no-cache swaks
# Test SMTP connection
podman exec nextcloud swaks \
--to recipient@example.com \
--from nextcloud@jnss.me \
--server smtp.fastmail.com:587 \
--auth LOGIN \
--auth-user nextcloud@jnss.me \
--auth-password 'your-password' \
--tls
```
**Check Nextcloud logs:**
```bash
podman exec --user www-data nextcloud php occ log:watch
```
## Usage
### Include in Playbook
@@ -250,7 +680,7 @@ This role uses a **split caching strategy** for optimal performance and stabilit
- `memcache.local`: APCu (in-memory opcode cache)
- `memcache.distributed`: Redis (shared cache, file locking)
- `memcache.locking`: Redis (transactional file locking)
- Configuration: Via custom `redis.config.php` template
- Configuration: Via OCC commands in configuration script
**Why not Redis sessions?**
@@ -262,7 +692,7 @@ The official Nextcloud Docker image enables Redis session handling when `REDIS_H
4. **Worker exhaustion**: Limited FPM workers (default 5) all become blocked
5. **Cascading failure**: New requests queue, timeouts accumulate, locks orphan
This role disables Redis sessions by **not setting** `REDIS_HOST` in the environment, while still providing Redis caching via a custom `redis.config.php` that is deployed independently.
This role disables Redis sessions by **not setting** `REDIS_HOST` in the environment, while still providing Redis caching via OCC configuration commands.
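A rough way to confirm the split is to check both sides (illustrative; the PHP CLI and FPM can load different ini files, so treat the session check as an indicator only):

```bash
# Redis should be configured for file locking via OCC...
podman exec --user www-data nextcloud php occ config:system:get memcache.locking
# ...while the PHP session handler should still report "files", not "redis"
podman exec nextcloud php -i | grep -i 'session.save_handler'
```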
**If you need Redis sessions** (e.g., multi-server setup with session sharing), you must:
1. Enable `REDIS_HOST` in `nextcloud.env.j2`

View File

@@ -1,22 +1,104 @@
# Nextcloud Role - Required Vault Variables
# Nextcloud Role - Vault Variables
This role requires the following encrypted variables to be defined in your vault file (typically `host_vars/<hostname>/vault.yml`).
This document describes all vault-encrypted variables used by the Nextcloud role.
## Required Variables
Add these to your encrypted vault file:
These variables **must** be defined in your vault file for the role to function:
```yaml
# Nextcloud database password
# =================================================================
# Core Credentials (REQUIRED)
# =================================================================
# PostgreSQL database password for Nextcloud user
vault_nextcloud_db_password: "CHANGE_ME_secure_database_password"
# Nextcloud admin account password
# Nextcloud admin user password
vault_nextcloud_admin_password: "CHANGE_ME_secure_admin_password"
# Valkey/Redis password (shared infrastructure)
# Valkey (Redis) password for caching (shared infrastructure)
vault_valkey_password: "CHANGE_ME_secure_valkey_password"
```
## Optional Variables
These variables are only required if you enable the corresponding features:
### Email/SMTP Configuration
Only required if `nextcloud_email_enabled: true`:
```yaml
# =================================================================
# Email/SMTP Credentials (OPTIONAL)
# =================================================================
# SMTP server password for sending emails
# Used with nextcloud_smtp_username for authentication
vault_nextcloud_smtp_password: "your-smtp-password-or-app-password"
```
**Example for Gmail:**
- Use an [App Password](https://support.google.com/accounts/answer/185833)
- Do NOT use your main Google account password
**Example for Fastmail:**
- Use an [App Password](https://www.fastmail.help/hc/en-us/articles/360058752854)
### OIDC/SSO Configuration
Only required if `nextcloud_oidc_enabled: true`:
```yaml
# =================================================================
# OIDC/SSO Credentials (OPTIONAL)
# =================================================================
# OAuth2/OIDC Client ID from your identity provider
vault_nextcloud_oidc_client_id: "nextcloud"
# OAuth2/OIDC Client Secret from your identity provider
# IMPORTANT: Keep this secret! Anyone with this can impersonate your app
vault_nextcloud_oidc_client_secret: "very-long-random-secret-from-authentik"
```
## Complete Vault File Example
Here's a complete example of a vault file with all possible variables:
```yaml
---
# =================================================================
# Example Vault File
# =================================================================
# File: host_vars/arch-vps/vault.yml
# Encrypted with: ansible-vault encrypt host_vars/arch-vps/vault.yml
# Caddy TLS
vault_caddy_tls_email: "admin@jnss.me"
vault_cloudflare_api_token: "your-cloudflare-token"
# Authentik
vault_authentik_db_password: "authentik-db-password"
vault_authentik_secret_key: "authentik-secret-key"
vault_authentik_admin_password: "authentik-admin-password"
# Valkey (shared infrastructure)
vault_valkey_password: "V4lk3y!P@ssw0rd#R3d1s"
# Nextcloud - Core (always required)
vault_nextcloud_db_password: "XkN8vQ2mP9wR5tY7uI0oP3sA6dF8gH1j"
vault_nextcloud_admin_password: "AdminP@ssw0rd!SecureAndL0ng"
# Nextcloud - Email (optional)
vault_nextcloud_smtp_password: "fastmail-app-password-xyz123"
# Nextcloud - OIDC (optional)
vault_nextcloud_oidc_client_id: "nextcloud"
vault_nextcloud_oidc_client_secret: "aksk_authentik_secret_very_long_random_string"
```
## Creating/Editing Vault File
### First Time Setup
@@ -37,6 +119,13 @@ ansible-vault edit host_vars/arch-vps/vault.yml
# Add the Nextcloud variables, then save and exit
```
### View Vault Contents
```bash
# View vault file contents
ansible-vault view host_vars/arch-vps/vault.yml
```
### Password Generation
Generate secure passwords:
@@ -49,39 +138,26 @@ openssl rand -base64 32
pwgen -s 32 1
```
## Example Vault File
## Running Playbooks with Vault
Your `host_vars/arch-vps/vault.yml` should include:
```yaml
---
# Caddy TLS
vault_caddy_tls_email: "admin@jnss.me"
vault_cloudflare_api_token: "your-cloudflare-token"
# Authentik
vault_authentik_db_password: "authentik-db-password"
vault_authentik_secret_key: "authentik-secret-key"
vault_authentik_admin_password: "authentik-admin-password"
# Nextcloud (ADD THESE)
vault_nextcloud_db_password: "generated-password-1"
vault_nextcloud_admin_password: "generated-password-2"
# Valkey (shared infrastructure)
vault_valkey_password: "valkey-password"
```
## Deployment
When deploying, you'll need to provide the vault password:
### Interactive Password Prompt
```bash
# Deploy with vault password prompt
ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --ask-vault-pass
ansible-playbook -i inventory/hosts.yml site.yml --ask-vault-pass
```
# Or use a password file
ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --vault-password-file ~/.vault_pass
### Using a Password File
```bash
# Create password file (DO NOT COMMIT THIS!)
echo 'your-vault-password' > .vault_pass
chmod 600 .vault_pass
# Add to .gitignore
echo '.vault_pass' >> .gitignore
# Run playbook
ansible-playbook -i inventory/hosts.yml site.yml --vault-password-file .vault_pass
```
## Security Notes
@@ -92,6 +168,29 @@ ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --vault-passwo
- Store vault password securely (password manager, encrypted file, etc.)
- Consider using `ansible-vault rekey` to change vault password periodically
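For example, to rotate the vault password in place (you are prompted for the current and the new password):

```bash
ansible-vault rekey host_vars/arch-vps/vault.yml
```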
## Troubleshooting
### "Vault password incorrect"
**Problem:** Wrong vault password entered
**Solution:** Verify you're using the correct vault password
### "vault_nextcloud_db_password is undefined"
**Problem:** Variable not defined in vault file or vault file not loaded
**Solution:**
1. Verify variable exists in vault file:
```bash
ansible-vault view host_vars/arch-vps/vault.yml | grep vault_nextcloud
```
2. Ensure you're using `--ask-vault-pass`:
```bash
ansible-playbook -i inventory/hosts.yml site.yml --ask-vault-pass
```
## Verification
Check that variables are properly encrypted:
@@ -103,3 +202,8 @@ cat host_vars/arch-vps/vault.yml
# Decrypt and view (requires password)
ansible-vault view host_vars/arch-vps/vault.yml
```
## Reference
- [Ansible Vault Documentation](https://docs.ansible.com/ansible/latest/user_guide/vault.html)
- [Best Practices for Variables and Vaults](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#variables-and-vaults)

View File

@@ -15,7 +15,6 @@ nextcloud_home: /opt/nextcloud
nextcloud_html_dir: "{{ nextcloud_home }}/html"
nextcloud_data_dir: "{{ nextcloud_home }}/data"
nextcloud_config_dir: "{{ nextcloud_home }}/config"
nextcloud_custom_apps_dir: "{{ nextcloud_home }}/custom_apps"
# Container configuration (FPM variant)
nextcloud_version: "stable-fpm"
@@ -52,6 +51,7 @@ nextcloud_domain: "cloud.jnss.me"
# Admin user (auto-configured on first run)
nextcloud_admin_user: "admin"
nextcloud_admin_email: "admin@jnss.me"
nextcloud_admin_password: "{{ vault_nextcloud_admin_password }}"
# Trusted domains (space-separated)
@@ -67,6 +67,87 @@ nextcloud_overwriteprotocol: "https"
nextcloud_php_memory_limit: "512M"
nextcloud_php_upload_limit: "512M"
# =================================================================
# Background Jobs Configuration
# =================================================================
nextcloud_background_jobs_mode: "cron" # Options: ajax, webcron, cron
nextcloud_cron_interval: "5min" # How often cron runs (systemd timer)
# =================================================================
# Nextcloud System Configuration
# =================================================================
nextcloud_maintenance_window_start: 4 # Start hour (UTC) for maintenance window
nextcloud_default_phone_region: "NO" # Default phone region code (ISO 3166-1 alpha-2)
# =================================================================
# Apps Configuration
# =================================================================
# Apps to install and enable
nextcloud_apps_install:
- user_oidc
- calendar
- contacts
# =================================================================
# Email/SMTP Configuration (Optional)
# =================================================================
nextcloud_email_enabled: true # Master switch - set to true to enable SMTP
# SMTP Server Configuration
nextcloud_smtp_mode: "smtp" # smtp, sendmail, qmail
nextcloud_smtp_host: "smtp.titan.email" # e.g., smtp.gmail.com, smtp.fastmail.com
nextcloud_smtp_port: 587 # 587 for TLS, 465 for SSL, 25 for plain
nextcloud_smtp_secure: "tls" # tls, ssl, or empty string for no encryption
nextcloud_smtp_auth: true # Enable SMTP authentication
nextcloud_smtp_authtype: "PLAIN" # LOGIN or PLAIN
nextcloud_smtp_username: "hello@jnss.me" # SMTP username
nextcloud_smtp_password: "{{ vault_smtp_password | default('') }}"
# Email Addressing
nextcloud_mail_from_address: "hello" # Local part only (before @)
nextcloud_mail_domain: "jnss.me" # Domain part (after @)
# Admin User Email (set at line 55 in Core Configuration section)
# =================================================================
# OIDC/SSO Configuration (Optional)
# =================================================================
nextcloud_oidc_enabled: true # Master switch - set to true to enable OIDC
# Provider Configuration
nextcloud_oidc_provider_id: "authentik" # Provider identifier (slug)
nextcloud_oidc_provider_name: "Authentik" # Display name (shown on login button)
nextcloud_oidc_client_id: "{{ vault_nextcloud_oidc_client_id | default('') }}"
nextcloud_oidc_client_secret: "{{ vault_nextcloud_oidc_client_secret | default('') }}"
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration" # Full discovery URL, e.g., https://auth.example.com/application/o/nextcloud/.well-known/openid-configuration
# Scopes (based on Authentik integration guide)
# The 'nextcloud' scope is a custom scope you must create in Authentik
nextcloud_oidc_scope: "email profile nextcloud openid"
# Provider Options
nextcloud_oidc_unique_uid: false # Hash provider+user ID to prevent account takeover (recommended: true)
nextcloud_oidc_check_bearer: false # Check bearer tokens for API/WebDAV calls
nextcloud_oidc_send_id_token_hint: true # Send ID token hint during logout
# Attribute Mappings (based on Authentik integration guide)
nextcloud_oidc_mapping_display_name: "name" # Claim for display name
nextcloud_oidc_mapping_email: "email" # Claim for email
nextcloud_oidc_mapping_quota: "quota" # Claim for quota (from Authentik property mapping)
nextcloud_oidc_mapping_uid: "preferred_username" # Claim for user ID
nextcloud_oidc_mapping_groups: "groups" # Claim for groups (from Authentik property mapping)
# Group Provisioning (based on Authentik integration guide)
nextcloud_oidc_group_provisioning: true # Auto-create groups from OIDC provider
# Single Login Option
nextcloud_oidc_single_login: true # If true and only one provider, auto-redirect to SSO
# =================================================================
# Caddy Integration
# =================================================================

View File

@@ -0,0 +1,36 @@
---
# =================================================================
# Nextcloud Configuration via Script
# =================================================================
# Rick-Infra - Nextcloud Role
#
# Deploys and runs a configuration script inside the Nextcloud
# container to set system configuration via OCC commands.
- name: Deploy Nextcloud configuration script
template:
src: configure-nextcloud.sh.j2
dest: "{{ nextcloud_config_dir }}/configure.sh"
mode: '0755'
tags: [config, nextcloud-config]
- name: Run Nextcloud configuration script
command: podman exec --user www-data nextcloud bash /var/www/html/config/configure.sh
register: nc_config_result
changed_when: false # Script output doesn't indicate changes reliably
failed_when: nc_config_result.rc != 0
tags: [config, nextcloud-config]
- name: Display configuration script output
debug:
msg: "{{ nc_config_result.stdout_lines }}"
when: nc_config_result.stdout | length > 0
tags: [config, nextcloud-config]
- name: Display configuration script errors
debug:
msg: "{{ nc_config_result.stderr_lines }}"
when:
- nc_config_result.stderr | length > 0
- nc_config_result.rc != 0
tags: [config, nextcloud-config]

View File

@@ -0,0 +1,72 @@
---
# =================================================================
# Nextcloud Background Jobs Configuration
# =================================================================
# Rick-Infra - Nextcloud Role
#
# Configures systemd timer for reliable background job execution
# instead of Ajax-based cron (which requires user activity)
- name: Create nextcloud cron service
copy:
content: |
[Unit]
Description=Nextcloud Background Jobs (cron.php)
Documentation=https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html
After=nextcloud.service
Requires=nextcloud.service
[Service]
Type=oneshot
ExecStart=/usr/bin/podman exec --user www-data nextcloud php -f /var/www/html/cron.php
# Logging
StandardOutput=journal
StandardError=journal
SyslogIdentifier=nextcloud-cron
dest: /etc/systemd/system/nextcloud-cron.service
mode: '0644'
backup: yes
notify: reload systemd
- name: Create nextcloud cron timer
copy:
content: |
[Unit]
Description=Nextcloud Background Jobs Timer
Documentation=https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html
[Timer]
OnBootSec=5min
OnUnitActiveSec={{ nextcloud_cron_interval }}
Unit=nextcloud-cron.service
[Install]
WantedBy=timers.target
dest: /etc/systemd/system/nextcloud-cron.timer
mode: '0644'
backup: yes
notify: reload systemd
- name: Enable and start nextcloud cron timer
systemd:
name: nextcloud-cron.timer
enabled: yes
state: started
daemon_reload: yes
- name: Configure Nextcloud to use cron for background jobs
command: >
podman exec --user www-data nextcloud
php occ background:cron
register: nextcloud_cron_mode
changed_when: "'background jobs mode changed' in nextcloud_cron_mode.stdout or 'Set mode for background jobs to' in nextcloud_cron_mode.stdout"
failed_when:
- nextcloud_cron_mode.rc != 0
- "'mode for background jobs is currently' not in nextcloud_cron_mode.stdout"
- name: Verify cron timer is active
command: systemctl is-active nextcloud-cron.timer
register: timer_status
changed_when: false
failed_when: timer_status.stdout != "active"

View File

@@ -40,7 +40,6 @@
- "{{ nextcloud_html_dir }}"
- "{{ nextcloud_data_dir }}"
- "{{ nextcloud_config_dir }}"
- "{{ nextcloud_custom_apps_dir }}"
tags: [setup, directories]
- name: Deploy environment configuration
@@ -52,12 +51,9 @@
notify: restart nextcloud
tags: [config]
# NOTE: Custom Redis config is deployed AFTER installation completes (see below)
# to avoid interfering with the container's first-time initialization process
# NOTE: redis-session-override.ini is NOT deployed because we use file-based sessions
# (not Redis sessions). If you enable REDIS_HOST in the future, you'll need to add
# proper session lock configuration.
# NOTE: Nextcloud is configured via OCC commands in a script after installation
# completes. This avoids interfering with the container's initialization process
# and provides a clean, explicit configuration approach.
- name: Create Quadlet systemd directory (system scope)
file:
@@ -82,7 +78,6 @@
owner: root
group: "{{ caddy_user }}"
mode: '0644'
backup: true
notify: reload caddy
tags: [caddy, reverse-proxy]
@@ -130,13 +125,20 @@
changed_when: false
tags: [verification]
- name: Deploy custom Redis caching configuration (post-installation)
template:
src: redis.config.php.j2
dest: "{{ nextcloud_config_dir }}/redis.config.php"
mode: '0644'
notify: restart nextcloud
tags: [config, redis]
- name: Configure Nextcloud via OCC script
include_tasks: configure.yml
tags: [config, configure]
- name: Truncate nextcloud.log to prevent bloat
shell: |
podman exec nextcloud truncate -s 0 /var/www/html/data/nextcloud.log || true
changed_when: false
failed_when: false
tags: [maintenance, cleanup]
- name: Configure background jobs (cron)
include_tasks: cron.yml
tags: [cron, background-jobs]
- name: Display Nextcloud deployment status
debug:
@@ -152,7 +154,8 @@
⚙️ Configuration:
- Redis caching enabled (application-level cache & file locking)
- PHP sessions use file-based storage (not Redis)
- Custom redis.config.php deployed post-installation
- Database optimizations applied
- Configuration via OCC commands
🚀 Ready for file storage and collaboration!

View File

@@ -0,0 +1,189 @@
#!/bin/bash
# =================================================================
# Nextcloud Configuration Script
# =================================================================
# Rick-Infra - Nextcloud Role
#
# This script configures Nextcloud via OCC commands after initial
# installation. It is generated from Ansible variables and runs
# inside the Nextcloud container.
#
# Generated by: roles/nextcloud/templates/configure-nextcloud.sh.j2
# Managed by: Ansible
set +e # Continue on errors, report at end
ERRORS=0
# Helper function for OCC
occ() {
php /var/www/html/occ "$@" 2>&1
}
# Track errors
check_error() {
if [ $? -ne 0 ]; then
ERRORS=$((ERRORS + 1))
echo "ERROR: $1" >&2
fi
}
# =================================================================
# Redis Caching Configuration
# =================================================================
# Configure Redis for application-level caching and file locking
# WITHOUT enabling Redis sessions (which can cause performance issues)
occ config:system:set memcache.distributed --value='\OC\Memcache\Redis' --quiet
check_error "Failed to set memcache.distributed"
occ config:system:set memcache.locking --value='\OC\Memcache\Redis' --quiet
check_error "Failed to set memcache.locking"
occ config:system:set redis host --value='{{ valkey_unix_socket_path }}' --quiet
check_error "Failed to set redis.host"
occ config:system:set redis password --value='{{ valkey_password }}' --quiet
check_error "Failed to set redis.password"
occ config:system:set redis dbindex --value={{ nextcloud_valkey_db }} --type=integer --quiet
check_error "Failed to set redis.dbindex"
# =================================================================
# Maintenance Configuration
# =================================================================
occ config:system:set maintenance_window_start --value={{ nextcloud_maintenance_window_start }} --type=integer --quiet
check_error "Failed to set maintenance_window_start"
occ config:system:set default_phone_region --value='{{ nextcloud_default_phone_region }}' --quiet
check_error "Failed to set default_phone_region"
# =================================================================
# Database Optimization
# =================================================================
# Add missing database indices
occ db:add-missing-indices --quiet
check_error "Failed to add missing database indices"
# Convert filecache to bigint
occ db:convert-filecache-bigint --no-interaction --quiet
check_error "Failed to convert filecache to bigint"
# Update mimetype database mappings
occ maintenance:repair --include-expensive --quiet
check_error "Failed to run maintenance:repair"
# =================================================================
# App Installation and Enablement
# =================================================================
# Install apps first, then enable them. This must happen before
# app-specific configuration (e.g., OIDC provider setup)
{% if nextcloud_apps_install is defined and nextcloud_apps_install | length > 0 %}
# Install apps
{% for app in nextcloud_apps_install %}
occ app:install {{ app }} --quiet 2>&1 | grep -v "already installed" || true
check_error "Failed to install app: {{ app }}"
{% endfor %}
{% endif %}
# =================================================================
# Email/SMTP Configuration
# =================================================================
{% if nextcloud_email_enabled | default(false) %}
# Configure SMTP mode
occ config:system:set mail_smtpmode --value={{ nextcloud_smtp_mode }} --quiet
check_error "Failed to set mail_smtpmode"
# Configure SMTP server
occ config:system:set mail_smtphost --value='{{ nextcloud_smtp_host }}' --quiet
check_error "Failed to set mail_smtphost"
occ config:system:set mail_smtpport --value={{ nextcloud_smtp_port }} --type=integer --quiet
check_error "Failed to set mail_smtpport"
{% if nextcloud_smtp_secure %}
occ config:system:set mail_smtpsecure --value={{ nextcloud_smtp_secure }} --quiet
check_error "Failed to set mail_smtpsecure"
{% endif %}
{% if nextcloud_smtp_auth %}
# Configure SMTP authentication
occ config:system:set mail_smtpauth --value=1 --type=integer --quiet
check_error "Failed to set mail_smtpauth"
occ config:system:set mail_smtpauthtype --value={{ nextcloud_smtp_authtype }} --quiet
check_error "Failed to set mail_smtpauthtype"
occ config:system:set mail_smtpname --value='{{ nextcloud_smtp_username }}' --quiet
check_error "Failed to set mail_smtpname"
occ config:system:set mail_smtppassword --value='{{ nextcloud_smtp_password }}' --quiet
check_error "Failed to set mail_smtppassword"
{% endif %}
# Configure email addressing
occ config:system:set mail_from_address --value='{{ nextcloud_mail_from_address }}' --quiet
check_error "Failed to set mail_from_address"
occ config:system:set mail_domain --value='{{ nextcloud_mail_domain }}' --quiet
check_error "Failed to set mail_domain"
{% endif %}
# Set admin user email address
{% if nextcloud_admin_email %}
occ user:setting {{ nextcloud_admin_user }} settings email '{{ nextcloud_admin_email }}' --quiet
check_error "Failed to set admin user email"
{% endif %}
# =================================================================
# OIDC/SSO Provider Configuration
# =================================================================
{% if nextcloud_oidc_enabled | default(false) %}
# Configure OIDC provider (creates if doesn't exist, updates if exists)
occ user_oidc:provider {{ nextcloud_oidc_provider_id }} \
--clientid='{{ nextcloud_oidc_client_id }}' \
--clientsecret='{{ nextcloud_oidc_client_secret }}' \
--discoveryuri='{{ nextcloud_oidc_discovery_url }}' \
--scope='{{ nextcloud_oidc_scope }}' \
--unique-uid={{ '1' if nextcloud_oidc_unique_uid else '0' }} \
--check-bearer={{ '1' if nextcloud_oidc_check_bearer else '0' }} \
--send-id-token-hint={{ '1' if nextcloud_oidc_send_id_token_hint else '0' }} \
{% if nextcloud_oidc_mapping_display_name %}
--mapping-display-name='{{ nextcloud_oidc_mapping_display_name }}' \
{% endif %}
{% if nextcloud_oidc_mapping_email %}
--mapping-email='{{ nextcloud_oidc_mapping_email }}' \
{% endif %}
{% if nextcloud_oidc_mapping_quota %}
--mapping-quota='{{ nextcloud_oidc_mapping_quota }}' \
{% endif %}
{% if nextcloud_oidc_mapping_uid %}
--mapping-uid='{{ nextcloud_oidc_mapping_uid }}' \
{% endif %}
{% if nextcloud_oidc_mapping_groups %}
--mapping-groups='{{ nextcloud_oidc_mapping_groups }}' \
{% endif %}
--group-provisioning={{ '1' if nextcloud_oidc_group_provisioning else '0' }} \
--quiet 2>&1 | grep -v "already exists" || true
check_error "Failed to configure OIDC provider: {{ nextcloud_oidc_provider_id }}"
{% if nextcloud_oidc_single_login %}
# Enable single login (auto-redirect to SSO if only one provider)
occ config:app:set user_oidc allow_multiple_user_backends --value=0 --quiet
check_error "Failed to enable single login mode"
{% endif %}
{% endif %}
# =================================================================
# Exit Status
# =================================================================
if [ $ERRORS -gt 0 ]; then
echo "Configuration completed with $ERRORS error(s)" >&2
exit 1
else
echo "Nextcloud configuration completed successfully"
exit 0
fi

View File

@@ -58,6 +58,10 @@
Referrer-Policy "no-referrer"
# Disable FLoC tracking
Permissions-Policy "interest-cohort=()"
# Robot indexing policy
X-Robots-Tag "noindex, nofollow"
# Cross-domain policy
X-Permitted-Cross-Domain-Policies "none"
# Remove server header
-Server
}

View File

@@ -23,9 +23,6 @@ Volume={{ nextcloud_data_dir }}:/var/www/html/data:Z
# Configuration (private - contains secrets)
Volume={{ nextcloud_config_dir }}:/var/www/html/config:Z
# Custom apps (world-readable)
Volume={{ nextcloud_custom_apps_dir }}:/var/www/html/custom_apps:Z
# Infrastructure sockets (mounted with world-readable permissions on host)
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
Volume={{ valkey_unix_socket_path | dirname }}:{{ valkey_unix_socket_path | dirname }}:Z

View File

@@ -1,34 +0,0 @@
<?php
/**
* Redis/Valkey Caching Configuration for Nextcloud
*
* This file provides Redis caching for Nextcloud application-level operations
* (distributed cache, file locking) WITHOUT enabling Redis for PHP sessions.
*
* IMPORTANT: This overrides the default /usr/src/nextcloud/config/redis.config.php
* which checks for REDIS_HOST environment variable. We deploy this custom version
* to enable Redis caching while keeping PHP sessions file-based for stability.
*
* Why not use REDIS_HOST env var?
* - Setting REDIS_HOST enables BOTH Redis sessions AND Redis caching
* - Redis session handling can cause severe performance issues:
* * Session lock contention under high concurrency
* * Infinite lock retries blocking FPM workers
* * Timeout orphaning leaving locks unreleased
* * Worker pool exhaustion causing cascading failures
*
* This configuration provides the benefits of Redis caching (fast distributed
* cache, reliable file locking) while avoiding the pitfalls of Redis sessions.
*
* Managed by: Ansible Nextcloud role
* Template: roles/nextcloud/templates/redis.config.php.j2
*/
$CONFIG = array(
'memcache.distributed' => '\OC\Memcache\Redis',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => array(
'host' => '{{ valkey_unix_socket_path }}',
'password' => '{{ valkey_password }}',
),
);

View File

@@ -46,6 +46,10 @@ podman_registry_blocked: false
podman_default_network: "bridge"
podman_network_security: true
# Trusted container subnets (allowed through firewall)
podman_trusted_subnets:
- "10.88.0.0/16"
# =================================================================
# Storage Configuration
# =================================================================

View File

@@ -10,4 +10,9 @@
systemd:
name: podman
state: restarted
when: podman_service_enabled | default(true)
when: podman_service_enabled | default(true)
- name: reload nftables
systemd:
name: nftables
state: reloaded

View File

@@ -42,6 +42,22 @@
backup: yes
notify: restart podman
- name: Create default podman network with DNS enabled
command: podman network create podman --subnet 10.88.0.0/16
register: podman_network_create
changed_when: "'podman' in podman_network_create.stdout"
failed_when:
- podman_network_create.rc != 0
- "'already exists' not in podman_network_create.stderr"
- name: Deploy podman firewall rules
template:
src: podman.nft.j2
dest: /etc/nftables.d/10-podman.nft
mode: '0644'
backup: yes
notify: reload nftables
- name: Enable podman system service (if enabled)
systemd:
name: podman

View File

@@ -0,0 +1,32 @@
#!/usr/sbin/nft -f
# =================================================================
# Podman Container Network Firewall Rules
# =================================================================
# Rick-Infra Infrastructure - Podman Role
# Priority: 10 (loaded after base rules, before drop rules)
#
# Purpose:
# - Allow container-to-host communication for services (PostgreSQL, Valkey)
# - Allow container outbound traffic for DNS, package updates, etc.
# - Enable NAT/masquerading for container networks
#
# Security Model:
# - Containers are trusted (they run our own services)
# - All container egress traffic is allowed (simplified management)
# - Container ingress is controlled by application-specific port publishing
#
# Architecture:
# - Containers access host services via Unix sockets or host.containers.internal
# - Caddy reverse proxy handles all external traffic
# - No direct container port exposure to internet
# Add rules to INPUT chain - Allow trusted container subnets
{% for subnet in podman_trusted_subnets %}
add rule inet filter input ip saddr {{ subnet }} accept comment "Podman containers: {{ subnet }}"
{% endfor %}
# Add rules to FORWARD chain - Enable container forwarding
add rule inet filter forward ct state established,related accept comment "Allow established connections"
add rule inet filter forward iifname "podman0" accept comment "Allow outbound from podman bridge"
add rule inet filter forward oifname "podman0" ct state established,related accept comment "Allow inbound to podman bridge (established)"
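After the `reload nftables` handler runs, the loaded ruleset can be checked from the host; a minimal sketch, assuming the base ruleset provides the `inet filter` table referenced above:
```bash
# Confirm the trusted container subnet rule landed in the input chain
sudo nft list chain inet filter input | grep -i podman
# Confirm the forwarding rules for the podman0 bridge are present
sudo nft list chain inet filter forward | grep podman0
```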

roles/vaultwarden/README.md Normal file
View File

@@ -0,0 +1,331 @@
# Vaultwarden Password Manager Role
Self-contained Vaultwarden (Bitwarden-compatible) password manager deployment using Podman and PostgreSQL.
## Overview
This role deploys Vaultwarden as a Podman Quadlet container with:
- **PostgreSQL backend** via Unix socket (777 permissions)
- **Caddy reverse proxy** with HTTPS and WebSocket support
- **SSO integration** ready (Authentik OpenID Connect)
- **SMTP support** for email notifications (optional)
- **Admin panel** for management
## Architecture
```
Internet → Caddy (HTTPS) → Vaultwarden Container → PostgreSQL (Unix socket)
                                    ↓
                              /data volume
```
### Components
- **Container Image**: `vaultwarden/server:latest` (Docker Hub)
- **User**: System user `vaultwarden` (non-root)
- **Port**: 8080 (localhost only)
- **Domain**: `vault.jnss.me`
- **Database**: PostgreSQL via Unix socket at `/var/run/postgresql`
- **Data**: `/opt/vaultwarden/data`
## Dependencies
**Managed Hosts:**
- `postgresql` role (provides database and Unix socket)
- `caddy` role (provides reverse proxy)
**Control Node:**
- `argon2` command-line tool (automatically installed if not present)
- Used to hash the admin token securely on the control node
- Available in most package managers: `pacman -S argon2`, `apt install argon2`, etc.
## Configuration
### Required Variables
Must be defined in vault (e.g., `group_vars/homelab/vault.yml`):
```yaml
# Database password
vault_vaultwarden_db_password: "secure-database-password"
# Admin token (plain text - will be hashed automatically during deployment)
vault_vaultwarden_admin_token: "your-secure-admin-token"
# SMTP password (if using email)
vault_vaultwarden_smtp_password: "smtp-password" # optional
# SSO credentials (if using Authentik integration)
vault_vaultwarden_sso_client_id: "vaultwarden" # optional
vault_vaultwarden_sso_client_secret: "sso-secret" # optional
```
### Optional Variables
Override in `group_vars` or `host_vars`:
```yaml
# Domain
vaultwarden_domain: "vault.jnss.me"
# Container version
vaultwarden_version: "latest"
# Registration controls
vaultwarden_signups_allowed: false # Disable open registration
vaultwarden_invitations_allowed: true # Allow existing users to invite
# SMTP Configuration
vaultwarden_smtp_enabled: true
vaultwarden_smtp_host: "smtp.example.com"
vaultwarden_smtp_port: 587
vaultwarden_smtp_from: "vault@jnss.me"
vaultwarden_smtp_username: "smtp-user"
# SSO Configuration (Authentik)
vaultwarden_sso_enabled: true
vaultwarden_sso_authority: "https://auth.jnss.me"
```
## Usage
### Deploy Vaultwarden
```bash
# Full deployment
ansible-playbook rick-infra.yml --tags vaultwarden
# Or via site.yml
ansible-playbook site.yml --tags vaultwarden -l homelab
```
### Access Admin Panel
1. Set admin token in vault file (plain text):
```yaml
# Generate a secure token, e.g. with: openssl rand -base64 32
vault_vaultwarden_admin_token: "paste-the-generated-token-here"
```
2. The role automatically hashes the token during deployment:
- Hashing occurs on the **control node** using `argon2` CLI
- Uses OWASP recommended settings (19MiB memory, 2 iterations, 1 thread)
- Idempotent: same token always produces the same hash
- The `argon2` package is automatically installed if not present
3. Access: `https://vault.jnss.me/admin` (use the plain text token from step 1)
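The hashing from step 2 can be reproduced manually to sanity-check a token; a minimal sketch mirroring the role's control-node tasks (the token, domain, and authority values below are placeholders):
```bash
TOKEN='your-secure-admin-token'
DOMAIN='vault.jnss.me'
AUTHORITY='https://auth.jnss.me/application/o/vaultwarden/'
# Deterministic salt: truncated SHA-256 of "<domain>-<sso authority>"
SALT=$(echo -n "${DOMAIN}-${AUTHORITY}" | sha256sum | cut -d' ' -f1 | head -c 22)
# argon2id with the OWASP preset (m=19456 KiB, t=2, p=1), PHC-encoded output
echo -n "$TOKEN" | argon2 "$SALT" -id -t 2 -k 19456 -p 1 -e
```
Running this twice with the same inputs yields the same PHC string, which is what makes the deployment idempotent.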
### Configure SSO (Authentik Integration)
> ⚠️ **IMPORTANT: SSO Feature Status (as of December 2025)**
>
> SSO is currently **only available in `vaultwarden/server:testing` images**.
> The stable release (v1.34.3) does **NOT** include SSO functionality.
>
> **Current Deployment Status:**
> - This role is configured with SSO settings ready for when SSO reaches stable release
> - Using `vaultwarden_version: "latest"` (stable) - SSO will not appear
> - To test SSO now: Set `vaultwarden_version: "testing"` (not recommended for production)
> - To wait for stable: Keep current configuration, SSO will activate automatically when available
>
> **References:**
> - [Vaultwarden Wiki - SSO Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
> - [Vaultwarden Testing Features](https://github.com/dani-garcia/vaultwarden/wiki#testing-features)
>
> **Decision:** This deployment keeps SSO configured but uses stable image until SSO feature is production-ready.
Following the [Authentik integration guide](https://integrations.goauthentik.io/security/vaultwarden/):
1. **In Authentik**: Create OAuth2/OpenID Provider
- **Name**: `Vaultwarden`
- **Client Type**: `Confidential`
- **Redirect URIs**: `https://vault.jnss.me/identity/connect/oidc-signin` (must be strict/exact match)
- **Scopes**: Under "Advanced protocol settings", ensure these scope mappings are selected:
- `authentik default OAuth Mapping: OpenID 'openid'`
- `authentik default OAuth Mapping: OpenID 'email'`
- `authentik default OAuth Mapping: OpenID 'profile'`
- `authentik default OAuth Mapping: OpenID 'offline_access'` ⚠️ **Required**
- **Access token validity**: Set to more than 5 minutes
- **Note the Client ID, Client Secret, and application slug** (from URL or provider settings)
2. **Update Vault Variables**:
```yaml
vault_vaultwarden_sso_client_id: "<client-id-from-authentik>"
vault_vaultwarden_sso_client_secret: "<client-secret-from-authentik>"
```
3. **Enable SSO and set authority** in `group_vars/homelab/main.yml`:
```yaml
vaultwarden_sso_enabled: true
# Replace 'vaultwarden' with your actual application slug
vaultwarden_sso_authority: "https://auth.jnss.me/application/o/vaultwarden/"
```
4. **Optional: SSO-Only Mode** (disable password login):
```yaml
vaultwarden_sso_only: true # Requires SSO, disables email+password
```
5. **Redeploy**:
```bash
ansible-playbook rick-infra.yml --tags vaultwarden
```
6. **Test**: Log out, enter a verified email on login page, click "Use single sign-on"
- **Note**: With `vaultwarden_version: "latest"`, SSO button will not appear (feature not in stable yet)
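After redeploying with SSO enabled, the rendered environment file should contain the SSO settings; a quick check, assuming the default `vaultwarden_home` of `/opt/vaultwarden`:
```bash
# Inspect the SSO-related values rendered into the env file (secret redacted)
sudo grep '^SSO_' /opt/vaultwarden/.env | sed 's/\(SECRET=\).*/\1<redacted>/'
```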
## Security Considerations
### Database Access
- Uses PostgreSQL Unix socket with **777 permissions**
- Security maintained via password authentication (scram-sha-256)
- See: `docs/socket-permissions-architecture.md`
### Admin Token
- **Never commit plain admin token to git**
- Use Ansible Vault for `vault_vaultwarden_admin_token`
- Rotate periodically via admin panel
### User Registration
- Default: **Disabled** (`vaultwarden_signups_allowed: false`)
- Users must be invited by existing users or created via admin panel
- Prevents unauthorized account creation
## Maintenance
### Backup
Backup the following:
```bash
# Database backup (via PostgreSQL role)
sudo -u postgres pg_dump vaultwarden > vaultwarden-backup.sql
# Data directory (attachments, icons, etc.)
tar -czf vaultwarden-data-backup.tar.gz /opt/vaultwarden/data
```
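To restore from those artifacts, a hedged sketch (not part of the role; stop the service first and adjust paths to your environment):
```bash
# Stop Vaultwarden before restoring
sudo systemctl stop vaultwarden
# Restore the database dump (you may need to drop/recreate the database first)
sudo -u postgres psql vaultwarden < vaultwarden-backup.sql
# Restore the data directory; the archive was created with absolute paths, so extract at /
sudo tar -xzf vaultwarden-data-backup.tar.gz -C /
sudo systemctl start vaultwarden
```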
### Update Container
```bash
# Pull new image and restart
ansible-playbook rick-infra.yml --tags vaultwarden
```
### View Logs
```bash
# Service logs
journalctl -u vaultwarden -f
# Container logs
podman logs vaultwarden -f
```
### Restart Service
```bash
systemctl restart vaultwarden
```
## Troubleshooting
### Container won't start
```bash
# Check container status
systemctl status vaultwarden
# Check container directly
podman ps -a
podman logs vaultwarden
# Verify database connectivity
sudo -u vaultwarden psql -h /var/run/postgresql -U vaultwarden -d vaultwarden -c "SELECT 1;"
```
### Database connection errors
1. Verify PostgreSQL is running: `systemctl status postgresql`
2. Check socket exists: `ls -la /var/run/postgresql/.s.PGSQL.5432`
3. Verify socket permissions: Should be `srwxrwxrwx` (777)
4. Test connection as vaultwarden user (see above)
### Can't access admin panel
1. Verify admin token is set in vault file (plain text)
2. Check that the token was hashed successfully during deployment
3. Ensure you're using the plain text token to log in
4. Redeploy to regenerate hash if needed
### SSO not appearing / not working
**Most Common Issue: Using Stable Image**
SSO is only available in testing images. Check your deployment:
```bash
# Check current image version
podman inspect vaultwarden --format '{{.ImageName}}'
# Check API config for SSO support
curl -s http://127.0.0.1:8080/api/config | grep -o '"sso":"[^"]*"'
# Empty string "" = SSO not available in this image
# URL present = SSO is available
```
**If using `vaultwarden_version: "latest"`**: SSO will not appear (feature not in stable yet)
- **To test SSO**: Set `vaultwarden_version: "testing"` in role defaults or group/host vars
- **For production**: Wait for SSO to reach stable release (recommended)
**If using `vaultwarden_version: "testing"` and SSO still not working**:
1. Verify Authentik provider configuration:
- Check that `offline_access` scope mapping is added
- Verify redirect URI matches exactly: `https://vault.jnss.me/identity/connect/oidc-signin`
- Ensure access token validity is > 5 minutes
2. Verify SSO authority URL includes full path with slug:
- Should be: `https://auth.jnss.me/application/o/<your-slug>/`
- Not just: `https://auth.jnss.me`
3. Check client ID and secret in vault match Authentik
4. Verify all required scopes: `openid email profile offline_access`
5. Check Vaultwarden logs for SSO-related errors:
```bash
podman logs vaultwarden 2>&1 | grep -i sso
```
6. Test SSO flow: Log out, enter verified email, click "Use single sign-on"
## File Structure
```
roles/vaultwarden/
├── defaults/
│ └── main.yml # Default variables
├── handlers/
│ └── main.yml # Service restart handlers
├── meta/
│ └── main.yml # Role dependencies
├── tasks/
│ ├── main.yml # Main orchestration
│ ├── user.yml # User and directory setup
│ └── database.yml # PostgreSQL setup
├── templates/
│ ├── vaultwarden.container # Quadlet container definition
│ ├── vaultwarden.env.j2 # Environment configuration
│ └── vaultwarden.caddy.j2 # Caddy reverse proxy config
└── README.md # This file
```
## References
- [Vaultwarden Documentation](https://github.com/dani-garcia/vaultwarden/wiki)
- [PostgreSQL Backend Guide](https://github.com/dani-garcia/vaultwarden/wiki/Using-the-PostgreSQL-Backend)
- [SSO Configuration](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
- [Socket Permissions Architecture](../../docs/socket-permissions-architecture.md)
## License
MIT

View File

@@ -0,0 +1,171 @@
# Vaultwarden Role - Required Vault Variables
This document lists all vault-encrypted variables required by the Vaultwarden role.
## Required Variables
These variables **must** be defined in your vault file (e.g., `group_vars/homelab/vault.yml`):
### Database Credentials
```yaml
# PostgreSQL database password for vaultwarden user
vault_vaultwarden_db_password: "your-secure-database-password-here"
```
**Generation**:
```bash
openssl rand -base64 32
```
### Admin Panel Access
```yaml
# Plain text admin token (will be hashed during deployment)
vault_vaultwarden_admin_token: "your-secret-admin-token"
```
**Generation**:
```bash
# Generate a secure random token
openssl rand -base64 32
```
**Important**:
- Store as **plain text** in vault (the role will hash it automatically)
- Use the same token to access `/admin` panel
- The token is automatically hashed on the **control node** using argon2id
- Hashing uses OWASP recommended settings (m=19456, t=2, p=1)
- Hashing is **idempotent**: same token always produces same hash
- The `argon2` package is automatically installed if not present on control node
- Never commit the vault file unencrypted to git
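To confirm the hash (and not the plain token) was deployed, a small check assuming the default data location `/opt/vaultwarden`:
```bash
# The deployed ADMIN_TOKEN should be an argon2id PHC string, never the plain token
sudo grep '^ADMIN_TOKEN=' /opt/vaultwarden/.env
# Expected shape: ADMIN_TOKEN=$argon2id$v=19$m=19456,t=2,p=1$<salt>$<hash>
```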
## Optional Variables
These variables are only needed if you enable specific features:
### SMTP Configuration
Required if `vaultwarden_smtp_enabled: true`:
```yaml
# SMTP password for sending emails
vault_vaultwarden_smtp_password: "smtp-password-here"
```
### SSO Integration (Authentik)
> ⚠️ **SSO Feature Status (December 2025)**
>
> SSO is only available in `vaultwarden/server:testing` images (not in stable yet).
> This role is configured with SSO ready for when it reaches stable release.
>
> Current deployment uses `vaultwarden_version: "latest"` (stable) - SSO credentials
> below are configured but SSO will not appear until feature reaches stable.
Required if `vaultwarden_sso_enabled: true`:
```yaml
# OAuth2 Client ID from Authentik
vault_vaultwarden_sso_client_id: "your-client-id-here"
# OAuth2 Client Secret from Authentik
vault_vaultwarden_sso_client_secret: "your-client-secret-here"
```
**Setup** (following [Authentik integration guide](https://integrations.goauthentik.io/security/vaultwarden/)):
1. Create OAuth2/OpenID Provider in Authentik:
- Redirect URI: `https://vault.jnss.me/identity/connect/oidc-signin` (exact match)
- Add scope mappings: `openid`, `email`, `profile`, `offline_access` (required)
- Access token validity: > 5 minutes
- Note the **application slug** from the provider URL
2. Copy Client ID and Secret from Authentik
3. Add credentials to vault file
4. Set the SSO authority URL in role configuration:
```yaml
vaultwarden_sso_enabled: true
vaultwarden_sso_authority: "https://auth.jnss.me/application/o/<your-slug>/"
```
5. Deploy the role
6. **Wait for SSO to reach stable**, or use `vaultwarden_version: "testing"` to test now
**Important**:
- The SSO authority must include the full path with application slug
- The `offline_access` scope mapping is **required** for Vaultwarden SSO
- Access token must be valid for more than 5 minutes
- SSO is configured and ready but will activate when stable release includes it
## Example Vault File
```yaml
---
# group_vars/homelab/vault.yml (encrypted with ansible-vault)
# Vaultwarden - Database
vault_vaultwarden_db_password: "xK9mP2nR5tY8wQ3vZ7cB6sA4dF1gH0jL"
# Vaultwarden - Admin Panel (plain text, will be hashed automatically)
vault_vaultwarden_admin_token: "MySecureAdminToken123!"
# Vaultwarden - SMTP (optional)
vault_vaultwarden_smtp_password: "smtp-app-password-here"
# Vaultwarden - SSO (optional)
vault_vaultwarden_sso_client_id: "vaultwarden"
vault_vaultwarden_sso_client_secret: "sso-secret-from-authentik"
```
## Vault File Management
### Encrypt Vault File
```bash
ansible-vault encrypt group_vars/homelab/vault.yml
```
### Edit Vault File
```bash
ansible-vault edit group_vars/homelab/vault.yml
```
### Decrypt Vault File (temporary)
```bash
ansible-vault decrypt group_vars/homelab/vault.yml
# Make changes
ansible-vault encrypt group_vars/homelab/vault.yml
```
## Security Best Practices
1. **Never commit unencrypted vault files**
2. **Use strong passwords** (32+ characters for database, admin token)
3. **Rotate credentials periodically** (especially admin token)
4. **Limit vault password access** (use password manager)
5. **Use separate passwords** for different services
6. **Back up vault password** (secure location, not in git)
## Verifying Variables
Test if variables are properly loaded:
```bash
ansible -m debug -a "var=vault_vaultwarden_db_password" homelab --ask-vault-pass
```
## Troubleshooting
### Variable not found error
- Ensure vault file is in correct location: `group_vars/homelab/vault.yml`
- Verify file is encrypted: `file group_vars/homelab/vault.yml`
- Check variable name matches exactly (case-sensitive)
- Provide vault password with `--ask-vault-pass`
### Admin token not working
- Verify the plain text token in vault matches what you're entering
- Check for extra whitespace in vault file
- Ensure the token was hashed successfully during deployment (check ansible output)
- Try redeploying the role to regenerate the hash

View File

@@ -0,0 +1,109 @@
---
# =================================================================
# Vaultwarden Password Manager Role - Default Variables
# =================================================================
# Self-contained Vaultwarden deployment with Podman and PostgreSQL
# =================================================================
# Service Configuration
# =================================================================
# Service user and directories
vaultwarden_user: vaultwarden
vaultwarden_group: vaultwarden
vaultwarden_home: /opt/vaultwarden
vaultwarden_data_dir: "{{ vaultwarden_home }}/data"
# Container configuration
# NOTE: SSO feature is only available in "testing" tag (as of Dec 2025)
# Using "latest" (stable) means SSO will not appear even if configured
# SSO settings below are configured and ready for when feature reaches stable
vaultwarden_version: "latest"
vaultwarden_image: "vaultwarden/server"
# Service management
vaultwarden_service_enabled: true
vaultwarden_service_state: "started"
# =================================================================
# Database Configuration (Self-managed)
# =================================================================
vaultwarden_db_name: "vaultwarden"
vaultwarden_db_user: "vaultwarden"
vaultwarden_db_password: "{{ vault_vaultwarden_db_password }}"
# =================================================================
# Network Configuration
# =================================================================
vaultwarden_domain: "vault.jnss.me"
vaultwarden_http_port: 8080
# =================================================================
# Vaultwarden Core Configuration
# =================================================================
# Admin panel access token (plain text, will be hashed during deployment)
vaultwarden_admin_token_plain: "{{ vault_vaultwarden_admin_token }}"
# Registration and invitation controls
vaultwarden_signups_allowed: false # Disable open registration
vaultwarden_invitations_allowed: true # Allow existing users to invite
vaultwarden_show_password_hint: false # Don't show password hints
# WebSocket support for live sync
vaultwarden_websocket_enabled: true
# =================================================================
# Email Configuration (Optional)
# =================================================================
vaultwarden_smtp_enabled: true
vaultwarden_smtp_host: "smtp.titan.email"
vaultwarden_smtp_port: 587
vaultwarden_smtp_from: "hello@jnss.me"
vaultwarden_smtp_username: "hello@jnss.me"
vaultwarden_smtp_password: "{{ vault_smtp_password | default('') }}"
vaultwarden_smtp_security: "starttls" # Options: starttls, force_tls, off
# =================================================================
# SSO Configuration (Optional - Authentik Integration)
# =================================================================
vaultwarden_sso_enabled: false
# SSO Provider Configuration (Authentik)
vaultwarden_sso_client_id: "{{ vault_vaultwarden_sso_client_id | default('') }}"
vaultwarden_sso_client_secret: "{{ vault_vaultwarden_sso_client_secret | default('') }}"
# Authority must include full path with application slug
vaultwarden_sso_authority: "https://{{ authentik_domain }}/application/o/vaultwarden/"
vaultwarden_sso_scopes: "openid email profile offline_access"
# Additional SSO settings (per Authentik integration guide)
vaultwarden_sso_only: false # Set to true to disable email+password login and require SSO
vaultwarden_sso_signups_match_email: true # Match first SSO login to existing account by email
vaultwarden_sso_allow_unknown_email_verification: false
vaultwarden_sso_client_cache_expiration: 0
# Domain whitelist for SSO signups (comma-separated domains, empty = all)
vaultwarden_sso_signups_domains_whitelist: ""
# =================================================================
# Caddy Integration
# =================================================================
# Caddy configuration (assumes caddy role provides these variables)
caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"
caddy_log_dir: "/var/log/caddy"
caddy_user: "caddy"
# =================================================================
# Infrastructure Dependencies (Read-only)
# =================================================================
# PostgreSQL socket configuration (managed by postgresql role)
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_client_group: "postgres-clients"
postgresql_port: 5432
postgresql_unix_socket_enabled: true

View File

@@ -0,0 +1,16 @@
---
# Vaultwarden Password Manager - Service Handlers
- name: reload systemd
systemd:
daemon_reload: true
- name: restart vaultwarden
systemd:
name: vaultwarden
state: restarted
- name: reload caddy
systemd:
name: caddy
state: reloaded

View File

@@ -0,0 +1,25 @@
---
# Vaultwarden Password Manager - Role Metadata
dependencies:
- role: postgresql
- role: caddy
galaxy_info:
author: Rick Infrastructure Team
description: Vaultwarden password manager deployment with PostgreSQL and Caddy
license: MIT
min_ansible_version: "2.14"
platforms:
- name: ArchLinux
versions:
- all
galaxy_tags:
- vaultwarden
- bitwarden
- password-manager
- security
- postgresql
- podman

View File

@@ -0,0 +1,51 @@
---
# Database setup for Vaultwarden - PostgreSQL via Unix Socket
- name: Test PostgreSQL socket connectivity
postgresql_ping:
login_unix_socket: "{{ postgresql_unix_socket_directories }}"
login_user: "{{ vaultwarden_user }}"
become: true
become_user: "{{ vaultwarden_user }}"
- name: Create Vaultwarden database user via socket
postgresql_user:
name: "{{ vaultwarden_db_user }}"
password: "{{ vaultwarden_db_password }}"
login_unix_socket: "{{ postgresql_unix_socket_directories }}"
login_user: postgres
become: true
become_user: postgres
- name: Create Vaultwarden database via socket
postgresql_db:
name: "{{ vaultwarden_db_name }}"
owner: "{{ vaultwarden_db_user }}"
encoding: UTF8
template: template0
login_unix_socket: "{{ postgresql_unix_socket_directories }}"
login_user: postgres
become: true
become_user: postgres
- name: Grant Vaultwarden database privileges
postgresql_privs:
db: "{{ vaultwarden_db_name }}"
privs: ALL
type: database
role: "{{ vaultwarden_db_user }}"
login_unix_socket: "{{ postgresql_unix_socket_directories }}"
login_user: postgres
become: true
become_user: postgres
- name: Display database setup status
debug:
msg: |
Vaultwarden database setup complete!
Database: {{ vaultwarden_db_name }}
User: {{ vaultwarden_db_user }}
Connection: Unix socket ({{ postgresql_unix_socket_directories }})
Ready for Vaultwarden container deployment

View File

@@ -0,0 +1,57 @@
---
# Hash admin token on Ansible control node using argon2
- name: Check if argon2 is available on control node
command: which argon2
register: argon2_check
delegate_to: localhost
become: false
changed_when: false
failed_when: false
- name: Install argon2 on control node if not present
package:
name: argon2
state: present
delegate_to: localhost
become: false
when: argon2_check.rc != 0
run_once: true
- name: Generate deterministic salt from domain
set_fact:
vaultwarden_salt_source: "{{ vaultwarden_domain }}-{{ vaultwarden_sso_authority }}"
no_log: true
- name: Derive deterministic salt for argon2 (truncated SHA-256 of domain and authority)
shell: echo -n "{{ vaultwarden_salt_source }}" | sha256sum | cut -d' ' -f1 | head -c 22
register: admin_token_salt
delegate_to: localhost
become: false
changed_when: false
no_log: true
- name: Hash admin token using argon2 (OWASP preset)
shell: echo -n "{{ vaultwarden_admin_token_plain }}" | argon2 "{{ admin_token_salt.stdout }}" -id -t 2 -k 19456 -p 1 -e
register: admin_token_hash_result
delegate_to: localhost
become: false
changed_when: false
no_log: true
- name: Extract hashed admin token
set_fact:
vaultwarden_admin_token_hashed: "{{ admin_token_hash_result.stdout | trim }}"
no_log: true
- name: Display token hash status
debug:
msg: |
Admin token hashed successfully on control node
Hash algorithm: argon2id
Preset: OWASP (m=19456, t=2, p=1)
Format: PHC string (Vaultwarden compatible)
Idempotent: Same token always produces same hash
The hashed token will be used in the environment configuration

View File

@@ -0,0 +1,107 @@
---
# Vaultwarden Password Manager Role - Main Tasks
# Self-contained deployment with Podman and Unix sockets
- name: Setup vaultwarden user and directories
include_tasks: user.yml
tags: [user, setup]
- name: Setup database access and permissions
include_tasks: database.yml
tags: [database, setup]
- name: Pull vaultwarden container image
containers.podman.podman_image:
name: "{{ vaultwarden_image }}:{{ vaultwarden_version }}"
state: present
tags: [containers, image-pull]
- name: Hash admin token on control node
include_tasks: hash_admin_token.yml
tags: [config, admin-token]
- name: Deploy environment configuration
template:
src: vaultwarden.env.j2
dest: "{{ vaultwarden_home }}/.env"
owner: "{{ vaultwarden_user }}"
group: "{{ vaultwarden_group }}"
mode: '0600'
backup: true
notify:
- restart vaultwarden
tags: [config]
- name: Create Quadlet systemd directory
file:
path: /etc/containers/systemd
state: directory
mode: '0755'
- name: Deploy Quadlet container file
template:
src: vaultwarden.container
dest: /etc/containers/systemd/vaultwarden.container
mode: '0644'
notify:
- reload systemd
- restart vaultwarden
tags: [containers, deployment]
- name: Deploy Caddy configuration
template:
src: vaultwarden.caddy.j2
dest: "{{ caddy_sites_enabled_dir }}/vaultwarden.caddy"
owner: root
group: "{{ caddy_user }}"
mode: '0644'
notify: reload caddy
tags: [caddy, reverse-proxy]
- name: Ensure PostgreSQL is running
systemd:
name: postgresql
state: started
- name: Wait for PostgreSQL socket to be ready
wait_for:
path: "{{ postgresql_unix_socket_directories }}/.s.PGSQL.{{ postgresql_port }}"
timeout: 30
when: postgresql_unix_socket_enabled
- name: Enable and start Vaultwarden service (system scope)
systemd:
name: vaultwarden
enabled: "{{ vaultwarden_service_enabled }}"
state: "{{ vaultwarden_service_state }}"
daemon_reload: true
tags: [containers, service]
- name: Wait for Vaultwarden to be ready
uri:
url: "http://127.0.0.1:{{ vaultwarden_http_port }}/"
method: GET
status_code: [200, 302]
timeout: 30
retries: 10
delay: 15
register: vaultwarden_health_check
tags: [verification, health-check]
- name: Display Vaultwarden deployment status
debug:
msg: |
Vaultwarden Password Manager deployed successfully!
Domain: {{ vaultwarden_domain }}
Database: {{ vaultwarden_db_name }} (Unix socket)
Container: {{ vaultwarden_image }}:{{ vaultwarden_version }}
Admin Panel: https://{{ vaultwarden_domain }}/admin
Ready for user registration and password management!
Next Steps:
- Access https://{{ vaultwarden_domain }}/admin with your admin token
- Configure additional settings (SMTP, SSO, etc.)
- Invite users or create accounts
tags: [verification]

View File

@@ -0,0 +1,28 @@
---
# Vaultwarden User Management - Service-Specific User Setup
- name: Create vaultwarden group
group:
name: "{{ vaultwarden_group }}"
system: true
- name: Create vaultwarden user
user:
name: "{{ vaultwarden_user }}"
group: "{{ vaultwarden_group }}"
system: true
shell: /bin/bash
home: "{{ vaultwarden_home }}"
create_home: true
comment: "Vaultwarden password manager service"
- name: Create vaultwarden directories
file:
path: "{{ item }}"
state: directory
owner: "{{ vaultwarden_user }}"
group: "{{ vaultwarden_group }}"
mode: '0755'
loop:
- "{{ vaultwarden_home }}"
- "{{ vaultwarden_data_dir }}"

View File

@@ -0,0 +1,35 @@
# Vaultwarden Password Manager
{{ vaultwarden_domain }} {
# Notifications endpoint (WebSocket for live sync)
@websocket {
path /notifications/hub
}
reverse_proxy @websocket http://127.0.0.1:{{ vaultwarden_http_port }} {
header_up Upgrade {http.request.header.Upgrade}
header_up Connection {http.request.header.Connection}
}
# Regular HTTP traffic
reverse_proxy http://127.0.0.1:{{ vaultwarden_http_port }} {
header_up Host {host}
header_up X-Real-IP {remote_host}
header_up X-Forwarded-Proto https
header_up X-Forwarded-For {remote_host}
}
# Security headers
header {
X-Frame-Options SAMEORIGIN
X-Content-Type-Options nosniff
X-XSS-Protection "1; mode=block"
Referrer-Policy strict-origin-when-cross-origin
Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
}
# Logging
log {
output file {{ caddy_log_dir }}/vaultwarden.log
level INFO
format json
}
}

View File

@@ -0,0 +1,26 @@
[Unit]
Description=Vaultwarden Password Manager Container
After=network-online.target postgresql.service
Wants=network-online.target
[Container]
ContainerName=vaultwarden
Image={{ vaultwarden_image }}:{{ vaultwarden_version }}
EnvironmentFile={{ vaultwarden_home }}/.env
# Volume mounts
# Application data (includes database, attachments, sends, icons, etc.)
Volume={{ vaultwarden_data_dir }}:/data:Z
# Infrastructure socket (PostgreSQL access with 777 permissions on host)
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
# Expose HTTP port to localhost only (Caddy will reverse proxy)
PublishPort=127.0.0.1:{{ vaultwarden_http_port }}:80
[Service]
Restart=always
TimeoutStartSec=300
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,73 @@
# Vaultwarden Environment Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY
# =================================================================
# Database Configuration (PostgreSQL via Unix Socket)
# =================================================================
DATABASE_URL=postgresql://{{ vaultwarden_db_user }}:{{ vaultwarden_db_password }}@/{{ vaultwarden_db_name }}?host={{ postgresql_unix_socket_directories }}
# =================================================================
# Domain Configuration
# =================================================================
DOMAIN=https://{{ vaultwarden_domain }}
# =================================================================
# Admin Configuration
# =================================================================
ADMIN_TOKEN={{ vaultwarden_admin_token_hashed }}
# =================================================================
# Registration and Invitation Controls
# =================================================================
SIGNUPS_ALLOWED={{ vaultwarden_signups_allowed | lower }}
INVITATIONS_ALLOWED={{ vaultwarden_invitations_allowed | lower }}
SHOW_PASSWORD_HINT={{ vaultwarden_show_password_hint | lower }}
# =================================================================
# WebSocket Configuration (for live sync)
# =================================================================
WEBSOCKET_ENABLED={{ vaultwarden_websocket_enabled | lower }}
# =================================================================
# SMTP Configuration (Optional)
# =================================================================
{% if vaultwarden_smtp_enabled %}
SMTP_HOST={{ vaultwarden_smtp_host }}
SMTP_PORT={{ vaultwarden_smtp_port }}
SMTP_FROM={{ vaultwarden_smtp_from }}
SMTP_SECURITY={{ vaultwarden_smtp_security }}
{% if vaultwarden_smtp_username %}
SMTP_USERNAME={{ vaultwarden_smtp_username }}
SMTP_PASSWORD={{ vaultwarden_smtp_password }}
{% endif %}
{% endif %}
# =================================================================
# SSO Configuration (Optional - Authentik Integration)
# =================================================================
{% if vaultwarden_sso_enabled %}
SSO_ENABLED=true
SSO_ONLY={{ vaultwarden_sso_only | lower }}
SSO_CLIENT_ID={{ vaultwarden_sso_client_id }}
SSO_CLIENT_SECRET={{ vaultwarden_sso_client_secret }}
SSO_AUTHORITY={{ vaultwarden_sso_authority }}
SSO_SCOPES="{{ vaultwarden_sso_scopes }}"
SSO_SIGNUPS_MATCH_EMAIL={{ vaultwarden_sso_signups_match_email | lower }}
SSO_ALLOW_UNKNOWN_EMAIL_VERIFICATION={{ vaultwarden_sso_allow_unknown_email_verification | lower }}
SSO_CLIENT_CACHE_EXPIRATION={{ vaultwarden_sso_client_cache_expiration }}
{% if vaultwarden_sso_signups_domains_whitelist %}
SSO_SIGNUPS_DOMAINS_WHITELIST={{ vaultwarden_sso_signups_domains_whitelist }}
{% endif %}
{% endif %}
# =================================================================
# Security and Performance
# =================================================================
# Do not require email verification for newly registered users (accounts are created via invitations or the admin panel)
SIGNUPS_VERIFY=false
# Log level (trace, debug, info, warn, error, off)
LOG_LEVEL=info
# Rocket configuration
ROCKET_WORKERS=10