Compare commits: cf71fb3a8d ... main (9 commits)

Commits: 1f3f111d88, eca3b4a726, 9bb9405b90, bfd6f22f0e, 89b43180fc, 846ab74f87, 90bbcd97b1, 1be7122251, 467e79c84b
docs/gitea-email-troubleshooting.md (new file, 211 lines)

# Gitea Email Configuration Troubleshooting

## Summary

Attempted to configure Gitea email functionality using the Titan Email (Hostinger) SMTP service. Email sending is currently **non-functional** due to SMTP authentication rejection by Titan Email servers.

## Configuration Details

### Email Provider

- **Provider:** Titan Email (by Hostinger)
- **Account:** hello@jnss.me
- **SMTP Server:** smtp.titan.email
- **Ports Tested:** 587 (STARTTLS), 465 (SSL/TLS)

### Gitea Configuration

```ini
[mailer]
ENABLED = true
PROTOCOL = smtp+starttls
SMTP_ADDR = smtp.titan.email
SMTP_PORT = 587
FROM = hello@jnss.me
USER = hello@jnss.me
PASSWD = <vault_gitea_smtp_password>
SUBJECT_PREFIX = [Gitea]
SEND_AS_PLAIN_TEXT = false
SMTP_AUTH = PLAIN
```

## Issue Description

Gitea fails to send emails with the following error:

```
Failed to send emails: failed to authenticate SMTP: 535 5.7.8 Error: authentication failed
```

## Troubleshooting Performed

### 1. Credential Verification
- ✅ **Webmail access:** Successfully logged into https://mail.titan.email/ with the credentials
- ✅ **Send/Receive:** Can send and receive emails through the webmail interface
- ✅ **Password confirmed:** Tested multiple times; credentials are correct

### 2. SMTP Connectivity Tests
- ✅ **Port 587 (STARTTLS):** Connection successful, TLS upgrade successful
- ✅ **Port 465 (SSL/TLS):** Connection successful with implicit TLS
- ✅ **DNS Resolution:** smtp.titan.email resolves correctly to multiple IPs

### 3. Authentication Method Testing

**Manual SMTP tests from the VPS (69.62.119.31):**

```
# Test Results:
AUTH PLAIN: ❌ 535 5.7.8 Error: authentication failed
AUTH LOGIN: ❌ 535 5.7.8 Error: authentication failed
```

**Both authentication methods were rejected by the server despite correct credentials.**
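The manual tests above can be reproduced with a short script. The following is a sketch, not the exact script used during the investigation; the helper builds the RFC 4616 AUTH PLAIN initial response (base64 of `\0user\0password`), the same format visible in the `AUTH PLAIN AGhl…` line of the debug capture in section 5.

```python
# Sketch of the manual SMTP auth test run from the VPS.
# The credentials shown in the usage comment are placeholders.
import base64
import smtplib

def plain_auth_string(user: str, password: str) -> str:
    """RFC 4616 AUTH PLAIN initial response: base64 of "\\0user\\0password"."""
    return base64.b64encode(f"\0{user}\0{password}".encode()).decode()

def try_auth_plain(host: str, port: int, user: str, password: str):
    """Connect, upgrade with STARTTLS, attempt AUTH PLAIN; returns (code, reply)."""
    with smtplib.SMTP(host, port, timeout=10) as smtp:
        smtp.starttls()
        smtp.ehlo()
        return smtp.docmd("AUTH", "PLAIN " + plain_auth_string(user, password))

# Network call (uncomment to run against the real server):
# print(try_auth_plain("smtp.titan.email", 587, "user@example.com", "secret"))
# A 535 reply here, despite working webmail credentials, reproduces the issue.
```

Running this from both the VPS and a local machine is a quick way to distinguish IP-based blocking from an account-level restriction.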

### 4. Configuration Iterations Tested

#### Iteration 1: Port 465 with smtps
```ini
PROTOCOL = smtps
SMTP_PORT = 465
```
**Result:** Authentication failed (535)

#### Iteration 2: Port 587 with smtp+starttls
```ini
PROTOCOL = smtp+starttls
SMTP_PORT = 587
```
**Result:** Authentication failed (535)

#### Iteration 3: Explicit AUTH PLAIN
```ini
PROTOCOL = smtp+starttls
SMTP_PORT = 587
SMTP_AUTH = PLAIN
```
**Result:** Authentication failed (535)

#### Iteration 4: Removed conflicting TLS settings
Removed:
- `ENABLE_TLS = true` (conflicted with PROTOCOL)
- `SKIP_VERIFY = false` (deprecated)

**Result:** Authentication still failed (535)

### 5. Debug Output Analysis

SMTP conversation debug output revealed:
```
send: 'AUTH PLAIN AGhlbGxvQGpuc3MubWUASGVsbG8xMjMh\r\n'
reply: b'535 5.7.8 Error: authentication failed: \r\n'

send: 'AUTH LOGIN aGVsbG8Aam5zcy5tZQ==\r\n'
reply: b'334 UGFzc3dvcmQ6\r\n'
send: 'SGVsbG8xMjMh\r\n'
reply: b'535 5.7.8 Error: authentication failed: UGFzc3dvcmQ6\r\n'
```

**Analysis:** The server advertises both AUTH PLAIN and AUTH LOGIN in its EHLO response but rejects actual authentication attempts with either method.

## Root Cause Analysis

### What Works
- ✅ SMTP server connectivity (both ports)
- ✅ TLS/STARTTLS negotiation
- ✅ Webmail authentication with the same credentials
- ✅ Email sending through webmail

### What Doesn't Work
- ❌ SMTP AUTH PLAIN from the VPS
- ❌ SMTP AUTH LOGIN from the VPS
- ❌ Both fail with the identical error: 535 5.7.8

### Conclusion

**The issue is NOT a Gitea configuration problem.** The SMTP server is actively rejecting authentication attempts despite:
- Correct credentials (verified in webmail)
- Proper TLS establishment
- Correct authentication protocol usage

## Possible Causes

1. **SMTP Access Disabled:** Titan Email may require SMTP/IMAP access to be explicitly enabled in the Hostinger control panel or Titan settings
2. **IP-Based Restrictions:** The VPS IP (69.62.119.31) may be blocked or require whitelisting
3. **Account Verification Required:** The account may need additional verification for SMTP access
4. **Service-Level Restriction:** The Titan Email plan may not include SMTP access for external applications
5. **Missing Activation:** The SMTP feature may require separate activation from webmail access

## Attempted Solutions

### Configuration Changes
- [x] Tested both port 587 (STARTTLS) and 465 (SSL/TLS)
- [x] Tried AUTH PLAIN and AUTH LOGIN methods
- [x] Removed conflicting TLS settings (ENABLE_TLS, SKIP_VERIFY)
- [x] Updated password in vault and redeployed
- [x] Verified minimal clean configuration

### External Tests
- [ ] Test SMTP from a different IP (local machine vs VPS)
- [ ] Check Hostinger control panel for an SMTP toggle
- [ ] Contact Hostinger/Titan support
- [ ] Verify the account has SMTP privileges

## Recommendations

### Immediate Next Steps
1. **Check Hostinger Control Panel:**
   - Log into hpanel.hostinger.com
   - Navigate to Emails → hello@jnss.me
   - Look for an SMTP/IMAP access toggle or settings

2. **Test from a Different IP:**
   - Test SMTP authentication from a local machine
   - If successful: IP blocking issue (request VPS IP whitelisting)
   - If failed: account-level restriction

3. **Contact Support:**
   - Provide the error: "535 5.7.8 authentication failed"
   - Request SMTP access verification for hello@jnss.me
   - Ask if SMTP requires separate activation

### Alternative Email Solutions

If Titan Email SMTP cannot be resolved:

1. **Use a Different Email Provider:**
   - Gmail (with App Passwords)
   - SendGrid (free tier: 100 emails/day)
   - Mailgun (free tier: 5,000 emails/month)
   - AWS SES (free tier: 62,000 emails/month)

2. **Use a Local Mail Server:**
   - Install Postfix on the VPS
   - Configure it as a relay
   - More complex, but full control

3. **Disable Email Features:**
   - Set `ENABLED = false` in `[mailer]`
   - OAuth account linking won't work
   - Password reset requires admin intervention
   - No email notifications

## Current Status

**Email functionality: DISABLED**

The configuration is correct but non-functional due to SMTP authentication rejection by Titan Email servers.

## Files Modified

- `roles/gitea/defaults/main.yml` - Email configuration variables
- `roles/gitea/templates/app.ini.j2` - Mailer section configuration
- `host_vars/arch-vps/vault.yml` - SMTP password

## References

- Gitea Mailer Documentation: https://docs.gitea.com/administration/config-cheat-sheet#mailer-mailer
- SMTP Error Codes: https://www.greenend.org.uk/rjk/tech/smtpreplies.html
- Titan Email Settings: https://support.hostinger.com/en/collections/3363865-titan-email

---

**Date:** 2025-12-19
**Investigated by:** OpenCode AI Assistant
**Status:** Unresolved - awaiting Titan Email SMTP access verification
docs/jnss-web-deployment.md (new file, 326 lines)

# jnss-web Static Site Deployment Guide

## Overview

This document describes the deployment process for jnss-web, a SvelteKit-based static website serving as the primary entrypoint for jnss.me.

## Architecture

- **Technology**: SvelteKit v2 with static adapter
- **Build Tool**: Bun
- **Deployment**: Ansible playbook with git-based artifact deployment
- **Web Server**: Caddy (direct file serving)
- **Repository**: git.jnss.me/joakim/jnss-web
- **Branch Strategy**:
  - `main` - Source code
  - `deploy` - Build artifacts only

## Server Architecture

```
/opt/jnss-web-repo/           # Git clone of deploy branch
/var/www/jnss-web/            # Synced build artifacts (served by Caddy)
/etc/caddy/sites-enabled/
└── jnss-web.caddy            # Site config with www redirect
/var/log/caddy/jnss-web.log   # Access logs
```

## Development Workflow

### 1. Local Development (main branch)

```bash
cd ~/dev/jnss-web
git checkout main

# Make changes to source code
# ...

# Test locally
bun run dev

# Build to verify
bun run build
```

### 2. Prepare Deploy Branch

```bash
# Build production version
bun run build

# Switch to deploy branch
git checkout deploy

# Merge changes from main
git merge main --no-commit

# Rebuild to ensure fresh build
bun run build

# Add only build artifacts
git add build/

# Commit with descriptive message
git commit -m "Deploy: Add new feature X"

# Push to Gitea
git push origin deploy
```

### 3. Deploy to Server

```bash
cd ~/rick-infra

# Run deployment playbook
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml

# Watch logs (optional)
ssh root@arch-vps 'tail -f /var/log/caddy/jnss-web.log'
```

## Playbook Details

### Variables

Located in `playbooks/deploy-jnss-web.yml`:

- `jnss_web_repo_owner`: Your Gitea username (default: "joakim")
- `jnss_web_branch`: Branch to deploy (default: "deploy")
- `jnss_web_domain`: Primary domain (default: "jnss.me")

### Tags

- `jnss-web` - All jnss-web tasks
- `deploy` - Git clone/sync tasks
- `caddy` - Caddy configuration tasks

### Example Usage

```bash
# Full deployment
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml

# Only update Caddy config (no git pull)
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml --tags caddy

# Only sync files (skip Caddy reload)
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml --tags deploy
```
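For orientation, the tags above map onto a clone → sync → reload task sequence. The following is an illustrative sketch of what such tasks can look like, not a verbatim copy of `playbooks/deploy-jnss-web.yml`:

```yaml
# Illustrative sketch only; see playbooks/deploy-jnss-web.yml for the real tasks.
- name: Clone/update deploy branch
  ansible.builtin.git:
    repo: "https://git.jnss.me/{{ jnss_web_repo_owner }}/jnss-web.git"
    dest: /opt/jnss-web-repo
    version: "{{ jnss_web_branch }}"
  tags: [jnss-web, deploy]

- name: Sync build artifacts to web root
  ansible.posix.synchronize:
    src: /opt/jnss-web-repo/build/
    dest: /var/www/jnss-web/
    delete: true
  delegate_to: "{{ inventory_hostname }}"
  tags: [jnss-web, deploy]

- name: Reload Caddy
  ansible.builtin.systemd:
    name: caddy
    state: reloaded
  tags: [jnss-web, caddy]
```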
## Initial Setup

### jnss-web Project Configuration

The following changes need to be made in the jnss-web project:

#### 1. Install Static Adapter

```bash
cd ~/dev/jnss-web
bun add -D @sveltejs/adapter-static
```

#### 2. Update svelte.config.js

```javascript
import adapter from '@sveltejs/adapter-static';

/** @type {import('@sveltejs/kit').Config} */
const config = {
	kit: {
		adapter: adapter({
			pages: 'build',
			assets: 'build',
			fallback: undefined,
			precompress: false,
			strict: true
		})
	}
};

export default config;
```

#### 3. Create src/routes/+layout.js

```javascript
export const prerender = true;
```

#### 4. Update .gitignore

Comment out or remove the build directory from .gitignore:

```gitignore
# build/  <- Comment this out or remove it
```

#### 5. Create Deploy Branch

```bash
# Build the site first
bun run build

# Create and switch to deploy branch
git checkout -b deploy

# Remove source files (keep only build artifacts)
git rm -r src static *.config.js package.json bun.lock jsconfig.json .npmrc

# Keep only build/ and documentation
git add build/ README.md

# Commit
git commit -m "Initial deploy branch with build artifacts"

# Push to Gitea
git push -u origin deploy

# Switch back to main
git checkout main
```
## Troubleshooting

### Build Directory Not Found

**Error**: "Build directory not found in repository"

**Solution**: Ensure you pushed the build artifacts to the deploy branch:
```bash
git checkout deploy
ls -la build/         # Should exist
git log --oneline -1  # Verify latest commit
```

### Caddy Not Serving New Content

**Solution**:
```bash
# SSH to server
ssh root@arch-vps

# Check Caddy status
systemctl status caddy

# Reload Caddy manually
systemctl reload caddy

# Check logs
journalctl -u caddy -f
```

### Permission Issues

**Solution**:
```bash
# SSH to server
ssh root@arch-vps

# Fix ownership
chown -R caddy:caddy /var/www/jnss-web

# Verify
ls -la /var/www/jnss-web
```

### Git Clone Fails

**Error**: Unable to clone from git.jnss.me

**Solution**:
- Verify the repository is public in Gitea
- Test clone manually: `git clone https://git.jnss.me/joakim/jnss-web.git /tmp/test`
- Check Gitea service status

## Rollback Procedure

### Option 1: Rollback Deploy Branch

```bash
# In jnss-web repository
git checkout deploy

# Find previous commit
git log --oneline

# Reset to previous commit
git reset --hard <commit-hash>

# Force push
git push -f origin deploy

# Re-run deployment
cd ~/rick-infra
ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
```

### Option 2: Quick Server-Side Rollback

```bash
# SSH to server
ssh root@arch-vps

# Go to repo directory
cd /opt/jnss-web-repo

# Reset to previous commit
git reset --hard HEAD~1

# Re-sync
rsync -av --delete build/ /var/www/jnss-web/

# Reload Caddy
systemctl reload caddy
```

## Security Considerations

- Repository is public (contains only build artifacts)
- No secrets in build output
- Caddy serves only /var/www/jnss-web (no parent directory access)
- Security headers configured in Caddy
- HTTPS enforced via Caddy with Let's Encrypt

## Performance Optimizations

- Static assets cached for 1 year (immutable)
- HTML cached for 1 hour with revalidation
- Gzip compression enabled
- No server-side processing required
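The redirect, caching, and logging behavior described in this document can be expressed in a Caddy site config along these lines. This is an illustrative sketch (the `/_app/immutable/*` path is an assumption based on SvelteKit's default asset layout), not the actual contents of `playbooks/templates/jnss-web.caddy.j2`:

```caddy
# Illustrative sketch only; the deployed jnss-web.caddy.j2 may differ.
www.jnss.me {
	redir https://jnss.me{uri} permanent
}

jnss.me {
	root * /var/www/jnss-web
	encode gzip
	file_server

	# Hashed static assets: cache for a year, immutable
	@assets path /_app/immutable/*
	header @assets Cache-Control "public, max-age=31536000, immutable"

	# HTML: cache briefly, then revalidate
	@html path / *.html
	header @html Cache-Control "public, max-age=3600, must-revalidate"

	log {
		output file /var/log/caddy/jnss-web.log
	}
}
```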
## Monitoring

### Check Deployment Status

```bash
# From control machine
ansible homelab -i inventory/hosts.yml -m shell -a "ls -lh /var/www/jnss-web"
```

### View Access Logs

```bash
ssh root@arch-vps 'tail -100 /var/log/caddy/jnss-web.log | jq .'
```

### Check Site Health

```bash
curl -I https://jnss.me
curl -I https://www.jnss.me  # Should redirect to jnss.me
```

## Files Created

This deployment adds the following files to rick-infra:

- `playbooks/deploy-jnss-web.yml` - Main deployment playbook
- `playbooks/templates/jnss-web.caddy.j2` - Caddy configuration template
- `docs/jnss-web-deployment.md` - This documentation

And modifies:

- `roles/caddy/templates/Caddyfile.j2` - Removed default site section
docs/metrics-deployment-guide.md (new file, 311 lines)

# Metrics Stack Deployment Guide

Complete guide to deploying the monitoring stack (VictoriaMetrics, Grafana, node_exporter) on rick-infra.

## Overview

The metrics stack provides:
- **System monitoring**: CPU, memory, disk, network via node_exporter
- **Time-series storage**: VictoriaMetrics (Prometheus-compatible, 7x less RAM)
- **Visualization**: Grafana with Authentik SSO integration
- **Access**: `https://metrics.jnss.me` with role-based permissions

## Architecture

```
User → metrics.jnss.me (HTTPS)
         ↓
Caddy (Reverse Proxy)
         ↓
Grafana (OAuth → Authentik for SSO)
         ↓
VictoriaMetrics (Time-series DB)
         ↑
node_exporter (System Metrics)
```

All services run on localhost only, following rick-infra security principles.

## Prerequisites

### 1. Caddy Deployed
```bash
ansible-playbook rick-infra.yml --tags caddy
```

### 2. Authentik Deployed
```bash
ansible-playbook rick-infra.yml --tags authentik
```

### 3. DNS Configuration
Ensure `metrics.jnss.me` points to the arch-vps IP:
```bash
dig metrics.jnss.me  # Should return 69.62.119.31
```

## Step 1: Configure Authentik OAuth Provider

### Create OAuth2/OIDC Provider

1. Login to Authentik at `https://auth.jnss.me`

2. Navigate to **Applications → Providers** → **Create**

3. Configure the provider:
   - **Name**: `Grafana`
   - **Type**: `OAuth2/OpenID Provider`
   - **Authentication flow**: `default-authentication-flow`
   - **Authorization flow**: `default-provider-authorization-explicit-consent`
   - **Client type**: `Confidential`
   - **Client ID**: `grafana`
   - **Client Secret**: Click **Generate** and **copy the secret**
   - **Redirect URIs**: `https://metrics.jnss.me/login/generic_oauth`
   - **Signing Key**: Select the auto-generated key
   - **Scopes**: `openid`, `profile`, `email`, `groups`

4. Click **Finish**

### Create Application

1. Navigate to **Applications** → **Create**

2. Configure the application:
   - **Name**: `Grafana`
   - **Slug**: `grafana`
   - **Provider**: Select the `Grafana` provider created above
   - **Launch URL**: `https://metrics.jnss.me`

3. Click **Create**

### Create Groups (Optional)

For role-based access control:

1. Navigate to **Directory → Groups** → **Create**

2. Create groups:
   - **grafana-admins**: Full admin access to Grafana
   - **grafana-editors**: Can create/edit dashboards
   - All other users get Viewer access

3. Add users to groups as needed

## Step 2: Configure Vault Variables

Edit the vault file:
```bash
ansible-vault edit host_vars/arch-vps/vault.yml
```

Add these variables:
```yaml
# Grafana admin password (for emergency local login)
vault_grafana_admin_password: "your-secure-admin-password"

# Grafana secret key (generate with: openssl rand -base64 32)
vault_grafana_secret_key: "your-random-32-char-secret-key"

# OAuth credentials from Authentik
vault_grafana_oauth_client_id: "grafana"
vault_grafana_oauth_client_secret: "paste-secret-from-authentik-here"
```

Save and close (`:wq` in vim).
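The `openssl rand -base64 32` step can equally be done with the Python standard library; the function name below is ours, used only for illustration:

```python
# Generate a Grafana secret key equivalent to `openssl rand -base64 32`:
# 32 cryptographically random bytes, base64-encoded.
import base64
import secrets

def grafana_secret_key() -> str:
    return base64.b64encode(secrets.token_bytes(32)).decode()

# Paste the printed value into vault_grafana_secret_key:
print(grafana_secret_key())
```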
## Step 3: Deploy Metrics Stack

Deploy all components:
```bash
ansible-playbook rick-infra.yml --tags metrics
```

This will:
1. Install and configure VictoriaMetrics
2. Install and configure node_exporter
3. Install and configure Grafana with OAuth
4. Deploy the Caddy configuration for `metrics.jnss.me`

Expected output:
```
PLAY RECAP *******************************************************
arch-vps : ok=25 changed=15 unreachable=0 failed=0 skipped=0
```

## Step 4: Verify Deployment

### Check Services

SSH to arch-vps and verify the services:
```bash
# Check all services are running
systemctl status victoriametrics grafana node_exporter

# Check service health
curl http://127.0.0.1:8428/health       # VictoriaMetrics
curl http://127.0.0.1:9100/metrics      # node_exporter
curl http://127.0.0.1:3000/api/health   # Grafana
```

### Check HTTPS Access

```bash
curl -I https://metrics.jnss.me
# Should return 200 or 302 (redirect to Authentik)
```

### Check Metrics Collection

```bash
# Check VictoriaMetrics scrape targets
curl http://127.0.0.1:8428/api/v1/targets

# Should show node_exporter as "up"
```
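The curl checks above can also be scripted against VictoriaMetrics' Prometheus-compatible query API. A minimal sketch, assuming the localhost port layout from this guide (VictoriaMetrics on 8428); the helper names are ours:

```python
# Minimal health probe against VictoriaMetrics' Prometheus-compatible API.
# Assumes the localhost port layout from this guide (VictoriaMetrics on 8428).
import json
import urllib.parse
import urllib.request

VM_BASE = "http://127.0.0.1:8428"

def query_url(expr: str, base: str = VM_BASE) -> str:
    """Build an instant-query URL for a PromQL expression."""
    return f"{base}/api/v1/query?" + urllib.parse.urlencode({"query": expr})

def scrape_targets_up(base: str = VM_BASE) -> bool:
    """True if every scrape target currently reports up == 1."""
    with urllib.request.urlopen(query_url("up", base), timeout=5) as resp:
        data = json.load(resp)
    results = data["data"]["result"]
    return bool(results) and all(s["value"][1] == "1" for s in results)

# Usage (on arch-vps): print(scrape_targets_up())
```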
## Step 5: Access Grafana

1. Navigate to `https://metrics.jnss.me`
2. Click **"Sign in with Authentik"**
3. Login with your Authentik credentials
4. You should be redirected to the Grafana dashboard

First login will:
- Auto-create your Grafana user
- Assign a role based on Authentik group membership
- Grant access to the default organization

## Step 6: Verify Data Source

1. In Grafana, navigate to **Connections → Data sources**
2. Verify **VictoriaMetrics** is listed and set as default
3. Click on VictoriaMetrics → **Save & test**
4. It should show a green "Data source is working" message

## Step 7: Create First Dashboard

### Option 1: Import Community Dashboard (Recommended)

1. Navigate to **Dashboards → Import**
2. Enter dashboard ID: `1860` (Node Exporter Full)
3. Click **Load**
4. Select **VictoriaMetrics** as the data source
5. Click **Import**

You now have a comprehensive system monitoring dashboard!

### Option 2: Create Custom Dashboard

1. Navigate to **Dashboards → New → New Dashboard**
2. Click **Add visualization**
3. Select the **VictoriaMetrics** data source
4. Enter a PromQL query:
   ```promql
   # CPU usage
   100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
   ```
5. Click **Apply**

## Step 8: Configure Alerting (Optional)

Grafana supports alerting on metrics. Configure via **Alerting → Alert rules**.

Example alert condition for high CPU (fires when idle CPU drops below 20%):
```promql
avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100 < 20
```

## Troubleshooting

### OAuth Login Fails

**Symptom**: Redirect to Authentik works, but an error is returned after login

**Solution**:
1. Verify the redirect URI in Authentik matches exactly: `https://metrics.jnss.me/login/generic_oauth`
2. Check Grafana logs: `journalctl -u grafana -f`
3. Verify the OAuth credentials in the vault match Authentik

### No Metrics in Grafana

**Symptom**: Data source is working, but no data appears in dashboards

**Solution**:
1. Check VictoriaMetrics targets: `curl http://127.0.0.1:8428/api/v1/targets`
2. Verify node_exporter is up: `systemctl status node_exporter`
3. Check the time range in Grafana (top right) - try "Last 5 minutes"

### Can't Access metrics.jnss.me

**Symptom**: Connection timeout or SSL error

**Solution**:
1. Verify DNS: `dig metrics.jnss.me`
2. Check Caddy is running: `systemctl status caddy`
3. Check Caddy logs: `journalctl -u caddy -f`
4. Verify the Caddy config is loaded: `ls /etc/caddy/sites/grafana.caddy`

### Wrong Grafana Role

**Symptom**: User has the wrong permissions (e.g., Viewer instead of Admin)

**Solution**:
1. Verify the user is in the correct Authentik group (`grafana-admins` or `grafana-editors`)
2. Log out of Grafana and log in again
3. Check the role mapping expression in `roles/metrics/defaults/main.yml`:
   ```yaml
   grafana_oauth_role_attribute_path: "contains(groups, 'grafana-admins') && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"
   ```
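Grafana evaluates `role_attribute_path` as a JMESPath expression against the OAuth userinfo. Its effect can be sanity-checked with a small Python equivalent; the function below is an illustration of the mapping, not Grafana's actual evaluator:

```python
# Python equivalent of the JMESPath role-mapping expression, for checking
# which role a given Authentik group list would produce. Illustrative only;
# Grafana itself evaluates the JMESPath expression, not this function.
def grafana_role(groups: list[str]) -> str:
    if "grafana-admins" in groups:
        return "Admin"
    if "grafana-editors" in groups:
        return "Editor"
    return "Viewer"
```

Note the precedence: a user in both groups gets Admin, mirroring the `&&`/`||` short-circuit order of the JMESPath expression.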
## Next Steps

### Add More Hosts

To monitor additional hosts (e.g., mini-vps):

1. Deploy node_exporter to the target host
2. Update the VictoriaMetrics scrape config to include remote targets
3. Configure remote_write or federation

### Add Service Metrics

To monitor containerized services:

1. Expose a `/metrics` endpoint in the application (port 8080)
2. Add a scrape config in `roles/metrics/templates/scrape.yml.j2`:
   ```yaml
   - job_name: 'myservice'
     static_configs:
       - targets: ['127.0.0.1:8080']
   ```
3. Redeploy the metrics role

### Set Up Alerting

1. Configure notification channels in Grafana (Email, Slack, etc.)
2. Create alert rules for critical metrics
3. Set up an on-call rotation if needed

## Security Notes

- All metrics services run on localhost only
- Grafana is the only internet-facing component (via Caddy HTTPS)
- OAuth provides SSO with Authentik (no separate Grafana passwords)
- systemd hardening enabled on all services
- The default admin account should only be used for emergencies

## Resources

- **VictoriaMetrics Docs**: https://docs.victoriametrics.com/
- **Grafana Docs**: https://grafana.com/docs/
- **PromQL Guide**: https://prometheus.io/docs/prometheus/latest/querying/basics/
- **Dashboard Library**: https://grafana.com/grafana/dashboards/
- **Authentik OAuth**: https://goauthentik.io/docs/providers/oauth2/

## Support

For issues specific to the rick-infra metrics deployment:
1. Check service logs: `journalctl -u <service> -f`
2. Review the role README: `roles/metrics/README.md`
3. Verify vault variables are correctly set
4. Ensure the Authentik OAuth provider is properly configured
307
docs/vaultwarden-sso-status.md
Normal file
307
docs/vaultwarden-sso-status.md
Normal file
@@ -0,0 +1,307 @@
|
|||||||
|
# Vaultwarden SSO Feature Status and Configuration
|
||||||
|
|
||||||
|
**Document Date:** December 21, 2025
|
||||||
|
**Last Updated:** December 21, 2025
|
||||||
|
**Status:** SSO Configured, Waiting for Stable Release
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Executive Summary
|
||||||
|
|
||||||
|
Vaultwarden has been successfully deployed with **SSO integration pre-configured** for Authentik. However, SSO functionality is currently **only available in testing images** and not yet in the stable release. This document explains the current state, our decision to wait for stable, and how to activate SSO when it becomes available.
|
||||||
|
|
||||||
|
## Current Deployment Status
|
||||||
|
|
||||||
|
### What's Working
|
||||||
|
- ✅ Vaultwarden deployed successfully at `https://vault.jnss.me`
- ✅ PostgreSQL backend via Unix socket
- ✅ Admin panel accessible and working
- ✅ Email/password authentication working
- ✅ SMTP notifications configured
- ✅ All SSO environment variables correctly configured
- ✅ Authentik OAuth2 provider created and ready

### What's Not Working (By Design)
- ❌ SSO login option not appearing on login page
- ❌ "Use single sign-on" button missing

**Reason:** We are running the stable image (`vaultwarden/server:latest`, v1.34.3), which does not include the SSO code.
## Investigation Summary (Dec 21, 2025)

### Problem Reported
Vaultwarden was deployed with `vaultwarden_sso_enabled: true` and Authentik integration configured following the official guides, but no SSO option appeared on the login page.

### Root Cause Identified
The investigation included:
- Service status check (healthy, running normally)
- Environment variable verification (all SSO vars present and correct)
- Configuration review (matches the Authentik integration guide)
- API endpoint inspection (`/api/config` returns `"sso":""`)
- Official documentation review

**Finding:** The SSO feature is only compiled into `vaultwarden/server:testing` images, not stable releases.

### Evidence
From the [Vaultwarden Official Wiki](https://github.com/dani-garcia/vaultwarden/wiki):

> **Testing features**
>
> Features available in the `testing` docker image:
> - Single Sign-On (SSO), see [Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)

From the [SSO Documentation Page](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect):

> ⚠️ **Important**
>
> ‼️ ‼️ ‼️
> SSO is currently only available in the `:testing` tagged images!
> The current stable `v1.34.3` **does not** contain the SSO feature.
> ‼️ ‼️ ‼️
### API Response Analysis

```bash
# API config endpoint shows SSO not available
curl -s http://127.0.0.1:8080/api/config | grep -o '"sso":"[^"]*"'
# Returns: "sso":"" ← Empty = feature not compiled in

# Environment variables are set correctly
podman exec vaultwarden env | grep -i sso
SSO_ENABLED=true
SSO_ONLY=false
SSO_CLIENT_ID=DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr
SSO_CLIENT_SECRET=02D308Sle2w2NPsi7UaXb3bvKKK4punFDT2LiVqKpzvEFqgPpyLysA8Z5yS4g8t4LYmsI9txLE02l5MtWP5R2RBavLhYHjNFHcwEmvYB94bOJw45YmgiGePaW4NHKcfY
SSO_AUTHORITY=https://auth.jnss.me/application/o/vaultwarden/
SSO_SCOPES="openid email profile offline_access"
# ... etc (all correct)
```

**Conclusion:** The environment is configured correctly; the feature is simply not available in the stable release.
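The `"sso"` field check above can be wrapped in a tiny helper for scripting. A minimal sketch, assuming only POSIX sh plus `grep`/`cut`; the `sso_available` function name and the sample JSON bodies are illustrative stand-ins for a live `curl -s http://127.0.0.1:8080/api/config` call:

```bash
#!/bin/sh
# Hypothetical helper: report whether the running image has SSO compiled in,
# judged by the "sso" field of the /api/config response.
sso_available() {
  # $1: JSON body of /api/config; succeeds when "sso" is a non-empty URL
  val=$(printf '%s' "$1" | grep -o '"sso":"[^"]*"' | cut -d'"' -f4)
  [ -n "$val" ]
}

stable='{"version":"1.34.3","sso":""}'
testing='{"version":"1.34.3","sso":"https://vault.jnss.me"}'

sso_available "$stable"  && echo "stable: SSO available" || echo "stable: SSO not compiled in"
sso_available "$testing" && echo "testing: SSO available"
```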
## Decision: Wait for Stable Release

### Rationale

**Why NOT switch to the testing image:**
1. **Production stability** - This is a password manager handling sensitive credentials
2. **Testing images are volatile** - Frequent updates, potential bugs
3. **No ETA for stable** - SSO is marked as a "testing feature" indefinitely
4. **Current auth works fine** - Email/password login is secure and functional
5. **Configuration is ready** - When SSO reaches stable, it will work immediately

**Why keep SSO configured:**
1. **Future-ready** - No additional work needed when SSO stabilizes
2. **No harm** - The environment variables are ignored by the stable image
3. **Documentation** - Clear record of the SSO setup for future reference
4. **Authentik provider ready** - Already created and configured

### Alternatives Considered

| Option | Pros | Cons | Decision |
|--------|------|------|----------|
| Use `testing` image | SSO available now | Unstable, potential data loss, frequent breaking changes | ❌ Rejected |
| Wait for stable | Stable, reliable, secure | No SSO until unknown future date | ✅ **Selected** |
| Remove SSO config | Cleaner config | Requires reconfiguration later | ❌ Rejected |
| Dual deployment | Test SSO separately | Resource waste, complexity | ❌ Rejected |
## Current Configuration

### Ansible Role Variables

Location: `roles/vaultwarden/defaults/main.yml`

```yaml
# Container version (stable, no SSO)
vaultwarden_version: "latest"

# SSO enabled (ready for when the feature reaches stable)
vaultwarden_sso_enabled: true
vaultwarden_sso_only: false
vaultwarden_sso_client_id: "{{ vault_vaultwarden_sso_client_id }}"
vaultwarden_sso_client_secret: "{{ vault_vaultwarden_sso_client_secret }}"
vaultwarden_sso_authority: "https://auth.jnss.me/application/o/vaultwarden/"
vaultwarden_sso_scopes: "openid email profile offline_access"
vaultwarden_sso_signups_match_email: true
vaultwarden_sso_allow_unknown_email_verification: false
vaultwarden_sso_client_cache_expiration: 0
```

### Vault Variables (Encrypted)

Location: `group_vars/homelab/vault.yml`

```yaml
# SSO credentials from Authentik
vault_vaultwarden_sso_client_id: "DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr"
vault_vaultwarden_sso_client_secret: "02D308Sle2w2NPsi7UaXb3bvKKK4punFDT2LiVqKpzvEFqgPpyLysA8Z5yS4g8t4LYmsI9txLE02l5MtWP5R2RBavLhYHjNFHcwEmvYB94bOJw45YmgiGePaW4NHKcfY"
```
### Authentik Provider Configuration

**Provider Details:**
- **Name:** Vaultwarden
- **Type:** OAuth2/OpenID Connect Provider
- **Client Type:** Confidential
- **Client ID:** `DDOQXdwFn6pi4FtSvo7PK5b63pRzyD552xapTZGr`
- **Application Slug:** `vaultwarden`
- **Redirect URI:** `https://vault.jnss.me/identity/connect/oidc-signin`

**Scopes Configured:**
- ✅ `authentik default OAuth Mapping: OpenID 'openid'`
- ✅ `authentik default OAuth Mapping: OpenID 'email'`
- ✅ `authentik default OAuth Mapping: OpenID 'profile'`
- ✅ `authentik default OAuth Mapping: OpenID 'offline_access'`

**Token Settings:**
- Access token validity: > 5 minutes
- Refresh token enabled via `offline_access` scope

**Authority URL:** `https://auth.jnss.me/application/o/vaultwarden/`

### Deployed Environment (VPS)

```bash
# Deployed environment file
# Location: /opt/vaultwarden/.env

# All SSO variables present and correct
SSO_ENABLED=true
SSO_AUTHORITY=https://auth.jnss.me/application/o/vaultwarden/
SSO_SCOPES="openid email profile offline_access"
# ... etc
```
## How to Activate SSO (Future)

When Vaultwarden SSO reaches a stable release:

### Automatic Activation (Recommended)

1. **Monitor Vaultwarden releases** for SSO in stable:
   - Watch: https://github.com/dani-garcia/vaultwarden/releases
   - Look for: the SSO feature in the stable image changelog

2. **Update the container** (standard maintenance):
   ```bash
   ansible-playbook rick-infra.yml --tags vaultwarden --ask-vault-pass
   ```

3. **Verify SSO is available**:
   ```bash
   ssh root@69.62.119.31 "curl -s http://127.0.0.1:8080/api/config | grep sso"
   # Should return: "sso":"https://vault.jnss.me" or similar
   ```

4. **Test SSO**:
   - Navigate to: https://vault.jnss.me
   - Log out if logged in
   - Enter a verified email address
   - Click the "Use single sign-on" button
   - You should be redirected to the Authentik login

**No configuration changes needed** - everything is already set up correctly.
### Manual Testing (Use Testing Image)

If you want to test SSO before the stable release:

1. **Back up the current deployment**:
   ```bash
   ansible-playbook playbooks/backup-vaultwarden.yml  # Create if needed
   ```

2. **Change to the testing image** in `roles/vaultwarden/defaults/main.yml`:
   ```yaml
   vaultwarden_version: "testing"
   ```

3. **Deploy**:
   ```bash
   ansible-playbook rick-infra.yml --tags vaultwarden --ask-vault-pass
   ```

4. **Test SSO** (same as above)

5. **Revert to stable** when testing is complete:
   ```yaml
   vaultwarden_version: "latest"
   ```

**Warning:** Testing images may contain bugs, data corruption risks, or breaking changes. Not recommended for a production password manager.
## Verification Commands

### Check Current Image Version
```bash
ssh root@69.62.119.31 "podman inspect vaultwarden --format '{{.ImageName}}'"
# Expected: docker.io/vaultwarden/server:latest
```

### Check SSO API Status
```bash
ssh root@69.62.119.31 "curl -s http://127.0.0.1:8080/api/config | grep -o '\"sso\":\"[^\"]*\"'"
# Current: "sso":"" (empty = not available)
# Future: "sso":"https://vault.jnss.me" (URL = available)
```

### Check SSO Environment Variables
```bash
ssh root@69.62.119.31 "podman exec vaultwarden env | grep -i sso | sort"
# Should show all SSO_* variables configured correctly
```

### Check Vaultwarden Version
```bash
ssh root@69.62.119.31 "podman exec vaultwarden /vaultwarden --version"
# Current: Vaultwarden 1.34.3
```
## Documentation Updates

The following files have been updated to document this finding:

1. **`roles/vaultwarden/README.md`**
   - Added warning banner in the SSO configuration section
   - Updated the troubleshooting section with SSO status checks
   - Documented stable vs testing image behavior

2. **`roles/vaultwarden/VAULT_VARIABLES.md`**
   - Added SSO feature status warning
   - Documented that credentials are ready but inactive

3. **`roles/vaultwarden/defaults/main.yml`**
   - Added comments explaining SSO availability

4. **`docs/vaultwarden-sso-status.md`** (this document)
   - Complete investigation findings
   - Configuration reference
   - Activation procedures
## Timeline

- **2025-07-30:** Vaultwarden v1.34.3 released (current stable)
- **2025-12-21:** Vaultwarden deployed with SSO pre-configured
- **2025-12-21:** Investigation completed, SSO status documented
- **TBD:** SSO feature reaches stable release (no ETA)
- **Future:** Automatic SSO activation on next deployment

## Related Documentation

- [Vaultwarden Official Wiki](https://github.com/dani-garcia/vaultwarden/wiki)
- [SSO Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
- [Authentik Integration Guide](https://integrations.goauthentik.io/security/vaultwarden/)
- [Vaultwarden Testing Features](https://github.com/dani-garcia/vaultwarden/wiki#testing-features)

## Contact

For questions about this deployment:
- Infrastructure repo: `/home/fitz/rick-infra`
- Role location: `roles/vaultwarden/`
- Service: `vault.jnss.me`
- Host: `arch-vps` (69.62.119.31)

---

**Status:** SSO configured and ready, waiting for upstream stable release. No action required.
@@ -65,7 +65,8 @@ nextcloud_db_password: "{{ vault_nextcloud_db_password }}"
 nextcloud_valkey_db: 2  # Authentik uses 1

 # Admin configuration
-nextcloud_admin_user: "admin"
+nextcloud_admin_user: "joakim"
+nextcloud_admin_email: "joakim@jnss.me"
 nextcloud_admin_password: "{{ vault_nextcloud_admin_password }}"

 # Service configuration
@@ -5,15 +5,21 @@

 - [ ] What gets served on jnss.me?
 - [ ] Backups
+- [x] Titan email provider support. For smtp access to hello@jnss.me
+- [ ] Vaultwarden
 - [ ] Configure and set up Nextcloud
   - [ ] OAuth
   - [ ] Settings
   - [ ] Contacts and calendars
   - [ ] Storage bucket integration?
+  - [x] SMTP setup for email sending
 - [x] Gitea
   - [x] SSH passthrough setup
+  - [x] Figure out how to disable registration and local password
+  - [x] SMTP setup for email sending
 - [ ] Authentik Invitations for users?
133
playbooks/deploy-jnss-web.yml
Normal file
@@ -0,0 +1,133 @@
---
# ================================================================
# jnss-web Static Site Deployment Playbook
# ================================================================
# Deploys the jnss-web SvelteKit static site to jnss.me
#
# Usage:
#   ansible-playbook -i inventory/hosts.yml playbooks/deploy-jnss-web.yml
#
# This playbook:
#   - Clones the jnss-web repository (deploy branch) to a temp directory
#   - Syncs build artifacts to /var/www/jnss-web
#   - Deploys Caddy configuration for jnss.me with www redirect
#   - Reloads Caddy to serve the new site
# ================================================================

- name: Deploy jnss-web static site
  hosts: homelab
  become: true

  vars:
    # Git repository configuration
    jnss_web_repo_url: "https://git.jnss.me/joakim/jnss-web.git"
    jnss_web_branch: "deploy"

    # Server paths
    jnss_web_root: "/var/www/jnss-web"

    # Domain configuration
    jnss_web_domain: "jnss.me"

    # Caddy configuration
    caddy_user: "caddy"
    caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"

  tasks:
    # ============================================================
    # Git Repository Management
    # ============================================================

    - name: Create temporary directory for git clone
      tempfile:
        state: directory
        suffix: -jnss-web
      register: temp_clone_dir
      tags: [jnss-web, deploy]

    - name: Clone jnss-web repository to temp directory
      git:
        repo: "{{ jnss_web_repo_url }}"
        dest: "{{ temp_clone_dir.path }}"
        version: "{{ jnss_web_branch }}"
        depth: 1
      tags: [jnss-web, deploy]

    - name: Verify build output exists in repository
      stat:
        path: "{{ temp_clone_dir.path }}/index.html"
      register: build_dir
      tags: [jnss-web, deploy]

    - name: Fail if index.html not found
      fail:
        msg: "Build index.html not found in repository root. Ensure the deploy branch contains the built artifacts."
      when: not build_dir.stat.exists
      tags: [jnss-web, deploy]

    # ============================================================
    # Web Root Deployment
    # ============================================================

    - name: Remove old web root
      file:
        path: "{{ jnss_web_root }}"
        state: absent
      tags: [jnss-web, deploy]

    - name: Create fresh web root directory
      file:
        path: "{{ jnss_web_root }}"
        state: directory
        owner: "{{ caddy_user }}"
        group: "{{ caddy_user }}"
        mode: '0755'
      tags: [jnss-web, deploy]

    - name: Copy build files to web root
      copy:
        src: "{{ temp_clone_dir.path }}/"
        dest: "{{ jnss_web_root }}/"
        owner: "{{ caddy_user }}"
        group: "{{ caddy_user }}"
        mode: '0755'
        remote_src: true
      tags: [jnss-web, deploy]

    - name: Clean up temporary clone directory
      file:
        path: "{{ temp_clone_dir.path }}"
        state: absent
      tags: [jnss-web, deploy]

    # ============================================================
    # Caddy Configuration
    # ============================================================

    - name: Deploy Caddy configuration for jnss-web
      template:
        src: templates/jnss-web.caddy.j2
        dest: "{{ caddy_sites_enabled_dir }}/jnss-web.caddy"
        owner: root
        group: "{{ caddy_user }}"
        mode: '0644'
      notify: reload caddy
      tags: [jnss-web, caddy]

    - name: Validate Caddy configuration
      command: caddy validate --config /etc/caddy/Caddyfile
      register: caddy_validate
      changed_when: false
      tags: [jnss-web, caddy]

    - name: Display Caddy validation result
      debug:
        msg: "Caddy configuration is valid"
      when: caddy_validate.rc == 0
      tags: [jnss-web, caddy]

  handlers:
    - name: reload caddy
      systemd:
        name: caddy
        state: reloaded
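The task sequence in the playbook above (temp clone, artifact sanity check, web-root swap, cleanup) reduces to a handful of shell steps. A rough sketch under the assumption that plain directories under `/tmp` stand in for the real git clone and the `/var/www/jnss-web` root; all paths here are illustrative:

```bash
#!/bin/sh
# Sketch of the deploy flow: clone to a temp dir, verify the build artifact,
# replace the web root, clean up. /tmp paths are illustrative stand-ins.
set -e

tmp=$(mktemp -d -t jnss-web-XXXXXX)
# Stand-in for: git clone --depth 1 -b deploy https://git.jnss.me/joakim/jnss-web.git "$tmp"
echo '<!doctype html><title>jnss.me</title>' > "$tmp/index.html"

# Fail early if the deploy branch lacks the built artifacts
[ -f "$tmp/index.html" ] || { echo "Build index.html not found" >&2; exit 1; }

root=/tmp/jnss-web-demo          # stand-in for /var/www/jnss-web
rm -rf "$root"
mkdir -p "$root"
cp -R "$tmp/." "$root/"
rm -rf "$tmp"

echo "deployed files: $(ls "$root")"
```

Removing and recreating the web root (rather than syncing in place) mirrors the playbook's choice: stale assets from a previous build can never linger.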
203
playbooks/remove-nextcloud.yml
Normal file
@@ -0,0 +1,203 @@
---
# =================================================================
# Nextcloud Removal Playbook
# =================================================================
# Rick-Infra - Clean removal of Nextcloud installation
#
# This playbook removes all Nextcloud components:
#   - Systemd services and timers
#   - Container and images
#   - Data directories
#   - Database and user
#   - Caddy configuration
#   - System user and groups
#
# Usage: ansible-playbook playbooks/remove-nextcloud.yml -i inventory/hosts.yml

- name: Remove Nextcloud Installation
  hosts: arch-vps
  become: yes
  gather_facts: yes

  vars:
    nextcloud_user: nextcloud
    nextcloud_group: nextcloud
    nextcloud_home: /opt/nextcloud
    nextcloud_db_name: nextcloud
    nextcloud_db_user: nextcloud
    caddy_sites_enabled_dir: /etc/caddy/sites-enabled

  tasks:
    # ============================================
    # Stop and Disable Services
    # ============================================

    - name: Stop and disable nextcloud-cron timer
      systemd:
        name: nextcloud-cron.timer
        state: stopped
        enabled: no
      failed_when: false

    - name: Stop and disable nextcloud-cron service
      systemd:
        name: nextcloud-cron.service
        state: stopped
        enabled: no
      failed_when: false

    - name: Stop and disable nextcloud service
      systemd:
        name: nextcloud.service
        state: stopped
        enabled: no
      failed_when: false

    # ============================================
    # Remove Container and Images
    # ============================================

    - name: Remove nextcloud container (if running)
      command: podman rm -f nextcloud
      register: container_remove
      changed_when: container_remove.rc == 0
      failed_when: false

    # ============================================
    # Remove Systemd Units
    # ============================================

    - name: Remove nextcloud-cron systemd units
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - /etc/systemd/system/nextcloud-cron.timer
        - /etc/systemd/system/nextcloud-cron.service

    - name: Remove nextcloud quadlet file
      file:
        path: /etc/containers/systemd/nextcloud.container
        state: absent

    - name: Reload systemd daemon
      systemd:
        daemon_reload: yes

    # ============================================
    # Remove Database
    # ============================================

    - name: Drop nextcloud database
      become_user: postgres
      postgresql_db:
        name: "{{ nextcloud_db_name }}"
        state: absent
      failed_when: false

    - name: Drop nextcloud database user
      become_user: postgres
      postgresql_user:
        name: "{{ nextcloud_db_user }}"
        state: absent
      failed_when: false

    # ============================================
    # Remove Caddy Configuration
    # ============================================

    - name: Remove nextcloud Caddy configuration
      file:
        path: "{{ caddy_sites_enabled_dir }}/nextcloud.caddy"
        state: absent
      notify: reload caddy

    # ============================================
    # Remove Data Directories
    # ============================================

    - name: Remove nextcloud home directory (including all data)
      file:
        path: "{{ nextcloud_home }}"
        state: absent

    # ============================================
    # Remove User and Groups
    # ============================================

    - name: Remove nextcloud user
      user:
        name: "{{ nextcloud_user }}"
        state: absent
        remove: yes
        force: yes

    - name: Remove nextcloud group
      group:
        name: "{{ nextcloud_group }}"
        state: absent

    # ============================================
    # Clean Up Remaining Files
    # ============================================

    - name: Find nextcloud-related files in /tmp
      find:
        paths: /tmp
        patterns: "nextcloud*,nc_*"
        file_type: any
      register: tmp_files

    - name: Remove nextcloud temp files
      file:
        path: "{{ item.path }}"
        state: absent
      loop: "{{ tmp_files.files }}"
      when: tmp_files.files | length > 0
      failed_when: false

    - name: Remove caddy logs for nextcloud
      file:
        path: /var/log/caddy/nextcloud.log
        state: absent
      failed_when: false

    # ============================================
    # Verification
    # ============================================

    - name: Verify nextcloud service is removed
      command: systemctl list-units --all nextcloud*
      register: units_check
      changed_when: false

    - name: Verify nextcloud container is removed
      command: podman ps -a --filter name=nextcloud
      register: container_check
      changed_when: false

    - name: Display removal status
      debug:
        msg: |
          ✅ Nextcloud removal complete!

          Removed components:
          - ⏹️ Nextcloud service and cron timer
          - 🐳 Container: {{ 'Removed' if container_remove.rc == 0 else 'Not found' }}
          - 🗄️ Database: {{ nextcloud_db_name }}
          - 📁 Data directory: {{ nextcloud_home }}
          - 👤 System user: {{ nextcloud_user }}
          - 🌐 Caddy configuration

          Remaining services:
          {{ units_check.stdout }}

          Containers:
          {{ container_check.stdout }}

  handlers:
    - name: reload caddy
      systemd:
        name: caddy
        state: reloaded
      failed_when: false
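Most destructive tasks in this playbook set `failed_when: false` so a re-run stays green even when a component is already gone. The shell analogue is tolerating failure per step; a minimal sketch in which the `remove_safely` helper and the `/tmp` paths are hypothetical:

```bash
#!/bin/sh
# Hypothetical helper mirroring failed_when: false - each cleanup step may
# fail (target already removed) without aborting the rest of the run.
remove_safely() {
  "$@" 2>/dev/null || true
}

remove_safely rm /tmp/nextcloud-leftover-that-does-not-exist
remove_safely rmdir /tmp/nextcloud-dir-that-does-not-exist
echo "cleanup finished"
```

Running the script twice produces the same end state, which is exactly the idempotence the playbook is after.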
@@ -322,8 +322,8 @@
       sysctl_file: /etc/sysctl.d/99-security.conf
       reload: yes
     loop:
-      # Disable IP forwarding
-      - { name: 'net.ipv4.ip_forward', value: '0' }
+      # Enable IP forwarding (required for container networking)
+      - { name: 'net.ipv4.ip_forward', value: '1' }
       - { name: 'net.ipv6.conf.all.forwarding', value: '0' }

       # Disable source routing
57
playbooks/templates/jnss-web.caddy.j2
Normal file
@@ -0,0 +1,57 @@
# jnss-web Static Site Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY

# WWW Redirect - apex is primary
www.{{ jnss_web_domain }} {
    redir https://{{ jnss_web_domain }}{uri} permanent
}

# Primary Domain
{{ jnss_web_domain }} {
    root * {{ jnss_web_root }}
    file_server

    # SPA routing - serve index.html for all routes
    try_files {path} /index.html

    # Security headers
    header {
        X-Frame-Options SAMEORIGIN
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        Permissions-Policy "geolocation=(), microphone=(), camera=()"
    }

    # Cache static assets aggressively
    @static {
        path /_app/* /assets/* /icons/* *.ico *.png *.jpg *.jpeg *.svg *.webp *.woff *.woff2 *.css *.js
    }
    header @static {
        Cache-Control "public, max-age=31536000, immutable"
        Vary "Accept-Encoding"
    }

    # Cache HTML with shorter duration
    @html {
        path *.html /
    }
    header @html {
        Cache-Control "public, max-age=3600, must-revalidate"
    }

    # Enable compression
    encode gzip

    # Logging
    log {
        output file /var/log/caddy/jnss-web.log {
            roll_size 100mb
            roll_keep 5
        }
        format json {
            time_format "2006-01-02T15:04:05.000Z07:00"
        }
        level INFO
    }
}
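The `try_files {path} /index.html` directive is what makes SvelteKit client-side routes resolve: existing files are served directly, while everything else falls back to the SPA entry point. A rough shell model of that lookup, where the `resolve` function and the `/tmp` web root are illustrative, not part of the deployment:

```bash
#!/bin/sh
# Model of Caddy's try_files {path} /index.html: serve the requested path if
# it exists under the web root, otherwise fall back to the SPA entry point.
resolve() {
  root=$1; path=$2
  if [ -f "$root$path" ]; then
    printf '%s\n' "$path"
  else
    printf '/index.html\n'
  fi
}

mkdir -p /tmp/jnss-try-demo/_app
echo 'ok' > /tmp/jnss-try-demo/index.html
echo 'js' > /tmp/jnss-try-demo/_app/main.js

resolve /tmp/jnss-try-demo /_app/main.js   # existing asset, served directly
resolve /tmp/jnss-try-demo /blog/post-1    # client-side route, falls back
```

This is also why the aggressive `@static` cache matcher is safe: hashed `/_app/*` assets are real files, while fallback routes always reach the short-lived `index.html`.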
@@ -9,9 +9,12 @@
 # - Nextcloud cloud storage
 # - Authentik SSO/authentication
 # - Gitea git hosting
+# - Vaultwarden password manager
+# - Metrics (VictoriaMetrics, Grafana, node_exporter)
 #
 # Usage:
-#   ansible-playbook playbooks/homelab.yml
+#   ansible-playbook rick-infra.yml
+#   ansible-playbook rick-infra.yml --tags metrics

 # - import_playbook: playbooks/security.yml
 - name: Deploy Homelab Infrastructure
@@ -20,18 +23,33 @@
   gather_facts: true

   tasks:
+    # - name: Deploy Caddy
+    #   include_role:
+    #     name: caddy
+    #   tags: ['caddy']
+
+    - name: Deploy Metrics Stack
+      include_role:
+        name: metrics
+      tags: ['metrics', 'monitoring', 'grafana', 'victoriametrics']
+
     # - name: Deploy Authentik
     #   include_role:
     #     name: authentik
     #   tags: ['authentik', 'sso', 'auth']

-    - name: Deploy Gitea
-      include_role:
-        name: gitea
-      tags: ['gitea', 'git', 'development']
+    # - name: Deploy Gitea
+    #   include_role:
+    #     name: gitea
+    #   tags: ['gitea', 'git', 'development']

     # - name: Deploy Nextcloud
     #   include_role:
     #     name: nextcloud
     #   tags: ['nextcloud', 'cloud', 'storage']
+
+    # - name: Deploy Vaultwarden
+    #   include_role:
+    #     name: vaultwarden
+    #   tags: ['vaultwarden', 'vault', 'password-manager', 'security']
@@ -81,7 +81,6 @@
       owner: root
       group: "{{ caddy_user }}"
       mode: '0644'
-      backup: true
      notify: reload caddy
      tags: [caddy, reverse-proxy]
@@ -120,7 +120,7 @@
       owner: root
       group: "{{ caddy_user }}"
       mode: '0640'
-      backup: yes
+      backup: no
      notify: reload caddy

 - name: Check Caddyfile syntax (basic check)
@@ -21,43 +21,3 @@

 # Import service configurations
 import {{ caddy_sites_enabled_dir }}/*
-
-# Primary domain: {{ caddy_domain }}
-{{ caddy_domain }} {
-{% if caddy_tls_enabled %}
-{% if caddy_dns_provider == "cloudflare" and cloudflare_api_token %}
-    # DNS challenge for automatic TLS
-    tls {
-        dns cloudflare {{ cloudflare_api_token }}
-        resolvers {{ caddy_dns_resolvers | join(' ') }}
-    }
-{% elif caddy_tls_email %}
-    # HTTP challenge for automatic TLS
-    tls {{ caddy_tls_email }}
-{% endif %}
-{% endif %}
-
-    # Serve static content
-    root * {{ caddy_default_site_root }}
-    file_server
-
-    # Logging
-    log {
-{% if caddy_log_format == "json" %}
-        output file {{ caddy_log_dir }}/{{ caddy_domain | replace('.', '_') }}.log {
-            roll_size 100mb
-            roll_keep 5
-        }
-        format json {
-            time_format "2006-01-02T15:04:05.000Z07:00"
-        }
-        level {{ caddy_log_level }}
-{% else %}
-        output file {{ caddy_log_dir }}/{{ caddy_domain | replace('.', '_') }}.log {
-            roll_size 100mb
-            roll_keep 5
-        }
-        level {{ caddy_log_level }}
-{% endif %}
-    }
-}
@@ -57,9 +57,100 @@ gitea_run_mode: "prod"
 gitea_default_branch: "main"
 gitea_enable_lfs: true

-# Security settings
-gitea_disable_registration: false
-gitea_require_signin: false
+# =================================================================
+# Private Git Server & OAuth Configuration
+# =================================================================
+
+# Access Control - Private server with public repos allowed
+gitea_disable_registration: true        # No public registration (admin only)
+gitea_require_signin: false             # Require sign-in (unauthorized users read-only)
+gitea_show_registration_button: false   # Hide registration UI
+
+# OAuth Configuration - Preferred but not forced
+gitea_enable_password_signin: false     # Hide password login form
+gitea_enable_basic_auth: true           # Keep password API auth as backup
+gitea_oauth2_auto_registration: true    # Auto-create OAuth users
+gitea_oauth2_account_linking: "login"   # Show account linking page
+gitea_oauth2_username_source: "preferred_username"
+gitea_oauth2_update_avatar: true
+gitea_oauth2_scopes: "profile,email,groups"
+gitea_oauth2_register_email_confirm: false
+
+# =================================================================
+# Email Configuration (Titan Email via Hostinger)
+# =================================================================
+
+gitea_mailer_enabled: true
+gitea_mailer_protocol: "smtp+starttls"  # Port 587 with STARTTLS
+gitea_smtp_addr: "smtp.titan.email"
+gitea_smtp_port: 587
+gitea_mailer_from: "hello@jnss.me"
+gitea_mailer_user: "hello@jnss.me"
+gitea_mailer_password: "{{ vault_smtp_password }}"
+gitea_mailer_subject_prefix: "[Gitea]"
+
+# =================================================================
+# Enhanced Security Settings
+# =================================================================
+
+# Session Security
+gitea_session_provider: "file"
+gitea_session_cookie_name: "gitea_session"
+gitea_session_life_time: 3600           # 1 hour
+gitea_cookie_secure: true               # HTTPS-only cookies
+gitea_session_same_site: "strict"       # Strict CSRF protection
+
+# Security Hardening
+gitea_csrf_cookie_httponly: true        # Prevent XSS on CSRF token
+gitea_password_check_pwn: true          # Check password breach database
+gitea_reverse_proxy_limit: 1            # Trust only one proxy (Caddy)
+gitea_reverse_proxy_trusted_proxies: "127.0.0.0/8,::1/128"
+
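The trusted-proxy setting above limits which upstream addresses Gitea will accept forwarded headers from. A minimal Python sketch of how those CIDR ranges classify client addresses (the `is_trusted` helper and sample IPs are illustrative, not part of the role):

```python
import ipaddress

# The ranges from gitea_reverse_proxy_trusted_proxies above
trusted = [ipaddress.ip_network(n) for n in ("127.0.0.0/8", "::1/128")]

def is_trusted(ip: str) -> bool:
    """Return True if ip falls inside any trusted-proxy network."""
    addr = ipaddress.ip_address(ip)
    # Only compare networks of the same IP version as the address
    return any(addr in net for net in trusted if net.version == addr.version)

print(is_trusted("127.0.0.1"))    # True  - local Caddy proxy
print(is_trusted("203.0.113.7"))  # False - arbitrary external host
```

With this configuration only the loopback proxy (Caddy) can set `X-Forwarded-For`; forwarded headers from any other source are ignored.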
+# =================================================================
+# Repository Configuration
+# =================================================================
+
+# Privacy Defaults (private by default, public allowed)
+gitea_default_private: "private"        # New repos are private
+gitea_default_push_create_private: true # Push-created repos are private
+# Note: NOT setting gitea_force_private - allows public repos
+
+# Repository Features
+gitea_disabled_repo_units: "repo.ext_issues,repo.ext_wiki"
+gitea_enable_push_create_user: false    # Require manual repo creation
+gitea_enable_push_create_org: false
+
+# =================================================================
+# Features & Capabilities
+# =================================================================
+
+# CI/CD Actions
+gitea_actions_enabled: true             # Enable Gitea Actions
+gitea_actions_default_url: "github"     # Use GitHub actions
+gitea_actions_log_retention_days: 90
+gitea_actions_artifact_retention_days: 30
+
+# Repository Mirroring
+gitea_mirror_enabled: true
+gitea_mirror_default_interval: "8h"
+gitea_mirror_min_interval: "1h"
+
+# Organization & User Management
+gitea_allow_create_org: true            # Users can create orgs
+
+# API Configuration
+gitea_api_swagger_enabled: false        # Disable API docs
+
+# Webhook Security
+gitea_webhook_allowed_hosts: "private,loopback"
+gitea_webhook_skip_tls_verify: false
+gitea_webhook_deliver_timeout: 5
+
+# =================================================================
+# Service Explore Configuration
+# =================================================================
+
+gitea_explore_require_signin: false     # Allow browsing public content
+
 # =================================================================
 # SSH Mode Configuration
@@ -6,9 +6,19 @@ APP_NAME = {{ gitea_app_name }}
 RUN_MODE = {{ gitea_run_mode }}

 [repository]
+# === Repository Storage ===
 ROOT = {{ gitea_home }}/repositories
 DEFAULT_BRANCH = {{ gitea_default_branch }}
+
+# === Privacy Defaults ===
+DEFAULT_PRIVATE = {{ gitea_default_private }}
+DEFAULT_PUSH_CREATE_PRIVATE = {{ gitea_default_push_create_private | lower }}
+
+# === Repository Features ===
+DISABLED_REPO_UNITS = {{ gitea_disabled_repo_units }}
+ENABLE_PUSH_CREATE_USER = {{ gitea_enable_push_create_user | lower }}
+ENABLE_PUSH_CREATE_ORG = {{ gitea_enable_push_create_org | lower }}

 [server]
 PROTOCOL = http
 DOMAIN = {{ gitea_http_domain }}

@@ -41,17 +51,63 @@ SSL_MODE = disable
 CHARSET = utf8

 [security]
+# === Core Security ===
 INSTALL_LOCK = true
 SECRET_KEY = {{ ansible_machine_id }}{{ gitea_db_password | hash('sha256') }}
 INTERNAL_TOKEN = {{ (ansible_machine_id + gitea_db_password) | hash('sha256') }}
+
+# === Enhanced Security ===
+CSRF_COOKIE_HTTP_ONLY = {{ gitea_csrf_cookie_httponly | lower }}
+PASSWORD_CHECK_PWN = {{ gitea_password_check_pwn | lower }}
+REVERSE_PROXY_LIMIT = {{ gitea_reverse_proxy_limit }}
+REVERSE_PROXY_TRUSTED_PROXIES = {{ gitea_reverse_proxy_trusted_proxies }}
+
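The `SECRET_KEY` and `INTERNAL_TOKEN` lines derive stable values via Ansible's `hash('sha256')` filter, which returns the hex digest of its input string. A quick Python sketch of that derivation (the machine ID and password values here are stand-ins, not real secrets):

```python
import hashlib

# Stand-in inputs; the template uses ansible_machine_id and gitea_db_password
machine_id = "0123456789abcdef"
db_password = "example-password"

# Ansible's hash('sha256') filter = hex digest of the concatenated string,
# so the token is deterministic for a given host + password
internal_token = hashlib.sha256((machine_id + db_password).encode()).hexdigest()
print(len(internal_token))  # 64 hex characters, stable across runs
```

Deriving both values from host facts keeps them out of the vault, at the cost of tying them to the machine ID and database password.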
 [service]
+# === Access Control ===
 DISABLE_REGISTRATION = {{ gitea_disable_registration | lower }}
 REQUIRE_SIGNIN_VIEW = {{ gitea_require_signin | lower }}
+SHOW_REGISTRATION_BUTTON = {{ gitea_show_registration_button | lower }}
+
+# === OAuth Configuration ===
+ENABLE_PASSWORD_SIGNIN_FORM = {{ gitea_enable_password_signin | lower }}
+ENABLE_BASIC_AUTHENTICATION = {{ gitea_enable_basic_auth | lower }}
+
+# === Defaults ===
 DEFAULT_KEEP_EMAIL_PRIVATE = true
-DEFAULT_ALLOW_CREATE_ORGANIZATION = true
+DEFAULT_ALLOW_CREATE_ORGANIZATION = {{ gitea_allow_create_org | lower }}
 NO_REPLY_ADDRESS = noreply@{{ gitea_http_domain }}
+
+[oauth2_client]
+# === Authentik OAuth Integration ===
+ENABLE_AUTO_REGISTRATION = {{ gitea_oauth2_auto_registration | lower }}
+ACCOUNT_LINKING = {{ gitea_oauth2_account_linking }}
+USERNAME = {{ gitea_oauth2_username_source }}
+UPDATE_AVATAR = {{ gitea_oauth2_update_avatar | lower }}
+OPENID_CONNECT_SCOPES = {{ gitea_oauth2_scopes }}
+REGISTER_EMAIL_CONFIRM = {{ gitea_oauth2_register_email_confirm | lower }}
+
+[mailer]
+ENABLED = {{ gitea_mailer_enabled | lower }}
+{% if gitea_mailer_enabled %}
+PROTOCOL = {{ gitea_mailer_protocol }}
+SMTP_ADDR = {{ gitea_smtp_addr }}
+SMTP_PORT = {{ gitea_smtp_port }}
+FROM = {{ gitea_mailer_from }}
+USER = {{ gitea_mailer_user }}
+PASSWD = {{ gitea_mailer_password }}
+SUBJECT_PREFIX = {{ gitea_mailer_subject_prefix }}
+SEND_AS_PLAIN_TEXT = false
+SMTP_AUTH = PLAIN
+{% endif %}
+
+[session]
+# === Session Security ===
+PROVIDER = {{ gitea_session_provider }}
+COOKIE_NAME = {{ gitea_session_cookie_name }}
+COOKIE_SECURE = {{ gitea_cookie_secure | lower }}
+SESSION_LIFE_TIME = {{ gitea_session_life_time }}
+SAME_SITE = {{ gitea_session_same_site }}

 [log]
 MODE = console
 LEVEL = Info

@@ -66,4 +122,37 @@ CONTENT_PATH = {{ gitea_home }}/data/lfs
 [git]
 PATH = /usr/bin/git

-# Rick-Infra: Simplified Gitea configuration for self-contained service
+[actions]
+# === CI/CD Configuration ===
+ENABLED = {{ gitea_actions_enabled | lower }}
+{% if gitea_actions_enabled %}
+DEFAULT_ACTIONS_URL = {{ gitea_actions_default_url }}
+LOG_RETENTION_DAYS = {{ gitea_actions_log_retention_days }}
+ARTIFACT_RETENTION_DAYS = {{ gitea_actions_artifact_retention_days }}
+{% endif %}
+
+[mirror]
+# === Repository Mirroring ===
+ENABLED = {{ gitea_mirror_enabled | lower }}
+DISABLE_NEW_PULL = false
+DISABLE_NEW_PUSH = false
+DEFAULT_INTERVAL = {{ gitea_mirror_default_interval }}
+MIN_INTERVAL = {{ gitea_mirror_min_interval }}
+
+[api]
+# === API Configuration ===
+ENABLE_SWAGGER = {{ gitea_api_swagger_enabled | lower }}
+MAX_RESPONSE_ITEMS = 50
+DEFAULT_PAGING_NUM = 30
+
+[webhook]
+# === Webhook Security ===
+ALLOWED_HOST_LIST = {{ gitea_webhook_allowed_hosts }}
+SKIP_TLS_VERIFY = {{ gitea_webhook_skip_tls_verify | lower }}
+DELIVER_TIMEOUT = {{ gitea_webhook_deliver_timeout }}
+
+[service.explore]
+# === Public Content Exploration ===
+REQUIRE_SIGNIN_VIEW = {{ gitea_explore_require_signin | lower }}
+
+# Rick-Infra: Private Gitea configuration with OAuth and email support
325 roles/metrics/README.md Normal file
@@ -0,0 +1,325 @@
# Metrics Role

Complete monitoring stack for rick-infra providing system metrics collection, storage, and visualization with SSO integration.

## Components

### VictoriaMetrics
- **Purpose**: Time-series database for metrics storage
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:8428` (localhost only)
- **Features**:
  - Prometheus-compatible API and PromQL
  - 7x less RAM usage than Prometheus
  - Single binary deployment
  - 12-month data retention by default

### Grafana
- **Purpose**: Metrics visualization and dashboarding
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:3000` (localhost only, proxied via Caddy)
- **Domain**: `metrics.jnss.me`
- **Features**:
  - OAuth/OIDC integration with Authentik
  - Role-based access control via Authentik groups
  - VictoriaMetrics as default data source

### node_exporter
- **Purpose**: System metrics collection
- **Type**: Native systemd service
- **Listen**: `127.0.0.1:9100` (localhost only)
- **Metrics**: CPU, memory, disk, network, systemd units

## Architecture

```
┌─────────────────────────────────────────────────────┐
│        metrics.jnss.me (Grafana Dashboard)          │
│ ┌─────────────────────────────────────────────────┐ │
│ │              Caddy (HTTPS)                      │ │
│ │                   ↓                             │ │
│ │        Grafana (OAuth → Authentik)              │ │
│ │                   ↓                             │ │
│ │   VictoriaMetrics (Prometheus-compatible)       │ │
│ │                   ↑                             │ │
│ │      node_exporter (System Metrics)             │ │
│ └─────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────┘
```

## Deployment

### Prerequisites

1. **Caddy role deployed** - Required for HTTPS proxy
2. **Authentik deployed** - Required for OAuth/SSO
3. **Vault variables configured**:
   ```yaml
   # In host_vars/arch-vps/vault.yml
   vault_grafana_admin_password: "secure-admin-password"
   vault_grafana_secret_key: "random-secret-key-32-chars"
   vault_grafana_oauth_client_id: "grafana"
   vault_grafana_oauth_client_secret: "oauth-client-secret-from-authentik"
   ```

### Authentik Configuration

Before deployment, create an OAuth2/OIDC provider in Authentik:

1. **Create Provider**:
   - Name: `Grafana`
   - Type: `OAuth2/OpenID Provider`
   - Client ID: `grafana`
   - Client Secret: Generate and save to vault
   - Redirect URIs: `https://metrics.jnss.me/login/generic_oauth`
   - Signing Key: Auto-generated

2. **Create Application**:
   - Name: `Grafana`
   - Slug: `grafana`
   - Provider: Select the Grafana provider created above

3. **Create Groups** (optional, for role mapping):
   - `grafana-admins` - Full admin access
   - `grafana-editors` - Can create/edit dashboards
   - Users without these groups get Viewer access

### Deploy

```bash
# Deploy complete metrics stack
ansible-playbook rick-infra.yml --tags metrics

# Deploy individual components
ansible-playbook rick-infra.yml --tags victoriametrics
ansible-playbook rick-infra.yml --tags grafana
ansible-playbook rick-infra.yml --tags node_exporter
```

### Verify Deployment

```bash
# Check service status
ansible homelab -a "systemctl status victoriametrics grafana node_exporter"

# Check metrics collection
curl http://127.0.0.1:9100/metrics          # node_exporter metrics
curl http://127.0.0.1:8428/metrics          # VictoriaMetrics metrics
curl http://127.0.0.1:8428/api/v1/targets   # Scrape targets

# Access Grafana
curl -I https://metrics.jnss.me/            # Should redirect to Authentik login
```

## Usage

### Access Dashboard

1. Navigate to `https://metrics.jnss.me`
2. Click "Sign in with Authentik"
3. Authenticate via Authentik SSO
4. Access granted based on Authentik group membership

### Role Mapping

Grafana roles are automatically assigned based on Authentik groups:

- **Admin**: Members of `grafana-admins` group
  - Full administrative access
  - Can manage users, data sources, plugins
  - Can create/edit/delete all dashboards

- **Editor**: Members of `grafana-editors` group
  - Can create and edit dashboards
  - Cannot manage users or data sources

- **Viewer**: All other authenticated users
  - Read-only access to dashboards
  - Cannot create or edit dashboards

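The mapping above is a simple precedence check (admin, then editor, then viewer). A minimal Python mirror of that logic, using the group names from this README (the `grafana_role` helper itself is illustrative):

```python
# Python mirror of Grafana's role_attribute_path group mapping above
def grafana_role(groups):
    """Map Authentik group membership to a Grafana role, first match wins."""
    if "grafana-admins" in groups:
        return "Admin"
    if "grafana-editors" in groups:
        return "Editor"
    return "Viewer"  # default for any authenticated user

print(grafana_role(["grafana-admins", "dev"]))  # Admin
print(grafana_role(["grafana-editors"]))        # Editor
print(grafana_role(["some-other-group"]))       # Viewer
```

In the role itself the same precedence is expressed as a JMESPath expression in `grafana_oauth_role_attribute_path`, evaluated by Grafana against the OAuth `groups` claim.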
### Creating Dashboards

Grafana comes with VictoriaMetrics pre-configured as the default data source. Use PromQL queries:

```promql
# CPU usage
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# Memory usage
node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes

# Disk usage
100 - ((node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100)

# Network traffic
irate(node_network_receive_bytes_total[5m])
```

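The CPU query above averages the per-core idle rates and inverts them. The same arithmetic, sketched in Python with hypothetical per-core idle fractions:

```python
# Numeric sketch of the CPU-usage PromQL above: irate(...{mode="idle"}[5m])
# yields idle seconds per second per core; usage is 100 minus the average.
idle_rates = [0.90, 0.80, 0.85, 0.95]  # hypothetical per-core idle fractions
cpu_usage_percent = 100 - (sum(idle_rates) / len(idle_rates)) * 100
print(round(cpu_usage_percent, 1))  # 12.5
```

In PromQL the `avg by (instance)` clause performs the same averaging per scraped host, so the result is one usage series per machine.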
### Import Community Dashboards

1. Browse dashboards at https://grafana.com/grafana/dashboards/
2. Recommended for node_exporter:
   - Dashboard ID: 1860 (Node Exporter Full)
   - Dashboard ID: 11074 (Node Exporter for Prometheus)
3. Import via Grafana UI: Dashboards → Import → Enter ID

## Configuration

### Customization

Key configuration options in `roles/metrics/defaults/main.yml`:

```yaml
# Data retention
victoriametrics_retention_period: "12"  # months

# Scrape interval
victoriametrics_scrape_interval: "15s"

# OAuth role mapping (JMESPath expression)
grafana_oauth_role_attribute_path: "contains(groups, 'grafana-admins') && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"

# Memory limits
victoriametrics_memory_allowed_percent: "60"
```

### Adding Scrape Targets

Edit `roles/metrics/templates/scrape.yml.j2`:

```yaml
scrape_configs:
  # Add custom application metrics
  - job_name: 'myapp'
    static_configs:
      - targets: ['127.0.0.1:8080']
        labels:
          service: 'myapp'
```

## Operations

### Service Management

```bash
# VictoriaMetrics
systemctl status victoriametrics
systemctl restart victoriametrics
journalctl -u victoriametrics -f

# Grafana
systemctl status grafana
systemctl restart grafana
journalctl -u grafana -f

# node_exporter
systemctl status node_exporter
systemctl restart node_exporter
journalctl -u node_exporter -f
```

### Data Locations

```
/var/lib/victoriametrics/   # Time-series data
/var/lib/grafana/           # Grafana database and dashboards
/var/log/grafana/           # Grafana logs
/etc/victoriametrics/       # VictoriaMetrics config
/etc/grafana/               # Grafana config
```

### Backup

VictoriaMetrics data is stored in `/var/lib/victoriametrics`:

```bash
# Stop service
systemctl stop victoriametrics

# Backup data
tar -czf victoriametrics-backup-$(date +%Y%m%d).tar.gz /var/lib/victoriametrics

# Start service
systemctl start victoriametrics
```

Grafana dashboards are stored in a SQLite database at `/var/lib/grafana/grafana.db`:

```bash
# Backup Grafana
systemctl stop grafana
tar -czf grafana-backup-$(date +%Y%m%d).tar.gz /var/lib/grafana /etc/grafana
systemctl start grafana
```

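The dated archive names above come from `date +%Y%m%d`. If you script the same scheme outside the shell, the equivalent in Python (only the naming pattern is taken from this README):

```python
from datetime import date

# Build the same dated archive name the tar commands above produce,
# e.g. victoriametrics-backup-20250101.tar.gz
name = f"victoriametrics-backup-{date.today():%Y%m%d}.tar.gz"
print(name)
```

Because the date is embedded in the filename, daily runs produce distinct archives rather than overwriting the previous backup.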
## Security

### Authentication
- Grafana protected by Authentik OAuth/OIDC
- Local admin account available for emergency access
- All services bind to localhost only

### Network Security
- VictoriaMetrics: `127.0.0.1:8428` (no external access)
- Grafana: `127.0.0.1:3000` (proxied via Caddy with HTTPS)
- node_exporter: `127.0.0.1:9100` (no external access)

### systemd Hardening
All services run with security restrictions:
- `NoNewPrivileges=true`
- `ProtectSystem=strict`
- `ProtectHome=true`
- `PrivateTmp=true`
- Read-only filesystem (except data directories)

## Troubleshooting

### Grafana OAuth Not Working

1. Check the Authentik provider configuration:
   ```bash
   # Verify the redirect URI matches
   # https://metrics.jnss.me/login/generic_oauth
   ```

2. Check Grafana logs:
   ```bash
   journalctl -u grafana -f
   ```

3. Verify OAuth credentials in vault match Authentik

### No Metrics in Grafana

1. Check VictoriaMetrics scrape targets:
   ```bash
   curl http://127.0.0.1:8428/api/v1/targets
   ```

2. Check node_exporter is running:
   ```bash
   systemctl status node_exporter
   curl http://127.0.0.1:9100/metrics
   ```

3. Check VictoriaMetrics logs:
   ```bash
   journalctl -u victoriametrics -f
   ```

### High Memory Usage

VictoriaMetrics is configured to use at most 60% of available memory. Adjust if needed:

```yaml
# In roles/metrics/defaults/main.yml
victoriametrics_memory_allowed_percent: "40"  # Reduce to 40%
```

## See Also

- [VictoriaMetrics Documentation](https://docs.victoriametrics.com/)
- [Grafana Documentation](https://grafana.com/docs/)
- [node_exporter GitHub](https://github.com/prometheus/node_exporter)
- [PromQL Documentation](https://prometheus.io/docs/prometheus/latest/querying/basics/)
- [Authentik OAuth Integration](https://goauthentik.io/docs/providers/oauth2/)
178 roles/metrics/defaults/main.yml Normal file
@@ -0,0 +1,178 @@
---
# =================================================================
# Metrics Infrastructure Role - Complete Monitoring Stack
# =================================================================
# Provides VictoriaMetrics, Grafana, and node_exporter as unified stack

# =================================================================
# VictoriaMetrics Configuration
# =================================================================

# Service Management
victoriametrics_service_enabled: true
victoriametrics_service_state: "started"

# Version
victoriametrics_version: "1.105.0"

# Network Security (localhost only)
victoriametrics_listen_address: "127.0.0.1:8428"

# Storage Configuration
victoriametrics_data_dir: "/var/lib/victoriametrics"
victoriametrics_retention_period: "12"  # months

# User/Group
victoriametrics_user: "victoriametrics"
victoriametrics_group: "victoriametrics"

# Performance Settings
victoriametrics_memory_allowed_percent: "30"
victoriametrics_storage_min_free_disk_space_bytes: "10GB"

# Scrape Configuration
victoriametrics_scrape_config_dir: "/etc/victoriametrics"
victoriametrics_scrape_config_file: "{{ victoriametrics_scrape_config_dir }}/scrape.yml"
victoriametrics_scrape_interval: "15s"
victoriametrics_scrape_timeout: "10s"

# systemd security
victoriametrics_systemd_security: true

# =================================================================
# Grafana Configuration
# =================================================================

# Service Management
grafana_service_enabled: true
grafana_service_state: "started"

# Version
grafana_version: "11.4.0"

# Network Security (localhost only - proxied via Caddy)
grafana_listen_address: "127.0.0.1"
grafana_listen_port: 3420

# User/Group
grafana_user: "grafana"
grafana_group: "grafana"

# Directories
grafana_data_dir: "/var/lib/grafana"
grafana_logs_dir: "/var/log/grafana"
grafana_plugins_dir: "/var/lib/grafana/plugins"
grafana_provisioning_dir: "/etc/grafana/provisioning"

# Domain Configuration
grafana_domain: "metrics.{{ caddy_domain }}"
grafana_root_url: "https://{{ grafana_domain }}"

# Default admin (used only for initial setup)
grafana_admin_user: "admin"
grafana_admin_password: "{{ vault_grafana_admin_password }}"

# Disable registration (OAuth only)
grafana_allow_signup: false
grafana_disable_login_form: false  # Keep fallback login

# OAuth/OIDC Configuration (Authentik)
grafana_oauth_enabled: true
grafana_oauth_name: "Authentik"
grafana_oauth_client_id: "{{ vault_grafana_oauth_client_id }}"
grafana_oauth_client_secret: "{{ vault_grafana_oauth_client_secret }}"

# Authentik OAuth endpoints
grafana_oauth_auth_url: "https://{{ authentik_domain }}/application/o/authorize/"
grafana_oauth_token_url: "https://{{ authentik_domain }}/application/o/token/"
grafana_oauth_api_url: "https://{{ authentik_domain }}/application/o/userinfo/"

# OAuth role mapping
grafana_oauth_role_attribute_path: "(contains(groups, 'authentik Admins') || contains(groups, 'grafana-admins')) && 'Admin' || contains(groups, 'grafana-editors') && 'Editor' || 'Viewer'"
grafana_oauth_allow_sign_up: true  # Auto-create users from OAuth
grafana_oauth_scopes: "openid profile email groups"

# Data Source Configuration
grafana_datasource_vm_enabled: true
grafana_datasource_vm_url: "http://{{ victoriametrics_listen_address }}"
grafana_datasource_vm_name: "VictoriaMetrics"

# Security
grafana_systemd_security: true
grafana_cookie_secure: true
grafana_cookie_samesite: "lax"

# Database (SQLite by default)
grafana_database_type: "sqlite3"
grafana_database_path: "{{ grafana_data_dir }}/grafana.db"

# =================================================================
# Node Exporter Configuration
# =================================================================

# Service Management
node_exporter_service_enabled: true
node_exporter_service_state: "started"

# Version
node_exporter_version: "1.8.2"

# Network Security (localhost only)
node_exporter_listen_address: "127.0.0.1:9100"

# User/Group
node_exporter_user: "node_exporter"
node_exporter_group: "node_exporter"

# Enabled collectors
node_exporter_enabled_collectors:
  - cpu
  - diskstats
  - filesystem
  - loadavg
  - meminfo
  - netdev
  - netstat
  - stat
  - time
  - uname
  - vmstat
  - systemd

# Disabled collectors
node_exporter_disabled_collectors:
  - mdadm

# Filesystem collector configuration
node_exporter_filesystem_ignored_fs_types:
  - tmpfs
  - devtmpfs
  - devfs
  - iso9660
  - overlay
  - aufs
  - squashfs

node_exporter_filesystem_ignored_mount_points:
  - /var/lib/containers/storage/.*
  - /run/.*
  - /sys/.*
  - /proc/.*

# systemd security
node_exporter_systemd_security: true

# =================================================================
# Infrastructure Notes
# =================================================================
# Complete monitoring stack:
# - VictoriaMetrics: Time-series database (Prometheus-compatible)
# - Grafana: Visualization with Authentik OAuth integration
# - node_exporter: System metrics collection
#
# Role mapping via Authentik groups:
# - grafana-admins: Full admin access
# - grafana-editors: Can create/edit dashboards
# - Default: Viewer access
#
# All services run on localhost only, proxied via Caddy
23 roles/metrics/handlers/main.yml Normal file
@@ -0,0 +1,23 @@
---
- name: restart victoriametrics
  ansible.builtin.systemd:
    name: victoriametrics
    state: restarted
    daemon_reload: true

- name: restart node_exporter
  ansible.builtin.systemd:
    name: node_exporter
    state: restarted
    daemon_reload: true

- name: restart grafana
  ansible.builtin.systemd:
    name: grafana
    state: restarted
    daemon_reload: true

- name: reload caddy
  ansible.builtin.systemd:
    name: caddy
    state: reloaded
roles/metrics/meta/main.yml (new file, 3 lines)

```yaml
---
dependencies:
  - role: caddy
```
roles/metrics/tasks/caddy.yml (new file, 9 lines)

```yaml
---
- name: Deploy Grafana Caddy configuration
  ansible.builtin.template:
    src: grafana.caddy.j2
    dest: /etc/caddy/sites-enabled/grafana.caddy
    owner: caddy
    group: caddy
    mode: '0644'
  notify: reload caddy
```
roles/metrics/tasks/grafana.yml (new file, 90 lines)

```yaml
---
- name: Create Grafana system user
  ansible.builtin.user:
    name: "{{ grafana_user }}"
    system: true
    create_home: false
    shell: /usr/sbin/nologin
    state: present

- name: Create Grafana directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ grafana_user }}"
    group: "{{ grafana_group }}"
    mode: '0755'
  loop:
    - "{{ grafana_data_dir }}"
    - "{{ grafana_logs_dir }}"
    - "{{ grafana_plugins_dir }}"
    - "{{ grafana_provisioning_dir }}"
    - "{{ grafana_provisioning_dir }}/datasources"
    - "{{ grafana_provisioning_dir }}/dashboards"
    - "{{ grafana_data_dir }}/dashboards"
    - /etc/grafana

- name: Download Grafana binary
  ansible.builtin.get_url:
    url: "https://dl.grafana.com/oss/release/grafana-{{ grafana_version }}.linux-amd64.tar.gz"
    dest: "/tmp/grafana-{{ grafana_version }}.tar.gz"
    mode: '0644'
  register: grafana_download

- name: Extract Grafana
  ansible.builtin.unarchive:
    src: "/tmp/grafana-{{ grafana_version }}.tar.gz"
    dest: /opt
    remote_src: true
    creates: "/opt/grafana-v{{ grafana_version }}"
  when: grafana_download.changed

- name: Create Grafana symlink
  ansible.builtin.file:
    src: "/opt/grafana-v{{ grafana_version }}"
    dest: /opt/grafana
    state: link

- name: Deploy Grafana configuration
  ansible.builtin.template:
    src: grafana.ini.j2
    dest: /etc/grafana/grafana.ini
    owner: "{{ grafana_user }}"
    group: "{{ grafana_group }}"
    mode: '0640'
  notify: restart grafana

- name: Deploy VictoriaMetrics datasource provisioning
  ansible.builtin.template:
    src: datasource-victoriametrics.yml.j2
    dest: "{{ grafana_provisioning_dir }}/datasources/victoriametrics.yml"
    owner: "{{ grafana_user }}"
    group: "{{ grafana_group }}"
    mode: '0644'
  notify: restart grafana
  when: grafana_datasource_vm_enabled

- name: Deploy dashboard provisioning
  ansible.builtin.template:
    src: dashboards.yml.j2
    dest: "{{ grafana_provisioning_dir }}/dashboards/default.yml"
    owner: "{{ grafana_user }}"
    group: "{{ grafana_group }}"
    mode: '0644'
  notify: restart grafana

- name: Deploy Grafana systemd service
  ansible.builtin.template:
    src: grafana.service.j2
    dest: /etc/systemd/system/grafana.service
    owner: root
    group: root
    mode: '0644'
  notify: restart grafana

- name: Enable and start Grafana service
  ansible.builtin.systemd:
    name: grafana
    enabled: "{{ grafana_service_enabled }}"
    state: "{{ grafana_service_state }}"
    daemon_reload: true
```
roles/metrics/tasks/main.yml (new file, 20 lines)

```yaml
---
# =================================================================
# Metrics Stack Deployment
# =================================================================

- name: Deploy VictoriaMetrics
  ansible.builtin.include_tasks: victoriametrics.yml
  tags: [metrics, victoriametrics]

- name: Deploy node_exporter
  ansible.builtin.include_tasks: node_exporter.yml
  tags: [metrics, node_exporter]

- name: Deploy Grafana
  ansible.builtin.include_tasks: grafana.yml
  tags: [metrics, grafana]

- name: Deploy Caddy configuration for Grafana
  ansible.builtin.include_tasks: caddy.yml
  tags: [metrics, caddy]
```
roles/metrics/tasks/node_exporter.yml (new file, 49 lines)

```yaml
---
- name: Create node_exporter system user
  ansible.builtin.user:
    name: "{{ node_exporter_user }}"
    system: true
    create_home: false
    shell: /usr/sbin/nologin
    state: present

- name: Download node_exporter binary
  ansible.builtin.get_url:
    url: "https://github.com/prometheus/node_exporter/releases/download/v{{ node_exporter_version }}/node_exporter-{{ node_exporter_version }}.linux-amd64.tar.gz"
    dest: "/tmp/node_exporter-{{ node_exporter_version }}.tar.gz"
    mode: '0644'
  register: node_exporter_download

- name: Extract node_exporter binary
  ansible.builtin.unarchive:
    src: "/tmp/node_exporter-{{ node_exporter_version }}.tar.gz"
    dest: /tmp
    remote_src: true
    creates: "/tmp/node_exporter-{{ node_exporter_version }}.linux-amd64"
  when: node_exporter_download.changed

- name: Copy node_exporter binary to /usr/local/bin
  ansible.builtin.copy:
    src: "/tmp/node_exporter-{{ node_exporter_version }}.linux-amd64/node_exporter"
    dest: /usr/local/bin/node_exporter
    owner: root
    group: root
    mode: '0755'
    remote_src: true
  when: node_exporter_download.changed

- name: Deploy node_exporter systemd service
  ansible.builtin.template:
    src: node_exporter.service.j2
    dest: /etc/systemd/system/node_exporter.service
    owner: root
    group: root
    mode: '0644'
  notify: restart node_exporter

- name: Enable and start node_exporter service
  ansible.builtin.systemd:
    name: node_exporter
    enabled: "{{ node_exporter_service_enabled }}"
    state: "{{ node_exporter_service_state }}"
    daemon_reload: true
```
roles/metrics/tasks/victoriametrics.yml (new file, 66 lines)

```yaml
---
- name: Create VictoriaMetrics system user
  ansible.builtin.user:
    name: "{{ victoriametrics_user }}"
    system: true
    create_home: false
    shell: /usr/sbin/nologin
    state: present

- name: Create VictoriaMetrics directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ victoriametrics_user }}"
    group: "{{ victoriametrics_group }}"
    mode: '0755'
  loop:
    - "{{ victoriametrics_data_dir }}"
    - "{{ victoriametrics_scrape_config_dir }}"

- name: Download VictoriaMetrics binary
  ansible.builtin.get_url:
    url: "https://github.com/VictoriaMetrics/VictoriaMetrics/releases/download/v{{ victoriametrics_version }}/victoria-metrics-linux-amd64-v{{ victoriametrics_version }}.tar.gz"
    dest: "/tmp/victoria-metrics-v{{ victoriametrics_version }}.tar.gz"
    mode: '0644'
  register: victoriametrics_download

- name: Extract VictoriaMetrics binary
  ansible.builtin.unarchive:
    src: "/tmp/victoria-metrics-v{{ victoriametrics_version }}.tar.gz"
    dest: /usr/local/bin
    remote_src: true
    creates: /usr/local/bin/victoria-metrics-prod
  when: victoriametrics_download.changed

- name: Set VictoriaMetrics binary permissions
  ansible.builtin.file:
    path: /usr/local/bin/victoria-metrics-prod
    owner: root
    group: root
    mode: '0755'

- name: Deploy VictoriaMetrics scrape configuration
  ansible.builtin.template:
    src: scrape.yml.j2
    dest: "{{ victoriametrics_scrape_config_file }}"
    owner: "{{ victoriametrics_user }}"
    group: "{{ victoriametrics_group }}"
    mode: '0644'
  notify: restart victoriametrics

- name: Deploy VictoriaMetrics systemd service
  ansible.builtin.template:
    src: victoriametrics.service.j2
    dest: /etc/systemd/system/victoriametrics.service
    owner: root
    group: root
    mode: '0644'
  notify: restart victoriametrics

- name: Enable and start VictoriaMetrics service
  ansible.builtin.systemd:
    name: victoriametrics
    enabled: "{{ victoriametrics_service_enabled }}"
    state: "{{ victoriametrics_service_state }}"
    daemon_reload: true
```
roles/metrics/templates/dashboards.yml.j2 (new file, 12 lines)

```yaml
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    allowUiUpdates: true
    options:
      path: {{ grafana_data_dir }}/dashboards
```
roles/metrics/templates/datasource-victoriametrics.yml.j2 (new file, 12 lines)

```yaml
apiVersion: 1

datasources:
  - name: {{ grafana_datasource_vm_name }}
    type: prometheus
    access: proxy
    url: {{ grafana_datasource_vm_url }}
    isDefault: true
    editable: true
    jsonData:
      httpMethod: POST
      timeInterval: 15s
```
roles/metrics/templates/grafana.caddy.j2 (new file, 26 lines)

```
# Grafana Metrics Dashboard
{{ grafana_domain }} {
    reverse_proxy http://{{ grafana_listen_address }}:{{ grafana_listen_port }} {
        header_up Host {host}
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-Proto https
        header_up X-Forwarded-For {remote_host}
        header_up X-Forwarded-Host {host}
    }

    # Security headers
    header {
        X-Frame-Options SAMEORIGIN
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    }

    # Logging
    log {
        output file {{ caddy_log_dir }}/grafana.log
        level INFO
        format json
    }
}
```
roles/metrics/templates/grafana.ini.j2 (new file, 68 lines)

```ini
# Grafana Configuration
# Managed by Ansible - DO NOT EDIT MANUALLY

[paths]
data = {{ grafana_data_dir }}
logs = {{ grafana_logs_dir }}
plugins = {{ grafana_plugins_dir }}
provisioning = {{ grafana_provisioning_dir }}

[server]
http_addr = {{ grafana_listen_address }}
http_port = {{ grafana_listen_port }}
domain = {{ grafana_domain }}
root_url = {{ grafana_root_url }}
enforce_domain = true
enable_gzip = true

[database]
type = {{ grafana_database_type }}
{% if grafana_database_type == 'sqlite3' %}
path = {{ grafana_database_path }}
{% endif %}

[security]
admin_user = {{ grafana_admin_user }}
admin_password = {{ grafana_admin_password }}
secret_key = {{ vault_grafana_secret_key }}
cookie_secure = {{ grafana_cookie_secure | lower }}
cookie_samesite = {{ grafana_cookie_samesite }}
disable_gravatar = true
disable_initial_admin_creation = false

[users]
allow_sign_up = {{ grafana_allow_signup | lower }}
allow_org_create = false
auto_assign_org = true
auto_assign_org_role = Viewer

[auth]
disable_login_form = {{ grafana_disable_login_form | lower }}
oauth_auto_login = false

{% if grafana_oauth_enabled %}
[auth.generic_oauth]
enabled = true
name = {{ grafana_oauth_name }}
client_id = {{ grafana_oauth_client_id }}
client_secret = {{ grafana_oauth_client_secret }}
scopes = {{ grafana_oauth_scopes }}
auth_url = {{ grafana_oauth_auth_url }}
token_url = {{ grafana_oauth_token_url }}
api_url = {{ grafana_oauth_api_url }}
allow_sign_up = {{ grafana_oauth_allow_sign_up | lower }}
role_attribute_path = {{ grafana_oauth_role_attribute_path }}
use_pkce = true
{% endif %}

[log]
mode = console
level = info

[analytics]
reporting_enabled = false
check_for_updates = false
check_for_plugin_updates = false

[snapshots]
external_enabled = false
```
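Note the `| lower` filters on the boolean variables: Ansible renders Python booleans as `True`/`False`, while Grafana's INI parser expects lowercase `true`/`false`. A small stdlib sketch, using hypothetical stand-in values for the Jinja variables, showing how the rendered output parses:

```python
import configparser

# Hypothetical values standing in for the Jinja variables above
rendered = """
[server]
http_addr = 127.0.0.1
http_port = 3000
domain = grafana.example.com
enforce_domain = true

[analytics]
reporting_enabled = false
"""

config = configparser.ConfigParser()
config.read_string(rendered)

# INI values are strings; boolean semantics come from lowercase true/false
assert config["server"]["http_port"] == "3000"
assert config.getboolean("analytics", "reporting_enabled") is False
print(config["server"]["domain"])  # grafana.example.com
```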
roles/metrics/templates/grafana.service.j2 (new file, 36 lines)

```ini
[Unit]
Description=Grafana visualization platform
Documentation=https://grafana.com/docs/
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{ grafana_user }}
Group={{ grafana_group }}

WorkingDirectory=/opt/grafana
ExecStart=/opt/grafana/bin/grafana-server \
    --config=/etc/grafana/grafana.ini \
    --homepath=/opt/grafana

Restart=on-failure
RestartSec=5s

# Security hardening
{% if grafana_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ grafana_data_dir }} {{ grafana_logs_dir }}
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
{% endif %}

[Install]
WantedBy=multi-user.target
```
roles/metrics/templates/node_exporter.service.j2 (new file, 42 lines)

```ini
[Unit]
Description=Prometheus Node Exporter
Documentation=https://github.com/prometheus/node_exporter
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{ node_exporter_user }}
Group={{ node_exporter_group }}

ExecStart=/usr/local/bin/node_exporter \
    --web.listen-address={{ node_exporter_listen_address }} \
{% for collector in node_exporter_enabled_collectors %}
    --collector.{{ collector }} \
{% endfor %}
{% for collector in node_exporter_disabled_collectors %}
    --no-collector.{{ collector }} \
{% endfor %}
    --collector.filesystem.fs-types-exclude="{{ node_exporter_filesystem_ignored_fs_types | join('|') }}" \
    --collector.filesystem.mount-points-exclude="{{ node_exporter_filesystem_ignored_mount_points | join('|') }}"

Restart=on-failure
RestartSec=5s

# Security hardening
{% if node_exporter_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
ReadOnlyPaths=/
{% endif %}

[Install]
WantedBy=multi-user.target
```
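The two Jinja loops in the template above expand each list entry into an `--collector.<name>` or `--no-collector.<name>` flag on the `ExecStart` line. A rough Python equivalent of that expansion, useful for eyeballing the final command line without rendering the template:

```python
def render_collector_flags(enabled, disabled):
    """Mirror the Jinja for-loops in node_exporter.service.j2."""
    flags = [f"--collector.{c}" for c in enabled]
    flags += [f"--no-collector.{c}" for c in disabled]
    return flags


# Small subset of the collector lists from defaults/main.yml
flags = render_collector_flags(["cpu", "systemd"], ["mdadm"])
print(" \\\n    ".join(flags))
```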
roles/metrics/templates/scrape.yml.j2 (new file, 22 lines)

```yaml
global:
  scrape_interval: {{ victoriametrics_scrape_interval }}
  scrape_timeout: {{ victoriametrics_scrape_timeout }}
  external_labels:
    environment: '{{ "homelab" if inventory_hostname in groups["homelab"] else "production" }}'
    host: '{{ inventory_hostname }}'

scrape_configs:
  # VictoriaMetrics self-monitoring
  - job_name: 'victoriametrics'
    static_configs:
      - targets: ['{{ victoriametrics_listen_address }}']
        labels:
          service: 'victoriametrics'

  # Node exporter for system metrics
  - job_name: 'node'
    static_configs:
      - targets: ['{{ node_exporter_listen_address }}']
        labels:
          service: 'node_exporter'
          instance: '{{ inventory_hostname }}'
```
roles/metrics/templates/victoriametrics.service.j2 (new file, 41 lines)

```ini
[Unit]
Description=VictoriaMetrics time-series database
Documentation=https://docs.victoriametrics.com/
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User={{ victoriametrics_user }}
Group={{ victoriametrics_group }}

ExecStart=/usr/local/bin/victoria-metrics-prod \
    -storageDataPath={{ victoriametrics_data_dir }} \
    -retentionPeriod={{ victoriametrics_retention_period }} \
    -httpListenAddr={{ victoriametrics_listen_address }} \
    -promscrape.config={{ victoriametrics_scrape_config_file }} \
    -memory.allowedPercent={{ victoriametrics_memory_allowed_percent }} \
    -storage.minFreeDiskSpaceBytes={{ victoriametrics_storage_min_free_disk_space_bytes }}

ExecReload=/bin/kill -HUP $MAINPID

Restart=on-failure
RestartSec=5s

# Security hardening
{% if victoriametrics_systemd_security %}
NoNewPrivileges=true
PrivateTmp=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ victoriametrics_data_dir }}
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictNamespaces=true
LockPersonality=true
{% endif %}

[Install]
WantedBy=multi-user.target
```
@@ -52,9 +52,17 @@ See `defaults/main.yml` for all configurable variables.

Define these in your `host_vars/` with `ansible-vault`:

```yaml
# Core credentials (required)
vault_nextcloud_db_password: "secure-database-password"
vault_nextcloud_admin_password: "secure-admin-password"
vault_valkey_password: "secure-valkey-password"

# Email credentials (optional - only if email enabled)
vault_nextcloud_smtp_password: "secure-smtp-password"

# OIDC credentials (optional - only if OIDC enabled)
vault_nextcloud_oidc_client_id: "nextcloud-client-id-from-authentik"
vault_nextcloud_oidc_client_secret: "nextcloud-client-secret-from-authentik"
```

### Key Variables

@@ -91,27 +99,449 @@ This role uses a **two-phase deployment** approach to work correctly with the Ne

6. Container runs `occ maintenance:install` with PostgreSQL
7. Installation creates `config.php` with database credentials

### Phase 2: Configuration via OCC Script (automatic)

8. Ansible waits for `occ status` to report `installed: true`
9. Ansible deploys and runs a configuration script inside the container
10. Script configures system settings via OCC commands:
    - Redis caching (without sessions)
    - Maintenance window and phone region
    - Database optimizations (indices, bigint, mimetypes)

**Why this order?**

The Nextcloud container's entrypoint uses `version.php` as a marker to determine if installation is needed. We must wait for the container's auto-installation to complete before running configuration commands:

- Container must complete first-time setup (copy files, run `occ maintenance:install`)
- OCC commands require a fully initialized Nextcloud installation
- Running configuration after installation avoids conflicts with the entrypoint script

**Configuration Method:**

This role uses **OCC commands via a script** rather than config files because:

- ✅ **Explicit and verifiable** - Run `occ config:list system` to see the exact state
- ✅ **No file conflicts** - Avoids issues with the Docker image's built-in config files
- ✅ **Fully idempotent** - Safe to re-run during updates
- ✅ **Single source of truth** - All configuration in one script template

See the official [Nextcloud Docker documentation](https://github.com/nextcloud/docker#auto-configuration-via-environment-variables) for more details on the auto-configuration process.

## Installed Apps

This role automatically installs and enables the following apps:

- **user_oidc** - OpenID Connect authentication backend for SSO integration
- **calendar** - Calendar and scheduling application (CalDAV)
- **contacts** - Contact management application (CardDAV)

To customize the app list, override these variables in your `host_vars`:

```yaml
nextcloud_apps_install:
  - user_oidc
  - calendar
  - contacts
  - tasks  # Add more apps as needed
  - deck
  - mail
```

## OIDC/SSO Integration

### Prerequisites

Before enabling OIDC, you must create an OIDC application/provider in your identity provider (e.g., Authentik):

**For Authentik:**

1. Navigate to **Applications → Providers**
2. Click **Create** → **OAuth2/OpenID Provider**
3. Configure:
   - **Name**: `Nextcloud`
   - **Authorization flow**: `default-authentication-flow` (or your preferred flow)
   - **Client type**: `Confidential`
   - **Client ID**: Generate or specify (save this)
   - **Client Secret**: Generate or specify (save this)
   - **Redirect URIs**: `https://cloud.jnss.me/apps/user_oidc/code`
   - **Signing Key**: Select your signing certificate
   - **Scopes**: Add `openid`, `profile`, `email`
4. Create **Application**:
   - Navigate to **Applications → Applications**
   - Click **Create**
   - **Name**: `Nextcloud`
   - **Slug**: `nextcloud`
   - **Provider**: Select the provider created above
   - **Launch URL**: `https://cloud.jnss.me`
5. Note the **Discovery URL**: `https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration`

### Configuration

Enable OIDC in your `host_vars/arch-vps/main.yml`:

```yaml
# OIDC Configuration
nextcloud_oidc_enabled: true
nextcloud_oidc_provider_id: "authentik"        # Provider identifier (slug)
nextcloud_oidc_provider_name: "Authentik SSO"  # Display name on login button
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"

# Security settings (recommended defaults)
nextcloud_oidc_unique_uid: true  # Prevents account takeover between providers
nextcloud_oidc_check_bearer: false
nextcloud_oidc_send_id_token_hint: true

# Attribute mappings (defaults work for most providers)
nextcloud_oidc_mapping_display_name: "name"
nextcloud_oidc_mapping_email: "email"
nextcloud_oidc_mapping_uid: "preferred_username"  # Or "sub" for UUID

# Optional: Enable single login (auto-redirect to SSO)
nextcloud_oidc_single_login: false  # Set to true to force SSO login
```

Add credentials to your vault file `host_vars/arch-vps/vault.yml`:

```yaml
vault_nextcloud_oidc_client_id: "nextcloud-client-id-from-authentik"
vault_nextcloud_oidc_client_secret: "nextcloud-client-secret-from-authentik"
```

### OIDC Scopes

The following scopes are requested from your OIDC provider by default:

```yaml
nextcloud_oidc_scope: "email profile nextcloud openid"
```

**Standard scopes:**

- `openid` - Required for OpenID Connect (contains no claims itself)
- `email` - User's email address (`email` and `email_verified` claims)
- `profile` - User's profile information (`name`, `given_name`, `preferred_username`, `picture`, etc.)

**Custom scope for Authentik:**

- `nextcloud` - Custom scope mapping you create in Authentik (contains `groups`, `quota`, `user_id`)

#### Creating the Nextcloud Scope Mapping in Authentik

The `nextcloud` scope must be created as a custom property mapping in Authentik:

1. Log in to Authentik as administrator
2. Navigate to **Customization** → **Property mappings** → **Create**
3. Select type: **Scope mapping**
4. Configure:
   - **Name**: `Nextcloud Profile`
   - **Scope name**: `nextcloud`
   - **Expression**:

   ```python
   # Extract all groups the user is a member of
   groups = [group.name for group in user.ak_groups.all()]

   # In Nextcloud, administrators must be members of a fixed group called "admin".
   # If a user is an admin in authentik, ensure that "admin" is appended to their group list
   if user.is_superuser and "admin" not in groups:
       groups.append("admin")

   return {
       "name": request.user.name,
       "groups": groups,
       # Set a quota by using the "nextcloud_quota" property in the user's attributes
       "quota": user.group_attributes().get("nextcloud_quota", None),
       # To connect an existing Nextcloud user, set "nextcloud_user_id" to the Nextcloud username
       "user_id": user.attributes.get("nextcloud_user_id", str(user.uuid)),
   }
   ```

5. Click **Finish**
6. Navigate to your Nextcloud provider → **Advanced protocol settings**
7. Add `Nextcloud Profile` to **Scopes** (in addition to the default scopes)
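Stripped of Authentik's `user` and `request` objects, the core of that scope-mapping expression is the superuser-to-`admin`-group promotion. A plain-Python restatement of the same logic (function name and signature are illustrative, not part of Authentik's API):

```python
def build_nextcloud_claims(name, groups, is_superuser, quota=None, user_id=None):
    """Plain-Python restatement of the Authentik scope mapping above."""
    groups = list(groups)
    # Nextcloud admins must be in a group literally named "admin"
    if is_superuser and "admin" not in groups:
        groups.append("admin")
    return {
        "name": name,
        "groups": groups,
        "quota": quota,      # e.g. "15 GB"; None means unlimited
        "user_id": user_id,  # in Authentik, falls back to the user's UUID
    }


claims = build_nextcloud_claims("Alice", ["staff"], is_superuser=True)
print(claims["groups"])  # ['staff', 'admin']
```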
||||||
|
### Group Provisioning and Synchronization
|
||||||
|
|
||||||
|
Automatically sync user group membership from Authentik to Nextcloud.
|
||||||
|
|
||||||
|
**Default configuration:**
|
||||||
|
|
||||||
|
```yaml
|
||||||
|
nextcloud_oidc_group_provisioning: true # Auto-create groups from Authentik
|
||||||
|
nextcloud_oidc_mapping_groups: "groups" # Claim containing group list
|
||||||
|
```
|
||||||
|
|
||||||
|
**How it works:**
|
||||||
|
|
||||||
|
1. User logs in via OIDC
|
||||||
|
2. Authentik sends group membership in the `groups` claim (from the custom scope)
|
||||||
|
3. Nextcloud automatically:
|
||||||
|
- Creates groups that don't exist in Nextcloud
|
||||||
|
- Adds user to those groups
|
||||||
|
- Removes user from groups they're no longer member of in Authentik
|
||||||
|
**Example: Making a user an admin**

Nextcloud requires admins to be in a group literally named `admin`. The custom scope mapping (above) automatically adds `"admin"` to the groups list for Authentik superusers.

Alternatively, manually create a group in Authentik called `admin` and add users to it.

**Quota management:**

Set storage quotas by adding the `nextcloud_quota` attribute to Authentik groups or users:

1. In Authentik, navigate to **Directory** → **Groups** → select your group
2. Under **Attributes**, add:
   ```json
   {
     "nextcloud_quota": "15 GB"
   }
   ```
3. Users in this group will have a 15 GB quota in Nextcloud
4. If not set, the quota is unlimited
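Quota strings like `"15 GB"` are human-readable sizes that Nextcloud parses itself; purely as an illustration of the format (this helper is not part of the role, and Nextcloud's own parser also accepts other unit spellings), such a string maps to bytes using binary multiples:

```python
UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def quota_to_bytes(quota: str) -> int:
    """Convert a human-readable quota string such as '15 GB' to bytes (binary units assumed)."""
    number, unit = quota.split()
    return int(float(number) * UNITS[unit.upper()])

print(quota_to_bytes("15 GB"))  # 16106127360
```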
### Complete Authentik Setup Guide

Follow these steps to set up OIDC authentication with Authentik:

**Step 1: Create the Custom Scope Mapping**

See [Creating the Nextcloud Scope Mapping in Authentik](#creating-the-nextcloud-scope-mapping-in-authentik) above.
**Step 2: Create the OAuth2/OpenID Provider**

1. In Authentik, navigate to **Applications** → **Providers**
2. Click **Create** → **OAuth2/OpenID Provider**
3. Configure:
   - **Name**: `Nextcloud`
   - **Authorization flow**: `default-authentication-flow` (or your preferred flow)
   - **Client type**: `Confidential`
   - **Client ID**: Generate or specify (save this for later)
   - **Client Secret**: Generate or specify (save this for later)
   - **Redirect URIs**: `https://cloud.jnss.me/apps/user_oidc/code`
   - **Signing Key**: Select your signing certificate
   - Under **Advanced protocol settings**:
     - **Scopes**: Add `openid`, `email`, `profile`, and `Nextcloud Profile` (the custom scope created in Step 1)
     - **Subject mode**: `Based on the User's UUID` (or `Based on the User's username` if you prefer usernames)
**Step 3: Create the Application**

1. Navigate to **Applications** → **Applications**
2. Click **Create**
3. Configure:
   - **Name**: `Nextcloud`
   - **Slug**: `nextcloud`
   - **Provider**: Select the provider created in Step 2
   - **Launch URL**: `https://cloud.jnss.me` (optional)
**Step 4: Note the Discovery URL**

The discovery URL follows this pattern:

```
https://auth.jnss.me/application/o/<slug>/.well-known/openid-configuration
```

For the application slug `nextcloud`, it will be:

```
https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration
```
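Before pointing Nextcloud at it, it can be worth fetching the discovery document and confirming the standard OIDC fields are present. A small sketch (the URL is this deployment's; the live fetch is commented out because it needs network access):

```python
import json
import urllib.request

REQUIRED = {"issuer", "authorization_endpoint", "token_endpoint", "jwks_uri"}

def missing_fields(doc: dict) -> set:
    """Return the required OIDC discovery fields absent from the document."""
    return REQUIRED - doc.keys()

# Live check against the Authentik instance:
# url = "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"
# with urllib.request.urlopen(url) as resp:
#     print(missing_fields(json.load(resp)))  # an empty set means the document looks sane
```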
**Step 5: Configure Nextcloud Role Variables**

In your `host_vars/arch-vps/main.yml`:

```yaml
nextcloud_oidc_enabled: true
nextcloud_oidc_provider_id: "authentik"
nextcloud_oidc_provider_name: "Authentik"
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"
nextcloud_oidc_scope: "email profile nextcloud openid"
nextcloud_oidc_mapping_uid: "preferred_username"  # Or "sub" for UUID-based IDs
nextcloud_oidc_mapping_display_name: "name"
nextcloud_oidc_mapping_email: "email"
nextcloud_oidc_mapping_groups: "groups"
nextcloud_oidc_mapping_quota: "quota"
nextcloud_oidc_group_provisioning: true
```

In your `host_vars/arch-vps/vault.yml`:

```yaml
vault_nextcloud_oidc_client_id: "nextcloud"  # Client ID from Authentik
vault_nextcloud_oidc_client_secret: "very-long-secret-from-authentik"  # Client Secret from Authentik
```
**Step 6: Deploy and Test**

Run the Nextcloud playbook:

```bash
ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --ask-vault-pass
```
### Supported OIDC Providers

The `user_oidc` app supports any **OpenID Connect 1.0** compliant provider:

- **Authentik** (recommended for self-hosted)
- **Keycloak**
- **Auth0**
- **Okta**
- **Azure AD / Microsoft Entra ID**
- **Google Identity Platform**
- **GitHub** (via OIDC)
- **GitLab**
- **Authelia**
- **Kanidm**
- Any other OIDC 1.0 compliant provider

The `nextcloud_oidc_provider_id` is just an identifier slug; you can use any value like `authentik`, `keycloak`, `auth0`, `mycompany-sso`, etc.
### Verification

After deployment:

1. **Check the provider configuration:**
   ```bash
   podman exec --user www-data nextcloud php occ user_oidc:provider
   podman exec --user www-data nextcloud php occ user_oidc:provider authentik
   ```

2. **Test login:**
   - Visit `https://cloud.jnss.me`
   - You should see a "Log in with Authentik SSO" button
   - Click it to test the SSO flow
   - A user account should be auto-created on first login

3. **Check user mapping:**
   ```bash
   podman exec --user www-data nextcloud php occ user:list
   ```
### Troubleshooting OIDC

**Login button doesn't appear:**

```bash
# Check whether the user_oidc app is enabled
podman exec --user www-data nextcloud php occ app:list | grep user_oidc

# Enable it if needed
podman exec --user www-data nextcloud php occ app:enable user_oidc
```

**Discovery URL errors:**

```bash
# Test that the discovery URL is reachable from inside the container
podman exec nextcloud curl -k https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration
```

**JWKS cache issues:**

```bash
# Re-save the provider (re-specifying the client ID) so cached discovery/JWKS data is refreshed
podman exec --user www-data nextcloud php occ user_oidc:provider authentik \
  --clientid='your-client-id'
```
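When group or quota claims don't seem to arrive, inspecting the ID token payload directly can help. A minimal sketch using only the standard library (it decodes without verifying the signature, so it is for debugging only; the sample token below is fabricated for demonstration):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment without verifying the signature (debug only)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64url padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake header.payload.signature token to demonstrate:
claims = {"preferred_username": "alice", "groups": ["staff", "admin"]}
fake = "e30." + base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=") + ".sig"
print(decode_jwt_payload(fake)["groups"])  # ['staff', 'admin']
```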
## Email Configuration

Configure Nextcloud to send emails for password resets, notifications, and sharing.

### Configuration

Enable email in your `host_vars/arch-vps/main.yml`:

```yaml
# Email Configuration
nextcloud_email_enabled: true
nextcloud_smtp_host: "smtp.fastmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"  # tls, ssl, or empty
nextcloud_smtp_username: "nextcloud@jnss.me"
nextcloud_mail_from_address: "nextcloud"
nextcloud_mail_domain: "jnss.me"

# Set the admin user's email address
nextcloud_admin_email: "admin@jnss.me"
```

Add the SMTP password to your vault file `host_vars/arch-vps/vault.yml`:

```yaml
vault_nextcloud_smtp_password: "your-smtp-app-password"
```
### Common SMTP Providers

**Fastmail:**

```yaml
nextcloud_smtp_host: "smtp.fastmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```

**Gmail (App Password required):**

```yaml
nextcloud_smtp_host: "smtp.gmail.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```

**Office 365:**

```yaml
nextcloud_smtp_host: "smtp.office365.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```

**SMTP2GO:**

```yaml
nextcloud_smtp_host: "mail.smtp2go.com"
nextcloud_smtp_port: 587
nextcloud_smtp_secure: "tls"
```
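All four providers above use STARTTLS on port 587. The `nextcloud_smtp_secure` values correspond to the two standard SMTP connection styles; a hedged sketch of the distinction using Python's standard library (actually connecting is commented out since it needs network access and credentials):

```python
import smtplib

def smtp_entrypoint(secure: str):
    """Map Nextcloud's smtp_secure setting to an smtplib class and its usual port."""
    if secure == "ssl":  # implicit TLS from the first byte, usually port 465
        return smtplib.SMTP_SSL, 465
    if secure == "tls":  # plain connect, then STARTTLS upgrade, usually port 587
        return smtplib.SMTP, 587
    return smtplib.SMTP, 25  # unencrypted (not recommended)

cls, port = smtp_entrypoint("tls")
# conn = cls("smtp.fastmail.com", port)
# conn.starttls()  # only for the "tls" (STARTTLS) case
# conn.login("nextcloud@jnss.me", "app-password")
```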
### Verification

After deployment:

1. **Check the SMTP configuration:**
   ```bash
   podman exec --user www-data nextcloud php occ config:list system | grep mail
   ```

2. **Check the admin email:**
   ```bash
   podman exec --user www-data nextcloud php occ user:setting admin settings email
   ```

3. **Send a test email via the Web UI:**
   - Log in as admin
   - Settings → Administration → Basic settings
   - Scroll to "Email server"
   - Click the "Send email" button
   - Check the recipient inbox
### Troubleshooting Email

**Test the SMTP connection from the container:**

```bash
# Install swaks for testing (apk works on Alpine-based images; use apt-get on Debian-based ones)
podman exec nextcloud apk add --no-cache swaks

# Test the SMTP connection
podman exec nextcloud swaks \
  --to recipient@example.com \
  --from nextcloud@jnss.me \
  --server smtp.fastmail.com:587 \
  --auth LOGIN \
  --auth-user nextcloud@jnss.me \
  --auth-password 'your-password' \
  --tls
```

**Check Nextcloud logs:**

```bash
podman exec --user www-data nextcloud php occ log:watch
```
## Usage

### Include in Playbook
This role uses a **split caching strategy** for optimal performance and stability:

- `memcache.local`: APCu (in-memory opcode cache)
- `memcache.distributed`: Redis (shared cache, file locking)
- `memcache.locking`: Redis (transactional file locking)
- Configuration: via OCC commands in the configuration script

**Why not Redis sessions?**

The official Nextcloud Docker image enables Redis session handling when `REDIS_HOST` is set:

4. **Worker exhaustion**: Limited FPM workers (default 5) all become blocked
5. **Cascading failure**: New requests queue, timeouts accumulate, locks orphan

This role disables Redis sessions by **not setting** `REDIS_HOST` in the environment, while still providing Redis caching via OCC configuration commands.

**If you need Redis sessions** (e.g., a multi-server setup with session sharing), you must:
1. Enable `REDIS_HOST` in `nextcloud.env.j2`
# Nextcloud Role - Vault Variables

This document describes all vault-encrypted variables used by the Nextcloud role.

## Required Variables

These variables **must** be defined in your vault file for the role to function:

```yaml
# =================================================================
# Core Credentials (REQUIRED)
# =================================================================

# PostgreSQL database password for the Nextcloud user
vault_nextcloud_db_password: "CHANGE_ME_secure_database_password"

# Nextcloud admin user password
vault_nextcloud_admin_password: "CHANGE_ME_secure_admin_password"

# Valkey (Redis) password for caching (shared infrastructure)
vault_valkey_password: "CHANGE_ME_secure_valkey_password"
```
## Optional Variables

These variables are only required if you enable the corresponding features:

### Email/SMTP Configuration

Only required if `nextcloud_email_enabled: true`:

```yaml
# =================================================================
# Email/SMTP Credentials (OPTIONAL)
# =================================================================

# SMTP server password for sending emails
# Used with nextcloud_smtp_username for authentication
vault_nextcloud_smtp_password: "your-smtp-password-or-app-password"
```

**Example for Gmail:**
- Use an [App Password](https://support.google.com/accounts/answer/185833)
- Do NOT use your main Google account password

**Example for Fastmail:**
- Use an [App Password](https://www.fastmail.help/hc/en-us/articles/360058752854)

### OIDC/SSO Configuration

Only required if `nextcloud_oidc_enabled: true`:

```yaml
# =================================================================
# OIDC/SSO Credentials (OPTIONAL)
# =================================================================

# OAuth2/OIDC Client ID from your identity provider
vault_nextcloud_oidc_client_id: "nextcloud"

# OAuth2/OIDC Client Secret from your identity provider
# IMPORTANT: Keep this secret! Anyone with it can impersonate your app
vault_nextcloud_oidc_client_secret: "very-long-random-secret-from-authentik"
```

## Complete Vault File Example

Here's a complete example of a vault file with all possible variables:

```yaml
---
# =================================================================
# Example Vault File
# =================================================================
# File: host_vars/arch-vps/vault.yml
# Encrypted with: ansible-vault encrypt host_vars/arch-vps/vault.yml

# Caddy TLS
vault_caddy_tls_email: "admin@jnss.me"
vault_cloudflare_api_token: "your-cloudflare-token"

# Authentik
vault_authentik_db_password: "authentik-db-password"
vault_authentik_secret_key: "authentik-secret-key"
vault_authentik_admin_password: "authentik-admin-password"

# Valkey (shared infrastructure)
vault_valkey_password: "V4lk3y!P@ssw0rd#R3d1s"

# Nextcloud - Core (always required)
vault_nextcloud_db_password: "XkN8vQ2mP9wR5tY7uI0oP3sA6dF8gH1j"
vault_nextcloud_admin_password: "AdminP@ssw0rd!SecureAndL0ng"

# Nextcloud - Email (optional)
vault_nextcloud_smtp_password: "fastmail-app-password-xyz123"

# Nextcloud - OIDC (optional)
vault_nextcloud_oidc_client_id: "nextcloud"
vault_nextcloud_oidc_client_secret: "aksk_authentik_secret_very_long_random_string"
```
## Creating/Editing Vault File

### First Time Setup
```bash
ansible-vault edit host_vars/arch-vps/vault.yml
# Add the Nextcloud variables, then save and exit
```

### View Vault Contents

```bash
# View vault file contents
ansible-vault view host_vars/arch-vps/vault.yml
```
### Password Generation

Generate secure passwords:

```bash
openssl rand -base64 32
pwgen -s 32 1
```
## Running Playbooks with Vault

### Interactive Password Prompt

```bash
ansible-playbook -i inventory/hosts.yml site.yml --tags nextcloud --ask-vault-pass
```

### Using a Password File

```bash
# Create a password file (DO NOT COMMIT THIS!)
echo 'your-vault-password' > .vault_pass
chmod 600 .vault_pass

# Add it to .gitignore
echo '.vault_pass' >> .gitignore

# Run the playbook
ansible-playbook -i inventory/hosts.yml site.yml --vault-password-file .vault_pass
```
## Security Notes

- Store the vault password securely (password manager, encrypted file, etc.)
- Consider using `ansible-vault rekey` to change the vault password periodically
## Troubleshooting

### "Vault password incorrect"

**Problem:** The wrong vault password was entered.

**Solution:** Verify you're using the correct vault password.

### "vault_nextcloud_db_password is undefined"

**Problem:** The variable is not defined in the vault file, or the vault file was not loaded.

**Solution:**
1. Verify the variable exists in the vault file:
   ```bash
   ansible-vault view host_vars/arch-vps/vault.yml | grep vault_nextcloud
   ```
2. Ensure you're passing `--ask-vault-pass`:
   ```bash
   ansible-playbook -i inventory/hosts.yml site.yml --ask-vault-pass
   ```
## Verification

Check that variables are properly encrypted:

```bash
cat host_vars/arch-vps/vault.yml

# Decrypt and view (requires password)
ansible-vault view host_vars/arch-vps/vault.yml
```

## Reference

- [Ansible Vault Documentation](https://docs.ansible.com/ansible/latest/user_guide/vault.html)
- [Best Practices for Variables and Vaults](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html#variables-and-vaults)
nextcloud_home: /opt/nextcloud
nextcloud_html_dir: "{{ nextcloud_home }}/html"
nextcloud_data_dir: "{{ nextcloud_home }}/data"
nextcloud_config_dir: "{{ nextcloud_home }}/config"

# Container configuration (FPM variant)
nextcloud_version: "stable-fpm"

nextcloud_domain: "cloud.jnss.me"

# Admin user (auto-configured on first run)
nextcloud_admin_user: "admin"
nextcloud_admin_email: "admin@jnss.me"
nextcloud_admin_password: "{{ vault_nextcloud_admin_password }}"

# Trusted domains (space-separated)
nextcloud_php_memory_limit: "512M"
nextcloud_php_upload_limit: "512M"

# =================================================================
# Background Jobs Configuration
# =================================================================

nextcloud_background_jobs_mode: "cron"  # Options: ajax, webcron, cron
nextcloud_cron_interval: "5min"         # How often cron runs (systemd timer)

# =================================================================
# Nextcloud System Configuration
# =================================================================

nextcloud_maintenance_window_start: 4  # Start hour (UTC) for maintenance window
nextcloud_default_phone_region: "NO"   # Default phone region code (ISO 3166-1 alpha-2)

# =================================================================
# Apps Configuration
# =================================================================

# Apps to install and enable
nextcloud_apps_install:
  - user_oidc
  - calendar
  - contacts

# =================================================================
# Email/SMTP Configuration (Optional)
# =================================================================

nextcloud_email_enabled: true  # Master switch - set to true to enable SMTP

# SMTP Server Configuration
nextcloud_smtp_mode: "smtp"               # smtp, sendmail, qmail
nextcloud_smtp_host: "smtp.titan.email"   # e.g., smtp.gmail.com, smtp.fastmail.com
nextcloud_smtp_port: 587                  # 587 for STARTTLS, 465 for SSL, 25 for plain
nextcloud_smtp_secure: "tls"              # tls, ssl, or empty string for no encryption
nextcloud_smtp_auth: true                 # Enable SMTP authentication
nextcloud_smtp_authtype: "PLAIN"          # LOGIN or PLAIN
nextcloud_smtp_username: "hello@jnss.me"  # SMTP username
nextcloud_smtp_password: "{{ vault_nextcloud_smtp_password | default('') }}"

# Email Addressing
nextcloud_mail_from_address: "hello"  # Local part only (before @)
nextcloud_mail_domain: "jnss.me"      # Domain part (after @)

# The admin user's email address is set in the Core Configuration section above

# =================================================================
# OIDC/SSO Configuration (Optional)
# =================================================================

nextcloud_oidc_enabled: true  # Master switch - set to true to enable OIDC

# Provider Configuration
nextcloud_oidc_provider_id: "authentik"    # Provider identifier (slug)
nextcloud_oidc_provider_name: "Authentik"  # Display name (shown on the login button)
nextcloud_oidc_client_id: "{{ vault_nextcloud_oidc_client_id | default('') }}"
nextcloud_oidc_client_secret: "{{ vault_nextcloud_oidc_client_secret | default('') }}"
# Full discovery URL, e.g., https://auth.example.com/application/o/nextcloud/.well-known/openid-configuration
nextcloud_oidc_discovery_url: "https://auth.jnss.me/application/o/nextcloud/.well-known/openid-configuration"

# Scopes (based on the Authentik integration guide)
# The 'nextcloud' scope is a custom scope you must create in Authentik
nextcloud_oidc_scope: "email profile nextcloud openid"

# Provider Options
nextcloud_oidc_unique_uid: false         # Hash provider+user ID to prevent account takeover (recommended: true)
nextcloud_oidc_check_bearer: false       # Check bearer tokens for API/WebDAV calls
nextcloud_oidc_send_id_token_hint: true  # Send ID token hint during logout

# Attribute Mappings (based on the Authentik integration guide)
nextcloud_oidc_mapping_display_name: "name"       # Claim for display name
nextcloud_oidc_mapping_email: "email"             # Claim for email
nextcloud_oidc_mapping_quota: "quota"             # Claim for quota (from Authentik property mapping)
nextcloud_oidc_mapping_uid: "preferred_username"  # Claim for user ID
nextcloud_oidc_mapping_groups: "groups"           # Claim for groups (from Authentik property mapping)

# Group Provisioning (based on the Authentik integration guide)
nextcloud_oidc_group_provisioning: true  # Auto-create groups from the OIDC provider

# Single Login Option
nextcloud_oidc_single_login: true  # If true and only one provider, auto-redirect to SSO

# =================================================================
# Caddy Integration
# =================================================================
roles/nextcloud/tasks/configure.yml (Normal file, 36 lines)
|
|||||||
|
---
|
||||||
|
# =================================================================
|
||||||
|
# Nextcloud Configuration via Script
|
||||||
|
# =================================================================
|
||||||
|
# Rick-Infra - Nextcloud Role
|
||||||
|
#
|
||||||
|
# Deploys and runs a configuration script inside the Nextcloud
|
||||||
|
# container to set system configuration via OCC commands.
|
||||||
|
|
||||||
|
- name: Deploy Nextcloud configuration script
|
||||||
|
template:
|
||||||
|
src: configure-nextcloud.sh.j2
|
||||||
|
dest: "{{ nextcloud_config_dir }}/configure.sh"
|
||||||
|
mode: '0755'
|
||||||
|
tags: [config, nextcloud-config]
|
||||||
|
|
||||||
|
- name: Run Nextcloud configuration script
|
||||||
|
command: podman exec --user www-data nextcloud bash /var/www/html/config/configure.sh
|
||||||
|
register: nc_config_result
|
||||||
|
changed_when: false # Script output doesn't indicate changes reliably
|
||||||
|
failed_when: nc_config_result.rc != 0
|
||||||
|
tags: [config, nextcloud-config]
|
||||||
|
|
||||||
|
- name: Display configuration script output
|
||||||
|
debug:
|
||||||
|
msg: "{{ nc_config_result.stdout_lines }}"
|
||||||
|
when: nc_config_result.stdout | length > 0
|
||||||
|
tags: [config, nextcloud-config]
|
||||||
|
|
||||||
|
- name: Display configuration script errors
|
||||||
|
debug:
|
||||||
|
msg: "{{ nc_config_result.stderr_lines }}"
|
||||||
|
when:
|
||||||
|
- nc_config_result.stderr | length > 0
|
||||||
|
- nc_config_result.rc != 0
|
||||||
|
tags: [config, nextcloud-config]
|
||||||
roles/nextcloud/tasks/cron.yml (Normal file, 72 lines)
|
|||||||
|
---
|
||||||
|
# =================================================================
|
||||||
|
# Nextcloud Background Jobs Configuration
|
||||||
|
# =================================================================
|
||||||
|
# Rick-Infra - Nextcloud Role
|
#
# Configures systemd timer for reliable background job execution
# instead of Ajax-based cron (which requires user activity)

- name: Create nextcloud cron service
  copy:
    content: |
      [Unit]
      Description=Nextcloud Background Jobs (cron.php)
      Documentation=https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html
      After=nextcloud.service
      Requires=nextcloud.service

      [Service]
      Type=oneshot
      ExecStart=/usr/bin/podman exec --user www-data nextcloud php -f /var/www/html/cron.php

      # Logging
      StandardOutput=journal
      StandardError=journal
      SyslogIdentifier=nextcloud-cron
    dest: /etc/systemd/system/nextcloud-cron.service
    mode: '0644'
    backup: yes
  notify: reload systemd

- name: Create nextcloud cron timer
  copy:
    content: |
      [Unit]
      Description=Nextcloud Background Jobs Timer
      Documentation=https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html

      [Timer]
      OnBootSec=5min
      OnUnitActiveSec={{ nextcloud_cron_interval }}
      Unit=nextcloud-cron.service

      [Install]
      WantedBy=timers.target
    dest: /etc/systemd/system/nextcloud-cron.timer
    mode: '0644'
    backup: yes
  notify: reload systemd

- name: Enable and start nextcloud cron timer
  systemd:
    name: nextcloud-cron.timer
    enabled: yes
    state: started
    daemon_reload: yes

- name: Configure Nextcloud to use cron for background jobs
  command: >
    podman exec --user www-data nextcloud
    php occ background:cron
  register: nextcloud_cron_mode
  changed_when: "'background jobs mode changed' in nextcloud_cron_mode.stdout or 'Set mode for background jobs to' in nextcloud_cron_mode.stdout"
  failed_when:
    - nextcloud_cron_mode.rc != 0
    - "'mode for background jobs is currently' not in nextcloud_cron_mode.stdout"

- name: Verify cron timer is active
  command: systemctl is-active nextcloud-cron.timer
  register: timer_status
  changed_when: false
  failed_when: timer_status.stdout != "active"
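For reference, with a hypothetical `nextcloud_cron_interval: 5min` (the variable's actual default is set elsewhere in the role), the timer unit above would render to:

```ini
[Unit]
Description=Nextcloud Background Jobs Timer
Documentation=https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/background_jobs_configuration.html

[Timer]
OnBootSec=5min
OnUnitActiveSec=5min
Unit=nextcloud-cron.service

[Install]
WantedBy=timers.target
```

`OnUnitActiveSec` schedules each run relative to the previous activation, so the job fires every interval after the initial `OnBootSec` delay.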
@@ -40,7 +40,6 @@
     - "{{ nextcloud_html_dir }}"
     - "{{ nextcloud_data_dir }}"
     - "{{ nextcloud_config_dir }}"
-    - "{{ nextcloud_custom_apps_dir }}"
   tags: [setup, directories]

 - name: Deploy environment configuration
@@ -52,12 +51,9 @@
   notify: restart nextcloud
   tags: [config]

-# NOTE: Custom Redis config is deployed AFTER installation completes (see below)
-# to avoid interfering with the container's first-time initialization process
-# NOTE: redis-session-override.ini is NOT deployed because we use file-based sessions
-# (not Redis sessions). If you enable REDIS_HOST in the future, you'll need to add
-# proper session lock configuration.
+# NOTE: Nextcloud is configured via OCC commands in a script after installation
+# completes. This avoids interfering with the container's initialization process
+# and provides a clean, explicit configuration approach.

 - name: Create Quadlet systemd directory (system scope)
   file:
@@ -82,7 +78,6 @@
     owner: root
     group: "{{ caddy_user }}"
     mode: '0644'
-    backup: true
   notify: reload caddy
   tags: [caddy, reverse-proxy]

@@ -130,13 +125,20 @@
   changed_when: false
   tags: [verification]

-- name: Deploy custom Redis caching configuration (post-installation)
-  template:
-    src: redis.config.php.j2
-    dest: "{{ nextcloud_config_dir }}/redis.config.php"
-    mode: '0644'
-  notify: restart nextcloud
-  tags: [config, redis]
+- name: Configure Nextcloud via OCC script
+  include_tasks: configure.yml
+  tags: [config, configure]
+
+- name: Truncate nextcloud.log to prevent bloat
+  shell: |
+    podman exec nextcloud truncate -s 0 /var/www/html/data/nextcloud.log || true
+  changed_when: false
+  failed_when: false
+  tags: [maintenance, cleanup]
+
+- name: Configure background jobs (cron)
+  include_tasks: cron.yml
+  tags: [cron, background-jobs]

 - name: Display Nextcloud deployment status
   debug:
@@ -152,7 +154,8 @@
   ⚙️ Configuration:
   - Redis caching enabled (application-level cache & file locking)
   - PHP sessions use file-based storage (not Redis)
-  - Custom redis.config.php deployed post-installation
+  - Database optimizations applied
+  - Configuration via OCC commands

   🚀 Ready for file storage and collaboration!
189 roles/nextcloud/templates/configure-nextcloud.sh.j2 — Normal file
@@ -0,0 +1,189 @@
#!/bin/bash
# =================================================================
# Nextcloud Configuration Script
# =================================================================
# Rick-Infra - Nextcloud Role
#
# This script configures Nextcloud via OCC commands after initial
# installation. It is generated from Ansible variables and runs
# inside the Nextcloud container.
#
# Generated by: roles/nextcloud/templates/configure-nextcloud.sh.j2
# Managed by: Ansible

set +e  # Continue on errors, report at end
ERRORS=0

# Helper function for OCC
occ() {
    php /var/www/html/occ "$@" 2>&1
}

# Track errors
check_error() {
    if [ $? -ne 0 ]; then
        ERRORS=$((ERRORS + 1))
        echo "ERROR: $1" >&2
    fi
}

# =================================================================
# Redis Caching Configuration
# =================================================================
# Configure Redis for application-level caching and file locking
# WITHOUT enabling Redis sessions (which can cause performance issues)

occ config:system:set memcache.distributed --value='\OC\Memcache\Redis' --quiet
check_error "Failed to set memcache.distributed"

occ config:system:set memcache.locking --value='\OC\Memcache\Redis' --quiet
check_error "Failed to set memcache.locking"

occ config:system:set redis host --value='{{ valkey_unix_socket_path }}' --quiet
check_error "Failed to set redis.host"

occ config:system:set redis password --value='{{ valkey_password }}' --quiet
check_error "Failed to set redis.password"

occ config:system:set redis dbindex --value={{ nextcloud_valkey_db }} --type=integer --quiet
check_error "Failed to set redis.dbindex"

# =================================================================
# Maintenance Configuration
# =================================================================

occ config:system:set maintenance_window_start --value={{ nextcloud_maintenance_window_start }} --type=integer --quiet
check_error "Failed to set maintenance_window_start"

occ config:system:set default_phone_region --value='{{ nextcloud_default_phone_region }}' --quiet
check_error "Failed to set default_phone_region"

# =================================================================
# Database Optimization
# =================================================================

# Add missing database indices
occ db:add-missing-indices --quiet
check_error "Failed to add missing database indices"

# Convert filecache to bigint
occ db:convert-filecache-bigint --no-interaction --quiet
check_error "Failed to convert filecache to bigint"

# Update mimetype database mappings
occ maintenance:repair --include-expensive --quiet
check_error "Failed to run maintenance:repair"

# =================================================================
# App Installation and Enablement
# =================================================================
# Install apps first, then enable them. This must happen before
# app-specific configuration (e.g., OIDC provider setup)

{% if nextcloud_apps_install is defined and nextcloud_apps_install | length > 0 %}
# Install apps
{% for app in nextcloud_apps_install %}
occ app:install {{ app }} --quiet 2>&1 | grep -v "already installed" || true
check_error "Failed to install app: {{ app }}"
{% endfor %}
{% endif %}

# =================================================================
# Email/SMTP Configuration
# =================================================================
{% if nextcloud_email_enabled | default(false) %}
# Configure SMTP mode
occ config:system:set mail_smtpmode --value={{ nextcloud_smtp_mode }} --quiet
check_error "Failed to set mail_smtpmode"

# Configure SMTP server
occ config:system:set mail_smtphost --value='{{ nextcloud_smtp_host }}' --quiet
check_error "Failed to set mail_smtphost"

occ config:system:set mail_smtpport --value={{ nextcloud_smtp_port }} --type=integer --quiet
check_error "Failed to set mail_smtpport"

{% if nextcloud_smtp_secure %}
occ config:system:set mail_smtpsecure --value={{ nextcloud_smtp_secure }} --quiet
check_error "Failed to set mail_smtpsecure"
{% endif %}

{% if nextcloud_smtp_auth %}
# Configure SMTP authentication
occ config:system:set mail_smtpauth --value=1 --type=integer --quiet
check_error "Failed to set mail_smtpauth"

occ config:system:set mail_smtpauthtype --value={{ nextcloud_smtp_authtype }} --quiet
check_error "Failed to set mail_smtpauthtype"

occ config:system:set mail_smtpname --value='{{ nextcloud_smtp_username }}' --quiet
check_error "Failed to set mail_smtpname"

occ config:system:set mail_smtppassword --value='{{ nextcloud_smtp_password }}' --quiet
check_error "Failed to set mail_smtppassword"
{% endif %}

# Configure email addressing
occ config:system:set mail_from_address --value='{{ nextcloud_mail_from_address }}' --quiet
check_error "Failed to set mail_from_address"

occ config:system:set mail_domain --value='{{ nextcloud_mail_domain }}' --quiet
check_error "Failed to set mail_domain"
{% endif %}

# Set admin user email address
{% if nextcloud_admin_email %}
occ user:setting {{ nextcloud_admin_user }} settings email '{{ nextcloud_admin_email }}' --quiet
check_error "Failed to set admin user email"
{% endif %}

# =================================================================
# OIDC/SSO Provider Configuration
# =================================================================
{% if nextcloud_oidc_enabled | default(false) %}
# Configure OIDC provider (creates if doesn't exist, updates if exists)
occ user_oidc:provider {{ nextcloud_oidc_provider_id }} \
  --clientid='{{ nextcloud_oidc_client_id }}' \
  --clientsecret='{{ nextcloud_oidc_client_secret }}' \
  --discoveryuri='{{ nextcloud_oidc_discovery_url }}' \
  --scope='{{ nextcloud_oidc_scope }}' \
  --unique-uid={{ '1' if nextcloud_oidc_unique_uid else '0' }} \
  --check-bearer={{ '1' if nextcloud_oidc_check_bearer else '0' }} \
  --send-id-token-hint={{ '1' if nextcloud_oidc_send_id_token_hint else '0' }} \
{% if nextcloud_oidc_mapping_display_name %}
  --mapping-display-name='{{ nextcloud_oidc_mapping_display_name }}' \
{% endif %}
{% if nextcloud_oidc_mapping_email %}
  --mapping-email='{{ nextcloud_oidc_mapping_email }}' \
{% endif %}
{% if nextcloud_oidc_mapping_quota %}
  --mapping-quota='{{ nextcloud_oidc_mapping_quota }}' \
{% endif %}
{% if nextcloud_oidc_mapping_uid %}
  --mapping-uid='{{ nextcloud_oidc_mapping_uid }}' \
{% endif %}
{% if nextcloud_oidc_mapping_groups %}
  --mapping-groups='{{ nextcloud_oidc_mapping_groups }}' \
{% endif %}
  --group-provisioning={{ '1' if nextcloud_oidc_group_provisioning else '0' }} \
  --quiet 2>&1 | grep -v "already exists" || true
check_error "Failed to configure OIDC provider: {{ nextcloud_oidc_provider_id }}"

{% if nextcloud_oidc_single_login %}
# Enable single login (auto-redirect to SSO if only one provider)
occ config:app:set user_oidc allow_multiple_user_backends --value=0 --quiet
check_error "Failed to enable single login mode"
{% endif %}
{% endif %}

# =================================================================
# Exit Status
# =================================================================

if [ $ERRORS -gt 0 ]; then
    echo "Configuration completed with $ERRORS error(s)" >&2
    exit 1
else
    echo "Nextcloud configuration completed successfully"
    exit 0
fi
@@ -58,6 +58,10 @@
     Referrer-Policy "no-referrer"
     # Disable FLoC tracking
     Permissions-Policy "interest-cohort=()"
+    # Robot indexing policy
+    X-Robots-Tag "noindex, nofollow"
+    # Cross-domain policy
+    X-Permitted-Cross-Domain-Policies "none"
     # Remove server header
     -Server
 }
@@ -23,9 +23,6 @@ Volume={{ nextcloud_data_dir }}:/var/www/html/data:Z
 # Configuration (private - contains secrets)
 Volume={{ nextcloud_config_dir }}:/var/www/html/config:Z

-# Custom apps (world-readable)
-Volume={{ nextcloud_custom_apps_dir }}:/var/www/html/custom_apps:Z
-
 # Infrastructure sockets (mounted with world-readable permissions on host)
 Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z
 Volume={{ valkey_unix_socket_path | dirname }}:{{ valkey_unix_socket_path | dirname }}:Z
@@ -1,34 +0,0 @@
-<?php
-/**
- * Redis/Valkey Caching Configuration for Nextcloud
- *
- * This file provides Redis caching for Nextcloud application-level operations
- * (distributed cache, file locking) WITHOUT enabling Redis for PHP sessions.
- *
- * IMPORTANT: This overrides the default /usr/src/nextcloud/config/redis.config.php
- * which checks for REDIS_HOST environment variable. We deploy this custom version
- * to enable Redis caching while keeping PHP sessions file-based for stability.
- *
- * Why not use REDIS_HOST env var?
- * - Setting REDIS_HOST enables BOTH Redis sessions AND Redis caching
- * - Redis session handling can cause severe performance issues:
- *   * Session lock contention under high concurrency
- *   * Infinite lock retries blocking FPM workers
- *   * Timeout orphaning leaving locks unreleased
- *   * Worker pool exhaustion causing cascading failures
- *
- * This configuration provides the benefits of Redis caching (fast distributed
- * cache, reliable file locking) while avoiding the pitfalls of Redis sessions.
- *
- * Managed by: Ansible Nextcloud role
- * Template: roles/nextcloud/templates/redis.config.php.j2
- */
-
-$CONFIG = array(
-    'memcache.distributed' => '\OC\Memcache\Redis',
-    'memcache.locking' => '\OC\Memcache\Redis',
-    'redis' => array(
-        'host' => '{{ valkey_unix_socket_path }}',
-        'password' => '{{ valkey_password }}',
-    ),
-);
@@ -46,6 +46,10 @@ podman_registry_blocked: false
 podman_default_network: "bridge"
 podman_network_security: true

+# Trusted container subnets (allowed through firewall)
+podman_trusted_subnets:
+  - "10.88.0.0/16"
+
 # =================================================================
 # Storage Configuration
 # =================================================================
@@ -11,3 +11,8 @@
     name: podman
     state: restarted
   when: podman_service_enabled | default(true)
+
+- name: reload nftables
+  systemd:
+    name: nftables
+    state: reloaded
@@ -42,6 +42,22 @@
     backup: yes
   notify: restart podman

+- name: Create default podman network with DNS enabled
+  command: podman network create podman --subnet 10.88.0.0/16
+  register: podman_network_create
+  changed_when: "'podman' in podman_network_create.stdout"
+  failed_when:
+    - podman_network_create.rc != 0
+    - "'already exists' not in podman_network_create.stderr"
+
+- name: Deploy podman firewall rules
+  template:
+    src: podman.nft.j2
+    dest: /etc/nftables.d/10-podman.nft
+    mode: '0644'
+    backup: yes
+  notify: reload nftables
+
 - name: Enable podman system service (if enabled)
   systemd:
     name: podman
32
roles/podman/templates/podman.nft.j2
Normal file
32
roles/podman/templates/podman.nft.j2
Normal file
@@ -0,0 +1,32 @@
#!/usr/sbin/nft -f

# =================================================================
# Podman Container Network Firewall Rules
# =================================================================
# Rick-Infra Infrastructure - Podman Role
# Priority: 10 (loaded after base rules, before drop rules)
#
# Purpose:
# - Allow container-to-host communication for services (PostgreSQL, Valkey)
# - Allow container outbound traffic for DNS, package updates, etc.
# - Enable NAT/masquerading for container networks
#
# Security Model:
# - Containers are trusted (they run our own services)
# - All container egress traffic is allowed (simplified management)
# - Container ingress is controlled by application-specific port publishing
#
# Architecture:
# - Containers access host services via Unix sockets or host.containers.internal
# - Caddy reverse proxy handles all external traffic
# - No direct container port exposure to internet

# Add rules to INPUT chain - Allow trusted container subnets
{% for subnet in podman_trusted_subnets %}
add rule inet filter input ip saddr {{ subnet }} accept comment "Podman containers: {{ subnet }}"
{% endfor %}

# Add rules to FORWARD chain - Enable container forwarding
add rule inet filter forward ct state established,related accept comment "Allow established connections"
add rule inet filter forward iifname "podman0" accept comment "Allow outbound from podman bridge"
add rule inet filter forward oifname "podman0" ct state established,related accept comment "Allow inbound to podman bridge (established)"
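With the role's default `podman_trusted_subnets` (`10.88.0.0/16`, from the defaults change above), the Jinja loop in this template renders to a single input rule alongside the static forward rules:

```
add rule inet filter input ip saddr 10.88.0.0/16 accept comment "Podman containers: 10.88.0.0/16"

add rule inet filter forward ct state established,related accept comment "Allow established connections"
add rule inet filter forward iifname "podman0" accept comment "Allow outbound from podman bridge"
add rule inet filter forward oifname "podman0" ct state established,related accept comment "Allow inbound to podman bridge (established)"
```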
331 roles/vaultwarden/README.md — Normal file
@@ -0,0 +1,331 @@
# Vaultwarden Password Manager Role

Self-contained Vaultwarden (Bitwarden-compatible) password manager deployment using Podman and PostgreSQL.

## Overview

This role deploys Vaultwarden as a Podman Quadlet container with:

- **PostgreSQL backend** via Unix socket (777 permissions)
- **Caddy reverse proxy** with HTTPS and WebSocket support
- **SSO integration** ready (Authentik OpenID Connect)
- **SMTP support** for email notifications (optional)
- **Admin panel** for management

## Architecture

```
Internet → Caddy (HTTPS) → Vaultwarden Container → PostgreSQL (Unix socket)
                                   ↓
                              /data volume
```

### Components

- **Container Image**: `vaultwarden/server:latest` (Docker Hub)
- **User**: System user `vaultwarden` (non-root)
- **Port**: 8080 (localhost only)
- **Domain**: `vault.jnss.me`
- **Database**: PostgreSQL via Unix socket at `/var/run/postgresql`
- **Data**: `/opt/vaultwarden/data`

## Dependencies

**Managed Hosts:**

- `postgresql` role (provides database and Unix socket)
- `caddy` role (provides reverse proxy)

**Control Node:**

- `argon2` command-line tool (automatically installed if not present)
  - Used to hash the admin token securely on the control node
  - Available in most package managers: `pacman -S argon2`, `apt install argon2`, etc.

## Configuration

### Required Variables

Must be defined in vault (e.g., `group_vars/homelab/vault.yml`):

```yaml
# Database password
vault_vaultwarden_db_password: "secure-database-password"

# Admin token (plain text - will be hashed automatically during deployment)
vault_vaultwarden_admin_token: "your-secure-admin-token"

# SMTP password (if using email)
vault_vaultwarden_smtp_password: "smtp-password"    # optional

# SSO credentials (if using Authentik integration)
vault_vaultwarden_sso_client_id: "vaultwarden"      # optional
vault_vaultwarden_sso_client_secret: "sso-secret"   # optional
```

### Optional Variables

Override in `group_vars` or `host_vars`:

```yaml
# Domain
vaultwarden_domain: "vault.jnss.me"

# Container version
vaultwarden_version: "latest"

# Registration controls
vaultwarden_signups_allowed: false     # Disable open registration
vaultwarden_invitations_allowed: true  # Allow existing users to invite

# SMTP Configuration
vaultwarden_smtp_enabled: true
vaultwarden_smtp_host: "smtp.example.com"
vaultwarden_smtp_port: 587
vaultwarden_smtp_from: "vault@jnss.me"
vaultwarden_smtp_username: "smtp-user"

# SSO Configuration (Authentik)
vaultwarden_sso_enabled: true
vaultwarden_sso_authority: "https://auth.jnss.me"
```

## Usage

### Deploy Vaultwarden

```bash
# Full deployment
ansible-playbook rick-infra.yml --tags vaultwarden

# Or via site.yml
ansible-playbook site.yml --tags vaultwarden -l homelab
```

### Access Admin Panel

1. Set the admin token in the vault file (plain text):
   ```yaml
   # Generate a secure token
   vault_vaultwarden_admin_token: "$(openssl rand -base64 32)"
   ```

2. The role automatically hashes the token during deployment:
   - Hashing occurs on the **control node** using the `argon2` CLI
   - Uses OWASP recommended settings (19MiB memory, 2 iterations, 1 thread)
   - Idempotent: the same token always produces the same hash
   - The `argon2` package is automatically installed if not present
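The idempotency claim above relies on deriving the Argon2 salt deterministically from the token instead of generating a random salt each run. A minimal Python sketch of that idea follows — it uses only `hashlib` as a stand-in for the real `argon2` CLI call, and the function names are illustrative, not the role's actual code:

```python
import base64
import hashlib


def deterministic_salt(token: str) -> str:
    """Derive a stable salt from the token so re-running yields the same hash."""
    digest = hashlib.sha256(token.encode("utf-8")).digest()
    return base64.b64encode(digest[:16]).decode("ascii")


def hash_token(token: str) -> str:
    """Stand-in for the argon2 CLI: same token + same salt -> same output."""
    salt = deterministic_salt(token)
    # The real role shells out to the argon2 binary with OWASP parameters;
    # PBKDF2 here only demonstrates the deterministic-salt mechanism.
    return hashlib.pbkdf2_hmac("sha256", token.encode(), salt.encode(), 2).hex()


# Idempotent: two runs over the same token produce identical hashes,
# so Ansible sees no change on repeated deployments.
assert hash_token("my-token") == hash_token("my-token")
assert hash_token("my-token") != hash_token("other-token")
```

The trade-off: a deterministic salt leaks whether two deployments share a token, which is acceptable here because the hash never leaves the control node and the host.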
3. Access: `https://vault.jnss.me/admin` (use the plain text token from step 1)

### Configure SSO (Authentik Integration)

> ⚠️ **IMPORTANT: SSO Feature Status (as of December 2025)**
>
> SSO is currently **only available in `vaultwarden/server:testing` images**.
> The stable release (v1.34.3) does **NOT** include SSO functionality.
>
> **Current Deployment Status:**
> - This role is configured with SSO settings ready for when SSO reaches stable release
> - Using `vaultwarden_version: "latest"` (stable) - SSO will not appear
> - To test SSO now: Set `vaultwarden_version: "testing"` (not recommended for production)
> - To wait for stable: Keep the current configuration; SSO will activate automatically when available
>
> **References:**
> - [Vaultwarden Wiki - SSO Documentation](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
> - [Vaultwarden Testing Features](https://github.com/dani-garcia/vaultwarden/wiki#testing-features)
>
> **Decision:** This deployment keeps SSO configured but uses the stable image until the SSO feature is production-ready.

Following the [Authentik integration guide](https://integrations.goauthentik.io/security/vaultwarden/):

1. **In Authentik**: Create an OAuth2/OpenID Provider
   - **Name**: `Vaultwarden`
   - **Client Type**: `Confidential`
   - **Redirect URIs**: `https://vault.jnss.me/identity/connect/oidc-signin` (must be strict/exact match)
   - **Scopes**: Under "Advanced protocol settings", ensure these scope mappings are selected:
     - `authentik default OAuth Mapping: OpenID 'openid'`
     - `authentik default OAuth Mapping: OpenID 'email'`
     - `authentik default OAuth Mapping: OpenID 'profile'`
     - `authentik default OAuth Mapping: OpenID 'offline_access'` ⚠️ **Required**
   - **Access token validity**: Set to more than 5 minutes
   - **Note the Client ID, Client Secret, and application slug** (from the URL or provider settings)

2. **Update Vault Variables**:
   ```yaml
   vault_vaultwarden_sso_client_id: "<client-id-from-authentik>"
   vault_vaultwarden_sso_client_secret: "<client-secret-from-authentik>"
   ```

3. **Enable SSO and set the authority** in `group_vars/homelab/main.yml`:
   ```yaml
   vaultwarden_sso_enabled: true
   # Replace 'vaultwarden' with your actual application slug
   vaultwarden_sso_authority: "https://auth.jnss.me/application/o/vaultwarden/"
   ```

4. **Optional: SSO-Only Mode** (disable password login):
   ```yaml
   vaultwarden_sso_only: true  # Requires SSO, disables email+password
   ```

5. **Redeploy**:
   ```bash
   ansible-playbook rick-infra.yml --tags vaultwarden
   ```

6. **Test**: Log out, enter a verified email on the login page, click "Use single sign-on"
   - **Note**: With `vaultwarden_version: "latest"`, the SSO button will not appear (feature not in stable yet)

## Security Considerations

### Database Access

- Uses PostgreSQL Unix socket with **777 permissions**
- Security maintained via password authentication (scram-sha-256)
- See: `docs/socket-permissions-architecture.md`

### Admin Token

- **Never commit the plain admin token to git**
- Use Ansible Vault for `vault_vaultwarden_admin_token`
- Rotate periodically via the admin panel

### User Registration

- Default: **Disabled** (`vaultwarden_signups_allowed: false`)
- Users must be invited by existing users or created via the admin panel
- Prevents unauthorized account creation

## Maintenance

### Backup

Backup the following:

```bash
# Database backup (via PostgreSQL role)
sudo -u postgres pg_dump vaultwarden > vaultwarden-backup.sql

# Data directory (attachments, icons, etc.)
tar -czf vaultwarden-data-backup.tar.gz /opt/vaultwarden/data
```

### Update Container

```bash
# Pull new image and restart
ansible-playbook rick-infra.yml --tags vaultwarden
```

### View Logs

```bash
# Service logs
journalctl -u vaultwarden -f

# Container logs
podman logs vaultwarden -f
```

### Restart Service

```bash
systemctl restart vaultwarden
```

## Troubleshooting

### Container won't start

```bash
# Check container status
systemctl status vaultwarden

# Check container directly
podman ps -a
podman logs vaultwarden

# Verify database connectivity
sudo -u vaultwarden psql -h /var/run/postgresql -U vaultwarden -d vaultwarden -c "SELECT 1;"
```

### Database connection errors

1. Verify PostgreSQL is running: `systemctl status postgresql`
2. Check the socket exists: `ls -la /var/run/postgresql/.s.PGSQL.5432`
3. Verify socket permissions: should be `srwxrwxrwx` (777)
4. Test the connection as the vaultwarden user (see above)

### Can't access admin panel

1. Verify the admin token is set in the vault file (plain text)
2. Check that the token was hashed successfully during deployment
3. Ensure you're using the plain text token to log in
4. Redeploy to regenerate the hash if needed

### SSO not appearing / not working

**Most Common Issue: Using the Stable Image**

SSO is only available in testing images. Check your deployment:

```bash
# Check current image version
podman inspect vaultwarden --format '{{.ImageName}}'

# Check API config for SSO support
curl -s http://127.0.0.1:8080/api/config | grep -o '"sso":"[^"]*"'
# Empty string "" = SSO not available in this image
# URL present = SSO is available
```
|
||||||
|
|
||||||
|
**If using `vaultwarden_version: "latest"`**: SSO will not appear (feature not in stable yet)
|
||||||
|
- **To test SSO**: Set `vaultwarden_version: "testing"` in role defaults or group/host vars
|
||||||
|
- **For production**: Wait for SSO to reach stable release (recommended)
|
||||||
|
|
||||||
|
**If using `vaultwarden_version: "testing"` and SSO still not working**:
|
||||||
|
|
||||||
|
1. Verify Authentik provider configuration:
|
||||||
|
- Check that `offline_access` scope mapping is added
|
||||||
|
- Verify redirect URI matches exactly: `https://vault.jnss.me/identity/connect/oidc-signin`
|
||||||
|
- Ensure access token validity is > 5 minutes
|
||||||
|
2. Verify SSO authority URL includes full path with slug:
|
||||||
|
- Should be: `https://auth.jnss.me/application/o/<your-slug>/`
|
||||||
|
- Not just: `https://auth.jnss.me`
|
||||||
|
3. Check client ID and secret in vault match Authentik
|
||||||
|
4. Verify all required scopes: `openid email profile offline_access`
|
||||||
|
5. Check Vaultwarden logs for SSO-related errors:
|
||||||
|
```bash
|
||||||
|
podman logs vaultwarden 2>&1 | grep -i sso
|
||||||
|
```
|
||||||
|
6. Test SSO flow: Log out, enter verified email, click "Use single sign-on"
|
||||||
|
|
||||||
|
## File Structure
|
||||||
|
|
||||||
|
```
|
||||||
|
roles/vaultwarden/
|
||||||
|
├── defaults/
|
||||||
|
│ └── main.yml # Default variables
|
||||||
|
├── handlers/
|
||||||
|
│ └── main.yml # Service restart handlers
|
||||||
|
├── meta/
|
||||||
|
│ └── main.yml # Role dependencies
|
||||||
|
├── tasks/
|
||||||
|
│ ├── main.yml # Main orchestration
|
||||||
|
│ ├── user.yml # User and directory setup
|
||||||
|
│ └── database.yml # PostgreSQL setup
|
||||||
|
├── templates/
|
||||||
|
│ ├── vaultwarden.container # Quadlet container definition
|
||||||
|
│ ├── vaultwarden.env.j2 # Environment configuration
|
||||||
|
│ └── vaultwarden.caddy.j2 # Caddy reverse proxy config
|
||||||
|
└── README.md # This file
|
||||||
|
```
|
||||||
|
|
||||||
|
## References
|
||||||
|
|
||||||
|
- [Vaultwarden Documentation](https://github.com/dani-garcia/vaultwarden/wiki)
|
||||||
|
- [PostgreSQL Backend Guide](https://github.com/dani-garcia/vaultwarden/wiki/Using-the-PostgreSQL-Backend)
|
||||||
|
- [SSO Configuration](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-SSO-support-using-OpenId-Connect)
|
||||||
|
- [Socket Permissions Architecture](../../docs/socket-permissions-architecture.md)
|
||||||
|
|
||||||
|
## License
|
||||||
|
|
||||||
|
MIT
|
||||||
171
roles/vaultwarden/VAULT_VARIABLES.md
Normal file
@@ -0,0 +1,171 @@
# Vaultwarden Role - Required Vault Variables

This document lists all vault-encrypted variables required by the Vaultwarden role.

## Required Variables

These variables **must** be defined in your vault file (e.g., `group_vars/homelab/vault.yml`):

### Database Credentials

```yaml
# PostgreSQL database password for the vaultwarden user
vault_vaultwarden_db_password: "your-secure-database-password-here"
```

**Generation**:
```bash
openssl rand -base64 32
```

### Admin Panel Access

```yaml
# Plain text admin token (will be hashed during deployment)
vault_vaultwarden_admin_token: "your-secret-admin-token"
```

**Generation**:
```bash
# Generate a secure random token
openssl rand -base64 32
```

**Important**:
- Store it as **plain text** in the vault (the role hashes it automatically)
- Use the same token to access the `/admin` panel
- The token is hashed on the **control node** using argon2id
- Hashing uses OWASP-recommended settings (m=19456, t=2, p=1)
- Hashing is **idempotent**: the same token always produces the same hash
- The `argon2` package is installed automatically if not present on the control node
- Never commit the vault file unencrypted to git
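
The idempotency comes from a deterministic salt derived from role facts; below is a minimal shell sketch mirroring the role's derivation (the domain value is illustrative, the real salt source also includes the SSO authority, and the argon2 step itself is omitted since the CLI may not be installed):

```shell
# Deterministic salt derivation (sketch mirroring the role's approach).
derive_salt() {
    printf '%s' "$1" | sha256sum | cut -d' ' -f1 | head -c 22
}

s1=$(derive_salt "vault.example.com")   # illustrative domain
s2=$(derive_salt "vault.example.com")
[ "$s1" = "$s2" ] && echo "deterministic: same input, same salt"
```

Because the salt never changes for a given domain, re-running the deployment reproduces the identical argon2 hash, so the environment file is not rewritten needlessly.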

## Optional Variables

These variables are only needed if you enable specific features:

### SMTP Configuration

Required if `vaultwarden_smtp_enabled: true`:

```yaml
# SMTP password for sending emails
vault_vaultwarden_smtp_password: "smtp-password-here"
```

### SSO Integration (Authentik)

> ⚠️ **SSO Feature Status (December 2025)**
>
> SSO is only available in `vaultwarden/server:testing` images (not in stable yet).
> This role ships with SSO configured, ready for when it reaches a stable release.
>
> The current deployment uses `vaultwarden_version: "latest"` (stable); the SSO
> credentials below are configured, but SSO will not appear until the feature reaches stable.

Required if `vaultwarden_sso_enabled: true`:

```yaml
# OAuth2 Client ID from Authentik
vault_vaultwarden_sso_client_id: "your-client-id-here"

# OAuth2 Client Secret from Authentik
vault_vaultwarden_sso_client_secret: "your-client-secret-here"
```

**Setup** (following the [Authentik integration guide](https://integrations.goauthentik.io/security/vaultwarden/)):
1. Create an OAuth2/OpenID Provider in Authentik:
   - Redirect URI: `https://vault.jnss.me/identity/connect/oidc-signin` (exact match)
   - Add the scope mappings `openid`, `email`, `profile`, `offline_access` (all required)
   - Access token validity: greater than 5 minutes
   - Note the **application slug** from the provider URL
2. Copy the Client ID and Secret from Authentik
3. Add the credentials to the vault file
4. Set the SSO authority URL in the role configuration:
   ```yaml
   vaultwarden_sso_enabled: true
   vaultwarden_sso_authority: "https://auth.jnss.me/application/o/<your-slug>/"
   ```
5. Deploy the role
6. **Wait for SSO to reach stable**, or use `vaultwarden_version: "testing"` to test now

**Important**:
- The SSO authority must include the full path with the application slug
- The `offline_access` scope mapping is **required** for Vaultwarden SSO
- The access token must be valid for more than 5 minutes
- SSO is configured and ready but will activate when stable release includes it
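
One way to sanity-check the authority format before deploying: a well-formed OIDC authority serves a discovery document under `/.well-known/openid-configuration`. The sketch below only builds that URL (the domain and slug are illustrative); on a live host you would fetch it and confirm the `issuer` field:

```shell
# Build the OIDC discovery URL from an authority (illustrative slug/domain).
AUTHORITY="https://auth.example.com/application/o/vaultwarden/"
DISCOVERY="${AUTHORITY%/}/.well-known/openid-configuration"
echo "$DISCOVERY"
# On a live host:
#   curl -s "$DISCOVERY" | grep -o '"issuer": *"[^"]*"'
```

If the discovery URL returns 404, the authority is almost certainly missing the application slug.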

## Example Vault File

```yaml
---
# group_vars/homelab/vault.yml (encrypted with ansible-vault)

# Vaultwarden - Database
vault_vaultwarden_db_password: "xK9mP2nR5tY8wQ3vZ7cB6sA4dF1gH0jL"

# Vaultwarden - Admin Panel (plain text, hashed automatically)
vault_vaultwarden_admin_token: "MySecureAdminToken123!"

# Vaultwarden - SMTP (optional)
vault_vaultwarden_smtp_password: "smtp-app-password-here"

# Vaultwarden - SSO (optional)
vault_vaultwarden_sso_client_id: "vaultwarden"
vault_vaultwarden_sso_client_secret: "sso-secret-from-authentik"
```

## Vault File Management

### Encrypt Vault File

```bash
ansible-vault encrypt group_vars/homelab/vault.yml
```

### Edit Vault File

```bash
ansible-vault edit group_vars/homelab/vault.yml
```

### Decrypt Vault File (temporary)

```bash
ansible-vault decrypt group_vars/homelab/vault.yml
# Make changes
ansible-vault encrypt group_vars/homelab/vault.yml
```

## Security Best Practices

1. **Never commit unencrypted vault files**
2. **Use strong passwords** (32+ characters for the database password and admin token)
3. **Rotate credentials periodically** (especially the admin token)
4. **Limit vault password access** (use a password manager)
5. **Use separate passwords** for different services
6. **Back up vault password** (secure location, not in git)
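
For example, the role's two required secrets can be generated in one pass and pasted into the encrypted vault file (this assumes `openssl` is installed; any cryptographically strong generator works):

```shell
# Generate vault entries with strong random values (sketch; paste the
# output into your encrypted vault file, never into plain-text files).
for var in vault_vaultwarden_db_password vault_vaultwarden_admin_token; do
    printf '%s: "%s"\n' "$var" "$(openssl rand -base64 32)"
done
```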

## Verifying Variables

Test whether the variables load correctly:

```bash
ansible -m debug -a "var=vault_vaultwarden_db_password" homelab --ask-vault-pass
```

## Troubleshooting

### Variable not found error

- Ensure the vault file is in the correct location: `group_vars/homelab/vault.yml`
- Verify the file is encrypted: `file group_vars/homelab/vault.yml`
- Check the variable name matches exactly (case-sensitive)
- Provide the vault password with `--ask-vault-pass`

### Admin token not working

- Verify the plain text token in the vault matches what you're entering
- Check for extra whitespace in the vault file
- Ensure the token was hashed successfully during deployment (check the ansible output)
- Try redeploying the role to regenerate the hash
109
roles/vaultwarden/defaults/main.yml
Normal file
@@ -0,0 +1,109 @@
---
# =================================================================
# Vaultwarden Password Manager Role - Default Variables
# =================================================================
# Self-contained Vaultwarden deployment with Podman and PostgreSQL

# =================================================================
# Service Configuration
# =================================================================

# Service user and directories
vaultwarden_user: vaultwarden
vaultwarden_group: vaultwarden
vaultwarden_home: /opt/vaultwarden
vaultwarden_data_dir: "{{ vaultwarden_home }}/data"

# Container configuration
# NOTE: The SSO feature is only available in the "testing" tag (as of Dec 2025).
# Using "latest" (stable) means SSO will not appear even if configured.
# The SSO settings below are ready for when the feature reaches stable.
vaultwarden_version: "latest"
vaultwarden_image: "vaultwarden/server"

# Service management
vaultwarden_service_enabled: true
vaultwarden_service_state: "started"

# =================================================================
# Database Configuration (Self-managed)
# =================================================================

vaultwarden_db_name: "vaultwarden"
vaultwarden_db_user: "vaultwarden"
vaultwarden_db_password: "{{ vault_vaultwarden_db_password }}"

# =================================================================
# Network Configuration
# =================================================================

vaultwarden_domain: "vault.jnss.me"
vaultwarden_http_port: 8080

# =================================================================
# Vaultwarden Core Configuration
# =================================================================

# Admin panel access token (plain text, hashed during deployment)
vaultwarden_admin_token_plain: "{{ vault_vaultwarden_admin_token }}"

# Registration and invitation controls
vaultwarden_signups_allowed: false      # Disable open registration
vaultwarden_invitations_allowed: true   # Allow existing users to invite
vaultwarden_show_password_hint: false   # Don't show password hints

# WebSocket support for live sync
vaultwarden_websocket_enabled: true

# =================================================================
# Email Configuration (Optional)
# =================================================================

vaultwarden_smtp_enabled: true
vaultwarden_smtp_host: "smtp.titan.email"
vaultwarden_smtp_port: 587
vaultwarden_smtp_from: "hello@jnss.me"
vaultwarden_smtp_username: "hello@jnss.me"
vaultwarden_smtp_password: "{{ vault_vaultwarden_smtp_password | default('') }}"
vaultwarden_smtp_security: "starttls"   # Options: starttls, force_tls, off

# =================================================================
# SSO Configuration (Optional - Authentik Integration)
# =================================================================

vaultwarden_sso_enabled: false

# SSO Provider Configuration (Authentik)
vaultwarden_sso_client_id: "{{ vault_vaultwarden_sso_client_id | default('') }}"
vaultwarden_sso_client_secret: "{{ vault_vaultwarden_sso_client_secret | default('') }}"
# The authority must include the full path with the application slug
vaultwarden_sso_authority: "https://{{ authentik_domain }}/application/o/vaultwarden/"
vaultwarden_sso_scopes: "openid email profile offline_access"

# Additional SSO settings (per the Authentik integration guide)
vaultwarden_sso_only: false                # true disables email+password login and requires SSO
vaultwarden_sso_signups_match_email: true  # Match first SSO login to an existing account by email
vaultwarden_sso_allow_unknown_email_verification: false
vaultwarden_sso_client_cache_expiration: 0

# Domain whitelist for SSO signups (comma-separated domains, empty = all)
vaultwarden_sso_signups_domains_whitelist: ""

# =================================================================
# Caddy Integration
# =================================================================

# Caddy configuration (assumes the caddy role provides these variables)
caddy_sites_enabled_dir: "/etc/caddy/sites-enabled"
caddy_log_dir: "/var/log/caddy"
caddy_user: "caddy"

# =================================================================
# Infrastructure Dependencies (Read-only)
# =================================================================

# PostgreSQL socket configuration (managed by the postgresql role)
postgresql_unix_socket_directories: "/var/run/postgresql"
postgresql_client_group: "postgres-clients"
postgresql_port: 5432
postgresql_unix_socket_enabled: true
roles/vaultwarden/handlers/main.yml
Normal file
16
roles/vaultwarden/handlers/main.yml
Normal file
@@ -0,0 +1,16 @@
|
---
# Vaultwarden Password Manager - Service Handlers

- name: reload systemd
  systemd:
    daemon_reload: true

- name: restart vaultwarden
  systemd:
    name: vaultwarden
    state: restarted

- name: reload caddy
  systemd:
    name: caddy
    state: reloaded
25
roles/vaultwarden/meta/main.yml
Normal file
@@ -0,0 +1,25 @@
---
# Vaultwarden Password Manager - Role Metadata

dependencies:
  - role: postgresql
  - role: caddy

galaxy_info:
  author: Rick Infrastructure Team
  description: Vaultwarden password manager deployment with PostgreSQL and Caddy
  license: MIT
  min_ansible_version: "2.14"

  platforms:
    - name: ArchLinux
      versions:
        - all

  galaxy_tags:
    - vaultwarden
    - bitwarden
    - password-manager
    - security
    - postgresql
    - podman
51
roles/vaultwarden/tasks/database.yml
Normal file
@@ -0,0 +1,51 @@
---
# Database setup for Vaultwarden - PostgreSQL via Unix socket

- name: Test PostgreSQL socket connectivity
  postgresql_ping:
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: "{{ vaultwarden_user }}"
  become: true
  become_user: "{{ vaultwarden_user }}"

- name: Create Vaultwarden database user via socket
  postgresql_user:
    name: "{{ vaultwarden_db_user }}"
    password: "{{ vaultwarden_db_password }}"
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Create Vaultwarden database via socket
  postgresql_db:
    name: "{{ vaultwarden_db_name }}"
    owner: "{{ vaultwarden_db_user }}"
    encoding: UTF8
    template: template0
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Grant Vaultwarden database privileges
  postgresql_privs:
    db: "{{ vaultwarden_db_name }}"
    privs: ALL
    type: database
    role: "{{ vaultwarden_db_user }}"
    login_unix_socket: "{{ postgresql_unix_socket_directories }}"
    login_user: postgres
  become: true
  become_user: postgres

- name: Display database setup status
  debug:
    msg: |
      Vaultwarden database setup complete!

      Database: {{ vaultwarden_db_name }}
      User: {{ vaultwarden_db_user }}
      Connection: Unix socket ({{ postgresql_unix_socket_directories }})

      Ready for Vaultwarden container deployment
57
roles/vaultwarden/tasks/hash_admin_token.yml
Normal file
@@ -0,0 +1,57 @@
---
# Hash the admin token on the Ansible control node using argon2

- name: Check if argon2 is available on control node
  command: which argon2
  register: argon2_check
  delegate_to: localhost
  become: false
  changed_when: false
  failed_when: false

- name: Install argon2 on control node if not present
  package:
    name: argon2
    state: present
  delegate_to: localhost
  become: false
  when: argon2_check.rc != 0
  run_once: true

- name: Generate deterministic salt source from domain
  set_fact:
    vaultwarden_salt_source: "{{ vaultwarden_domain }}-{{ vaultwarden_sso_authority }}"
  no_log: true

- name: Derive a 22-character salt for argon2 (hex from sha256)
  shell: echo -n "{{ vaultwarden_salt_source }}" | sha256sum | cut -d' ' -f1 | head -c 22
  register: admin_token_salt
  delegate_to: localhost
  become: false
  changed_when: false
  no_log: true

- name: Hash admin token using argon2 (OWASP preset)
  shell: echo -n "{{ vaultwarden_admin_token_plain }}" | argon2 "{{ admin_token_salt.stdout }}" -id -t 2 -k 19456 -p 1 -e
  register: admin_token_hash_result
  delegate_to: localhost
  become: false
  changed_when: false
  no_log: true

- name: Extract hashed admin token
  set_fact:
    vaultwarden_admin_token_hashed: "{{ admin_token_hash_result.stdout | trim }}"
  no_log: true

- name: Display token hash status
  debug:
    msg: |
      Admin token hashed successfully on the control node

      Hash algorithm: argon2id
      Preset: OWASP (m=19456, t=2, p=1)
      Format: PHC string (Vaultwarden compatible)
      Idempotent: the same token always produces the same hash

      The hashed token will be used in the environment configuration
107
roles/vaultwarden/tasks/main.yml
Normal file
@@ -0,0 +1,107 @@
---
# Vaultwarden Password Manager Role - Main Tasks
# Self-contained deployment with Podman and Unix sockets

- name: Setup vaultwarden user and directories
  include_tasks: user.yml
  tags: [user, setup]

- name: Setup database access and permissions
  include_tasks: database.yml
  tags: [database, setup]

- name: Pull vaultwarden container image
  containers.podman.podman_image:
    name: "{{ vaultwarden_image }}:{{ vaultwarden_version }}"
    state: present
  tags: [containers, image-pull]

- name: Hash admin token on control node
  include_tasks: hash_admin_token.yml
  tags: [config, admin-token]

- name: Deploy environment configuration
  template:
    src: vaultwarden.env.j2
    dest: "{{ vaultwarden_home }}/.env"
    owner: "{{ vaultwarden_user }}"
    group: "{{ vaultwarden_group }}"
    mode: '0600'
    backup: true
  notify:
    - restart vaultwarden
  tags: [config]

- name: Create Quadlet systemd directory
  file:
    path: /etc/containers/systemd
    state: directory
    mode: '0755'

- name: Deploy Quadlet container file
  template:
    src: vaultwarden.container
    dest: /etc/containers/systemd/vaultwarden.container
    mode: '0644'
  notify:
    - reload systemd
    - restart vaultwarden
  tags: [containers, deployment]

- name: Deploy Caddy configuration
  template:
    src: vaultwarden.caddy.j2
    dest: "{{ caddy_sites_enabled_dir }}/vaultwarden.caddy"
    owner: root
    group: "{{ caddy_user }}"
    mode: '0644'
  notify: reload caddy
  tags: [caddy, reverse-proxy]

- name: Ensure PostgreSQL is running
  systemd:
    name: postgresql
    state: started

- name: Wait for PostgreSQL socket to be ready
  wait_for:
    path: "{{ postgresql_unix_socket_directories }}/.s.PGSQL.{{ postgresql_port }}"
    timeout: 30
  when: postgresql_unix_socket_enabled

- name: Enable and start Vaultwarden service (system scope)
  systemd:
    name: vaultwarden
    enabled: "{{ vaultwarden_service_enabled }}"
    state: "{{ vaultwarden_service_state }}"
    daemon_reload: true
  tags: [containers, service]

- name: Wait for Vaultwarden to be ready
  uri:
    url: "http://127.0.0.1:{{ vaultwarden_http_port }}/"
    method: GET
    status_code: [200, 302]
    timeout: 30
  retries: 10
  delay: 15
  register: vaultwarden_health_check
  tags: [verification, health-check]

- name: Display Vaultwarden deployment status
  debug:
    msg: |
      Vaultwarden Password Manager deployed successfully!

      Domain: {{ vaultwarden_domain }}
      Database: {{ vaultwarden_db_name }} (Unix socket)
      Container: {{ vaultwarden_image }}:{{ vaultwarden_version }}
      Admin Panel: https://{{ vaultwarden_domain }}/admin

      Ready for user registration and password management!

      Next Steps:
      - Access https://{{ vaultwarden_domain }}/admin with your admin token
      - Configure additional settings (SMTP, SSO, etc.)
      - Invite users or create accounts
  tags: [verification]
28
roles/vaultwarden/tasks/user.yml
Normal file
@@ -0,0 +1,28 @@
---
# Vaultwarden User Management - Service-Specific User Setup

- name: Create vaultwarden group
  group:
    name: "{{ vaultwarden_group }}"
    system: true

- name: Create vaultwarden user
  user:
    name: "{{ vaultwarden_user }}"
    group: "{{ vaultwarden_group }}"
    system: true
    shell: /bin/bash
    home: "{{ vaultwarden_home }}"
    create_home: true
    comment: "Vaultwarden password manager service"

- name: Create vaultwarden directories
  file:
    path: "{{ item }}"
    state: directory
    owner: "{{ vaultwarden_user }}"
    group: "{{ vaultwarden_group }}"
    mode: '0755'
  loop:
    - "{{ vaultwarden_home }}"
    - "{{ vaultwarden_data_dir }}"
35
roles/vaultwarden/templates/vaultwarden.caddy.j2
Normal file
@@ -0,0 +1,35 @@
# Vaultwarden Password Manager
{{ vaultwarden_domain }} {
    # Notifications endpoint (WebSocket for live sync)
    @websocket {
        path /notifications/hub
    }
    reverse_proxy @websocket http://127.0.0.1:{{ vaultwarden_http_port }} {
        header_up Upgrade {http.request.header.Upgrade}
        header_up Connection {http.request.header.Connection}
    }

    # Regular HTTP traffic
    reverse_proxy http://127.0.0.1:{{ vaultwarden_http_port }} {
        header_up Host {host}
        header_up X-Real-IP {remote_host}
        header_up X-Forwarded-Proto https
        header_up X-Forwarded-For {remote_host}
    }

    # Security headers
    header {
        X-Frame-Options SAMEORIGIN
        X-Content-Type-Options nosniff
        X-XSS-Protection "1; mode=block"
        Referrer-Policy strict-origin-when-cross-origin
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    }

    # Logging
    log {
        output file {{ caddy_log_dir }}/vaultwarden.log
        level INFO
        format json
    }
}
26
roles/vaultwarden/templates/vaultwarden.container
Normal file
@@ -0,0 +1,26 @@
[Unit]
Description=Vaultwarden Password Manager Container
After=network-online.target postgresql.service
Wants=network-online.target

[Container]
ContainerName=vaultwarden
Image={{ vaultwarden_image }}:{{ vaultwarden_version }}
EnvironmentFile={{ vaultwarden_home }}/.env

# Volume mounts
# Application data (attachments, sends, icons, etc.; the database lives in PostgreSQL)
Volume={{ vaultwarden_data_dir }}:/data:Z

# Infrastructure socket (PostgreSQL access with 777 permissions on the host)
Volume={{ postgresql_unix_socket_directories }}:{{ postgresql_unix_socket_directories }}:Z

# Expose the HTTP port to localhost only (Caddy reverse proxies)
PublishPort=127.0.0.1:{{ vaultwarden_http_port }}:80

[Service]
Restart=always
TimeoutStartSec=300

[Install]
WantedBy=multi-user.target
73
roles/vaultwarden/templates/vaultwarden.env.j2
Normal file
@@ -0,0 +1,73 @@
# Vaultwarden Environment Configuration
# Generated by Ansible - DO NOT EDIT MANUALLY

# =================================================================
# Database Configuration (PostgreSQL via Unix Socket)
# =================================================================
DATABASE_URL=postgresql://{{ vaultwarden_db_user }}:{{ vaultwarden_db_password }}@/{{ vaultwarden_db_name }}?host={{ postgresql_unix_socket_directories }}

# =================================================================
# Domain Configuration
# =================================================================
DOMAIN=https://{{ vaultwarden_domain }}

# =================================================================
# Admin Configuration
# =================================================================
ADMIN_TOKEN={{ vaultwarden_admin_token_hashed }}

# =================================================================
# Registration and Invitation Controls
# =================================================================
SIGNUPS_ALLOWED={{ vaultwarden_signups_allowed | lower }}
INVITATIONS_ALLOWED={{ vaultwarden_invitations_allowed | lower }}
SHOW_PASSWORD_HINT={{ vaultwarden_show_password_hint | lower }}

# =================================================================
# WebSocket Configuration (for live sync)
# =================================================================
WEBSOCKET_ENABLED={{ vaultwarden_websocket_enabled | lower }}

# =================================================================
# SMTP Configuration (Optional)
# =================================================================
{% if vaultwarden_smtp_enabled %}
SMTP_HOST={{ vaultwarden_smtp_host }}
SMTP_PORT={{ vaultwarden_smtp_port }}
SMTP_FROM={{ vaultwarden_smtp_from }}
SMTP_SECURITY={{ vaultwarden_smtp_security }}
{% if vaultwarden_smtp_username %}
SMTP_USERNAME={{ vaultwarden_smtp_username }}
SMTP_PASSWORD={{ vaultwarden_smtp_password }}
{% endif %}
{% endif %}

# =================================================================
# SSO Configuration (Optional - Authentik Integration)
# =================================================================
{% if vaultwarden_sso_enabled %}
SSO_ENABLED=true
SSO_ONLY={{ vaultwarden_sso_only | lower }}
SSO_CLIENT_ID={{ vaultwarden_sso_client_id }}
SSO_CLIENT_SECRET={{ vaultwarden_sso_client_secret }}
SSO_AUTHORITY={{ vaultwarden_sso_authority }}
SSO_SCOPES="{{ vaultwarden_sso_scopes }}"
SSO_SIGNUPS_MATCH_EMAIL={{ vaultwarden_sso_signups_match_email | lower }}
SSO_ALLOW_UNKNOWN_EMAIL_VERIFICATION={{ vaultwarden_sso_allow_unknown_email_verification | lower }}
SSO_CLIENT_CACHE_EXPIRATION={{ vaultwarden_sso_client_cache_expiration }}
{% if vaultwarden_sso_signups_domains_whitelist %}
SSO_SIGNUPS_DOMAINS_WHITELIST={{ vaultwarden_sso_signups_domains_whitelist }}
{% endif %}
{% endif %}

# =================================================================
# Security and Performance
# =================================================================
# Don't require email verification at signup (signups are invitation/admin only)
SIGNUPS_VERIFY=false

# Log level (trace, debug, info, warn, error, off)
LOG_LEVEL=info

# Rocket configuration
ROCKET_WORKERS=10
Block a user