chore: reorganize repository structure

- Move docker-compose files to .docker/compose/
- Move docker-entrypoint.sh to .docker/
- Move DOCKER.md to .docker/README.md
- Move 16 implementation docs to docs/implementation/
- Delete test artifacts (block_test.txt, caddy_*.json)
- Update all references in Dockerfile, Makefile, tasks, scripts
- Add .github/instructions/structure.instructions.md for enforcement
- Update CHANGELOG.md

Root level reduced from 81 items to ~35 visible items.
This commit is contained in:
GitHub Actions
2025-12-21 04:57:31 +00:00
parent af8384046c
commit 05c2045f06
44 changed files with 492 additions and 395 deletions
@@ -0,0 +1,246 @@
# Docker Deployment Guide
Charon is designed for Docker-first deployment, making it easy for home users to run Caddy without learning Caddyfile syntax.
## Directory Structure
```text
.docker/
├── compose/                        # Docker Compose files
│   ├── docker-compose.yml          # Main production compose
│   ├── docker-compose.dev.yml      # Development overrides
│   ├── docker-compose.local.yml    # Local development
│   ├── docker-compose.remote.yml   # Remote deployment
│   └── docker-compose.override.yml # Personal overrides (gitignored)
├── docker-entrypoint.sh            # Container entrypoint script
└── README.md                       # This file
```
## Quick Start
```bash
# Clone the repository
git clone https://github.com/Wikid82/charon.git
cd charon
# Start the stack (using new location)
docker compose -f .docker/compose/docker-compose.yml up -d
# Access the UI
open http://localhost:8080
```
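Before opening the UI, you can wait for the backend to come up. A small sketch of a polling helper, using the same `/api/v1/health` endpoint that the container healthcheck probes:

```shell
# Poll a command until it succeeds or the attempt budget runs out.
wait_for() {  # usage: wait_for <attempts> <command...>
  n=$1; shift
  i=1
  while [ "$i" -le "$n" ]; do
    "$@" >/dev/null 2>&1 && return 0
    i=$((i+1))
    sleep 1
  done
  return 1
}

# e.g. give Charon up to 30 seconds to become healthy:
# wait_for 30 wget -qO- http://localhost:8080/api/v1/health && echo "Charon is ready"
```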
## Usage
When running docker-compose commands, specify the compose file location:
```bash
# Production
docker compose -f .docker/compose/docker-compose.yml up -d
# Development
docker compose -f .docker/compose/docker-compose.yml -f .docker/compose/docker-compose.dev.yml up -d
# Local development
docker compose -f .docker/compose/docker-compose.local.yml up -d
# With personal overrides
docker compose -f .docker/compose/docker-compose.yml -f .docker/compose/docker-compose.override.yml up -d
```
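Typing the `-f` flags every time gets tedious. Docker Compose also reads the `COMPOSE_FILE` environment variable, where multiple files are joined with `:` (separator configurable via `COMPOSE_PATH_SEPARATOR`). A sketch for a development shell:

```shell
# Set once per shell; subsequent `docker compose` commands need no -f flags.
export COMPOSE_FILE=.docker/compose/docker-compose.yml:.docker/compose/docker-compose.dev.yml
# A plain `docker compose up -d` now applies both files in order, dev overrides last.
```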
## Architecture
Charon runs as a **single container** that includes:
1. **Caddy Server**: The reverse proxy engine (ports 80/443).
2. **Charon Backend**: The Go API that manages Caddy via its API (binary: `charon`, `cpmp` symlink preserved).
3. **Charon Frontend**: The React web interface (port 8080).
This unified architecture simplifies deployment, updates, and data management.
```text
┌──────────────────────────────────────────┐
│        Container (charon / cpmp)         │
│                                          │
│  ┌──────────┐   API    ┌──────────────┐  │
│  │  Caddy   │◄──:2019──┤  Charon App  │  │
│  │ (Proxy)  │          │  (Manager)   │  │
│  └────┬─────┘          └──────┬───────┘  │
│       │                       │          │
└───────┼───────────────────────┼──────────┘
        │ :80, :443             │ :8080
        ▼                       ▼
    Internet                 Web UI
```
## Configuration
### Volumes
Persist your data by mounting these volumes:
| Host Path | Container Path | Description |
|-----------|----------------|-------------|
| `./data` | `/app/data` | **Critical**. Stores the SQLite database (default `charon.db`, `cpm.db` fallback) and application logs. |
| `./caddy_data` | `/data` | **Critical**. Stores Caddy's SSL certificates and keys. |
| `./caddy_config` | `/config` | Stores Caddy's autosave configuration. |
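Since both `data` and `caddy_data` are marked critical, back them up together. A minimal sketch, run from the directory holding the bind mounts (for a live SQLite database, stop the stack first or use `sqlite3 ... ".backup"`):

```shell
# Archive the three volumes from the table above into a dated tarball.
mkdir -p data caddy_data caddy_config       # no-ops if the mounts already exist
BACKUP="charon-backup-$(date +%F).tar.gz"
tar czf "$BACKUP" data caddy_data caddy_config
echo "Wrote $BACKUP"
```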
### Environment Variables
Configure the application via `docker-compose.yml`:
| Variable | Default | Description |
|----------|---------|-------------|
| `CHARON_ENV` | `production` | Set to `development` for verbose logging (`CPM_ENV` supported for backward compatibility). |
| `CHARON_HTTP_PORT` | `8080` | Port for the Web UI (`CPM_HTTP_PORT` supported for backward compatibility). |
| `CHARON_DB_PATH` | `/app/data/charon.db` | Path to the SQLite database (`CPM_DB_PATH` supported for backward compatibility). |
| `CHARON_CADDY_ADMIN_API` | `http://localhost:2019` | Internal URL for Caddy API (`CPM_CADDY_ADMIN_API` supported for backward compatibility). |
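The fallback order can be pictured with plain shell parameter expansion. This is only a sketch of the documented precedence, not the backend's actual lookup code:

```shell
# CHARON_* wins; CPM_* is the legacy fallback; a built-in default comes last.
CPM_HTTP_PORT=9090
CHARON_HTTP_PORT=8080
echo "${CHARON_HTTP_PORT:-${CPM_HTTP_PORT:-8080}}"   # → 8080 (CHARON_ takes precedence)
unset CHARON_HTTP_PORT
echo "${CHARON_HTTP_PORT:-${CPM_HTTP_PORT:-8080}}"   # → 9090 (legacy fallback)
```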
## NAS Deployment Guides
### Synology (Container Manager / Docker)
1. **Prepare Folders**: Create a folder `docker/charon` (or `docker/cpmp` for backward compatibility) and subfolders `data`, `caddy_data`, and `caddy_config`.
2. **Download Image**: Search for `ghcr.io/wikid82/charon` in the Registry and download the `latest` tag.
3. **Launch Container**:
- **Network**: Use `Host` mode (recommended for Caddy to see real client IPs) OR bridge mode mapping ports `80:80`, `443:443`, and `8080:8080`.
- **Volume Settings**:
- `/docker/charon/data` -> `/app/data` (or `/docker/cpmp/data` -> `/app/data` for backward compatibility)
- `/docker/charon/caddy_data` -> `/data` (or `/docker/cpmp/caddy_data` -> `/data` for backward compatibility)
- `/docker/charon/caddy_config` -> `/config` (or `/docker/cpmp/caddy_config` -> `/config` for backward compatibility)
- **Environment**: Add `CHARON_ENV=production` (or `CPM_ENV=production` for backward compatibility).
4. **Finish**: Start the container and access `http://YOUR_NAS_IP:8080`.
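For Synology setups that prefer Compose over the GUI, the steps above roughly correspond to this fragment. The `/volume1/docker/charon` prefix is the conventional shared-folder path and an assumption here; adjust to your NAS:

```yaml
services:
  charon:
    image: ghcr.io/wikid82/charon:latest
    restart: unless-stopped
    network_mode: host            # or map 80:80, 443:443, 8080:8080 in bridge mode
    environment:
      - CHARON_ENV=production
    volumes:
      - /volume1/docker/charon/data:/app/data
      - /volume1/docker/charon/caddy_data:/data
      - /volume1/docker/charon/caddy_config:/config
```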
### Unraid
1. **Community Apps**: (Coming Soon) Search for "charon".
2. **Manual Install**:
- Click **Add Container**.
- **Name**: Charon
- **Repository**: `ghcr.io/wikid82/charon:latest`
- **Network Type**: Bridge
- **WebUI**: `http://[IP]:[PORT:8080]`
- **Port mappings**:
- Container Port: `80` -> Host Port: `80`
- Container Port: `443` -> Host Port: `443`
- Container Port: `8080` -> Host Port: `8080`
- **Paths**:
- `/mnt/user/appdata/charon/data` -> `/app/data` (or `/mnt/user/appdata/cpmp/data` -> `/app/data` for backward compatibility)
- `/mnt/user/appdata/charon/caddy_data` -> `/data` (or `/mnt/user/appdata/cpmp/caddy_data` -> `/data` for backward compatibility)
- `/mnt/user/appdata/charon/caddy_config` -> `/config` (or `/mnt/user/appdata/cpmp/caddy_config` -> `/config` for backward compatibility)
3. **Apply**: Click Done to pull and start.
## Troubleshooting
### App can't reach Caddy
**Symptom**: "Caddy unreachable" errors in logs
**Solution**: Since both run in the same container, this usually means Caddy failed to start. Check logs:
```bash
docker compose -f .docker/compose/docker-compose.yml logs charon
```
### Certificates not working
**Symptom**: HTTP works but HTTPS fails
**Check**:
1. Ports 80 and 443 are accessible from the internet
2. DNS points to your server
3. Caddy logs: `docker compose -f .docker/compose/docker-compose.yml logs charon | grep -i acme`
### Config changes not applied
**Symptom**: Changes in UI don't affect routing
**Debug**:
```bash
# View current Caddy config
curl http://localhost:2019/config/ | jq
# Check Charon logs
docker compose -f .docker/compose/docker-compose.yml logs charon
# Manual config reload
curl -X POST http://localhost:8080/api/v1/caddy/reload
```
## Updating
Pull the latest images and restart:
```bash
docker compose -f .docker/compose/docker-compose.yml pull
docker compose -f .docker/compose/docker-compose.yml up -d
```
For specific versions:
```bash
# Edit docker-compose.yml to pin the version, e.g.:
#   image: ghcr.io/wikid82/charon:v1.0.0
docker compose -f .docker/compose/docker-compose.yml up -d
```
## Building from Source
```bash
# Build multi-arch images
docker buildx build --platform linux/amd64,linux/arm64 -t charon:local .
# Or use Make
make docker-build
```
## Security Considerations
1. **Caddy admin API**: Keep port 2019 internal (not exposed in production compose)
2. **Management UI**: Add authentication (Issue #7) before exposing to internet
3. **Certificates**: Caddy stores private keys in `caddy_data` - protect this volume
4. **Database**: SQLite file contains all config - backup regularly
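Items 3 and 4 in practice, as a sketch against the default bind-mount layout (a cold `cp` is shown for simplicity; for a live database prefer `sqlite3 charon.db ".backup"` or stop the stack first):

```shell
mkdir -p caddy_data data
chmod 700 caddy_data     # private keys: owner-only access
if [ -f data/charon.db ]; then
  cp data/charon.db "data/charon-$(date +%F).db"   # simple cold-copy backup
else
  echo "no database yet; is the stack initialized?"
fi
```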
## Integration with Existing Caddy
If you already have Caddy running, you can point Charon to it:
```yaml
environment:
  - CHARON_CADDY_ADMIN_API=http://your-caddy-host:2019  # legacy CPM_CADDY_ADMIN_API also works
```
**Warning**: Charon will replace Caddy's entire configuration. Backup first!
## Performance Tuning
For high-traffic deployments:
```yaml
# docker-compose.yml
services:
  charon:
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M
```
## Important Notes
- **Override Location Change**: The `docker-compose.override.yml` file has moved from
the project root to `.docker/compose/`. Update your local workflows accordingly.
- Personal override files (`.docker/compose/docker-compose.override.yml`) are gitignored
and should contain machine-specific configurations only.
## Next Steps
- Configure your first proxy host via UI
- Verify automatic HTTPS (certificates are issued automatically once DNS resolves)
- Add authentication (Issue #7)
- Integrate CrowdSec (Issue #15)
@@ -0,0 +1,50 @@
# Docker Compose Files
This directory contains all Docker Compose configuration variants for Charon.
## File Descriptions
| File | Purpose |
|------|---------|
| `docker-compose.yml` | Main production compose configuration. Base services and production settings. |
| `docker-compose.dev.yml` | Development overrides. Enables hot-reload, debug logging, and development tools. |
| `docker-compose.local.yml` | Local development configuration. Standalone setup for local testing. |
| `docker-compose.remote.yml` | Remote deployment configuration. Settings for deploying to remote servers. |
| `docker-compose.override.yml` | Personal local overrides. **Gitignored** - use for machine-specific settings. |
## Usage Patterns
### Production Deployment
```bash
docker compose -f .docker/compose/docker-compose.yml up -d
```
### Development Mode
```bash
docker compose -f .docker/compose/docker-compose.yml \
-f .docker/compose/docker-compose.dev.yml up -d
```
### Local Testing
```bash
docker compose -f .docker/compose/docker-compose.local.yml up -d
```
### With Personal Overrides
Create your own `docker-compose.override.yml` in this directory for personal
configurations (port mappings, volume paths, etc.). This file is gitignored.
```bash
docker compose -f .docker/compose/docker-compose.yml \
-f .docker/compose/docker-compose.override.yml up -d
```
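A minimal override sketch (values are illustrative; the service name matches `docker-compose.yml`):

```yaml
# .docker/compose/docker-compose.override.yml — machine-specific, never committed
services:
  charon:
    ports:
      - "8081:8080"                  # publish the UI on an additional host port
    volumes:
      - /srv/charon/data:/app/data   # override the data mount for this machine
```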
## Notes
- Always use the `-f` flag to specify compose file paths from the project root
- The override file is automatically ignored by git - do not commit personal settings
- See project tasks in VS Code for convenient pre-configured commands
@@ -0,0 +1,40 @@
# Development override - use with: docker compose -f docker-compose.yml -f docker-compose.dev.yml up
services:
  charon:
    image: ghcr.io/wikid82/charon:dev
    # Development: expose Caddy admin API externally for debugging
    ports:
      - "80:80"
      - "443:443"
      - "443:443/udp"
      - "8080:8080"
      - "2019:2019" # Caddy admin API (dev only)
    environment:
      - CHARON_ENV=development
      - CPM_ENV=development
      - CHARON_HTTP_PORT=8080
      - CPM_HTTP_PORT=8080
      - CHARON_DB_PATH=/app/data/charon.db
      - CHARON_FRONTEND_DIR=/app/frontend/dist
      - CHARON_CADDY_ADMIN_API=http://localhost:2019
      - CHARON_CADDY_CONFIG_DIR=/app/data/caddy
      # Security Services (Optional)
      # 🚨 DEPRECATED: Use GUI toggle in Security dashboard instead
      #- CPM_SECURITY_CROWDSEC_MODE=disabled # ⚠️ DEPRECATED
      #- CPM_SECURITY_CROWDSEC_API_URL= # ⚠️ DEPRECATED
      #- CPM_SECURITY_CROWDSEC_API_KEY= # ⚠️ DEPRECATED
      #- CPM_SECURITY_WAF_MODE=disabled
      #- CPM_SECURITY_RATELIMIT_ENABLED=false
      #- CPM_SECURITY_ACL_ENABLED=false
      - FEATURE_CERBERUS_ENABLED=true
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro # For local container discovery
      - crowdsec_data:/app/data/crowdsec
      # Mount your existing Caddyfile for automatic import (optional)
      # - ./my-existing-Caddyfile:/import/Caddyfile:ro
      # - ./sites:/import/sites:ro # If your Caddyfile imports other files
volumes:
  crowdsec_data:
    driver: local
@@ -0,0 +1,57 @@
services:
  charon:
    image: charon:local
    container_name: charon
    restart: unless-stopped
    ports:
      - "80:80"       # HTTP (Caddy proxy)
      - "443:443"     # HTTPS (Caddy proxy)
      - "443:443/udp" # HTTP/3 (Caddy proxy)
      - "8080:8080"   # Management UI (Charon)
      - "2345:2345"   # Delve Debugger
    environment:
      - CHARON_ENV=development
      - CHARON_DEBUG=1
      - TZ=America/New_York
      - CHARON_HTTP_PORT=8080
      - CHARON_DB_PATH=/app/data/charon.db
      - CHARON_FRONTEND_DIR=/app/frontend/dist
      - CHARON_CADDY_ADMIN_API=http://localhost:2019
      - CHARON_CADDY_CONFIG_DIR=/app/data/caddy
      - CHARON_CADDY_BINARY=caddy
      - CHARON_IMPORT_CADDYFILE=/import/Caddyfile
      - CHARON_IMPORT_DIR=/app/data/imports
      - CHARON_ACME_STAGING=false
      - FEATURE_CERBERUS_ENABLED=true
    extra_hosts:
      - "host.docker.internal:host-gateway"
    cap_add:
      - SYS_PTRACE
    security_opt:
      - seccomp:unconfined
    volumes:
      - charon_data:/app/data
      - caddy_data:/data
      - caddy_config:/config
      - crowdsec_data:/app/data/crowdsec
      - /var/run/docker.sock:/var/run/docker.sock:ro # For local container discovery
      - ./backend:/app/backend:ro # Mount source for debugging
      # Mount your existing Caddyfile for automatic import (optional)
      # - <PATH_TO_YOUR_CADDYFILE>:/import/Caddyfile:ro
      # - <PATH_TO_YOUR_SITES_DIR>:/import/sites:ro # If your Caddyfile imports other files
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
volumes:
  charon_data:
    driver: local
  caddy_data:
    driver: local
  caddy_config:
    driver: local
  crowdsec_data:
    driver: local
@@ -0,0 +1,19 @@
version: '3.9'
services:
  # Run this service on your REMOTE servers (not the one running Charon)
  # to allow Charon to discover containers running there (legacy: CPMP).
  docker-socket-proxy:
    image: alpine/socat
    container_name: docker-socket-proxy
    restart: unless-stopped
    ports:
      # Expose port 2375.
      # ⚠️ SECURITY WARNING: Ensure this port is NOT accessible from the public internet!
      # Use a VPN (Tailscale, WireGuard) or a private local network (LAN).
      - "2375:2375"
    volumes:
      # Give the proxy access to the host's Docker socket
      - /var/run/docker.sock:/var/run/docker.sock:ro
    # Forward TCP traffic from port 2375 to the internal Docker socket
    command: tcp-listen:2375,fork,reuseaddr unix-connect:/var/run/docker.sock
@@ -0,0 +1,67 @@
services:
  charon:
    image: ghcr.io/wikid82/charon:latest
    container_name: charon
    restart: unless-stopped
    ports:
      - "80:80"       # HTTP (Caddy proxy)
      - "443:443"     # HTTPS (Caddy proxy)
      - "443:443/udp" # HTTP/3 (Caddy proxy)
      - "8080:8080"   # Management UI (Charon)
    environment:
      - CHARON_ENV=production # CHARON_ preferred; CPM_ values still supported
      - TZ=UTC # Set timezone (e.g., America/New_York)
      - CHARON_HTTP_PORT=8080
      - CHARON_DB_PATH=/app/data/charon.db
      - CHARON_FRONTEND_DIR=/app/frontend/dist
      - CHARON_CADDY_ADMIN_API=http://localhost:2019
      - CHARON_CADDY_CONFIG_DIR=/app/data/caddy
      - CHARON_CADDY_BINARY=caddy
      - CHARON_IMPORT_CADDYFILE=/import/Caddyfile
      - CHARON_IMPORT_DIR=/app/data/imports
      # Security Services (Optional)
      # 🚨 DEPRECATED: CrowdSec environment variables are no longer used.
      # CrowdSec is now GUI-controlled via the Security dashboard.
      # Remove these lines and use the GUI toggle instead.
      # See: https://wikid82.github.io/charon/migration-guide
      #- CERBERUS_SECURITY_CROWDSEC_MODE=disabled # ⚠️ DEPRECATED - Use GUI toggle
      #- CERBERUS_SECURITY_CROWDSEC_API_URL= # ⚠️ DEPRECATED - External mode removed
      #- CERBERUS_SECURITY_CROWDSEC_API_KEY= # ⚠️ DEPRECATED - External mode removed
      #- CERBERUS_SECURITY_WAF_MODE=disabled # disabled, enabled
      #- CERBERUS_SECURITY_RATELIMIT_ENABLED=false
      #- CERBERUS_SECURITY_ACL_ENABLED=false
      # Backward compatibility: CPM_ prefixed variables are still supported
      # 🚨 DEPRECATED: Use GUI toggle instead (see Security dashboard)
      #- CPM_SECURITY_CROWDSEC_MODE=disabled # ⚠️ DEPRECATED
      #- CPM_SECURITY_CROWDSEC_API_URL= # ⚠️ DEPRECATED
      #- CPM_SECURITY_CROWDSEC_API_KEY= # ⚠️ DEPRECATED
      #- CPM_SECURITY_WAF_MODE=disabled
      #- CPM_SECURITY_RATELIMIT_ENABLED=false
      #- CPM_SECURITY_ACL_ENABLED=false
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - cpm_data:/app/data # existing data (legacy volume name); Charon uses this path by default for backward compatibility
      - caddy_data:/data
      - caddy_config:/config
      - crowdsec_data:/app/data/crowdsec
      - /var/run/docker.sock:/var/run/docker.sock:ro # For local container discovery
      # Mount your existing Caddyfile for automatic import (optional)
      # - ./my-existing-Caddyfile:/import/Caddyfile:ro
      # - ./sites:/import/sites:ro # If your Caddyfile imports other files
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:8080/api/v1/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
volumes:
  cpm_data:
    driver: local
  caddy_data:
    driver: local
  caddy_config:
    driver: local
  crowdsec_data:
    driver: local
@@ -0,0 +1,217 @@
#!/bin/sh
set -e
# Entrypoint script to run both Caddy and Charon in a single container
# This simplifies deployment for home users
echo "Starting Charon with integrated Caddy..."
# ============================================================================
# Volume Permission Handling for Non-Root User
# ============================================================================
# When running as non-root user (charon), mounted volumes may have incorrect
# permissions. This section ensures the application can write to required paths.
# Note: This runs as the charon user, so we can only fix owned directories.
# Warn if /app/data is not writable (primary data volume)
if [ ! -w /app/data ]; then
    echo "Warning: /app/data is not writable. Please ensure volume permissions are correct."
    echo "  Run: docker run ... -v charon_data:/app/data ..."
    echo "  Or fix permissions: chown -R 1000:1000 /path/to/volume"
fi
# Warn if /config is not writable (Caddy config volume)
if [ ! -w /config ]; then
    echo "Warning: /config is not writable. Please ensure volume permissions are correct."
fi
# Create required subdirectories in writable volumes
mkdir -p /app/data/caddy 2>/dev/null || true
mkdir -p /app/data/crowdsec 2>/dev/null || true
mkdir -p /app/data/geoip 2>/dev/null || true
# ============================================================================
# CrowdSec Initialization
# ============================================================================
# Note: CrowdSec agent is not auto-started. Lifecycle is GUI-controlled via backend handlers.
# Initialize CrowdSec configuration if cscli is present
if command -v cscli >/dev/null; then
    echo "Initializing CrowdSec configuration..."

    # Define persistent paths
    CS_PERSIST_DIR="/app/data/crowdsec"
    CS_CONFIG_DIR="$CS_PERSIST_DIR/config"
    CS_DATA_DIR="$CS_PERSIST_DIR/data"

    # Ensure persistent directories exist (within writable volume)
    mkdir -p "$CS_CONFIG_DIR" 2>/dev/null || echo "Warning: Cannot create $CS_CONFIG_DIR"
    mkdir -p "$CS_DATA_DIR" 2>/dev/null || echo "Warning: Cannot create $CS_DATA_DIR"

    # Log directories are created at build time with correct ownership
    # Only attempt to create if they don't exist (first run scenarios)
    mkdir -p /var/log/crowdsec 2>/dev/null || true
    mkdir -p /var/log/caddy 2>/dev/null || true

    # Initialize persistent config if key files are missing
    if [ ! -f "$CS_CONFIG_DIR/config.yaml" ]; then
        echo "Initializing persistent CrowdSec configuration..."
        if [ -d "/etc/crowdsec.dist" ]; then
            cp -r /etc/crowdsec.dist/* "$CS_CONFIG_DIR/" 2>/dev/null || echo "Warning: Could not copy dist config"
        elif [ -d "/etc/crowdsec" ] && [ ! -L "/etc/crowdsec" ]; then
            # Fallback if .dist is missing
            cp -r /etc/crowdsec/* "$CS_CONFIG_DIR/" 2>/dev/null || echo "Warning: Could not copy config"
        fi
    fi

    # Link /etc/crowdsec to persistent config for runtime compatibility
    # Note: This symlink is created at build time; verify it exists
    if [ -L "/etc/crowdsec" ]; then
        echo "CrowdSec config symlink verified: /etc/crowdsec -> $CS_CONFIG_DIR"
    else
        echo "Warning: /etc/crowdsec symlink not found. CrowdSec may use volume config directly."
    fi

    # Create/update acquisition config for Caddy logs
    if [ ! -f "/etc/crowdsec/acquis.yaml" ] || [ ! -s "/etc/crowdsec/acquis.yaml" ]; then
        echo "Creating acquisition configuration for Caddy logs..."
        cat > /etc/crowdsec/acquis.yaml << 'ACQUIS_EOF'
# Caddy access logs acquisition
# CrowdSec will monitor these files for security events
source: file
filenames:
  - /var/log/caddy/access.log
  - /var/log/caddy/*.log
labels:
  type: caddy
ACQUIS_EOF
    fi

    # Ensure hub directory exists in persistent storage
    mkdir -p /etc/crowdsec/hub

    # Perform variable substitution
    export CFG=/etc/crowdsec
    export DATA="$CS_DATA_DIR"
    export PID=/var/run/crowdsec.pid
    export LOG=/var/log/crowdsec.log

    # Process config.yaml and user.yaml with envsubst
    # We use a temp file to avoid issues with reading/writing same file
    for file in /etc/crowdsec/config.yaml /etc/crowdsec/user.yaml; do
        if [ -f "$file" ]; then
            envsubst < "$file" > "$file.tmp" && mv "$file.tmp" "$file"
        fi
    done

    # Configure CrowdSec LAPI to use port 8085 to avoid conflict with Charon (port 8080)
    if [ -f "/etc/crowdsec/config.yaml" ]; then
        sed -i 's|listen_uri: 127.0.0.1:8080|listen_uri: 127.0.0.1:8085|g' /etc/crowdsec/config.yaml
        sed -i 's|listen_uri: 0.0.0.0:8080|listen_uri: 127.0.0.1:8085|g' /etc/crowdsec/config.yaml
    fi

    # Update local_api_credentials.yaml to use correct port
    if [ -f "/etc/crowdsec/local_api_credentials.yaml" ]; then
        sed -i 's|url: http://127.0.0.1:8080|url: http://127.0.0.1:8085|g' /etc/crowdsec/local_api_credentials.yaml
        sed -i 's|url: http://localhost:8080|url: http://127.0.0.1:8085|g' /etc/crowdsec/local_api_credentials.yaml
    fi

    # Update hub index to ensure CrowdSec can start
    if [ ! -f "/etc/crowdsec/hub/.index.json" ]; then
        echo "Updating CrowdSec hub index..."
        timeout 60s cscli hub update 2>/dev/null || echo "⚠️ Hub update timed out or failed, continuing..."
    fi

    # Ensure local machine is registered (auto-heal for volume/config mismatch)
    # We force registration because we just restored configuration (and likely credentials)
    echo "Registering local machine..."
    cscli machines add -a --force 2>/dev/null || echo "Warning: Machine registration may have failed"

    # Install hub items (parsers, scenarios, collections) if local mode enabled
    if [ "$SECURITY_CROWDSEC_MODE" = "local" ]; then
        echo "Installing CrowdSec hub items..."
        if [ -x /usr/local/bin/install_hub_items.sh ]; then
            /usr/local/bin/install_hub_items.sh 2>/dev/null || echo "Warning: Some hub items may not have installed"
        fi
    fi
fi
# CrowdSec Lifecycle Management:
# CrowdSec configuration is initialized above (symlinks, directories, hub updates)
# However, the CrowdSec agent is NOT auto-started in the entrypoint.
# Instead, CrowdSec lifecycle is managed by the backend handlers via GUI controls.
# This makes CrowdSec consistent with other security features (WAF, ACL, Rate Limiting).
# Users enable/disable CrowdSec using the Security dashboard toggle, which calls:
# - POST /api/v1/admin/crowdsec/start (to start the agent)
# - POST /api/v1/admin/crowdsec/stop (to stop the agent)
# This approach provides:
# - Consistent user experience across all security features
# - No environment variable dependency
# - Real-time control without container restart
# - Proper integration with Charon's security orchestration
echo "CrowdSec configuration initialized. Agent lifecycle is GUI-controlled."
# Start Caddy in the background with initial empty config
echo '{"admin":{"listen":"0.0.0.0:2019"},"apps":{}}' > /config/caddy.json
# Use JSON config directly; no adapter needed
caddy run --config /config/caddy.json &
CADDY_PID=$!
echo "Caddy started (PID: $CADDY_PID)"
# Wait for Caddy to be ready
echo "Waiting for Caddy admin API..."
i=1
while [ "$i" -le 30 ]; do
    if wget -q -O- http://127.0.0.1:2019/config/ > /dev/null 2>&1; then
        echo "Caddy is ready!"
        break
    fi
    i=$((i+1))
    sleep 1
done
# Start Charon management application
echo "Starting Charon management application..."
DEBUG_FLAG=${CHARON_DEBUG:-$CPMP_DEBUG}
# Default to 2345 (the port mapped in docker-compose.local.yml) if no debug port is set
DEBUG_PORT=${CHARON_DEBUG_PORT:-${CPMP_DEBUG_PORT:-2345}}
if [ "$DEBUG_FLAG" = "1" ]; then
    echo "Running Charon under Delve (port $DEBUG_PORT)"
    bin_path=/app/charon
    if [ ! -f "$bin_path" ]; then
        bin_path=/app/cpmp
    fi
    /usr/local/bin/dlv exec "$bin_path" --headless --listen=":$DEBUG_PORT" --api-version=2 --accept-multiclient --continue --log -- &
else
    bin_path=/app/charon
    if [ ! -f "$bin_path" ]; then
        bin_path=/app/cpmp
    fi
    "$bin_path" &
fi
APP_PID=$!
echo "Charon started (PID: $APP_PID)"
shutdown() {
    echo "Shutting down..."
    kill -TERM "$APP_PID" 2>/dev/null || true
    kill -TERM "$CADDY_PID" 2>/dev/null || true
    # Note: CrowdSec process lifecycle is managed by backend handlers
    # The backend will handle graceful CrowdSec shutdown when the container stops
    wait "$APP_PID" 2>/dev/null || true
    wait "$CADDY_PID" 2>/dev/null || true
    exit 0
}
# Trap signals for graceful shutdown
trap 'shutdown' TERM INT
echo "Charon is running!"
echo " - Management UI: http://localhost:8080"
echo " - Caddy Proxy: http://localhost:80, https://localhost:443"
echo " - Caddy Admin API: http://localhost:2019"
# Wait loop: exit when either process dies, then shutdown the other
while kill -0 "$APP_PID" 2>/dev/null && kill -0 "$CADDY_PID" 2>/dev/null; do
    sleep 1
done
echo "A process exited, initiating shutdown..."
shutdown