feat: Add comprehensive testing plan for Charon rate limiter

@@ -458,6 +458,179 @@ func buildRateLimitHandler(host *models.ProxyHost, secCfg *models.SecurityConfig
return h, nil
}
#### 2.3 Rate Limiting — Test Plan (Detailed)

**Summary:** This section contains a complete test plan to validate rate limiting configuration generation and runtime enforcement in Charon. Tests are grouped into Unit, Integration, and E2E categories and prioritize quick unit coverage and high-impact integration tests.

Goal: Verify the following behavior:

- The `RateLimitBurst`, `RateLimitRequests`, `RateLimitWindowSec` and `RateLimitBypassList` fields are used by `buildRateLimitHandler` and emitted into the Caddy JSON configuration.
- The Caddy `rate_limit` handler configuration uses `{http.request.remote.host}` as the key.
- Bypass IPs are excluded from rate limiting by creating a subroute with a `remote_ip` matcher.
- At runtime, Caddy enforces limits, returns a `Retry-After` header on 429 responses (the `caddy-ratelimit` module does not emit `X-RateLimit-*` headers by default; see Known Limitations), and resets counters after the configured window.
- The plugin behaves correctly across multiple client IPs and respects bypass lists.

-----
2.3.1 Files to Add / Edit

- New scripts: `scripts/rate_limit_integration.sh` (shell integration test), `scripts/rate_limit_e2e.sh` (optional extended tests).
- New integration test: `backend/integration/rate_limit_integration_test.go` (a Go test wrapper that runs the script).
- Unit tests to add/edit: `backend/internal/caddy/config_test.go` (add `TestGenerateConfig_WithRateLimitBypassList` and `TestBuildRateLimitHandler_KeyIsRemoteHost`), `backend/internal/api/handlers/security_ratelimit_test.go` (validate that API fields persist and that UpdateConfig accepts the rate_limit fields).
2.3.2 Unit Tests (fast, run in CI pre-merge)

- File: `backend/internal/caddy/config_test.go`
  - `TestGenerateConfig_WithRateLimitBypassList`
    - Input: call `GenerateConfig` with `secCfg` set to `RateLimitEnable:true` and `RateLimitBypassList:"10.0.0.0/8,127.0.0.1/32"`; include one host.
    - Assertions:
      - The generated `Config` contains a route with `handler:"subroute"` or a `rate_limit` handler containing the bypass CIDRs (CIDRs found in the JSON output).
      - The rate limit handler contains a `rate_limits` map and a `static` zone.
  - `TestBuildRateLimitHandler_KeyIsRemoteHost`
    - Input: `secCfg` with valid values.
    - Assertions: the static zone `key` is `{http.request.remote.host}`.
  - `TestBuildRateLimitHandler_DefaultBurstAndMax` (already present) and `TestParseBypassCIDRs` (existing) remain required.
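The key assertion in `TestBuildRateLimitHandler_KeyIsRemoteHost` can be sketched against a handler map shaped like the generated JSON in section 1.7 of the plan document (illustrative only — the real test calls Charon's `buildRateLimitHandler`, whose signature is not shown here):

```go
package main

import "fmt"

// staticZoneKey digs the "static" zone key out of a rate_limit handler map
// shaped like the generated Caddy JSON (see section 1.7).
func staticZoneKey(h map[string]any) string {
	zones, ok := h["rate_limits"].(map[string]any)
	if !ok {
		return ""
	}
	static, ok := zones["static"].(map[string]any)
	if !ok {
		return ""
	}
	key, _ := static["key"].(string)
	return key
}

func main() {
	// Stand-in for the output of buildRateLimitHandler.
	h := map[string]any{
		"handler": "rate_limit",
		"rate_limits": map[string]any{
			"static": map[string]any{
				"key":        "{http.request.remote.host}",
				"window":     "10s",
				"max_events": 3,
				"burst":      1,
			},
		},
	}
	fmt.Println(staticZoneKey(h) == "{http.request.remote.host}")
}
```

In the real test the same assertion runs on the handler value returned by `buildRateLimitHandler` rather than on a literal map.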
2.3.3 Integration Tests (CI gated, Docker required)

We will add a scripted integration test to run inside CI or locally with Docker. The test will:

- Start the `charon:local` image (build if not present) in a detached container named `charon-debug`.
- Create a simple HTTP backend (`kennethreitz/httpbin`) called `ratelimit-backend` (or `httpbin`).
- Create a proxy host `ratelimit.local` pointing to the backend via the Charon API (use `/api/v1/proxy-hosts`).
- Set `SecurityConfig` (POST `/api/v1/security/config`) with short windows for speed, e.g.:

  ```json
  {"name":"default","enabled":true,"rate_limit_enable":true,"rate_limit_requests":3,"rate_limit_window_sec":10,"rate_limit_burst":1}
  ```

- Validate that the Caddy Admin API at `http://localhost:2019/config` includes a `rate_limit` handler and, where applicable, a `subroute` with bypass CIDRs (if `RateLimitBypassList` is set).
- Execute the runtime checks:
  - Using a single client IP, send 3 requests in quick succession, expecting HTTP 200.
  - The 4th request (same client IP) should return HTTP 429 (Too Many Requests) and include a `Retry-After` header.
  - On blocked responses, assert the `Retry-After` header is present (the module does not emit `X-RateLimit-*` headers by default; see Known Limitations).
  - Wait until the configured `RateLimitWindowSec` elapses, and confirm requests are allowed again.
- Bypass List Validation:
  - Set `RateLimitBypassList` to contain the requester's IP (or `127.0.0.1/32` when the client runs from the host). Confirm repeated requests do not get `429`.
- Multi-IP Isolation:
  - Spin up two client containers with different IPs (via a Docker network with `--subnet` plus `--ip` per container). Each should have an independent counter; each can make the configured number of requests without affecting the other.
- X-Forwarded-For behavior (confirm `remote.host` is used as the key):
  - Send requests with an `X-Forwarded-For` value different from the container IP; observe that rate counters still use the connection IP unless Caddy is explicitly configured to respect XFF.
- Test Example (shell snippet to assert headers)

  ```bash
  # Single request driver - check headers
  curl -s -D - -o /dev/null -H "Host: ratelimit.local" http://localhost/post
  # On a blocked (429) response, expect header: Retry-After: <seconds>
  ```
- Script name: `scripts/rate_limit_integration.sh` (mirrors the style of `coraza_integration.sh`).

- Manage flaky behavior:
  - Retry a couple of times and log the Caddy admin API output on failure for debugging.
2.3.4 E2E Tests (Longer, optional)

- Create `scripts/rate_limit_e2e.sh`, which spins up the same environment but runs broader scenarios:
  - High-rate bursts (small `WindowSec` and small `Requests`) to test burst allowance/consumption.
  - Multi-minute stress runs (not for every CI pass) to check long-term behavior and resets across windows.
  - An SPA / browser test using Playwright / Cypress to validate the UI controls (admin toggles rate limit presets and sets the bypass list) and to ensure the applied config is effective at runtime.
2.3.5 Mock/Stub Guidance

- IP Addresses
  - Use Docker network subnets and `docker run --network containers_default --ip 172.25.0.10` to guarantee client IP addresses for tests and to exercise bypass list behavior.
  - For tests run from the host with `curl`, include `--interface` or `--local-port` if needed to force the source IP (less reliable than the container-based approach).
- X-Forwarded-For
  - Add `-H "X-Forwarded-For: 10.0.0.5"` to `curl` requests; assert that the plugin uses the real connection IP by default. If future changes enable `real_ip` handling in Caddy, update these tests to reflect the new behavior.
- Timing Windows
  - Keep values small (2-10 seconds) while maintaining reliability (1s windows are often flaky). For CI, `RateLimitWindowSec=10` with `RateLimitRequests=3` and `Burst=1` is a stable, fast choice.
2.3.6 Test Data and Assertions (Explicit)

- Unit Test: `TestBuildRateLimitHandler_ValidConfig`
  - Input: `secCfg{Requests:100, WindowSec:60, Burst:25}`
  - Assert: `h["handler"] == "rate_limit"`, and the `static` zone has `max_events == 100` and `burst == 25`.
- Integration Test: `TestRateLimit_Enforcement_Basic`
  - Input: `RateLimitRequests=3`, `RateLimitWindowSec=10`, `Burst=1`, no bypass list
  - Actions: send 4 rapid requests from a client container
  - Expected outputs: [200, 200, 200, 429]; the 4th returns `Retry-After` or an explicit block message
  - Assert: the blocked response includes a `Retry-After` header (no `X-RateLimit-*` headers are expected; see Known Limitations)
- Integration Test: `TestRateLimit_BypassList_SkipsLimit`
  - Input: same as above, plus `RateLimitBypassList` containing the client IP CIDR
  - Expected outputs: all requests 200 (no 429)
- Integration Test: `TestRateLimit_MultiClient_Isolation`
  - Input: as above
  - Actions: Client A sends 3 requests, Client B sends 3 requests
  - Expected: neither client affects the other; both get 200 responses for their first 3 requests
- Integration Test: `TestRateLimit_Window_Reset`
  - Input: as above
  - Actions: exhaust the quota (get 429), wait `RateLimitWindowSec + 1` seconds, issue a new request
  - Expected: the new request is 200 again
2.3.7 Test Harness - Example Go Integration Test

Use the same approach as `backend/integration/coraza_integration_test.go`: run the script and check its output for the expected messages. Example test file, `backend/integration/rate_limit_integration_test.go`:
```go
//go:build integration
// +build integration

package integration

import (
	"context"
	"os/exec"
	"strings"
	"testing"
	"time"
)

func TestRateLimitIntegration(t *testing.T) {
	t.Parallel()
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Minute)
	defer cancel()
	cmd := exec.CommandContext(ctx, "bash", "./scripts/rate_limit_integration.sh")
	out, err := cmd.CombinedOutput()
	t.Logf("rate_limit_integration script output:\n%s", string(out))
	if err != nil {
		t.Fatalf("rate_limit integration failed: %v", err)
	}
	if !strings.Contains(string(out), "Rate limit enforcement succeeded") {
		t.Fatalf("unexpected script output, rate limiting assertion not found")
	}
}
```
2.3.8 CI and Pre-commit Hooks

- Add an integration CI job that runs the Docker-based script and the integration `go` test suite as a separate job, so Docker-dependent tooling does not block unit test runs. Use a job matrix with `services: docker` and appropriate timeouts.
- Do not add integration scripts to pre-commit (too heavy); keep pre-commit focused on `go fmt`, `go vet`, `go test ./...` (unit tests), `npm test`, and lint rules.
- Use the workspace `tasks.json` to add a task in the style of `Coraza: Run Integration Script` for the rate limit integration, mirroring `scripts/coraza_integration.sh`.
2.3.9 .gitignore / .codecov.yml / Dockerfile changes

- .gitignore
  - Add `test-results/rate_limit/` to avoid committing local script logs.
  - Add any output files produced by `scripts/rate_limit_integration.sh` to the ignore list.
- .codecov.yml
  - Optional: to include integration test coverage, remove `**/integration/**` from `ignore` or explicitly include `backend/integration/*_test.go`. (Caveat: integration coverage may not be reproducible across CI.)
- .dockerignore
  - Ensure `scripts/` and `backend/integration` are not copied into the build context if they are not needed in the Docker build, to reduce context size.
- Dockerfile
  - Confirm the presence of `--with github.com/mholt/caddy-ratelimit` in the xcaddy build (it is present in the base Dockerfile). Add a comment, and assert the plugin's presence in the integration script by checking `caddy version` or `caddy list-modules`.
2.3.10 Prioritization

- P0: Integration test `TestRateLimit_Enforcement_Basic` (high confidence: verifies actual runtime limit enforcement and `Retry-After` presence)
- P1: Unit tests verifying config building (`TestGenerateConfig_WithRateLimitBypassList`, `TestBuildRateLimitHandler_KeyIsRemoteHost`) and API tests for `POST /security/config` handling of the rate limit fields
- P2: Integration tests for the bypass list, multi-client isolation, and window reset
- P3: E2E tests for UI configuration of rate limiting, plus long-running stress tests
2.3.11 Next Steps

- Implement `scripts/rate_limit_integration.sh` and `backend/integration/rate_limit_integration_test.go`, following `coraza_integration.sh` as the blueprint.
- Add unit tests to `backend/internal/caddy/config_test.go` and API handler tests in `backend/internal/api/handlers/security_ratelimit_test.go`.
- Add Docker network helpers and ensure `docker run --ip` is used to control client IPs during integration.
- Run all new tests locally (Docker required) and in CI. Add an integration job to GitHub Actions with `runs-on: ubuntu-latest`, `services: docker`, and appropriate timeouts.

-----

This test plan should serve as a complete specification for testing rate limiting behavior across the unit, integration, and E2E tiers. The next iteration will include scripted test implementations and Jenkins/GHA job snippets for CI.
func parseBypassList(list string) []string {
	var cidrs []string
	for _, part := range strings.Split(list, ",") {

494 docs/plans/rate_limiter_testing_plan.md Normal file
@@ -0,0 +1,494 @@
# Charon Rate Limiter Testing Plan

**Version:** 1.0
**Date:** 2025-12-12
**Status:** 🔵 READY FOR TESTING

---
## 1. Rate Limiter Implementation Summary

### 1.1 Algorithm

The Charon rate limiter uses the **`mholt/caddy-ratelimit`** Caddy module, which implements a **sliding window algorithm** backed by a **ring buffer**.

**Key characteristics:**
- **Sliding window**: looks back over the `<window>` duration and checks whether `<max_events>` events have occurred
- **Ring buffer**: memory-efficient, `O(Kn)` where K = max events and n = number of rate limiters
- **Automatic Retry-After**: the module automatically sets the `Retry-After` header on 429 responses
- **Scoped by client IP**: rate limiting uses `{http.request.remote.host}` as the key
### 1.2 Configuration Model

**Source:** [backend/internal/models/security_config.go](../../backend/internal/models/security_config.go)

| Field | Type | JSON Key | Description |
|-------|------|----------|-------------|
| `RateLimitEnable` | `bool` | `rate_limit_enable` | Master toggle for rate limiting |
| `RateLimitRequests` | `int` | `rate_limit_requests` | Max requests allowed in window |
| `RateLimitWindowSec` | `int` | `rate_limit_window_sec` | Time window in seconds |
| `RateLimitBurst` | `int` | `rate_limit_burst` | Burst allowance (default: 20% of requests) |
| `RateLimitBypassList` | `string` | `rate_limit_bypass_list` | Comma-separated CIDRs to bypass |
### 1.3 Default Values & Presets

**Source:** [backend/internal/api/handlers/security_ratelimit_test.go](../../backend/internal/api/handlers/security_ratelimit_test.go)

| Preset ID | Name | Requests | Window (sec) | Burst | Use Case |
|-----------|------|----------|--------------|-------|----------|
| `standard` | Standard Web | 100 | 60 | 20 | General web traffic |
| `api` | API | (varies) | (varies) | (varies) | API endpoints |
| `login` | Login Protection | 5 | 300 | 2 | Authentication endpoints |
| `relaxed` | Relaxed | (varies) | (varies) | (varies) | Low-risk endpoints |
### 1.4 Client Identification

**Key:** `{http.request.remote.host}`

Rate limiting is scoped per-client using the remote host IP address. Each unique client IP gets its own rate limit counter.
### 1.5 Headers Returned

The `caddy-ratelimit` module returns:
- an **HTTP 429** response when the limit is exceeded
- a **`Retry-After`** header indicating the seconds until the client can retry

> **Note:** The module does NOT return `X-RateLimit-Limit` or `X-RateLimit-Remaining` headers by default; these are not part of the `caddy-ratelimit` module's standard output.
### 1.6 Enabling Rate Limiting

Rate limiting requires **two conditions**:

1. **Cerberus must be enabled:**
   - Set `feature.cerberus.enabled` = `true` in Settings
   - OR set env `CERBERUS_SECURITY_CERBERUS_ENABLED=true`

2. **Rate Limit Mode must be enabled:**
   - Set env `CERBERUS_SECURITY_RATELIMIT_MODE=enabled`
   - (There is currently no runtime DB toggle for rate limit mode)

**Source:** [backend/internal/caddy/manager.go#L412-465](../../backend/internal/caddy/manager.go)
### 1.7 Generated Caddy JSON Structure

**Source:** [backend/internal/caddy/config.go#L990-1065](../../backend/internal/caddy/config.go)

```json
{
  "handler": "rate_limit",
  "rate_limits": {
    "static": {
      "key": "{http.request.remote.host}",
      "window": "60s",
      "max_events": 100,
      "burst": 20
    }
  }
}
```

With a bypass list, the handler is wrapped in a subroute:
```json
{
  "handler": "subroute",
  "routes": [
    {
      "match": [{"remote_ip": {"ranges": ["10.0.0.0/8", "192.168.1.0/24"]}}],
      "handle": []
    },
    {
      "handle": [{"handler": "rate_limit", ...}]
    }
  ]
}
```
---

## 2. Existing Test Coverage

### 2.1 Unit Tests

| Test File | Tests | Coverage |
|-----------|-------|----------|
| [config_test.go](../../backend/internal/caddy/config_test.go) | `TestBuildRateLimitHandler_*` (8 tests) | Handler building, burst defaults, bypass list parsing |
| [security_ratelimit_test.go](../../backend/internal/api/handlers/security_ratelimit_test.go) | `TestSecurityHandler_GetRateLimitPresets*` (3 tests) | Preset endpoint |
### 2.2 Unit Test Functions

From [config_test.go](../../backend/internal/caddy/config_test.go):
- `TestBuildRateLimitHandler_Disabled` - nil config returns nil handler
- `TestBuildRateLimitHandler_InvalidValues` - zero/negative values return nil
- `TestBuildRateLimitHandler_ValidConfig` - correct caddy-ratelimit format
- `TestBuildRateLimitHandler_JSONFormat` - valid JSON output
- `TestGenerateConfig_WithRateLimiting` - handler included in route
- `TestBuildRateLimitHandler_UsesBurst` - configured burst is used
- `TestBuildRateLimitHandler_DefaultBurst` - default 20% calculation
- `TestBuildRateLimitHandler_BypassList` - subroute with bypass CIDRs
- `TestBuildRateLimitHandler_BypassList_PlainIPs` - IP to CIDR conversion
- `TestBuildRateLimitHandler_BypassList_InvalidEntries` - invalid entries ignored
- `TestBuildRateLimitHandler_BypassList_Empty` - empty list = plain handler
- `TestBuildRateLimitHandler_BypassList_AllInvalid` - all invalid = plain handler
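The bypass-list parsing behavior exercised by the last four tests (plain IPs become /32 CIDRs, invalid entries are dropped) can be sketched as follows — illustrative; the real parsing code in `config.go` may differ in detail:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseBypassCIDRs normalizes a comma-separated bypass list:
// valid CIDRs pass through, plain IPs become /32 (or /128 for IPv6),
// and anything unparsable is silently dropped.
func parseBypassCIDRs(list string) []string {
	var cidrs []string
	for _, part := range strings.Split(list, ",") {
		entry := strings.TrimSpace(part)
		if entry == "" {
			continue
		}
		if _, _, err := net.ParseCIDR(entry); err == nil {
			cidrs = append(cidrs, entry)
			continue
		}
		if ip := net.ParseIP(entry); ip != nil {
			if ip.To4() != nil {
				cidrs = append(cidrs, entry+"/32")
			} else {
				cidrs = append(cidrs, entry+"/128")
			}
		}
		// invalid entry: ignored
	}
	return cidrs
}

func main() {
	fmt.Println(parseBypassCIDRs("10.0.0.0/8, 127.0.0.1, not-an-ip"))
	// → [10.0.0.0/8 127.0.0.1/32]
}
```

An empty or all-invalid list yields an empty slice, which matches the "empty list = plain handler" cases above (no subroute wrapper is emitted).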
### 2.3 Missing Integration Tests

There is **NO existing integration test** for rate limiting. The pattern to follow is:
- [scripts/coraza_integration.sh](../../scripts/coraza_integration.sh) - WAF integration test
- [backend/integration/coraza_integration_test.go](../../backend/integration/coraza_integration_test.go) - Go test wrapper

---
## 3. Testing Plan

### 3.1 Prerequisites

1. **Docker running** with the Charon image built
2. **Cerberus enabled**:
   ```bash
   export CERBERUS_SECURITY_CERBERUS_ENABLED=true
   export CERBERUS_SECURITY_RATELIMIT_MODE=enabled
   ```
3. **Proxy host configured** that can receive test requests
### 3.2 Test Setup Commands

```bash
# Build and run Charon with rate limiting enabled
docker build -t charon:local .
docker run -d --name charon-ratelimit-test \
  -p 8080:8080 \
  -p 80:80 \
  -e CHARON_ENV=development \
  -e CERBERUS_SECURITY_CERBERUS_ENABLED=true \
  -e CERBERUS_SECURITY_RATELIMIT_MODE=enabled \
  charon:local

# Wait for startup
sleep 5

# Create temp cookie file
TMP_COOKIE=$(mktemp)

# Register and login
curl -s -X POST -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"password123","name":"Tester"}' \
  http://localhost:8080/api/v1/auth/register

curl -s -X POST -H "Content-Type: application/json" \
  -d '{"email":"test@example.com","password":"password123"}' \
  -c ${TMP_COOKIE} \
  http://localhost:8080/api/v1/auth/login

# Create a simple proxy host for testing
curl -s -X POST -H "Content-Type: application/json" \
  -b ${TMP_COOKIE} \
  -d '{
    "domain_names": "ratelimit.test.local",
    "forward_scheme": "http",
    "forward_host": "httpbin.org",
    "forward_port": 80,
    "enabled": true
  }' \
  http://localhost:8080/api/v1/proxy-hosts
```
### 3.3 Configure Rate Limit via API

```bash
# Set an aggressive rate limit for testing (3 requests per 10 seconds)
curl -s -X POST -H "Content-Type: application/json" \
  -b ${TMP_COOKIE} \
  -d '{
    "name": "default",
    "enabled": true,
    "rate_limit_enable": true,
    "rate_limit_requests": 3,
    "rate_limit_window_sec": 10,
    "rate_limit_burst": 1,
    "rate_limit_bypass_list": ""
  }' \
  http://localhost:8080/api/v1/security/config

# Wait for Caddy to reload
sleep 5
```

---
## 4. Test Cases & Curl Commands

### 4.1 Test Case: Basic Rate Limit Enforcement

**Objective:** Verify requests are blocked after exceeding the limit

```bash
echo "=== TEST: Basic Rate Limit Enforcement ==="
echo "Config: 3 requests / 10 seconds"

# Make 5 rapid requests; expect 3 to succeed and 2 to fail
for i in 1 2 3 4 5; do
  RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Host: ratelimit.test.local" \
    http://localhost/get)
  echo "Request $i: HTTP $RESPONSE"
done

# Expected output:
# Request 1: HTTP 200
# Request 2: HTTP 200
# Request 3: HTTP 200
# Request 4: HTTP 429
# Request 5: HTTP 429
```
### 4.2 Test Case: Retry-After Header

**Objective:** Verify the 429 response includes a Retry-After header

```bash
echo "=== TEST: Retry-After Header ==="

# Exhaust rate limit
for i in 1 2 3; do
  curl -s -o /dev/null -H "Host: ratelimit.test.local" http://localhost/get
done

# Check 429 response headers
HEADERS=$(curl -s -D - -o /dev/null \
  -H "Host: ratelimit.test.local" \
  http://localhost/get)

echo "$HEADERS" | grep -i "retry-after"
# Expected: Retry-After: <seconds>
```
### 4.3 Test Case: Window Reset

**Objective:** Verify the rate limit resets after the window expires

```bash
echo "=== TEST: Window Reset ==="

# Exhaust rate limit
for i in 1 2 3; do
  curl -s -o /dev/null -H "Host: ratelimit.test.local" http://localhost/get
done

# Verify blocked
BLOCKED=$(curl -s -o /dev/null -w "%{http_code}" \
  -H "Host: ratelimit.test.local" http://localhost/get)
echo "Immediately after: HTTP $BLOCKED (expect 429)"

# Wait for the window to reset (10 sec + buffer)
echo "Waiting 12 seconds for window reset..."
sleep 12

# Should succeed again
RESET=$(curl -s -o /dev/null -w "%{http_code}" \
  -H "Host: ratelimit.test.local" http://localhost/get)
echo "After window reset: HTTP $RESET (expect 200)"
```
### 4.4 Test Case: Bypass List

**Objective:** Verify bypass IPs are not rate limited

```bash
echo "=== TEST: Bypass List ==="

# First, configure the bypass list with localhost
curl -s -X POST -H "Content-Type: application/json" \
  -b ${TMP_COOKIE} \
  -d '{
    "name": "default",
    "enabled": true,
    "rate_limit_enable": true,
    "rate_limit_requests": 3,
    "rate_limit_window_sec": 10,
    "rate_limit_burst": 1,
    "rate_limit_bypass_list": "127.0.0.1/32, 172.17.0.0/16"
  }' \
  http://localhost:8080/api/v1/security/config

sleep 5

# Make 10 requests - all should succeed if the IP is bypassed
for i in $(seq 1 10); do
  RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Host: ratelimit.test.local" \
    http://localhost/get)
  echo "Request $i: HTTP $RESPONSE (expect 200)"
done
```
### 4.5 Test Case: Different Clients Isolated

**Objective:** Verify rate limits are per-client, not global

```bash
echo "=== TEST: Client Isolation ==="

# This test requires two different source IPs.
# In Docker, you can test by:
# 1. Making requests from the host (one IP)
# 2. Making requests from inside another container (different IP)

# From container A
docker run --rm --network host curlimages/curl:latest \
  curl -s -o /dev/null -w "%{http_code}" \
  -H "Host: ratelimit.test.local" \
  http://localhost/get

# Note: true client isolation testing requires actual different IPs;
# this is better suited to an integration test script.
```
### 4.6 Test Case: Burst Handling

**Objective:** Verify the burst allowance works correctly

```bash
echo "=== TEST: Burst Handling ==="

# Configure with burst > 1
curl -s -X POST -H "Content-Type: application/json" \
  -b ${TMP_COOKIE} \
  -d '{
    "name": "default",
    "enabled": true,
    "rate_limit_enable": true,
    "rate_limit_requests": 10,
    "rate_limit_window_sec": 60,
    "rate_limit_burst": 5,
    "rate_limit_bypass_list": ""
  }' \
  http://localhost:8080/api/v1/security/config

sleep 5

# Make 5 rapid requests (burst should allow them)
for i in 1 2 3 4 5; do
  RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
    -H "Host: ratelimit.test.local" \
    http://localhost/get)
  echo "Request $i: HTTP $RESPONSE"
done
# All should be 200 due to the burst allowance
```
### 4.7 Test Case: Verify Caddy Config Contains Rate Limit

**Objective:** Confirm the rate_limit handler is in the running Caddy config

```bash
echo "=== TEST: Caddy Config Verification ==="

# Query the Caddy admin API for the current config
CONFIG=$(curl -s http://localhost:2019/config)

# Check for the rate_limit handler
echo "$CONFIG" | grep -o '"handler":"rate_limit"' | head -1
# Expected: "handler":"rate_limit"

# Check for the correct key
echo "$CONFIG" | grep -o '"key":"{http.request.remote.host}"' | head -1
# Expected: "key":"{http.request.remote.host}"

# Full handler inspection
echo "$CONFIG" | python3 -c "
import sys, json
config = json.load(sys.stdin)
servers = config.get('apps', {}).get('http', {}).get('servers', {})
for name, server in servers.items():
    for route in server.get('routes', []):
        for handler in route.get('handle', []):
            if handler.get('handler') == 'rate_limit':
                print(json.dumps(handler, indent=2))
" 2>/dev/null || echo "python3 unavailable; inspect the config with jq instead"
```

---
## 5. Verification Checklist

### 5.1 Unit Test Checklist

- [ ] `go test ./internal/caddy/... -v -run TestBuildRateLimitHandler` passes
- [ ] `go test ./internal/api/handlers/... -v -run TestSecurityHandler_GetRateLimitPresets` passes

### 5.2 Configuration Checklist

- [ ] `SecurityConfig.RateLimitRequests` is stored and retrieved correctly
- [ ] `SecurityConfig.RateLimitWindowSec` is stored and retrieved correctly
- [ ] `SecurityConfig.RateLimitBurst` defaults to 20% when zero
- [ ] `SecurityConfig.RateLimitBypassList` parses comma-separated CIDRs
- [ ] Plain IPs in bypass list are converted to /32 CIDRs
- [ ] Invalid entries in bypass list are silently ignored
### 5.3 Runtime Checklist

- [ ] Rate limiting only active when `CERBERUS_SECURITY_RATELIMIT_MODE=enabled`
- [ ] Rate limiting only active when `CERBERUS_SECURITY_CERBERUS_ENABLED=true`
- [ ] Caddy config includes `rate_limit` handler when enabled
- [ ] Handler uses `{http.request.remote.host}` as key

### 5.4 Response Behavior Checklist

- [ ] HTTP 429 returned when limit exceeded
- [ ] `Retry-After` header present on 429 responses
- [ ] Rate limit resets after window expires
- [ ] Bypassed IPs not rate limited

### 5.5 Edge Case Checklist

- [ ] Zero/negative requests value = no rate limiting
- [ ] Zero/negative window value = no rate limiting
- [ ] Empty bypass list = no subroute wrapper
- [ ] All-invalid bypass list = no subroute wrapper

---
## 6. Existing Test Files Reference

| File | Purpose |
|------|---------|
| [backend/internal/caddy/config_test.go](../../backend/internal/caddy/config_test.go) | Unit tests for `buildRateLimitHandler` |
| [backend/internal/caddy/config_generate_additional_test.go](../../backend/internal/caddy/config_generate_additional_test.go) | `GenerateConfig` with rate limiting |
| [backend/internal/api/handlers/security_ratelimit_test.go](../../backend/internal/api/handlers/security_ratelimit_test.go) | Preset API endpoint tests |
| [scripts/coraza_integration.sh](../../scripts/coraza_integration.sh) | Template for integration script |
| [backend/integration/coraza_integration_test.go](../../backend/integration/coraza_integration_test.go) | Template for Go integration test wrapper |

---
## 7. Recommended Next Steps

1. **Run existing unit tests:**
   ```bash
   cd backend && go test ./internal/caddy/... -v -run TestBuildRateLimitHandler
   cd backend && go test ./internal/api/handlers/... -v -run TestSecurityHandler_GetRateLimitPresets
   ```

2. **Create integration test script:** `scripts/rate_limit_integration.sh`, following the `coraza_integration.sh` pattern

3. **Create Go test wrapper:** `backend/integration/rate_limit_integration_test.go`

4. **Add VS Code task:** update `.vscode/tasks.json` with a rate limit integration task

5. **Manual verification:** run the curl commands from Section 4 against a live Charon instance

---
## 8. Known Limitations

1. **No `X-RateLimit-*` headers:** The `caddy-ratelimit` module only provides `Retry-After`, not the standard `X-RateLimit-Limit`/`X-RateLimit-Remaining` headers.

2. **No runtime toggle:** Rate limit mode is set via environment variable only; there is no DB-backed runtime toggle like WAF/ACL have.

3. **Burst behavior:** The `caddy-ratelimit` module documentation notes that burst is not a direct parameter; Charon passes it, but the actual behavior depends on the module's implementation.

4. **Distributed mode not configured:** The `caddy-ratelimit` module supports distributed rate limiting across a cluster, but Charon does not currently expose this configuration.

---

**Document Status:** Complete
**Last Updated:** 2025-12-12