Compare commits


255 Commits

Author SHA1 Message Date
Jeremy
c70c87386e Merge pull request #913 from Wikid82/bot/update-geolite2-checksum
chore(docker): update GeoLite2-Country.mmdb checksum
2026-04-06 00:38:12 -04:00
Wikid82
f5ab2cddd8 chore(docker): update GeoLite2-Country.mmdb checksum
Automated checksum update for GeoLite2-Country.mmdb database.

Old: 7840f4b8891e7c866f948d4b020cdc12aeea51b09450b44ad96d1f14f6e32879
New: f5e80a9a3129d46e75c8cccd66bfac725b0449a6c89ba5093a16561d58f20bda

Auto-generated by: .github/workflows/update-geolite2.yml
2026-04-06 02:58:45 +00:00
Jeremy
1911003db5 Merge pull request #888 from Wikid82/bot/update-geolite2-checksum
chore(docker): update GeoLite2-Country.mmdb checksum
2026-03-30 21:16:01 -04:00
Jeremy
ecf314b2e5 Merge branch 'main' into bot/update-geolite2-checksum 2026-03-30 17:56:36 -04:00
Jeremy
a78529e218 Merge pull request #889 from Wikid82/nightly
Weekly: Promote nightly to main (2026-03-30)
2026-03-30 17:56:21 -04:00
Wikid82
e32f3dfb57 chore(docker): update GeoLite2-Country.mmdb checksum
Automated checksum update for GeoLite2-Country.mmdb database.

Old: c6549807950f93f609d6433fa295fa517fbdec0ad975a4aafba69c136d5d2347
New: 7840f4b8891e7c866f948d4b020cdc12aeea51b09450b44ad96d1f14f6e32879

Auto-generated by: .github/workflows/update-geolite2.yml
2026-03-30 02:58:26 +00:00
Jeremy
548a2b6851 Merge pull request #883 from Wikid82/feature/beta-release
feat: add support for Ntfy notification provider
2026-03-25 07:32:51 -04:00
GitHub Actions
c64890b5a0 fix: update TRIGGER_PR_NUMBER formatting for consistency in workflow 2026-03-25 10:00:34 +00:00
GitHub Actions
664b440d70 fix: update Ntfy setup instructions for clarity and security token terminology 2026-03-25 09:58:38 +00:00
Jeremy
c929dfbe4a Merge branch 'development' into feature/beta-release 2026-03-25 05:14:17 -04:00
GitHub Actions
20e724f19c fix: update docker-build.yml to include 'development' branch in push triggers 2026-03-25 09:13:15 +00:00
GitHub Actions
a6deff77a7 fix(deps): update electron-to-chromium to version 1.5.323 for improved stability 2026-03-25 08:48:35 +00:00
GitHub Actions
8702d7b76d fix(deps): update CADDY_SECURITY_VERSION to 1.1.51 for security improvements 2026-03-25 04:10:05 +00:00
GitHub Actions
c9f4e42735 fix: update SECURITY.md with new vulnerability details and remediation plans 2026-03-25 04:05:15 +00:00
GitHub Actions
86023788aa feat: add support for Ntfy notification provider
- Updated the list of supported notification provider types to include 'ntfy'.
- Modified the notification settings UI to accommodate the Ntfy provider, including form fields for topic URL and access token.
- Enhanced localization files to include translations for Ntfy-related fields in German, English, Spanish, French, and Chinese.
- Implemented tests for the Ntfy notification provider, covering form rendering, CRUD operations, payload contracts, and security measures.
- Updated existing tests to account for the new Ntfy provider in various scenarios.
2026-03-24 21:04:54 +00:00
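At its core, the Ntfy provider added in this commit amounts to an authenticated HTTP POST of the message body to a topic URL, with the access token sent as a Bearer Authorization header. A minimal sketch of that request construction (the helper name and payload shape are illustrative, not the project's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"strings"
)

// buildNtfyRequest constructs the request a Ntfy notification boils
// down to: POST the message body to the topic URL, attaching the
// optional access token as a Bearer Authorization header.
// (Illustrative helper, not the project's implementation.)
func buildNtfyRequest(topicURL, token, message string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodPost, topicURL, strings.NewReader(message))
	if err != nil {
		return nil, err
	}
	if token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}
	return req, nil
}

func main() {
	req, err := buildNtfyRequest("https://ntfy.example.com/alerts", "tk_demo", "monitor down")
	if err != nil {
		panic(err)
	}
	fmt.Println(req.Method, req.URL.String())
}
```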
GitHub Actions
5a2b6fec9d fix(deps): update katex to v0.16.42 for improved functionality 2026-03-24 20:25:38 +00:00
GitHub Actions
d90dc5af98 fix(deps): update go-toml to v2.3.0 for improved compatibility 2026-03-24 20:10:02 +00:00
Jeremy
1d62a3da5f Merge pull request #882 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-24 13:45:56 -04:00
Jeremy
f237fa595a Merge pull request #873 from Wikid82/feature/beta-release
fix(certificates): allow deletion of expired and unused certificates
2026-03-24 13:45:08 -04:00
renovate[bot]
07ce79b439 fix(deps): update non-major-updates 2026-03-24 17:37:02 +00:00
Jeremy
77511b0994 Merge pull request #881 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-24 08:54:12 -04:00
GitHub Actions
246b83c72d chore: update package-lock.json for dependency version consistency 2026-03-24 12:08:22 +00:00
renovate[bot]
a7e4e12f32 fix(deps): update non-major-updates 2026-03-24 11:59:32 +00:00
Jeremy
91c1fa9d0f Merge pull request #879 from Wikid82/renovate/feature/beta-release-major-1-lucide-monorepo
fix(deps): update dependency lucide-react to v1 (feature/beta-release)
2026-03-24 07:57:18 -04:00
Jeremy
5a2698123e Merge pull request #878 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-24 07:53:22 -04:00
Jeremy
752e4dbd66 Merge branch 'feature/beta-release' into renovate/feature/beta-release-major-1-lucide-monorepo 2026-03-24 02:42:23 -04:00
Jeremy
f2769eca1a Merge branch 'feature/beta-release' into renovate/feature/beta-release-non-major-updates 2026-03-24 02:42:04 -04:00
Jeremy
e779041039 Merge branch 'development' into feature/beta-release 2026-03-24 02:41:29 -04:00
Jeremy
6c6c3f3373 Merge pull request #880 from Wikid82/main
Propagate changes from main into development
2026-03-24 02:41:00 -04:00
GitHub Actions
59adf32861 fix(deps): resolve Renovate lookup failure for geoip2-golang v2 module
Renovate could not resolve the Go module path
github.com/oschwald/geoip2-golang/v2 because the /v2 suffix is a Go
module convention, not a separate GitHub repository. Added a packageRules
entry with an explicit sourceUrl pointing to the actual upstream repo so
Renovate can correctly look up available versions.

No changes to application code, go.mod, or go.sum — the dependency was
already declared correctly.
2026-03-24 06:32:00 +00:00
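The packageRules entry described in this commit would look roughly like the following renovate.json fragment (reconstructed from the commit message, not copied from the repo):

```json
{
  "packageRules": [
    {
      "matchPackageNames": ["github.com/oschwald/geoip2-golang/v2"],
      "sourceUrl": "https://github.com/oschwald/geoip2-golang"
    }
  ]
}
```

The explicit sourceUrl tells Renovate which GitHub repository actually hosts the module, since the /v2 suffix exists only in the Go module path.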
renovate[bot]
55204289ec fix(deps): update dependency lucide-react to v1 2026-03-24 06:22:11 +00:00
renovate[bot]
95bf0b496d fix(deps): update non-major-updates 2026-03-24 06:20:22 +00:00
Jeremy
583633c74b Merge pull request #876 from Wikid82/bot/update-geolite2-checksum
chore(docker): update GeoLite2-Country.mmdb checksum
2026-03-24 02:18:43 -04:00
GitHub Actions
c822ba7582 chore: downgrade vitest and related packages to version 4.0.18 2026-03-24 01:52:48 +00:00
GitHub Actions
a5daaa5e8c fix: add missing name field in package-lock.json 2026-03-24 01:51:42 +00:00
GitHub Actions
6967c73eaf chore: update dependencies to latest versions
- Upgraded @tanstack/query-core and @tanstack/react-query from 5.95.0 to 5.95.2
- Updated @typescript-eslint packages from 8.57.1 to 8.57.2
- Bumped @vitest packages from 4.1.0 to 4.1.1
- Updated knip from 6.0.3 to 6.0.4
- Upgraded picomatch from 4.0.3 to 4.0.4 and from 2.3.1 to 2.3.2
- Updated react-router and react-router-dom from 7.13.1 to 7.13.2
- Bumped typescript from 6.0.1-rc to 6.0.2
2026-03-24 01:50:32 +00:00
GitHub Actions
602b0b0e2e chore: update package versions in package-lock.json for consistency 2026-03-24 01:50:02 +00:00
GitHub Actions
49b3e4e537 fix(tests): resolve i18n mock issues in BulkDeleteCertificateDialog tests
Removed local i18n mock to allow global mock to function correctly, updated assertions to use resolved English translations for better consistency in test outcomes.
2026-03-24 01:47:43 +00:00
GitHub Actions
ca477c48d4 chore: Enhance documentation for E2E testing:
- Added clarity and structure to README files, including recent updates and getting started sections.
- Improved manual verification documentation for CrowdSec authentication, emphasizing expected outputs and success criteria.
- Updated debugging guide with detailed output examples and automatic trace capture information.
- Refined best practices for E2E tests, focusing on efficient polling, locator strategies, and state management.
- Documented triage report for DNS Provider feature tests, highlighting issues fixed and test results before and after improvements.
- Revised E2E test writing guide to include when to use specific helper functions and patterns for better test reliability.
- Enhanced troubleshooting documentation with clear resolutions for common issues, including timeout and token configuration problems.
- Updated tests README to provide quick links and best practices for writing robust tests.
2026-03-24 01:47:22 +00:00
GitHub Actions
7d986f2821 chore: update package versions in package-lock.json for consistency 2026-03-23 13:14:48 +00:00
GitHub Actions
849c3513bb feat(i18n): add aria-label for bulk delete certificates in multiple languages 2026-03-23 05:46:49 +00:00
GitHub Actions
a707d8e67e feat(i18n): add localized provider labels for certificate management 2026-03-23 05:45:23 +00:00
GitHub Actions
3cacecde5a fix: replace getAuthToken function with getStorageStateAuthHeaders for improved auth handling 2026-03-23 05:42:02 +00:00
GitHub Actions
4bdc771cd4 feat: synchronize selected certificate IDs with available certificates on update 2026-03-23 05:39:37 +00:00
GitHub Actions
f13d95df0f fix: specify gotestsum version in workflows for consistency 2026-03-23 05:32:52 +00:00
GitHub Actions
73aecc60e8 fix(i18n): restore localized noteText in all non-English certificate locales
- The certificate section's noteText had previously been translated into
  Chinese, German, Spanish, and French but was inadvertently overwritten
  with an English string when the individual certificate delete feature
  was introduced.
- All four locales now carry properly translated text that also reflects
  the updated policy: expired or expiring production certificates that
  are not attached to a proxy host are now eligible for deletion.
- Newly introduced keys (deleteConfirmExpiring and other delete-related
  keys) remain as English placeholders pending professional translation,
  which is the established pattern for this project.
2026-03-23 05:24:58 +00:00
Wikid82
6fc4409513 chore(docker): update GeoLite2-Country.mmdb checksum
Automated checksum update for GeoLite2-Country.mmdb database.

Old: aa154fc6bcd712644de232a4abcdd07dac1f801308c0b6f93dbc2b375443da7b
New: c6549807950f93f609d6433fa295fa517fbdec0ad975a4aafba69c136d5d2347

Auto-generated by: .github/workflows/update-geolite2.yml
2026-03-23 02:57:35 +00:00
GitHub Actions
9ed698b236 feat: enhance certificate management with expiring status
- Update isInUse function to handle certificates without an ID.
- Modify isDeletable function to include 'expiring' status as deletable.
- Adjust CertificateList component to reflect changes in deletable logic.
- Update BulkDeleteCertificateDialog and DeleteCertificateDialog to handle expiring certificates.
- Add tests for expiring certificates in CertificateList and BulkDeleteCertificateDialog.
- Update translations for expiring certificates in multiple languages.
2026-03-23 02:23:08 +00:00
GitHub Actions
69736503ac feat: add BulkDeleteCertificateDialog component for bulk certificate deletion
- Implemented BulkDeleteCertificateDialog with confirmation and listing of certificates to be deleted.
- Added translations for bulk delete functionality in English, German, Spanish, French, and Chinese.
- Created unit tests for BulkDeleteCertificateDialog to ensure proper rendering and functionality.
- Developed end-to-end tests for bulk certificate deletion, covering selection, confirmation, and cancellation scenarios.
2026-03-23 00:07:59 +00:00
Jeremy
5b8941554b Merge pull request #875 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-22 18:00:57 -04:00
renovate[bot]
0bb7826ad5 fix(deps): update non-major-updates 2026-03-22 20:26:16 +00:00
GitHub Actions
bae55fb876 chore(ci): prevent test log truncation in backend coverage workflows
- Install gotestsum in CI so the coverage script uses compact
  pkgname-formatted output instead of go test -v, which produces
  massive verbose logs that exceed GitHub Actions' step log buffer
- Upload the full test output as a downloadable artifact on every
  run (including failures) so truncated logs never block debugging
- Aligns upload-artifact pin to v7.0.0 matching the rest of the repo
2026-03-22 18:49:02 +00:00
GitHub Actions
97255f84e6 fix: add tests for delete certificate functionality and error handling in CertificateList 2026-03-22 17:33:11 +00:00
Jeremy
174f1fe511 Merge pull request #874 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-22 12:00:19 -04:00
GitHub Actions
53fc2f1e78 fix: remove unused waitForToast import from certificate-delete.spec.ts 2026-03-22 14:29:31 +00:00
GitHub Actions
ef5e2e2ea2 fix: enhance setupAuditTestDB for proper database connection handling and documentation 2026-03-22 14:29:31 +00:00
renovate[bot]
b2c40345f8 fix(deps): update non-major-updates 2026-03-22 14:24:03 +00:00
Jeremy
a38de8518f Merge branch 'development' into feature/beta-release 2026-03-22 09:52:02 -04:00
GitHub Actions
a98e37b8b4 fix: update @vitest/eslint-plugin, i18next, and react-i18next versions for compatibility 2026-03-22 13:30:41 +00:00
GitHub Actions
441864be95 fix: add DeleteCertificateDialog component with confirmation dialog for certificate deletion
- Implement DeleteCertificateDialog component to handle certificate deletion confirmation.
- Add tests for DeleteCertificateDialog covering various scenarios including rendering, confirmation, and cancellation.
- Update translation files for multiple languages to include new strings related to certificate deletion.
- Create end-to-end tests for certificate deletion UX, including button visibility, confirmation dialog, and success/failure scenarios.
2026-03-22 13:30:41 +00:00
GitHub Actions
2c9c791ae5 fix: update package versions in package-lock.json for compatibility 2026-03-22 13:30:41 +00:00
GitHub Actions
ea3e8e8371 docs: track CVE-2026-27171 zlib CPU exhaustion as a known medium vulnerability 2026-03-22 13:30:41 +00:00
Jeremy
c5dc4a9d71 Merge pull request #872 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update dependency i18next to ^25.10.3 (feature/beta-release)
2026-03-21 21:59:28 -04:00
renovate[bot]
3b3ae29414 fix(deps): update dependency i18next to ^25.10.3 2026-03-22 01:11:06 +00:00
Jeremy
551532d41b Merge pull request #870 from Wikid82/fix/cwe-614-secure-cookie-attribute
fix(security): harden auth cookie to always set Secure attribute (CWE-614)
2026-03-21 15:14:46 -04:00
GitHub Actions
20537d7bd9 fix(e2e): add Authorization header to API calls in gaps and webkit specs 2026-03-21 16:21:58 +00:00
Jeremy
66b37b5a98 Merge branch 'development' into fix/cwe-614-secure-cookie-attribute 2026-03-21 12:18:38 -04:00
Jeremy
9d4b6e5b43 Merge pull request #871 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-21 12:17:46 -04:00
renovate[bot]
f335b3f03f fix(deps): update non-major-updates 2026-03-21 16:17:20 +00:00
GitHub Actions
52f759cc00 fix(e2e): pass Authorization header in import session cleanup helpers
- Add getStoredAuthHeader helper that reads charon_auth_token from
  localStorage and constructs an Authorization: Bearer header
- Apply the header to all page.request.* API calls in readImportStatus
  and issuePendingSessionCancel
- The previous code relied on the browser cookie jar for these cleanup
  API calls; with Secure=true on auth cookies, browsers refuse to send
  cookies over HTTP to 127.0.0.1 (IP address, not localhost hostname)
  causing silent 401s that left pending ImportSession rows in the DB
- Unreleased sessions caused all subsequent caddy-import tests to show
  the pending-session banner instead of the Caddyfile textarea, failing
  every test after the first
- The fix mirrors how the React app authenticates: via Authorization
  header, which is transport-independent and works on both HTTP and HTTPS
2026-03-21 14:21:55 +00:00
GitHub Actions
cc3cb1da4b fix(security): harden auth cookie to always set Secure attribute
- Remove the conditional secure=false branch from setSecureCookie that
  allowed cookies to be issued without the Secure flag when requests
  arrived over HTTP from localhost or RFC 1918 private addresses
- Pass the literal true to c.SetCookie directly, eliminating the
  dataflow path that triggered CodeQL go/cookie-secure-not-set (CWE-614)
- Remove the now-dead codeql suppression comment; the root cause is
  gone, not merely silenced
- Update setSecureCookie doc comment to reflect that Secure is always
  true: all major browsers (Chrome 66+, Firefox 75+, Safari 14+) honour
  the Secure attribute on localhost HTTP connections, and direct
  HTTP-on-private-IP access without TLS is an unsupported deployment
  model for Charon which is designed to sit behind Caddy TLS termination
- Update the five TestSetSecureCookie HTTP/local tests that previously
  asserted Secure=false to now assert Secure=true, reflecting the
  elimination of the insecure code path
- Add Secure=true assertion to TestClearSecureCookie to provide explicit
  coverage of the clear-cookie path
2026-03-21 13:17:45 +00:00
GitHub Actions
2c608bf684 docs: track CVE-2026-27171 zlib CPU exhaustion as a known medium vulnerability 2026-03-21 12:30:20 +00:00
Jeremy
a855ed0cf6 Merge pull request #869 from Wikid82/feature/beta-release
fix: resolve security header profile preset slugs when assigning via UUID string
2026-03-21 01:46:32 -04:00
GitHub Actions
ad7e97e7df fix: align test expectations with updated proxy host handler behavior 2026-03-21 03:05:10 +00:00
GitHub Actions
a2fea2b368 fix: update tools list in agent markdown files for consistency 2026-03-21 02:35:28 +00:00
GitHub Actions
c428a5be57 fix: propagate pipeline exit codes in CI quality-checks workflow 2026-03-21 02:23:16 +00:00
GitHub Actions
22769977e3 fix: clarify that advanced_config requires Caddy JSON, not Caddyfile syntax 2026-03-21 02:12:24 +00:00
Jeremy
50fb6659da Merge pull request #863 from Wikid82/feature/beta-release
fix(uptime): fix TCP monitor UX — correct format guidance and add client-side validation
2026-03-20 22:03:08 -04:00
GitHub Actions
e4f2606ea2 fix: resolve security header profile preset slugs when assigning via UUID string 2026-03-21 01:59:34 +00:00
GitHub Actions
af5cdf48cf fix: suppress pgproto3/v2 CVE-2026-4427 alias in vulnerability ignore files 2026-03-21 01:42:18 +00:00
GitHub Actions
1940f7f55d fix(tests): improve DOM order validation for type selector and URL input in CreateMonitorModal 2026-03-21 00:47:03 +00:00
GitHub Actions
c785c5165d fix: validate TCP format and update aria attributes in CreateMonitorModal 2026-03-21 00:47:03 +00:00
GitHub Actions
eaf981f635 fix(deps): update katex to version 0.16.40 and tldts to version 7.0.27 in package-lock.json 2026-03-21 00:47:03 +00:00
GitHub Actions
4284bcf0b6 fix(security): update known vulnerabilities section in SECURITY.md to reflect critical CVE-2025-68121 and additional high-severity issues 2026-03-21 00:47:03 +00:00
GitHub Actions
586f7cfc98 fix(security): enhance vulnerability reporting and documentation in SECURITY.md 2026-03-21 00:47:03 +00:00
GitHub Actions
15e9efeeae fix(security): add security review instructions to Management and QA Security agents 2026-03-21 00:47:03 +00:00
Jeremy
cd8bb2f501 Merge pull request #868 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-20 20:14:19 -04:00
renovate[bot]
fa42e79af3 fix(deps): update non-major-updates 2026-03-21 00:12:20 +00:00
Jeremy
859ddaef1f Merge pull request #867 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-20 14:10:06 -04:00
renovate[bot]
3b247cdd73 fix(deps): update non-major-updates 2026-03-20 18:09:46 +00:00
Jeremy
00aab022f5 Merge pull request #866 from Wikid82/renovate/feature/beta-release-knip-6.x
chore(deps): update dependency knip to v6 (feature/beta-release)
2026-03-20 14:08:29 -04:00
renovate[bot]
a40764d7da chore(deps): update dependency knip to v6 2026-03-20 12:00:39 +00:00
Jeremy
87b3db7019 Merge branch 'development' into feature/beta-release 2026-03-20 02:14:04 -04:00
Jeremy
ded533d690 Merge pull request #865 from Wikid82/renovate/feature/beta-release-nick-fields-retry-4.x
chore(deps): update nick-fields/retry action to v4 (feature/beta-release)
2026-03-20 02:13:46 -04:00
Jeremy
fc4ceafa20 Merge pull request #864 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-20 02:13:31 -04:00
renovate[bot]
5b02eebfe5 chore(deps): update nick-fields/retry action to v4 2026-03-20 05:30:43 +00:00
renovate[bot]
338c9a3eef chore(deps): update non-major-updates 2026-03-20 05:30:39 +00:00
GitHub Actions
68d21fc20b fix: patch CVE-2026-30836 in Caddy build by pinning smallstep/certificates to v0.30.0 2026-03-20 04:15:29 +00:00
GitHub Actions
ea9ebdfdf2 fix(tools): update tools list in agent markdown files for consistency 2026-03-20 04:14:56 +00:00
GitHub Actions
1d09c793f6 fix(uptime): remove 'tcp://' prefix from Redis monitor URL in create and payload validation 2026-03-20 02:57:00 +00:00
GitHub Actions
856fd4097b fix(deps): update undici and tar to latest versions for improved stability 2026-03-20 02:47:00 +00:00
GitHub Actions
bb14ae73cc fix(uptime): fix TCP monitor UX — correct format guidance and add client-side validation
The TCP monitor creation form showed a placeholder that instructed users to enter a URL with the tcp:// scheme prefix (e.g., tcp://192.168.1.1:8080). Following this guidance caused a silent HTTP 500 error because Go's net.SplitHostPort rejects any input containing a scheme prefix, expecting bare host:port format only.

- Corrected the urlPlaceholder translation key to remove the tcp:// prefix
- Added per-type dynamic placeholder (urlPlaceholderHttp / urlPlaceholderTcp) so the URL input shows the correct example format as soon as the user selects a monitor type
- Added per-type helper text below the URL input explaining the required format, updated in real time when the type selector changes
- Added client-side validation: typing a scheme prefix (://) in TCP mode shows an inline error and blocks form submission before the request reaches the backend
- Reordered the Create Monitor form so the type selector appears before the URL input, giving users the correct format context before they type
- Type selector onChange now clears any stale urlError to prevent incorrect error messages persisting after switching from TCP back to HTTP
- Added 5 new i18n keys across all 5 supported locales (en, de, fr, es, zh)
- Added 10 RTL unit tests covering all new validation paths including the type-change error-clear scenario
- Added 9 Playwright E2E tests covering placeholder variants, helper text, inline error lifecycle, submission blocking, and successful TCP creation

Closes #issue-5 (TCP monitor UI cannot add monitor when following placeholder)
2026-03-20 01:19:43 +00:00
Jeremy
44450ff88a Merge pull request #862 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update dependency anchore/grype to v0.110.0 (feature/beta-release)
2026-03-19 19:46:25 -04:00
renovate[bot]
3a80e032f4 chore(deps): update dependency anchore/grype to v0.110.0 2026-03-19 21:09:01 +00:00
Jeremy
6e2d89372f Merge pull request #859 from Wikid82/feature/beta-release
fix(frontend): stabilize CrowdSec first-enable UX and guard empty-value regression
2026-03-19 16:56:50 -04:00
GitHub Actions
5bf7b54496 chore: proactively pin grpc and goxmldsig in Docker builder stages to patch embedded binary CVEs 2026-03-19 18:18:28 +00:00
GitHub Actions
0bdcb2a091 chore: suppress third-party binary CVEs with documented justification and expiry dates 2026-03-19 18:18:28 +00:00
GitHub Actions
b988179685 fix: update @emnapi/core, @emnapi/runtime, baseline-browser-mapping, and i18next to latest versions for improved stability 2026-03-19 18:18:28 +00:00
GitHub Actions
cbfe80809e fix: update @emnapi/core, @emnapi/runtime, and katex to latest versions for improved stability 2026-03-19 18:18:28 +00:00
GitHub Actions
9f826f764c fix: update dependencies in go.work.sum for improved compatibility and performance 2026-03-19 18:18:28 +00:00
Jeremy
262a805317 Merge pull request #861 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-19 14:15:42 -04:00
renovate[bot]
ec25165e54 fix(deps): update non-major-updates 2026-03-19 18:02:03 +00:00
GitHub Actions
7b34e2ecea fix: update google.golang.org/grpc to version 1.79.3 for improved compatibility 2026-03-19 13:10:18 +00:00
GitHub Actions
ec9b8ac925 fix: update @types/debug to version 4.1.13 for improved stability 2026-03-19 12:59:23 +00:00
GitHub Actions
431d88c47c fix: update @tanstack/query-core, @tanstack/react-query, @types/debug, eslint-plugin-testing-library, i18next, and knip to latest versions for improved stability and performance 2026-03-19 12:58:46 +00:00
GitHub Actions
e08e1861d6 fix: update @oxc-project and @rolldown packages to version 1.0.0-rc.10 for improved compatibility 2026-03-19 05:17:14 +00:00
GitHub Actions
64d2d4d423 fix: update ts-api-utils to version 2.5.0 for improved functionality 2026-03-19 05:16:32 +00:00
Jeremy
9f233a0128 Merge pull request #860 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-18 20:30:26 -04:00
renovate[bot]
6939c792bd chore(deps): update non-major-updates 2026-03-18 23:07:56 +00:00
GitHub Actions
853940b74a fix: update mockResolvedValue calls for getSecurityStatus to improve test clarity 2026-03-18 23:06:24 +00:00
GitHub Actions
5aa8940af2 fix: update tools list in agent markdown files for consistency and clarity 2026-03-18 23:04:52 +00:00
GitHub Actions
cd3f2a90b4 fix: seed lapi-status in renderWithSeed to prevent loading gaps 2026-03-18 22:19:22 +00:00
GitHub Actions
bf89c2603d fix: enhance invite token validation for hex format and case sensitivity 2026-03-18 22:15:39 +00:00
GitHub Actions
19b388d865 fix: update Caddy security version to 1.1.50 in Dockerfile 2026-03-18 22:11:50 +00:00
GitHub Actions
25e40f164d fix: replace userEvent.click with user.click for consistency in CrowdSec tests 2026-03-18 22:08:05 +00:00
GitHub Actions
5505f66c41 fix: clarify comments on optimistic updates and server state handling in Security component 2026-03-18 22:06:40 +00:00
GitHub Actions
9a07619b89 fix: assert cloud-metadata error and no raw IPv6 leak for mapped metadata IP 2026-03-18 19:08:55 +00:00
GitHub Actions
faf2041a82 fix: sanitize IPv4-mapped IPv6 address in SSRF error message 2026-03-18 19:06:31 +00:00
GitHub Actions
460834f8f3 fix: use correct checkbox assertion for CrowdSec toggle test 2026-03-18 19:05:16 +00:00
GitHub Actions
75ae77a6bf fix: assert all db.Create calls in uptime service tests 2026-03-18 19:03:53 +00:00
GitHub Actions
73f2134caf fix(tests): improve server readiness check in UptimeService test to prevent misleading failures 2026-03-18 18:45:59 +00:00
GitHub Actions
c5efc30f43 fix: eliminate bcrypt DefaultCost from test setup to prevent CI flakiness 2026-03-18 18:13:18 +00:00
GitHub Actions
3099d74b28 fix: ensure cloud metadata SSRF error is consistent for IPv4-mapped addresses 2026-03-18 17:23:53 +00:00
GitHub Actions
fcc9309f2e chore(deps): update indirect dependencies for improved compatibility and performance 2026-03-18 17:12:01 +00:00
Jeremy
e581a9e7e7 Merge branch 'development' into feature/beta-release 2026-03-18 13:11:50 -04:00
Jeremy
ac72e6c3ac Merge pull request #858 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-18 13:11:20 -04:00
renovate[bot]
db824152ef fix(deps): update non-major-updates 2026-03-18 17:00:26 +00:00
GitHub Actions
1de29fe6fc fix(frontend): stabilize CrowdSec first-enable UX and guard empty-value regression
When CrowdSec is first enabled, the 10-60 second startup window caused
the toggle to immediately flicker back to unchecked, the card badge to
show 'Disabled' throughout startup, CrowdSecKeyWarning to flash before
bouncer registration completed, and CrowdSecConfig to show alarming
LAPI-not-ready banners to the user.

Root cause: the toggle, badge, and warning conditions all read from
stale sources (crowdsecStatus local state and status.crowdsec.enabled
server data) which neither reflects user intent during a pending mutation.

- Derive crowdsecChecked from crowdsecPowerMutation.variables during
  the pending window so the UI reflects intent immediately on click,
  not the lagging server state
- Show a 'Starting...' badge in warning variant throughout the startup
  window so the user knows the operation is in progress
- Suppress CrowdSecKeyWarning unconditionally while the mutation is
  pending, preventing the bouncer key alert from flashing before
  registration completes on the backend
- Broadcast the mutation's running state to the QueryClient cache via
  a synthetic crowdsec-starting key so CrowdSecConfig.tsx can read it
  without prop drilling
- In CrowdSecConfig, suppress the LAPI 'not running' (red) and
  'initializing' (yellow) banners while the startup broadcast is active,
  with a 90-second safety cap to prevent stale state from persisting
  if the tab is closed mid-mutation
- Add security.crowdsec.starting translation key to all five locales
- Add two backend regression tests confirming that empty-string setting
  values are accepted (not rejected by binding validation), preventing
  silent re-introduction of the Issue 4 bug
- Add nine RTL tests covering toggle stabilization, badge text, warning
  suppression, and LAPI banner suppression/expiry
- Add four Playwright E2E tests using route interception to simulate
  the startup delay in a real browser context

Fixes Issues 3 and 4 from the fresh-install bug report.
2026-03-18 16:57:23 +00:00
GitHub Actions
ac2026159e chore: update tailwindcss to version 4.2.2 in package.json 2026-03-18 16:46:50 +00:00
GitHub Actions
cfb28055cf fix: add vulnerability suppressions for CVE-2026-2673 in libcrypto3 and libssl3 with justification and review timeline 2026-03-18 11:08:58 +00:00
GitHub Actions
a2d8970b22 chore: Refactor agent tools for improved organization and efficiency across documentation, frontend development, planning, Playwright testing, QA security, and supervisor roles. 2026-03-18 10:36:14 +00:00
GitHub Actions
abadf9878a chore(deps): update electron-to-chromium to version 1.5.321 2026-03-18 10:27:06 +00:00
GitHub Actions
87590ac4e8 fix: simplify error handling and improve readability in URL validation and uptime service tests 2026-03-18 10:25:25 +00:00
Jeremy
999a81dce7 Merge pull request #857 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update dependency knip to ^5.88.0 (feature/beta-release)
2026-03-18 06:24:40 -04:00
Jeremy
031457406a Merge pull request #855 from Wikid82/feature/beta-release
fix(uptime): allow RFC 1918 IPs for admin-configured monitors
2026-03-18 06:09:51 -04:00
renovate[bot]
3d9d183b77 chore(deps): update dependency knip to ^5.88.0 2026-03-18 10:07:26 +00:00
GitHub Actions
379c664b5c fix(test): align cloud-metadata SSRF handler test with updated error message
The settings handler SSRF test table expected the generic "private ip"
error string for the cloud-metadata case (169.254.169.254). After the
url_validator was updated to return a distinct "cloud metadata" error for
that address, the handler test's errorContains check failed on every CI run.

Updated the test case expectation from "private" to "cloud metadata" to
match the more precise error message now produced by the validator.
2026-03-18 03:38:29 +00:00
GitHub Actions
4d8f09e279 fix: improve readiness checks and error handling in uptime service tests 2026-03-18 03:22:32 +00:00
GitHub Actions
8a0e91ac3b chore: strengthen AllowRFC1918 permit tests to assert success and URL correctness 2026-03-18 03:22:32 +00:00
GitHub Actions
3bc798bc9d fix: normalize IPv4-mapped cloud-metadata address to its IPv4 form before error reporting
- IPv4-mapped cloud metadata (::ffff:169.254.169.254) previously fell through
  the IPv4-mapped IPv6 detection block and returned the generic private-IP error
  instead of the cloud-metadata error, making the two cases inconsistent
- The IPv4-mapped error path used ip.String() (the raw ::ffff:… form) directly
  rather than sanitizeIPForError, potentially leaking the unsanitized IPv6
  address in error messages visible to callers
- Now extracts the IPv4 from the mapped address before both the cloud-metadata
  comparison and the sanitization call, so ::ffff:169.254.169.254 produces the
  same "access to cloud metadata endpoints is blocked" error as 169.254.169.254
  and the error message is always sanitized through the shared helper
- Updated the corresponding test to assert the cloud-metadata message and the
  absence of the raw IPv6 representation in the error text
2026-03-18 03:22:32 +00:00
GitHub Actions
8b4e0afd43 fix: format SeedDefaultSecurityConfig for improved readability 2026-03-18 03:22:32 +00:00
GitHub Actions
c7c4fc8915 fix(deps): update flatted to version 3.4.2 for improved stability 2026-03-18 03:22:32 +00:00
Jeremy
41c0252cf1 Merge pull request #856 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update module github.com/greenpau/caddy-security to v1.1.49 (feature/beta-release)
2026-03-17 23:15:17 -04:00
renovate[bot]
4c375ad86f chore(deps): update module github.com/greenpau/caddy-security to v1.1.49 2026-03-18 02:33:53 +00:00
Jeremy
459a8fef42 Merge branch 'development' into feature/beta-release 2026-03-17 22:32:24 -04:00
GitHub Actions
00a18704e8 fix(uptime): allow RFC 1918 IPs for admin-configured monitors
HTTP/HTTPS uptime monitors targeting LAN addresses (192.168.x.x,
10.x.x.x, 172.16.x.x) permanently reported 'down' on fresh installs
because SSRF protection rejects RFC 1918 ranges at two independent
checkpoints: the URL validator (DNS-resolution layer) and the safe
dialer (TCP-connect layer). Fixing only one layer leaves the monitor
broken in practice.

- Add IsRFC1918() predicate to the network package covering only the
  three RFC 1918 CIDRs; 169.254.x.x (link-local / cloud metadata)
  and loopback are intentionally excluded
- Add WithAllowRFC1918() functional option to both SafeHTTPClient and
  ValidationConfig; option defaults to false so existing behaviour is
  unchanged for every call site except uptime monitors
- In uptime_service.go, pass WithAllowRFC1918() to both
  ValidateExternalURL and NewSafeHTTPClient together; a coordinating
  comment documents that both layers must be relaxed as a unit
- 169.254.169.254 and the full 169.254.0.0/16 link-local range remain
  unconditionally blocked; the cloud-metadata error path is preserved
- 21 new tests across three packages, including an explicit regression
  guard that confirms RFC 1918 blocks are still applied without the
  option set (TestValidateExternalURL_RFC1918BlockedByDefault)

Fixes issues 6 and 7 from the fresh-install bug report.
2026-03-17 21:22:56 +00:00
Jeremy
dc9bbacc27 Merge pull request #854 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update release-drafter/release-drafter digest to 44a942e (feature/beta-release)
2026-03-17 16:41:13 -04:00
Jeremy
4da4e1a0d4 Merge branch 'feature/beta-release' into renovate/feature/beta-release-non-major-updates 2026-03-17 14:37:17 -04:00
Jeremy
3318b4af80 Merge pull request #852 from Wikid82/feature/beta-release
feat(security): seed default SecurityConfig row on application startup
2026-03-17 14:36:45 -04:00
GitHub Actions
c1aaa48ecb chore: cover error path in SeedDefaultSecurityConfig and letsencrypt cert cleanup loop
- The DB error return branch in SeedDefaultSecurityConfig was never
  exercised because all seed tests only ran against a healthy in-memory
  database; added a test that closes the underlying connection before
  calling the function so the FirstOrCreate error path is reached
- The letsencrypt certificate cleanup loop in Register was unreachable
  in all existing tests because no test pre-seeded a ProxyHost with
a letsencrypt cert association; added a test that creates that
  precondition so the log and Update lines inside the loop execute
- These were the last two files blocking patch coverage on PR #852
2026-03-17 17:45:39 +00:00
renovate[bot]
f82a892405 chore(deps): update release-drafter/release-drafter digest to 44a942e 2026-03-17 17:17:04 +00:00
GitHub Actions
287e85d232 fix(ci): quote shell variables to prevent word splitting in integration test
- All unquoted $i loop counter comparisons and ${TMP_COOKIE} curl
  option arguments in the rate limit integration script were flagged
  by shellcheck SC2086
- Unquoted variables in [ ] test expressions and curl -b/-c options
  can cause subtle failures if the value ever contains whitespace or
  glob characters, and are a shellcheck hard warning that blocks CI
  linting gates
- Quoted all affected variables in place with no logic changes
2026-03-17 17:15:19 +00:00
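The SC2086 fix amounts to quoting every expansion in `[ ]` tests and command arguments. A small sketch, using a hypothetical `$TMP_COOKIE` path with a space to show why the quotes matter:

```shell
#!/bin/sh
# Quote expansions so whitespace and glob characters cannot split words.
TMP_COOKIE="/tmp/cookie jar.txt"   # hypothetical path containing a space
i=1
while [ "$i" -le 3 ]; do           # quoted: safe even if $i were empty
    # Unquoted, $TMP_COOKIE would be passed as two separate arguments here.
    printf 'attempt %s using cookie file %s\n' "$i" "$TMP_COOKIE"
    i=$((i + 1))
done
```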
Jeremy
fa6fbc8ce9 Merge pull request #853 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update paulhatch/semantic-version action to v6.0.2 (feature/beta-release)
2026-03-17 13:14:55 -04:00
GitHub Actions
61418fa9dd fix(security): persist RateLimitMode in Upsert and harden integration test payload
- The security config Upsert update path copied all rate limit fields
  from the incoming request onto the existing database record except
  RateLimitMode, so the seeded default value of "disabled" always
  survived a POST regardless of what the caller sent
- This silently prevented the Caddy rate_limit handler from being
  injected on any container with a pre-existing config record (i.e.,
  every real deployment and every CI run after migration)
- Added the missing field assignment so RateLimitMode is correctly
  persisted on update alongside all other rate limit settings
- Integration test payload now also sends rate_limit_enable alongside
  rate_limit_mode so the handler sync logic fires via its explicit
  first branch, providing belt-and-suspenders correctness independent
  of which path the caller uses to express intent
2026-03-17 17:06:02 +00:00
GitHub Actions
0df1126aa9 fix(deps): update modernc.org/sqlite to version 1.47.0 for improved functionality 2026-03-17 14:31:42 +00:00
renovate[bot]
1c72469ad6 chore(deps): update paulhatch/semantic-version action to v6.0.2 2026-03-17 14:30:44 +00:00
GitHub Actions
338f864f60 fix(ci): set correct rate_limit_mode field in integration test security config
- The rate-limit integration test was sending rate_limit_enable:true in the
  security config POST, but the backend injects the Caddy rate_limit handler
  only when rate_limit_mode is the string "enabled"
- Because rate_limit_mode was absent from the payload, the database default
  of "disabled" persisted and the guard condition always evaluated false,
  leaving the handler uninjected across all 10 verify attempts
- Replaced the boolean rate_limit_enable with the string field
  rate_limit_mode:"enabled" to match the exact contract the backend enforces
2026-03-17 14:29:35 +00:00
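Based on the contract described above, the security config POST payload would need to carry the string field, roughly as follows (the two field names come from the commit messages; the endpoint and any other fields in the real payload are not shown):

```json
{
  "rate_limit_mode": "enabled",
  "rate_limit_enable": true
}
```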
GitHub Actions
8b0011f6c6 fix(ci): enhance rate limit integration test reliability
- Added HTTP status checks for login and security config POST requests to ensure proper error handling.
- Implemented a readiness gate for the Caddy admin API before applying security configurations.
- Increased sleep duration before verifying rate limit handler to accommodate Caddy's configuration propagation.
- Changed verification failure from a warning to a hard exit to prevent misleading test results.
- Updated Caddy admin API URL to use the canonical trailing slash in multiple locations.
- Adjusted retry parameters for rate limit verification to reduce polling noise.
- Removed stale GeoIP checksum validation from the Dockerfile's non-CI path to simplify the build process.
2026-03-17 14:05:25 +00:00
GitHub Actions
e6a044c532 fix(deps): update caniuse-lite to version 1.0.30001780 for improved compatibility 2026-03-17 12:40:55 +00:00
GitHub Actions
bb1e59ea93 fix(deps): update bytedance/gopkg to version 0.1.4 for improved functionality 2026-03-17 12:38:43 +00:00
GitHub Actions
b761d7d4f7 feat(security): seed default SecurityConfig row on application startup
On a fresh install the security_configs table is auto-migrated but
contains no rows. Any code path reading SecurityConfig by name received
an empty Go struct with zero values, producing an all-disabled UI state
that offered no guidance to the user and made the security status
endpoint appear broken.

Adds a SeedDefaultSecurityConfig function that uses FirstOrCreate to
guarantee a default row exists with safe, disabled-by-default values on
every startup. The call is idempotent — existing rows are never modified,
so upgrades are unaffected. If the seed fails the application logs a
warning and continues rather than crashing.

Zero-valued rate-limit fields are intentional and safe: the Cerberus
rate-limit middleware applies hardcoded fallback thresholds when the
stored values are zero, so enabling rate limiting without configuring
thresholds results in sensible defaults rather than a divide-by-zero or
traffic block.

Adds three unit tests covering the empty-database, idempotent, and
do-not-overwrite-existing paths.
2026-03-17 12:33:40 +00:00
Jeremy
418fb7d17c Merge pull request #851 from Wikid82/feature/beta-release
fix(settings): allow empty string as a valid setting value
2026-03-16 23:24:37 -04:00
Jeremy
5084483984 Merge branch 'development' into feature/beta-release 2026-03-16 22:05:55 -04:00
GitHub Actions
3c96810aa1 fix(deps): update @babel/helpers, @babel/parser, @babel/runtime, and enhanced-resolve to latest versions for improved stability 2026-03-17 02:05:00 +00:00
GitHub Actions
dcd1ec7e95 fix: improve error handling in TestSettingsHandler_UpdateSetting_EmptyValueAccepted 2026-03-17 02:01:48 +00:00
GitHub Actions
4f222b6308 fix: make 'value' field optional in UpdateSettingRequest struct 2026-03-17 01:40:35 +00:00
Jeremy
071ae38d35 Merge pull request #850 from Wikid82/feature/beta-release
Feature: Pushover Notification Provider
2026-03-16 20:09:08 -04:00
GitHub Actions
3385800f41 fix(deps): update core-js-compat to version 3.49.0 for improved compatibility 2026-03-16 21:48:19 +00:00
GitHub Actions
4fe538b37e chore: add unit tests for Slack and Pushover service flags, and validate Pushover dispatch behavior 2026-03-16 21:38:40 +00:00
Jeremy
2bdf4f8286 Merge branch 'development' into feature/beta-release 2026-03-16 14:26:07 -04:00
Jeremy
a96366957e Merge pull request #849 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-16 14:24:11 -04:00
renovate[bot]
c44642241c chore(deps): update non-major-updates 2026-03-16 18:22:12 +00:00
GitHub Actions
b5bf505ab9 fix: update go-sqlite3 to version 1.14.37 and modernc.org/sqlite to version 1.46.2 for improved stability 2026-03-16 18:20:35 +00:00
GitHub Actions
51f59e5972 fix: update @typescript-eslint packages to version 8.57.1 for improved compatibility and stability 2026-03-16 18:19:36 +00:00
GitHub Actions
65d02e754e feat: add support for Pushover notification provider
- Updated the list of supported notification provider types to include 'pushover'.
- Enhanced the notifications API tests to validate Pushover integration.
- Modified the notifications form to include fields specific to Pushover, such as API Token and User Key.
- Implemented CRUD operations for Pushover providers in the settings.
- Added end-to-end tests for Pushover provider functionality, including form rendering, payload validation, and security checks.
- Updated translations to include Pushover-specific labels and placeholders.
2026-03-16 18:16:14 +00:00
Jeremy
816c0595e1 Merge pull request #834 from Wikid82/feature/beta-release
Feature: Slack Notification Provider
2026-03-16 11:15:29 -04:00
GitHub Actions
9496001811 fix: update undici to version 7.24.4 for improved stability and security 2026-03-16 12:33:58 +00:00
Jeremy
ec1b79c2b7 Merge branch 'development' into feature/beta-release 2026-03-16 08:30:45 -04:00
Jeremy
bab79f2349 Merge pull request #846 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-16 08:28:36 -04:00
renovate[bot]
edd7405313 chore(deps): update non-major-updates 2026-03-16 12:28:25 +00:00
GitHub Actions
79800871fa fix: harden frontend-builder with npm upgrade to mitigate bundled CVEs 2026-03-16 12:26:55 +00:00
Jeremy
67dd87d3a9 Merge pull request #845 from Wikid82/main
Propagate changes from main into development
2026-03-16 08:24:38 -04:00
Jeremy
dfc2beb8f3 Merge pull request #844 from Wikid82/nightly
Weekly: Promote nightly to main (2026-03-16)
2026-03-16 08:16:42 -04:00
GitHub Actions
5e5eae7422 fix: ensure Semgrep hook triggers on Dockerfile-only commits 2026-03-16 11:44:27 +00:00
GitHub Actions
78f216eaef fix: enhance payload handling in Slack provider creation to track token presence 2026-03-16 11:41:06 +00:00
Jeremy
34d5cca972 Merge branch 'main' into nightly 2026-03-16 07:35:56 -04:00
Jeremy
5d771381a1 Merge pull request #842 from Wikid82/bot/update-geolite2-checksum
chore(docker): update GeoLite2-Country.mmdb checksum
2026-03-16 07:35:38 -04:00
GitHub Actions
95a65069c0 fix: handle existing PR outputs in promotion job 2026-03-16 11:17:37 +00:00
Jeremy
1e4b2d1d03 Merge pull request #843 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-16 07:15:40 -04:00
renovate[bot]
81f1dce887 fix(deps): update non-major-updates 2026-03-16 11:06:23 +00:00
Wikid82
3570c05805 chore(docker): update GeoLite2-Country.mmdb checksum
Automated checksum update for GeoLite2-Country.mmdb database.

Old: b79afc28a0a52f89c15e8d92b05c173f314dd4f687719f96cf921012d900fcce
New: aa154fc6bcd712644de232a4abcdd07dac1f801308c0b6f93dbc2b375443da7b

Auto-generated by: .github/workflows/update-geolite2.yml
2026-03-16 02:58:27 +00:00
GitHub Actions
b66cc34e1c fix: update Caddy security version to 1.1.48 in Dockerfile 2026-03-15 20:49:53 +00:00
GitHub Actions
5bafd92edf fix: supply slack webhook token in handler create sub-tests
The slack sub-tests in TestDiscordOnly_CreateRejectsNonDiscord and
TestBlocker3_CreateProviderRejectsNonDiscordWithSecurityEvents were
omitting the required token field from their request payloads.
CreateProvider enforces that Slack providers must have a non-empty
token (the webhook URL) at creation time. Without it the service
returns "slack webhook URL is required", which the handler does not
classify as a 400 validation error, so it falls through to 500.

Add a token field to each test struct, populate it for the slack
case with a valid-format Slack webhook URL, and use
WithSlackURLValidator to bypass the real format check in unit tests —
matching the pattern used in all existing service-level Slack tests.
2026-03-15 15:17:23 +00:00
GitHub Actions
6e4294dce1 fix: validate Slack webhook URL at provider create/update time 2026-03-15 12:23:27 +00:00
GitHub Actions
82b1c85b7c fix: clarify feature flag behavior for Slack notifications in documentation 2026-03-15 12:14:48 +00:00
GitHub Actions
41ecb7122f fix: update baseline-browser-mapping and caniuse-lite to latest versions 2026-03-15 11:58:48 +00:00
GitHub Actions
2fa7608b9b fix: guard routeBodyPromise against indefinite hang in security test 2026-03-15 11:51:16 +00:00
GitHub Actions
285ee2cdda fix: expand Semgrep ruleset to cover TypeScript, Dockerfile, and shell security 2026-03-15 11:45:18 +00:00
GitHub Actions
72598ed2ce fix: inject Slack URL validator via constructor option instead of field mutation 2026-03-15 11:27:51 +00:00
GitHub Actions
8670cdfd2b fix: format notification services table for better readability 2026-03-15 11:17:34 +00:00
GitHub Actions
f8e8440388 fix: correct GeoIP CI detection to require truthy value 2026-03-15 11:15:56 +00:00
GitHub Actions
ab4dee5fcd fix: make Slack webhook URL validator injectable on NotificationService 2026-03-15 11:15:10 +00:00
Jeremy
04e87e87d5 Merge pull request #841 from Wikid82/renovate/feature/beta-release-jsdom-29.x
chore(deps): update dependency jsdom to v29 (feature/beta-release)
2026-03-15 07:00:19 -04:00
Jeremy
cc96435db1 Merge pull request #840 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update softprops/action-gh-release digest to b25b93d (feature/beta-release)
2026-03-15 06:59:51 -04:00
renovate[bot]
53af0a6866 chore(deps): update dependency jsdom to v29 2026-03-15 10:56:03 +00:00
renovate[bot]
3577ce6c56 chore(deps): update softprops/action-gh-release digest to b25b93d 2026-03-15 10:55:54 +00:00
Jeremy
0ce35f2d64 Merge branch 'development' into feature/beta-release 2026-03-14 23:47:43 -04:00
Jeremy
0e556433f7 Merge pull request #839 from Wikid82/hotfix/login
Hotfix: Login / Auth on Private IP
2026-03-14 23:45:41 -04:00
GitHub Actions
4b170b69e0 fix: update Caddy security version to 1.1.47 in Dockerfile 2026-03-15 03:25:41 +00:00
GitHub Actions
1096b00b94 fix: set PORT environment variable for httpbin backend in integration scripts 2026-03-14 16:44:35 +00:00
GitHub Actions
6180d53a93 fix: update undici to version 7.24.2 in package-lock.json 2026-03-14 16:44:35 +00:00
Jeremy
fca1139c81 Merge pull request #838 from Wikid82/renovate/feature/beta-release-release-drafter-release-drafter-7.x
chore(deps): update release-drafter/release-drafter action to v7 (feature/beta-release)
2026-03-14 12:30:46 -04:00
Jeremy
847b10322a Merge pull request #837 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-14 12:30:29 -04:00
Jeremy
59251c8f27 Merge branch 'feature/beta-release' into renovate/feature/beta-release-non-major-updates 2026-03-14 12:30:02 -04:00
GitHub Actions
58b087bc63 fix: replace curl with wget for backend readiness checks in integration scripts 2026-03-14 13:17:06 +00:00
renovate[bot]
8ab926dc8b chore(deps): update release-drafter/release-drafter action to v7 2026-03-14 13:16:45 +00:00
renovate[bot]
85f258d9f6 chore(deps): update non-major-updates 2026-03-14 13:15:37 +00:00
GitHub Actions
042c5ec6e5 fix(ci): replace abandoned httpbin image with maintained Go alternative 2026-03-13 22:44:19 +00:00
GitHub Actions
05d19c0471 fix: update lru-cache and other dependencies to latest versions 2026-03-13 20:07:30 +00:00
GitHub Actions
48af524313 chore(security): expand Semgrep coverage to include frontend and secrets scanning 2026-03-13 20:07:30 +00:00
GitHub Actions
bad97102e1 fix: repair GeoIP CI detection and harden httpbin startup in integration tests 2026-03-13 20:07:30 +00:00
GitHub Actions
98a4efcd82 fix: handle errors gracefully when commenting on PRs in supply chain verification workflow 2026-03-13 20:07:30 +00:00
Jeremy
f631dfc628 Merge pull request #836 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-13 15:58:41 -04:00
renovate[bot]
eb5b74cbe3 chore(deps): update non-major-updates 2026-03-13 19:08:11 +00:00
GitHub Actions
1785ccc39f fix: remove zlib vulnerability suppression and update review dates for Nebula ECDSA signature malleability 2026-03-13 14:14:22 +00:00
GitHub Actions
4b896c2e3c fix: replace curl with wget for healthcheck commands in Docker configurations 2026-03-13 14:13:37 +00:00
GitHub Actions
88a9cdb0ff fix(deps): update @vitejs/plugin-react to version 6.0.1 and adjust peer dependency for @rolldown/plugin-babel 2026-03-13 12:33:00 +00:00
GitHub Actions
354ff0068a fix: upgrade zlib package in Dockerfile to ensure latest security patches 2026-03-13 12:10:38 +00:00
GitHub Actions
0c419d8f85 chore: add Slack provider validation tests for payload and webhook URL 2026-03-13 12:09:35 +00:00
GitHub Actions
26be592f4d feat: add Slack notification provider support
- Updated the notification provider types to include 'slack'.
- Modified API tests to handle 'slack' as a valid provider type.
- Enhanced frontend forms to display Slack-specific fields (webhook URL and channel name).
- Implemented CRUD operations for Slack providers, ensuring proper payload structure.
- Added E2E tests for Slack notification provider, covering form rendering, validation, and security checks.
- Updated translations to include Slack-related text.
- Ensured that sensitive information (like tokens) is not exposed in API responses.
2026-03-13 03:40:02 +00:00
GitHub Actions
fb9b6cae76 fix(deps): update caddy-security version to 1.1.46 2026-03-13 01:37:09 +00:00
Jeremy
5bb9b2a6fb Merge branch 'development' into feature/beta-release 2026-03-12 13:52:54 -04:00
GitHub Actions
593694a4b4 fix(deps): update goccy/go-json to version 0.10.6 2026-03-12 17:49:05 +00:00
GitHub Actions
b207993299 fix(deps): update baseline-browser-mapping to version 2.10.7 and undici to version 7.23.0 2026-03-12 17:48:14 +00:00
Jeremy
a807288052 Merge pull request #833 from Wikid82/renovate/feature/beta-release-non-major-updates
chore(deps): update non-major-updates (feature/beta-release)
2026-03-12 13:45:33 -04:00
renovate[bot]
49b956f916 chore(deps): update non-major-updates 2026-03-12 17:38:44 +00:00
GitHub Actions
53227de55c chore: Refactor code structure for improved readability and maintainability 2026-03-12 10:10:25 +00:00
GitHub Actions
58921556a1 fix(deps): update golang.org/x/term to version 0.41.0 2026-03-12 10:06:34 +00:00
GitHub Actions
442164cc5c fix(deps): update golang.org/x/crypto and golang.org/x/net dependencies to latest versions 2026-03-12 10:05:51 +00:00
Jeremy
8414004d8f Merge pull request #832 from Wikid82/renovate/feature/beta-release-non-major-updates
fix(deps): update non-major-updates (feature/beta-release)
2026-03-12 05:53:18 -04:00
renovate[bot]
7932188dae fix(deps): update non-major-updates 2026-03-12 09:30:08 +00:00
GitHub Actions
d4081d954f chore: update dependencies and configuration for Vite and Vitest
- Bump versions of @vitejs/plugin-react, @vitest/coverage-istanbul, @vitest/coverage-v8, and @vitest/ui to their beta releases.
- Upgrade Vite and Vitest to their respective beta versions.
- Adjust Vite configuration to disable code splitting for improved React initialization stability.
2026-03-12 04:31:31 +00:00
GitHub Actions
2e85a341c8 chore: upgrade ESLint and related plugins to version 10.x
- Updated @eslint/js and eslint to version 10.0.0 in package.json.
- Adjusted overrides for eslint-plugin-react-hooks, eslint-plugin-jsx-a11y, and eslint-plugin-promise to ensure compatibility with ESLint v10.
- Modified lefthook.yml to reflect the upgrade and noted the need for plugin support for ESLint v10.
2026-03-12 00:00:01 +00:00
GitHub Actions
2969eb58e4 chore: update TypeScript to 6.0.1-rc and adjust package dependencies
- Removed duplicate @typescript-eslint/utils dependency in frontend/package.json
- Updated TypeScript version from 5.9.3 to 6.0.1-rc in frontend/package.json and package.json
- Adjusted ResizeObserver mock to use globalThis in tests
- Modified tsconfig.json and tsconfig.node.json to include empty types array
- Cleaned up package-lock.json to reflect TypeScript version change and updated dev dependencies
2026-03-11 22:19:35 +00:00
Jeremy
bfe535d36a Merge pull request #816 from Wikid82/hotfix/docker_build
fix(docker): update CADDY_VERSION to 2.11.2 for improved stability
2026-03-09 13:47:14 -04:00
214 changed files with 18936 additions and 4086 deletions


@@ -47,7 +47,7 @@ services:
# - <PATH_TO_YOUR_CADDYFILE>:/import/Caddyfile:ro
# - <PATH_TO_YOUR_SITES_DIR>:/import/sites:ro # If your Caddyfile imports other files
healthcheck:
-test: ["CMD-SHELL", "curl -fsS http://localhost:8080/api/v1/health || exit 1"]
+test: ["CMD-SHELL", "wget -qO /dev/null http://localhost:8080/api/v1/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3


@@ -87,7 +87,7 @@ services:
- playwright_caddy_config:/config
- /var/run/docker.sock:/var/run/docker.sock:ro # For container discovery in tests
healthcheck:
-test: ["CMD", "curl", "-sf", "http://localhost:8080/api/v1/health"]
+test: ["CMD-SHELL", "wget -qO /dev/null http://localhost:8080/api/v1/health || exit 1"]
interval: 5s
timeout: 3s
retries: 12


@@ -48,11 +48,12 @@ services:
tmpfs:
# True tmpfs for E2E test data - fresh on every run, in-memory only
# mode=1777 allows any user to write (container runs as non-root)
-- /app/data:size=100M,mode=1777
+# 256M gives headroom for the backup service's 100MB disk-space check
+- /app/data:size=256M,mode=1777
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro # For container discovery in tests
healthcheck:
-test: ["CMD-SHELL", "curl -fsS http://localhost:8080/api/v1/health || exit 1"]
+test: ["CMD-SHELL", "wget -qO /dev/null http://localhost:8080/api/v1/health || exit 1"]
interval: 5s
timeout: 5s
retries: 10


@@ -52,7 +52,7 @@ services:
# - ./my-existing-Caddyfile:/import/Caddyfile:ro
# - ./sites:/import/sites:ro # If your Caddyfile imports other files
healthcheck:
-test: ["CMD-SHELL", "curl -fsS http://localhost:8080/api/v1/health || exit 1"]
+test: ["CMD-SHELL", "wget -qO /dev/null http://localhost:8080/api/v1/health || exit 1"]
interval: 30s
timeout: 10s
retries: 3


@@ -365,7 +365,7 @@ echo "Caddy started (PID: $CADDY_PID)"
echo "Waiting for Caddy admin API..."
i=1
while [ "$i" -le 30 ]; do
-if curl -sf http://127.0.0.1:2019/config/ > /dev/null 2>&1; then
+if wget -qO /dev/null http://127.0.0.1:2019/config/ 2>/dev/null; then
echo "Caddy is ready!"
break
fi

File diff suppressed because one or more lines are too long


@@ -73,6 +73,7 @@ You are "lazy" in the smartest way possible. You never do what a subordinate can
- **Supervisor**: Call `Supervisor` to review the implementation against the plan. Provide feedback and ensure alignment with best practices.
6. **Phase 6: Audit**:
- Review Security: Read `security.md.instructions.md` and `SECURITY.md` to understand the security requirements and best practices for Charon. Ensure that any open concerns or issues are addressed in the QA Audit and `SECURITY.md` is updated accordingly.
- **QA**: Call `QA_Security` to meticulously test current implementation as well as regression test. Run all linting, security tasks, and manual lefthook checks. Write a report to `docs/reports/qa_report.md`. Start back at Phase 1 if issues are found.
7. **Phase 7: Closure**:

File diff suppressed because one or more lines are too long


@@ -0,0 +1,204 @@
---
applyTo: SECURITY.md
---
# Instructions: Maintaining `SECURITY.md`
`SECURITY.md` is the project's living security record. It serves two audiences simultaneously: users who need to know what risks exist right now, and the broader community who need confidence that vulnerabilities are being tracked and remediated with discipline. Treat it like a changelog, but for security events — every known issue gets an entry, every resolved issue keeps its entry.
---
## File Structure
`SECURITY.md` must always contain the following top-level sections, in this order:
1. A brief project security policy preamble (responsible disclosure contact, response SLA)
2. **`## Known Vulnerabilities`** — active, unpatched issues
3. **`## Patched Vulnerabilities`** — resolved issues, retained permanently for audit trail
No other top-level sections are required. Do not collapse or remove sections even when they are empty — use the explicit empty-state placeholder defined below.
---
## Section 1: Known Vulnerabilities
This section lists every vulnerability that is currently unpatched or only partially mitigated. Entries must be sorted with the highest severity first, then by discovery date descending within the same severity tier.
### Entry Format
Each entry is an H3 heading followed by a structured block:
```markdown
### [SEVERITY] CVE-XXXX-XXXXX · Short Title
| Field | Value |
|--------------|-------|
| **ID** | CVE-XXXX-XXXXX (or `CHARON-YYYY-NNN` if no CVE assigned yet) |
| **Severity** | Critical / High / Medium / Low · CVSS v3.1 score if known (e.g. `8.1 · High`) |
| **Status** | Investigating / Fix In Progress / Awaiting Upstream / Mitigated (partial) |
**What**
One to three sentences describing the vulnerability class and its impact.
Be specific: name the weakness type (e.g. SQL injection, path traversal, SSRF).
**Who**
- Discovered by: [Reporter name or handle, or "Internal audit", or "Automated scan (tool name)"]
- Reported: YYYY-MM-DD
- Affects: [User roles, API consumers, unauthenticated users, etc.]
**Where**
- Component: [Module or service name]
- File(s): `path/to/affected/file.go`, `path/to/other/file.ts`
- Versions affected: `>= X.Y.Z` (or "all versions" / "prior to X.Y.Z")
**When**
- Discovered: YYYY-MM-DD
- Disclosed (if public): YYYY-MM-DD (or "Not yet publicly disclosed")
- Target fix: YYYY-MM-DD (or sprint/milestone reference)
**How**
A concise technical description of the attack vector, prerequisites, and exploitation
method. Omit proof-of-concept code. Reference CVE advisories or upstream issue
trackers where appropriate.
**Planned Remediation**
Describe the fix strategy: library upgrade, logic refactor, config change, etc.
If a workaround is available in the meantime, document it here.
Link to the tracking issue: [#NNN](https://github.com/owner/repo/issues/NNN)
```
### Empty State
When there are no known vulnerabilities:
```markdown
## Known Vulnerabilities
No known unpatched vulnerabilities at this time.
Last reviewed: YYYY-MM-DD
```
---
## Section 2: Patched Vulnerabilities
This section is a permanent, append-only ledger. Entries are never deleted. Sort newest-patched first. This section builds community trust by demonstrating that issues are resolved promptly and transparently.
### Entry Format
```markdown
### ✅ [SEVERITY] CVE-XXXX-XXXXX · Short Title
| Field | Value |
|--------------|-------|
| **ID** | CVE-XXXX-XXXXX (or internal ID) |
| **Severity** | Critical / High / Medium / Low · CVSS v3.1 score |
| **Patched** | YYYY-MM-DD in `vX.Y.Z` |
**What**
Same description carried over from the Known Vulnerabilities entry.
**Who**
- Discovered by: [Reporter or method]
- Reported: YYYY-MM-DD
**Where**
- Component: [Module or service name]
- File(s): `path/to/affected/file.go`
- Versions affected: `< X.Y.Z`
**When**
- Discovered: YYYY-MM-DD
- Patched: YYYY-MM-DD
- Time to patch: N days
**How**
Same technical description as the original entry.
**Resolution**
Describe exactly what was changed to fix the issue.
- Commit: [`abc1234`](https://github.com/owner/repo/commit/abc1234)
- PR: [#NNN](https://github.com/owner/repo/pull/NNN)
- Release: [`vX.Y.Z`](https://github.com/owner/repo/releases/tag/vX.Y.Z)
**Credit**
[Optional] Thank the reporter if they consented to attribution.
```
### Empty State
```markdown
## Patched Vulnerabilities
No patched vulnerabilities on record yet.
```
---
## Lifecycle: Moving an Entry from Known → Patched
When a fix ships:
1. Remove the entry from `## Known Vulnerabilities` entirely.
2. Add a new entry to the **top** of `## Patched Vulnerabilities` using the patched format above.
3. Carry forward all original fields verbatim — do not rewrite the history of the issue.
4. Add the `**Resolution**` and `**Credit**` blocks with patch details.
5. Update the `Last reviewed` date on the Known Vulnerabilities section if it is now empty.
Do not edit or backfill existing Patched entries once they are committed.
---
## Severity Classification
Use the following definitions consistently:
| Severity | CVSS Range | Meaning |
|----------|------------|---------|
| **Critical** | 9.0–10.0 | Remote code execution, auth bypass, full data exposure |
| **High** | 7.0–8.9 | Significant data exposure, privilege escalation, DoS |
| **Medium** | 4.0–6.9 | Limited data exposure, requires user interaction or auth |
| **Low** | 0.1–3.9 | Minimal impact, difficult to exploit, defense-in-depth |
When a CVE CVSS score is not yet available, assign a preliminary severity based on these definitions and note it as `(preliminary)` until confirmed.
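These ranges translate directly into a lookup. As a sketch, a small helper (name hypothetical) that mirrors the table above for scripting use:

```shell
#!/usr/bin/env bash
# Hypothetical helper mirroring the severity table above (CVSS v3.1 ranges).
cvss_severity() {
  awk -v s="$1" 'BEGIN {
    if      (s >= 9.0) print "Critical"
    else if (s >= 7.0) print "High"
    else if (s >= 4.0) print "Medium"
    else if (s >= 0.1) print "Low"
    else               print "None"
  }'
}

cvss_severity 7.5   # High
```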
---
## Internal IDs
If a vulnerability has no CVE assigned, use the format `CHARON-YYYY-NNN`, where `YYYY` is the year and `NNN` is a zero-padded sequence number starting at `001` each year. Example: `CHARON-2025-003`. If a CVE is issued later, update the entry with the CVE ID and keep the internal ID as an alias in parentheses.
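As a sketch, the format can be produced with `printf` (the helper name is hypothetical):

```shell
#!/usr/bin/env bash
# Hypothetical helper: format an internal vulnerability ID as CHARON-YYYY-NNN.
internal_id() {
  printf 'CHARON-%04d-%03d' "$1" "$2"
}

internal_id 2025 3   # CHARON-2025-003
```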
---
## Responsible Disclosure Preamble
The preamble at the top of `SECURITY.md` (before the vulnerability sections) must include:
- The preferred contact method for reporting vulnerabilities (e.g. a GitHub private advisory link, a security email address, or both)
- An acknowledgment-first response commitment: confirm receipt within 48 hours, even if the full investigation takes longer
- A statement that reporters will not be penalized or publicly named without consent
- A link to the full disclosure policy if one exists
Example:
```markdown
## Reporting a Vulnerability
To report a security issue, please use
[GitHub Private Security Advisories](https://github.com/owner/repo/security/advisories/new)
or email `security@example.com`.
We will acknowledge your report within **48 hours** and provide a remediation
timeline within **7 days**. Reporters are credited with their consent.
We do not pursue legal action against good-faith security researchers.
```
---
## Maintenance Rules
- **Review cadence**: Update the `Last reviewed` date in the Known Vulnerabilities section at least once per release cycle, even if no entries changed.
- **No silent patches**: Every security fix — no matter how minor — must produce an entry in `## Patched Vulnerabilities` before or alongside the release.
- **No redaction**: Do not redact or soften historical entries. Accuracy builds trust; minimizing past issues destroys it.
- **Dependency vulnerabilities**: Transitive dependency CVEs that affect Charon's exposed attack surface must be tracked here the same as first-party vulnerabilities. Pure dev-dependency CVEs with no runtime impact may be omitted at maintainer discretion, but must still be noted in the relevant dependency update PR.
- **Partial mitigations**: If a workaround is deployed but the root cause is not fixed, the entry stays in `## Known Vulnerabilities` with `Status: Mitigated (partial)` and the workaround documented in `**Planned Remediation**`.

.github/renovate.json

@@ -130,6 +130,32 @@
"datasourceTemplate": "go",
"versioningTemplate": "semver"
},
{
"customType": "regex",
"description": "Track gotestsum version in codecov workflow",
"managerFilePatterns": [
"/^\\.github/workflows/codecov-upload\\.yml$/"
],
"matchStrings": [
"gotestsum@v(?<currentValue>[^\\s]+)"
],
"depNameTemplate": "gotest.tools/gotestsum",
"datasourceTemplate": "go",
"versioningTemplate": "semver"
},
{
"customType": "regex",
"description": "Track gotestsum version in quality checks workflow",
"managerFilePatterns": [
"/^\\.github/workflows/quality-checks\\.yml$/"
],
"matchStrings": [
"gotestsum@v(?<currentValue>[^\\s]+)"
],
"depNameTemplate": "gotest.tools/gotestsum",
"datasourceTemplate": "go",
"versioningTemplate": "semver"
},
{
"customType": "regex",
"description": "Track govulncheck version in scripts",
@@ -255,6 +281,12 @@
"matchUpdateTypes": ["major"],
"automerge": false,
"labels": ["manual-review"]
},
{
"description": "Fix Renovate lookup for geoip2-golang v2 module path",
"matchDatasources": ["go"],
"matchPackageNames": ["github.com/oschwald/geoip2-golang/v2"],
"sourceUrl": "https://github.com/oschwald/geoip2-golang"
}
]
}


@@ -35,7 +35,7 @@ fi
# Check Grype
if ! command -v grype >/dev/null 2>&1; then
log_error "Grype not found - install from: https://github.com/anchore/grype"
log_error "Installation: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.109.1"
log_error "Installation: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.110.0"
error_exit "Grype is required for vulnerability scanning" 2
fi
@@ -50,8 +50,8 @@ SYFT_INSTALLED_VERSION=$(syft version | grep -oP 'Version:\s*\Kv?[0-9]+\.[0-9]+\
GRYPE_INSTALLED_VERSION=$(grype version | grep -oP 'Version:\s*\Kv?[0-9]+\.[0-9]+\.[0-9]+' | head -1 || echo "unknown")
# Set defaults matching CI workflow
set_default_env "SYFT_VERSION" "v1.42.2"
set_default_env "GRYPE_VERSION" "v0.109.1"
set_default_env "SYFT_VERSION" "v1.42.3"
set_default_env "GRYPE_VERSION" "v0.110.0"
set_default_env "IMAGE_TAG" "charon:local"
set_default_env "FAIL_ON_SEVERITY" "Critical,High"


@@ -21,6 +21,6 @@ jobs:
with:
ref: ${{ github.event.workflow_run.head_sha || github.sha }}
- name: Draft Release
uses: release-drafter/release-drafter@6a93d829887aa2e0748befe2e808c66c0ec6e4c7 # v6
uses: release-drafter/release-drafter@139054aeaa9adc52ab36ddf67437541f039b88e2 # v7
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -33,7 +33,7 @@ jobs:
- name: Calculate Semantic Version
id: semver
uses: paulhatch/semantic-version@f29500c9d60a99ed5168e39ee367e0976884c46e # v6.0.1
uses: paulhatch/semantic-version@9f72830310d5ed81233b641ee59253644cd8a8fc # v6.0.2
with:
# The prefix to use to create tags
tag_prefix: "v"
@@ -89,7 +89,7 @@ jobs:
- name: Create GitHub Release (creates tag via API)
if: ${{ steps.semver.outputs.changed == 'true' && steps.check_release.outputs.exists == 'false' }}
uses: softprops/action-gh-release@a06a81a03ee405af7f2048a818ed3f03bbf83c7b # v2
uses: softprops/action-gh-release@153bb8e04406b158c6c84fc1615b65b24149a1fe # v2
with:
tag_name: ${{ steps.determine_tag.outputs.tag }}
name: Release ${{ steps.determine_tag.outputs.tag }}


@@ -31,7 +31,7 @@ jobs:
- name: Build Docker image (Local)
run: |
echo "Building image locally for integration tests..."
docker build -t charon:local .
docker build -t charon:local --build-arg CI="${CI:-false}" .
echo "✅ Successfully built charon:local"
- name: Run Cerberus integration tests


@@ -126,6 +126,9 @@ jobs:
echo "__CHARON_EOF__"
} >> "$GITHUB_ENV"
- name: Install gotestsum
run: go install gotest.tools/gotestsum@v1.13.0
- name: Run Go tests with coverage
working-directory: ${{ github.workspace }}
env:
@@ -134,8 +137,16 @@ jobs:
bash scripts/go-test-coverage.sh 2>&1 | tee backend/test-output.txt
exit "${PIPESTATUS[0]}"
- name: Upload test output artifact
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: backend-test-output
path: backend/test-output.txt
retention-days: 7
- name: Upload backend coverage to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./backend/coverage.txt
@@ -172,7 +183,7 @@ jobs:
exit "${PIPESTATUS[0]}"
- name: Upload frontend coverage to Codecov
uses: codecov/codecov-action@671740ac38dd9b0130fbe1cec585b89eea48d3de # v5
uses: codecov/codecov-action@1af58845a975a7985b0beb0cbe6fbbb71a41dbad # v5
with:
token: ${{ secrets.CODECOV_TOKEN }}
directory: ./frontend/coverage


@@ -52,7 +52,7 @@ jobs:
run: bash scripts/ci/check-codeql-parity.sh
- name: Initialize CodeQL
uses: github/codeql-action/init@0d579ffd059c29b07949a3cce3983f0780820c98 # v4
uses: github/codeql-action/init@38697555549f1db7851b81482ff19f1fa5c4fedc # v4
with:
languages: ${{ matrix.language }}
queries: security-and-quality
@@ -92,10 +92,10 @@ jobs:
run: mkdir -p sarif-results
- name: Autobuild
uses: github/codeql-action/autobuild@0d579ffd059c29b07949a3cce3983f0780820c98 # v4
uses: github/codeql-action/autobuild@38697555549f1db7851b81482ff19f1fa5c4fedc # v4
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@0d579ffd059c29b07949a3cce3983f0780820c98 # v4
uses: github/codeql-action/analyze@38697555549f1db7851b81482ff19f1fa5c4fedc # v4
with:
category: "/language:${{ matrix.language }}"
output: sarif-results/${{ matrix.language }}


@@ -31,7 +31,7 @@ jobs:
- name: Build Docker image (Local)
run: |
echo "Building image locally for integration tests..."
docker build -t charon:local .
docker build -t charon:local --build-arg CI="${CI:-false}" .
echo "✅ Successfully built charon:local"
- name: Run CrowdSec integration tests


@@ -23,7 +23,7 @@ name: Docker Build, Publish & Test
on:
pull_request:
push:
branches: [main]
branches: [main, development]
workflow_dispatch:
workflow_run:
workflows: ["Docker Lint"]
@@ -42,7 +42,7 @@ env:
TRIGGER_HEAD_SHA: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.head_sha || github.sha }}
TRIGGER_REF: ${{ github.event_name == 'workflow_run' && format('refs/heads/{0}', github.event.workflow_run.head_branch) || github.ref }}
TRIGGER_HEAD_REF: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.head_branch || github.head_ref }}
TRIGGER_PR_NUMBER: ${{ github.event_name == 'workflow_run' && join(github.event.workflow_run.pull_requests.*.number, '') || github.event.pull_request.number }}
TRIGGER_PR_NUMBER: ${{ github.event_name == 'workflow_run' && join(github.event.workflow_run.pull_requests.*.number, '') || format('{0}', github.event.pull_request.number) }}
TRIGGER_ACTOR: ${{ github.event_name == 'workflow_run' && github.event.workflow_run.actor.login || github.actor }}
jobs:
@@ -234,7 +234,7 @@ jobs:
- name: Build and push Docker image (with retry)
if: steps.skip.outputs.skip_build != 'true'
id: build-and-push
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
uses: nick-fields/retry@ad984534de44a9489a53aefd81eb77f87c70dc60 # v4.0.0
with:
timeout_minutes: 25
max_attempts: 3
@@ -565,7 +565,7 @@ jobs:
- name: Upload Trivy results
if: env.TRIGGER_EVENT != 'pull_request' && steps.skip.outputs.skip_build != 'true' && steps.trivy-check.outputs.exists == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-results.sarif'
category: '.github/workflows/docker-build.yml:build-and-push'
@@ -574,7 +574,7 @@ jobs:
# Generate SBOM (Software Bill of Materials) for supply chain security
# Only for production builds (main/development) - feature branches use downstream supply-chain-pr.yml
- name: Generate SBOM
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0.23.1
uses: anchore/sbom-action@e22c389904149dbc22b58101806040fa8d37a610 # v0.24.0
if: env.TRIGGER_EVENT != 'pull_request' && steps.skip.outputs.skip_build != 'true' && steps.skip.outputs.is_feature_push != 'true'
with:
image: ${{ env.GHCR_REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.build-and-push.outputs.digest }}
@@ -583,7 +583,7 @@ jobs:
# Create verifiable attestation for the SBOM
- name: Attest SBOM
uses: actions/attest-sbom@07e74fc4e78d1aad915e867f9a094073a9f71527 # v4.0.0
uses: actions/attest-sbom@c604332985a26aa8cf1bdc465b92731239ec6b9e # v4.1.0
if: env.TRIGGER_EVENT != 'pull_request' && steps.skip.outputs.skip_build != 'true' && steps.skip.outputs.is_feature_push != 'true'
with:
subject-name: ${{ env.GHCR_REGISTRY }}/${{ env.IMAGE_NAME }}
@@ -724,14 +724,14 @@ jobs:
- name: Upload Trivy scan results
if: always() && steps.trivy-pr-check.outputs.exists == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-pr-results.sarif'
category: 'docker-pr-image'
- name: Upload Trivy compatibility results (docker-build category)
if: always() && steps.trivy-pr-check.outputs.exists == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-pr-results.sarif'
category: '.github/workflows/docker-build.yml:build-and-push'
@@ -739,7 +739,7 @@ jobs:
- name: Upload Trivy compatibility results (docker-publish alias)
if: always() && steps.trivy-pr-check.outputs.exists == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-pr-results.sarif'
category: '.github/workflows/docker-publish.yml:build-and-push'
@@ -747,7 +747,7 @@ jobs:
- name: Upload Trivy compatibility results (nightly alias)
if: always() && steps.trivy-pr-check.outputs.exists == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-pr-results.sarif'
category: 'trivy-nightly'


@@ -158,7 +158,7 @@ jobs:
- name: Cache npm dependencies
if: steps.resolve-image.outputs.image_source == 'build'
uses: actions/cache@cdf6c1fa76f9f475f3d7449005a359c84ca0f306 # v5
uses: actions/cache@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5
with:
path: ~/.npm
key: npm-${{ hashFiles('package-lock.json') }}


@@ -263,7 +263,7 @@ jobs:
- name: Generate SBOM
id: sbom_primary
continue-on-error: true
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0.23.1
uses: anchore/sbom-action@e22c389904149dbc22b58101806040fa8d37a610 # v0.24.0
with:
image: ${{ env.GHCR_REGISTRY }}/${{ env.IMAGE_NAME }}@${{ steps.resolve_digest.outputs.digest }}
format: cyclonedx-json
@@ -282,7 +282,7 @@ jobs:
echo "Primary SBOM generation failed or produced missing/invalid output; using deterministic Syft fallback"
SYFT_VERSION="v1.42.2"
SYFT_VERSION="v1.42.3"
OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m)"
case "$ARCH" in
@@ -435,7 +435,7 @@ jobs:
name: sbom-nightly
- name: Scan with Grype
uses: anchore/scan-action@7037fa011853d5a11690026fb85feee79f4c946c # v7.3.2
uses: anchore/scan-action@e1165082ffb1fe366ebaf02d8526e7c4989ea9d2 # v7.4.0
with:
sbom: sbom-nightly.json
fail-build: false
@@ -451,7 +451,7 @@ jobs:
trivyignores: '.trivyignore'
- name: Upload Trivy results
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-nightly.sarif'
category: 'trivy-nightly'


@@ -148,14 +148,24 @@ jobs:
run: |
bash "scripts/repo_health_check.sh"
- name: Install gotestsum
run: go install gotest.tools/gotestsum@v1.13.0
- name: Run Go tests
id: go-tests
working-directory: ${{ github.workspace }}
env:
CGO_ENABLED: 1
run: |
bash "scripts/go-test-coverage.sh" 2>&1 | tee backend/test-output.txt
exit "${PIPESTATUS[0]}"
bash "scripts/go-test-coverage.sh" 2>&1 | tee backend/test-output.txt; exit "${PIPESTATUS[0]}"
- name: Upload test output artifact
if: always()
uses: actions/upload-artifact@bbbca2ddaa5d8feaa63e36b76fdaad77386f024f # v7.0.0
with:
name: backend-test-output
path: backend/test-output.txt
retention-days: 7
- name: Go Test Summary
if: always()
@@ -232,11 +242,12 @@ jobs:
PERF_MAX_MS_GETSTATUS_P95_PARALLEL: 1500ms
PERF_MAX_MS_LISTDECISIONS_P95: 2000ms
run: |
go test -run TestPerf -v ./internal/api/handlers -count=1 2>&1 | tee perf-output.txt; PERF_STATUS="${PIPESTATUS[0]}"
{
echo "## 🔍 Running performance assertions (TestPerf)"
go test -run TestPerf -v ./internal/api/handlers -count=1 | tee perf-output.txt
cat perf-output.txt
} >> "$GITHUB_STEP_SUMMARY"
exit "${PIPESTATUS[0]}"
exit "$PERF_STATUS"
frontend-quality:
name: Frontend (React)
@@ -298,8 +309,7 @@ jobs:
id: frontend-tests
working-directory: ${{ github.workspace }}
run: |
bash scripts/frontend-test-coverage.sh 2>&1 | tee frontend/test-output.txt
exit "${PIPESTATUS[0]}"
bash scripts/frontend-test-coverage.sh 2>&1 | tee frontend/test-output.txt; exit "${PIPESTATUS[0]}"
- name: Frontend Test Summary
if: always()


@@ -31,7 +31,7 @@ jobs:
- name: Build Docker image (Local)
run: |
echo "Building image locally for integration tests..."
docker build -t charon:local .
docker build -t charon:local --build-arg CI="${CI:-false}" .
echo "✅ Successfully built charon:local"
- name: Run rate limit integration tests
@@ -68,7 +68,7 @@ jobs:
echo "### Caddy Admin Config (rate_limit handlers)"
echo '```json'
curl -s http://localhost:2119/config 2>/dev/null | grep -A 20 '"handler":"rate_limit"' | head -30 || echo "Could not retrieve Caddy config"
curl -s http://localhost:2119/config/ 2>/dev/null | grep -A 20 '"handler":"rate_limit"' | head -30 || echo "Could not retrieve Caddy config"
echo '```'
echo ""


@@ -25,7 +25,7 @@ jobs:
fetch-depth: 1
- name: Run Renovate
uses: renovatebot/github-action@0b17c4eb901eca44d018fb25744a50a74b2042df # v46.1.4
uses: renovatebot/github-action@68a3ea99af6ad249940b5a9fdf44fc6d7f14378b # v46.1.6
with:
configurationFile: .github/renovate.json
token: ${{ secrets.RENOVATE_TOKEN || secrets.GITHUB_TOKEN }}


@@ -240,7 +240,7 @@ jobs:
- name: Download PR image artifact
if: github.event_name == 'workflow_run' || github.event_name == 'workflow_dispatch'
# actions/download-artifact v4.1.8
uses: actions/download-artifact@3e5f45b2cfb9172054b4087a40e8e0b5a5461e7c
uses: actions/download-artifact@484a0b528fb4d7bd804637ccb632e47a0e638317
with:
name: ${{ steps.check-artifact.outputs.artifact_name }}
run-id: ${{ steps.check-artifact.outputs.run_id }}
@@ -385,7 +385,7 @@ jobs:
- name: Upload Trivy SARIF to GitHub Security
if: always() && steps.trivy-sarif-check.outputs.exists == 'true'
# github/codeql-action v4
uses: github/codeql-action/upload-sarif@1a97b0f94ec9297d6f58aefe5a6b5441c045bed4
uses: github/codeql-action/upload-sarif@eedab83377f873ae39009d167a89b7a5aab4638b
with:
sarif_file: 'trivy-binary-results.sarif'
category: ${{ steps.pr-info.outputs.is_push == 'true' && format('security-scan-{0}', github.event_name == 'workflow_run' && github.event.workflow_run.head_branch || github.ref_name) || format('security-scan-pr-{0}', steps.pr-info.outputs.pr_number) }}


@@ -113,7 +113,7 @@ jobs:
version: 'v0.69.3'
- name: Upload Trivy results to GitHub Security
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4.32.6
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4.34.1
with:
sarif_file: 'trivy-weekly-results.sarif'


@@ -266,7 +266,7 @@ jobs:
# Generate SBOM using official Anchore action (auto-updated by Renovate)
- name: Generate SBOM
if: steps.set-target.outputs.image_name != ''
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0.23.1
uses: anchore/sbom-action@e22c389904149dbc22b58101806040fa8d37a610 # v0.24.0
id: sbom
with:
image: ${{ steps.set-target.outputs.image_name }}
@@ -285,7 +285,7 @@ jobs:
- name: Install Grype
if: steps.set-target.outputs.image_name != ''
run: |
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.109.1
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.110.0
- name: Scan for vulnerabilities
if: steps.set-target.outputs.image_name != ''
@@ -362,7 +362,7 @@ jobs:
- name: Upload SARIF to GitHub Security
if: steps.check-artifact.outputs.artifact_found == 'true'
uses: github/codeql-action/upload-sarif@0d579ffd059c29b07949a3cce3983f0780820c98 # v4
uses: github/codeql-action/upload-sarif@38697555549f1db7851b81482ff19f1fa5c4fedc # v4
continue-on-error: true
with:
sarif_file: grype-results.sarif
@@ -381,9 +381,12 @@ jobs:
- name: Comment on PR
if: steps.set-target.outputs.image_name != '' && steps.pr-number.outputs.is_push != 'true' && steps.pr-number.outputs.pr_number != ''
continue-on-error: true
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
set -euo pipefail
PR_NUMBER="${{ steps.pr-number.outputs.pr_number }}"
COMPONENT_COUNT="${{ steps.sbom-count.outputs.component_count }}"
CRITICAL_COUNT="${{ steps.vuln-summary.outputs.critical_count }}"
@@ -429,29 +432,38 @@ jobs:
EOF
)
# Find and update existing comment or create new one
COMMENT_ID=$(gh api \
# Fetch existing comments — skip gracefully on 403 / permission errors
COMMENTS_JSON=""
if ! COMMENTS_JSON=$(gh api \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments" \
--jq '.[] | select(.body | contains("Supply Chain Verification Results")) | .id' | head -1)
"/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments" 2>/dev/null); then
echo "⚠️ Cannot access PR comments (likely token permissions / fork / event context). Skipping PR comment."
exit 0
fi
if [[ -n "${COMMENT_ID}" ]]; then
COMMENT_ID=$(echo "${COMMENTS_JSON}" | jq -r '.[] | select(.body | contains("Supply Chain Verification Results")) | .id' | head -1)
if [[ -n "${COMMENT_ID:-}" && "${COMMENT_ID}" != "null" ]]; then
echo "📝 Updating existing comment..."
gh api \
--method PATCH \
if ! gh api --method PATCH \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"/repos/${{ github.repository }}/issues/comments/${COMMENT_ID}" \
-f body="${COMMENT_BODY}"
-f body="${COMMENT_BODY}"; then
echo "⚠️ Failed to update comment (permissions?). Skipping."
exit 0
fi
else
echo "📝 Creating new comment..."
gh api \
--method POST \
if ! gh api --method POST \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
"/repos/${{ github.repository }}/issues/${PR_NUMBER}/comments" \
-f body="${COMMENT_BODY}"
-f body="${COMMENT_BODY}"; then
echo "⚠️ Failed to create comment (permissions?). Skipping."
exit 0
fi
fi
echo "✅ PR comment posted"


@@ -119,7 +119,7 @@ jobs:
# Generate SBOM using official Anchore action (auto-updated by Renovate)
- name: Generate and Verify SBOM
if: steps.image-check.outputs.exists == 'true'
uses: anchore/sbom-action@57aae528053a48a3f6235f2d9461b05fbcb7366d # v0.23.1
uses: anchore/sbom-action@e22c389904149dbc22b58101806040fa8d37a610 # v0.24.0
with:
image: ghcr.io/${{ github.repository_owner }}/charon:${{ steps.tag.outputs.tag }}
format: cyclonedx-json
@@ -233,7 +233,7 @@ jobs:
# Scan for vulnerabilities using official Anchore action (auto-updated by Renovate)
- name: Scan for Vulnerabilities
if: steps.validate-sbom.outputs.valid == 'true'
uses: anchore/scan-action@7037fa011853d5a11690026fb85feee79f4c946c # v7.3.2
uses: anchore/scan-action@e1165082ffb1fe366ebaf02d8526e7c4989ea9d2 # v7.4.0
id: scan
with:
sbom: sbom-verify.cyclonedx.json


@@ -31,7 +31,7 @@ jobs:
- name: Build Docker image (Local)
run: |
echo "Building image locally for integration tests..."
docker build -t charon:local .
docker build -t charon:local --build-arg CI="${CI:-false}" .
echo "✅ Successfully built charon:local"
- name: Run WAF integration tests


@@ -200,8 +200,8 @@ jobs:
runs-on: ubuntu-latest
if: needs.check-nightly-health.outputs.is_healthy == 'true'
outputs:
pr_number: ${{ steps.create-pr.outputs.pr_number }}
pr_url: ${{ steps.create-pr.outputs.pr_url }}
pr_number: ${{ steps.create-pr.outputs.pr_number || steps.existing-pr.outputs.pr_number }}
pr_url: ${{ steps.create-pr.outputs.pr_url || steps.existing-pr.outputs.pr_url }}
skipped: ${{ steps.check-diff.outputs.skipped }}
steps:


@@ -4,136 +4,285 @@
# Documentation: https://github.com/anchore/grype#specifying-matches-to-ignore
ignore:
# CVE-2026-22184: zlib Global Buffer Overflow in untgz utility
# Severity: CRITICAL
# Package: zlib 1.3.1-r2 (Alpine Linux base image)
# Status: No upstream fix available as of 2026-01-16
# CVE-2026-2673: OpenSSL TLS 1.3 server key exchange group downgrade
# Severity: HIGH (CVSS 7.5)
# Packages: libcrypto3 3.5.5-r0 and libssl3 3.5.5-r0 (Alpine apk)
# Status: No upstream fix available — Alpine 3.23 still ships libcrypto3/libssl3 3.5.5-r0 as of 2026-03-18
#
# Vulnerability Details:
# - Global buffer overflow in TGZfname() function
# - Unbounded strcpy() allows attacker-controlled archive names
# - Can lead to memory corruption, DoS, potential RCE
# - When DEFAULT is in the TLS 1.3 group configuration, the OpenSSL server may select
# a weaker key exchange group than preferred, enabling a limited key exchange downgrade.
# - Only affects systems acting as a raw TLS 1.3 server using OpenSSL's server-side group negotiation.
#
# Risk Assessment: ACCEPTED (Low exploitability in Charon context)
# - Charon does not use untgz utility directly
# - No untrusted tar archive processing in application code
# - Attack surface limited to OS-level utilities
# - Multiple layers of containerization and isolation
# Root Cause (No Fix Available):
# - Alpine upstream has not published a patched libcrypto3/libssl3 for Alpine 3.23.
# - Checked: Alpine 3.23 still ships libcrypto3/libssl3 3.5.5-r0 as of 2026-03-18.
# - Fix path: once Alpine publishes a patched libcrypto3/libssl3, rebuild the Docker image
# and remove this suppression.
#
# Mitigation:
# - Monitor Alpine Linux security feed daily for zlib patches
# - Container runs with minimal privileges (no-new-privileges)
# - Read-only filesystem where possible
# - Network isolation via Docker networks
#
# Review:
# - Daily checks for Alpine security updates
# - Automatic re-scan via CI/CD on every commit
# - Manual review scheduled for 2026-01-23 (7 days)
#
# Removal Criteria:
# - Alpine releases zlib 1.3.1-r3 or higher with CVE fix
# - OR upstream zlib project releases patched version
# - Remove this suppression immediately after fix available
#
# References:
# - CVE: https://nvd.nist.gov/vuln/detail/CVE-2026-22184
# - Alpine Security: https://security.alpinelinux.org/
# - GitHub Issue: https://github.com/Wikid82/Charon/issues/TBD
- vulnerability: CVE-2026-22184
package:
name: zlib
version: "1.3.1-r2"
type: apk # Alpine package
reason: |
CRITICAL buffer overflow in untgz utility. No fix available from Alpine
as of 2026-01-16. Risk accepted: Charon does not directly use untgz or
process untrusted tar archives. Attack surface limited to base OS utilities.
Monitoring Alpine security feed for upstream patch.
expiry: "2026-03-14" # Re-evaluate in 7 days
# Action items when this suppression expires:
# 1. Check Alpine security feed: https://security.alpinelinux.org/
# 2. Check zlib releases: https://github.com/madler/zlib/releases
# 3. If fix available: Update Dockerfile, rebuild, remove suppression
# 4. If no fix: Extend expiry by 7 days, document justification
# 5. If extended 3+ times: Escalate to security team for review
# GHSA-69x3-g4r3-p962 / CVE-2026-25793: Nebula ECDSA Signature Malleability
# Severity: HIGH (CVSS 8.1)
# Package: github.com/slackhq/nebula v1.9.7 (embedded in /usr/bin/caddy)
# Status: Cannot upgrade — smallstep/certificates v0.30.0-rc2 still pins nebula v1.9.x
#
# Vulnerability Details:
# - ECDSA signature malleability allows bypassing certificate blocklists
# - Attacker can forge alternate valid P256 ECDSA signatures for revoked
# certificates (CVSSv3: AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:H/A:N)
# - Only affects configurations using Nebula-based certificate authorities
# (non-default and uncommon in Charon deployments)
#
# Root Cause (Compile-Time Dependency Lock):
# - Caddy is built with caddy-security plugin, which transitively requires
# github.com/smallstep/certificates. That package pins nebula v1.9.x.
# - Checked: smallstep/certificates v0.27.5 → v0.30.0-rc2 all require nebula v1.9.4–v1.9.7.
# The nebula v1.10 API removal breaks compilation in the
# authority/provisioner package; xcaddy build fails with upgrade attempted.
# - Dockerfile caddy-builder stage pins nebula@v1.9.7 (Renovate tracked) with
# an inline comment explaining the constraint (Dockerfile line 247).
# - Fix path: once smallstep/certificates releases a version requiring
# nebula v1.10+, remove the pin and this suppression simultaneously.
#
# Risk Assessment: ACCEPTED (Low exploitability in Charon context)
# - Charon uses standard ACME/Let's Encrypt TLS; Nebula VPN PKI is not
# enabled by default and rarely configured in Charon deployments.
# - Exploiting this requires a valid certificate sharing the same issuer as
# a revoked one — an uncommon and targeted attack scenario.
# Risk Assessment: ACCEPTED (No upstream fix; limited exposure in Charon context)
# - Charon terminates TLS at the Caddy layer — the Go backend does not act as a raw TLS 1.3 server.
# - The vulnerability requires the affected application to directly configure TLS 1.3 server
# group negotiation via OpenSSL, which Charon does not do.
# - Container-level isolation reduces the attack surface further.
#
# Mitigation (active while suppression is in effect):
# - Monitor smallstep/certificates releases at https://github.com/smallstep/certificates/releases
# - Weekly CI security rebuild flags any new CVEs in the full image.
# - Renovate annotation in Dockerfile (datasource=go depName=github.com/slackhq/nebula)
# will surface the pin for review when xcaddy build becomes compatible.
# - Monitor Alpine security advisories: https://security.alpinelinux.org/vuln/CVE-2026-2673
# - Weekly CI security rebuild (security-weekly-rebuild.yml) flags any new CVEs in the full image.
#
# Review:
# - Reviewed 2026-02-19: smallstep/certificates latest stable remains v0.27.5;
# no release requiring nebula v1.10+ has shipped. Suppression extended 14 days.
# - Next review: 2026-03-05. Remove suppression immediately once upstream fixes.
# - Reviewed 2026-03-18 (initial suppression): no upstream fix available. Set 30-day review.
# - Next review: 2026-04-18. Remove suppression immediately once upstream fixes.
#
# Removal Criteria:
# - smallstep/certificates releases a stable version requiring nebula v1.10+
# - Update Dockerfile caddy-builder patch to use the new versions
# - Rebuild image, run security scan, confirm suppression no longer needed
# - Remove both this entry and the corresponding .trivyignore entry
# - Alpine publishes a patched version of libcrypto3 and libssl3
# - Rebuild Docker image and verify CVE-2026-2673 no longer appears in grype-results.json
# - Remove both these entries and the corresponding .trivyignore entry simultaneously
#
# References:
# - GHSA: https://github.com/advisories/GHSA-69x3-g4r3-p962
# - CVE-2026-25793: https://nvd.nist.gov/vuln/detail/CVE-2026-25793
# - smallstep/certificates: https://github.com/smallstep/certificates/releases
# - Dockerfile pin: caddy-builder stage, line ~247 (go get nebula@v1.9.7)
- vulnerability: GHSA-69x3-g4r3-p962
# - CVE-2026-2673: https://nvd.nist.gov/vuln/detail/CVE-2026-2673
# - Alpine security tracker: https://security.alpinelinux.org/vuln/CVE-2026-2673
- vulnerability: CVE-2026-2673
package:
name: github.com/slackhq/nebula
version: "v1.9.7"
type: go-module
name: libcrypto3
version: "3.5.5-r0"
type: apk
reason: |
HIGH — ECDSA signature malleability in nebula v1.9.7 embedded in /usr/bin/caddy.
Cannot upgrade: smallstep/certificates v0.27.5 (latest stable as of 2026-02-19)
still requires nebula v1.9.x (verified across v0.27.5–v0.30.0-rc2). Charon does
not use Nebula VPN PKI by default. Risk accepted pending upstream smallstep fix.
Reviewed 2026-02-19: no new smallstep release changes this assessment.
expiry: "2026-03-05" # Re-evaluate in 14 days (2026-02-19 + 14 days)
HIGH — OpenSSL TLS 1.3 server key exchange group downgrade in libcrypto3 3.5.5-r0 (Alpine base image).
No upstream fix: Alpine 3.23 still ships libcrypto3 3.5.5-r0 as of 2026-03-18. Charon
terminates TLS at the Caddy layer; the Go backend does not act as a raw TLS 1.3 server.
Risk accepted pending Alpine upstream patch.
expiry: "2026-04-18" # Initial 30-day review period. Extend in 14–30 day increments with documented justification.
# Action items when this suppression expires:
# 1. Check smallstep/certificates releases: https://github.com/smallstep/certificates/releases
# 2. If a stable version requires nebula v1.10+:
# a. Update Dockerfile caddy-builder: remove the `go get nebula@v1.9.7` pin
# b. Optionally bump smallstep/certificates to the new version
# c. Rebuild Docker image and verify no compile failures
# d. Re-run local security-scan-docker-image and confirm clean result
# e. Remove this suppression entry
# 3. If no fix yet: Extend expiry by 14 days and document justification
# 4. If extended 3+ times: Open upstream issue on smallstep/certificates
# 1. Check Alpine security tracker: https://security.alpinelinux.org/vuln/CVE-2026-2673
# 2. If a patched Alpine package is now available:
# a. Rebuild Docker image without suppression
# b. Run local security-scan-docker-image and confirm CVE is resolved
# c. Remove this suppression entry, the libssl3 entry below, and the .trivyignore entry
# 3. If no fix yet: Extend expiry by 1430 days and update the review comment above
# 4. If extended 3+ times: Open an issue to track the upstream status formally
# CVE-2026-2673 (libssl3) — see full justification in the libcrypto3 entry above
- vulnerability: CVE-2026-2673
  package:
    name: libssl3
    version: "3.5.5-r0"
    type: apk
  reason: |
    HIGH — OpenSSL TLS 1.3 server key exchange group downgrade in libssl3 3.5.5-r0 (Alpine base image).
    No upstream fix: Alpine 3.23 still ships libssl3 3.5.5-r0 as of 2026-03-18. Charon
    terminates TLS at the Caddy layer; the Go backend does not act as a raw TLS 1.3 server.
    Risk accepted pending Alpine upstream patch.
  expiry: "2026-04-18" # Initial 30-day review period. See libcrypto3 entry above for action items.
# GHSA-6g7g-w4f8-9c9x: buger/jsonparser Delete panic on malformed JSON (DoS)
# Severity: HIGH (CVSS 7.5)
# Package: github.com/buger/jsonparser v1.1.1 (embedded in /usr/local/bin/crowdsec and /usr/local/bin/cscli)
# Status: NO upstream fix available — OSV marks "Last affected: v1.1.1" with no Fixed event
#
# Vulnerability Details:
# - The Delete function fails to validate offsets on malformed JSON input, producing a
# negative slice index and a runtime panic — denial of service (CWE-125).
# - CVSSv3: AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
#
# Root Cause (Third-Party Binary + No Upstream Fix):
# - Charon does not use buger/jsonparser directly. It is compiled into CrowdSec binaries.
# - The buger/jsonparser repository has no released fix as of 2026-03-19 (GitHub issue #275
# and golang/vulndb #4514 are both open).
# - Fix path: once buger/jsonparser releases a patched version and CrowdSec updates their
# dependency, rebuild the Docker image and remove this suppression.
#
# Risk Assessment: ACCEPTED (Limited exploitability + no upstream fix)
# - The DoS vector requires passing malformed JSON to the vulnerable Delete function within
# CrowdSec's internal processing pipeline; this is not a direct attack surface in Charon.
# - CrowdSec's exposed surface is its HTTP API (not raw JSON stream parsing via this path).
#
# Mitigation (active while suppression is in effect):
# - Monitor buger/jsonparser: https://github.com/buger/jsonparser/issues/275
# - Monitor CrowdSec releases: https://github.com/crowdsecurity/crowdsec/releases
# - Weekly CI security rebuild flags the moment a fixed image ships.
#
# Review:
# - Reviewed 2026-03-19 (initial suppression): no upstream fix exists. Set 30-day review.
# - Next review: 2026-04-19. Remove suppression once buger/jsonparser ships a fix and
# CrowdSec updates their dependency.
#
# Removal Criteria:
# - buger/jsonparser releases a patched version (v1.1.2 or higher)
# - CrowdSec releases a version built with the patched jsonparser
# - Rebuild Docker image, run security-scan-docker-image, confirm finding is resolved
# - Remove this entry and the corresponding .trivyignore entry simultaneously
#
# References:
# - GHSA-6g7g-w4f8-9c9x: https://github.com/advisories/GHSA-6g7g-w4f8-9c9x
# - Upstream issue: https://github.com/buger/jsonparser/issues/275
# - golang/vulndb: https://github.com/golang/vulndb/issues/4514
# - CrowdSec releases: https://github.com/crowdsecurity/crowdsec/releases
- vulnerability: GHSA-6g7g-w4f8-9c9x
  package:
    name: github.com/buger/jsonparser
    version: "v1.1.1"
    type: go-module
  reason: |
    HIGH — DoS panic via malformed JSON in buger/jsonparser v1.1.1 embedded in CrowdSec binaries.
    No upstream fix: buger/jsonparser has no released patch as of 2026-03-19 (issue #275 open).
    Charon does not use this package directly; the vector requires reaching CrowdSec's internal
    JSON processing pipeline. Risk accepted; no remediation path until upstream ships a fix.
    Reviewed 2026-03-19: no patched release available.
  expiry: "2026-04-19" # 30-day review: no fix exists. Extend in 30-day increments with documented justification.
# Action items when this suppression expires:
# 1. Check buger/jsonparser releases: https://github.com/buger/jsonparser/releases
# and issue #275: https://github.com/buger/jsonparser/issues/275
# 2. If a fix has shipped AND CrowdSec has updated their dependency:
# a. Rebuild Docker image and run local security-scan-docker-image
# b. Remove this suppression entry and the corresponding .trivyignore entry
# 3. If no fix yet: Extend expiry by 30 days and update the review comment above
# 4. If extended 3+ times with no progress: Consider opening an issue upstream or
# evaluating whether CrowdSec can replace buger/jsonparser with a safe alternative
# GHSA-jqcq-xjh3-6g23: pgproto3/v2 DataRow.Decode panic on negative field length (DoS)
# Severity: HIGH (CVSS 7.5)
# Package: github.com/jackc/pgproto3/v2 v2.3.3 (embedded in /usr/local/bin/crowdsec and /usr/local/bin/cscli)
# Status: NO fix in pgproto3/v2 (archived/EOL) — fix path requires CrowdSec to migrate to pgx/v5
#
# Vulnerability Details:
# - DataRow.Decode does not validate field lengths; a malicious or compromised PostgreSQL server
# can send a negative field length causing a slice-bounds panic — denial of service (CWE-129).
# - CVSSv3: AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H
#
# Root Cause (EOL Module + Third-Party Binary):
# - Charon does not use pgproto3/v2 directly nor communicate with PostgreSQL. The package
# is compiled into CrowdSec binaries for their internal database communication.
# - The pgproto3/v2 module is archived and EOL; no fix will be released. The fix path
# is migration to pgx/v5, which embeds an updated pgproto3/v3.
# - Fix path: once CrowdSec migrates to pgx/v5 and releases an updated binary, rebuild
# the Docker image and remove this suppression.
#
# Risk Assessment: ACCEPTED (Non-exploitable in Charon context + no upstream fix path)
# - The vulnerability requires a malicious PostgreSQL server response. Charon uses SQLite
# internally and does not run PostgreSQL. CrowdSec's database path is not exposed to
# external traffic in a standard Charon deployment.
# - The attack requires a compromised database server, which would imply full host compromise.
#
# Mitigation (active while suppression is in effect):
# - Monitor CrowdSec releases for pgx/v5 migration:
# https://github.com/crowdsecurity/crowdsec/releases
# - Weekly CI security rebuild flags the moment a fixed image ships.
#
# Review:
# - Reviewed 2026-03-19 (initial suppression): pgproto3/v2 is EOL; no fix exists or will exist.
# Waiting on CrowdSec to migrate to pgx/v5. Set 30-day review.
# - Next review: 2026-04-19. Remove suppression once CrowdSec ships with pgx/v5.
#
# Removal Criteria:
# - CrowdSec releases a version with pgx/v5 (pgproto3/v3) replacing pgproto3/v2
# - Rebuild Docker image, run security-scan-docker-image, confirm finding is resolved
# - Remove this entry and the corresponding .trivyignore entry simultaneously
#
# References:
# - GHSA-jqcq-xjh3-6g23: https://github.com/advisories/GHSA-jqcq-xjh3-6g23
# - pgproto3/v2 archive notice: https://github.com/jackc/pgproto3
# - pgx/v5 (replacement): https://github.com/jackc/pgx
# - CrowdSec releases: https://github.com/crowdsecurity/crowdsec/releases
- vulnerability: GHSA-jqcq-xjh3-6g23
  package:
    name: github.com/jackc/pgproto3/v2
    version: "v2.3.3"
    type: go-module
  reason: |
    HIGH — DoS panic via negative field length in pgproto3/v2 v2.3.3 embedded in CrowdSec binaries.
    pgproto3/v2 is archived/EOL with no fix planned; fix path requires CrowdSec to migrate to pgx/v5.
    Charon uses SQLite, not PostgreSQL; this code path is not reachable in a standard deployment.
    Risk accepted; no remediation until CrowdSec ships with pgx/v5.
    Reviewed 2026-03-19: pgproto3/v2 EOL confirmed; CrowdSec has not migrated to pgx/v5 yet.
  expiry: "2026-04-19" # 30-day review: no fix path until CrowdSec migrates to pgx/v5.
# Action items when this suppression expires:
# 1. Check CrowdSec releases for pgx/v5 migration:
# https://github.com/crowdsecurity/crowdsec/releases
# 2. Verify with: `go version -m /path/to/crowdsec | grep pgproto3`
# Expected: pgproto3/v3 (or no pgproto3 reference if fully replaced)
# 3. If CrowdSec has migrated:
# a. Rebuild Docker image and run local security-scan-docker-image
# b. Remove this suppression entry and the corresponding .trivyignore entry
# 4. If not yet migrated: Extend expiry by 30 days and update the review comment above
# 5. If extended 3+ times: Open an upstream issue on crowdsecurity/crowdsec requesting pgx/v5 migration
# GHSA-x6gf-mpr2-68h6 / CVE-2026-4427: pgproto3/v2 DataRow.Decode panic on negative field length (DoS)
# Severity: HIGH (CVSS 7.5)
# Package: github.com/jackc/pgproto3/v2 v2.3.3 (embedded in /usr/local/bin/crowdsec and /usr/local/bin/cscli)
# Status: NO fix in pgproto3/v2 (archived/EOL) — fix path requires CrowdSec to migrate to pgx/v5
# Note: This is the NVD/Red Hat advisory alias for the same underlying vulnerability as GHSA-jqcq-xjh3-6g23
#
# Vulnerability Details:
# - DataRow.Decode does not validate field lengths; a malicious or compromised PostgreSQL server
# can send a negative field length causing a slice-bounds panic — denial of service (CWE-129).
# - CVSSv3: AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H (CVSS 7.5)
#
# Root Cause (EOL Module + Third-Party Binary):
# - Same underlying vulnerability as GHSA-jqcq-xjh3-6g23; tracked separately by NVD/Red Hat as CVE-2026-4427.
# - Charon does not use pgproto3/v2 directly nor communicate with PostgreSQL. The package
# is compiled into CrowdSec binaries for their internal database communication.
# - The pgproto3/v2 module is archived and EOL; no fix will be released. The fix path
# is migration to pgx/v5, which embeds an updated pgproto3/v3.
# - Fix path: once CrowdSec migrates to pgx/v5 and releases an updated binary, rebuild
# the Docker image and remove this suppression.
#
# Risk Assessment: ACCEPTED (Non-exploitable in Charon context + no upstream fix path)
# - The vulnerability requires a malicious PostgreSQL server response. Charon uses SQLite
# internally and does not run PostgreSQL. CrowdSec's database path is not exposed to
# external traffic in a standard Charon deployment.
# - The attack requires a compromised database server, which would imply full host compromise.
#
# Mitigation (active while suppression is in effect):
# - Monitor CrowdSec releases for pgx/v5 migration:
# https://github.com/crowdsecurity/crowdsec/releases
# - Weekly CI security rebuild flags the moment a fixed image ships.
#
# Review:
# - Reviewed 2026-03-21 (initial suppression): pgproto3/v2 is EOL; no fix exists or will exist.
# Waiting on CrowdSec to migrate to pgx/v5. Set 30-day review. Sibling GHSA-jqcq-xjh3-6g23
# was already suppressed; this alias surfaced as a separate Grype match via NVD/Red Hat tracking.
# - Next review: 2026-04-21. Remove suppression once CrowdSec ships with pgx/v5.
#
# Removal Criteria:
# - Same as GHSA-jqcq-xjh3-6g23: CrowdSec releases a version with pgx/v5 replacing pgproto3/v2
# - Rebuild Docker image, run security-scan-docker-image, confirm both advisories are resolved
# - Remove this entry, GHSA-jqcq-xjh3-6g23 entry, and both .trivyignore entries simultaneously
#
# References:
# - GHSA-x6gf-mpr2-68h6: https://github.com/advisories/GHSA-x6gf-mpr2-68h6
# - CVE-2026-4427: https://nvd.nist.gov/vuln/detail/CVE-2026-4427
# - Red Hat: https://access.redhat.com/security/cve/CVE-2026-4427
# - pgproto3/v2 archive notice: https://github.com/jackc/pgproto3
# - pgx/v5 (replacement): https://github.com/jackc/pgx
# - CrowdSec releases: https://github.com/crowdsecurity/crowdsec/releases
- vulnerability: GHSA-x6gf-mpr2-68h6
  package:
    name: github.com/jackc/pgproto3/v2
    version: "v2.3.3"
    type: go-module
  reason: |
    HIGH — DoS panic via negative field length in pgproto3/v2 v2.3.3 embedded in CrowdSec binaries.
    NVD/Red Hat alias (CVE-2026-4427) for the same underlying bug as GHSA-jqcq-xjh3-6g23.
    pgproto3/v2 is archived/EOL with no fix planned; fix path requires CrowdSec to migrate to pgx/v5.
    Charon uses SQLite, not PostgreSQL; this code path is not reachable in a standard deployment.
    Risk accepted; no remediation until CrowdSec ships with pgx/v5.
    Reviewed 2026-03-21: pgproto3/v2 EOL confirmed; CrowdSec has not migrated to pgx/v5 yet.
  expiry: "2026-04-21" # 30-day review: no fix path until CrowdSec migrates to pgx/v5.
# Action items when this suppression expires:
# 1. Check CrowdSec releases for pgx/v5 migration:
# https://github.com/crowdsecurity/crowdsec/releases
# 2. Verify with: `go version -m /path/to/crowdsec | grep pgproto3`
# Expected: pgproto3/v3 (or no pgproto3 reference if fully replaced)
# 3. If CrowdSec has migrated:
# a. Rebuild Docker image and run local security-scan-docker-image
# b. Remove this entry, GHSA-jqcq-xjh3-6g23 entry, and both .trivyignore entries
# 4. If not yet migrated: Extend expiry by 30 days and update the review comment above
# 5. If extended 3+ times: Open an upstream issue on crowdsecurity/crowdsec requesting pgx/v5 migration
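The removal criteria above repeatedly come down to one check: after a rebuild, a suppressed ID must no longer appear in `grype-results.json`. A minimal Go sketch of that check follows; the `matches[].vulnerability.id` layout is an assumption about Grype's JSON schema, so verify it against your Grype version before relying on it:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// grypeReport mirrors only the fields needed from Grype's JSON output.
// The schema here is an assumption; check it against your Grype version.
type grypeReport struct {
	Matches []struct {
		Vulnerability struct {
			ID string `json:"id"`
		} `json:"vulnerability"`
	} `json:"matches"`
}

// stillPresent reports which suppressed IDs still appear in a Grype JSON report.
func stillPresent(report []byte, suppressed []string) ([]string, error) {
	var r grypeReport
	if err := json.Unmarshal(report, &r); err != nil {
		return nil, err
	}
	found := map[string]bool{}
	for _, m := range r.Matches {
		found[m.Vulnerability.ID] = true
	}
	var remaining []string
	for _, id := range suppressed {
		if found[id] {
			remaining = append(remaining, id)
		}
	}
	return remaining, nil
}

func main() {
	// In practice, read grype-results.json from the scan output;
	// inline sample data keeps the sketch self-contained.
	sample := []byte(`{"matches":[{"vulnerability":{"id":"CVE-2026-2673"}}]}`)
	remaining, err := stillPresent(sample, []string{"CVE-2026-2673", "GHSA-jqcq-xjh3-6g23"})
	if err != nil {
		panic(err)
	}
	// CVE-2026-2673 is still present (keep its suppression);
	// the GHSA ID is absent (its entry could be removed).
	fmt.Println(remaining)
}
```

Any ID printed as remaining still needs its suppression entry; IDs that drop out are candidates for removal per the criteria above.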
# Match exclusions (patterns to ignore during scanning)
# Use sparingly - prefer specific CVE suppressions above


@@ -14,3 +14,67 @@ CVE-2026-25793
# Charon does not use untgz or process untrusted tar archives. Review by: 2026-03-14
# See also: .grype.yaml for full justification
CVE-2026-22184
# CVE-2026-27171: zlib CPU spin via crc32_combine64 infinite loop (DoS)
# Severity: MEDIUM (CVSS 5.5 NVD / 2.9 MITRE) — Package: zlib 1.3.1-r2 in Alpine base image
# Fix requires zlib >= 1.3.2. No upstream fix available: Alpine 3.23 still ships zlib 1.3.1-r2.
# Attack requires local access (AV:L); the vulnerable code path is not reachable via Charon's
# network-facing surface. Non-blocking by CI policy (MEDIUM). Review by: 2026-04-21
# exp: 2026-04-21
CVE-2026-27171
# CVE-2026-2673: OpenSSL TLS 1.3 server key exchange group downgrade (libcrypto3/libssl3)
# Severity: HIGH (CVSS 7.5) — Packages: libcrypto3 3.5.5-r0 and libssl3 3.5.5-r0 in Alpine base image
# No upstream fix available: Alpine 3.23 still ships libcrypto3/libssl3 3.5.5-r0 as of 2026-03-18.
# When "DEFAULT" is present in the TLS 1.3 group configuration, the server may select a weaker key exchange group.
# Charon terminates TLS at the Caddy layer — the Go backend does not act as a raw TLS 1.3 server.
# Review by: 2026-04-18
# See also: .grype.yaml for full justification
# exp: 2026-04-18
CVE-2026-2673
# CVE-2026-33186 / GHSA-p77j-4mvh-x3m3: gRPC-Go authorization bypass via missing leading slash
# Severity: CRITICAL (CVSS 9.1) — Package: google.golang.org/grpc, embedded in CrowdSec (v1.74.2) and Caddy (v1.79.1)
# Fix exists at v1.79.3 — Charon's own dep is patched. Waiting on CrowdSec and Caddy upstream releases.
# CrowdSec's and Caddy's grpc servers are not exposed externally in a standard Charon deployment.
# Review by: 2026-04-02
# See also: .grype.yaml for full justification
# exp: 2026-04-02
CVE-2026-33186
# GHSA-479m-364c-43vc: goxmldsig XML signature validation bypass (loop variable capture)
# Severity: HIGH (CVSS 7.5) — Package: github.com/russellhaering/goxmldsig v1.5.0, embedded in /usr/bin/caddy
# Fix exists at v1.6.0 — waiting on Caddy upstream (or caddy-security plugin) to release with patched goxmldsig.
# Charon does not configure SAML-based SSO by default; the vulnerable path is not reachable in a standard deployment.
# Review by: 2026-04-02
# See also: .grype.yaml for full justification
# exp: 2026-04-02
GHSA-479m-364c-43vc
# GHSA-6g7g-w4f8-9c9x: buger/jsonparser Delete panic on malformed JSON (DoS)
# Severity: HIGH (CVSS 7.5) — Package: github.com/buger/jsonparser v1.1.1, embedded in CrowdSec binaries
# No upstream fix available as of 2026-03-19 (issue #275 open, golang/vulndb #4514 open).
# Charon does not use this package; the vector requires reaching CrowdSec's internal processing pipeline.
# Review by: 2026-04-19
# See also: .grype.yaml for full justification
# exp: 2026-04-19
GHSA-6g7g-w4f8-9c9x
# GHSA-jqcq-xjh3-6g23: pgproto3/v2 DataRow.Decode panic on negative field length (DoS)
# Severity: HIGH (CVSS 7.5) — Package: github.com/jackc/pgproto3/v2 v2.3.3, embedded in CrowdSec binaries
# pgproto3/v2 is archived/EOL — no fix will be released. Fix path requires CrowdSec to migrate to pgx/v5.
# Charon uses SQLite; the PostgreSQL code path is not reachable in a standard deployment.
# Review by: 2026-04-19
# See also: .grype.yaml for full justification
# exp: 2026-04-19
GHSA-jqcq-xjh3-6g23
# GHSA-x6gf-mpr2-68h6 / CVE-2026-4427: pgproto3/v2 DataRow.Decode panic on negative field length (DoS)
# Severity: HIGH (CVSS 7.5) — Package: github.com/jackc/pgproto3/v2 v2.3.3, embedded in CrowdSec binaries
# NVD/Red Hat alias (CVE-2026-4427) for the same underlying bug as GHSA-jqcq-xjh3-6g23.
# pgproto3/v2 is archived/EOL — no fix will be released. Fix path requires CrowdSec to migrate to pgx/v5.
# Charon uses SQLite; the PostgreSQL code path is not reachable in a standard deployment.
# Review by: 2026-04-21
# See also: .grype.yaml for full justification
# exp: 2026-04-21
GHSA-x6gf-mpr2-68h6


@@ -139,15 +139,15 @@ graph TB
| Component | Technology | Version | Purpose |
|-----------|-----------|---------|---------|
| **Framework** | React | 19.2.3 | UI framework |
| **Language** | TypeScript | 5.x | Type-safe JavaScript |
| **Build Tool** | Vite | 6.1.9 | Fast bundler and dev server |
| **CSS Framework** | Tailwind CSS | 3.x | Utility-first CSS |
| **Language** | TypeScript | 6.x | Type-safe JavaScript |
| **Build Tool** | Vite | 8.0.0-beta.18 | Fast bundler and dev server |
| **CSS Framework** | Tailwind CSS | 4.2.1 | Utility-first CSS |
| **Routing** | React Router | 7.x | Client-side routing |
| **HTTP Client** | Fetch API | Native | API communication |
| **State Management** | React Hooks + Context | Native | Global state |
| **Internationalization** | i18next | Latest | 5 language support |
| **Unit Testing** | Vitest | 2.x | Fast unit test runner |
| **E2E Testing** | Playwright | 1.50.x | Browser automation |
| **Unit Testing** | Vitest | 4.1.0-beta.6 | Fast unit test runner |
| **E2E Testing** | Playwright | 1.58.2 | Browser automation |
### Infrastructure
@@ -218,7 +218,7 @@ graph TB
│ │ └── main.tsx # Application entry point
│ ├── public/ # Static assets
│ ├── package.json # NPM dependencies
│ └── vite.config.js # Vite configuration
│ └── vite.config.ts # Vite configuration
├── .docker/ # Docker configuration
│ ├── compose/ # Docker Compose files
@@ -306,11 +306,13 @@ graph TB
**Key Modules:**
#### API Layer (`internal/api/`)
- **Handlers:** Process HTTP requests, validate input, return responses
- **Middleware:** CORS, GZIP, authentication, logging, metrics, panic recovery
- **Routes:** Route registration and grouping (public vs authenticated)
**Example Endpoints:**
- `GET /api/v1/proxy-hosts` - List all proxy hosts
- `POST /api/v1/proxy-hosts` - Create new proxy host
- `PUT /api/v1/proxy-hosts/:id` - Update proxy host
@@ -318,6 +320,7 @@ graph TB
- `WS /api/v1/logs` - WebSocket for real-time logs
#### Service Layer (`internal/services/`)
- **ProxyService:** CRUD operations for proxy hosts, validation logic
- **CertificateService:** ACME certificate provisioning and renewal
- **DockerService:** Container discovery and monitoring
@@ -327,12 +330,14 @@ graph TB
**Design Pattern:** Services contain business logic and call multiple repositories/managers
#### Caddy Manager (`internal/caddy/`)
- **Manager:** Orchestrates Caddy configuration updates
- **Config Builder:** Generates Caddy JSON from database models
- **Reload Logic:** Atomic config application with rollback on failure
- **Security Integration:** Injects Cerberus middleware into Caddy pipelines
**Responsibilities:**
1. Generate Caddy JSON configuration from database state
2. Validate configuration before applying
3. Trigger Caddy reload via JSON API
@@ -340,22 +345,26 @@ graph TB
5. Integrate security layers (WAF, ACL, Rate Limiting)
#### Security Suite (`internal/cerberus/`)
- **ACL (Access Control Lists):** IP-based allow/deny rules, GeoIP blocking
- **WAF (Web Application Firewall):** Coraza engine with OWASP CRS
- **CrowdSec:** Behavior-based threat detection with global intelligence
- **Rate Limiter:** Per-IP request throttling
**Integration Points:**
- Middleware injection into Caddy request pipeline
- Database-driven rule configuration
- Metrics collection for security events
#### Database Layer (`internal/database/`)
- **Migrations:** Automatic schema versioning with GORM AutoMigrate
- **Seeding:** Default settings and admin user creation
- **Connection Management:** SQLite with WAL mode and connection pooling
**Schema Overview:**
- **ProxyHost:** Domain, upstream target, SSL config
- **RemoteServer:** Upstream server definitions
- **CaddyConfig:** Generated Caddy configuration (audit trail)
@@ -372,6 +381,7 @@ graph TB
**Component Architecture:**
#### Pages (`src/pages/`)
- **Dashboard:** System overview, recent activity, quick actions
- **ProxyHosts:** List, create, edit, delete proxy configurations
- **Certificates:** Manage SSL/TLS certificates, view expiry
@@ -380,17 +390,20 @@ graph TB
- **Users:** User management (admin only)
#### Components (`src/components/`)
- **Forms:** Reusable form inputs with validation
- **Modals:** Dialog components for CRUD operations
- **Tables:** Data tables with sorting, filtering, pagination
- **Layout:** Header, sidebar, navigation
#### API Client (`src/api/`)
- Centralized API calls with error handling
- Request/response type definitions
- Authentication token management
**Example:**
```typescript
export const getProxyHosts = async (): Promise<ProxyHost[]> => {
const response = await fetch('/api/v1/proxy-hosts', {
@@ -402,11 +415,13 @@ export const getProxyHosts = async (): Promise<ProxyHost[]> => {
```
#### State Management
- **React Context:** Global state for auth, theme, language
- **Local State:** Component-specific state with `useState`
- **Custom Hooks:** Encapsulate API calls and side effects
**Example Hook:**
```typescript
export const useProxyHosts = () => {
const [hosts, setHosts] = useState<ProxyHost[]>([]);
@@ -425,11 +440,13 @@ export const useProxyHosts = () => {
**Purpose:** High-performance reverse proxy with automatic HTTPS
**Integration:**
- Embedded as a library in the Go backend
- Configured via JSON API (not Caddyfile)
- Listens on ports 80 (HTTP) and 443 (HTTPS)
**Features Used:**
- Dynamic configuration updates without restarts
- Automatic HTTPS with Let's Encrypt and ZeroSSL
- DNS challenge support for wildcard certificates
@@ -437,6 +454,7 @@ export const useProxyHosts = () => {
- Request logging and metrics
**Configuration Flow:**
1. User creates proxy host via frontend
2. Backend validates and saves to database
3. Caddy Manager generates JSON configuration
@@ -461,12 +479,14 @@ For each proxy host, Charon generates **two routes** with the same domain:
- Handlers: Full Cerberus security suite
This pattern is **intentional and valid**:
- Emergency route provides break-glass access to security controls
- Main route protects application with enterprise security features
- Caddy processes routes in order (emergency matches first)
- Validator allows duplicate hosts when one has paths and one doesn't
**Example:**
```json
// Emergency Route (evaluated first)
{
@@ -488,6 +508,7 @@ This pattern is **intentional and valid**:
**Purpose:** Persistent data storage
**Why SQLite:**
- Embedded (no external database server)
- Serverless (perfect for single-user/small team)
- ACID compliant with WAL mode
@@ -495,16 +516,19 @@ This pattern is **intentional and valid**:
- Backup-friendly (single file)
**Configuration:**
- **WAL Mode:** Allows concurrent reads during writes
- **Foreign Keys:** Enforced referential integrity
- **Pragma Settings:** Performance optimizations
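As a sketch of how WAL mode and foreign-key enforcement can be requested at connection time, the DSN below uses parameter names from the common `mattn/go-sqlite3` driver convention; the driver and exact pragma options Charon actually uses are an assumption here:

```go
package main

import (
	"fmt"
	"net/url"
)

// sqliteDSN builds a connection string enabling WAL mode and foreign-key
// enforcement. Parameter names follow the mattn/go-sqlite3 convention;
// treat them as illustrative, not as Charon's actual configuration.
func sqliteDSN(path string) string {
	q := url.Values{}
	q.Set("_journal_mode", "WAL")  // concurrent reads during writes
	q.Set("_foreign_keys", "on")   // enforced referential integrity
	q.Set("_busy_timeout", "5000") // wait up to 5s on a locked database
	return fmt.Sprintf("file:%s?%s", path, q.Encode())
}

func main() {
	fmt.Println(sqliteDSN("data/charon.db"))
}
```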
**Backup Strategy:**
- Automated daily backups to `data/backups/`
- Retention: 7 daily, 4 weekly, 12 monthly backups
- Backup during low-traffic periods
**Migrations:**
- GORM AutoMigrate for schema changes
- Manual migrations for complex data transformations
- Rollback support via backup restoration
@@ -537,6 +561,7 @@ graph LR
**Purpose:** Prevent brute-force attacks and API abuse
**Implementation:**
- Per-IP request counters with sliding window
- Configurable thresholds (e.g., 100 req/min, 1000 req/hour)
- HTTP 429 response when limit exceeded
@@ -547,12 +572,14 @@ graph LR
**Purpose:** Behavior-based threat detection
**Features:**
- Local log analysis (brute-force, port scans, exploits)
- Global threat intelligence (crowd-sourced IP reputation)
- Automatic IP banning with configurable duration
- Decision management API (view, create, delete bans)
**Modes:**
- **Local Only:** No external API calls
- **API Mode:** Sync with CrowdSec cloud for global intelligence
@@ -561,12 +588,14 @@ graph LR
**Purpose:** IP-based access control
**Features:**
- Per-proxy-host allow/deny rules
- CIDR range support (e.g., `192.168.1.0/24`)
- Geographic blocking via GeoIP2 (MaxMind)
- Admin whitelist (emergency access)
**Evaluation Order:**
1. Check admin whitelist (always allow)
2. Check deny list (explicit block)
3. Check allow list (explicit allow)
@@ -579,6 +608,7 @@ graph LR
**Engine:** Coraza with OWASP Core Rule Set (CRS)
**Detection Categories:**
- SQL Injection (SQLi)
- Cross-Site Scripting (XSS)
- Remote Code Execution (RCE)
@@ -587,12 +617,14 @@ graph LR
- Command Injection
**Modes:**
- **Monitor:** Log but don't block (testing)
- **Block:** Return HTTP 403 for violations
### Layer 5: Application Security
**Additional Protections:**
- **SSRF Prevention:** Block requests to private IP ranges in webhooks/URL validation
- **HTTP Security Headers:** CSP, HSTS, X-Frame-Options, X-Content-Type-Options
- **Input Validation:** Server-side validation for all user inputs
@@ -610,6 +642,7 @@ graph LR
3. **Direct Database Access:** Manual SQLite update as last resort
**Emergency Token:**
- 64-character hex token set via `CHARON_EMERGENCY_TOKEN`
- Grants temporary admin access
- Rotated after each use
@@ -635,6 +668,7 @@ Charon operates with **two distinct traffic flows** on separate ports, each with
- **Testing:** Playwright E2E tests verify UI/UX functionality on this port
**Why No Middleware?**
- Management interface must remain accessible even when security modules are misconfigured
- Emergency endpoints (`/api/v1/emergency/*`) require unrestricted access for system recovery
- Separation of concerns: admin access control is handled by JWT, not proxy-level security
@@ -797,6 +831,7 @@ sequenceDiagram
**Rationale:** Simplicity over scalability - target audience is home users and small teams
**Container Contents:**
- Frontend static files (Vite build output)
- Go backend binary
- Embedded Caddy server
@@ -911,11 +946,13 @@ services:
### High Availability Considerations
**Current Limitations:**
- SQLite does not support clustering
- Single point of failure (one container)
- Not designed for horizontal scaling
**Future Options:**
- PostgreSQL backend for HA deployments
- Read replicas for load balancing
- Container orchestration (Kubernetes, Docker Swarm)
@@ -927,6 +964,7 @@ services:
### Local Development Setup
1. **Prerequisites:**
```bash
- Go 1.26+ (backend development)
- Node.js 23+ and npm (frontend development)
@@ -935,12 +973,14 @@ services:
```
2. **Clone Repository:**
```bash
git clone https://github.com/Wikid82/Charon.git
cd Charon
```
3. **Backend Development:**
```bash
cd backend
go mod download
@@ -949,6 +989,7 @@ services:
```
4. **Frontend Development:**
```bash
cd frontend
npm install
@@ -957,6 +998,7 @@ services:
```
5. **Full-Stack Development (Docker):**
```bash
docker-compose -f .docker/compose/docker-compose.dev.yml up
# Frontend + Backend + Caddy in one container
@@ -965,12 +1007,14 @@ services:
### Git Workflow
**Branch Strategy:**
- `main`: Stable production branch
- `feature/*`: New feature development
- `fix/*`: Bug fixes
- `chore/*`: Maintenance tasks
**Commit Convention:**
- `feat:` New user-facing feature
- `fix:` Bug fix in application code
- `chore:` Infrastructure, CI/CD, dependencies
@@ -979,6 +1023,7 @@ services:
- `test:` Adding or updating tests
**Example:**
```
feat: add DNS-01 challenge support for Cloudflare
@@ -1031,6 +1076,7 @@ Closes #123
**Purpose:** Validate critical user flows in a real browser
**Scope:**
- User authentication
- Proxy host CRUD operations
- Certificate provisioning
@@ -1038,6 +1084,7 @@ Closes #123
- Real-time log streaming
**Execution:**
```bash
# Run against Docker container
npx playwright test --project=chromium
@@ -1050,10 +1097,12 @@ npx playwright test --debug
```
**Coverage Modes:**
- **Docker Mode:** Integration testing, no coverage (0% reported)
- **Vite Dev Mode:** Coverage collection with V8 inspector
**Why Two Modes?**
- Playwright coverage requires source maps and raw source files
- Docker serves pre-built production files (no source maps)
- Vite dev server exposes source files for coverage instrumentation
@@ -1067,6 +1116,7 @@ npx playwright test --debug
**Coverage Target:** 85% minimum
**Execution:**
```bash
# Run all tests
go test ./...
@@ -1079,11 +1129,13 @@ go test -cover ./...
```
**Test Organization:**
- `*_test.go` files alongside source code
- Table-driven tests for comprehensive coverage
- Mocks for external dependencies (database, HTTP clients)
**Example:**
```go
func TestCreateProxyHost(t *testing.T) {
tests := []struct {
@@ -1123,6 +1175,7 @@ func TestCreateProxyHost(t *testing.T) {
**Coverage Target:** 85% minimum
**Execution:**
```bash
# Run all tests
npm test
@@ -1135,6 +1188,7 @@ npm run test:coverage
```
**Test Organization:**
- `*.test.tsx` files alongside components
- Mock API calls with MSW (Mock Service Worker)
- Snapshot tests for UI consistency
@@ -1146,12 +1200,14 @@ npm run test:coverage
**Location:** `backend/integration/`
**Scope:**
- API endpoint end-to-end flows
- Database migrations
- Caddy manager integration
- CrowdSec API calls
**Execution:**
```bash
go test ./integration/...
```
@@ -1161,6 +1217,7 @@ go test ./integration/...
**Automated Hooks (via `.pre-commit-config.yaml`):**
**Fast Stage (< 5 seconds):**
- Trailing whitespace removal
- EOF fixer
- YAML syntax check
@@ -1168,11 +1225,13 @@ go test ./integration/...
- Markdown link validation
**Manual Stage (run explicitly):**
- Backend coverage tests (60-90s)
- Frontend coverage tests (30-60s)
- TypeScript type checking (10-20s)
**Why Manual?**
- Coverage tests are slow and would block commits
- Developers run them on-demand before pushing
- CI enforces coverage on pull requests
@@ -1180,10 +1239,12 @@ go test ./integration/...
### Continuous Integration (GitHub Actions)
**Workflow Triggers:**
- `push` to `main`, `feature/*`, `fix/*`
- `pull_request` to `main`
**CI Jobs:**
1. **Lint:** golangci-lint, ESLint, markdownlint, hadolint
2. **Test:** Go tests, Vitest, Playwright
3. **Security:** Trivy, CodeQL, Grype, Govulncheck
@@ -1205,6 +1266,7 @@ go test ./integration/...
- **PRERELEASE:** `-beta.1`, `-rc.1`, etc.
**Examples:**
- `1.0.0` - Stable release
- `1.1.0` - New feature (DNS provider support)
- `1.1.1` - Bug fix (GORM query fix)
@@ -1215,12 +1277,14 @@ go test ./integration/...
### Build Pipeline (Multi-Platform)
**Platforms Supported:**
- `linux/amd64`
- `linux/arm64`
**Build Process:**
1. **Frontend Build:**
```bash
cd frontend
npm ci --only=production
@@ -1229,6 +1293,7 @@ go test ./integration/...
```
2. **Backend Build:**
```bash
cd backend
go build -o charon cmd/api/main.go
@@ -1236,6 +1301,7 @@ go test ./integration/...
```
3. **Docker Image Build:**
```bash
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t <image>:<tag> .   # substitute your registry path and tag
```
- Level: SLSA Build L3 (hermetic builds)
**Verification Example:**
```bash
# Verify image signature
cosign verify \
  --certificate-identity-regexp='https://github.com/Wikid82/charon' \
  --certificate-oidc-issuer='https://token.actions.githubusercontent.com' \
  ghcr.io/wikid82/charon:latest
# Scan the published image for known vulnerabilities
grype ghcr.io/wikid82/charon@sha256:<index-digest>
```
### Rollback Strategy
**Container Rollback:**
```bash
# List available versions
docker images wikid82/charon
# Roll back to a known-good version
docker-compose up -d --pull always wikid82/charon:1.1.1
```
**Database Rollback:**
```bash
# Restore from backup
docker exec charon /app/scripts/restore-backup.sh
```
### API Extensibility
**REST API Design:**
- Version prefix: `/api/v1/`
- Future versions: `/api/v2/` (backward-compatible)
- Deprecation policy: 2 major versions supported
**WebHooks (Future):**
- Event notifications for external systems
- Triggers: Proxy host created, certificate renewed, security event
- Payload: JSON with event type and data
**Current:** Cerberus security middleware injected into Caddy pipeline
**Future:**
- User-defined middleware (rate limiting rules, custom headers)
- JavaScript/Lua scripting for request transformation
- Plugin marketplace for community contributions
**GitHub Copilot Instructions:**
All agents (`Planning`, `Backend_Dev`, `Frontend_Dev`, `DevOps`) must reference `ARCHITECTURE.md` when:
- Creating new components
- Modifying core systems
- Changing integration points

---

## [Unreleased]
### Added
- **Notifications:** Added Ntfy notification provider with support for self-hosted and cloud instances, optional Bearer token authentication, and JSON template customization
- **Certificate Deletion**: Clean up expired and unused certificates directly from the Certificates page
- Expired Let's Encrypt certificates not attached to any proxy host can now be deleted
- Custom and staging certificates remain deletable when not in use
- In-use certificates show a disabled delete button with a tooltip explaining why
- Native browser confirmation replaced with an accessible, themed confirmation dialog
- **Pushover Notification Provider**: Send push notifications to your devices via the Pushover app
- Supports JSON templates (minimal, detailed, custom)
- Application API Token stored securely — never exposed in API responses
- User Key stored in the URL field, following the same pattern as Telegram
- Feature flag: `feature.notifications.service.pushover.enabled` (on by default)
- Emergency priority (2) is intentionally unsupported — deferred to a future release
- **Slack Notification Provider**: Send alerts to Slack channels via Incoming Webhooks
- Supports JSON templates (minimal, detailed, custom) with Slack's native `text` format
- Webhook URL stored securely — never exposed in API responses
- Optional channel display name for easy identification in provider list
- Feature flag: `feature.notifications.service.slack.enabled` (on by default)
- See [Notification Guide](docs/features/notifications.md) for setup instructions
### CI/CD
- **Supply Chain**: Optimized verification workflow to prevent redundant builds
- Change: Removed direct Push/PR triggers; now waits for 'Docker Build' via `workflow_run`
### Security
- **Supply Chain**: Enhanced PR verification workflow stability and accuracy
- **Vulnerability Reporting**: Eliminated false negatives ("0 vulnerabilities") by enforcing strict failure conditions
- **Tooling**: Switched to manual Grype installation ensuring usage of latest stable binary
- **Observability**: Improved debugging visibility for vulnerability scans and SARIF generation
### Performance
- **E2E Tests**: Reduced feature flag API calls by 90% through conditional polling optimization (Phase 2)
- Conditional skip: Exits immediately if flags already in expected state (~50% of cases)
- Request coalescing: Shares in-flight API requests between parallel test workers
- Prevents timeout errors in Firefox/WebKit caused by strict label matching
### Fixed
- **Notifications:** Fixed Pushover token-clearing bug where tokens were silently stripped on provider create/update
- **TCP Monitor Creation**: Fixed misleading form UX that caused silent HTTP 500 errors when creating TCP monitors
- Corrected URL placeholder to show `host:port` format instead of the incorrect `tcp://host:port` prefix
- Added dynamic per-type placeholder and helper text (HTTP monitors show a full URL example; TCP monitors show `host:port`)
- Added client-side validation that blocks form submission when a scheme prefix (e.g. `tcp://`) is detected, with an inline error message
- Reordered form fields so the monitor type selector appears above the URL input, making the dynamic helper text immediately relevant
- i18n: Added 5 new translation keys across en, de, fr, es, and zh locales
- **CI: Rate Limit Integration Tests**: Hardened test script reliability — login now validates HTTP status, Caddy admin API readiness gated on `/config/` poll, security config failures are fatal with full diagnostics, and poll interval increased to 5s
- **CI: Rate Limit Integration Tests**: Removed stale GeoIP database SHA256 checksum from Dockerfile non-CI path (hash was perpetually stale due to weekly upstream updates)
- **CI: Rate Limit Integration Tests**: Fixed Caddy admin API debug dump URL to use canonical trailing slash in workflow
- Fixed: Added robust validation and debug logging for Docker image tags to prevent invalid reference errors.
- Fixed: Removed log masking for image references and added manifest validation to debug CI failures.
- **Proxy Hosts**: Fixed ACL and Security Headers dropdown selections so create/edit saves now keep the selected values (including clearing to none) after submit and reload.
- **Test Performance**: Reduced system settings test execution time by 31% (from 23 minutes to 16 minutes)
### Changed
- **Testing Infrastructure**: Enhanced E2E test helpers with better synchronization and error handling
- **CI**: Optimized E2E workflow shards (reduced from 4 to 3)

---

```bash
# Option 1: Homebrew (macOS/Linux)
brew install lefthook
# Option 2: Go install
go install github.com/evilmartians/lefthook@latest
```
```bash
# Option 1: Homebrew (macOS/Linux)
brew install golangci-lint
```
For local development, install Go 1.26.0+ from [go.dev/dl](https://go.dev/dl/).
When the project's Go version is updated (usually by Renovate):
1. **Pull the latest changes**
```bash
git pull
```
2. **Update your local Go installation**
```bash
# Run the Go update skill (downloads and installs the new version)
.github/skills/scripts/skill-runner.sh utility-update-go-version
```
3. **Rebuild your development tools**
```bash
# This fixes lefthook hook errors and IDE issues
./scripts/rebuild-go-tools.sh

```
---

ARG CROWDSEC_RELEASE_SHA256=704e37121e7ac215991441cef0d8732e33fa3b1a2b2b88b53a0b
# ---- Shared Go Security Patches ----
# renovate: datasource=go depName=github.com/expr-lang/expr
ARG EXPR_LANG_VERSION=1.17.8
# renovate: datasource=go depName=golang.org/x/net
ARG XNET_VERSION=0.52.0
# renovate: datasource=go depName=github.com/smallstep/certificates
ARG SMALLSTEP_CERTIFICATES_VERSION=0.30.0
# renovate: datasource=npm depName=npm
ARG NPM_VERSION=11.11.1
# Allow pinning Caddy version - Renovate will update this
# Build the most recent Caddy 2.x release (keeps major pinned under v3).
ARG CADDY_CANDIDATE_VERSION=2.11.2
ARG CADDY_USE_CANDIDATE=0
ARG CADDY_PATCH_SCENARIO=B
# renovate: datasource=go depName=github.com/greenpau/caddy-security
ARG CADDY_SECURITY_VERSION=1.1.51
# renovate: datasource=go depName=github.com/corazawaf/coraza-caddy
ARG CORAZA_CADDY_VERSION=2.2.0
## When an official caddy image tag isn't available on the host, use a
ARG VERSION=dev
# Make version available to Vite as VITE_APP_VERSION during the frontend build
ENV VITE_APP_VERSION=${VERSION}
# Set environment to bypass native binary requirement for cross-arch builds
ENV npm_config_rollup_skip_nodejs_native=1 \
ROLLUP_SKIP_NODEJS_NATIVE=1
# Vite 8: Rolldown native bindings auto-resolved per platform via optionalDependencies
ARG NPM_VERSION
# hadolint ignore=DL3017
RUN apk upgrade --no-cache && \
npm install -g npm@${NPM_VERSION} --no-fund --no-audit && \
npm cache clean --force
RUN npm ci
ARG CORAZA_CADDY_VERSION
ARG XCADDY_VERSION=0.4.5
ARG EXPR_LANG_VERSION
ARG XNET_VERSION
ARG SMALLSTEP_CERTIFICATES_VERSION
# hadolint ignore=DL3018
RUN apk add --no-cache bash git
RUN --mount=type=cache,target=/root/.cache/go-build \
# renovate: datasource=go depName=github.com/hslatman/ipstore
go get github.com/hslatman/ipstore@v0.4.0; \
go get golang.org/x/net@v${XNET_VERSION}; \
# CVE-2026-33186 (GHSA-p77j-4mvh-x3m3): gRPC-Go auth bypass via missing leading slash
# Fix available at v1.79.3. Pin here so the Caddy binary is patched immediately;
# remove once Caddy ships a release built with grpc >= v1.79.3.
# renovate: datasource=go depName=google.golang.org/grpc
go get google.golang.org/grpc@v1.79.3; \
# GHSA-479m-364c-43vc: goxmldsig XML signature validation bypass (loop variable capture)
# Fix available at v1.6.0. Pin here so the Caddy binary is patched immediately;
# remove once caddy-security ships a release built with goxmldsig >= v1.6.0.
# renovate: datasource=go depName=github.com/russellhaering/goxmldsig
go get github.com/russellhaering/goxmldsig@v1.6.0; \
# CVE-2026-30836: smallstep/certificates 0.30.0-rc3 vulnerability
# Fix available at v0.30.0. Pin here so the Caddy binary is patched immediately;
# remove once caddy-security ships a release built with smallstep/certificates >= v0.30.0.
go get github.com/smallstep/certificates@v${SMALLSTEP_CERTIFICATES_VERSION}; \
if [ "${CADDY_PATCH_SCENARIO}" = "A" ]; then \
# Rollback scenario: keep explicit nebula pin if upstream compatibility regresses.
# NOTE: smallstep/certificates (pulled by caddy-security stack) currently
RUN git clone --depth 1 --branch "v${CROWDSEC_VERSION}" https://github.com/crowd
RUN go get github.com/expr-lang/expr@v${EXPR_LANG_VERSION} && \
go get golang.org/x/crypto@v0.46.0 && \
go get golang.org/x/net@v${XNET_VERSION} && \
# CVE-2026-33186 (GHSA-p77j-4mvh-x3m3): gRPC-Go auth bypass via missing leading slash
# Fix available at v1.79.3. Pin here so the CrowdSec binary is patched immediately;
# remove once CrowdSec ships a release built with grpc >= v1.79.3.
# renovate: datasource=go depName=google.golang.org/grpc
go get google.golang.org/grpc@v1.79.3 && \
go mod tidy
# Fix compatibility issues with expr-lang v1.17.7
WORKDIR /app
# Install runtime dependencies for Charon, including bash for maintenance scripts
# Note: gosu is now built from source (see gosu-builder stage) to avoid CVEs from Debian's pre-compiled version
# Explicitly upgrade packages to fix security vulnerabilities
# binutils provides objdump for debug symbol detection in docker-entrypoint.sh
# hadolint ignore=DL3018
RUN apk add --no-cache \
bash ca-certificates sqlite-libs sqlite tzdata curl gettext libcap libcap-utils \
c-ares binutils libc-utils busybox-extras
bash ca-certificates sqlite-libs sqlite tzdata gettext libcap libcap-utils \
c-ares busybox-extras \
&& apk upgrade --no-cache zlib
# Copy gosu binary from gosu-builder (built with Go 1.26+ to avoid stdlib CVEs)
COPY --from=gosu-builder /gosu-out/gosu /usr/sbin/gosu
SHELL ["/bin/ash", "-o", "pipefail", "-c"]
# Note: In production, users should provide their own MaxMind license key
# This uses the publicly available GeoLite2 database
# In CI, timeout quickly rather than retrying to save build time
ARG GEOLITE2_COUNTRY_SHA256=f5e80a9a3129d46e75c8cccd66bfac725b0449a6c89ba5093a16561d58f20bda
RUN mkdir -p /app/data/geoip && \
if [ "$CI" = "true" ] || [ "$CI" = "1" ]; then \
echo "⏱️ CI detected - quick download (10s timeout, no retries)"; \
if wget -qO /app/data/geoip/GeoLite2-Country.mmdb \
-T 10 "https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-Country.mmdb" 2>/dev/null \
&& [ -s /app/data/geoip/GeoLite2-Country.mmdb ]; then \
echo "✅ GeoIP downloaded"; \
else \
echo "⚠️ GeoIP skipped"; \
fi; \
else \
echo "Local - full download (30s timeout, 3 retries)"; \
if wget -qO /app/data/geoip/GeoLite2-Country.mmdb \
-T 30 -t 4 "https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-Country.mmdb" \
&& [ -s /app/data/geoip/GeoLite2-Country.mmdb ]; then \
echo "✅ GeoIP downloaded"; \
else \
echo "⚠️ GeoIP download failed or empty — skipping"; \
touch /app/data/geoip/GeoLite2-Country.mmdb.placeholder; \
fi; \
fi
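The `GEOLITE2_COUNTRY_SHA256` build arg above is meant for the standard `sha256sum -c` verification pattern. A self-contained sketch of that pattern, using a stand-in file and path rather than the real `.mmdb`:

```shell
# Stand-in for the downloaded database file
printf 'example database bytes' > /tmp/sample.mmdb
# Record its digest, as the GEOLITE2_COUNTRY_SHA256 build arg does for the real file
EXPECTED=$(sha256sum /tmp/sample.mmdb | awk '{print $1}')
# Verify: sha256sum -c reads "<digest>  <path>" lines and exits non-zero on mismatch
echo "${EXPECTED}  /tmp/sample.mmdb" | sha256sum -c -
```

Because the upstream database changes weekly, any hard-coded digest goes stale on the same cadence, which is why the checksum is updated by an automated workflow.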
EXPOSE 80 443 443/udp 2019 8080
# Security: Add healthcheck to monitor container health
# Verifies the Charon API is responding correctly
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
CMD wget -q -O /dev/null http://localhost:8080/api/v1/health || exit 1
# Create CrowdSec symlink as root before switching to non-root user
# This symlink allows CrowdSec to use persistent storage at /app/data/crowdsec/config

---

retries: 3
start_period: 40s
```
> **Docker Socket Access:** Charon runs as a non-root user. If you mount the Docker socket for container discovery, the container needs permission to read it. Find your socket's group ID and add it to the compose file:
>
> ```bash
> # Print the docker socket's group ID, then add it under `group_add:` in your
> # compose file, e.g.:
> #   group_add:
> #     - "998"
> stat -c '%g' /var/run/docker.sock
> ```
### 2⃣ Generate encryption key
```bash
openssl rand -base64 32
```
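As a quick sanity check (assuming `openssl` is on your PATH), a 32-byte key always encodes to exactly 44 base64 characters:

```shell
# Generate the key exactly as above, then check its encoded length
KEY=$(openssl rand -base64 32)
echo "${#KEY}"   # prints 44 (32 bytes -> ceil(32/3)*4 base64 chars)
```

If you see a different length, the value was likely truncated while copying it into your environment file.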
### 3⃣ Start Charon
```bash
docker-compose up -d
```
### 4⃣ Access the dashboard
Open your browser and navigate to `http://localhost:8080` to access the dashboard and create your admin account.
```text
http://localhost:8080
```
### Getting Started
Full setup instructions and documentation are available at [https://wikid82.github.io/Charon/docs/getting-started.html](https://wikid82.github.io/Charon/docs/getting-started.html).
---
## ✨ Top 10 Features
### 🎯 **Point & Click Management**
No config files. No terminal commands. Just click, type your domain name, and you're live. If you can use a website, you can run Charon.
### 🔐 **Automatic HTTPS Certificates**
### 📥 **Migration Made Easy**
Already invested in another reverse proxy? Bring your work with you by importing your existing configurations with one click:
- **Caddyfile** — Migrate from other Caddy setups
- **Nginx** — Import from Nginx-based configurations (Coming Soon)
- **Traefik** — Import from Traefik-based configurations (Coming Soon)

---

## Reporting a Vulnerability
We take security seriously. If you discover a security vulnerability in Charon, please report it responsibly.
To report a security issue, use
[GitHub Private Security Advisories](https://github.com/Wikid82/charon/security/advisories/new)
or open a [GitHub Issue](https://github.com/Wikid82/Charon/issues) for non-sensitive disclosures.
Please include a description, reproduction steps, impact assessment, and a non-destructive proof of
concept where possible.
We will acknowledge your report within **48 hours** and provide a remediation timeline within
**7 days**. Reporters are credited in release notes with their consent. We do not pursue legal
action against good-faith security researchers. Please allow **90 days** from initial report before
public disclosure.
---
## Known Vulnerabilities
Last reviewed: 2026-03-24
### [HIGH] CVE-2026-2673 · OpenSSL TLS 1.3 Key Exchange Group Downgrade
| Field | Value |
|--------------|-------|
| **ID** | CVE-2026-2673 (affects `libcrypto3` and `libssl3`) |
| **Severity** | High · 7.5 |
| **Status** | Awaiting Upstream |
**What**
An OpenSSL TLS 1.3 server may fail to negotiate the intended key exchange group when the
configuration includes the `DEFAULT` keyword, potentially allowing downgrade to weaker cipher
suites. Affects Alpine 3.23.3 packages `libcrypto3` and `libssl3` at version 3.5.5-r0.
**Who**
- Discovered by: Automated scan (Grype)
- Reported: 2026-03-20
- Affects: Container runtime environment; Caddy reverse proxy TLS negotiation could be affected
  if default key group configuration is used
**Where**
- Component: Alpine 3.23.3 base image (`libcrypto3` 3.5.5-r0, `libssl3` 3.5.5-r0)
- Versions affected: Alpine 3.23.3 prior to a patched `openssl` APK release
**When**
- Discovered: 2026-03-20
- Disclosed (if public): 2026-03-13 (OpenSSL advisory)
- Target fix: When Alpine Security publishes a patched `openssl` APK
**How**
When an OpenSSL TLS 1.3 server configuration uses the `DEFAULT` keyword for key exchange groups,
the negotiation logic may select a weaker group than intended. Charon's Caddy TLS configuration
does not use the `DEFAULT` keyword, which limits practical exploitability. The packages are
present in the base image regardless of Caddy's configuration.
**Planned Remediation**
Monitor <https://security.alpinelinux.org/vuln/CVE-2026-2673> for a patched Alpine APK. Once
available, update the pinned `ALPINE_IMAGE` digest in the Dockerfile, or add an explicit
`RUN apk upgrade --no-cache libcrypto3 libssl3` to the runtime stage.
---
### [MEDIUM] CVE-2025-60876 · BusyBox wget HTTP Request Smuggling
| Field | Value |
|--------------|-------|
| **ID** | CVE-2025-60876 |
| **Severity** | Medium · 6.5 |
| **Status** | Awaiting Upstream |
**What**
BusyBox wget through 1.37 accepts raw CR/LF and other C0 control bytes in the HTTP
request-target, allowing request line splitting and header injection (CWE-284).
**Who**
- Discovered by: Automated scan (Grype)
- Reported: 2026-03-24
- Affects: Container runtime environment; Charon does not invoke busybox wget in application logic
**Where**
- Component: Alpine 3.23.3 base image (`busybox` 1.37.0-r30)
- Versions affected: All Charon images using Alpine 3.23.3 with busybox < patched version
**When**
- Discovered: 2026-03-24
- Disclosed (if public): Not yet publicly disclosed with fix
- Target fix: When Alpine Security publishes a patched busybox APK
**How**
The vulnerable wget applet would need to be manually invoked inside the container with
attacker-controlled URLs. Charon's application logic does not use busybox wget. EPSS score is
0.00064 (0.20 percentile), indicating extremely low exploitation probability.
**Planned Remediation**
Monitor Alpine 3.23 for a patched busybox APK. No immediate action required. Practical risk to
Charon users is negligible since the vulnerable code path is not exercised.
---
### [LOW] CVE-2026-26958 · edwards25519 MultiScalarMult Invalid Results
| Field | Value |
|--------------|-------|
| **ID** | CVE-2026-26958 (GHSA-fw7p-63qq-7hpr) |
| **Severity** | Low · 1.7 |
| **Status** | Awaiting Upstream |
**What**
`filippo.io/edwards25519` v1.1.0 `MultiScalarMult` produces invalid results or undefined
behavior if the receiver is not the identity point. Fix available at v1.1.1 but requires
CrowdSec to rebuild.
**Who**
- Discovered by: Automated scan (Grype)
- Reported: 2026-03-24
- Affects: CrowdSec Agent component within the container; not directly exposed through Charon's
primary application interface
**Where**
- Component: CrowdSec Agent (bundled `cscli` and `crowdsec` binaries)
- Versions affected: CrowdSec builds using `filippo.io/edwards25519` < v1.1.1
**When**
- Discovered: 2026-03-24
- Disclosed (if public): Public
- Target fix: When CrowdSec releases a build with updated dependency
**How**
This is a rarely used advanced API within the edwards25519 library. CrowdSec does not directly
expose MultiScalarMult to external input. EPSS score is 0.00018 (0.04 percentile).
**Planned Remediation**
Awaiting CrowdSec upstream release with updated dependency. No action available for Charon
maintainers.
---
## Patched Vulnerabilities
### ✅ [CRITICAL] CVE-2025-68121 · Go Stdlib Critical in CrowdSec Bundled Binaries
| Field | Value |
|--------------|-------|
| **ID** | CVE-2025-68121 (see also CHARON-2025-001) |
| **Severity** | Critical |
| **Patched** | 2026-03-24 |
**What**
A critical Go standard library vulnerability affects CrowdSec binaries bundled in the Charon
container image. The binaries were compiled against Go 1.25.6, which contains this flaw.
Charon's own application code, compiled with Go 1.26.1, is unaffected.
**Who**
- Discovered by: Automated scan (Grype)
- Reported: 2026-03-20
**Where**
- Component: CrowdSec Agent (bundled `cscli` and `crowdsec` binaries)
- Versions affected: Charon container images with CrowdSec binaries compiled against Go < 1.25.7
**When**
- Discovered: 2026-03-20
- Patched: 2026-03-24
- Time to patch: 4 days
**How**
The vulnerability resides entirely within CrowdSec's compiled binary artifacts. Exploitation
is limited to the CrowdSec agent's internal execution paths, which are not externally exposed
through Charon's API or network interface.
**Resolution**
CrowdSec binaries now compiled with Go 1.26.1 (was 1.25.6).
---
### ✅ [HIGH] CHARON-2025-001 · CrowdSec Bundled Binaries — Go Stdlib CVEs
| Field | Value |
|--------------|-------|
| **ID** | CHARON-2025-001 (aliases: CVE-2025-58183, CVE-2025-58186, CVE-2025-58187, CVE-2025-61729, CVE-2026-25679, CVE-2025-61732, CVE-2026-27142, CVE-2026-27139) |
| **Severity** | High · (preliminary, CVSS scores pending upstream confirmation) |
| **Patched** | 2026-03-24 |
**What**
Multiple CVEs in Go standard library packages continue to accumulate in CrowdSec binaries bundled
with Charon. The cluster originated when CrowdSec was compiled against Go 1.25.1; subsequent
CrowdSec updates advanced the toolchain to Go 1.25.6/1.25.7, resolving earlier CVEs but
introducing new ones. The cluster now includes a Critical-severity finding (CVE-2025-68121,
tracked separately above). All issues resolve when CrowdSec is rebuilt against Go ≥ 1.26.2.
Charon's own application code is unaffected.
**Who**
- Discovered by: Automated scan (Trivy, Grype)
- Reported: 2025-12-01 (original cluster); expanded 2026-03-20
**Where**
- Component: CrowdSec Agent (bundled `cscli` and `crowdsec` binaries)
- Versions affected: All Charon versions shipping CrowdSec binaries compiled against Go < 1.26.2
**When**
- Discovered: 2025-12-01
- Patched: 2026-03-24
- Time to patch: 114 days
**How**
The CVEs reside entirely within CrowdSec's compiled binaries and cover HTTP/2, TLS, and archive
processing paths that are not invoked by Charon's core application logic. The relevant network
interfaces are not externally exposed via Charon's API surface.
**Resolution**
CrowdSec binaries now compiled with Go 1.26.1.
---
### ✅ [MEDIUM] CVE-2026-27171 · zlib CPU Exhaustion via Infinite Loop in CRC Combine Functions
| Field | Value |
|--------------|-------|
| **ID** | CVE-2026-27171 |
| **Severity** | Medium · 5.5 (NVD) / 2.9 (MITRE) |
| **Patched** | 2026-03-24 |
**What**
zlib before 1.3.2 allows unbounded CPU consumption (denial of service) via the `crc32_combine64`
and `crc32_combine_gen64` functions. An internal helper `x2nmodp` performs right-shifts inside a
loop with no termination condition when given a specially crafted input, causing a CPU spin
(CWE-1284).
**Who**
- Discovered by: 7aSecurity audit (commissioned by OSTIF)
- Reported: 2026-02-17
**Where**
- Component: Alpine 3.23.3 base image (`zlib` package, version 1.3.1-r2)
- Versions affected: zlib < 1.3.2; all current Charon images using Alpine 3.23.3
**When**
- Discovered: 2026-02-17
- Patched: 2026-03-24
- Time to patch: 35 days
**How**
Exploitation requires local access (CVSS vector `AV:L`) and the ability to pass a crafted value
to the `crc32_combine`-family functions. This code path is not invoked by Charon's reverse proxy
or backend API. The vulnerability is non-blocking under the project's CI severity policy.
**Resolution**
Alpine now ships zlib 1.3.2-r0 (fix threshold was 1.3.2).
---
### ✅ [HIGH] CHARON-2026-001 · Debian Base Image CVE Cluster
| Field | Value |
|--------------|-------|
| **ID** | CHARON-2026-001 (aliases: CVE-2026-0861, CVE-2025-15281, CVE-2026-0915, CVE-2025-13151, and 2 libtiff HIGH CVEs) |
| **Severity** | High · 8.4 (highest per CVSS v3.1) |
| **Patched** | 2026-03-20 (Alpine base image migration complete) |
**What**
Seven HIGH-severity CVEs in Debian Trixie base image system libraries (`glibc`, `libtasn1-6`,
`libtiff`). These vulnerabilities resided in the container's OS-level packages with no fixes
available from the Debian Security Team.
**Who**
- Discovered by: Automated scan (Trivy)
- Reported: 2026-02-04
**Where**
- Component: Debian Trixie base image (`libc6`, `libc-bin`, `libtasn1-6`, `libtiff`)
- Versions affected: Charon container images built on Debian Trixie base (prior to Alpine migration)
**When**
- Discovered: 2026-02-04
- Patched: 2026-03-20
- Time to patch: 44 days
**How**
The affected packages were OS-level shared libraries bundled in the Debian Trixie container base
image. Exploitation would have required local container access or a prior application-level
compromise. Caddy reverse proxy ingress filtering and container isolation significantly reduced
the effective attack surface throughout the exposure window.
**Resolution**
Reverted to Alpine Linux base image (Alpine 3.23.3). Alpine's patch of CVE-2025-60876 (busybox
heap overflow) removed the original blocker for the Alpine migration. Post-migration scan
confirmed zero HIGH/CRITICAL CVEs from this cluster.
- Spec: [docs/plans/alpine_migration_spec.md](docs/plans/alpine_migration_spec.md)
- Advisory: [docs/security/advisory_2026-02-04_debian_cves_temporary.md](docs/security/advisory_2026-02-04_debian_cves_temporary.md)
**Credit**
Internal remediation; no external reporter.
---
### ✅ [HIGH] CVE-2025-68156 · expr-lang/expr ReDoS
| Field | Value |
|--------------|-------|
| **ID** | CVE-2025-68156 |
| **Severity** | High · 7.5 |
| **Patched** | 2026-01-11 |
**What**
Regular Expression Denial of Service (ReDoS) vulnerability in the `expr-lang/expr` library used
by CrowdSec for expression evaluation. Malicious regular expressions in CrowdSec scenarios or
parsers could cause CPU exhaustion and service degradation through exponential backtracking.
**Who**
- Discovered by: Automated scan (Trivy)
- Reported: 2026-01-11
**Where**
- Component: CrowdSec (via `expr-lang/expr` dependency)
- Versions affected: CrowdSec versions using `expr-lang/expr` < v1.17.7
**When**
- Discovered: 2026-01-11
- Patched: 2026-01-11
- Time to patch: 0 days
**How**
Maliciously crafted regular expressions in CrowdSec scenario or parser rules could trigger
exponential backtracking in `expr-lang/expr`'s evaluation engine, causing CPU exhaustion and
denial of service. The vulnerability is in the upstream expression evaluation library, not in
Charon's own code.
**Resolution**
Upgraded CrowdSec to build from source with the patched `expr-lang/expr` v1.17.7. Verification
confirmed via `go version -m ./cscli` showing the patched library version in compiled artifacts.
Post-patch Trivy scan reports 0 HIGH/CRITICAL vulnerabilities in application code.
- Technical details: [docs/plans/crowdsec_source_build.md](docs/plans/crowdsec_source_build.md)
**Credit**
Internal remediation; no external reporter.
---
### Server-Side Request Forgery (SSRF) Protection
Charon implements industry-leading **5-layer defense-in-depth** SSRF protection to prevent
attackers from using the application to access internal resources or cloud metadata.
#### Protected Against
#### Learn More
For complete technical details, see:
- [SSRF Protection Guide](docs/security/ssrf-protection.md)
- [Manual Test Plan](docs/issues/ssrf-manual-test-plan.md)
- [QA Audit Report](docs/reports/qa_ssrf_remediation_report.md)
### Infrastructure Security
- **Non-root by default**: Charon runs as an unprivileged user (`charon`, uid 1000) inside the
container. Docker socket access is granted via a minimal supplemental group matching the host
socket's GID — never by running as root. If the socket GID is `0` (root group), Charon requires
explicit opt-in before granting access.
- **Container isolation**: Docker-based deployment
- **Minimal attack surface**: Alpine Linux base image
- **Dependency scanning**: Regular Trivy and govulncheck scans
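The supplemental-group mechanism described above can be set up from the host side; a minimal sketch, assuming a standard Docker socket path:

```bash
# Discover the GID of the host's Docker socket (path assumes a standard install)
DOCKER_GID=$(stat -c '%g' /var/run/docker.sock)
echo "$DOCKER_GID"
```

The resulting GID can then be passed as a supplemental group, e.g. `docker run --group-add "$DOCKER_GID" …`, so the container never needs to run as root.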
@@ -139,6 +445,126 @@ For complete technical details, see:
---
## Supply Chain Security
Charon implements comprehensive supply chain security measures to ensure the integrity and
authenticity of releases. Every release includes cryptographic signatures, SLSA provenance
attestation, and a Software Bill of Materials (SBOM).
### Verification Commands
#### Verify Container Image Signature
All official Charon images are signed with Sigstore Cosign:
```bash
cosign verify \
--certificate-identity-regexp='https://github.com/Wikid82/charon' \
--certificate-oidc-issuer='https://token.actions.githubusercontent.com' \
ghcr.io/wikid82/charon:latest
```
Successful verification confirms the image was built by GitHub Actions from the official
repository and has not been tampered with since signing.
#### Verify SLSA Provenance
```bash
# Download provenance from release assets
curl -LO https://github.com/Wikid82/charon/releases/latest/download/provenance.json
slsa-verifier verify-artifact \
--provenance-path provenance.json \
--source-uri github.com/Wikid82/charon \
./backend/charon-binary
```
#### Inspect the SBOM
```bash
# Download SBOM from release assets
curl -LO https://github.com/Wikid82/charon/releases/latest/download/sbom.spdx.json
# Scan for known vulnerabilities
grype sbom:sbom.spdx.json
```
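Beyond vulnerability scanning, the SBOM can also be queried directly; a sketch assuming `jq` is installed (`.packages[].name` and `.versionInfo` are standard SPDX 2.x JSON fields):

```bash
# List package names and versions recorded in the SPDX SBOM
jq -r '.packages[] | "\(.name) \(.versionInfo)"' sbom.spdx.json | head
```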
### Transparency Log (Rekor)
All signatures are recorded in the public Sigstore Rekor transparency log:
<https://search.sigstore.dev/>
### Digest Pinning Policy
**Scope (Required):**
- CI workflows: `.github/workflows/*.yml`
- CI compose files: `.docker/compose/*.yml`
- CI helper actions with container refs: `.github/actions/**/*.yml`
CI workflows and CI compose files MUST use digest-pinned images for third-party services.
Tag+digest pairs are preferred for human-readable references with immutable resolution.
Self-built images MUST propagate digests to downstream jobs and tests.
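For illustration, a tag+digest pair in a CI compose file looks like this (the service and digest below are placeholders, not values from this repository):

```yaml
services:
  redis:
    # Tag for human readability, digest for immutable resolution
    image: redis:7.2-alpine@sha256:<pinned-digest>
```

The real digest for a tag can be resolved with `docker buildx imagetools inspect <image>:<tag>`.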
**Local Development Exceptions:**
Local-only overrides (e.g., `CHARON_E2E_IMAGE`, `CHARON_IMAGE`, `CHARON_DEV_IMAGE`) MAY use tags
for developer iteration. Tag-only overrides MUST NOT be used in CI contexts.
**Documented Exceptions & Compensating Controls:**
1. **Go toolchain shim** (`golang.org/dl/goX.Y.Z@latest`) — Uses `@latest` to install the shim;
compensated by the target toolchain version being pinned in `go.work` with Renovate tracking.
2. **Unpinnable dependencies** — Require documented justification; prefer vendor checksums or
signed releases; keep SBOM/vulnerability scans in CI.
### Learn More
- [User Guide](docs/guides/supply-chain-security-user-guide.md)
- [Developer Guide](docs/guides/supply-chain-security-developer-guide.md)
- [Sigstore Documentation](https://docs.sigstore.dev/)
- [SLSA Framework](https://slsa.dev/)
---
## Security Audits & Scanning
### Automated Scanning
| Tool | Purpose |
|------|---------|
| Trivy | Container image vulnerability scanning |
| CodeQL | Static analysis for Go and JavaScript |
| govulncheck | Go module vulnerability scanning |
| golangci-lint (gosec) | Go code linting |
| npm audit | Frontend dependency scanning |
### Scanning Workflows
**Docker Build & Scan** (`.github/workflows/docker-build.yml`) — runs on every commit to `main`,
`development`, and `feature/beta-release`, and on all PRs targeting those branches. Performs Trivy
scanning, generates an SBOM, creates SBOM attestations, and uploads SARIF results to the GitHub
Security tab.
**Supply Chain Verification** (`.github/workflows/supply-chain-verify.yml`) — triggers
automatically via `workflow_run` after a successful docker-build. Runs SBOM completeness checks,
Grype vulnerability scans, and (on releases) Cosign signature and SLSA provenance validation.
**Weekly Security Rebuild** (`.github/workflows/security-weekly-rebuild.yml`) — runs every Sunday
at 02:00 UTC. Performs a full no-cache rebuild, scans for all severity levels, and retains JSON
artifacts for 90 days.
**PR-Specific Scanning** — extracts and scans only the Charon application binary on each pull
request. Fails the PR if CRITICAL or HIGH vulnerabilities are found in application code.
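An equivalent gate can be sketched as a workflow step using Trivy's `--severity` and `--exit-code` flags (the extraction path below is an assumption, not the workflow's actual layout):

```yaml
# Fail the job when HIGH/CRITICAL findings exist in the extracted binary
- name: Scan application binary
  run: trivy rootfs --severity HIGH,CRITICAL --exit-code 1 ./extracted/
```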
### Manual Reviews
- Security code reviews for all major features
- Peer review of security-sensitive changes
- Third-party security audits (planned)
---
## Security Best Practices
### Deployment Recommendations
@@ -153,26 +579,25 @@ For complete technical details, see:
### Configuration Hardening
```yaml
# Recommended docker-compose.yml settings
services:
charon:
image: ghcr.io/wikid82/charon:latest
restart: unless-stopped
environment:
- CHARON_ENV=production
- LOG_LEVEL=info # Don't use debug in production
volumes:
- ./charon-data:/app/data:rw
- /var/run/docker.sock:/var/run/docker.sock:ro # Read-only!
networks:
- charon-internal # Isolated network
cap_drop:
- ALL
cap_add:
- NET_BIND_SERVICE # Only if binding to ports < 1024
security_opt:
- no-new-privileges:true
read_only: true # If possible
tmpfs:
- /tmp:noexec,nosuid,nodev
```
@@ -182,9 +607,8 @@ services:
Gotify application tokens are secrets and must be handled with strict confidentiality.
- Never echo, print, log, or return token values in API responses or errors.
- Never expose tokenized endpoint query strings (e.g., `...?token=...`) in logs, diagnostics,
examples, screenshots, tickets, or reports.
- Always redact query parameters in diagnostics and examples before display or storage.
- Use write-only token inputs in operator workflows and UI forms.
- Store tokens only in environment variables or a dedicated secret manager.
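The redaction rule above can be applied mechanically before log lines are stored; a minimal sketch using `sed` (the pattern is an assumption to adapt to your log pipeline):

```bash
# Redact token query parameters from a log line before display or storage
printf 'GET /message?token=Axxxx&title=hi\n' \
  | sed -E 's/([?&]token=)[^&[:space:]]*/\1REDACTED/g'
# prints: GET /message?token=REDACTED&title=hi
```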
@@ -200,322 +624,6 @@ Gotify application tokens are secrets and must be handled with strict confidenti
---
## Supply Chain Security
Charon implements comprehensive supply chain security measures to ensure the integrity and authenticity of releases. Every release includes cryptographic signatures, SLSA provenance attestation, and Software Bill of Materials (SBOM).
### Verification Commands
#### Verify Container Image Signature
All official Charon images are signed with Sigstore Cosign:
```bash
# Install cosign (if not already installed)
curl -LO https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
sudo mv cosign-linux-amd64 /usr/local/bin/cosign
sudo chmod +x /usr/local/bin/cosign
# Verify image signature
cosign verify \
--certificate-identity-regexp='https://github.com/Wikid82/charon' \
--certificate-oidc-issuer='https://token.actions.githubusercontent.com' \
ghcr.io/wikid82/charon:latest
```
Successful verification output confirms:
- The image was built by GitHub Actions
- The build came from the official Charon repository
- The image has not been tampered with since signing
#### Verify SLSA Provenance
SLSA (Supply-chain Levels for Software Artifacts) provenance provides tamper-proof evidence of how the software was built:
```bash
# Install slsa-verifier (if not already installed)
curl -LO https://github.com/slsa-framework/slsa-verifier/releases/latest/download/slsa-verifier-linux-amd64
sudo mv slsa-verifier-linux-amd64 /usr/local/bin/slsa-verifier
sudo chmod +x /usr/local/bin/slsa-verifier
# Download provenance from release assets
curl -LO https://github.com/Wikid82/charon/releases/latest/download/provenance.json
# Verify provenance
slsa-verifier verify-artifact \
--provenance-path provenance.json \
--source-uri github.com/Wikid82/charon \
./backend/charon-binary
```
#### Inspect Software Bill of Materials (SBOM)
Every release includes a comprehensive SBOM in SPDX format:
```bash
# Download SBOM from release assets
curl -LO https://github.com/Wikid82/charon/releases/latest/download/sbom.spdx.json
# View SBOM contents
jq . sbom.spdx.json
# Check for known vulnerabilities (requires Grype)
grype sbom:sbom.spdx.json
```
### Transparency Log (Rekor)
All signatures are recorded in the public Sigstore Rekor transparency log, providing an immutable audit trail:
- **Search the log**: <https://search.sigstore.dev/>
- **Query by image**: Search for `ghcr.io/wikid82/charon`
- **View entry details**: Each entry includes commit SHA, workflow run, and signing timestamp
### Automated Verification in CI/CD
Integrate supply chain verification into your deployment pipeline:
```yaml
# Example GitHub Actions workflow
- name: Verify Charon Image
run: |
cosign verify \
--certificate-identity-regexp='https://github.com/Wikid82/charon' \
--certificate-oidc-issuer='https://token.actions.githubusercontent.com' \
ghcr.io/wikid82/charon:${{ env.VERSION }}
```
### What's Protected
- **Container Images**: All `ghcr.io/wikid82/charon:*` images are signed
- **Release Binaries**: Backend binaries include provenance attestation
- **Build Process**: SLSA Level 3 compliant build provenance
- **Dependencies**: Complete SBOM including all direct and transitive dependencies
### Digest Pinning Policy
Charon uses digest pinning to reduce supply chain risk and ensure CI runs against immutable artifacts.
**Scope (Required):**
- **CI workflows**: `.github/workflows/*.yml`, `.github/workflows/*.yaml`
- **CI compose files**: `.docker/compose/*.yml`, `.docker/compose/*.yaml`, `.docker/compose/docker-compose*.yml`, `.docker/compose/docker-compose*.yaml`
- **CI helper actions with container refs**: `.github/actions/**/*.yml`, `.github/actions/**/*.yaml`
- CI workflows and CI compose files MUST use digest-pinned images for third-party services.
- Tag+digest pairs are preferred for human-readable references with immutable resolution.
- Self-built images MUST propagate digests to downstream jobs and tests.
**Rationale:**
- Prevent tag drift and supply chain substitution in automated runs.
- Ensure deterministic builds, reproducible scans, and stable SBOM generation.
- Reduce rollback risk by guaranteeing CI uses immutable artifacts.
**Local Development Exceptions:**
- Local-only overrides (e.g., `CHARON_E2E_IMAGE`, `CHARON_IMAGE`, `CHARON_DEV_IMAGE`) MAY use tags for developer iteration.
- Tag-only overrides MUST NOT be used in CI contexts.
**Documented Exceptions & Compensating Controls:**
1. **Go toolchain shim** (`golang.org/dl/goX.Y.Z@latest`)
- **Exception:** Uses `@latest` to install the shim.
- **Compensating controls:** The target toolchain version is pinned in
`go.work`, and Renovate tracks the required version for updates.
2. **Unpinnable dependencies** (no stable digest or checksum source)
- **Exception:** Dependency cannot be pinned by digest.
- **Compensating controls:** Require documented justification, prefer
vendor-provided checksums or signed releases when available, and keep
SBOM/vulnerability scans in CI.
### Learn More
- **[User Guide](docs/guides/supply-chain-security-user-guide.md)**: Step-by-step verification instructions
- **[Developer Guide](docs/guides/supply-chain-security-developer-guide.md)**: Integration into development workflow
- **[Sigstore Documentation](https://docs.sigstore.dev/)**: Technical details on signing and verification
- **[SLSA Framework](https://slsa.dev/)**: Supply chain security framework overview
---
## Security Audits & Scanning
### Automated Scanning
We use the following tools:
- **Trivy**: Container image vulnerability scanning
- **CodeQL**: Static code analysis for Go and JavaScript
- **govulncheck**: Go module vulnerability scanning
- **golangci-lint**: Go code linting (including gosec)
- **npm audit**: Frontend dependency vulnerability scanning
### Security Scanning Workflows
Charon implements multiple layers of automated security scanning:
#### Docker Build & Scan (Per-Commit)
**Workflow**: `.github/workflows/docker-build.yml`
- Runs on every commit to `main`, `development`, and `feature/beta-release` branches
- Runs on all pull requests targeting these branches
- Performs Trivy vulnerability scanning on built images
- Generates SBOM (Software Bill of Materials) for supply chain transparency
- Creates SBOM attestations for verifiable build provenance
- Verifies Caddy security patches (CVE-2025-68156)
- Uploads SARIF results to GitHub Security tab
**Note**: This workflow replaced the previous `docker-publish.yml` (deleted Dec 21, 2025) with enhanced security features.
#### Supply Chain Verification
**Workflow**: `.github/workflows/supply-chain-verify.yml`
**Trigger Timing**: Runs automatically after `docker-build.yml` completes successfully via `workflow_run` trigger.
**Branch Coverage**: Triggers on **ALL branches** where docker-build completes, including:
- `main` (default branch)
- `development`
- `feature/*` branches (including `feature/beta-release`)
- Pull request branches
**Why No Branch Filter**: GitHub Actions has a platform limitation where `branches` filters in `workflow_run` triggers only match the default branch. To ensure comprehensive supply chain verification across all branches and PRs, we intentionally omit the branch filter. The workflow file must exist on the branch to execute, preventing untrusted code execution.
**Verification Steps**:
1. SBOM completeness verification
2. Vulnerability scanning with Grype
3. Results uploaded as workflow artifacts
4. PR comments with vulnerability summary (when applicable)
5. For releases: Cosign signature verification and SLSA provenance validation
**Additional Triggers**:
- Runs on all published releases
- Scheduled weekly on Mondays at 00:00 UTC
- Can be triggered manually via `workflow_dispatch`
#### Weekly Security Rebuild
**Workflow**: `.github/workflows/security-weekly-rebuild.yml`
- Runs every Sunday at 02:00 UTC
- Performs full rebuild with no cache to ensure latest base images
- Scans with Trivy for CRITICAL, HIGH, MEDIUM, and LOW vulnerabilities
- Uploads results to GitHub Security tab
- Stores JSON artifacts for 90-day retention
- Checks Alpine package versions for security updates
#### PR-Specific Scanning
**Workflow**: `.github/workflows/docker-build.yml` (trivy-pr-app-only job)
- Runs on all pull requests
- Extracts and scans only the Charon application binary
- Fails PR if CRITICAL or HIGH vulnerabilities found in application code
- Faster feedback loop for developers during code review
### Workflow Orchestration
The security scanning workflows use a coordinated orchestration pattern:
1. **Build Phase**: `docker-build.yml` builds the image and performs initial Trivy scan
2. **Verification Phase**: `supply-chain-verify.yml` triggers automatically via `workflow_run` after successful build
3. **Verification Timing**:
- On feature branches: Runs after docker-build completes on push events
- On pull requests: Runs after docker-build completes on PR synchronize events
- No delay or gaps: verification starts immediately after build success
4. **Weekly Maintenance**: `security-weekly-rebuild.yml` provides ongoing monitoring
This pattern ensures:
- Images are built before verification attempts to scan them
- No race conditions between build and verification
- Comprehensive coverage across all branches and PRs
- Efficient resource usage (verification only runs after successful builds)
### Manual Reviews
- Security code reviews for all major features
- Peer review of security-sensitive changes
- Third-party security audits (planned)
### Continuous Monitoring
- GitHub Dependabot alerts
- Weekly security scans in CI/CD
- Community vulnerability reports
- Automated supply chain verification on every build
---
## Recently Resolved Vulnerabilities
Charon maintains transparency about security issues and their resolution. Below is a record of recently patched vulnerabilities.
### CVE-2025-68156 (expr-lang/expr ReDoS)
- **Severity**: HIGH (CVSS 7.5)
- **Component**: expr-lang/expr (used by CrowdSec for expression evaluation)
- **Vulnerability**: Regular Expression Denial of Service (ReDoS)
- **Description**: Malicious regular expressions in CrowdSec scenarios or parsers could cause CPU exhaustion and service degradation through exponential backtracking in vulnerable regex patterns.
- **Fixed Version**: expr-lang/expr v1.17.7
- **Resolution Date**: January 11, 2026
- **Remediation**: Upgraded CrowdSec to build from source with patched expr-lang/expr v1.17.7
- **Verification**:
- Binary inspection: `go version -m ./cscli` confirms v1.17.7 in compiled artifacts
- Container scan: Trivy reports 0 HIGH/CRITICAL vulnerabilities in application code
- Runtime testing: CrowdSec scenarios and parsers load successfully with patched library
- **Impact**: No known exploits in Charon deployments; preventive upgrade completed
- **Status**: ✅ **PATCHED** — Verified in all release artifacts
- **Technical Details**: See [CrowdSec Source Build Documentation](docs/plans/crowdsec_source_build.md)
---
## Known Security Considerations
### Debian Base Image CVEs (2026-02-04) — TEMPORARY
**Status**: ⚠️ 7 HIGH severity CVEs in Debian Trixie base image. **Alpine migration in progress.**
**Background**: Charon migrated from Alpine to Debian because of CVE-2025-60876 (a busybox heap overflow). Debian's CVE posture is now worse, with no fixes available, so we are reverting to Alpine now that CVE-2025-60876 is patched there.
**Affected Packages**:
- **libc6/libc-bin** (glibc): CVE-2026-0861 (CVSS 8.4), CVE-2025-15281, CVE-2026-0915
- **libtasn1-6**: CVE-2025-13151 (CVSS 7.5)
- **libtiff**: 2 additional HIGH CVEs
**Fix Status**: ❌ No fixes available from Debian Security Team
**Risk Assessment**: 🟢 **LOW actual risk**
- CVEs affect system libraries, NOT Charon application code
- Container isolation limits exploit surface area
- No direct exploit paths identified in Charon's usage patterns
- Network ingress filtered through Caddy proxy
**Mitigation**: Alpine base image migration
- **Spec**: [`docs/plans/alpine_migration_spec.md`](docs/plans/alpine_migration_spec.md)
- **Security Advisory**: [`docs/security/advisory_2026-02-04_debian_cves_temporary.md`](docs/security/advisory_2026-02-04_debian_cves_temporary.md)
- **Timeline**: 2-3 weeks (target completion: March 5, 2026)
- **Expected Outcome**: 100% CVE reduction (7 HIGH → 0)
**Review Date**: 2026-02-11 (Phase 1 Alpine CVE verification)
**Details**: See [VULNERABILITY_ACCEPTANCE.md](docs/security/VULNERABILITY_ACCEPTANCE.md) for complete risk assessment and monitoring plan.
### Third-Party Dependencies
**CrowdSec Binaries**: As of December 2025, the CrowdSec binaries shipped with Charon contain 4 HIGH-severity CVEs in the Go standard library (CVE-2025-58183, CVE-2025-58186, CVE-2025-58187, CVE-2025-61729). These are upstream issues in Go 1.25.1 and will be resolved when CrowdSec releases binaries built with Go 1.26.0 or later.
**Impact**: Low. These vulnerabilities are in CrowdSec's third-party binaries, not in Charon's application code. They affect HTTP/2, TLS certificate handling, and archive parsing—areas not directly exposed to attackers through Charon's interface.
**Mitigation**: Monitor CrowdSec releases for updated binaries. Charon's own application code has no known vulnerabilities.
---
## Security Hall of Fame
We recognize security researchers who help improve Charon:
@@ -525,19 +633,4 @@ We recognize security researchers who help improve Charon:
---
## Security Contact
- **GitHub Security Advisories**: <https://github.com/Wikid82/charon/security/advisories>
- **GitHub Discussions**: <https://github.com/Wikid82/charon/discussions>
- **GitHub Issues** (non-security): <https://github.com/Wikid82/charon/issues>
---
## License
This security policy is part of the Charon project, licensed under the MIT License.
---
**Version**: 1.2
**Last Updated**: 2026-03-24

View File

@@ -24,8 +24,10 @@ Example: `0.1.0-alpha`, `1.0.0-beta.1`, `2.0.0-rc.2`
1. **Create and push a release tag**:
```bash
git tag -a v1.0.0 -m "Release v1.0.0"
git push origin v1.0.0
```
2. **GitHub Actions automatically**:
@@ -51,10 +53,12 @@ Use it only when you need local/version-file parity checks:
echo "1.0.0" > .version
```
1. **Validate `.version` matches the latest tag**:
```bash
bash scripts/check-version-match-tag.sh
```
### Deterministic Rollout Verification Gates (Mandatory)

View File

@@ -10,14 +10,14 @@ require (
github.com/golang-jwt/jwt/v5 v5.3.1
github.com/google/uuid v1.6.0
github.com/gorilla/websocket v1.5.3
github.com/mattn/go-sqlite3 v1.14.34
github.com/mattn/go-sqlite3 v1.14.37
github.com/oschwald/geoip2-golang/v2 v2.1.0
github.com/prometheus/client_golang v1.23.2
github.com/robfig/cron/v3 v3.0.1
github.com/sirupsen/logrus v1.9.4
github.com/stretchr/testify v1.11.1
golang.org/x/crypto v0.48.0
golang.org/x/net v0.51.0
golang.org/x/crypto v0.49.0
golang.org/x/net v0.52.0
golang.org/x/text v0.35.0
golang.org/x/time v0.15.0
gopkg.in/natefinch/lumberjack.v2 v2.2.1
@@ -28,7 +28,7 @@ require (
require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/bytedance/gopkg v0.1.3 // indirect
github.com/bytedance/gopkg v0.1.4 // indirect
github.com/bytedance/sonic v1.15.0 // indirect
github.com/bytedance/sonic/loader v0.5.0 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
@@ -50,7 +50,7 @@ require (
github.com/go-playground/locales v0.14.1 // indirect
github.com/go-playground/universal-translator v0.18.1 // indirect
github.com/go-playground/validator/v10 v10.30.1 // indirect
github.com/goccy/go-json v0.10.5 // indirect
github.com/goccy/go-json v0.10.6 // indirect
github.com/goccy/go-yaml v1.19.2 // indirect
github.com/jinzhu/inflection v1.0.0 // indirect
github.com/jinzhu/now v1.1.5 // indirect
@@ -64,13 +64,13 @@ require (
github.com/moby/term v0.5.2 // indirect
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
github.com/modern-go/reflect2 v1.0.2 // indirect
github.com/morikuni/aec v1.0.0 // indirect
github.com/morikuni/aec v1.1.0 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/ncruces/go-strftime v1.0.0 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/oschwald/maxminddb-golang/v2 v2.1.1 // indirect
github.com/pelletier/go-toml/v2 v2.2.4 // indirect
github.com/pelletier/go-toml/v2 v2.3.0 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
@@ -79,24 +79,25 @@ require (
github.com/quic-go/qpack v0.6.0 // indirect
github.com/quic-go/quic-go v0.59.0 // indirect
github.com/remyoudompheng/bigfft v0.0.0-20230129092748-24d4a6f8daec // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/stretchr/objx v0.5.3 // indirect
github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
github.com/ugorji/go/codec v1.3.1 // indirect
go.mongodb.org/mongo-driver/v2 v2.5.0 // indirect
go.opentelemetry.io/auto/sdk v1.2.1 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0 // indirect
go.opentelemetry.io/otel v1.42.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 // indirect
go.opentelemetry.io/otel/metric v1.42.0 // indirect
go.opentelemetry.io/otel/trace v1.42.0 // indirect
go.yaml.in/yaml/v2 v2.4.4 // indirect
golang.org/x/arch v0.25.0 // indirect
golang.org/x/sys v0.42.0 // indirect
google.golang.org/grpc v1.79.3 // indirect
google.golang.org/protobuf v1.36.11 // indirect
gopkg.in/yaml.v3 v3.0.1 // indirect
gotest.tools/v3 v3.5.2 // indirect
modernc.org/libc v1.70.0 // indirect
modernc.org/mathutil v1.7.1 // indirect
modernc.org/memory v1.11.0 // indirect
modernc.org/sqlite v1.46.1 // indirect
modernc.org/sqlite v1.47.0 // indirect
)

View File

@@ -4,8 +4,8 @@ github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERo
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bytedance/gopkg v0.1.3 h1:TPBSwH8RsouGCBcMBktLt1AymVo2TVsBVCY4b6TnZ/M=
github.com/bytedance/gopkg v0.1.3/go.mod h1:576VvJ+eJgyCzdjS+c4+77QF3p7ubbtiKARP3TxducM=
github.com/bytedance/gopkg v0.1.4 h1:oZnQwnX82KAIWb7033bEwtxvTqXcYMxDBaQxo5JJHWM=
github.com/bytedance/gopkg v0.1.4/go.mod h1:v1zWfPm21Fb+OsyXN2VAHdL6TBb2L88anLQgdyje6R4=
github.com/bytedance/sonic v1.15.0 h1:/PXeWFaR5ElNcVE84U0dOHjiMHQOwNIx3K4ymzh/uSE=
github.com/bytedance/sonic v1.15.0/go.mod h1:tFkWrPz0/CUCLEF4ri4UkHekCIcdnkqXw9VduqpJh0k=
github.com/bytedance/sonic/loader v0.5.0 h1:gXH3KVnatgY7loH5/TkeVyXPfESoqSBSBEiDd5VjlgE=
@@ -62,8 +62,8 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.30.1 h1:f3zDSN/zOma+w6+1Wswgd9fLkdwy06ntQJp0BBvFG0w=
github.com/go-playground/validator/v10 v10.30.1/go.mod h1:oSuBIQzuJxL//3MelwSLD5hc2Tu889bF0Idm9Dg26cM=
github.com/goccy/go-json v0.10.5 h1:Fq85nIqj+gXn/S5ahsiTlK3TmC85qgirsdTP/+DeaC4=
github.com/goccy/go-json v0.10.5/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/goccy/go-json v0.10.6 h1:p8HrPJzOakx/mn/bQtjgNjdTcN+/S6FcG2CTtQOrHVU=
github.com/goccy/go-json v0.10.6/go.mod h1:oq7eo15ShAhp70Anwd5lgX2pLfOS3QCiwU/PULtXL6M=
github.com/goccy/go-yaml v1.19.2 h1:PmFC1S6h8ljIz6gMRBopkjP1TVT7xuwrButHID66PoM=
github.com/goccy/go-yaml v1.19.2/go.mod h1:XBurs7gK8ATbW4ZPGKgcbrY1Br56PdM69F7LkFRi1kA=
github.com/golang-jwt/jwt/v5 v5.3.1 h1:kYf81DTWFe7t+1VvL7eS+jKFVWaUnK9cB1qbwn63YCY=
@@ -77,8 +77,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0 h1:HWRh5R2+9EifMyIHV7ZV+MIZqgz+PMpZ14Jynv3O2Zs=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.28.0/go.mod h1:JfhWUomR1baixubs02l85lZYYOm7LV6om4ceouMv45c=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
@@ -101,8 +101,8 @@ github.com/leodido/go-urn v1.4.0 h1:WT9HwE9SGECu3lg4d/dIA+jxlljEa1/ffXKmRjqdmIQ=
github.com/leodido/go-urn v1.4.0/go.mod h1:bvxc+MVxLKB4z00jd1z+Dvzr47oO32F/QSNjSBOlFxI=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.34 h1:3NtcvcUnFBPsuRcno8pUtupspG/GM+9nZ88zgJcp6Zk=
github.com/mattn/go-sqlite3 v1.14.34/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/mattn/go-sqlite3 v1.14.37 h1:3DOZp4cXis1cUIpCfXLtmlGolNLp2VEqhiB/PARNBIg=
github.com/mattn/go-sqlite3 v1.14.37/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
@@ -116,8 +116,8 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/morikuni/aec v1.1.0 h1:vBBl0pUnvi/Je71dsRrhMBtreIqNMYErSAbEeb8jrXQ=
github.com/morikuni/aec v1.1.0/go.mod h1:xDRgiq/iw5l+zkao76YTKzKttOp2cwPEne25HDkJnBw=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/ncruces/go-strftime v1.0.0 h1:HMFp8mLCTPp341M/ZnA4qaf7ZlsbTc+miZjCLOFAw7w=
@@ -130,8 +130,8 @@ github.com/oschwald/geoip2-golang/v2 v2.1.0 h1:DjnLhNJu9WHwTrmoiQFvgmyJoczhdnm7L
github.com/oschwald/geoip2-golang/v2 v2.1.0/go.mod h1:qdVmcPgrTJ4q2eP9tHq/yldMTdp2VMr33uVdFbHBiBc=
github.com/oschwald/maxminddb-golang/v2 v2.1.1 h1:lA8FH0oOrM4u7mLvowq8IT6a3Q/qEnqRzLQn9eH5ojc=
github.com/oschwald/maxminddb-golang/v2 v2.1.1/go.mod h1:PLdx6PR+siSIoXqqy7C7r3SB3KZnhxWr1Dp6g0Hacl8=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pelletier/go-toml/v2 v2.3.0 h1:k59bC/lIZREW0/iVaQR8nDHxVq8OVlIzYCOJf421CaM=
github.com/pelletier/go-toml/v2 v2.3.0/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -159,8 +159,9 @@ github.com/sirupsen/logrus v1.9.4/go.mod h1:ftWc9WdOfJ0a92nsE2jF5u5ZwH8Bv2zdeOC4
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/objx v0.5.3 h1:jmXUvGomnU1o3W/V5h2VEradbpJDwGrzugQQvL0POH4=
github.com/stretchr/objx v0.5.3/go.mod h1:rDQraq+vQZU7Fde9LOZLr8Tax6zZvy4kuNKF+QYS+U0=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
@@ -180,10 +181,10 @@ go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0 h1:Oyrsyzu
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.67.0/go.mod h1:C2NGBr+kAB4bk3xtMXfZ94gqFDtg/GkI7e9zqGh5Beg=
go.opentelemetry.io/otel v1.42.0 h1:lSQGzTgVR3+sgJDAU/7/ZMjN9Z+vUip7leaqBKy4sho=
go.opentelemetry.io/otel v1.42.0/go.mod h1:lJNsdRMxCUIWuMlVJWzecSMuNjE7dOYyWlqOXWkdqCc=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0 h1:THuZiwpQZuHPul65w4WcwEnkX2QIuMT+UFoOrygtoJw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.42.0/go.mod h1:J2pvYM5NGHofZ2/Ru6zw/TNWnEQp5crgyDeSrYpXkAw=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0 h1:uLXP+3mghfMf7XmV4PkGfFhFKuNWoCvvx5wP/wOXo0o=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.42.0/go.mod h1:v0Tj04armyT59mnURNUJf7RCKcKzq+lgJs6QSjHjaTc=
go.opentelemetry.io/otel/metric v1.42.0 h1:2jXG+3oZLNXEPfNmnpxKDeZsFI5o4J+nz6xUlaFdF/4=
go.opentelemetry.io/otel/metric v1.42.0/go.mod h1:RlUN/7vTU7Ao/diDkEpQpnz3/92J9ko05BIwxYa2SSI=
go.opentelemetry.io/otel/sdk v1.42.0 h1:LyC8+jqk6UJwdrI/8VydAq/hvkFKNHZVIWuslJXYsDo=
@@ -192,8 +193,8 @@ go.opentelemetry.io/otel/sdk/metric v1.42.0 h1:D/1QR46Clz6ajyZ3G8SgNlTJKBdGp84q9
go.opentelemetry.io/otel/sdk/metric v1.42.0/go.mod h1:Ua6AAlDKdZ7tdvaQKfSmnFTdHx37+J4ba8MwVCYM5hc=
go.opentelemetry.io/otel/trace v1.42.0 h1:OUCgIPt+mzOnaUTpOQcBiM/PLQ/Op7oq6g4LenLmOYY=
go.opentelemetry.io/otel/trace v1.42.0/go.mod h1:f3K9S+IFqnumBkKhRJMeaZeNk9epyhnCmQh/EysQCdc=
go.opentelemetry.io/proto/otlp v1.7.1 h1:gTOMpGDb0WTBOP8JaO72iL3auEZhVmAQg4ipjOVAtj4=
go.opentelemetry.io/proto/otlp v1.7.1/go.mod h1:b2rVh6rfI/s2pHWNlB7ILJcRALpcNDzKhACevjI+ZnE=
go.opentelemetry.io/proto/otlp v1.9.0 h1:l706jCMITVouPOqEnii2fIAuO3IVGBRPV5ICjceRb/A=
go.opentelemetry.io/proto/otlp v1.9.0/go.mod h1:xE+Cx5E/eEHw+ISFkwPLwCZefwVjY+pqKg1qcK03+/4=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.6.0 h1:hyF9dfmbgIX5EfOdasqLsWD6xqpNZlXblLB/Dbnwv3Y=
@@ -202,12 +203,12 @@ go.yaml.in/yaml/v2 v2.4.4 h1:tuyd0P+2Ont/d6e2rl3be67goVK4R6deVxCUX5vyPaQ=
go.yaml.in/yaml/v2 v2.4.4/go.mod h1:gMZqIpDtDqOfM0uNfy0SkpRhvUryYH0Z6wdMYcacYXQ=
golang.org/x/arch v0.25.0 h1:qnk6Ksugpi5Bz32947rkUgDt9/s5qvqDPl/gBKdMJLE=
golang.org/x/arch v0.25.0/go.mod h1:0X+GdSIP+kL5wPmpK7sdkEVTt2XoYP0cSjQSbZBwOi8=
golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
golang.org/x/crypto v0.49.0 h1:+Ng2ULVvLHnJ/ZFEq4KdcDd/cfjrrjjNSXNzxg0Y4U4=
golang.org/x/crypto v0.49.0/go.mod h1:ErX4dUh2UM+CFYiXZRTcMpEcN8b/1gxEuv3nODoYtCA=
golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
golang.org/x/mod v0.33.0/go.mod h1:swjeQEj+6r7fODbD2cqrnje9PnziFuw4bmLbBZFrQ5w=
golang.org/x/net v0.51.0 h1:94R/GTO7mt3/4wIKpcR5gkGmRLOuE/2hNGeWq/GBIFo=
golang.org/x/net v0.51.0/go.mod h1:aamm+2QF5ogm02fjy5Bb7CQ0WMt1/WVM7FtyaTLlA9Y=
golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/sync v0.20.0 h1:e0PTpb7pjO8GAtTs2dQ6jYa5BWYlMuX047Dco/pItO4=
golang.org/x/sync v0.20.0/go.mod h1:9xrNwdLfx4jkKbNva9FpL6vEN7evnE43NNNJQ2LF3+0=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@@ -219,12 +220,12 @@ golang.org/x/time v0.15.0 h1:bbrp8t3bGUeFOx08pvsMYRTCVSMk89u4tKbNOZbp88U=
golang.org/x/time v0.15.0/go.mod h1:Y4YMaQmXwGQZoFaVFk4YpCt4FLQMYKZe9oeV/f4MSno=
golang.org/x/tools v0.42.0 h1:uNgphsn75Tdz5Ji2q36v/nsFSfR/9BRFvqhGBaJGd5k=
golang.org/x/tools v0.42.0/go.mod h1:Ma6lCIwGZvHK6XtgbswSoWroEkhugApmsXyrUmBhfr0=
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 h1:BIRfGDEjiHRrk0QKZe3Xv2ieMhtgRGeLcZQ0mIVn4EY=
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5/go.mod h1:j3QtIyytwqGr1JUDtYXwtMXWPKsEa5LtzIFN1Wn5WvE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 h1:eaY8u2EuxbRv7c3NiGK0/NedzVsCcV6hDuU5qPX5EGE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5/go.mod h1:M4/wBTSeyLxupu3W3tJtOgB14jILAS/XWPSSa3TAlJc=
google.golang.org/grpc v1.75.0 h1:+TW+dqTd2Biwe6KKfhE5JpiYIBWq865PhKGSXiivqt4=
google.golang.org/grpc v1.75.0/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57 h1:JLQynH/LBHfCTSbDWl+py8C+Rg/k1OVH3xfcaiANuF0=
google.golang.org/genproto/googleapis/api v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:kSJwQxqmFXeo79zOmbrALdflXQeAYcUbgS7PbpMknCY=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57 h1:mWPCjDEyshlQYzBpMNHaEof6UX1PmHcaUODUywQ0uac=
google.golang.org/genproto/googleapis/rpc v0.0.0-20260209200024-4cfbd4190f57/go.mod h1:j9x/tPzZkyxcgEFkiKEEGxfvyumM01BEtsW8xzOahRQ=
google.golang.org/grpc v1.79.3 h1:sybAEdRIEtvcD68Gx7dmnwjZKlyfuc61Dyo9pGXXkKE=
google.golang.org/grpc v1.79.3/go.mod h1:KmT0Kjez+0dde/v2j9vzwoAScgEPx/Bw1CYChhHLrHQ=
google.golang.org/protobuf v1.36.11 h1:fV6ZwhNocDyBLK0dj+fg8ektcVegBBuEolpbTQyBNVE=
google.golang.org/protobuf v1.36.11/go.mod h1:HTf+CrKn2C3g5S8VImy6tdcUvCska2kB7j23XfzDpco=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
@@ -263,8 +264,8 @@ modernc.org/opt v0.1.4 h1:2kNGMRiUjrp4LcaPuLY2PzUfqM/w9N23quVwhKt5Qm8=
modernc.org/opt v0.1.4/go.mod h1:03fq9lsNfvkYSfxrfUhZCWPk1lm4cq4N+Bh//bEtgns=
modernc.org/sortutil v1.2.1 h1:+xyoGf15mM3NMlPDnFqrteY07klSFxLElE2PVuWIJ7w=
modernc.org/sortutil v1.2.1/go.mod h1:7ZI3a3REbai7gzCLcotuw9AC4VZVpYMjDzETGsSMqJE=
modernc.org/sqlite v1.46.1 h1:eFJ2ShBLIEnUWlLy12raN0Z1plqmFX9Qe3rjQTKt6sU=
modernc.org/sqlite v1.46.1/go.mod h1:CzbrU2lSB1DKUusvwGz7rqEKIq+NUd8GWuBBZDs9/nA=
modernc.org/sqlite v1.47.0 h1:R1XyaNpoW4Et9yly+I2EeX7pBza/w+pmYee/0HJDyKk=
modernc.org/sqlite v1.47.0/go.mod h1:hWjRO6Tj/5Ik8ieqxQybiEOUXy0NJFNp2tpvVpKlvig=
modernc.org/strutil v1.2.1 h1:UneZBkQA+DX2Rp35KcM69cSsNES9ly8mQWD71HKlOA0=
modernc.org/strutil v1.2.1/go.mod h1:EHkiggD70koQxjVdSBM3JKM7k6L0FbGE5eymy9i3B9A=
modernc.org/token v1.1.0 h1:Xl7Ap9dKaEs5kLoOQeQmPWevfnk/DM5qcLcYlA8ys6Y=

View File

@@ -126,19 +126,16 @@ func isLocalRequest(c *gin.Context) bool {
}
// setSecureCookie sets an auth cookie with security best practices
// - HttpOnly: prevents JavaScript access (XSS protection)
// - Secure: true for HTTPS; false for local/private network HTTP requests
// - SameSite: Lax for any local/private-network request (regardless of scheme),
// Strict otherwise (public HTTPS only)
// - HttpOnly: prevents JavaScript access (XSS protection)
// - Secure: always true (all major browsers honour Secure on localhost HTTP;
// HTTP-on-private-IP without TLS is an unsupported deployment)
// - SameSite: Lax for any local/private-network request (regardless of scheme),
// Strict otherwise (public HTTPS only)
func setSecureCookie(c *gin.Context, name, value string, maxAge int) {
scheme := requestScheme(c)
secure := true
sameSite := http.SameSiteStrictMode
if scheme != "https" {
sameSite = http.SameSiteLaxMode
if isLocalRequest(c) {
secure = false
}
}
if isLocalRequest(c) {
@@ -149,14 +146,13 @@ func setSecureCookie(c *gin.Context, name, value string, maxAge int) {
domain := ""
c.SetSameSite(sameSite)
// secure is intentionally false for local/private network HTTP requests; always true for external or HTTPS requests.
c.SetCookie( // codeql[go/cookie-secure-not-set]
c.SetCookie(
name, // name
value, // value
maxAge, // maxAge in seconds
"/", // path
domain, // domain (empty = current host)
secure, // secure
true, // secure
true, // httpOnly (no JS access)
)
}

View File

@@ -112,7 +112,7 @@ func TestSetSecureCookie_HTTP_Loopback_Insecure(t *testing.T) {
cookies := recorder.Result().Cookies()
require.Len(t, cookies, 1)
cookie := cookies[0]
assert.False(t, cookie.Secure)
assert.True(t, cookie.Secure)
assert.Equal(t, http.SameSiteLaxMode, cookie.SameSite)
}
@@ -216,7 +216,7 @@ func TestSetSecureCookie_HTTP_PrivateIP_Insecure(t *testing.T) {
cookies := recorder.Result().Cookies()
require.Len(t, cookies, 1)
cookie := cookies[0]
assert.False(t, cookie.Secure)
assert.True(t, cookie.Secure)
assert.Equal(t, http.SameSiteLaxMode, cookie.SameSite)
}
@@ -234,7 +234,7 @@ func TestSetSecureCookie_HTTP_10Network_Insecure(t *testing.T) {
cookies := recorder.Result().Cookies()
require.Len(t, cookies, 1)
cookie := cookies[0]
assert.False(t, cookie.Secure)
assert.True(t, cookie.Secure)
assert.Equal(t, http.SameSiteLaxMode, cookie.SameSite)
}
@@ -252,7 +252,7 @@ func TestSetSecureCookie_HTTP_172Network_Insecure(t *testing.T) {
cookies := recorder.Result().Cookies()
require.Len(t, cookies, 1)
cookie := cookies[0]
assert.False(t, cookie.Secure)
assert.True(t, cookie.Secure)
assert.Equal(t, http.SameSiteLaxMode, cookie.SameSite)
}
@@ -288,7 +288,7 @@ func TestSetSecureCookie_HTTP_IPv6ULA_Insecure(t *testing.T) {
cookies := recorder.Result().Cookies()
require.Len(t, cookies, 1)
cookie := cookies[0]
assert.False(t, cookie.Secure)
assert.True(t, cookie.Secure)
assert.Equal(t, http.SameSiteLaxMode, cookie.SameSite)
}
@@ -439,6 +439,7 @@ func TestClearSecureCookie(t *testing.T) {
require.Len(t, cookies, 1)
assert.Equal(t, "auth_token", cookies[0].Name)
assert.Equal(t, -1, cookies[0].MaxAge)
assert.True(t, cookies[0].Secure)
}
func TestAuthHandler_Login_Errors(t *testing.T) {

View File

@@ -699,6 +699,124 @@ func TestDeleteCertificate_DiskSpaceCheckError(t *testing.T) {
}
}
// Test that an expired Let's Encrypt certificate not in use can be deleted.
// The backend has no provider-based restrictions; deletion policy is frontend-only.
func TestDeleteCertificate_ExpiredLetsEncrypt_NotInUse(t *testing.T) {
dbPath := t.TempDir() + "/cert_expired_le.db"
db, err := gorm.Open(sqlite.Open(fmt.Sprintf("file:%s?_journal_mode=WAL&_busy_timeout=5000&_foreign_keys=1", dbPath)), &gorm.Config{})
if err != nil {
t.Fatalf("failed to open db: %v", err)
}
sqlDB, err := db.DB()
if err != nil {
t.Fatalf("failed to access sql db: %v", err)
}
sqlDB.SetMaxOpenConns(1)
sqlDB.SetMaxIdleConns(1)
if err = db.AutoMigrate(&models.SSLCertificate{}, &models.ProxyHost{}); err != nil {
t.Fatalf("failed to migrate: %v", err)
}
expired := time.Now().Add(-24 * time.Hour)
cert := models.SSLCertificate{
UUID: "expired-le-cert",
Name: "expired-le",
Provider: "letsencrypt",
Domains: "expired.example.com",
ExpiresAt: &expired,
}
if err = db.Create(&cert).Error; err != nil {
t.Fatalf("failed to create cert: %v", err)
}
gin.SetMode(gin.TestMode)
r := gin.New()
r.Use(mockAuthMiddleware())
svc := services.NewCertificateService("/tmp", db)
mockBS := &mockBackupService{
createFunc: func() (string, error) {
return "backup-expired-le.tar.gz", nil
},
}
h := NewCertificateHandler(svc, mockBS, nil)
r.DELETE("/api/certificates/:id", h.Delete)
req := httptest.NewRequest(http.MethodDelete, "/api/certificates/"+toStr(cert.ID), http.NoBody)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200 OK, got %d, body=%s", w.Code, w.Body.String())
}
var found models.SSLCertificate
if err = db.First(&found, cert.ID).Error; err == nil {
t.Fatal("expected expired LE certificate to be deleted")
}
}
// Test that a valid (non-expired) Let's Encrypt certificate not in use can be deleted.
// Confirms the backend imposes no provider-based restrictions on deletion.
func TestDeleteCertificate_ValidLetsEncrypt_NotInUse(t *testing.T) {
dbPath := t.TempDir() + "/cert_valid_le.db"
db, err := gorm.Open(sqlite.Open(fmt.Sprintf("file:%s?_journal_mode=WAL&_busy_timeout=5000&_foreign_keys=1", dbPath)), &gorm.Config{})
if err != nil {
t.Fatalf("failed to open db: %v", err)
}
sqlDB, err := db.DB()
if err != nil {
t.Fatalf("failed to access sql db: %v", err)
}
sqlDB.SetMaxOpenConns(1)
sqlDB.SetMaxIdleConns(1)
if err = db.AutoMigrate(&models.SSLCertificate{}, &models.ProxyHost{}); err != nil {
t.Fatalf("failed to migrate: %v", err)
}
future := time.Now().Add(30 * 24 * time.Hour)
cert := models.SSLCertificate{
UUID: "valid-le-cert",
Name: "valid-le",
Provider: "letsencrypt",
Domains: "valid.example.com",
ExpiresAt: &future,
}
if err = db.Create(&cert).Error; err != nil {
t.Fatalf("failed to create cert: %v", err)
}
gin.SetMode(gin.TestMode)
r := gin.New()
r.Use(mockAuthMiddleware())
svc := services.NewCertificateService("/tmp", db)
mockBS := &mockBackupService{
createFunc: func() (string, error) {
return "backup-valid-le.tar.gz", nil
},
}
h := NewCertificateHandler(svc, mockBS, nil)
r.DELETE("/api/certificates/:id", h.Delete)
req := httptest.NewRequest(http.MethodDelete, "/api/certificates/"+toStr(cert.ID), http.NoBody)
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
if w.Code != http.StatusOK {
t.Fatalf("expected 200 OK, got %d, body=%s", w.Code, w.Body.String())
}
var found models.SSLCertificate
if err = db.First(&found, cert.ID).Error; err == nil {
t.Fatal("expected valid LE certificate to be deleted")
}
}
// Test Delete when IsCertificateInUse fails
func TestDeleteCertificate_UsageCheckError(t *testing.T) {
db, err := gorm.Open(sqlite.Open(fmt.Sprintf("file:%s?mode=memory&cache=shared", t.Name())), &gorm.Config{})

View File

@@ -474,6 +474,61 @@ func TestClassifyProviderTestFailure_TLSHandshakeFailed(t *testing.T) {
assert.Contains(t, message, "TLS handshake failed")
}
func TestClassifyProviderTestFailure_SlackInvalidPayload(t *testing.T) {
code, category, message := classifyProviderTestFailure(errors.New("invalid_payload"))
assert.Equal(t, "PROVIDER_TEST_VALIDATION_FAILED", code)
assert.Equal(t, "validation", category)
assert.Contains(t, message, "Slack rejected the payload")
}
func TestClassifyProviderTestFailure_SlackMissingTextOrFallback(t *testing.T) {
code, category, message := classifyProviderTestFailure(errors.New("missing_text_or_fallback"))
assert.Equal(t, "PROVIDER_TEST_VALIDATION_FAILED", code)
assert.Equal(t, "validation", category)
assert.Contains(t, message, "Slack rejected the payload")
}
func TestClassifyProviderTestFailure_SlackNoService(t *testing.T) {
code, category, message := classifyProviderTestFailure(errors.New("no_service"))
assert.Equal(t, "PROVIDER_TEST_AUTH_REJECTED", code)
assert.Equal(t, "dispatch", category)
assert.Contains(t, message, "Slack webhook is revoked")
}
func TestNotificationProviderHandler_Test_RejectsSlackTokenInTestRequest(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupNotificationCoverageDB(t)
svc := services.NewNotificationService(db, nil)
h := NewNotificationProviderHandler(svc)
payload := map[string]any{
"type": "slack",
"url": "#alerts",
"token": "https://hooks.slack.com/services/T00/B00/secret",
}
body, _ := json.Marshal(payload)
w := httptest.NewRecorder()
c, _ := gin.CreateTestContext(w)
setAdminContext(c)
c.Set(string(trace.RequestIDKey), "req-slack-token-reject")
c.Request = httptest.NewRequest(http.MethodPost, "/providers/test", bytes.NewBuffer(body))
c.Request.Header.Set("Content-Type", "application/json")
h.Test(c)
assert.Equal(t, http.StatusBadRequest, w.Code)
var resp map[string]any
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &resp))
assert.Equal(t, "TOKEN_WRITE_ONLY", resp["code"])
assert.Equal(t, "validation", resp["category"])
assert.Equal(t, "Slack webhook URL is accepted only on provider create/update", resp["error"])
assert.NotContains(t, w.Body.String(), "hooks.slack.com")
}
func TestNotificationProviderHandler_Templates(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupNotificationCoverageDB(t)
@@ -948,14 +1003,14 @@ func TestNotificationProviderHandler_Update_UnsupportedType(t *testing.T) {
existing := models.NotificationProvider{
ID: "unsupported-type",
Name: "Custom Provider",
Type: "slack",
URL: "https://hooks.slack.com/test",
Type: "sms",
URL: "https://sms.example.com/test",
}
require.NoError(t, db.Create(&existing).Error)
payload := map[string]any{
"name": "Updated Slack Provider",
"url": "https://hooks.slack.com/updated",
"name": "Updated SMS Provider",
"url": "https://sms.example.com/updated",
}
body, _ := json.Marshal(payload)

View File

@@ -28,19 +28,22 @@ func TestBlocker3_CreateProviderRejectsNonDiscordWithSecurityEvents(t *testing.T
assert.NoError(t, err)
// Create handler
service := services.NewNotificationService(db, nil)
service := services.NewNotificationService(db, nil,
services.WithSlackURLValidator(func(string) error { return nil }),
)
handler := NewNotificationProviderHandler(service)
// Test cases: provider types with security events enabled
testCases := []struct {
name string
providerType string
token string
wantStatus int
}{
{"webhook", "webhook", http.StatusCreated},
{"gotify", "gotify", http.StatusCreated},
{"slack", "slack", http.StatusBadRequest},
{"email", "email", http.StatusCreated},
{"webhook", "webhook", "", http.StatusCreated},
{"gotify", "gotify", "", http.StatusCreated},
{"slack", "slack", "https://hooks.slack.com/services/T1234567890/B1234567890/XXXXXXXXXXXXXXXXXXXX", http.StatusCreated},
{"email", "email", "", http.StatusCreated},
}
for _, tc := range testCases {
@@ -50,6 +53,7 @@ func TestBlocker3_CreateProviderRejectsNonDiscordWithSecurityEvents(t *testing.T
"name": "Test Provider",
"type": tc.providerType,
"url": "https://example.com/webhook",
"token": tc.token,
"enabled": true,
"notify_security_waf_blocks": true, // Security event enabled
}

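Both rewritten test setups inject a no-op validator through `services.WithSlackURLValidator`, a functional option, so unit tests can create Slack providers without hitting live webhook-URL validation. A sketch of that pattern (type and field names here are assumptions; only the option name comes from the diff):

```go
package main

import "fmt"

// NotificationService holds a pluggable Slack URL validator so tests can
// swap in a no-op. Field names are illustrative.
type NotificationService struct {
	validateSlackURL func(string) error
}

// Option is a functional option for NewNotificationService.
type Option func(*NotificationService)

// WithSlackURLValidator overrides the default validator, mirroring the
// services.WithSlackURLValidator option used in the tests above.
func WithSlackURLValidator(v func(string) error) Option {
	return func(s *NotificationService) { s.validateSlackURL = v }
}

func NewNotificationService(opts ...Option) *NotificationService {
	s := &NotificationService{
		// Default: fail closed, standing in for real validation.
		validateSlackURL: func(string) error {
			return fmt.Errorf("live Slack URL validation unavailable")
		},
	}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	svc := NewNotificationService(WithSlackURLValidator(func(string) error { return nil }))
	fmt.Println(svc.validateSlackURL("https://hooks.slack.com/services/T00/B00/x") == nil)
}
```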
View File

@@ -24,21 +24,24 @@ func TestDiscordOnly_CreateRejectsNonDiscord(t *testing.T) {
require.NoError(t, err)
require.NoError(t, db.AutoMigrate(&models.NotificationProvider{}, &models.Notification{}))
service := services.NewNotificationService(db, nil)
service := services.NewNotificationService(db, nil,
services.WithSlackURLValidator(func(string) error { return nil }),
)
handler := NewNotificationProviderHandler(service)
testCases := []struct {
name string
providerType string
token string
wantStatus int
wantCode string
}{
{"webhook", "webhook", http.StatusCreated, ""},
{"gotify", "gotify", http.StatusCreated, ""},
{"slack", "slack", http.StatusBadRequest, "UNSUPPORTED_PROVIDER_TYPE"},
{"telegram", "telegram", http.StatusCreated, ""},
{"generic", "generic", http.StatusBadRequest, "UNSUPPORTED_PROVIDER_TYPE"},
{"email", "email", http.StatusCreated, ""},
{"webhook", "webhook", "", http.StatusCreated, ""},
{"gotify", "gotify", "", http.StatusCreated, ""},
{"slack", "slack", "https://hooks.slack.com/services/T1234567890/B1234567890/XXXXXXXXXXXXXXXXXXXX", http.StatusCreated, ""},
{"telegram", "telegram", "", http.StatusCreated, ""},
{"generic", "generic", "", http.StatusBadRequest, "UNSUPPORTED_PROVIDER_TYPE"},
{"email", "email", "", http.StatusCreated, ""},
}
for _, tc := range testCases {
@@ -47,6 +50,7 @@ func TestDiscordOnly_CreateRejectsNonDiscord(t *testing.T) {
"name": "Test Provider",
"type": tc.providerType,
"url": "https://example.com/webhook",
"token": tc.token,
"enabled": true,
"notify_proxy_hosts": true,
}
@@ -363,7 +367,7 @@ func TestDiscordOnly_ErrorCodes(t *testing.T) {
requestFunc: func(id string) (*http.Request, gin.Params) {
payload := map[string]interface{}{
"name": "Test",
"type": "slack",
"type": "sms",
"url": "https://example.com",
}
body, _ := json.Marshal(payload)

View File

@@ -136,6 +136,16 @@ func classifyProviderTestFailure(err error) (code string, category string, messa
return "PROVIDER_TEST_UNREACHABLE", "dispatch", "Could not reach provider endpoint. Verify URL, DNS, and network connectivity"
}
if strings.Contains(errText, "invalid_payload") ||
strings.Contains(errText, "missing_text_or_fallback") {
return "PROVIDER_TEST_VALIDATION_FAILED", "validation",
"Slack rejected the payload. Ensure your template includes a 'text' or 'blocks' field"
}
if strings.Contains(errText, "no_service") {
return "PROVIDER_TEST_AUTH_REJECTED", "dispatch",
"Slack webhook is revoked or the app is disabled. Create a new webhook"
}
return "PROVIDER_TEST_FAILED", "dispatch", "Provider test failed"
}
@@ -172,7 +182,7 @@ func (h *NotificationProviderHandler) Create(c *gin.Context) {
}
providerType := strings.ToLower(strings.TrimSpace(req.Type))
if providerType != "discord" && providerType != "gotify" && providerType != "webhook" && providerType != "email" && providerType != "telegram" {
if providerType != "discord" && providerType != "gotify" && providerType != "webhook" && providerType != "email" && providerType != "telegram" && providerType != "slack" && providerType != "pushover" && providerType != "ntfy" {
respondSanitizedProviderError(c, http.StatusBadRequest, "UNSUPPORTED_PROVIDER_TYPE", "validation", "Unsupported notification provider type")
return
}
@@ -232,12 +242,12 @@ func (h *NotificationProviderHandler) Update(c *gin.Context) {
}
providerType := strings.ToLower(strings.TrimSpace(existing.Type))
if providerType != "discord" && providerType != "gotify" && providerType != "webhook" && providerType != "email" && providerType != "telegram" {
if providerType != "discord" && providerType != "gotify" && providerType != "webhook" && providerType != "email" && providerType != "telegram" && providerType != "slack" && providerType != "pushover" && providerType != "ntfy" {
respondSanitizedProviderError(c, http.StatusBadRequest, "UNSUPPORTED_PROVIDER_TYPE", "validation", "Unsupported notification provider type")
return
}
if (providerType == "gotify" || providerType == "telegram") && strings.TrimSpace(req.Token) == "" {
if (providerType == "gotify" || providerType == "telegram" || providerType == "slack" || providerType == "pushover" || providerType == "ntfy") && strings.TrimSpace(req.Token) == "" {
// Keep existing token if update payload omits token
req.Token = existing.Token
}
@@ -278,7 +288,8 @@ func isProviderValidationError(err error) bool {
strings.Contains(errMsg, "rendered template") ||
strings.Contains(errMsg, "failed to parse template") ||
strings.Contains(errMsg, "failed to render template") ||
strings.Contains(errMsg, "invalid Discord webhook URL")
strings.Contains(errMsg, "invalid Discord webhook URL") ||
strings.Contains(errMsg, "invalid Slack webhook URL")
}
func (h *NotificationProviderHandler) Delete(c *gin.Context) {
@@ -310,6 +321,21 @@ func (h *NotificationProviderHandler) Test(c *gin.Context) {
return
}
if providerType == "slack" && strings.TrimSpace(req.Token) != "" {
respondSanitizedProviderError(c, http.StatusBadRequest, "TOKEN_WRITE_ONLY", "validation", "Slack webhook URL is accepted only on provider create/update")
return
}
if providerType == "telegram" && strings.TrimSpace(req.Token) != "" {
respondSanitizedProviderError(c, http.StatusBadRequest, "TOKEN_WRITE_ONLY", "validation", "Telegram bot token is accepted only on provider create/update")
return
}
if providerType == "pushover" && strings.TrimSpace(req.Token) != "" {
respondSanitizedProviderError(c, http.StatusBadRequest, "TOKEN_WRITE_ONLY", "validation", "Pushover API token is accepted only on provider create/update")
return
}
// Email providers use global SMTP + recipients from the URL field; they don't require a saved provider ID.
if providerType == "email" {
provider := models.NotificationProvider{
@@ -343,7 +369,7 @@ func (h *NotificationProviderHandler) Test(c *gin.Context) {
return
}
if strings.TrimSpace(provider.URL) == "" {
if providerType != "slack" && strings.TrimSpace(provider.URL) == "" {
respondSanitizedProviderError(c, http.StatusBadRequest, "PROVIDER_CONFIG_MISSING", "validation", "Trusted provider configuration is incomplete")
return
}
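The three per-provider guards added to `Test` all enforce the same write-only secret rule: a token supplied to the test endpoint is rejected with `TOKEN_WRITE_ONLY`, so secrets only ever enter through create/update and a saved provider is tested with its stored token. That rule can be sketched table-driven (the map and function names are illustrative; the handler uses explicit `if` blocks):

```go
package main

import (
	"fmt"
	"strings"
)

// writeOnlySecrets lists provider types whose test requests must not carry a
// secret, keyed to the human name used in the rejection message.
// Sketch of the TOKEN_WRITE_ONLY guards; not the handler's actual structure.
var writeOnlySecrets = map[string]string{
	"slack":    "Slack webhook URL",
	"telegram": "Telegram bot token",
	"pushover": "Pushover API token",
}

// writeOnlyTokenMessage returns a rejection message when a secret-bearing
// token is supplied to the test endpoint, or "" when the request is allowed.
func writeOnlyTokenMessage(providerType, token string) string {
	name, guarded := writeOnlySecrets[providerType]
	if guarded && strings.TrimSpace(token) != "" {
		return name + " is accepted only on provider create/update"
	}
	return ""
}

func main() {
	fmt.Println(writeOnlyTokenMessage("slack", "https://hooks.slack.com/services/T00/B00/x"))
	fmt.Println(writeOnlyTokenMessage("webhook", "anything") == "")
}
```

Note the matching relaxation at the bottom of the hunk: because Slack stores its webhook URL in the token field, the `PROVIDER_CONFIG_MISSING` check on an empty `provider.URL` is skipped for `slack`.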

View File

@@ -668,3 +668,35 @@ func TestNotificationProviderHandler_List_TelegramNeverExposesBotToken(t *testin
_, hasTokenField := raw[0]["token"]
assert.False(t, hasTokenField, "raw token field must not appear in JSON response")
}
func TestNotificationProviderHandler_Test_TelegramTokenRejected(t *testing.T) {
r, _ := setupNotificationProviderTest(t)
payload := map[string]any{
"type": "telegram",
"token": "bot123:TOKEN",
}
body, _ := json.Marshal(payload)
req, _ := http.NewRequest("POST", "/api/v1/notifications/providers/test", bytes.NewBuffer(body))
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
assert.Contains(t, w.Body.String(), "TOKEN_WRITE_ONLY")
}
func TestNotificationProviderHandler_Test_PushoverTokenRejected(t *testing.T) {
r, _ := setupNotificationProviderTest(t)
payload := map[string]any{
"type": "pushover",
"token": "app-token-abc",
}
body, _ := json.Marshal(payload)
req, _ := http.NewRequest("POST", "/api/v1/notifications/providers/test", bytes.NewBuffer(body))
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
assert.Contains(t, w.Body.String(), "TOKEN_WRITE_ONLY")
}

View File

@@ -236,10 +236,6 @@ func (h *ProxyHostHandler) resolveSecurityHeaderProfileReference(value any) (*ui
return nil, nil
}
if _, err := uuid.Parse(trimmed); err != nil {
return nil, parseErr
}
var profile models.SecurityHeaderProfile
if err := h.db.Select("id").Where("uuid = ?", trimmed).First(&profile).Error; err != nil {
if err == gorm.ErrRecordNotFound {
@@ -362,7 +358,7 @@ func (h *ProxyHostHandler) Create(c *gin.Context) {
if host.AdvancedConfig != "" {
var parsed any
if err := json.Unmarshal([]byte(host.AdvancedConfig), &parsed); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid advanced_config JSON: " + err.Error()})
c.JSON(http.StatusBadRequest, gin.H{"error": "advanced_config must be valid Caddy JSON (not Caddyfile syntax). See https://caddyserver.com/docs/json/ for the correct format."})
return
}
parsed = caddy.NormalizeAdvancedConfig(parsed)
@@ -590,7 +586,7 @@ func (h *ProxyHostHandler) Update(c *gin.Context) {
if v != "" && v != host.AdvancedConfig {
var parsed any
if err := json.Unmarshal([]byte(v), &parsed); err != nil {
c.JSON(http.StatusBadRequest, gin.H{"error": "invalid advanced_config JSON: " + err.Error()})
c.JSON(http.StatusBadRequest, gin.H{"error": "advanced_config must be valid Caddy JSON (not Caddyfile syntax). See https://caddyserver.com/docs/json/ for the correct format."})
return
}
parsed = caddy.NormalizeAdvancedConfig(parsed)

View File

@@ -1552,7 +1552,7 @@ func TestProxyHostUpdate_SecurityHeaderProfile_InvalidString(t *testing.T) {
var result map[string]any
require.NoError(t, json.Unmarshal(resp.Body.Bytes(), &result))
require.Contains(t, result["error"], "invalid security_header_profile_id")
require.Contains(t, result["error"], "security header profile not found")
}
// Test invalid float value (should fail gracefully)

View File

@@ -732,7 +732,49 @@ func TestProxyHostUpdate_SecurityHeaderProfileID_InvalidString(t *testing.T) {
var result map[string]any
require.NoError(t, json.Unmarshal(resp.Body.Bytes(), &result))
assert.Contains(t, result["error"], "invalid security_header_profile_id")
assert.Contains(t, result["error"], "security header profile not found")
}
// TestProxyHostUpdate_SecurityHeaderProfileID_PresetSlugUUID tests that a preset-style UUID
// slug (e.g. "preset-basic") resolves correctly to the numeric profile ID via a DB lookup,
// bypassing the uuid.Parse gate that would otherwise reject non-standard slug formats.
func TestProxyHostUpdate_SecurityHeaderProfileID_PresetSlugUUID(t *testing.T) {
t.Parallel()
router, db := setupUpdateTestRouter(t)
// Create a profile whose UUID mimics a preset slug (non-standard UUID format)
slugUUID := "preset-basic"
profile := models.SecurityHeaderProfile{
UUID: slugUUID,
Name: "Basic Security",
IsPreset: true,
SecurityScore: 65,
}
require.NoError(t, db.Create(&profile).Error)
host := createTestProxyHost(t, db, "preset-slug-test")
updateBody := map[string]any{
"name": "Test Host Updated",
"domain_names": "preset-slug-test.test.com",
"forward_scheme": "http",
"forward_host": "localhost",
"forward_port": 8080,
"security_header_profile_id": slugUUID,
}
body, _ := json.Marshal(updateBody)
req := httptest.NewRequest(http.MethodPut, "/api/v1/proxy-hosts/"+host.UUID, bytes.NewReader(body))
req.Header.Set("Content-Type", "application/json")
resp := httptest.NewRecorder()
router.ServeHTTP(resp, req)
require.Equal(t, http.StatusOK, resp.Code)
var updated models.ProxyHost
require.NoError(t, db.First(&updated, "uuid = ?", host.UUID).Error)
require.NotNil(t, updated.SecurityHeaderProfileID)
assert.Equal(t, profile.ID, *updated.SecurityHeaderProfileID)
}
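Removing the `uuid.Parse` gate (shown in the handler hunk above) means any non-numeric string, standard UUID or preset slug alike, falls through to the DB lookup by `uuid = ?`. The resolution order can be sketched as follows (an in-memory map stands in for the GORM query, and the numeric-ID shortcut is inferred from the `as_string` test case):

```go
package main

import (
	"fmt"
	"strconv"
)

// profilesByUUID stands in for the security_header_profiles table; keys may
// be standard UUIDs or preset-style slugs such as "preset-basic".
var profilesByUUID = map[string]uint{
	"preset-basic": 7,
}

// resolveProfileRef sketches the handler's resolution order: a numeric
// string is used as the primary key directly; anything else is looked up by
// its UUID column with no uuid.Parse pre-check, so slugs resolve too.
func resolveProfileRef(value string) (uint, bool) {
	if n, err := strconv.ParseUint(value, 10, 32); err == nil {
		return uint(n), true
	}
	id, ok := profilesByUUID[value]
	return id, ok
}

func main() {
	id, ok := resolveProfileRef("preset-basic")
	fmt.Println(id, ok)
	id, ok = resolveProfileRef("42")
	fmt.Println(id, ok)
}
```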
// TestProxyHostUpdate_SecurityHeaderProfileID_UnsupportedType tests that an unsupported type
@@ -820,6 +862,10 @@ func TestProxyHostUpdate_SecurityHeaderProfileID_ValidAssignment(t *testing.T) {
name: "as_string",
value: fmt.Sprintf("%d", profile.ID),
},
{
name: "as_uuid_string",
value: profile.UUID,
},
}
for _, tc := range testCases {

View File

@@ -224,7 +224,7 @@ func TestFinalBlocker3_SupportedProviderTypes_UnsupportedTypesIgnored(t *testing
db := SetupCompatibilityTestDB(t)
// Create ONLY unsupported providers
unsupportedTypes := []string{"pushover", "generic"}
unsupportedTypes := []string{"sms", "generic"}
for _, providerType := range unsupportedTypes {
provider := &models.NotificationProvider{

View File

@@ -114,7 +114,7 @@ func isSensitiveSettingKey(key string) bool {
type UpdateSettingRequest struct {
Key string `json:"key" binding:"required"`
Value string `json:"value" binding:"required"`
Value string `json:"value"`
Category string `json:"category"`
Type string `json:"type"`
}
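The dropped tag matters because Gin's `binding:"required"` (go-playground/validator underneath) treats a string field's zero value `""` as absent, so `value=""` would return 400 even though it is a legitimate payload. After unmarshalling, an omitted `"value"` and an explicit `""` are indistinguishable, which a stdlib sketch makes plain (manual check in place of Gin's validator):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// UpdateSettingRequest mirrors the handler struct after this change: only
// Key keeps a required constraint; Value may legitimately be "".
type UpdateSettingRequest struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

// validate shows why binding:"required" had to go on Value: once decoded,
// a missing "value" and an explicit "" are the same zero value, so any
// required check on Value would reject both. Key stays mandatory.
func validate(raw string) (UpdateSettingRequest, error) {
	var req UpdateSettingRequest
	if err := json.Unmarshal([]byte(raw), &req); err != nil {
		return req, err
	}
	if req.Key == "" {
		return req, fmt.Errorf("Key is required")
	}
	return req, nil
}

func main() {
	_, err := validate(`{"key":"security.crowdsec.enabled","value":""}`)
	fmt.Println(err == nil)
	_, err = validate(`{"value":"true"}`)
	fmt.Println(err != nil)
}
```

This is exactly the regression the later tests (`TestUpdateSetting_EmptyValueIsAccepted`, `TestUpdateSetting_MissingKeyRejected`) pin down.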

View File

@@ -438,6 +438,55 @@ func TestSettingsHandler_UpdateSetting_InvalidAdminWhitelist(t *testing.T) {
assert.Contains(t, w.Body.String(), "Invalid admin_whitelist")
}
func TestSettingsHandler_UpdateSetting_EmptyValueAccepted(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupSettingsTestDB(t)
handler := handlers.NewSettingsHandler(db)
router := newAdminRouter()
router.POST("/settings", handler.UpdateSetting)
payload := map[string]string{
"key": "some.setting",
"value": "",
}
body, _ := json.Marshal(payload)
w := httptest.NewRecorder()
req, _ := http.NewRequest("POST", "/settings", bytes.NewBuffer(body))
req.Header.Set("Content-Type", "application/json")
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusOK, w.Code)
var setting models.Setting
require.NoError(t, db.Where("key = ?", "some.setting").First(&setting).Error)
assert.Equal(t, "some.setting", setting.Key)
assert.Equal(t, "", setting.Value)
}
func TestSettingsHandler_UpdateSetting_MissingKeyRejected(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupSettingsTestDB(t)
handler := handlers.NewSettingsHandler(db)
router := newAdminRouter()
router.POST("/settings", handler.UpdateSetting)
payload := map[string]string{
"value": "some-value",
}
body, _ := json.Marshal(payload)
w := httptest.NewRecorder()
req, _ := http.NewRequest("POST", "/settings", bytes.NewBuffer(body))
req.Header.Set("Content-Type", "application/json")
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
assert.Contains(t, w.Body.String(), "Key")
}
func TestSettingsHandler_UpdateSetting_InvalidKeepaliveIdle(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupSettingsTestDB(t)
@@ -744,16 +793,27 @@ func TestSettingsHandler_Errors(t *testing.T) {
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
// Missing Key/Value
// Value omitted — allowed since binding:"required" was removed; empty string is a valid value
payload := map[string]string{
"key": "some_key",
// value missing
// value intentionally absent; defaults to empty string
}
body, _ := json.Marshal(payload)
req, _ = http.NewRequest("POST", "/settings", bytes.NewBuffer(body))
req.Header.Set("Content-Type", "application/json")
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusOK, w.Code)
// Missing key — key is still binding:"required" so this must return 400
payloadNoKey := map[string]string{
"value": "some_value",
}
bodyNoKey, _ := json.Marshal(payloadNoKey)
req, _ = http.NewRequest("POST", "/settings", bytes.NewBuffer(bodyNoKey))
req.Header.Set("Content-Type", "application/json")
w = httptest.NewRecorder()
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
}
@@ -1511,7 +1571,7 @@ func TestSettingsHandler_TestPublicURL_SSRFProtection(t *testing.T) {
url: "http://169.254.169.254",
expectedStatus: http.StatusOK,
expectedReachable: false,
errorContains: "private",
errorContains: "cloud metadata",
},
{
name: "blocks link-local",
@@ -1763,3 +1823,48 @@ func TestSettingsHandler_TestPublicURL_IPv6LocalhostBlocked(t *testing.T) {
assert.False(t, resp["reachable"].(bool))
// IPv6 loopback should be blocked
}
// TestUpdateSetting_EmptyValueIsAccepted guards the PR-1 fix: Value must NOT carry
// binding:"required". Gin treats "" as missing for string fields and returns 400 if
// the tag is present. Re-adding the tag would silently regress the CrowdSec enable
// flow (which sends value="" to clear the setting).
func TestUpdateSetting_EmptyValueIsAccepted(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupSettingsTestDB(t)
handler := handlers.NewSettingsHandler(db)
router := newAdminRouter()
router.POST("/settings", handler.UpdateSetting)
body := `{"key":"security.crowdsec.enabled","value":""}`
req, _ := http.NewRequest(http.MethodPost, "/settings", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusOK, w.Code, "empty Value must not trigger a 400 validation error")
var s models.Setting
require.NoError(t, db.Where("key = ?", "security.crowdsec.enabled").First(&s).Error)
assert.Equal(t, "", s.Value)
}
// TestUpdateSetting_MissingKeyRejected ensures binding:"required" was only removed
// from Value and not accidentally also from Key. A request with no "key" field must
// still return 400.
func TestUpdateSetting_MissingKeyRejected(t *testing.T) {
gin.SetMode(gin.TestMode)
db := setupSettingsTestDB(t)
handler := handlers.NewSettingsHandler(db)
router := newAdminRouter()
router.POST("/settings", handler.UpdateSetting)
body := `{"value":"true"}`
req, _ := http.NewRequest(http.MethodPost, "/settings", strings.NewReader(body))
req.Header.Set("Content-Type", "application/json")
w := httptest.NewRecorder()
router.ServeHTTP(w, req)
assert.Equal(t, http.StatusBadRequest, w.Code)
}

View File

@@ -127,6 +127,13 @@ func RegisterWithDeps(router *gin.Engine, db *gorm.DB, cfg config.Config, caddyM
}
migrateViewerToPassthrough(db)
// Seed the default SecurityConfig row on every startup (idempotent).
// Without this row, fresh installs cause GetStatus to return all-disabled zero values.
if _, err := models.SeedDefaultSecurityConfig(db); err != nil {
logger.Log().WithError(err).Warn("Failed to seed default SecurityConfig — continuing startup")
}
// Let's Encrypt certs are auto-managed by Caddy and should not be assigned via certificate_id
logger.Log().Info("Cleaning up invalid Let's Encrypt certificate associations...")
var hostsWithInvalidCerts []models.ProxyHost

View File

@@ -1322,3 +1322,29 @@ func TestMigrateViewerToPassthrough(t *testing.T) {
require.NoError(t, db.First(&updated, viewer.ID).Error)
assert.Equal(t, models.RolePassthrough, updated.Role)
}
func TestRegister_CleansLetsEncryptCertAssignments(t *testing.T) {
gin.SetMode(gin.TestMode)
router := gin.New()
db, err := gorm.Open(sqlite.Open("file::memory:?cache=shared&_test_lecleaner"), &gorm.Config{})
require.NoError(t, err)
// Pre-migrate just the two tables needed to seed test data before Register runs.
require.NoError(t, db.AutoMigrate(&models.SSLCertificate{}, &models.ProxyHost{}))
cert := models.SSLCertificate{Provider: "letsencrypt"}
require.NoError(t, db.Create(&cert).Error)
certID := cert.ID
host := models.ProxyHost{DomainNames: "test.example.com", CertificateID: &certID}
require.NoError(t, db.Create(&host).Error)
cfg := config.Config{JWTSecret: "test-secret"}
err = Register(router, db, cfg)
require.NoError(t, err)
var reloaded models.ProxyHost
require.NoError(t, db.First(&reloaded, host.ID).Error)
assert.Nil(t, reloaded.CertificateID, "letsencrypt cert assignment must be cleared")
}

View File

@@ -2,6 +2,7 @@ package tests
import (
"bytes"
"encoding/hex"
"encoding/json"
"fmt"
"net/http"
@@ -13,6 +14,7 @@ import (
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"golang.org/x/crypto/bcrypt"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
"gorm.io/gorm/logger"
@@ -21,7 +23,18 @@ import (
"github.com/Wikid82/charon/backend/internal/models"
)
// setupAuditTestDB creates a clean in-memory database for each test
// hashForTest returns a bcrypt hash using minimum cost for fast test setup.
// NEVER use this in production — use models.User.SetPassword instead.
func hashForTest(t *testing.T, password string) string {
t.Helper()
h, err := bcrypt.GenerateFromPassword([]byte(password), bcrypt.MinCost)
require.NoError(t, err)
return string(h)
}
// setupAuditTestDB creates a clean in-memory database for each test.
// MaxOpenConns(1) is required: without it, GORM's pool can open multiple
// connections to ":memory:", each receiving its own empty database.
func setupAuditTestDB(t *testing.T) *gorm.DB {
t.Helper()
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{
@@ -29,11 +42,23 @@ func setupAuditTestDB(t *testing.T) *gorm.DB {
})
require.NoError(t, err)
// Auto-migrate required models
sqlDB, err := db.DB()
require.NoError(t, err)
sqlDB.SetMaxOpenConns(1)
sqlDB.SetMaxIdleConns(1)
t.Cleanup(func() {
_ = sqlDB.Close()
})
// Auto-migrate required models (includes SecurityAudit so the
// background audit goroutine in SecurityService doesn't retry
// against a missing table).
err = db.AutoMigrate(
&models.User{},
&models.Setting{},
&models.ProxyHost{},
&models.SecurityAudit{},
)
require.NoError(t, err)
return db
@@ -43,14 +68,14 @@ func setupAuditTestDB(t *testing.T) *gorm.DB {
func createTestAdminUser(t *testing.T, db *gorm.DB) uint {
t.Helper()
admin := models.User{
UUID: "admin-uuid-1234",
Email: "admin@test.com",
Name: "Test Admin",
Role: models.RoleAdmin,
Enabled: true,
APIKey: "test-api-key",
UUID: "admin-uuid-1234",
Email: "admin@test.com",
Name: "Test Admin",
Role: models.RoleAdmin,
Enabled: true,
APIKey: "test-api-key",
PasswordHash: hashForTest(t, "adminpassword123"),
}
require.NoError(t, admin.SetPassword("adminpassword123"))
require.NoError(t, db.Create(&admin).Error)
return admin.ID
}
@@ -96,7 +121,7 @@ func TestInviteToken_MustBeUnguessable(t *testing.T) {
w := httptest.NewRecorder()
r.ServeHTTP(w, req)
require.Equal(t, http.StatusCreated, w.Code)
require.Equal(t, http.StatusCreated, w.Code, "invite endpoint failed; body: %s", w.Body.String())
var resp map[string]any
require.NoError(t, json.Unmarshal(w.Body.Bytes(), &resp))
@@ -104,15 +129,18 @@ func TestInviteToken_MustBeUnguessable(t *testing.T) {
var invitedUser models.User
require.NoError(t, db.Where("email = ?", "user@test.com").First(&invitedUser).Error)
token := invitedUser.InviteToken
require.NotEmpty(t, token)
require.NotEmpty(t, token, "invite token must not be empty")
// Token MUST be at least 32 chars (64 hex = 32 bytes = 256 bits)
assert.GreaterOrEqual(t, len(token), 64, "Invite token must be at least 64 hex chars (256 bits)")
// Token MUST be at least 32 bytes (64 hex chars = 256 bits of entropy)
require.GreaterOrEqual(t, len(token), 64, "invite token must be at least 64 hex chars (256 bits); got len=%d token=%q", len(token), token)
// Token must be hex
for _, c := range token {
assert.True(t, (c >= '0' && c <= '9') || (c >= 'a' && c <= 'f'), "Token must be hex encoded")
}
// Token must be valid hex (all characters in [0-9a-f]).
// hex.DecodeString accepts both cases, so check for lowercase explicitly:
// hex.EncodeToString (used by generateSecureToken) always emits lowercase,
// so uppercase would indicate a regression in the token-generation path.
_, err := hex.DecodeString(token)
require.NoError(t, err, "invite token must be valid hex; got %q", token)
require.Equal(t, strings.ToLower(token), token, "invite token must be lowercase hex (as produced by hex.EncodeToString); got %q", token)
}
func TestInviteToken_ExpiredCannotBeUsed(t *testing.T) {
@@ -156,11 +184,11 @@ func TestInviteToken_CannotBeReused(t *testing.T) {
Name: "Accepted User",
Role: models.RoleUser,
Enabled: true,
PasswordHash: hashForTest(t, "somepassword"),
InviteToken: "accepted-token-1234567890123456789012345678901",
InvitedAt: &invitedAt,
InviteStatus: "accepted",
}
require.NoError(t, user.SetPassword("somepassword"))
require.NoError(t, db.Create(&user).Error)
r := setupRouterWithAuth(db, adminID, "admin")
@@ -267,26 +295,26 @@ func TestUserEndpoints_RequireAdmin(t *testing.T) {
// Create regular user
user := models.User{
UUID: "user-uuid-1234",
Email: "user@test.com",
Name: "Regular User",
Role: models.RoleUser,
Enabled: true,
APIKey: "user-api-key-unique",
UUID: "user-uuid-1234",
Email: "user@test.com",
Name: "Regular User",
Role: models.RoleUser,
Enabled: true,
APIKey: "user-api-key-unique",
PasswordHash: hashForTest(t, "userpassword123"),
}
require.NoError(t, user.SetPassword("userpassword123"))
require.NoError(t, db.Create(&user).Error)
// Create a second user to test admin-only operations against a non-self target
otherUser := models.User{
UUID: "other-uuid-5678",
Email: "other@test.com",
Name: "Other User",
Role: models.RoleUser,
Enabled: true,
APIKey: "other-api-key-unique",
UUID: "other-uuid-5678",
Email: "other@test.com",
Name: "Other User",
Role: models.RoleUser,
Enabled: true,
APIKey: "other-api-key-unique",
PasswordHash: hashForTest(t, "otherpassword123"),
}
require.NoError(t, otherUser.SetPassword("otherpassword123"))
require.NoError(t, db.Create(&otherUser).Error)
// Router with regular user role
@@ -328,13 +356,13 @@ func TestSMTPEndpoints_RequireAdmin(t *testing.T) {
db := setupAuditTestDB(t)
user := models.User{
UUID: "user-uuid-5678",
Email: "user2@test.com",
Name: "Regular User 2",
Role: models.RoleUser,
Enabled: true,
UUID: "user-uuid-5678",
Email: "user2@test.com",
Name: "Regular User 2",
Role: models.RoleUser,
Enabled: true,
PasswordHash: hashForTest(t, "userpassword123"),
}
require.NoError(t, user.SetPassword("userpassword123"))
require.NoError(t, db.Create(&user).Error)
r := setupRouterWithAuth(db, user.ID, "user")

View File

@@ -0,0 +1,41 @@
package models
import (
"github.com/google/uuid"
"gorm.io/gorm"
)
// SeedDefaultSecurityConfig ensures a default SecurityConfig row exists in the database.
// It uses FirstOrCreate so it is safe to call on every startup — existing data is never
// overwritten. Returns the upserted record and any error encountered.
func SeedDefaultSecurityConfig(db *gorm.DB) (*SecurityConfig, error) {
record := SecurityConfig{
UUID: uuid.NewString(),
Name: "default",
Enabled: false,
CrowdSecMode: "disabled",
CrowdSecAPIURL: "http://127.0.0.1:8085",
WAFMode: "disabled",
WAFParanoiaLevel: 1,
RateLimitMode: "disabled",
RateLimitEnable: false,
// Zero values are intentional for the disabled default state.
// cerberus.RateLimitMiddleware guards against zero/negative values by falling
// back to safe operational defaults (requests=100, window=60s, burst=20) before
// computing the token-bucket rate. buildRateLimitHandler (caddy/config.go) also
// returns nil — skipping rate-limit injection — when either value is ≤ 0.
// A user enabling rate limiting via the UI without configuring thresholds will
// therefore receive the safe hardcoded defaults, not a zero-rate limit.
RateLimitBurst: 0,
RateLimitRequests: 0,
RateLimitWindowSec: 0,
}
// FirstOrCreate matches on Name only; if a row with name="default" already exists
// it is loaded into record without modifying any of its fields.
result := db.Where(SecurityConfig{Name: "default"}).FirstOrCreate(&record)
if result.Error != nil {
return nil, result.Error
}
return &record, nil
}
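The fallback behaviour the comment attributes to cerberus.RateLimitMiddleware can be sketched as a small guard; the function name and the defaults below are taken from that comment, not from the project's actual code:

```go
package main

import "fmt"

// effectiveRateLimit is an illustrative sketch of the guard described above:
// zero or negative thresholds fall back to safe operational defaults
// (requests=100, window=60s, burst=20) before any token-bucket math runs.
func effectiveRateLimit(requests, windowSec, burst int) (int, int, int) {
	if requests <= 0 {
		requests = 100
	}
	if windowSec <= 0 {
		windowSec = 60
	}
	if burst <= 0 {
		burst = 20
	}
	return requests, windowSec, burst
}

func main() {
	// The seeded zero values resolve to the safe defaults, never a zero rate.
	r, w, b := effectiveRateLimit(0, 0, 0)
	fmt.Println(r, w, b) // 100 60 20
}
```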

View File

@@ -0,0 +1,102 @@
package models_test
import (
"testing"
"github.com/glebarez/sqlite"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/gorm"
"github.com/Wikid82/charon/backend/internal/models"
)
func newSeedTestDB(t *testing.T) *gorm.DB {
t.Helper()
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
require.NoError(t, err)
require.NoError(t, db.AutoMigrate(&models.SecurityConfig{}))
return db
}
func TestSeedDefaultSecurityConfig_EmptyDB(t *testing.T) {
db := newSeedTestDB(t)
rec, err := models.SeedDefaultSecurityConfig(db)
require.NoError(t, err)
require.NotNil(t, rec)
assert.Equal(t, "default", rec.Name)
assert.False(t, rec.Enabled)
assert.Equal(t, "disabled", rec.CrowdSecMode)
assert.Equal(t, "http://127.0.0.1:8085", rec.CrowdSecAPIURL)
assert.Equal(t, "disabled", rec.WAFMode)
assert.Equal(t, "disabled", rec.RateLimitMode)
assert.NotEmpty(t, rec.UUID)
var count int64
db.Model(&models.SecurityConfig{}).Where("name = ?", "default").Count(&count)
assert.Equal(t, int64(1), count)
}
func TestSeedDefaultSecurityConfig_Idempotent(t *testing.T) {
db := newSeedTestDB(t)
// First call — creates the row.
rec1, err := models.SeedDefaultSecurityConfig(db)
require.NoError(t, err)
require.NotNil(t, rec1)
// Second call — must not error and must not duplicate.
rec2, err := models.SeedDefaultSecurityConfig(db)
require.NoError(t, err)
require.NotNil(t, rec2)
assert.Equal(t, rec1.ID, rec2.ID, "ID must be identical on subsequent calls")
var count int64
db.Model(&models.SecurityConfig{}).Where("name = ?", "default").Count(&count)
assert.Equal(t, int64(1), count, "exactly one row should exist after two seed calls")
}
func TestSeedDefaultSecurityConfig_DBError(t *testing.T) {
db := newSeedTestDB(t)
sqlDB, err := db.DB()
require.NoError(t, err)
require.NoError(t, sqlDB.Close())
rec, err := models.SeedDefaultSecurityConfig(db)
assert.Error(t, err)
assert.Nil(t, rec)
}
func TestSeedDefaultSecurityConfig_DoesNotOverwriteExisting(t *testing.T) {
db := newSeedTestDB(t)
// Pre-seed a customised row.
existing := models.SecurityConfig{
UUID: "pre-existing-uuid",
Name: "default",
Enabled: true,
CrowdSecMode: "local",
CrowdSecAPIURL: "http://192.168.1.5:8085",
WAFMode: "block",
RateLimitMode: "enabled",
}
require.NoError(t, db.Create(&existing).Error)
// Seed should find the existing row and return it unchanged.
rec, err := models.SeedDefaultSecurityConfig(db)
require.NoError(t, err)
require.NotNil(t, rec)
assert.True(t, rec.Enabled, "existing Enabled flag must not be overwritten")
assert.Equal(t, "local", rec.CrowdSecMode, "existing CrowdSecMode must not be overwritten")
assert.Equal(t, "http://192.168.1.5:8085", rec.CrowdSecAPIURL)
assert.Equal(t, "block", rec.WAFMode)
var count int64
db.Model(&models.SecurityConfig{}).Where("name = ?", "default").Count(&count)
assert.Equal(t, int64(1), count)
}

View File

@@ -10,7 +10,7 @@ type SSLCertificate struct {
ID uint `json:"-" gorm:"primaryKey"`
UUID string `json:"uuid" gorm:"uniqueIndex"`
Name string `json:"name" gorm:"index"`
Provider string `json:"provider" gorm:"index"` // "letsencrypt", "custom", "self-signed"
Provider string `json:"provider" gorm:"index"` // "letsencrypt", "letsencrypt-staging", "custom"
Domains string `json:"domains" gorm:"index"` // comma-separated list of domains
Certificate string `json:"certificate" gorm:"type:text"` // PEM-encoded certificate
PrivateKey string `json:"private_key" gorm:"type:text"` // PEM-encoded private key

View File

@@ -19,6 +19,22 @@ var (
initOnce sync.Once
)
// rfc1918Blocks holds pre-parsed CIDR blocks for RFC 1918 private address ranges only.
// Initialized once and used by IsRFC1918 to support the AllowRFC1918 bypass path.
var (
rfc1918Blocks []*net.IPNet
rfc1918Once sync.Once
)
// rfc1918CIDRs enumerates exactly the three RFC 1918 private address ranges.
// Intentionally excludes loopback, link-local, cloud metadata (169.254.x.x),
// and all other reserved ranges — those remain blocked regardless of AllowRFC1918.
var rfc1918CIDRs = []string{
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
}
// privateCIDRs defines all private and reserved IP ranges to block for SSRF protection.
// This list covers:
// - RFC 1918 private networks (10.x, 172.16-31.x, 192.168.x)
@@ -68,6 +84,21 @@ func initPrivateBlocks() {
})
}
// initRFC1918Blocks parses the three RFC 1918 CIDR blocks once at startup.
func initRFC1918Blocks() {
rfc1918Once.Do(func() {
rfc1918Blocks = make([]*net.IPNet, 0, len(rfc1918CIDRs))
for _, cidr := range rfc1918CIDRs {
_, block, err := net.ParseCIDR(cidr)
if err != nil {
// This should never happen with valid CIDR strings
continue
}
rfc1918Blocks = append(rfc1918Blocks, block)
}
})
}
// IsPrivateIP checks if an IP address is private, loopback, link-local, or otherwise restricted.
// This function implements comprehensive SSRF protection by blocking:
// - Private IPv4 ranges (RFC 1918): 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16
@@ -110,6 +141,35 @@ func IsPrivateIP(ip net.IP) bool {
return false
}
// IsRFC1918 reports whether an IP address belongs to one of the three RFC 1918
// private address ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.
//
// Unlike IsPrivateIP, this function only covers RFC 1918 ranges. It does NOT
// return true for loopback, link-local (169.254.x.x), cloud metadata endpoints,
// or any other reserved ranges. Use this to implement the AllowRFC1918 bypass
// while keeping all other SSRF protections in place.
//
// Exported so url_validator.go (package security) can call it without duplicating logic.
func IsRFC1918(ip net.IP) bool {
if ip == nil {
return false
}
initRFC1918Blocks()
// Normalise IPv4-mapped IPv6 addresses (::ffff:192.168.x.x → 192.168.x.x)
if ip4 := ip.To4(); ip4 != nil {
ip = ip4
}
for _, block := range rfc1918Blocks {
if block.Contains(ip) {
return true
}
}
return false
}
// ClientOptions configures the behavior of the safe HTTP client.
type ClientOptions struct {
// Timeout is the total request timeout (default: 10s)
@@ -129,6 +189,14 @@ type ClientOptions struct {
// DialTimeout is the connection timeout for individual dial attempts (default: 5s)
DialTimeout time.Duration
// AllowRFC1918 permits connections to RFC 1918 private address ranges:
// 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
//
// SECURITY NOTE: Enable only for admin-configured features (e.g., uptime monitors
// targeting internal hosts). All other restricted ranges — loopback, link-local,
// cloud metadata (169.254.x.x), and reserved — remain blocked regardless.
AllowRFC1918 bool
}
// Option is a functional option for configuring ClientOptions.
@@ -183,6 +251,17 @@ func WithDialTimeout(timeout time.Duration) Option {
}
}
// WithAllowRFC1918 permits connections to RFC 1918 private address ranges
// (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
//
// Use only for admin-configured features such as uptime monitors that need to
// reach internal hosts. All other SSRF protections remain active.
func WithAllowRFC1918() Option {
return func(opts *ClientOptions) {
opts.AllowRFC1918 = true
}
}
// safeDialer creates a custom dial function that validates IP addresses at connection time.
// This prevents DNS rebinding attacks by:
// 1. Resolving the hostname to IP addresses
@@ -225,6 +304,13 @@ func safeDialer(opts *ClientOptions) func(ctx context.Context, network, addr str
continue
}
// Allow RFC 1918 addresses only when explicitly permitted (e.g., admin-configured
// uptime monitors targeting internal hosts). Link-local (169.254.x.x), loopback,
// cloud metadata, and all other restricted ranges remain blocked.
if opts.AllowRFC1918 && IsRFC1918(ip.IP) {
continue
}
if IsPrivateIP(ip.IP) {
return nil, fmt.Errorf("connection to private IP blocked: %s resolved to %s", host, ip.IP)
}
@@ -237,6 +323,11 @@ func safeDialer(opts *ClientOptions) func(ctx context.Context, network, addr str
selectedIP = ip.IP
break
}
// Select RFC 1918 IPs when the caller has opted in.
if opts.AllowRFC1918 && IsRFC1918(ip.IP) {
selectedIP = ip.IP
break
}
if !IsPrivateIP(ip.IP) {
selectedIP = ip.IP
break
@@ -255,6 +346,9 @@ func safeDialer(opts *ClientOptions) func(ctx context.Context, network, addr str
// validateRedirectTarget checks if a redirect URL is safe to follow.
// Returns an error if the redirect target resolves to private IPs.
//
// TODO: If MaxRedirects is ever re-enabled for uptime monitors, thread AllowRFC1918
// through this function to permit RFC 1918 redirect targets.
func validateRedirectTarget(req *http.Request, opts *ClientOptions) error {
host := req.URL.Hostname()
if host == "" {

View File

@@ -920,3 +920,230 @@ func containsSubstr(s, substr string) bool {
}
return false
}
// PR-3: IsRFC1918 unit tests
func TestIsRFC1918_RFC1918Addresses(t *testing.T) {
t.Parallel()
tests := []struct {
name string
ip string
}{
{"10.0.0.0 start", "10.0.0.0"},
{"10.0.0.1", "10.0.0.1"},
{"10.128.0.1", "10.128.0.1"},
{"10.255.255.255 end", "10.255.255.255"},
{"172.16.0.0 start", "172.16.0.0"},
{"172.16.0.1", "172.16.0.1"},
{"172.24.0.1", "172.24.0.1"},
{"172.31.255.255 end", "172.31.255.255"},
{"192.168.0.0 start", "192.168.0.0"},
{"192.168.1.1", "192.168.1.1"},
{"192.168.255.255 end", "192.168.255.255"},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ip := net.ParseIP(tt.ip)
if ip == nil {
t.Fatalf("failed to parse IP: %s", tt.ip)
}
if !IsRFC1918(ip) {
t.Errorf("IsRFC1918(%s) = false, want true", tt.ip)
}
})
}
}
func TestIsRFC1918_NonRFC1918Addresses(t *testing.T) {
t.Parallel()
tests := []struct {
name string
ip string
}{
{"Loopback 127.0.0.1", "127.0.0.1"},
{"Link-local 169.254.1.1", "169.254.1.1"},
{"Cloud metadata 169.254.169.254", "169.254.169.254"},
{"IPv6 loopback ::1", "::1"},
{"IPv6 link-local fe80::1", "fe80::1"},
{"Public 8.8.8.8", "8.8.8.8"},
{"Unspecified 0.0.0.0", "0.0.0.0"},
{"Broadcast 255.255.255.255", "255.255.255.255"},
{"Reserved 240.0.0.1", "240.0.0.1"},
{"IPv6 unique local fc00::1", "fc00::1"},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ip := net.ParseIP(tt.ip)
if ip == nil {
t.Fatalf("failed to parse IP: %s", tt.ip)
}
if IsRFC1918(ip) {
t.Errorf("IsRFC1918(%s) = true, want false", tt.ip)
}
})
}
}
func TestIsRFC1918_NilIP(t *testing.T) {
t.Parallel()
if IsRFC1918(nil) {
t.Error("IsRFC1918(nil) = true, want false")
}
}
func TestIsRFC1918_BoundaryAddresses(t *testing.T) {
t.Parallel()
tests := []struct {
name string
ip string
expected bool
}{
{"11.0.0.0 just outside 10/8", "11.0.0.0", false},
{"172.15.255.255 just below 172.16/12", "172.15.255.255", false},
{"172.32.0.0 just above 172.31/12", "172.32.0.0", false},
{"192.167.255.255 just below 192.168/16", "192.167.255.255", false},
{"192.169.0.0 just above 192.168/16", "192.169.0.0", false},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ip := net.ParseIP(tt.ip)
if ip == nil {
t.Fatalf("failed to parse IP: %s", tt.ip)
}
if got := IsRFC1918(ip); got != tt.expected {
t.Errorf("IsRFC1918(%s) = %v, want %v", tt.ip, got, tt.expected)
}
})
}
}
func TestIsRFC1918_IPv4MappedAddresses(t *testing.T) {
t.Parallel()
// IPv4-mapped IPv6 representations of RFC 1918 addresses should be
// recognised as RFC 1918 (after To4() normalisation inside IsRFC1918).
tests := []struct {
name string
ip string
expected bool
}{
{"::ffff:10.0.0.1 mapped", "::ffff:10.0.0.1", true},
{"::ffff:192.168.1.1 mapped", "::ffff:192.168.1.1", true},
{"::ffff:172.16.0.1 mapped", "::ffff:172.16.0.1", true},
{"::ffff:8.8.8.8 mapped public", "::ffff:8.8.8.8", false},
{"::ffff:169.254.169.254 mapped link-local", "::ffff:169.254.169.254", false},
}
for _, tt := range tests {
tt := tt
t.Run(tt.name, func(t *testing.T) {
t.Parallel()
ip := net.ParseIP(tt.ip)
if ip == nil {
t.Fatalf("failed to parse IP: %s", tt.ip)
}
if got := IsRFC1918(ip); got != tt.expected {
t.Errorf("IsRFC1918(%s) = %v, want %v", tt.ip, got, tt.expected)
}
})
}
}
// PR-3: AllowRFC1918 safeDialer / client tests
func TestSafeDialer_AllowRFC1918_ValidationLoopSkipsRFC1918(t *testing.T) {
// When AllowRFC1918 is set, the validation loop must NOT return
// "connection to private IP blocked" for RFC 1918 addresses.
// The subsequent TCP connection will fail because nothing is listening on
// 192.168.1.1:80 in the test environment, but the error must be a
// connection-level error, not an SSRF-block.
opts := &ClientOptions{
Timeout: 200 * time.Millisecond,
DialTimeout: 200 * time.Millisecond,
AllowRFC1918: true,
}
dial := safeDialer(opts)
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()
_, err := dial(ctx, "tcp", "192.168.1.1:80")
if err == nil {
t.Fatal("expected a connection error, got nil")
}
if contains(err.Error(), "connection to private IP blocked") {
t.Errorf("AllowRFC1918 should prevent private-IP blocking message; got: %v", err)
}
}
func TestSafeDialer_AllowRFC1918_BlocksLinkLocal(t *testing.T) {
// Link-local (169.254.x.x) must remain blocked even when AllowRFC1918=true.
opts := &ClientOptions{
Timeout: 200 * time.Millisecond,
DialTimeout: 200 * time.Millisecond,
AllowRFC1918: true,
}
dial := safeDialer(opts)
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()
_, err := dial(ctx, "tcp", "169.254.1.1:80")
if err == nil {
t.Fatal("expected an error for link-local address, got nil")
}
if !contains(err.Error(), "connection to private IP blocked") {
t.Errorf("expected link-local to be blocked; got: %v", err)
}
}
func TestSafeDialer_AllowRFC1918_BlocksLoopbackWithoutAllowLocalhost(t *testing.T) {
// Loopback must remain blocked when AllowRFC1918=true but AllowLocalhost=false.
opts := &ClientOptions{
Timeout: 200 * time.Millisecond,
DialTimeout: 200 * time.Millisecond,
AllowRFC1918: true,
AllowLocalhost: false,
}
dial := safeDialer(opts)
ctx, cancel := context.WithTimeout(context.Background(), 500*time.Millisecond)
defer cancel()
_, err := dial(ctx, "tcp", "127.0.0.1:80")
if err == nil {
t.Fatal("expected an error for loopback without AllowLocalhost, got nil")
}
if !contains(err.Error(), "connection to private IP blocked") {
t.Errorf("expected loopback to be blocked; got: %v", err)
}
}
func TestNewSafeHTTPClient_AllowRFC1918_BlocksSSRFMetadata(t *testing.T) {
// Cloud metadata endpoint (169.254.169.254) must be blocked even with AllowRFC1918.
client := NewSafeHTTPClient(
WithTimeout(200*time.Millisecond),
WithDialTimeout(200*time.Millisecond),
WithAllowRFC1918(),
)
resp, err := client.Get("http://169.254.169.254/latest/meta-data/")
if resp != nil {
_ = resp.Body.Close()
}
if err == nil {
t.Fatal("expected metadata endpoint to be blocked, got nil")
}
if !contains(err.Error(), "connection to private IP blocked") {
t.Errorf("expected metadata endpoint blocking error; got: %v", err)
}
}
func TestNewSafeHTTPClient_WithAllowRFC1918_OptionApplied(t *testing.T) {
// Verify that WithAllowRFC1918() sets AllowRFC1918=true on ClientOptions.
opts := defaultOptions()
WithAllowRFC1918()(&opts)
if !opts.AllowRFC1918 {
t.Error("WithAllowRFC1918() should set AllowRFC1918=true")
}
}

View File

@@ -7,5 +7,8 @@ const (
FlagGotifyServiceEnabled = "feature.notifications.service.gotify.enabled"
FlagWebhookServiceEnabled = "feature.notifications.service.webhook.enabled"
FlagTelegramServiceEnabled = "feature.notifications.service.telegram.enabled"
FlagSlackServiceEnabled = "feature.notifications.service.slack.enabled"
FlagPushoverServiceEnabled = "feature.notifications.service.pushover.enabled"
FlagNtfyServiceEnabled = "feature.notifications.service.ntfy.enabled"
FlagSecurityProviderEventsEnabled = "feature.notifications.security_provider_events.enabled"
)

View File

@@ -458,10 +458,11 @@ func readCappedResponseBody(body io.Reader) ([]byte, error) {
func sanitizeOutboundHeaders(headers map[string]string) map[string]string {
allowed := map[string]struct{}{
"content-type": {},
"user-agent": {},
"x-request-id": {},
"x-gotify-key": {},
"content-type": {},
"user-agent": {},
"x-request-id": {},
"x-gotify-key": {},
"authorization": {},
}
sanitized := make(map[string]string)
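A sketch of how an allowlist filter like sanitizeOutboundHeaders typically works: keys are lowercased for comparison and anything outside the allowlist (Cookie, X-Forwarded-For, and so on) is dropped. The key-canonicalisation details of the real function may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// sanitizeHeaders is an illustrative stand-in for sanitizeOutboundHeaders.
// The allowlist mirrors the one in the diff above, including the
// authorization entry added for ntfy Bearer auth.
func sanitizeHeaders(headers map[string]string) map[string]string {
	allowed := map[string]struct{}{
		"content-type":  {},
		"user-agent":    {},
		"x-request-id":  {},
		"x-gotify-key":  {},
		"authorization": {},
	}
	out := make(map[string]string)
	for k, v := range headers {
		if _, ok := allowed[strings.ToLower(k)]; ok {
			out[k] = v
		}
	}
	return out
}

func main() {
	got := sanitizeHeaders(map[string]string{
		"Authorization": "Bearer tk_abc",
		"Cookie":        "sid=1",
		"Content-Type":  "application/json",
	})
	fmt.Println(len(got)) // 2
	_, hasCookie := got["Cookie"]
	fmt.Println(hasCookie) // false
}
```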

View File

@@ -255,11 +255,11 @@ func TestSanitizeOutboundHeadersAllowlist(t *testing.T) {
"Cookie": "sid=1",
})
if len(headers) != 4 {
t.Fatalf("expected 4 allowed headers, got %d", len(headers))
if len(headers) != 5 {
t.Fatalf("expected 5 allowed headers, got %d", len(headers))
}
if _, ok := headers["Authorization"]; ok {
t.Fatalf("authorization header must be stripped")
if _, ok := headers["Authorization"]; !ok {
t.Fatalf("authorization header must be allowed for ntfy Bearer auth")
}
if _, ok := headers["Cookie"]; ok {
t.Fatalf("cookie header must be stripped")

View File

@@ -25,6 +25,12 @@ func (r *Router) ShouldUseNotify(providerType string, flags map[string]bool) boo
return flags[FlagWebhookServiceEnabled]
case "telegram":
return flags[FlagTelegramServiceEnabled]
case "slack":
return flags[FlagSlackServiceEnabled]
case "pushover":
return flags[FlagPushoverServiceEnabled]
case "ntfy":
return flags[FlagNtfyServiceEnabled]
default:
return false
}

View File

@@ -86,3 +86,57 @@ func TestRouter_ShouldUseNotify_WebhookServiceFlag(t *testing.T) {
t.Fatalf("expected notify routing disabled for webhook when FlagWebhookServiceEnabled is false")
}
}
func TestRouter_ShouldUseNotify_SlackServiceFlag(t *testing.T) {
router := NewRouter()
flags := map[string]bool{
FlagNotifyEngineEnabled: true,
FlagSlackServiceEnabled: true,
}
if !router.ShouldUseNotify("slack", flags) {
t.Fatalf("expected notify routing enabled for slack when FlagSlackServiceEnabled is true")
}
flags[FlagSlackServiceEnabled] = false
if router.ShouldUseNotify("slack", flags) {
t.Fatalf("expected notify routing disabled for slack when FlagSlackServiceEnabled is false")
}
}
func TestRouter_ShouldUseNotify_PushoverServiceFlag(t *testing.T) {
router := NewRouter()
flags := map[string]bool{
FlagNotifyEngineEnabled: true,
FlagPushoverServiceEnabled: true,
}
if !router.ShouldUseNotify("pushover", flags) {
t.Fatalf("expected notify routing enabled for pushover when FlagPushoverServiceEnabled is true")
}
flags[FlagPushoverServiceEnabled] = false
if router.ShouldUseNotify("pushover", flags) {
t.Fatalf("expected notify routing disabled for pushover when FlagPushoverServiceEnabled is false")
}
}
func TestRouter_ShouldUseNotify_NtfyServiceFlag(t *testing.T) {
router := NewRouter()
flags := map[string]bool{
FlagNotifyEngineEnabled: true,
FlagNtfyServiceEnabled: true,
}
if !router.ShouldUseNotify("ntfy", flags) {
t.Fatalf("expected notify routing enabled for ntfy when FlagNtfyServiceEnabled is true")
}
flags[FlagNtfyServiceEnabled] = false
if router.ShouldUseNotify("ntfy", flags) {
t.Fatalf("expected notify routing disabled for ntfy when FlagNtfyServiceEnabled is false")
}
}

View File

@@ -120,6 +120,14 @@ type ValidationConfig struct {
MaxRedirects int
Timeout time.Duration
BlockPrivateIPs bool
// AllowRFC1918 permits addresses in the RFC 1918 private ranges
// (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
//
// SECURITY NOTE: Must only be set for admin-configured features such as uptime
// monitors. Link-local (169.254.x.x), loopback, cloud metadata, and all other
// restricted ranges remain blocked regardless of this flag.
AllowRFC1918 bool
}
// ValidationOption allows customizing validation behavior.
@@ -145,6 +153,15 @@ func WithMaxRedirects(maxRedirects int) ValidationOption {
return func(c *ValidationConfig) { c.MaxRedirects = maxRedirects }
}
// WithAllowRFC1918 permits addresses in the RFC 1918 private ranges
// (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16).
//
// Use only for admin-configured features (e.g., uptime monitors targeting internal hosts).
// All other SSRF protections remain active.
func WithAllowRFC1918() ValidationOption {
return func(c *ValidationConfig) { c.AllowRFC1918 = true }
}
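WithAllowRFC1918 follows the functional-options pattern used throughout this file: each With* helper returns a closure that mutates the config, and the validator applies them over secure defaults. Stripped to its essentials it looks like this (lower-cased names mark it as an illustrative standalone sketch, not the package's exported API):

```go
package main

import "fmt"

// validationConfig mirrors a few ValidationConfig fields for illustration.
type validationConfig struct {
	AllowRFC1918 bool
	MaxRedirects int
}

type validationOption func(*validationConfig)

func withAllowRFC1918() validationOption {
	return func(c *validationConfig) { c.AllowRFC1918 = true }
}

func withMaxRedirects(n int) validationOption {
	return func(c *validationConfig) { c.MaxRedirects = n }
}

// newConfig starts from secure defaults (everything off) and applies the
// caller's options in order.
func newConfig(opts ...validationOption) validationConfig {
	cfg := validationConfig{}
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := newConfig(withAllowRFC1918(), withMaxRedirects(3))
	fmt.Println(cfg.AllowRFC1918, cfg.MaxRedirects) // true 3
}
```

The advantage over a plain config struct is that new knobs like AllowRFC1918 can be added without breaking existing call sites, and the zero-value default stays the secure one.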
// ValidateExternalURL validates a URL for external HTTP requests with comprehensive SSRF protection.
// This function provides defense-in-depth against Server-Side Request Forgery attacks by:
// 1. Validating URL format and scheme
@@ -272,9 +289,26 @@ func ValidateExternalURL(rawURL string, options ...ValidationOption) (string, er
if ip.To4() != nil && ip.To16() != nil && isIPv4MappedIPv6(ip) {
// Extract the IPv4 address from the mapped format
ipv4 := ip.To4()
if network.IsPrivateIP(ipv4) {
return "", fmt.Errorf("connection to private ip addresses is blocked for security (detected IPv4-mapped IPv6: %s)", ip.String())
// Allow RFC 1918 IPv4-mapped IPv6 only when the caller has explicitly opted in.
if config.AllowRFC1918 && network.IsRFC1918(ipv4) {
continue
}
if network.IsPrivateIP(ipv4) {
// Cloud metadata endpoint must produce the specific error even
// when the address arrives as an IPv4-mapped IPv6 value.
if ipv4.String() == "169.254.169.254" {
return "", fmt.Errorf("access to cloud metadata endpoints is blocked for security (detected: %s)", sanitizeIPForError(ipv4.String()))
}
return "", fmt.Errorf("connection to private ip addresses is blocked for security (detected: %s)", sanitizeIPForError(ipv4.String()))
}
}
// Allow RFC 1918 addresses only when the caller has explicitly opted in
// (e.g., admin-configured uptime monitors targeting internal hosts).
// Link-local (169.254.x.x), loopback, cloud metadata, and all other
// restricted ranges remain blocked regardless of this flag.
if config.AllowRFC1918 && network.IsRFC1918(ip) {
continue
}
// Check if IP is in private/reserved ranges using centralized network.IsPrivateIP
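The IPv4-mapped IPv6 handling above hinges on a standard-library behavior: `net.IP.To4` returns the embedded IPv4 bytes for a `::ffff:a.b.c.d` address, so IPv4 range checks apply to the mapped form. A small self-contained illustration (not project code):

```go
package main

import (
	"fmt"
	"net"
)

// mappedIPv4 returns the embedded IPv4 address of an IPv4-mapped IPv6
// string (::ffff:a.b.c.d), or nil for a genuine IPv6 address.
func mappedIPv4(s string) net.IP {
	return net.ParseIP(s).To4()
}

func main() {
	// Prints 192.168.1.1: the mapped form collapses to plain IPv4,
	// so RFC 1918 / metadata checks written against IPv4 CIDRs still fire.
	fmt.Println(mappedIPv4("::ffff:192.168.1.1"))
	// Prints <nil>: a real IPv6 address is not mapped and takes the IPv6 path.
	fmt.Println(mappedIPv4("2001:db8::1"))
}
```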


@@ -1054,3 +1054,143 @@ func TestIsIPv4MappedIPv6_EdgeCases(t *testing.T) {
})
}
}
// PR-3: WithAllowRFC1918 validation option tests
func TestValidateExternalURL_WithAllowRFC1918_Permits10x(t *testing.T) {
t.Parallel()
_, err := ValidateExternalURL(
"http://10.0.0.1",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
// The key invariant: RFC 1918 bypass must NOT produce the blocking error.
// DNS may succeed (returning the IP) or fail (network unavailable) — both acceptable.
if err != nil && strings.Contains(err.Error(), "private ip addresses is blocked") {
t.Errorf("AllowRFC1918 should skip 10.x.x.x blocking; got: %v", err)
}
}
func TestValidateExternalURL_WithAllowRFC1918_Permits172_16x(t *testing.T) {
t.Parallel()
_, err := ValidateExternalURL(
"http://172.16.0.1",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err != nil && strings.Contains(err.Error(), "private ip addresses is blocked") {
t.Errorf("AllowRFC1918 should skip 172.16.x.x blocking; got: %v", err)
}
}
func TestValidateExternalURL_WithAllowRFC1918_Permits192_168x(t *testing.T) {
t.Parallel()
_, err := ValidateExternalURL(
"http://192.168.1.1",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err != nil && strings.Contains(err.Error(), "private ip addresses is blocked") {
t.Errorf("AllowRFC1918 should skip 192.168.x.x blocking; got: %v", err)
}
}
func TestValidateExternalURL_WithAllowRFC1918_BlocksMetadata(t *testing.T) {
t.Parallel()
// 169.254.169.254 is the cloud metadata endpoint; it must stay blocked even
// with AllowRFC1918 because 169.254.0.0/16 is not in rfc1918CIDRs.
_, err := ValidateExternalURL(
"http://169.254.169.254",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err == nil {
t.Fatal("expected cloud metadata endpoint to be blocked, got nil")
}
}
func TestValidateExternalURL_WithAllowRFC1918_BlocksLinkLocal(t *testing.T) {
t.Parallel()
// 169.254.1.1 is link-local but not the specific metadata IP; still blocked.
_, err := ValidateExternalURL(
"http://169.254.1.1",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err == nil {
t.Fatal("expected link-local address to be blocked, got nil")
}
}
func TestValidateExternalURL_WithAllowRFC1918_BlocksLoopback(t *testing.T) {
t.Parallel()
// 127.0.0.1 without WithAllowLocalhost must still be blocked.
_, err := ValidateExternalURL(
"http://127.0.0.1",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err == nil {
t.Fatal("expected loopback to be blocked without AllowLocalhost, got nil")
}
if !strings.Contains(err.Error(), "private ip addresses is blocked") &&
!strings.Contains(err.Error(), "dns resolution failed") {
t.Errorf("expected loopback blocking error; got: %v", err)
}
}
func TestValidateExternalURL_RFC1918BlockedByDefault(t *testing.T) {
t.Parallel()
// Without WithAllowRFC1918, RFC 1918 addresses must still fail.
_, err := ValidateExternalURL(
"http://10.0.0.1",
WithAllowHTTP(),
WithTimeout(200*time.Millisecond),
)
if err == nil {
t.Fatal("expected RFC 1918 address to be blocked by default, got nil")
}
}
func TestValidateExternalURL_WithAllowRFC1918_IPv4MappedIPv6Allowed(t *testing.T) {
t.Parallel()
// ::ffff:192.168.1.1 is an IPv4-mapped IPv6 of an RFC 1918 address.
// With AllowRFC1918, the mapped IPv4 is extracted and the RFC 1918 bypass fires.
_, err := ValidateExternalURL(
"http://[::ffff:192.168.1.1]",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err != nil && strings.Contains(err.Error(), "private ip addresses is blocked") {
t.Errorf("AllowRFC1918 should permit ::ffff:192.168.1.1; got: %v", err)
}
}
func TestValidateExternalURL_WithAllowRFC1918_IPv4MappedMetadataBlocked(t *testing.T) {
t.Parallel()
// ::ffff:169.254.169.254 maps to the cloud metadata IP; must stay blocked.
_, err := ValidateExternalURL(
"http://[::ffff:169.254.169.254]",
WithAllowHTTP(),
WithAllowRFC1918(),
WithTimeout(200*time.Millisecond),
)
if err == nil {
t.Fatal("expected IPv4-mapped metadata address to be blocked, got nil")
}
// Must produce the cloud-metadata-specific error, not the generic private-IP error.
if !strings.Contains(err.Error(), "cloud metadata") {
t.Errorf("expected cloud metadata error, got: %v", err)
}
// The raw mapped form must not be leaked in the error message.
if strings.Contains(err.Error(), "::ffff:") {
t.Errorf("error message leaks raw IPv4-mapped form: %v", err)
}
}


@@ -228,7 +228,7 @@ func TestBuildLocalDockerUnavailableDetails_PermissionDeniedSocketGIDInGroups(t
// Temp file GID = our primary GID (already in process groups) → no group hint
tmpDir := t.TempDir()
socketFile := filepath.Join(tmpDir, "docker.sock")
require.NoError(t, os.WriteFile(socketFile, []byte(""), 0o660))
require.NoError(t, os.WriteFile(socketFile, []byte(""), 0o600))
host := "unix://" + socketFile
err := &net.OpError{Op: "dial", Net: "unix", Err: syscall.EACCES}


@@ -89,6 +89,7 @@ func (s *EnhancedSecurityNotificationService) getProviderAggregatedConfig() (*mo
"slack": true,
"gotify": true,
"telegram": true,
"pushover": true,
}
filteredProviders := []models.NotificationProvider{}
for _, p := range providers {


@@ -192,7 +192,10 @@ func (s *MailService) RenderNotificationEmail(templateName string, data EmailTem
return "", fmt.Errorf("failed to render template %q: %w", templateName, err)
}
data.Content = template.HTML(contentBuf.String())
// html/template.Execute already escapes all EmailTemplateData fields; the
// template.HTML cast here prevents double-escaping in the outer layout template.
// #nosec G203 -- html/template.Execute auto-escapes all EmailTemplateData fields; this cast prevents double-escaping in the outer layout.
data.Content = template.HTML(contentBuf.String()) //nolint:gosec // see above
baseTmpl, err := template.New("email_base.html").Parse(string(baseBytes))
if err != nil {


@@ -30,15 +30,34 @@ type NotificationService struct {
httpWrapper *notifications.HTTPWrapper
mailService MailServiceInterface
telegramAPIBaseURL string
pushoverAPIBaseURL string
validateSlackURL func(string) error
}
func NewNotificationService(db *gorm.DB, mailService MailServiceInterface) *NotificationService {
return &NotificationService{
// NotificationServiceOption configures a NotificationService at construction time.
type NotificationServiceOption func(*NotificationService)
// WithSlackURLValidator overrides the Slack webhook URL validator. Intended for use
// in tests that need to bypass real URL validation without mutating shared state.
func WithSlackURLValidator(fn func(string) error) NotificationServiceOption {
return func(s *NotificationService) {
s.validateSlackURL = fn
}
}
func NewNotificationService(db *gorm.DB, mailService MailServiceInterface, opts ...NotificationServiceOption) *NotificationService {
s := &NotificationService{
DB: db,
httpWrapper: notifications.NewNotifyHTTPWrapper(),
mailService: mailService,
telegramAPIBaseURL: "https://api.telegram.org",
pushoverAPIBaseURL: "https://api.pushover.net",
validateSlackURL: validateSlackWebhookURL,
}
for _, opt := range opts {
opt(s)
}
return s
}
var discordWebhookRegex = regexp.MustCompile(`^https://discord(?:app)?\.com/api/webhooks/(\d+)/([a-zA-Z0-9_-]+)`)
@@ -48,6 +67,15 @@ var allowedDiscordWebhookHosts = map[string]struct{}{
"canary.discord.com": {},
}
var slackWebhookRegex = regexp.MustCompile(`^https://hooks\.slack\.com/services/T[A-Za-z0-9_-]+/B[A-Za-z0-9_-]+/[A-Za-z0-9_-]+$`)
func validateSlackWebhookURL(rawURL string) error {
if !slackWebhookRegex.MatchString(rawURL) {
return fmt.Errorf("invalid Slack webhook URL: must match https://hooks.slack.com/services/T.../B.../xxx")
}
return nil
}
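The anchored regex above is what makes the validator strict: the `^https://` prefix rejects plain HTTP, the literal host rejects lookalike domains and IPs, and the trailing `$` rejects query strings appended to the secret path. A runnable demonstration using the same pattern:

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern as slackWebhookRegex in the diff above.
var slackWebhookRegex = regexp.MustCompile(`^https://hooks\.slack\.com/services/T[A-Za-z0-9_-]+/B[A-Za-z0-9_-]+/[A-Za-z0-9_-]+$`)

func main() {
	urls := []string{
		"https://hooks.slack.com/services/T00000000/B00000000/abcdefgh", // valid
		"http://hooks.slack.com/services/T00000000/B00000000/abcdefgh",  // http scheme rejected by ^https://
		"https://evil.com/services/T00000000/B00000000/abcdefgh",        // wrong host rejected
		"https://hooks.slack.com/services/T0/B0/x?token=leak",           // query string rejected by $ anchor
	}
	for _, u := range urls {
		fmt.Println(slackWebhookRegex.MatchString(u), u)
	}
}
```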
func normalizeURL(serviceType, rawURL string) string {
if serviceType == "discord" {
matches := discordWebhookRegex.FindStringSubmatch(rawURL)
@@ -101,7 +129,7 @@ func validateDiscordProviderURL(providerType, rawURL string) error {
// supportsJSONTemplates returns true if the provider type can use JSON templates
func supportsJSONTemplates(providerType string) bool {
switch strings.ToLower(providerType) {
case "webhook", "discord", "gotify", "slack", "generic", "telegram":
case "webhook", "discord", "gotify", "slack", "generic", "telegram", "pushover", "ntfy":
return true
default:
return false
@@ -110,7 +138,7 @@ func supportsJSONTemplates(providerType string) bool {
func isSupportedNotificationProviderType(providerType string) bool {
switch strings.ToLower(strings.TrimSpace(providerType)) {
case "discord", "email", "gotify", "webhook", "telegram":
case "discord", "email", "gotify", "webhook", "telegram", "slack", "pushover", "ntfy":
return true
default:
return false
@@ -129,6 +157,12 @@ func (s *NotificationService) isDispatchEnabled(providerType string) bool {
return s.getFeatureFlagValue(notifications.FlagWebhookServiceEnabled, true)
case "telegram":
return s.getFeatureFlagValue(notifications.FlagTelegramServiceEnabled, true)
case "slack":
return s.getFeatureFlagValue(notifications.FlagSlackServiceEnabled, true)
case "pushover":
return s.getFeatureFlagValue(notifications.FlagPushoverServiceEnabled, true)
case "ntfy":
return s.getFeatureFlagValue(notifications.FlagNtfyServiceEnabled, true)
default:
return false
}
@@ -440,10 +474,21 @@ func (s *NotificationService) sendJSONPayload(ctx context.Context, p models.Noti
}
}
case "slack":
// Slack requires either 'text' or 'blocks'
if _, hasText := jsonPayload["text"]; !hasText {
if _, hasBlocks := jsonPayload["blocks"]; !hasBlocks {
return fmt.Errorf("slack payload requires 'text' or 'blocks' field")
if messageValue, hasMessage := jsonPayload["message"]; hasMessage {
jsonPayload["text"] = messageValue
normalizedBody, marshalErr := json.Marshal(jsonPayload)
if marshalErr != nil {
return fmt.Errorf("failed to normalize slack payload: %w", marshalErr)
}
body.Reset()
if _, writeErr := body.Write(normalizedBody); writeErr != nil {
return fmt.Errorf("failed to write normalized slack payload: %w", writeErr)
}
} else {
return fmt.Errorf("slack payload requires 'text' or 'blocks' field")
}
}
}
case "gotify":
@@ -468,9 +513,22 @@ func (s *NotificationService) sendJSONPayload(ctx context.Context, p models.Noti
return fmt.Errorf("telegram payload requires 'text' field")
}
}
case "pushover":
if _, hasMessage := jsonPayload["message"]; !hasMessage {
return fmt.Errorf("pushover payload requires 'message' field")
}
if priority, ok := jsonPayload["priority"]; ok {
if prio, isFloat := priority.(float64); isFloat && prio == 2 {
return fmt.Errorf("pushover emergency priority (2) requires retry and expire parameters; not yet supported")
}
}
case "ntfy":
if _, hasMessage := jsonPayload["message"]; !hasMessage {
return fmt.Errorf("ntfy payload must include a 'message' field")
}
}
if providerType == "gotify" || providerType == "webhook" || providerType == "telegram" {
if providerType == "gotify" || providerType == "webhook" || providerType == "telegram" || providerType == "slack" || providerType == "pushover" || providerType == "ntfy" {
headers := map[string]string{
"Content-Type": "application/json",
"User-Agent": "Charon-Notify/1.0",
@@ -516,6 +574,58 @@ func (s *NotificationService) sendJSONPayload(ctx context.Context, p models.Noti
body.Write(updatedBody)
}
if providerType == "slack" {
decryptedWebhookURL := p.Token
if strings.TrimSpace(decryptedWebhookURL) == "" {
return fmt.Errorf("slack webhook URL is not configured")
}
if validateErr := s.validateSlackURL(decryptedWebhookURL); validateErr != nil {
return validateErr
}
dispatchURL = decryptedWebhookURL
}
if providerType == "ntfy" {
if strings.TrimSpace(p.Token) != "" {
headers["Authorization"] = "Bearer " + strings.TrimSpace(p.Token)
}
}
if providerType == "pushover" {
decryptedToken := p.Token
if strings.TrimSpace(decryptedToken) == "" {
return fmt.Errorf("pushover API token is not configured")
}
if strings.TrimSpace(p.URL) == "" {
return fmt.Errorf("pushover user key is not configured")
}
pushoverBase := s.pushoverAPIBaseURL
if pushoverBase == "" {
pushoverBase = "https://api.pushover.net"
}
dispatchURL = pushoverBase + "/1/messages.json"
// When the base URL has been overridden (e.g. by tests), accept that
// base's own hostname; otherwise the dispatch host must be api.pushover.net.
parsedURL, parseErr := neturl.Parse(dispatchURL)
expectedHost := "api.pushover.net"
if parsedURL != nil && parsedURL.Hostname() != "" && pushoverBase != "https://api.pushover.net" {
expectedHost = parsedURL.Hostname()
}
if parseErr != nil || parsedURL.Hostname() != expectedHost {
return fmt.Errorf("pushover dispatch URL validation failed: invalid hostname")
}
jsonPayload["token"] = decryptedToken
jsonPayload["user"] = p.URL
updatedBody, marshalErr := json.Marshal(jsonPayload)
if marshalErr != nil {
return fmt.Errorf("failed to marshal pushover payload: %w", marshalErr)
}
body.Reset()
body.Write(updatedBody)
}
if _, sendErr := s.httpWrapper.Send(ctx, notifications.HTTPWrapperRequest{
URL: dispatchURL,
Headers: headers,
@@ -739,7 +849,17 @@ func (s *NotificationService) CreateProvider(provider *models.NotificationProvid
return err
}
if provider.Type != "gotify" && provider.Type != "telegram" {
if provider.Type == "slack" {
token := strings.TrimSpace(provider.Token)
if token == "" {
return fmt.Errorf("slack webhook URL is required")
}
if err := s.validateSlackURL(token); err != nil {
return err
}
}
if provider.Type != "gotify" && provider.Type != "telegram" && provider.Type != "slack" && provider.Type != "ntfy" && provider.Type != "pushover" {
provider.Token = ""
}
@@ -775,7 +895,7 @@ func (s *NotificationService) UpdateProvider(provider *models.NotificationProvid
return err
}
if provider.Type == "gotify" || provider.Type == "telegram" {
if provider.Type == "gotify" || provider.Type == "telegram" || provider.Type == "slack" || provider.Type == "ntfy" || provider.Type == "pushover" {
if strings.TrimSpace(provider.Token) == "" {
provider.Token = existing.Token
}
@@ -783,6 +903,12 @@ func (s *NotificationService) UpdateProvider(provider *models.NotificationProvid
provider.Token = ""
}
if provider.Type == "slack" && provider.Token != existing.Token {
if err := s.validateSlackURL(strings.TrimSpace(provider.Token)); err != nil {
return err
}
}
// Validate custom template before saving
if strings.ToLower(strings.TrimSpace(provider.Template)) == "custom" && strings.TrimSpace(provider.Config) != "" {
payload := map[string]any{"Title": "Preview", "Message": "Preview", "Time": time.Now().Format(time.RFC3339), "EventType": "preview"}
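The Slack branch of `sendJSONPayload` above normalizes a template's `message` key into Slack's required `text` field before dispatch. That transformation can be sketched in isolation; `normalizeSlack` is a hypothetical helper mirroring the dispatch logic, not a function from this codebase:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// normalizeSlack copies a "message" field to Slack's required "text" field
// when neither "text" nor "blocks" is present, then re-marshals the payload.
func normalizeSlack(raw []byte) ([]byte, error) {
	var payload map[string]any
	if err := json.Unmarshal(raw, &payload); err != nil {
		return nil, err
	}
	_, hasText := payload["text"]
	_, hasBlocks := payload["blocks"]
	if hasText || hasBlocks {
		return raw, nil // already valid for Slack; leave the body untouched
	}
	msg, hasMessage := payload["message"]
	if !hasMessage {
		return nil, fmt.Errorf("slack payload requires 'text' or 'blocks' field")
	}
	payload["text"] = msg
	return json.Marshal(payload)
}

func main() {
	out, err := normalizeSlack([]byte(`{"message":"hello"}`))
	fmt.Println(string(out), err)
}
```

Note that, as in the diff, the original `message` key is kept alongside the copied `text` key; Slack ignores unknown top-level fields in incoming webhook payloads.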


@@ -22,7 +22,7 @@ func TestDiscordOnly_CreateProviderRejectsUnsupported(t *testing.T) {
service := NewNotificationService(db, nil)
testCases := []string{"slack", "generic"}
testCases := []string{"generic"}
for _, providerType := range testCases {
t.Run(providerType, func(t *testing.T) {


@@ -193,11 +193,12 @@ func TestSendJSONPayload_Slack(t *testing.T) {
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
svc := NewNotificationService(db, nil)
svc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: server.URL,
URL: "#test",
Token: server.URL,
Template: "custom",
Config: `{"text": {{toJSON .Message}}}`,
}
@@ -660,3 +661,96 @@ func TestSendJSONPayload_Telegram_401ErrorMessage(t *testing.T) {
require.Error(t, sendErr)
assert.Contains(t, sendErr.Error(), "provider returned status 401")
}
func TestSendJSONPayload_Ntfy_Valid(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
assert.Equal(t, "POST", r.Method)
assert.Equal(t, "application/json", r.Header.Get("Content-Type"))
assert.Empty(t, r.Header.Get("Authorization"), "no auth header when token is empty")
var payload map[string]any
err := json.NewDecoder(r.Body).Decode(&payload)
require.NoError(t, err)
assert.NotNil(t, payload["message"], "ntfy payload should have message field")
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "ntfy",
URL: server.URL,
Template: "custom",
Config: `{"message": {{toJSON .Message}}, "title": {{toJSON .Title}}}`,
}
data := map[string]any{
"Message": "Test notification",
"Title": "Test",
}
err = svc.sendJSONPayload(context.Background(), provider, data)
assert.NoError(t, err)
}
func TestSendJSONPayload_Ntfy_WithToken(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
assert.Equal(t, "Bearer tk_test123", r.Header.Get("Authorization"))
var payload map[string]any
err := json.NewDecoder(r.Body).Decode(&payload)
require.NoError(t, err)
assert.NotNil(t, payload["message"])
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "ntfy",
URL: server.URL,
Token: "tk_test123",
Template: "custom",
Config: `{"message": {{toJSON .Message}}, "title": {{toJSON .Title}}}`,
}
data := map[string]any{
"Message": "Test notification",
"Title": "Test",
}
err = svc.sendJSONPayload(context.Background(), provider, data)
assert.NoError(t, err)
}
func TestSendJSONPayload_Ntfy_MissingMessage(t *testing.T) {
db, err := gorm.Open(sqlite.Open("file::memory:"), &gorm.Config{})
require.NoError(t, err)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "ntfy",
URL: "http://localhost:9999",
Template: "custom",
Config: `{"title": "Test"}`,
}
data := map[string]any{
"Message": "Test",
}
err = svc.sendJSONPayload(context.Background(), provider, data)
assert.Error(t, err)
assert.Contains(t, err.Error(), "ntfy payload must include a 'message' field")
}


@@ -516,14 +516,16 @@ func TestNotificationService_TestProvider_Errors(t *testing.T) {
assert.Error(t, err)
})
t.Run("slack type not supported", func(t *testing.T) {
t.Run("slack with missing webhook URL", func(t *testing.T) {
provider := models.NotificationProvider{
Type: "slack",
URL: "https://hooks.slack.com/services/INVALID/WEBHOOK/URL",
Type: "slack",
URL: "#alerts",
Token: "",
Template: "minimal",
}
err := svc.TestProvider(provider)
assert.Error(t, err)
assert.Contains(t, err.Error(), "unsupported provider type")
assert.Contains(t, err.Error(), "slack webhook URL is not configured")
})
t.Run("webhook success", func(t *testing.T) {
@@ -1451,17 +1453,14 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
})
t.Run("slack_requires_text_or_blocks", func(t *testing.T) {
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
subSvc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
// Slack without text or blocks should fail
provider := models.NotificationProvider{
Type: "slack",
URL: server.URL,
URL: "#test",
Token: "https://hooks.slack.com/services/T00/B00/xxx",
Template: "custom",
Config: `{"message": {{toJSON .Message}}}`, // Missing text/blocks
Config: `{"username": "Charon"}`,
}
data := map[string]any{
"Title": "Test",
@@ -1470,7 +1469,7 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
err := subSvc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack payload requires 'text' or 'blocks' field")
})
@@ -1480,10 +1479,12 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
subSvc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: server.URL,
URL: "#test",
Token: server.URL,
Template: "custom",
Config: `{"text": {{toJSON .Message}}}`,
}
@@ -1494,7 +1495,7 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
err := subSvc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
})
@@ -1503,10 +1504,12 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
w.WriteHeader(http.StatusOK)
}))
defer server.Close()
subSvc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: server.URL,
URL: "#test",
Token: server.URL,
Template: "custom",
Config: `{"blocks": [{"type": "section", "text": {"type": "mrkdwn", "text": {{toJSON .Message}}}}]}`,
}
@@ -1517,7 +1520,7 @@ func TestSendJSONPayload_ServiceSpecificValidation(t *testing.T) {
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
err := subSvc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
})
@@ -1826,8 +1829,7 @@ func TestTestProvider_NotifyOnlyRejectsUnsupportedProvider(t *testing.T) {
providerType string
url string
}{
{"slack", "slack", "https://hooks.slack.com/services/T/B/X"},
{"pushover", "pushover", "pushover://token@user"},
{"sms", "sms", "sms://token@user"},
}
for _, tt := range tests {
@@ -2154,9 +2156,9 @@ func TestNotificationService_EnsureNotifyOnlyProviderMigration(t *testing.T) {
Enabled: true,
},
{
Name: "Pushover Provider (deprecated)",
Type: "pushover",
URL: "pushover://token@user",
Name: "Legacy SMS Provider (deprecated)",
Type: "legacy_sms",
URL: "sms://token@user",
Enabled: true,
},
{
@@ -2165,6 +2167,13 @@ func TestNotificationService_EnsureNotifyOnlyProviderMigration(t *testing.T) {
URL: "https://discord.com/api/webhooks/123/abc/gotify",
Enabled: true,
},
{
Name: "Pushover Provider",
Type: "pushover",
Token: "pushover-api-token",
URL: "pushover-user-key",
Enabled: true,
},
}
for i := range providers {
@@ -2185,7 +2194,7 @@ func TestNotificationService_EnsureNotifyOnlyProviderMigration(t *testing.T) {
assert.True(t, discord.Enabled, "discord provider should remain enabled")
// Verify non-Discord providers are marked as deprecated and disabled
nonDiscordTypes := []string{"webhook", "telegram", "pushover", "gotify"}
nonDiscordTypes := []string{"webhook", "telegram", "legacy_sms", "gotify", "pushover"}
for _, providerType := range nonDiscordTypes {
var provider models.NotificationProvider
require.NoError(t, db.Where("type = ?", providerType).First(&provider).Error)
@@ -3169,3 +3178,731 @@ func TestIsDispatchEnabled_TelegramDisabledByFlag(t *testing.T) {
db.Create(&models.Setting{Key: "feature.notifications.service.telegram.enabled", Value: "false"})
assert.False(t, svc.isDispatchEnabled("telegram"))
}
// --- Slack Notification Provider Tests ---
func TestSlackWebhookURLValidation(t *testing.T) {
tests := []struct {
name string
url string
wantErr bool
}{
{"valid_url", "https://hooks.slack.com/services/T00000000/B00000000/abcdefghijklmnop", false},
{"valid_url_with_dashes", "https://hooks.slack.com/services/T0-A_z/B0-A_z/abc-def_123", false},
{"http_scheme", "http://hooks.slack.com/services/T00000000/B00000000/abcdefghijklmnop", true},
{"wrong_host", "https://evil.com/services/T00000000/B00000000/abcdefghijklmnop", true},
{"ip_address", "https://192.168.1.1/services/T00000000/B00000000/abcdefghijklmnop", true},
{"missing_T_prefix", "https://hooks.slack.com/services/X00000000/B00000000/abcdefghijklmnop", true},
{"missing_B_prefix", "https://hooks.slack.com/services/T00000000/X00000000/abcdefghijklmnop", true},
{"query_params", "https://hooks.slack.com/services/T00000000/B00000000/abcdefghijklmnop?token=leak", true},
{"empty_string", "", true},
{"just_host", "https://hooks.slack.com", true},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
err := validateSlackWebhookURL(tt.url)
if tt.wantErr {
assert.Error(t, err)
} else {
assert.NoError(t, err)
}
})
}
}
func TestSlackWebhookURLValidation_RejectsHTTP(t *testing.T) {
err := validateSlackWebhookURL("http://hooks.slack.com/services/T00000/B00000/token123")
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestSlackWebhookURLValidation_RejectsIPAddress(t *testing.T) {
err := validateSlackWebhookURL("https://192.168.1.1/services/T00000/B00000/token123")
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestSlackWebhookURLValidation_RejectsWrongHost(t *testing.T) {
err := validateSlackWebhookURL("https://evil.com/services/T00000/B00000/token123")
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestSlackWebhookURLValidation_RejectsQueryParams(t *testing.T) {
err := validateSlackWebhookURL("https://hooks.slack.com/services/T00000/B00000/token123?token=leak")
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestNotificationService_CreateProvider_Slack(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Slack Alerts",
Type: "slack",
URL: "#alerts",
Token: "https://hooks.slack.com/services/T00000/B00000/xxxx",
}
err := svc.CreateProvider(provider)
require.NoError(t, err)
var saved models.NotificationProvider
require.NoError(t, db.Where("id = ?", provider.ID).First(&saved).Error)
assert.Equal(t, "https://hooks.slack.com/services/T00000/B00000/xxxx", saved.Token)
assert.Equal(t, "#alerts", saved.URL)
assert.Equal(t, "slack", saved.Type)
}
func TestNotificationService_CreateProvider_Slack_ClearsTokenField(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Webhook Test",
Type: "webhook",
URL: "https://example.com/hook",
Token: "should-be-cleared",
}
err := svc.CreateProvider(provider)
require.NoError(t, err)
var saved models.NotificationProvider
require.NoError(t, db.Where("id = ?", provider.ID).First(&saved).Error)
assert.Empty(t, saved.Token)
}
func TestNotificationService_UpdateProvider_Slack_PreservesToken(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
existing := models.NotificationProvider{
ID: "prov-slack-token",
Type: "slack",
Name: "Slack Alerts",
URL: "#alerts",
Token: "https://hooks.slack.com/services/T00000/B00000/xxxx",
}
require.NoError(t, db.Create(&existing).Error)
update := models.NotificationProvider{
ID: "prov-slack-token",
Type: "slack",
Name: "Slack Alerts Updated",
URL: "#general",
Token: "",
}
err := svc.UpdateProvider(&update)
require.NoError(t, err)
assert.Equal(t, "https://hooks.slack.com/services/T00000/B00000/xxxx", update.Token)
}
func TestNotificationService_TestProvider_Slack(t *testing.T) {
db := setupNotificationTestDB(t)
var capturedBody []byte
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
capturedBody, _ = io.ReadAll(r.Body)
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("ok"))
}))
defer server.Close()
svc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: "#test",
Token: server.URL,
Template: "minimal",
}
err := svc.TestProvider(provider)
require.NoError(t, err)
var payload map[string]any
require.NoError(t, json.Unmarshal(capturedBody, &payload))
assert.NotEmpty(t, payload["text"])
}
func TestNotificationService_SendExternal_Slack(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
received := make(chan []byte, 1)
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
body, _ := io.ReadAll(r.Body)
received <- body
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("ok"))
}))
defer server.Close()
svc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Name: "Slack E2E",
Type: "slack",
URL: "#alerts",
Token: server.URL,
Enabled: true,
NotifyProxyHosts: true,
Template: "minimal",
}
require.NoError(t, svc.CreateProvider(&provider))
svc.SendExternal(context.Background(), "proxy_host", "Title", "Message", nil)
select {
case body := <-received:
var payload map[string]any
require.NoError(t, json.Unmarshal(body, &payload))
assert.NotEmpty(t, payload["text"])
case <-time.After(2 * time.Second):
t.Fatal("Timed out waiting for slack webhook")
}
}
func TestNotificationService_Slack_PayloadNormalizesMessageToText(t *testing.T) {
db := setupNotificationTestDB(t)
var capturedBody []byte
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
capturedBody, _ = io.ReadAll(r.Body)
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte("ok"))
}))
defer server.Close()
svc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: "#test",
Token: server.URL,
Template: "custom",
Config: `{"message": {{toJSON .Message}}}`,
}
data := map[string]any{
"Title": "Test",
"Message": "Normalize me",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
var payload map[string]any
require.NoError(t, json.Unmarshal(capturedBody, &payload))
assert.Equal(t, "Normalize me", payload["text"])
}
func TestNotificationService_Slack_PayloadRequiresTextOrBlocks(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil, WithSlackURLValidator(func(string) error { return nil }))
provider := models.NotificationProvider{
Type: "slack",
URL: "#test",
Token: "https://hooks.slack.com/services/T00/B00/xxx",
Template: "custom",
Config: `{"title": {{toJSON .Title}}}`,
}
data := map[string]any{
"Title": "Test",
"Message": "Test Message",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack payload requires 'text' or 'blocks' field")
}
func TestFlagSlackServiceEnabled_ConstantValue(t *testing.T) {
assert.Equal(t, "feature.notifications.service.slack.enabled", notifications.FlagSlackServiceEnabled)
}
func TestNotificationService_Slack_IsDispatchEnabled(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
svc := NewNotificationService(db, nil)
assert.True(t, svc.isDispatchEnabled("slack"))
db.Create(&models.Setting{Key: "feature.notifications.service.slack.enabled", Value: "false"})
assert.False(t, svc.isDispatchEnabled("slack"))
}
func TestNotificationService_Slack_TokenNotExposedInList(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Slack Secret",
Type: "slack",
URL: "#secret",
Token: "https://hooks.slack.com/services/T00000/B00000/secrettoken",
}
require.NoError(t, svc.CreateProvider(provider))
providers, err := svc.ListProviders()
require.NoError(t, err)
require.Len(t, providers, 1)
providers[0].HasToken = providers[0].Token != ""
providers[0].Token = ""
assert.True(t, providers[0].HasToken)
assert.Empty(t, providers[0].Token)
}
func TestSendJSONPayload_Slack_EmptyWebhookURLReturnsError(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "slack",
URL: "#alerts",
Token: "",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Should fail before dispatch",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack webhook URL is not configured")
}
func TestSendJSONPayload_Slack_WhitespaceOnlyWebhookURLReturnsError(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "slack",
URL: "#alerts",
Token: " ",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Should fail before dispatch",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack webhook URL is not configured")
}
func TestSendJSONPayload_Slack_InvalidWebhookURLReturnsValidationError(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "slack",
URL: "#alerts",
Token: "https://evil.com/not-a-slack-webhook",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Should fail URL validation",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestCreateProvider_Slack_EmptyTokenRejected(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Slack Missing Token",
Type: "slack",
URL: "#alerts",
Token: "",
}
err := svc.CreateProvider(provider)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack webhook URL is required")
}
func TestCreateProvider_Slack_WhitespaceOnlyTokenRejected(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Slack Whitespace Token",
Type: "slack",
URL: "#alerts",
Token: " ",
}
err := svc.CreateProvider(provider)
require.Error(t, err)
assert.Contains(t, err.Error(), "slack webhook URL is required")
}
func TestCreateProvider_Slack_InvalidTokenRejected(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := &models.NotificationProvider{
Name: "Slack Bad Token",
Type: "slack",
URL: "#alerts",
Token: "https://evil.com/not-a-slack-webhook",
}
err := svc.CreateProvider(provider)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestUpdateProvider_Slack_InvalidNewTokenRejected(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
existing := models.NotificationProvider{
ID: "prov-slack-update-invalid",
Type: "slack",
Name: "Slack Alerts",
URL: "#alerts",
Token: "https://hooks.slack.com/services/T00000/B00000/xxxx",
}
require.NoError(t, db.Create(&existing).Error)
update := models.NotificationProvider{
ID: "prov-slack-update-invalid",
Type: "slack",
Name: "Slack Alerts",
URL: "#alerts",
Token: "https://evil.com/not-a-slack-webhook",
}
err := svc.UpdateProvider(&update)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid Slack webhook URL")
}
func TestUpdateProvider_Slack_UnchangedTokenSkipsValidation(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
existing := models.NotificationProvider{
ID: "prov-slack-update-unchanged",
Type: "slack",
Name: "Slack Alerts",
URL: "#alerts",
Token: "https://hooks.slack.com/services/T00000/B00000/xxxx",
}
require.NoError(t, db.Create(&existing).Error)
// Submitting empty token causes fallback to existing — should not re-validate
update := models.NotificationProvider{
ID: "prov-slack-update-unchanged",
Type: "slack",
Name: "Slack Alerts Renamed",
URL: "#general",
Token: "",
}
err := svc.UpdateProvider(&update)
require.NoError(t, err)
}
// --- Pushover Notification Provider Tests ---
func TestPushoverDispatch_Success(t *testing.T) {
db := setupNotificationTestDB(t)
var capturedBody []byte
var capturedURL string
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
capturedURL = r.URL.Path
capturedBody, _ = io.ReadAll(r.Body)
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(`{}`))
}))
defer server.Close()
svc := NewNotificationService(db, nil)
svc.pushoverAPIBaseURL = server.URL
provider := models.NotificationProvider{
Type: "pushover",
Token: "app-token-abc",
URL: "user-key-xyz",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Hello Pushover",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
assert.Equal(t, "/1/messages.json", capturedURL)
var payload map[string]any
require.NoError(t, json.Unmarshal(capturedBody, &payload))
assert.Equal(t, "app-token-abc", payload["token"])
assert.Equal(t, "user-key-xyz", payload["user"])
assert.NotEmpty(t, payload["message"])
}
func TestPushoverDispatch_MissingToken(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "pushover",
Token: "",
URL: "user-key-xyz",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Hello",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "pushover API token is not configured")
}
func TestPushoverDispatch_MissingUserKey(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "pushover",
Token: "app-token-abc",
URL: "",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Hello",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "pushover user key is not configured")
}
func TestPushoverDispatch_MessageFieldRequired(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "pushover",
Token: "app-token-abc",
URL: "user-key-xyz",
Template: "custom",
Config: `{"title": {{toJSON .Title}}}`,
}
data := map[string]any{
"Title": "Test",
"Message": "Hello",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "pushover payload requires 'message' field")
}
func TestPushoverDispatch_EmergencyPriorityRejected(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
provider := models.NotificationProvider{
Type: "pushover",
Token: "app-token-abc",
URL: "user-key-xyz",
Template: "custom",
Config: `{"message": {{toJSON .Message}}, "priority": 2}`,
}
data := map[string]any{
"Title": "Emergency",
"Message": "Critical alert",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.Error(t, err)
assert.Contains(t, err.Error(), "pushover emergency priority (2) requires retry and expire parameters")
}
func TestPushoverDispatch_PayloadInjection(t *testing.T) {
db := setupNotificationTestDB(t)
var capturedBody []byte
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
capturedBody, _ = io.ReadAll(r.Body)
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(`{}`))
}))
defer server.Close()
svc := NewNotificationService(db, nil)
svc.pushoverAPIBaseURL = server.URL
// Template tries to set token/user — server-side injection must overwrite them.
provider := models.NotificationProvider{
Type: "pushover",
Token: "real-token",
URL: "real-user-key",
Template: "custom",
Config: `{"message": "hi", "token": "fake-token", "user": "fake-user"}`,
}
data := map[string]any{
"Title": "Test",
"Message": "hi",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
var payload map[string]any
require.NoError(t, json.Unmarshal(capturedBody, &payload))
assert.Equal(t, "real-token", payload["token"])
assert.Equal(t, "real-user-key", payload["user"])
}
func TestPushoverDispatch_FeatureFlagDisabled(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
db.Create(&models.Setting{Key: "feature.notifications.service.pushover.enabled", Value: "false"})
svc := NewNotificationService(db, nil)
assert.False(t, svc.isDispatchEnabled("pushover"))
}
func TestPushoverDispatch_SSRFValidation(t *testing.T) {
db := setupNotificationTestDB(t)
var capturedHost string
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
capturedHost = r.Host
w.WriteHeader(http.StatusOK)
_, _ = w.Write([]byte(`{}`))
}))
defer server.Close()
svc := NewNotificationService(db, nil)
svc.pushoverAPIBaseURL = server.URL
provider := models.NotificationProvider{
Type: "pushover",
Token: "app-token-abc",
URL: "user-key-xyz",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "SSRF check",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
err := svc.sendJSONPayload(context.Background(), provider, data)
require.NoError(t, err)
// The request must hit the configured base URL (the test server here);
// production code pins the base URL to api.pushover.net.
assert.Equal(t, server.Listener.Addr().String(), capturedHost)
}
func TestIsDispatchEnabled_PushoverDefaultTrue(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
// No flag in DB — should default to true (enabled)
assert.True(t, svc.isDispatchEnabled("pushover"))
}
func TestIsDispatchEnabled_PushoverDisabledByFlag(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
db.Create(&models.Setting{Key: "feature.notifications.service.pushover.enabled", Value: "false"})
svc := NewNotificationService(db, nil)
assert.False(t, svc.isDispatchEnabled("pushover"))
}
func TestPushoverDispatch_DefaultBaseURL(t *testing.T) {
db := setupNotificationTestDB(t)
svc := NewNotificationService(db, nil)
// Reset the test seam to "" so the defensive 'if pushoverBase == ""' path executes,
// setting it to the production URL "https://api.pushover.net".
svc.pushoverAPIBaseURL = ""
provider := models.NotificationProvider{
Type: "pushover",
Token: "test-token",
URL: "test-user-key",
Template: "minimal",
}
data := map[string]any{
"Title": "Test",
"Message": "Hello",
"Time": time.Now().Format(time.RFC3339),
"EventType": "test",
}
// Pre-cancel the context so the HTTP send fails immediately.
// The defensive path (assigning the production base URL) still executes before any I/O.
ctx, cancel := context.WithCancel(context.Background())
cancel()
err := svc.sendJSONPayload(ctx, provider, data)
require.Error(t, err)
}
func TestIsSupportedNotificationProviderType_Ntfy(t *testing.T) {
assert.True(t, isSupportedNotificationProviderType("ntfy"))
assert.True(t, isSupportedNotificationProviderType("Ntfy"))
assert.True(t, isSupportedNotificationProviderType(" ntfy "))
}
func TestIsDispatchEnabled_NtfyDefaultTrue(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
svc := NewNotificationService(db, nil)
assert.True(t, svc.isDispatchEnabled("ntfy"))
}
func TestIsDispatchEnabled_NtfyDisabledByFlag(t *testing.T) {
db := setupNotificationTestDB(t)
_ = db.AutoMigrate(&models.Setting{})
db.Create(&models.Setting{Key: "feature.notifications.service.ntfy.enabled", Value: "false"})
svc := NewNotificationService(db, nil)
assert.False(t, svc.isDispatchEnabled("ntfy"))
}
func TestSupportsJSONTemplates_Ntfy(t *testing.T) {
assert.True(t, supportsJSONTemplates("ntfy"))
assert.True(t, supportsJSONTemplates("Ntfy"))
}

View File

@@ -150,6 +150,7 @@ func (s *SecurityService) Upsert(cfg *models.SecurityConfig) error {
existing.WAFParanoiaLevel = cfg.WAFParanoiaLevel
existing.WAFExclusions = cfg.WAFExclusions
existing.RateLimitEnable = cfg.RateLimitEnable
existing.RateLimitMode = cfg.RateLimitMode
existing.RateLimitBurst = cfg.RateLimitBurst
existing.RateLimitRequests = cfg.RateLimitRequests
existing.RateLimitWindowSec = cfg.RateLimitWindowSec

View File

@@ -742,6 +742,10 @@ func (s *UptimeService) checkMonitor(monitor models.UptimeMonitor) {
security.WithAllowLocalhost(),
security.WithAllowHTTP(),
security.WithTimeout(3*time.Second),
// Admin-configured uptime monitors may target RFC 1918 private hosts.
// Link-local (169.254.x.x), cloud metadata, and all other restricted
// ranges remain blocked at both validation layers.
security.WithAllowRFC1918(),
)
if err != nil {
msg = fmt.Sprintf("security validation failed: %s", err.Error())
@@ -756,6 +760,11 @@ func (s *UptimeService) checkMonitor(monitor models.UptimeMonitor) {
// Uptime monitors are an explicit admin-configured feature and commonly
// target loopback in local/dev setups (and in unit tests).
network.WithAllowLocalhost(),
// Mirror security.WithAllowRFC1918() above so the dial-time SSRF guard
// (Layer 2) permits the same RFC 1918 address space as URL validation
// (Layer 1). Without this, safeDialer would re-block private IPs that
// already passed URL validation, defeating the dual-layer bypass.
network.WithAllowRFC1918(),
)
ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
@@ -784,6 +793,10 @@ func (s *UptimeService) checkMonitor(monitor models.UptimeMonitor) {
msg = err.Error()
}
case "tcp":
// TCP monitors dial the configured host:port directly without URL validation.
// RFC 1918 addresses are intentionally permitted: TCP monitors are only created
// for RemoteServer entries, which are admin-configured and whose target is
// constructed internally from trusted fields (not raw user input).
conn, err := net.DialTimeout("tcp", monitor.URL, 10*time.Second)
if err == nil {
if closeErr := conn.Close(); closeErr != nil {

View File

@@ -10,6 +10,7 @@ import (
"github.com/Wikid82/charon/backend/internal/models"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
@@ -86,15 +87,22 @@ func TestUptimeService_CheckAll(t *testing.T) {
go func() { _ = server.Serve(listener) }()
defer func() { _ = server.Close() }()
// Wait for HTTP server to be ready by making a test request
// Wait for HTTP server to be ready by making a test request.
// Fail the test immediately if the server is still unreachable after all
// attempts so subsequent assertions don't produce misleading failures.
serverReady := false
for i := 0; i < 10; i++ {
conn, dialErr := net.DialTimeout("tcp", addr.String(), 100*time.Millisecond)
if dialErr == nil {
_ = conn.Close()
serverReady = true
break
}
time.Sleep(10 * time.Millisecond)
}
if !serverReady {
t.Fatalf("test HTTP server never became reachable on %s", addr.String())
}
// Create a listener and close it immediately to get a free port that is definitely closed (DOWN)
downListener, err := net.Listen("tcp", "127.0.0.1:0")
@@ -115,7 +123,7 @@ func TestUptimeService_CheckAll(t *testing.T) {
ForwardPort: addr.Port,
Enabled: true,
}
db.Create(&upHost)
require.NoError(t, db.Create(&upHost).Error)
downHost := models.ProxyHost{
UUID: "uuid-2",
@@ -124,7 +132,7 @@ func TestUptimeService_CheckAll(t *testing.T) {
ForwardPort: downAddr.Port,
Enabled: true,
}
db.Create(&downHost)
require.NoError(t, db.Create(&downHost).Error)
// Sync Monitors (this creates UptimeMonitor records)
err = us.SyncMonitors()
@@ -198,11 +206,11 @@ func TestUptimeService_ListMonitors(t *testing.T) {
ns := NewNotificationService(db, nil)
us := newTestUptimeService(t, db, ns)
db.Create(&models.UptimeMonitor{
require.NoError(t, db.Create(&models.UptimeMonitor{
Name: "Test Monitor",
Type: "http",
URL: "https://discord.com/api/webhooks/123/abc",
})
}).Error)
monitors, err := us.ListMonitors()
assert.NoError(t, err)
@@ -224,7 +232,7 @@ func TestUptimeService_GetMonitorByID(t *testing.T) {
Enabled: true,
Status: "up",
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
t.Run("get existing monitor", func(t *testing.T) {
result, err := us.GetMonitorByID(monitor.ID)
@@ -252,20 +260,20 @@ func TestUptimeService_GetMonitorHistory(t *testing.T) {
ID: "monitor-1",
Name: "Test Monitor",
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
db.Create(&models.UptimeHeartbeat{
require.NoError(t, db.Create(&models.UptimeHeartbeat{
MonitorID: monitor.ID,
Status: "up",
Latency: 10,
CreatedAt: time.Now().Add(-1 * time.Minute),
})
db.Create(&models.UptimeHeartbeat{
}).Error)
require.NoError(t, db.Create(&models.UptimeHeartbeat{
MonitorID: monitor.ID,
Status: "down",
Latency: 0,
CreatedAt: time.Now(),
})
}).Error)
history, err := us.GetMonitorHistory(monitor.ID, 100)
assert.NoError(t, err)
@@ -295,8 +303,8 @@ func TestUptimeService_SyncMonitors_Errors(t *testing.T) {
// Create proxy hosts
host1 := models.ProxyHost{UUID: "test-1", DomainNames: "test1.com", Enabled: true}
host2 := models.ProxyHost{UUID: "test-2", DomainNames: "test2.com", Enabled: false}
db.Create(&host1)
db.Create(&host2)
require.NoError(t, db.Create(&host1).Error)
require.NoError(t, db.Create(&host2).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -312,7 +320,7 @@ func TestUptimeService_SyncMonitors_Errors(t *testing.T) {
us := newTestUptimeService(t, db, ns)
host := models.ProxyHost{UUID: "test-1", DomainNames: "test1.com", Enabled: true}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -340,7 +348,7 @@ func TestUptimeService_SyncMonitors_NameSync(t *testing.T) {
us := newTestUptimeService(t, db, ns)
host := models.ProxyHost{UUID: "test-1", Name: "Original Name", DomainNames: "test1.com", Enabled: true}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -366,7 +374,7 @@ func TestUptimeService_SyncMonitors_NameSync(t *testing.T) {
us := newTestUptimeService(t, db, ns)
host := models.ProxyHost{UUID: "test-2", Name: "", DomainNames: "fallback.com, secondary.com", Enabled: true}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -382,7 +390,7 @@ func TestUptimeService_SyncMonitors_NameSync(t *testing.T) {
us := newTestUptimeService(t, db, ns)
host := models.ProxyHost{UUID: "test-3", Name: "Named Host", DomainNames: "domain.com", Enabled: true}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -417,7 +425,7 @@ func TestUptimeService_SyncMonitors_TCPMigration(t *testing.T) {
ForwardPort: 8080,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Manually create old-style TCP monitor (simulating legacy data)
oldMonitor := models.UptimeMonitor{
@@ -429,7 +437,7 @@ func TestUptimeService_SyncMonitors_TCPMigration(t *testing.T) {
Enabled: true,
Status: "pending",
}
db.Create(&oldMonitor)
require.NoError(t, db.Create(&oldMonitor).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -453,7 +461,7 @@ func TestUptimeService_SyncMonitors_TCPMigration(t *testing.T) {
ForwardPort: 8080,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Create TCP monitor with custom URL (user-configured)
customMonitor := models.UptimeMonitor{
@@ -465,7 +473,7 @@ func TestUptimeService_SyncMonitors_TCPMigration(t *testing.T) {
Enabled: true,
Status: "pending",
}
db.Create(&customMonitor)
require.NoError(t, db.Create(&customMonitor).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -491,7 +499,7 @@ func TestUptimeService_SyncMonitors_HTTPSUpgrade(t *testing.T) {
SSLForced: false,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Create HTTP monitor
httpMonitor := models.UptimeMonitor{
@@ -503,7 +511,7 @@ func TestUptimeService_SyncMonitors_HTTPSUpgrade(t *testing.T) {
Enabled: true,
Status: "pending",
}
db.Create(&httpMonitor)
require.NoError(t, db.Create(&httpMonitor).Error)
// Sync first (no change expected)
err := us.SyncMonitors()
@@ -536,7 +544,7 @@ func TestUptimeService_SyncMonitors_HTTPSUpgrade(t *testing.T) {
SSLForced: false,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Create HTTPS monitor
httpsMonitor := models.UptimeMonitor{
@@ -548,7 +556,7 @@ func TestUptimeService_SyncMonitors_HTTPSUpgrade(t *testing.T) {
Enabled: true,
Status: "pending",
}
db.Create(&httpsMonitor)
require.NoError(t, db.Create(&httpsMonitor).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -573,7 +581,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "http",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -598,7 +606,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -621,7 +629,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "https",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -653,7 +661,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "http",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -686,7 +694,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "http",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -718,7 +726,7 @@ func TestUptimeService_SyncMonitors_RemoteServers(t *testing.T) {
Scheme: "",
Enabled: true,
}
db.Create(&server)
require.NoError(t, db.Create(&server).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -772,7 +780,7 @@ func TestUptimeService_CheckAll_Errors(t *testing.T) {
Enabled: true,
ProxyHostID: &orphanID, // Non-existent host
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// CheckAll should not panic
us.CheckAll()
@@ -805,7 +813,7 @@ func TestUptimeService_CheckAll_Errors(t *testing.T) {
ForwardPort: 9999,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err := us.SyncMonitors()
assert.NoError(t, err)
@@ -1104,7 +1112,7 @@ func TestUptimeService_CheckMonitor_EdgeCases(t *testing.T) {
URL: "://invalid-url",
Status: "pending",
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
us.CheckAll()
time.Sleep(500 * time.Millisecond) // Increased wait time
@@ -1140,7 +1148,7 @@ func TestUptimeService_CheckMonitor_EdgeCases(t *testing.T) {
ForwardPort: addr.Port,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
err = us.SyncMonitors()
assert.NoError(t, err)
@@ -1169,7 +1177,7 @@ func TestUptimeService_CheckMonitor_EdgeCases(t *testing.T) {
URL: "https://expired.badssl.com/",
Status: "pending",
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
us.CheckAll()
time.Sleep(3 * time.Second) // HTTPS checks can take longer
@@ -1198,16 +1206,16 @@ func TestUptimeService_GetMonitorHistory_EdgeCases(t *testing.T) {
us := newTestUptimeService(t, db, ns)
monitor := models.UptimeMonitor{ID: "monitor-limit", Name: "Limit Test"}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// Create 10 heartbeats
for i := 0; i < 10; i++ {
db.Create(&models.UptimeHeartbeat{
require.NoError(t, db.Create(&models.UptimeHeartbeat{
MonitorID: monitor.ID,
Status: "up",
Latency: int64(i),
CreatedAt: time.Now().Add(time.Duration(i) * time.Second),
})
}).Error)
}
history, err := us.GetMonitorHistory(monitor.ID, 5)
@@ -1233,7 +1241,7 @@ func TestUptimeService_ListMonitors_EdgeCases(t *testing.T) {
us := newTestUptimeService(t, db, ns)
host := models.ProxyHost{UUID: "test-host", DomainNames: "test.com", Enabled: true}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
monitor := models.UptimeMonitor{
ID: "with-host",
@@ -1242,7 +1250,7 @@ func TestUptimeService_ListMonitors_EdgeCases(t *testing.T) {
URL: "http://test.com",
ProxyHostID: &host.ID,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
monitors, err := us.ListMonitors()
assert.NoError(t, err)
@@ -1265,7 +1273,7 @@ func TestUptimeService_UpdateMonitor(t *testing.T) {
MaxRetries: 3,
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
updates := map[string]any{
"max_retries": 5,
@@ -1286,7 +1294,7 @@ func TestUptimeService_UpdateMonitor(t *testing.T) {
Name: "Interval Test",
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
updates := map[string]any{
"interval": 120,
@@ -1321,7 +1329,7 @@ func TestUptimeService_UpdateMonitor(t *testing.T) {
MaxRetries: 3,
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
updates := map[string]any{
"max_retries": 10,
@@ -1348,7 +1356,7 @@ func TestUptimeService_NotificationBatching(t *testing.T) {
Name: "Test Server",
Status: "up",
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Create multiple monitors pointing to the same host
monitors := []models.UptimeMonitor{
@@ -1357,7 +1365,7 @@ func TestUptimeService_NotificationBatching(t *testing.T) {
{ID: "mon-3", Name: "Service C", UpstreamHost: "192.168.1.100", UptimeHostID: &host.ID, Status: "up", MaxRetries: 3},
}
for _, m := range monitors {
db.Create(&m)
require.NoError(t, db.Create(&m).Error)
}
// Queue down notifications for all three
@@ -1401,7 +1409,7 @@ func TestUptimeService_NotificationBatching(t *testing.T) {
Name: "Single Service Host",
Status: "up",
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
monitor := models.UptimeMonitor{
ID: "single-mon",
@@ -1411,7 +1419,7 @@ func TestUptimeService_NotificationBatching(t *testing.T) {
Status: "up",
MaxRetries: 3,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// Queue single down notification
us.queueDownNotification(monitor, "HTTP 502", "5h 30m")
@@ -1443,7 +1451,7 @@ func TestUptimeService_HostLevelCheck(t *testing.T) {
ForwardHost: "10.0.0.50",
ForwardPort: 8080,
}
db.Create(&proxyHost)
require.NoError(t, db.Create(&proxyHost).Error)
// Sync monitors
err := us.SyncMonitors()
@@ -1475,7 +1483,7 @@ func TestUptimeService_HostLevelCheck(t *testing.T) {
{UUID: "ph-3", DomainNames: "app3.example.com", ForwardHost: "10.0.0.100", ForwardPort: 8082, Name: "App 3"},
}
for _, h := range hosts {
db.Create(&h)
require.NoError(t, db.Create(&h).Error)
}
// Sync monitors
@@ -1533,7 +1541,7 @@ func TestUptimeService_SyncMonitorForHost(t *testing.T) {
SSLForced: false,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Sync monitors to create the uptime monitor
err := us.SyncMonitors()
@@ -1580,7 +1588,7 @@ func TestUptimeService_SyncMonitorForHost(t *testing.T) {
ForwardPort: 8080,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Call SyncMonitorForHost - should return nil without error
err := us.SyncMonitorForHost(host.ID)
@@ -1616,7 +1624,7 @@ func TestUptimeService_SyncMonitorForHost(t *testing.T) {
ForwardPort: 8080,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Sync monitors
err := us.SyncMonitors()
@@ -1652,7 +1660,7 @@ func TestUptimeService_SyncMonitorForHost(t *testing.T) {
SSLForced: true,
Enabled: true,
}
db.Create(&host)
require.NoError(t, db.Create(&host).Error)
// Sync monitors
err := us.SyncMonitors()
@@ -1686,7 +1694,7 @@ func TestUptimeService_DeleteMonitor(t *testing.T) {
Status: "up",
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// Create some heartbeats
for i := 0; i < 5; i++ {
@@ -1696,7 +1704,7 @@ func TestUptimeService_DeleteMonitor(t *testing.T) {
Latency: int64(100 + i),
CreatedAt: time.Now().Add(-time.Duration(i) * time.Minute),
}
db.Create(&hb)
require.NoError(t, db.Create(&hb).Error)
}
// Verify heartbeats exist
@@ -1742,7 +1750,7 @@ func TestUptimeService_DeleteMonitor(t *testing.T) {
Status: "pending",
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// Delete the monitor
err := us.DeleteMonitor(monitor.ID)
@@ -1768,7 +1776,7 @@ func TestUptimeService_UpdateMonitor_EnabledField(t *testing.T) {
Enabled: true,
Interval: 60,
}
db.Create(&monitor)
require.NoError(t, db.Create(&monitor).Error)
// Disable the monitor
updates := map[string]any{
@@ -1788,3 +1796,97 @@ func TestUptimeService_UpdateMonitor_EnabledField(t *testing.T) {
assert.NoError(t, err)
assert.True(t, result.Enabled)
}
// PR-3: RFC 1918 bypass integration tests
func TestCheckMonitor_HTTP_LocalhostSucceedsWithPrivateIPBypass(t *testing.T) {
// Confirm that after the dual-layer RFC 1918 bypass is wired into
// checkMonitor, an HTTP monitor targeting the loopback interface still
// reports "up" (localhost is explicitly allowed by WithAllowLocalhost).
db := setupUptimeTestDB(t)
ns := NewNotificationService(db, nil)
us := newTestUptimeService(t, db, ns)
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to start listener: %v", err)
}
addr := listener.Addr().(*net.TCPAddr)
server := &http.Server{
Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}),
ReadHeaderTimeout: 5 * time.Second,
}
go func() { _ = server.Serve(listener) }()
t.Cleanup(func() {
_ = server.Close()
})
// Wait for server to be ready before creating the monitor.
// Fail fast if it never becomes reachable so later assertions aren't misleading.
serverReady := false
for i := 0; i < 20; i++ {
conn, dialErr := net.DialTimeout("tcp", addr.String(), 50*time.Millisecond)
if dialErr == nil {
_ = conn.Close()
serverReady = true
break
}
time.Sleep(10 * time.Millisecond)
}
if !serverReady {
t.Fatalf("test HTTP server never became reachable on %s", addr.String())
}
monitor := models.UptimeMonitor{
ID: "pr3-http-localhost-test",
Name: "HTTP Localhost RFC1918 Bypass",
Type: "http",
URL: fmt.Sprintf("http://127.0.0.1:%d", addr.Port),
Status: "pending",
Enabled: true,
}
require.NoError(t, db.Create(&monitor).Error)
us.CheckMonitor(monitor)
var result models.UptimeMonitor
require.NoError(t, db.First(&result, "id = ?", monitor.ID).Error)
assert.Equal(t, "up", result.Status, "HTTP monitor on localhost should be up with RFC1918 bypass")
}
func TestCheckMonitor_TCP_AcceptsRFC1918Address(t *testing.T) {
// TCP monitors bypass URL validation entirely and dial directly.
// Confirm that a TCP monitor targeting the loopback interface reports "up"
// after the RFC 1918 bypass changes.
db := setupUptimeTestDB(t)
ns := NewNotificationService(db, nil)
us := newTestUptimeService(t, db, ns)
listener, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatalf("failed to start TCP listener: %v", err)
}
addr := listener.Addr().(*net.TCPAddr)
go func() {
for {
conn, acceptErr := listener.Accept()
if acceptErr != nil {
return
}
_ = conn.Close()
}
}()
t.Cleanup(func() { _ = listener.Close() })
monitor := models.UptimeMonitor{
ID: "pr3-tcp-rfc1918-test",
Name: "TCP RFC1918 Accepted",
Type: "tcp",
URL: addr.String(),
Status: "pending",
Enabled: true,
}
require.NoError(t, db.Create(&monitor).Error)
us.CheckMonitor(monitor)
var result models.UptimeMonitor
require.NoError(t, db.First(&result, "id = ?", monitor.ID).Error)
assert.Equal(t, "up", result.Status, "TCP monitor to loopback should report up")
}

View File

@@ -53,6 +53,7 @@ logger.Infof("API Key: %s", apiKey)
```
Charon's masking rules:
- Empty: `[empty]`
- Short (< 16 chars): `[REDACTED]`
- Normal (≥ 16 chars): `abcd...xyz9` (first 4 + last 4)
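The three rules above can be sketched as a small helper. This is an illustrative sketch only; `maskKey` is a hypothetical name, not necessarily Charon's actual function.

```go
package main

import "fmt"

// maskKey applies the masking rules described above: empty values,
// short values (< 16 chars), and first-4/last-4 masking for the rest.
// Hypothetical helper for illustration only.
func maskKey(key string) string {
	switch {
	case key == "":
		return "[empty]"
	case len(key) < 16:
		return "[REDACTED]"
	default:
		return key[:4] + "..." + key[len(key)-4:]
	}
}

func main() {
	fmt.Println(maskKey(""))                     // [empty]
	fmt.Println(maskKey("short"))                // [REDACTED]
	fmt.Println(maskKey("abcd1234efgh5678xyz9")) // abcd...xyz9
}
```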
@@ -68,6 +69,7 @@ if !validateAPIKeyFormat(apiKey) {
```
Requirements:
- Length: 16-128 characters
- Charset: Alphanumeric + underscore + hyphen
- No spaces or special characters
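Those requirements map directly onto a single anchored regular expression. A minimal sketch, assuming the stated rules are exhaustive; the real `validateAPIKeyFormat` may differ.

```go
package main

import (
	"fmt"
	"regexp"
)

// apiKeyPattern encodes the stated requirements: 16-128 characters,
// alphanumeric plus underscore and hyphen, nothing else.
var apiKeyPattern = regexp.MustCompile(`^[A-Za-z0-9_-]{16,128}$`)

// validateAPIKeyFormat reports whether key satisfies the format rules.
// Illustrative sketch of the check described above.
func validateAPIKeyFormat(key string) bool {
	return apiKeyPattern.MatchString(key)
}

func main() {
	fmt.Println(validateAPIKeyFormat("abc_123-XYZ_456-789")) // 19 chars, valid charset
	fmt.Println(validateAPIKeyFormat("too short"))           // space and too short
}
```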
@@ -99,6 +101,7 @@ Rotate secrets regularly:
### What to Log
**Safe to log**:
- Timestamps
- User IDs (not usernames if PII)
- IP addresses (consider GDPR implications)
@@ -108,6 +111,7 @@ Rotate secrets regularly:
- Performance metrics
**Never log**:
- Passwords or password hashes
- API keys or tokens (use masking)
- Session IDs (full values)
@@ -139,6 +143,7 @@ logger.Infof("Login attempt: username=%s password=%s", username, password)
### Log Aggregation
If using external log services (CloudWatch, Splunk, Datadog):
- Ensure logs are encrypted in transit (TLS)
- Ensure logs are encrypted at rest
- Redact sensitive data before shipping
@@ -333,6 +338,7 @@ limiter := rate.NewLimiter(rate.Every(36*time.Second), 100)
```
**Critical endpoints** (require stricter limits):
- Login: 5 attempts per 15 minutes
- Password reset: 3 attempts per hour
- API key generation: 5 per day
@@ -369,6 +375,7 @@ return c.JSON(401, gin.H{"error": "invalid API key: abc123"})
**Applicable if**: Processing data of EU residents
**Requirements**:
1. **Data minimization**: Collect only necessary data
2. **Purpose limitation**: Use data only for stated purposes
3. **Storage limitation**: Delete data when no longer needed
@@ -376,6 +383,7 @@ return c.JSON(401, gin.H{"error": "invalid API key: abc123"})
5. **Breach notification**: Report breaches within 72 hours
**Implementation**:
- ✅ Charon masks API keys in logs (prevents exposure of personal data)
- ✅ Secure file permissions (0600) protect sensitive data
- ✅ Log retention policies prevent indefinite storage
@@ -390,12 +398,14 @@ return c.JSON(401, gin.H{"error": "invalid API key: abc123"})
**Applicable if**: Processing, storing, or transmitting credit card data
**Requirements**:
1. **Requirement 3.4**: Render PAN unreadable (encryption, masking)
2. **Requirement 8.2**: Strong authentication
3. **Requirement 10.2**: Audit trails
4. **Requirement 10.7**: Retain audit logs for 1 year
**Implementation**:
- ✅ Charon uses masking for sensitive credentials (same principle for PAN)
- ✅ Secure file permissions align with access control requirements
- ⚠️ Charon doesn't handle payment cards directly (delegated to payment processors)
@@ -409,12 +419,14 @@ return c.JSON(401, gin.H{"error": "invalid API key: abc123"})
**Applicable if**: SaaS providers, cloud services
**Trust Service Criteria**:
1. **CC6.1**: Logical access controls (authentication, authorization)
2. **CC6.6**: Encryption of data in transit
3. **CC6.7**: Encryption of data at rest
4. **CC7.2**: Monitoring and detection (logging, alerting)
**Implementation**:
- ✅ API key validation ensures strong credentials (CC6.1)
- ✅ File permissions (0600) protect data at rest (CC6.7)
- ✅ Masked logging enables monitoring without exposing secrets (CC7.2)
@@ -429,12 +441,14 @@ return c.JSON(401, gin.H{"error": "invalid API key: abc123"})
**Applicable to**: Any organization implementing ISMS
**Key Controls**:
1. **A.9.4.3**: Password management systems
2. **A.10.1.1**: Cryptographic controls
3. **A.12.4.1**: Event logging
4. **A.18.1.5**: Protection of personal data
**Implementation**:
- ✅ API key format validation (minimum 16 chars, charset restrictions)
- ✅ Key rotation procedures documented
- ✅ Secure storage with file permissions (0600)
@@ -491,6 +505,7 @@ grep -i "api[_-]key\|token\|password" playwright-report/index.html
**Recommended schedule**: Annual or after major releases
**Focus areas**:
1. Authentication bypass
2. Authorization vulnerabilities
3. SQL injection

View File

@@ -1,6 +1,6 @@
**Status**: ✅ RESOLVED (January 30, 2026)
https://github.com/Wikid82/Charon/actions/runs/21503634925/job/61955008214
<https://github.com/Wikid82/Charon/actions/runs/21503634925/job/61955008214>
Run # Normalize image name for reference
🔍 Extracting binary from: ghcr.io/wikid82/charon:feature/beta-release
@@ -27,6 +27,7 @@ Add a check to ensure steps.pr-info.outputs.pr_number is set before constructing
Suggested code improvement for the “Extract charon binary from container” step:
YAML
- name: Extract charon binary from container
if: steps.check-artifact.outputs.artifact_exists == 'true'
id: extract
@@ -44,6 +45,7 @@ YAML
echo "🔍 Extracting binary from: ${IMAGE_REF}"
...
This ensures the workflow does not attempt to use an invalid image tag when the PR number is missing. Adjust similar logic throughout the workflow to handle missing variables gracefully.
## Resolution
Fixed by adding proper validation for PR number before constructing Docker image reference, ensuring IMAGE_REF is never constructed with empty/missing variables. Branch name sanitization also implemented to handle slashes in feature branch names.

View File

@@ -2,7 +2,7 @@
**Date:** 2026-01-28
**PR:** #550 - Alpine to Debian Trixie Migration
**CI Run:** https://github.com/Wikid82/Charon/actions/runs/21456678628/job/61799104804
**CI Run:** <https://github.com/Wikid82/Charon/actions/runs/21456678628/job/61799104804>
**Branch:** feature/beta-release
---
@@ -18,16 +18,19 @@ The CrowdSec integration tests are failing after migrating the Dockerfile from A
### 1. **CrowdSec Builder Stage Compatibility**
**Alpine vs Debian Differences:**
- **Alpine** uses `musl libc`, **Debian** uses `glibc`
- Different package managers: `apk` (Alpine) vs `apt` (Debian)
- Different package names and availability
**Current Dockerfile (lines 218-270):**
```dockerfile
FROM --platform=$BUILDPLATFORM golang:1.25.7-trixie AS crowdsec-builder
```
**Dependencies Installed:**
```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
git clang lld \
@@ -36,6 +39,7 @@ RUN xx-apt install -y gcc libc6-dev
```
**Possible Issues:**
- **Missing build dependencies**: CrowdSec might require additional packages on Debian that were implicitly available on Alpine
- **Git clone failures**: Network issues or GitHub rate limiting
- **Dependency resolution**: `go mod tidy` might behave differently
@@ -44,6 +48,7 @@ RUN xx-apt install -y gcc libc6-dev
### 2. **CrowdSec Binary Path Issues**
**Runtime Image (lines 359-365):**
```dockerfile
# Copy CrowdSec binaries from the crowdsec-builder stage (built with Go 1.25.5+)
COPY --from=crowdsec-builder /crowdsec-out/crowdsec /usr/local/bin/crowdsec
@@ -52,17 +57,20 @@ COPY --from=crowdsec-builder /crowdsec-out/config /etc/crowdsec.dist
```
**Possible Issues:**
- If the builder stage fails, these COPY commands will fail
- If fallback stage is used (for non-amd64), paths might be wrong
### 3. **CrowdSec Configuration Issues**
**Entrypoint Script CrowdSec Init (docker-entrypoint.sh):**
- Symlink creation from `/etc/crowdsec` to `/app/data/crowdsec/config`
- Configuration file generation and substitution
- Hub index updates
**Possible Issues:**
- Symlink already exists as directory instead of symlink
- Permission issues with non-root user
- Configuration templates missing or incompatible
@@ -70,12 +78,14 @@ COPY --from=crowdsec-builder /crowdsec-out/config /etc/crowdsec.dist
### 4. **Test Script Environment Issues**
**Integration Test (crowdsec_integration.sh):**
- Builds the image with `docker build -t charon:local .`
- Starts container and waits for API
- Tests CrowdSec Hub connectivity
- Tests preset pull/apply functionality
**Possible Issues:**
- Build step timing out or failing silently
- Container failing to start properly
- CrowdSec processes not starting
@@ -88,6 +98,7 @@ COPY --from=crowdsec-builder /crowdsec-out/config /etc/crowdsec.dist
### Step 1: Check Build Logs
Review the CI build logs for the CrowdSec builder stage:
- Look for `git clone` errors
- Check for `go get` or `go mod tidy` failures
- Verify `xx-go build` completes successfully
@@ -96,6 +107,7 @@ Review the CI build logs for the CrowdSec builder stage:
### Step 2: Verify CrowdSec Binaries
Check if CrowdSec binaries are actually present:
```bash
docker run --rm charon:local which crowdsec
docker run --rm charon:local which cscli
@@ -105,6 +117,7 @@ docker run --rm charon:local cscli version
### Step 3: Check CrowdSec Configuration
Verify configuration is properly initialized:
```bash
docker run --rm charon:local ls -la /etc/crowdsec
docker run --rm charon:local ls -la /app/data/crowdsec
@@ -114,6 +127,7 @@ docker run --rm charon:local cat /etc/crowdsec/config.yaml
### Step 4: Test CrowdSec Locally
Run the integration test locally:
```bash
# Build image
docker build --no-cache -t charon:local .
@@ -129,6 +143,7 @@ docker build --no-cache -t charon:local .
### Fix 1: Add Missing Build Dependencies
If the build is failing due to missing dependencies, add them to the CrowdSec builder:
```dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
git clang lld \
@@ -139,6 +154,7 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
### Fix 2: Add Build Stage Debugging
Add debugging output to identify where the build fails:
```dockerfile
# After git clone
RUN echo "CrowdSec source cloned successfully" && ls -la
@@ -153,6 +169,7 @@ RUN echo "Build complete" && ls -la /crowdsec-out/
### Fix 3: Use CrowdSec Fallback
If the build continues to fail, ensure the fallback stage is working:
```dockerfile
# In final stage, use conditional COPY
COPY --from=crowdsec-fallback /crowdsec-out/bin/crowdsec /usr/local/bin/crowdsec || \
@@ -162,6 +179,7 @@ COPY --from=crowdsec-builder /crowdsec-out/crowdsec /usr/local/bin/crowdsec
### Fix 4: Verify cscli Before Test
Add a verification step in the entrypoint:
```bash
if ! command -v cscli >/dev/null; then
echo "ERROR: CrowdSec not installed properly"

View File

@@ -11,11 +11,13 @@
**File**: `tests/settings/system-settings.spec.ts`
**Changes Made**:
1. **Removed** `waitForFeatureFlagPropagation()` call from `beforeEach` hook (lines 35-46)
- This was causing 10s × 31 tests = 310s of polling overhead per shard
- Commented out with clear explanation linking to remediation plan
2. **Added** `test.afterEach()` hook with direct API state restoration:
```typescript
test.afterEach(async ({ page }) => {
await test.step('Restore default feature flag state', async () => {
@@ -34,12 +36,14 @@
```
**Rationale**:
- Tests already verify feature flag state individually after toggle actions
- Initial state verification in beforeEach was redundant
- Explicit cleanup in afterEach ensures test isolation without polling overhead
- Direct API mutation for state restoration is faster than polling
**Expected Impact**:
- 310s saved per shard (10s × 31 tests)
- Elimination of inter-test dependencies
- No state leakage between tests
@@ -51,12 +55,14 @@
**Changes Made**:
1. **Added module-level cache** for in-flight requests:
```typescript
// Cache for in-flight requests (per-worker isolation)
const inflightRequests = new Map<string, Promise<Record<string, boolean>>>();
```
2. **Implemented cache key generation** with sorted keys and worker isolation:
```typescript
function generateCacheKey(
expectedFlags: Record<string, boolean>,
@@ -81,6 +87,7 @@
- Removes promise from cache after completion (success or failure)
4. **Added cleanup function**:
```typescript
export function clearFeatureFlagCache(): void {
inflightRequests.clear();
@@ -89,16 +96,19 @@
```
**Why Sorted Keys?**
- `{a:true, b:false}` vs `{b:false, a:true}` are semantically identical
- Without sorting, they generate different cache keys → cache misses
- Sorting ensures consistent key regardless of property order
**Why Worker Isolation?**
- Playwright workers run in parallel across different browser contexts
- Each worker needs its own cache to avoid state conflicts
- Worker index provides unique namespace per parallel process
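The sorted-keys-plus-worker-namespace idea is language-agnostic; here is the same concept sketched in Go for illustration (the actual helper is TypeScript, and the names here are hypothetical):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// cacheKey builds a deterministic key from a flag map plus a worker index.
// Sorting the keys makes {a:true, b:false} and {b:false, a:true} produce
// the same key; the worker index gives each parallel process its own namespace.
func cacheKey(flags map[string]bool, worker int) string {
	keys := make([]string, 0, len(flags))
	for k := range flags {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	parts := make([]string, 0, len(keys))
	for _, k := range keys {
		parts = append(parts, fmt.Sprintf("%s=%t", k, flags[k]))
	}
	return fmt.Sprintf("w%d|%s", worker, strings.Join(parts, ","))
}

func main() {
	a := cacheKey(map[string]bool{"a": true, "b": false}, 0)
	b := cacheKey(map[string]bool{"b": false, "a": true}, 0)
	fmt.Println(a == b) // true: property order no longer matters
}
```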
**Expected Impact**:
- 30-40% reduction in duplicate API calls (revised from original 70-80% estimate)
- Cache hit rate should be >30% based on similar flag state checks
- Reduced API server load during parallel test execution
@@ -108,21 +118,26 @@
**Status**: Partially Investigated
**Issue**:
- Test: `tests/dns-provider-types.spec.ts` (line 260)
- Symptom: Label locator `/script.*path/i` passes in Chromium, fails in Firefox/WebKit
- Test code:
```typescript
const scriptField = page.getByLabel(/script.*path/i);
await expect(scriptField).toBeVisible({ timeout: 10000 });
```
**Investigation Steps Completed**:
1. ✅ Confirmed E2E environment is running and healthy
2. ✅ Attempted to run DNS provider type tests in Chromium
3. ⏸️ Further investigation deferred due to test execution issues
**Investigation Steps Remaining** (per spec):
1. Run with Playwright Inspector to compare accessibility trees:
```bash
npx playwright test tests/dns-provider-types.spec.ts --project=chromium --headed --debug
npx playwright test tests/dns-provider-types.spec.ts --project=firefox --headed --debug
@@ -137,6 +152,7 @@
5. If not fixable: Use the helper function approach from Phase 2
**Recommendation**:
- Complete investigation in separate session with headed browser mode
- DO NOT add `.or()` chains unless investigation proves it's necessary
- Create formal Decision Record once root cause is identified
@@ -144,31 +160,37 @@
## Validation Checkpoints
### Checkpoint 1: Execution Time
**Status**: ⏸️ In Progress
**Target**: <15 minutes (900s) for full test suite
**Command**:
```bash
time npx playwright test tests/settings/system-settings.spec.ts --project=chromium
```
**Results**:
- Test execution interrupted during validation
- Observed: Tests were picking up multiple spec files from security/ folder
- Need to investigate test file patterns or run with more specific filtering
**Action Required**:
- Re-run with corrected test file path or filtering
- Ensure only system-settings tests are executed
- Measure execution time and compare to baseline
### Checkpoint 2: Test Isolation
**Status**: ⏳ Pending
**Target**: All tests pass with `--repeat-each=5 --workers=4`
**Command**:
```bash
npx playwright test tests/settings/system-settings.spec.ts --project=chromium --repeat-each=5 --workers=4
```
@@ -176,11 +198,13 @@ npx playwright test tests/settings/system-settings.spec.ts --project=chromium --
**Status**: Not executed yet
### Checkpoint 3: Cross-browser
**Status**: ⏳ Pending
**Target**: Firefox/WebKit pass rate >85%
**Command**:
```bash
npx playwright test tests/settings/system-settings.spec.ts --project=firefox --project=webkit
```
@@ -188,11 +212,13 @@ npx playwright test tests/settings/system-settings.spec.ts --project=firefox --p
**Status**: Not executed yet
### Checkpoint 4: DNS provider tests (secondary issue)
**Status**: ⏳ Pending
**Target**: Firefox tests pass or investigation complete
**Command**:
```bash
npx playwright test tests/dns-provider-types.spec.ts --project=firefox
```
@@ -204,11 +230,13 @@ npx playwright test tests/dns-provider-types.spec.ts --project=firefox
### Decision: Use Direct API Mutation for State Restoration
**Context**:
- Tests need to restore default feature flag state after modifications
- Original approach used polling-based verification in beforeEach
- Alternative approaches: polling in afterEach vs direct API mutation
**Options Evaluated**:
1. **Polling in afterEach** - Verify state propagated after mutation
- Pros: Confirms state is actually restored
- Cons: Adds 500ms-2s per test (polling overhead)
@@ -219,12 +247,14 @@ npx playwright test tests/dns-provider-types.spec.ts --project=firefox
- Why chosen: Feature flag updates are synchronous in backend
**Rationale**:
- Feature flag updates via PUT /api/v1/feature-flags are processed synchronously
- Database write is immediate (SQLite WAL mode)
- No async propagation delay in single-process test environment
- Subsequent tests will verify state on first read, catching any issues
**Impact**:
- Test runtime reduced by 15-60s per test file (31 tests × 500ms-2s polling)
- Risk: If state restoration fails, next test will fail loudly (detectable)
- Acceptable trade-off for 10-20% execution time improvement
@@ -234,15 +264,18 @@ npx playwright test tests/dns-provider-types.spec.ts --project=firefox
### Decision: Cache Key Sorting for Semantic Equality
**Context**:
- Multiple tests may check the same feature flag state but with different property order
- Without normalization, `{a:true, b:false}` and `{b:false, a:true}` generate different keys
**Rationale**:
- JavaScript objects preserve insertion order, but semantically these are identical states
- Sorting keys ensures cache hits for semantically identical flag states
- Minimal performance cost (~1ms for sorting 3-5 keys)
**Impact**:
- Estimated 10-15% cache hit rate improvement
- No downside - pure optimization

View File

@@ -78,6 +78,7 @@ git pull origin development
```
This script:
- Detects the required Go version from `go.work`
- Downloads it from golang.org
- Installs it to `~/sdk/go{version}/`
@@ -103,6 +104,7 @@ Even if you used Option A (which rebuilds automatically), you can always manuall
```
This rebuilds:
- **golangci-lint** — Pre-commit linter (critical)
- **gopls** — IDE language server (critical)
- **govulncheck** — Security scanner
@@ -132,11 +134,13 @@ Current Go version: go version go1.26.0 linux/amd64
Your IDE caches the old Go language server (gopls). Reload to use the new one:
**VS Code:**
- Press `Cmd/Ctrl+Shift+P`
- Type "Developer: Reload Window"
- Press Enter
**GoLand or IntelliJ IDEA:**
- File → Invalidate Caches → Restart
- Wait for indexing to complete
@@ -243,6 +247,7 @@ go install golang.org/x/tools/gopls@latest
### How often do Go versions change?
Go releases **two major versions per year**:
- February (e.g., Go 1.26.0)
- August (e.g., Go 1.27.0)
@@ -255,6 +260,7 @@ Plus occasional patch releases (e.g., Go 1.26.1) for security fixes.
**Usually no**, but it doesn't hurt. Patch releases (like 1.26.0 → 1.26.1) rarely break tool compatibility.
**Rebuild if:**
- Pre-commit hooks start failing
- IDE shows unexpected errors
- Tools report version mismatches
@@ -262,6 +268,7 @@ Plus occasional patch releases (e.g., Go 1.26.1) for security fixes.
### Why don't CI builds have this problem?
CI environments are **ephemeral** (temporary). Every workflow run:
1. Starts with a fresh container
2. Installs Go from scratch
3. Installs tools from scratch
@@ -295,12 +302,14 @@ But for Charon development, you only need **one version** (whatever's in `go.wor
**Short answer:** Your local tools will be out of sync, but CI will still work.
**What breaks:**
- Pre-commit hooks fail (but will auto-rebuild)
- IDE shows phantom errors
- Manual `go test` might fail locally
- CI is unaffected (it always uses the correct version)
**When to catch up:**
- Before opening a PR (CI checks will fail if your code uses old Go features)
- When local development becomes annoying
@@ -326,6 +335,7 @@ But they only take ~400MB each, so cleanup is optional.
Renovate updates **Dockerfile** and **go.work**, but it can't update tools on *your* machine.
**Think of it like this:**
- Renovate: "Hey team, we're now using Go 1.26.0"
- Your machine: "Cool, but my tools are still Go 1.25.6. Let me rebuild them."
@@ -334,18 +344,22 @@ The rebuild script bridges that gap.
### What's the difference between `go.work`, `go.mod`, and my system Go?
**`go.work`** — Workspace file (multi-module projects like Charon)
- Specifies minimum Go version for the entire project
- Used by Renovate to track upgrades
**`go.mod`** — Module file (individual Go modules)
- Each module (backend, tools) has its own `go.mod`
- Inherits Go version from `go.work`
**System Go** (`go version`) — What's installed on your machine
- Must be >= the version in `go.work`
- Tools are compiled with whatever version this is
**Example:**
```
go.work says: "Use Go 1.26.0 or newer"
go.mod says: "I'm part of the workspace, use its Go version"
@@ -364,12 +378,14 @@ Charon's pre-commit hook automatically detects and fixes tool version mismatches
**How it works:**
1. **Check versions:**
```bash
golangci-lint version → "built with go1.25.6"
go version → "go version go1.26.0"
```
2. **Detect mismatch:**
```
⚠️ golangci-lint Go version mismatch:
golangci-lint: 1.25.6
@@ -377,6 +393,7 @@ Charon's pre-commit hook automatically detects and fixes tool version mismatches
```
3. **Auto-rebuild:**
```
🔧 Rebuilding golangci-lint with current Go version...
✅ golangci-lint rebuilt successfully
@@ -406,11 +423,13 @@ If you want manual control, edit `scripts/pre-commit-hooks/golangci-lint-fast.sh
## Need Help?
**Open a [Discussion](https://github.com/Wikid82/charon/discussions)** if:
- These instructions didn't work for you
- You're seeing errors not covered in troubleshooting
- You have suggestions for improving this guide
**Open an [Issue](https://github.com/Wikid82/charon/issues)** if:
- The rebuild script crashes
- Pre-commit auto-rebuild isn't working
- CI is failing for Go version reasons

View File

@@ -3,16 +3,20 @@
This document explains how to run Playwright tests using a real browser (headed) on Linux machines and in the project's Docker E2E environment.
## Key points
- Playwright's interactive Test UI (--ui) requires an X server (a display). On headless CI or servers, use Xvfb.
- Prefer the project's E2E Docker image for integration-like runs; use the local `--ui` flow for manual debugging.
## Quick commands (local Linux)
- Headless (recommended for CI / fast runs):
```bash
npm run e2e
```
- Headed UI on a headless machine (auto-starts Xvfb):
```bash
npm run e2e:ui:headless-server
# or, if you prefer manual control:
@@ -20,37 +24,46 @@ This document explains how to run Playwright tests using a real browser (headed)
```
- Headed UI on a workstation with an X server already running:
```bash
npx playwright test --ui
```
- Open the running Docker E2E app in your system browser (one-step via VS Code task):
- Run the VS Code task: **Open: App in System Browser (Docker E2E)**
- This will rebuild the E2E container (if needed), wait for http://localhost:8080 to respond, and open your system browser automatically.
- This will rebuild the E2E container (if needed), wait for <http://localhost:8080> to respond, and open your system browser automatically.
- Open the running Docker E2E app in VS Code Simple Browser:
- Run the VS Code task: **Open: App in Simple Browser (Docker E2E)**
- Then use the command palette: `Simple Browser: Open URL` → paste `http://localhost:8080`
## Using the project's E2E Docker image (recommended for parity with CI)
1. Rebuild/start the E2E container (this sets up the full test environment):
```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e
```
If you need a clean rebuild after integration alignment changes:
```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean --no-cache
```
2. Run the UI against the container (you still need an X server on your host):
1. Run the UI against the container (you still need an X server on your host):
```bash
PLAYWRIGHT_BASE_URL=http://localhost:8080 npm run e2e:ui:headless-server
```
## CI guidance
- Do not run Playwright `--ui` in CI. Use headless runs or the E2E Docker image and collect traces/videos for failures.
- For coverage, use the provided skill: `.github/skills/scripts/skill-runner.sh test-e2e-playwright-coverage`
## Troubleshooting
- Playwright error: "Looks like you launched a headed browser without having a XServer running." → run `npm run e2e:ui:headless-server` or install Xvfb.
- If `npm run e2e:ui:headless-server` fails with an exit code like `148`:
- Inspect Xvfb logs: `tail -n 200 /tmp/xvfb.playwright.log`
@@ -59,11 +72,13 @@ This document explains how to run Playwright tests using a real browser (headed)
- If running inside Docker, prefer the skill-runner which provisions the required services; the UI still needs host X (or use VNC).
## Developer notes (what we changed)
- Added `scripts/run-e2e-ui.sh` — wrapper that auto-starts Xvfb when DISPLAY is unset.
- Added `npm run e2e:ui:headless-server` to run the Playwright UI on headless machines.
- Playwright config now auto-starts Xvfb when `--ui` is requested locally and prints an actionable error if Xvfb is not available.
## Security & hygiene
- Playwright auth artifacts are ignored by git (`playwright/.auth/`). Do not commit credentials.
---

View File

@@ -237,7 +237,7 @@ Watch requests flow through your proxy in real-time. Filter by domain, status co
### 🔔 Notifications
Get alerted when it matters. Charon notifications now run through the Notify HTTP wrapper with support for Discord, Gotify, and Custom Webhook providers. Payload-focused test coverage is included to help catch formatting and delivery regressions before release.
Get alerted when it matters. Charon sends notifications through Discord, Gotify, Ntfy, Pushover, Slack, Email, and Custom Webhook providers. Choose a built-in JSON template or write your own to control exactly what your alerts look like.
→ [Learn More](features/notifications.md)

View File

@@ -23,6 +23,7 @@ Authorization: Bearer your-api-token-here
```
Tokens support granular permissions:
- **Read-only**: View configurations without modification
- **Full access**: Complete CRUD operations
- **Scoped**: Limit to specific resource types

View File

@@ -52,6 +52,7 @@ Caddyfile import parses your existing Caddy configuration files and converts the
Choose one of three methods:
**Paste Content:**
```
example.com {
reverse_proxy localhost:3000
@@ -63,10 +64,12 @@ api.example.com {
```
**Upload File:**
- Click **Choose File**
- Select your Caddyfile
**Fetch from URL:**
- Enter URL to raw Caddyfile content
- Useful for version-controlled configurations

View File

@@ -447,6 +447,7 @@ Charon displays instructions to remove the TXT record after certificate issuance
**Symptom**: Certificate request stuck at "Waiting for Propagation" or validation fails.
**Causes**:
- DNS TTL is high (cached old records)
- DNS provider has slow propagation
- Regional DNS inconsistency
@@ -497,6 +498,7 @@ Charon displays instructions to remove the TXT record after certificate issuance
**Symptom**: Connection test passes, but record creation fails.
**Causes**:
- API token has read-only permissions
- Zone/domain not accessible with current credentials
- Rate limiting or account restrictions
@@ -513,6 +515,7 @@ Charon displays instructions to remove the TXT record after certificate issuance
**Symptom**: "Record already exists" error during certificate request.
**Causes**:
- Previous challenge attempt left orphaned record
- Manual DNS record with same name exists
- Another ACME client managing the same domain
@@ -551,6 +554,7 @@ Charon displays instructions to remove the TXT record after certificate issuance
**Symptom**: "Too many requests" or "Rate limit exceeded" errors.
**Causes**:
- Too many certificate requests in short period
- DNS provider API rate limits
- Let's Encrypt rate limits

View File

@@ -47,6 +47,7 @@ Docker auto-discovery eliminates manual IP address hunting and port memorization
For Charon to discover containers, it needs Docker API access.
**Docker Compose:**
```yaml
services:
charon:
@@ -56,6 +57,7 @@ services:
```
**Docker Run:**
```bash
docker run -v /var/run/docker.sock:/var/run/docker.sock:ro charon
```

View File

@@ -16,7 +16,10 @@ Notifications can be triggered by various events:
| Service | JSON Templates | Native API | Rich Formatting |
|---------|----------------|------------|-----------------|
| **Discord** | ✅ Yes | ✅ Webhooks | ✅ Embeds |
| **Slack** | ✅ Yes | ✅ Webhooks | ✅ Native Formatting |
| **Gotify** | ✅ Yes | ✅ HTTP API | ✅ Priority + Extras |
| **Pushover** | ✅ Yes | ✅ HTTP API | ✅ Priority + Sound |
| **Ntfy** | ✅ Yes | ✅ HTTP API | ✅ Priority + Tags |
| **Custom Webhook** | ✅ Yes | ✅ HTTP API | ✅ Template-Controlled |
| **Email** | ❌ No | ✅ SMTP | ✅ HTML Branded Templates |
@@ -36,8 +39,6 @@ Email notifications send HTML-branded alerts directly to one or more email addre
Email notifications use built-in HTML templates with Charon branding — no JSON template editing is required.
> **Feature Flag:** Email notifications must be enabled via `feature.notifications.service.email.enabled` in **Settings** → **Feature Flags** before the Email provider option appears.
### Why JSON Templates?
JSON templates give you complete control over notification formatting, allowing you to:
@@ -60,7 +61,7 @@ JSON templates give you complete control over notification formatting, allowing
### JSON Template Support
For JSON-based services (Discord, Gotify, and Custom Webhook), you can choose from three template options. Email uses its own built-in HTML templates and does not use JSON templates.
For JSON-based services (Discord, Slack, Gotify, and Custom Webhook), you can choose from three template options. Email uses its own built-in HTML templates and does not use JSON templates.
#### 1. Minimal Template (Default)
@@ -174,11 +175,141 @@ Discord supports rich embeds with colors, fields, and timestamps.
- `16776960` - Yellow (warning)
- `3066993` - Green (success)
### Slack Webhooks
Slack notifications send messages to a channel using an Incoming Webhook URL.
**Setup:**
1. In Slack, go to **[Your Apps](https://api.slack.com/apps)** → **Create New App** → **From scratch**
2. Under **Features**, select **Incoming Webhooks** and toggle it **on**
3. Click **"Add New Webhook to Workspace"** and choose the channel to post to
4. Copy the Webhook URL (it looks like `https://hooks.slack.com/services/T.../B.../...`)
5. In Charon, go to **Settings** → **Notifications** and click **"Add Provider"**
6. Select **Slack** as the service type
7. Paste your Webhook URL into the **Webhook URL** field
8. Optionally enter a channel display name (e.g., `#alerts`) for easy identification
9. Configure notification triggers and save
> **Security:** Your Webhook URL is stored securely and is never exposed in API responses. The settings page only shows a `has_token: true` indicator, so your URL stays private even if someone gains read-only access to the API.
#### Basic Message
```json
{
"text": "{{.Title}}: {{.Message}}"
}
```
#### Formatted Message with Context
```json
{
"text": "*{{.Title}}*\n{{.Message}}\n\n• *Event:* {{.EventType}}\n• *Host:* {{.HostName}}\n• *Severity:* {{.Severity}}\n• *Time:* {{.Timestamp}}"
}
```
**Slack formatting tips:**
- Use `*bold*` for emphasis
- Use `\n` for line breaks
- Use `•` for bullet points
- Slack automatically linkifies URLs
### Pushover
Pushover delivers push notifications directly to your iOS, Android, or desktop devices.
**Setup:**
1. Create an account at [pushover.net](https://pushover.net) and install the Pushover app on your device
2. From your Pushover dashboard, copy your **User Key**
3. Create a new **Application/API Token** for Charon
4. In Charon, go to **Settings** → **Notifications** and click **"Add Provider"**
5. Select **Pushover** as the service type
6. Enter your **Application API Token** in the token field
7. Enter your **User Key** in the User Key field
8. Configure notification triggers and save
> **Security:** Your Application API Token is stored securely and is never exposed in API responses.
#### Basic Message
```json
{
"title": "{{.Title}}",
"message": "{{.Message}}"
}
```
#### Message with Priority
```json
{
"title": "{{.Title}}",
"message": "{{.Message}}",
"priority": 1
}
```
**Pushover priority levels:**
- `-2` - Lowest (no sound or vibration)
- `-1` - Low (quiet)
- `0` - Normal (default)
- `1` - High (bypass quiet hours)
> **Note:** Emergency priority (`2`) is not supported and will be rejected with a clear error.
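The documented priority rule can be sketched as a simple range check; the helper name below is an assumption for illustration:

```go
package main

import (
	"errors"
	"fmt"
)

// checkPushoverPriority sketches the documented rule: priorities -2 through 1
// are accepted, and emergency priority (2) is rejected with an error.
func checkPushoverPriority(p int) error {
	if p < -2 || p > 1 {
		return errors.New("unsupported Pushover priority: emergency (2) is not allowed")
	}
	return nil
}

func main() {
	fmt.Println(checkPushoverPriority(1) == nil) // true: high priority is accepted
	fmt.Println(checkPushoverPriority(2) == nil) // false: emergency is rejected
}
```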
### Ntfy
Ntfy delivers push notifications to your phone or desktop using a simple HTTP-based publish/subscribe model. Works with the free hosted service at [ntfy.sh](https://ntfy.sh) or your own self-hosted instance.
**Setup:**
1. Pick a topic name (or use an existing one) on [ntfy.sh](https://ntfy.sh) or your self-hosted server
2. In Charon, go to **Settings** → **Notifications** and click **"Add Provider"**
3. Select **Ntfy** as the service type
4. Enter your Topic URL (e.g., `https://ntfy.sh/charon-alerts` or `https://ntfy.example.com/charon-alerts`)
5. (Optional) Add an access token if your topic requires authentication
6. Configure notification triggers and save
> **Security:** Your access token is stored securely and is never exposed in API responses.
#### Basic Message
```json
{
"topic": "charon-alerts",
"title": "{{.Title}}",
"message": "{{.Message}}"
}
```
#### Message with Priority and Tags
```json
{
"topic": "charon-alerts",
"title": "{{.Title}}",
"message": "{{.Message}}",
"priority": 4,
"tags": ["rotating_light"]
}
```
**Ntfy priority levels:**
- `1` - Min
- `2` - Low
- `3` - Default
- `4` - High
- `5` - Max (urgent)
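The payload fields shown above can also be assembled programmatically. A minimal sketch, assuming the field names from the examples in this section (the helper itself is illustrative, not part of Charon):

```python
import json

def build_ntfy_payload(topic, title, message, priority=3, tags=None):
    """Assemble an ntfy JSON publish body like the examples above.

    Field names (topic, title, message, priority, tags) follow the
    templates in this guide; priority must be 1 (min) through 5 (max).
    """
    if not 1 <= priority <= 5:
        raise ValueError("ntfy priority must be 1 (min) through 5 (max)")
    payload = {"topic": topic, "title": title, "message": message, "priority": priority}
    if tags:
        payload["tags"] = list(tags)
    return json.dumps(payload)
```

POSTing the resulting body to your ntfy server's root URL publishes to the topic named inside the body.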
## Planned Provider Expansion
Additional providers (for example Slack and Telegram) are planned for later staged releases. This page will be expanded as each provider is validated and released.
## Template Variables
Be mindful of service limits:
- **Discord**: 5 requests per 2 seconds per webhook
- **Slack**: 1 request per second per webhook
- **Email**: Subject to your SMTP server's sending limits
### 6. Keep Templates Maintainable


### Examples
**Permissive mode (default)**:
```bash
# Unset — all plugins load without verification
unset CHARON_PLUGIN_SIGNATURES
```
**Strict block-all**:
```bash
# Empty object — no external plugins will load
export CHARON_PLUGIN_SIGNATURES='{}'
```
**Allowlist specific plugins**:
```bash
# Only powerdns and custom-provider plugins are allowed
export CHARON_PLUGIN_SIGNATURES='{"powerdns": "sha256:a1b2c3d4...", "custom-provider": "sha256:e5f6a7b8..."}'
```
**Example output**:
```
sha256:a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2
```
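The same signature can be computed without shelling out. A Python equivalent of the `sha256sum | awk` pipeline above (illustrative, not part of Charon's tooling):

```python
import hashlib

def plugin_signature(path: str) -> str:
    """Compute the `sha256:<hex>` signature Charon expects, equivalent to
    `sha256sum plugin.so | awk '{print "sha256:" $1}'`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large .so files don't need to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return "sha256:" + h.hexdigest()
```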
This prevents runtime modification of plugin files, mitigating:
- Time-of-check to time-of-use (TOCTOU) attacks
- Malicious plugin replacement after signature verification
Or in Dockerfile:
```dockerfile
FROM charon:latest
USER charon
```
| Directory permissions | Result |
|-----------------------|--------|
| `0777` (world-writable) | ❌ Rejected — plugin loading disabled |
**Set secure permissions**:
```bash
chmod 755 /path/to/plugins
chmod 644 /path/to/plugins/*.so # Or 755 for executable
```
### Checking if a Plugin Loaded
**Check startup logs**:
```bash
docker compose logs charon | grep -i plugin
```
**Expected success output**:
```
INFO Loaded DNS provider plugin type=powerdns name="PowerDNS" version="1.0.0"
INFO Loaded 1 external DNS provider plugins (0 failed)
```
**If using allowlist**:
```
INFO Plugin signature allowlist enabled with 2 entries
```
**Via API**:
```bash
curl http://localhost:8080/api/admin/plugins \
-H "Authorization: Bearer YOUR-TOKEN"
```
**Cause**: The plugin filename (without `.so`) is not in `CHARON_PLUGIN_SIGNATURES`.
**Solution**: Add the plugin to your allowlist:
```bash
# Get the signature
sha256sum powerdns.so | awk '{print "sha256:" $1}'
```
**Cause**: The plugin file's SHA-256 hash doesn't match the allowlist.
**Solution**:
1. Verify you have the correct plugin file
2. Re-compute the signature: `sha256sum plugin.so`
3. Update `CHARON_PLUGIN_SIGNATURES` with the correct hash
**Cause**: The plugin directory is world-writable (mode `0777` or similar).
**Solution**:
```bash
chmod 755 /path/to/plugins
chmod 644 /path/to/plugins/*.so
```
**Cause**: Malformed JSON in the environment variable.
**Solution**: Validate your JSON:
```bash
echo '{"powerdns": "sha256:abc123"}' | jq .
```
Common issues:
- Missing quotes around keys or values
- Trailing commas
- Single quotes instead of double quotes
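Beyond `jq`, you can also verify the overall shape: an object mapping plugin names to `sha256:`-prefixed digests. A hedged Python sketch (the strict 64-hex-character check is an assumption based on the SHA-256 examples in this guide):

```python
import json
import re

def check_signatures_json(raw: str) -> dict:
    """Pre-flight check for a CHARON_PLUGIN_SIGNATURES value: it must be a
    JSON object mapping plugin names to `sha256:<64 hex chars>` strings."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object at the top level")
    pattern = re.compile(r"^sha256:[0-9a-f]{64}$")
    for name, sig in data.items():
        if not isinstance(sig, str) or not pattern.match(sig):
            raise ValueError(f"plugin {name!r}: signature must look like sha256:<64 hex chars>")
    return data
```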
**Cause**: File permissions too restrictive or ownership mismatch.
**Solution**:
```bash
# Check current permissions
ls -la /path/to/plugins/
```
### Debugging Checklist
1. **Is the plugin directory configured?**
```bash
echo $CHARON_PLUGINS_DIR
```
2. **Does the plugin file exist?**
```bash
ls -la $CHARON_PLUGINS_DIR/*.so
```
3. **Are directory permissions secure?**
```bash
stat -c "%a %n" $CHARON_PLUGINS_DIR
# Should be 755 or stricter
```
4. **Is the signature correct?**
```bash
sha256sum $CHARON_PLUGINS_DIR/myplugin.so
```
5. **Is the JSON valid?**
```bash
echo "$CHARON_PLUGIN_SIGNATURES" | jq .
```
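Checklist item 3 can be automated. A sketch of the world-writable check, mirroring the permissions table above (the function is illustrative — Charon performs its own enforcement at startup):

```python
import os
import stat

def plugins_dir_is_secure(path: str) -> bool:
    """Return True if the plugin directory is not world-writable,
    mirroring the documented rejection of 0777-style permissions."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode & stat.S_IWOTH == 0
```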


Your backend must trust proxy headers from Charon. Common configurations:
**Node.js/Express:**
```javascript
app.set('trust proxy', true);
```
**Django:**
```python
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
USE_X_FORWARDED_HOST = True
```
**Rails:**
```ruby
config.action_dispatch.trusted_proxies = [IPAddr.new('10.0.0.0/8')]
```
**PHP/Laravel:**
```php
// In TrustProxies middleware
protected $proxies = '*';
```


This prevents certificate accumulation and keeps your system tidy.
## Manual Certificate Deletion
Over time, expired or unused certificates can pile up in the Certificates list. You can remove them manually:
| Certificate Type | When You Can Delete It |
|------------------|----------------------|
| **Expired Let's Encrypt** | When it's not attached to any proxy host |
| **Custom (uploaded)** | When it's not attached to any proxy host |
| **Staging** | When it's not attached to any proxy host |
| **Valid Let's Encrypt** | Managed automatically — no delete button shown |
If a certificate is still attached to a proxy host, the delete button is disabled and a tooltip explains which host is using it. Remove the certificate from the proxy host first, then come back to delete it.
A confirmation dialog appears before anything is removed. Charon creates a backup before deleting, so you have a safety net.
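The rules in the table reduce to a simple predicate. A hypothetical sketch (the type names and the attached-hosts field are illustrative, not Charon's actual data model):

```python
# Certificate types that can be deleted manually, per the table above.
# Valid Let's Encrypt certificates are managed automatically and excluded.
DELETABLE_TYPES = {"expired_lets_encrypt", "custom", "staging"}

def can_delete_certificate(cert_type, attached_hosts):
    """Return True if a certificate may be deleted manually: it must be a
    deletable type AND not attached to any proxy host."""
    if attached_hosts:  # still in use — the delete button stays disabled
        return False
    return cert_type in DELETABLE_TYPES
```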
## Troubleshooting
| Issue | Solution |
|-------|----------|
