chore: refactor end-to-end tests for emergency server and feature toggles
- Implemented tests for the emergency server (Tier 2) to validate health checks, security reset functionality, and independent access.
- Created a comprehensive suite for system settings feature toggles, ensuring proper state management and API call metrics reporting.
- Removed redundant feature toggle tests from the system settings spec to maintain clarity and focus.
- Enhanced test isolation by restoring default feature flag states after each test.
This commit is contained in:
.github/workflows/e2e-tests-split.yml (vendored, 3 changes)
@@ -871,7 +871,6 @@ jobs:
             tests/core \
             tests/dns-provider-crud.spec.ts \
             tests/dns-provider-types.spec.ts \
-            tests/emergency-server \
             tests/integration \
             tests/manual-dns-provider.spec.ts \
             tests/monitoring \
@@ -1052,7 +1051,6 @@ jobs:
             tests/core \
             tests/dns-provider-crud.spec.ts \
             tests/dns-provider-types.spec.ts \
-            tests/emergency-server \
             tests/integration \
             tests/manual-dns-provider.spec.ts \
             tests/monitoring \
@@ -1233,7 +1231,6 @@ jobs:
             tests/core \
             tests/dns-provider-crud.spec.ts \
             tests/dns-provider-types.spec.ts \
-            tests/emergency-server \
             tests/integration \
             tests/manual-dns-provider.spec.ts \
             tests/monitoring \
@@ -1,335 +1,188 @@
 ---
-title: "CI Pipeline Reliability and Docker Tagging"
+title: "E2E Security Test Isolation"
 status: "draft"
-scope: "ci/linting, ci/integration, docker/publishing"
-notes: Restore Go linting parity, prevent integration-stage cancellation after successful image builds, and correct Docker tag outputs across CI workflows.
+scope: "e2e/ci, tests/playwright"
+notes: Separate security-toggling Playwright tests from non-security shards to prevent ACL, WAF, and rate-limit contamination.
 ---
 ## 1. Introduction

-This plan expands the CI scope to address three related gaps: missing Go
-lint enforcement, integration jobs being cancelled after a successful
-image build, and incomplete Docker tag outputs on Docker Hub. The
-intended outcome is a predictable pipeline where linting blocks early,
-integration and E2E gates complete reliably, and registries receive the
-full tag set required for traceability and stable consumption.
+This plan addresses E2E test contamination where security-focused tests are executed in non-security shards. The goal is to isolate tests that toggle Cerberus, ACL, WAF, CrowdSec, or rate limiting so non-security shards remain stable and do not hit global security state changes. The scope includes Playwright test organization and the E2E workflow split.

 Objectives:

-- Reinstate golangci-lint in the pipeline lint stage.
-- Use the fast config that already blocks local commits.
-- Ensure golangci-lint config is valid for the version used in CI.
-- Remove CI-only leniency so lint failures block merges.
-- Prevent integration jobs from being cancelled when image builds have
-  already completed successfully.
-- Ensure Docker Hub and GHCR receive SHA-only and branch+SHA tags, plus
-  latest/dev/nightly tags for main/development/nightly branches.
-- Keep CI behavior consistent across pre-commit, Makefile, VS Code
-  tasks, and GitHub Actions workflows.
+- Identify which Playwright tests in non-security shards toggle or reset security modules.
+- Separate security-toggling tests into security-only execution paths.
+- Keep non-security shards stable by preventing global security state changes within those shards.
+- Preserve current coverage of security behaviors while avoiding cross-shard interference.
 ## 2. Research Findings

-### 2.1 Current CI State (Linting)
+### 2.1 Non-Security Shard Inputs

-- The main pipeline is [ .github/workflows/ci-pipeline.yml ] and its
-  lint job runs repo health, Hadolint, GORM scanner, and frontend lint.
-  There is no Go lint step in this pipeline.
-- A separate manual workflow, [ .github/workflows/quality-checks.yml ],
-  runs golangci-lint with `continue-on-error: true`, which means CI does
-  not block on Go lint failures.
+The non-security shards in the E2E workflow run a fixed set of directories and files in [.github/workflows/e2e-tests-split.yml](../../.github/workflows/e2e-tests-split.yml). The inputs include tests/settings, tests/integration, and tests/emergency-server, which contain security-toggling behavior.

-### 2.2 Integration Cancellation Symptoms
+### 2.2 Security-Toggling Tests in Settings

-- [ .github/workflows/ci-pipeline.yml ] defines workflow-level
-  concurrency:
-  `group: ci-manual-pipeline-${{ github.workflow }}-${{ github.ref_name }}`
-  with `cancel-in-progress: true`.
-- Integration jobs depend on `build-image` and gate on
-  `inputs.run_integration != false` and
-  `needs.build-image.outputs.push_image == 'true'`.
-- Integration-gate fails if any dependent integration job reports
-  `failure` or `cancelled`, and runs with `if: always()`.
-- A workflow-level cancellation after the build-image job completes will
-  cancel downstream integration jobs even though the build succeeded.
+[tests/settings/system-settings.spec.ts](../../tests/settings/system-settings.spec.ts) toggles Cerberus and CrowdSec feature flags via the feature flags API and resets those flags after each test. These tests change global security state and can affect unrelated shards running in parallel.
-### 2.3 Current Image Tag Outputs
+### 2.3 Emergency Server Tests

-- In [ .github/workflows/ci-pipeline.yml ], the `Compute image tags`
-  step emits:
-  - `DEFAULT_TAG` (sha-<short> or pr-<number>-<short>)
-  - latest/dev/nightly tags based on `github.ref_name`
-- In [ .github/workflows/docker-build.yml ], `docker/metadata-action`
-  emits tags including:
-  - `type=raw,value=pr-${{ env.TRIGGER_PR_NUMBER }}-{{sha}}` for PRs
-  - `type=sha,format=short` for non-PRs
-  - feature branch tag via `steps.feature-tag.outputs.tag`
-  - `latest` only when `is_default_branch` is true
-  - `dev` only when `env.TRIGGER_REF == 'refs/heads/development'`
-- Docker Hub currently shows only PR and SHA-prefixed tags for some
-  builds; SHA-only and branch+SHA tags are not emitted consistently.
-- Nightly tagging exists in [ .github/workflows/nightly-build.yml ],
-  but the main Docker build workflow does not emit a `nightly` tag based
-  on branch detection.
+[tests/emergency-server/tier2-validation.spec.ts](../../tests/emergency-server/tier2-validation.spec.ts) calls the emergency security reset endpoint and validates rate-limiting behavior on the emergency server. This directly disables security modules during execution and should be treated as security enforcement coverage.
+### 2.4 Global Security Reset in Test Setup

+[tests/global-setup.ts](../../tests/global-setup.ts) performs an emergency security reset and verifies that ACL and rate limiting are disabled before tests run. This is intended for cleanup, but it reinforces that global security state is shared across shards and is sensitive to security toggles.

+Observed behavior in [tests/global-setup.ts](../../tests/global-setup.ts):

+- Always validates `CHARON_EMERGENCY_TOKEN` and fails fast if missing or invalid.
+- Executes pre-auth and authenticated `emergencySecurityReset()`.
+- Runs `verifySecurityDisabled()` after the authenticated reset.

+This means non-security shards still perform a global security reset even when `CHARON_SECURITY_TESTS_ENABLED` is set to `false` in the workflow.
+### 2.5 Security Test Suites Already Isolated

+The workflow already routes tests/security and tests/security-enforcement into dedicated security jobs. These suites include explicit security module enablement and enforcement checks, such as rate-limit enforcement in [tests/security-enforcement/rate-limit-enforcement.spec.ts](../../tests/security-enforcement/rate-limit-enforcement.spec.ts) and dashboard toggles in [tests/security/security-dashboard.spec.ts](../../tests/security/security-dashboard.spec.ts).
+### 2.6 Integration Tests Touch Security Domains

+Some integration tests create access lists and navigate to security pages, for example [tests/integration/multi-feature-workflows.spec.ts](../../tests/integration/multi-feature-workflows.spec.ts). These do not explicitly toggle security modules, but they use security-domain resources that may depend on Cerberus state and should be reviewed for compatibility with Cerberus being disabled.
 ## 3. Technical Specifications

-### 3.1 CI Lint Job (Pipeline)
+### 3.1 Security Test Classification Rules

-Add a Go lint step to the lint job in
-[ .github/workflows/ci-pipeline.yml ]:
+Classify a test as security-affecting if it does any of the following:

-- Tooling: `golangci/golangci-lint-action`.
-- Working directory: `backend`.
-- Config: `backend/.golangci-fast.yml`.
-- Timeout: match config intent (2m fast, or 5m if parity with other
-  pipeline steps is preferred).
-- Failures: do not allow `continue-on-error`.
+- Calls the emergency security reset endpoint.
+- Sets or toggles feature flags related to Cerberus, ACL, WAF, CrowdSec, or rate limiting.
+- Enables or disables security modules via settings or admin controls.
+- Depends on rate limiting behavior or ACL/WAF enforcement for assertions.
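The classification rules above can be sketched as a review-script heuristic. This is a minimal sketch, not part of the plan: the function name and the regexes are illustrative assumptions, and a real audit would still need a human pass over each spec file.

```typescript
// Hypothetical helper for auditing spec-file sources against the
// classification rules. The patterns are assumptions, not exact matches
// for the codebase.
const SECURITY_PATTERNS: RegExp[] = [
  /emergencySecurityReset/,                    // rule 1: emergency reset endpoint calls
  /\b(cerberus|crowdsec|acl|waf)\b/i,          // rules 2-3: security feature flags/modules
  /rate.?limit/i,                              // rule 4: rate limiting dependencies
];

function isSecurityAffecting(specSource: string): boolean {
  return SECURITY_PATTERNS.some((pattern) => pattern.test(specSource));
}

// Example classifications under this heuristic:
const flagged = isSecurityAffecting("await setFeatureFlag('cerberus', false);");
const clean = isSecurityAffecting("await page.goto('/dns-providers');");
```

A script like this only narrows the candidate list; tests that depend on security state indirectly (rule 4) may need manual review.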
-### 3.2 CI Lint Job (Manual Quality Checks)
+### 3.2 Isolation Strategy Options

-Update [ .github/workflows/quality-checks.yml ] to align with local
-blocking behavior:
+Option A (preferred): Move security-affecting tests into dedicated security folders

-- Remove `continue-on-error: true` from the golangci-lint step.
-- Ensure the step points to `backend/.golangci-fast.yml` or runs in
-  `backend` so that the config is picked up deterministically.
-- Pin golangci-lint version to the same major used in CI pipeline to
-  avoid config drift.
+- Move or split tests from tests/settings/system-settings.spec.ts into a new security-focused file under tests/security or tests/security-enforcement.
+- Move tests/emergency-server to tests/security-enforcement or tests/security, depending on whether they validate enforcement behavior or emergency pathways.
+- Keep non-security shards limited to tests that do not mutate security state.

-### 3.3 Integration Cancellation Root Cause and Fix
+Option B: Use Playwright tags and workflow filters

-Investigate and address workflow-level cancellation affecting
-integration jobs after `build-image` completes.
+- Tag security-affecting tests with a consistent tag such as @security-affecting.
+- Update security jobs to run tagged tests and non-security jobs to exclude them using grep or grep-invert.
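Option B can be modeled directly: Playwright's `grep` and `grepInvert` options match against full test titles, so one tag regex drives both shard filters. The tag name and titles below are illustrative assumptions.

```typescript
// One tag regex drives both shards. In playwright.config.ts this would
// appear as, e.g. (illustrative):
//   projects: [
//     { name: 'security',     grep: SECURITY_TAG },
//     { name: 'non-security', grepInvert: SECURITY_TAG },
//   ]
const SECURITY_TAG = /@security-affecting/;

// Example titles as they would appear in the two shards.
const titles = [
  'should toggle Cerberus security feature @security-affecting',
  'should display DNS provider list',
];

// Security shard keeps only tagged tests (grep behavior).
const securityShard = titles.filter((t) => SECURITY_TAG.test(t));
// Non-security shard excludes tagged tests (grepInvert behavior).
const nonSecurityShard = titles.filter((t) => !SECURITY_TAG.test(t));
```

The same regex also works from the CLI via `--grep` / `--grep-invert`, which keeps local runs aligned with the CI shard filters.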
-Required investigation steps:
+Option C: Update non-security job inputs to explicitly exclude security-affecting files

-- Inspect recent CI runs for cancellation reasons in the Actions UI
-  (workflow-level cancellation vs job-level failure).
-- Confirm whether cancellations coincide with the workflow-level
-  concurrency group in [ .github/workflows/ci-pipeline.yml ].
-- Verify `inputs.run_integration` values are only populated on
-  `workflow_dispatch` events and evaluate the behavior on
-  `pull_request` events.
-- Verify `needs.build-image.outputs.push_image` and
-  `needs.build-image.outputs.image_ref_dockerhub` are set for non-fork
-  pull requests and branch pushes.
+- Remove tests/settings/system-settings.spec.ts and tests/emergency-server from non-security shard inputs.
+- Add those tests to the security job inputs.

-Proposed fix (preferred):
+Decision: Prefer Option A, with a fallback to Option B if the team wants to keep files in their current directories. Option C is acceptable as a short-term mitigation but is less maintainable long-term.
-- Remove workflow-level concurrency from
-  [ .github/workflows/ci-pipeline.yml ] and instead apply job-level
-  concurrency to the build-image job only, keeping cancellation limited
-  to redundant builds while allowing downstream integration/E2E/coverage
-  jobs to finish.
-- Add explicit guards to integration jobs:
-  `if: needs.build-image.result == 'success' &&
-  needs.build-image.outputs.push_image == 'true' &&
-  needs.build-image.outputs.image_ref_dockerhub != '' &&
-  (inputs.run_integration != false)`.
-- Update the integration-gate logic to treat `skipped` jobs as
-  non-fatal and only fail on `failure` or `cancelled` when
-  `needs.build-image.result == 'success'` and `push_image == 'true'`.

+### 3.3 Workflow Separation Rules

-Alternative fix (not recommended; does not meet primary objective):
+Update [.github/workflows/e2e-tests-split.yml](../../.github/workflows/e2e-tests-split.yml) so:

-- Keep workflow-level concurrency but change to
-  `cancel-in-progress: ${{ github.event_name == 'pull_request' }}` so
-  branch pushes and manual dispatches complete all downstream jobs.
-  - This option still cancels PR runs after successful builds, which
-    conflicts with the primary objective of allowing integration gates
-    to complete reliably.

+- Security jobs explicitly include all security-affecting tests, including those moved from settings and emergency-server.
+- Non-security jobs do not include any files or directories that toggle or reset security modules.
+- If tags are used, security jobs should run only tagged tests and non-security jobs should invert the tag.
-### 3.4 Image Tag Outputs (CI Pipeline)
+### 3.4 Test Organization Changes

-Update the `Compute image tags` step in
-[ .github/workflows/ci-pipeline.yml ] to emit additional tags.
+Planned file moves and splits:

-Required additions:
+- Split tests/settings/system-settings.spec.ts so security-affecting tests move to a dedicated security-focused test file under tests/security.
+- Move tests/emergency-server into a security-enforcement folder.
+- Review integration tests for dependencies on security module state and move or tag as needed.

-- SHA-only tag (short SHA, no prefix):
-  `${SHORT_SHA}` for both GHCR and Docker Hub.
-- Tag normalization rules for `SANITIZED_BRANCH`:
-  - Ensure the tag is non-empty after sanitization.
-  - Ensure the first character is `[a-z0-9]`; if it would start with
-    `-` or `.`, normalize by trimming leading `-` or `.` and recheck.
-  - Replace non-alphanumeric characters with `-` and collapse multiple
-    `-` characters into one.
-  - Limit the tag length to 128 characters after normalization.
-  - Fallback: if the sanitized result is empty or still invalid after
-    normalization, use `branch` as the fallback prefix.
-- Branch+SHA tag for non-PR events using a sanitized branch name derived
-  from `github.ref_name` (lowercase, `/` → `-`, non-alnum → `-`,
-  trimmed, collapsed). Example: `${SANITIZED_BRANCH}-${SHORT_SHA}`.
-- Preserve existing `pr-${PR_NUMBER}-${SHORT_SHA}` for PRs.
-- Keep `latest`, `dev`, and `nightly` tags based on:
-  `github.ref_name == 'main' | 'development' | 'nightly'`.

+Concrete list of tests to move from [tests/settings/system-settings.spec.ts](../../tests/settings/system-settings.spec.ts) into a new file [tests/security/system-settings-feature-toggles.spec.ts](../../tests/security/system-settings-feature-toggles.spec.ts):
-Decision point: SHA-only tags for PR builds

+- Feature Toggles:
+  - "should toggle Cerberus security feature"
+  - "should toggle CrowdSec console enrollment"
+  - "should toggle uptime monitoring"
+  - "should persist feature toggle changes"
+  - "should show overlay during feature update"
+- Feature Toggles - Advanced Scenarios (Phase 4):
+  - "should handle concurrent toggle operations"
+  - "should retry on 500 Internal Server Error"
+  - "should fail gracefully after max retries exceeded"
+  - "should verify initial feature flag state before tests"

-- Option A (recommended): publish SHA-only tags only for trusted
-  branches (main/development/nightly and non-fork pushes). PR builds
-  continue to use `pr-${PR_NUMBER}-${SHORT_SHA}` without SHA-only tags.
-- Option B: publish SHA-only tags for PR builds when image push is
-  enabled for a non-fork authorized run (e.g., same-repo PRs), in
-  addition to PR-prefixed tags.
-- Assumption (default until decided): follow Option A to avoid
-  ambiguous SHA-only tags for untrusted PR contexts.

+Note: The `test.afterEach` feature flag reset and `test.afterAll` API metrics reporting currently tied to toggles should move with the toggle suite into [tests/security/system-settings-feature-toggles.spec.ts](../../tests/security/system-settings-feature-toggles.spec.ts) to keep state cleanup scoped to the security job.
-Required step-level variables and expressions:
+Concrete emergency server file moves:

-- Step: `Compute image tags` (id: `tags`).
-- Variables: `SHORT_SHA`, `DEFAULT_TAG`, `PR_NUMBER`, `SANITIZED_BRANCH`.
-- Expressions:
-  - `${{ github.event_name }}`
-  - `${{ github.ref_name }}`
-  - `${{ github.event.pull_request.number }}`
+- Move [tests/emergency-server/emergency-server.spec.ts](../../tests/emergency-server/emergency-server.spec.ts) to [tests/security-enforcement/emergency-server/emergency-server.spec.ts](../../tests/security-enforcement/emergency-server/emergency-server.spec.ts).
+- Move [tests/emergency-server/tier2-validation.spec.ts](../../tests/emergency-server/tier2-validation.spec.ts) to [tests/security-enforcement/emergency-server/tier2-validation.spec.ts](../../tests/security-enforcement/emergency-server/tier2-validation.spec.ts).
-### 3.5 Image Tag Outputs (docker-build.yml)
+### 3.5 Error Handling and Edge Cases

-Update [ .github/workflows/docker-build.yml ] `Generate Docker metadata`
-tags to match the required outputs.
+- Parallel shards must not toggle global security state at the same time.
+- Tests that require Cerberus enabled must run only in security jobs where Cerberus is enabled by environment or explicit setup.
+- If global setup performs a security reset, security jobs must re-enable required modules before assertions.

-Required additions:
+### 3.6 Global Setup Conditioning (Critical)
-- Add SHA-only short tag for all events:
-  `type=sha,format=short,prefix=,suffix=`.
-- Add branch+SHA short tag for non-PR events using a sanitized branch
-  name derived from `env.TRIGGER_REF` or `env.TRIGGER_HEAD_BRANCH`.
-- Apply the same tag normalization rules as the CI pipeline
-  (`SANITIZED_BRANCH` non-empty, leading character normalized, length
-  <= 128, fallback to `branch`).
-- Add explicit branch tags for main/development/nightly based on
-  `env.TRIGGER_REF` (do not rely on `is_default_branch` for
-  workflow_run triggers):
-  - `type=raw,value=latest,enable=${{ env.TRIGGER_REF == 'refs/heads/main' }}`
-  - `type=raw,value=dev,enable=${{ env.TRIGGER_REF == 'refs/heads/development' }}`
-  - `type=raw,value=nightly,enable=${{ env.TRIGGER_REF == 'refs/heads/nightly' }}`

+Global setup must not reset security in non-security shards. Add a guard in [tests/global-setup.ts](../../tests/global-setup.ts):
-Required step names and variables:

-- Step: `Compute feature branch tag` (id: `feature-tag`) remains for
-  `refs/heads/feature/*`.
-- New step: `Compute branch+sha tag` (id: `branch-tag`) for all
-  non-PR events using `TRIGGER_REF`.
-- Metadata step: `Generate Docker metadata` (id: `meta`).
-- Expressions:
-  - `${{ env.TRIGGER_EVENT }}`
-  - `${{ env.TRIGGER_REF }}`
-  - `${{ env.TRIGGER_HEAD_SHA }}`
-  - `${{ env.TRIGGER_PR_NUMBER }}`
-  - `${{ steps.branch-tag.outputs.tag }}`

-### 3.6 Repository Hygiene Review (Requested)

-- [ .gitignore ]: No change required for CI updates; no new artifacts
-  introduced by the tag changes.
-- [ codecov.yml ]: No change required; coverage configuration remains
-  correct.
-- [ .dockerignore ]: No change required; CI-only YAML edits are already
-  excluded from Docker build context.
-- [ Dockerfile ]: No change required; tagging logic is CI-only.
-- [ Branch tag normalization ]: No new files required; logic should be
-  implemented in existing CI steps only.
+- Only validate `CHARON_EMERGENCY_TOKEN`, call `emergencySecurityReset()`, and run `verifySecurityDisabled()` when `CHARON_SECURITY_TESTS_ENABLED === 'true'`.
+- For non-security shards (`CHARON_SECURITY_TESTS_ENABLED !== 'true'`), skip all security reset logic and continue with health checks and test data cleanup only.
+- Preserve existing behavior for security shards so enforcement tests still run against a deterministic baseline.
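The guard described above can be sketched as a pure decision function. This is a minimal model of the branching only, under the assumption that the real helpers (`emergencySecurityReset()`, `verifySecurityDisabled()`, health checks, cleanup) stay in tests/global-setup.ts; the step strings here are labels, not real calls.

```typescript
// Models the proposed CHARON_SECURITY_TESTS_ENABLED guard. Step names
// mirror the plan; the actual implementations live in tests/global-setup.ts.
function setupSteps(env: Record<string, string | undefined>): string[] {
  const steps: string[] = [];
  if (env.CHARON_SECURITY_TESTS_ENABLED === 'true') {
    // Security shards keep the full reset for a deterministic baseline.
    steps.push('validate CHARON_EMERGENCY_TOKEN');
    steps.push('emergencySecurityReset()');
    steps.push('verifySecurityDisabled()');
  }
  // Every shard still runs health checks and test data cleanup.
  steps.push('health checks', 'test data cleanup');
  return steps;
}

// Non-security shards skip all three security steps.
const nonSecurity = setupSteps({});
// Security shards keep the existing sequence.
const security = setupSteps({ CHARON_SECURITY_TESTS_ENABLED: 'true' });
```

Keeping the condition a strict string comparison against `'true'` makes unset, empty, and `'false'` values all fall into the non-security path, which matches the workflow's default.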
 ## 4. Implementation Plan

 ### Phase 1: Playwright Tests (Behavior Baseline)

-- Confirm that no UI behavior is affected by CI-only changes.
-- Keep this phase as a verification note: E2E is unchanged and can be
-  re-run if CI changes surface unexpected side effects.
+- Confirm the current security toggle behavior in system settings and emergency server tests.
+- Define expected outcomes for toggling Cerberus and CrowdSec so that moved tests retain coverage.

-### Phase 2: Pipeline Lint Restoration
+### Phase 2: Security-Affecting Test Identification

-- Add a Go lint step to the lint job in
-  [ .github/workflows/ci-pipeline.yml ].
-- Use `backend/.golangci-fast.yml` and ensure the step blocks on
-  failure.
-- Keep the lint job dependency order intact (repo health → Hadolint →
-  GORM scan → Go lint → frontend lint).
+- Inventory tests in tests/settings, tests/emergency-server, and tests/integration against the security-affecting rules.
+- Create a list of files to move, split, or tag.
-### Phase 3: Integration Cancellation Fix
+### Phase 3: Test Restructuring

-- Remove workflow-level concurrency from
-  [ .github/workflows/ci-pipeline.yml ] and add job-level concurrency
-  on `build-image` only.
-- Add explicit `if` guards to integration jobs based on
-  `needs.build-image.result`, `needs.build-image.outputs.push_image`,
-  and `needs.build-image.outputs.image_ref_dockerhub`.
-- Update `integration-gate` to ignore `skipped` results when integration
-  is not expected to run and only fail on `failure` or `cancelled` when
-  build-image succeeded and pushed an image.
+- Split tests/settings/system-settings.spec.ts to isolate security toggles into [tests/security/system-settings-feature-toggles.spec.ts](../../tests/security/system-settings-feature-toggles.spec.ts) using the concrete list above.
+- Move emergency server tests into [tests/security-enforcement/emergency-server/](../../tests/security-enforcement/emergency-server/) using the concrete list above.
+- If integration tests require security modules enabled, relocate or tag them.
-### Phase 4: Docker Tagging Updates
+### Phase 4: Workflow Updates

-- Update `Compute image tags` in
-  [ .github/workflows/ci-pipeline.yml ] to emit SHA-only and
-  branch+SHA tags in addition to the existing PR and branch tags.
-- Update `Generate Docker metadata` in
-  [ .github/workflows/docker-build.yml ] to emit SHA-only, branch+SHA,
-  and explicit latest/dev/nightly tags based on `env.TRIGGER_REF`.
-- Add tag normalization logic in both workflows to ensure valid Docker
-  tag prefixes (non-empty, valid leading character, <= 128 length,
-  fallback when sanitized branch is empty or invalid).
+- Update non-security shard inputs in [.github/workflows/e2e-tests-split.yml](../../.github/workflows/e2e-tests-split.yml):
+  - Remove `tests/emergency-server` from non-security job inputs.
+  - Keep `tests/settings` but ensure the moved security toggle suite lives under `tests/security` so it is not picked up.
+- Update security job inputs to include the relocated emergency server folder:
+  - Ensure `tests/security-enforcement/emergency-server` is included (already covered by `tests/security-enforcement/` once moved).
+  - Security jobs already include `tests/security/`, which will pick up `tests/security/system-settings-feature-toggles.spec.ts`.
+- If tags are adopted, add grep filters to the security and non-security job commands.
 ### Phase 5: Validation and Guardrails

-- Verify CI logs show the golangci-lint version and config in use.
-- Confirm integration jobs are no longer cancelled after successful
-  builds when new runs are queued.
-- Validate that Docker Hub and GHCR tags include:
-  - SHA-only short tags
-  - Branch+SHA short tags
-  - latest/dev/nightly tags for main/development/nightly branches
+- Run the security jobs and non-security jobs separately and confirm no security-related tests execute in non-security shards.
+- Confirm rate limit and ACL enforcement tests only run under security jobs with Cerberus enabled.
+- Capture and review Playwright reports for cross-shard contamination indicators.
 ## 5. Acceptance Criteria (EARS)

-- WHEN a pull request or manual pipeline run executes, THE SYSTEM SHALL
-  run golangci-lint in the pipeline lint stage using
-  `backend/.golangci-fast.yml`.
-- WHEN golangci-lint finds violations, THE SYSTEM SHALL fail the
-  pipeline lint stage and block downstream jobs.
-- WHEN the manual quality workflow runs, THE SYSTEM SHALL enforce the
-  same blocking behavior and fast config as pre-commit.
-- WHEN a build-image job completes successfully and image push is
-  enabled for a non-fork authorized run, THE SYSTEM SHALL allow
-  integration jobs to run to completion without being cancelled by
-  workflow-level concurrency.
-- WHEN integration jobs are skipped by configuration while image push
-  is disabled or not authorized for the run, THE SYSTEM SHALL not mark
-  the integration gate as failed.
-- WHEN a non-PR build runs on main/development/nightly branches and
-  image push is enabled for a non-fork authorized run, THE SYSTEM SHALL
-  publish `latest`, `dev`, or `nightly` tags respectively to Docker Hub
-  and GHCR.
-- WHEN any image is built in CI and image push is enabled for a
-  non-fork authorized run, THE SYSTEM SHALL publish SHA-only and
-  branch+SHA tags in addition to existing PR or default tags.
+- WHEN a non-security E2E shard runs, THE SYSTEM SHALL exclude all tests that toggle or reset Cerberus, ACL, WAF, CrowdSec, or rate limiting.
+- WHEN a non-security E2E shard runs, THE SYSTEM SHALL skip the global security reset in [tests/global-setup.ts](../../tests/global-setup.ts) unless `CHARON_SECURITY_TESTS_ENABLED` is `true`.
+- WHEN a security E2E shard runs, THE SYSTEM SHALL include all tests that toggle or reset security modules and all enforcement tests.
+- WHEN security-affecting tests run, THE SYSTEM SHALL execute them only in workflows where Cerberus is enabled.
+- WHEN tests are reorganized, THE SYSTEM SHALL preserve existing security coverage without introducing new cross-shard dependencies.
+- WHEN integration tests require security modules enabled, THE SYSTEM SHALL route them to security shards or explicitly enable security in their setup.
 ## 6. Risks and Mitigations

-- Risk: CI runtime increases due to added golangci-lint execution.
-  Mitigation: use the fast config and keep timeout tight (2m) with
-  caching enabled by the action.
-- Risk: Config incompatibility with CI golangci-lint version.
-  Mitigation: pin the version and log it in CI; validate config format.
-- Risk: Reduced cancellation leads to overlapping integration runs.
-  Mitigation: keep job-level concurrency on build-image; monitor queue
-  time and adjust if needed.
-- Risk: Tag proliferation complicates image selection for users.
-  Mitigation: document tag matrix in release notes or README once
-  verified in CI.
-- Risk: Sanitized branch names may collapse to empty or invalid tags.
-  Mitigation: enforce normalization rules with a safe fallback prefix
-  to keep tag generation deterministic.
+- Risk: Moving tests breaks historical references or documentation links.
+  Mitigation: update any references in test comments and plan docs after moves.
+- Risk: Tag-based filtering is inconsistent across local and CI runs.
+  Mitigation: document the tag usage in Playwright config and ensure local scripts align with CI filters.
+- Risk: Integration tests implicitly rely on Cerberus being enabled.
+  Mitigation: audit integration tests and either enable Cerberus in test setup or move them to security shards.
 ## 7. Confidence Score

-Confidence: 84 percent
+Confidence: 78 percent

-Rationale: The linting changes are straightforward, but integration
-job cancellation behavior depends on workflow-level concurrency and may
-require validation in Actions history to select the most appropriate
-fix. Tagging changes are predictable once metadata-action inputs are
-aligned with branch detection.
+Rationale: The security-toggling tests are identifiable and the workflow split is clear, but integration test dependencies on security state require additional verification before final routing.
@@ -1,360 +1,111 @@
 # QA & Security Report

-**Date:** 2026-02-08
+**Date:** 2026-02-09
 **Status:** 🔴 FAILED
 **Evaluator:** GitHub Copilot (QA Security Mode)

 ## Executive Summary

-QA validation ran per Definition of Done. Failures are listed below with verbatim output.
+Verification ran per request. Non-security shard hit ACL blocking; security shard ran the emergency reset but failed during advanced scenarios.
 | Check | Status | Details |
 | :--- | :--- | :--- |
 | **Docker: Rebuild E2E Environment** | 🟢 PASS | Completed |
-| **Playwright E2E (All Browsers)** | 🔴 FAIL | Container not ready after 30000ms |
 | **Backend Coverage** | 🟢 PASS | Skill reported success |
 | **Frontend Coverage** | 🔴 FAIL | Test failures; see output |
 | **TypeScript Check** | 🟢 PASS | `tsc --noEmit` completed |
 | **Pre-commit Hooks** | 🟢 PASS | Hooks passed |
 | **Lint: Frontend** | 🟢 PASS | `eslint . --report-unused-disable-directives` completed |
 | **Lint: Go Vet** | 🟢 PASS | No errors reported |
 | **Lint: Staticcheck (Fast)** | 🟢 PASS | `0 issues.` |
 | **Lint: Markdownlint** | 🟢 PASS | No errors reported |
 | **Lint: Hadolint Dockerfile** | 🔴 FAIL | DL3008, DL4006, SC2015 warnings |
+| **Playwright: Non-security shard (tests/settings)** | 🔴 FAIL | ACL 403 during auth setup; confirmed global-setup skip log |
+| **Playwright: Security shard (system-settings-feature-toggles)** | 🔴 FAIL | Emergency reset ran; multiple failures + ECONNREFUSED |
 | **Security: Trivy Scan (filesystem)** | 🟢 PASS | No issues found |
-| **Security: Docker Image Scan (Local)** | 🔴 FAIL | 0 critical, 8 high vulnerabilities |
-| **Security: CodeQL Go Scan (CI-Aligned)** | 🟢 PASS | Completed (output truncated) |
-| **Security: CodeQL JS Scan (CI-Aligned)** | 🟢 PASS | Completed (output truncated) |
+| **Security: CodeQL Go Scan (CI-Aligned)** | 🟢 PASS | Completed; review [codeql-results-go.sarif](codeql-results-go.sarif) |
+| **Security: CodeQL JS Scan (CI-Aligned)** | 🟢 PASS | Completed; review [codeql-results-js.sarif](codeql-results-js.sarif) |
+| **Security: Docker Image Scan (Local)** | 🟡 INCONCLUSIVE | Build output logged; completion summary not emitted |
---

## 1. Security Findings
## 1. Verification Results

### Security Scans - SKIPPED
### Non-Security Shard - FAILED

### Frontend Coverage - FAILED
**Expected log observed (verbatim):**
```
⏭️ Security tests disabled - skipping authenticated security reset
```

**Failure Output (verbatim):**
```
Terminal: Test: Frontend with Coverage (Charon)
Output:
Error: GET /api/v1/setup failed with unexpected status 403: {"error":"Blocked by access control list"}
```

### Security Shard - FAILED

[... PREVIOUS OUTPUT TRUNCATED ...]

ity header profile to selected hosts using bulk endpoint 349ms
✓ removes security header profile when "None" selected 391ms
✓ handles partial failure with appropriate toast 303ms
✓ resets state on modal close 376ms
✓ shows profile description when profile is selected 504ms
✓ src/pages/__tests__/Plugins.test.tsx (30 tests) 1828ms
✓ src/components/__tests__/DNSProviderSelector.test.tsx (29 tests) 292ms
↓ src/pages/__tests__/Security.audit.test.tsx (18 tests | 18 skipped)
✓ src/api/__tests__/presets.test.ts (26 tests) 26ms
↓ src/pages/__tests__/Security.errors.test.tsx (13 tests | 13 skipped)
✓ src/components/__tests__/SecurityHeaderProfileForm.test.tsx (17 tests) 1928ms
✓ should show security score 582ms
✓ should calculate score after debounce 536ms
↓ src/pages/__tests__/Security.dashboard.test.tsx (18 tests | 18 skipped)
✓ src/components/__tests__/CertificateStatusCard.test.tsx (24 tests) 267ms
✓ src/api/__tests__/dnsProviders.test.ts (30 tests) 33ms
✓ src/pages/__tests__/Uptime.spec.tsx (11 tests) 1047ms
✓ src/components/__tests__/LoadingStates.security.test.tsx (41 tests) 417ms
↓ src/pages/__tests__/Security.loading.test.tsx (12 tests | 12 skipped)
✓ src/data/__tests__/crowdsecPresets.test.ts (38 tests) 22ms
Error: Not implemented: navigation (except hash changes)
at module.exports (/projects/Charon/frontend/node_modules/jsdom/lib/jsdom/browser/not-implemented.js:9:17)
at navigateFetch (/projects/Charon/frontend/node_modules/jsdom/lib/jsdom/living/window/navigation.js:77:3)
at exports.navigate (/projects/Charon/frontend/node_modules/jsdom/lib/jsdom/living/window/navigation.js:55:3)
at Timeout._onTimeout (/projects/Charon/frontend/node_modules/jsdom/lib/jsdom/living/nodes/HTMLHyperlinkElementUtils-impl.js:81:7)
at listOnTimeout (node:internal/timers:581:17)
at processTimers (node:internal/timers:519:7) undefined
stderr | src/pages/__tests__/AuditLogs.test.tsx > <AuditLogs /> > handles export error
Export error: Error: Export failed
at /projects/Charon/frontend/src/pages/__tests__/AuditLogs.test.tsx:324:7
at file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:145:11
at file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:915:26
at file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:1243:20
at new Promise (<anonymous>)
at runWithTimeout (file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:1209:10)
at file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:1653:37
at Traces.$ (file:///projects/Charon/frontend/node_modules/vitest/dist/chunks/traces.CCmnQaNT.js:142:27)
at trace (file:///projects/Charon/frontend/node_modules/vitest/dist/chunks/test.B8ej_ZHS.js:239:21)
at runTest (file:///projects/Charon/frontend/node_modules/@vitest/runner/dist/index.js:1653:12)

✓ src/pages/__tests__/AuditLogs.test.tsx (14 tests) 1219ms
✓ src/hooks/__tests__/useSecurity.test.tsx (19 tests) 1107ms
✓ src/hooks/__tests__/useSecurityHeaders.test.tsx (15 tests) 805ms
stdout | src/api/logs.test.ts > logs api > connects to live logs websocket and handles lifecycle events
Connecting to WebSocket: ws://localhost/api/v1/logs/live?level=error&source=cerberus
WebSocket connection established
WebSocket connection closed { code: 1000, reason: '', wasClean: true }

stderr | src/api/logs.test.ts > logs api > connects to live logs websocket and handles lifecycle events
WebSocket error: Event { isTrusted: [Getter] }

stdout | src/api/logs.test.ts > connectSecurityLogs > connects to cerberus logs websocket endpoint
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?

stdout | src/api/logs.test.ts > connectSecurityLogs > passes source filter to websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?source=waf

stdout | src/api/logs.test.ts > connectSecurityLogs > passes level filter to websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?level=error

stdout | src/api/logs.test.ts > connectSecurityLogs > passes ip filter to websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?ip=192.168

stdout | src/api/logs.test.ts > connectSecurityLogs > passes host filter to websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?host=example.com

stdout | src/api/logs.test.ts > connectSecurityLogs > passes blocked_only filter to websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?blocked_only=true

stdout | src/api/logs.test.ts > connectSecurityLogs > receives and parses security log entries
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket connection established

stdout | src/api/logs.test.ts > connectSecurityLogs > receives blocked security log entries
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket connection established

stdout | src/api/logs.test.ts > connectSecurityLogs > handles onOpen callback
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket connection established

stdout | src/api/logs.test.ts > connectSecurityLogs > handles onError callback
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?

stderr | src/api/logs.test.ts > connectSecurityLogs > handles onError callback
Cerberus logs WebSocket error: Event { isTrusted: [Getter] }

stdout | src/api/logs.test.ts > connectSecurityLogs > handles onClose callback
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket closed { code: 1000, reason: '', wasClean: true }

stdout | src/api/logs.test.ts > connectSecurityLogs > returns disconnect function that closes websocket
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket connection established
Cerberus logs WebSocket closed { code: 1000, reason: '', wasClean: true }

stdout | src/api/logs.test.ts > connectSecurityLogs > handles JSON parse errors gracefully
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?
Cerberus logs WebSocket connection established

stdout | src/api/logs.test.ts > connectSecurityLogs > uses wss protocol when on https
Connecting to Cerberus logs WebSocket: wss://secure.example.com/api/v1/cerberus/logs/ws?

stdout | src/api/logs.test.ts > connectSecurityLogs > combines multiple filters in websocket url
Connecting to Cerberus logs WebSocket: ws://localhost/api/v1/cerberus/logs/ws?source=waf&level=warn&ip=10.0.0&host=example.com&blocked_only=true

## 1. Validation Results

### Playwright E2E - FAILED
**Expected log observed (verbatim):**
```
🔓 Performing emergency security reset...
```

**Failure Output (verbatim):**
```
> e2e:all
> PLAYWRIGHT_HTML_OPEN=never npx playwright test
✘ 7 …Scenarios (Phase 4) › should handle concurrent toggle operations (6.7s)
✘ 8 …Scenarios (Phase 4) › should retry on 500 Internal Server Error (351ms)
✘ 9 …Scenarios (Phase 4) › should fail gracefully after max retries exceeded (341ms)
✘ 10 …Scenarios (Phase 4) › should verify initial feature flag state before tests (372ms)

[dotenv@17.2.4] injecting env (2) from .env -- tip: ⚙️ override existing env vars with { override: true }

🧹 Running global test setup...

🔐 Validating emergency token configuration...
🔑 Token present: f51dedd6...346b
✓ Token length: 64 chars (valid)
✓ Token format: Valid hexadecimal
✓ Token appears to be unique (not a placeholder)
✅ Emergency token validation passed

📍 Base URL: http://127.0.0.1:8080
⏳ Waiting for container to be ready at http://127.0.0.1:8080...
⏳ Waiting for container... (1/15)
⏳ Waiting for container... (2/15)
⏳ Waiting for container... (3/15)
⏳ Waiting for container... (4/15)
⏳ Waiting for container... (5/15)
⏳ Waiting for container... (6/15)
⏳ Waiting for container... (7/15)
⏳ Waiting for container... (8/15)
⏳ Waiting for container... (9/15)
⏳ Waiting for container... (10/15)
⏳ Waiting for container... (11/15)
⏳ Waiting for container... (12/15)
⏳ Waiting for container... (13/15)
⏳ Waiting for container... (14/15)
⏳ Waiting for container... (15/15)
Error: Container failed to start after 30000ms

at global-setup.ts:158

  156 | }
  157 | }
> 158 | throw new Error(`Container failed to start after ${maxRetries * delayMs}ms`);
      | ^
  159 | }
  160 |
  161 | /**
at waitForContainer (/projects/Charon/tests/global-setup.ts:158:9)
at globalSetup (/projects/Charon/tests/global-setup.ts:203:3)


To open last HTML report run:

npx playwright show-report
Error verifying security state: apiRequestContext.get: connect ECONNREFUSED 127.0.0.1:8080
```

### Frontend Coverage - FAILED

**Failure Output (verbatim):**
```
Terminal: Test: Frontend with Coverage (Charon)
Output:


[... PREVIOUS OUTPUT TRUNCATED ...]

An update to SelectItemText inside a test was not wrapped in act(...).

When testing, code that causes React state updates should be wrapped into act(...):

act(() => {
  /* fire events that update state */
});
/* assert on the output */

This ensures that you're testing the behavior the user would see in the browser. Learn more at https://react.dev/link/wrap-tests-with-act
An update to SelectItem inside a test was not wrapped in act(...).

When testing, code that causes React state updates should be wrapped into act(...):

act(() => {
  /* fire events that update state */
});
/* assert on the output */

This ensures that you're testing the behavior the user would see in the browser. Learn more at https://react.dev/link/wrap-tests-with-act

✓ src/pages/__tests__/AuditLogs.test.tsx (14 tests) 2443ms
✓ toggles filter panel 307ms
✓ closes detail modal 346ms

❯ src/components/__tests__/ProxyHostForm-dns.test.tsx 9/15
❯ src/pages/__tests__/AccessLists.test.tsx 1/5
❯ src/pages/__tests__/AuditLogs.test.tsx 14/14

Test Files 1 failed | 33 passed | 5 skipped (153)
Tests 1 failed | 862 passed | 84 skipped (957)
Start at 01:04:39
Duration 105.97s
```

### Lint: Hadolint Dockerfile - FAILED

**Failure Output (verbatim):**
```
this check
-:335 DL3008 warning: Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
-:354 DL4006 warning: Set the SHELL option -o pipefail before RUN with a pipe in it. If you are using /bin/sh in an alpine image or if your shell is symlinked to busybox then consider explicitly setting your SHELL to /bin/ash, or disable this check
-:354 SC2015 info: Note that A && B || C is not if-then-else. C may run when A is true.
* The terminal process "/bin/bash '-c', 'docker run --rm -i hadolint/hadolint < Dockerfile'" terminated with exit code: 1.
```

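The three Hadolint findings map to concrete Dockerfile changes. The fragment below is an illustrative remediation sketch only; the package name and version are placeholders, not the project's actual Dockerfile content.

```dockerfile
# Illustrative only — placeholders, not this project's Dockerfile.
# DL4006: make pipe failures fatal before any RUN that uses a pipe.
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# DL3008: pin apt package versions so builds are reproducible.
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl=7.88.1-10+deb12u5 \
 && rm -rf /var/lib/apt/lists/*

# SC2015: replace `A && B || C` with explicit if/else so the fallback
# cannot run after B fails.
RUN if grep -q pattern file.txt; then echo found; else echo missing; fi
```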
### Security: Docker Image Scan (Local) - FAILED

**Failure Output (verbatim):**
```
[SUCCESS] Vulnerability scan complete
[ANALYSIS] Analyzing vulnerability scan results

[INFO] Vulnerability Summary:
🔴 Critical: 0
🟠 High: 8
🟡 Medium: 20
🟢 Low: 2
⚪ Negligible: 380
📊 Total: 410

[WARNING] High Severity Vulnerabilities Found:

- CVE-2025-13151 in libtasn1-6
  Package: libtasn1-6@4.20.0-2
  Fixed: No fix available
  CVSS: 7.5
  Description: Stack-based buffer overflow in libtasn1 version: v4.20.0. The function fails to validate the size of...

- CVE-2025-15281 in libc-bin
  Package: libc-bin@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 7.5
  Description: Calling wordexp with WRDE_REUSE in conjunction with WRDE_APPEND in the GNU C Library version 2.0 to ...

- CVE-2025-15281 in libc6
  Package: libc6@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 7.5
  Description: Calling wordexp with WRDE_REUSE in conjunction with WRDE_APPEND in the GNU C Library version 2.0 to ...

- CVE-2026-0915 in libc-bin
  Package: libc-bin@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 7.5
  Description: Calling getnetbyaddr or getnetbyaddr_r with a configured nsswitch.conf that specifies the library's ...

- CVE-2026-0915 in libc6
  Package: libc6@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 7.5
  Description: Calling getnetbyaddr or getnetbyaddr_r with a configured nsswitch.conf that specifies the library's ...

- CVE-2026-0861 in libc-bin
  Package: libc-bin@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 8.4
  Description: Passing too large an alignment to the memalign suite of functions (memalign, posix_memalign, aligned...

- CVE-2026-0861 in libc6
  Package: libc6@2.41-12+deb13u1
  Fixed: No fix available
  CVSS: 8.4
  Description: Passing too large an alignment to the memalign suite of functions (memalign, posix_memalign, aligned...

- GHSA-69x3-g4r3-p962 in github.com/slackhq/nebula
  Package: github.com/slackhq/nebula@v1.9.7
  Fixed: 1.10.3
  CVSS: 7.6
  Description: Blocklist Bypass possible via ECDSA Signature Malleability...

[ERROR] Found 0 Critical and 8 High severity vulnerabilities
[ERROR] These issues must be resolved before deployment
[ERROR] Review grype-results.json for detailed remediation guidance
[ERROR] Skill execution failed: security-scan-docker-image
```

## 2. Notes

- Some tool outputs were truncated by the runner; the report includes the exact emitted text where available.

---

## 5. Next Actions Required
## 2. Security Scans

1. Fix the failing frontend tests and rerun frontend coverage.
2. Resume deferred QA checks once frontend coverage passes.

### Trivy (filesystem) - PASS

**Output (verbatim):**
```
[SUCCESS] Trivy scan completed - no issues found
[SUCCESS] Skill completed successfully: security-scan-trivy
```

### CodeQL Go - PASS

**Output (verbatim):**
```
Task completed with output:
* Executing task in folder Charon: rm -rf codeql-db-go && codeql database create codeql-db-go --language=go --source-root=backend --codescanning-config=.github/codeql/codeql-config.yml --overwrite --threads=0 && codeql database analyze codeql-db-go --additional-packs=codeql-custom-queries-go --format=sarif-latest --output=codeql-results-go.sarif --sarif-add-baseline-file-info --threads=0
```

### CodeQL JS - PASS

**Output (verbatim):**
```
UnsafeJQueryPlugin.ql : shortestDistances@#ApiGraphs::API::Imp
Xss.ql : shortestDistances@#ApiGraphs::API::Imp
XssThroughDom.ql : shortestDistances@#ApiGraphs::API::Imp
SqlInjection.ql : shortestDistances@#ApiGraphs::API::Imp
CodeInjection.ql : shortestDistances@#ApiGraphs::API::Imp
ImproperCodeSanitization.ql : shortestDistances@#ApiGraphs::API::Imp
UnsafeDynamicMethodAccess.ql : shortestDistances@#ApiGraphs::API::Imp
ClientExposedCookie.ql : shortestDistances@#ApiGraphs::API::Imp
BadTagFilter.ql : shortestDistances@#ApiGraphs::API::Imp
DoubleEscaping.ql : shortestDistances@#ApiGraphs::API::Imp
```

### Docker Image Scan (Local) - INCONCLUSIVE

**Output (verbatim):**
```
[INFO] Executing skill: security-scan-docker-image
[WARNING] Syft version mismatch - CI uses v1.17.0, you have 1.41.2
[WARNING] Grype version mismatch - CI uses v0.107.0, you have 0.107.1
[BUILD] Building Docker image: charon:local
```

---

## Accepted Risks
## 3. Notes

- Security scans skipped for this run per instruction; CVE risk accepted temporarily. Re-run when risk acceptance ends.
- Some runner outputs were truncated; the report includes the exact emitted text where available.

---

## 4. Next Actions Required

1. Resolve ACL 403 blocking auth setup in non-security shard.
2. Investigate ECONNREFUSED during security shard advanced scenarios.
3. Re-run Docker image scan to capture the final vulnerability summary.

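The shard split driving these results can be reproduced locally in spirit with two commands. This is illustrative only: the directory lists follow this commit's workflow edits (`tests/emergency-server` moved out of the non-security run) and are assumptions, not an exact copy of the CI scripts; the helper echoes the commands rather than executing them.

```shell
# Illustrative only — directory lists are assumptions based on this commit.
shard_command() {
  case "$1" in
    security)
      echo "npx playwright test tests/security tests/emergency-server" ;;
    *)
      echo "npx playwright test tests/core tests/integration tests/settings" ;;
  esac
}

shard_command security
shard_command non-security
```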
@@ -14,8 +14,8 @@
 */

import { test, expect, request as playwrightRequest } from '@playwright/test';
import { EMERGENCY_TOKEN, EMERGENCY_SERVER, enableSecurity } from '../fixtures/security';
import { TestDataManager } from '../utils/TestDataManager';
import { EMERGENCY_TOKEN, EMERGENCY_SERVER, enableSecurity } from '../../fixtures/security';
import { TestDataManager } from '../../utils/TestDataManager';

// CI-specific timeout multiplier: CI environments have higher I/O latency
const CI_TIMEOUT_MULTIPLIER = process.env.CI ? 3 : 1;
@@ -1,5 +1,5 @@
import { test, expect, request as playwrightRequest } from '@playwright/test';
import { EMERGENCY_TOKEN, EMERGENCY_SERVER } from '../fixtures/security';
import { EMERGENCY_TOKEN, EMERGENCY_SERVER } from '../../fixtures/security';

/**
 * Break Glass - Tier 2 (Emergency Server) Validation Tests

526 tests/security/system-settings-feature-toggles.spec.ts Normal file
@@ -0,0 +1,526 @@
|
||||
/**
|
||||
* System Settings - Feature Toggle E2E Tests
|
||||
*
|
||||
* Focused suite for security-affecting feature toggles to isolate
|
||||
* global security state changes from non-security shards.
|
||||
*/
|
||||
|
||||
import { test, expect, loginUser } from '../fixtures/auth-fixtures';
|
||||
import {
|
||||
waitForLoadingComplete,
|
||||
clickAndWaitForResponse,
|
||||
clickSwitchAndWaitForResponse,
|
||||
waitForFeatureFlagPropagation,
|
||||
retryAction,
|
||||
getAPIMetrics,
|
||||
resetAPIMetrics,
|
||||
} from '../utils/wait-helpers';
|
||||
import { clickSwitch } from '../utils/ui-helpers';
|
||||
|
||||
test.describe('System Settings - Feature Toggles', () => {
|
||||
test.beforeEach(async ({ page, adminUser }) => {
|
||||
await loginUser(page, adminUser);
|
||||
await waitForLoadingComplete(page);
|
||||
await page.goto('/settings/system');
|
||||
await waitForLoadingComplete(page);
|
||||
});
|
||||
|
||||
test.afterEach(async ({ page }) => {
|
||||
await test.step('Restore default feature flag state', async () => {
|
||||
// ✅ FIX 1.1b: Explicit state restoration for test isolation
|
||||
// Ensures no state leakage between tests without polling overhead
|
||||
// See: E2E Test Timeout Remediation Plan (Sprint 1, Fix 1.1b)
|
||||
const defaultFlags = {
|
||||
'cerberus.enabled': true,
|
||||
'crowdsec.console_enrollment': false,
|
||||
'uptime.enabled': false,
|
||||
};
|
||||
|
||||
// Direct API mutation to reset flags (no polling needed)
|
||||
await page.request.put('/api/v1/feature-flags', {
|
||||
data: defaultFlags,
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
test.afterAll(async () => {
|
||||
await test.step('Report API call metrics', async () => {
|
||||
// ✅ FIX 3.2: Report API call metrics for performance monitoring
|
||||
// See: E2E Test Timeout Remediation Plan (Phase 3, Fix 3.2)
|
||||
const metrics = getAPIMetrics();
|
||||
console.log('\n📊 API Call Metrics:');
|
||||
console.log(` Feature Flag Calls: ${metrics.featureFlagCalls}`);
|
||||
console.log(` Cache Hits: ${metrics.cacheHits}`);
|
||||
console.log(` Cache Misses: ${metrics.cacheMisses}`);
|
||||
console.log(` Cache Hit Rate: ${metrics.featureFlagCalls > 0 ? ((metrics.cacheHits / metrics.featureFlagCalls) * 100).toFixed(1) : 0}%`);
|
||||
|
||||
// ✅ FIX 3.2: Warn when API call count exceeds threshold
|
||||
if (metrics.featureFlagCalls > 50) {
|
||||
console.warn(`⚠️ High API call count detected: ${metrics.featureFlagCalls} calls`);
|
||||
console.warn(' Consider optimizing feature flag usage or increasing cache efficiency');
|
||||
}
|
||||
|
||||
// Reset metrics for next test suite
|
||||
resetAPIMetrics();
|
||||
});
|
||||
});
|
||||
|
||||
test.describe('Feature Toggles', () => {
|
||||
/**
|
||||
* Test: Toggle Cerberus security feature
|
||||
* Priority: P0
|
||||
*/
|
||||
test('should toggle Cerberus security feature', async ({ page }) => {
|
||||
await test.step('Find Cerberus toggle', async () => {
|
||||
// Switch component has aria-label="{label} toggle" pattern
|
||||
const cerberusToggle = page
|
||||
.getByRole('switch', { name: /cerberus.*toggle/i })
|
||||
.or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'))
|
||||
.or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="Cerberus"]') }));
|
||||
|
||||
await expect(cerberusToggle.first()).toBeVisible();
|
||||
});
|
||||
|
||||
await test.step('Toggle Cerberus and verify state changes', async () => {
|
||||
const cerberusToggle = page
|
||||
.getByRole('switch', { name: /cerberus.*toggle/i })
|
||||
.or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'));
|
||||
const toggle = cerberusToggle.first();
|
||||
|
||||
const initialState = await toggle.isChecked().catch(() => false);
|
||||
const expectedState = !initialState;
|
||||
|
||||
// Use retry logic with exponential backoff
|
||||
await retryAction(async () => {
|
||||
// Click toggle and wait for PUT request
|
||||
const putResponse = await clickSwitchAndWaitForResponse(
|
||||
page,
|
||||
toggle,
|
||||
/\/feature-flags/
|
||||
);
|
||||
expect(putResponse.ok()).toBeTruthy();
|
||||
|
||||
// Verify state propagated with condition-based polling
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'cerberus.enabled': expectedState,
|
||||
});
|
||||
|
||||
// Verify UI reflects the change
|
||||
const newState = await toggle.isChecked().catch(() => initialState);
|
||||
expect(newState).toBe(expectedState);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
/**
|
||||
* Test: Toggle CrowdSec console enrollment
|
||||
* Priority: P0
|
||||
*/
|
||||
test('should toggle CrowdSec console enrollment', async ({ page }) => {
|
||||
await test.step('Find CrowdSec toggle', async () => {
|
||||
const crowdsecToggle = page
|
||||
.getByRole('switch', { name: /crowdsec.*toggle/i })
|
||||
.or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'))
|
||||
.or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="CrowdSec"]') }));
|
||||
|
||||
await expect(crowdsecToggle.first()).toBeVisible();
|
||||
});
|
||||
|
||||
await test.step('Toggle CrowdSec and verify state changes', async () => {
|
||||
const crowdsecToggle = page
|
||||
.getByRole('switch', { name: /crowdsec.*toggle/i })
|
||||
.or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'));
|
||||
const toggle = crowdsecToggle.first();
|
||||
|
||||
const initialState = await toggle.isChecked().catch(() => false);
|
||||
const expectedState = !initialState;
|
||||
|
||||
// Use retry logic with exponential backoff
|
||||
await retryAction(async () => {
|
||||
// Click toggle and wait for PUT request
|
||||
const putResponse = await clickSwitchAndWaitForResponse(
|
||||
page,
|
||||
toggle,
|
||||
/\/feature-flags/
|
||||
);
|
||||
expect(putResponse.ok()).toBeTruthy();
|
||||
|
||||
// Verify state propagated with condition-based polling
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'crowdsec.console_enrollment': expectedState,
|
||||
});
|
||||
|
||||
// Verify UI reflects the change
|
||||
const newState = await toggle.isChecked().catch(() => initialState);
|
||||
expect(newState).toBe(expectedState);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
/**
|
||||
* Test: Toggle uptime monitoring
|
||||
* Priority: P0
|
||||
*/
|
||||
test('should toggle uptime monitoring', async ({ page }) => {
|
||||
await test.step('Find Uptime toggle', async () => {
|
||||
const uptimeToggle = page
|
||||
.getByRole('switch', { name: /uptime.*toggle/i })
|
||||
.or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'))
|
||||
.or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="Uptime"]') }));
|
||||
|
||||
await expect(uptimeToggle.first()).toBeVisible();
|
||||
});
|
||||
|
||||
await test.step('Toggle Uptime and verify state changes', async () => {
|
||||
const uptimeToggle = page
|
||||
.getByRole('switch', { name: /uptime.*toggle/i })
|
||||
.or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'));
|
||||
const toggle = uptimeToggle.first();
|
||||
|
||||
const initialState = await toggle.isChecked().catch(() => false);
|
||||
const expectedState = !initialState;
|
||||
|
||||
// Use retry logic with exponential backoff
|
||||
await retryAction(async () => {
|
||||
// Click toggle and wait for PUT request
|
||||
const putResponse = await clickAndWaitForResponse(
|
||||
page,
|
||||
toggle,
|
||||
/\/feature-flags/
|
||||
);
|
||||
expect(putResponse.ok()).toBeTruthy();
|
||||
|
||||
// Verify state propagated with condition-based polling
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'uptime.enabled': expectedState,
|
||||
});
|
||||
|
||||
// Verify UI reflects the change
|
||||
const newState = await toggle.isChecked().catch(() => initialState);
|
||||
expect(newState).toBe(expectedState);
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
/**
|
||||
* Test: Persist feature toggle changes
|
||||
* Priority: P0
|
||||
*/
|
||||
test('should persist feature toggle changes', async ({ page }) => {
|
||||
const uptimeToggle = page
|
||||
.getByRole('switch', { name: /uptime.*toggle/i })
|
||||
.or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'));
|
||||
const toggle = uptimeToggle.first();
|
||||
|
||||
let initialState: boolean;
|
||||
|
||||
await test.step('Get initial toggle state', async () => {
|
||||
await expect(toggle).toBeVisible();
|
||||
initialState = await toggle.isChecked().catch(() => false);
|
||||
});
|
||||
|
||||
await test.step('Toggle the feature', async () => {
|
||||
const expectedState = !initialState;
|
||||
|
||||
// Use retry logic with exponential backoff
|
||||
await retryAction(async () => {
|
||||
// Click toggle and wait for PUT request
|
||||
const putResponse = await clickAndWaitForResponse(
|
||||
page,
|
||||
toggle,
|
||||
/\/feature-flags/
|
||||
);
|
||||
expect(putResponse.ok()).toBeTruthy();
|
||||
|
||||
// Verify state propagated with condition-based polling
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'uptime.enabled': expectedState,
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
await test.step('Reload page and verify persistence', async () => {
|
||||
await page.reload();
|
||||
await waitForLoadingComplete(page);
|
||||
|
||||
// Verify state persisted after reload
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'uptime.enabled': !initialState,
|
||||
});
|
||||
|
||||
const newState = await toggle.isChecked().catch(() => initialState);
|
||||
expect(newState).not.toBe(initialState);
|
||||
});
|
||||
|
||||
await test.step('Restore original state', async () => {
|
||||
// Use retry logic with exponential backoff
|
||||
await retryAction(async () => {
|
||||
// Click toggle and wait for PUT request
|
||||
const putResponse = await clickAndWaitForResponse(
|
||||
page,
|
||||
toggle,
|
||||
/\/feature-flags/
|
||||
);
|
||||
expect(putResponse.ok()).toBeTruthy();
|
||||
|
||||
// Verify state propagated with condition-based polling
|
||||
await waitForFeatureFlagPropagation(page, {
|
||||
'uptime.enabled': initialState,
|
||||
});
|
||||
});
|
||||
});
|
||||
});
|
||||
|
||||
    /**
     * Test: Show overlay during feature update
     * Priority: P1
     */
    test('should show overlay during feature update', async ({ page }) => {
      // Skip: Overlay visibility is transient and race-dependent. The ConfigReloadOverlay
      // may appear for <100ms during config reloads, making reliable E2E assertions impractical.
      // Feature toggle functionality is verified by security-dashboard toggle tests.

      const cerberusToggle = page
        .getByRole('switch', { name: /cerberus.*toggle/i })
        .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'));

      await test.step('Toggle feature and check for overlay', async () => {
        const toggle = cerberusToggle.first();
        await expect(toggle).toBeVisible();

        // Set up response waiter BEFORE clicking to catch the response
        const responsePromise = page.waitForResponse(
          r => r.url().includes('/feature-flags') && r.request().method() === 'PUT',
          { timeout: 10000 }
        ).catch(() => null);

        // Click and check for overlay simultaneously
        await clickSwitch(toggle);

        // Check if overlay or loading indicator appears
        // ConfigReloadOverlay uses Tailwind classes: "fixed inset-0 bg-slate-900/70"
        const overlay = page.locator('.fixed.inset-0.z-50').or(page.locator('[data-testid="config-reload-overlay"]'));
        const overlayVisible = await overlay.isVisible({ timeout: 1000 }).catch(() => false);

        // Overlay may appear briefly - either is acceptable
        expect(overlayVisible || true).toBeTruthy();

        // Wait for the toggle operation to complete
        await responsePromise;
      });
    });
  });

  test.describe('Feature Toggles - Advanced Scenarios (Phase 4)', () => {
    /**
     * Test: Handle concurrent toggle operations
     * Priority: P1
     */
    test('should handle concurrent toggle operations', async ({ page }) => {
      await test.step('Toggle three flags simultaneously', async () => {
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'))
          .first();

        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'))
          .first();

        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'))
          .first();

        // Get initial states
        const cerberusInitial = await cerberusToggle.isChecked().catch(() => false);
        const crowdsecInitial = await crowdsecToggle.isChecked().catch(() => false);
        const uptimeInitial = await uptimeToggle.isChecked().catch(() => false);

        // Toggle all three simultaneously
        const togglePromises = [
          retryAction(async () => {
            const response = await clickSwitchAndWaitForResponse(
              page,
              cerberusToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
          retryAction(async () => {
            const response = await clickAndWaitForResponse(
              page,
              crowdsecToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
          retryAction(async () => {
            const response = await clickAndWaitForResponse(
              page,
              uptimeToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
        ];

        await Promise.all(togglePromises);

        // Verify all flags propagated correctly
        await waitForFeatureFlagPropagation(page, {
          'cerberus.enabled': !cerberusInitial,
          'crowdsec.console_enrollment': !crowdsecInitial,
          'uptime.enabled': !uptimeInitial,
        });
      });

      await test.step('Restore original states', async () => {
        // Reload to get fresh state
        await page.reload();
        await waitForLoadingComplete(page);

        // Toggle all back (they're now in opposite state)
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .first();
        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .first();
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        await Promise.all([
          clickSwitchAndWaitForResponse(page, cerberusToggle, /\/feature-flags/),
          clickSwitchAndWaitForResponse(page, crowdsecToggle, /\/feature-flags/),
          clickSwitchAndWaitForResponse(page, uptimeToggle, /\/feature-flags/),
        ]);
      });
    });

    /**
     * Test: Retry on network failure (500 error)
     * Priority: P1
     */
    test('should retry on 500 Internal Server Error', async ({ page }) => {
      let attemptCount = 0;

      await test.step('Simulate transient backend failure', async () => {
        // Intercept first PUT request and fail it
        await page.route('/api/v1/feature-flags', async (route) => {
          const request = route.request();
          if (request.method() === 'PUT') {
            attemptCount++;
            if (attemptCount === 1) {
              // First attempt: fail with 500
              await route.fulfill({
                status: 500,
                contentType: 'application/json',
                body: JSON.stringify({ error: 'Database error' }),
              });
            } else {
              // Subsequent attempts: allow through
              await route.continue();
            }
          } else {
            // Allow GET requests
            await route.continue();
          }
        });
      });

      await test.step('Toggle should succeed after retry', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        const initialState = await uptimeToggle.isChecked().catch(() => false);
        const expectedState = !initialState;

        // Should retry and succeed on second attempt
        await retryAction(async () => {
          const response = await clickAndWaitForResponse(
            page,
            uptimeToggle,
            /\/feature-flags/
          );
          expect(response.ok()).toBeTruthy();

          await waitForFeatureFlagPropagation(page, {
            'uptime.enabled': expectedState,
          });
        });

        // Verify retry was attempted
        expect(attemptCount).toBeGreaterThan(1);
      });

      await test.step('Cleanup route interception', async () => {
        await page.unroute('/api/v1/feature-flags');
      });
    });

    /**
     * Test: Fail gracefully after max retries
     * Priority: P1
     */
    test('should fail gracefully after max retries exceeded', async ({ page }) => {
      await test.step('Simulate persistent backend failure', async () => {
        // Intercept ALL requests and fail them
        await page.route('/api/v1/feature-flags', async (route) => {
          const request = route.request();
          if (request.method() === 'PUT') {
            await route.fulfill({
              status: 500,
              contentType: 'application/json',
              body: JSON.stringify({ error: 'Database error' }),
            });
          } else {
            await route.continue();
          }
        });
      });

      await test.step('Toggle should fail after 3 attempts', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        // Should throw after 3 attempts
        await expect(
          retryAction(async () => {
            await clickSwitchAndWaitForResponse(page, uptimeToggle, /\/feature-flags/);
          })
        ).rejects.toThrow(/Action failed after 3 attempts/);
      });

      await test.step('Cleanup route interception', async () => {
        await page.unroute('/api/v1/feature-flags');
      });
    });

    /**
     * Test: Initial state verification in beforeEach
     * Priority: P0
     */
    test('should verify initial feature flag state before tests', async ({ page }) => {
      await test.step('Verify expected initial state', async () => {
        // This demonstrates the pattern that should be in beforeEach
        // Verify all feature flags are in expected initial state
        const flags = await waitForFeatureFlagPropagation(page, {
          'cerberus.enabled': true, // Default: enabled
          'crowdsec.console_enrollment': false, // Default: disabled
          'uptime.enabled': false, // Default: disabled
        });

        // Verify flags object contains expected keys
        expect(flags).toHaveProperty('cerberus.enabled');
        expect(flags).toHaveProperty('crowdsec.console_enrollment');
        expect(flags).toHaveProperty('uptime.enabled');
      });
    });
  });
});
@@ -57,16 +57,8 @@
import { test, expect, loginUser } from '../fixtures/auth-fixtures';
import {
  waitForLoadingComplete,
  waitForToast,
  waitForAPIResponse,
  clickAndWaitForResponse,
  clickSwitchAndWaitForResponse,
  waitForFeatureFlagPropagation,
  retryAction,
  getAPIMetrics,
  resetAPIMetrics,
} from '../utils/wait-helpers';
import { getToastLocator, clickSwitch } from '../utils/ui-helpers';
import { getToastLocator } from '../utils/ui-helpers';

test.describe('System Settings', () => {
  test.beforeEach(async ({ page, adminUser }) => {
@@ -81,46 +73,6 @@ test.describe('System Settings', () => {
    // See: E2E Test Timeout Remediation Plan (Sprint 1, Fix 1.1)
  });

  test.afterEach(async ({ page }) => {
    await test.step('Restore default feature flag state', async () => {
      // ✅ FIX 1.1b: Explicit state restoration for test isolation
      // Ensures no state leakage between tests without polling overhead
      // See: E2E Test Timeout Remediation Plan (Sprint 1, Fix 1.1b)
      const defaultFlags = {
        'cerberus.enabled': true,
        'crowdsec.console_enrollment': false,
        'uptime.enabled': false,
      };

      // Direct API mutation to reset flags (no polling needed)
      await page.request.put('/api/v1/feature-flags', {
        data: defaultFlags,
      });
    });
  });

  test.afterAll(async () => {
    await test.step('Report API call metrics', async () => {
      // ✅ FIX 3.2: Report API call metrics for performance monitoring
      // See: E2E Test Timeout Remediation Plan (Phase 3, Fix 3.2)
      const metrics = getAPIMetrics();
      console.log('\n📊 API Call Metrics:');
      console.log(`  Feature Flag Calls: ${metrics.featureFlagCalls}`);
      console.log(`  Cache Hits: ${metrics.cacheHits}`);
      console.log(`  Cache Misses: ${metrics.cacheMisses}`);
      console.log(`  Cache Hit Rate: ${metrics.featureFlagCalls > 0 ? ((metrics.cacheHits / metrics.featureFlagCalls) * 100).toFixed(1) : 0}%`);

      // ✅ FIX 3.2: Warn when API call count exceeds threshold
      if (metrics.featureFlagCalls > 50) {
        console.warn(`⚠️ High API call count detected: ${metrics.featureFlagCalls} calls`);
        console.warn('  Consider optimizing feature flag usage or increasing cache efficiency');
      }

      // Reset metrics for next test suite
      resetAPIMetrics();
    });
  });

  test.describe('Navigation & Page Load', () => {
    /**
     * Test: System settings page loads successfully
@@ -220,465 +172,6 @@ test.describe('System Settings', () => {
    });
  });

  test.describe('Feature Toggles', () => {
    /**
     * Test: Toggle Cerberus security feature
     * Priority: P0
     */
    test('should toggle Cerberus security feature', async ({ page }) => {
      await test.step('Find Cerberus toggle', async () => {
        // Switch component has aria-label="{label} toggle" pattern
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'))
          .or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="Cerberus"]') }));

        await expect(cerberusToggle.first()).toBeVisible();
      });

      await test.step('Toggle Cerberus and verify state changes', async () => {
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'));
        const toggle = cerberusToggle.first();

        const initialState = await toggle.isChecked().catch(() => false);
        const expectedState = !initialState;

        // Use retry logic with exponential backoff
        await retryAction(async () => {
          // Click toggle and wait for PUT request
          const putResponse = await clickSwitchAndWaitForResponse(
            page,
            toggle,
            /\/feature-flags/
          );
          expect(putResponse.ok()).toBeTruthy();

          // Verify state propagated with condition-based polling
          await waitForFeatureFlagPropagation(page, {
            'cerberus.enabled': expectedState,
          });

          // Verify UI reflects the change
          const newState = await toggle.isChecked().catch(() => initialState);
          expect(newState).toBe(expectedState);
        });
      });
    });

    /**
     * Test: Toggle CrowdSec console enrollment
     * Priority: P0
     */
    test('should toggle CrowdSec console enrollment', async ({ page }) => {
      await test.step('Find CrowdSec toggle', async () => {
        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'))
          .or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="CrowdSec"]') }));

        await expect(crowdsecToggle.first()).toBeVisible();
      });

      await test.step('Toggle CrowdSec and verify state changes', async () => {
        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'));
        const toggle = crowdsecToggle.first();

        const initialState = await toggle.isChecked().catch(() => false);
        const expectedState = !initialState;

        // Use retry logic with exponential backoff
        await retryAction(async () => {
          // Click toggle and wait for PUT request
          const putResponse = await clickSwitchAndWaitForResponse(
            page,
            toggle,
            /\/feature-flags/
          );
          expect(putResponse.ok()).toBeTruthy();

          // Verify state propagated with condition-based polling
          await waitForFeatureFlagPropagation(page, {
            'crowdsec.console_enrollment': expectedState,
          });

          // Verify UI reflects the change
          const newState = await toggle.isChecked().catch(() => initialState);
          expect(newState).toBe(expectedState);
        });
      });
    });

    /**
     * Test: Toggle uptime monitoring
     * Priority: P0
     */
    test('should toggle uptime monitoring', async ({ page }) => {
      await test.step('Find Uptime toggle', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'))
          .or(page.getByRole('checkbox').filter({ has: page.locator('[aria-label*="Uptime"]') }));

        await expect(uptimeToggle.first()).toBeVisible();
      });

      await test.step('Toggle Uptime and verify state changes', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'));
        const toggle = uptimeToggle.first();

        const initialState = await toggle.isChecked().catch(() => false);
        const expectedState = !initialState;

        // Use retry logic with exponential backoff
        await retryAction(async () => {
          // Click toggle and wait for PUT request
          const putResponse = await clickAndWaitForResponse(
            page,
            toggle,
            /\/feature-flags/
          );
          expect(putResponse.ok()).toBeTruthy();

          // Verify state propagated with condition-based polling
          await waitForFeatureFlagPropagation(page, {
            'uptime.enabled': expectedState,
          });

          // Verify UI reflects the change
          const newState = await toggle.isChecked().catch(() => initialState);
          expect(newState).toBe(expectedState);
        });
      });
    });

    /**
     * Test: Persist feature toggle changes
     * Priority: P0
     */
    test('should persist feature toggle changes', async ({ page }) => {
      const uptimeToggle = page
        .getByRole('switch', { name: /uptime.*toggle/i })
        .or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'));
      const toggle = uptimeToggle.first();

      let initialState: boolean;

      await test.step('Get initial toggle state', async () => {
        await expect(toggle).toBeVisible();
        initialState = await toggle.isChecked().catch(() => false);
      });

      await test.step('Toggle the feature', async () => {
        const expectedState = !initialState;

        // Use retry logic with exponential backoff
        await retryAction(async () => {
          // Click toggle and wait for PUT request
          const putResponse = await clickAndWaitForResponse(
            page,
            toggle,
            /\/feature-flags/
          );
          expect(putResponse.ok()).toBeTruthy();

          // Verify state propagated with condition-based polling
          await waitForFeatureFlagPropagation(page, {
            'uptime.enabled': expectedState,
          });
        });
      });

      await test.step('Reload page and verify persistence', async () => {
        await page.reload();
        await waitForLoadingComplete(page);

        // Verify state persisted after reload
        await waitForFeatureFlagPropagation(page, {
          'uptime.enabled': !initialState,
        });

        const newState = await toggle.isChecked().catch(() => initialState);
        expect(newState).not.toBe(initialState);
      });

      await test.step('Restore original state', async () => {
        // Use retry logic with exponential backoff
        await retryAction(async () => {
          // Click toggle and wait for PUT request
          const putResponse = await clickAndWaitForResponse(
            page,
            toggle,
            /\/feature-flags/
          );
          expect(putResponse.ok()).toBeTruthy();

          // Verify state propagated with condition-based polling
          await waitForFeatureFlagPropagation(page, {
            'uptime.enabled': initialState,
          });
        });
      });
    });

    /**
     * Test: Show overlay during feature update
     * Priority: P1
     */
    test('should show overlay during feature update', async ({ page }) => {
      // Skip: Overlay visibility is transient and race-dependent. The ConfigReloadOverlay
      // may appear for <100ms during config reloads, making reliable E2E assertions impractical.
      // Feature toggle functionality is verified by security-dashboard toggle tests.

      const cerberusToggle = page
        .getByRole('switch', { name: /cerberus.*toggle/i })
        .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'));

      await test.step('Toggle feature and check for overlay', async () => {
        const toggle = cerberusToggle.first();
        await expect(toggle).toBeVisible();

        // Set up response waiter BEFORE clicking to catch the response
        const responsePromise = page.waitForResponse(
          r => r.url().includes('/feature-flags') && r.request().method() === 'PUT',
          { timeout: 10000 }
        ).catch(() => null);

        // Click and check for overlay simultaneously
        await clickSwitch(toggle);

        // Check if overlay or loading indicator appears
        // ConfigReloadOverlay uses Tailwind classes: "fixed inset-0 bg-slate-900/70"
        const overlay = page.locator('.fixed.inset-0.z-50').or(page.locator('[data-testid="config-reload-overlay"]'));
        const overlayVisible = await overlay.isVisible({ timeout: 1000 }).catch(() => false);

        // Overlay may appear briefly - either is acceptable
        expect(overlayVisible || true).toBeTruthy();

        // Wait for the toggle operation to complete
        await responsePromise;
      });
    });
  });

  test.describe('Feature Toggles - Advanced Scenarios (Phase 4)', () => {
    /**
     * Test: Handle concurrent toggle operations
     * Priority: P1
     */
    test('should handle concurrent toggle operations', async ({ page }) => {
      await test.step('Toggle three flags simultaneously', async () => {
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .or(page.locator('[aria-label*="Cerberus"][aria-label*="toggle"]'))
          .first();

        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .or(page.locator('[aria-label*="CrowdSec"][aria-label*="toggle"]'))
          .first();

        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .or(page.locator('[aria-label*="Uptime"][aria-label*="toggle"]'))
          .first();

        // Get initial states
        const cerberusInitial = await cerberusToggle.isChecked().catch(() => false);
        const crowdsecInitial = await crowdsecToggle.isChecked().catch(() => false);
        const uptimeInitial = await uptimeToggle.isChecked().catch(() => false);

        // Toggle all three simultaneously
        const togglePromises = [
          retryAction(async () => {
            const response = await clickSwitchAndWaitForResponse(
              page,
              cerberusToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
          retryAction(async () => {
            const response = await clickAndWaitForResponse(
              page,
              crowdsecToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
          retryAction(async () => {
            const response = await clickAndWaitForResponse(
              page,
              uptimeToggle,
              /\/feature-flags/
            );
            expect(response.ok()).toBeTruthy();
          }),
        ];

        await Promise.all(togglePromises);

        // Verify all flags propagated correctly
        await waitForFeatureFlagPropagation(page, {
          'cerberus.enabled': !cerberusInitial,
          'crowdsec.console_enrollment': !crowdsecInitial,
          'uptime.enabled': !uptimeInitial,
        });
      });

      await test.step('Restore original states', async () => {
        // Reload to get fresh state
        await page.reload();
        await waitForLoadingComplete(page);

        // Toggle all back (they're now in opposite state)
        const cerberusToggle = page
          .getByRole('switch', { name: /cerberus.*toggle/i })
          .first();
        const crowdsecToggle = page
          .getByRole('switch', { name: /crowdsec.*toggle/i })
          .first();
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        await Promise.all([
          clickSwitchAndWaitForResponse(page, cerberusToggle, /\/feature-flags/),
          clickSwitchAndWaitForResponse(page, crowdsecToggle, /\/feature-flags/),
          clickSwitchAndWaitForResponse(page, uptimeToggle, /\/feature-flags/),
        ]);
      });
    });

    /**
     * Test: Retry on network failure (500 error)
     * Priority: P1
     */
    test('should retry on 500 Internal Server Error', async ({ page }) => {
      let attemptCount = 0;

      await test.step('Simulate transient backend failure', async () => {
        // Intercept first PUT request and fail it
        await page.route('/api/v1/feature-flags', async (route) => {
          const request = route.request();
          if (request.method() === 'PUT') {
            attemptCount++;
            if (attemptCount === 1) {
              // First attempt: fail with 500
              await route.fulfill({
                status: 500,
                contentType: 'application/json',
                body: JSON.stringify({ error: 'Database error' }),
              });
            } else {
              // Subsequent attempts: allow through
              await route.continue();
            }
          } else {
            // Allow GET requests
            await route.continue();
          }
        });
      });

      await test.step('Toggle should succeed after retry', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        const initialState = await uptimeToggle.isChecked().catch(() => false);
        const expectedState = !initialState;

        // Should retry and succeed on second attempt
        await retryAction(async () => {
          const response = await clickAndWaitForResponse(
            page,
            uptimeToggle,
            /\/feature-flags/
          );
          expect(response.ok()).toBeTruthy();

          await waitForFeatureFlagPropagation(page, {
            'uptime.enabled': expectedState,
          });
        });

        // Verify retry was attempted
        expect(attemptCount).toBeGreaterThan(1);
      });

      await test.step('Cleanup route interception', async () => {
        await page.unroute('/api/v1/feature-flags');
      });
    });

    /**
     * Test: Fail gracefully after max retries
     * Priority: P1
     */
    test('should fail gracefully after max retries exceeded', async ({ page }) => {
      await test.step('Simulate persistent backend failure', async () => {
        // Intercept ALL requests and fail them
        await page.route('/api/v1/feature-flags', async (route) => {
          const request = route.request();
          if (request.method() === 'PUT') {
            await route.fulfill({
              status: 500,
              contentType: 'application/json',
              body: JSON.stringify({ error: 'Database error' }),
            });
          } else {
            await route.continue();
          }
        });
      });

      await test.step('Toggle should fail after 3 attempts', async () => {
        const uptimeToggle = page
          .getByRole('switch', { name: /uptime.*toggle/i })
          .first();

        // Should throw after 3 attempts
        await expect(
          retryAction(async () => {
            await clickSwitchAndWaitForResponse(page, uptimeToggle, /\/feature-flags/);
          })
        ).rejects.toThrow(/Action failed after 3 attempts/);
      });

      await test.step('Cleanup route interception', async () => {
        await page.unroute('/api/v1/feature-flags');
      });
    });

    /**
     * Test: Initial state verification in beforeEach
     * Priority: P0
     */
    test('should verify initial feature flag state before tests', async ({ page }) => {
      await test.step('Verify expected initial state', async () => {
        // This demonstrates the pattern that should be in beforeEach
        // Verify all feature flags are in expected initial state
        const flags = await waitForFeatureFlagPropagation(page, {
          'cerberus.enabled': true, // Default: enabled
          'crowdsec.console_enrollment': false, // Default: disabled
          'uptime.enabled': false, // Default: disabled
        });

        // Verify flags object contains expected keys
        expect(flags).toHaveProperty('cerberus.enabled');
        expect(flags).toHaveProperty('crowdsec.console_enrollment');
        expect(flags).toHaveProperty('uptime.enabled');
      });
    });
  });

  test.describe('General Configuration', () => {
    /**
     * Test: Update Caddy Admin API URL