diff --git a/.github/agents/Backend_Dev.agent.md b/.github/agents/Backend_Dev.agent.md
index bc7f1bd4..2114e2c7 100644
--- a/.github/agents/Backend_Dev.agent.md
+++ b/.github/agents/Backend_Dev.agent.md
@@ -3,7 +3,7 @@ name: 'Backend Dev'
 description: 'Senior Go Engineer focused on high-performance, secure backend implementation.'
 argument-hint: 'The specific backend task from the Plan (e.g., "Implement ProxyHost CRUD endpoints")'
 tools:
-  ['vscode/memory', 'execute', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/problems', 'read/readFile', 'agent', 'edit/createFile', 'edit/editFiles', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'todo']
+  ['vscode/memory', 'execute', 'read', 'agent', 'edit', 'search', 'todo']
 model: 'claude-opus-4-5-20250514'
 ---
 You are a SENIOR GO BACKEND ENGINEER specializing in Gin, GORM, and System Architecture.
diff --git a/.github/agents/DevOps.agent.md b/.github/agents/DevOps.agent.md
index dd180418..f04e689e 100644
--- a/.github/agents/DevOps.agent.md
+++ b/.github/agents/DevOps.agent.md
@@ -3,7 +3,7 @@ name: 'DevOps'
 description: 'DevOps specialist for CI/CD pipelines, deployment debugging, and GitOps workflows focused on making deployments boring and reliable'
 argument-hint: 'The CI/CD or infrastructure task (e.g., "Debug failing GitHub Action workflow")'
 tools:
-  ['vscode/memory', 'execute', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/problems', 'read/readFile', 'agent', 'github/*', 'github/*', 'io.github.goreleaser/mcp/*', 'edit/createFile', 'edit/editFiles', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'web', 'github/*', 'copilot-container-tools/*', 'todo']
+  ['vscode/memory', 'execute', 'read', 'agent', 'github/*', 'io.github.goreleaser/mcp/*', 'edit', 'search', 'web', 'ms-azuretools.vscode-containers/containerToolsConfig', 'todo']
 model: 'claude-opus-4-5-20250514'
 mcp-servers:
   - github
diff --git a/.github/agents/Doc_Writer.agent.md b/.github/agents/Doc_Writer.agent.md
index c9b9c695..45f4815b 100644
--- a/.github/agents/Doc_Writer.agent.md
+++ b/.github/agents/Doc_Writer.agent.md
@@ -3,7 +3,7 @@ name: 'Docs Writer'
 description: 'User Advocate and Writer focused on creating simple, layman-friendly documentation.'
 argument-hint: 'The feature to document (e.g., "Write the guide for the new Real-Time Logs")'
 tools:
-  ['vscode/memory', 'read/readFile', 'edit/createFile', 'edit/editFiles', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/searchSubagent', 'github/*', 'todo']
+  ['vscode/memory', 'read', 'edit', 'search', 'github/*', 'todo']
 model: 'claude-opus-4-5-20250514'
 mcp-servers:
   - github
diff --git a/.github/agents/Frontend_Dev.agent.md b/.github/agents/Frontend_Dev.agent.md
index 382fdee8..a8c6fc87 100644
--- a/.github/agents/Frontend_Dev.agent.md
+++ b/.github/agents/Frontend_Dev.agent.md
@@ -3,7 +3,7 @@ name: 'Frontend Dev'
 description: 'Senior React/TypeScript Engineer for frontend implementation.'
 argument-hint: 'The frontend feature or component to implement (e.g., "Implement the Real-Time Logs dashboard component")'
 tools:
-  ['vscode/openSimpleBrowser', 'vscode/vscodeAPI', 'vscode/memory', 'execute', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/problems', 'read/readFile', 'agent', 'edit/createFile', 'edit/editFiles', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'todo']
+  ['vscode', 'execute', 'read', 'agent', 'edit', 'search', 'todo']
 model: 'claude-opus-4-5-20250514'
 ---
 You are a SENIOR REACT/TYPESCRIPT ENGINEER with deep expertise in:
diff --git a/.github/agents/Managment.agent.md b/.github/agents/Managment.agent.md
index 413c39b8..676b5ded 100644
--- a/.github/agents/Managment.agent.md
+++ b/.github/agents/Managment.agent.md
@@ -3,7 +3,7 @@ name: 'Management'
 description: 'Engineering Director. Delegates ALL research and execution. DO NOT ask it to debug code directly.'
 argument-hint: 'The high-level goal (e.g., "Build the new Proxy Host Dashboard widget")'
 tools:
-  ['execute/getTerminalOutput', 'execute/runTask', 'execute/createAndRunTask', 'execute/runTests', 'execute/runNotebookCell', 'execute/testFailure', 'execute/runInTerminal', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/getNotebookSummary', 'read/problems', 'read/readFile', 'read/readNotebookCellOutput', 'agent/runSubagent', 'edit/createDirectory', 'edit/createFile', 'edit/createJupyterNotebook', 'edit/editFiles', 'edit/editNotebook', 'search/listDirectory', 'search/searchSubagent', 'todo', 'askQuestions']
+  ['vscode', 'execute', 'read', 'agent', 'github/*', 'io.github.goreleaser/mcp/*', 'playwright/*', 'trivy-mcp/*', 'edit', 'search', 'web', 'todo', 'vscode.mermaid-chat-features/renderMermaidDiagram', 'github.vscode-pull-request-github/issue_fetch', 'github.vscode-pull-request-github/suggest-fix', 'github.vscode-pull-request-github/searchSyntax', 'github.vscode-pull-request-github/doSearch', 'github.vscode-pull-request-github/renderIssues', 'github.vscode-pull-request-github/activePullRequest', 'github.vscode-pull-request-github/openPullRequest', 'ms-azuretools.vscode-containers/containerToolsConfig']
 model: 'claude-opus-4-5-20250514'
 ---
 You are the ENGINEERING DIRECTOR.
diff --git a/.github/agents/Planning.agent.md b/.github/agents/Planning.agent.md
index 813cec64..d3cfd960 100644
--- a/.github/agents/Planning.agent.md
+++ b/.github/agents/Planning.agent.md
@@ -3,7 +3,7 @@ name: 'Planning'
 description: 'Principal Architect for technical planning and design decisions.'
 argument-hint: 'The feature or system to plan (e.g., "Design the architecture for Real-Time Logs")'
 tools:
-  ['execute/getTerminalOutput', 'execute/runTask', 'execute/createAndRunTask', 'execute/runTests', 'execute/runNotebookCell', 'execute/testFailure', 'execute/runInTerminal', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/getNotebookSummary', 'read/problems', 'read/readFile', 'read/readNotebookCellOutput', 'agent/runSubagent', 'github/add_comment_to_pending_review', 'github/add_issue_comment', 'github/assign_copilot_to_issue', 'github/create_branch', 'github/create_or_update_file', 'github/create_pull_request', 'github/create_repository', 'github/delete_file', 'github/fork_repository', 'github/get_commit', 'github/get_file_contents', 'github/get_label', 'github/get_latest_release', 'github/get_me', 'github/get_release_by_tag', 'github/get_tag', 'github/get_team_members', 'github/get_teams', 'github/issue_read', 'github/issue_write', 'github/list_branches', 'github/list_commits', 'github/list_issue_types', 'github/list_issues', 'github/list_pull_requests', 'github/list_releases', 'github/list_tags', 'github/merge_pull_request', 'github/pull_request_read', 'github/pull_request_review_write', 'github/push_files', 'github/request_copilot_review', 'github/search_code', 'github/search_issues', 'github/search_pull_requests', 'github/search_repositories', 'github/search_users', 'github/sub_issue_write', 'github/update_pull_request', 'github/update_pull_request_branch', 'github/add_comment_to_pending_review', 'github/add_issue_comment', 'github/assign_copilot_to_issue', 'github/create_branch', 'github/create_or_update_file', 'github/create_pull_request', 'github/create_repository', 'github/delete_file', 'github/fork_repository', 'github/get_commit', 'github/get_file_contents', 'github/get_label', 'github/get_latest_release', 'github/get_me', 'github/get_release_by_tag', 'github/get_tag', 'github/get_team_members', 'github/get_teams', 'github/issue_read', 'github/issue_write', 'github/list_branches', 'github/list_commits', 'github/list_issue_types', 'github/list_issues', 'github/list_pull_requests', 'github/list_releases', 'github/list_tags', 'github/merge_pull_request', 'github/pull_request_read', 'github/pull_request_review_write', 'github/push_files', 'github/request_copilot_review', 'github/search_code', 'github/search_issues', 'github/search_pull_requests', 'github/search_repositories', 'github/search_users', 'github/sub_issue_write', 'github/update_pull_request', 'github/update_pull_request_branch', 'edit/createDirectory', 'edit/createFile', 'edit/editFiles', 'edit/editNotebook', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'web/fetch', 'web/githubRepo', 'github/add_comment_to_pending_review', 'github/add_issue_comment', 'github/assign_copilot_to_issue', 'github/create_branch', 'github/create_or_update_file', 'github/create_pull_request', 'github/create_repository', 'github/delete_file', 'github/fork_repository', 'github/get_commit', 'github/get_file_contents', 'github/get_label', 'github/get_latest_release', 'github/get_me', 'github/get_release_by_tag', 'github/get_tag', 'github/get_team_members', 'github/get_teams', 'github/issue_read', 'github/issue_write', 'github/list_branches', 'github/list_commits', 'github/list_issue_types', 'github/list_issues', 'github/list_pull_requests', 'github/list_releases', 'github/list_tags', 'github/merge_pull_request', 'github/pull_request_read', 'github/pull_request_review_write', 'github/push_files', 'github/request_copilot_review', 'github/search_code', 'github/search_issues', 'github/search_pull_requests', 'github/search_repositories', 'github/search_users', 'github/sub_issue_write', 'github/update_pull_request', 'github/update_pull_request_branch', 'todo', 'askQuestions']
+  ['execute', 'read', 'agent', 'github/*', 'edit', 'search', 'web', 'todo']
 model: 'claude-opus-4-5-20250514'
 mcp-servers:
   - github
diff --git a/.github/agents/QA_Security.agent.md b/.github/agents/QA_Security.agent.md
index 3844ce4d..11f6f40c 100644
--- a/.github/agents/QA_Security.agent.md
+++ b/.github/agents/QA_Security.agent.md
@@ -3,7 +3,7 @@ name: 'QA Security'
 description: 'Quality Assurance and Security Engineer for testing and vulnerability assessment.'
 argument-hint: 'The component or feature to test (e.g., "Run security scan on authentication endpoints")'
 tools:
-  ['vscode/memory', 'execute', 'read/terminalSelection', 'read/terminalLastCommand', 'read/getTaskOutput', 'read/problems', 'read/readFile', 'agent', 'playwright/*', 'trivy-mcp/*', 'edit/createFile', 'edit/editFiles', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'todo']
+  ['vscode/memory', 'execute', 'read', 'agent', 'playwright/*', 'trivy-mcp/*', 'edit', 'search', 'todo']
 model: 'claude-opus-4-5-20250514'
 mcp-servers:
   - trivy-mcp
diff --git a/.github/agents/Supervisor.agent.md b/.github/agents/Supervisor.agent.md
index 42598268..69838064 100644
--- a/.github/agents/Supervisor.agent.md
+++ b/.github/agents/Supervisor.agent.md
@@ -3,7 +3,7 @@ name: 'Supervisor'
 description: 'Code Review Lead for quality assurance and PR review.'
 argument-hint: 'The PR or code change to review (e.g., "Review PR #123 for security issues")'
 tools:
-  ['vscode/memory', 'execute', 'read/terminalSelection', 'read/terminalLastCommand', 'read/problems', 'read/readFile', 'search/changes', 'search/codebase', 'search/fileSearch', 'search/listDirectory', 'search/textSearch', 'search/usages', 'search/searchSubagent', 'web', 'github/*', 'todo']
+  ['vscode/memory', 'execute', 'read', 'search', 'web', 'github/*', 'todo']
 model: 'claude-opus-4-5-20250514'
 mcp-servers:
   - github
diff --git a/.github/workflows/playwright.yml b/.github/workflows/playwright.yml
index eeec0823..f0a705b9 100644
--- a/.github/workflows/playwright.yml
+++ b/.github/workflows/playwright.yml
@@ -14,13 +14,13 @@ on:
       - 'tests/**'
       - 'playwright.config.js'
       - '.github/workflows/playwright.yml'
-  
+
   pull_request:
     branches:
       - main
       - development
       - 'feature/**'
-  
+
   workflow_run:
     workflows: ["Docker Build, Publish & Test"]
    types:
diff --git a/.vscode/mcp.json b/.vscode/mcp.json
index 496ea175..4f600da4 100644
--- a/.vscode/mcp.json
+++ b/.vscode/mcp.json
@@ -11,4 +11,4 @@
     }
   },
   "inputs": []
-}
\ No newline at end of file
+}
diff --git a/backend/internal/api/handlers/import_handler_test.go b/backend/internal/api/handlers/import_handler_test.go
index 59bbbef2..b1d4b721 100644
--- a/backend/internal/api/handlers/import_handler_test.go
+++ b/backend/internal/api/handlers/import_handler_test.go
@@ -1091,3 +1091,95 @@ func TestImportHandler_Commit_CreateFailure(t *testing.T) {
 	// Verify the error mentions the duplicate
 	assert.Contains(t, errors[0].(string), "duplicate.com")
 }
+
+// TestUpload_NormalizationSuccess tests the success path where NormalizeCaddyfile succeeds (line 271)
+func TestUpload_NormalizationSuccess(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+	db := setupImportTestDB(t)
+
+	// Use fake caddy script that handles both fmt and adapt
+	cwd, _ := os.Getwd()
+	fakeCaddy := filepath.Join(cwd, "testdata", "fake_caddy_fmt_success.sh")
+	_ = os.Chmod(fakeCaddy, 0o755) //nolint:gosec // G302: test script needs exec permissions
+
+	tmpDir := t.TempDir()
+	handler := handlers.NewImportHandler(db, fakeCaddy, tmpDir, "")
+	router := gin.New()
+	router.POST("/import/upload", handler.Upload)
+
+	// Use single-line Caddyfile format (triggers normalization)
+	singleLineCaddyfile := `test.local { reverse_proxy localhost:3000 }`
+
+	payload := map[string]string{
+		"content":  singleLineCaddyfile,
+		"filename": "Caddyfile",
+	}
+	body, _ := json.Marshal(payload)
+
+	w := httptest.NewRecorder()
+	req, _ := http.NewRequest("POST", "/import/upload", bytes.NewBuffer(body))
+	req.Header.Set("Content-Type", "application/json")
+	router.ServeHTTP(w, req)
+
+	// Should succeed with 200 (normalization worked)
+	assert.Equal(t, http.StatusOK, w.Code)
+
+	// Verify response contains hosts (parsing succeeded)
+	var response map[string]any
+	err := json.Unmarshal(w.Body.Bytes(), &response)
+	assert.NoError(t, err)
+
+	// Verify preview contains hosts
+	preview, ok := response["preview"].(map[string]any)
+	assert.True(t, ok, "response should contain preview")
+	hosts, ok := preview["hosts"].([]any)
+	assert.True(t, ok, "preview should contain hosts")
+	assert.Greater(t, len(hosts), 0, "should have at least one parsed host")
+}
+
+// TestUpload_NormalizationFallback tests the fallback path where NormalizeCaddyfile fails (line 269)
+func TestUpload_NormalizationFallback(t *testing.T) {
+	gin.SetMode(gin.TestMode)
+	db := setupImportTestDB(t)
+
+	// Use fake caddy script that fails fmt but succeeds on adapt
+	cwd, _ := os.Getwd()
+	fakeCaddy := filepath.Join(cwd, "testdata", "fake_caddy_fmt_fail.sh")
+	_ = os.Chmod(fakeCaddy, 0o755) //nolint:gosec // G302: test script needs exec permissions
+
+	tmpDir := t.TempDir()
+	handler := handlers.NewImportHandler(db, fakeCaddy, tmpDir, "")
+	router := gin.New()
+	router.POST("/import/upload", handler.Upload)
+
+	// Valid Caddyfile that would parse successfully (even if normalization fails)
+	caddyfile := `test.local {
+	reverse_proxy localhost:3000
+}`
+
+	payload := map[string]string{
+		"content":  caddyfile,
+		"filename": "Caddyfile",
+	}
+	body, _ := json.Marshal(payload)
+
+	w := httptest.NewRecorder()
+	req, _ := http.NewRequest("POST", "/import/upload", bytes.NewBuffer(body))
+	req.Header.Set("Content-Type", "application/json")
+	router.ServeHTTP(w, req)
+
+	// Should still succeed (falls back to original content)
+	assert.Equal(t, http.StatusOK, w.Code)
+
+	// Verify hosts were parsed from original content
+	var response map[string]any
+	err := json.Unmarshal(w.Body.Bytes(), &response)
+	assert.NoError(t, err)
+
+	// Verify preview contains hosts
+	preview, ok := response["preview"].(map[string]any)
+	assert.True(t, ok, "response should contain preview")
+	hosts, ok := preview["hosts"].([]any)
+	assert.True(t, ok, "preview should contain hosts")
+	assert.Greater(t, len(hosts), 0, "should have at least one parsed host from original content")
+}
diff --git a/backend/internal/api/handlers/testdata/fake_caddy_fmt_fail.sh b/backend/internal/api/handlers/testdata/fake_caddy_fmt_fail.sh
new file mode 100755
index 00000000..dec0b9e1
--- /dev/null
+++ b/backend/internal/api/handlers/testdata/fake_caddy_fmt_fail.sh
@@ -0,0 +1,25 @@
+#!/bin/sh
+# Fake caddy that fails fmt but succeeds on adapt (for testing normalization fallback)
+
+if [ "$1" = "version" ]; then
+  echo "v2.0.0"
+  exit 0
+fi
+
+if [ "$1" = "fmt" ]; then
+  # Simulate fmt failure
+  echo "Error: fmt failed" >&2
+  exit 1
+fi
+
+if [ "$1" = "adapt" ]; then
+  DOMAIN="example.com"
+  if [ "$2" = "--config" ]; then
+    # Read domain from first line of file
+    DOMAIN=$(head -1 "$3" | awk '{print $1}')
+  fi
+  echo "{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"routes\":[{\"match\":[{\"host\":[\"$DOMAIN\"]}],\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"localhost:8080\"}]}]}]}}}}}"
+  exit 0
+fi
+
+exit 1
diff --git a/backend/internal/api/handlers/testdata/fake_caddy_fmt_success.sh b/backend/internal/api/handlers/testdata/fake_caddy_fmt_success.sh
new file mode 100755
index 00000000..13db20f5
--- /dev/null
+++ b/backend/internal/api/handlers/testdata/fake_caddy_fmt_success.sh
@@ -0,0 +1,35 @@
+#!/bin/sh
+# Fake caddy that handles fmt (formats single-line to multi-line) and adapt
+
+if [ "$1" = "version" ]; then
+  echo "v2.0.0"
+  exit 0
+fi
+
+if [ "$1" = "fmt" ] && [ "$2" = "--overwrite" ]; then
+  # Read the file content
+  CONTENT=$(cat "$3")
+  # Check if it looks like a single-line Caddyfile
+  if echo "$CONTENT" | grep -q '{ .* }$'; then
+    # Simulate formatting: write formatted content back to the file
+    DOMAIN=$(echo "$CONTENT" | sed 's/ {.*//')
+    cat > "$3" << EOF
+${DOMAIN} {
+    reverse_proxy localhost:8080
+}
+EOF
+  fi
+  exit 0
+fi
+
+if [ "$1" = "adapt" ]; then
+  DOMAIN="example.com"
+  if [ "$2" = "--config" ]; then
+    # Read domain from first line of file
+    DOMAIN=$(head -1 "$3" | awk '{print $1}')
+  fi
+  echo "{\"apps\":{\"http\":{\"servers\":{\"srv0\":{\"routes\":[{\"match\":[{\"host\":[\"$DOMAIN\"]}],\"handle\":[{\"handler\":\"reverse_proxy\",\"upstreams\":[{\"dial\":\"localhost:8080\"}]}]}]}}}}}"
+  exit 0
+fi
+
+exit 1
diff --git a/backend/internal/caddy/importer_test.go b/backend/internal/caddy/importer_test.go
index cb64d05e..54cb906e 100644
--- a/backend/internal/caddy/importer_test.go
+++ b/backend/internal/caddy/importer_test.go
@@ -26,11 +26,15 @@ func TestImporter_ParseCaddyfile_NotFound(t *testing.T) {
 }
 
 type MockExecutor struct {
-	Output []byte
-	Err    error
+	Output      []byte
+	Err         error
+	ExecuteFunc func(name string, args ...string) ([]byte, error) // Custom execution logic
 }
 
 func (m *MockExecutor) Execute(name string, args ...string) ([]byte, error) {
+	if m.ExecuteFunc != nil {
+		return m.ExecuteFunc(name, args...)
+	}
 	return m.Output, m.Err
 }
 
@@ -437,3 +441,43 @@ func TestImporter_NormalizeCaddyfile_Integration(t *testing.T) {
 		})
 	}
 }
+
+// TestDefaultExecutor_Execute_Timeout verifies the 5-second timeout triggers correctly
+func TestDefaultExecutor_Execute_Timeout(t *testing.T) {
+	executor := &DefaultExecutor{}
+
+	// Use "sleep 10" to trigger the 5-second timeout
+	output, err := executor.Execute("sleep", "10")
+
+	// Error must be returned
+	assert.Error(t, err)
+	// Error message must contain the timeout message
+	assert.Contains(t, err.Error(), "command timed out after 5 seconds")
+	assert.Contains(t, err.Error(), "sleep")
+	// Output may be empty or partial
+	_ = output
+}
+
+// TestImporter_NormalizeCaddyfile_ReadError tests the error path when reading the formatted file fails
+func TestImporter_NormalizeCaddyfile_ReadError(t *testing.T) {
+	importer := NewImporter("caddy")
+
+	// Mock executor that succeeds but deletes the temp file before returning
+	// This simulates the file being removed after caddy fmt writes it
+	mockExecutor := &MockExecutor{
+		ExecuteFunc: func(name string, args ...string) ([]byte, error) {
+			// The temp file path is the last argument (caddy fmt --overwrite <file>)
+			if len(args) >= 3 && args[0] == "fmt" && args[1] == "--overwrite" {
+				// Delete the temp file to trigger ReadFile error
+				_ = os.Remove(args[2])
+			}
+			return []byte{}, nil
+		},
+	}
+	importer.executor = mockExecutor
+
+	_, err := importer.NormalizeCaddyfile("test.local { reverse_proxy localhost:8080 }")
+
+	assert.Error(t, err)
+	assert.Contains(t, err.Error(), "failed to read formatted file")
+}
diff --git a/docs/plans/current_spec.md b/docs/plans/current_spec.md
index 6f4b388d..b1fa3d71 100644
--- a/docs/plans/current_spec.md
+++ b/docs/plans/current_spec.md
@@ -1,5 +1,10 @@
 # Caddy Import E2E Test Plan - Gap Coverage
+# Caddy Import E2E Test Plan - Gap Coverage
+**Created**: 2026-01-30
+**Status**: Active
+**Target File**: `tests/tasks/caddy-import-gaps.spec.ts`
+**Related**: `tests/tasks/caddy-import-debug.spec.ts`, `tests/tasks/import-caddyfile.spec.ts`
 **Created**: 2026-01-30
 **Status**: Active
 **Target File**: `tests/tasks/caddy-import-gaps.spec.ts`
@@ -112,6 +117,115 @@ This plan addresses 5 identified gaps in Caddy Import E2E test coverage. Tests w
 - `page.url()` matches `/` or `/dashboard`
 - Success modal is no longer visible
 
+**Selectors**:
+| Element | Selector |
+|---------|----------|
+| Dashboard Button | `button:has-text("Go to Dashboard")` |
+## Overview
+
+This plan addresses 5 identified gaps in Caddy Import E2E test coverage. Tests will follow established patterns from existing test files, using:
+- Stored auth state (no `loginUser()` calls needed)
+- Response waiters registered BEFORE click actions
+- Real API calls (no mocking) for reliable integration testing
+- **TestDataManager fixture** from `auth-fixtures.ts` for automatic resource cleanup and namespace isolation
+- **Relative paths** with the `request` fixture (baseURL pre-configured)
+- **Automatic namespacing** via TestDataManager to prevent parallel execution conflicts
+
+---
+
+## Gap 1: Success Modal Navigation
+
+**Priority**: 🔴 CRITICAL
+**Complexity**: Medium
+
+### Test Case 1.1: Success modal appears after commit
+
+**Title**: `should display success modal after successful import commit`
+
+**Prerequisites**:
+- Container running with healthy API
+- No pending import session
+
+**Setup (API)**:
+```typescript
+// TestDataManager handles cleanup automatically
+// No explicit setup needed - clean state guaranteed by fixture
+```
+
+**Steps**:
+1. Navigate to `/tasks/import/caddyfile`
+2. Paste valid Caddyfile content:
+   ```
+   success-modal-test.example.com {
+       reverse_proxy localhost:3000
+   }
+   ```
+3. Register response waiter for `/api/v1/import/upload`
+4. Click "Parse and Review" button
+5. Wait for review table to appear
+6. Register response waiter for `/api/v1/import/commit`
+7. Click "Commit Import" button
+8. Wait for commit response
+
+**Assertions**:
+- `[data-testid="import-success-modal"]` is visible
+- Modal contains text "Import Completed"
+- Modal shows "1 host created" or similar count
+
+**Selectors**:
+| Element | Selector |
+|---------|----------|
+| Success Modal | `[data-testid="import-success-modal"]` |
+| Commit Button | `page.getByRole('button', { name: /commit/i })` |
+| Modal Header | `page.getByTestId('import-success-modal').locator('h2')` |
+
+---
+
+### Test Case 1.2: "View Proxy Hosts" button navigation
+
+**Title**: `should navigate to /proxy-hosts when clicking View Proxy Hosts button`
+
+**Prerequisites**:
+- Success modal visible (chain from 1.1 or re-setup)
+
+**Setup (API)**:
+```typescript
+// TestDataManager provides automatic cleanup
+// Use helper function to complete import flow
+```
+
+**Steps**:
+1. Complete import flow (reuse helper or inline steps from 1.1)
+2. Wait for success modal to appear
+3. Click "View Proxy Hosts" button
+
+**Assertions**:
+- `page.url()` ends with `/proxy-hosts`
+- Success modal is no longer visible
+
+**Selectors**:
+| Element | Selector |
+|---------|----------|
+| View Proxy Hosts Button | `button:has-text("View Proxy Hosts")` |
+
+---
+
+### Test Case 1.3: "Go to Dashboard" button navigation
+
+**Title**: `should navigate to /dashboard when clicking Go to Dashboard button`
+
+**Prerequisites**:
+- Success modal visible
+
+**Steps**:
+1. Complete import flow
+2. Wait for success modal to appear
+3. Click "Go to Dashboard" button
+
+**Assertions**:
+- `page.url()` matches `/` or `/dashboard`
+- Success modal is no longer visible
+
 **Selectors**:
 | Element | Selector |
 |---------|----------|
@@ -550,6 +664,81 @@ async function completeImportFlow(
 //   enabled: false,
 // });
 
+// Note: TestDataManager handles cleanup automatically
+// No manual cleanup helper needed
+```
+## Complexity Summary
+
+| Test Case | Complexity | API Setup Required | Cleanup Required |
+|-----------|------------|-------------------|------------------|
+| 1.1 Success modal appears | Medium | None (TestDataManager) | Automatic |
+| 1.2 View Proxy Hosts nav | Medium | None (TestDataManager) | Automatic |
+| 1.3 Dashboard nav | Medium | None (TestDataManager) | Automatic |
+| 1.4 Close button | Medium | None (TestDataManager) | Automatic |
+| 2.1 Conflict indicator | Complex | Create host via testData | Automatic |
+| 2.2 Side-by-side expand | Complex | Create host via testData | Automatic |
+| 2.3 Recommendation text | Complex | Create host via testData | Automatic |
+| 3.1 Overwrite resolution | Complex | Create host via testData | Automatic |
+| 4.1 Banner appears | Medium | None | Automatic |
+| 4.2 Review Changes click | Medium | None | Automatic |
+| 5.1 Custom name commit | Simple | None | Automatic |
+
+**Total**: 11 test cases
+**Estimated Implementation Time**: 6-8 hours
+
+**Rationale for Time Increase**:
+- TestDataManager integration requires understanding fixture patterns
+- Row-scoped locator strategies more complex than simple testids
+- Parallel execution validation with namespacing
+- Additional validation for automatic cleanup
+
+---
+
+## Helper Functions to Create
+
+```typescript
+import type { Page } from '@playwright/test';
+import type { TestDataManager } from '../fixtures/auth-fixtures';
+
+// Helper to complete import and return to success modal
+// Uses TestDataManager for automatic cleanup
+async function completeImportFlow(
+  page: Page,
+  testData: TestDataManager,
+  caddyfile: string
+): Promise<void> {
+  await page.goto('/tasks/import/caddyfile');
+  await page.locator('textarea').fill(caddyfile);
+
+  const uploadPromise = page.waitForResponse(r =>
+    r.url().includes('/api/v1/import/upload') && r.status() === 200
+  );
+  await page.getByRole('button', { name: /parse|review/i }).click();
+  await uploadPromise;
+
+  await expect(page.getByTestId('import-review-table')).toBeVisible();
+
+  const commitPromise = page.waitForResponse(r =>
+    r.url().includes('/api/v1/import/commit') && r.status() === 200
+  );
+  await page.getByRole('button', { name: /commit/i }).click();
+  await commitPromise;
+
+  await expect(page.getByTestId('import-success-modal')).toBeVisible();
+}
+
+// Note: TestDataManager already provides createProxyHost() method
+// No need for standalone helper - use testData.createProxyHost() directly
+// Example:
+// const hostId = await testData.createProxyHost({
+//   name: 'Test Host',
+//   domain_names: [testData.generateDomain('test')],
+//   forward_scheme: 'http',
+//   forward_host: 'localhost',
+//   forward_port: 8080,
+//   enabled: false,
+// });
+
 // Note: TestDataManager handles cleanup automatically
 // No manual cleanup helper needed
 ```
@@ -558,6 +747,16 @@ async function completeImportFlow(
 
 ## Acceptance Criteria
 
+- [ ] All 11 test cases pass consistently (no flakiness)
+- [ ] Tests use stored auth state (no login calls)
+- [ ] Response waiters registered before click actions
+- [ ] **All resources cleaned up automatically via TestDataManager fixtures**
+- [ ] **Tests use `testData` fixture from `auth-fixtures.ts`**
+- [ ] **No hardcoded domains (use TestDataManager's namespacing)**
+- [ ] **Selectors use row-scoped patterns (filter by domain, then find within row)**
+- [ ] **Relative paths in API calls (no `${baseURL}` interpolation)**
+- [ ] Tests can run in parallel within their describe blocks
+- [ ] Total test runtime < 60 seconds
 - [ ] All 11 test cases pass consistently (no flakiness)
 - [ ] Tests use stored auth state (no login calls)
 - [ ] Response waiters registered before click actions
@@ -589,3 +788,21 @@ async function completeImportFlow(
 ## ARCHIVED: Previous Spec
 
 The GoReleaser v2 Migration spec previously in this file has been archived to `docs/plans/archived/goreleaser_v2_migration.md`.
+## Requirements (EARS Notation)
+
+1. WHEN the import commit succeeds, THE SYSTEM SHALL display the success modal with created/updated/skipped counts.
+2. WHEN clicking "View Proxy Hosts" in the success modal, THE SYSTEM SHALL navigate to `/proxy-hosts`.
+3. WHEN clicking "Go to Dashboard" in the success modal, THE SYSTEM SHALL navigate to the dashboard (`/`).
+4. WHEN clicking "Close" in the success modal, THE SYSTEM SHALL close the modal and remain on the import page.
+5. WHEN importing a Caddyfile with a domain that already exists, THE SYSTEM SHALL display a conflict indicator.
+6. WHEN expanding a conflict row, THE SYSTEM SHALL show side-by-side comparison of current vs imported configuration.
+7. WHEN selecting "Replace with Imported" resolution and committing, THE SYSTEM SHALL update the existing host.
+8. WHEN a pending import session exists, THE SYSTEM SHALL display a yellow banner with "Review Changes" button.
+9. WHEN clicking "Review Changes" on the session banner, THE SYSTEM SHALL restore the review table with previous content.
+10. WHEN editing the name field in the review table, THE SYSTEM SHALL use that custom name when creating the proxy host.
+
+---
+
+## ARCHIVED: Previous Spec
+
+The GoReleaser v2 Migration spec previously in this file has been archived to `docs/plans/archived/goreleaser_v2_migration.md`.
diff --git a/docs/plans/pr583_patch_coverage_spec.md b/docs/plans/pr583_patch_coverage_spec.md
new file mode 100644
index 00000000..cb7d94bd
--- /dev/null
+++ b/docs/plans/pr583_patch_coverage_spec.md
@@ -0,0 +1,306 @@
+# Codecov Patch Coverage Gap Analysis - PR #583
+
+## Executive Summary
+
+PR #583 introduced Caddyfile normalization functionality. Codecov reports two files with missing PATCH coverage:
+
+| File | Patch Coverage | Missing Lines | Partial Lines |
+|------|---------------|---------------|---------------|
+| `backend/internal/caddy/importer.go` | 56.52% | 5 | 5 |
+| `backend/internal/api/handlers/import_handler.go` | 0% | 6 | 0 |
+
+**Goal:** Achieve 100% patch coverage on the 16 changed lines.
+
+---
+
+## Detailed Line Analysis
+
+### 1. `importer.go` - Lines Needing Coverage
+
+**Changed Code Block (Lines 27-40): `DefaultExecutor.Execute()` with Timeout**
+
+```go
+func (e *DefaultExecutor) Execute(name string, args ...string) ([]byte, error) {
+	// Set a reasonable timeout for Caddy commands (5 seconds should be plenty for fmt/adapt)
+	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+	defer cancel()
+
+	cmd := exec.CommandContext(ctx, name, args...)
+	output, err := cmd.CombinedOutput()
+
+	// If context timed out, return a clear error message
+	if ctx.Err() == context.DeadlineExceeded { // ❌ PARTIAL (lines 36-38)
+		return output, fmt.Errorf("command timed out after 5 seconds (caddy binary may be unavailable or misconfigured): %s %v", name, args)
+	}
+
+	return output, err
+}
+```
+
+**Missing/Partial Coverage:**
+- Line 36-38: Timeout error branch (`DeadlineExceeded`) - never exercised
+
+**Changed Code Block (Lines 135-140): `NormalizeCaddyfile()` error paths**
+
+```go
+if err := tmpFile.Close(); err != nil { // ❌ MISSING
+	return "", fmt.Errorf("failed to close temp file: %w", err)
+}
+```
+
+And:
+```go
+formatted, err := os.ReadFile(tmpFile.Name())
+if err != nil { // ❌ MISSING (line ~152)
+	return "", fmt.Errorf("failed to read formatted file: %w", err)
+}
+```
+
+**Missing Coverage:**
+- `tmpFile.Close()` error path
+- `os.ReadFile()` error path after `caddy fmt`
+
+---
+
+### 2. `import_handler.go` - Lines Needing Coverage
+
+**Changed Code Block (Lines 264-275): Normalization in `Upload()`**
+
+```go
+// Normalize Caddyfile format before saving (handles single-line format)
+normalizedContent := req.Content // Line 265
+if normalized, err := h.importerservice.NormalizeCaddyfile(req.Content); err != nil {
+	// If normalization fails, log warning but continue with original content
+	middleware.GetRequestLogger(c).WithError(err).Warn("Import Upload: Caddyfile normalization failed, using original content") // ❌ MISSING (line 269)
+} else {
+	normalizedContent = normalized // ❌ MISSING (line 271)
+}
+```
+
+**Missing Coverage:**
+- Line 269: Normalization error path (logs warning, uses original content)
+- Line 271: Normalization success path (uses normalized content)
+- Line 289: `normalizedContent` used in `WriteFile` - implicitly tested if either path above is covered
+
+---
+
+## Test Plan
+
+### Test File: `backend/internal/caddy/importer_test.go`
+
+#### Test Case 1: Timeout Branch in DefaultExecutor
+ +**Purpose:** Cover lines 36-38 (timeout error path) + +```go +func TestDefaultExecutor_Execute_Timeout(t *testing.T) { + executor := &DefaultExecutor{} + + // Use "sleep 10" to trigger 5s timeout + output, err := executor.Execute("sleep", "10") + + assert.Error(t, err) + assert.Contains(t, err.Error(), "command timed out after 5 seconds") + assert.Contains(t, err.Error(), "sleep") +} +``` + +**Mock Requirements:** None (uses real exec) +**Expected Assertions:** +- Error is returned +- Error message contains "command timed out" +- Output may be empty or partial + +#### Test Case 2: NormalizeCaddyfile - File Read Error After Format + +**Purpose:** Cover line ~152 (ReadFile error after caddy fmt) + +```go +func TestImporter_NormalizeCaddyfile_ReadError(t *testing.T) { + importer := NewImporter("caddy") + + // Mock executor that succeeds but deletes the temp file + mockExecutor := &MockExecutor{ + ExecuteFunc: func(name string, args ...string) ([]byte, error) { + // Delete the temp file before returning + // This simulates a race or permission issue + if len(args) > 2 { + os.Remove(args[2]) // Remove the temp file path + } + return []byte{}, nil + }, + } + importer.executor = mockExecutor + + _, err := importer.NormalizeCaddyfile("test.local { reverse_proxy localhost:8080 }") + + assert.Error(t, err) + assert.Contains(t, err.Error(), "failed to read formatted file") +} +``` + +**Mock Requirements:** Custom MockExecutor with ExecuteFunc field +**Expected Assertions:** +- Error returned when file can't be read after formatting +- Error message contains "failed to read formatted file" + +--- + +### Test File: `backend/internal/api/handlers/import_handler_test.go` + +#### Test Case 3: Upload - Normalization Success Path + +**Purpose:** Cover line 271 (success path where `normalizedContent` is assigned) + +```go +func TestUpload_NormalizationSuccess(t *testing.T) { + db := setupImportTestDB(t) + handler := setupImportHandler(t, db) + + // Use single-line Caddyfile format 
(triggers normalization) + singleLineCaddyfile := `test.local { reverse_proxy localhost:3000 }` + + req := ImportUploadRequest{ + Content: singleLineCaddyfile, + Filename: "Caddyfile", + } + body, _ := json.Marshal(req) + + w := httptest.NewRecorder() + c, _ := gin.CreateTestContext(w) + c.Request = httptest.NewRequest("POST", "/api/v1/import/upload", bytes.NewReader(body)) + c.Request.Header.Set("Content-Type", "application/json") + + handler.Upload(c) + + // Should succeed with 200 (normalization worked) + assert.Equal(t, http.StatusOK, w.Code) + + // Verify response contains hosts (parsing succeeded) + var response ImportPreview + json.Unmarshal(w.Body.Bytes(), &response) + assert.NotNil(t, response.Preview) +} +``` + +**Mock Requirements:** None (uses real Caddy binary if available, else mock executor) +**Expected Assertions:** +- HTTP 200 response +- Preview contains parsed hosts + +#### Test Case 4: Upload - Normalization Failure Path (Falls Back to Original) + +**Purpose:** Cover line 269 (error path where normalization fails but upload continues) + +```go +func TestUpload_NormalizationFallback(t *testing.T) { + db := setupImportTestDB(t) + + // Create handler with mock importer that fails normalization + mockImporter := &MockImporterService{ + NormalizeCaddyfileFunc: func(content string) (string, error) { + return "", errors.New("caddy fmt failed") + }, + // ... 
other methods return normal values + } + handler := NewImportHandler(db, mockImporter, "/tmp/import") + + // Valid Caddyfile that would parse successfully + caddyfile := `test.local { + reverse_proxy localhost:3000 +}` + + req := ImportUploadRequest{ + Content: caddyfile, + Filename: "Caddyfile", + } + body, _ := json.Marshal(req) + + w := httptest.NewRecorder() + c, _ := gin.CreateTestContext(w) + c.Request = httptest.NewRequest("POST", "/api/v1/import/upload", bytes.NewReader(body)) + c.Request.Header.Set("Content-Type", "application/json") + + handler.Upload(c) + + // Should still succeed (falls back to original content) + assert.Equal(t, http.StatusOK, w.Code) + + // Verify hosts were parsed from original content + var response ImportPreview + json.Unmarshal(w.Body.Bytes(), &response) + assert.Greater(t, len(response.Preview.Hosts), 0) +} +``` + +**Mock Requirements:** +- `MockImporterService` interface with controllable `NormalizeCaddyfile` behavior +- Alternatively, inject a mock executor that returns error for `caddy fmt` + +**Expected Assertions:** +- HTTP 200 response (graceful fallback) +- Hosts parsed from original (non-normalized) content + +--- + +## Implementation Checklist + +- [ ] Add `TestDefaultExecutor_Execute_Timeout` to `importer_test.go` +- [ ] Add `TestImporter_NormalizeCaddyfile_ReadError` to `importer_test.go` (requires MockExecutor enhancement) +- [ ] Add `TestUpload_NormalizationSuccess` to `import_handler_test.go` +- [ ] Add `TestUpload_NormalizationFallback` to `import_handler_test.go` (requires mock importer interface) + +## Mock Interface Requirements + +### Option A: Enhance MockExecutor with Function Field + +```go +type MockExecutor struct { + Output []byte + Err error + ExecuteFunc func(name string, args ...string) ([]byte, error) // NEW +} + +func (m *MockExecutor) Execute(name string, args ...string) ([]byte, error) { + if m.ExecuteFunc != nil { + return m.ExecuteFunc(name, args...) 
+ } + return m.Output, m.Err +} +``` + +### Option B: Create MockImporterService for Handler Tests + +```go +type MockImporterService struct { + NormalizeCaddyfileFunc func(content string) (string, error) + ParseCaddyfileFunc func(path string) ([]byte, error) + // ... other methods +} + +func (m *MockImporterService) NormalizeCaddyfile(content string) (string, error) { + return m.NormalizeCaddyfileFunc(content) +} +``` + +--- + +## Complexity Estimate + +| Test Case | Complexity | Notes | +|-----------|------------|-------| +| Timeout test | Low | Uses real `sleep` command | +| ReadError test | Medium | Requires MockExecutor enhancement | +| Normalization success | Low | May already be covered by existing tests | +| Normalization fallback | Medium | Requires mock importer interface | + +**Total Effort:** ~2 hours + +--- + +## Acceptance Criteria + +1. All 4 test cases pass locally +2. Codecov patch coverage for both files reaches 100% +3. No new regressions in existing test suite +4. Tests are deterministic (no flaky timeouts) diff --git a/docs/reports/qa_docker_only_build_fix_report.md b/docs/reports/qa_docker_only_build_fix_report.md index 2b311b56..fca420e1 100644 --- a/docs/reports/qa_docker_only_build_fix_report.md +++ b/docs/reports/qa_docker_only_build_fix_report.md @@ -1,7 +1,7 @@ # QA Security Validation Report: Docker-Only Build Fix -**Date:** 2026-01-30 -**Agent:** QA_Security +**Date:** 2026-01-30 +**Agent:** QA_Security **Target Files:** - `.goreleaser.yaml` - `.github/workflows/nightly-build.yml` @@ -30,7 +30,7 @@ The Docker-only build fix configuration has been validated. 
All critical checks #### `.goreleaser.yaml` -**Method:** Python YAML parser validation +**Method:** Python YAML parser validation **Status:** ✅ **PASS** ```bash @@ -50,7 +50,7 @@ python3 -c "import yaml; yaml.safe_load(open('.goreleaser.yaml'))" #### `.github/workflows/nightly-build.yml` -**Method:** Python YAML parser validation +**Method:** Python YAML parser validation **Status:** ✅ **PASS** **Result:** Valid YAML structure with no syntax errors. @@ -335,7 +335,7 @@ verify-nightly-supply-chain docker run --name charon-nightly -d \ -p 8080:8080 \ ${{ env.GHCR_REGISTRY }}/${{ env.IMAGE_NAME }}:nightly@${{ needs.build-and-push-nightly.outputs.digest }} - + sleep 10 docker ps | grep charon-nightly curl -f http://localhost:8080/health || exit 1 @@ -460,7 +460,7 @@ grep -r "password\|secret\|token\|key" .goreleaser.yaml .github/workflows/nightl --- -**Report Generated:** 2026-01-30 -**QA Agent:** QA_Security -**Validation Scope:** Docker-Only Build Fix +**Report Generated:** 2026-01-30 +**QA Agent:** QA_Security +**Validation Scope:** Docker-Only Build Fix **Status:** ✅ APPROVED