feat: add nightly branch workflow

# Issue #365: Additional Security Enhancements - Manual Test Plan

**Issue**: <https://github.com/Wikid82/Charon/issues/365>
**PRs**: #436, #437
**Status**: Ready for Manual Testing

**Objective**: Verify constant-time token comparison doesn't leak timing information.

**Steps**:

1. Create a new user invite via the admin UI
2. Copy the invite token from the generated link
3. Attempt to accept the invite with the correct token - should succeed
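A low-tech way to eyeball timing behavior is to take several samples per token and compare means. The endpoint path, instance URL, and token variables below are placeholders, not Charon's actual API; only the `mean` helper is concrete:

```shell
# Helper: average a column of seconds from stdin.
mean() { awk '{s+=$1; n++} END {printf "%.4f\n", s/n}'; }

# Real measurement (uncomment and substitute your instance URL, a correct
# token, and a same-length wrong token -- the endpoint path is a guess):
# for i in $(seq 10); do
#   curl -s -o /dev/null -w '%{time_total}\n' \
#     "https://your-charon-instance.com/invite/accept?token=$TOKEN"
# done | mean

# Demo of the helper on fixed samples:
printf '0.102\n0.098\n0.100\n' | mean
```

Comparable means for correct and wrong tokens (within normal network jitter) are what a constant-time comparison should produce.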
**Objective**: Verify all security headers are present.

**Steps**:

1. Start Charon with HTTPS enabled
2. Use browser dev tools or curl to inspect response headers
3. Verify presence of:
- `Permissions-Policy`

**curl command**:

```bash
curl -I https://your-charon-instance.com/
```
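The per-header check can be made non-interactive with a grep loop. The full header list is in the elided part of this plan; the four names below are a typical set and should be adjusted to match it:

```shell
# Sample response headers; for a live check replace this printf with:
#   curl -sI https://your-charon-instance.com/ > /tmp/headers.txt
printf 'X-Frame-Options: DENY\r\nX-Content-Type-Options: nosniff\r\n' > /tmp/headers.txt

# Report each required header as present (OK) or absent (MISS).
for h in Strict-Transport-Security X-Content-Type-Options X-Frame-Options Permissions-Policy; do
  if grep -qi "^$h:" /tmp/headers.txt; then echo "OK   $h"; else echo "MISS $h"; fi
done
```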
**Objective**: Verify documented container hardening works.

**Steps**:

1. Deploy Charon using the hardened docker-compose config from docs/security.md
2. Verify container starts successfully with `read_only: true`
3. Verify all functionality works (proxy hosts, certificates, etc.)
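The hardened compose settings being exercised presumably resemble the fragment below; the service name, image tag, and mount paths are illustrative, and the authoritative version is the one in docs/security.md:

```yaml
services:
  charon:
    image: charon:latest        # illustrative tag
    read_only: true             # root filesystem mounted read-only
    cap_drop: [ALL]             # drop all Linux capabilities
    security_opt:
      - no-new-privileges:true  # block privilege escalation
    tmpfs:
      - /tmp                    # writable scratch space despite read_only
```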
**Objective**: Verify all documentation is accurate and complete.

**Pages to Review**:

- [ ] `docs/security.md` - TLS, DNS, Container Hardening sections
- [ ] `docs/security-incident-response.md` - SIRP document
- [ ] `docs/getting-started.md` - Security Update Notifications section

**Check for**:

- Correct code examples
- Working links
- No typos or formatting issues

**Objective**: Verify SBOM is generated on release builds.

**Steps**:

1. Push a commit to trigger a non-PR build
2. Check GitHub Actions workflow run
3. Verify "Generate SBOM" step completes successfully
## Overview

This manual test plan focuses on validating the UI/UX improvements for:

1. **Scrollable Sidebar Navigation**: Ensures logout button remains accessible when submenus expand
2. **Fixed Header Bar**: Keeps header visible when scrolling page content

## Test Environment Setup

### Prerequisites

- [ ] Latest code from `feature/beta-release` branch pulled
- [ ] Frontend dependencies installed: `cd frontend && npm install`
- [ ] Development server running: `npm run dev`
- [ ] Browser DevTools open for console error monitoring

### Test Browsers

- [ ] Chrome/Edge (Chromium-based)
- [ ] Firefox
- [ ] Safari (if available)

### Test Modes

- [ ] Light theme
- [ ] Dark theme
- [ ] Desktop viewport (≥1024px width)
### Test Case 1.1: Expanded Sidebar with All Submenus Open

**Steps**:

1. Open Charon in browser at desktop resolution (≥1024px)
2. Ensure sidebar is expanded (click hamburger if collapsed)
3. Click "Settings" menu item to expand its submenu
7. Scroll within the sidebar navigation area

**Expected Results**:

- [ ] Sidebar navigation area shows a subtle scrollbar when content overflows
- [ ] Scrollbar is styled with custom colors matching the theme
- [ ] Logout button remains visible at the bottom of the sidebar
- [ ] Smooth scrolling behavior (no jank or stutter)

**Bug Indicators**:

- ❌ Logout button pushed off-screen
- ❌ Harsh default scrollbar styling
- ❌ No scrollbar when content overflows
### Test Case 1.2: Collapsed Sidebar State

**Steps**:

1. Click the hamburger menu icon to collapse the sidebar
2. Observe the compact icon-only sidebar

**Expected Results**:

- [ ] Collapsed sidebar shows only icons
- [ ] Logout icon remains visible at bottom
- [ ] No scrollbar needed (all items fit in viewport height)
- [ ] Smooth transition animation when collapsing

**Bug Indicators**:

- ❌ Logout icon not visible
- ❌ Jerky collapse animation
- ❌ Icons overlapping or misaligned
### Test Case 1.3: Sidebar Scrollbar Interactivity

**Steps**:

1. Expand sidebar with multiple submenus open (repeat Test Case 1.1 steps 2-6)
2. Hover over the scrollbar
3. Click and drag the scrollbar thumb
5. Use keyboard arrow keys to navigate menu items

**Expected Results**:

- [ ] Scrollbar thumb becomes slightly more opaque on hover
- [ ] Dragging scrollbar thumb scrolls content smoothly
- [ ] Mouse wheel scrolling works within sidebar
- [ ] Active menu item scrolls into view when selected via keyboard

**Bug Indicators**:

- ❌ Scrollbar not interactive
- ❌ Keyboard navigation broken
- ❌ Scrolling feels laggy or stutters
### Test Case 1.4: Mobile Sidebar Behavior

**Steps**:

1. Resize browser to mobile viewport (<1024px) or use DevTools device emulation
2. Click hamburger menu to open mobile sidebar overlay
3. Expand multiple submenus
4. Scroll within the sidebar

**Expected Results**:

- [ ] Sidebar appears as overlay with backdrop
- [ ] Navigation area is scrollable if content overflows
- [ ] Logout button remains at bottom
- [ ] Closing sidebar (click backdrop or X) works smoothly

**Bug Indicators**:

- ❌ Mobile sidebar not scrollable
- ❌ Logout button hidden on mobile
- ❌ Backdrop not dismissing sidebar
### Test Case 2.1: Header Visibility During Content Scroll

**Steps**:

1. Navigate to a page with long content (e.g., Proxy Hosts with many entries)
2. Scroll down the page content at least 500px
3. Continue scrolling to bottom of page
4. Scroll back to top

**Expected Results**:

- [ ] Desktop header bar remains fixed at top of viewport
- [ ] Header does not scroll with content
- [ ] Header background and border remain visible
- [ ] Content scrolls smoothly beneath the header

**Bug Indicators**:

- ❌ Header scrolls off-screen with content
- ❌ Header jumps or stutters
- ❌ Buttons in header become unresponsive
### Test Case 2.2: Header Element Interactivity

**Steps**:

1. Scroll the page down so the header sits fixed at the top of the viewport
2. Click the sidebar collapse/expand button in header
3. Click the notifications icon
5. Open the user menu dropdown (if present)

**Expected Results**:

- [ ] Sidebar collapse button works correctly
- [ ] Notifications dropdown opens anchored to header
- [ ] Theme toggle switches between light/dark mode
- [ ] All click targets remain accurate (no misalignment)

**Bug Indicators**:

- ❌ Dropdowns appear behind header or content
- ❌ Buttons not responding to clicks
- ❌ Dropdowns positioned incorrectly
### Test Case 2.3: Mobile Header Behavior

**Steps**:

1. Resize to mobile viewport (<1024px)
2. Scroll page content down
3. Observe mobile header behavior

**Expected Results**:

- [ ] Mobile header remains fixed at top (existing behavior preserved)
- [ ] No regressions in mobile header functionality
- [ ] Sidebar toggle button works
- [ ] Content scrolls beneath mobile header

**Bug Indicators**:

- ❌ Mobile header scrolls away
- ❌ Mobile header overlaps with content
- ❌ Hamburger menu not working
### Test Case 2.4: Header Z-Index Hierarchy

**Steps**:

1. Use a desktop viewport (≥1024px)
2. Open the notifications dropdown from header
3. Observe dropdown positioning relative to header
5. Observe sidebar relative to header

**Expected Results**:

- [ ] Sidebar (z-30) appears above header (z-10)
- [ ] Dropdowns in header appear correctly (not behind content)
- [ ] No visual overlapping issues
- [ ] Proper layering: Content < Header < Dropdowns < Sidebar < Modals

**Bug Indicators**:

- ❌ Dropdown hidden behind header
- ❌ Sidebar hidden behind header
- ❌ Content appearing above header
### Test Case 3.1: Viewport Resize Behavior

**Steps**:

1. Start at desktop viewport (≥1024px)
2. Expand sidebar with submenus
3. Slowly resize browser width from 1400px → 1000px → 768px → 375px
4. Observe layout transitions at breakpoints

**Expected Results**:

- [ ] Smooth transition at 1024px breakpoint (desktop ↔ mobile)
- [ ] Sidebar transitions from expanded to overlay mode
- [ ] Header transitions from desktop to mobile style
- [ ] Scrolling continues to work in both modes

**Bug Indicators**:

- ❌ Layout breaks at specific widths
- ❌ Horizontal scrollbar appears
- ❌ Elements overlap or get cut off
### Test Case 3.2: Dark/Light Theme Toggle with Scroll State

**Steps**:

1. Expand sidebar with multiple submenus
2. Scroll sidebar to middle position (logout button out of view above)
3. Toggle between light and dark themes
5. Toggle theme again

**Expected Results**:

- [ ] Sidebar scroll position preserved after theme toggle
- [ ] Scrollbar styling updates to match new theme
- [ ] Header background color updates correctly
- [ ] No flashing or visual glitches during theme transition

**Bug Indicators**:

- ❌ Scroll position resets to top
- ❌ Scrollbar styling not updating
- ❌ Layout shifts during theme change
### Test Case 3.3: Browser Zoom Levels

**Steps**:

1. Set browser zoom to 50% (Ctrl/Cmd + Mouse wheel or View menu)
2. Verify sidebar scrolling and header behavior
3. Set browser zoom to 100% (default)
6. Verify functionality

**Expected Results**:

- [ ] Sidebar scrolling works at all zoom levels
- [ ] Header remains fixed at all zoom levels
- [ ] No horizontal scrollbars introduced by zoom
- [ ] Layout remains functional

**Bug Indicators**:

- ❌ Horizontal scrollbars at high zoom
- ❌ Elements overlap at extreme zoom levels
- ❌ Scrolling broken at specific zoom
### Test Case 4.1: Chrome/Edge (Chromium)

**Steps**:

1. Run all test suites 1-3 in Chrome or Edge
2. Open DevTools Console and check for errors
3. Monitor Performance tab for any issues

**Expected Results**:

- [ ] All features work as expected
- [ ] No console errors related to layout or scrolling
- [ ] Smooth 60fps scrolling in Performance tab
### Test Case 4.2: Firefox

**Steps**:

1. Open Charon in Firefox
2. Run all test suites 1-3
3. Verify Firefox-specific scrollbar styling (`scrollbar-width: thin`)
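The Firefox styling under test presumably amounts to something like the fragment below; the selector and custom-property names are illustrative, and the real rules live in the frontend stylesheets:

```css
/* Illustrative only -- actual selector and colors come from the frontend */
.sidebar-nav {
  scrollbar-width: thin;                    /* Firefox: slim scrollbar */
  scrollbar-color: var(--scrollbar-thumb)
                   var(--scrollbar-track);  /* Firefox: themed thumb/track */
}
```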
**Expected Results**:

- [ ] All features work as expected
- [ ] Firefox thin scrollbar styling applied
- [ ] Scrollbar color matches theme (via `scrollbar-color` property)
- [ ] No layout differences compared to Chrome

**Bug Indicators**:

- ❌ Thick default scrollbar in Firefox
- ❌ Layout differences from Chrome
### Test Case 4.3: Safari (if available)

**Steps**:

1. Open Charon in Safari (macOS)
2. Run all test suites 1-3
3. Verify `position: sticky` works correctly

**Expected Results**:

- [ ] Header `position: sticky` works (Safari 13+ supports this)
- [ ] All features work as expected
- [ ] WebKit scrollbar styling applied
- [ ] Smooth scrolling on trackpad

**Bug Indicators**:

- ❌ Header not sticking in Safari
- ❌ Scrollbar styling not applied
- ❌ Stuttering scrolling
### Test Case 5.1: Keyboard Navigation Through Sidebar

**Steps**:

1. Click in browser address bar, then press Tab to enter page
2. Use Tab key to navigate through sidebar menu items
3. Expand submenus using Enter or Space keys
5. Tab to logout button

**Expected Results**:

- [ ] Focus indicator visible on each menu item
- [ ] Focused items scroll into view automatically
- [ ] Can reach and activate logout button via keyboard
- [ ] Focus order is logical (top to bottom)

**Bug Indicators**:

- ❌ Focused items not scrolling into view
- ❌ Cannot reach logout button via keyboard
- ❌ Focus indicator not visible
### Test Case 5.2: Screen Reader Testing (Optional)

**Steps**:

1. Enable screen reader (NVDA, JAWS, VoiceOver)
2. Navigate through sidebar menu
3. Navigate through header elements

**Expected Results**:

- [ ] Sidebar navigation announced as "navigation" landmark
- [ ] Menu items announced with proper labels
- [ ] Current page announced correctly
### Test Case 6.1: Rapid Sidebar Collapse/Expand

**Steps**:

1. Rapidly click sidebar collapse button 10 times
2. Observe for memory leaks or performance degradation

**Expected Results**:

- [ ] Smooth transitions even with rapid toggling
- [ ] No memory leaks (check DevTools Memory tab)
- [ ] No console errors
- [ ] Animations complete correctly

**Bug Indicators**:

- ❌ Animations stuttering after multiple toggles
- ❌ Memory usage increasing
- ❌ Console errors appearing
### Test Case 6.2: Long Page Content Stress Test

**Steps**:

1. Navigate to Proxy Hosts page
2. If limited data, use browser DevTools to inject 100+ fake host entries into the list
3. Scroll from top to bottom of the page rapidly

**Expected Results**:

- [ ] Header remains fixed throughout scroll
- [ ] No layout thrashing or repaints (check DevTools Performance)
- [ ] Smooth scrolling even with large DOM
- [ ] No memory leaks

**Bug Indicators**:

- ❌ Stuttering during scroll
- ❌ Header jumping or flickering
- ❌ Performance degradation with large lists
### Test Case 6.3: Focus Management After Scroll

**Steps**:

1. Focus an element in header (e.g., notifications button)
2. Scroll page content down 500px
3. Click focused element
6. Verify button remains visible

**Expected Results**:

- [ ] Focused elements in header remain accessible
- [ ] Clicking focused elements works correctly
- [ ] Focused elements in sidebar scroll into view
- [ ] No focus lost during scrolling

**Bug Indicators**:

- ❌ Focused element not visible after scroll
- ❌ Click targets misaligned
- ❌ Focus lost unexpectedly
## Known Issues & Expected Behavior

### Not a Bug (Expected)

- **Existing Linting Warnings**: 40 pre-existing TypeScript warnings unrelated to this change
- **Nested Sticky Elements**: Child components using `position: sticky` will be relative to content scroll container, not viewport (documented limitation)
- **Safari <13**: `position: sticky` not supported in very old Safari versions (not a target)

### Future Enhancements

- Smooth scroll to active menu item on page load
- Header shadow effect when content scrolls beneath
- Collapse sidebar automatically on mobile after navigation

If you find a bug during testing, please report it with the following details:

**Console Errors**:

```
[Paste any console errors]
```

**Severity**: [Critical / High / Medium / Low]
Validate that local CodeQL scans match CI execution and that developers can catch …

**Objective:** Verify Go CodeQL scan runs successfully with CI-aligned parameters

**Steps:**

1. Open VS Code Command Palette (`Ctrl+Shift+P`)
2. Type "Tasks: Run Task"
3. Select `Security: CodeQL Go Scan (CI-Aligned) [~60s]`
4. Wait for completion (~60 seconds)

**Expected Results:**

- [ ] Task completes successfully (no errors)
- [ ] Output shows database creation progress
- [ ] Output shows query execution progress
**Objective:** Verify JavaScript/TypeScript CodeQL scan runs with CI-aligned parameters

**Steps:**

1. Open VS Code Command Palette
2. Type "Tasks: Run Task"
3. Select `Security: CodeQL JS Scan (CI-Aligned) [~90s]`
4. Wait for completion (~90 seconds)

**Expected Results:**

- [ ] Task completes successfully
- [ ] Output shows database creation for frontend source
- [ ] Output shows query execution progress (202 queries)
**Objective:** Verify sequential execution of both scans

**Steps:**

1. Open VS Code Command Palette
2. Type "Tasks: Run Task"
3. Select `Security: CodeQL All (CI-Aligned)`
4. Wait for completion (~3 minutes)

**Expected Results:**

- [ ] Go scan executes first
- [ ] JavaScript scan executes second (after Go completes)
- [ ] Both SARIF files generated
**Objective:** Verify govulncheck runs on commit

**Steps:**

1. Open terminal in project root
2. Make a trivial change to any `.go` file (add comment)
3. Stage file: `git add <file>`
5. Observe pre-commit execution

**Expected Results:**

- [ ] Pre-commit hook triggers automatically
- [ ] `security-scan` stage executes
- [ ] `govulncheck` runs on backend code
**Objective:** Verify manual-stage CodeQL scans work via pre-commit

**Steps:**

1. Open terminal in project root
2. Run manual stage: `pre-commit run --hook-stage manual codeql-go-scan --all-files`
3. Wait for completion (~60s)
5. Run: `pre-commit run --hook-stage manual codeql-check-findings --all-files`

**Expected Results:**

- [ ] `codeql-go-scan` executes successfully
- [ ] `codeql-js-scan` executes successfully
- [ ] `codeql-check-findings` checks SARIF files
**Objective:** Verify that ERROR-level findings block the hook

**Steps:**

1. Temporarily introduce a known security issue (e.g., SQL injection)

```go
// In any handler file, add:
query := "SELECT * FROM users WHERE id = " + userInput
```

2. Run: `pre-commit run --hook-stage manual codeql-go-scan --all-files`
3. Run: `pre-commit run --hook-stage manual codeql-check-findings --all-files`
4. Observe output

**Expected Results:**

- [ ] CodeQL scan completes
- [ ] `codeql-check-findings` hook **FAILS**
- [ ] Error message shows high-severity finding
**Objective:** Verify SARIF files are GitHub-compatible

**Steps:**

1. Run any CodeQL scan (TC1 or TC2)
2. Open generated SARIF file in text editor
3. Validate JSON structure
4. Check for required fields

**Expected Results:**

- [ ] File is valid JSON
- [ ] Contains `$schema` property
- [ ] Contains `runs` array with results
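The field checks above can be scripted instead of eyeballed. This sketch writes a minimal stand-in SARIF file so it runs anywhere; for a real run, point the validator at whichever `codeql-*.sarif` file the scan produced:

```shell
# Minimal stand-in SARIF file (replace with the scan's real output path).
cat > /tmp/example.sarif <<'EOF'
{"$schema": "https://json.schemastore.org/sarif-2.1.0.json", "runs": [{"results": []}]}
EOF

# Validate JSON structure and the two required fields from the checklist.
python3 - /tmp/example.sarif <<'EOF'
import json, sys
doc = json.load(open(sys.argv[1]))
assert "$schema" in doc, "missing $schema"
assert isinstance(doc.get("runs"), list) and doc["runs"], "missing runs"
print("SARIF structure OK")
EOF
```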
**Objective:** Verify CI behavior matches local execution

**Steps:**

1. Create test branch: `git checkout -b test/codeql-alignment`
2. Make trivial change and commit
3. Push to GitHub: `git push origin test/codeql-alignment`
6. Review security findings in PR

**Expected Results:**

- [ ] CodeQL workflow triggers on PR
- [ ] Go and JavaScript scans execute
- [ ] Workflow uses `security-and-quality` suite
**Objective:** Validate user-facing documentation

**Steps:**

1. Review: `docs/security/codeql-scanning.md`
2. Follow quick start instructions
3. Review: `.github/instructions/copilot-instructions.md`
4. Verify Definition of Done section

**Expected Results:**

- [ ] Quick start instructions work as documented
- [ ] Command examples are accurate
- [ ] Task names match VS Code tasks
**Objective:** Verify scan execution times are reasonable

**Steps:**

1. Run Go scan via VS Code task
2. Measure execution time
3. Run JS scan via VS Code task
4. Measure execution time
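If you want wall-clock numbers without eyeballing the task output, a plain shell wrapper works; the `sleep` below stands in for whatever command the VS Code task actually runs:

```shell
# Generic wall-clock timer around a long-running command.
start=$(date +%s)
sleep 1   # <- replace with the CodeQL task's underlying command
echo "elapsed: $(( $(date +%s) - start ))s"
```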
**Expected Results:**

- [ ] Go scan completes in **50-70 seconds**
- [ ] JS scan completes in **80-100 seconds**
- [ ] Combined scan completes in **2.5-3.5 minutes**

**Objective:** Ensure other CI workflows still pass

**Steps:**

1. Run full CI suite on test branch
2. Check all workflow statuses

**Expected Results:**

- [ ] Build workflows pass
- [ ] Test workflows pass
- [ ] Lint workflows pass
**Objective:** Verify normal development isn't disrupted

**Steps:**

1. Make code changes (normal development)
2. Run existing VS Code tasks (Build, Test, Lint)
3. Commit changes with pre-commit hooks
4. Push to branch

**Expected Results:**

- [ ] Existing tasks work normally
- [ ] Fast pre-commit hooks run automatically
- [ ] Manual CodeQL scans are opt-in
Based on QA report, these findings are expected:

**Go (79 findings):**

- Email injection (CWE-640): 3 findings
- SSRF (CWE-918): 2 findings
- Log injection (CWE-117): 10 findings
- Quality issues: 64 findings (redundant code, missing checks)

**JavaScript (105 findings):**

- DOM-based XSS (CWE-079): 1 finding
- Incomplete validation (CWE-020): 4 findings
- Quality issues: 100 findings (mostly in minified dist/ bundles)
**Overall Result:** ☐ **PASS** ☐ **FAIL**

**Blockers Found:**

- None / List blockers here

**Recommendations:**

- None / List improvements here

**Sign-Off:**

- [ ] All critical tests passed
- [ ] Documentation is accurate
- [ ] No major issues found
### Issue: CodeQL not found

**Solution:**

```bash
# Install/upgrade CodeQL
gh codeql set-version latest
codeql version # Verify installation
```

**Symptom:** Error about missing predicates or incompatible query packs

**Solution:**

```bash
# Upgrade CodeQL to v2.17.0 or newer
gh codeql set-version latest
rm -rf ~/.codeql/
```

### Issue: Pre-commit hooks not running

**Solution:**

```bash
# Reinstall hooks
pre-commit uninstall
pre-commit install
pre-commit run --all-files
```

### Issue: SARIF file not generated

**Solution:**

```bash
# Check permissions
ls -la codeql-*.sarif
```
**Date Created:** 2025-12-24
**Test Environment:** Local Docker / Staging
**Prerequisites:**

- Charon running with latest build
- Access to test webhooks (Discord, Slack, Gotify)
- At least 2 proxy hosts configured
1. Click "Validate Template"
2. Click "Send Test Notification"
3. Check Discord channel

**Expected Result:**
1. Click "Validate Template"

**Expected Result:**
1. Click "Send Test Notification"
2. Check Slack channel

**Expected Result:**
1. Click "Send Test Notification"
2. Check Gotify notification

**Expected Result:**
1. Click "Send Test Notification"
2. Check webhook.site to see received payload

**Expected Result:**
@@ -475,10 +476,10 @@
docker logs charon 2>&1 | grep "failure_count\|waiting for threshold" | tail -50
```

2. Review log output
3. Temporarily disconnect host (e.g., stop container)
4. Wait for 2 check cycles (120 seconds)
5. Check logs again
1. Review log output
2. Temporarily disconnect host (e.g., stop container)
3. Wait for 2 check cycles (120 seconds)
4. Check logs again

**Expected Result:**

@@ -571,7 +572,7 @@ docker logs charon 2>&1 | grep "Host status changed" | tail -10
docker logs -f charon 2>&1 | grep "Host TCP check"
```

2. Briefly pause host container (5 seconds):
1. Briefly pause host container (5 seconds):

```bash
docker pause <container_name>
@@ -579,7 +580,7 @@ sleep 5
docker unpause <container_name>
```

3. Observe logs for next 2 check cycles
1. Observe logs for next 2 check cycles

**Expected Result:**

@@ -619,7 +620,7 @@ for (let i = 0; i < 20; i++) {
}
```

4. Observe console output and UI
1. Observe console output and UI

**Expected Result:**

@@ -649,8 +650,8 @@ for (let i = 0; i < 20; i++) {
docker logs -f charon 2>&1 | grep "All host checks completed\|Check cycle started"
```

2. Observe timing over 5 minutes
3. Count check cycles
1. Observe timing over 5 minutes
2. Count check cycles

**Expected Result:**

@@ -810,8 +811,8 @@ docker logs -f charon 2>&1 | grep "All host checks completed\|Check cycle starte
}
```

3. Send test notification to each
4. Verify all variables are replaced
1. Send test notification to each
2. Verify all variables are replaced

**Expected Result:**

@@ -842,13 +843,13 @@ docker logs -f charon 2>&1 | grep "All host checks completed\|Check cycle starte
docker stats charon
```

3. Check log timing:
1. Check log timing:

```bash
docker logs charon 2>&1 | grep "All host checks completed" | tail -10
```

4. Observe check duration over 5 minutes
1. Observe check duration over 5 minutes

**Expected Result:**

@@ -883,8 +884,8 @@ docker logs charon 2>&1 | grep "All host checks completed" | tail -10
}
```

2. Click "Validate Template"
3. Attempt to save
1. Click "Validate Template"
2. Attempt to save

**Expected Result:**

@@ -916,7 +917,7 @@ docker logs charon 2>&1 | grep "All host checks completed" | tail -10
docker logs charon 2>&1 | grep "Failed to send.*notification"
```

4. Verify system continues operating
1. Verify system continues operating

**Expected Result:**

@@ -1047,24 +1048,28 @@ docker logs charon 2>&1 | grep "Failed to send.*notification"
### Test Data Setup

**Discord Webhook:**

```
URL: https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_WEBHOOK_TOKEN
Channel: #charon-test
```

**Slack Webhook:**

```
URL: https://hooks.slack.com/services/YOUR/WEBHOOK/PATH
Channel: #charon-test
```

**Gotify Instance:**

```
URL: https://gotify.example.com
Token: YOUR_APP_TOKEN
```

**Test Proxy Hosts:**

```
Host 1: test-app-1.local (port 8081)
Host 2: test-app-2.local (port 8082)

@@ -32,12 +32,14 @@ Before beginning tests, ensure:
- Discord webhook: <https://discord.com/developers/docs/resources/webhook>

2. **HTTP Client**:

```bash
# Verify curl is available
curl --version
```

3. **Log Access**:

```bash
# View Charon logs
docker logs charon --tail=50 --follow
@@ -65,6 +67,7 @@ Each test case includes:
**Objective**: Verify legitimate HTTPS webhooks work correctly

**Steps**:

1. Navigate to Security Settings → Notifications
2. Configure webhook: `https://webhook.site/<your-unique-id>`
3. Click **Save**
@@ -78,6 +81,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -89,6 +93,7 @@ Each test case includes:
**Objective**: Verify HTTP webhooks work when explicitly allowed

**Steps**:

1. Navigate to Security Settings → Notifications
2. Configure webhook: `http://webhook.site/<your-unique-id>`
3. Click **Save**
@@ -102,6 +107,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -113,6 +119,7 @@ Each test case includes:
**Objective**: Verify production webhook services work

**Steps**:

1. Create Slack incoming webhook at <https://api.slack.com/messaging/webhooks>
2. Configure webhook in Charon: `https://hooks.slack.com/services/T00/B00/XXX`
3. Save configuration
@@ -126,6 +133,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -137,6 +145,7 @@ Each test case includes:
**Objective**: Verify Discord integration works

**Steps**:

1. Create Discord webhook in server settings
2. Configure webhook in Charon: `https://discord.com/api/webhooks/123456/abcdef`
3. Save configuration
@@ -150,6 +159,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -163,6 +173,7 @@ Each test case includes:
**Objective**: Verify RFC 1918 Class A blocking

**Steps**:

1. Attempt to configure webhook: `http://10.0.0.1/webhook`
2. Click **Save**
3. Observe error message
@@ -174,6 +185,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -185,6 +197,7 @@ Each test case includes:
**Objective**: Verify RFC 1918 Class B blocking

**Steps**:

1. Attempt to configure webhook: `http://172.16.0.1/admin`
2. Click **Save**
3. Observe error message
@@ -196,6 +209,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -207,6 +221,7 @@ Each test case includes:
**Objective**: Verify RFC 1918 Class C blocking

**Steps**:

1. Attempt to configure webhook: `http://192.168.1.1/`
2. Click **Save**
3. Observe error message
@@ -218,6 +233,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -229,6 +245,7 @@ Each test case includes:
**Objective**: Verify port numbers don't bypass protection

**Steps**:

1. Attempt to configure webhook: `http://192.168.1.100:8080/webhook`
2. Click **Save**
3. Observe error message
@@ -240,6 +257,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
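The private-range checks exercised by the tests above can be sketched with Go's standard library; `isBlockedIP` is a hypothetical helper for illustration, not Charon's actual implementation. Note that the check runs on the host alone, which is why a port number (as in the last test) cannot bypass it:

```go
package main

import (
	"fmt"
	"net"
)

// isBlockedIP reports whether a literal IP falls in a range an SSRF
// guard should reject: RFC 1918 private space (10/8, 172.16/12,
// 192.168/16 are all covered by net.IP.IsPrivate), loopback,
// link-local, and the unspecified address.
func isBlockedIP(host string) bool {
	ip := net.ParseIP(host)
	if ip == nil {
		return false // not a literal IP; DNS resolution is checked elsewhere
	}
	return ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast() || ip.IsUnspecified()
}

func main() {
	for _, h := range []string{"10.0.0.1", "172.16.0.1", "192.168.1.100", "8.8.8.8"} {
		fmt.Println(h, isBlockedIP(h))
	}
}
```

The same helper also rejects `127.0.0.1`, `::1`, and `169.254.169.254`, so one function backs several of the test cases in this plan.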
@@ -253,12 +271,14 @@ Each test case includes:
**Objective**: Verify AWS metadata service is blocked

**Steps**:

1. Attempt to configure webhook: `http://169.254.169.254/latest/meta-data/`
2. Click **Save**
3. Observe error message
4. Check logs for HIGH severity SSRF attempt

**Expected Result**:

- ❌ Configuration rejected
- ✅ Log entry: `severity=HIGH event=ssrf_blocked`
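Blocking metadata services needs both an IP-range check (169.254.0.0/16 is link-local, used by AWS and Azure) and a hostname denylist, since GCP's `metadata.google.internal` is a name, not an address. A sketch, where `isMetadataTarget` and the denylist contents are illustrative only:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// Well-known metadata hostnames rejected before any DNS lookup
// happens (illustrative, not exhaustive).
var blockedHosts = map[string]bool{
	"metadata.google.internal": true,
}

// isMetadataTarget rejects link-local literals such as
// 169.254.169.254 as well as known metadata hostnames.
func isMetadataTarget(host string) bool {
	if blockedHosts[strings.ToLower(host)] {
		return true
	}
	ip := net.ParseIP(host)
	return ip != nil && ip.IsLinkLocalUnicast()
}

func main() {
	fmt.Println(isMetadataTarget("169.254.169.254"))
	fmt.Println(isMetadataTarget("example.com"))
}
```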

@@ -267,6 +287,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -278,6 +299,7 @@ Each test case includes:
**Objective**: Verify GCP metadata service is blocked

**Steps**:

1. Attempt to configure webhook: `http://metadata.google.internal/computeMetadata/v1/`
2. Click **Save**
3. Observe error message
@@ -289,6 +311,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -300,6 +323,7 @@ Each test case includes:
**Objective**: Verify Azure metadata service is blocked

**Steps**:

1. Attempt to configure webhook: `http://169.254.169.254/metadata/instance?api-version=2021-02-01`
2. Click **Save**
3. Observe error message
@@ -311,6 +335,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -324,6 +349,7 @@ Each test case includes:
**Objective**: Verify localhost blocking (unless explicitly allowed)

**Steps**:

1. Attempt to configure webhook: `http://127.0.0.1:8080/internal`
2. Click **Save**
3. Observe error message
@@ -335,6 +361,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -346,6 +373,7 @@ Each test case includes:
**Objective**: Verify `localhost` keyword blocking

**Steps**:

1. Attempt to configure webhook: `http://localhost/admin`
2. Click **Save**
3. Observe error message
@@ -357,6 +385,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -368,6 +397,7 @@ Each test case includes:
**Objective**: Verify IPv6 loopback blocking

**Steps**:

1. Attempt to configure webhook: `http://[::1]/webhook`
2. Click **Save**
3. Observe error message
@@ -379,6 +409,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -392,6 +423,7 @@ Each test case includes:
**Objective**: Verify file:// protocol is blocked

**Steps**:

1. Attempt to configure webhook: `file:///etc/passwd`
2. Click **Save**
3. Observe error message
@@ -403,6 +435,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -414,6 +447,7 @@ Each test case includes:
**Objective**: Verify ftp:// protocol is blocked

**Steps**:

1. Attempt to configure webhook: `ftp://internal-server.local/upload/`
2. Click **Save**
3. Observe error message
@@ -425,6 +459,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -436,6 +471,7 @@ Each test case includes:
**Objective**: Verify gopher:// protocol is blocked

**Steps**:

1. Attempt to configure webhook: `gopher://internal:70/`
2. Click **Save**
3. Observe error message
@@ -447,6 +483,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -458,6 +495,7 @@ Each test case includes:
**Objective**: Verify data: scheme is blocked

**Steps**:

1. Attempt to configure webhook: `data:text/html,<script>alert(1)</script>`
2. Click **Save**
3. Observe error message
@@ -469,6 +507,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
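The scheme tests above amount to an allowlist rather than a denylist: instead of enumerating `file`, `ftp`, `gopher`, and `data`, only `http` and `https` are accepted and everything else is rejected by default. A minimal sketch (`allowedScheme` is a hypothetical helper):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// allowedScheme accepts only http/https webhook URLs; any other
// scheme, known or unknown, is rejected without a special case.
func allowedScheme(raw string) bool {
	u, err := url.Parse(raw)
	if err != nil {
		return false
	}
	switch strings.ToLower(u.Scheme) {
	case "http", "https":
		return true
	}
	return false
}

func main() {
	fmt.Println(allowedScheme("https://hooks.slack.com/services/T00/B00/XXX"))
	fmt.Println(allowedScheme("file:///etc/passwd"))
}
```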
@@ -484,6 +523,7 @@ Each test case includes:
**Objective**: Verify connection-time IP validation prevents DNS rebinding

**Steps**:

1. Configure a webhook with a domain you control
2. Initially point the domain to a public IP (passes validation)
3. After webhook is saved, update DNS to point to `192.168.1.100`
@@ -491,6 +531,7 @@ Each test case includes:
5. Observe the webhook delivery failure

**Expected Result**:

- ❌ Webhook delivery fails with "connection to private IP blocked"
- ✅ Log entry shows re-validation caught the attack
- ✅ No request reaches 192.168.1.100
@@ -500,6 +541,7 @@ Each test case includes:
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```
This test requires DNS control. Alternative: use tools like rebinder.it
```
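One way to get the connection-time re-validation this test exercises is Go's `net.Dialer.Control` hook, which sees the already-resolved address just before the socket connects, so a DNS answer that changed after save-time validation is still caught. A sketch under that assumption (not Charon's actual code):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"syscall"
)

// checkResolvedAddr rejects the dial if the resolved IP is private,
// loopback, or link-local. Because it runs per connection attempt,
// a rebinding domain cannot smuggle in a private IP after the URL
// passed validation at configuration time.
func checkResolvedAddr(addr string) error {
	host, _, err := net.SplitHostPort(addr)
	if err != nil {
		return err
	}
	ip := net.ParseIP(host)
	if ip == nil {
		return fmt.Errorf("unexpected non-IP address %q", host)
	}
	if ip.IsPrivate() || ip.IsLoopback() || ip.IsLinkLocalUnicast() {
		return fmt.Errorf("connection to private IP blocked: %s", ip)
	}
	return nil
}

// newGuardedClient wires the check into an http.Client so every
// outbound webhook dial is validated at connect time.
func newGuardedClient() *http.Client {
	d := &net.Dialer{
		Control: func(network, address string, _ syscall.RawConn) error {
			return checkResolvedAddr(address)
		},
	}
	return &http.Client{Transport: &http.Transport{DialContext: d.DialContext}}
}

func main() {
	_ = newGuardedClient()
	fmt.Println(checkResolvedAddr("192.168.1.100:80"))
}
```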
@@ -511,6 +553,7 @@ This test requires DNS control. Alternative: use tools like rebinder.it
**Objective**: Verify IPs are validated at TCP connection time (not just URL parsing)

**Steps**:

1. Use a webhook receiver that logs incoming connections
2. Configure webhook URL pointing to the receiver
3. Check that the connection comes from Charon
@@ -523,6 +566,7 @@ This test requires DNS control. Alternative: use tools like rebinder.it
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```
Check logs for: "Validating IP for connection"
```
@@ -536,12 +580,14 @@ Check logs for: "Validating IP for connection"
**Objective**: Verify redirects to private IPs are blocked

**Steps**:

1. Set up a redirect server that returns: `HTTP 302 Location: http://192.168.1.100/`
2. Configure webhook pointing to the redirect server
3. Trigger webhook delivery
4. Observe redirect handling

**Expected Result**:

- ❌ "redirect to private IP blocked"
- ✅ Original request fails, no connection to 192.168.1.100
@@ -550,6 +596,7 @@ Check logs for: "Validating IP for connection"
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```
Alternative: use httpbin.org/redirect-to?url=http://192.168.1.100
```
@@ -561,6 +608,7 @@ Alternative: use httpbin.org/redirect-to?url=http://192.168.1.100
**Objective**: Verify redirects to cloud metadata endpoints are blocked

**Steps**:

1. Set up redirect: `HTTP 302 Location: http://169.254.169.254/latest/meta-data/`
2. Configure webhook pointing to redirect
3. Trigger webhook delivery
@@ -573,6 +621,7 @@ Alternative: use httpbin.org/redirect-to?url=http://192.168.1.100
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -584,6 +633,7 @@ Alternative: use httpbin.org/redirect-to?url=http://192.168.1.100
**Objective**: Verify excessive redirects are blocked

**Steps**:

1. Set up chain of 5+ redirects (each to a valid public URL)
2. Configure webhook pointing to first redirect
3. Trigger webhook delivery
@@ -596,6 +646,7 @@ Alternative: use httpbin.org/redirect-to?url=http://192.168.1.100
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```
Default max redirects is 0 (no redirects). If enabled, max is typically 2.
```
@@ -607,6 +658,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify redirects to localhost are blocked

**Steps**:

1. Set up redirect: `HTTP 302 Location: http://127.0.0.1:8080/admin`
2. Configure webhook pointing to redirect
3. Trigger webhook delivery
@@ -619,6 +671,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -632,6 +685,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify URL test endpoint works for legitimate URLs

**Steps**:

1. Navigate to **System Settings** → **URL Testing** (or use API)
2. Test URL: `https://api.github.com`
3. Submit test
@@ -644,6 +698,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -655,6 +710,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify URL test endpoint also has SSRF protection

**Steps**:

1. Navigate to URL Testing
2. Test URL: `http://192.168.1.1`
3. Submit test
@@ -667,6 +723,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -678,6 +735,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify DNS resolution failure handling

**Steps**:

1. Test URL: `https://this-domain-does-not-exist-12345.com`
2. Submit test
3. Observe error
@@ -689,6 +747,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -702,6 +761,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify CrowdSec hub sync works with official domain

**Steps**:

1. Navigate to **Security** → **CrowdSec**
2. Enable CrowdSec (if not already enabled)
3. Trigger hub sync (or wait for automatic sync)
@@ -714,6 +774,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -725,6 +786,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify custom hub URLs are validated

**Steps**:

1. Attempt to configure custom hub URL: `http://malicious-hub.evil.com`
2. Trigger hub sync
3. Observe error in logs
@@ -736,6 +798,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```
(This test may require configuration file modification)
```
@@ -749,6 +812,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify update service uses validated GitHub URLs

**Steps**:

1. Navigate to **System** → **Updates** (if available in UI)
2. Click **Check for Updates**
3. Observe success or error
@@ -761,6 +825,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -774,6 +839,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify error messages don't leak internal information

**Steps**:

1. Attempt various blocked URLs from previous tests
2. Record exact error messages shown to user
3. Verify no internal IPs, hostnames, or network topology revealed
@@ -785,6 +851,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -796,11 +863,13 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify logs contain more detail than user-facing errors

**Steps**:

1. Attempt blocked URL: `http://192.168.1.100/admin`
2. Check user-facing error message
3. Check server logs for detailed information

**Expected Result**:

- User sees: "URL resolves to a private IP address (blocked for security)"
- Logs show: `severity=HIGH url=http://192.168.1.100/admin resolved_ip=192.168.1.100`
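The split this test checks for, a generic user-facing message with the specifics confined to server logs, can be sketched as a pair of values produced by validation (the type and function names are illustrative, not Charon's API):

```go
package main

import (
	"fmt"
	"log"
)

// validationError carries a safe message for the UI and a detailed
// one for server logs, so internal IPs and topology never reach
// the user-facing error path.
type validationError struct {
	userMsg   string // shown in the UI
	logDetail string // written to server logs only
}

func rejectPrivateURL(rawURL, resolvedIP string) validationError {
	return validationError{
		userMsg:   "URL resolves to a private IP address (blocked for security)",
		logDetail: fmt.Sprintf("severity=HIGH url=%s resolved_ip=%s", rawURL, resolvedIP),
	}
}

func main() {
	e := rejectPrivateURL("http://192.168.1.100/admin", "192.168.1.100")
	log.Printf("%s", e.logDetail) // detailed entry stays server-side
	fmt.Println(e.userMsg)        // generic message for the user
}
```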

@@ -809,6 +878,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -822,12 +892,14 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify complete webhook notification flow with SSRF protection

**Steps**:

1. Configure valid webhook: `https://webhook.site/<unique-id>`
2. Trigger CrowdSec block event (simulate attack)
3. Verify notification received at webhook.site
4. Check logs for successful webhook delivery

**Expected Result**:

- ✅ Webhook configured without errors
- ✅ Security event triggered
- ✅ Notification delivered successfully
@@ -838,6 +910,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -849,6 +922,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify webhook validation persists across restarts

**Steps**:

1. Configure valid webhook: `https://webhook.site/<unique-id>`
2. Restart Charon container: `docker restart charon`
3. Trigger security event
@@ -861,6 +935,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -872,12 +947,14 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify SSRF protection applies to all webhook types

**Steps**:

1. Configure security notification webhook (valid)
2. Configure custom webhook notification (valid)
3. Attempt to add webhook with private IP (blocked)
4. Verify both valid webhooks work, blocked one rejected

**Expected Result**:

- ✅ Valid webhooks accepted
- ❌ Private IP webhook rejected
- ✅ Both valid webhooks receive notifications
@@ -887,6 +964,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -898,6 +976,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Objective**: Verify URL testing requires admin privileges

**Steps**:

1. Log out of admin account
2. Log in as non-admin user (if available)
3. Attempt to access URL testing endpoint
@@ -910,6 +989,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Pass/Fail**: [ ] Pass [ ] Fail

**Notes**:

```

```
@@ -939,11 +1019,13 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
### Pass Criteria

**Minimum Requirements**:

- [ ] All 36 test cases passed OR
- [ ] All critical tests passed (TC-005 through TC-018, TC-021 through TC-024, TC-026) AND
- [ ] All failures have documented justification

**Critical Tests** (Must Pass):

- [ ] TC-005: Class A Private Network blocking
- [ ] TC-006: Class B Private Network blocking
- [ ] TC-007: Class C Private Network blocking
@@ -964,21 +1046,25 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
**Test Case**: TC-___
**Severity**: [ ] Critical [ ] High [ ] Medium [ ] Low
**Description**:

```

```

**Steps to Reproduce**:

```

```

**Expected vs Actual**:

```

```

**Workaround** (if applicable):

```

```
@@ -990,6 +1076,7 @@ Default max redirects is 0 (no redirects). If enabled, max is typically 2.
### Tester Certification

I certify that:

- [ ] All test cases were executed as described
- [ ] Results are accurate and complete
- [ ] All issues are documented

@@ -30,6 +30,7 @@
| SSRF-006 | `http://192.168.255.255/webhook` | ❌ Blocked |

**Command**:

```bash
curl -X POST http://localhost:8080/api/v1/settings/test-url \
  -H "Content-Type: application/json" \
@@ -65,6 +66,7 @@ curl -X POST http://localhost:8080/api/v1/settings/test-url \
| SSRF-023 | `http://169.254.0.1/` | ❌ Blocked (link-local range) |

**Command**:

```bash
curl -X POST http://localhost:8080/api/v1/settings/test-url \
  -H "Content-Type: application/json" \
@@ -118,6 +120,7 @@ curl -X POST http://localhost:8080/api/v1/settings/test-url \
| SSRF-062 | URL redirects to private IP | ❌ Blocked |

**Test Setup**: Use httpbin.org redirect:

```bash
# This should be blocked if final destination is private
curl -X POST http://localhost:8080/api/v1/settings/test-url \

@@ -34,6 +34,7 @@ FAIL: github.com/Wikid82/charon/backend/internal/api/handlers (timeout 441s)
### Affected Tests

All handler tests, including:

- Access list handlers
- Auth handlers
- Backup handlers
@@ -48,11 +49,13 @@ All handler tests, including:
### Recommended Fix

**Option 1: Increase Timeout**

```bash
go test -timeout 15m ./internal/api/handlers/...
```

**Option 2: Split Test Suite**

```bash
# Fast unit tests
go test -short ./internal/api/handlers/...
@@ -62,6 +65,7 @@ go test -run Integration ./internal/api/handlers/...
```

**Option 3: Optimize Tests**

- Use mocks for external HTTP calls
- Parallelize independent tests with `t.Parallel()`
- Use table-driven tests to reduce setup/teardown overhead
@@ -111,12 +115,14 @@ Failed Tests:
### Root Cause

**Error Pattern:**

```
Error: "access to private IP addresses is blocked (resolved to 127.0.0.1)"
does not contain "status 404"
```

**Analysis:**

1. Tests use `httptest.NewServer()` which binds to `127.0.0.1` (localhost)
2. URL validation code has private IP blocking for security
3. Private IP check runs BEFORE HTTP request is made
@@ -124,6 +130,7 @@ Error: "access to private IP addresses is blocked (resolved to 127.0.0.1)"
5. This creates a mismatch between expected and actual error messages
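The mismatch described above is easy to reproduce: `httptest.NewServer` always binds to the loopback interface, so any guard that rejects loopback IPs fires before a single HTTP round trip happens. A minimal demonstration:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"net/http/httptest"
	"net/url"
)

// testServerHost starts an httptest server (as the failing handler
// tests do) and returns the host its URL points at.
func testServerHost() string {
	ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusNotFound)
	}))
	defer ts.Close()
	u, _ := url.Parse(ts.URL)
	host, _, _ := net.SplitHostPort(u.Host)
	return host
}

func main() {
	host := testServerHost()
	// The SSRF guard sees a loopback address and rejects the request
	// before any round trip, so "status 404" is never observed.
	fmt.Println(host, net.ParseIP(host).IsLoopback())
}
```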

**Code Location:**

```go
// File: backend/internal/utils/url_connectivity_test.go
// Lines: 103, 127-128, 156
@@ -138,6 +145,7 @@ assert.Contains(t, err.Error(), "status 404")
### Recommended Fix

**Option 1: Use Public Test Endpoints**

```go
func TestTestURLConnectivity_StatusCodes(t *testing.T) {
	tests := []struct {
@@ -153,6 +161,7 @@ func TestTestURLConnectivity_StatusCodes(t *testing.T) {
```

**Option 2: Add Test-Only Bypass**

```go
// In url_connectivity.go
func TestURLConnectivity(url string) error {
@@ -174,6 +183,7 @@ func TestMain(m *testing.M) {
```

**Option 3: Mock DNS Resolution**

```go
// Use custom dialer that returns public IPs for test domains
type testDialer struct {

@@ -11,6 +11,7 @@
## Objective

Manually validate the Grype SBOM remediation implementation in real-world CI/CD scenarios to ensure:

- Workflow operates correctly in all expected conditions
- Error handling is robust and user-friendly
- No regressions in existing functionality
@@ -32,15 +33,18 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD
**Objective**: Verify workflow gracefully skips when image doesn't exist (common in PR workflows before docker-build completes).

**Prerequisites**:

- Create a test PR with code changes
- Ensure docker-build workflow has NOT completed yet

**Steps**:

1. Create/update PR on feature branch
2. Navigate to Actions → Supply Chain Verification workflow
3. Wait for workflow to complete

**Expected Results**:

- ✅ Workflow completes successfully (green check)
- ✅ "Check Image Availability" step shows "Image not found" message
- ✅ "Report Skipped Scan" step shows clear skip reason
@@ -49,6 +53,7 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD
- ✅ No false failures or error messages

**Pass Criteria**:

- [ ] Workflow status: Success (not failed or warning)
- [ ] PR comment is clear and helpful
- [ ] GitHub Step Summary shows skip reason
@@ -61,15 +66,18 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD
|
||||
**Objective**: Verify full SBOM generation, validation, and vulnerability scanning when image exists.
|
||||
|
||||
**Prerequisites**:
|
||||
|
||||
- Use a branch where docker-build has completed (e.g., `main` or merged PR)
|
||||
- Image exists in GHCR: `ghcr.io/wikid82/charon:latest` or `ghcr.io/wikid82/charon:pr-XXX`
|
||||
|
||||
**Steps**:
|
||||
|
||||
1. Trigger workflow manually via `workflow_dispatch` on main branch
|
||||
2. OR merge a PR and wait for automatic workflow trigger
|
||||
3. Monitor workflow execution
|
||||
|
||||
**Expected Results**:
|
||||
|
||||
- ✅ "Check Image Availability" step finds image
|
||||
- ✅ "Verify SBOM Completeness" step generates CycloneDX SBOM
|
||||
- ✅ Syft version is logged
|
||||
@@ -90,6 +98,7 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD
|
||||
- ✅ No "sbom format not recognized" errors
|
||||
|
||||
**Pass Criteria**:
|
||||
|
||||
- [ ] Workflow status: Success
|
||||
- [ ] SBOM artifact uploaded and downloadable
|
||||
- [ ] Grype scan completes without format errors
|
||||
@@ -104,6 +113,7 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD

**Objective**: Verify SBOM validation catches malformed files before passing to Grype.

**Prerequisites**:

- Requires temporarily modifying workflow to introduce error (NOT for production testing)
- OR wait for natural occurrence (unlikely)

@@ -111,6 +121,7 @@ Manually validate the Grype SBOM remediation implementation in real-world CI/CD

This scenario is validated through code review and unit testing of validation logic. Manual testing in production environment is not recommended as it requires intentionally breaking the workflow.

**Code Review Validation** (Already Completed):

- ✅ jq availability check (lines 125-130)
- ✅ File existence check (lines 133-138)
- ✅ Non-empty check (lines 141-146)

@@ -118,6 +129,7 @@ This scenario is validated through code review and unit testing of validation lo

- ✅ CycloneDX format check (lines 159-173)

**Pass Criteria**:

- [ ] Code review confirms all validation checks present
- [ ] Error handling paths use `exit 1` for real errors
- [ ] Clear error messages at each validation point

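The validation checks above (file exists, non-empty, CycloneDX format) can be approximated by a standalone function. This is a sketch, not the workflow's actual script; it uses `grep` instead of `jq` to stay dependency-free:

```bash
# Sketch of the SBOM sanity checks described above, operating on a
# JSON CycloneDX SBOM on disk. Each failure prints a clear error.
validate_sbom() {
  local sbom="$1"
  if [ ! -f "$sbom" ]; then
    echo "ERROR: SBOM file not found: $sbom" >&2; return 1
  fi
  if [ ! -s "$sbom" ]; then
    echo "ERROR: SBOM file is empty: $sbom" >&2; return 1
  fi
  if ! grep -q '"bomFormat"[[:space:]]*:[[:space:]]*"CycloneDX"' "$sbom"; then
    echo "ERROR: SBOM is not in CycloneDX format: $sbom" >&2; return 1
  fi
  echo "SBOM validation passed: $sbom"
}

printf '{"bomFormat": "CycloneDX", "specVersion": "1.5"}\n' > /tmp/sbom.json
validate_sbom /tmp/sbom.json  # prints: SBOM validation passed: /tmp/sbom.json
```
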
@@ -129,15 +141,18 @@ This scenario is validated through code review and unit testing of validation lo

**Objective**: Verify workflow correctly identifies and reports critical vulnerabilities.

**Prerequisites**:

- Use an older image tag with known vulnerabilities (if available)
- OR wait for vulnerability to be discovered in current image

**Steps**:

1. Trigger workflow on image with vulnerabilities
2. Monitor vulnerability scan step
3. Check PR comment and workflow logs

**Expected Results**:

- ✅ Grype scan completes successfully
- ✅ Vulnerabilities categorized by severity
- ✅ Critical vulnerabilities trigger GitHub annotation/warning

@@ -146,6 +161,7 @@ This scenario is validated through code review and unit testing of validation lo

- ✅ Link to full report is provided

**Pass Criteria**:

- [ ] Vulnerability counts are accurate
- [ ] Critical vulnerabilities highlighted
- [ ] Clear action guidance provided

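Severity counts like the above can be spot-checked locally against the Grype JSON report. Illustrative only: the field layout follows Grype's `matches[].vulnerability.severity`, and real reports are much larger:

```bash
# Count findings of one severity in a Grype JSON report.
# grep -o emits one line per match; wc -l counts them.
count_severity() {
  grep -o "\"severity\": *\"$2\"" "$1" | wc -l | tr -d ' '
}

# Tiny stand-in for a real Grype report:
cat > /tmp/grype.json <<'EOF'
{"matches":[
  {"vulnerability": {"severity": "Critical"}},
  {"vulnerability": {"severity": "High"}},
  {"vulnerability": {"severity": "Critical"}}
]}
EOF

count_severity /tmp/grype.json Critical  # prints 2
```
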
@@ -158,10 +174,12 @@ This scenario is validated through code review and unit testing of validation lo

**Objective**: Verify workflow executes within acceptable time limits.

**Steps**:

1. Monitor workflow execution time across multiple runs
2. Check individual step durations

**Expected Results**:

- ✅ Total workflow time: < 10 minutes
- ✅ Image check: < 30 seconds
- ✅ SBOM generation: < 2 minutes

@@ -170,6 +188,7 @@ This scenario is validated through code review and unit testing of validation lo

- ✅ Artifact upload: < 1 minute

**Pass Criteria**:

- [ ] Average workflow time within limits
- [ ] No significant performance degradation vs. previous implementation
- [ ] No timeout failures

@@ -181,15 +200,18 @@ This scenario is validated through code review and unit testing of validation lo

**Objective**: Verify workflow handles concurrent executions without conflicts.

**Prerequisites**:

- Create multiple PRs simultaneously
- Trigger workflows on multiple branches

**Steps**:

1. Create 3-5 PRs from different feature branches
2. Wait for workflows to run concurrently
3. Monitor all workflow executions

**Expected Results**:

- ✅ All workflows complete successfully
- ✅ No resource conflicts or race conditions
- ✅ Correct image checked for each PR (`pr-XXX` tags)

@@ -197,6 +219,7 @@ This scenario is validated through code review and unit testing of validation lo

- ✅ Artifact names are unique (include tag)

**Pass Criteria**:

- [ ] All workflows succeed independently
- [ ] No cross-contamination of results
- [ ] Artifact names unique and correct

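One way to get unique per-tag artifact names, as required above, is to fold the sanitized tag into the name, since `/` and `:` are not safe in artifact names. A hypothetical naming helper, not the workflow's exact code:

```bash
# Derive a per-tag artifact name; replace '/' and ':' with '-'
# so concurrent PR runs never collide on the same artifact name.
artifact_name() {
  local tag="$1"
  echo "sbom-$(echo "$tag" | tr '/:' '--')"
}

artifact_name "ghcr.io/wikid82/charon:pr-123"
# prints sbom-ghcr.io-wikid82-charon-pr-123
```
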
@@ -208,11 +231,13 @@ This scenario is validated through code review and unit testing of validation lo

### Verify No Breaking Changes

**Test Areas**:

1. **Other Workflows**: Ensure docker-build.yml, codeql-analysis.yml, etc. still work
2. **Existing Releases**: Verify workflow runs successfully on existing release tags
3. **Backward Compatibility**: Old PRs can be re-run without issues

**Pass Criteria**:

- [ ] No regressions in other workflows
- [ ] Existing functionality preserved
- [ ] No unexpected failures

@@ -11,11 +11,13 @@ Manually verify that the CI workflow fixes work correctly in production, focusin

## Background

**What Was Fixed:**

1. Removed `branches` filter from `supply-chain-verify.yml` to enable `workflow_run` triggering on all branches
2. Added documentation to explain the GitHub Security warning (false positive)
3. Updated SECURITY.md with comprehensive security scanning documentation

**Expected Behavior:**

- Supply Chain Verification should now trigger via `workflow_run` after Docker Build completes on ANY branch
- Previous behavior: Only triggered via `pull_request` fallback (branch filter prevented workflow_run)

@@ -26,23 +28,27 @@ Manually verify that the CI workflow fixes work correctly in production, focusin

**Goal:** Verify `workflow_run` trigger works on feature branches after fix

**Steps:**

1. Create a small test commit on `feature/beta-release`
2. Push the commit
3. Monitor GitHub Actions workflow runs

**Expected Results:**

- ✅ Docker Build workflow triggers and completes successfully
- ✅ Supply Chain Verification triggers **via workflow_run event** (not pull_request)
- ✅ Supply Chain completes successfully
- ✅ GitHub Actions logs show event type is `workflow_run`

**How to Verify Event Type:**

```bash
gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusion
# Should show: "event": "workflow_run", "conclusion": "success"
```

**Potential Bugs to Watch For:**

- ❌ Supply Chain doesn't trigger at all
- ❌ Supply Chain triggers but fails
- ❌ Multiple simultaneous runs (race condition)

@@ -55,16 +61,19 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

**Goal:** Verify `pull_request` fallback trigger still works correctly

**Steps:**

1. With PR #461 open, push another small commit
2. Monitor GitHub Actions workflow runs

**Expected Results:**

- ✅ Docker Build triggers via `pull_request` event
- ✅ Supply Chain may trigger via BOTH `workflow_run` AND `pull_request` (race condition possible)
- ✅ If both trigger, both should complete successfully without conflict
- ✅ PR should show both workflow checks passing

**Potential Bugs to Watch For:**

- ❌ Duplicate runs causing conflicts
- ❌ Race condition causing failures
- ❌ PR checks showing "pending" indefinitely

@@ -77,16 +86,19 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

**Goal:** Verify fix doesn't break main branch behavior

**Steps:**

1. After PR #461 merges to main, monitor the merge commit
2. Check GitHub Actions runs

**Expected Results:**

- ✅ Docker Build runs on main
- ✅ Supply Chain triggers via `workflow_run`
- ✅ Both complete successfully
- ✅ Weekly scheduled runs continue to work

**Potential Bugs to Watch For:**

- ❌ Main branch workflows broken
- ❌ Weekly schedule interferes with workflow_run
- ❌ Permissions issues on main branch

@@ -98,16 +110,19 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

**Goal:** Verify Supply Chain doesn't trigger when Docker Build fails

**Steps:**

1. Intentionally break Docker Build (e.g., invalid Dockerfile syntax)
2. Push to a test branch
3. Monitor workflow behavior

**Expected Results:**

- ✅ Docker Build fails as expected
- ✅ Supply Chain **does NOT run** (the `workflow_run` trigger fires on `completed`, and the job guard requires the conclusion to be `success`)
- ✅ No cascading failures

**Potential Bugs to Watch For:**

- ❌ Supply Chain triggers on failed builds
- ❌ Error handling missing
- ❌ Workflow stuck in pending state

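The gating tested above has two layers: the `workflow_run` event fires when the upstream run completes (regardless of outcome), and a job-level condition filters on its conclusion. A reduced model of that decision (hypothetical; the actual guard lives in the workflow YAML):

```bash
# Model of the two-layer gate: the event must be a completed run,
# and the job guard additionally requires the run to have succeeded.
should_run_verify() {
  local status="$1" conclusion="$2"
  [ "$status" = "completed" ] && [ "$conclusion" = "success" ]
}

should_run_verify completed success && echo "verify runs"     # prints: verify runs
should_run_verify completed failure || echo "verify skipped"  # prints: verify skipped
```
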
@@ -119,17 +134,20 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

**Goal:** Verify manual trigger still works

**Steps:**

1. Go to GitHub Actions → Supply Chain Verification
2. Click "Run workflow"
3. Select `feature/beta-release` branch
4. Click "Run workflow"

**Expected Results:**

- ✅ Workflow starts via `workflow_dispatch` event
- ✅ Completes successfully
- ✅ SBOM and attestations generated

**Potential Bugs to Watch For:**

- ❌ Manual dispatch broken
- ❌ Branch selector doesn't work
- ❌ Workflow fails with "branch not found"

@@ -141,15 +159,18 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

**Goal:** Verify scheduled trigger still works

**Steps:**

1. Wait for next Monday 00:00 UTC
2. Check GitHub Actions for scheduled run

**Expected Results:**

- ✅ Workflow triggers via `schedule` event
- ✅ Runs on main branch
- ✅ Completes successfully

**Potential Bugs to Watch For:**

- ❌ Schedule doesn't fire
- ❌ Wrong branch selected
- ❌ Interference with other workflows

@@ -159,16 +180,19 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

## Edge Cases to Test

### Edge Case 1: Rapid Pushes (Rate Limiting)

**Test:** Push 3-5 commits rapidly to feature branch
**Expected:** All Docker Builds run, Supply Chain may queue or skip redundant runs
**Watch For:** Workflow queue overflow, cancellations, failures

### Edge Case 2: Long-Running Docker Build

**Test:** Create a commit that makes Docker Build take >10 minutes
**Expected:** Supply Chain waits for completion before triggering
**Watch For:** Timeouts, abandoned runs, state corruption

### Edge Case 3: Branch Deletion During Run

**Test:** Delete feature branch while workflows are running
**Expected:** Workflows complete or cancel gracefully
**Watch For:** Orphaned runs, resource leaks, errors

@@ -187,21 +211,25 @@ gh run list --workflow="supply-chain-verify.yml" --limit 1 --json event,conclusi

## Bug Severity Guidelines

**CRITICAL** (Block Merge):

- Supply Chain doesn't run at all
- Cascading failures breaking other workflows
- Security vulnerabilities introduced

**HIGH** (Fix Before Release):

- Race conditions causing frequent failures
- Resource leaks or orphaned workflows
- Error handling missing

**MEDIUM** (Fix in Future PR):

- Duplicate runs (but both succeed)
- Inconsistent behavior (works sometimes)
- Minor UX issues

**LOW** (Document as Known Issue):

- Cosmetic issues in logs
- Non-breaking edge cases
- Timing inconsistencies

@@ -29,11 +29,13 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Objective**: Verify normal PR build succeeds with image artifact save

**Steps**:

1. Create a test PR with a minor change (e.g., update README.md)
2. Wait for `docker-build.yml` workflow to trigger
3. Monitor the workflow execution in GitHub Actions

**Expected Results**:

- [ ] ✅ `build-and-push` job completes successfully
- [ ] ✅ "Save Docker Image as Artifact" step completes without errors
- [ ] ✅ Step output shows: "🔍 Detected image tag: ghcr.io/wikid82/charon:pr-XXX"

@@ -52,10 +54,12 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Objective**: Verify defensive validation catches missing or invalid tags

**Steps**:

1. Review the "Save Docker Image as Artifact" step logs
2. Check for validation output

**Expected Results**:

- [ ] ✅ Step logs show: "🔍 Detected image tag: ghcr.io/wikid82/charon:pr-XXX"
- [ ] ✅ No error messages about missing tags
- [ ] ✅ Image inspection succeeds (no "not found locally" errors)

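A defensive check like the one described can be modeled as: take the first tag emitted by the metadata step, and fail fast with a clear message if it is empty. An illustrative stand-in, not the actual workflow step:

```bash
# Pick the first tag from a newline-separated list and validate it,
# failing fast instead of passing an empty reference to docker save.
first_image_tag() {
  local tags="$1" tag
  tag="$(printf '%s\n' "$tags" | head -n 1)"
  if [ -z "$tag" ]; then
    echo "ERROR: metadata step produced no image tag" >&2
    return 1
  fi
  echo "Detected image tag: $tag"
}

first_image_tag "ghcr.io/wikid82/charon:pr-123
ghcr.io/wikid82/charon:latest"
# prints: Detected image tag: ghcr.io/wikid82/charon:pr-123
```
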
@@ -69,12 +73,14 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Objective**: Verify downstream job receives and processes the artifact correctly

**Steps**:

1. Wait for `verify-supply-chain-pr` job to start
2. Check "Download Image Artifact" step
3. Check "Load Docker Image" step
4. Check "Verify Loaded Image" step

**Expected Results**:

- [ ] ✅ Artifact downloads successfully
- [ ] ✅ Image loads without errors
- [ ] ✅ Verification step confirms image exists: "✅ Image verified: ghcr.io/wikid82/charon:pr-XXX"

@@ -93,6 +99,7 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Note**: This scenario is difficult to test without artificially breaking the build. Monitor for this in production if a natural failure occurs.

**Expected Behavior** (if error occurs):

- [ ] Step fails fast with clear diagnostics
- [ ] Error message shows exact issue (missing tag, image not found, etc.)
- [ ] Available images are listed for debugging

@@ -107,11 +114,13 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

### Check Previous Failure Cases

**Steps**:

1. Review previous failed PR builds (before fix)
2. Note the exact error messages
3. Confirm those errors no longer occur

**Expected Results**:

- [ ] ✅ No "reference does not exist" errors
- [ ] ✅ No "image not found" errors during save
- [ ] ✅ No manual tag reconstruction mismatches

@@ -125,12 +134,14 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Objective**: Ensure fix does not introduce performance degradation

**Metrics to Monitor**:

- [ ] Build time (build-and-push job duration)
- [ ] Artifact save time
- [ ] Artifact upload time
- [ ] Total PR workflow duration

**Expected Results**:

- Build time: ~10-15 minutes (no significant change)
- Artifact save: <30 seconds
- Artifact upload: <1 minute

@@ -170,26 +181,32 @@ Verify that the CI Docker build fix resolves the "reference does not exist" erro

**Tester**: [Name]

### Scenario 1: Standard PR Build

- Status: [ ] PASS / [ ] FAIL
- Notes:

### Scenario 2: Metadata Tag Validation

- Status: [ ] PASS / [ ] FAIL
- Notes:

### Scenario 3: Supply Chain Verification Integration

- Status: [ ] PASS / [ ] FAIL
- Notes:

### Scenario 4: Error Handling

- Status: [ ] PASS / [ ] FAIL / [ ] N/A
- Notes:

### Regression Testing

- Status: [ ] PASS / [ ] FAIL
- Notes:

### Performance Validation

- Status: [ ] PASS / [ ] FAIL
- Build time: X minutes
- Artifact save: X seconds