chore: clean .gitignore cache
Deleted file: `.github/skills/README.md` (vendored, 408 lines removed)
# Agent Skills - Charon Project

This directory contains [Agent Skills](https://agentskills.io) following the agentskills.io specification for AI-discoverable, executable tasks.

## Overview

Agent Skills are self-documenting, AI-discoverable task definitions that combine YAML frontmatter (metadata) with Markdown documentation. Each skill represents a specific task or workflow that can be executed by both humans and AI assistants.

**Location**: `.github/skills/` is the [VS Code Copilot standard location](https://code.visualstudio.com/docs/copilot/customization/agent-skills) for Agent Skills

**Format**: Skills follow the [agentskills.io specification](https://agentskills.io/specification) for structure and metadata

## Directory Structure

```
.github/skills/
├── README.md                        # This file
├── scripts/                         # Shared infrastructure scripts
│   ├── skill-runner.sh              # Universal skill executor
│   ├── validate-skills.py           # Frontmatter validation tool
│   ├── _logging_helpers.sh          # Logging utilities
│   ├── _error_handling_helpers.sh   # Error handling utilities
│   └── _environment_helpers.sh      # Environment validation
├── examples/                        # Example skill templates
└── {skill-name}/                    # Individual skill directories
    ├── SKILL.md                     # Skill definition and documentation
    └── scripts/
        └── run.sh                   # Skill execution script
```

## Available Skills

### Testing Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [test-backend-coverage](./test-backend-coverage.SKILL.md) | test | Run Go backend tests with coverage analysis | ✅ Active |
| [test-backend-unit](./test-backend-unit.SKILL.md) | test | Run fast Go unit tests without coverage | ✅ Active |
| [test-frontend-coverage](./test-frontend-coverage.SKILL.md) | test | Run frontend tests with coverage reporting | ✅ Active |
| [test-frontend-unit](./test-frontend-unit.SKILL.md) | test | Run fast frontend unit tests without coverage | ✅ Active |
| [test-e2e-playwright](./test-e2e-playwright.SKILL.md) | test | Run Playwright E2E tests with browser selection | ✅ Active |
| [test-e2e-playwright-debug](./test-e2e-playwright-debug.SKILL.md) | test | Run E2E tests in headed/debug mode for troubleshooting | ✅ Active |
| [test-e2e-playwright-coverage](./test-e2e-playwright-coverage.SKILL.md) | test | Run E2E tests with coverage collection | ✅ Active |

### Integration Testing Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [integration-test-all](./integration-test-all.SKILL.md) | integration | Run all integration tests in sequence | ✅ Active |
| [integration-test-coraza](./integration-test-coraza.SKILL.md) | integration | Test Coraza WAF integration | ✅ Active |
| [integration-test-crowdsec](./integration-test-crowdsec.SKILL.md) | integration | Test CrowdSec bouncer integration | ✅ Active |
| [integration-test-crowdsec-decisions](./integration-test-crowdsec-decisions.SKILL.md) | integration | Test CrowdSec decisions API | ✅ Active |
| [integration-test-crowdsec-startup](./integration-test-crowdsec-startup.SKILL.md) | integration | Test CrowdSec startup sequence | ✅ Active |

### Security Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [security-scan-trivy](./security-scan-trivy.SKILL.md) | security | Run Trivy vulnerability scanner | ✅ Active |
| [security-scan-go-vuln](./security-scan-go-vuln.SKILL.md) | security | Run Go vulnerability check | ✅ Active |

### QA Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [qa-precommit-all](./qa-precommit-all.SKILL.md) | qa | Run all pre-commit hooks on entire codebase | ✅ Active |

### Utility Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [utility-version-check](./utility-version-check.SKILL.md) | utility | Validate version matches git tag | ✅ Active |
| [utility-clear-go-cache](./utility-clear-go-cache.SKILL.md) | utility | Clear Go build and module caches | ✅ Active |
| [utility-bump-beta](./utility-bump-beta.SKILL.md) | utility | Increment beta version number | ✅ Active |
| [utility-db-recovery](./utility-db-recovery.SKILL.md) | utility | Database integrity check and recovery | ✅ Active |

### Docker Skills

| Skill Name | Category | Description | Status |
|------------|----------|-------------|--------|
| [docker-start-dev](./docker-start-dev.SKILL.md) | docker | Start development Docker Compose environment | ✅ Active |
| [docker-stop-dev](./docker-stop-dev.SKILL.md) | docker | Stop development Docker Compose environment | ✅ Active |
| [docker-rebuild-e2e](./docker-rebuild-e2e.SKILL.md) | docker | Rebuild Docker image and restart E2E Playwright container | ✅ Active |
| [docker-prune](./docker-prune.SKILL.md) | docker | Clean up unused Docker resources | ✅ Active |

## Usage

### Running Skills

Use the universal skill runner to execute any skill:

```bash
# From project root
.github/skills/scripts/skill-runner.sh <skill-name> [args...]

# Example: Run backend coverage tests
.github/skills/scripts/skill-runner.sh test-backend-coverage
```

### From VS Code Tasks

Skills are integrated with VS Code tasks (`.vscode/tasks.json`):

1. Open the Command Palette (`Ctrl+Shift+P` or `Cmd+Shift+P`)
2. Select `Tasks: Run Task`
3. Choose the task (e.g., `Test: Backend with Coverage`)

### In CI/CD Workflows

Reference skills in GitHub Actions:

```yaml
- name: Run Backend Tests with Coverage
  run: .github/skills/scripts/skill-runner.sh test-backend-coverage
```

## Validation

### Validate a Single Skill

```bash
python3 .github/skills/scripts/validate-skills.py --single .github/skills/test-backend-coverage/SKILL.md
```

### Validate All Skills

```bash
python3 .github/skills/scripts/validate-skills.py
```

### Validation Checks

The validator ensures:

- ✅ Required frontmatter fields are present
- ✅ Field formats are correct (name, version, description)
- ✅ Tags meet minimum/maximum requirements
- ✅ Compatibility information is valid
- ✅ Custom metadata follows project conventions

## Creating New Skills

### 1. Create Skill Directory Structure

```bash
mkdir -p .github/skills/{skill-name}/scripts
```

### 2. Create SKILL.md

Start with the template structure:

````markdown
---
# agentskills.io specification v1.0
name: "skill-name"
version: "1.0.0"
description: "Brief description (max 120 chars)"
author: "Charon Project"
license: "MIT"
tags:
  - "tag1"
  - "tag2"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "tool"
    version: ">=1.0"
    optional: false
metadata:
  category: "category-name"
  execution_time: "short|medium|long"
  risk_level: "low|medium|high"
  ci_cd_safe: true|false
---

# Skill Name

## Overview

Brief description of what this skill does.

## Prerequisites

- List prerequisites

## Usage

```bash
.github/skills/scripts/skill-runner.sh skill-name
```

## Examples

### Example 1: Basic Usage

```bash
# Example command
```

---

**Last Updated**: YYYY-MM-DD
**Maintained by**: Charon Project
````

### 3. Create Execution Script

Create `scripts/run.sh` with proper structure:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../../scripts" && pwd)"

source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
# Add validation calls here

# Execute skill logic
log_step "EXECUTION" "Running skill"
cd "${PROJECT_ROOT}"

# Your skill logic here

log_success "Skill completed successfully"
```

### 4. Set Permissions

```bash
chmod +x .github/skills/{skill-name}/scripts/run.sh
```

### 5. Validate

```bash
python3 .github/skills/scripts/validate-skills.py --single .github/skills/{skill-name}/SKILL.md
```

### 6. Test

```bash
.github/skills/scripts/skill-runner.sh {skill-name}
```

## Naming Conventions

- **Skill Names**: `{category}-{feature}-{variant}` (kebab-case)
- **Categories**: `test`, `integration-test`, `security`, `qa`, `build`, `utility`, `docker`
- **Examples**:
  - `test-backend-coverage`
  - `integration-test-crowdsec`
  - `security-scan-trivy`
  - `utility-version-check`
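The convention can be checked mechanically. A minimal sketch — `is_valid_skill_name` is a hypothetical helper for illustration, not part of the repository's validator:

```bash
#!/usr/bin/env bash
# Hypothetical check: a skill name must start with a known category,
# followed by one or more kebab-case segments.
is_valid_skill_name() {
    local name="$1"
    local categories="test|integration-test|security|qa|build|utility|docker"
    [[ "${name}" =~ ^(${categories})(-[a-z0-9]+)+$ ]]
}

is_valid_skill_name "test-backend-coverage" && echo "valid"      # → valid
is_valid_skill_name "TestBackendCoverage" || echo "invalid"      # → invalid
```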
## Best Practices

### Documentation

- Keep SKILL.md under 500 lines
- Use progressive disclosure (link to extended docs for complex topics)
- Include practical examples
- Document all prerequisites and environment variables

### Scripts

- Always source helper scripts for consistent logging and error handling
- Validate the environment before execution
- Use `set -euo pipefail` for robust error handling
- Make scripts idempotent when possible
- Clean up resources on exit

### Metadata

- Use accurate `execution_time` values for scheduling
- Set `ci_cd_safe: false` for skills requiring human oversight
- Mark `idempotent: true` only if truly safe to run multiple times
- Include all required dependencies in `requirements`

### Error Handling

- Use helper functions (`log_error`, `error_exit`, `check_command_exists`)
- Provide clear error messages with remediation steps
- Return appropriate exit codes (0 = success, non-zero = failure)

## Helper Scripts Reference

### Logging Helpers (`_logging_helpers.sh`)

```bash
log_info "message"       # Informational message
log_success "message"    # Success message (green)
log_warning "message"    # Warning message (yellow)
log_error "message"      # Error message (red)
log_debug "message"      # Debug message (only if DEBUG=1)
log_step "STEP" "msg"    # Step header
log_command "cmd"        # Log command before executing
```

### Error Handling Helpers (`_error_handling_helpers.sh`)

```bash
error_exit "message" [exit_code]          # Print error and exit
check_command_exists "cmd" ["message"]    # Verify command exists
check_file_exists "file" ["message"]      # Verify file exists
check_dir_exists "dir" ["message"]        # Verify directory exists
run_with_retry max_attempts delay cmd...  # Retry command with backoff
trap_error [script_name]                  # Set up error trapping
cleanup_on_exit cleanup_func              # Register cleanup function
```
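To illustrate the retry helper's contract, here is a minimal sketch. This is an assumed shape, not the repository's implementation — the real `run_with_retry` in `_error_handling_helpers.sh` may differ (for example by using exponential backoff):

```bash
#!/usr/bin/env bash
# Sketch of a retry helper: run a command up to max_attempts times,
# sleeping `delay` seconds between failed attempts.
run_with_retry() {
    local max_attempts="$1" delay="$2"
    shift 2
    local attempt=1
    while true; do
        if "$@"; then
            return 0
        fi
        if (( attempt >= max_attempts )); then
            echo "Command failed after ${max_attempts} attempts: $*" >&2
            return 1
        fi
        attempt=$((attempt + 1))
        sleep "${delay}"
    done
}
```

Typical usage: `run_with_retry 3 5 curl -sf http://localhost:8080/api/v1/health`.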
### Environment Helpers (`_environment_helpers.sh`)

```bash
validate_go_environment ["min_version"]       # Check Go installation
validate_python_environment ["min_version"]   # Check Python installation
validate_node_environment ["min_version"]     # Check Node.js installation
validate_docker_environment                   # Check Docker installation
set_default_env "VAR" "default_value"         # Set env var with default
validate_project_structure file1 file2...     # Check required files exist
get_project_root ["marker_file"]              # Find project root directory
```
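A helper like `get_project_root` is commonly implemented by walking up the directory tree until a marker file is found. The sketch below assumes that approach and a `go.mod` default marker; the actual helper may behave differently:

```bash
#!/usr/bin/env bash
# Sketch: walk upward from the current directory until a marker file
# (default "go.mod" -- an assumed default) is found, then print that
# directory. Returns non-zero if no marker is found before "/".
get_project_root() {
    local marker="${1:-go.mod}"
    local dir
    dir="$(pwd)"
    while [[ "${dir}" != "/" ]]; do
        if [[ -e "${dir}/${marker}" ]]; then
            echo "${dir}"
            return 0
        fi
        dir="$(dirname "${dir}")"
    done
    return 1
}
```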
## Troubleshooting

### Skill not found
```
Error: Skill not found: skill-name
```
**Solution**: Verify the skill directory exists in `.github/skills/` and contains a `SKILL.md` file.

### Skill script not executable
```
Error: Skill execution script is not executable
```
**Solution**: Run `chmod +x .github/skills/{skill-name}/scripts/run.sh`.

### Validation errors
```
[ERROR] skill.SKILL.md :: description: Must be 120 characters or less
```
**Solution**: Fix the frontmatter field according to the error message and re-validate.

### Command not found in skill
```
Error: go is not installed or not in PATH
```
**Solution**: Install the required dependency or ensure it's in your PATH.

## Integration Points

### VS Code Tasks

Skills are integrated in `.vscode/tasks.json`:

```json
{
  "label": "Test: Backend with Coverage",
  "type": "shell",
  "command": ".github/skills/scripts/skill-runner.sh test-backend-coverage",
  "group": "test"
}
```

### GitHub Actions

Skills are referenced in `.github/workflows/`:

```yaml
- name: Run Backend Tests with Coverage
  run: .github/skills/scripts/skill-runner.sh test-backend-coverage
```

### Pre-commit Hooks

Skills can be used in `.pre-commit-config.yaml`:

```yaml
repos:
  - repo: local
    hooks:
      - id: backend-coverage
        name: Backend Coverage Check
        entry: .github/skills/scripts/skill-runner.sh test-backend-coverage
        language: system
```

## Resources

- [agentskills.io Specification](https://agentskills.io/specification)
- [VS Code Copilot Agent Skills](https://code.visualstudio.com/docs/copilot/customization/agent-skills)
- [Project Documentation](../../docs/)
- [Contributing Guide](../../CONTRIBUTING.md)

## Support

For issues, questions, or contributions:

1. Check existing [GitHub Issues](https://github.com/Wikid82/charon/issues)
2. Review [CONTRIBUTING.md](../../CONTRIBUTING.md)
3. Create a new issue if needed

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**License**: MIT
Deleted file: `.github/skills/docker-prune-scripts/run.sh` (vendored, 14 lines removed)
```bash
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Docker: Prune Unused Resources - Execution Script
# ==============================================================================
# This script removes unused Docker resources to free up disk space.
#
# Usage: ./run.sh
# Exit codes: 0 = success, non-zero = failure
# ==============================================================================

# Remove unused Docker resources (containers, images, networks, build cache)
exec docker system prune -f
```
Deleted file: `.github/skills/docker-prune.SKILL.md` (vendored, 293 lines removed)
---
name: "docker-prune"
version: "1.0.0"
description: "Removes unused Docker resources including stopped containers, dangling images, and unused networks"
author: "Charon Project"
license: "MIT"
tags:
  - "docker"
  - "cleanup"
  - "maintenance"
  - "disk-space"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
environment_variables: []
parameters: []
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, non-zero on failure"
  - name: "reclaimed_space"
    type: "string"
    description: "Amount of disk space freed"
metadata:
  category: "docker"
  subcategory: "maintenance"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: false
  requires_network: false
  idempotent: true
---

# Docker: Prune Unused Resources

## Overview

Removes unused Docker resources to free up disk space and clean up the Docker environment. This includes stopped containers, dangling images, unused networks, and build cache. The operation is safe and only removes resources not currently in use.

## Prerequisites

- Docker Engine installed and running
- Sufficient permissions to run Docker commands
- No critical containers running (verify first)

## Usage

### Basic Usage

```bash
.github/skills/docker-prune-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh docker-prune
```

### Via VS Code Task

Use the task: **Docker: Prune Unused Resources**

## Parameters

This skill uses Docker's default prune behavior (safe mode). No parameters are accepted.

## Environment Variables

This skill requires no environment variables.

## Outputs

- **Success Exit Code**: 0
- **Error Exit Codes**: Non-zero on failure
- **Console Output**: List of removed resources and space reclaimed

### Output Example

```
Deleted Containers:
f8d1234567890abcdef1234567890abcdef1234567890abcdef1234567890ab

Deleted Networks:
charon-test_default
old-network_default

Deleted Images:
untagged: myimage@sha256:abcdef1234567890...
deleted: sha256:1234567890abcdef...

Deleted build cache objects:
abcd1234
efgh5678

Total reclaimed space: 2.5GB
```
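The `reclaimed_space` value can be recovered from the prune command's console output. A sketch using sample text in the shape shown above — the parsing approach is illustrative, not part of the skill itself:

```bash
#!/usr/bin/env bash
# Illustrative parsing: pull the reclaimed-space figure out of captured
# `docker system prune` output (a sample string stands in for real output).
prune_output="Deleted Containers:
f8d1234567890abcdef

Total reclaimed space: 2.5GB"

reclaimed=$(printf '%s\n' "${prune_output}" | awk -F': ' '/^Total reclaimed space/ {print $2}')
echo "Reclaimed: ${reclaimed}"   # → Reclaimed: 2.5GB
```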
## What Gets Removed

The `docker system prune -f` command removes:

1. **Stopped Containers**: Containers not currently running
2. **Dangling Images**: Images with no tag (intermediate layers)
3. **Unused Networks**: Networks with no connected containers
4. **Build Cache**: Cached layers from image builds

## What Gets Preserved

This command **DOES NOT** remove:

- **Running Containers**: Active containers are untouched
- **Tagged Images**: Images with tags are preserved
- **Volumes**: Data volumes are never removed
- **Used Networks**: Networks with connected containers
- **Active Build Cache**: Cache for recent builds

## Safety Features

- **Force Flag (`-f`)**: Skips the confirmation prompt (safe for automation)
- **Safe by Default**: Only removes truly unused resources
- **Volume Protection**: Volumes require a separate `docker volume prune` command
- **Running Container Protection**: Cannot remove active containers

## Examples

### Example 1: Regular Cleanup

```bash
# Clean up Docker environment
.github/skills/docker-prune-scripts/run.sh
```

### Example 2: Check Disk Usage Before/After

```bash
# Check current usage
docker system df

# Run cleanup
.github/skills/docker-prune-scripts/run.sh

# Verify freed space
docker system df
```

### Example 3: Aggressive Cleanup (Manual)

```bash
# Standard prune
.github/skills/docker-prune-scripts/run.sh

# Additionally prune volumes (WARNING: data loss)
docker volume prune -f

# Remove all unused images (not just dangling)
docker image prune -a -f
```

## Disk Space Analysis

Check Docker disk usage:

```bash
# Summary view
docker system df

# Detailed view
docker system df -v
```

The output shows:

- **Images**: Total size of cached images
- **Containers**: Size of container writable layers
- **Local Volumes**: Size of data volumes
- **Build Cache**: Size of cached build layers

## When to Use This Skill

Use this skill when:

- Disk space is running low
- After development cycles (many builds)
- After running integration tests
- Before a system backup/snapshot
- As part of regular maintenance
- After Docker image experiments

## Frequency Recommendations

- **Daily**: For active development machines
- **Weekly**: For CI/CD build servers
- **Monthly**: For production servers (cautiously)
- **On-Demand**: When disk space is low

## Error Handling

Common issues and solutions:

### Permission Denied
```
Error: permission denied
```
Solution: Add your user to the `docker` group, or use `sudo`.

### Daemon Not Running
```
Error: Cannot connect to Docker daemon
```
Solution: Start the Docker service.

### Resource in Use
```
Error: resource is in use
```
This is normal; only unused resources are removed.

## Advanced Cleanup Options

For more aggressive cleanup:

### Remove All Unused Images

```bash
docker image prune -a -f
```

### Remove Unused Volumes (DANGER: Data Loss)

```bash
docker volume prune -f
```

### Complete System Prune (DANGER)

```bash
docker system prune -a --volumes -f
```

## Related Skills

- [docker-stop-dev](./docker-stop-dev.SKILL.md) - Stop containers before cleanup
- [docker-start-dev](./docker-start-dev.SKILL.md) - Restart after cleanup
- [utility-clear-go-cache](./utility-clear-go-cache.SKILL.md) - Clear Go build cache

## Notes

- **Idempotent**: Safe to run multiple times
- **Low Risk**: Only removes unused resources
- **No Data Loss**: Volumes are protected by default
- **Fast Execution**: Typically completes in seconds
- **No Network Required**: Local operation only
- **Not CI/CD Safe**: Can interfere with parallel builds
- **Build Cache**: The next build may be slower if the cache is cleared

## Disk Space Recovery

Typical space recovery by resource type:

- **Stopped Containers**: 10-100 MB each
- **Dangling Images**: 100 MB - 2 GB total
- **Build Cache**: 1-10 GB (if many builds)
- **Unused Networks**: Negligible space

## Troubleshooting

### No Space Freed

- Check for running containers: `docker ps`
- Verify images are untagged: `docker images -f "dangling=true"`
- Check volume usage: `docker volume ls`

### Space Still Low After Prune

- Use aggressive pruning (see Advanced Cleanup Options)
- Check non-Docker disk usage: `df -h`
- Consider increasing the disk allocation

### Container Won't Be Removed

- Check if the container is running: `docker ps`
- Stop the container first: `docker stop container_name`
- Force removal: `docker rm -f container_name`

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Docker Command**: `docker system prune -f`
Deleted file: `.github/skills/docker-rebuild-e2e-scripts/run.sh` (vendored, 314 lines removed; diff truncated below)
```bash
#!/usr/bin/env bash
# Docker: Rebuild E2E Environment - Execution Script
#
# Rebuilds the Docker image and restarts the Playwright E2E testing
# environment with fresh code and optionally clean state.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Docker compose file for Playwright E2E tests
COMPOSE_FILE=".docker/compose/docker-compose.playwright.yml"
CONTAINER_NAME="charon-playwright"
IMAGE_NAME="charon:local"
HEALTH_TIMEOUT=60
HEALTH_INTERVAL=5

# Default parameter values
NO_CACHE=false
CLEAN=false
PROFILE=""

# Parse command-line arguments
parse_arguments() {
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --no-cache)
                NO_CACHE=true
                shift
                ;;
            --clean)
                CLEAN=true
                shift
                ;;
            --profile=*)
                PROFILE="${1#*=}"
                shift
                ;;
            --profile)
                PROFILE="${2:-}"
                shift 2
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            *)
                log_warning "Unknown argument: $1"
                shift
                ;;
        esac
    done
}

# Show help message
show_help() {
    cat << EOF
Usage: run.sh [OPTIONS]

Rebuild Docker image and restart E2E Playwright container.

Options:
  --no-cache         Force rebuild without Docker cache
  --clean            Remove test volumes for fresh state
  --profile=PROFILE  Docker Compose profile to enable
                     (security-tests, notification-tests)
  -h, --help         Show this help message

Environment Variables:
  DOCKER_NO_CACHE      Force rebuild without cache (default: false)
  SKIP_VOLUME_CLEANUP  Preserve test data volumes (default: false)

Examples:
  run.sh                           # Standard rebuild
  run.sh --no-cache                # Force complete rebuild
  run.sh --clean                   # Rebuild with fresh volumes
  run.sh --profile=security-tests  # Enable CrowdSec for testing
  run.sh --no-cache --clean        # Complete fresh rebuild
EOF
}

# Stop existing containers
stop_containers() {
    log_step "STOP" "Stopping existing E2E containers"

    local compose_cmd="docker compose -f ${COMPOSE_FILE}"

    # Add profile if specified
    if [[ -n "${PROFILE}" ]]; then
        compose_cmd="${compose_cmd} --profile ${PROFILE}"
    fi

    # Stop and remove containers
    if ${compose_cmd} ps -q 2>/dev/null | grep -q .; then
        log_info "Stopping containers..."
        ${compose_cmd} down --remove-orphans || true
    else
        log_info "No running containers to stop"
    fi
}

# Clean volumes if requested
clean_volumes() {
    if [[ "${CLEAN}" != "true" ]]; then
        return 0
    fi

    if [[ "${SKIP_VOLUME_CLEANUP:-false}" == "true" ]]; then
        log_warning "Skipping volume cleanup (SKIP_VOLUME_CLEANUP=true)"
        return 0
    fi

    log_step "CLEAN" "Removing test volumes"

    local volumes=(
        "playwright_data"
        "playwright_caddy_data"
        "playwright_caddy_config"
        "playwright_crowdsec_data"
        "playwright_crowdsec_config"
    )

    for vol in "${volumes[@]}"; do
        # Try both prefixed and unprefixed volume names
        for prefix in "compose_" ""; do
            local full_name="${prefix}${vol}"
            if docker volume inspect "${full_name}" &>/dev/null; then
                log_info "Removing volume: ${full_name}"
                docker volume rm "${full_name}" || true
            fi
        done
    done

    log_success "Volumes cleaned"
}

# Build Docker image
build_image() {
    log_step "BUILD" "Building Docker image: ${IMAGE_NAME}"

    local build_args=("-t" "${IMAGE_NAME}" ".")

    if [[ "${NO_CACHE}" == "true" ]] || [[ "${DOCKER_NO_CACHE:-false}" == "true" ]]; then
        log_info "Building with --no-cache"
        build_args=("--no-cache" "${build_args[@]}")
    fi

    log_command "docker build ${build_args[*]}"

    if ! docker build "${build_args[@]}"; then
        error_exit "Docker build failed"
    fi

    log_success "Image built successfully: ${IMAGE_NAME}"
}

# Start containers
start_containers() {
    log_step "START" "Starting E2E containers"

    local compose_cmd="docker compose -f ${COMPOSE_FILE}"

    # Add profile if specified
    if [[ -n "${PROFILE}" ]]; then
        log_info "Enabling profile: ${PROFILE}"
        compose_cmd="${compose_cmd} --profile ${PROFILE}"
    fi

    log_command "${compose_cmd} up -d"

    if ! ${compose_cmd} up -d; then
        error_exit "Failed to start containers"
    fi

    log_success "Containers started"
}

# Wait for container health
wait_for_health() {
    log_step "HEALTH" "Waiting for container to be healthy"

    local elapsed=0
    local healthy=false

    while [[ ${elapsed} -lt ${HEALTH_TIMEOUT} ]]; do
        local health_status
        health_status=$(docker inspect --format='{{.State.Health.Status}}' "${CONTAINER_NAME}" 2>/dev/null || echo "unknown")

        case "${health_status}" in
            healthy)
                healthy=true
                break
                ;;
            unhealthy)
                log_error "Container is unhealthy"
                docker logs "${CONTAINER_NAME}" --tail 20
                error_exit "Container health check failed"
                ;;
            starting)
                log_info "Health status: starting (${elapsed}s/${HEALTH_TIMEOUT}s)"
                ;;
            *)
                log_info "Health status: ${health_status} (${elapsed}s/${HEALTH_TIMEOUT}s)"
                ;;
        esac

        sleep "${HEALTH_INTERVAL}"
        elapsed=$((elapsed + HEALTH_INTERVAL))
    done

    if [[ "${healthy}" != "true" ]]; then
        log_error "Container did not become healthy in ${HEALTH_TIMEOUT}s"
        docker logs "${CONTAINER_NAME}" --tail 50
        error_exit "Health check timeout"
    fi

    log_success "Container is healthy"
}

# Verify environment
verify_environment() {
    log_step "VERIFY" "Verifying E2E environment"

    # Check container is running
    if ! docker ps --filter "name=${CONTAINER_NAME}" --format "{{.Names}}" | grep -q "${CONTAINER_NAME}"; then
        error_exit "Container ${CONTAINER_NAME} is not running"
    fi

    # Test health endpoint
    log_info "Testing health endpoint..."
    if curl -sf http://localhost:8080/api/v1/health &>/dev/null; then
        log_success "Health endpoint responding"
    else
        log_warning "Health endpoint not responding (may need more time)"
    fi

    # Show container status
    log_info "Container status:"
    docker ps --filter "name=charon-playwright" --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
}

# Show summary
show_summary() {
    log_step "SUMMARY" "E2E environment ready"

    echo ""
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo " E2E Environment Ready"
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
    echo ""
    echo " Application URL: http://localhost:8080"
    echo " Health Check:    http://localhost:8080/api/v1/health"
    echo " Container:       ${CONTAINER_NAME}"
    echo ""
    echo " Run E2E tests:"
    echo "   .github/skills/scripts/skill-runner.sh test-e2e-playwright"
    echo ""
    echo " Run in debug mode:"
    echo "   .github/skills/scripts/skill-runner.sh test-e2e-playwright-debug"
    echo ""
    echo " View logs:"
    echo "   docker logs ${CONTAINER_NAME} -f"
    echo ""
    echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
}

# Main execution
main() {
    parse_arguments "$@"

    # Validate environment
```
||||
log_step "ENVIRONMENT" "Validating prerequisites"
|
||||
validate_docker_environment || error_exit "Docker is not available"
|
||||
check_command_exists "docker" "Docker is required"
|
||||
|
||||
# Validate project structure
|
||||
log_step "VALIDATION" "Checking project structure"
|
||||
cd "${PROJECT_ROOT}"
|
||||
check_file_exists "Dockerfile" "Dockerfile is required"
|
||||
check_file_exists "${COMPOSE_FILE}" "Playwright compose file is required"
|
||||
|
||||
# Log configuration
|
||||
log_step "CONFIG" "Rebuild configuration"
|
||||
log_info "No cache: ${NO_CACHE}"
|
||||
log_info "Clean volumes: ${CLEAN}"
|
||||
log_info "Profile: ${PROFILE:-<none>}"
|
||||
log_info "Compose file: ${COMPOSE_FILE}"
|
||||
|
||||
# Execute rebuild steps
|
||||
stop_containers
|
||||
clean_volumes
|
||||
build_image
|
||||
start_containers
|
||||
wait_for_health
|
||||
verify_environment
|
||||
show_summary
|
||||
|
||||
log_success "E2E environment rebuild complete"
|
||||
}
|
||||
|
||||
# Run main with all arguments
|
||||
main "$@"
|
||||
300
.github/skills/docker-rebuild-e2e.SKILL.md
vendored
@@ -1,300 +0,0 @@
---
# agentskills.io specification v1.0
name: "docker-rebuild-e2e"
version: "1.0.0"
description: "Rebuild Docker image and restart E2E Playwright container with fresh code and clean state"
author: "Charon Project"
license: "MIT"
tags:
  - "docker"
  - "e2e"
  - "playwright"
  - "rebuild"
  - "testing"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "docker-compose"
    version: ">=2.0"
    optional: false
environment_variables:
  - name: "DOCKER_NO_CACHE"
    description: "Set to 'true' to force a complete rebuild without cache"
    default: "false"
    required: false
  - name: "SKIP_VOLUME_CLEANUP"
    description: "Set to 'true' to preserve test data volumes"
    default: "false"
    required: false
parameters:
  - name: "no-cache"
    type: "boolean"
    description: "Force rebuild without Docker cache"
    default: "false"
    required: false
  - name: "clean"
    type: "boolean"
    description: "Remove test volumes for a completely fresh state"
    default: "false"
    required: false
  - name: "profile"
    type: "string"
    description: "Docker Compose profile to enable (security-tests, notification-tests)"
    default: ""
    required: false
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, non-zero on failure"
metadata:
  category: "docker"
  subcategory: "e2e"
  execution_time: "long"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Docker: Rebuild E2E Environment

## Overview

Rebuilds the Charon Docker image and restarts the Playwright E2E testing environment with fresh code. This skill handles the complete lifecycle: stopping existing containers, optionally cleaning volumes, rebuilding the image, and starting fresh containers with health check verification.

**Use this skill when:**
- You've made code changes and need to verify them with E2E tests
- E2E tests are failing due to stale container state
- You need a clean slate for debugging
- The container is in an inconsistent state

## Prerequisites

- Docker Engine installed and running
- Docker Compose V2 installed
- Dockerfile in repository root
- `.docker/compose/docker-compose.playwright.yml` file
- Network access for pulling base images (if needed)
- Sufficient disk space for image rebuild

## Usage

### Basic Usage

Rebuild image and restart E2E container:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e
```

### Force Rebuild (No Cache)

Rebuild from scratch without Docker cache:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --no-cache
```

### Clean Rebuild

Remove test volumes and rebuild with fresh state:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean
```

### With Security Testing Services

Enable CrowdSec for security testing:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --profile=security-tests
```

### With Notification Testing Services

Enable MailHog for email testing:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --profile=notification-tests
```

### Full Clean Rebuild with All Services

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --no-cache --clean --profile=security-tests
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| no-cache | boolean | No | false | Force rebuild without Docker cache |
| clean | boolean | No | false | Remove test volumes for fresh state |
| profile | string | No | "" | Docker Compose profile to enable |
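The flags above map naturally onto a small `case` loop. A minimal sketch, assuming hypothetical variable names (`NO_CACHE`, `CLEAN`, `PROFILE`) that mirror the values the skill logs; the real `parse_arguments` lives in the skill's run.sh:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of the skill's flag parsing -- not the actual implementation.
NO_CACHE=false
CLEAN=false
PROFILE=""

parse_arguments() {
  local arg
  for arg in "$@"; do
    case "${arg}" in
      --no-cache)  NO_CACHE=true ;;
      --clean)     CLEAN=true ;;
      --profile=*) PROFILE="${arg#--profile=}" ;;
      *)
        echo "Unknown argument: ${arg}" >&2
        return 1
        ;;
    esac
  done
}

parse_arguments --no-cache --profile=security-tests
echo "no-cache=${NO_CACHE} profile=${PROFILE}"  # prints: no-cache=true profile=security-tests
```

Rejecting unknown flags keeps typos from being silently ignored.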

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| DOCKER_NO_CACHE | No | false | Force rebuild without cache |
| SKIP_VOLUME_CLEANUP | No | false | Preserve test data volumes |

## What This Skill Does

1. **Stop Existing Containers**: Gracefully stops any running Playwright containers
2. **Clean Volumes** (if `--clean`): Removes test data volumes for fresh state
3. **Rebuild Image**: Builds `charon:local` image from Dockerfile
4. **Start Containers**: Starts the Playwright compose environment
5. **Wait for Health**: Verifies container health before returning
6. **Report Status**: Outputs container status and connection info

## Docker Compose Configuration

This skill uses `.docker/compose/docker-compose.playwright.yml` which includes:

- **charon-app**: Main application container on port 8080
- **crowdsec** (profile: security-tests): Security bouncer for WAF testing
- **mailhog** (profile: notification-tests): Email testing service

### Volumes Created

| Volume | Purpose |
|--------|---------|
| playwright_data | Application data and SQLite database |
| playwright_caddy_data | Caddy server data |
| playwright_caddy_config | Caddy configuration |
| playwright_crowdsec_data | CrowdSec data (if enabled) |
| playwright_crowdsec_config | CrowdSec config (if enabled) |

## Examples

### Example 1: Quick Rebuild After Code Change

```bash
# Rebuild and restart after making backend changes
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e

# Run E2E tests
.github/skills/scripts/skill-runner.sh test-e2e-playwright
```

### Example 2: Debug Failing Tests with Clean State

```bash
# Complete clean rebuild
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean --no-cache

# Run specific test in debug mode
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --grep="failing-test"
```

### Example 3: Test Security Features

```bash
# Start with CrowdSec enabled
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --profile=security-tests

# Run security-related E2E tests
.github/skills/scripts/skill-runner.sh test-e2e-playwright --grep="security"
```

## Health Check Verification

After starting, the skill waits for the health check to pass:

```bash
# Health endpoint checked
curl -sf http://localhost:8080/api/v1/health
```

The skill will:
- Wait up to 60 seconds for the container to become healthy
- Check every 5 seconds
- Report the final health status
- Exit with an error if the health check fails
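The polling behaviour can be sketched in isolation. Here `check_health` is a hypothetical stand-in for the real `docker inspect --format='{{.State.Health.Status}}' ...` call, wired to report healthy on the third probe:

```bash
#!/usr/bin/env bash
# check_health simulates container startup: it reports "starting" twice,
# then "healthy". It sets a variable (rather than printing) so the attempt
# counter survives -- a command substitution would run it in a subshell.
attempts=0
HEALTH_STATUS=""
check_health() {
  attempts=$((attempts + 1))
  if [ "${attempts}" -ge 3 ]; then
    HEALTH_STATUS="healthy"
  else
    HEALTH_STATUS="starting"
  fi
}

# Poll until healthy or until the timeout (seconds) is exhausted.
wait_for_healthy() {
  local timeout=$1 interval=$2 elapsed=0
  while [ "${elapsed}" -lt "${timeout}" ]; do
    check_health
    if [ "${HEALTH_STATUS}" = "healthy" ]; then
      return 0
    fi
    sleep "${interval}"
    elapsed=$((elapsed + interval))
  done
  return 1
}

wait_for_healthy 60 1 && echo "container is healthy"  # prints: container is healthy
```

The real skill uses the same loop shape with `HEALTH_TIMEOUT=60` and `HEALTH_INTERVAL=5`, plus an early abort when Docker reports the container `unhealthy`.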

## Error Handling

### Common Issues

#### Docker Build Failed
```
Error: docker build failed
```
**Solution**: Check Dockerfile syntax and ensure all COPY sources exist

#### Port Already in Use
```
Error: bind: address already in use
```
**Solution**: Stop conflicting services on port 8080

#### Health Check Timeout
```
Error: Container did not become healthy in 60s
```
**Solution**: Check container logs with `docker logs charon-playwright`

#### Volume Permission Issues
```
Error: permission denied
```
**Solution**: Run with `--clean` to recreate volumes with proper permissions

## Verifying the Environment

After the skill completes, verify the environment:

```bash
# Check container status
docker ps --filter "name=charon-playwright"

# Check logs
docker logs charon-playwright --tail 50

# Test health endpoint
curl http://localhost:8080/api/v1/health

# Check database state
docker exec charon-playwright sqlite3 /app/data/charon.db ".tables"
```

## Related Skills

- [test-e2e-playwright](./test-e2e-playwright.SKILL.md) - Run E2E tests
- [test-e2e-playwright-debug](./test-e2e-playwright-debug.SKILL.md) - Debug E2E tests
- [docker-start-dev](./docker-start-dev.SKILL.md) - Start development environment
- [docker-stop-dev](./docker-stop-dev.SKILL.md) - Stop development environment
- [docker-prune](./docker-prune.SKILL.md) - Clean up Docker resources

## Key File Locations

| File | Purpose |
|------|---------|
| `Dockerfile` | Main application Dockerfile |
| `.docker/compose/docker-compose.playwright.yml` | E2E test compose config |
| `playwright.config.js` | Playwright test configuration |
| `tests/` | E2E test files |
| `playwright/.auth/user.json` | Stored authentication state |

## Notes

- **Build Time**: A full rebuild takes 2-5 minutes depending on cache
- **Disk Space**: Image is ~500MB; volumes add ~100MB
- **Network**: Base images may need to be pulled on first run
- **Idempotent**: Safe to run multiple times
- **CI/CD Safe**: Designed for use in automated pipelines

---

**Last Updated**: 2026-01-21
**Maintained by**: Charon Project Team
**Compose File**: `.docker/compose/docker-compose.playwright.yml`
21
.github/skills/docker-start-dev-scripts/run.sh
vendored
@@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Docker: Start Development Environment - Execution Script
# ==============================================================================
# This script starts the Docker Compose development environment.
#
# Usage: ./run.sh
# Exit codes: 0 = success, non-zero = failure
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Start development environment with Docker Compose
exec docker compose -f .docker/compose/docker-compose.dev.yml up -d
269
.github/skills/docker-start-dev.SKILL.md
vendored
@@ -1,269 +0,0 @@
---
name: "docker-start-dev"
version: "1.0.0"
description: "Starts the Charon development Docker Compose environment with all required services"
author: "Charon Project"
license: "MIT"
tags:
  - "docker"
  - "development"
  - "compose"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "docker-compose"
    version: ">=2.0"
    optional: false
environment_variables: []
parameters: []
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, non-zero on failure"
metadata:
  category: "docker"
  subcategory: "environment"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: false
  requires_network: true
  idempotent: true
---

# Docker: Start Development Environment

## Overview

Starts the Charon development Docker Compose environment in detached mode. This brings up all required services including the application, database, CrowdSec, and any other dependencies defined in `.docker/compose/docker-compose.dev.yml`.

## Prerequisites

- Docker Engine installed and running
- Docker Compose V2 installed
- `.docker/compose/docker-compose.dev.yml` file in repository
- Network access (for pulling images)
- Sufficient system resources (CPU, memory, disk)

## Usage

### Basic Usage

```bash
.github/skills/docker-start-dev-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh docker-start-dev
```

### Via VS Code Task

Use the task: **Docker: Start Dev Environment**

## Parameters

This skill accepts no parameters. Services are configured in `.docker/compose/docker-compose.dev.yml`.

## Environment Variables

This skill uses environment variables defined in:
- `.env` (if present)
- `.docker/compose/docker-compose.dev.yml` environment section
- Shell environment

## Outputs

- **Success Exit Code**: 0 - All services started successfully
- **Error Exit Codes**: Non-zero - Service startup failed
- **Console Output**: Docker Compose logs and status

### Output Example

```
[+] Running 5/5
 ✔ Network charon-dev_default Created
 ✔ Container charon-dev-db-1 Started
 ✔ Container charon-dev-crowdsec-1 Started
 ✔ Container charon-dev-app-1 Started
 ✔ Container charon-dev-caddy-1 Started
```

## What Gets Started

Services defined in `.docker/compose/docker-compose.dev.yml`:

1. **charon-app**: Main application container
2. **charon-db**: SQLite or PostgreSQL database
3. **crowdsec**: Security bouncer
4. **caddy**: Reverse proxy (if configured)
5. **Other Services**: As defined in compose file

## Service Startup Order

Docker Compose respects `depends_on` directives:

1. Database services start first
2. Security services (CrowdSec) start next
3. Application services start after dependencies
4. Reverse proxy starts last
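The ordering above is driven by `depends_on` in the compose file. A hypothetical excerpt (service names, image, and healthcheck are illustrative; the authoritative definitions live in `.docker/compose/docker-compose.dev.yml`):

```yaml
services:
  db:
    image: postgres:16            # assumption: the real image is set in the compose file
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U charon"]
      interval: 5s
  crowdsec:
    depends_on:
      - db
  app:
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck, not just container start
      crowdsec:
        condition: service_started
  caddy:
    depends_on:
      - app                          # reverse proxy comes up last
```

Using `condition: service_healthy` instead of the bare list form makes Compose wait for the dependency's healthcheck, which avoids races where the app starts before the database accepts connections.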

## Examples

### Example 1: Start Development Environment

```bash
# Start all development services
.github/skills/docker-start-dev-scripts/run.sh

# Verify services are running
docker compose -f .docker/compose/docker-compose.dev.yml ps
```

### Example 2: Start and View Logs

```bash
# Start services in detached mode
.github/skills/docker-start-dev-scripts/run.sh

# Follow logs from all services
docker compose -f .docker/compose/docker-compose.dev.yml logs -f
```

### Example 3: Start and Test Application

```bash
# Start development environment
.github/skills/docker-start-dev-scripts/run.sh

# Wait for services to be healthy
sleep 10

# Test application endpoint
curl http://localhost:8080/health
```

## Service Health Checks

After starting, verify services are healthy:

```bash
# Check service status
docker compose -f .docker/compose/docker-compose.dev.yml ps

# Check specific service logs
docker compose -f .docker/compose/docker-compose.dev.yml logs app

# Execute command in running container
docker compose -f .docker/compose/docker-compose.dev.yml exec app /bin/sh
```

## Port Mappings

Default development ports (check `.docker/compose/docker-compose.dev.yml`):

- **8080**: Application HTTP
- **8443**: Application HTTPS (if configured)
- **9000**: Admin panel (if configured)
- **3000**: Frontend dev server (if configured)

## Detached Mode

The `-d` flag runs containers in detached mode:

- Services run in the background
- The terminal is freed for other commands
- Use `docker compose logs -f` to view output

## Error Handling

Common issues and solutions:

### Port Already in Use
```
Error: bind: address already in use
```
Solution: Stop the conflicting service or change the port in the compose file

### Image Pull Failed
```
Error: failed to pull image
```
Solution: Check the network connection and authenticate to the registry

### Insufficient Resources
```
Error: failed to start container
```
Solution: Free up system resources or stop other containers

### Configuration Error
```
Error: invalid compose file
```
Solution: Validate the compose file with `docker compose config`

## Post-Startup Verification

After starting, verify:

1. **All Services Running**:

   ```bash
   docker compose -f .docker/compose/docker-compose.dev.yml ps
   ```

2. **Application Accessible**:

   ```bash
   curl http://localhost:8080/health
   ```

3. **No Error Logs**:

   ```bash
   docker compose -f .docker/compose/docker-compose.dev.yml logs --tail=50
   ```

## Related Skills

- [docker-stop-dev](./docker-stop-dev.SKILL.md) - Stop development environment
- [docker-prune](./docker-prune.SKILL.md) - Clean up Docker resources
- [integration-test-all](./integration-test-all.SKILL.md) - Run integration tests

## Notes

- **Idempotent**: Safe to run multiple times (recreates only if needed)
- **Resource Usage**: Development mode may use more resources than production
- **Data Persistence**: Volumes persist data across restarts
- **Network Access**: Requires internet for initial image pulls
- **Not CI/CD Safe**: Intended for local development only
- **Background Execution**: Services run in detached mode

## Troubleshooting

### Services Won't Start

1. Check the Docker daemon: `docker info`
2. Validate the compose file: `docker compose -f .docker/compose/docker-compose.dev.yml config`
3. Check available resources: `docker stats`
4. Review logs: `docker compose -f .docker/compose/docker-compose.dev.yml logs`

### Slow Startup

- The first run pulls images (may take time)
- Subsequent runs use cached images
- Use `docker compose pull` to pre-pull images

### Service Dependency Issues

- Check `depends_on` in the compose file
- Add healthchecks for critical services
- Increase the startup timeout if needed

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Compose File**: `.docker/compose/docker-compose.dev.yml`
21
.github/skills/docker-stop-dev-scripts/run.sh
vendored
@@ -1,21 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Docker: Stop Development Environment - Execution Script
# ==============================================================================
# This script stops and removes the Docker Compose development environment.
#
# Usage: ./run.sh
# Exit codes: 0 = success, non-zero = failure
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Stop development environment with Docker Compose
exec docker compose -f .docker/compose/docker-compose.dev.yml down
272
.github/skills/docker-stop-dev.SKILL.md
vendored
@@ -1,272 +0,0 @@
---
name: "docker-stop-dev"
version: "1.0.0"
description: "Stops and removes the Charon development Docker Compose environment and containers"
author: "Charon Project"
license: "MIT"
tags:
  - "docker"
  - "development"
  - "compose"
  - "cleanup"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "docker-compose"
    version: ">=2.0"
    optional: false
environment_variables: []
parameters: []
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, non-zero on failure"
metadata:
  category: "docker"
  subcategory: "environment"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: false
  requires_network: false
  idempotent: true
---

# Docker: Stop Development Environment

## Overview

Stops and removes all containers defined in the Charon development Docker Compose environment. This gracefully shuts down services, removes containers, and cleans up the default network while preserving volumes and data.

## Prerequisites

- Docker Engine installed and running
- Docker Compose V2 installed
- Development environment previously started
- `.docker/compose/docker-compose.dev.yml` file in repository

## Usage

### Basic Usage

```bash
.github/skills/docker-stop-dev-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh docker-stop-dev
```

### Via VS Code Task

Use the task: **Docker: Stop Dev Environment**

## Parameters

This skill accepts no parameters.

## Environment Variables

This skill requires no environment variables.

## Outputs

- **Success Exit Code**: 0 - All services stopped successfully
- **Error Exit Codes**: Non-zero - Service shutdown failed
- **Console Output**: Docker Compose shutdown status

### Output Example

```
[+] Running 5/5
 ✔ Container charon-dev-caddy-1 Removed
 ✔ Container charon-dev-app-1 Removed
 ✔ Container charon-dev-crowdsec-1 Removed
 ✔ Container charon-dev-db-1 Removed
 ✔ Network charon-dev_default Removed
```

## What Gets Stopped

Services defined in `.docker/compose/docker-compose.dev.yml`:

1. **Application Containers**: Charon main app
2. **Database Containers**: SQLite/PostgreSQL services
3. **Security Services**: CrowdSec bouncer
4. **Reverse Proxy**: Caddy server
5. **Network**: Default Docker Compose network

## What Gets Preserved

The `down` command preserves:

- **Volumes**: Database data persists
- **Images**: Docker images remain cached
- **Configs**: Configuration files unchanged

To remove volumes, use `docker compose -f .docker/compose/docker-compose.dev.yml down -v`

## Shutdown Order

Docker Compose stops services in reverse dependency order:

1. Reverse proxy stops first
2. Application services stop next
3. Security services stop
4. Database services stop last

## Examples

### Example 1: Stop Development Environment

```bash
# Stop all development services
.github/skills/docker-stop-dev-scripts/run.sh

# Verify services are stopped
docker compose -f .docker/compose/docker-compose.dev.yml ps
```

### Example 2: Stop and Remove Volumes

```bash
# Stop services and remove data volumes
docker compose -f .docker/compose/docker-compose.dev.yml down -v
```

### Example 3: Stop and Verify Cleanup

```bash
# Stop development environment
.github/skills/docker-stop-dev-scripts/run.sh

# Verify no containers running
docker ps --filter "name=charon-dev"

# Verify network removed
docker network ls | grep charon-dev
```

## Graceful Shutdown

The `down` command:

1. Sends `SIGTERM` to each container
2. Waits for graceful shutdown (default: 10 seconds)
3. Sends `SIGKILL` if the timeout is exceeded
4. Removes stopped containers
5. Removes the default network
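The TERM-then-KILL escalation can be sketched with plain signals. This is an illustrative model of the behaviour, not Docker's implementation; `stop_gracefully` is a hypothetical helper:

```bash
#!/usr/bin/env bash
# Send SIGTERM, allow a grace period, then escalate to SIGKILL.
stop_gracefully() {
  local pid=$1 timeout=$2
  kill -TERM "${pid}" 2>/dev/null
  # Watchdog: force-kill if the grace period expires first.
  ( sleep "${timeout}"; kill -KILL "${pid}" 2>/dev/null ) &
  local watchdog=$!
  wait "${pid}" 2>/dev/null || true   # reaps the process as soon as it exits
  kill "${watchdog}" 2>/dev/null
}

sleep 30 &            # stand-in for a container's main process
stop_gracefully $! 3
echo "stopped"        # prints: stopped
```

A process that handles `SIGTERM` exits during the grace period (as `sleep` does here); one that ignores it is force-killed by the watchdog, mirroring step 3 above.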

## When to Use This Skill

Use this skill when:

- Switching between development and production modes
- Freeing system resources (CPU, memory)
- Preparing for system shutdown/restart
- Resetting the environment for troubleshooting
- Applying Docker Compose configuration changes
- Before database recovery operations

## Error Handling

Common issues and solutions:

### Container Already Stopped
```
Warning: Container already stopped
```
No action needed; the operation is idempotent

### Volume in Use
```
Error: volume is in use
```
Solution: Check for other containers using the volume

### Permission Denied
```
Error: permission denied
```
Solution: Add your user to the `docker` group or use sudo

## Post-Shutdown Verification

After stopping, verify:

1. **No Running Containers**:

   ```bash
   docker ps --filter "name=charon-dev"
   ```

2. **Network Removed**:

   ```bash
   docker network ls | grep charon-dev
   ```

3. **Volumes Still Exist** (if data preservation intended):

   ```bash
   docker volume ls | grep charon
   ```

## Related Skills

- [docker-start-dev](./docker-start-dev.SKILL.md) - Start development environment
- [docker-prune](./docker-prune.SKILL.md) - Clean up Docker resources
- [utility-db-recovery](./utility-db-recovery.SKILL.md) - Database recovery

## Notes

- **Idempotent**: Safe to run multiple times (no error if already stopped)
- **Data Preservation**: Volumes are NOT removed by default
- **Fast Execution**: Typically completes in seconds
- **No Network Required**: Local operation only
- **Not CI/CD Safe**: Intended for local development only
- **Graceful Shutdown**: Allows containers to clean up resources

## Complete Cleanup

For a complete environment reset:

```bash
# Stop and remove containers, networks
.github/skills/docker-stop-dev-scripts/run.sh

# Remove volumes (WARNING: deletes data)
docker volume rm $(docker volume ls -q --filter "name=charon")

# Remove images (optional)
docker rmi $(docker images -q "*charon*")

# Clean up dangling resources
.github/skills/docker-prune-scripts/run.sh
```

## Troubleshooting

### Container Won't Stop

1. Check container logs: `docker compose logs app`
2. Force removal: `docker compose kill`
3. Manual cleanup: `docker rm -f container_name`

### Volume Still in Use

1. List processes: `docker ps -a`
2. Check volume usage: `docker volume inspect volume_name`
3. Force volume removal: `docker volume rm -f volume_name`

### Network Can't Be Removed

1. Check connected containers: `docker network inspect network_name`
2. Disconnect containers: `docker network disconnect network_name container_name`
3. Retry removal: `docker network rm network_name`

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Compose File**: `.docker/compose/docker-compose.dev.yml`
@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Integration Test All - Wrapper Script
# Executes the comprehensive integration test suite

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Delegate to the existing integration test script
exec "${PROJECT_ROOT}/scripts/integration-test.sh" "$@"

220
.github/skills/integration-test-all.SKILL.md
vendored
@@ -1,220 +0,0 @@

---
# agentskills.io specification v1.0
name: "integration-test-all"
version: "1.0.0"
description: "Run all integration tests including WAF, CrowdSec, Cerberus, and rate limiting"
author: "Charon Project"
license: "MIT"
tags:
  - "integration"
  - "testing"
  - "docker"
  - "end-to-end"
  - "security"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "docker-compose"
    version: ">=2.0"
    optional: false
  - name: "curl"
    version: ">=7.0"
    optional: false
environment_variables:
  - name: "DOCKER_BUILDKIT"
    description: "Enable Docker BuildKit for faster builds"
    default: "1"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose output"
    default: "false"
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "Aggregated test results from all integration tests"
metadata:
  category: "integration-test"
  subcategory: "all"
  execution_time: "long"
  risk_level: "medium"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Integration Test All

## Overview

Executes the complete integration test suite for the Charon project. This skill runs all integration tests, including WAF functionality (Coraza), CrowdSec bouncer integration, Cerberus backend protection, and rate limiting. It validates the entire security stack in a containerized environment.

This is the comprehensive test suite that ensures all components work together correctly before deployment.

## Prerequisites

- Docker 24.0 or higher installed and running
- Docker Compose 2.0 or higher
- curl 7.0 or higher for API testing
- At least 4GB of available RAM for containers
- Network access for pulling container images
- Docker daemon running with sufficient disk space

## Usage

### Basic Usage

Run all integration tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh integration-test-all
```

### Verbose Mode

Run with detailed output:

```bash
VERBOSE=1 .github/skills/scripts/skill-runner.sh integration-test-all
```

### CI/CD Integration

For use in GitHub Actions workflows:

```yaml
- name: Run All Integration Tests
  run: .github/skills/scripts/skill-runner.sh integration-test-all
  timeout-minutes: 20
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| DOCKER_BUILDKIT | No | 1 | Enable BuildKit for faster builds |
| SKIP_CLEANUP | No | false | Skip container cleanup after tests |
| TEST_TIMEOUT | No | 300 | Timeout in seconds for each test |
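
A per-test limit like TEST_TIMEOUT is typically enforced with coreutils `timeout`. A minimal sketch, assuming the documented default of 300 seconds; the `run_one_test` helper is illustrative, not part of `skill-runner.sh`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Wrap each test command with the documented per-test timeout (default 300s).
# timeout(1) returns exit code 124 when it kills a command for running too long.
TEST_TIMEOUT="${TEST_TIMEOUT:-300}"

run_one_test() {
  timeout "${TEST_TIMEOUT}s" "$@"
}

run_one_test echo "fast test passed"
```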

## Outputs

### Success Exit Code

- **0**: All integration tests passed

### Error Exit Codes

- **1**: One or more tests failed
- **2**: Docker environment setup failed
- **3**: Container startup timeout
- **4**: Network connectivity issues

### Console Output

Example output:

```
=== Running Integration Test Suite ===
✓ Coraza WAF Integration Tests
✓ CrowdSec Bouncer Integration Tests
✓ CrowdSec Decision API Tests
✓ Cerberus Authentication Tests
✓ Rate Limiting Tests

All integration tests passed!
```

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh integration-test-all
```

### Example 2: Verbose with Custom Timeout

```bash
VERBOSE=1 TEST_TIMEOUT=600 .github/skills/scripts/skill-runner.sh integration-test-all
```

### Example 3: Skip Cleanup for Debugging

```bash
SKIP_CLEANUP=true .github/skills/scripts/skill-runner.sh integration-test-all
```

### Example 4: CI/CD Pipeline

```bash
# Run with explicit Docker configuration
DOCKER_BUILDKIT=1 .github/skills/scripts/skill-runner.sh integration-test-all
```

## Test Coverage

This skill executes the following test suites:

1. **Coraza WAF Tests**: SQL injection, XSS, path traversal detection
2. **CrowdSec Bouncer Tests**: IP blocking, decision synchronization
3. **CrowdSec Decision Tests**: Decision creation, removal, persistence
4. **Cerberus Tests**: Authentication, authorization, token management
5. **Rate Limit Tests**: Request throttling, burst handling
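
The aggregation behind this suite can be sketched as a loop over suite commands that reports per-suite status and fails overall if any suite failed. The suite names below are stubs, not the actual functions in `scripts/integration-test.sh`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Run each suite command, report per-suite status in the style of the
# console output shown above, and fail overall if any suite failed.
run_suites() {
  local failed=0 suite
  for suite in "$@"; do
    if "$suite"; then
      echo "✓ ${suite}"
    else
      echo "✗ ${suite}" >&2
      failed=$((failed + 1))
    fi
  done
  if [ "$failed" -gt 0 ]; then
    return 1
  fi
  echo "All integration tests passed!"
}

# Stub suites standing in for the real test scripts:
waf_suite() { return 0; }
ratelimit_suite() { return 0; }

run_suites waf_suite ratelimit_suite
```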

## Error Handling

### Common Errors

#### Error: Cannot connect to Docker daemon

**Solution**: Ensure Docker is running: `sudo systemctl start docker`

#### Error: Port already in use

**Solution**: Stop conflicting services or run cleanup: `docker compose down`

#### Error: Container startup timeout

**Solution**: Check Docker logs: `docker compose logs`

#### Error: Network connectivity issues

**Solution**: Verify network configuration: `docker network ls`

### Troubleshooting

- **Slow execution**: Check available system resources
- **Intermittent failures**: Increase TEST_TIMEOUT
- **Cleanup issues**: Manually run `docker compose down -v`

## Related Skills

- [integration-test-coraza](./integration-test-coraza.SKILL.md) - Coraza WAF tests only
- [integration-test-crowdsec](./integration-test-crowdsec.SKILL.md) - CrowdSec tests only
- [integration-test-crowdsec-decisions](./integration-test-crowdsec-decisions.SKILL.md) - Decision API tests
- [integration-test-crowdsec-startup](./integration-test-crowdsec-startup.SKILL.md) - Startup tests
- [docker-verify-crowdsec-config](./docker-verify-crowdsec-config.SKILL.md) - Config validation

## Notes

- **Execution Time**: Long execution (10-15 minutes typical)
- **Resource Intensive**: Requires significant CPU and memory
- **Network Required**: Pulls Docker images and tests network functionality
- **Idempotency**: Safe to run multiple times (cleanup between runs)
- **Cleanup**: Automatically cleans up containers unless SKIP_CLEANUP=true
- **CI/CD**: Designed for automated pipelines with proper timeout configuration
- **Isolation**: Tests run in isolated Docker networks

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/integration-test.sh`

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Integration Test Coraza - Wrapper Script
# Tests Coraza WAF integration

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Delegate to the existing Coraza integration test script
exec "${PROJECT_ROOT}/scripts/coraza_integration.sh" "$@"

205
.github/skills/integration-test-coraza.SKILL.md
vendored
@@ -1,205 +0,0 @@

---
# agentskills.io specification v1.0
name: "integration-test-coraza"
version: "1.0.0"
description: "Test Coraza WAF integration with OWASP Core Rule Set protection"
author: "Charon Project"
license: "MIT"
tags:
  - "integration"
  - "waf"
  - "security"
  - "coraza"
  - "owasp"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "curl"
    version: ">=7.0"
    optional: false
environment_variables:
  - name: "WAF_ENABLED"
    description: "Enable WAF protection"
    default: "true"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose output"
    default: "false"
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "WAF test results including blocked attacks"
metadata:
  category: "integration-test"
  subcategory: "waf"
  execution_time: "medium"
  risk_level: "medium"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Integration Test Coraza

## Overview

Tests the Coraza Web Application Firewall (WAF) integration with the OWASP Core Rule Set (CRS). This skill validates that the WAF correctly detects and blocks common web attacks, including SQL injection, cross-site scripting (XSS), remote code execution (RCE), and path traversal attempts.

Coraza provides ModSecurity-compatible rule processing with improved performance in a modern Go implementation.

## Prerequisites

- Docker 24.0 or higher installed and running
- curl 7.0 or higher for HTTP testing
- Running Charon Docker environment (or automatic startup)
- Network access to test endpoints

## Usage

### Basic Usage

Run Coraza WAF integration tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh integration-test-coraza
```

### Verbose Mode

Run with detailed attack payloads and responses:

```bash
VERBOSE=1 .github/skills/scripts/skill-runner.sh integration-test-coraza
```

### CI/CD Integration

For use in GitHub Actions workflows:

```yaml
- name: Test Coraza WAF Integration
  run: .github/skills/scripts/skill-runner.sh integration-test-coraza
  timeout-minutes: 5
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| WAF_ENABLED | No | true | Enable WAF protection for tests |
| TEST_HOST | No | localhost:8080 | Target host for WAF tests |

## Outputs

### Success Exit Code

- **0**: All WAF tests passed (attacks blocked correctly)

### Error Exit Codes

- **1**: One or more attacks were not blocked
- **2**: Docker environment setup failed
- **3**: WAF not responding or misconfigured

### Console Output

Example output:

```
=== Testing Coraza WAF Integration ===
✓ SQL Injection: Blocked (403 Forbidden)
✓ XSS Attack: Blocked (403 Forbidden)
✓ Path Traversal: Blocked (403 Forbidden)
✓ RCE Attempt: Blocked (403 Forbidden)
✓ Legitimate Request: Allowed (200 OK)

All Coraza WAF tests passed!
```

## Test Coverage

This skill validates protection against:

1. **SQL Injection**: `' OR '1'='1`, `UNION SELECT`, `'; DROP TABLE`
2. **Cross-Site Scripting (XSS)**: `<script>alert('XSS')</script>`, `javascript:alert(1)`
3. **Path Traversal**: `../../etc/passwd`, `....//....//etc/passwd`
4. **Remote Code Execution**: `<?php system($_GET['cmd']); ?>`, `eval()`
5. **Legitimate Traffic**: Ensures normal requests are not blocked
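
The pass/fail check behind each line of that list reduces to comparing an observed HTTP status against the expected one. In a real run the observed status would come from `curl -s -o /dev/null -w '%{http_code}' "$url"`; the `expect_status` name below is illustrative, not taken from `scripts/coraza_integration.sh`:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Compare an observed HTTP status against the expected one and report
# in the same style as the console output above.
expect_status() {
  local label="$1" expected="$2" actual="$3"
  if [ "$actual" = "$expected" ]; then
    echo "✓ ${label} (${actual})"
  else
    echo "✗ ${label}: expected ${expected}, got ${actual}" >&2
    return 1
  fi
}

expect_status "SQL Injection blocked" 403 403
expect_status "Legitimate request allowed" 200 200
```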

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh integration-test-coraza
```

### Example 2: Verbose with Custom Host

```bash
TEST_HOST=production.example.com VERBOSE=1 \
.github/skills/scripts/skill-runner.sh integration-test-coraza
```

### Example 3: Disable WAF for Comparison

```bash
WAF_ENABLED=false .github/skills/scripts/skill-runner.sh integration-test-coraza
```

## Error Handling

### Common Errors

#### Error: WAF not responding

**Solution**: Verify Docker containers are running: `docker ps | grep coraza`

#### Error: Attacks not blocked (false negatives)

**Solution**: Check the WAF configuration in `configs/coraza/` and the rule sets

#### Error: Legitimate requests blocked (false positives)

**Solution**: Review WAF logs and adjust rule sensitivity

#### Error: Connection refused

**Solution**: Ensure the application is accessible: `curl http://localhost:8080/health`

### Debugging

- **WAF Logs**: `docker logs $(docker ps -q -f name=coraza)`
- **Rule Debugging**: Set `SecRuleEngine DetectionOnly` in config
- **Test Individual Payloads**: Use curl with specific attack strings

## Related Skills

- [integration-test-all](./integration-test-all.SKILL.md) - Complete integration suite
- [integration-test-waf](./integration-test-waf.SKILL.md) - General WAF tests
- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Vulnerability scanning

## Notes

- **OWASP CRS**: Uses Core Rule Set v4.0+ for comprehensive protection
- **Execution Time**: Medium execution (3-5 minutes)
- **False Positives**: Tuning required for production workloads
- **Performance**: Minimal latency impact (<5ms per request)
- **Compliance**: Helps meet OWASP Top 10 and PCI DSS requirements
- **Logging**: All blocked requests are logged for analysis
- **Rule Updates**: Regularly update CRS for the latest threat intelligence

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/coraza_integration.sh`

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Integration Test CrowdSec Decisions - Wrapper Script
# Tests CrowdSec decision API functionality

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Delegate to the existing CrowdSec decision integration test script
exec "${PROJECT_ROOT}/scripts/crowdsec_decision_integration.sh" "$@"

@@ -1,252 +0,0 @@
---
# agentskills.io specification v1.0
name: "integration-test-crowdsec-decisions"
version: "1.0.0"
description: "Test CrowdSec decision API for creating, retrieving, and removing IP blocks"
author: "Charon Project"
license: "MIT"
tags:
  - "integration"
  - "crowdsec"
  - "decisions"
  - "api"
  - "blocking"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "curl"
    version: ">=7.0"
    optional: false
  - name: "jq"
    version: ">=1.6"
    optional: false
environment_variables:
  - name: "CROWDSEC_API_KEY"
    description: "CrowdSec API key for decision management"
    default: "auto-generated"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose output"
    default: "false"
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "Decision API test results"
metadata:
  category: "integration-test"
  subcategory: "api"
  execution_time: "medium"
  risk_level: "medium"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Integration Test CrowdSec Decisions

## Overview

Tests the CrowdSec decision API functionality for managing IP block decisions. This skill validates decision creation, retrieval, persistence, expiration, and removal through the CrowdSec Local API (LAPI). It ensures the decision lifecycle works correctly and that bouncers receive updates in real time.

Decisions are the core mechanism CrowdSec uses to communicate threats between detectors and enforcers.

## Prerequisites

- Docker 24.0 or higher installed and running
- curl 7.0 or higher for API testing
- jq 1.6 or higher for JSON parsing
- Running CrowdSec LAPI container
- Valid CrowdSec API credentials

## Usage

### Basic Usage

Run CrowdSec decision API tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

### Verbose Mode

Run with detailed API request/response logging:

```bash
VERBOSE=1 .github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

### CI/CD Integration

For use in GitHub Actions workflows:

```yaml
- name: Test CrowdSec Decision API
  run: .github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
  timeout-minutes: 5
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CROWDSEC_API_KEY | No | auto | API key for LAPI access |
| CROWDSEC_LAPI_URL | No | http://crowdsec:8080 | CrowdSec LAPI endpoint |
| TEST_IP | No | 192.0.2.1 | Test IP address for decisions |

## Outputs

### Success Exit Code

- **0**: All decision API tests passed

### Error Exit Codes

- **1**: One or more tests failed
- **2**: LAPI not accessible
- **3**: Authentication failed
- **4**: Decision creation/deletion failed

### Console Output

Example output:

```
=== Testing CrowdSec Decision API ===
✓ Create Decision: IP 192.0.2.1 blocked for 4h
✓ Retrieve Decisions: 1 active decision found
✓ Decision Details: Correct scope, value, duration
✓ Decision Persistence: Survives bouncer restart
✓ Decision Expiration: Expires after duration
✓ Remove Decision: Successfully deleted
✓ Decision Cleanup: No orphaned decisions

All CrowdSec decision API tests passed!
```

## Test Coverage

This skill validates:

1. **Decision Creation**:
   - Create IP ban decision via API
   - Create range ban decision
   - Create captcha decision
   - Set custom duration and reason

2. **Decision Retrieval**:
   - List all active decisions
   - Filter by scope (ip, range, country)
   - Filter by value (specific IP)
   - Pagination support

3. **Decision Persistence**:
   - Decisions survive LAPI restart
   - Decisions sync to bouncers
   - Database integrity

4. **Decision Lifecycle**:
   - Expiration after duration
   - Manual removal via API
   - Automatic cleanup of expired decisions

5. **Decision Synchronization**:
   - Bouncer receives new decisions
   - Bouncer updates on decision changes
   - Real-time propagation
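
Durations in the lifecycle tests are strings like `4h`. A small helper for converting the common forms to seconds in test assertions can be sketched as follows; this handles only the `s`/`m`/`h` suffixes, not full Go duration syntax:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Convert simple CrowdSec-style durations (30s, 10m, 4h) to seconds.
duration_to_seconds() {
  local d="$1"
  case "$d" in
    *h) echo $(( ${d%h} * 3600 )) ;;
    *m) echo $(( ${d%m} * 60 )) ;;
    *s) echo $(( ${d%s} )) ;;
    *)  echo "unsupported duration: $d" >&2; return 1 ;;
  esac
}

duration_to_seconds 4h
```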

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

### Example 2: Test Specific IP

```bash
TEST_IP=10.0.0.1 \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

### Example 3: Custom LAPI URL

```bash
CROWDSEC_LAPI_URL=https://crowdsec-lapi.example.com \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

### Example 4: Verbose with API Key

```bash
CROWDSEC_API_KEY=my-api-key VERBOSE=1 \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-decisions
```

## API Endpoints Tested

- `POST /v1/decisions` - Create new decision
- `GET /v1/decisions` - List decisions
- `GET /v1/decisions/:id` - Get decision details
- `DELETE /v1/decisions/:id` - Remove decision
- `GET /v1/decisions/stream` - Bouncer decision stream

## Error Handling

### Common Errors

#### Error: LAPI not responding

**Solution**: Check the LAPI container: `docker ps | grep crowdsec`

#### Error: Authentication failed

**Solution**: Verify API key: `docker exec crowdsec cscli machines list`

#### Error: Decision not created

**Solution**: Check LAPI logs for validation errors

#### Error: Decision not found after creation

**Solution**: Verify database connectivity and permissions

### Debugging

- **LAPI Logs**: `docker logs $(docker ps -q -f name=crowdsec)`
- **Database**: `docker exec crowdsec cscli decisions list`
- **API Testing**: `curl -H "X-Api-Key: $KEY" http://localhost:8080/v1/decisions`
- **Decision Details**: `docker exec crowdsec cscli decisions list -o json | jq`

## Related Skills

- [integration-test-crowdsec](./integration-test-crowdsec.SKILL.md) - Main bouncer tests
- [integration-test-crowdsec-startup](./integration-test-crowdsec-startup.SKILL.md) - Startup tests
- [integration-test-all](./integration-test-all.SKILL.md) - Complete suite

## Notes

- **Execution Time**: Medium execution (3-5 minutes)
- **Decision Types**: Supports ban, captcha, and throttle decisions
- **Scopes**: IP, range, country, AS, user
- **Duration**: From seconds to permanent bans
- **API Version**: Tests LAPI v1 endpoints
- **Cleanup**: All test decisions are removed after execution
- **Idempotency**: Safe to run multiple times
- **Isolation**: Uses test IP ranges (RFC 5737)
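
The isolation note can be enforced with a guard that accepts only the RFC 5737 documentation ranges (192.0.2.0/24, 198.51.100.0/24, 203.0.113.0/24) as test targets. A sketch; `is_test_net_ip` is an illustrative name:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Accept only RFC 5737 TEST-NET addresses, so test decisions can never
# ban a real host.
is_test_net_ip() {
  case "$1" in
    192.0.2.*|198.51.100.*|203.0.113.*) return 0 ;;
    *) return 1 ;;
  esac
}

if is_test_net_ip "${TEST_IP:-192.0.2.1}"; then
  echo "TEST_IP is in a documentation range"
fi
```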

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/crowdsec_decision_integration.sh`

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Integration Test CrowdSec - Wrapper Script
# Tests CrowdSec bouncer integration

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Delegate to the existing CrowdSec integration test script
exec "${PROJECT_ROOT}/scripts/crowdsec_integration.sh" "$@"

@@ -1,11 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# Integration Test CrowdSec Startup - Wrapper Script
# Tests CrowdSec startup sequence and initialization

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Delegate to the existing CrowdSec startup test script
exec "${PROJECT_ROOT}/scripts/crowdsec_startup_test.sh" "$@"

@@ -1,275 +0,0 @@
---
# agentskills.io specification v1.0
name: "integration-test-crowdsec-startup"
version: "1.0.0"
description: "Test CrowdSec startup sequence, initialization, and error handling"
author: "Charon Project"
license: "MIT"
tags:
  - "integration"
  - "crowdsec"
  - "startup"
  - "initialization"
  - "resilience"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "curl"
    version: ">=7.0"
    optional: false
environment_variables:
  - name: "STARTUP_TIMEOUT"
    description: "Maximum wait time for startup in seconds"
    default: "60"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose output"
    default: "false"
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "Startup test results"
metadata:
  category: "integration-test"
  subcategory: "startup"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Integration Test CrowdSec Startup

## Overview

Tests the CrowdSec startup sequence and initialization process. This skill validates that CrowdSec components (LAPI, bouncer) start correctly, handle initialization errors gracefully, and recover from common startup failures. It ensures the system is resilient to network issues, configuration problems, and timing-related edge cases.

Proper startup behavior is critical for production deployments and automated container orchestration.

## Prerequisites

- Docker 24.0 or higher installed and running
- curl 7.0 or higher for health checks
- Docker Compose for orchestration
- Network connectivity for pulling images

## Usage

### Basic Usage

Run CrowdSec startup tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### Verbose Mode

Run with detailed startup logging:

```bash
VERBOSE=1 .github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### Custom Timeout

Run with an extended startup timeout:

```bash
STARTUP_TIMEOUT=120 .github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### CI/CD Integration

For use in GitHub Actions workflows:

```yaml
- name: Test CrowdSec Startup
  run: .github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
  timeout-minutes: 5
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| STARTUP_TIMEOUT | No | 60 | Maximum wait for startup (seconds) |
| SKIP_CLEANUP | No | false | Skip container cleanup after tests |
| CROWDSEC_VERSION | No | latest | CrowdSec image version to test |

## Outputs

### Success Exit Code

- **0**: All startup tests passed

### Error Exit Codes

- **1**: One or more tests failed
- **2**: Startup timeout exceeded
- **3**: Configuration errors detected
- **4**: Health check failed

### Console Output

Example output:

```
=== Testing CrowdSec Startup Sequence ===
✓ LAPI Initialization: Ready in 8s
✓ Database Migration: Successful
✓ Bouncer Registration: Successful
✓ Configuration Validation: No errors
✓ Health Check: All services healthy
✓ Graceful Shutdown: Clean exit
✓ Restart Resilience: Fast recovery

All CrowdSec startup tests passed!
```

## Test Coverage

This skill validates:

1. **Clean Startup**:
   - LAPI starts and becomes ready
   - Database schema migration
   - Configuration loading
   - API endpoint availability

2. **Bouncer Initialization**:
   - Bouncer registers with LAPI
   - API key generation/validation
   - Decision cache initialization
   - First sync successful

3. **Error Handling**:
   - Invalid configuration detection
   - Missing database handling
   - Network timeout recovery
   - Retry mechanisms

4. **Edge Cases**:
   - LAPI not ready on first attempt
   - Race conditions in initialization
   - Concurrent bouncer registrations
   - Configuration hot-reload

5. **Resilience**:
   - Graceful shutdown
   - Fast restart (warm start)
   - State persistence
   - No resource leaks

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### Example 2: Extended Timeout

```bash
STARTUP_TIMEOUT=180 VERBOSE=1 \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### Example 3: Test Specific Version

```bash
CROWDSEC_VERSION=v1.5.0 \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

### Example 4: Keep Containers for Debugging

```bash
SKIP_CLEANUP=true \
.github/skills/scripts/skill-runner.sh integration-test-crowdsec-startup
```

## Startup Sequence Verified

1. **Phase 1: Container Start** (0-5s)
   - Container created and started
   - Entrypoint script execution
   - Environment variable processing

2. **Phase 2: LAPI Initialization** (5-15s)
   - Database connection established
   - Schema migration/validation
   - Configuration parsing
   - API server binding

3. **Phase 3: Bouncer Registration** (15-25s)
   - Bouncer discovers LAPI
   - API key generated/validated
   - Initial decision sync
   - Cache population

4. **Phase 4: Ready State** (25-30s)
   - Health check endpoint responds
   - All components initialized
   - Ready to process requests
|
||||
|
||||
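The Phase 4 readiness gate can be sketched as a simple polling loop. This is an illustrative sketch, not the skill's actual implementation; the probe command, the health URL, and the defaults are assumptions:

```shell
# Hypothetical sketch of the Phase 4 readiness gate: poll a probe command
# until it succeeds or the timeout (in seconds) elapses. The default probe
# URL mirrors the health check used elsewhere in this document.
wait_for_ready() {
    local timeout="${1:-120}"
    local probe="${2:-curl -fsS http://localhost:8080/health}"
    local deadline=$(( $(date +%s) + timeout ))

    until ${probe} >/dev/null 2>&1; do
        if (( $(date +%s) >= deadline )); then
            echo "Startup timeout exceeded after ${timeout}s" >&2
            return 1
        fi
        sleep 2
    done
    echo "Ready"
}
```

For instance, `wait_for_ready 180` would match the extended-timeout run shown in Example 2.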
## Error Handling

### Common Errors

#### Error: Startup timeout exceeded

**Solution**: Increase `STARTUP_TIMEOUT` or check the container logs for hangs.

#### Error: Database connection failed

**Solution**: Verify the database container is running and accessible.

#### Error: Configuration validation failed

**Solution**: Check the CrowdSec config files for syntax errors.

#### Error: Port already in use

**Solution**: Stop conflicting services or change the port configuration.

### Debugging

- **LAPI Logs**: `docker logs $(docker ps -q -f name=crowdsec) -f`
- **Bouncer Logs**: `docker logs $(docker ps -q -f name=charon-app) | grep crowdsec`
- **Health Check**: `curl http://localhost:8080/health`
- **Database**: `docker exec crowdsec cscli machines list`

## Related Skills

- [integration-test-crowdsec](./integration-test-crowdsec.SKILL.md) - Main bouncer tests
- [integration-test-crowdsec-decisions](./integration-test-crowdsec-decisions.SKILL.md) - Decision tests
- [docker-verify-crowdsec-config](./docker-verify-crowdsec-config.SKILL.md) - Config validation

## Notes

- **Execution Time**: Medium (3-5 minutes)
- **Typical Startup**: 20-30 seconds for a clean start
- **Warm Start**: 5-10 seconds after a restart
- **Timeout Buffer**: The default timeout includes a safety margin
- **Container Orchestration**: Tests apply to Kubernetes/Docker Swarm deployments
- **Production Ready**: Validates production deployment scenarios
- **Cleanup**: Test containers are removed automatically unless `SKIP_CLEANUP=true`
- **Idempotency**: Safe to run multiple times consecutively

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/crowdsec_startup_test.sh`
220
.github/skills/integration-test-crowdsec.SKILL.md
vendored
@@ -1,220 +0,0 @@
---
# agentskills.io specification v1.0
name: "integration-test-crowdsec"
version: "1.0.0"
description: "Test CrowdSec bouncer integration and IP blocking functionality"
author: "Charon Project"
license: "MIT"
tags:
  - "integration"
  - "security"
  - "crowdsec"
  - "ip-blocking"
  - "bouncer"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "curl"
    version: ">=7.0"
    optional: false
environment_variables:
  - name: "CROWDSEC_API_KEY"
    description: "CrowdSec API key for bouncer authentication"
    default: "auto-generated"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose output"
    default: "false"
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "CrowdSec integration test results"
metadata:
  category: "integration-test"
  subcategory: "security"
  execution_time: "medium"
  risk_level: "medium"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---
# Integration Test CrowdSec

## Overview

Tests the CrowdSec bouncer integration for IP-based threat detection and blocking. This skill validates that the CrowdSec bouncer correctly synchronizes with the CrowdSec Local API (LAPI), retrieves and applies IP block decisions, and enforces security policies.

CrowdSec provides collaborative security with real-time threat intelligence sharing across the community.

## Prerequisites

- Docker 24.0 or higher installed and running
- curl 7.0 or higher for API testing
- Running CrowdSec LAPI container
- Running Charon application with CrowdSec bouncer enabled
- Network access between bouncer and LAPI

## Usage

### Basic Usage

Run CrowdSec bouncer integration tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh integration-test-crowdsec
```

### Verbose Mode

Run with detailed API interactions:

```bash
VERBOSE=1 .github/skills/scripts/skill-runner.sh integration-test-crowdsec
```

### CI/CD Integration

For use in GitHub Actions workflows:

```yaml
- name: Test CrowdSec Integration
  run: .github/skills/scripts/skill-runner.sh integration-test-crowdsec
  timeout-minutes: 7
```
## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CROWDSEC_API_KEY | No | auto | Bouncer API key (auto-generated if not set) |
| CROWDSEC_LAPI_URL | No | http://crowdsec:8080 | CrowdSec LAPI endpoint |
| BOUNCER_SYNC_INTERVAL | No | 60 | Decision sync interval in seconds |

## Outputs

### Success Exit Code

- **0**: All CrowdSec integration tests passed

### Error Exit Codes

- **1**: One or more tests failed
- **2**: CrowdSec LAPI not accessible
- **3**: Bouncer authentication failed
- **4**: Decision synchronization failed

### Console Output

Example output:

```
=== Testing CrowdSec Bouncer Integration ===
✓ LAPI Connection: Successful
✓ Bouncer Authentication: Valid API Key
✓ Decision Retrieval: 5 active decisions
✓ IP Blocking: Blocked malicious IP (403 Forbidden)
✓ Legitimate IP: Allowed (200 OK)
✓ Decision Synchronization: Every 60s

All CrowdSec integration tests passed!
```
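A caller can branch on these exit codes; the sketch below is a hypothetical CI wrapper (not part of the skill) that maps them to the meanings listed above:

```shell
# Hypothetical CI wrapper: run a command and translate the documented
# exit codes into human-readable outcomes.
run_and_report() {
    local rc=0
    "$@" || rc=$?
    case "${rc}" in
        0) echo "all tests passed" ;;
        1) echo "test failure" ;;
        2) echo "LAPI not accessible" ;;
        3) echo "bouncer authentication failed" ;;
        4) echo "decision synchronization failed" ;;
        *) echo "unexpected exit code" ;;
    esac
}

# e.g.: run_and_report .github/skills/scripts/skill-runner.sh integration-test-crowdsec
```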
## Test Coverage

This skill validates:

1. **LAPI Connectivity**: Bouncer can reach CrowdSec Local API
2. **Authentication**: Valid API key and successful bouncer registration
3. **Decision Retrieval**: Fetching active IP block decisions
4. **IP Blocking**: Correctly blocking malicious IPs
5. **Legitimate Traffic**: Allowing non-blocked IPs
6. **Decision Synchronization**: Regular updates from LAPI
7. **Graceful Degradation**: Handling LAPI downtime
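Check 7 (graceful degradation) hinges on serving cached decisions while LAPI is down. A minimal standalone sketch of that fallback pattern, with the fetch command and cache path as hypothetical stand-ins rather than the bouncer's real internals:

```shell
# Hypothetical illustration of graceful degradation: prefer a fresh
# decision fetch, but fall back to the last cached result if it fails.
fetch_decisions() {
    local fetch_cmd="$1"
    local cache_file="$2"
    local fresh

    if fresh=$(${fetch_cmd} 2>/dev/null); then
        printf '%s\n' "${fresh}" > "${cache_file}"
        printf '%s\n' "${fresh}"
    elif [[ -f "${cache_file}" ]]; then
        cat "${cache_file}"   # LAPI unavailable: serve the stale cache
    else
        return 1
    fi
}
```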
## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh integration-test-crowdsec
```

### Example 2: Custom API Key

```bash
CROWDSEC_API_KEY=my-bouncer-key \
  .github/skills/scripts/skill-runner.sh integration-test-crowdsec
```

### Example 3: Custom LAPI URL

```bash
CROWDSEC_LAPI_URL=http://crowdsec-lapi:8080 \
  .github/skills/scripts/skill-runner.sh integration-test-crowdsec
```

### Example 4: Fast Sync Interval

```bash
BOUNCER_SYNC_INTERVAL=30 VERBOSE=1 \
  .github/skills/scripts/skill-runner.sh integration-test-crowdsec
```
## Error Handling

### Common Errors

#### Error: Cannot connect to LAPI

**Solution**: Verify the LAPI container is running: `docker ps | grep crowdsec`

#### Error: Authentication failed

**Solution**: Check the API key is valid: `docker exec crowdsec cscli bouncers list`

#### Error: No decisions retrieved

**Solution**: Create test decisions: `docker exec crowdsec cscli decisions add --ip 1.2.3.4`

#### Error: Blocking not working

**Solution**: Check the bouncer logs: `docker logs charon-app | grep crowdsec`

### Debugging

- **LAPI Logs**: `docker logs $(docker ps -q -f name=crowdsec)`
- **Bouncer Status**: Check application logs for sync errors
- **Decision List**: `docker exec crowdsec cscli decisions list`
- **Test Block**: `curl -H "X-Forwarded-For: 1.2.3.4" http://localhost:8080/`

## Related Skills

- [integration-test-crowdsec-decisions](./integration-test-crowdsec-decisions.SKILL.md) - Decision API tests
- [integration-test-crowdsec-startup](./integration-test-crowdsec-startup.SKILL.md) - Startup tests
- [integration-test-all](./integration-test-all.SKILL.md) - Complete test suite

## Notes

- **Execution Time**: Medium (4-6 minutes)
- **Community Intelligence**: Benefits from CrowdSec's global threat network
- **Performance**: Minimal latency thanks to in-memory decision caching
- **Scalability**: Tested with thousands of concurrent decisions
- **Resilience**: Continues working if LAPI is temporarily unavailable
- **Observability**: Full metrics exposed for Prometheus/Grafana
- **Compliance**: Supports GDPR-compliant threat intelligence

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/crowdsec_integration.sh`
96
.github/skills/qa-precommit-all-scripts/run.sh
vendored
@@ -1,96 +0,0 @@
#!/usr/bin/env bash
# QA Pre-commit All - Execution Script
#
# This script runs all pre-commit hooks for comprehensive code quality validation.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_python_environment "3.8" || error_exit "Python 3.8+ is required"

# Check for virtual environment
if [[ -z "${VIRTUAL_ENV:-}" ]]; then
    log_warning "Virtual environment not activated, attempting to activate .venv"
    if [[ -f "${PROJECT_ROOT}/.venv/bin/activate" ]]; then
        # shellcheck source=/dev/null
        source "${PROJECT_ROOT}/.venv/bin/activate"
        log_info "Activated virtual environment: ${VIRTUAL_ENV}"
    else
        error_exit "Virtual environment not found at ${PROJECT_ROOT}/.venv"
    fi
fi

# Check for pre-commit
if ! command -v pre-commit &> /dev/null; then
    error_exit "pre-commit not found. Install with: pip install pre-commit"
fi

# Parse arguments
FILES_MODE="${1:---all-files}"

# Validate files mode
case "${FILES_MODE}" in
    --all-files|staged)
        ;;
    *)
        # If not a recognized mode, treat as a specific hook ID
        HOOK_ID="${FILES_MODE}"
        FILES_MODE="--all-files"
        log_info "Running specific hook: ${HOOK_ID}"
        ;;
esac

# Change to project root
cd "${PROJECT_ROOT}"

# Execute pre-commit
log_step "VALIDATION" "Running pre-commit hooks"
log_info "Files mode: ${FILES_MODE}"

if [[ -n "${SKIP:-}" ]]; then
    log_info "Skipping hooks: ${SKIP}"
fi

# Build pre-commit command
PRE_COMMIT_CMD="pre-commit run"

# Handle files mode
if [[ "${FILES_MODE}" == "staged" ]]; then
    # Run on staged files only (no flag needed; this is the default for 'pre-commit run')
    log_info "Running on staged files only"
else
    PRE_COMMIT_CMD="${PRE_COMMIT_CMD} --all-files"
fi

# Add specific hook if provided
if [[ -n "${HOOK_ID:-}" ]]; then
    PRE_COMMIT_CMD="${PRE_COMMIT_CMD} ${HOOK_ID}"
fi

# Execute the validation
log_info "Executing: ${PRE_COMMIT_CMD}"

if eval "${PRE_COMMIT_CMD}"; then
    log_success "All pre-commit hooks passed"
    exit 0
else
    exit_code=$?
    log_error "One or more pre-commit hooks failed (exit code: ${exit_code})"
    log_info "Review the output above for details"
    log_info "Some hooks can auto-fix issues - review and commit changes if appropriate"
    exit "${exit_code}"
fi
353
.github/skills/qa-precommit-all.SKILL.md
vendored
@@ -1,353 +0,0 @@
---
# agentskills.io specification v1.0
name: "qa-precommit-all"
version: "1.0.0"
description: "Run all pre-commit hooks for comprehensive code quality validation"
author: "Charon Project"
license: "MIT"
tags:
  - "qa"
  - "quality"
  - "pre-commit"
  - "linting"
  - "validation"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "python3"
    version: ">=3.8"
    optional: false
  - name: "pre-commit"
    version: ">=2.0"
    optional: false
environment_variables:
  - name: "PRE_COMMIT_HOME"
    description: "Pre-commit cache directory"
    default: "~/.cache/pre-commit"
    required: false
  - name: "SKIP"
    description: "Comma-separated list of hook IDs to skip"
    default: ""
    required: false
parameters:
  - name: "files"
    type: "string"
    description: "Specific files to check (default: all staged files)"
    default: "--all-files"
    required: false
outputs:
  - name: "validation_report"
    type: "stdout"
    description: "Results of all pre-commit hook executions"
  - name: "exit_code"
    type: "number"
    description: "0 if all hooks pass, non-zero if any fail"
metadata:
  category: "qa"
  subcategory: "quality"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---
# QA Pre-commit All

## Overview

Executes all configured pre-commit hooks to validate code quality, formatting, security, and best practices across the entire codebase. This skill runs checks for Python, Go, JavaScript/TypeScript, Markdown, YAML, and more.

This skill is designed for CI/CD pipelines and local quality validation before committing code.

## Prerequisites

- Python 3.8 or higher installed and in PATH
- Python virtual environment activated (`.venv`)
- Pre-commit installed in virtual environment: `pip install pre-commit`
- Pre-commit hooks installed: `pre-commit install`
- All language-specific tools installed (Go, Node.js, etc.)

## Usage

### Basic Usage

Run all hooks on all files:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh qa-precommit-all
```

### Staged Files Only

Run hooks on staged files only (faster):

```bash
.github/skills/scripts/skill-runner.sh qa-precommit-all staged
```

### Specific Hook

Run only a specific hook by ID:

```bash
SKIP="" .github/skills/scripts/skill-runner.sh qa-precommit-all trailing-whitespace
```

### Skip Specific Hooks

Skip certain hooks during execution:

```bash
SKIP=prettier,eslint .github/skills/scripts/skill-runner.sh qa-precommit-all
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| files | string | No | --all-files | File selection mode (--all-files or staged) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| SKIP | No | "" | Comma-separated hook IDs to skip |
| PRE_COMMIT_HOME | No | ~/.cache/pre-commit | Pre-commit cache directory |

## Outputs

- **Success Exit Code**: 0 (all hooks passed)
- **Error Exit Codes**: Non-zero (one or more hooks failed)
- **Output**: Detailed results from each hook
## Pre-commit Hooks Included

The following hooks are configured in `.pre-commit-config.yaml`:

### General Hooks

- **trailing-whitespace**: Remove trailing whitespace
- **end-of-file-fixer**: Ensure files end with newline
- **check-yaml**: Validate YAML syntax
- **check-json**: Validate JSON syntax
- **check-merge-conflict**: Detect merge conflict markers
- **check-added-large-files**: Prevent committing large files

### Python Hooks

- **black**: Code formatting
- **isort**: Import sorting
- **flake8**: Linting
- **mypy**: Type checking

### Go Hooks

- **gofmt**: Code formatting
- **go-vet**: Static analysis
- **golangci-lint**: Comprehensive linting

### JavaScript/TypeScript Hooks

- **prettier**: Code formatting
- **eslint**: Linting and code quality

### Markdown Hooks

- **markdownlint**: Markdown linting and formatting

### Security Hooks

- **detect-private-key**: Prevent committing private keys
- **detect-aws-credentials**: Prevent committing AWS credentials
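A corresponding `.pre-commit-config.yaml` fragment might look like the sketch below. The hook IDs come from the lists above, but the repository pins (`rev`) are illustrative placeholders, not the project's actual versions:

```yaml
# Sketch of .pre-commit-config.yaml entries for some of the hooks above;
# pin the `rev` values to match the project's real configuration.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0  # placeholder pin
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-json
      - id: check-merge-conflict
      - id: check-added-large-files
      - id: detect-private-key
  - repo: https://github.com/psf/black
    rev: 24.3.0  # placeholder pin
    hooks:
      - id: black
```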
## Examples

### Example 1: Full Quality Check

```bash
# Run all hooks on all files
source .venv/bin/activate
.github/skills/scripts/skill-runner.sh qa-precommit-all
```

Output:

```
Trim Trailing Whitespace.....................................Passed
Fix End of Files.............................................Passed
Check Yaml...................................................Passed
Check JSON...................................................Passed
Check for merge conflicts....................................Passed
Check for added large files..................................Passed
black........................................................Passed
isort........................................................Passed
prettier.....................................................Passed
eslint.......................................................Passed
markdownlint.................................................Passed
```

### Example 2: Quick Staged Files Check

```bash
# Run only on staged files (faster for pre-commit)
.github/skills/scripts/skill-runner.sh qa-precommit-all staged
```

### Example 3: Skip Slow Hooks

```bash
# Skip time-consuming hooks for quick validation
SKIP=golangci-lint,mypy .github/skills/scripts/skill-runner.sh qa-precommit-all
```

### Example 4: CI/CD Pipeline Integration

```yaml
# GitHub Actions example
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.11'

- name: Install pre-commit
  run: pip install pre-commit

- name: Run QA Pre-commit Checks
  run: .github/skills/scripts/skill-runner.sh qa-precommit-all
```

### Example 5: Auto-fix Mode

```bash
# Some hooks can auto-fix issues
# Run twice: first to fix, second to validate
.github/skills/scripts/skill-runner.sh qa-precommit-all || \
  .github/skills/scripts/skill-runner.sh qa-precommit-all
```
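The run-twice pattern from Example 5 can be wrapped in a small helper; a hypothetical sketch, not a script shipped with this skill:

```shell
# Hypothetical helper formalizing the "run twice" auto-fix pattern:
# a first failing pass may apply fixes, so a second pass validates them.
run_with_autofix() {
    if "$@"; then
        echo "clean on first pass"
    elif "$@"; then
        echo "fixed and validated"
    else
        echo "manual fixes required" >&2
        return 1
    fi
}

# e.g.: run_with_autofix .github/skills/scripts/skill-runner.sh qa-precommit-all
```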
## Error Handling

### Common Issues

**Virtual environment not activated**:

```
Error: pre-commit not found
Solution: source .venv/bin/activate
```

**Pre-commit not installed**:

```
Error: pre-commit command not available
Solution: pip install pre-commit
```

**Hooks not installed**:

```
Error: Run 'pre-commit install'
Solution: pre-commit install
```

**Hook execution failed**:

```
Hook X failed
Solution: Review error output and fix reported issues
```

**Language tool missing**:

```
Error: golangci-lint not found
Solution: Install required language tools
```

## Exit Codes

- **0**: All hooks passed
- **1**: One or more hooks failed
- **Other**: Hook execution error

## Hook Fixing Strategies

### Auto-fixable Issues

These hooks automatically fix issues:

- `trailing-whitespace`
- `end-of-file-fixer`
- `black`
- `isort`
- `prettier`
- `gofmt`

**Workflow**: Run pre-commit, review changes, commit fixed files.

### Manual Fixes Required

These hooks only report issues:

- `check-yaml`
- `check-json`
- `flake8`
- `eslint`
- `markdownlint`
- `go-vet`
- `golangci-lint`

**Workflow**: Review errors, manually fix code, re-run pre-commit.

## Related Skills

- [test-backend-coverage](./test-backend-coverage.SKILL.md) - Backend test coverage
- [test-frontend-coverage](./test-frontend-coverage.SKILL.md) - Frontend test coverage
- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Security scanning

## Notes

- Pre-commit hooks cache their environments for faster execution
- First run may be slow while environments are set up
- Subsequent runs are much faster (seconds vs minutes)
- Hooks run in parallel where possible
- Failed hooks stop execution (fail-fast behavior)
- Use `SKIP` to bypass specific hooks temporarily
- Recommended to run before every commit
- Can be integrated into the Git pre-commit hook for automatic checks
- Cache location: `~/.cache/pre-commit` (configurable)

## Performance Tips

- **Initial Setup**: First run takes longer (installing hook environments)
- **Incremental**: Run on staged files only for faster feedback
- **Parallel**: Pre-commit runs compatible hooks in parallel
- **Cache**: Hook environments are cached and reused
- **Skip**: Use `SKIP` to bypass slow hooks during development

## Integration with Git

To automatically run on every commit:

```bash
# Install Git pre-commit hook
pre-commit install

# Now pre-commit runs automatically on git commit
git commit -m "Your commit message"
```

To bypass the pre-commit hook temporarily:

```bash
git commit --no-verify -m "Emergency commit"
```

## Configuration

Pre-commit configuration is in `.pre-commit-config.yaml`. To update hooks:

```bash
# Update to latest versions
pre-commit autoupdate

# Clean cache and re-install
pre-commit clean
pre-commit install --install-hooks
```

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `pre-commit run --all-files`
202
.github/skills/scripts/_environment_helpers.sh
vendored
@@ -1,202 +0,0 @@
#!/usr/bin/env bash
# Agent Skills - Environment Helpers
#
# Provides environment validation and setup utilities.

# validate_go_environment: Check Go installation and version
validate_go_environment() {
    local min_version="${1:-1.23}"

    if ! command -v go >/dev/null 2>&1; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Go is not installed or not in PATH"
        else
            echo "[ERROR] Go is not installed or not in PATH" >&2
        fi
        return 1
    fi

    local go_version
    go_version=$(go version | grep -oP 'go\K[0-9]+\.[0-9]+' || echo "0.0")

    if declare -f log_debug >/dev/null 2>&1; then
        log_debug "Go version: ${go_version} (required: >=${min_version})"
    fi

    # Simple version comparison (assumes semantic versioning)
    if [[ "$(printf '%s\n' "${min_version}" "${go_version}" | sort -V | head -n1)" != "${min_version}" ]]; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Go version ${go_version} is below minimum required version ${min_version}"
        else
            echo "[ERROR] Go version ${go_version} is below minimum required version ${min_version}" >&2
        fi
        return 1
    fi

    return 0
}

# validate_python_environment: Check Python installation and version
validate_python_environment() {
    local min_version="${1:-3.8}"

    if ! command -v python3 >/dev/null 2>&1; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Python 3 is not installed or not in PATH"
        else
            echo "[ERROR] Python 3 is not installed or not in PATH" >&2
        fi
        return 1
    fi

    local python_version
    python_version=$(python3 --version 2>&1 | grep -oP 'Python \K[0-9]+\.[0-9]+' || echo "0.0")

    if declare -f log_debug >/dev/null 2>&1; then
        log_debug "Python version: ${python_version} (required: >=${min_version})"
    fi

    # Simple version comparison
    if [[ "$(printf '%s\n' "${min_version}" "${python_version}" | sort -V | head -n1)" != "${min_version}" ]]; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Python version ${python_version} is below minimum required version ${min_version}"
        else
            echo "[ERROR] Python version ${python_version} is below minimum required version ${min_version}" >&2
        fi
        return 1
    fi

    return 0
}

# validate_node_environment: Check Node.js installation and version
validate_node_environment() {
    local min_version="${1:-18.0}"

    if ! command -v node >/dev/null 2>&1; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Node.js is not installed or not in PATH"
        else
            echo "[ERROR] Node.js is not installed or not in PATH" >&2
        fi
        return 1
    fi

    local node_version
    node_version=$(node --version | grep -oP 'v\K[0-9]+\.[0-9]+' || echo "0.0")

    if declare -f log_debug >/dev/null 2>&1; then
        log_debug "Node.js version: ${node_version} (required: >=${min_version})"
    fi

    # Simple version comparison
    if [[ "$(printf '%s\n' "${min_version}" "${node_version}" | sort -V | head -n1)" != "${min_version}" ]]; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Node.js version ${node_version} is below minimum required version ${min_version}"
        else
            echo "[ERROR] Node.js version ${node_version} is below minimum required version ${min_version}" >&2
        fi
        return 1
    fi

    return 0
}

# validate_docker_environment: Check Docker installation and daemon
validate_docker_environment() {
    if ! command -v docker >/dev/null 2>&1; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Docker is not installed or not in PATH"
        else
            echo "[ERROR] Docker is not installed or not in PATH" >&2
        fi
        return 1
    fi

    # Check if Docker daemon is running
    if ! docker info >/dev/null 2>&1; then
        if declare -f log_error >/dev/null 2>&1; then
            log_error "Docker daemon is not running"
        else
            echo "[ERROR] Docker daemon is not running" >&2
        fi
        return 1
    fi

    if declare -f log_debug >/dev/null 2>&1; then
        local docker_version
        docker_version=$(docker --version | grep -oP 'Docker version \K[0-9]+\.[0-9]+\.[0-9]+' || echo "unknown")
        log_debug "Docker version: ${docker_version}"
    fi

    return 0
}

# set_default_env: Set environment variable with default value if not set
set_default_env() {
    local var_name="$1"
    local default_value="$2"

    if [[ -z "${!var_name:-}" ]]; then
        export "${var_name}=${default_value}"

        if declare -f log_debug >/dev/null 2>&1; then
            log_debug "Set ${var_name}=${default_value} (default)"
        fi
    else
        if declare -f log_debug >/dev/null 2>&1; then
            log_debug "Using ${var_name}=${!var_name} (from environment)"
        fi
    fi
}

# validate_project_structure: Check we're in the correct project directory
validate_project_structure() {
    local required_files=("$@")

    for file in "${required_files[@]}"; do
        if [[ ! -e "${file}" ]]; then
            if declare -f log_error >/dev/null 2>&1; then
                log_error "Required file/directory not found: ${file}"
                log_error "Are you running this from the project root?"
            else
                echo "[ERROR] Required file/directory not found: ${file}" >&2
                echo "[ERROR] Are you running this from the project root?" >&2
            fi
            return 1
        fi
    done

    return 0
}

# get_project_root: Find project root by looking for marker files
get_project_root() {
    local marker_file="${1:-.git}"
    local current_dir
    current_dir="$(pwd)"

    while [[ "${current_dir}" != "/" ]]; do
        if [[ -e "${current_dir}/${marker_file}" ]]; then
            echo "${current_dir}"
            return 0
        fi
        current_dir="$(dirname "${current_dir}")"
    done

    if declare -f log_error >/dev/null 2>&1; then
        log_error "Could not find project root (looking for ${marker_file})"
    else
        echo "[ERROR] Could not find project root (looking for ${marker_file})" >&2
    fi
    return 1
}

# Export functions
export -f validate_go_environment
export -f validate_python_environment
export -f validate_node_environment
export -f validate_docker_environment
export -f set_default_env
export -f validate_project_structure
export -f get_project_root
134 .github/skills/scripts/_error_handling_helpers.sh (vendored)

@@ -1,134 +0,0 @@
```bash
#!/usr/bin/env bash
# Agent Skills - Error Handling Helpers
#
# Provides error handling utilities for robust skill execution.

# error_exit: Print error message and exit with code
error_exit() {
    local message="$1"
    local exit_code="${2:-1}"

    # Fall back to plain stderr output if logging helpers are not loaded
    if ! declare -f log_error >/dev/null 2>&1; then
        echo "[ERROR] ${message}" >&2
    else
        log_error "${message}"
    fi

    exit "${exit_code}"
}

# check_command_exists: Verify a command is available
check_command_exists() {
    local cmd="$1"
    local error_msg="${2:-Command not found: ${cmd}}"

    if ! command -v "${cmd}" >/dev/null 2>&1; then
        error_exit "${error_msg}" 127
    fi
}

# check_file_exists: Verify a file exists
check_file_exists() {
    local file="$1"
    local error_msg="${2:-File not found: ${file}}"

    if [[ ! -f "${file}" ]]; then
        error_exit "${error_msg}" 1
    fi
}

# check_dir_exists: Verify a directory exists
check_dir_exists() {
    local dir="$1"
    local error_msg="${2:-Directory not found: ${dir}}"

    if [[ ! -d "${dir}" ]]; then
        error_exit "${error_msg}" 1
    fi
}

# check_exit_code: Verify previous command succeeded
check_exit_code() {
    local exit_code=$?
    local error_msg="${1:-Command failed with exit code ${exit_code}}"

    if [[ ${exit_code} -ne 0 ]]; then
        error_exit "${error_msg}" "${exit_code}"
    fi
}

# run_with_retry: Run a command with retry logic
run_with_retry() {
    local max_attempts="${1}"
    local delay="${2}"
    shift 2
    local cmd=("$@")

    local attempt=1
    while [[ ${attempt} -le ${max_attempts} ]]; do
        if "${cmd[@]}"; then
            return 0
        fi

        if [[ ${attempt} -lt ${max_attempts} ]]; then
            if declare -f log_warning >/dev/null 2>&1; then
                log_warning "Command failed (attempt ${attempt}/${max_attempts}). Retrying in ${delay}s..."
            else
                echo "[WARNING] Command failed (attempt ${attempt}/${max_attempts}). Retrying in ${delay}s..." >&2
            fi
            sleep "${delay}"
        fi

        ((attempt++))
    done

    if declare -f log_error >/dev/null 2>&1; then
        log_error "Command failed after ${max_attempts} attempts: ${cmd[*]}"
    else
        echo "[ERROR] Command failed after ${max_attempts} attempts: ${cmd[*]}" >&2
    fi
    return 1
}

# trap_error: Set up error trapping for the current script
trap_error() {
    local script_name="${1:-${BASH_SOURCE[1]}}"

    # Expand script_name immediately (it is local and out of scope when the
    # trap fires); defer LINENO/BASH_COMMAND expansion to trap time.
    trap 'error_handler "${LINENO}" "${BASH_LINENO}" "${BASH_COMMAND}" "'"${script_name}"'"' ERR
}

# error_handler: Internal error handler for trap
error_handler() {
    local line_no="$1"
    local bash_line_no="$2"
    local command="$3"
    local script="$4"

    if declare -f log_error >/dev/null 2>&1; then
        log_error "Script failed at line ${line_no} in ${script}"
        log_error "Command: ${command}"
    else
        echo "[ERROR] Script failed at line ${line_no} in ${script}" >&2
        echo "[ERROR] Command: ${command}" >&2
    fi
}

# cleanup_on_exit: Register a cleanup function to run on exit
cleanup_on_exit() {
    local cleanup_func="$1"

    # Register cleanup function
    trap "${cleanup_func}" EXIT
}

# Export functions
export -f error_exit
export -f check_command_exists
export -f check_file_exists
export -f check_dir_exists
export -f check_exit_code
export -f run_with_retry
export -f trap_error
export -f error_handler
export -f cleanup_on_exit
```
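The retry pattern above can be exercised in isolation. Below is a trimmed-down sketch: `flaky` is a hypothetical command that fails twice before succeeding, and `retry` is a simplified stand-in for `run_with_retry` without the logging integration.

```shell
# Trimmed-down sketch of the run_with_retry pattern.
# "flaky" fails on its first two invocations, then succeeds.
attempts=0

flaky() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 3 ]
}

retry() {
    max="$1"; delay="$2"; shift 2
    n=1
    while [ "$n" -le "$max" ]; do
        if "$@"; then return 0; fi
        [ "$n" -lt "$max" ] && sleep "$delay"
        n=$((n + 1))
    done
    return 1
}

retry 5 0 flaky && echo "succeeded on attempt $attempts"
```

Because the command is run through `"$@"`, arguments with spaces survive intact, which is why the real helper stores the command as an array.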
109 .github/skills/scripts/_logging_helpers.sh (vendored)

@@ -1,109 +0,0 @@
```bash
#!/usr/bin/env bash
# Agent Skills - Logging Helpers
#
# Provides colored logging functions for consistent output across all skills.

# Color codes
readonly COLOR_RESET="\033[0m"
readonly COLOR_RED="\033[0;31m"
readonly COLOR_GREEN="\033[0;32m"
readonly COLOR_YELLOW="\033[0;33m"
readonly COLOR_BLUE="\033[0;34m"
readonly COLOR_MAGENTA="\033[0;35m"
readonly COLOR_CYAN="\033[0;36m"
readonly COLOR_GRAY="\033[0;90m"

# Check if output is a terminal (for color support)
if [[ -t 1 ]]; then
    COLORS_ENABLED=true
else
    COLORS_ENABLED=false
fi

# Disable colors if NO_COLOR environment variable is set
if [[ -n "${NO_COLOR:-}" ]]; then
    COLORS_ENABLED=false
fi

# log_info: Print informational message
log_info() {
    local message="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_BLUE}[INFO]${COLOR_RESET} ${message}"
    else
        echo "[INFO] ${message}"
    fi
}

# log_success: Print success message
log_success() {
    local message="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_GREEN}[SUCCESS]${COLOR_RESET} ${message}"
    else
        echo "[SUCCESS] ${message}"
    fi
}

# log_warning: Print warning message (to stderr)
log_warning() {
    local message="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_YELLOW}[WARNING]${COLOR_RESET} ${message}" >&2
    else
        echo "[WARNING] ${message}" >&2
    fi
}

# log_error: Print error message (to stderr)
log_error() {
    local message="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_RED}[ERROR]${COLOR_RESET} ${message}" >&2
    else
        echo "[ERROR] ${message}" >&2
    fi
}

# log_debug: Print debug message (only if DEBUG=1)
log_debug() {
    if [[ "${DEBUG:-0}" == "1" ]]; then
        local message="$*"
        if [[ "${COLORS_ENABLED}" == "true" ]]; then
            echo -e "${COLOR_GRAY}[DEBUG]${COLOR_RESET} ${message}"
        else
            echo "[DEBUG] ${message}"
        fi
    fi
}

# log_step: Print step header
log_step() {
    local step_name="$1"
    shift
    local message="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_CYAN}[${step_name}]${COLOR_RESET} ${message}"
    else
        echo "[${step_name}] ${message}"
    fi
}

# log_command: Log a command before executing (for transparency)
log_command() {
    local command="$*"
    if [[ "${COLORS_ENABLED}" == "true" ]]; then
        echo -e "${COLOR_MAGENTA}[\$]${COLOR_RESET} ${command}"
    else
        echo "[\$] ${command}"
    fi
}

# Export functions so they can be used by sourcing scripts
export -f log_info
export -f log_success
export -f log_warning
export -f log_error
export -f log_debug
export -f log_step
export -f log_command
```
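The color gating above (TTY check plus the `NO_COLOR` opt-out) can be sketched standalone; `log_info` below is a simplified stand-in for the sourced helper, and `NO_COLOR=1` forces the uncolored branch regardless of the terminal:

```shell
# Minimal sketch of the color-gating logic: colors are used only when
# stdout is a terminal AND NO_COLOR is unset.
NO_COLOR=1

colors_enabled=true
[ -t 1 ] || colors_enabled=false
[ -n "${NO_COLOR:-}" ] && colors_enabled=false

log_info() {
    if [ "$colors_enabled" = "true" ]; then
        printf '\033[0;34m[INFO]\033[0m %s\n' "$*"
    else
        printf '[INFO] %s\n' "$*"
    fi
}

msg="$(log_info "coverage run started")"
echo "$msg"
```

This is why skill output stays machine-readable when piped into CI logs: the `[ -t 1 ]` test fails for pipes, so the plain branch is taken automatically.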
96 .github/skills/scripts/skill-runner.sh (vendored)

@@ -1,96 +0,0 @@
```bash
#!/usr/bin/env bash
# Agent Skills Universal Skill Runner
#
# This script locates and executes Agent Skills by name, providing a unified
# interface for running skills from tasks.json, CI/CD workflows, and the CLI.
#
# Usage:
#   skill-runner.sh <skill-name> [args...]
#
# Exit Codes:
#   0   - Skill executed successfully
#   1   - Skill not found or invalid
#   2   - Skill execution failed
#   126 - Skill script not executable
#   127 - Skill script not found

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_logging_helpers.sh
source "${SCRIPT_DIR}/_logging_helpers.sh"
# shellcheck source=_error_handling_helpers.sh
source "${SCRIPT_DIR}/_error_handling_helpers.sh"
# shellcheck source=_environment_helpers.sh
source "${SCRIPT_DIR}/_environment_helpers.sh"

# Configuration
SKILLS_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
PROJECT_ROOT="$(cd "${SKILLS_DIR}/../.." && pwd)"

# Validate arguments
if [[ $# -eq 0 ]]; then
    log_error "Usage: skill-runner.sh <skill-name> [args...]"
    log_error "Example: skill-runner.sh test-backend-coverage"
    exit 1
fi

SKILL_NAME="$1"
shift # Remove skill name from arguments

# Validate skill name format
if [[ ! "${SKILL_NAME}" =~ ^[a-z][a-z0-9-]*$ ]]; then
    log_error "Invalid skill name: ${SKILL_NAME}"
    log_error "Skill names must be kebab-case (lowercase, hyphens, start with letter)"
    exit 1
fi

# Verify SKILL.md exists
SKILL_FILE="${SKILLS_DIR}/${SKILL_NAME}.SKILL.md"
if [[ ! -f "${SKILL_FILE}" ]]; then
    log_error "Skill not found: ${SKILL_NAME}"
    log_error "Expected file: ${SKILL_FILE}"
    log_info "Available skills:"
    for skill_file in "${SKILLS_DIR}"/*.SKILL.md; do
        if [[ -f "${skill_file}" ]]; then
            basename "${skill_file}" .SKILL.md
        fi
    done | sort | sed 's/^/  - /'
    exit 1
fi

# Locate skill execution script (flat structure: skill-name-scripts/run.sh)
SKILL_SCRIPT="${SKILLS_DIR}/${SKILL_NAME}-scripts/run.sh"

if [[ ! -f "${SKILL_SCRIPT}" ]]; then
    log_error "Skill execution script not found: ${SKILL_SCRIPT}"
    log_error "Expected: ${SKILL_NAME}-scripts/run.sh"
    exit 1
fi

if [[ ! -x "${SKILL_SCRIPT}" ]]; then
    log_error "Skill execution script is not executable: ${SKILL_SCRIPT}"
    log_error "Fix with: chmod +x ${SKILL_SCRIPT}"
    exit 126
fi

# Log skill execution
log_info "Executing skill: ${SKILL_NAME}"
log_debug "Skill file: ${SKILL_FILE}"
log_debug "Skill script: ${SKILL_SCRIPT}"
log_debug "Working directory: ${PROJECT_ROOT}"
log_debug "Arguments: $*"

# Change to project root for execution
cd "${PROJECT_ROOT}"

# Execute skill with all remaining arguments
# shellcheck disable=SC2294
if ! "${SKILL_SCRIPT}" "$@"; then
    log_error "Skill execution failed: ${SKILL_NAME}"
    exit 2
fi

log_success "Skill completed successfully: ${SKILL_NAME}"
exit 0
```
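The kebab-case check the runner performs can also be expressed portably; this sketch uses `grep -E` so it works in plain `sh` as well (the runner itself relies on bash's `[[ =~ ]]`):

```shell
# Portable sketch of the runner's skill-name validation:
# lowercase, digits, hyphens, must start with a letter.
is_valid_skill_name() {
    printf '%s\n' "$1" | grep -Eq '^[a-z][a-z0-9-]*$'
}

if is_valid_skill_name "test-backend-coverage"; then
    echo "test-backend-coverage: valid"
fi
if ! is_valid_skill_name "Test_Backend"; then
    echo "Test_Backend: rejected"
fi
```

Validating the name before building any path is also a safety measure: it rules out `../` sequences, so the constructed `${SKILL_NAME}-scripts/run.sh` cannot escape the skills directory.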
422 .github/skills/scripts/validate-skills.py (vendored)

@@ -1,422 +0,0 @@
```python
#!/usr/bin/env python3
"""
Agent Skills Frontmatter Validator

Validates YAML frontmatter in .SKILL.md files against the agentskills.io
specification. Ensures required fields are present, formats are correct,
and custom metadata follows project conventions.

Usage:
    python3 validate-skills.py [path/to/.github/skills/]
    python3 validate-skills.py --single path/to/skill.SKILL.md

Exit Codes:
    0 - All validations passed
    1 - Validation errors found
    2 - Script error (missing dependencies, invalid arguments)
"""

import os
import sys
import re
import argparse
from pathlib import Path
from typing import List, Dict, Tuple, Any, Optional

try:
    import yaml
except ImportError:
    print("Error: PyYAML is required. Install with: pip install pyyaml", file=sys.stderr)
    sys.exit(2)


# Validation rules
REQUIRED_FIELDS = ["name", "version", "description", "author", "license", "tags"]
VALID_CATEGORIES = ["test", "integration-test", "security", "qa", "build", "utility", "docker"]
VALID_EXECUTION_TIMES = ["short", "medium", "long"]
VALID_RISK_LEVELS = ["low", "medium", "high"]
VALID_OS_VALUES = ["linux", "darwin", "windows"]
VALID_SHELL_VALUES = ["bash", "sh", "zsh", "powershell", "cmd"]

VERSION_REGEX = re.compile(r'^\d+\.\d+\.\d+$')
NAME_REGEX = re.compile(r'^[a-z][a-z0-9-]*$')


class ValidationError:
    """Represents a validation error with context."""

    def __init__(self, skill_file: str, field: str, message: str, severity: str = "error"):
        self.skill_file = skill_file
        self.field = field
        self.message = message
        self.severity = severity

    def __str__(self) -> str:
        return f"[{self.severity.upper()}] {self.skill_file} :: {self.field}: {self.message}"


class SkillValidator:
    """Validates Agent Skills frontmatter."""

    def __init__(self, strict: bool = False):
        self.strict = strict
        self.errors: List[ValidationError] = []
        self.warnings: List[ValidationError] = []

    def validate_file(self, skill_path: Path) -> Tuple[bool, List[ValidationError]]:
        """Validate a single SKILL.md file."""
        try:
            with open(skill_path, 'r', encoding='utf-8') as f:
                content = f.read()
        except Exception as e:
            return False, [ValidationError(str(skill_path), "file", f"Cannot read file: {e}")]

        # Extract frontmatter
        frontmatter = self._extract_frontmatter(content)
        if not frontmatter:
            return False, [ValidationError(str(skill_path), "frontmatter", "No valid YAML frontmatter found")]

        # Parse YAML
        try:
            data = yaml.safe_load(frontmatter)
        except yaml.YAMLError as e:
            return False, [ValidationError(str(skill_path), "yaml", f"Invalid YAML: {e}")]

        if not isinstance(data, dict):
            return False, [ValidationError(str(skill_path), "yaml", "Frontmatter must be a YAML object")]

        # Run validation checks
        file_errors: List[ValidationError] = []
        file_errors.extend(self._validate_required_fields(skill_path, data))
        file_errors.extend(self._validate_name(skill_path, data))
        file_errors.extend(self._validate_version(skill_path, data))
        file_errors.extend(self._validate_description(skill_path, data))
        file_errors.extend(self._validate_tags(skill_path, data))
        file_errors.extend(self._validate_compatibility(skill_path, data))
        file_errors.extend(self._validate_metadata(skill_path, data))

        # Separate errors and warnings
        errors = [e for e in file_errors if e.severity == "error"]
        warnings = [e for e in file_errors if e.severity == "warning"]

        self.errors.extend(errors)
        self.warnings.extend(warnings)

        return len(errors) == 0, file_errors

    def _extract_frontmatter(self, content: str) -> Optional[str]:
        """Extract YAML frontmatter from markdown content."""
        if not content.startswith('---\n'):
            return None

        end_marker = content.find('\n---\n', 4)
        if end_marker == -1:
            return None

        return content[4:end_marker]

    def _validate_required_fields(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Check that all required fields are present."""
        errors = []
        for field in REQUIRED_FIELDS:
            if field not in data:
                errors.append(ValidationError(
                    str(skill_path), field, "Required field missing"
                ))
            elif not data[field]:
                errors.append(ValidationError(
                    str(skill_path), field, "Required field is empty"
                ))
        return errors

    def _validate_name(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate name field format."""
        errors = []
        if "name" in data:
            name = data["name"]
            if not isinstance(name, str):
                errors.append(ValidationError(
                    str(skill_path), "name", "Must be a string"
                ))
            elif not NAME_REGEX.match(name):
                errors.append(ValidationError(
                    str(skill_path), "name",
                    "Must be kebab-case (lowercase, hyphens only, start with letter)"
                ))
        return errors

    def _validate_version(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate version field format."""
        errors = []
        if "version" in data:
            version = data["version"]
            if not isinstance(version, str):
                errors.append(ValidationError(
                    str(skill_path), "version", "Must be a string"
                ))
            elif not VERSION_REGEX.match(version):
                errors.append(ValidationError(
                    str(skill_path), "version",
                    "Must follow semantic versioning (x.y.z)"
                ))
        return errors

    def _validate_description(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate description field."""
        errors = []
        if "description" in data:
            desc = data["description"]
            if not isinstance(desc, str):
                errors.append(ValidationError(
                    str(skill_path), "description", "Must be a string"
                ))
            elif len(desc) > 120:
                errors.append(ValidationError(
                    str(skill_path), "description",
                    f"Must be 120 characters or less (current: {len(desc)})"
                ))
            elif '\n' in desc:
                errors.append(ValidationError(
                    str(skill_path), "description", "Must be a single line"
                ))
        return errors

    def _validate_tags(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate tags field."""
        errors = []
        if "tags" in data:
            tags = data["tags"]
            if not isinstance(tags, list):
                errors.append(ValidationError(
                    str(skill_path), "tags", "Must be a list"
                ))
            elif len(tags) < 2:
                errors.append(ValidationError(
                    str(skill_path), "tags", "Must have at least 2 tags"
                ))
            elif len(tags) > 5:
                errors.append(ValidationError(
                    str(skill_path), "tags",
                    f"Must have at most 5 tags (current: {len(tags)})",
                    severity="warning"
                ))
            else:
                for tag in tags:
                    if not isinstance(tag, str):
                        errors.append(ValidationError(
                            str(skill_path), "tags", "All tags must be strings"
                        ))
                    elif tag != tag.lower():
                        errors.append(ValidationError(
                            str(skill_path), "tags",
                            f"Tag '{tag}' should be lowercase",
                            severity="warning"
                        ))
        return errors

    def _validate_compatibility(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate compatibility section."""
        errors = []
        if "compatibility" in data:
            compat = data["compatibility"]
            if not isinstance(compat, dict):
                errors.append(ValidationError(
                    str(skill_path), "compatibility", "Must be an object"
                ))
            else:
                # Validate OS
                if "os" in compat:
                    os_list = compat["os"]
                    if not isinstance(os_list, list):
                        errors.append(ValidationError(
                            str(skill_path), "compatibility.os", "Must be a list"
                        ))
                    else:
                        for os_val in os_list:
                            if os_val not in VALID_OS_VALUES:
                                errors.append(ValidationError(
                                    str(skill_path), "compatibility.os",
                                    f"Invalid OS '{os_val}'. Valid: {VALID_OS_VALUES}",
                                    severity="warning"
                                ))

                # Validate shells
                if "shells" in compat:
                    shells = compat["shells"]
                    if not isinstance(shells, list):
                        errors.append(ValidationError(
                            str(skill_path), "compatibility.shells", "Must be a list"
                        ))
                    else:
                        for shell in shells:
                            if shell not in VALID_SHELL_VALUES:
                                errors.append(ValidationError(
                                    str(skill_path), "compatibility.shells",
                                    f"Invalid shell '{shell}'. Valid: {VALID_SHELL_VALUES}",
                                    severity="warning"
                                ))
        return errors

    def _validate_metadata(self, skill_path: Path, data: Dict) -> List[ValidationError]:
        """Validate custom metadata section."""
        errors = []
        if "metadata" not in data:
            return errors  # Metadata is optional

        metadata = data["metadata"]
        if not isinstance(metadata, dict):
            errors.append(ValidationError(
                str(skill_path), "metadata", "Must be an object"
            ))
            return errors

        # Validate category
        if "category" in metadata:
            category = metadata["category"]
            if category not in VALID_CATEGORIES:
                errors.append(ValidationError(
                    str(skill_path), "metadata.category",
                    f"Invalid category '{category}'. Valid: {VALID_CATEGORIES}",
                    severity="warning"
                ))

        # Validate execution_time
        if "execution_time" in metadata:
            exec_time = metadata["execution_time"]
            if exec_time not in VALID_EXECUTION_TIMES:
                errors.append(ValidationError(
                    str(skill_path), "metadata.execution_time",
                    f"Invalid execution_time '{exec_time}'. Valid: {VALID_EXECUTION_TIMES}",
                    severity="warning"
                ))

        # Validate risk_level
        if "risk_level" in metadata:
            risk = metadata["risk_level"]
            if risk not in VALID_RISK_LEVELS:
                errors.append(ValidationError(
                    str(skill_path), "metadata.risk_level",
                    f"Invalid risk_level '{risk}'. Valid: {VALID_RISK_LEVELS}",
                    severity="warning"
                ))

        # Validate boolean fields
        for bool_field in ["ci_cd_safe", "requires_network", "idempotent"]:
            if bool_field in metadata:
                if not isinstance(metadata[bool_field], bool):
                    errors.append(ValidationError(
                        str(skill_path), f"metadata.{bool_field}",
                        "Must be a boolean (true/false)",
                        severity="warning"
                    ))

        return errors

    def validate_directory(self, skills_dir: Path) -> bool:
        """Validate all SKILL.md files in a directory."""
        if not skills_dir.exists():
            print(f"Error: Directory not found: {skills_dir}", file=sys.stderr)
            return False

        skill_files = list(skills_dir.glob("*.SKILL.md"))
        if not skill_files:
            print(f"Warning: No .SKILL.md files found in {skills_dir}", file=sys.stderr)
            return True  # Not an error, just nothing to validate

        print(f"Validating {len(skill_files)} skill(s)...\n")

        success_count = 0
        for skill_file in sorted(skill_files):
            is_valid, _ = self.validate_file(skill_file)
            if is_valid:
                success_count += 1
                print(f"✓ {skill_file.name}")
            else:
                print(f"✗ {skill_file.name}")

        # Print summary
        print(f"\n{'='*70}")
        print("Validation Summary:")
        print(f"  Total skills:  {len(skill_files)}")
        print(f"  Passed:        {success_count}")
        print(f"  Failed:        {len(skill_files) - success_count}")
        print(f"  Errors:        {len(self.errors)}")
        print(f"  Warnings:      {len(self.warnings)}")
        print(f"{'='*70}\n")

        # Print errors
        if self.errors:
            print("ERRORS:")
            for error in self.errors:
                print(f"  {error}")
            print()

        # Print warnings
        if self.warnings:
            print("WARNINGS:")
            for warning in self.warnings:
                print(f"  {warning}")
            print()

        return len(self.errors) == 0


def main():
    parser = argparse.ArgumentParser(
        description="Validate Agent Skills frontmatter",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog=__doc__
    )
    parser.add_argument(
        "path",
        nargs="?",
        default=".github/skills",
        help="Path to .github/skills directory or single .SKILL.md file (default: .github/skills)"
    )
    parser.add_argument(
        "--strict",
        action="store_true",
        help="Treat warnings as errors"
    )
    parser.add_argument(
        "--single",
        action="store_true",
        help="Validate a single .SKILL.md file instead of a directory"
    )

    args = parser.parse_args()

    validator = SkillValidator(strict=args.strict)
    path = Path(args.path)

    if args.single:
        if not path.exists():
            print(f"Error: File not found: {path}", file=sys.stderr)
            return 2

        is_valid, errors = validator.validate_file(path)

        if is_valid:
            print(f"✓ {path.name} is valid")
            if errors:  # Warnings only
                print("\nWARNINGS:")
                for error in errors:
                    print(f"  {error}")
        else:
            print(f"✗ {path.name} has errors")
            for error in errors:
                print(f"  {error}")

        return 0 if is_valid else 1
    else:
        success = validator.validate_directory(path)

        if args.strict and validator.warnings:
            print("Strict mode: treating warnings as errors", file=sys.stderr)
            success = False

        return 0 if success else 1


if __name__ == "__main__":
    sys.exit(main())
```
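The frontmatter-extraction step above is easy to exercise in isolation. Here is a self-contained sketch of the same slicing logic (no PyYAML needed just to locate the block; the sample document is made up for illustration):

```python
# Standalone sketch of the validator's frontmatter extraction:
# return the text between the leading '---' fences, or None.
def extract_frontmatter(content: str):
    if not content.startswith('---\n'):
        return None
    # Search from index 4 so the opening '---\n' is skipped.
    end_marker = content.find('\n---\n', 4)
    if end_marker == -1:
        return None
    return content[4:end_marker]

doc = "---\nname: test-backend-unit\nversion: 1.0.0\n---\n# Title\n"
print(extract_frontmatter(doc))  # name: test-backend-unit / version: 1.0.0
```

Note that a file whose opening `---` is never closed returns `None` rather than the rest of the file, which is why the validator can report "No valid YAML frontmatter found" without ever calling the YAML parser.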
242 .github/skills/security-scan-codeql-scripts/run.sh (vendored)

@@ -1,242 +0,0 @@
#!/usr/bin/env bash
|
||||
# Security Scan CodeQL - Execution Script
|
||||
#
|
||||
# This script runs CodeQL security analysis using the security-and-quality
|
||||
# suite to match GitHub Actions CI configuration exactly.
|
||||
|
||||
set -euo pipefail
|
||||
|
||||
# Source helper scripts
|
||||
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
||||
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"
|
||||
|
||||
# shellcheck source=../scripts/_logging_helpers.sh
|
||||
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
|
||||
# shellcheck source=../scripts/_error_handling_helpers.sh
|
||||
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
|
||||
# shellcheck source=../scripts/_environment_helpers.sh
|
||||
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"
|
||||
|
||||
# Some helper scripts may not define ANSI color variables; ensure they exist
|
||||
# before using them later in this script (set -u is enabled).
|
||||
RED="${RED:-\033[0;31m}"
|
||||
GREEN="${GREEN:-\033[0;32m}"
|
||||
NC="${NC:-\033[0m}"
|
||||
|
||||
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"
|
||||
|
||||
# Set defaults
|
||||
set_default_env "CODEQL_THREADS" "0"
|
||||
set_default_env "CODEQL_FAIL_ON_ERROR" "true"
|
||||
|
||||
# Parse arguments
|
||||
LANGUAGE="${1:-all}"
|
||||
FORMAT="${2:-summary}"
|
||||
|
||||
# Validate language
|
||||
case "${LANGUAGE}" in
|
||||
go|javascript|js|all)
|
||||
;;
|
||||
*)
|
||||
log_error "Invalid language: ${LANGUAGE}. Must be one of: go, javascript, all"
|
||||
exit 2
|
||||
;;
|
||||
esac
|
||||
|
||||
# Normalize javascript -> js for internal use
|
||||
if [[ "${LANGUAGE}" == "javascript" ]]; then
|
||||
LANGUAGE="js"
|
||||
fi
|
||||
|
||||
# Validate format
|
||||
case "${FORMAT}" in
|
||||
sarif|text|summary)
|
||||
;;
|
||||
*)
|
||||
log_error "Invalid format: ${FORMAT}. Must be one of: sarif, text, summary"
|
||||
exit 2
|
||||
;;
|
||||
esac
|
||||
|
||||
# Validate CodeQL is installed
|
||||
log_step "ENVIRONMENT" "Validating CodeQL installation"
|
||||
if ! command -v codeql &> /dev/null; then
|
||||
log_error "CodeQL CLI is not installed"
|
||||
log_info "Install via: gh extension install github/gh-codeql"
|
||||
log_info "Then run: gh codeql set-version latest"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# Check CodeQL version
|
||||
CODEQL_VERSION=$(codeql version 2>/dev/null | head -1 | grep -oP '\d+\.\d+\.\d+' || echo "unknown")
|
||||
log_info "CodeQL version: ${CODEQL_VERSION}"
|
||||
|
||||
# Minimum version check
|
||||
MIN_VERSION="2.17.0"
|
||||
if [[ "${CODEQL_VERSION}" != "unknown" ]]; then
|
||||
if [[ "$(printf '%s\n' "${MIN_VERSION}" "${CODEQL_VERSION}" | sort -V | head -n1)" != "${MIN_VERSION}" ]]; then
|
||||
log_warning "CodeQL version ${CODEQL_VERSION} may be incompatible"
|
||||
log_info "Recommended: gh codeql set-version latest"
|
||||
fi
|
||||
fi
|
||||
|
||||
cd "${PROJECT_ROOT}"
|
||||
|
||||
# Track findings
|
||||
GO_ERRORS=0
|
||||
GO_WARNINGS=0
|
||||
JS_ERRORS=0
|
||||
JS_WARNINGS=0
|
||||
SCAN_FAILED=0
|
||||
|
||||
# Function to run CodeQL scan for a language
|
||||
run_codeql_scan() {
|
||||
local lang=$1
|
||||
local source_root=$2
|
||||
local db_name="codeql-db-${lang}"
|
||||
local sarif_file="codeql-results-${lang}.sarif"
|
||||
local build_mode_args=()
|
||||
local codescanning_config="${PROJECT_ROOT}/.github/codeql/codeql-config.yml"
|
||||
|
||||
# Remove generated artifacts that can create noisy/false findings during CodeQL analysis
|
||||
rm -rf "${PROJECT_ROOT}/frontend/coverage" \
|
||||
"${PROJECT_ROOT}/frontend/dist" \
|
||||
"${PROJECT_ROOT}/playwright-report" \
|
||||
"${PROJECT_ROOT}/test-results" \
|
||||
"${PROJECT_ROOT}/coverage"
|
||||
|
||||
if [[ "${lang}" == "javascript" ]]; then
|
||||
build_mode_args=(--build-mode=none)
|
||||
fi
|
||||
|
||||
log_step "CODEQL" "Scanning ${lang} code in ${source_root}/"
|
||||
|
||||
# Clean previous database
|
||||
rm -rf "${db_name}"
|
||||
|
||||
# Create database
|
||||
log_info "Creating CodeQL database..."
|
||||
if ! codeql database create "${db_name}" \
|
||||
--language="${lang}" \
|
||||
"${build_mode_args[@]}" \
|
||||
        --source-root="${source_root}" \
        --codescanning-config="${codescanning_config}" \
        --threads="${CODEQL_THREADS}" \
        --overwrite 2>&1 | while read -r line; do
            # Filter verbose output, show important messages
            if [[ "${line}" == *"error"* ]] || [[ "${line}" == *"Error"* ]]; then
                log_error "${line}"
            elif [[ "${line}" == *"warning"* ]]; then
                log_warning "${line}"
            fi
        done; then
        log_error "Failed to create CodeQL database for ${lang}"
        return 1
    fi

    # Run analysis
    log_info "Analyzing with Code Scanning config (CI-aligned query filters)..."
    if ! codeql database analyze "${db_name}" \
        --format=sarif-latest \
        --output="${sarif_file}" \
        --sarif-add-baseline-file-info \
        --threads="${CODEQL_THREADS}" 2>&1; then
        log_error "CodeQL analysis failed for ${lang}"
        return 1
    fi

    log_success "SARIF output: ${sarif_file}"

    # Parse results
    if command -v jq &> /dev/null && [[ -f "${sarif_file}" ]]; then
        local total_findings
        local error_count
        local warning_count
        local note_count

        total_findings=$(jq '.runs[].results | length' "${sarif_file}" 2>/dev/null || echo 0)
        error_count=$(jq '[.runs[].results[] | select(.level == "error")] | length' "${sarif_file}" 2>/dev/null || echo 0)
        warning_count=$(jq '[.runs[].results[] | select(.level == "warning")] | length' "${sarif_file}" 2>/dev/null || echo 0)
        note_count=$(jq '[.runs[].results[] | select(.level == "note")] | length' "${sarif_file}" 2>/dev/null || echo 0)

        log_info "Found: ${error_count} errors, ${warning_count} warnings, ${note_count} notes (${total_findings} total)"

        # Store counts for global tracking
        if [[ "${lang}" == "go" ]]; then
            GO_ERRORS=${error_count}
            GO_WARNINGS=${warning_count}
        else
            JS_ERRORS=${error_count}
            JS_WARNINGS=${warning_count}
        fi

        # Show findings based on format
        if [[ "${FORMAT}" == "text" ]] || [[ "${FORMAT}" == "summary" ]]; then
            if [[ ${total_findings} -gt 0 ]]; then
                echo ""
                log_info "Top findings:"
                jq -r '.runs[].results[] | "\(.level): \(.message.text | split("\n")[0]) (\(.locations[0].physicalLocation.artifactLocation.uri):\(.locations[0].physicalLocation.region.startLine))"' "${sarif_file}" 2>/dev/null | head -15
                echo ""
            fi
        fi

        # Check for blocking errors
        if [[ ${error_count} -gt 0 ]]; then
            log_error "${lang}: ${error_count} HIGH/CRITICAL findings detected"
            return 1
        fi
    else
        log_warning "jq not available - install for detailed analysis"
    fi

    return 0
}
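The jq counting filters used in the parse step can be exercised against a minimal hand-written SARIF-shaped document; a sketch (the fixture below is invented for illustration and assumes jq is installed):

```shell
# Minimal SARIF-shaped fixture: one error-level and one warning-level result
sarif_file=$(mktemp)
cat > "${sarif_file}" <<'EOF'
{"runs": [{"results": [
  {"level": "error",   "message": {"text": "example finding A"}},
  {"level": "warning", "message": {"text": "example finding B"}}
]}]}
EOF

# Same counting filters the skill script applies to real CodeQL output
total_findings=$(jq '.runs[].results | length' "${sarif_file}")
error_count=$(jq '[.runs[].results[] | select(.level == "error")] | length' "${sarif_file}")

echo "total=${total_findings} errors=${error_count}"
rm -f "${sarif_file}"
```

Running this against the fixture yields two total results, one of them error-level, which is exactly the split the script uses to decide whether a scan blocks.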
# Run scans based on language selection
if [[ "${LANGUAGE}" == "all" ]] || [[ "${LANGUAGE}" == "go" ]]; then
    if ! run_codeql_scan "go" "backend"; then
        SCAN_FAILED=1
    fi
fi

if [[ "${LANGUAGE}" == "all" ]] || [[ "${LANGUAGE}" == "js" ]]; then
    if ! run_codeql_scan "javascript" "frontend"; then
        SCAN_FAILED=1
    fi
fi

# Final summary
echo ""
log_step "SUMMARY" "CodeQL Security Scan Results"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

if [[ "${LANGUAGE}" == "all" ]] || [[ "${LANGUAGE}" == "go" ]]; then
    if [[ ${GO_ERRORS} -gt 0 ]]; then
        echo -e "  Go: ${RED}${GO_ERRORS} errors${NC}, ${GO_WARNINGS} warnings"
    else
        echo -e "  Go: ${GREEN}0 errors${NC}, ${GO_WARNINGS} warnings"
    fi
fi

if [[ "${LANGUAGE}" == "all" ]] || [[ "${LANGUAGE}" == "js" ]]; then
    if [[ ${JS_ERRORS} -gt 0 ]]; then
        echo -e "  JavaScript: ${RED}${JS_ERRORS} errors${NC}, ${JS_WARNINGS} warnings"
    else
        echo -e "  JavaScript: ${GREEN}0 errors${NC}, ${JS_WARNINGS} warnings"
    fi
fi

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

# Exit based on findings
if [[ "${CODEQL_FAIL_ON_ERROR}" == "true" ]] && [[ ${SCAN_FAILED} -eq 1 ]]; then
    log_error "CodeQL scan found HIGH/CRITICAL issues - fix before proceeding"
    echo ""
    log_info "View results:"
    log_info "  VS Code: Install SARIF Viewer extension, open codeql-results-*.sarif"
    log_info "  CLI: jq '.runs[].results[]' codeql-results-*.sarif"
    exit 1
else
    log_success "CodeQL scan complete - no blocking issues"
    exit 0
fi
312
.github/skills/security-scan-codeql.SKILL.md
vendored
@@ -1,312 +0,0 @@
---
# agentskills.io specification v1.0
name: "security-scan-codeql"
version: "1.0.0"
description: "Run CodeQL security analysis for Go and JavaScript/TypeScript code"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "scanning"
  - "codeql"
  - "sast"
  - "vulnerabilities"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "codeql"
    version: ">=2.17.0"
    optional: false
environment_variables:
  - name: "CODEQL_THREADS"
    description: "Number of threads for analysis (0 = auto)"
    default: "0"
    required: false
  - name: "CODEQL_FAIL_ON_ERROR"
    description: "Exit with error on HIGH/CRITICAL findings"
    default: "true"
    required: false
parameters:
  - name: "language"
    type: "string"
    description: "Language to scan (go, javascript, all)"
    default: "all"
    required: false
  - name: "format"
    type: "string"
    description: "Output format (sarif, text, summary)"
    default: "summary"
    required: false
outputs:
  - name: "sarif_files"
    type: "file"
    description: "SARIF files for each language scanned"
  - name: "summary"
    type: "stdout"
    description: "Human-readable findings summary"
  - name: "exit_code"
    type: "number"
    description: "0 if no HIGH/CRITICAL issues, non-zero otherwise"
metadata:
  category: "security"
  subcategory: "sast"
  execution_time: "long"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---
# Security Scan CodeQL

## Overview

Executes GitHub CodeQL static analysis security testing (SAST) for Go and JavaScript/TypeScript code. Uses the **security-and-quality** query suite to match the GitHub Actions CI configuration exactly.

This skill ensures local development catches the same security issues that CI would detect, preventing CI failures due to security findings.

## Prerequisites

- CodeQL CLI 2.17.0 or higher installed
- Query packs: `codeql/go-queries`, `codeql/javascript-queries`
- Sufficient disk space for CodeQL databases (~500MB per language)

## Usage

### Basic Usage

Scan all languages with summary output:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh security-scan-codeql
```

### Scan Specific Language

Scan only Go code:

```bash
.github/skills/scripts/skill-runner.sh security-scan-codeql go
```

Scan only JavaScript/TypeScript code:

```bash
.github/skills/scripts/skill-runner.sh security-scan-codeql javascript
```

### Full SARIF Output

Get detailed SARIF output for integration with tools:

```bash
.github/skills/scripts/skill-runner.sh security-scan-codeql all sarif
```

### Text Output

Get text-formatted detailed findings:

```bash
.github/skills/scripts/skill-runner.sh security-scan-codeql all text
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| language | string | No | all | Language to scan (go, javascript, all) |
| format | string | No | summary | Output format (sarif, text, summary) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CODEQL_THREADS | No | 0 | Analysis threads (0 = auto-detect) |
| CODEQL_FAIL_ON_ERROR | No | true | Fail on HIGH/CRITICAL findings |
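`CODEQL_THREADS=0` leaves core detection to CodeQL itself. If you ever want to log or pin an explicit count instead, a host-derived fallback might look like this (a sketch, not part of the skill):

```shell
# Resolve a concrete thread count: keep an explicit setting,
# otherwise fall back to the host's logical CPU count.
threads="${CODEQL_THREADS:-0}"
if [ "${threads}" = "0" ]; then
    threads=$(getconf _NPROCESSORS_ONLN 2>/dev/null || echo 1)
fi
echo "Using ${threads} CodeQL threads"
```

`getconf _NPROCESSORS_ONLN` works on both Linux and macOS, matching the skill's declared OS compatibility.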
## Query Suite

This skill uses the **security-and-quality** suite to match CI:

| Language | Suite | Queries | Coverage |
|----------|-------|---------|----------|
| Go | go-security-and-quality.qls | 61 | Security + quality issues |
| JavaScript | javascript-security-and-quality.qls | 204 | Security + quality issues |

**Note:** This matches the GitHub Actions CodeQL default configuration exactly.

## Outputs

- **SARIF Files**:
  - `codeql-results-go.sarif` - Go findings
  - `codeql-results-js.sarif` - JavaScript/TypeScript findings
- **Databases**:
  - `codeql-db-go/` - Go CodeQL database
  - `codeql-db-js/` - JavaScript CodeQL database
- **Exit Codes**:
  - 0: No HIGH/CRITICAL findings
  - 1: HIGH/CRITICAL findings detected
  - 2: Scanner error

## Security Categories

### CWE Coverage

| Category | Description | Languages |
|----------|-------------|-----------|
| CWE-079 | Cross-Site Scripting (XSS) | JS |
| CWE-089 | SQL Injection | Go, JS |
| CWE-117 | Log Injection | Go |
| CWE-200 | Information Exposure | Go, JS |
| CWE-312 | Cleartext Storage | Go, JS |
| CWE-327 | Weak Cryptography | Go, JS |
| CWE-502 | Deserialization | Go, JS |
| CWE-611 | XXE Injection | Go |
| CWE-640 | Email Injection | Go |
| CWE-798 | Hardcoded Credentials | Go, JS |
| CWE-918 | SSRF | Go, JS |

## Examples

### Example 1: Full Scan (Default)

```bash
# Scan all languages, show summary
.github/skills/scripts/skill-runner.sh security-scan-codeql
```

Output:

```
[STEP] CODEQL: Scanning Go code...
[INFO] Creating database for backend/
[INFO] Analyzing with security-and-quality suite (61 queries)
[INFO] Found: 0 errors, 5 warnings, 3 notes
[STEP] CODEQL: Scanning JavaScript code...
[INFO] Creating database for frontend/
[INFO] Analyzing with security-and-quality suite (204 queries)
[INFO] Found: 0 errors, 2 warnings, 8 notes
[SUCCESS] CodeQL scan complete - no HIGH/CRITICAL issues
```

### Example 2: Go Only with Text Output

```bash
# Detailed text output for Go findings
.github/skills/scripts/skill-runner.sh security-scan-codeql go text
```

### Example 3: CI/CD Pipeline Integration

```yaml
# GitHub Actions example (already integrated in codeql.yml)
- name: Run CodeQL Security Scan
  run: .github/skills/scripts/skill-runner.sh security-scan-codeql all summary
  continue-on-error: false
```

### Example 4: Pre-Commit Integration

```bash
# Already available via pre-commit
pre-commit run codeql-go-scan --all-files
pre-commit run codeql-js-scan --all-files
pre-commit run codeql-check-findings --all-files
```

## Error Handling

### Common Issues

**CodeQL version too old**:

```bash
Error: Extensible predicate API mismatch
Solution: Upgrade CodeQL CLI: gh codeql set-version latest
```

**Query pack not found**:

```bash
Error: Could not resolve pack codeql/go-queries
Solution: codeql pack download codeql/go-queries codeql/javascript-queries
```

**Database creation failed**:

```bash
Error: No source files found
Solution: Verify source-root points to the correct directory
```

## Exit Codes

- **0**: No HIGH/CRITICAL (error-level) findings
- **1**: HIGH/CRITICAL findings detected (blocks CI)
- **2**: Scanner error or invalid arguments
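Callers can dispatch on these three exit codes explicitly; a sketch (the `run_scan` stub stands in for the real skill-runner invocation):

```shell
# Map the skill's exit codes onto caller-side actions.
run_scan() { return "${MOCK_EXIT:-0}"; }  # stub; replace with the skill-runner call

handle_scan() {
    run_scan
    case $? in
        0) echo "clean" ;;          # no HIGH/CRITICAL findings
        1) echo "findings" ;;       # blocking findings detected
        *) echo "scanner-error" ;;  # scanner error / invalid arguments
    esac
}

MOCK_EXIT=1
result=$(handle_scan)
echo "${result}"
```

Capturing `$?` immediately after the call matters: any intervening command would overwrite the scan's status.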
## Related Skills

- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Container/dependency vulnerabilities
- [security-scan-go-vuln](./security-scan-go-vuln.SKILL.md) - Go-specific CVE checking
- [qa-precommit-all](./qa-precommit-all.SKILL.md) - Pre-commit quality checks

## CI Alignment

This skill is specifically designed to match the GitHub Actions CodeQL workflow:

| Parameter | Local | CI | Aligned |
|-----------|-------|-----|---------|
| Query Suite | security-and-quality | security-and-quality | ✅ |
| Go Queries | 61 | 61 | ✅ |
| JS Queries | 204 | 204 | ✅ |
| Threading | auto | auto | ✅ |
| Baseline Info | enabled | enabled | ✅ |

## Viewing Results

### VS Code SARIF Viewer (Recommended)

1. Install extension: `MS-SarifVSCode.sarif-viewer`
2. Open `codeql-results-go.sarif` or `codeql-results-js.sarif`
3. Navigate findings with inline annotations

### Command Line (jq)

```bash
# Count findings
jq '.runs[].results | length' codeql-results-go.sarif

# List findings
jq -r '.runs[].results[] | "\(.level): \(.message.text)"' codeql-results-go.sarif
```

### GitHub Security Tab

SARIF files are automatically uploaded to the GitHub Security tab in CI.

## Performance

| Language | Database Creation | Analysis | Total |
|----------|------------------|----------|-------|
| Go | ~30s | ~30s | ~60s |
| JavaScript | ~45s | ~45s | ~90s |
| All | ~75s | ~75s | ~150s |

**Note:** The first run downloads query packs; subsequent runs are faster.

## Notes

- Requires CodeQL CLI 2.17.0+ (use `gh codeql set-version latest` to upgrade)
- Databases are regenerated each run (not cached)
- SARIF files are gitignored (see `.gitignore`)
- Query results may vary between CodeQL versions
- Use the `.codeql/` directory for custom queries or suppressions

---

**Last Updated**: 2025-12-24
**Maintained by**: Charon Project
**Source**: CodeQL CLI + GitHub Query Packs
@@ -1,263 +0,0 @@
#!/usr/bin/env bash
# Security Scan Docker Image - Execution Script
#
# Build Docker image and scan with Grype/Syft matching CI supply chain verification
# This script replicates the exact process from the supply-chain-pr.yml workflow

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"

# Check Docker
validate_docker_environment || error_exit "Docker is required but not available"

# Check Syft
if ! command -v syft >/dev/null 2>&1; then
    log_error "Syft not found - install from: https://github.com/anchore/syft"
    log_error "Installation: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin v1.17.0"
    error_exit "Syft is required for SBOM generation" 2
fi

# Check Grype
if ! command -v grype >/dev/null 2>&1; then
    log_error "Grype not found - install from: https://github.com/anchore/grype"
    log_error "Installation: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.85.0"
    error_exit "Grype is required for vulnerability scanning" 2
fi

# Check jq
if ! command -v jq >/dev/null 2>&1; then
    log_error "jq not found - install from package manager (apt-get install jq, brew install jq, etc.)"
    error_exit "jq is required for JSON processing" 2
fi

# Verify tool versions match CI
SYFT_INSTALLED_VERSION=$(syft version | grep -oP 'Version:\s*\Kv?[0-9]+\.[0-9]+\.[0-9]+' | head -1 || echo "unknown")
GRYPE_INSTALLED_VERSION=$(grype version | grep -oP 'Version:\s*\Kv?[0-9]+\.[0-9]+\.[0-9]+' | head -1 || echo "unknown")
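`set_default_env` (used just below) is a project helper from `_environment_helpers.sh`; assuming it simply assigns a default when the variable is unset or empty, the same effect falls out of POSIX default expansion (a sketch under that assumption):

```shell
# Equivalent of set_default_env NAME VALUE under the assumed
# set-if-unset-or-empty semantics (':-' keeps an existing value).
unset IMAGE_TAG FAIL_ON_SEVERITY 2>/dev/null || true

IMAGE_TAG="${IMAGE_TAG:-charon:local}"
FAIL_ON_SEVERITY="${FAIL_ON_SEVERITY:-Critical,High}"

echo "${IMAGE_TAG} ${FAIL_ON_SEVERITY}"
```

The helper form is preferable in the script because it keeps the CI-matching defaults declared in one visually uniform block.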
# Set defaults matching CI workflow
set_default_env "SYFT_VERSION" "v1.17.0"
set_default_env "GRYPE_VERSION" "v0.85.0"
set_default_env "IMAGE_TAG" "charon:local"
set_default_env "FAIL_ON_SEVERITY" "Critical,High"

# Version check (informational only)
log_info "Installed Syft version: ${SYFT_INSTALLED_VERSION}"
log_info "Expected Syft version: ${SYFT_VERSION}"
if [[ "${SYFT_INSTALLED_VERSION}" != "${SYFT_VERSION#v}" ]] && [[ "${SYFT_INSTALLED_VERSION}" != "${SYFT_VERSION}" ]]; then
    log_warning "Syft version mismatch - CI uses ${SYFT_VERSION}, you have ${SYFT_INSTALLED_VERSION}"
    log_warning "Results may differ from CI. Reinstall with: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin ${SYFT_VERSION}"
fi

log_info "Installed Grype version: ${GRYPE_INSTALLED_VERSION}"
log_info "Expected Grype version: ${GRYPE_VERSION}"
if [[ "${GRYPE_INSTALLED_VERSION}" != "${GRYPE_VERSION#v}" ]] && [[ "${GRYPE_INSTALLED_VERSION}" != "${GRYPE_VERSION}" ]]; then
    log_warning "Grype version mismatch - CI uses ${GRYPE_VERSION}, you have ${GRYPE_INSTALLED_VERSION}"
    log_warning "Results may differ from CI. Reinstall with: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin ${GRYPE_VERSION}"
fi
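The mismatch checks above accept the installed version with or without a leading `v`; the `${VAR#v}` expansion doing that normalization can be seen in isolation (a sketch):

```shell
# "${EXPECTED#v}" strips one leading "v", so a "1.17.0" reported by
# the tool matches the "v1.17.0" pin used by CI.
EXPECTED="v1.17.0"
INSTALLED="1.17.0"

if [ "${INSTALLED}" = "${EXPECTED#v}" ] || [ "${INSTALLED}" = "${EXPECTED}" ]; then
    match="yes"
else
    match="no"
fi
echo "match=${match}"
```

Comparing against both forms covers tools that print the prefix and tools that drop it, without needing a regex.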
# Parse arguments
IMAGE_TAG="${1:-${IMAGE_TAG}}"
NO_CACHE_FLAG=""
if [[ "${2:-}" == "no-cache" ]]; then
    NO_CACHE_FLAG="--no-cache"
    log_info "Building without cache (clean build)"
fi

log_info "Image tag: ${IMAGE_TAG}"
log_info "Fail on severity: ${FAIL_ON_SEVERITY}"

cd "${PROJECT_ROOT}"

# ==============================================================================
# Phase 1: Build Docker Image
# ==============================================================================
log_step "BUILD" "Building Docker image: ${IMAGE_TAG}"

# Get build metadata
VERSION="${VERSION:-dev}"
BUILD_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
VCS_REF=$(git rev-parse --short HEAD 2>/dev/null || echo "unknown")

log_info "Build args: VERSION=${VERSION}, BUILD_DATE=${BUILD_DATE}, VCS_REF=${VCS_REF}"

# Build Docker image with same args as CI
if docker build ${NO_CACHE_FLAG} \
    --build-arg VERSION="${VERSION}" \
    --build-arg BUILD_DATE="${BUILD_DATE}" \
    --build-arg VCS_REF="${VCS_REF}" \
    -t "${IMAGE_TAG}" \
    -f Dockerfile \
    .; then
    log_success "Docker image built successfully: ${IMAGE_TAG}"
else
    error_exit "Docker build failed" 2
fi

# ==============================================================================
# Phase 2: Generate SBOM
# ==============================================================================
log_step "SBOM" "Generating SBOM using Syft ${SYFT_VERSION}"

log_info "Scanning image: ${IMAGE_TAG}"
log_info "Format: CycloneDX JSON (matches CI)"

# Generate SBOM from the Docker IMAGE (not filesystem)
if syft "${IMAGE_TAG}" \
    --output cyclonedx-json=sbom.cyclonedx.json \
    --output table; then
    log_success "SBOM generation complete"
else
    error_exit "SBOM generation failed" 2
fi

# Count components in SBOM
COMPONENT_COUNT=$(jq '.components | length' sbom.cyclonedx.json 2>/dev/null || echo "0")
log_info "Generated SBOM contains ${COMPONENT_COUNT} packages"

# ==============================================================================
# Phase 3: Scan for Vulnerabilities
# ==============================================================================
log_step "SCAN" "Scanning for vulnerabilities using Grype ${GRYPE_VERSION}"

log_info "Scanning SBOM against vulnerability database..."
log_info "This may take 30-60 seconds on first run (database download)"

# Run Grype against the SBOM (generated from image, not filesystem)
# This matches exactly what CI does in supply-chain-pr.yml
if grype sbom:sbom.cyclonedx.json \
    --output json \
    --file grype-results.json; then
    log_success "Vulnerability scan complete"
else
    log_warning "Grype scan completed with findings"
fi

# Generate SARIF output for GitHub Security (matches CI)
grype sbom:sbom.cyclonedx.json \
    --output sarif \
    --file grype-results.sarif 2>/dev/null || true

# ==============================================================================
# Phase 4: Analyze Results
# ==============================================================================
log_step "ANALYSIS" "Analyzing vulnerability scan results"

# Count vulnerabilities by severity (matches CI logic)
if [[ -f grype-results.json ]]; then
    CRITICAL_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Critical")] | length' grype-results.json 2>/dev/null || echo "0")
    HIGH_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "High")] | length' grype-results.json 2>/dev/null || echo "0")
    MEDIUM_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Medium")] | length' grype-results.json 2>/dev/null || echo "0")
    LOW_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Low")] | length' grype-results.json 2>/dev/null || echo "0")
    NEGLIGIBLE_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Negligible")] | length' grype-results.json 2>/dev/null || echo "0")
    UNKNOWN_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Unknown")] | length' grype-results.json 2>/dev/null || echo "0")
    TOTAL_COUNT=$(jq '.matches | length' grype-results.json 2>/dev/null || echo "0")
else
    CRITICAL_COUNT=0
    HIGH_COUNT=0
    MEDIUM_COUNT=0
    LOW_COUNT=0
    NEGLIGIBLE_COUNT=0
    UNKNOWN_COUNT=0
    TOTAL_COUNT=0
fi
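The severity counting above can be sanity-checked against a minimal hand-written Grype-style report (fixture invented for illustration; assumes jq is installed):

```shell
# Tiny grype-results-shaped fixture: one Critical and one Medium match
report=$(mktemp)
cat > "${report}" <<'EOF'
{"matches": [
  {"vulnerability": {"id": "CVE-0000-0001", "severity": "Critical"}},
  {"vulnerability": {"id": "CVE-0000-0002", "severity": "Medium"}}
]}
EOF

# Same filters the script applies to the real grype-results.json
critical=$(jq '[.matches[] | select(.vulnerability.severity == "Critical")] | length' "${report}")
total=$(jq '.matches | length' "${report}")

echo "critical=${critical} total=${total}"
rm -f "${report}"
```

The `[...] | length` pattern collects the selected matches into an array first, so a zero count comes back as `0` rather than empty output.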
# Display vulnerability summary
echo ""
log_info "Vulnerability Summary:"
echo "  🔴 Critical: ${CRITICAL_COUNT}"
echo "  🟠 High: ${HIGH_COUNT}"
echo "  🟡 Medium: ${MEDIUM_COUNT}"
echo "  🟢 Low: ${LOW_COUNT}"
if [[ ${NEGLIGIBLE_COUNT} -gt 0 ]]; then
    echo "  ⚪ Negligible: ${NEGLIGIBLE_COUNT}"
fi
if [[ ${UNKNOWN_COUNT} -gt 0 ]]; then
    echo "  ❓ Unknown: ${UNKNOWN_COUNT}"
fi
echo "  📊 Total: ${TOTAL_COUNT}"
echo ""

# ==============================================================================
# Phase 5: Detailed Reporting
# ==============================================================================

# Show Critical vulnerabilities if any
if [[ ${CRITICAL_COUNT} -gt 0 ]]; then
    log_error "Critical Severity Vulnerabilities Found:"
    echo ""
    jq -r '.matches[] | select(.vulnerability.severity == "Critical") |
        "  - \(.vulnerability.id) in \(.artifact.name)\n    Package: \(.artifact.name)@\(.artifact.version)\n    Fixed: \(.vulnerability.fix.versions[0] // "No fix available")\n    CVSS: \(.vulnerability.cvss[0].metrics.baseScore // "N/A")\n    Description: \(.vulnerability.description[0:100])...\n"' \
        grype-results.json 2>/dev/null || echo "  (Unable to parse details)"
    echo ""
fi

# Show High vulnerabilities if any
if [[ ${HIGH_COUNT} -gt 0 ]]; then
    log_warning "High Severity Vulnerabilities Found:"
    echo ""
    jq -r '.matches[] | select(.vulnerability.severity == "High") |
        "  - \(.vulnerability.id) in \(.artifact.name)\n    Package: \(.artifact.name)@\(.artifact.version)\n    Fixed: \(.vulnerability.fix.versions[0] // "No fix available")\n    CVSS: \(.vulnerability.cvss[0].metrics.baseScore // "N/A")\n    Description: \(.vulnerability.description[0:100])...\n"' \
        grype-results.json 2>/dev/null || echo "  (Unable to parse details)"
    echo ""
fi

# ==============================================================================
# Phase 6: Exit Code Determination (Matches CI)
# ==============================================================================

# Check if any failing severities were found
SHOULD_FAIL=false

if [[ "${FAIL_ON_SEVERITY}" == *"Critical"* ]] && [[ ${CRITICAL_COUNT} -gt 0 ]]; then
    SHOULD_FAIL=true
fi

if [[ "${FAIL_ON_SEVERITY}" == *"High"* ]] && [[ ${HIGH_COUNT} -gt 0 ]]; then
    SHOULD_FAIL=true
fi

if [[ "${FAIL_ON_SEVERITY}" == *"Medium"* ]] && [[ ${MEDIUM_COUNT} -gt 0 ]]; then
    SHOULD_FAIL=true
fi

if [[ "${FAIL_ON_SEVERITY}" == *"Low"* ]] && [[ ${LOW_COUNT} -gt 0 ]]; then
    SHOULD_FAIL=true
fi

# Final summary and exit
echo ""
log_info "Generated artifacts:"
log_info "  - sbom.cyclonedx.json (SBOM)"
log_info "  - grype-results.json (vulnerability details)"
log_info "  - grype-results.sarif (GitHub Security format)"
echo ""

if [[ "${SHOULD_FAIL}" == "true" ]]; then
    log_error "Found ${CRITICAL_COUNT} Critical and ${HIGH_COUNT} High severity vulnerabilities"
    log_error "These issues must be resolved before deployment"
    log_error "Review grype-results.json for detailed remediation guidance"
    exit 1
else
    if [[ ${TOTAL_COUNT} -gt 0 ]]; then
        log_success "Docker image scan complete - no critical or high vulnerabilities"
        log_info "Found ${MEDIUM_COUNT} Medium and ${LOW_COUNT} Low severity issues (non-blocking)"
    else
        log_success "Docker image scan complete - no vulnerabilities found"
    fi
    exit 0
fi
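The Phase 6 gate above matches severities by substring against `FAIL_ON_SEVERITY`; factored into a function, the same logic can be tested directly (a sketch covering the Critical and High cases):

```shell
# Return 0 (block) when any severity named in the policy has findings.
# Substring matching mirrors the Phase 6 checks above.
should_fail() {
    policy="$1"; critical_n="$2"; high_n="$3"
    case "${policy}" in
        *Critical*) [ "${critical_n}" -gt 0 ] && return 0 ;;
    esac
    case "${policy}" in
        *High*) [ "${high_n}" -gt 0 ] && return 0 ;;
    esac
    return 1
}

if should_fail "Critical,High" 0 2; then gate="fail"; else gate="pass"; fi
echo "gate=${gate}"
```

Note that substring matching means a policy of `"Critical"` would also be triggered by a hypothetical severity name containing that word; the fixed Grype severity vocabulary makes this safe in practice.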
601
.github/skills/security-scan-docker-image.SKILL.md
vendored
@@ -1,601 +0,0 @@
---
# agentskills.io specification v1.0
name: "security-scan-docker-image"
version: "1.0.0"
description: "Build Docker image and scan with Grype/Syft matching CI supply chain verification"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "scanning"
  - "docker"
  - "supply-chain"
  - "vulnerabilities"
  - "sbom"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
  - name: "syft"
    version: ">=1.17.0"
    optional: false
    install_url: "https://github.com/anchore/syft"
  - name: "grype"
    version: ">=0.85.0"
    optional: false
    install_url: "https://github.com/anchore/grype"
  - name: "jq"
    version: ">=1.6"
    optional: false
environment_variables:
  - name: "SYFT_VERSION"
    description: "Syft version to use for SBOM generation"
    default: "v1.17.0"
    required: false
  - name: "GRYPE_VERSION"
    description: "Grype version to use for vulnerability scanning"
    default: "v0.85.0"
    required: false
  - name: "IMAGE_TAG"
    description: "Docker image tag to build and scan"
    default: "charon:local"
    required: false
  - name: "FAIL_ON_SEVERITY"
    description: "Comma-separated list of severities that cause failure"
    default: "Critical,High"
    required: false
parameters:
  - name: "image_tag"
    type: "string"
    description: "Docker image tag to build and scan"
    default: "charon:local"
    required: false
  - name: "no_cache"
    type: "boolean"
    description: "Build Docker image without cache"
    default: false
    required: false
outputs:
  - name: "sbom_file"
    type: "file"
    description: "Generated SBOM in CycloneDX JSON format"
  - name: "scan_results"
    type: "file"
    description: "Grype vulnerability scan results in JSON format"
  - name: "exit_code"
    type: "number"
    description: "0 if no critical/high issues, 1 if issues found, 2 if build/scan failed"
metadata:
  category: "security"
  subcategory: "supply-chain"
  execution_time: "long"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: false
exit_codes:
  0: "Scan successful, no critical or high vulnerabilities"
  1: "Critical or high severity vulnerabilities found"
  2: "Build failed or scan error"
---
# Security: Scan Docker Image (Local)

## Overview

**CRITICAL GAP ADDRESSED**: This skill closes a critical security gap discovered in the Charon project's local development workflow. While the existing Trivy filesystem scanner catches some issues, it misses vulnerabilities that only exist in the actual built Docker image, including:

- **Alpine package vulnerabilities** in the base image
- **Compiled binary vulnerabilities** in Go dependencies
- **Embedded dependencies** that only exist post-build
- **Multi-stage build artifacts** not present in source
- **Runtime dependencies** added during the Docker build

This skill replicates the **exact CI supply chain verification process** used in the `supply-chain-pr.yml` workflow, ensuring local scans match CI scans precisely. This prevents the "works locally but fails in CI" scenario and catches image-only vulnerabilities before they reach production.

## Key Differences from Trivy Filesystem Scan

| Aspect | Trivy (Filesystem) | This Skill (Image Scan) |
|--------|-------------------|------------------------|
| **Scan Target** | Source code + dependencies | Built Docker image |
| **Alpine Packages** | ❌ Not detected | ✅ Detected |
| **Compiled Binaries** | ❌ Not detected | ✅ Detected |
| **Build Artifacts** | ❌ Not detected | ✅ Detected |
| **CI Alignment** | ⚠️ Different results | ✅ Exact match |
| **Supply Chain** | Partial coverage | Full coverage |

## Features

- **Exact CI Matching**: Uses the same Syft and Grype versions as supply-chain-pr.yml
- **Image-Based Scanning**: Scans the actual Docker image, not just the filesystem
- **SBOM Generation**: Creates a CycloneDX JSON SBOM from the built image
- **Severity-Based Failures**: Fails on Critical/High severity by default
- **Detailed Reporting**: Counts vulnerabilities by severity
- **Build Integration**: Builds the Docker image first, ensuring the latest code is scanned
- **Repeatable Scans**: Can be run repeatedly; results stay consistent for a given vulnerability database

## Prerequisites

- Docker 24.0 or higher installed and running
- Syft 1.17.0 or higher (auto-checked, installation instructions provided)
- Grype 0.85.0 or higher (auto-checked, installation instructions provided)
- jq 1.6 or higher (for JSON processing)
- Internet connection (for vulnerability database updates)
- Sufficient disk space for the Docker image build (~2GB recommended)

## Installation

### Install Syft

```bash
# Linux/macOS
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin v1.17.0

# Or via package manager
brew install syft  # macOS
```

### Install Grype

```bash
# Linux/macOS
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.85.0

# Or via package manager
brew install grype  # macOS
```

### Verify Installation

```bash
syft version
grype version
```
## Usage

### Basic Usage (Default Image Tag)

Build and scan the default `charon:local` image:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh security-scan-docker-image
```

### Custom Image Tag

Build and scan a custom-tagged image:

```bash
.github/skills/scripts/skill-runner.sh security-scan-docker-image charon:test
```

### No-Cache Build

Force a clean build without the Docker cache:

```bash
.github/skills/scripts/skill-runner.sh security-scan-docker-image charon:local no-cache
```

### Environment Variable Overrides

Override default versions or behavior:

```bash
# Use specific tool versions
SYFT_VERSION=v1.17.0 GRYPE_VERSION=v0.85.0 \
  .github/skills/scripts/skill-runner.sh security-scan-docker-image

# Change the failure threshold
FAIL_ON_SEVERITY="Critical" \
  .github/skills/scripts/skill-runner.sh security-scan-docker-image
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| image_tag | string | No | charon:local | Docker image tag to build and scan |
| no_cache | boolean | No | false | Build without the Docker cache (pass "no-cache" as the second argument) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| SYFT_VERSION | No | v1.17.0 | Syft version (matches CI) |
| GRYPE_VERSION | No | v0.85.0 | Grype version (matches CI) |
| IMAGE_TAG | No | charon:local | Default image tag if not provided |
| FAIL_ON_SEVERITY | No | Critical,High | Severities that cause exit code 1 |

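How the positional parameters in the table above might map onto `docker build` flags can be sketched as follows (a minimal illustration; the helper name is hypothetical and the real logic lives in the skill's `run.sh`):

```shell
# parse_args [image_tag] [no-cache]: map the skill's positional parameters
# to a docker build tag and cache flag (illustrative helper, not the real
# run.sh implementation).
parse_args() {
  local image_tag="${1:-charon:local}" cache_flag=""
  if [ "${2:-}" = "no-cache" ]; then
    cache_flag="--no-cache"
  fi
  echo "tag=${image_tag} cache=${cache_flag}"
}

# The resulting values would feed a build such as:
#   docker build ${cache_flag} -t "${image_tag}" .
```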
## Outputs

### Generated Files

- **`sbom.cyclonedx.json`**: SBOM in CycloneDX JSON format (industry standard)
- **`grype-results.json`**: Detailed vulnerability scan results
- **`grype-results.sarif`**: SARIF output for GitHub Security integration

### Exit Codes

- **0**: Scan completed successfully; no critical/high vulnerabilities
- **1**: Critical or high severity vulnerabilities found (blocking)
- **2**: Docker build failed or scan error

### Output Format

```
[INFO] Building Docker image: charon:local...
[BUILD] Using Dockerfile with multi-stage build
[BUILD] Image built successfully: charon:local

[SBOM] Generating SBOM using Syft v1.17.0...
[SBOM] Generated SBOM contains 247 packages

[SCAN] Scanning for vulnerabilities using Grype v0.85.0...
[SCAN] Vulnerability Summary:
  🔴 Critical: 0
  🟠 High: 0
  🟡 Medium: 15
  🟢 Low: 42
  📊 Total: 57

[SUCCESS] Docker image scan complete - no critical or high vulnerabilities
```

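The generated `grype-results.json` can be post-processed with `jq` (already a prerequisite). A minimal sketch, assuming Grype's standard `.matches[].vulnerability.severity` layout:

```shell
# count_by_severity REPORT: print per-severity vulnerability counts from a
# Grype JSON report (assumes the standard .matches[].vulnerability.severity
# structure).
count_by_severity() {
  local report="$1" sev count
  for sev in Critical High Medium Low; do
    count=$(jq --arg s "$sev" \
      '[.matches[] | select(.vulnerability.severity == $s)] | length' \
      "$report")
    echo "$sev: $count"
  done
}

# Example: count_by_severity grype-results.json
```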
## Examples

### Example 1: Standard Local Scan

```bash
$ .github/skills/scripts/skill-runner.sh security-scan-docker-image
[INFO] Building Docker image: charon:local...
[BUILD] Step 1/25 : FROM node:24.13.0-alpine AS frontend-builder
[BUILD] ...
[BUILD] Successfully built abc123def456
[BUILD] Successfully tagged charon:local

[SBOM] Generating SBOM using Syft v1.17.0...
[SBOM] Scanning image: charon:local
[SBOM] Generated SBOM contains 247 packages

[SCAN] Scanning for vulnerabilities using Grype v0.85.0...
[SCAN] Vulnerability Summary:
  🔴 Critical: 0
  🟠 High: 2
  🟡 Medium: 15
  🟢 Low: 42
  📊 Total: 59

[SCAN] High Severity Vulnerabilities:
  - CVE-2024-12345 in alpine-baselayout (CVSS: 7.5)
    Package: alpine-baselayout@3.23.0
    Fixed: alpine-baselayout@3.23.1
    Description: Arbitrary file read vulnerability

  - CVE-2024-67890 in busybox (CVSS: 8.2)
    Package: busybox@1.36.1
    Fixed: busybox@1.36.2
    Description: Remote code execution via crafted input

[ERROR] Found 2 High severity vulnerabilities - please review and remediate
Exit code: 1
```

### Example 2: Clean Build After Code Changes

```bash
$ .github/skills/scripts/skill-runner.sh security-scan-docker-image charon:test no-cache
[INFO] Building Docker image: charon:test (no cache)...
[BUILD] Building without cache to ensure fresh dependencies...
[BUILD] Successfully built and tagged charon:test

[SBOM] Generating SBOM...
[SBOM] Generated SBOM contains 248 packages (+1 from previous scan)

[SCAN] Scanning for vulnerabilities...
[SCAN] Vulnerability Summary:
  🔴 Critical: 0
  🟠 High: 0
  🟡 Medium: 16
  🟢 Low: 43
  📊 Total: 59

[SUCCESS] Docker image scan complete - no critical or high vulnerabilities
Exit code: 0
```

### Example 3: CI/CD Pipeline Integration

```yaml
# .github/workflows/local-verify.yml (example)
- name: Scan Docker Image Locally
  run: .github/skills/scripts/skill-runner.sh security-scan-docker-image
  continue-on-error: false

- name: Upload SBOM Artifact
  uses: actions/upload-artifact@v4
  with:
    name: local-sbom
    path: sbom.cyclonedx.json
```

### Example 4: Pre-Push Hook Integration

```bash
#!/bin/bash
# .git/hooks/pre-push
echo "Running local Docker image security scan..."
if ! .github/skills/scripts/skill-runner.sh security-scan-docker-image; then
    echo "❌ Security scan failed - please fix vulnerabilities before pushing"
    exit 1
fi
```

## How It Works

### Build Phase

1. **Docker Build**: Builds the Docker image using the project's Dockerfile
   - Uses a multi-stage build for frontend and backend
   - Applies build args: VERSION, BUILD_DATE, VCS_REF
   - Tags with the specified image tag (default: charon:local)

### SBOM Generation Phase

2. **Image Analysis**: Syft analyzes the built Docker image (not the filesystem)
   - Scans all layers in the final image
   - Detects Alpine packages, Go modules, and npm packages
   - Identifies compiled binaries and their dependencies
   - Catalogs runtime dependencies added during the build

3. **SBOM Creation**: Generates a CycloneDX JSON SBOM
   - Industry-standard format for supply chain visibility
   - Contains the full package inventory with versions
   - Includes checksums and license information

### Vulnerability Scanning Phase

4. **Database Update**: Grype updates its vulnerability database
   - Fetches the latest CVE information
   - Ensures the scan uses current vulnerability data

5. **Image Scan**: Grype scans the SBOM against the vulnerability database
   - Matches packages against known CVEs
   - Calculates CVSS scores for each vulnerability
   - Generates SARIF output for GitHub Security

6. **Severity Analysis**: Counts vulnerabilities by severity
   - Critical: CVSS 9.0-10.0
   - High: CVSS 7.0-8.9
   - Medium: CVSS 4.0-6.9
   - Low: CVSS 0.1-3.9

### Reporting Phase

7. **Results Summary**: Displays vulnerability counts and details
8. **Exit Code**: Returns an exit code based on the severity findings

## Vulnerability Severity Thresholds

**Project Standards (Matches CI)**:

| Severity | CVSS Range | Action | Exit Code |
|----------|-----------|--------|-----------|
| 🔴 **CRITICAL** | 9.0-10.0 | **MUST FIX** - Blocks commit/push | 1 |
| 🟠 **HIGH** | 7.0-8.9 | **SHOULD FIX** - Blocks commit/push | 1 |
| 🟡 **MEDIUM** | 4.0-6.9 | Fix in next release (logged) | 0 |
| 🟢 **LOW** | 0.1-3.9 | Optional, fix as time permits | 0 |

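The `FAIL_ON_SEVERITY` gate behind the exit-code column can be sketched as follows (a hypothetical helper; the real logic lives in the skill's `run.sh`, and the Grype JSON layout is assumed):

```shell
# severity_gate REPORT [FAIL_ON]: return 1 if the Grype JSON report contains
# any finding at a severity listed in FAIL_ON (comma-separated, default
# "Critical,High"), mirroring the table's exit-code column.
severity_gate() {
  local report="$1" fail_on="${2:-Critical,High}" sev count total=0
  local IFS=','
  for sev in $fail_on; do
    count=$(jq --arg s "$sev" \
      '[.matches[] | select(.vulnerability.severity == $s)] | length' \
      "$report")
    total=$((total + count))
  done
  if [ "$total" -gt 0 ]; then
    echo "Found $total blocking vulnerabilities" >&2
    return 1
  fi
}
```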
## Error Handling

### Common Issues

**Docker not running**:
```bash
[ERROR] Docker daemon is not running
Solution: Start Docker Desktop or the Docker service
```

**Syft not installed**:
```bash
[ERROR] Syft not found - install from: https://github.com/anchore/syft
Solution: Install Syft v1.17.0 using the installation instructions above
```

**Grype not installed**:
```bash
[ERROR] Grype not found - install from: https://github.com/anchore/grype
Solution: Install Grype v0.85.0 using the installation instructions above
```

**Build failure**:
```bash
[ERROR] Docker build failed with exit code 1
Solution: Check the Dockerfile syntax and dependency availability
```

**Network timeout (vulnerability scan)**:
```bash
[WARNING] Failed to update Grype vulnerability database
Solution: Check the internet connection or retry later
```

**Insufficient disk space**:
```bash
[ERROR] No space left on device
Solution: Clean up Docker images and containers: docker system prune -a
```

## Integration with Definition of Done

This skill is **MANDATORY** in the Management agent's Definition of Done checklist:

### When to Run

- ✅ **Before every commit** that changes application code
- ✅ **After dependency updates** (Go modules, npm packages)
- ✅ **Before creating a Pull Request**
- ✅ **After Dockerfile modifications**
- ✅ **Before release/tag creation**

### QA_Security Requirements

The QA_Security agent **MUST**:

1. Run this skill after running the Trivy filesystem scan
2. Verify that both scans pass with zero Critical/High issues
3. Document any differences between the filesystem and image scans
4. Block approval if the image scan reveals additional vulnerabilities
5. Report findings in the QA report at `docs/reports/qa_report.md`

### Why This Is Critical

**Image-only vulnerabilities** can exist even when filesystem scans pass:

- Alpine base image CVEs (e.g., musl, busybox, apk-tools)
- Compiled Go binary vulnerabilities (e.g., stdlib CVEs)
- Caddy plugin vulnerabilities added during the build
- Multi-stage build artifacts with known issues

**Without this scan**, these vulnerabilities reach production undetected.

## Comparison with CI Supply Chain Workflow

This skill **exactly replicates** the supply-chain-pr.yml workflow:

| Step | CI Workflow | This Skill | Match |
|------|------------|------------|-------|
| Build Image | ✅ Docker build | ✅ Docker build | ✅ |
| Load Image | ✅ Load from artifact | ✅ Use built image | ✅ |
| Syft Version | v1.17.0 | v1.17.0 | ✅ |
| Grype Version | v0.85.0 | v0.85.0 | ✅ |
| SBOM Format | CycloneDX JSON | CycloneDX JSON | ✅ |
| Scan Target | Docker image | Docker image | ✅ |
| Severity Counts | Critical/High/Medium/Low | Critical/High/Medium/Low | ✅ |
| Exit on Critical/High | Yes | Yes | ✅ |
| SARIF Output | Yes | Yes | ✅ |

**Guarantee**: If this skill passes locally, the CI supply chain workflow will pass (assuming the same code and dependencies).

## Related Skills

- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Filesystem vulnerability scan (complementary)
- [security-verify-sbom](./security-verify-sbom.SKILL.md) - SBOM verification and comparison
- [security-sign-cosign](./security-sign-cosign.SKILL.md) - Sign artifacts with Cosign
- [security-slsa-provenance](./security-slsa-provenance.SKILL.md) - Generate SLSA provenance

## Workflow Integration

### Recommended Execution Order

1. **Trivy Filesystem Scan** - Fast, catches obvious issues
2. **Docker Image Scan (this skill)** - Comprehensive, catches image-only issues
3. **CodeQL Scans** - Static analysis for code quality
4. **SBOM Verification** - Supply chain drift detection

### Combined DoD Checklist

```bash
# 1. Filesystem scan (fast)
.github/skills/scripts/skill-runner.sh security-scan-trivy

# 2. Image scan (comprehensive) - THIS SKILL
.github/skills/scripts/skill-runner.sh security-scan-docker-image

# 3. Code analysis
.github/skills/scripts/skill-runner.sh security-scan-codeql

# 4. Go vulnerabilities
.github/skills/scripts/skill-runner.sh security-scan-go-vuln
```

## Performance Considerations

### Execution Time

- **Docker Build**: 2-5 minutes (cached), 5-10 minutes (no-cache)
- **SBOM Generation**: 30-60 seconds
- **Vulnerability Scan**: 30-60 seconds
- **Total**: ~3-7 minutes (typical), ~6-12 minutes (no-cache)

### Optimization Tips

1. **Use Docker layer caching** (the default) for faster builds
2. **Run only after code changes** (not needed for doc-only changes)
3. **Parallelize with other scans** (Trivy, CodeQL) for efficiency
4. **Cache the vulnerability database** (Grype auto-caches)

## Security Considerations

- SBOM files contain the full package inventory (treat as sensitive)
- Vulnerability results may contain CVE details (store securely)
- Never commit scan results that contain credentials or tokens
- Review all Critical/High findings before production deployment
- Keep Syft and Grype updated to the latest versions

## Troubleshooting

### Build Always Fails

Check the Dockerfile syntax and build context:

```bash
# Test the build manually
docker build -t charon:test .

# Check build args
docker build --build-arg VERSION=test -t charon:test .
```

### Scan Detects False Positives

Create a `.grype.yaml` in the project root to suppress known false positives:

```yaml
ignore:
  - vulnerability: CVE-2024-12345
    fix-state: wont-fix
```

### Different Results Than CI

Verify that versions match:

```bash
syft version   # Should be v1.17.0
grype version  # Should be v0.85.0
```

Update if needed:

```bash
# Reinstall specific versions
curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin v1.17.0
curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin v0.85.0
```

## Notes

- This skill is **not idempotent** due to the Docker build step
- Scan results may vary as the vulnerability database updates
- Some vulnerabilities may have no fix available yet
- Alpine base image updates may resolve multiple CVEs
- Go stdlib updates may resolve compiled-binary CVEs
- Network access is required for database updates
- Recommended to run before each commit/push
- Complements but does not replace the Trivy filesystem scan

---

**Last Updated**: 2026-01-16
**Maintained by**: Charon Project
**Source**: Syft (SBOM) + Grype (Vulnerability Scanning)
**CI Workflow**: `.github/workflows/supply-chain-pr.yml`

#!/usr/bin/env bash
# Security Scan Go Vulnerability - Execution Script
#
# This script wraps the Go vulnerability checker (govulncheck) to detect
# known vulnerabilities in Go code and dependencies.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_go_environment "1.23" || error_exit "Go 1.23+ is required"

# Set defaults
set_default_env "GOVULNCHECK_FORMAT" "text"

# Parse arguments
FORMAT="${1:-${GOVULNCHECK_FORMAT}}"
MODE="${2:-source}"

# Validate format
case "${FORMAT}" in
    text|json|sarif)
        ;;
    *)
        log_error "Invalid format: ${FORMAT}. Must be one of: text, json, sarif"
        exit 1
        ;;
esac

# Validate mode
case "${MODE}" in
    source|binary)
        ;;
    *)
        log_error "Invalid mode: ${MODE}. Must be one of: source, binary"
        exit 1
        ;;
esac

# Change to backend directory
cd "${PROJECT_ROOT}/backend"

# Check for go.mod
if [[ ! -f "go.mod" ]]; then
    log_error "go.mod not found in backend directory"
    exit 1
fi

# Execute govulncheck
log_step "SCANNING" "Running Go vulnerability check"
log_info "Format: ${FORMAT}"
log_info "Mode: ${MODE}"
log_info "Working directory: $(pwd)"

# Build govulncheck command
GOVULNCHECK_CMD="go run golang.org/x/vuln/cmd/govulncheck@latest"

# Add format flag if not text (text is the default)
if [[ "${FORMAT}" != "text" ]]; then
    GOVULNCHECK_CMD="${GOVULNCHECK_CMD} -format=${FORMAT}"
fi

# Add mode flag if not source (source is the default)
if [[ "${MODE}" != "source" ]]; then
    GOVULNCHECK_CMD="${GOVULNCHECK_CMD} -mode=${MODE}"
fi

# Add target (all packages)
GOVULNCHECK_CMD="${GOVULNCHECK_CMD} ./..."

# Execute the scan
if eval "${GOVULNCHECK_CMD}"; then
    log_success "No vulnerabilities found"
    exit 0
else
    exit_code=$?
    if [[ ${exit_code} -eq 3 ]]; then
        log_error "Vulnerabilities detected (exit code 3)"
        log_info "Review the output above for details and remediation advice"
    else
        log_error "Vulnerability scan failed with exit code: ${exit_code}"
    fi
    exit "${exit_code}"
fi

.github/skills/security-scan-go-vuln.SKILL.md (280 lines, vendored)

---
# agentskills.io specification v1.0
name: "security-scan-go-vuln"
version: "1.0.0"
description: "Run Go vulnerability checker (govulncheck) to detect known vulnerabilities in Go code"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "vulnerabilities"
  - "go"
  - "govulncheck"
  - "scanning"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "go"
    version: ">=1.23"
    optional: false
environment_variables:
  - name: "GOVULNCHECK_FORMAT"
    description: "Output format (text, json, sarif)"
    default: "text"
    required: false
parameters:
  - name: "format"
    type: "string"
    description: "Output format (text, json, sarif)"
    default: "text"
    required: false
  - name: "mode"
    type: "string"
    description: "Scan mode (source or binary)"
    default: "source"
    required: false
outputs:
  - name: "vulnerability_report"
    type: "stdout"
    description: "List of detected vulnerabilities with remediation advice"
  - name: "exit_code"
    type: "number"
    description: "0 if no vulnerabilities found, 3 if vulnerabilities detected"
metadata:
  category: "security"
  subcategory: "vulnerability"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Security Scan Go Vulnerability

## Overview

Executes `govulncheck` against the official Go vulnerability database to scan Go code and dependencies for known security vulnerabilities. The tool analyzes both direct and transitive dependencies and provides actionable remediation advice.

This skill is designed for CI/CD pipelines and pre-release security validation.

## Prerequisites

- Go 1.23 or higher installed and in PATH
- Internet connection (for vulnerability database access)
- Go module dependencies downloaded (`go mod download`)
- Valid Go project with a `go.mod` file

## Usage

### Basic Usage

Run with default settings (text output, source mode):

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh security-scan-go-vuln
```

### JSON Output

Get results in JSON format for parsing:

```bash
.github/skills/scripts/skill-runner.sh security-scan-go-vuln json
```

### SARIF Output

Get results in SARIF format for GitHub Code Scanning:

```bash
.github/skills/scripts/skill-runner.sh security-scan-go-vuln sarif
```

### Custom Format via Environment

```bash
GOVULNCHECK_FORMAT=json .github/skills/scripts/skill-runner.sh security-scan-go-vuln
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| format | string | No | text | Output format (text, json, sarif) |
| mode | string | No | source | Scan mode (source or binary) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| GOVULNCHECK_FORMAT | No | text | Output format override |

## Outputs

- **Success Exit Code**: 0 (no vulnerabilities found)
- **Error Exit Codes**:
  - 1: Scan error or invalid arguments
  - 3: Vulnerabilities detected
- **Output**: Vulnerability report to stdout

## Vulnerability Report Format

### Text Output (Default)

```
Scanning for dependencies with known vulnerabilities...
No vulnerabilities found.
```

Or if vulnerabilities are found:

```
Found 2 vulnerabilities in dependencies

Vulnerability #1: GO-2023-1234
  Package: github.com/example/vulnerable
  Version: v1.2.3
  Description: Buffer overflow in Parse function
  Fixed in: v1.2.4
  More info: https://vuln.go.dev/GO-2023-1234

Vulnerability #2: GO-2023-5678
  Package: golang.org/x/crypto/ssh
  Version: v0.1.0
  Description: Insecure default configuration
  Fixed in: v0.3.0
  More info: https://vuln.go.dev/GO-2023-5678
```

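For quick triage, the `Fixed in:` versions in a text report like the one above can be extracted with standard tools; a small sketch (the helper name is illustrative):

```shell
# fixed_versions REPORT: list the fixed versions mentioned in a govulncheck
# text report (lines of the form "Fixed in: v1.2.4"), e.g. to draft
# `go get module@version` commands.
fixed_versions() {
  grep 'Fixed in:' "$1" | sed 's/.*Fixed in:[[:space:]]*//'
}

# Example: fixed_versions vuln-report.txt
```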
## Examples

### Example 1: Basic Scan

```bash
# Scan backend Go code for vulnerabilities (run from the repository root;
# the skill script changes into backend/ itself)
.github/skills/scripts/skill-runner.sh security-scan-go-vuln
```

Output:
```
Scanning your code and 125 packages across 23 dependent modules for known vulnerabilities...
No vulnerabilities found.
```

### Example 2: JSON Output for CI/CD

```bash
# Get JSON output for automated processing
.github/skills/scripts/skill-runner.sh security-scan-go-vuln json > vuln-report.json
```

### Example 3: CI/CD Pipeline Integration

```yaml
# GitHub Actions example (run from the repository root)
- name: Check Go Vulnerabilities
  run: .github/skills/scripts/skill-runner.sh security-scan-go-vuln

- name: Upload SARIF Report
  if: always()
  run: |
    .github/skills/scripts/skill-runner.sh security-scan-go-vuln sarif > results.sarif
    # Upload to GitHub Code Scanning
```

### Example 4: Binary Mode Scan

```bash
# Scan a compiled binary
.github/skills/scripts/skill-runner.sh security-scan-go-vuln text binary
```

## Error Handling

### Common Issues

**Go not installed**:
```bash
Error: Go 1.23+ is required
Solution: Install Go 1.23 or higher
```

**Network unavailable**:
```bash
Error: Failed to fetch vulnerability database
Solution: Check internet connection or proxy settings
```

**Vulnerabilities found**:
```bash
Exit code: 3
Solution: Review vulnerabilities and update affected packages
```

**Module not found**:
```bash
Error: go.mod file not found
Solution: Run from a valid Go module directory
```

## Exit Codes

- **0**: No vulnerabilities found
- **1**: Scan error or invalid arguments
- **3**: Vulnerabilities detected (standard govulncheck exit code)

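Wrappers around this skill can handle the documented codes explicitly; a minimal sketch (the helper name is illustrative):

```shell
# describe_exit CODE: map the documented govulncheck exit codes to a
# human-readable outcome.
describe_exit() {
  case "$1" in
    0) echo "no vulnerabilities found" ;;
    3) echo "vulnerabilities detected" ;;
    *) echo "scan error or invalid arguments" ;;
  esac
}

# Example:
#   .github/skills/scripts/skill-runner.sh security-scan-go-vuln || rc=$?
#   describe_exit "${rc:-0}"
```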
## Related Skills

- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Multi-language vulnerability scanning
- [test-backend-coverage](./test-backend-coverage.SKILL.md) - Backend test coverage

## Notes

- `govulncheck` uses the official Go vulnerability database at https://vuln.go.dev
- The database is consulted fresh during each scan
- Only reports vulnerabilities that are reachable from your code
- Does not require building the code (analyzes source)
- Can also scan compiled binaries with `-mode=binary`
- Results may change as new vulnerabilities are published
- Recommended to run before each release and in CI/CD
- Very low false-positive rate (reports only known, reachable vulnerabilities)

## Remediation Workflow

When vulnerabilities are found:

1. **Review the Report**: Understand which packages are affected
2. **Check Fix Availability**: Look for fixed versions in the report
3. **Update Dependencies**: Run `go get <module>@<fixed-version>` to update affected packages
4. **Re-run the Scan**: Verify the vulnerabilities are resolved
5. **Test**: Run the full test suite after updates
6. **Document**: Note any unresolvable vulnerabilities in the security log

## Integration with GitHub Security

For SARIF output integration with GitHub Code Scanning:

```bash
# Generate SARIF report
.github/skills/scripts/skill-runner.sh security-scan-go-vuln sarif > govulncheck.sarif

# Upload to GitHub (requires GitHub CLI; the code-scanning API expects the
# SARIF file gzip-compressed and Base64-encoded)
gh api /repos/:owner/:repo/code-scanning/sarifs \
  -f sarif="$(gzip -c govulncheck.sarif | base64 | tr -d '\n')" \
  -f commit_sha="$GITHUB_SHA" \
  -f ref="$GITHUB_REF"
```

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `go run golang.org/x/vuln/cmd/govulncheck@latest`

.github/skills/security-scan-trivy-scripts/run.sh (115 lines, vendored)

#!/usr/bin/env bash
# Security Scan Trivy - Execution Script
#
# This script wraps the Trivy Docker command to scan for vulnerabilities,
# secrets, and misconfigurations.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_docker_environment || error_exit "Docker is required but not available"

# Set defaults
set_default_env "TRIVY_SEVERITY" "CRITICAL,HIGH,MEDIUM"
set_default_env "TRIVY_TIMEOUT" "10m"

# Parse arguments
# Default scanners exclude misconfig to avoid non-actionable policy bundle issues
# that can cause scan errors unrelated to the repository contents.
SCANNERS="${1:-vuln,secret}"
FORMAT="${2:-table}"

# Validate format
case "${FORMAT}" in
    table|json|sarif)
        ;;
    *)
        log_error "Invalid format: ${FORMAT}. Must be one of: table, json, sarif"
        exit 2
        ;;
esac

# Validate scanners
IFS=',' read -ra SCANNER_ARRAY <<< "${SCANNERS}"
for scanner in "${SCANNER_ARRAY[@]}"; do
    case "${scanner}" in
        vuln|secret|misconfig)
            ;;
        *)
            log_error "Invalid scanner: ${scanner}. Must be one of: vuln, secret, misconfig"
            exit 2
            ;;
    esac
done

# Execute Trivy scan
log_step "SCANNING" "Running Trivy security scan"
log_info "Scanners: ${SCANNERS}"
log_info "Format: ${FORMAT}"
log_info "Severity: ${TRIVY_SEVERITY}"
log_info "Timeout: ${TRIVY_TIMEOUT}"

cd "${PROJECT_ROOT}"

# Avoid scanning generated/cached artifacts that commonly contain fixture secrets,
# non-Dockerfile files named like Dockerfiles, and large logs.
SKIP_DIRS=(
    ".git"
    ".venv"
    ".cache"
    "node_modules"
    "frontend/node_modules"
    "frontend/dist"
    "frontend/coverage"
    "test-results"
    "codeql-db-go"
    "codeql-db-js"
    "codeql-agent-results"
    "my-codeql-db"
    ".trivy_logs"
)

SKIP_DIR_FLAGS=()
for d in "${SKIP_DIRS[@]}"; do
    SKIP_DIR_FLAGS+=("--skip-dirs" "/app/${d}")
done

# Run Trivy via Docker.
# Note: the explicit --severity flag below takes precedence over the
# TRIVY_SEVERITY env var passed above, so only CRITICAL/HIGH findings
# affect the exit code.
if docker run --rm \
    -v "$(pwd):/app:ro" \
    -e "TRIVY_SEVERITY=${TRIVY_SEVERITY}" \
    -e "TRIVY_TIMEOUT=${TRIVY_TIMEOUT}" \
    aquasec/trivy:latest \
    fs \
    --scanners "${SCANNERS}" \
    --timeout "${TRIVY_TIMEOUT}" \
    --exit-code 1 \
    --severity "CRITICAL,HIGH" \
    --format "${FORMAT}" \
    "${SKIP_DIR_FLAGS[@]}" \
    /app; then
    log_success "Trivy scan completed - no issues found"
    exit 0
else
    exit_code=$?
    if [[ ${exit_code} -eq 1 ]]; then
        log_error "Trivy scan found security issues"
    else
        log_error "Trivy scan failed with exit code: ${exit_code}"
    fi
    exit "${exit_code}"
fi

.github/skills/security-scan-trivy.SKILL.md (253 lines, vendored)

---
# agentskills.io specification v1.0
name: "security-scan-trivy"
version: "1.0.0"
description: "Run Trivy security scanner for vulnerabilities, secrets, and misconfigurations"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "scanning"
  - "trivy"
  - "vulnerabilities"
  - "secrets"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "docker"
    version: ">=24.0"
    optional: false
environment_variables:
  - name: "TRIVY_SEVERITY"
    description: "Comma-separated list of severities to scan for"
    default: "CRITICAL,HIGH,MEDIUM"
    required: false
  - name: "TRIVY_TIMEOUT"
    description: "Timeout for the Trivy scan"
    default: "10m"
    required: false
parameters:
  - name: "scanners"
    type: "string"
    description: "Comma-separated list of scanners (vuln, secret, misconfig)"
    default: "vuln,secret"  # matches run.sh, which omits misconfig by default
    required: false
  - name: "format"
    type: "string"
    description: "Output format (table, json, sarif)"
    default: "table"
    required: false
outputs:
  - name: "scan_results"
    type: "stdout"
    description: "Trivy scan results in the specified format"
  - name: "exit_code"
    type: "number"
    description: "0 if no issues found, non-zero otherwise"
metadata:
  category: "security"
  subcategory: "scan"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Security Scan Trivy
|
||||
|
||||
## Overview
|
||||
|
||||
Executes Trivy security scanner using Docker to scan the project for vulnerabilities, secrets, and misconfigurations. Trivy scans filesystem, dependencies, and configuration files to identify security issues.
|
||||
|
||||
This skill is designed for CI/CD pipelines and local security validation before commits.
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Docker 24.0 or higher installed and running
|
||||
- Internet connection (for vulnerability database updates)
|
||||
- Read permissions for project directory
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage
|
||||
|
||||
Run with default settings (all scanners, table format):
|
||||
|
||||
```bash
|
||||
cd /path/to/charon
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy
|
||||
```
|
||||
|
||||
### Custom Scanners
|
||||
|
||||
Scan only for vulnerabilities:
|
||||
|
||||
```bash
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy vuln
|
||||
```
|
||||
|
||||
Scan for secrets and misconfigurations:
|
||||
|
||||
```bash
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy secret,misconfig
|
||||
```
|
||||
|
||||
### Custom Severity
|
||||
|
||||
Scan only for critical and high severity issues:
|
||||
|
||||
```bash
|
||||
TRIVY_SEVERITY=CRITICAL,HIGH .github/skills/scripts/skill-runner.sh security-scan-trivy
|
||||
```
|
||||
|
||||
### JSON Output
|
||||
|
||||
Get results in JSON format for parsing:
|
||||
|
||||
```bash
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy vuln,secret,misconfig json
|
||||
```
|
||||
|
||||
## Parameters
|
||||
|
||||
| Parameter | Type | Required | Default | Description |
|
||||
|-----------|------|----------|---------|-------------|
|
||||
| scanners | string | No | vuln,secret,misconfig | Comma-separated list of scanners to run |
|
||||
| format | string | No | table | Output format (table, json, sarif) |
|
||||
|
||||
## Environment Variables
|
||||
|
||||
| Variable | Required | Default | Description |
|
||||
|----------|----------|---------|-------------|
|
||||
| TRIVY_SEVERITY | No | CRITICAL,HIGH,MEDIUM | Severities to report |
|
||||
| TRIVY_TIMEOUT | No | 10m | Maximum scan duration |
|
||||
|
||||
## Outputs
|
||||
|
||||
- **Success Exit Code**: 0 (no issues found)
|
||||
- **Error Exit Codes**:
|
||||
- 1: Issues found
|
||||
- 2: Scanner error
|
||||
- **Output**: Scan results to stdout in specified format
|
||||
|
||||
## Scanner Types
|
||||
|
||||
### Vulnerability Scanner (vuln)
|
||||
Scans for known CVEs in:
|
||||
- Go dependencies (go.mod)
|
||||
- npm packages (package.json)
|
||||
- Docker base images (Dockerfile)
|
||||
|
||||
### Secret Scanner (secret)
|
||||
Detects exposed secrets:
|
||||
- API keys
|
||||
- Passwords
|
||||
- Tokens
|
||||
- Private keys
|
||||
|
||||
### Misconfiguration Scanner (misconfig)
|
||||
Checks configuration files:
|
||||
- Dockerfile best practices
|
||||
- Kubernetes manifests
|
||||
- Terraform files
|
||||
- Docker Compose files
|
||||
|
||||
## Examples
|
||||
|
||||
### Example 1: Full Scan with Table Output
|
||||
|
||||
```bash
|
||||
# Scan all vulnerability types, display as table
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy
|
||||
```
|
||||
|
||||
Output:
|
||||
```
|
||||
2025-12-20T10:00:00Z INFO Trivy version: 0.48.0
|
||||
2025-12-20T10:00:01Z INFO Scanning filesystem...
|
||||
Total: 0 (CRITICAL: 0, HIGH: 0, MEDIUM: 0)
|
||||
```
|
||||
|
||||
### Example 2: Vulnerability Scan Only (JSON)
|
||||
|
||||
```bash
|
||||
# Scan for vulnerabilities only, output as JSON
|
||||
.github/skills/scripts/skill-runner.sh security-scan-trivy vuln json > trivy-results.json
|
||||
```
|
||||
|
||||
### Example 3: Critical Issues Only
|
||||
|
||||
```bash
|
||||
# Scan for critical severity issues only
|
||||
TRIVY_SEVERITY=CRITICAL .github/skills/scripts/skill-runner.sh security-scan-trivy
|
||||
```
|
||||
|
||||
### Example 4: CI/CD Pipeline Integration
|
||||
|
||||
```yaml
|
||||
# GitHub Actions example
|
||||
- name: Run Trivy Security Scan
|
||||
run: .github/skills/scripts/skill-runner.sh security-scan-trivy
|
||||
continue-on-error: false
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Common Issues
|
||||
|
||||
**Docker not running**:
|
||||
```bash
|
||||
Error: Cannot connect to Docker daemon
|
||||
Solution: Start Docker service
|
||||
```
|
||||
|
||||
**Network timeout**:
|
||||
```bash
|
||||
Error: Failed to download vulnerability database
|
||||
Solution: Increase TRIVY_TIMEOUT or check internet connection
|
||||
```
|
||||
|
||||
**Vulnerabilities found**:
|
||||
```bash
|
||||
Exit code: 1
|
||||
Solution: Review and remediate reported vulnerabilities
|
||||
```
|
||||
|
||||
## Exit Codes
|
||||
|
||||
- **0**: No security issues found
|
||||
- **1**: Security issues detected
|
||||
- **2**: Scanner error or invalid arguments
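
A wrapper script can branch on those codes explicitly rather than treating every non-zero result the same way. A minimal sketch (`run_scan` is a hypothetical stand-in for the real `.github/skills/scripts/skill-runner.sh security-scan-trivy` call, wired to a mock variable so the sketch is self-contained):

```shell
#!/usr/bin/env bash
# Map the documented exit codes (0 clean, 1 findings, anything else scanner error)
# to a one-word verdict a pipeline can act on.
run_scan() { return "${MOCK_TRIVY_EXIT:-0}"; }  # stand-in for the skill-runner call

classify_scan() {
  local rc=0
  run_scan || rc=$?
  case "${rc}" in
    0) echo "clean" ;;
    1) echo "findings" ;;
    *) echo "scanner-error" ;;
  esac
}

classify_scan
```

In a real pipeline, "findings" would typically fail the job, while "scanner-error" might instead be retried or escalated as an infrastructure problem.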

## Related Skills

- [security-scan-go-vuln](./security-scan-go-vuln.SKILL.md) - Go-specific vulnerability checking
- [qa-precommit-all](./qa-precommit-all.SKILL.md) - Pre-commit quality checks

## Notes

- Trivy automatically updates its vulnerability database on each run
- Scan results may vary based on database version
- Some vulnerabilities may have no fix available yet
- Consider using a `.trivyignore` file to suppress false positives
- Recommended to run before each release
- Network access required for first run and database updates
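
The `.trivyignore` mechanism mentioned above is a plain-text file of finding IDs, one per line, placed at the scan root. A minimal sketch (the CVE IDs below are placeholders, not real findings from this project):

```
# .trivyignore - suppress accepted or false-positive findings
# Accepted risk: dev-only dependency, tracked in the issue tracker
CVE-2024-0001
# False positive: test fixture, not a real secret
CVE-2024-0002
```

Trivy picks the file up automatically when it exists in the working directory; keeping a short justification comment next to each entry makes periodic review easier.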

## Security Thresholds

**Project Standards**:

- **CRITICAL**: Must fix before release (blocking)
- **HIGH**: Should fix before release (warning)
- **MEDIUM**: Fix in next release cycle (informational)
- **LOW**: Optional, fix as time permits

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: Docker inline command (Trivy)
237
.github/skills/security-sign-cosign-scripts/run.sh
vendored
@@ -1,237 +0,0 @@
#!/usr/bin/env bash
# Security Sign Cosign - Execution Script
#
# This script signs Docker images or files using Cosign (Sigstore).
# Supports both keyless (OIDC) and key-based signing.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Set defaults
set_default_env "COSIGN_EXPERIMENTAL" "1"
set_default_env "COSIGN_YES" "true"

# Parse arguments
TYPE="${1:-docker}"
TARGET="${2:-}"

if [[ -z "${TARGET}" ]]; then
  log_error "Usage: security-sign-cosign <type> <target>"
  log_error "  type: docker or file"
  log_error "  target: Docker image tag or file path"
  log_error ""
  log_error "Examples:"
  log_error "  security-sign-cosign docker charon:local"
  log_error "  security-sign-cosign file ./dist/charon-linux-amd64"
  exit 2
fi

# Validate type
case "${TYPE}" in
  docker|file)
    ;;
  *)
    log_error "Invalid type: ${TYPE}"
    log_error "Type must be 'docker' or 'file'"
    exit 2
    ;;
esac

# Check required tools
log_step "ENVIRONMENT" "Validating prerequisites"

if ! command -v cosign >/dev/null 2>&1; then
  log_error "cosign is not installed"
  log_error "Install from: https://github.com/sigstore/cosign"
  log_error "Quick install: go install github.com/sigstore/cosign/v2/cmd/cosign@latest"
  log_error "Or download and verify v2.4.1:"
  log_error "  curl -sLO https://github.com/sigstore/cosign/releases/download/v2.4.1/cosign-linux-amd64"
  log_error "  echo 'c7c1c5ba0cf95e0bc0cfde5c5a84cd5c4e8f8e6c1c3d3b8f5e9e8d8c7b6a5f4e  cosign-linux-amd64' | sha256sum -c"
  log_error "  sudo install cosign-linux-amd64 /usr/local/bin/cosign"
  exit 2
fi

if [[ "${TYPE}" == "docker" ]]; then
  if ! command -v docker >/dev/null 2>&1; then
    log_error "Docker not found - required for image signing"
    log_error "Install from: https://docs.docker.com/get-docker/"
    exit 2
  fi

  if ! docker info >/dev/null 2>&1; then
    log_error "Docker daemon is not running"
    log_error "Start Docker daemon before signing images"
    exit 1
  fi
fi

cd "${PROJECT_ROOT}"

# Determine signing mode
if [[ "${COSIGN_EXPERIMENTAL}" == "1" ]]; then
  SIGNING_MODE="keyless (GitHub OIDC)"
else
  SIGNING_MODE="key-based"

  # Validate key and password are provided for key-based signing
  if [[ -z "${COSIGN_PRIVATE_KEY:-}" ]]; then
    log_error "COSIGN_PRIVATE_KEY environment variable is required for key-based signing"
    log_error "Set COSIGN_EXPERIMENTAL=1 for keyless signing, or provide COSIGN_PRIVATE_KEY"
    exit 2
  fi
fi

log_info "Signing mode: ${SIGNING_MODE}"

# Sign based on type
case "${TYPE}" in
  docker)
    log_step "COSIGN" "Signing Docker image: ${TARGET}"

    # Verify image exists
    if ! docker image inspect "${TARGET}" >/dev/null 2>&1; then
      log_error "Docker image not found: ${TARGET}"
      log_error "Build or pull the image first"
      exit 1
    fi

    # Sign the image
    if [[ "${COSIGN_EXPERIMENTAL}" == "1" ]]; then
      # Keyless signing
      log_info "Using keyless signing (OIDC)"
      if ! cosign sign --yes "${TARGET}" 2>&1 | tee cosign-sign.log; then
        log_error "Failed to sign image with keyless mode"
        log_error "Check that you have valid GitHub OIDC credentials"
        cat cosign-sign.log >&2 || true
        rm -f cosign-sign.log
        exit 1
      fi
      rm -f cosign-sign.log
    else
      # Key-based signing
      log_info "Using key-based signing"

      # Write private key to temporary file
      TEMP_KEY=$(mktemp)
      trap 'rm -f "${TEMP_KEY}"' EXIT
      echo "${COSIGN_PRIVATE_KEY}" > "${TEMP_KEY}"

      # Sign with key
      if [[ -n "${COSIGN_PASSWORD:-}" ]]; then
        export COSIGN_PASSWORD
      fi

      if ! cosign sign --yes --key "${TEMP_KEY}" "${TARGET}" 2>&1 | tee cosign-sign.log; then
        log_error "Failed to sign image with key"
        cat cosign-sign.log >&2 || true
        rm -f cosign-sign.log
        exit 1
      fi
      rm -f cosign-sign.log
    fi

    log_success "Image signed successfully"
    log_info "Signature pushed to registry"

    # Show verification command
    if [[ "${COSIGN_EXPERIMENTAL}" == "1" ]]; then
      log_info "Verification command:"
      log_info "  cosign verify ${TARGET} \\"
      log_info "    --certificate-identity-regexp='https://github.com/USER/REPO' \\"
      log_info "    --certificate-oidc-issuer='https://token.actions.githubusercontent.com'"
    else
      log_info "Verification command:"
      log_info "  cosign verify ${TARGET} --key cosign.pub"
    fi
    ;;

  file)
    log_step "COSIGN" "Signing file: ${TARGET}"

    # Verify file exists
    if [[ ! -f "${TARGET}" ]]; then
      log_error "File not found: ${TARGET}"
      exit 1
    fi

    SIGNATURE_FILE="${TARGET}.sig"
    CERT_FILE="${TARGET}.pem"

    # Sign the file
    if [[ "${COSIGN_EXPERIMENTAL}" == "1" ]]; then
      # Keyless signing
      log_info "Using keyless signing (OIDC)"
      if ! cosign sign-blob --yes \
        --output-signature="${SIGNATURE_FILE}" \
        --output-certificate="${CERT_FILE}" \
        "${TARGET}" 2>&1 | tee cosign-sign.log; then
        log_error "Failed to sign file with keyless mode"
        log_error "Check that you have valid GitHub OIDC credentials"
        cat cosign-sign.log >&2 || true
        rm -f cosign-sign.log
        exit 1
      fi
      rm -f cosign-sign.log

      log_success "File signed successfully"
      log_info "Signature: ${SIGNATURE_FILE}"
      log_info "Certificate: ${CERT_FILE}"

      # Show verification command
      log_info "Verification command:"
      log_info "  cosign verify-blob ${TARGET} \\"
      log_info "    --signature ${SIGNATURE_FILE} \\"
      log_info "    --certificate ${CERT_FILE} \\"
      log_info "    --certificate-identity-regexp='https://github.com/USER/REPO' \\"
      log_info "    --certificate-oidc-issuer='https://token.actions.githubusercontent.com'"
    else
      # Key-based signing
      log_info "Using key-based signing"

      # Write private key to temporary file
      TEMP_KEY=$(mktemp)
      trap 'rm -f "${TEMP_KEY}"' EXIT
      echo "${COSIGN_PRIVATE_KEY}" > "${TEMP_KEY}"

      # Sign with key
      if [[ -n "${COSIGN_PASSWORD:-}" ]]; then
        export COSIGN_PASSWORD
      fi

      if ! cosign sign-blob --yes \
        --key "${TEMP_KEY}" \
        --output-signature="${SIGNATURE_FILE}" \
        "${TARGET}" 2>&1 | tee cosign-sign.log; then
        log_error "Failed to sign file with key"
        cat cosign-sign.log >&2 || true
        rm -f cosign-sign.log
        exit 1
      fi
      rm -f cosign-sign.log

      log_success "File signed successfully"
      log_info "Signature: ${SIGNATURE_FILE}"

      # Show verification command
      log_info "Verification command:"
      log_info "  cosign verify-blob ${TARGET} \\"
      log_info "    --signature ${SIGNATURE_FILE} \\"
      log_info "    --key cosign.pub"
    fi
    ;;
esac

log_success "Signing complete"
exit 0
421
.github/skills/security-sign-cosign.SKILL.md
vendored
@@ -1,421 +0,0 @@
````markdown
---
# agentskills.io specification v1.0
name: "security-sign-cosign"
version: "1.0.0"
description: "Sign Docker images and artifacts with Cosign (Sigstore) for supply chain security"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "signing"
  - "cosign"
  - "supply-chain"
  - "sigstore"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "cosign"
    version: ">=2.4.0"
    optional: false
    install_url: "https://github.com/sigstore/cosign"
  - name: "docker"
    version: ">=24.0"
    optional: true
    description: "Required only for Docker image signing"
environment_variables:
  - name: "COSIGN_EXPERIMENTAL"
    description: "Enable keyless signing (OIDC)"
    default: "1"
    required: false
  - name: "COSIGN_YES"
    description: "Non-interactive mode"
    default: "true"
    required: false
  - name: "COSIGN_PRIVATE_KEY"
    description: "Base64-encoded private key for key-based signing"
    default: ""
    required: false
  - name: "COSIGN_PASSWORD"
    description: "Password for private key"
    default: ""
    required: false
parameters:
  - name: "type"
    type: "string"
    description: "Artifact type (docker, file)"
    required: false
    default: "docker"
  - name: "target"
    type: "string"
    description: "Docker image tag or file path"
    required: true
outputs:
  - name: "signature"
    type: "file"
    description: "Signature file (.sig for files, registry for images)"
  - name: "certificate"
    type: "file"
    description: "Certificate file (.pem for files)"
  - name: "exit_code"
    type: "number"
    description: "0 if signing succeeded, non-zero otherwise"
metadata:
  category: "security"
  subcategory: "supply-chain"
  execution_time: "fast"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: false
exit_codes:
  0: "Signing successful"
  1: "Signing failed"
  2: "Missing dependencies or invalid parameters"
---

# Security: Sign with Cosign

Sign Docker images and files using Cosign (Sigstore) for supply chain security and artifact integrity verification.

## Overview

This skill signs Docker images and arbitrary files using Cosign, creating cryptographic signatures that consumers can verify. It supports both keyless signing (using GitHub OIDC tokens in CI/CD) and key-based signing (using local private keys for development).

Signatures are stored in the Rekor transparency log for public accountability and can be verified without sharing private keys.

## Features

- Sign Docker images (stored in registry)
- Sign arbitrary files (binaries, archives, etc.)
- Keyless signing with GitHub OIDC (CI/CD)
- Key-based signing with local keys (development)
- Automatic verification after signing
- Rekor transparency log integration
- Non-interactive mode for automation

## Prerequisites

- Cosign 2.4.0 or higher
- Docker (for image signing)
- GitHub account (for keyless signing with OIDC)
- Or: a local key pair (for key-based signing)

## Usage

### Sign Docker Image (Keyless - CI/CD)

In GitHub Actions or environments with OIDC:

```bash
# Keyless signing (uses GitHub OIDC token)
COSIGN_EXPERIMENTAL=1 .github/skills/scripts/skill-runner.sh \
  security-sign-cosign docker ghcr.io/user/charon:latest
```

### Sign Docker Image (Key-Based - Local Development)

For local development with generated keys:

```bash
# Generate key pair first (if you don't have one)
# cosign generate-key-pair
# Enter password when prompted

# Sign with local key
COSIGN_EXPERIMENTAL=0 COSIGN_PRIVATE_KEY="$(cat cosign.key)" \
  COSIGN_PASSWORD="your-password" \
  .github/skills/scripts/skill-runner.sh \
  security-sign-cosign docker charon:local
```

### Sign File (Binary, Archive, etc.)

```bash
# Sign a file (creates .sig and .pem files)
.github/skills/scripts/skill-runner.sh \
  security-sign-cosign file ./dist/charon-linux-amd64
```

### Verify Signature

```bash
# Verify Docker image (keyless)
cosign verify ghcr.io/user/charon:latest \
  --certificate-identity-regexp="https://github.com/user/repo" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com"

# Verify file (keyless)
cosign verify-blob ./dist/charon-linux-amd64 \
  --signature ./dist/charon-linux-amd64.sig \
  --certificate ./dist/charon-linux-amd64.pem \
  --certificate-identity-regexp="https://github.com/user/repo" \
  --certificate-oidc-issuer="https://token.actions.githubusercontent.com"
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| type | string | No | docker | Artifact type (docker, file) |
| target | string | Yes | - | Docker image tag or file path |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| COSIGN_EXPERIMENTAL | No | 1 | Enable keyless signing (1=keyless, 0=key-based) |
| COSIGN_YES | No | true | Non-interactive mode |
| COSIGN_PRIVATE_KEY | No | "" | Base64-encoded private key (for key-based signing) |
| COSIGN_PASSWORD | No | "" | Password for private key |

## Signing Modes

### Keyless Signing (Recommended for CI/CD)

- Uses GitHub OIDC tokens for authentication
- No long-lived keys to manage or secure
- Signatures stored in Rekor transparency log
- Certificates issued by Fulcio CA
- Requires GitHub Actions or a similar OIDC provider

**Pros**:

- No key management burden
- Public transparency and auditability
- Automatic certificate rotation
- Secure by default

**Cons**:

- Requires network access
- Depends on Sigstore infrastructure
- Not suitable for air-gapped environments

### Key-Based Signing (Local Development)

- Uses local private key files
- Keys managed by the developer
- Suitable for air-gapped environments
- Requires secure key storage

**Pros**:

- Works offline
- Full control over keys
- No external dependencies

**Cons**:

- Key management complexity
- Risk of key compromise
- Manual key rotation
- No public transparency log

## Outputs

### Docker Image Signing

- Signature pushed to registry (no local file)
- Rekor transparency log entry
- Certificate (ephemeral for keyless)

### File Signing

- `<filename>.sig`: Signature file
- `<filename>.pem`: Certificate file (for keyless)
- Rekor transparency log entry (for keyless)
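
A release pipeline can sanity-check that those artifacts were actually produced before uploading anything. A minimal sketch (the function name and the keyless/key-based flag are assumptions of this example, not part of the skill's interface):

```shell
#!/usr/bin/env bash
# For each signed artifact, require the .sig produced by `cosign sign-blob`
# to sit alongside the original file; for keyless runs, require the .pem too.
check_signed() {
  local artifact="$1" keyless="${2:-1}"
  [[ -f "${artifact}.sig" ]] || { echo "missing signature: ${artifact}.sig"; return 1; }
  if [[ "${keyless}" == "1" && ! -f "${artifact}.pem" ]]; then
    echo "missing certificate: ${artifact}.pem"
    return 1
  fi
  echo "ok: ${artifact}"
}
```

Calling `check_signed ./dist/charon-linux-amd64` after signing fails fast if either companion file is absent.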

## Examples

### Example 1: Sign Local Docker Image (Development)

```bash
$ docker build -t charon:test .
$ COSIGN_EXPERIMENTAL=0 \
  COSIGN_PRIVATE_KEY="$(cat ~/.cosign/cosign.key)" \
  COSIGN_PASSWORD="my-secure-password" \
  .github/skills/scripts/skill-runner.sh security-sign-cosign docker charon:test

[INFO] Signing Docker image: charon:test
[COSIGN] Using key-based signing (COSIGN_EXPERIMENTAL=0)
[COSIGN] Signing image...
[SUCCESS] Image signed successfully
[INFO] Signature pushed to registry
[INFO] Verification command:
  cosign verify charon:test --key cosign.pub
```

### Example 2: Sign Release Binary (Keyless)

```bash
$ .github/skills/scripts/skill-runner.sh \
  security-sign-cosign file ./dist/charon-linux-amd64

[INFO] Signing file: ./dist/charon-linux-amd64
[COSIGN] Using keyless signing (GitHub OIDC)
[COSIGN] Generating ephemeral certificate...
[COSIGN] Signing with Fulcio certificate...
[SUCCESS] File signed successfully
[INFO] Signature: ./dist/charon-linux-amd64.sig
[INFO] Certificate: ./dist/charon-linux-amd64.pem
[INFO] Rekor entry: https://rekor.sigstore.dev/...
```

### Example 3: CI/CD Pipeline (GitHub Actions)

```yaml
- name: Install Cosign
  uses: sigstore/cosign-installer@v3.8.1
  with:
    cosign-release: 'v2.4.1'

- name: Sign Docker Image
  env:
    DIGEST: ${{ steps.build-and-push.outputs.digest }}
    IMAGE: ghcr.io/${{ github.repository }}
  run: |
    cosign sign --yes ${IMAGE}@${DIGEST}

- name: Verify Signature
  env:
    DIGEST: ${{ steps.build-and-push.outputs.digest }}
  run: |
    cosign verify ghcr.io/${{ github.repository }}@${DIGEST} \
      --certificate-identity-regexp="https://github.com/${{ github.repository }}" \
      --certificate-oidc-issuer="https://token.actions.githubusercontent.com"
```

### Example 4: Batch Sign Release Artifacts

```bash
# Sign all binaries in dist/ directory, skipping signature artifacts
for artifact in ./dist/charon-*; do
  if [[ -f "$artifact" && "$artifact" != *.sig && "$artifact" != *.pem ]]; then
    echo "Signing: $(basename "$artifact")"
    .github/skills/scripts/skill-runner.sh security-sign-cosign file "$artifact"
  fi
done
```

## Key Management Best Practices

### Generating Keys

```bash
# Generate a new key pair
cosign generate-key-pair

# This creates:
# - cosign.key (private key - keep secure!)
# - cosign.pub (public key - share freely)
```

### Storing Keys Securely

**DO**:

- Store private keys in a password manager or HSM
- Encrypt private keys with strong passwords
- Rotate keys periodically (every 90 days)
- Use different keys for different environments
- Back up keys securely (encrypted backups)

**DON'T**:

- Commit private keys to version control
- Store keys in plaintext files
- Share private keys via email or chat
- Use the same key for CI/CD and local development
- Hardcode passwords in scripts

### Key Rotation

```bash
# Generate new key pair
cosign generate-key-pair --output-key-prefix cosign-new

# Sign new artifacts with new key
COSIGN_PRIVATE_KEY="$(cat cosign-new.key)" ...

# Update public key in documentation
# Revoke old key after transition period
```

## Error Handling

### Common Issues

**Cosign not installed**:

```bash
Error: cosign command not found
Solution: Install Cosign from https://github.com/sigstore/cosign
Quick install: go install github.com/sigstore/cosign/v2/cmd/cosign@latest
```

**Missing OIDC token (keyless)**:

```bash
Error: OIDC token not available
Solution: Run in GitHub Actions or use key-based signing (COSIGN_EXPERIMENTAL=0)
```

**Invalid private key**:

```bash
Error: Failed to decrypt private key
Solution: Verify COSIGN_PASSWORD is correct and the key file is valid
```

**Docker image not found**:

```bash
Error: Image not found: charon:test
Solution: Build or pull the image first
```

**Registry authentication failed**:

```bash
Error: Failed to push signature to registry
Solution: Authenticate with: docker login <registry>
```

### Rekor Outages

If Rekor is unavailable, signing will fail. Fallback options:

1. **Wait and retry**: Rekor usually recovers quickly
2. **Use key-based signing**: Doesn't require Rekor
3. **Sign without Rekor**: `cosign sign --insecure-ignore-tlog` (not recommended)
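
Option 1 can be automated with a small retry loop around the signing command. A minimal sketch (`retry` is a generic helper, an assumption of this example rather than part of the skill; the commented invocation shows how it would wrap a real `cosign sign` call):

```shell
#!/usr/bin/env bash
# Retry a flaky command a fixed number of times with a constant delay,
# returning the last failure code if every attempt fails.
retry() {
  local attempts="$1" delay="$2"
  shift 2
  local i rc=0
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0 || rc=$?
    if (( i < attempts )); then
      echo "attempt ${i}/${attempts} failed (rc=${rc}); retrying in ${delay}s" >&2
      sleep "${delay}"
    fi
  done
  return "${rc}"
}

# Example (hypothetical image name):
# retry 3 5 cosign sign --yes "${IMAGE}"
```

A constant delay keeps the sketch simple; exponential backoff would be a reasonable refinement for longer outages.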

## Exit Codes

- **0**: Signing successful
- **1**: Signing failed
- **2**: Missing dependencies or invalid parameters

## Related Skills

- [security-verify-sbom](./security-verify-sbom.SKILL.md) - Verify SBOM and scan vulnerabilities
- [security-slsa-provenance](./security-slsa-provenance.SKILL.md) - Generate SLSA provenance

## Notes

- Keyless signing is recommended for CI/CD pipelines
- Key-based signing is suitable for local development and air-gapped environments
- All signatures are public and verifiable
- Rekor transparency log provides audit trail
- Docker image signatures are stored in the registry, not locally
- File signatures are stored as `.sig` files alongside the original
- Certificates for keyless signing are ephemeral and stored with the signature

## Security Considerations

- **Never commit private keys to version control**
- Use strong passwords for private keys (20+ characters)
- Rotate keys regularly (every 90 days recommended)
- Verify signatures before trusting artifacts
- Monitor Rekor logs for unauthorized signatures
- Use different keys for different trust levels
- Consider using an HSM for production keys
- Enable MFA on accounts with signing privileges

---

**Last Updated**: 2026-01-10
**Maintained by**: Charon Project
**Source**: Cosign (Sigstore)
**Documentation**: https://docs.sigstore.dev/cosign/overview/

````
@@ -1,327 +0,0 @@
#!/usr/bin/env bash
# Security SLSA Provenance - Execution Script
#
# This script generates and verifies SLSA provenance attestations.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Set defaults
set_default_env "SLSA_LEVEL" "2"

# Parse arguments
ACTION="${1:-}"
TARGET="${2:-}"
SOURCE_URI="${3:-}"
PROVENANCE_FILE="${4:-}"

if [[ -z "${ACTION}" ]] || [[ -z "${TARGET}" ]]; then
  log_error "Usage: security-slsa-provenance <action> <target> [source_uri] [provenance_file]"
  log_error "  action: generate, verify, inspect"
  log_error "  target: Docker image, file path, or provenance file"
  log_error "  source_uri: Source repository URI (for verify)"
  log_error "  provenance_file: Path to provenance file (for verify with file)"
  log_error ""
  log_error "Examples:"
  log_error "  security-slsa-provenance verify ghcr.io/user/charon:latest github.com/user/charon"
  log_error "  security-slsa-provenance verify ./dist/binary github.com/user/repo provenance.json"
  log_error "  security-slsa-provenance inspect provenance.json"
  exit 2
fi

# Validate action
case "${ACTION}" in
  generate|verify|inspect)
    ;;
  *)
    log_error "Invalid action: ${ACTION}"
    log_error "Action must be one of: generate, verify, inspect"
    exit 2
    ;;
esac

# Check required tools
log_step "ENVIRONMENT" "Validating prerequisites"

if ! command -v jq >/dev/null 2>&1; then
  log_error "jq is not installed"
  log_error "Install from: https://stedolan.github.io/jq/download/"
  exit 2
fi

if [[ "${ACTION}" == "verify" ]] && ! command -v slsa-verifier >/dev/null 2>&1; then
  log_error "slsa-verifier is not installed"
  log_error "Install from: https://github.com/slsa-framework/slsa-verifier"
  log_error "Quick install:"
  log_error "  go install github.com/slsa-framework/slsa-verifier/v2/cli/slsa-verifier@latest"
  log_error "Or:"
  log_error "  curl -sLO https://github.com/slsa-framework/slsa-verifier/releases/download/v2.6.0/slsa-verifier-linux-amd64"
  log_error "  sudo install slsa-verifier-linux-amd64 /usr/local/bin/slsa-verifier"
  exit 2
fi

if [[ "${ACTION}" == "verify" ]] && [[ "${TARGET}" =~ ^ghcr\.|^docker\.|: ]]; then
  # Docker image verification requires the gh CLI
  if ! command -v gh >/dev/null 2>&1; then
    log_error "gh (GitHub CLI) is not installed (required for Docker image verification)"
    log_error "Install from: https://cli.github.com/"
    exit 2
  fi
fi

cd "${PROJECT_ROOT}"

# Execute action
case "${ACTION}" in
  generate)
    log_step "GENERATE" "Generating SLSA provenance for ${TARGET}"
    log_warning "This generates a basic provenance for testing only"
    log_warning "Production provenance must be generated by the CI/CD build platform"

    if [[ ! -f "${TARGET}" ]]; then
|
||||
log_error "File not found: ${TARGET}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Calculate digest
|
||||
DIGEST=$(sha256sum "${TARGET}" | awk '{print $1}')
|
||||
ARTIFACT_NAME=$(basename "${TARGET}")
|
||||
OUTPUT_FILE="provenance-${ARTIFACT_NAME}.json"
|
||||
|
||||
# Generate basic provenance structure
|
||||
cat > "${OUTPUT_FILE}" <<EOF
|
||||
{
|
||||
"_type": "https://in-toto.io/Statement/v1",
|
||||
"subject": [
|
||||
{
|
||||
"name": "${ARTIFACT_NAME}",
|
||||
"digest": {
|
||||
"sha256": "${DIGEST}"
|
||||
}
|
||||
}
|
||||
],
|
||||
"predicateType": "https://slsa.dev/provenance/v1",
|
||||
"predicate": {
|
||||
"buildDefinition": {
|
||||
"buildType": "https://github.com/user/local-build",
|
||||
"externalParameters": {
|
||||
"source": {
|
||||
"uri": "git+https://github.com/user/charon@local",
|
||||
"digest": {
|
||||
"sha1": "0000000000000000000000000000000000000000"
|
||||
}
|
||||
}
|
||||
},
|
||||
"internalParameters": {},
|
||||
"resolvedDependencies": []
|
||||
},
|
||||
"runDetails": {
|
||||
"builder": {
|
||||
"id": "https://github.com/user/local-builder@v1.0.0"
|
||||
},
|
||||
"metadata": {
|
||||
"invocationId": "local-$(date +%s)",
|
||||
"startedOn": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
|
||||
"finishedOn": "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
EOF
|
||||
|
||||
log_success "Generated provenance: ${OUTPUT_FILE}"
|
||||
log_warning "This provenance is NOT cryptographically signed"
|
||||
log_warning "Use only for local testing, not for production"
|
||||
;;
|
||||
|
||||
verify)
|
||||
log_step "VERIFY" "Verifying SLSA provenance for ${TARGET}"
|
||||
|
||||
if [[ -z "${SOURCE_URI}" ]]; then
|
||||
log_error "Source URI is required for verification"
|
||||
log_error "Usage: security-slsa-provenance verify <target> <source_uri> [provenance_file]"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
# Determine if target is Docker image or file
|
||||
# Match: ghcr.io/user/repo:tag, docker.io/user/repo:tag, user/repo:tag, simple:tag, registry.io:5000/app:v1
|
||||
# Avoid: ./file, /path/to/file, file.ext, http://url
|
||||
# Strategy: Images have "name:tag" format and don't start with ./ or / and aren't files
|
||||
if [[ ! -f "${TARGET}" ]] && \
|
||||
[[ ! "${TARGET}" =~ ^\./ ]] && \
|
||||
[[ ! "${TARGET}" =~ ^/ ]] && \
|
||||
[[ ! "${TARGET}" =~ ^https?:// ]] && \
|
||||
[[ "${TARGET}" =~ : ]]; then
|
||||
# Looks like a Docker image
|
||||
log_info "Target appears to be a Docker image"
|
||||
|
||||
if [[ -n "${PROVENANCE_FILE}" ]]; then
|
||||
log_warning "Provenance file parameter ignored for Docker images"
|
||||
log_warning "Provenance will be downloaded from registry"
|
||||
fi
|
||||
|
||||
# Verify image with slsa-verifier
|
||||
log_info "Verifying image with slsa-verifier..."
|
||||
if slsa-verifier verify-image "${TARGET}" \
|
||||
--source-uri "github.com/${SOURCE_URI}" \
|
||||
--print-provenance 2>&1 | tee slsa-verify.log; then
|
||||
log_success "Provenance verification passed"
|
||||
|
||||
# Parse SLSA level from output
|
||||
if grep -q "SLSA" slsa-verify.log; then
|
||||
LEVEL=$(grep -oP 'SLSA Level: \K\d+' slsa-verify.log || echo "unknown")
|
||||
log_info "SLSA Level: ${LEVEL}"
|
||||
|
||||
if [[ "${LEVEL}" =~ ^[0-9]+$ ]] && [[ "${LEVEL}" -lt "${SLSA_LEVEL}" ]]; then
|
||||
log_warning "SLSA level ${LEVEL} is below minimum required level ${SLSA_LEVEL}"
|
||||
fi
|
||||
fi
|
||||
|
||||
rm -f slsa-verify.log
|
||||
exit 0
|
||||
else
|
||||
log_error "Provenance verification failed"
|
||||
cat slsa-verify.log >&2 || true
|
||||
rm -f slsa-verify.log
|
||||
exit 1
|
||||
fi
|
||||
else
|
||||
# File artifact
|
||||
log_info "Target appears to be a file artifact"
|
||||
|
||||
if [[ ! -f "${TARGET}" ]]; then
|
||||
log_error "File not found: ${TARGET}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ -z "${PROVENANCE_FILE}" ]]; then
|
||||
log_error "Provenance file is required for file verification"
|
||||
log_error "Usage: security-slsa-provenance verify <file> <source_uri> <provenance_file>"
|
||||
exit 2
|
||||
fi
|
||||
|
||||
if [[ ! -f "${PROVENANCE_FILE}" ]]; then
|
||||
log_error "Provenance file not found: ${PROVENANCE_FILE}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
log_info "Verifying artifact with slsa-verifier..."
|
||||
if slsa-verifier verify-artifact "${TARGET}" \
|
||||
--provenance-path "${PROVENANCE_FILE}" \
|
||||
--source-uri "github.com/${SOURCE_URI}" \
|
||||
--print-provenance 2>&1 | tee slsa-verify.log; then
|
||||
log_success "Provenance verification passed"
|
||||
|
||||
# Parse SLSA level from output
|
||||
if grep -q "SLSA" slsa-verify.log; then
|
||||
LEVEL=$(grep -oP 'SLSA Level: \K\d+' slsa-verify.log || echo "unknown")
|
||||
log_info "SLSA Level: ${LEVEL}"
|
||||
|
||||
if [[ "${LEVEL}" =~ ^[0-9]+$ ]] && [[ "${LEVEL}" -lt "${SLSA_LEVEL}" ]]; then
|
||||
log_warning "SLSA level ${LEVEL} is below minimum required level ${SLSA_LEVEL}"
|
||||
fi
|
||||
fi
|
||||
|
||||
rm -f slsa-verify.log
|
||||
exit 0
|
||||
else
|
||||
log_error "Provenance verification failed"
|
||||
cat slsa-verify.log >&2 || true
|
||||
rm -f slsa-verify.log
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
;;
|
||||
|
||||
inspect)
|
||||
log_step "INSPECT" "Inspecting SLSA provenance"
|
||||
|
||||
if [[ ! -f "${TARGET}" ]]; then
|
||||
log_error "Provenance file not found: ${TARGET}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Validate JSON
|
||||
if ! jq empty "${TARGET}" 2>/dev/null; then
|
||||
log_error "Invalid JSON in provenance file"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo " SLSA PROVENANCE DETAILS"
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
|
||||
# Extract and display key fields
|
||||
PREDICATE_TYPE=$(jq -r '.predicateType // "unknown"' "${TARGET}")
|
||||
echo "Predicate Type: ${PREDICATE_TYPE}"
|
||||
|
||||
# Builder
|
||||
BUILDER_ID=$(jq -r '.predicate.runDetails.builder.id // .predicate.builder.id // "unknown"' "${TARGET}")
|
||||
echo ""
|
||||
echo "Builder:"
|
||||
echo " ID: ${BUILDER_ID}"
|
||||
|
||||
# Source
|
||||
SOURCE_URI_FOUND=$(jq -r '.predicate.buildDefinition.externalParameters.source.uri // .predicate.materials[0].uri // "unknown"' "${TARGET}")
|
||||
SOURCE_DIGEST=$(jq -r '.predicate.buildDefinition.externalParameters.source.digest.sha1 // "unknown"' "${TARGET}")
|
||||
echo ""
|
||||
echo "Source Repository:"
|
||||
echo " URI: ${SOURCE_URI_FOUND}"
|
||||
if [[ "${SOURCE_DIGEST}" != "unknown" ]]; then
|
||||
echo " Digest: ${SOURCE_DIGEST}"
|
||||
fi
|
||||
|
||||
# Subject
|
||||
SUBJECT_NAME=$(jq -r '.subject[0].name // "unknown"' "${TARGET}")
|
||||
SUBJECT_DIGEST=$(jq -r '.subject[0].digest.sha256 // "unknown"' "${TARGET}")
|
||||
echo ""
|
||||
echo "Subject:"
|
||||
echo " Name: ${SUBJECT_NAME}"
|
||||
echo " Digest: sha256:${SUBJECT_DIGEST:0:12}..."
|
||||
|
||||
# Build metadata
|
||||
STARTED=$(jq -r '.predicate.runDetails.metadata.startedOn // .predicate.metadata.buildStartedOn // "unknown"' "${TARGET}")
|
||||
FINISHED=$(jq -r '.predicate.runDetails.metadata.finishedOn // .predicate.metadata.buildFinishedOn // "unknown"' "${TARGET}")
|
||||
echo ""
|
||||
echo "Build Metadata:"
|
||||
if [[ "${STARTED}" != "unknown" ]]; then
|
||||
echo " Started: ${STARTED}"
|
||||
fi
|
||||
if [[ "${FINISHED}" != "unknown" ]]; then
|
||||
echo " Finished: ${FINISHED}"
|
||||
fi
|
||||
|
||||
# Materials/Dependencies
|
||||
MATERIALS_COUNT=$(jq '.predicate.buildDefinition.resolvedDependencies // .predicate.materials // [] | length' "${TARGET}")
|
||||
if [[ "${MATERIALS_COUNT}" -gt 0 ]]; then
|
||||
echo ""
|
||||
echo "Materials (Dependencies): ${MATERIALS_COUNT}"
|
||||
jq -r '.predicate.buildDefinition.resolvedDependencies // .predicate.materials // [] | .[] | " - \(.uri // .name // "unknown")"' "${TARGET}" | head -n 5
|
||||
if [[ "${MATERIALS_COUNT}" -gt 5 ]]; then
|
||||
echo " ... and $((MATERIALS_COUNT - 5)) more"
|
||||
fi
|
||||
fi
|
||||
|
||||
echo ""
|
||||
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
|
||||
echo ""
|
||||
|
||||
log_success "Provenance inspection complete"
|
||||
;;
|
||||
esac
|
||||
|
||||
exit 0
|
||||
426
.github/skills/security-slsa-provenance.SKILL.md
vendored
@@ -1,426 +0,0 @@
````markdown
---
# agentskills.io specification v1.0
name: "security-slsa-provenance"
version: "1.0.0"
description: "Generate and verify SLSA provenance attestations for build transparency"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "slsa"
  - "provenance"
  - "supply-chain"
  - "attestation"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "slsa-verifier"
    version: ">=2.6.0"
    optional: false
    install_url: "https://github.com/slsa-framework/slsa-verifier"
  - name: "jq"
    version: ">=1.6"
    optional: false
  - name: "gh"
    version: ">=2.62.0"
    optional: true
    description: "GitHub CLI (for downloading attestations)"
environment_variables:
  - name: "SLSA_LEVEL"
    description: "Minimum SLSA level required (1, 2, 3)"
    default: "2"
    required: false
parameters:
  - name: "action"
    type: "string"
    description: "Action to perform (generate, verify, inspect)"
    required: true
  - name: "target"
    type: "string"
    description: "Docker image, file path, or provenance file"
    required: true
  - name: "source_uri"
    type: "string"
    description: "Source repository URI (for verification)"
    required: false
    default: ""
outputs:
  - name: "provenance_file"
    type: "file"
    description: "Generated provenance attestation (JSON)"
  - name: "verification_result"
    type: "stdout"
    description: "Verification status and details"
  - name: "exit_code"
    type: "number"
    description: "0 if successful, non-zero otherwise"
metadata:
  category: "security"
  subcategory: "supply-chain"
  execution_time: "fast"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
exit_codes:
  0: "Operation successful"
  1: "Operation failed or verification mismatch"
  2: "Missing dependencies or invalid parameters"
---

# Security: SLSA Provenance

Generate and verify SLSA (Supply-chain Levels for Software Artifacts) provenance attestations for build transparency and supply chain security.

## Overview

SLSA provenance provides verifiable metadata about how an artifact was built, including the source repository, build platform, dependencies, and build parameters. This skill generates provenance documents, verifies them against policy, and inspects provenance metadata.

SLSA Level 2+ compliance ensures that:
- Builds are executed on isolated, ephemeral systems
- Provenance is generated automatically by the build platform
- Provenance is tamper-proof and cryptographically verifiable

## Features

- Generate SLSA provenance for local artifacts
- Verify provenance against source repository
- Inspect provenance metadata
- Check SLSA level compliance
- Support Docker images and file artifacts
- Parse and display provenance in human-readable format

## Prerequisites

- slsa-verifier 2.6.0 or higher
- jq 1.6 or higher
- gh (GitHub CLI) 2.62.0 or higher (for downloading attestations)
- GitHub account (for downloading remote attestations)

## Usage

### Verify Docker Image Provenance

```bash
# Download and verify provenance from GitHub
.github/skills/scripts/skill-runner.sh security-slsa-provenance \
  verify ghcr.io/user/charon:latest github.com/user/charon
```

### Verify Local Provenance File

```bash
# Verify a local provenance file against an artifact
.github/skills/scripts/skill-runner.sh security-slsa-provenance \
  verify ./dist/charon-linux-amd64 github.com/user/charon provenance.json
```

### Inspect Provenance Metadata

```bash
# Parse and display provenance details
.github/skills/scripts/skill-runner.sh security-slsa-provenance \
  inspect provenance.json
```

### Generate Provenance (Local Development)

```bash
# Generate provenance for a local artifact
# Note: Real provenance should be generated by CI/CD
.github/skills/scripts/skill-runner.sh security-slsa-provenance \
  generate ./dist/charon-linux-amd64
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| action | string | Yes | - | Action: generate, verify, inspect |
| target | string | Yes | - | Docker image, file path, or provenance file |
| source_uri | string | No | "" | Source repository URI (github.com/user/repo) |
| provenance_file | string | No | "" | Path to provenance file (for verify action) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| SLSA_LEVEL | No | 2 | Minimum SLSA level required (1, 2, 3) |
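The `SLSA_LEVEL` default is applied inside the skill's `run.sh` via the repository's `set_default_env` helper. Functionally this matches the standard bash parameter-default idiom, sketched below; the inline fallback shown here is an assumption about the helper's behavior, not its actual implementation:

```shell
# Default SLSA_LEVEL to 2 when the caller has not exported a value,
# mirroring the run.sh call: set_default_env "SLSA_LEVEL" "2"
SLSA_LEVEL="${SLSA_LEVEL:-2}"
echo "minimum required SLSA level: ${SLSA_LEVEL}"

# A caller raises the bar for a single invocation like so:
#   SLSA_LEVEL=3 .github/skills/scripts/skill-runner.sh security-slsa-provenance \
#     verify ghcr.io/user/charon:latest github.com/user/charon
```

Because the override is a plain environment variable, it composes with CI job-level `env:` blocks without any changes to the skill itself.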
## Actions

### generate

Generates a basic SLSA provenance document for a local artifact. **Note**: This is for development/testing only. Production provenance must be generated by a trusted build platform (GitHub Actions, Cloud Build, etc.).

**Usage**:
```bash
security-slsa-provenance generate <artifact-path>
```

**Output**: `provenance-<artifact>.json`

### verify

Verifies a provenance document against an artifact and source repository. Checks:
- Provenance signature is valid
- Artifact digest matches provenance
- Source URI matches expected repository
- SLSA level meets minimum requirements

**Usage**:
```bash
# Verify Docker image (downloads attestation automatically)
security-slsa-provenance verify <image> <source-uri>

# Verify local file with provenance file
security-slsa-provenance verify <artifact> <source-uri> <provenance-file>
```

### inspect

Parses and displays provenance metadata in human-readable format. Shows:
- SLSA level
- Builder identity
- Source repository
- Build parameters
- Materials (dependencies)
- Build invocation

**Usage**:
```bash
security-slsa-provenance inspect <provenance-file>
```

## Outputs

### Generate Action
- `provenance-<artifact>.json`: Generated provenance document

### Verify Action
- Exit code 0: Verification successful
- Exit code 1: Verification failed
- stdout: Verification details and reasons

### Inspect Action
- Human-readable provenance metadata
- SLSA level and builder information
- Source and build details

## Examples

### Example 1: Verify Docker Image from GitHub

```bash
$ .github/skills/scripts/skill-runner.sh security-slsa-provenance \
    verify ghcr.io/user/charon:v1.0.0 github.com/user/charon

[INFO] Verifying SLSA provenance for ghcr.io/user/charon:v1.0.0
[SLSA] Downloading provenance from GitHub...
[SLSA] Found provenance attestation
[SLSA] Verifying provenance signature...
[SLSA] Signature valid
[SLSA] Checking source URI...
[SLSA] Source: github.com/user/charon ✓
[SLSA] Builder: https://github.com/slsa-framework/slsa-github-generator
[SLSA] SLSA Level: 3 ✓
[SUCCESS] Provenance verification passed
```

### Example 2: Verify Release Binary

```bash
$ .github/skills/scripts/skill-runner.sh security-slsa-provenance \
    verify ./dist/charon-linux-amd64 github.com/user/charon provenance-release.json

[INFO] Verifying SLSA provenance for ./dist/charon-linux-amd64
[SLSA] Reading provenance from provenance-release.json
[SLSA] Verifying provenance signature...
[SLSA] Signature valid
[SLSA] Checking artifact digest...
[SLSA] Digest matches ✓
[SLSA] Source URI: github.com/user/charon ✓
[SLSA] SLSA Level: 2 ✓
[SUCCESS] Provenance verification passed
```

### Example 3: Inspect Provenance Details

```bash
$ .github/skills/scripts/skill-runner.sh security-slsa-provenance \
    inspect provenance-release.json

[PROVENANCE] SLSA Provenance Details
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

SLSA Level: 3
Builder: https://github.com/slsa-framework/slsa-github-generator/.github/workflows/generator_container_slsa3.yml@v2.1.0

Source Repository:
  URI: github.com/user/charon
  Digest: sha1:abc123def456...
  Ref: refs/tags/v1.0.0

Build Information:
  Invoked by: github.com/user/charon/.github/workflows/docker-build.yml@refs/heads/main
  Started: 2026-01-10T12:00:00Z
  Finished: 2026-01-10T12:05:32Z

Materials:
  - github.com/user/charon@sha1:abc123def456...

Subject:
  Name: ghcr.io/user/charon
  Digest: sha256:789abc...
```

### Example 4: CI/CD Integration (GitHub Actions)

```yaml
- name: Download SLSA Verifier
  run: |
    curl -sLO https://github.com/slsa-framework/slsa-verifier/releases/download/v2.6.0/slsa-verifier-linux-amd64
    sudo install slsa-verifier-linux-amd64 /usr/local/bin/slsa-verifier

- name: Verify Image Provenance
  run: |
    .github/skills/scripts/skill-runner.sh security-slsa-provenance \
      verify ghcr.io/${{ github.repository }}:${{ github.sha }} \
      github.com/${{ github.repository }}
```

## SLSA Levels

### Level 1
- Build process is documented
- Provenance is generated
- **Not cryptographically verifiable**

### Level 2 (Recommended Minimum)
- Build on ephemeral, isolated system
- Provenance generated by build platform
- Provenance is signed and verifiable
- **This skill enforces Level 2 minimum by default**

### Level 3
- Source and build platform are strongly hardened
- Audit logs are retained
- Hermetic, reproducible builds
- **Recommended for production releases**

## Provenance Structure

A SLSA provenance document contains:

```json
{
  "_type": "https://in-toto.io/Statement/v1",
  "subject": [
    {
      "name": "ghcr.io/user/charon",
      "digest": { "sha256": "..." }
    }
  ],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "buildDefinition": {
      "buildType": "https://github.com/slsa-framework/slsa-github-generator/...",
      "externalParameters": {
        "source": { "uri": "git+https://github.com/user/charon@refs/tags/v1.0.0" }
      },
      "internalParameters": {},
      "resolvedDependencies": [...]
    },
    "runDetails": {
      "builder": { "id": "https://github.com/slsa-framework/..." },
      "metadata": {
        "invocationId": "...",
        "startedOn": "2026-01-10T12:00:00Z",
        "finishedOn": "2026-01-10T12:05:32Z"
      }
    }
  }
}
```
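Individual fields of a v1 provenance document can be pulled out with plain jq path expressions against `predicate.runDetails` and `subject`, which is essentially what the skill's inspect action does. A minimal sketch, using a throwaway document with the same shape (the file path and all values below are illustrative only):

```shell
# Build a tiny provenance-shaped document and query it with jq,
# using the v1 field paths shown above (values are illustrative).
PROV=$(mktemp)
cat > "${PROV}" <<'EOF'
{
  "subject": [{"name": "ghcr.io/user/charon", "digest": {"sha256": "789abc"}}],
  "predicateType": "https://slsa.dev/provenance/v1",
  "predicate": {
    "runDetails": {"builder": {"id": "https://github.com/slsa-framework/slsa-github-generator"}}
  }
}
EOF

BUILDER_ID=$(jq -r '.predicate.runDetails.builder.id' "${PROV}")
SUBJECT_DIGEST=$(jq -r '.subject[0].digest.sha256' "${PROV}")
echo "builder: ${BUILDER_ID}"
echo "subject digest: sha256:${SUBJECT_DIGEST}"
rm -f "${PROV}"
```

Note that reading fields this way tells you nothing about authenticity; only `slsa-verifier` checks the signature.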
## Error Handling

### Common Issues

**slsa-verifier not installed**:
```bash
Error: slsa-verifier command not found
Solution: Install from https://github.com/slsa-framework/slsa-verifier
Quick install: go install github.com/slsa-framework/slsa-verifier/v2/cli/slsa-verifier@latest
```

**Provenance not found**:
```bash
Error: No provenance found for image
Solution: Ensure the image was built with SLSA provenance generation enabled
```

**Source URI mismatch**:
```bash
Error: Source URI mismatch
Expected: github.com/user/charon
Found: github.com/attacker/charon
Solution: Verify you're using the correct image/artifact
```

**SLSA level too low**:
```bash
Error: SLSA level 1 does not meet minimum requirement of 2
Solution: Rebuild artifact with SLSA Level 2+ generator
```

**Invalid provenance signature**:
```bash
Error: Failed to verify provenance signature
Solution: Provenance may be tampered or corrupted - do not trust artifact
```

## Exit Codes

- **0**: Operation successful
- **1**: Operation failed or verification mismatch
- **2**: Missing dependencies or invalid parameters
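In pipeline glue it is worth distinguishing code 2 (broken environment, fix the pipeline) from code 1 (failed verification, reject the artifact). A sketch of the pattern, with `verify_cmd` standing in for the real skill invocation:

```shell
# Stand-in for the skill invocation; replace with the real
# skill-runner.sh call. Here it simulates a verification failure.
verify_cmd() { return 1; }

set +e
verify_cmd
rc=$?
set -e

# Map the skill's documented exit codes to distinct CI outcomes.
case "${rc}" in
  0) echo "provenance verified" ;;
  1) echo "verification failed - do not promote artifact" ;;
  2) echo "missing tools or bad arguments - fix the pipeline" ;;
  *) echo "unexpected exit code: ${rc}" ;;
esac
```

The `set +e` / `set -e` bracket matters under `set -euo pipefail`: without it, a non-zero exit would abort the wrapper before the case statement runs.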
## Related Skills

- [security-verify-sbom](./security-verify-sbom.SKILL.md) - Verify SBOM and scan vulnerabilities
- [security-sign-cosign](./security-sign-cosign.SKILL.md) - Sign artifacts with Cosign

## Notes

- **Production provenance MUST be generated by a trusted build platform**
- Local provenance generation is for testing only
- SLSA Level 2 is the minimum recommended for production
- Level 3 provides the strongest guarantees but requires hermetic builds
- Provenance verification requires network access to download attestations
- GitHub attestations are public and verifiable by anyone
- Provenance documents are immutable once generated

## Security Considerations

- Never trust artifacts without verified provenance
- Always verify the source URI matches the expected repository
- Require SLSA Level 2+ for production deployments
- Provenance tampering indicates a compromised supply chain
- Provenance signatures must be verified before trusting metadata
- Local provenance generation bypasses security guarantees
- Use SLSA-compliant build platforms (GitHub Actions, Cloud Build, etc.)

---

**Last Updated**: 2026-01-10
**Maintained by**: Charon Project
**Source**: slsa-framework/slsa-verifier
**Documentation**: https://slsa.dev/

````
316
.github/skills/security-verify-sbom-scripts/run.sh
vendored
@@ -1,316 +0,0 @@
#!/usr/bin/env bash
# Security Verify SBOM - Execution Script
#
# This script generates an SBOM for a Docker image or local file,
# compares it with a baseline (if provided), and scans for vulnerabilities.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Set defaults
set_default_env "SBOM_FORMAT" "spdx-json"
set_default_env "VULN_SCAN_ENABLED" "true"

# Parse arguments
TARGET="${1:-}"
BASELINE="${2:-}"

if [[ -z "${TARGET}" ]]; then
  log_error "Usage: security-verify-sbom <target> [baseline]"
  log_error "  target: Docker image tag or local image name (required)"
  log_error "  baseline: Path to baseline SBOM for comparison (optional)"
  log_error ""
  log_error "Examples:"
  log_error "  security-verify-sbom charon:local"
  log_error "  security-verify-sbom ghcr.io/user/charon:latest"
  log_error "  security-verify-sbom charon:test sbom-baseline.json"
  exit 2
fi

# Validate target format (basic validation)
if [[ ! "${TARGET}" =~ ^[a-zA-Z0-9:/@._-]+$ ]]; then
  log_error "Invalid target format: ${TARGET}"
  log_error "Target must match pattern: [a-zA-Z0-9:/@._-]+"
  exit 2
fi

# Check required tools
log_step "ENVIRONMENT" "Validating prerequisites"

if ! command -v syft >/dev/null 2>&1; then
  log_error "syft is not installed"
  log_error "Install from: https://github.com/anchore/syft"
  log_error "Quick install: curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin"
  exit 2
fi

if ! command -v jq >/dev/null 2>&1; then
  log_error "jq is not installed"
  log_error "Install from: https://stedolan.github.io/jq/download/"
  exit 2
fi

if [[ "${VULN_SCAN_ENABLED}" == "true" ]] && ! command -v grype >/dev/null 2>&1; then
  log_error "grype is not installed (required for vulnerability scanning)"
  log_error "Install from: https://github.com/anchore/grype"
  log_error "Quick install: curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin"
  log_error ""
  log_error "Alternatively, disable vulnerability scanning with: VULN_SCAN_ENABLED=false"
  exit 2
fi

cd "${PROJECT_ROOT}"

# Generate SBOM
log_step "SBOM" "Generating SBOM for ${TARGET}"
log_info "Format: ${SBOM_FORMAT}"

SBOM_OUTPUT="sbom-generated.json"

if ! syft "${TARGET}" -o "${SBOM_FORMAT}" > "${SBOM_OUTPUT}" 2>&1; then
  log_error "Failed to generate SBOM for ${TARGET}"
  log_error "Ensure the image exists locally or can be pulled from a registry"
  exit 1
fi

# Parse and validate SBOM
if [[ ! -f "${SBOM_OUTPUT}" ]]; then
  log_error "SBOM file not generated: ${SBOM_OUTPUT}"
  exit 1
fi

# Validate SBOM schema (SPDX format)
log_info "Validating SBOM schema..."
if ! jq -e '.spdxVersion' "${SBOM_OUTPUT}" >/dev/null 2>&1; then
  log_error "Invalid SBOM: missing spdxVersion field"
  exit 1
fi

if ! jq -e '.packages' "${SBOM_OUTPUT}" >/dev/null 2>&1; then
  log_error "Invalid SBOM: missing packages array"
  exit 1
fi

if ! jq -e '.name' "${SBOM_OUTPUT}" >/dev/null 2>&1; then
  log_error "Invalid SBOM: missing name field"
  exit 1
fi

if ! jq -e '.documentNamespace' "${SBOM_OUTPUT}" >/dev/null 2>&1; then
  log_error "Invalid SBOM: missing documentNamespace field"
  exit 1
fi

SPDX_VERSION=$(jq -r '.spdxVersion' "${SBOM_OUTPUT}")
log_success "SBOM schema valid (${SPDX_VERSION})"

PACKAGE_COUNT=$(jq '.packages | length' "${SBOM_OUTPUT}" 2>/dev/null || echo "0")

if [[ "${PACKAGE_COUNT}" -eq 0 ]]; then
  log_warning "SBOM contains no packages - this may indicate an error"
  log_warning "Target: ${TARGET}"
else
  log_success "Generated SBOM contains ${PACKAGE_COUNT} packages"
fi

# Baseline comparison (if provided)
if [[ -n "${BASELINE}" ]]; then
  log_step "BASELINE" "Comparing with baseline SBOM"

  if [[ ! -f "${BASELINE}" ]]; then
    log_error "Baseline SBOM file not found: ${BASELINE}"
    exit 2
  fi

  BASELINE_COUNT=$(jq '.packages | length' "${BASELINE}" 2>/dev/null || echo "0")

  if [[ "${BASELINE_COUNT}" -eq 0 ]]; then
    log_warning "Baseline SBOM appears empty or invalid"
  else
    log_info "Baseline: ${BASELINE_COUNT} packages, Current: ${PACKAGE_COUNT} packages"

    # Calculate delta and variance using awk for float arithmetic
    DELTA=$((PACKAGE_COUNT - BASELINE_COUNT))
    if [[ "${BASELINE_COUNT}" -gt 0 ]]; then
      # Use awk to prevent integer overflow and get accurate percentage
      VARIANCE_PCT=$(awk -v delta="${DELTA}" -v baseline="${BASELINE_COUNT}" 'BEGIN {printf "%.2f", (delta / baseline) * 100}')
      VARIANCE_ABS=$(awk -v var="${VARIANCE_PCT}" 'BEGIN {print (var < 0 ? -var : var)}')
    else
      VARIANCE_PCT="0.00"
      VARIANCE_ABS="0.00"
    fi

    if [[ "${DELTA}" -gt 0 ]]; then
      log_info "Delta: +${DELTA} packages (${VARIANCE_PCT}% increase)"
    elif [[ "${DELTA}" -lt 0 ]]; then
      log_info "Delta: ${DELTA} packages (${VARIANCE_PCT}% decrease)"
    else
      log_info "Delta: 0 packages (no change)"
    fi

    # Extract package name@version tuples for semantic comparison
    jq -r '.packages[] | "\(.name)@\(.versionInfo // .version // "unknown")"' "${BASELINE}" 2>/dev/null | sort > baseline-packages.txt || true
    jq -r '.packages[] | "\(.name)@\(.versionInfo // .version // "unknown")"' "${SBOM_OUTPUT}" 2>/dev/null | sort > current-packages.txt || true

    # Extract just names for package add/remove detection
    jq -r '.packages[].name' "${BASELINE}" 2>/dev/null | sort > baseline-names.txt || true
    jq -r '.packages[].name' "${SBOM_OUTPUT}" 2>/dev/null | sort > current-names.txt || true

    # Find added packages
    ADDED=$(comm -13 baseline-names.txt current-names.txt 2>/dev/null || echo "")
    if [[ -n "${ADDED}" ]]; then
      log_info "Added packages:"
      echo "${ADDED}" | head -n 10 | while IFS= read -r pkg; do
        VERSION=$(jq -r ".packages[] | select(.name == \"${pkg}\") | .versionInfo // .version // \"unknown\"" "${SBOM_OUTPUT}" 2>/dev/null || echo "unknown")
        log_info "  + ${pkg}@${VERSION}"
      done
      ADDED_COUNT=$(echo "${ADDED}" | wc -l)
      if [[ "${ADDED_COUNT}" -gt 10 ]]; then
        log_info "  ... and $((ADDED_COUNT - 10)) more"
      fi
    else
      log_info "Added packages: (none)"
    fi

    # Find removed packages
    REMOVED=$(comm -23 baseline-names.txt current-names.txt 2>/dev/null || echo "")
    if [[ -n "${REMOVED}" ]]; then
      log_info "Removed packages:"
      echo "${REMOVED}" | head -n 10 | while IFS= read -r pkg; do
        VERSION=$(jq -r ".packages[] | select(.name == \"${pkg}\") | .versionInfo // .version // \"unknown\"" "${BASELINE}" 2>/dev/null || echo "unknown")
        log_info "  - ${pkg}@${VERSION}"
      done
      REMOVED_COUNT=$(echo "${REMOVED}" | wc -l)
      if [[ "${REMOVED_COUNT}" -gt 10 ]]; then
        log_info "  ... and $((REMOVED_COUNT - 10)) more"
      fi
    else
      log_info "Removed packages: (none)"
    fi

    # Detect version changes in existing packages
    log_info "Version changes:"
    CHANGED_COUNT=0
    comm -12 baseline-names.txt current-names.txt 2>/dev/null | while IFS= read -r pkg; do
      BASELINE_VER=$(jq -r ".packages[] | select(.name == \"${pkg}\") | .versionInfo // .version // \"unknown\"" "${BASELINE}" 2>/dev/null || echo "unknown")
      CURRENT_VER=$(jq -r ".packages[] | select(.name == \"${pkg}\") | .versionInfo // .version // \"unknown\"" "${SBOM_OUTPUT}" 2>/dev/null || echo "unknown")
|
||||
if [[ "${BASELINE_VER}" != "${CURRENT_VER}" ]]; then
|
||||
log_info " ~ ${pkg}: ${BASELINE_VER} → ${CURRENT_VER}"
|
||||
CHANGED_COUNT=$((CHANGED_COUNT + 1))
|
||||
if [[ "${CHANGED_COUNT}" -ge 10 ]]; then
|
||||
log_info " ... (showing first 10 changes)"
|
||||
break
|
||||
fi
|
||||
fi
|
||||
done
|
||||
if [[ "${CHANGED_COUNT}" -eq 0 ]]; then
|
||||
log_info " (none)"
|
||||
fi
|
||||
|
||||
# Warn if variance exceeds threshold (using awk for float comparison)
|
||||
EXCEEDS_THRESHOLD=$(awk -v abs="${VARIANCE_ABS}" 'BEGIN {print (abs > 5.0 ? 1 : 0)}')
|
||||
if [[ "${EXCEEDS_THRESHOLD}" -eq 1 ]]; then
|
||||
log_warning "Package variance (${VARIANCE_ABS}%) exceeds 5% threshold"
|
||||
log_warning "Consider manual review of package changes"
|
||||
fi
|
||||
|
||||
# Cleanup temporary files
|
||||
rm -f baseline-packages.txt current-packages.txt baseline-names.txt current-names.txt
|
||||
fi
|
||||
fi
|
||||
|
# Vulnerability scanning (if enabled)
HAS_CRITICAL=false

if [[ "${VULN_SCAN_ENABLED}" == "true" ]]; then
    log_step "VULN" "Scanning for vulnerabilities"

    VULN_OUTPUT="vuln-results.json"

    # Run Grype on the SBOM
    if grype "sbom:${SBOM_OUTPUT}" -o json > "${VULN_OUTPUT}" 2>&1; then
        log_debug "Vulnerability scan completed successfully"
    else
        GRYPE_EXIT=$?
        if [[ ${GRYPE_EXIT} -eq 1 ]]; then
            log_debug "Grype found vulnerabilities (expected)"
        else
            log_warning "Grype scan encountered an error (exit code: ${GRYPE_EXIT})"
        fi
    fi

    # Parse vulnerability counts by severity
    if [[ -f "${VULN_OUTPUT}" ]]; then
        CRITICAL_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Critical")] | length' "${VULN_OUTPUT}" 2>/dev/null || echo "0")
        HIGH_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "High")] | length' "${VULN_OUTPUT}" 2>/dev/null || echo "0")
        MEDIUM_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Medium")] | length' "${VULN_OUTPUT}" 2>/dev/null || echo "0")
        LOW_COUNT=$(jq '[.matches[] | select(.vulnerability.severity == "Low")] | length' "${VULN_OUTPUT}" 2>/dev/null || echo "0")

        log_info "Found: ${CRITICAL_COUNT} Critical, ${HIGH_COUNT} High, ${MEDIUM_COUNT} Medium, ${LOW_COUNT} Low"

        # Display critical vulnerabilities
        if [[ "${CRITICAL_COUNT}" -gt 0 ]]; then
            HAS_CRITICAL=true
            log_error "Critical vulnerabilities detected:"
            jq -r '.matches[] | select(.vulnerability.severity == "Critical") | "  - \(.vulnerability.id) in \(.artifact.name)@\(.artifact.version) (CVSS: \(.vulnerability.cvss[0].metrics.baseScore // "N/A"))"' "${VULN_OUTPUT}" 2>/dev/null | head -n 10
            if [[ "${CRITICAL_COUNT}" -gt 10 ]]; then
                log_error "  ... and $((CRITICAL_COUNT - 10)) more critical vulnerabilities"
            fi
        fi

        # Display high vulnerabilities
        if [[ "${HIGH_COUNT}" -gt 0 ]]; then
            log_warning "High severity vulnerabilities:"
            jq -r '.matches[] | select(.vulnerability.severity == "High") | "  - \(.vulnerability.id) in \(.artifact.name)@\(.artifact.version) (CVSS: \(.vulnerability.cvss[0].metrics.baseScore // "N/A"))"' "${VULN_OUTPUT}" 2>/dev/null | head -n 5
            if [[ "${HIGH_COUNT}" -gt 5 ]]; then
                log_warning "  ... and $((HIGH_COUNT - 5)) more high vulnerabilities"
            fi
        fi

        # Display table format for summary
        log_info "Running table format scan for summary..."
        grype "sbom:${SBOM_OUTPUT}" -o table 2>&1 | tail -n 20 || true
    else
        log_warning "Vulnerability scan results not found"
    fi
else
    log_info "Vulnerability scanning disabled (air-gapped mode)"
fi

# Final summary
echo ""
log_step "SUMMARY" "SBOM Verification Complete"
log_info "Target: ${TARGET}"
log_info "Packages: ${PACKAGE_COUNT}"
# Default expansions guard against unset variables (set -u) when a section was skipped
if [[ -n "${BASELINE}" ]]; then
    log_info "Baseline comparison: ${VARIANCE_PCT:-0.00}% variance"
fi
if [[ "${VULN_SCAN_ENABLED}" == "true" ]]; then
    log_info "Vulnerabilities: ${CRITICAL_COUNT:-0} Critical, ${HIGH_COUNT:-0} High, ${MEDIUM_COUNT:-0} Medium, ${LOW_COUNT:-0} Low"
fi
log_info "SBOM file: ${SBOM_OUTPUT}"

# Exit with appropriate code
if [[ "${HAS_CRITICAL}" == "true" ]]; then
    log_error "CRITICAL vulnerabilities found - review required"
    exit 1
fi

if [[ "${HIGH_COUNT:-0}" -gt 0 ]]; then
    log_warning "High severity vulnerabilities found - review recommended"
fi

log_success "Verification complete"
exit 0
317
.github/skills/security-verify-sbom.SKILL.md
vendored
@@ -1,317 +0,0 @@
````markdown
---
# agentskills.io specification v1.0
name: "security-verify-sbom"
version: "1.0.0"
description: "Verify SBOM completeness, scan for vulnerabilities, and perform semantic diff analysis"
author: "Charon Project"
license: "MIT"
tags:
  - "security"
  - "sbom"
  - "verification"
  - "supply-chain"
  - "vulnerability-scanning"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "syft"
    version: ">=1.17.0"
    optional: false
    install_url: "https://github.com/anchore/syft"
  - name: "grype"
    version: ">=0.85.0"
    optional: false
    install_url: "https://github.com/anchore/grype"
  - name: "jq"
    version: ">=1.6"
    optional: false
environment_variables:
  - name: "SBOM_FORMAT"
    description: "SBOM format (spdx-json, cyclonedx-json)"
    default: "spdx-json"
    required: false
  - name: "VULN_SCAN_ENABLED"
    description: "Enable vulnerability scanning"
    default: "true"
    required: false
parameters:
  - name: "target"
    type: "string"
    description: "Docker image or file path"
    required: true
    validation: "^[a-zA-Z0-9:/@._-]+$"
  - name: "baseline"
    type: "string"
    description: "Baseline SBOM file path for comparison"
    required: false
    default: ""
  - name: "vuln_scan"
    type: "boolean"
    description: "Run vulnerability scan"
    required: false
    default: true
outputs:
  - name: "sbom_file"
    type: "file"
    description: "Generated SBOM in SPDX JSON format"
  - name: "scan_results"
    type: "stdout"
    description: "Verification results and vulnerability counts"
  - name: "exit_code"
    type: "number"
    description: "0 if no critical issues, 1 if critical vulnerabilities found, 2 if validation failed"
metadata:
  category: "security"
  subcategory: "supply-chain"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
exit_codes:
  0: "Verification successful"
  1: "Verification failed or critical vulnerabilities found"
  2: "Missing dependencies or invalid parameters"
---

# Security: Verify SBOM

Verify Software Bill of Materials (SBOM) completeness, scan for vulnerabilities, and perform semantic diff analysis.

## Overview

This skill generates an SBOM for Docker images or local files, compares it with a baseline (if provided), scans for known vulnerabilities using Grype, and reports any critical security issues. It supports both online vulnerability scanning and air-gapped operation modes.

## Features

- Generate SBOM in SPDX format (standardized)
- Compare with baseline SBOM (semantic diff)
- Scan for vulnerabilities (Critical/High/Medium/Low)
- Validate SBOM structure and completeness
- Support Docker images and local files
- Air-gapped operation support (skip vulnerability scanning)
- Detect added/removed packages between builds

## Prerequisites

- Syft 1.17.0 or higher (for SBOM generation)
- Grype 0.85.0 or higher (for vulnerability scanning)
- jq 1.6 or higher (for JSON processing)
- Internet connection (for vulnerability database updates, unless in air-gapped mode)
- Docker (if scanning container images)

## Usage

### Basic Verification

Run with default settings (generate SBOM + scan vulnerabilities):

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh security-verify-sbom ghcr.io/user/charon:latest
```

### Verify Docker Image with Baseline Comparison

Compare the current SBOM against a known baseline:

```bash
.github/skills/scripts/skill-runner.sh security-verify-sbom \
  charon:local sbom-baseline.json
```

### Air-Gapped Mode (No Vulnerability Scan)

Verify SBOM structure only, without network access:

```bash
VULN_SCAN_ENABLED=false .github/skills/scripts/skill-runner.sh \
  security-verify-sbom charon:local
```

### Custom SBOM Format

Generate the SBOM in CycloneDX format:

```bash
SBOM_FORMAT=cyclonedx-json .github/skills/scripts/skill-runner.sh \
  security-verify-sbom charon:local
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| target | string | Yes | - | Docker image tag or local image name |
| baseline | string | No | "" | Path to baseline SBOM for comparison |
| vuln_scan | boolean | No | true | Run vulnerability scan (set VULN_SCAN_ENABLED=false to disable) |

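The `validation` regex for `target` can be enforced before the runner is ever invoked. A minimal bash sketch of such a pre-check (the `is_valid_target` function is illustrative, not part of the skill):

```bash
#!/usr/bin/env bash
# Pre-validate a target against the skill's documented pattern.
TARGET_PATTERN='^[a-zA-Z0-9:/@._-]+$'

is_valid_target() {
    # Pattern variable left unquoted so bash treats it as a regex
    [[ "$1" =~ ${TARGET_PATTERN} ]]
}

is_valid_target "ghcr.io/user/charon:latest" && echo "ok: image reference"
is_valid_target "bad target!" || echo "rejected: contains disallowed characters"
```

Spaces and shell metacharacters fall outside the allowed class, which is why the second candidate is rejected.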
## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| SBOM_FORMAT | No | spdx-json | SBOM format (spdx-json or cyclonedx-json) |
| VULN_SCAN_ENABLED | No | true | Enable vulnerability scanning (set to false for air-gapped) |

## Outputs

- **Success Exit Code**: 0 (no critical issues found)
- **Error Exit Codes**:
  - 1: Critical vulnerabilities found or verification failed
  - 2: Missing dependencies or invalid parameters
- **Generated Files**:
  - `sbom-generated.json`: Generated SBOM file
  - `vuln-results.json`: Vulnerability scan results (if enabled)
- **Output**: Verification summary to stdout

## Examples

### Example 1: Verify Local Docker Image

```bash
$ .github/skills/scripts/skill-runner.sh security-verify-sbom charon:test
[INFO] Generating SBOM for charon:test...
[SBOM] Generated SBOM contains 247 packages
[INFO] Scanning for vulnerabilities...
[VULN] Found: 0 Critical, 2 High, 15 Medium, 42 Low
[INFO] High vulnerabilities:
  - CVE-2023-12345 in golang.org/x/crypto (CVSS: 7.5)
  - CVE-2024-67890 in github.com/example/lib (CVSS: 8.2)
[SUCCESS] Verification complete - review High severity vulnerabilities
```

### Example 2: With Baseline Comparison

```bash
$ .github/skills/scripts/skill-runner.sh security-verify-sbom \
    charon:latest sbom-baseline.json
[INFO] Generating SBOM for charon:latest...
[SBOM] Generated SBOM contains 247 packages
[INFO] Comparing with baseline...
[BASELINE] Baseline: 245 packages, Current: 247 packages
[BASELINE] Delta: +2 packages (0.8% increase)
[BASELINE] Added packages:
  - golang.org/x/crypto@v0.30.0
  - github.com/pkg/errors@v0.9.1
[BASELINE] Removed packages: (none)
[INFO] Scanning for vulnerabilities...
[VULN] Found: 0 Critical, 0 High, 5 Medium, 20 Low
[SUCCESS] Verification complete (0.8% variance from baseline)
```

### Example 3: Air-Gapped Mode

```bash
$ VULN_SCAN_ENABLED=false .github/skills/scripts/skill-runner.sh \
    security-verify-sbom charon:local
[INFO] Generating SBOM for charon:local...
[SBOM] Generated SBOM contains 247 packages
[INFO] Vulnerability scanning disabled (air-gapped mode)
[SUCCESS] SBOM generation complete
```

### Example 4: CI/CD Pipeline Integration

```yaml
# GitHub Actions example
- name: Verify SBOM
  run: |
    .github/skills/scripts/skill-runner.sh \
      security-verify-sbom ghcr.io/${{ github.repository }}:${{ github.sha }}
  continue-on-error: false
```

## Semantic Diff Analysis

When a baseline SBOM is provided, the skill performs semantic comparison:

1. **Package Count Comparison**: Reports total package delta
2. **Added Packages**: Lists new dependencies with versions
3. **Removed Packages**: Lists removed dependencies
4. **Variance Percentage**: Calculates percentage change
5. **Threshold Check**: Warns if variance exceeds 5%

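The added/removed detection behind steps 2 and 3 comes down to `comm` over sorted name lists, as in the script's implementation. A minimal standalone sketch (package names and file names here are illustrative):

```bash
#!/usr/bin/env bash
# Two sorted package-name lists standing in for baseline and current SBOMs
printf '%s\n' alpha beta gamma | sort > baseline-names.txt
printf '%s\n' beta delta gamma | sort > current-names.txt

# comm -13: lines unique to the current list  (added packages)
# comm -23: lines unique to the baseline list (removed packages)
ADDED=$(comm -13 baseline-names.txt current-names.txt)
REMOVED=$(comm -23 baseline-names.txt current-names.txt)

echo "added: ${ADDED}"      # → added: delta
echo "removed: ${REMOVED}"  # → removed: alpha

rm -f baseline-names.txt current-names.txt
```

`comm` requires both inputs sorted, which is why the script pipes each `jq` extraction through `sort` first.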
## Vulnerability Severity Thresholds

**Project Standards**:
- **CRITICAL**: Must fix before release (blocking) - **Script exits with code 1**
- **HIGH**: Should fix before release (warning) - **Script continues but logs warning**
- **MEDIUM**: Fix in next release cycle (informational)
- **LOW**: Optional, fix as time permits

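These standards map directly onto the script's exit behavior. A minimal sketch of that gate (the `severity_gate` helper is illustrative; the real script works with its `CRITICAL_COUNT`/`HIGH_COUNT` variables inline):

```bash
#!/usr/bin/env bash
# Map severity counts to the documented exit behavior:
# critical > 0 blocks (non-zero return); high > 0 warns but continues.
severity_gate() {
    local critical="$1" high="$2"
    if [[ "${critical}" -gt 0 ]]; then
        echo "BLOCKING: ${critical} critical - must fix before release"
        return 1
    fi
    if [[ "${high}" -gt 0 ]]; then
        echo "WARNING: ${high} high - should fix before release"
    fi
    return 0
}

severity_gate 0 2   # warns but still succeeds
```

Medium and low counts never influence the return value; they are reported for information only.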
## Error Handling

### Common Issues

**Syft not installed**:
```bash
Error: syft command not found
Solution: Install Syft from https://github.com/anchore/syft
```

**Grype not installed**:
```bash
Error: grype command not found
Solution: Install Grype from https://github.com/anchore/grype
```

**Docker image not found**:
```bash
Error: Unable to find image 'charon:test' locally
Solution: Build the image or pull from registry
```

**Invalid baseline SBOM**:
```bash
Error: Baseline SBOM file not found: sbom-baseline.json
Solution: Verify the file path or omit baseline parameter
```

**Network timeout (vulnerability scan)**:
```bash
Warning: Failed to update vulnerability database
Solution: Check internet connection or use air-gapped mode (VULN_SCAN_ENABLED=false)
```

## Exit Codes

- **0**: Verification successful, no critical vulnerabilities
- **1**: Critical vulnerabilities found or verification failed
- **2**: Missing dependencies or invalid parameters

## Related Skills

- [security-sign-cosign](./security-sign-cosign.SKILL.md) - Sign artifacts with Cosign
- [security-slsa-provenance](./security-slsa-provenance.SKILL.md) - Generate SLSA provenance
- [security-scan-trivy](./security-scan-trivy.SKILL.md) - Alternative vulnerability scanner

## Notes

- SBOM generation requires read access to Docker images
- Vulnerability database is updated automatically by Grype
- Baseline comparison is optional but recommended for drift detection
- Critical vulnerabilities will cause the script to exit with code 1
- High vulnerabilities generate warnings but don't block execution
- Use air-gapped mode when network access is unavailable
- SPDX format is standardized and recommended over CycloneDX

## Security Considerations

- Never commit SBOM files containing sensitive information
- Review all High and Critical vulnerabilities before deployment
- Baseline drift >5% should trigger manual review
- Air-gapped mode skips vulnerability scanning - use with caution
- SBOM files can reveal internal architecture - protect accordingly

---

**Last Updated**: 2026-01-10
**Maintained by**: Charon Project
**Source**: Syft (SBOM generation) + Grype (vulnerability scanning)

````
@@ -1,55 +0,0 @@
#!/usr/bin/env bash
# Test Backend Coverage - Execution Script
#
# This script wraps the legacy go-test-coverage.sh script while providing
# the Agent Skills interface and logging.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Helper scripts are in .github/skills/scripts/
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script (skills/skill-name-scripts/run.sh -> project root)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_go_environment "1.23" || error_exit "Go 1.23+ is required"
validate_python_environment "3.8" || error_exit "Python 3.8+ is required"

# Validate project structure
log_step "VALIDATION" "Checking project structure"
cd "${PROJECT_ROOT}"
validate_project_structure "backend" "scripts/go-test-coverage.sh" || error_exit "Invalid project structure"

# Set default environment variables
set_default_env "CHARON_MIN_COVERAGE" "85"
set_default_env "PERF_MAX_MS_GETSTATUS_P95" "25ms"
set_default_env "PERF_MAX_MS_GETSTATUS_P95_PARALLEL" "50ms"
set_default_env "PERF_MAX_MS_LISTDECISIONS_P95" "75ms"

# Execute the legacy script
log_step "EXECUTION" "Running backend tests with coverage"
log_info "Minimum coverage: ${CHARON_MIN_COVERAGE}%"

LEGACY_SCRIPT="${PROJECT_ROOT}/scripts/go-test-coverage.sh"
check_file_exists "${LEGACY_SCRIPT}"

# Execute with proper error handling
if "${LEGACY_SCRIPT}" "$@"; then
    log_success "Backend coverage tests passed"
    exit 0
else
    exit_code=$?
    log_error "Backend coverage tests failed (exit code: ${exit_code})"
    exit "${exit_code}"
fi
212
.github/skills/test-backend-coverage.SKILL.md
vendored
@@ -1,212 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-backend-coverage"
version: "1.0.0"
description: "Run Go backend tests with coverage analysis and threshold validation (minimum 85%)"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "coverage"
  - "go"
  - "backend"
  - "validation"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "go"
    version: ">=1.23"
    optional: false
  - name: "python3"
    version: ">=3.8"
    optional: false
environment_variables:
  - name: "CHARON_MIN_COVERAGE"
    description: "Minimum coverage percentage required (overrides default)"
    default: "85"
    required: false
  - name: "CPM_MIN_COVERAGE"
    description: "Alternative name for minimum coverage threshold (legacy)"
    default: "85"
    required: false
  - name: "PERF_MAX_MS_GETSTATUS_P95"
    description: "Maximum P95 latency for GetStatus endpoint (ms)"
    default: "25ms"
    required: false
  - name: "PERF_MAX_MS_GETSTATUS_P95_PARALLEL"
    description: "Maximum P95 latency for parallel GetStatus calls (ms)"
    default: "50ms"
    required: false
  - name: "PERF_MAX_MS_LISTDECISIONS_P95"
    description: "Maximum P95 latency for ListDecisions endpoint (ms)"
    default: "75ms"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose test output"
    default: "false"
    required: false
outputs:
  - name: "coverage.txt"
    type: "file"
    description: "Go coverage profile in text format"
    path: "backend/coverage.txt"
  - name: "coverage_summary"
    type: "stdout"
    description: "Summary of coverage statistics and validation result"
metadata:
  category: "test"
  subcategory: "coverage"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---

# Test Backend Coverage

## Overview

Executes the Go backend test suite with race detection enabled, generates a coverage profile, filters excluded packages, and validates that the total coverage meets or exceeds the configured threshold (default: 85%).

This skill is designed for continuous integration and pre-commit hooks to ensure code quality standards are maintained.

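The validation step boils down to a decimal comparison of the computed total against the threshold. A minimal sketch of just that comparison (the real script parses the total from `go tool cover` and uses Python for the decimal math; awk stands in for it here, and the coverage value is hard-coded for illustration):

```bash
#!/usr/bin/env bash
# Compare a coverage figure against the minimum with decimal precision.
COVERAGE="87.4"                # normally parsed from `go tool cover -func` output
MIN="${CHARON_MIN_COVERAGE:-85}"

# awk exits 0 when cov >= min, making the comparison usable in an if
if awk -v cov="${COVERAGE}" -v min="${MIN}" 'BEGIN {exit !(cov >= min)}'; then
    echo "Computed coverage: ${COVERAGE}% (minimum required ${MIN}%)"
    echo "Coverage requirement met"
else
    echo "Coverage ${COVERAGE}% is below required ${MIN}%"
    exit 1
fi
```

Integer shell comparison (`-ge`) would truncate values like 87.4, which is why a float-capable tool is needed for this check.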
## Prerequisites

- Go 1.23 or higher installed and in PATH
- Python 3.8 or higher installed and in PATH
- Backend dependencies installed (`cd backend && go mod download`)
- Write permissions in `backend/` directory (for coverage.txt)

## Usage

### Basic Usage

Run with default settings (85% minimum coverage):

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh test-backend-coverage
```

### Custom Coverage Threshold

Set a custom minimum coverage percentage:

```bash
export CHARON_MIN_COVERAGE=90
.github/skills/scripts/skill-runner.sh test-backend-coverage
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run Backend Tests with Coverage
  run: .github/skills/scripts/skill-runner.sh test-backend-coverage
  env:
    CHARON_MIN_COVERAGE: 85
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose test output (-v flag) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CHARON_MIN_COVERAGE | No | 85 | Minimum coverage percentage required for success |
| CPM_MIN_COVERAGE | No | 85 | Legacy name for minimum coverage (fallback) |
| PERF_MAX_MS_GETSTATUS_P95 | No | 25ms | Max P95 latency for GetStatus endpoint |
| PERF_MAX_MS_GETSTATUS_P95_PARALLEL | No | 50ms | Max P95 latency for parallel GetStatus |
| PERF_MAX_MS_LISTDECISIONS_P95 | No | 75ms | Max P95 latency for ListDecisions endpoint |

## Outputs

### Success Exit Code
- **0**: All tests passed and coverage meets threshold

### Error Exit Codes
- **1**: Coverage below threshold or coverage file generation failed
- **Non-zero**: Tests failed or other error occurred

### Output Files
- **backend/coverage.txt**: Go coverage profile (text format)

### Console Output
Example output:
```
Filtering excluded packages from coverage report...
Coverage filtering complete
total: (statements) 87.4%
Computed coverage: 87.4% (minimum required 85%)
Coverage requirement met
```

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh test-backend-coverage
```

### Example 2: Higher Coverage Threshold

```bash
export CHARON_MIN_COVERAGE=90
.github/skills/scripts/skill-runner.sh test-backend-coverage
```

## Excluded Packages

The following packages are excluded from coverage analysis:
- `github.com/Wikid82/charon/backend/cmd/api` - API server entrypoint
- `github.com/Wikid82/charon/backend/cmd/seed` - Database seeding tool
- `github.com/Wikid82/charon/backend/internal/logger` - Logging infrastructure
- `github.com/Wikid82/charon/backend/internal/metrics` - Metrics infrastructure
- `github.com/Wikid82/charon/backend/internal/trace` - Tracing infrastructure
- `github.com/Wikid82/charon/backend/integration` - Integration test utilities

## Error Handling

### Common Errors

#### Error: coverage file not generated by go test
**Solution**: Review test output for failures; fix failing tests

#### Error: go tool cover failed or timed out
**Solution**: Clear Go cache and re-run tests

#### Error: Coverage X% is below required Y%
**Solution**: Add tests for uncovered code paths or adjust threshold

## Related Skills

- test-backend-unit - Fast unit tests without coverage
- security-check-govulncheck - Go vulnerability scanning
- utility-cache-clear-go - Clear Go build cache

## Notes

- **Race Detection**: Always runs with `-race` flag enabled (adds ~30% overhead)
- **Coverage Filtering**: Excluded packages are defined in the script itself
- **Python Dependency**: Uses Python for decimal-precision coverage comparison
- **Timeout Protection**: Coverage generation has a 60-second timeout
- **Idempotency**: Safe to run multiple times; cleans up old coverage files

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/go-test-coverage.sh`

65
.github/skills/test-backend-unit-scripts/run.sh
vendored
@@ -1,65 +0,0 @@
#!/usr/bin/env bash
# Test Backend Unit - Execution Script
#
# This script runs Go backend unit tests without coverage analysis,
# providing fast test execution for development workflows.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Helper scripts are in .github/skills/scripts/
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script (skills/skill-name-scripts/run.sh -> project root)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_go_environment "1.23" || error_exit "Go 1.23+ is required"

# Validate project structure
log_step "VALIDATION" "Checking project structure"
cd "${PROJECT_ROOT}"
validate_project_structure "backend" || error_exit "Invalid project structure"

# Change to backend directory
cd "${PROJECT_ROOT}/backend"

# Execute tests
log_step "EXECUTION" "Running backend unit tests"

# Check if short mode is enabled
SHORT_FLAG=""
if [[ "${CHARON_TEST_SHORT:-false}" == "true" ]]; then
    SHORT_FLAG="-short"
    log_info "Running in short mode (skipping integration and heavy network tests)"
fi

# Run tests with gotestsum if available, otherwise fall back to go test
# (SHORT_FLAG is intentionally unquoted so an empty value expands to nothing)
if command -v gotestsum &> /dev/null; then
    if gotestsum --format pkgname -- $SHORT_FLAG "$@" ./...; then
        log_success "Backend unit tests passed"
        exit 0
    else
        exit_code=$?
        log_error "Backend unit tests failed (exit code: ${exit_code})"
        exit "${exit_code}"
    fi
else
    if go test $SHORT_FLAG "$@" ./...; then
        log_success "Backend unit tests passed"
        exit 0
    else
        exit_code=$?
        log_error "Backend unit tests failed (exit code: ${exit_code})"
        exit "${exit_code}"
    fi
fi
191
.github/skills/test-backend-unit.SKILL.md
vendored
@@ -1,191 +0,0 @@
|
||||
---
|
||||
# agentskills.io specification v1.0
|
||||
name: "test-backend-unit"
|
||||
version: "1.0.0"
|
||||
description: "Run Go backend unit tests without coverage analysis (fast execution)"
|
||||
author: "Charon Project"
|
||||
license: "MIT"
|
||||
tags:
|
||||
- "testing"
|
||||
- "unit-tests"
|
||||
- "go"
|
||||
- "backend"
|
||||
- "fast"
|
||||
compatibility:
|
||||
os:
|
||||
- "linux"
|
||||
- "darwin"
|
||||
shells:
|
||||
- "bash"
|
||||
requirements:
|
||||
- name: "go"
|
||||
version: ">=1.23"
|
||||
optional: false
|
||||
environment_variables: []
|
||||
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose test output"
    default: "false"
    required: false
  - name: "package"
    type: "string"
    description: "Specific package to test (e.g., ./internal/...)"
    default: "./..."
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "Go test output showing pass/fail status"
metadata:
  category: "test"
  subcategory: "unit"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---

# Test Backend Unit

## Overview

Executes the Go backend unit test suite without coverage analysis. This skill provides fast test execution for quick feedback during development, making it ideal for pre-commit checks and rapid iteration.

Unlike test-backend-coverage, this skill does not generate coverage reports or enforce coverage thresholds, focusing purely on test pass/fail status.

## Prerequisites

- Go 1.23 or higher installed and in PATH
- Backend dependencies installed (`cd backend && go mod download`)
- Sufficient disk space for test artifacts

## Usage

### Basic Usage

Run all backend unit tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh test-backend-unit
```

### Test Specific Package

Test only a specific package or module:

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- ./internal/handlers/...
```

### Verbose Output

Enable verbose test output for debugging:

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- -v
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run Backend Unit Tests
  run: .github/skills/scripts/skill-runner.sh test-backend-unit
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose test output (-v flag) |
| package | string | No | ./... | Package pattern to test |
## Environment Variables

No environment variables are required. Optionally, set `CHARON_TEST_SHORT=true` to run in short mode (`-short`), which skips integration and heavy network tests.
## Outputs

### Success Exit Code

- **0**: All tests passed

### Error Exit Codes

- **Non-zero**: One or more tests failed

### Console Output

Example output:

```
ok  github.com/Wikid82/charon/backend/internal/handlers  0.523s
ok  github.com/Wikid82/charon/backend/internal/models    0.189s
ok  github.com/Wikid82/charon/backend/internal/services  0.742s
```

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit
```

### Example 2: Test Specific Package

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- ./internal/handlers
```

### Example 3: Verbose Output

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- -v
```

### Example 4: Run with Race Detection

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- -race
```

### Example 5: Short Mode (Skip Long Tests)

```bash
.github/skills/scripts/skill-runner.sh test-backend-unit -- -short
```
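Short mode works because the skill's run script builds the flag conditionally: the variable stays empty unless enabled, and an empty, intentionally unquoted variable expands to nothing on the command line. A standalone sketch of that idiom (the function name is illustrative, not part of the skill):

```shell
# Sketch of the conditional-flag idiom used by the run script: an empty,
# intentionally unquoted variable simply vanishes from the command line.
build_test_cmd() {
    short_flag=""
    if [ "${1:-false}" = "true" ]; then
        short_flag="-short"
    fi
    # shellcheck disable=SC2086
    echo go test ${short_flag} ./...
}

build_test_cmd false   # → go test ./...
build_test_cmd true    # → go test -short ./...
```

The run script applies the same pattern with `SHORT_FLAG` before invoking `gotestsum` or `go test`.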

## Error Handling

### Common Errors

#### Error: package not found

**Solution**: Verify the package path is correct; run `go list ./...` to see available packages.

#### Error: build failed

**Solution**: Fix compilation errors; run `go build ./...` to identify issues.

#### Error: test timeout

**Solution**: Increase the timeout with the `-timeout` flag or fix hanging tests.

## Related Skills

- test-backend-coverage - Run tests with coverage analysis (slower)
- build-check-go - Verify Go builds without running tests
- security-check-govulncheck - Go vulnerability scanning

## Notes

- **Execution Time**: Fast execution (~5-10 seconds typical)
- **No Coverage**: Does not generate coverage reports
- **Race Detection**: Not enabled by default (unlike test-backend-coverage)
- **Idempotency**: Safe to run multiple times
- **Caching**: Benefits from the Go test cache for unchanged packages
- **Suitable For**: Pre-commit hooks, quick feedback, TDD workflows

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: Inline task command
@@ -1,294 +0,0 @@
#!/usr/bin/env bash
# Test E2E Playwright Coverage - Execution Script
#
# Runs Playwright end-to-end tests with code coverage collection
# using @bgotink/playwright-coverage.
#
# IMPORTANT: For accurate source-level coverage, this script starts
# the Vite dev server (localhost:5173) which proxies API calls to
# the Docker backend (localhost:8080). V8 coverage requires source
# files to be accessible on the test host.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Default parameter values
PROJECT="chromium"
VITE_PID=""
VITE_PORT="${VITE_PORT:-5173}"  # Default Vite port (avoids conflicts with common ports)
BACKEND_URL="http://localhost:8080"

# Cleanup function to kill Vite dev server on exit
cleanup() {
    if [[ -n "${VITE_PID}" ]] && kill -0 "${VITE_PID}" 2>/dev/null; then
        log_info "Stopping Vite dev server (PID: ${VITE_PID})..."
        kill "${VITE_PID}" 2>/dev/null || true
        wait "${VITE_PID}" 2>/dev/null || true
    fi
}

# Set up trap for cleanup
trap cleanup EXIT INT TERM

# Parse command-line arguments
parse_arguments() {
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --project=*)
                PROJECT="${1#*=}"
                shift
                ;;
            --project)
                PROJECT="${2:-chromium}"
                shift 2
                ;;
            --skip-vite)
                SKIP_VITE="true"
                shift
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            *)
                log_warning "Unknown argument: $1"
                shift
                ;;
        esac
    done
}

# Show help message
show_help() {
    cat << EOF
Usage: run.sh [OPTIONS]

Run Playwright E2E tests with coverage collection.

Coverage requires the Vite dev server to serve source files directly.
This script automatically starts Vite at localhost:5173, which proxies
API calls to the Docker backend at localhost:8080.

Options:
  --project=PROJECT   Browser project to run (chromium, firefox, webkit)
                      Default: chromium
  --skip-vite         Skip starting Vite dev server (use existing server)
  -h, --help          Show this help message

Environment Variables:
  PLAYWRIGHT_BASE_URL  Override test URL (default: http://localhost:5173)
  VITE_PORT            Vite dev server port (default: 5173)
  CI                   Set to 'true' for CI environment

Prerequisites:
  - Docker backend running at localhost:8080
  - Node.js dependencies installed (npm ci)

Examples:
  run.sh                      # Start Vite, run tests with coverage
  run.sh --project=firefox    # Run in Firefox with coverage
  run.sh --skip-vite          # Use existing Vite server
EOF
}

# Validate project parameter
validate_project() {
    local valid_projects=("chromium" "firefox" "webkit")
    local project_lower
    project_lower=$(echo "${PROJECT}" | tr '[:upper:]' '[:lower:]')

    for valid in "${valid_projects[@]}"; do
        if [[ "${project_lower}" == "${valid}" ]]; then
            PROJECT="${project_lower}"
            return 0
        fi
    done

    error_exit "Invalid project '${PROJECT}'. Valid options: chromium, firefox, webkit"
}

# Check if backend is running
check_backend() {
    log_info "Checking backend at ${BACKEND_URL}..."
    local max_attempts=5
    local attempt=1

    while [[ ${attempt} -le ${max_attempts} ]]; do
        if curl -sf "${BACKEND_URL}/api/v1/health" >/dev/null 2>&1; then
            log_success "Backend is healthy"
            return 0
        fi
        log_info "Waiting for backend... (attempt ${attempt}/${max_attempts})"
        sleep 2
        ((attempt++))
    done

    log_warning "Backend not responding at ${BACKEND_URL}"
    log_warning "Coverage tests require Docker backend. Start with:"
    log_warning "  docker compose -f .docker/compose/docker-compose.local.yml up -d"
    return 1
}

# Start Vite dev server
start_vite() {
    local vite_url="http://localhost:${VITE_PORT}"

    # Check if Vite is already running on our preferred port
    if curl -sf "${vite_url}" >/dev/null 2>&1; then
        log_info "Vite dev server already running at ${vite_url}"
        return 0
    fi

    log_step "VITE" "Starting Vite dev server"
    cd "${PROJECT_ROOT}/frontend"

    # Ensure dependencies are installed
    if [[ ! -d "node_modules" ]]; then
        log_info "Installing frontend dependencies..."
        npm ci --silent
    fi

    # Start Vite in background with explicit port
    log_command "npx vite --port ${VITE_PORT} (background)"
    npx vite --port "${VITE_PORT}" > /tmp/vite.log 2>&1 &
    VITE_PID=$!

    # Wait for Vite to be ready (check log for actual port in case of conflict)
    log_info "Waiting for Vite to start..."
    local max_wait=60
    local waited=0
    local actual_port="${VITE_PORT}"

    while [[ ${waited} -lt ${max_wait} ]]; do
        # Check if Vite logged its ready message with actual port
        if grep -q "Local:" /tmp/vite.log 2>/dev/null; then
            # Extract actual port from Vite log (handles port conflict auto-switch)
            actual_port=$(grep -oP 'localhost:\K[0-9]+' /tmp/vite.log 2>/dev/null | head -1 || echo "${VITE_PORT}")
            vite_url="http://localhost:${actual_port}"
        fi

        if curl -sf "${vite_url}" >/dev/null 2>&1; then
            # Update VITE_PORT if Vite chose a different port
            if [[ "${actual_port}" != "${VITE_PORT}" ]]; then
                log_warning "Port ${VITE_PORT} was busy, Vite using port ${actual_port}"
                VITE_PORT="${actual_port}"
            fi
            log_success "Vite dev server ready at ${vite_url}"
            cd "${PROJECT_ROOT}"
            return 0
        fi
        sleep 1
        ((waited++))
    done

    log_error "Vite failed to start within ${max_wait} seconds"
    log_error "Vite log:"
    cat /tmp/vite.log 2>/dev/null || true
    cd "${PROJECT_ROOT}"
    return 1
}

# Main execution
main() {
    SKIP_VITE="${SKIP_VITE:-false}"
    parse_arguments "$@"

    # Validate environment
    log_step "ENVIRONMENT" "Validating prerequisites"
    validate_node_environment "18.0" || error_exit "Node.js 18+ is required"
    check_command_exists "npx" "npx is required (part of Node.js installation)"

    # Validate project structure
    log_step "VALIDATION" "Checking project structure"
    cd "${PROJECT_ROOT}"
    validate_project_structure "tests" "playwright.config.js" "package.json" || error_exit "Invalid project structure"

    # Validate project parameter
    validate_project

    # Check backend is running (required for API proxy)
    log_step "BACKEND" "Checking Docker backend"
    if ! check_backend; then
        error_exit "Backend not available. Coverage tests require Docker backend at ${BACKEND_URL}"
    fi

    # Start Vite dev server for coverage (unless skipped)
    if [[ "${SKIP_VITE}" != "true" ]]; then
        start_vite || error_exit "Failed to start Vite dev server"
    fi

    # Ensure coverage directory exists
    log_step "SETUP" "Creating coverage directory"
    mkdir -p coverage/e2e

    # Set environment variables
    # IMPORTANT: Use the Vite URL (localhost:${VITE_PORT}) for coverage, not Docker (8080)
    export PLAYWRIGHT_HTML_OPEN="${PLAYWRIGHT_HTML_OPEN:-never}"
    export PLAYWRIGHT_BASE_URL="${PLAYWRIGHT_BASE_URL:-http://localhost:${VITE_PORT}}"

    # Log configuration
    log_step "CONFIG" "Test configuration"
    log_info "Project: ${PROJECT}"
    log_info "Test URL: ${PLAYWRIGHT_BASE_URL}"
    log_info "Backend URL: ${BACKEND_URL}"
    log_info "Coverage output: ${PROJECT_ROOT}/coverage/e2e/"
    log_info ""
    log_info "Coverage architecture:"
    log_info "  Tests → Vite (localhost:${VITE_PORT}) → serves source files"
    log_info "  Vite → Docker (localhost:8080) → API proxy"

    # Execute Playwright tests with coverage
    log_step "EXECUTION" "Running Playwright E2E tests with coverage"
    log_command "npx playwright test --project=${PROJECT}"

    local exit_code=0
    if npx playwright test --project="${PROJECT}"; then
        log_success "All E2E tests passed"
    else
        exit_code=$?
        log_error "E2E tests failed (exit code: ${exit_code})"
    fi

    # Check if coverage was generated
    log_step "COVERAGE" "Checking coverage output"
    if [[ -f "coverage/e2e/lcov.info" ]]; then
        log_success "E2E coverage generated: coverage/e2e/lcov.info"

        # Print summary if coverage.json exists
        if [[ -f "coverage/e2e/coverage.json" ]] && command -v jq &> /dev/null; then
            log_info "📊 Coverage Summary:"
            jq '.total' coverage/e2e/coverage.json 2>/dev/null || true
        fi

        # Show file sizes
        log_info "Coverage files:"
        ls -lh coverage/e2e/ 2>/dev/null || true
    else
        log_warning "No coverage data generated"
        log_warning "Ensure test files import from '@bgotink/playwright-coverage'"
    fi

    # Output report locations
    log_step "REPORTS" "Report locations"
    log_info "Coverage HTML: ${PROJECT_ROOT}/coverage/e2e/index.html"
    log_info "Coverage LCOV: ${PROJECT_ROOT}/coverage/e2e/lcov.info"
    log_info "Playwright Report: ${PROJECT_ROOT}/playwright-report/index.html"

    exit "${exit_code}"
}

# Run main with all arguments
main "$@"
202
.github/skills/test-e2e-playwright-coverage.SKILL.md
vendored
@@ -1,202 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-e2e-playwright-coverage"
version: "1.0.0"
description: "Run Playwright E2E tests with code coverage collection using @bgotink/playwright-coverage"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "e2e"
  - "playwright"
  - "coverage"
  - "integration"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "node"
    version: ">=18.0"
    optional: false
  - name: "npx"
    version: ">=1.0"
    optional: false
environment_variables:
  - name: "PLAYWRIGHT_BASE_URL"
    description: "Base URL of the Charon application under test"
    default: "http://localhost:8080"
    required: false
  - name: "PLAYWRIGHT_HTML_OPEN"
    description: "Controls HTML report auto-open behavior (set to 'never' for CI/non-interactive)"
    default: "never"
    required: false
  - name: "CI"
    description: "Set to 'true' when running in CI environment"
    default: ""
    required: false
parameters:
  - name: "project"
    type: "string"
    description: "Browser project to run (chromium, firefox, webkit)"
    default: "chromium"
    required: false
outputs:
  - name: "coverage-e2e"
    type: "directory"
    description: "E2E coverage output directory with LCOV and HTML reports"
    path: "coverage/e2e/"
  - name: "playwright-report"
    type: "directory"
    description: "HTML test report directory"
    path: "playwright-report/"
  - name: "test-results"
    type: "directory"
    description: "Test artifacts and traces"
    path: "test-results/"
metadata:
  category: "test"
  subcategory: "e2e-coverage"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Test E2E Playwright Coverage

## Overview

Runs Playwright end-to-end tests with code coverage collection using `@bgotink/playwright-coverage`. This skill collects V8 coverage data during test execution and generates reports in LCOV, HTML, and JSON formats suitable for upload to Codecov.

**IMPORTANT**: This skill starts the **Vite dev server** (not Docker) because V8 coverage requires access to source files. Running coverage against the Docker container will result in `0%` coverage.

| Mode | Base URL | Coverage Support |
|------|----------|------------------|
| Docker | `localhost:8080` | ❌ No - Shows "Unknown% (0/0)" |
| Vite Dev | `localhost:5173` | ✅ Yes - Real coverage data |

## Prerequisites

- Node.js 18.0 or higher installed and in PATH
- Playwright browsers installed (`npx playwright install`)
- `@bgotink/playwright-coverage` package installed
- Charon application running (default: `http://localhost:8080`)
- Test files in `tests/` directory using coverage-enabled imports

## Usage

### Basic Usage

Run E2E tests with coverage collection:

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-coverage
```

### Browser Selection

Run tests in a specific browser:

```bash
# Chromium (default)
.github/skills/scripts/skill-runner.sh test-e2e-playwright-coverage --project=chromium

# Firefox
.github/skills/scripts/skill-runner.sh test-e2e-playwright-coverage --project=firefox
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run E2E Tests with Coverage
  run: .github/skills/scripts/skill-runner.sh test-e2e-playwright-coverage
  env:
    PLAYWRIGHT_BASE_URL: http://localhost:8080
    CI: true

- name: Upload E2E Coverage to Codecov
  uses: codecov/codecov-action@v5
  with:
    files: ./coverage/e2e/lcov.info
    flags: e2e
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| project | string | No | chromium | Browser project: chromium, firefox, webkit |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| PLAYWRIGHT_BASE_URL | No | http://localhost:8080 | Application URL to test against |
| PLAYWRIGHT_HTML_OPEN | No | never | HTML report auto-open behavior |
| CI | No | "" | Set to "true" for CI environment behavior |

## Outputs

### Success Exit Code

- **0**: All tests passed and coverage generated

### Error Exit Codes

- **1**: One or more tests failed
- **Non-zero**: Configuration or execution error

### Output Directories

- **coverage/e2e/**: Coverage reports (LCOV, HTML, JSON)
  - `lcov.info` - LCOV format for Codecov upload
  - `coverage.json` - JSON format for programmatic access
  - `index.html` - HTML report for visual inspection
- **playwright-report/**: HTML test report with results and traces
- **test-results/**: Test artifacts, screenshots, and trace files

## Viewing Coverage Reports

### Coverage HTML Report

```bash
# Open coverage HTML report
open coverage/e2e/index.html
```

### Playwright Test Report

```bash
npx playwright show-report --port 9323
```

## Coverage Data Format

The skill generates coverage in multiple formats:

| Format | File | Purpose |
|--------|------|---------|
| LCOV | `coverage/e2e/lcov.info` | Codecov upload |
| HTML | `coverage/e2e/index.html` | Visual inspection |
| JSON | `coverage/e2e/coverage.json` | Programmatic access |

## Related Skills

- test-e2e-playwright - E2E tests without coverage
- test-frontend-coverage - Frontend unit test coverage with Vitest
- test-backend-coverage - Backend unit test coverage with Go

## Notes

- **Coverage Source**: Uses V8 coverage (native, no instrumentation needed)
- **Performance**: ~5-10% overhead compared to tests without coverage
- **Sharding**: When running sharded tests in CI, coverage files must be merged
- **LCOV Merge**: Use `lcov -a file1.info -a file2.info -o merged.info` to merge

---

**Last Updated**: 2026-01-18
**Maintained by**: Charon Project Team
@@ -1,289 +0,0 @@
#!/usr/bin/env bash
# Test E2E Playwright Debug - Execution Script
#
# Runs Playwright E2E tests in headed/debug mode with slow motion,
# optional Inspector, and trace collection for troubleshooting.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Default parameter values
FILE=""
GREP=""
SLOWMO=500
INSPECTOR=false
PROJECT="chromium"

# Parse command-line arguments
parse_arguments() {
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --file=*)
                FILE="${1#*=}"
                shift
                ;;
            --file)
                FILE="${2:-}"
                shift 2
                ;;
            --grep=*)
                GREP="${1#*=}"
                shift
                ;;
            --grep)
                GREP="${2:-}"
                shift 2
                ;;
            --slowmo=*)
                SLOWMO="${1#*=}"
                shift
                ;;
            --slowmo)
                SLOWMO="${2:-500}"
                shift 2
                ;;
            --inspector)
                INSPECTOR=true
                shift
                ;;
            --project=*)
                PROJECT="${1#*=}"
                shift
                ;;
            --project)
                PROJECT="${2:-chromium}"
                shift 2
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            *)
                log_warning "Unknown argument: $1"
                shift
                ;;
        esac
    done
}

# Show help message
show_help() {
    cat << EOF
Usage: run.sh [OPTIONS]

Run Playwright E2E tests in debug mode for troubleshooting.

Options:
  --file=FILE        Specific test file to run (relative to tests/)
  --grep=PATTERN     Filter tests by title pattern (regex)
  --slowmo=MS        Delay between actions in milliseconds (default: 500)
  --inspector        Open Playwright Inspector for step-by-step debugging
  --project=PROJECT  Browser to use: chromium, firefox, webkit (default: chromium)
  -h, --help         Show this help message

Environment Variables:
  PLAYWRIGHT_BASE_URL  Application URL to test (default: http://localhost:8080)
  PWDEBUG              Set to '1' for Inspector mode
  DEBUG                Verbose logging (e.g., 'pw:api')

Examples:
  run.sh                                  # Debug all tests in Chromium
  run.sh --file=login.spec.ts             # Debug specific file
  run.sh --grep="login"                   # Debug tests matching pattern
  run.sh --inspector                      # Open Playwright Inspector
  run.sh --slowmo=1000                    # Slower execution
  run.sh --file=test.spec.ts --inspector  # Combine options
EOF
}

# Validate project parameter
validate_project() {
    local valid_projects=("chromium" "firefox" "webkit")
    local project_lower
    project_lower=$(echo "${PROJECT}" | tr '[:upper:]' '[:lower:]')

    for valid in "${valid_projects[@]}"; do
        if [[ "${project_lower}" == "${valid}" ]]; then
            PROJECT="${project_lower}"
            return 0
        fi
    done

    error_exit "Invalid project '${PROJECT}'. Valid options: chromium, firefox, webkit"
}

# Validate test file if specified
validate_test_file() {
    if [[ -z "${FILE}" ]]; then
        return 0
    fi

    local test_path="${PROJECT_ROOT}/tests/${FILE}"

    # Handle if user provided full path
    if [[ "${FILE}" == tests/* ]]; then
        test_path="${PROJECT_ROOT}/${FILE}"
        FILE="${FILE#tests/}"
    fi

    if [[ ! -f "${test_path}" ]]; then
        log_error "Test file not found: ${test_path}"
        log_info "Available test files:"
        ls -1 "${PROJECT_ROOT}/tests/"*.spec.ts 2>/dev/null | xargs -n1 basename || true
        error_exit "Invalid test file"
    fi
}

# Build Playwright command arguments
build_playwright_args() {
    local args=()

    # Always run headed in debug mode
    args+=("--headed")

    # Add project
    args+=("--project=${PROJECT}")

    # Add grep filter if specified
    if [[ -n "${GREP}" ]]; then
        args+=("--grep=${GREP}")
    fi

    # Always collect traces in debug mode
    args+=("--trace=on")

    # Run single worker for clarity
    args+=("--workers=1")

    # No retries in debug mode
    args+=("--retries=0")

    echo "${args[*]}"
}

# Main execution
main() {
    parse_arguments "$@"

    # Validate environment
    log_step "ENVIRONMENT" "Validating prerequisites"
    validate_node_environment "18.0" || error_exit "Node.js 18+ is required"
    check_command_exists "npx" "npx is required (part of Node.js installation)"

    # Validate project structure
    log_step "VALIDATION" "Checking project structure"
    cd "${PROJECT_ROOT}"
    validate_project_structure "tests" "playwright.config.js" "package.json" || error_exit "Invalid project structure"

    # Validate parameters
    validate_project
    validate_test_file

    # Set environment variables
    export PLAYWRIGHT_HTML_OPEN="${PLAYWRIGHT_HTML_OPEN:-never}"
    set_default_env "PLAYWRIGHT_BASE_URL" "http://localhost:8080"

    # Enable Inspector if requested
    if [[ "${INSPECTOR}" == "true" ]]; then
        export PWDEBUG=1
        log_info "Playwright Inspector enabled"
    fi

    # Log configuration
    log_step "CONFIG" "Debug configuration"
    log_info "Project: ${PROJECT}"
    log_info "Test file: ${FILE:-<all tests>}"
    log_info "Grep filter: ${GREP:-<none>}"
    log_info "Slow motion: ${SLOWMO}ms"
    log_info "Inspector: ${INSPECTOR}"
    log_info "Base URL: ${PLAYWRIGHT_BASE_URL}"

    # Build command arguments (for logging; execution below uses the temp config)
    local playwright_args
    playwright_args=$(build_playwright_args)

    # Determine test path
    local test_target=""
    if [[ -n "${FILE}" ]]; then
        test_target="tests/${FILE}"
    fi

    # Build full command
    local full_cmd="npx playwright test ${playwright_args}"
    if [[ -n "${test_target}" ]]; then
        full_cmd="${full_cmd} ${test_target}"
    fi

    # Add slowMo via environment (Playwright config reads this)
    export PLAYWRIGHT_SLOWMO="${SLOWMO}"

    log_step "EXECUTION" "Running Playwright in debug mode"
    log_info "Slow motion: ${SLOWMO}ms delay between actions"
    log_info "Traces will be captured for all tests"
    echo ""
    log_command "${full_cmd}"
    echo ""

    # Create a temporary config that includes slowMo
    local temp_config="${PROJECT_ROOT}/.playwright-debug-config.js"
    cat > "${temp_config}" << EOF
// Temporary debug config - auto-generated
import baseConfig from './playwright.config.js';

export default {
  ...baseConfig,
  use: {
    ...baseConfig.use,
    launchOptions: {
      slowMo: ${SLOWMO},
    },
    trace: 'on',
  },
  workers: 1,
  retries: 0,
};
EOF

    # Run tests with temporary config
    local exit_code=0
    # shellcheck disable=SC2086
    if npx playwright test --config="${temp_config}" --headed --project="${PROJECT}" ${GREP:+--grep="${GREP}"} ${test_target}; then
        log_success "Debug tests completed successfully"
    else
        exit_code=$?
        log_warning "Debug tests completed with failures (exit code: ${exit_code})"
    fi

    # Clean up temporary config
    rm -f "${temp_config}"

    # Output helpful information
    log_step "ARTIFACTS" "Test artifacts"
    log_info "HTML Report: ${PROJECT_ROOT}/playwright-report/index.html"
    log_info "Test Results: ${PROJECT_ROOT}/test-results/"

    # Show trace info if tests ran
    if [[ -d "${PROJECT_ROOT}/test-results" ]] && find "${PROJECT_ROOT}/test-results" -name "trace.zip" -type f 2>/dev/null | head -1 | grep -q .; then
        log_info ""
        log_info "View traces with:"
        log_info "  npx playwright show-trace test-results/<test-name>/trace.zip"
    fi

    exit "${exit_code}"
}

# Run main with all arguments
main "$@"
383
.github/skills/test-e2e-playwright-debug.SKILL.md
vendored
@@ -1,383 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-e2e-playwright-debug"
version: "1.0.0"
description: "Run Playwright E2E tests in headed/debug mode for troubleshooting with slowMo and trace collection"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "e2e"
  - "playwright"
  - "debug"
  - "troubleshooting"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "node"
    version: ">=18.0"
    optional: false
  - name: "npx"
    version: ">=1.0"
    optional: false
environment_variables:
  - name: "PLAYWRIGHT_BASE_URL"
    description: "Base URL of the Charon application under test"
    default: "http://localhost:8080"
    required: false
  - name: "PWDEBUG"
    description: "Enable Playwright Inspector (set to '1' for step-by-step debugging)"
    default: ""
    required: false
  - name: "DEBUG"
    description: "Enable verbose Playwright logging (e.g., 'pw:api')"
    default: ""
    required: false
parameters:
  - name: "file"
    type: "string"
    description: "Specific test file to run (relative to tests/ directory)"
    default: ""
    required: false
  - name: "grep"
    type: "string"
    description: "Filter tests by title pattern (regex)"
    default: ""
    required: false
  - name: "slowmo"
    type: "number"
    description: "Slow down operations by specified milliseconds"
    default: "500"
    required: false
  - name: "inspector"
    type: "boolean"
    description: "Open Playwright Inspector for step-by-step debugging"
    default: "false"
    required: false
  - name: "project"
    type: "string"
    description: "Browser project to run (chromium, firefox, webkit)"
    default: "chromium"
    required: false
outputs:
  - name: "playwright-report"
    type: "directory"
    description: "HTML test report directory"
    path: "playwright-report/"
  - name: "test-results"
    type: "directory"
    description: "Test artifacts, screenshots, and traces"
    path: "test-results/"
metadata:
  category: "test"
  subcategory: "e2e-debug"
  execution_time: "variable"
  risk_level: "low"
  ci_cd_safe: false
  requires_network: true
  idempotent: true
---

# Test E2E Playwright Debug

## Overview

Runs Playwright E2E tests in headed/debug mode for troubleshooting. This skill provides enhanced debugging capabilities including:

- **Headed Mode**: Visible browser window to watch test execution
- **Slow Motion**: Configurable delay between actions for observation
- **Playwright Inspector**: Step-by-step debugging with breakpoints
- **Trace Collection**: Always captures traces for post-mortem analysis
- **Single Test Focus**: Run individual tests or test files

**Use this skill when:**
- Debugging failing E2E tests
- Understanding test flow and interactions
- Developing new E2E tests
- Investigating flaky tests

## Prerequisites

- Node.js 18.0 or higher installed and in PATH
- Playwright browsers installed (`npx playwright install chromium`)
- Charon application running at localhost:8080 (use the `docker-rebuild-e2e` skill)
- Display available (X11 or Wayland on Linux, native on macOS)
- Test files in `tests/` directory

## Usage

### Basic Debug Mode

Run all tests in headed mode with slow motion:

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug
```

### Debug Specific Test File

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --file=login.spec.ts
```

### Debug Test by Name Pattern

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --grep="should login with valid credentials"
```
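The runner only adds `--grep` to the underlying Playwright command when a pattern was actually supplied, using bash's `${var:+word}` expansion (as the execution script in this repository does). A minimal sketch of that behavior:

```shell
# ${GREP:+word} expands to word only when GREP is set and non-empty,
# so an empty filter adds no flag at all.
GREP=""
echo "args:[${GREP:+--grep=${GREP}}]"
GREP="should login with valid credentials"
echo "args:[${GREP:+--grep=${GREP}}]"
```

This is why omitting `--grep` runs the full suite instead of passing an empty pattern to Playwright.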

### With Playwright Inspector

Open the Playwright Inspector for step-by-step debugging:

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --inspector
```

### Custom Slow Motion

Adjust the delay between actions (in milliseconds):

```bash
# Slower for detailed observation
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --slowmo=1000

# Faster but still visible
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --slowmo=200
```
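Under the hood, the debug runner writes a temporary Playwright config with the slow-motion value substituted into `launchOptions.slowMo` via a shell heredoc. A minimal sketch of that substitution (an unquoted `EOF` delimiter lets the shell expand variables into the generated text):

```shell
# Unquoted EOF: ${SLOWMO} is expanded; with 'EOF' (quoted) the literal
# text "${SLOWMO}" would be written instead.
SLOWMO=1000
cat << EOF
launchOptions: { slowMo: ${SLOWMO} },
EOF
```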

### Different Browser

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --project=firefox
```

### Combined Options

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug \
  --file=dashboard.spec.ts \
  --grep="navigation" \
  --slowmo=750 \
  --project=chromium
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| file | string | No | "" | Specific test file to run |
| grep | string | No | "" | Filter tests by title pattern |
| slowmo | number | No | 500 | Delay between actions (ms) |
| inspector | boolean | No | false | Open Playwright Inspector |
| project | string | No | chromium | Browser to use |
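These parameters are passed in `--name=value` form, and the execution scripts in this repository peel the value off with bash prefix removal (`${1#*=}`), which strips the shortest match up to and including the first `=`. Sketch:

```shell
# ${arg#*=} removes the shortest leading match of "*=", leaving the value.
arg="--slowmo=750"
echo "${arg#*=}"
arg="--grep=name=value"   # only the first '=' is consumed
echo "${arg#*=}"
```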

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| PLAYWRIGHT_BASE_URL | No | http://localhost:8080 | Application URL |
| PWDEBUG | No | "" | Set to "1" for Inspector mode |
| DEBUG | No | "" | Verbose logging (e.g., "pw:api") |

## Debugging Techniques

### Using Playwright Inspector

The Inspector provides:
- **Step-through Execution**: Execute one action at a time
- **Locator Playground**: Test and refine selectors
- **Call Log**: View all Playwright API calls
- **Console**: Access browser console

```bash
# Enable Inspector
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --inspector
```

In the Inspector:
1. Use **Resume** to continue to next action
2. Use **Step** to execute one action
3. Use the **Locator** tab to test selectors
4. Check **Console** for JavaScript errors

### Adding Breakpoints in Tests

Add `await page.pause()` in your test code:

```typescript
test('debug this test', async ({ page }) => {
  await page.goto('/');
  await page.pause(); // Opens Inspector here
  await page.click('button');
});
```

### Verbose Logging

Enable detailed Playwright API logging:

```bash
DEBUG=pw:api .github/skills/scripts/skill-runner.sh test-e2e-playwright-debug
```

### Screenshot on Failure

Tests automatically capture screenshots on failure. Find them in:

```
test-results/<test-name>/
├── test-failed-1.png
├── trace.zip
└── ...
```
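A quick way to enumerate failure screenshots across all test directories is a `find` over that layout. The sketch below exercises the pattern against a temp dir standing in for `test-results/`:

```shell
# Hypothetical stand-in for test-results/; in the real tree,
# replace ${results} with test-results.
results=$(mktemp -d)
mkdir -p "${results}/login-should-login"
touch "${results}/login-should-login/test-failed-1.png" "${results}/login-should-login/trace.zip"
find "${results}" -name "test-failed-*.png" -type f
rm -rf "${results}"
```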

## Analyzing Traces

Traces are always captured in debug mode. View them with:

```bash
# Open trace viewer for a specific test
npx playwright show-trace test-results/<test-name>/trace.zip

# Or view in browser
npx playwright show-trace --port 9322
```

Traces include:
- DOM snapshots at each step
- Network requests/responses
- Console logs
- Screenshots
- Action timeline

## Examples

### Example 1: Debug Login Flow

```bash
# Rebuild environment with clean state
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean

# Debug login tests
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug \
  --file=login.spec.ts \
  --slowmo=800
```

### Example 2: Investigate Flaky Test

```bash
# Run with Inspector to step through
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug \
  --grep="flaky test name" \
  --inspector

# After identifying the issue, view the trace
npx playwright show-trace test-results/*/trace.zip
```

### Example 3: Develop New Test

```bash
# Run in headed mode while developing
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug \
  --file=new-feature.spec.ts \
  --slowmo=500
```

### Example 4: Cross-Browser Debug

```bash
# Debug in Firefox
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug \
  --project=firefox \
  --grep="cross-browser issue"
```

## Test File Locations

| Path | Description |
|------|-------------|
| `tests/` | All E2E test files |
| `tests/auth.setup.ts` | Authentication setup |
| `tests/login.spec.ts` | Login flow tests |
| `tests/dashboard.spec.ts` | Dashboard tests |
| `tests/dns-records.spec.ts` | DNS management tests |
| `playwright/.auth/` | Stored auth state |

## Troubleshooting

### No Browser Window Opens

**Linux**: Ensure an X11/Wayland display is available:

```bash
echo $DISPLAY  # Should show :0 or similar
```

**Remote/SSH**: Use X11 forwarding or VNC:

```bash
ssh -X user@host
```

**WSL2**: Install and configure WSLg or an X server

### Test Times Out

Increase the timeout for debugging:

```typescript
// In your test file
test.setTimeout(120000); // 2 minutes
```

### Inspector Doesn't Open

Ensure PWDEBUG is set:

```bash
PWDEBUG=1 npx playwright test --headed
```

### Cannot Find Test File

Check that the file exists:

```bash
ls -la tests/*.spec.ts
```

Use a path relative to the tests/ directory:

```bash
--file=login.spec.ts  # Not tests/login.spec.ts
```

## Common Issues and Solutions

| Issue | Solution |
|-------|----------|
| "Target closed" | Application crashed - check container logs |
| "Element not found" | Use Inspector to verify selector |
| "Timeout exceeded" | Increase timeout or check if element is hidden |
| "net::ERR_CONNECTION_REFUSED" | Ensure Docker container is running |
| Flaky test | Add explicit waits or use Inspector to find race condition |

## Related Skills

- [test-e2e-playwright](./test-e2e-playwright.SKILL.md) - Run tests normally
- [docker-rebuild-e2e](./docker-rebuild-e2e.SKILL.md) - Rebuild E2E environment
- [test-e2e-playwright-coverage](./test-e2e-playwright-coverage.SKILL.md) - Run with coverage

## Notes

- **Not CI/CD Safe**: Headed mode requires a display
- **Resource Usage**: Browser windows consume significant memory
- **Slow Motion**: Default 500 ms delay; adjust based on needs
- **Traces**: Always captured for post-mortem analysis
- **Single Worker**: Runs one test at a time for clarity

---

**Last Updated**: 2026-01-21
**Maintained by**: Charon Project Team
**Test Directory**: `tests/`
188
.github/skills/test-e2e-playwright-scripts/run.sh
vendored
@@ -1,188 +0,0 @@
#!/usr/bin/env bash
# Test E2E Playwright - Execution Script
#
# Runs Playwright end-to-end tests with browser selection,
# headed mode, and test filtering support.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Helper scripts are in .github/skills/scripts/ (one level up from skill-scripts dir)
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script (skills/skill-name-scripts/run.sh -> project root)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Default parameter values
PROJECT="chromium"
HEADED=false
GREP=""

# Parse command-line arguments
parse_arguments() {
    while [[ $# -gt 0 ]]; do
        case "$1" in
            --project=*)
                PROJECT="${1#*=}"
                shift
                ;;
            --project)
                PROJECT="${2:-chromium}"
                shift 2
                ;;
            --headed)
                HEADED=true
                shift
                ;;
            --grep=*)
                GREP="${1#*=}"
                shift
                ;;
            --grep)
                GREP="${2:-}"
                shift 2
                ;;
            -h|--help)
                show_help
                exit 0
                ;;
            *)
                log_warning "Unknown argument: $1"
                shift
                ;;
        esac
    done
}

# Show help message
show_help() {
    cat << EOF
Usage: run.sh [OPTIONS]

Run Playwright E2E tests against the Charon application.

Options:
  --project=PROJECT    Browser project to run (chromium, firefox, webkit, all)
                       Default: chromium
  --headed             Run tests in headed mode (visible browser)
  --grep=PATTERN       Filter tests by title pattern (regex)
  -h, --help           Show this help message

Environment Variables:
  PLAYWRIGHT_BASE_URL   Application URL to test (default: http://localhost:8080)
  PLAYWRIGHT_HTML_OPEN  HTML report behavior (default: never)
  CI                    Set to 'true' for CI environment

Examples:
  run.sh                               # Run all tests in Chromium (headless)
  run.sh --project=firefox             # Run in Firefox
  run.sh --headed                      # Run with visible browser
  run.sh --grep="login"                # Run only login tests
  run.sh --project=all --grep="smoke"  # All browsers, smoke tests only
EOF
}

# Validate project parameter
validate_project() {
    local valid_projects=("chromium" "firefox" "webkit" "all")
    local project_lower
    project_lower=$(echo "${PROJECT}" | tr '[:upper:]' '[:lower:]')

    for valid in "${valid_projects[@]}"; do
        if [[ "${project_lower}" == "${valid}" ]]; then
            PROJECT="${project_lower}"
            return 0
        fi
    done

    error_exit "Invalid project '${PROJECT}'. Valid options: chromium, firefox, webkit, all"
}

# Build Playwright command arguments
build_playwright_args() {
    local args=()

    # Add project selection
    if [[ "${PROJECT}" != "all" ]]; then
        args+=("--project=${PROJECT}")
    fi

    # Add headed mode if requested
    if [[ "${HEADED}" == "true" ]]; then
        args+=("--headed")
    fi

    # Add grep filter if specified
    if [[ -n "${GREP}" ]]; then
        args+=("--grep=${GREP}")
    fi

    echo "${args[*]}"
}

# Main execution
main() {
    parse_arguments "$@"

    # Validate environment
    log_step "ENVIRONMENT" "Validating prerequisites"
    validate_node_environment "18.0" || error_exit "Node.js 18+ is required"
    check_command_exists "npx" "npx is required (part of Node.js installation)"

    # Validate project structure
    log_step "VALIDATION" "Checking project structure"
    cd "${PROJECT_ROOT}"
    validate_project_structure "tests" "playwright.config.js" "package.json" || error_exit "Invalid project structure"

    # Validate project parameter
    validate_project

    # Set environment variables for non-interactive execution
    export PLAYWRIGHT_HTML_OPEN="${PLAYWRIGHT_HTML_OPEN:-never}"
    set_default_env "PLAYWRIGHT_BASE_URL" "http://localhost:8080"

    # Log configuration
    log_step "CONFIG" "Test configuration"
    log_info "Project: ${PROJECT}"
    log_info "Headed mode: ${HEADED}"
    log_info "Grep filter: ${GREP:-<none>}"
    log_info "Base URL: ${PLAYWRIGHT_BASE_URL}"
    log_info "HTML report auto-open: ${PLAYWRIGHT_HTML_OPEN}"

    # Build command arguments
    local playwright_args
    playwright_args=$(build_playwright_args)

    # Execute Playwright tests
    log_step "EXECUTION" "Running Playwright E2E tests"
    log_command "npx playwright test ${playwright_args}"

    # Run tests with proper error handling
    local exit_code=0
    # shellcheck disable=SC2086
    if npx playwright test ${playwright_args}; then
        log_success "All E2E tests passed"
    else
        exit_code=$?
        log_error "E2E tests failed (exit code: ${exit_code})"
    fi

    # Output report location
    log_step "REPORT" "Test report available"
    log_info "HTML Report: ${PROJECT_ROOT}/playwright-report/index.html"
    log_info "To view in browser: npx playwright show-report --port 9323"
    log_info "VS Code Simple Browser URL: http://127.0.0.1:9323"

    exit "${exit_code}"
}

# Run main with all arguments
main "$@"
350
.github/skills/test-e2e-playwright.SKILL.md
vendored
@@ -1,350 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-e2e-playwright"
version: "1.0.0"
description: "Run Playwright E2E tests against the Charon application with browser selection and filtering"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "e2e"
  - "playwright"
  - "integration"
  - "browser"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "node"
    version: ">=18.0"
    optional: false
  - name: "npx"
    version: ">=1.0"
    optional: false
environment_variables:
  - name: "PLAYWRIGHT_BASE_URL"
    description: "Base URL of the Charon application under test"
    default: "http://localhost:8080"
    required: false
  - name: "PLAYWRIGHT_HTML_OPEN"
    description: "Controls HTML report auto-open behavior (set to 'never' for CI/non-interactive)"
    default: "never"
    required: false
  - name: "CI"
    description: "Set to 'true' when running in CI environment"
    default: ""
    required: false
parameters:
  - name: "project"
    type: "string"
    description: "Browser project to run (chromium, firefox, webkit, all)"
    default: "chromium"
    required: false
  - name: "headed"
    type: "boolean"
    description: "Run tests in headed mode (visible browser)"
    default: "false"
    required: false
  - name: "grep"
    type: "string"
    description: "Filter tests by title pattern (regex)"
    default: ""
    required: false
outputs:
  - name: "playwright-report"
    type: "directory"
    description: "HTML test report directory"
    path: "playwright-report/"
  - name: "test-results"
    type: "directory"
    description: "Test artifacts and traces"
    path: "test-results/"
metadata:
  category: "test"
  subcategory: "e2e"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: true
  idempotent: true
---

# Test E2E Playwright

## Overview

Executes Playwright end-to-end tests against the Charon application. This skill supports browser selection, headed mode for debugging, and test filtering by name pattern.

The skill runs non-interactively by default (the HTML report does not auto-open), making it suitable for CI/CD pipelines and automated testing scenarios.

## Prerequisites

- Node.js 18.0 or higher installed and in PATH
- Playwright browsers installed (`npx playwright install`)
- Charon application running (default: `http://localhost:8080`)
- Test files in `tests/` directory

### Quick Start: Ensure E2E Environment is Ready

Before running tests, ensure the Docker E2E environment is running:

```bash
# Start/rebuild E2E Docker container (recommended before testing)
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e

# Or for a complete clean rebuild:
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean --no-cache
```

## Usage

### Basic Usage

Run E2E tests with default settings (Chromium, headless):

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright
```

### Browser Selection

Run tests in a specific browser:

```bash
# Chromium (default)
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=chromium

# Firefox
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=firefox

# WebKit (Safari)
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=webkit

# All browsers
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=all
```

### Headed Mode (Debugging)

Run tests with a visible browser window:

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright --headed
```

### Filter Tests

Run only tests matching a pattern:

```bash
# Run tests with "login" in the title
.github/skills/scripts/skill-runner.sh test-e2e-playwright --grep="login"

# Run tests with "DNS" in the title
.github/skills/scripts/skill-runner.sh test-e2e-playwright --grep="DNS"
```

### Combined Options

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=firefox --headed --grep="dashboard"
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run E2E Tests
  run: .github/skills/scripts/skill-runner.sh test-e2e-playwright
  env:
    PLAYWRIGHT_BASE_URL: http://localhost:8080
    CI: true
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| project | string | No | chromium | Browser project: chromium, firefox, webkit, all |
| headed | boolean | No | false | Run with visible browser window |
| grep | string | No | "" | Filter tests by title pattern (regex) |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| PLAYWRIGHT_BASE_URL | No | http://localhost:8080 | Application URL to test against |
| PLAYWRIGHT_HTML_OPEN | No | never | HTML report auto-open behavior |
| CI | No | "" | Set to "true" for CI environment behavior |
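These defaults rely on standard bash fallback expansion (the execution script uses `${PLAYWRIGHT_HTML_OPEN:-never}` directly; the `set_default_env` helper presumably does something equivalent, though its exact implementation is not shown here). Sketch of the expansion:

```shell
# ${VAR:-default} falls back only when VAR is unset or empty.
unset PLAYWRIGHT_BASE_URL
echo "${PLAYWRIGHT_BASE_URL:-http://localhost:8080}"
PLAYWRIGHT_BASE_URL="http://staging.example:8080"   # hypothetical override
echo "${PLAYWRIGHT_BASE_URL:-http://localhost:8080}"
```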

## Outputs

### Success Exit Code
- **0**: All tests passed

### Error Exit Codes
- **1**: One or more tests failed
- **Non-zero**: Configuration or execution error

### Output Directories
- **playwright-report/**: HTML report with test results and traces
- **test-results/**: Test artifacts, screenshots, and trace files
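A caller can branch on this exit-code contract directly; `run_and_report` below is a hypothetical wrapper sketch, not part of the skill:

```shell
# Hypothetical wrapper: capture the exit code under the
# 0 = pass / non-zero = fail contract described above.
run_and_report() {
  local rc=0
  if "$@"; then
    rc=0
  else
    rc=$?
  fi
  echo "exit=${rc}"
}
run_and_report true    # stands in for a passing skill run
run_and_report false   # stands in for a failing skill run
```

The `if`-based capture keeps the wrapper safe under `set -e`, which the skill scripts themselves enable.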

## Viewing the Report

After test execution, view the HTML report using VS Code Simple Browser:

### Method 1: Start Report Server

```bash
npx playwright show-report --port 9323
```

Then open in VS Code Simple Browser: `http://127.0.0.1:9323`

### Method 2: VS Code Task

Use the VS Code task "Test: E2E Playwright - View Report" to start the report server as a background task, then open `http://127.0.0.1:9323` in Simple Browser.

### Method 3: Direct File Access

Open `playwright-report/index.html` directly in a browser.

## Examples

### Example 1: Quick Smoke Test

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright --grep="smoke"
```

### Example 2: Debug Failing Test

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright --headed --grep="failing-test-name"
```

### Example 3: Cross-Browser Validation

```bash
.github/skills/scripts/skill-runner.sh test-e2e-playwright --project=all
```

## Test Structure

Tests are located in the `tests/` directory and follow Playwright conventions:

```
tests/
├── auth.setup.ts        # Authentication setup (runs first)
├── dashboard.spec.ts    # Dashboard tests
├── dns-records.spec.ts  # DNS management tests
├── login.spec.ts        # Login flow tests
└── ...
```
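Playwright picks up `*.spec.ts` files through its default `testMatch` pattern, while `auth.setup.ts` is typically wired in as a setup-project dependency rather than matched by that glob (an assumption based on common Playwright configurations; this repository's `playwright.config.js` may differ). A sketch of the discovery glob over a hypothetical copy of the layout above:

```shell
# Hypothetical temp-dir copy of the tests/ layout; only *.spec.ts match the glob.
tests=$(mktemp -d)
touch "${tests}/auth.setup.ts" "${tests}/dashboard.spec.ts" "${tests}/login.spec.ts"
ls "${tests}"/*.spec.ts | wc -l
rm -rf "${tests}"
```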

## Error Handling

### Common Errors

#### Error: Target page, context or browser has been closed
**Solution**: Ensure the application is running at the configured base URL. Rebuild if needed:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e
```

#### Error: page.goto: net::ERR_CONNECTION_REFUSED
**Solution**: Start the Charon application before running tests:

```bash
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e
```

#### Error: browserType.launch: Executable doesn't exist
**Solution**: Run `npx playwright install` to install browser binaries

#### Error: Timeout waiting for selector
**Solution**: The application may be slow or in an unexpected state. Try:

```bash
# Rebuild with clean state
.github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean

# Or debug the test to see what's happening
.github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --grep="failing test"
```

#### Error: Authentication state is stale
**Solution**: Remove stored auth and let setup recreate it:

```bash
rm -rf playwright/.auth/user.json
.github/skills/scripts/skill-runner.sh test-e2e-playwright
```

## Troubleshooting Workflow

When E2E tests fail, follow this workflow:

1. **Check container health**:

   ```bash
   docker ps --filter "name=charon-playwright"
   docker logs charon-playwright --tail 50
   ```

2. **Verify the application is accessible**:

   ```bash
   curl -sf http://localhost:8080/api/v1/health
   ```

3. **Rebuild with clean state if needed**:

   ```bash
   .github/skills/scripts/skill-runner.sh docker-rebuild-e2e --clean
   ```

4. **Debug specific failing test**:

   ```bash
   .github/skills/scripts/skill-runner.sh test-e2e-playwright-debug --grep="test name"
   ```

5. **View the HTML report for details**:

   ```bash
   npx playwright show-report --port 9323
   ```
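Steps 1-2 can be folded into a small readiness poll before kicking off a run. This is a hedged sketch: `wait_for_app` is a hypothetical helper, and it assumes the `/api/v1/health` endpoint used in step 2:

```shell
# Hypothetical helper: poll the health endpoint until it answers or we give up.
wait_for_app() {
  local url="$1" tries="$2" i=1
  while [ "${i}" -le "${tries}" ]; do
    if curl -sf "${url}" > /dev/null 2>&1; then
      echo "healthy after ${i} attempt(s)"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after ${tries} attempt(s)"
  return 1
}
```

For example, `wait_for_app http://localhost:8080/api/v1/health 30` before invoking the skill.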

## Key File Locations

| Path | Purpose |
|------|---------|
| `tests/` | All E2E test files |
| `tests/auth.setup.ts` | Authentication setup fixture |
| `playwright.config.js` | Playwright configuration |
| `playwright/.auth/user.json` | Stored authentication state |
| `playwright-report/` | HTML test reports |
| `test-results/` | Test artifacts and traces |
| `.docker/compose/docker-compose.playwright.yml` | E2E Docker compose config |
| `Dockerfile` | Application Docker image |

## Related Skills

- [docker-rebuild-e2e](./docker-rebuild-e2e.SKILL.md) - Rebuild Docker image and restart E2E container
- [test-e2e-playwright-debug](./test-e2e-playwright-debug.SKILL.md) - Debug E2E tests in headed mode
- [test-e2e-playwright-coverage](./test-e2e-playwright-coverage.SKILL.md) - Run E2E tests with coverage
- [test-frontend-unit](./test-frontend-unit.SKILL.md) - Frontend unit tests with Vitest
- [docker-start-dev](./docker-start-dev.SKILL.md) - Start development environment
- [integration-test-all](./integration-test-all.SKILL.md) - Run all integration tests

## Notes

- **Authentication**: Tests use stored auth state from `playwright/.auth/user.json`
- **Parallelization**: Tests run in parallel locally, sequentially in CI
- **Retries**: CI automatically retries failed tests twice
- **Traces**: Traces are collected on first retry for debugging
- **Report**: HTML report is generated at `playwright-report/index.html`

---

**Last Updated**: 2026-01-15
**Maintained by**: Charon Project Team
**Source**: `tests/` directory
@@ -1,52 +0,0 @@
#!/usr/bin/env bash
# Test Frontend Coverage - Execution Script
#
# This script wraps the legacy frontend-test-coverage.sh script while providing
# the Agent Skills interface and logging.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Helper scripts are in .github/skills/scripts/
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script (skills/skill-name-scripts/run.sh -> project root)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_node_environment "18.0" || error_exit "Node.js 18.0+ is required"
validate_python_environment "3.8" || error_exit "Python 3.8+ is required"

# Validate project structure
log_step "VALIDATION" "Checking project structure"
cd "${PROJECT_ROOT}"
validate_project_structure "frontend" "scripts/frontend-test-coverage.sh" || error_exit "Invalid project structure"

# Set default environment variables
set_default_env "CHARON_MIN_COVERAGE" "85"

# Execute the legacy script
log_step "EXECUTION" "Running frontend tests with coverage"
log_info "Minimum coverage: ${CHARON_MIN_COVERAGE}%"

LEGACY_SCRIPT="${PROJECT_ROOT}/scripts/frontend-test-coverage.sh"
check_file_exists "${LEGACY_SCRIPT}"

# Execute with proper error handling
if "${LEGACY_SCRIPT}" "$@"; then
    log_success "Frontend coverage tests passed"
    exit 0
else
    exit_code=$?
    log_error "Frontend coverage tests failed (exit code: ${exit_code})"
    exit "${exit_code}"
fi
197
.github/skills/test-frontend-coverage.SKILL.md
vendored
@@ -1,197 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-frontend-coverage"
version: "1.0.0"
description: "Run frontend tests with coverage analysis and threshold validation (minimum 85%)"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "coverage"
  - "frontend"
  - "vitest"
  - "validation"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "node"
    version: ">=18.0"
    optional: false
  - name: "npm"
    version: ">=9.0"
    optional: false
  - name: "python3"
    version: ">=3.8"
    optional: false
environment_variables:
  - name: "CHARON_MIN_COVERAGE"
    description: "Minimum coverage percentage required (overrides default)"
    default: "85"
    required: false
  - name: "CPM_MIN_COVERAGE"
    description: "Alternative name for minimum coverage threshold (legacy)"
    default: "85"
    required: false
parameters:
  - name: "verbose"
    type: "boolean"
    description: "Enable verbose test output"
    default: "false"
    required: false
outputs:
  - name: "coverage-summary.json"
    type: "file"
    description: "JSON coverage summary generated by Vitest"
    path: "frontend/coverage/coverage-summary.json"
  - name: "coverage_summary"
    type: "stdout"
    description: "Summary of coverage statistics and validation result"
metadata:
  category: "test"
  subcategory: "coverage"
  execution_time: "medium"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---

# Test Frontend Coverage

## Overview

Executes the frontend test suite using Vitest with coverage enabled, generates a JSON coverage summary, and validates that the total statements coverage meets or exceeds the configured threshold (default: 85%).

This skill is designed for continuous integration and pre-commit hooks to ensure code quality standards are maintained.

## Prerequisites

- Node.js 18.0 or higher installed and in PATH
- npm 9.0 or higher installed and in PATH
- Python 3.8 or higher installed and in PATH
- Frontend dependencies installed (`cd frontend && npm install`)
- Write permissions in the `frontend/coverage/` directory

## Usage

### Basic Usage

Run with default settings (85% minimum coverage):

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh test-frontend-coverage
```

### Custom Coverage Threshold

Set a custom minimum coverage percentage:

```bash
export CHARON_MIN_COVERAGE=90
.github/skills/scripts/skill-runner.sh test-frontend-coverage
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run Frontend Tests with Coverage
  run: .github/skills/scripts/skill-runner.sh test-frontend-coverage
  env:
    CHARON_MIN_COVERAGE: 85
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| verbose | boolean | No | false | Enable verbose test output |

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| CHARON_MIN_COVERAGE | No | 85 | Minimum coverage percentage required for success |
| CPM_MIN_COVERAGE | No | 85 | Legacy name for minimum coverage (fallback) |

## Outputs

### Success Exit Code

- **0**: All tests passed and coverage meets the threshold

### Error Exit Codes

- **1**: Coverage below threshold or coverage file generation failed
- **Non-zero**: Tests failed or another error occurred

### Output Files

- **frontend/coverage/coverage-summary.json**: Vitest coverage summary (JSON format)
- **frontend/coverage/index.html**: HTML coverage report (viewable in a browser)

### Console Output

Example output:

```
Computed frontend coverage: 87.5% (minimum required 85%)
Frontend coverage requirement met
```

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh test-frontend-coverage
```

### Example 2: Higher Coverage Threshold

```bash
export CHARON_MIN_COVERAGE=90
.github/skills/scripts/skill-runner.sh test-frontend-coverage
```

### Example 3: View HTML Coverage Report

```bash
.github/skills/scripts/skill-runner.sh test-frontend-coverage
open frontend/coverage/index.html      # macOS
xdg-open frontend/coverage/index.html  # Linux
```

## Error Handling

### Common Errors

#### Error: Coverage summary file not found

**Solution**: Check that Vitest is configured with `--coverage` and `--reporter=json-summary`

#### Error: Frontend coverage X% is below required Y%

**Solution**: Add tests for uncovered components or adjust the threshold

#### Error: npm ci failed

**Solution**: Remove `node_modules/` and `package-lock.json`, then reinstall with `npm install` (which regenerates the lockfile)

## Related Skills

- test-frontend-unit - Fast unit tests without coverage
- test-backend-coverage - Backend Go coverage tests
- utility-cache-clear-go - Clear build caches

## Notes

- **Vitest Configuration**: Uses the istanbul coverage provider for JSON summary reports
- **Coverage Directory**: Coverage artifacts are written to `frontend/coverage/`
- **Python Dependency**: Uses Python for decimal-precision coverage comparison
- **Idempotency**: Safe to run multiple times; cleans up old coverage files
- **CI Mode**: Runs `npm ci` in CI environments to ensure clean installs
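The decimal-precision comparison noted above can be sketched as a small helper. The `total.statements.pct` layout is Vitest's `json-summary` format; the function name and file layout are illustrative, not taken verbatim from `scripts/frontend-test-coverage.sh`:

```bash
# check_frontend_coverage <coverage-summary.json> <minimum-pct>
# Prints the computed percentage; exits 0 if the threshold is met, 1 otherwise.
check_frontend_coverage() {
    python3 - "$1" "$2" <<'PY'
import json, sys
from decimal import Decimal

# Decimal avoids float artifacts (e.g. 84.9999...) right at the threshold
pct = Decimal(str(json.load(open(sys.argv[1]))["total"]["statements"]["pct"]))
minimum = Decimal(sys.argv[2])
print(f"Computed frontend coverage: {pct}% (minimum required {minimum}%)")
sys.exit(0 if pct >= minimum else 1)
PY
}
```

With an 87.5% summary, `check_frontend_coverage frontend/coverage/coverage-summary.json 85` succeeds, while a threshold of 90 makes it return non-zero.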

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: `scripts/frontend-test-coverage.sh`
47
.github/skills/test-frontend-unit-scripts/run.sh
vendored
@@ -1,47 +0,0 @@
#!/usr/bin/env bash
# Test Frontend Unit - Execution Script
#
# This script runs frontend unit tests without coverage analysis,
# providing fast test execution for development workflows.

set -euo pipefail

# Source helper scripts
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# Helper scripts are in .github/skills/scripts/
SKILLS_SCRIPTS_DIR="$(cd "${SCRIPT_DIR}/../scripts" && pwd)"

# shellcheck source=../scripts/_logging_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_logging_helpers.sh"
# shellcheck source=../scripts/_error_handling_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_error_handling_helpers.sh"
# shellcheck source=../scripts/_environment_helpers.sh
source "${SKILLS_SCRIPTS_DIR}/_environment_helpers.sh"

# Project root is 3 levels up from this script (skills/skill-name-scripts/run.sh -> project root)
PROJECT_ROOT="$(cd "${SCRIPT_DIR}/../../.." && pwd)"

# Validate environment
log_step "ENVIRONMENT" "Validating prerequisites"
validate_node_environment "18.0" || error_exit "Node.js 18.0+ is required"

# Validate project structure
log_step "VALIDATION" "Checking project structure"
cd "${PROJECT_ROOT}"
validate_project_structure "frontend" || error_exit "Invalid project structure"

# Change to frontend directory
cd "${PROJECT_ROOT}/frontend"

# Execute tests
log_step "EXECUTION" "Running frontend unit tests"

# Run npm test with all passed arguments
if npm run test -- "$@"; then
    log_success "Frontend unit tests passed"
    exit 0
else
    exit_code=$?
    log_error "Frontend unit tests failed (exit code: ${exit_code})"
    exit "${exit_code}"
fi
198
.github/skills/test-frontend-unit.SKILL.md
vendored
@@ -1,198 +0,0 @@
---
# agentskills.io specification v1.0
name: "test-frontend-unit"
version: "1.0.0"
description: "Run frontend unit tests without coverage analysis (fast execution)"
author: "Charon Project"
license: "MIT"
tags:
  - "testing"
  - "unit-tests"
  - "frontend"
  - "vitest"
  - "fast"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "node"
    version: ">=18.0"
    optional: false
  - name: "npm"
    version: ">=9.0"
    optional: false
environment_variables: []
parameters:
  - name: "watch"
    type: "boolean"
    description: "Run tests in watch mode"
    default: "false"
    required: false
  - name: "filter"
    type: "string"
    description: "Filter tests by name pattern"
    default: ""
    required: false
outputs:
  - name: "test_results"
    type: "stdout"
    description: "Vitest output showing pass/fail status"
metadata:
  category: "test"
  subcategory: "unit"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---

# Test Frontend Unit

## Overview

Executes the frontend unit test suite using Vitest without coverage analysis. This skill provides fast test execution for quick feedback during development, making it ideal for pre-commit checks and rapid iteration.

Unlike test-frontend-coverage, this skill does not generate coverage reports or enforce coverage thresholds, focusing purely on test pass/fail status.

## Prerequisites

- Node.js 18.0 or higher installed and in PATH
- npm 9.0 or higher installed and in PATH
- Frontend dependencies installed (`cd frontend && npm install`)

## Usage

### Basic Usage

Run all frontend unit tests:

```bash
cd /path/to/charon
.github/skills/scripts/skill-runner.sh test-frontend-unit
```

### Watch Mode

Run tests in watch mode for continuous testing:

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- --watch
```

### Filter Tests

Run tests whose names match a specific pattern (Vitest's `-t`/`--testNamePattern` flag):

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- -t "Button"
```

### CI/CD Integration

For use in GitHub Actions or other CI/CD pipelines:

```yaml
- name: Run Frontend Unit Tests
  run: .github/skills/scripts/skill-runner.sh test-frontend-unit
```

## Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| watch | boolean | No | false | Run tests in watch mode |
| filter | string | No | "" | Filter tests by name pattern |

## Environment Variables

No environment variables are required for this skill.

## Outputs

### Success Exit Code

- **0**: All tests passed

### Error Exit Codes

- **Non-zero**: One or more tests failed

### Console Output

Example output:

```
✓ src/components/Button.test.tsx (3)
✓ src/utils/helpers.test.ts (5)
✓ src/hooks/useAuth.test.ts (4)

Test Files  3 passed (3)
     Tests  12 passed (12)
```

## Examples

### Example 1: Basic Execution

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit
```

### Example 2: Watch Mode for TDD

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- --watch
```

### Example 3: Test Specific File

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- Button.test.tsx
```

### Example 4: UI Mode (Interactive)

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- --ui
```

### Example 5: Reporter Configuration

```bash
.github/skills/scripts/skill-runner.sh test-frontend-unit -- --reporter=verbose
```

## Error Handling

### Common Errors

#### Error: Cannot find module

**Solution**: Run `npm install` to ensure all dependencies are installed

#### Error: Test timeout

**Solution**: Increase the timeout in `vitest.config.ts` or fix hanging async tests

#### Error: Unexpected token

**Solution**: Check for syntax errors in test files

## Related Skills

- test-frontend-coverage - Run tests with coverage analysis (slower)
- test-backend-unit - Backend Go unit tests
- build-check-go - Verify builds without running tests

## Notes

- **Execution Time**: Fast execution (~3-5 seconds typical)
- **No Coverage**: Does not generate coverage reports
- **Vitest Features**: Full access to Vitest CLI options via arguments
- **Idempotency**: Safe to run multiple times
- **Caching**: Benefits from Vitest's smart caching
- **Suitable For**: Pre-commit hooks, quick feedback, TDD workflows
- **Watch Mode**: Available for interactive development

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project Team
**Source**: Inline task command
22
.github/skills/utility-bump-beta-scripts/run.sh
vendored
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Utility: Bump Beta Version - Execution Script
# ==============================================================================
# This script increments the beta version number across all project files.
# It wraps the original bump_beta.sh script.
#
# Usage: ./run.sh
# Exit codes: 0 = success, non-zero = failure
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Execute the bump beta script
exec scripts/bump_beta.sh "$@"
201
.github/skills/utility-bump-beta.SKILL.md
vendored
@@ -1,201 +0,0 @@
---
name: "utility-bump-beta"
version: "1.0.0"
description: "Increments beta version number across all project files for pre-release versioning"
author: "Charon Project"
license: "MIT"
tags:
  - "utility"
  - "versioning"
  - "release"
  - "automation"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "git"
    version: ">=2.0"
    optional: false
  - name: "sed"
    version: ">=4.0"
    optional: false
environment_variables: []
parameters: []
outputs:
  - name: "new_version"
    type: "string"
    description: "The new beta version number"
    path: ".version"
metadata:
  category: "utility"
  subcategory: "versioning"
  execution_time: "short"
  risk_level: "medium"
  ci_cd_safe: false
  requires_network: false
  idempotent: false
---

# Utility: Bump Beta Version

## Overview

Automates beta version bumping across all project files. This skill increments version numbers following semantic versioning conventions for beta releases, updating multiple files in sync to keep them consistent.

## Prerequisites

- Git repository initialized
- Write access to project files
- Clean working directory (recommended)

## Usage

### Basic Usage

```bash
.github/skills/utility-bump-beta-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh utility-bump-beta
```

### Via VS Code Task

Use the task: **Utility: Bump Beta Version**

## Parameters

This skill accepts no parameters. The bumping rule is chosen automatically from the current version's format.

## Environment Variables

This skill requires no environment variables.

## Outputs

- **Success Exit Code**: 0
- **Error Exit Codes**: Non-zero on failure
- **Modified Files**:
  - `.version`
  - `backend/internal/version/version.go`
  - `frontend/package.json`
  - `backend/package.json` (if exists)
- **Git Tag**: `v{NEW_VERSION}` (if user confirms)

### Output Example

```
Starting Beta Version Bump...
Current Version: 0.3.0-beta.2
New Version: 0.3.0-beta.3
Updated .version
Updated backend/internal/version/version.go
Updated frontend/package.json
Updated backend/package.json
Do you want to commit and tag this version? (y/n) y
Committed and tagged v0.3.0-beta.3
Remember to push: git push origin feature/beta-release --tags
```

## Version Bumping Logic

### Current Version is Beta (x.y.z-beta.N)

Increments the beta number:

- `0.3.0-beta.2` → `0.3.0-beta.3`
- `1.0.0-beta.5` → `1.0.0-beta.6`

### Current Version is Plain Semver (x.y.z)

Bumps the minor version and starts beta.1:

- `0.3.0` → `0.4.0-beta.1`
- `1.2.0` → `1.3.0-beta.1`

### Current Version is Alpha or Unrecognized

Defaults to a safe fallback:

- `0.3.0-alpha` → `0.3.0-beta.1`
- `invalid-version` → `0.3.0-beta.1`
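Sketched in bash, the three rules above look roughly like this. This is illustrative only: the fallback version is hardcoded to mirror the documented examples, and `scripts/bump_beta.sh` may implement the logic differently:

```bash
# next_beta <current-version>  ->  prints the bumped version on stdout
next_beta() {
    local v="$1"
    if [[ "$v" =~ ^([0-9]+\.[0-9]+\.[0-9]+)-beta\.([0-9]+)$ ]]; then
        # x.y.z-beta.N -> x.y.z-beta.(N+1)
        echo "${BASH_REMATCH[1]}-beta.$((BASH_REMATCH[2] + 1))"
    elif [[ "$v" =~ ^([0-9]+)\.([0-9]+)\.[0-9]+$ ]]; then
        # plain x.y.z -> bump the minor version, start beta.1
        echo "${BASH_REMATCH[1]}.$((BASH_REMATCH[2] + 1)).0-beta.1"
    else
        # alpha or unrecognized -> safe fallback (hardcoded here for illustration)
        echo "0.3.0-beta.1"
    fi
}
```

For example, `next_beta 0.3.0-beta.2` prints `0.3.0-beta.3`, and `next_beta 1.2.0` prints `1.3.0-beta.1`.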

## Files Updated

1. **`.version`**: Project root version file
2. **`backend/internal/version/version.go`**: Go version constant
3. **`frontend/package.json`**: Frontend package version
4. **`backend/package.json`**: Backend package version (if exists)

All files are updated with consistent version strings using `sed` regex replacement.
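A hedged sketch of that kind of `sed` replacement, using a throwaway file in place of `frontend/package.json`; the actual expressions in `scripts/bump_beta.sh` may differ:

```bash
# demo file standing in for frontend/package.json
printf '{\n  "version": "0.3.0-beta.2"\n}\n' > demo-package.json

NEW_VERSION="0.3.0-beta.3"   # example value

# Rewrite the "version" field in place; -i.bak keeps a backup and behaves
# the same on both GNU and BSD sed.
sed -i.bak -E "s/\"version\": \"[^\"]+\"/\"version\": \"${NEW_VERSION}\"/" demo-package.json
grep '"version"' demo-package.json
```

After running, `demo-package.json` carries the new version string and `demo-package.json.bak` holds the original.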
## Examples

### Example 1: Bump Beta Before Release

```bash
# Bump version for next beta iteration
.github/skills/utility-bump-beta-scripts/run.sh

# Confirm when prompted to commit and tag
# Then push to remote
git push origin feature/beta-release --tags
```

### Example 2: Bump Without Committing

```bash
# Make version changes but skip git operations
.github/skills/utility-bump-beta-scripts/run.sh
# Answer 'n' when prompted about committing
```

## Interactive Confirmation

After updating files, the script prompts:

```
Do you want to commit and tag this version? (y/n)
```

- **Yes (y)**: Creates the git commit and tag automatically
- **No (n)**: Leaves changes staged for manual review

## Error Handling

- Validates that the `.version` file exists and is readable
- Uses safe defaults for unrecognized version formats
- Does not modify VERSION.md guide content (manual update recommended)
- Skips `backend/package.json` if the file doesn't exist

## Post-Execution Steps

After running this skill:

1. **Review Changes**: `git diff`
2. **Run Tests**: Ensure the version change doesn't break builds
3. **Push Tags**: `git push origin <branch> --tags`
4. **Update CHANGELOG.md**: Manually document changes for this version
5. **Verify CI/CD**: Check that automated builds use the new version

## Related Skills

- [utility-version-check](./utility-version-check.SKILL.md) - Validate version matches tags
- [build-check-go](../build-check-go.SKILL.md) - Verify build after version bump

## Notes

- **Not Idempotent**: Running multiple times increments the version each time
- **Risk Level: Medium**: Modifies multiple critical files
- **Git State**: A clean working directory is recommended before running
- **Manual Review**: Always review version changes before pushing
- **VERSION.md**: Update manually, as it contains documentation, not just the version

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `scripts/bump_beta.sh`
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Utility: Clear Go Cache - Execution Script
# ==============================================================================
# This script clears Go build, test, and module caches, plus gopls cache.
# It wraps the original clear-go-cache.sh script.
#
# Usage: ./run.sh
# Exit codes: 0 = success, 1 = failure
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Execute the cache clear script
exec scripts/clear-go-cache.sh "$@"
181
.github/skills/utility-clear-go-cache.SKILL.md
vendored
@@ -1,181 +0,0 @@
---
name: "utility-clear-go-cache"
version: "1.0.0"
description: "Clears Go build, test, and module caches along with gopls cache for troubleshooting"
author: "Charon Project"
license: "MIT"
tags:
  - "utility"
  - "golang"
  - "cache"
  - "troubleshooting"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "go"
    version: ">=1.23"
    optional: false
environment_variables:
  - name: "XDG_CACHE_HOME"
    description: "XDG cache directory (defaults to $HOME/.cache)"
    default: "$HOME/.cache"
    required: false
parameters: []
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, 1 on failure"
metadata:
  category: "utility"
  subcategory: "cache-management"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: false
  requires_network: true
  idempotent: true
---

# Utility: Clear Go Cache

## Overview

Clears all Go-related caches, including the build cache, test cache, module cache, and gopls (Go language server) cache. This is useful for troubleshooting build issues, resolving stale dependency problems, or freeing disk space.

## Prerequisites

- Go toolchain installed (Go 1.23+)
- Write access to cache directories
- Internet connection (for re-downloading modules)

## Usage

### Basic Usage

```bash
.github/skills/utility-clear-go-cache-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh utility-clear-go-cache
```

### Via VS Code Task

Use the task: **Utility: Clear Go Cache**

## Parameters

This skill accepts no parameters.

## Environment Variables

| Variable | Required | Default | Description |
|----------|----------|---------|-------------|
| XDG_CACHE_HOME | No | $HOME/.cache | XDG cache directory location |

## Outputs

- **Success Exit Code**: 0
- **Error Exit Codes**: 1 - Cache clearing failed
- **Console Output**: Progress messages and next steps

### Output Example

```
Clearing Go build and module caches...
Clearing gopls cache...
Re-downloading modules...
Caches cleared and modules re-downloaded.
Next steps:
  - Restart your editor's Go language server (gopls)
    - In VS Code: Command Palette -> 'Go: Restart Language Server'
  - Verify the toolchain:
      $ go version
      $ gopls version
```

## Examples

### Example 1: Troubleshoot Build Issues

```bash
# Clear caches when experiencing build errors
.github/skills/utility-clear-go-cache-scripts/run.sh

# Restart VS Code's Go language server
# Command Palette: "Go: Restart Language Server"
```

### Example 2: Clean Development Environment

```bash
# Clear caches before a major Go version upgrade
.github/skills/utility-clear-go-cache-scripts/run.sh

# Verify installation
go version
gopls version
```

## What Gets Cleared

This skill clears the following:

1. **Go Build Cache**: `go clean -cache`
   - Compiled object files
   - Build artifacts

2. **Go Test Cache**: `go clean -testcache`
   - Cached test results

3. **Go Module Cache**: `go clean -modcache`
   - Downloaded module sources
   - Module checksums

4. **gopls Cache**: Removes `$XDG_CACHE_HOME/gopls` or `$HOME/.cache/gopls`
   - Language server indexes
   - Cached analysis results

5. **Re-downloads**: `go mod download`
   - Fetches all dependencies fresh
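Put together, the five steps above amount to roughly the following; this is a sketch of the documented behavior, not the verbatim contents of `scripts/clear-go-cache.sh`:

```bash
clear_go_caches() {
    # Steps 1-3: build, test, and module caches; `|| true` tolerates
    # caches that do not exist, as the error-handling section notes.
    go clean -cache     || true
    go clean -testcache || true
    go clean -modcache  || true

    # Step 4: gopls cache under XDG_CACHE_HOME (default ~/.cache)
    rm -rf "${XDG_CACHE_HOME:-$HOME/.cache}/gopls"

    # Step 5: fetch all dependencies fresh (requires network access)
    go mod download
}
```

Run it from the module directory (here, `backend/`) so `go mod download` can find `go.mod`.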
## When to Use This Skill

Use this skill when experiencing:

- Build failures after dependency updates
- gopls crashes or incorrect diagnostics
- Module checksum mismatches
- Stale test cache results
- Disk space issues related to Go caches
- IDE reporting incorrect errors

## Error Handling

- All cache clearing operations use `|| true` to continue even if a cache doesn't exist
- Module re-download requires network access
- Exits with an error if the `backend/` directory is not found

## Related Skills

- [build-check-go](../build-check-go.SKILL.md) - Verify Go build after cache clear
- [test-backend-unit](./test-backend-unit.SKILL.md) - Run tests after cache clear

## Notes

- **Warning**: This operation re-downloads all Go modules (may be slow on a poor network)
- Not CI/CD safe due to the network dependency and destructive nature
- Requires a manual IDE restart after execution
- Safe to run multiple times (idempotent)
- Consider running this before major Go version upgrades

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `scripts/clear-go-cache.sh`
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Utility: Database Recovery - Execution Script
# ==============================================================================
# This script performs SQLite database integrity checks and recovery.
# It wraps the original db-recovery.sh script.
#
# Usage: ./run.sh [--force]
# Exit codes: 0 = success, 1 = failure
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Execute the database recovery script
exec scripts/db-recovery.sh "$@"
299
.github/skills/utility-db-recovery.SKILL.md
vendored
@@ -1,299 +0,0 @@
---
name: "utility-db-recovery"
version: "1.0.0"
description: "Performs SQLite database integrity checks and recovery operations for Charon database"
author: "Charon Project"
license: "MIT"
tags:
  - "utility"
  - "database"
  - "recovery"
  - "sqlite"
  - "backup"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "sqlite3"
    version: ">=3.0"
    optional: false
environment_variables: []
parameters:
  - name: "--force"
    type: "flag"
    description: "Skip confirmation prompts"
    default: "false"
    required: false
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 on success, 1 on failure"
  - name: "backup_file"
    type: "file"
    description: "Timestamped backup of database"
    path: "backend/data/backups/charon_backup_*.db"
metadata:
  category: "utility"
  subcategory: "database"
  execution_time: "medium"
  risk_level: "high"
  ci_cd_safe: false
  requires_network: false
  idempotent: false
---

# Utility: Database Recovery

## Overview

Performs comprehensive SQLite database integrity checks and recovery operations for the Charon database. This skill can detect corruption, create backups, and attempt automatic recovery using SQLite's `.dump` and rebuild strategy. It is critical for maintaining database health and recovering from corruption.
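The `.dump`-and-rebuild strategy mentioned above reduces to a few `sqlite3` invocations. This sketch omits the backups, prompting, and Docker/local detection the real script performs, and the function name is illustrative:

```bash
# recover_sqlite <source.db> <recovered.db>
recover_sqlite() {
    local src="$1" dst="$2"
    # PRAGMA integrity_check prints "ok" for a healthy database
    if [ "$(sqlite3 "$src" 'PRAGMA integrity_check;')" = "ok" ]; then
        echo "Database healthy"
        return 0
    fi
    # Dump whatever is still readable to SQL, then rebuild a fresh file.
    sqlite3 "$src" .dump > "${dst}.sql"
    sqlite3 "$dst" < "${dst}.sql"
}
```

On a healthy database the function short-circuits after the integrity check; on a corrupt one, the rebuilt file contains every row the dump could still read.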
## Prerequisites
|
||||
|
||||
- `sqlite3` command-line tool installed
|
||||
- Database file exists at expected location
|
||||
- Write permissions for backup directory
|
||||
- Sufficient disk space for backups and recovery
|
||||
|
||||
## Usage
|
||||
|
||||
### Basic Usage (Interactive)
|
||||
|
||||
```bash
|
||||
.github/skills/utility-db-recovery-scripts/run.sh
|
||||
```
|
||||
|
||||
### Force Mode (Non-Interactive)
|
||||
|
||||
```bash
|
||||
.github/skills/utility-db-recovery-scripts/run.sh --force
|
||||
```
|
||||
|
||||
### Via Skill Runner
|
||||
|
||||
```bash
|
||||
.github/skills/scripts/skill-runner.sh utility-db-recovery [--force]
|
||||
```
|
||||
|
||||
### Via VS Code Task
|
||||
|
||||
Use the task: **Utility: Database Recovery**
|
||||
|
||||
## Parameters
|
||||
|
||||
| Parameter | Type | Required | Default | Description |
|
||||
|-----------|------|----------|---------|-------------|
|
||||
| --force | flag | No | false | Skip confirmation prompts |
|
||||
| -f | flag | No | false | Alias for --force |
|
||||
|
||||
## Environment Variables
|
||||
|
||||
This skill requires no environment variables. It auto-detects Docker vs local environment.
|
||||
|
||||
## Outputs
|
||||
|
||||
- **Success Exit Code**: 0 - Database healthy or recovered
|
||||
- **Error Exit Codes**: 1 - Recovery failed or prerequisites missing
|
||||
- **Backup Files**: `backend/data/backups/charon_backup_YYYYMMDD_HHMMSS.db`
|
||||
- **Dump Files**: `backend/data/backups/charon_dump_YYYYMMDD_HHMMSS.sql` (if recovery attempted)
|
||||
- **Recovered DB**: `backend/data/backups/charon_recovered_YYYYMMDD_HHMMSS.db` (temporary)

### Success Output Example (Healthy Database)

```
==============================================
Charon Database Recovery Tool
==============================================

[INFO] sqlite3 found: 3.40.1
[INFO] Running in local development environment
[INFO] Database path: backend/data/charon.db
[INFO] Created backup directory: backend/data/backups
[INFO] Creating backup: backend/data/backups/charon_backup_20251220_143022.db
[SUCCESS] Backup created successfully

==============================================
Integrity Check Results
==============================================
[INFO] Running SQLite integrity check...
ok
[SUCCESS] Database integrity check passed!
[INFO] WAL mode already enabled
[INFO] Cleaning up old backups (keeping last 10)...

==============================================
Summary
==============================================
[SUCCESS] Database is healthy
[INFO] Backup stored at: backend/data/backups/charon_backup_20251220_143022.db
```

### Recovery Output Example (Corrupted Database)

```
==============================================
Integrity Check Results
==============================================
[INFO] Running SQLite integrity check...
*** in database main ***
Page 15: btreeInitPage() returns error code 11
[ERROR] Database integrity check FAILED

WARNING: Database corruption detected!
This script will attempt to recover the database.
A backup has already been created at: backend/data/backups/charon_backup_20251220_143022.db

Continue with recovery? (y/N): y

==============================================
Recovery Process
==============================================
[INFO] Attempting database recovery...
[INFO] Exporting database via .dump command...
[SUCCESS] Database dump created: backend/data/backups/charon_dump_20251220_143022.sql
[INFO] Creating new database from dump...
[SUCCESS] Recovered database created: backend/data/backups/charon_recovered_20251220_143022.db
[INFO] Verifying recovered database integrity...
[SUCCESS] Recovered database passed integrity check
[INFO] Replacing original database with recovered version...
[SUCCESS] Database replaced successfully
[INFO] Enabling WAL (Write-Ahead Logging) mode...
[SUCCESS] WAL mode enabled

==============================================
Summary
==============================================
[SUCCESS] Database recovery completed successfully!
[INFO] Original backup: backend/data/backups/charon_backup_20251220_143022.db
[INFO] Please restart the Charon application
```

## Environment Detection

The skill automatically detects whether it's running in:

1. **Docker Environment**: Database at `/app/data/charon.db`
2. **Local Development**: Database at `backend/data/charon.db`

Backup locations adjust accordingly.
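
The detection can be sketched as follows. This is a simplified illustration, not the actual `run.sh`; the `root` parameter is an assumption added purely to make the sketch testable.

```bash
#!/usr/bin/env bash
# Sketch: choose the database path based on environment.
# Assumption: a Docker image that ships its data under /app/data.
detect_db_path() {
  local root="${1:-/}"   # overridable root, hypothetical, useful for testing
  if [ -d "${root%/}/app/data" ]; then
    echo "${root%/}/app/data/charon.db"   # Docker environment
  else
    echo "backend/data/charon.db"         # local development
  fi
}
```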

## Recovery Process

When corruption is detected, the recovery process:

1. **Creates Backup**: Timestamped copy of current database (including WAL/SHM)
2. **Exports Data**: Uses `.dump` command to export SQL (works with partial corruption)
3. **Creates New DB**: Builds fresh database from dump
4. **Verifies Integrity**: Runs integrity check on recovered database
5. **Replaces Original**: Moves recovered database to original location
6. **Enables WAL Mode**: Configures Write-Ahead Logging for durability
7. **Cleanup**: Removes old backups (keeps last 10)
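
Steps 2-4 of the dump-and-rebuild strategy can be sketched as below. This is a minimal illustration, not the actual `run.sh`; `recover_db` is a hypothetical helper, and paths and error handling are reduced to the essentials.

```bash
#!/usr/bin/env bash
# Sketch of steps 2-4: dump-and-rebuild recovery (hypothetical helper).
recover_db() {
  local db="$1" dump="$2" recovered="$3"

  # Step 2: export whatever SQL can still be read from the damaged file.
  sqlite3 "$db" ".dump" > "$dump" || return 1

  # Step 3: rebuild a fresh database from the dump.
  sqlite3 "$recovered" < "$dump" || return 1

  # Step 4: only accept the rebuilt file if it passes the integrity check.
  [ "$(sqlite3 "$recovered" "PRAGMA integrity_check;")" = "ok" ]
}
```

Only after step 4 succeeds would the recovered file replace the original (step 5).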

## When to Use This Skill

Use this skill when:
- The application fails to start with database errors
- SQLite reports "database disk image is malformed"
- Random crashes or data inconsistencies occur
- After an unclean shutdown (power loss, `kill -9`)
- Before major database migrations
- As part of a regular maintenance schedule

## Backup Management

- **Automatic Backups**: Created before any recovery operation
- **Retention**: Keeps last 10 backups automatically
- **Includes WAL/SHM**: Backs up Write-Ahead Log files if present
- **Timestamped**: Format `charon_backup_YYYYMMDD_HHMMSS.db`
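
The backup scheme above can be sketched as follows. This is illustrative only; `backup_db` and its arguments are hypothetical names, not the real script's interface.

```bash
#!/usr/bin/env bash
# Sketch: timestamped backup including WAL/SHM sidecars, with retention.
backup_db() {
  local db="$1" backup_dir="$2"
  local stamp backup
  stamp=$(date +%Y%m%d_%H%M%S)
  backup="$backup_dir/charon_backup_${stamp}.db"

  mkdir -p "$backup_dir"
  cp "$db" "$backup"
  # Copy WAL/SHM files if present, so the backup is self-consistent.
  if [ -f "${db}-wal" ]; then cp "${db}-wal" "${backup}-wal"; fi
  if [ -f "${db}-shm" ]; then cp "${db}-shm" "${backup}-shm"; fi

  # Retention: keep only the 10 newest backups.
  ls -1t "$backup_dir"/charon_backup_*.db 2>/dev/null | tail -n +11 | xargs -r rm -f
  echo "$backup"
}
```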

## WAL Mode

The skill ensures Write-Ahead Logging (WAL) is enabled:
- **Benefits**: Better concurrency, atomic commits, crash resistance
- **Trade-offs**: Multiple files (db, wal, shm) instead of single file
- **Recommended**: For all production deployments
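
Enabling and verifying WAL can be sketched as below; the pragma echoes the journal mode now in effect, so `wal` confirms the switch. `enable_wal` is a hypothetical helper, not the real script.

```bash
#!/usr/bin/env bash
# Sketch: switch the database to WAL and confirm the mode took effect.
enable_wal() {
  local db="$1" mode
  mode=$(sqlite3 "$db" "PRAGMA journal_mode=WAL;")
  [ "$mode" = "wal" ]   # sqlite3 prints the resulting journal mode
}
```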

## Examples

### Example 1: Regular Health Check

```bash
# Run integrity check (creates backup even if healthy)
.github/skills/utility-db-recovery-scripts/run.sh
```

### Example 2: Force Recovery Without Prompts

```bash
# Useful for automation/scripts
.github/skills/utility-db-recovery-scripts/run.sh --force
```

### Example 3: Docker Container Recovery

```bash
# Run inside Docker container
docker exec -it charon-app bash
/app/.github/skills/utility-db-recovery-scripts/run.sh --force
```

## Error Handling

- **No sqlite3**: Exits with installation instructions
- **Database not found**: Exits with a clear error message
- **Dump fails**: Recovery aborted, backup preserved
- **Recovered DB fails integrity**: Original backup preserved
- **Insufficient disk space**: Operations fail safely

## Post-Recovery Steps

After successful recovery:

1. **Restart Application**: `docker compose restart` or restart the process
2. **Verify Functionality**: Test critical features
3. **Monitor Logs**: Watch for any residual issues
4. **Review Backup**: Keep the backup until stability is confirmed
5. **Investigate Root Cause**: Determine what caused the corruption

## Related Skills

- [docker-start-dev](./docker-start-dev.SKILL.md) - Restart containers after recovery
- [docker-stop-dev](./docker-stop-dev.SKILL.md) - Stop containers before recovery

## Notes

- **High Risk**: Destructive operation; always creates a backup first
- **Not CI/CD Safe**: Requires user interaction (unless --force)
- **Not Idempotent**: Each run creates a new backup
- **Manual Intervention**: Some corruption may require manual SQL fixes
- **WAL Files**: Don't delete WAL/SHM files manually during operation
- **Backup Location**: Ensure backups are stored on a different disk from the database

## Troubleshooting

### Recovery Fails with Empty Dump

- Database may be too corrupted
- Try the `.recover` command (SQLite 3.29+)
- Restore from an external backup

### "Database is Locked" Error

- Stop the application first
- Check for other processes accessing the database
- Use `fuser backend/data/charon.db` to find processes

### Recovery Succeeds but Data Missing

- Some corruption may result in data loss
- Review the backup before deleting it
- Check the dump SQL file for missing tables

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `scripts/db-recovery.sh`
@@ -1,68 +0,0 @@
#!/usr/bin/env bash
# Skill runner for utility-update-go-version
# Updates local Go installation to match go.work requirements

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

GO_WORK_FILE="$PROJECT_ROOT/go.work"

if [[ ! -f "$GO_WORK_FILE" ]]; then
  echo "❌ go.work not found at $GO_WORK_FILE"
  exit 1
fi

# Extract required Go version from go.work
REQUIRED_VERSION=$(grep -E '^go [0-9]+\.[0-9]+(\.[0-9]+)?$' "$GO_WORK_FILE" | awk '{print $2}')

if [[ -z "$REQUIRED_VERSION" ]]; then
  echo "❌ Could not parse Go version from go.work"
  exit 1
fi

echo "📋 Required Go version from go.work: $REQUIRED_VERSION"

# Check current installed version
CURRENT_VERSION=$(go version 2>/dev/null | grep -oE 'go[0-9]+\.[0-9]+(\.[0-9]+)?' | sed 's/go//' || echo "none")
echo "📋 Currently installed Go version: $CURRENT_VERSION"

if [[ "$CURRENT_VERSION" == "$REQUIRED_VERSION" ]]; then
  echo "✅ Go version already matches requirement ($REQUIRED_VERSION)"
  exit 0
fi

echo "🔄 Updating Go from $CURRENT_VERSION to $REQUIRED_VERSION..."

# Download the new Go version using the official dl tool
echo "📥 Downloading Go $REQUIRED_VERSION..."
go install "golang.org/dl/go${REQUIRED_VERSION}@latest"

# Download the SDK
echo "📦 Installing Go $REQUIRED_VERSION SDK..."
"go${REQUIRED_VERSION}" download

# Update the system symlink
SDK_PATH="$HOME/sdk/go${REQUIRED_VERSION}/bin/go"
if [[ -f "$SDK_PATH" ]]; then
  echo "🔗 Updating system Go symlink..."
  sudo ln -sf "$SDK_PATH" /usr/local/go/bin/go
else
  echo "⚠️ SDK binary not found at expected path: $SDK_PATH"
  echo "   You may need to add go${REQUIRED_VERSION} to your PATH manually"
fi

# Verify the update
NEW_VERSION=$(go version 2>/dev/null | grep -oE 'go[0-9]+\.[0-9]+(\.[0-9]+)?' | sed 's/go//' || echo "unknown")
echo ""
echo "✅ Go updated successfully!"
echo "   Previous: $CURRENT_VERSION"
echo "   Current:  $NEW_VERSION"
echo "   Required: $REQUIRED_VERSION"

if [[ "$NEW_VERSION" != "$REQUIRED_VERSION" ]]; then
  echo ""
  echo "⚠️ Warning: Installed version ($NEW_VERSION) doesn't match required ($REQUIRED_VERSION)"
  echo "   You may need to restart your terminal or IDE"
fi
@@ -1,31 +0,0 @@
# Utility: Update Go Version

Updates the local Go installation to match the version specified in `go.work`.

## Purpose

When the Renovate bot updates the Go version in `go.work`, this skill automatically downloads and installs the matching Go version locally.

## Usage

```bash
.github/skills/scripts/skill-runner.sh utility-update-go-version
```

## What It Does

1. Reads the required Go version from `go.work`
2. Compares it against the currently installed version
3. If different, downloads and installs the new version using `golang.org/dl`
4. Updates the system symlink to point to the new version
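
Step 1 can be sketched as follows; `required_go_version` is a hypothetical helper mirroring the runner script's grep/awk extraction.

```bash
#!/usr/bin/env bash
# Sketch of step 1: read the required Go toolchain version from a
# go.work file (matches lines like "go 1.23.4").
required_go_version() {
  local go_work="$1"
  grep -E '^go [0-9]+\.[0-9]+(\.[0-9]+)?$' "$go_work" | awk '{print $2}'
}
```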

## When to Use

- After the Renovate bot creates a PR updating `go.work`
- When you see "packages.Load error: go.work requires go >= X.Y.Z"
- Before building, if you get Go version mismatch errors

## Requirements

- `sudo` access (for updating symlink)
- Internet connection (for downloading Go SDK)
@@ -1,22 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail

# ==============================================================================
# Utility: Version Check - Execution Script
# ==============================================================================
# This script validates that the .version file matches the latest git tag.
# It wraps the original check-version-match-tag.sh script.
#
# Usage: ./run.sh
# Exit codes: 0 = success, 1 = version mismatch
# ==============================================================================

# Determine the repository root directory
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"

# Change to repository root
cd "$REPO_ROOT"

# Execute the version check script
exec scripts/check-version-match-tag.sh "$@"
142
.github/skills/utility-version-check.SKILL.md
vendored
@@ -1,142 +0,0 @@
---
name: "utility-version-check"
version: "1.0.0"
description: "Validates that the .version file matches the latest git tag for release consistency"
author: "Charon Project"
license: "MIT"
tags:
  - "utility"
  - "versioning"
  - "validation"
  - "git"
compatibility:
  os:
    - "linux"
    - "darwin"
  shells:
    - "bash"
requirements:
  - name: "git"
    version: ">=2.0"
    optional: false
environment_variables: []
parameters: []
outputs:
  - name: "exit_code"
    type: "integer"
    description: "0 if version matches, 1 if mismatch or error"
metadata:
  category: "utility"
  subcategory: "versioning"
  execution_time: "short"
  risk_level: "low"
  ci_cd_safe: true
  requires_network: false
  idempotent: true
---

# Utility: Version Check

## Overview

Validates that the version specified in the `.version` file matches the latest git tag. This ensures version consistency across the codebase and prevents version drift during releases. The check is used in CI/CD to enforce version tagging discipline.

## Prerequisites

- Git repository with tags
- `.version` file in repository root (optional)

## Usage

### Basic Usage

```bash
.github/skills/utility-version-check-scripts/run.sh
```

### Via Skill Runner

```bash
.github/skills/scripts/skill-runner.sh utility-version-check
```

### Via VS Code Task

Use the task: **Utility: Check Version Match Tag**

## Parameters

This skill accepts no parameters.

## Environment Variables

This skill requires no environment variables.

## Outputs

- **Success Exit Code**: 0 - Version matches the latest tag, or no tags exist
- **Error Exit Code**: 1 - Version mismatch detected
- **Console Output**: Validation result message

### Success Output Example

```
OK: .version matches latest Git tag v0.3.0-beta.2
```

### Error Output Example

```
ERROR: .version (0.3.0-beta.3) does not match latest Git tag (v0.3.0-beta.2)
To sync, either update .version or tag with 'v0.3.0-beta.3'
```

## Examples

### Example 1: Check Version During Release

```bash
# Before tagging a new release
.github/skills/utility-version-check-scripts/run.sh
```

### Example 2: CI/CD Integration

```yaml
- name: Validate Version
  run: .github/skills/scripts/skill-runner.sh utility-version-check
```

## Version Normalization

The skill normalizes both the `.version` file content and the git tag by:
- Stripping the leading `v` prefix (e.g., `v1.0.0` → `1.0.0`)
- Removing newline and carriage return characters
- Comparing the normalized versions

This allows flexibility in tagging conventions while ensuring consistency.
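
The normalization can be sketched as below; `normalize_version` and `versions_match` are hypothetical helpers, not the actual `check-version-match-tag.sh`.

```bash
#!/usr/bin/env bash
# Sketch: normalize a version string, then compare two of them.
normalize_version() {
  # Drop CR/LF characters, then strip a leading "v".
  printf '%s' "$1" | tr -d '\r\n' | sed 's/^v//'
}

versions_match() {
  [ "$(normalize_version "$1")" = "$(normalize_version "$2")" ]
}
```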

## Error Handling

- **No .version file**: Exits with 0 (skips the check)
- **No git tags**: Exits with 0 (skips the check, allowing commits before the first tag)
- **Version mismatch**: Exits with 1 and provides guidance
- **Git errors**: Script fails with an appropriate error message

## Related Skills

- [utility-bump-beta](./utility-bump-beta.SKILL.md) - Increment beta version
- [build-check-go](./build-check-go.SKILL.md) - Verify Go build integrity

## Notes

- This check is **non-blocking** when no tags exist (allows initial development)
- Version format is flexible (supports semver, beta, and alpha suffixes)
- Used in CI/CD to prevent merging PRs with version mismatches
- Part of the release automation workflow

---

**Last Updated**: 2025-12-20
**Maintained by**: Charon Project
**Source**: `scripts/check-version-match-tag.sh`