fix(docker): enhance error handling and user feedback for Docker service unavailability
@@ -0,0 +1,193 @@
---
description: 'Investigates JavaScript errors, network failures, and warnings from browser DevTools console to identify root causes and implement fixes'
agent: 'agent'
tools: ['changes', 'search/codebase', 'edit/editFiles', 'problems', 'search', 'search/searchResults', 'findTestFiles', 'usages', 'runTests']
---

# Debug Web Console Errors

You are a **Senior Full-Stack Developer** with extensive expertise in debugging complex web applications. You have deep knowledge of:

- **Frontend**: JavaScript/TypeScript, React ecosystem, browser internals, DevTools, network protocols
- **Backend**: Go API development, HTTP handlers, middleware, authentication flows
- **Debugging**: Stack trace analysis, network request inspection, error boundary patterns, logging strategies

Your debugging philosophy centers on **root cause analysis**—understanding the fundamental reason for failures rather than applying superficial fixes. You provide **comprehensive explanations** that educate while solving problems.

## Input Methods

This prompt accepts console error/warning input via two methods:

1. **Selection**: Select the console output text before invoking this prompt
2. **Direct Input**: Paste the console output when prompted

**Console Input** (paste if not using selection):
```
${input:consoleError:Paste browser console error/warning here}
```

**Selected Content** (if applicable):
```
${selection}
```

## Debugging Workflow

Execute the following phases systematically. Do not skip phases or jump to conclusions.

### Phase 1: Error Classification

Categorize the error into one of these types:

| Type | Indicators | Primary Investigation Area |
|------|------------|---------------------------|
| **JavaScript Runtime Error** | `TypeError`, `ReferenceError`, `SyntaxError`, stack trace with `.js`/`.ts` files | Frontend source code |
| **React/Framework Error** | `React`, `hook`, `component`, `render`, `state`, `props` in message | Component lifecycle, hooks, state management |
| **Network Error** | `fetch`, `XMLHttpRequest`, HTTP status codes, `CORS`, `net::ERR_` | API endpoints, backend handlers, network config |
| **Console Warning** | `Warning:`, `Deprecation`, yellow console entries | Code quality, future compatibility |
| **Security Error** | `CSP`, `CORS`, `Mixed Content`, `SecurityError` | Security configuration, headers |

### Phase 2: Error Parsing

Extract and document these elements from the console output:

1. **Error Type/Name**: The specific error class (e.g., `TypeError`, `404 Not Found`)
2. **Error Message**: The human-readable description
3. **Stack Trace**: File paths and line numbers (filter out framework internals)
4. **HTTP Details** (if network error):
   - Request URL and method
   - Status code
   - Response body (if available)
5. **Component Context** (if React error): Component name, hook involved

### Phase 3: Codebase Investigation

Search the codebase to locate the error source:

1. **Stack Trace Files**: Search for each application file mentioned in the stack trace
2. **Related Files**: For each source file found, also check:
   - Test files (e.g., `Component.test.tsx` for `Component.tsx`)
   - Related components (parent/child components)
   - Shared utilities or hooks used by the file
3. **Backend Investigation** (for network errors):
   - Locate the API handler matching the failed endpoint
   - Check middleware that processes the request
   - Review error handling in the handler

### Phase 4: Root Cause Analysis

Analyze the code to determine the root cause:

1. **Trace the execution path** from the error point backward
2. **Identify the specific condition** that triggered the failure
3. **Determine if this is**:
   - A logic error (incorrect implementation)
   - A data error (unexpected input/state)
   - A timing error (race condition, async issue)
   - A configuration error (missing setup, wrong environment)
   - A third-party issue (identify but do not fix)

### Phase 5: Solution Implementation

Propose and implement fixes:

1. **Primary Fix**: Address the root cause directly
2. **Defensive Improvements**: Add guards against similar issues
3. **Error Handling**: Improve error messages and recovery

For each fix, provide:
- **Before**: The problematic code
- **After**: The corrected code
- **Explanation**: Why this change resolves the issue

### Phase 6: Test Coverage

Generate or update tests to catch this error:

1. **Locate existing test files** for affected components
2. **Create test cases** that:
   - Reproduce the original error condition
   - Verify the fix works correctly
   - Cover edge cases discovered during analysis

### Phase 7: Prevention Recommendations

Suggest measures to prevent similar issues:

1. **Code patterns** to adopt or avoid
2. **Type safety** improvements
3. **Validation** additions
4. **Monitoring/logging** enhancements

## Output Format

Structure your response as follows:

```markdown
## 🔍 Error Analysis

**Type**: [Classification from Phase 1]
**Summary**: [One-line description of what went wrong]

### Parsed Error Details
- **Error**: [Type and message]
- **Location**: [File:line from stack trace]
- **HTTP Details**: [If applicable]

## 🎯 Root Cause

[Detailed explanation of why this error occurred, tracing the execution path]

## 🔧 Proposed Fix

### [File path]

**Problem**: [What's wrong in this code]

**Solution**: [What needs to change and why]

[Code changes applied via edit tools]

## 🧪 Test Coverage

[Test cases to add/update]

## 🛡️ Prevention

1. [Recommendation 1]
2. [Recommendation 2]
3. [Recommendation 3]
```

## Constraints

- **DO NOT** modify third-party library code—identify and document library bugs only
- **DO NOT** suppress errors without addressing the root cause
- **DO NOT** apply quick hacks—always explain trade-offs if a temporary fix is needed
- **DO** follow existing code standards in the repository (TypeScript, React, Go conventions)
- **DO** filter framework internals from stack traces to focus on application code
- **DO** consider both frontend and backend when investigating network errors

## Error-Specific Handling

### JavaScript Runtime Errors
- Focus on type safety and null checks
- Look for incorrect assumptions about data shapes
- Check async/await and Promise handling

### React Errors
- Examine component lifecycle and hook dependencies
- Check for stale closures in useEffect/useCallback
- Verify prop types and default values
- Look for missing keys in lists

### Network Errors
- Trace the full request path: frontend → backend → response
- Check authentication/authorization middleware
- Verify CORS configuration
- Examine request/response payload shapes

### Console Warnings
- Assess severity (blocking vs. informational)
- Prioritize deprecation warnings for future compatibility
- Address React key warnings and dependency array warnings
@@ -0,0 +1,142 @@
---
agent: 'agent'
tools: ['search/codebase', 'edit/editFiles', 'search']
description: 'Guide users through creating high-quality GitHub Copilot prompts with proper structure, tools, and best practices.'
---

# Professional Prompt Builder

You are an expert prompt engineer specializing in GitHub Copilot prompt development with deep knowledge of:
- Prompt engineering best practices and patterns
- VS Code Copilot customization capabilities
- Effective persona design and task specification
- Tool integration and front matter configuration
- Output format optimization for AI consumption

Your task is to guide me through creating a new `.prompt.md` file by systematically gathering requirements and generating a complete, production-ready prompt file.

## Discovery Process

I will ask you targeted questions to gather all necessary information. After collecting your responses, I will generate the complete prompt file content following established patterns from this repository.

### 1. **Prompt Identity & Purpose**
- What is the intended filename for your prompt (e.g., `generate-react-component.prompt.md`)?
- Provide a clear, one-sentence description of what this prompt accomplishes
- What category does this prompt fall into? (code generation, analysis, documentation, testing, refactoring, architecture, etc.)

### 2. **Persona Definition**
- What role/expertise should Copilot embody? Be specific about:
  - Technical expertise level (junior, senior, expert, specialist)
  - Domain knowledge (languages, frameworks, tools)
  - Years of experience or specific qualifications
- Example: "You are a senior .NET architect with 10+ years of experience in enterprise applications and extensive knowledge of C# 12, ASP.NET Core, and clean architecture patterns"

### 3. **Task Specification**
- What is the primary task this prompt performs? Be explicit and measurable
- Are there secondary or optional tasks?
- What should the user provide as input? (selection, file, parameters, etc.)
- What constraints or requirements must be followed?

### 4. **Context & Variable Requirements**
- Will it use `${selection}` (user's selected code)?
- Will it use `${file}` (current file) or other file references?
- Does it need input variables like `${input:variableName}` or `${input:variableName:placeholder}`?
- Will it reference workspace variables (`${workspaceFolder}`, etc.)?
- Does it need to access other files or prompt files as dependencies?

### 5. **Detailed Instructions & Standards**
- What step-by-step process should Copilot follow?
- Are there specific coding standards, frameworks, or libraries to use?
- What patterns or best practices should be enforced?
- Are there things to avoid or constraints to respect?
- Should it follow any existing instruction files (`.instructions.md`)?

### 6. **Output Requirements**
- What format should the output be? (code, markdown, JSON, structured data, etc.)
- Should it create new files? If so, where and with what naming convention?
- Should it modify existing files?
- Do you have examples of ideal output that can be used for few-shot learning?
- Are there specific formatting or structure requirements?

### 7. **Tool & Capability Requirements**
Which tools does this prompt need? Common options include:
- **File Operations**: `codebase`, `editFiles`, `search`, `problems`
- **Execution**: `runCommands`, `runTasks`, `runTests`, `terminalLastCommand`
- **External**: `fetch`, `githubRepo`, `openSimpleBrowser`
- **Specialized**: `playwright`, `usages`, `vscodeAPI`, `extensions`
- **Analysis**: `changes`, `findTestFiles`, `testFailure`, `searchResults`

### 8. **Technical Configuration**
- Should this run in a specific mode? (`agent`, `ask`, `edit`)
- Does it require a specific model? (usually auto-detected)
- Are there any special requirements or constraints?

### 9. **Quality & Validation Criteria**
- How should success be measured?
- What validation steps should be included?
- Are there common failure modes to address?
- Should it include error handling or recovery steps?

## Best Practices Integration

Based on analysis of existing prompts, I will ensure your prompt includes:

✅ **Clear Structure**: Well-organized sections with logical flow
✅ **Specific Instructions**: Actionable, unambiguous directions
✅ **Proper Context**: All necessary information for task completion
✅ **Tool Integration**: Appropriate tool selection for the task
✅ **Error Handling**: Guidance for edge cases and failures
✅ **Output Standards**: Clear formatting and structure requirements
✅ **Validation**: Criteria for measuring success
✅ **Maintainability**: Easy to update and extend

## Next Steps

Please start by answering the questions in section 1 (Prompt Identity & Purpose). I'll guide you through each section systematically, then generate your complete prompt file.

## Template Generation

After gathering all requirements, I will generate a complete `.prompt.md` file following this structure:

```markdown
---
description: "[Clear, concise description from requirements]"
agent: "[agent|ask|edit based on task type]"
tools: ["[appropriate tools based on functionality]"]
model: "[only if specific model required]"
---

# [Prompt Title]

[Persona definition - specific role and expertise]

## [Task Section]
[Clear task description with specific requirements]

## [Instructions Section]
[Step-by-step instructions following established patterns]

## [Context/Input Section]
[Variable usage and context requirements]

## [Output Section]
[Expected output format and structure]

## [Quality/Validation Section]
[Success criteria and validation steps]
```

The generated prompt will follow patterns observed in high-quality prompts like:
- **Comprehensive blueprints** (architecture-blueprint-generator)
- **Structured specifications** (create-github-action-workflow-specification)
- **Best practice guides** (dotnet-best-practices, csharp-xunit)
- **Implementation plans** (create-implementation-plan)
- **Code generation** (playwright-generate-test)

Each prompt will be optimized for:
- **AI Consumption**: Token-efficient, structured content
- **Maintainability**: Clear sections, consistent formatting
- **Extensibility**: Easy to modify and enhance
- **Reliability**: Comprehensive instructions and error handling

Please start by telling me the name and description for the new prompt you want to build.
@@ -71,12 +71,15 @@ func (h *DockerHandler) ListContainers(c *gin.Context) {
 	if err != nil {
 		var unavailableErr *services.DockerUnavailableError
 		if errors.As(err, &unavailableErr) {
-			log.WithFields(map[string]any{"server_id": serverID}).WithError(err).Warn("docker unavailable")
-			c.JSON(http.StatusServiceUnavailable, gin.H{"error": "Docker daemon unavailable"})
+			log.WithFields(map[string]any{"server_id": serverID, "host": host}).WithError(err).Warn("docker unavailable")
+			c.JSON(http.StatusServiceUnavailable, gin.H{
+				"error":   "Docker daemon unavailable",
+				"details": "Cannot connect to Docker. Please ensure Docker is running and the socket is accessible (e.g., /var/run/docker.sock is mounted).",
+			})
 			return
 		}

-		log.WithFields(map[string]any{"server_id": serverID}).WithError(err).Error("failed to list containers")
+		log.WithFields(map[string]any{"server_id": serverID, "host": host}).WithError(err).Error("failed to list containers")
 		c.JSON(http.StatusInternalServerError, gin.H{"error": "Failed to list containers"})
 		return
 	}
@@ -76,6 +76,9 @@ func TestDockerHandler_ListContainers_DockerUnavailableMappedTo503(t *testing.T)

 	assert.Equal(t, http.StatusServiceUnavailable, w.Code)
 	assert.Contains(t, w.Body.String(), "Docker daemon unavailable")
+	// Verify the new details field is included in the response
+	assert.Contains(t, w.Body.String(), "details")
+	assert.Contains(t, w.Body.String(), "Docker is running")
 }

 func TestDockerHandler_ListContainers_ServerIDResolvesToTCPHost(t *testing.T) {
@@ -317,14 +317,11 @@ func Register(router *gin.Engine, db *gorm.DB, cfg config.Config) error {
 		logger.Log().Warn("CHARON_ENCRYPTION_KEY not set - DNS provider and plugin features will be unavailable")
 	}

-	// Docker
-	dockerService, err := services.NewDockerService()
-	if err == nil { // Only register if Docker is available
-		dockerHandler := handlers.NewDockerHandler(dockerService, remoteServerService)
-		dockerHandler.RegisterRoutes(protected)
-	} else {
-		logger.Log().WithError(err).Warn("Docker service unavailable")
-	}
+	// Docker - Always register routes even if Docker is unavailable
+	// The service will return proper error messages when Docker is not accessible
+	dockerService := services.NewDockerService()
+	dockerHandler := handlers.NewDockerHandler(dockerService, remoteServerService)
+	dockerHandler.RegisterRoutes(protected)

 	// Uptime Service
 	uptimeService := services.NewUptimeService(db, notificationService)
@@ -55,18 +55,32 @@ type DockerContainer struct {
 }

 type DockerService struct {
-	client *client.Client
+	client  *client.Client
+	initErr error // Stores initialization error if Docker is unavailable
 }

-func NewDockerService() (*DockerService, error) {
+// NewDockerService creates a new Docker service instance.
+// If Docker client initialization fails, it returns a stub service that will return
+// DockerUnavailableError for all operations. This allows routes to be registered
+// and provide helpful error messages to users.
+func NewDockerService() *DockerService {
 	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
 	if err != nil {
-		return nil, fmt.Errorf("failed to create docker client: %w", err)
+		logger.Log().WithError(err).Warn("Failed to initialize Docker client - Docker features will be unavailable")
+		return &DockerService{
+			client:  nil,
+			initErr: err,
+		}
 	}
-	return &DockerService{client: cli}, nil
+	return &DockerService{client: cli, initErr: nil}
 }

 func (s *DockerService) ListContainers(ctx context.Context, host string) ([]DockerContainer, error) {
+	// Check if Docker was available during initialization
+	if s.initErr != nil {
+		return nil, &DockerUnavailableError{err: s.initErr}
+	}
+
 	var cli *client.Client
 	var err error
@@ -13,30 +13,33 @@ import (
 )

 func TestDockerService_New(t *testing.T) {
-	// This test might fail if docker socket is not available in the build environment
-	// So we just check if it returns error or not, but don't fail the test if it's just "socket not found"
-	// In a real CI environment with Docker-in-Docker, this would work.
-	svc, err := NewDockerService()
-	if err != nil {
-		t.Logf("Skipping DockerService test: %v", err)
-		return
+	// NewDockerService now always returns a service (never nil)
+	// If Docker is unavailable, the service will have initErr set
+	svc := NewDockerService()
+	assert.NotNil(t, svc, "NewDockerService should always return a non-nil service")
+
+	// If Docker is unavailable, the service should have an initErr
+	if svc.initErr != nil {
+		t.Logf("Docker service initialized but Docker is unavailable: %v", svc.initErr)
 	}
 	assert.NotNil(t, svc)
 }

 func TestDockerService_ListContainers(t *testing.T) {
-	svc, err := NewDockerService()
-	if err != nil {
-		t.Logf("Skipping DockerService test: %v", err)
-		return
-	}
+	svc := NewDockerService()
 	assert.NotNil(t, svc)

 	// Test local listing
 	containers, err := svc.ListContainers(context.Background(), "")
-	// If we can't connect to docker daemon, this will fail.
-	// We should probably mock the client, but the docker client is an interface?
-	// The official client struct is concrete.
-	// For now, we just assert that if err is nil, containers is a slice.
+
+	// If service has initErr, it should return DockerUnavailableError
+	if svc.initErr != nil {
+		var unavailableErr *DockerUnavailableError
+		assert.ErrorAs(t, err, &unavailableErr, "Should return DockerUnavailableError when Docker is not available")
+		t.Logf("Docker unavailable (expected in some environments): %v", err)
+		return
+	}
+
+	// If we can connect to docker daemon, this should succeed
+	if err == nil {
+		assert.IsType(t, []DockerContainer{}, containers)
+	}
@@ -604,9 +604,21 @@ export default function ProxyHostForm({ host, onSubmit, onCancel }: ProxyHostFor
 						))}
 					</select>
 					{dockerError && connectionSource !== 'custom' && (
-						<p className="text-xs text-red-400 mt-1">
-							Failed to connect: {(dockerError as Error).message}
-						</p>
+						<div className="mt-2 p-3 bg-red-500/10 border border-red-500/30 rounded-lg">
+							<div className="flex items-start gap-2">
+								<AlertTriangle className="w-4 h-4 text-red-400 flex-shrink-0 mt-0.5" />
+								<div className="text-xs text-red-300">
+									<p className="font-semibold mb-1">Docker Connection Failed</p>
+									<p className="text-red-400/90 mb-2">
+										{(dockerError as Error).message}
+									</p>
+									<p className="text-gray-400">
+										<strong>Troubleshooting:</strong> Ensure Docker is running and the socket is accessible.
+										If running in a container, mount <code className="text-xs bg-gray-800 px-1 py-0.5 rounded">/var/run/docker.sock</code>.
+									</p>
+								</div>
+							</div>
+						</div>
 					)}
 				</div>
 			</div>
@@ -123,6 +123,35 @@ describe('useDocker', () => {
 		expect(result.current.containers).toEqual([]);
 	});

+	it('extracts details from 503 service unavailable error', async () => {
+		const mockError = {
+			response: {
+				status: 503,
+				data: {
+					error: 'Docker daemon unavailable',
+					details: 'Cannot connect to Docker. Please ensure Docker is running and the socket is accessible (e.g., /var/run/docker.sock is mounted).'
+				}
+			}
+		};
+		vi.mocked(dockerApi.listContainers).mockRejectedValue(mockError);
+
+		const { result } = renderHook(() => useDocker('local'), {
+			wrapper: createWrapper(),
+		});
+
+		await waitFor(
+			() => {
+				expect(result.current.isLoading).toBe(false);
+			},
+			{ timeout: 3000 }
+		);
+
+		// Verify error message includes the details
+		expect(result.current.error).toBeTruthy();
+		const errorMessage = (result.current.error as Error)?.message;
+		expect(errorMessage).toContain('Docker is running');
+	});
+
 	it('provides refetch function', async () => {
 		vi.mocked(dockerApi.listContainers).mockResolvedValue(mockContainers);
@@ -9,7 +9,19 @@ export function useDocker(host?: string | null, serverId?: string | null) {
 		refetch,
 	} = useQuery({
 		queryKey: ['docker-containers', host, serverId],
-		queryFn: () => dockerApi.listContainers(host || undefined, serverId || undefined),
+		queryFn: async () => {
+			try {
+				return await dockerApi.listContainers(host || undefined, serverId || undefined)
+			} catch (err: any) {
+				// Extract helpful error message from response
+				if (err.response?.status === 503) {
+					const details = err.response?.data?.details
+					const message = details || 'Docker service unavailable. Check that Docker is running.'
+					throw new Error(message)
+				}
+				throw err
+			}
+		},
 		enabled: Boolean(host) || Boolean(serverId),
 		retry: 1, // Don't retry too much if docker is not available
 	})