Diagnostics
When something’s wrong, start here.
debug-tool
non-destructive (may write fixes)
Trigger: “Debug the foo tool”, “Why is bar failing?”, “Fix the validation errors on baz”.
Steps:
- Run the validator first — `uv run pixie validate <id> --json`. The report is the primary diagnostic; read it before reading the code.
- For each failing check, map to the most likely file (a sketch of this loop follows the steps):
  - `tool_json_parses` → `tool.json`
  - `pyproject_ok` → `pyproject.toml`
  - `venv_functional` → `pyproject.toml`, then re-run `uv sync`
  - `schema_matches_disk` → both `tool.json` and `main.py`
  - `sample_input_run` → `main.py`
  - `output_conforms` → `main.py` (especially the response shape)
  - `clean_shutdown` → `main.py` lifespan / signal handling
- Read the relevant lines.
- Propose and apply a fix.
- Re-run the validator.
- Two-attempt cap. If two attempts fail, surface both reports verbatim and stop. No further automatic suggestions — the user is better placed to debug a stubborn case.
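A minimal sketch of the validate-and-map loop from the steps above. It assumes the `--json` report carries a `checks` list with `name` and `passed` fields; that report shape is an assumption, only the command itself is documented:

```python
import json
import subprocess

# Likely file(s) to inspect per failing check (the mapping from the steps above).
CHECK_TO_FILES = {
    "tool_json_parses": ["tool.json"],
    "pyproject_ok": ["pyproject.toml"],
    "venv_functional": ["pyproject.toml"],  # then re-run `uv sync`
    "schema_matches_disk": ["tool.json", "main.py"],
    "sample_input_run": ["main.py"],
    "output_conforms": ["main.py"],  # especially the response shape
    "clean_shutdown": ["main.py"],   # lifespan / signal handling
}

def failing_checks(tool_id: str) -> dict[str, list[str]]:
    """Run the validator once; map each failing check to its likely files."""
    result = subprocess.run(
        ["uv", "run", "pixie", "validate", tool_id, "--json"],
        capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    # Assumed report shape: {"checks": [{"name": ..., "passed": ...}, ...]}
    return {
        check["name"]: CHECK_TO_FILES.get(check["name"], [])
        for check in report.get("checks", [])
        if not check["passed"]
    }
```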
Does not:
- Run the tool outside the validator.
- Modify `tool.json` to “make the test pass” without understanding why the test failed.
- Suggest running the validator manually — the skill does this automatically.
pixie-status
read-only
Trigger: “Is Pixie running?”, “What’s warm?”, “How many tools are spawned?”
Returns a snapshot from `/api/launcher/state` (developer-mode only): which tools are warm, their ports, uptime, recent activity, total warm count vs cap. Doesn’t enumerate installed tools — for that, use `list-tools`.
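A hedged sketch of polling that endpoint yourself. The URL path and the developer-mode requirement come from this page; the host and port are assumed to match the health endpoint below, and the response field names (`warm`, `id`, `port`, `uptime`, `cap`) are assumptions:

```python
import json
import urllib.request

# Launcher snapshot; available in developer mode only.
URL = "http://127.0.0.1:7860/api/launcher/state"

with urllib.request.urlopen(URL, timeout=5) as resp:
    state = json.load(resp)

# Assumed shape: {"warm": [{"id": ..., "port": ..., "uptime": ...}], "cap": ...}
for tool in state.get("warm", []):
    print(f"{tool['id']}: port {tool['port']}, up {tool['uptime']}s")
print(f"{len(state.get('warm', []))} warm / cap {state.get('cap', '?')}")
```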
pixie-doctor
read-only
Trigger: “Pixie won’t start”, “Dashboard is broken”, “Run a health check”.
Comprehensive installation diagnostics:
- Python 3.12+ available?
- `uv` available?
- `pixie` package importable?
- `pixie.db` exists and is readable?
- `tools/` exists with at least the example tool?
- `PIXIE_HOST` and `PIXIE_PORT` set correctly?
- Pixie’s HTTP API reachable at `127.0.0.1:7860/api/healthz`?
- `git` available (required by `add-tool-from-repo`)?
- Does `pixie validate example-compound-interest` pass?
- Free disk space in `tools/`?
- Write access to `tools/` and `pixie.db`?
Returns a formatted report with a final summary.
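Several of those checks are cheap to reproduce outside the skill. A sketch of the filesystem and HTTP portions; the selection and wording of checks here are illustrative, not pixie-doctor’s actual implementation:

```python
import importlib.util
import os
import shutil
import sys
import urllib.request

def quick_doctor() -> list[tuple[str, bool]]:
    """Re-run a subset of the pixie-doctor checks by hand."""
    checks = [
        ("Python 3.12+", sys.version_info >= (3, 12)),
        ("uv on PATH", shutil.which("uv") is not None),
        ("git on PATH", shutil.which("git") is not None),
        ("pixie importable", importlib.util.find_spec("pixie") is not None),
        ("pixie.db readable", os.access("pixie.db", os.R_OK)),
        ("tools/ writable", os.access("tools", os.W_OK)),
    ]
    try:
        with urllib.request.urlopen("http://127.0.0.1:7860/api/healthz", timeout=5) as r:
            checks.append(("HTTP API reachable", r.status == 200))
    except OSError:
        checks.append(("HTTP API reachable", False))
    return checks

for name, ok in quick_doctor():
    print(f"{'ok ' if ok else 'FAIL'}  {name}")
```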
list-tools
read-only
Trigger: “List all tools”, “Show installed tools”, “Enumerate tools”.
One-line-per-tool table with id, name, category, layout, validation status. Useful as the starting point for skills that take a tool id — when you don’t remember the exact name.
validate-against-reference
read-only
Trigger: “Validate accuracy of foo”, “Check foo against its fixtures”, “Confirm reproducibility”.
Runs only check 12 of the validator: the reference fixture comparison. Skips checks 1–11. Use in CI when you’ve already confirmed the contract elsewhere and just want to verify the numerics.
Equivalent to `uv run pixie validate <id> --reference-only`. See Fixtures & reference validation for the fixture format.
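In CI the usual pattern is to gate on the exit code. A sketch, assuming the validator exits nonzero when a fixture comparison fails (a reasonable convention, but not confirmed here); `my-tool` is a placeholder id:

```python
import subprocess
import sys

# Gate a CI job on check 12 alone; "my-tool" is a placeholder tool id.
result = subprocess.run(
    ["uv", "run", "pixie", "validate", "my-tool", "--reference-only"]
)
if result.returncode != 0:
    sys.exit("reference fixtures diverged; see the validator output above")
```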
Diagnosis playbook
Most “tool is broken” cases resolve in this order:
- `pixie-status` — is Pixie even running?
- `list-tools` — is the tool present?
- `debug-tool <id>` — what does the validator say?
- If the validator’s checks 1–5 fail → it’s a setup issue (missing file, broken JSON, missing dep, missing `.venv/`).
- If checks 6–8 fail → it’s a startup issue (`main.py` won’t import, uvicorn won’t bind, schema endpoint missing).
- If checks 9–11 fail → it’s a logic issue (wrong output shape, exception during `/run`, slow shutdown).
- If check 12 fails → it’s a correctness regression (use `validate-against-reference` to narrow down which fixture). A triage sketch follows this list.
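The check-number bands translate directly into a triage function. A minimal sketch whose band boundaries come straight from the list above; the function name and return strings are mine:

```python
def triage(failed_checks: set[int]) -> str:
    """Classify a validator failure by its lowest failing check number."""
    first = min(failed_checks)
    if first <= 5:
        return "setup: missing file, broken JSON, missing dep, missing .venv/"
    if first <= 8:
        return "startup: main.py import, uvicorn bind, schema endpoint"
    if first <= 11:
        return "logic: output shape, /run exception, slow shutdown"
    return "correctness regression: run validate-against-reference per fixture"

print(triage({9, 12}))  # lowest failing check is 9 -> "logic: ..."
```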
If you’ve done all that and the tool still doesn’t work:
- `view-logs <id>` to see the raw stderr.
- `pixie-doctor` to check the surrounding installation.
- Ask the user. Sometimes the answer is “this whole concept is wrong”.