# E2E execution runbook
> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).
## Purpose

A summary of the steps to run tests through the API container in this repository, and how to deal with common failures.
## Prerequisites

- Docker / Docker Compose is installed and the services are started with `docker compose up -d`.
- The API container runs as `evospikenet-api` and is exposed on host port 8000.
## Primary endpoints

- `GET /test/test-status` — current test execution status (e.g. `{"status":"idle","message":"No test running"}`)
- `POST /test/run-tests` — runs pytest in the backend; `test_type` can be given as a query parameter (e.g. `unit`, `integration`, `all`).
## Execution example (on host)

```bash
# Verify the API is up
curl -sS http://localhost:8000/test/test-status

# Run unit tests in the backend (synchronous call; the response includes stdout/stderr)
curl -sS -X POST "http://localhost:8000/test/run-tests?test_type=unit" \
  -H "Content-Type: application/json" \
  -o /tmp/e2e_run_output.json -w "\nHTTP_STATUS:%{http_code}\n"

# Check the result
cat /tmp/e2e_run_output.json
```
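When scripting this call, the saved response body can be checked programmatically. A minimal sketch, assuming the body contains the `returncode` / `stdout` / `stderr` fields mentioned later in this runbook (field names may differ in your build):

```python
import json

def summarize_run(raw: str) -> str:
    """Summarize a /test/run-tests response body.

    Assumes the body is JSON with "returncode" and "stderr" keys,
    as described in this runbook; adjust keys to your actual API.
    """
    body = json.loads(raw)
    if body.get("returncode") == 0:
        return "PASS"
    # Surface the last few stderr lines to aid triage.
    tail = "\n".join(body.get("stderr", "").splitlines()[-5:])
    return f"FAIL (returncode={body.get('returncode')}): {tail}"

# Example with a fabricated response body:
sample = json.dumps({
    "returncode": 1,
    "stdout": "",
    "stderr": "E  ModuleNotFoundError: No module named 'GPUtil'",
})
print(summarize_run(sample))
```

This is useful in wrapper scripts that need a nonzero exit code when the suite fails, since the HTTP call itself may return 200 even for failing tests.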
## Notes / Troubleshooting

- `run-tests` executes pytest as a subprocess inside the API container, so the container's Python environment must have all necessary dependencies installed. The following issues were observed during this run:
  - `torch`-related errors such as `ModuleNotFoundError: No module named 'torch._custom_ops'`
  - Missing dependencies: `ModuleNotFoundError: No module named 'GPUtil'`, `zmq`, `pkg_resources`, etc.
  - Syntax errors in test sources (invalid `except` blocks, bad indentation, etc.)
- Actions:
  1. Run a lightweight subset: only `test_type=unit`, or a specific file (example below).
  2. Fix the container image: add the missing packages to the `evospikenet-api` image, or rebuild on the host with the dependencies satisfied:
     - Edit the Dockerfile to add the required pip packages, then `docker compose build api` → `docker compose up -d`
  3. Isolate the cause by running tests directly inside the container:

     ```bash
     docker exec -it evospikenet-api /bin/bash
     source /opt/venv/bin/activate
     cd /home/app/app
     # Example: run a single test file
     /opt/venv/bin/python -m pytest tests/unit/test_stdp_modulation.py -q
     ```
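For the image fix in step 2, a sketch of the Dockerfile change. The package names are inferred from the `ModuleNotFoundError`s listed above (`pyzmq` provides `zmq`, `setuptools` provides `pkg_resources`), and the `/opt/venv` path follows the in-container commands in this runbook; pin versions against your own lockfile:

```dockerfile
# Sketch only: add the missing runtime dependencies observed above.
RUN /opt/venv/bin/pip install --no-cache-dir GPUtil pyzmq setuptools
```

Then rebuild and restart with `docker compose build api && docker compose up -d`.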
## Logs and results

- The API may be configured (via a pytest plugin) to write a JSON report to `/tmp/test_results.json`.
- The response body of `run-tests` also includes `stdout` / `stderr` / `returncode`, so the caller can save the results.
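A small helper for reading the report file can look like the sketch below. The schema is an assumption (pytest-json-report style output with a top-level `"summary"` object); adjust the keys to whatever plugin the API container actually uses:

```python
import json
import tempfile
from pathlib import Path

def report_summary(path: str = "/tmp/test_results.json") -> dict:
    """Return the 'summary' section of the JSON test report.

    The "summary" key is an assumption based on pytest-json-report;
    verify against the report your container actually writes.
    """
    data = json.loads(Path(path).read_text())
    return data.get("summary", {})

# Demo with a fabricated report (the real file is written by the API):
fake = {"summary": {"passed": 10, "failed": 1, "total": 11}}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(fake, f)
print(report_summary(f.name))  # → {'passed': 10, 'failed': 1, 'total': 11}
```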
## CI and operational suggestions

For CI, run the required subset in each job instead of triggering the full suite through the API at once. The full suite has heavy dependencies and more failure modes, so it is easier to run it only in staging.
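A sketch of what that split could look like (GitHub Actions syntax assumed; job names, test paths, and the staging condition are placeholders to adapt):

```yaml
# Sketch only: adapt runner images, paths, and branch names.
jobs:
  unit:                        # fast subset on every push
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install -r requirements.txt
      - run: pytest tests/unit -q
  integration:                 # heavy subset, staging only
    if: github.ref == 'refs/heads/staging'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker compose up -d
      - run: pytest tests/integration -q
```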
## Next steps

- Link this document from README.md if necessary.