EvoSpikeNet Edge Verification Report
> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).
1. Overview
- Purpose: Verify whether the EvoSpikeNet SDK can be operated at the edge, and decide on an operation policy that meets the requirements for low latency and low power consumption.
- Execution environment: a Python 3.12 virtual environment (.venv) on the development PC, with locally built Zenoh bindings installed and verified.
2. Verification performed (main points)
- Dependency check: ran `scripts/validate_sdk_startup.py` to verify `torch`, `requests`, and `numpy`. `zenoh` was missing at first; it was later installed successfully from the GitHub source.
- Coordinator startup: ran `scripts/test_coordinator.py`; `init_coordinator`/`start_coordinator` made the node the leader, and the Zenoh connection was confirmed.
- Single-node benchmark: ran `scripts/benchmark_coordinator.py` (200 tasks)
  - Latency: average 0.0000523 s (0.052 ms), p95 ≈ 0.155 ms, max ≈ 0.479 ms
  - Memory: average ≈ 517 MB
  - CPU: average 7.2% (peak 59.5%)
- Small-data training/inference: ran `scripts/mini_data_train_infer.py` (operation check with a dummy model; reports a simple accuracy figure)
- Multi-node short test: ran `scripts/multi_node_sim.py --nodes 3 --tasks 60`
  - Result: count = 60, avg ≈ 0.0000268 s, p95 ≈ 0.000237 s
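The single-node figures above (average, p95, and max latency over 200 tasks) can be reproduced with a simple timing harness. The sketch below is illustrative only and is not the actual `scripts/benchmark_coordinator.py`:

```python
import statistics
import time

def summarize_latencies(latencies_s):
    """Summarize per-task latencies (seconds) into the metrics
    reported above: count, average, p95, and max."""
    ordered = sorted(latencies_s)
    p95_index = max(0, int(len(ordered) * 0.95) - 1)
    return {
        "count": len(ordered),
        "avg_s": statistics.mean(ordered),
        "p95_s": ordered[p95_index],
        "max_s": ordered[-1],
    }

def bench(task_fn, n_tasks=200):
    """Time n_tasks invocations of task_fn with a monotonic clock."""
    samples = []
    for _ in range(n_tasks):
        start = time.perf_counter()
        task_fn()
        samples.append(time.perf_counter() - start)
    return summarize_latencies(samples)
```

The same summary shape applies to the multi-node run (count/avg/p95).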
3. Short interpretation
- In a local (same-host) environment, the coordinator responds in well under a millisecond, so the overhead added by distributed communication (Zenoh) should be small.
- The process needs several hundred MB of memory, so take care on memory-constrained devices.
- Using the SDK as-is on the device side is possible wherever the device can comfortably run Python/Torch; for mobile devices (especially iPhones and some Android devices), converting the model for native inference is preferable.
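As a rough sanity check on the memory figure, model weights alone account for only a small part of it; most of the footprint is the Python/Torch runtime. The arithmetic below uses an illustrative 10M-parameter float32 model, not the actual EvoSpikeNet model size:

```python
def param_memory_mb(num_params, bytes_per_param=4):
    """Approximate memory held by model weights alone
    (float32 = 4 bytes per parameter)."""
    return num_params * bytes_per_param / (1024 ** 2)

# An illustrative 10M-parameter float32 model holds roughly 38 MB of
# weights; the remainder of the ~517 MB observed above is runtime
# overhead (interpreter, torch libraries, buffers).
weights_mb = param_memory_mb(10_000_000)
```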
4. Verification procedures and recommendations for each actual device (target)
A. Raspberry Pi (general specification example: RPi 4/8GB, 64-bit OS)
- Decision points: is the OS 64-bit, is an ARM build of PyTorch available, and is there enough RAM (4 GB or more recommended)?
- Software instructions (example):

```shell
# Create a virtual environment
python3 -m venv .venv
. .venv/bin/activate
pip install -U pip
# Install PyTorch (ARM build) and dependencies (see the official instructions)
pip install numpy requests
# Install EvoSpikeNet (or place the source)
pip install -e /path/to/EvoSpikeNet-Core
# If zenoh is needed, build and install it from source
pip install "git+https://github.com/eclipse-zenoh/zenoh-python.git"
```
- Power measurement:
  - An external USB power meter (inserted into the supply line) is recommended for accuracy.
  - Alternative: use `vcgencmd measure_temp` + `top` to track load trends, but actual power draw still requires external measurement.
- Recommended operation:
  - If Python + torch runs comfortably on the RPi, the SDK can be used as-is.
  - If RAM is insufficient or the power budget is tight, lighten the model (quantize it) or convert it to TorchScript/TFLite and move to native inference.
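Before running workloads, it is worth confirming that the installed dependencies are importable. The helper below is a minimal sketch in the spirit of `scripts/validate_sdk_startup.py` (the actual script may differ):

```python
import importlib.util

def check_deps(names):
    """Report whether each dependency can be imported in this
    environment (True/False per top-level package name)."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

# e.g. on the RPi after the installation steps above:
status = check_deps(["torch", "numpy", "requests", "zenoh"])
missing = [name for name, ok in status.items() if not ok]
```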
B. iPhone 17 (latest iOS / Apple Silicon A-series equivalent)
- Judgment point: iOS places many restrictions on Python/tensor execution (App Store rules and runtime restrictions); Core ML is the recommended route for mobile-native inference.
- Recommended steps:
  - Model conversion path: PyTorch -> ONNX -> Core ML (using `onnx-coreml` or `coremltools`)
  - Evaluation: energy measurement with the Energy instrument in Xcode Instruments; measurement with an external wattmeter was also carried out.
- Execution policy:
  - Run the SDK on an edge server and use the iPhone as a REST client (if the network allows).
- If native standalone operation is required, convert it to CoreML and incorporate it into the app.
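For the edge-server policy, the phone only needs to speak HTTP; the sketch below shows what such an exchange could look like from the client's perspective. The `/infer` endpoint and JSON schema are hypothetical assumptions, not a documented EvoSpikeNet API:

```python
import json
import urllib.request

def build_inference_request(endpoint, features):
    """Build an HTTP POST request for a hypothetical /infer endpoint.
    The URL and JSON schema here are illustrative assumptions."""
    body = json.dumps({"inputs": features}).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# On the phone side the same exchange is a plain HTTPS POST
# (e.g. URLSession on iOS); only the edge server runs the SDK.
req = build_inference_request("http://edge-server:8080/infer", [0.1, 0.2, 0.3])
```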
C. Android (general specs: ARMv8, 4–12GB RAM)
- Decision point: running Python + torch directly is not common (exceptions: the PyTorch Android runtime, Chaquopy, etc.).
- Choices:
  - Convert to a mobile model (TFLite / NNAPI) and run inference on the device.
  - Connect the device to the `EvoSpikeNet` edge server via REST (the SDK runs on the server side).
- Measurement procedure (Android):
```shell
# Reset power/battery statistics
adb shell dumpsys batterystats --reset
# Run the experiment
# Collect the results
adb shell dumpsys batterystats > batterystats.txt
# CPU usage, etc.
adb shell top -n 1 | head -n 20
```
- Recommended: if low power consumption on real devices is important, prioritize `TFLite` + quantization.
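`dumpsys batterystats` reports discharge in mAh; to compare devices it helps to convert that to energy and average power. The conversion below assumes a nominal battery voltage of 3.85 V (an assumption; substitute the device's actual value):

```python
def discharge_to_joules(discharge_mah, nominal_voltage=3.85):
    """Convert a battery discharge figure (mAh, as reported by
    `dumpsys batterystats`) into energy in joules.
    1 mAh at V volts = V * 3.6 joules."""
    return discharge_mah * nominal_voltage * 3.6

def average_power_watts(discharge_mah, duration_s, nominal_voltage=3.85):
    """Average power draw over the measured experiment window."""
    return discharge_to_joules(discharge_mah, nominal_voltage) / duration_s
```

For example, 50 mAh consumed over a 600 s experiment corresponds to about 693 J, i.e. roughly 1.16 W average.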
5. Model conversion (simple guide)
- TorchScript (recommended: serializes PyTorch models)

```python
import torch

# `model` is an existing trained torch.nn.Module and `input_shape`
# its input dimensions (both defined elsewhere)
model.eval()
example = torch.randn(1, *input_shape)
traced = torch.jit.trace(model, example)
traced.save('model.pt')
```
- ONNX (compatibility check)

```python
# `model` and `example` as in the TorchScript snippet above
torch.onnx.export(model, example, 'model.onnx', opset_version=14)
```
- TFLite (general flow: ONNX -> TF -> TFLite)
  - ONNX -> TensorFlow: use `onnx-tf`
  - TF -> TFLite: `tflite_convert` or `tf.lite.TFLiteConverter`
- iOS (Core ML): convert with `onnx-coreml` or `coremltools`.
- Quantization: apply post-training quantization where possible to improve inference efficiency.
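To make the post-training quantization step concrete, the sketch below simulates symmetric int8 quantization in pure Python. It illustrates the principle only; it is not the TFLite or PyTorch implementation:

```python
def quantize_int8(weights):
    """Simulate symmetric int8 post-training quantization of float
    weights: map [-max|w|, +max|w|] onto integer steps in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.003, 0.9]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
# int8 storage is 4x smaller than float32; the per-weight
# reconstruction error is bounded by scale / 2.
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```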
Note: if the model relies on an SNN-specific implementation (spike processing), a straightforward PyTorch → TFLite conversion may not reproduce equivalent behavior. Models with SNN-specific layers/operations require either running the native SDK at the edge or implementing a compatibility layer for the conversion.
6. Recommended future work (in order of priority)
- Baseline measurement on the actual device (Raspberry Pi first): idle/load measurement with an external power meter. (To be carried out on the actual device.)
- Convert representative models for Android/iPhone and compare the accuracy, latency, and power of standalone inference. (I can prepare the scripts.)
- Determine the operation policy: if the device meets the requirements, use "SDK standalone operation"; otherwise use "hybrid (edge SDK + mobile native model)".
7. Appendix: Main commands
- Virtual environment & script execution
```shell
cd /home/maoki/Products/EvoSpikeNet-Core
. .venv/bin/activate
python3 scripts/validate_sdk_startup.py
python3 scripts/test_coordinator.py
python3 scripts/benchmark_coordinator.py
python3 scripts/multi_node_sim.py --nodes 3 --tasks 60
python3 scripts/mini_data_train_infer.py
```
- Zenoh source installation (reference)
```shell
. .venv/bin/activate
pip install "git+https://github.com/eclipse-zenoh/zenoh-python.git"
```
Work logs and generation scripts are already saved under `scripts/`. If necessary, this Markdown can be converted to PDF to produce a report for distribution.