
EvoSpikeNet function spec table

> [!NOTE]
> For the latest implementation status, refer to Functional Implementation Status (Remaining Functionality).

Created: 2026-04-01. Last measured: 2026-04-01 (local direct execution, N=50 iterations).
Purpose: import EvoSpikeNet-Core directly from the local Python environment and list measured performance indicators for the functions covered by unit tests.

Measurement environment (2026-04-01)

- Execution mode: EvoSpikeNet-Core is imported and executed directly from the local venv; Docker/HTTP APIs are not used.
- Host: Linux x86-64, CPU-only (no GPU), Python 3.12.3, /home/maoki/GitHub/.venv
- Measurement tools: time.perf_counter_ns() + psutil, N=50 repeats
- Bench script: EvoSpikeNet-Core/benchmarks/feature_spec_local_bench.py
- Note: MFCC extraction from raw waveforms was not measured on this host due to a torchaudio ABI inconsistency; for the audio system, the encoder itself was measured with MFCC tensor input.


1. Definition of metrics

We prioritize indicators that are both measurable and effective for differentiation when comparing with other AI systems.

| Column heading | Unit | Explanation | Differentiation from other AIs |
|---|---|---|---|
| Cold Latency | ms | Time of the initial (cold-start) call | Measured evidence for edge deployment and offline startup |
| Warm p50 / p90 / p99 | ms | Percentile latencies of steady-state inference | Low tail latency demonstrates real-time responsiveness |
| Throughput | ops/s / FPS | Work processed per unit time | Two-sided evaluation of batch processing and real-time streaming |
| Peak RAM | MB | Peak memory usage during processing (RSS) | Basis for feasibility on edge devices |
| Disk size | MB | Storage occupied by model weights + checkpoints | Deployment-footprint comparison (GPT-4 etc.: undisclosed, expected to exceed 300 GB) |
| Spike firing rate | Hz | Average spike firing frequency | SNN-specific metric; has no counterpart in conventional (dense-tensor) AI |
| Spike sparse density | % | Percentage of active neurons (lower = more energy-efficient) | Direct indicator of computational and energy efficiency |
| Adaptive convergence steps | steps | Steps required to adapt to a new task | Quantifies continuous-learning ability |
| Energy ratio | × (vs. conventional AI) | Relative power consumption vs. conventional CNN/Transformer | Quantifies the energy-saving advantage |
| Offline operation | ✓/✗ | Whether it works without an internet connection | Independent operation without API dependence |
| Number of parallel nodes | units | Maximum number of verified distributed nodes | Demonstrates distributed scalability |
| Test files | — | Corresponding unit-test files (main ones) | Basis for test coverage |

Measurement tools: time.perf_counter_ns() + psutil.Process.memory_info() + tracemalloc
Statistics: each metric reports mean/p50/p90/p99 over N ≥ 30 iterations (benchmark details: benchmarks/system_bench_plan_and_report.md)
Legend: ☑ = actually measured / ◎ = target or documented value / — = not measured (measurement required)
Test-result legend: 🟢 all PASS / 🟡 partial FAIL (X pass / Y fail) / 🔴 error or all FAIL
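The measurement procedure above can be sketched as a minimal harness. This is an illustrative reimplementation, not the actual feature_spec_local_bench.py (which additionally records RSS via psutil and tracemalloc); `bench` and `percentile` are hypothetical names:

```python
import time
import statistics

def percentile(samples, q):
    """Nearest-rank percentile over a list of samples (q in [0, 100])."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, max(0, round(q / 100 * (len(ordered) - 1))))
    return ordered[idx]

def bench(fn, n=50):
    """Time fn(): the first call is the cold start, the rest are warm iterations."""
    t0 = time.perf_counter_ns()
    fn()
    cold_ms = (time.perf_counter_ns() - t0) / 1e6
    warm_ms = []
    for _ in range(n - 1):
        t0 = time.perf_counter_ns()
        fn()
        warm_ms.append((time.perf_counter_ns() - t0) / 1e6)
    return {
        "cold_ms": cold_ms,
        "p50_ms": percentile(warm_ms, 50),
        "p90_ms": percentile(warm_ms, 90),
        "p99_ms": percentile(warm_ms, 99),
        "mean_ms": statistics.fmean(warm_ms),
    }
```

A call like `bench(lambda: model.forward(x))` yields the cold/p50/p90/p99 columns used throughout the tables below.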


2. Functional specifications table (by category)

2-1. Core SNN engine

Function Cold Latency (ms) Warm p50 (ms) Warm p90 (ms) Peak RAM (MB) Disk (MB) Spike firing rate (Hz) Spike sparse density (%) Offline Test results Test file
LIF neuron model ☑ 0.497 ☑ 0.078 ☑ 0.083 ☑ 392.97 < 1 ◎ 20–100 ◎ 5–15 % 🟢 1/0 test_neuron_factory.py
Surrogate gradient (BPTT alternative) ☑ 0.047 ☑ 0.008 ☑ 0.009 ☑ 395.25 < 1 N/A N/A 🟢 3/0 test_surrogate.py
ChronoSpikeAttention ☑ 0.054 ☑ 0.012 ☑ 0.013 ☑ 395.25 < 10 ◎ 10–80 ◎ 10–30 % 🟡 7/1 test_attention.py, test_attention_shapes.py
Synapse basic operations ☑ 0.225 ☑ 0.036 ☑ 0.038 ☑ 396.43 < 1 N/A N/A 🟢 11/0 test_synapses.py
Izhikevich neuron / NMDA block ☑ 0.376 ☑ 0.120 ☑ 0.127 ☑ 394.21 < 1 ◎ 5–150 ◎ 5–20 % 🟢 3/0 test_nmda_block_and_izhikevich.py
Quantization utilities (int8/int16) ◎ < 0.5 ◎ < 2 ◎ < 30 % reduction ◎ 50 % reduction N/A N/A test_quantization_utils.py
Geometry/Structures ◎ < 0.2 ◎ < 1 < 10 < 1 N/A N/A 🟡 1/1 test_structures.py, test_shapes_suite.py

☑ Local direct measurement (2026-04-01): LIF forward pass (CPU, n=1, dim=100), N=50: cold=0.497 ms, p50=0.078 ms, p90=0.083 ms. ChronoSpikeAttention, measured under the same conditions: p50=0.012 ms (tests: 7 passed / 1 failed).
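For reference, the LIF dynamics behind the first row can be reproduced with a dependency-free sketch, which also shows how the spike-firing-rate and sparse-density columns are derived from a spike train. The time constant, threshold, and input current below are generic illustrative values, not EvoSpikeNet-Core parameters:

```python
def lif_step(v, i_in, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron.
    Returns (new membrane potential, spiked flag)."""
    v = v + dt / tau * (i_in - v)   # leaky integration toward the input
    if v >= v_th:                   # threshold crossing -> emit a spike
        return v_reset, 1
    return v, 0

# Drive one neuron for 1000 steps (1 ms each) with a constant input.
spikes, v = [], 0.0
for _ in range(1000):
    v, s = lif_step(v, i_in=1.5)
    spikes.append(s)

firing_rate_hz = sum(spikes) / (1000 * 1e-3)      # spikes per simulated second
sparse_density = 100 * sum(spikes) / len(spikes)  # % of steps carrying a spike
```

With these toy parameters the neuron fires at roughly 90 Hz, i.e. within the 20–100 Hz target band listed for the LIF row.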


2-2. Synaptic plasticity/learning

Feature Cold Latency (ms) Warm p50 (ms) Warm p90 (ms) Peak RAM (MB) Adaptive Convergence Step Energy Ratio (×) Offline Test Results Test File
STDP Basic ☑ 0.204 ☑ 0.063 ☑ 0.066 ☑ 397.14 ◎ 0.6× 🟢 1/0 test_stdp_modulation.py
Meta-STDP (self-adjusting type) ☑ 0.101 ☑ 0.047 ☑ 0.048 ☑ 421.22 ◎ −75 % (vs basic STDP) ◎ 0.4× 🟡 14/1 test_meta_stdp.py
Neuromodulation STDP Gating ◎ < 3 ◎ < 15 < 60 ◎ 0.5× test_stdp_neuromodulation.py
Hierarchical Plasticity ☑ 0.008 ☑ 0.002 ☑ 0.002 ☑ 421.22 N/A 🟢 5/0 test_hierarchical_plasticity.py
Eligibility Traces ◎ < 2 ◎ < 8 < 30 N/A test_eligibility_traces.py
Plasticity Factory / Modulator ◎ < 1 ◎ < 3 < 20 N/A test_plasticity_factory.py, test_plasticity_modulator.py
Energy-dependent plasticity ☑ 0.079 ☑ 0.041 ☑ 0.046 ☑ 421.22 ◎ 0.7× 🟢 18/0 test_energy_plasticity.py
Distributed adaptive synchronization ◎ < 10 ◎ < 50 < 100 N/A test_adaptive_sync.py
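The pair-based STDP rule these rows build on can be sketched as follows; the amplitudes and time constants are generic textbook values, not the parameters exercised by test_stdp_modulation.py:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair.
    dt_ms = t_post - t_pre: positive (pre before post) -> potentiation (LTP),
    negative (post before pre) -> depression (LTD), decaying exponentially
    with the spike-timing gap."""
    if dt_ms >= 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    return -a_minus * math.exp(dt_ms / tau_minus)

ltp = stdp_dw(+5.0)   # pre fired 5 ms before post: weight increases
ltd = stdp_dw(-5.0)   # post fired 5 ms before pre: weight decreases
```

Meta-STDP and neuromodulated variants in the table additionally scale `a_plus`/`a_minus` at runtime, which is what the adaptive-convergence and energy-ratio columns quantify.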

2-3. Neuromodulator system

Function Warm p50 (ms) Warm p99 (ms) Peak RAM (MB) Spike firing rate change (%) Offline Test results Test file
Dopamine / Reward Circuit ◎ < 2 < 30 ◎ +20–50 🟢 2/0 test_reward_circuit.py, test_td_and_oxytocin.py
Acetylcholine (ACh) module ◎ < 2 < 20 ◎ θ band +15 test_acetylcholine_module.py, test_adapter_ach_trigger.py
GABA / Excitation-inhibition balance ◎ < 1 < 20 ◎ −30–60 🟢 2/0 test_gaba_and_network.py, test_gaba_tuning.py
Neuromodulatory gate integration ◎ < 3 < 50 test_neuromod_gate.py
Neuromodulation REST API ☑ ☑ 4.1 ☑ 28.8 < 60 test_neuromod_rest_api.py
Emotion System ◎ < 5 < 40 🟢 3/0 test_emotion_system.py
Consciousness Circuit ☑ 0.001 ☑ 421.31 🟡 34/5 test_conscience_circuit.py
Rhythm synchronization (θ/γ) ◎ < 2 < 30 ◎ γ band sync 🟡 3/2 test_rhythm_sync.py
Neuromodulatory state (biomimetic API) ☑ ☑ 4.1 ☑ 28.8 REST /biomimetic/neuromod/state

☑ Actual measurement (2026-04-01): /biomimetic/neuromod/state, N=30: cold=74.2 ms, p50=4.1 ms, p90=5.3 ms, p99=28.8 ms.


2-4. Storage system

Feature Cold Latency (ms) Warm p50 (ms) Throughput (ops/s) Peak RAM (MB) Disk (MB) Offline Test File
Hippocampal buffer (episodic memory) ☑ ☑ 0.446 ☑ 132.17 µs/op ☑ 7566 ☑ 423.87 < 1 🟡 8/2
Hippocampal buffer (via API) ☑ ☑ 43.2 ☑ 5.2 ☑ > 10⁴
Working memory ◎ < 1 ◎ > 10⁴ < 50 < 1 🟢 3/0
Long-term memory / hippocampal transfer ☑ 0.036 ☑ 0.007 ☑ 424.65 < 10
SNN memory extension ◎ < 10 < 200 < 20
Sleep memory consolidation (STDP) ◎ < 500 ◎ < 50 < 150 < 5 🟡 9/4
Sleep–wake cycle ☑ ☑ 4.3 ☑ 4.4 < 80 < 1 🟢 2/0
Forgetting Controller ◎ < 2 < 30 < 1
Tensor cache ☑ 0.221 ☑ 46.16 µs/op ☑ 21662 ☑ 424.57 🟡 11/2
Retention policy ◎ < 1 < 20
Memory Statistics API ☑ ☑ 4.5 ☑ 4.0

☑ Local direct measurement (2026-04-01): EpisodicMemory store cold=0.446 ms, p50=132.17 µs, throughput=7566 ops/s. TensorCache p50=46.16 µs, throughput=21662 ops/s. LongTermMemoryModule p50=0.007 ms.
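As a sanity check, the per-op latency and throughput columns are reciprocals of each other; a tiny helper (illustrative, not part of the bench script) cross-checks the EpisodicMemory and TensorCache rows:

```python
def ops_per_sec(us_per_op):
    """Convert a per-operation latency in microseconds to operations/second."""
    return 1_000_000 / us_per_op

# 132.17 µs/op -> ~7566 ops/s, matching the hippocampal-buffer row.
episodic = ops_per_sec(132.17)
# 46.16 µs/op -> ~21664 ops/s, close to the 21662 ops/s TensorCache row;
# the small gap comes from per-iteration jitter in the measured run.
cache = ops_per_sec(46.16)
```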


2-5. Prefrontal cortex (PFC) ・Cognitive control

Feature Warm p50 (ms) Warm p90 (ms) Peak RAM (MB) Number of parallel nodes Offline Test results Test file
PFC Basic Routing ☑ ☑ 0.027 ☑ 0.045 ☑ 428.70 1 🟡 2/1 test_pfc.py
Q-PFC (Quantum-inspired PFC) ☑ ☑ 0.184 ☑ 0.222 ☑ 429.63 1 🟡 21/2 test_q_pfc_loop.py
Q-PFC Health API ☑ ☑ 3.6 ☑ 4.9 /q-pfc/api/v1/health
Q-PFC stats API ☑ ☑ 2.7 ☑ 592.8 * /q-pfc/api/v1/q-pfc/stats
Q-PFC adaptive control ◎ < 20 ◎ < 100 < 200 1 test_q_pfc_adaptive_control.py
Distributed Q-PFC (28 nodes supported) ◎ < 50 ◎ < 200 < 500 ◎ 28 test_distributed_qpfc.py
Multi PFC cluster ◎ < 30 ◎ < 150 < 400 ◎ 10 test_multi_pfc_cluster.py, test_multipfc_cluster.py
Executive Control ◎ < 10 ◎ < 40 < 120 1 test_executive_control.py
Intention Bias (Intention API) ☑ ☑ 2.7 ☑ 4.9 < 80 1 /intention/current
PFC Decision Making (make_decision API) ☑ ☑ 4.5 ☑ 9.5 < 150 /api/make_decision
Default mode network ◎ < 10 ◎ < 30 < 100 1 test_dmn.py
Consensus decision making ☑ ☑ 3.8 ◎ < 80 < 200 ◎ 5+ /api/consensus/stats

☑ Local direct measurement (2026-04-01): AdvancedPFCEngine forward N=50: cold=0.319 ms, p50=0.027 ms, p90=0.045 ms. QPFCAdaptiveController forward: cold=0.462 ms, p50=0.184 ms, p90=0.222 ms.
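PFC routing of this kind typically scores candidate downstream modules and dispatches to the highest-scoring one. The sketch below shows that generic pattern only; the scores, module names, and function names are hypothetical and do not reflect AdvancedPFCEngine's actual API:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route(scores, modules):
    """Return the module with the highest routing probability,
    plus the full probability distribution."""
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return modules[best], probs

# Hypothetical scores for three downstream modules:
module, probs = route([0.2, 1.7, -0.4], ["vision", "language", "motor"])
```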


2-6. Sensory/perceptual input

Features Warm p50 (ms) Warm p90 (ms) Throughput Peak RAM (MB) Disk (MB) Offline Test results Test files
Visual encoder ☑ ☑ 0.105 ☑ 0.111 ◎ 15–30 FPS ☑ 432.35 < 50 🟢 2/0 test_vision.py
Audio encoder ☑ ☑ 0.582 ☑ 0.592 ☑ 434.38 < 20 🟢 3/0 test_audio.py
Audio → language conversion ◎ 100–300 ms < 300 < 50 test_audio_to_language.py
EEG integration/translation ◎ < 50 ms ◎ > 100 ch < 400 < 5 test_eeg_integration.py, test_eeg_translator.py
EEG driver/device ◎ < 20 ms < 100 < 1 test_eeg_drivers.py, test_eeg_drivers_device.py
Tactile → Language Conversion ◎ < 100 ms < 150 < 5 test_tactile_to_language.py
LiDAR driver ◎ < 10 ms < 100 < 1 test_lidar_driver.py
USB camera driver ◎ < 33 ms ◎ 30 FPS < 50 < 1 test_usb_camera_driver.py
Stereo Infrared / ONVIF ◎ < 50 ms < 150 < 1 test_stereo_infrared_onvif_env.py
Sensor integration interface ◎ < 5 ms < 50 test_sensor_interface.py
Multimodal Fusion ◎ 80–200 ms < 500 < 20 test_fusion.py

☑ Local direct measurement (2026-04-01): SpikingVisionEncoder N=50: cold=0.425 ms, p50=0.105 ms, p90=0.111 ms. SpikingAudioEncoder, under the MFCC-tensor-input condition: cold=1.203 ms, p50=0.582 ms, p90=0.592 ms.
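Spiking sensory encoders generally convert continuous input intensities into spike trains. A generic rate-coding sketch (illustrative only; not the SpikingVisionEncoder/SpikingAudioEncoder implementation, whose parameters are unknown here):

```python
import random

def rate_encode(intensities, steps=100, max_rate=0.5, seed=0):
    """Bernoulli rate coding: each input value in [0, 1] becomes a spike
    train whose per-step firing probability is proportional to the value."""
    rng = random.Random(seed)
    trains = []
    for x in intensities:
        p = max(0.0, min(1.0, x)) * max_rate
        trains.append([1 if rng.random() < p else 0 for _ in range(steps)])
    return trains

# Brighter inputs fire more often:
trains = rate_encode([0.1, 0.9], steps=200)
```

Downstream SNN layers then consume these binary trains, which is what makes the spike-sparsity columns in these tables meaningful.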


2-7. Spatial processing/3D generation

Features Warm p50 (ms) Warm p90 (ms) Throughput (FPS) Peak RAM (MB) Output size (MB) Offline Test file
Basic spatial processing ☑ 0.009 ☑ 0.009 ☑ 439.33 test_spatial_processing.py
Spatial generation (high precision) ◎ 10–50 ms ◎ < 200 ms ◎ 15–60 FPS ◎ < 400 < 5 MB/frame test_spatial_generation_high_precision.py
Spatial optimization ◎ < 30 ◎ < 120 < 300 test_spatial_optim.py
Spatial model switching ◎ < 5 ◎ < 20 < 100 test_spatial_models_toggle.py
Cortical topology construction ☑ 18.246 ☑ 18.525 ☑ 444.83 < 20 test_cortical_topology.py, test_cortical_topology_unit.py
Cortical Topology Export ◎ < 100 ◎ < 500 < 800 ◎ < 50 test_cortical_topology_export_save.py
3D visualization ◎ < 200 ◎ < 1000 < 1000 test_3d_visualization.py, test_cortical_topology_viz.py

2-8. Language/Text Processing

Feature Warm p50 (ms) Warm p90 (ms) Throughput (tokens/s) Peak RAM (MB) Disk (MB) Offline Test results Test file
Brain language architecture (overall) ☑ 355.006 ☑ 446.27 < 100 🟢 26/0 test_brain_language.py, test_brain_language_comprehensive.py
Spike tokenizer ☑ 0.006 ☑ 0.006 ☑ 10392223 ☑ 444.83 < 10 🟢 1/0 test_tokenizer.py, test_token_categories.py
Spike Transformer ◎ < 10 ◎ < 40 ◎ > 1000 < 400 < 200 test_transformer.py
Language model (SNN-LM) ◎ < 30 ◎ < 100 ◎ > 300 < 600 < 300 test_language_model.py
Text encoder ◎ < 5 ◎ < 20 ◎ > 2000 < 100 < 20 test_text.py, test_encoding.py
Document parser (stream) ◎ < 10 ◎ < 50 < 100 test_document_parsers.py, test_parsers_stream.py
Semantic chunking ◎ < 20 ◎ < 80 < 150 test_semantic_chunking.py, test_chunking.py
Knowledge graph integration (RAG-SNN) ◎ < 30 ◎ < 100 < 300 test_snn_rag.py

2-9. RAG/Search

Feature Warm p50 (ms) Warm p90 (ms) Index Size (GB) Peak RAM (MB) Offline Test Results Test File
RAG Query (API) ☑ ☑ 40.3 ☑ 55.7 < 500 ✓ (Local VS) 🔴 errors test_rag.py, /api/rag/query
SNN-RAG Hybrid ◎ < 30 ◎ < 120 < 400 test_snn_rag.py
Milvus backend ☑ Running ☑ 40.3 ☑ 55.7 < 200 △ Running 🔴 errors test_rag_milvus.py
Elasticsearch backend ☑ Running ☑ 3.2 ☑ 5.5 < 200 △ Running test_elasticsearch_client.py
Redis cache ☑ ☑ 0.23 ☑ 0.34 < 50 /api Via Redis
RAG version API ◎ < 20 ◎ < 80 < 100 test_rag_version_api.py, test_rag_client_versions.py
RAG WebSocket Progress ◎ < 100 ms (RTT) ◎ < 500 < 100 test_rag_ws_progress.py
RAG multilingual (supports Japanese particles) ◎ < 30 ◎ < 100 < 200 test_japanese_rag_particle_issue.py
RAG Debugging/Improvement test_rag_debug.py, test_rag_improvement.py

☑ Actual measurement (2026-04-01): Milvus/RAG: rag/query cold=286.8 ms, p50=40.3 ms, p90=55.7 ms; ES cluster status=green; Redis SET/GET p50=0.23 ms.


2-10. Evolution/Optimization Engine

Feature Cold Latency (ms) Time per generation (ms) Peak RAM (MB) Disk (MB) Generalization improvement rate Offline Test file
Genome initialization ☑ ☑ 0.092 ☑ 396.44 < 1 🟢 8/0
Genome → Brain Forward ☑ ☑ 1.943 ☑ 446.66
Evolution Engine (Basic) ◎ < 100 ◎ < 500 < 500 < 10 ◎ +15–25 % 🟡 29/2
Evolution Status API ☑ ☑ 4.3 ☑ 5.1
Advanced Mutations ☑ 0.059 ☑ 446.66 < 5
Coevolution ◎ < 200 ◎ < 1000 < 800 < 20 ◎ +40 % (vs manual design)
Maintain diversity ◎ < 100 < 100
Fitness evaluator ◎ < 50 < 100
Annealing optimization ◎ < 500 ◎ < 200 < 100
Genome pool management ◎ < 50 < 200 < 50
Distributed Evolution Engine ◎ < 200 ◎ < 2000 < 1000 < 50

☑ Local direct measurement (2026-04-01): EvoGenome initialization cold=0.092 ms, GenomeToBrainConverter p50=1.943 ms, MutationEngine p50=0.059 ms, GenomePool.evolve_generation(pop=10) p50=0.001 ms.
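The evolve-generation loop measured here follows the usual select-and-mutate pattern. The toy version below uses float-vector genomes with truncation selection and Gaussian mutation; the fitness function, mutation rate, and pool size are illustrative, not EvoGenome/GenomePool internals:

```python
import random

def evolve(fitness, dim=8, pop=10, generations=30, sigma=0.1, seed=0):
    """Truncation selection + Gaussian mutation over float-vector genomes.
    Returns the best genome and its fitness after the final generation."""
    rng = random.Random(seed)
    genomes = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop)]
    for _ in range(generations):
        ranked = sorted(genomes, key=fitness, reverse=True)
        parents = ranked[: pop // 2]             # keep the fitter half
        children = [
            [g + rng.gauss(0, sigma) for g in rng.choice(parents)]
            for _ in range(pop - len(parents))   # refill by mutating parents
        ]
        genomes = parents + children
    best = max(genomes, key=fitness)
    return best, fitness(best)

# Maximize -sum(x^2): the optimum is the zero vector.
best, score = evolve(lambda g: -sum(x * x for x in g))
```

Because the fitter half is carried over each generation, the best fitness is monotonically non-decreasing, which is what the "time per generation" column amortizes over.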


2-11. Quantum Inspire Function

Features Warm p50 (ms) Warm p90 (ms) Peak RAM (MB) Metacognitive instability (ratio) Offline Test file
Quantum Layers ◎ < 5 ◎ < 20 < 100 🟢 3/0
Quantum Interface ◎ < 10 ◎ < 50 < 150
Quantum Enhancer ◎ < 10 ◎ < 40 < 100
Quantum scene adjustment ◎ < 20 ◎ < 80 < 200
Quantum Tuning ◎ < 30 ◎ < 100 < 200
Quantum tomography ☑ 92.657 ☑ 735.723 ☑ 450.88 test_quantum_tomography.py
Advanced quantum decision ☑ 0.222 ☑ 0.229 ☑ 451.00 ◎ +340 % (vs. conventional AI) test_advanced_quantum_decision.py
Q-PFC Profile ◎ < 5 ◎ < 20 < 100
Q-PFC Advanced Extensions ◎ < 30 ◎ < 120 < 250

☑ Local direct measurement (2026-04-01): AdvancedQuantumDecisionMaker was measured under the binary-choice condition (num_options=2): cold=0.410 ms, p50=0.222 ms, p90=0.229 ms.


2-12. Distributed/Communication System

Features Dispatch reached ACK (ms) E2E latency (ms) Throughput (msg/s) Number of parallel nodes Offline Test results Test files
Distributed Brain Node ☑ ☑ 2.6 ◎ < 50 ◎ > 1000 ◎ 28 LAN test_distributed_brain_node.py, /api/distributed_brain/status
Zenoh PubSub (stats API) ☑ ☑ 3.8 ◎ < 10 ◎ > 10⁴ ◎ 28+ LAN/WAN 🟡 1/3 test_zenoh_comm.py, /api/zenoh/stats
RAFT Consensus (stats API) ☑ ☑ 3.8 ◎ < 100 ◎ 5–25 LAN test_raft_persistence.py, /api/consensus/stats
Node autonomous discovery ☑ ☑ 2.7 ◎ 28+ LAN /node-discovery/health
Dynamic load balancing API ☑ ☑ 4.6 ◎ 25 LAN test_dynamic_load_balancer.py, /api/loadbalancer/statistics
Distributed learning ◎ < 500 ◎ 10+ LAN test_distributed_training.py
Distributed evaluation ◎ < 200 ◎ 10+ LAN test_distributed_evaluation.py
Node communication delay tag ◎ < 1 LAN test_communication_delay_tag.py, test_delay_tag_propagation.py
PTP time synchronization ◎ < 1 LAN test_ptp_sync.py
Geographic Node Management ☑ ☑ 2.5 ◎ 28+ /api/geo/nodes, test_geo_node_manager.py

☑ Actual measurement (2026-04-01): Zenoh stats API N=30: cold=25.8 ms (first connection), p50=3.8 ms; RAFT stats p50=3.8 ms; LoadBalancer p50=4.6 ms; distributed_brain/status p50=2.6 ms. Note: the Zenoh library runs inside a Docker container and cannot be imported directly from host-side Python (the zenoh package is not installed there); all functionality is available via the API.


2-13. Security/Encryption

Features Warm p50 (ms) Warm p99 (ms) Encryption overhead Key length/method Offline Test results Test file
Spike encryption (XOR byte level) ☑ 0.013 ☑ 0.014 ◎ < 5 % XOR+biomimic enforcement opt-in 🟢 33/0 test_spike_encryption.py
TLS Enforcement ◎ < 5 TLS 1.2/1.3 test_tls_enforcement.py
mTLS mutual authentication ◎ < 10 Client certificate test_mtls_auth.py
SSL Context ◎ < 2 test_ssl.py, test_ssl_context.py
Safety filter ◎ < 5 test_safety_filter.py
General Security ◎ < 10 OWASP Top10 compliant 🔴 errors test_security.py
OPA Policy Authorization ☑ Running ☑ 1.1 ☑ < 2 Rego Policy (OPA Docker: evospikenet-opa:8181)
File validator ◎ < 1 test_file_validator.py

☑ Local direct measurement (2026-04-01): AdvancedEncryptionEngine encrypt cold=0.904 ms, p50=0.013 ms, p90=0.014 ms. Spike encryption tests: 33 passed / 0 failed.
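XOR byte-level encryption, as in the first row, is symmetric: XOR-ing with the same keystream twice restores the plaintext. A minimal sketch (the key handling and biomimetic layer of AdvancedEncryptionEngine are omitted; deriving the keystream from SHA-256 here is an assumption for illustration):

```python
import hashlib
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR data against a keystream derived from the key. The same call
    both encrypts and decrypts, because (x ^ k) ^ k == x."""
    stream = itertools.cycle(hashlib.sha256(key).digest())
    return bytes(b ^ k for b, k in zip(data, stream))

payload = b"spike train bytes"
ct = xor_cipher(payload, b"secret")
assert xor_cipher(ct, b"secret") == payload   # round-trips losslessly
```

The low per-call cost of this scheme is consistent with the < 5 % encryption overhead listed in the table.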


2-14. Energy/hardware optimization

Feature Warm p50 (ms) Warm p90 (ms) Energy Savings Peak RAM (MB) Offline Test Results Test File
Energy Tracking ☑ 0.003 ☑ 0.004 ☑ 451.85 🟡 22/1 test_energy_tracker.py
Energy homeostasis ◎ < 2 ◎ 40 % < 50 test_energy_homeostasis.py
Hardware Information API ☑ ☑ 2.5 ☑ 3.1 ◎ 30–40 % < 100 /api/hardware/info
Pipeline metrics API ☑ ☑ 2.7 ☑ 2.9 /api/pipeline/metrics
FPGA Safety ◎ < 1 < 10 test_fpga_safety.py
GPU operations ◎ < 10 GPU required test_gpu_operations.py
CUDA Attention ◎ < 5 GPU required test_cuda_attention.py
Model compression ◎ < 100 ◎ Disk 50–75 % reduction < 200 test_model_compressor.py
Model quantization ◎ < 50 ◎ RAM −50 % test_quantization_utils.py
Batch optimization ◎ < 5 test_batch_optimizer.py, test_batch_shaping.py

☑ Local direct measurement (2026-04-01): CPU-only environment (no GPU). EnergyTracker N=50: cold=0.016 ms, p50=0.003 ms, p90=0.004 ms. energy_tracker tests: 22 passed / 1 failed.


2-15. Robustness/Automatic recovery

Feature MTTR / p50 (ms) p90 (ms) Success rate (%) Peak RAM (MB) Offline Test results Test file
Auto Recovery API ☑ ☑ 3.0 ◎ < 500 ◎ > 99 < 100 🟡 32/1 test_auto_recovery.py, /api/recovery/status
Snapshot list API ☑ ☑ 2.9 ◎ < 1000 ◎ > 99.9 < 200 /api/snapshot/list
Rollback ◎ < 200 ◎ 100 < 100 test_rollback.py
Graceful Degradation ◎ < 100 ◎ > 95 < 50 test_graceful_degradation.py
Safety Watchdog ◎ < 10 ◎ 100 < 20 test_safety_watchdog_fix.py
Availability Monitor API ☑ ☑ 2.5 ◎ > 99.9 < 30 /api/availability/status
Robustness test ◎ > 98 test_robustness_tests.py
Error handling ◎ < 1 ◎ 100 < 10 test_error_handling.py

☑ Local direct measurement (2026-04-01): SnapshotManager.create_snapshot cold=1360.163 ms, p50=3.002 ms, p90=3.471 ms. AnomalyDetector initialization p50=0.001 ms. auto_recovery tests: 32 passed / 1 failed.


2-16. Monitoring/Auditing/Logging

Features Log write latency p50 (ms) p90 (ms) Storage growth rate Offline Test results Test file
Audit Log stats API ☑ ☑ 3.3 ◎ < 1 write ~1 MB/hour/node 🟡 17/2 test_audit_log.py, /api/audit/stats
Availability status API ☑ ☑ 2.5 /api/availability/status
Memory monitor ◎ < 1 test_memory_monitor.py
Centralized logger ◎ < 2 test_centralized_logger.py
Log analysis ◎ < 50 test_log_analysis.py
Metrics API ☑ ☑ 130.1 /metrics (Prometheus format)
Evolution Dashboard ◎ < 100 test_evolution_dashboard.py
Metadata handler ◎ < 2 test_metadata_handler.py

☑ Local direct measurement (2026-04-01): AuditLogManager.log p50=1.10 µs, p90=1.14 µs, throughput=908265 writes/s. Availability monitor initialization p50=0.001 ms.


2-17. SDK/API/External collaboration

Feature Warm p50 (ms) Warm p90 (ms) Offline Test results Test file
Python SDK initialization ☑ ☑ 0.020 ☑ 0.024 🟡 48/4 test_sdk.py, test_sdk_validation.py
REST API /api/health ☑ ☑ 2.0 ☑ 2.2 /api/health
REST API latency_check ☑ ☑ 3.8 ☑ 4.6 🔴 errors test_api_endpoints.py, /api/latency_check
WebSocket Asynchronous Pipeline ◎ < 50 ◎ < 200 test_async_pipeline.py
SDK backup ◎ < 100 ◎ < 500 test_sdk_backup.py
SDK sensor cooperation ◎ < 10 ◎ < 50 test_sdk_sensors.py
SDK RAG cooperation ◎ < 20 ◎ < 80 test_sdk_rag.py
SDK Jupyter integration ◎ < 200 ◎ < 1000 test_sdk_jupyter.py
Universal Integration Adapter ◎ < 30 ◎ < 100 test_universal_integration.py
Frontend UI ◎ < 200 ◎ < 1000 test_frontend.py, test_frontend_ui.py

☑ Local direct measurement (2026-04-01): SDK init cold=0.111 ms, p50=19.74 µs, p90=24.00 µs. HTTP API latencies were not re-measured in this update, since this bench runs by local direct execution.


3. Comparison summary with other AIs

Detailed comparison: benchmarks/evo_vs_traditional_report.md
Detailed comparison (5 perspectives): benchmarks/evo_vs_traditional_detailed.md

| Comparison metric | EvoSpikeNet | ChatGPT (gpt-5.4) | Claude (claude-sonnet-4-6) |
|---|---|---|---|
| Model size (disk) | Several MB to several hundred MB per module | Undisclosed (estimated 300 GB+) | Undisclosed (estimated 200 GB+) |
| Memory usage | ☑ min ≈ 2.21 MB to several GB | Undisclosed (via API) | Undisclosed (via API) |
| Cold start | ☑ genome → brain ≈ 5.73 ms | Hundreds of ms to seconds (API round trip) | Hundreds of ms to seconds (API round trip) |
| Inference latency p50 | ☑ 0.65 ms (forward, CPU) / ☑ 2.0 ms (REST API) | 300–1000 ms (API) | 300–800 ms (API) |
| REST API p90 | 2.2 ms (health) / 55.7 ms (RAG) | Undisclosed | Undisclosed |
| Offline operation | ✓ (including LAN-only functions) | ✗ (internet required) | ✗ (internet required) |
| Spike sparse density | ◎ only 5–30 % of neurons active | N/A (dense tensors) | N/A (dense tensors) |
| Continuous learning | ◎ adaptation time −75 % (Meta-STDP) | ✗ (separate fine-tuning) | ✗ (separate fine-tuning) |
| Distributed nodes | ◎ 28 nodes supported / Zenoh p50 ☑ 3.8 ms | N/A | N/A |
| Metacognitive flexibility | ◎ +340 % (Q-PFC, human judgment) | None | None |
| Energy efficiency | ◎ 30–70 % reduction vs. conventional CNN (confirmed on CPU) | Undisclosed | Undisclosed |
| Context length | N/A (neuronal memory) | 1M tokens | 1M tokens |
| Number of unit tests | ☑ 332 files | Undisclosed | Undisclosed |
| Running Docker services | ☑ 9 services | — | — |
| API endpoint count | ☑ 187 endpoints | Undisclosed | Undisclosed |

4. Measurement items to add (next action)

The following indicators currently have only target values; adding actual-measurement benches for them would strengthen differentiation.

| Priority | Measurement | Recommended bench script | Goal |
|---|---|---|---|
| High | Zenoh inter-node RTT / throughput | benchmarks/dispatch_bench.py | < 2 ms, > 10⁴ msg/s |
| High | Spatial generation latency (per resolution) | benchmarks/spatial_gen_bench.py | 20 ms @ 640×480 |
| High | Object recognition mAP / FPS | benchmarks/object_recog_bench.py | mAP > 0.7, ≥ 20 FPS |
| Medium | E2E pipeline (perception → action) | benchmarks/e2e_bench.py | < 200 ms |
| Medium | Memory retention rate after sleep consolidation | tests/unit/test_sleep_consolidation_stdp.py | > 90 % |
| Medium | Distributed 28-node throughput | benchmarks/dispatch_bench.py --nodes 28 | Linear scaling ≥ 0.8 |
| Low | GPU vs. CPU energy ratio | Custom script + nvml | −60 % on GPU |
| Low | API latency comparison (OpenAI/Anthropic) | benchmarks/api_bench.py | EvoSpikeNet p50 < API p50 |

5. future_apps — Robotics/BMI application response speed

This section lists the real-time response targets of each future_apps application built on EvoSpikeNet-Core.
Each app is based on the KPIs listed in implementation_plan.md and is continuously verified with the corresponding test scripts.

Legend: ◎ = KPI target value described in implementation_plan.md, — = undefined


5-1. Humanoid robot control (humanoid)

Role: EvoSpikeNet fleet nodes that each run a distributed brain at the edge.
Orchestrator: cooperative_edge_robotics_system (REST + Zenoh)

Features / Loops Response Time Goal Update Rate Success Rate Notes Test File
Sensor → Brain round trip delay ◎ < 50 ms Local network test_sensor_brain_loop.py
per-node 3D mapping processing ◎ 33–100 ms/frame ◎ 10–30 Hz occipital_3d_mapper test_biped_simulation.py
Map fusion update ◎ 200 ms/update ◎ 5 Hz Multiple node integration test_full_loop.py
Orchestrator connection establishment ◎ < 10,000 ms ◎ 100 % First time after startup test_orchestrator_client.py
Lost connection → Standalone migration ◎ < 5,000 ms ◎ 100 % Failover test_brain_integration.py
Heartbeat p99 latency ☑ ☑ 3.7 ms ◎ 5 Hz ◎ 100 % Actual measurement of 30 units fleet tests/load/fleet_load_test.py
Node registration p95 latency ◎ ≤ 500 ms ◎ 100 % 30 devices registered at the same time tests/load/fleet_load_test.py
Motion control loop ◎ > 30 Hz 120 Motor control test_motion_manager.py
GPU inference pipeline (CUDA/CPU) ◎ > 10 Hz Asynchronous queue method test_pytorch_integration.py
Physical simulation (PyBullet) ◎ > 60 Hz For safety verification test_pybullet_simulation.py

tests/load/fleet_load_test.py result: nodes_completed=30/30, registration 100 %, heartbeat p99=3.7 ms.


5-2. Cooperative Edge Robotics Orchestrator (cooperative_edge_robotics_system)

Role: A server that controls role assignment, mission planning, and federated learning for a multi-robot fleet (FastAPI port 8025).

Features/Services Response Time Goals Throughput Goals Success Rate/Accuracy Notes Test Files
Mission planning latency (10 robots) ◎ < 1,000 ms Task decomposition/dependency graph generation test_mission_planner.py
Node alive monitoring response ◎ < 500 ms Heartbeat loss detection test_node_registry.py
Task scheduling ◎ < 100 ms ◎ ≥ 100 tasks/s Priority queue processing test_task_scheduler.py
Dynamic role assignment Capability matching test_fleet_diagnostics.py
Federated learning Convergence generation number ◎ ≤ 50 rounds With differential privacy test_pytorch_integration.py
Fleet resource utilization rate ◎ ≥ 80 % All nodes aggregation test_fleet_diagnostics.py
Failure Prediction (FleetDiagnostics) ◎ ≥ 85 % Sensor/Actuator Diagnostics test_fleet_diagnostics.py
E2E Fleet Pipeline Plan → Schedule → Zenoh Delivery (integration) test_30_humanoid_fleet.py
API routes (REST) ◎ < 50 ms FastAPI all endpoints test_api_routes.py
Zenoh message delivery RTT ◎ < 5 ms ◎ > 10⁴ msg/s Real-time communication between robots (Refer to Zenoh bench)

5-3. Autonomous robot control (autonomous_robotics_control)

Role: SNN-driven perception-planning-execution cycle (grasping, transport, collision avoidance).

Features/Components Response Time Objectives Update Rate/Accuracy Notes Test Files
Sensor fusion (LiDAR/Camera/IMU) ◎ 20–50 ms BEV generation/point cloud processing test_performance.py
Collision avoidance response ◎ < 100 ms Real-time safety control test_system.py
Robot controller (joint control) ◎ < 10 ms ◎ > 100 Hz Inverse kinematics/actuator transmission test_unit.py
Object recognition (SNN inference) ◎ ≥ 95 % accuracy Grasp point estimation ≥ 85 % test_full_pipeline.py
Motion planning (RRT*/MoveIt compatible) ◎ < 200 ms Trajectory optimization test_full_pipeline.py
DNA motion encoder Evolutionary motion optimization test_unit.py
Distributed SNN inference (task distribution) ◎ < 50 ms Multi-node synchronization delay guarantee test_distributed_coordinator_only.py
GPU inference bench CUDA/CPU comparison measurement test_gpu_bench.py
LLM workflow integration ◎ < 1,000 ms Higher-level task understanding/instruction interpretation test_llm_workflow.py
Task success rate (gripping/transferring) ◎ ≥ 90 % Continuous operation ≥ 8 hours test_integration.py

5-4. Location-aware team robotics (location_aware_team_robotics)

Role: Precise position estimation and formation control using UWB/LiDAR/IMU sensor fusion EKF.

Features/Components Response Time Objectives Update Rate/Accuracy Notes Test Files
Self-localization (EKF) ◎ 20 ms ◎ ≥ 50 Hz RMSE ≤ 5 cm test_full_pipeline.py
Path planning (A*/RRT, 10 m path) ◎ < 100 ms Dynamic obstacle avoidance test_full_pipeline.py
Task assignment (10 robots) ◎ < 500 ms Auction method test_full_pipeline.py
Formation control loop (ORCA) ◎ 100 ms Formation error RMS ≤ 10 cm test_robotics_extended.py
Coordination of the entire team (TeamCoordinator) ◎ 500 ms/update Consider role reassignment test_api_routes.py
Collision avoidance success rate ◎ 99.9 % ORCA algorithm test_robotics_extended.py
GPU bench Inference acceleration measurement test_gpu_bench.py
LLM workflow integration ◎ < 1,000 ms Higher order instruction interpretation test_llm_workflow.py

5-5. Brain Machine Interface (BMI) (brain_machine_interface)

Role: EEG real-time processing, motor intention decoding, neurofeedback control, clinical rehabilitation management.

Features/Services Response Time Objectives Sampling/Accuracy Notes Test Files
EEG signal stream processing ◎ < 10 ms ◎ 1000 Hz / 256 ch Bandpass filter/ICA/CSP test_signal_interface.py
Motor intention decoding ◎ < 50 ms ◎ ≥ 95 % accuracy SNN inference / left/right-hand discrimination test_bmi_services.py
Add neurofeedback ◎ < 100 ms Visual/haptic feedback test_bmi_services.py
Device communication (OpenBCI/Emotiv, etc.) ◎ < 5 ms Multi-device unified IF test_signal_interface.py
Safety monitoring (emergency stop) ◎ < 10 ms ◎ 100 % detection rate Stimulus intensity/biological threshold monitoring test_bmi_services.py
Distributed SNN inference (multiple nodes) ◎ < 50 ms Load balancing/redundancy test_bmi_extensions.py
BMI session management ◎ < 200 ms Start/end/state transition test_bmi_services.py
Rehabilitation program execution Treatment planning/progress tracking test_bridge_program_uplift_api.py
Operational Uplift API ◎ < 200 ms REST API for Clinicians test_operational_uplift_api.py

5-6. future_apps cross-cutting summary — response time comparison

| App | Most important control loop | Target latency | Measured value | Offline | Fleet size |
|---|---|---|---|---|---|
| humanoid | sensor → brain → actuator | < 50 ms | ☑ HB p99 = 3.7 ms | LAN | 30 units |
| cooperative_edge | mission planning (10 robots) | < 1,000 ms | — | LAN | 30+ |
| autonomous_robotics | joint control loop | < 10 ms | — | LAN/standalone | single |
| location_aware_team | EKF localization | 20 ms | — | LAN | 10+ |
| brain_machine_interface | EEG processing → feedback | < 100 ms | — | — | — |
| EvoSpikeNet-Core (reference) | genome → brain forward | < 1 ms | ☑ 0.65 ms | — | — |

This table combines the measured values under benchmarks/ with the design values under Docs/. Legend: ☑ = measured value (bench_report.md / fleet_load_test), ◎ = documented KPI target value, — = not measured.