
EvoSpikeNet Distributed Brain Simulation System Technical Specifications

[!NOTE] For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

Creation date: January 12, 2026 (Last updated: March 19, 2026; Phase E connectome integration E-0/E-1/E-2 completed)

Author: Masahiro Aoki

Purpose and use of this document

  • Purpose: Provide an overview of the technical specifications of the entire distributed brain system and establish a shared understanding of its design, implementation, and operation.
  • Target audience: architects, distributed brain implementation teams, SRE/operations.
  • Suggested reading order: 1. System overview → 2. Architecture design → 3. Zenoh communication system → 8. Execution flow of distributed brain nodes.
  • Related links: the execution script is examples/run_zenoh_distributed_brain.py; PFC/Zenoh/Executive details are in implementation/PFC_ZENOH_EXECUTIVE.md.

  • Implementation notes (artifacts): see docs/implementation/ARTIFACT_MANIFESTS.md for the artifact_manifest.json and CLI flags created by the training/generation scripts. The current pipeline specifies the distributed brain node type with --node-type, and the node_type metadata is automatically included in the generated artifact.

Table of contents

  1. System Overview
  2. Architecture Design
  3. Zenoh Communication System
  4. PFC and Q-PFC Feedback Loop
  5. Advanced Decision Engine
  6. Node Discovery System
  7. ChronoSpikeAttention Mechanism
  8. Execution Flow of Distributed Brain Nodes
  9. Data Structures and Type Definitions
  10. Performance Optimization and Control
  11. Recording Simulation Data
  12. Long-Term Memory System
  13. Feature 13: Advanced Spatial Processing Node ✅ New implementation
  14. Biomimetic Overlay (BiomimeticAdapter) ⭐ 2026-02-25
  15. Genome-Driven Distributed Inference (Phase D) ⭐ NEW 2026-03-11
  16. Connectome Integration (Phase E) ⭐ NEW 2026-03-19

1. System overview

1.1. Concept

EvoSpikeNet's distributed brain simulation system is a scalable neuromorphic computing framework designed based on the principles of biological brain functional specialization and integration.

Design Philosophy:
  • Specialization: each functional module (visual, auditory, language, motor) handles specialized processing.
  • Integration: the prefrontal cortex (PFC) performs overall coordination and integration.
  • Asynchronous communication: low-latency Pub/Sub pattern over Zenoh.
  • Self-modulation: dynamic threshold adjustment via the Q-PFC feedback loop.

1.2. Main components

graph TB
    subgraph "Control Layer"
        PFC["PFC: Prefrontal Cortex"]
        QPFC["Q-PFC Feedback Loop Quantum modulation feedback"]
    end

    subgraph "Functional Module Layer"
        SH["Sensor Hub: Sensor integrated management"]
        MH["Motor Hub: Motor integrated management"]
        VISUAL["Visual Module Visual Processing"]
        AUDIO["Auditory Module Auditory Processing"]
        LANG["Language Module Language processing"]
        MOTOR["Motor Module Motion Control"]
    end

    subgraph "Communication Infrastructure"
        ZENOH["Zenoh Router Pub/Sub Communication"]
        PTP["PTP Sync Time synchronization"]
        DISCOVERY["Node Discovery Node discovery"]
        SAFETY["FPGA Safety Safety Monitoring"]
        SECURITY["Encryption Security Encryption Security"]
    end

    PFC <--> QPFC
    PFC <-->|ChronoSpikeAttention| ZENOH
    SH <-->|Sensor Data| PFC
    PFC <-->|Motor Commands| MH
    VISUAL <--> SH
    AUDIO <--> SH
    LANG <--> ZENOH
    MOTOR <--> MH

    ZENOH --- PTP
    ZENOH --- DISCOVERY
    ZENOH --- SAFETY

1.3. 29-node complete brain architecture

Updated on February 19, 2026: The configuration was extended to 29 nodes with the addition of spatial awareness and generation modules, exposed publicly through FastAPI-based spatial generation services (/generate, /health) and SDK wrappers (spatial_generate, spatial_health). RAG nodes also provide /upload_file and /query via the SDK. Each layer specializes and cooperates through asynchronous Zenoh communication.

Added on February 25, 2026: A spike compression layer and forgetting control were added to the long-term memory layer; LongTermMemoryModule integrates episodic/semantic memory with compressed spikes, and ForgettingController prevents destructive forgetting when capacity is exceeded.

graph TB
    subgraph "Sensing Layer (3 nodes)"
        CAM["Camera Sensor"]
        MIC["Microphone Sensor"]
        ENV["Environment Sensor"]
    end

    subgraph "Encoding Layer (4 nodes)"
        VENC["Vision Encoder"]
        AENC["Audio Encoder"]
        TENC["Text Encoder"]
        SENC["Spiking Encoder"]
    end

    subgraph "Cognition Layer (6 nodes)"
        LMINF["LM Inference<br/>Language model inference"]
        CLF["Classifier"]
        SPLM["Spiking LM"]
        SPATIAL["Spatial Processor<br/>Spatial recognition/generation"]
        ENS["Ensemble"]
        RAG["RAG<br/>Retrieval-augmented generation"]
    end

    subgraph "Decision Layer (3 nodes)"
        PFC["PFC<br/>Prefrontal cortex"]
        PLANNER["High-Level Planner"]
        CTRL["Execution Controller"]
    end

    subgraph "Long-Term Memory Layer (2 nodes)"
        EPI["Episodic Memory"]
        SEM["Semantic Memory"]
    end

    subgraph "Memory Layer (6 nodes)"
        VDB["Vector DB"]
        EST["Episodic Storage"]
        RETR["Retriever"]
        KB["Knowledge Base"]
        SPKRES["Spike Reservoir<br/>Spike compression layer"]
        MINT["Memory Integrator"]
    end

    subgraph "Learning Layer (1 node) - node-type-based LLM training"
        TRAIN["Trainer"]
    end

    subgraph "Aggregation Layer (2 nodes)"
        FED["Federator"]
        RAGG["Result Aggregator"]
    end

    subgraph "Management Layer (2 nodes)"
        AUTH["Auth Manager"]
        MON["Monitoring"]
    end

    subgraph "Communication Infrastructure"
        ZENOH["Zenoh Router<br/>Pub/Sub communication"]
        PTP["PTP Sync<br/>Time synchronization"]
        DISC["Node Discovery"]
        SAFETY["FPGA Safety<br/>Safety monitoring"]
    end

    %% Data flow
    CAM --> VENC
    MIC --> AENC
    ENV --> SENC

    VENC --> SPATIAL
    VENC --> LMINF
    AENC --> CLF
    TENC --> RAG
    SENC --> SPLM

    LMINF --> ENS
    CLF --> ENS
    SPLM --> ENS
    SPATIAL --> ENS
    RAG --> ENS

    ENS --> PFC
    PFC --> PLANNER
    PLANNER --> CTRL

    EPI --> MINT
    SEM --> MINT
    SPKRES --> MINT
    MINT --> RETR

    VDB --> RETR
    EST --> RETR
    KB --> RETR
    SPKRES --> RETR

    RETR --> RAG
    RETR --> PFC

    PFC --> TRAIN
    TRAIN --> FED
    FED --> RAGG

    AUTH -->|Security| ZENOH
    MON -->|Monitoring| ZENOH

    %% Communication infrastructure links
    ZENOH --- PTP
    ZENOH --- DISC
    ZENOH --- SAFETY

    %% All nodes connect to Zenoh
    CAM -.-> ZENOH
    MIC -.-> ZENOH
    ENV -.-> ZENOH
    VENC -.-> ZENOH
    AENC -.-> ZENOH
    TENC -.-> ZENOH
    SENC -.-> ZENOH
    LMINF -.-> ZENOH
    CLF -.-> ZENOH
    SPLM -.-> ZENOH
    SPATIAL -.-> ZENOH
    ENS -.-> ZENOH
    RAG -.-> ZENOH
    PFC -.-> ZENOH
    PLANNER -.-> ZENOH
    CTRL -.-> ZENOH
    EPI -.-> ZENOH
    SEM -.-> ZENOH
    VDB -.-> ZENOH
    EST -.-> ZENOH
    RETR -.-> ZENOH
    KB -.-> ZENOH
    SPKRES -.-> ZENOH
    MINT -.-> ZENOH
    TRAIN -.-> ZENOH
    FED -.-> ZENOH
    RAGG -.-> ZENOH
    AUTH -.-> ZENOH
    MON -.-> ZENOH

Architectural Features

  • Stratified specialization: Each layer is responsible for specialized processing that mimics biological brain functions.
  • Asynchronous communication: Low-latency real-time communication with Zenoh Pub/Sub
  • Long-term memory consolidation: Learning adaptation through episodic/semantic memory
  • Scalability: 29 node configuration (spike compression node added) covers complete brain functions
  • Fault Tolerance: Distributed architecture eliminates single points of failure

Node allocation details

  • Observation layer: 3 nodes - Data collection from external environment
  • Encoding layer: 4 nodes - Feature extraction of various data formats
  • Cognition/Inference Layer: 6 nodes - Advanced recognition and inference processing
  • Decision making layer: 3 nodes - Planning and execution control centered on PFC
  • Long-term memory layer: 2 nodes - Persistent knowledge and experience retention
  • Storage layer: 6 nodes - short-term/working memory and efficient retrieval (including spike compression layer)
  • Learning layer: 1 node - Continuous model adaptation
  • Aggregation layer: 2 nodes - Distributed learning and result integration
  • Management layer: 2 nodes - Security and system monitoring

1.4. Process execution flow

Basic data flow sequence

sequenceDiagram
    participant CAM as Camera Sensor
    participant VENC as Vision Encoder
    participant LMINF as LM Inference
    participant PFC as PFC
    participant PLANNER as High-Level Planner
    participant CTRL as Execution Controller
    participant EPI as Episodic Memory
    participant RETR as Retriever
    participant MOTOR as Motor Hub

    Note over CAM,MOTOR: Sensor input → cognition → decision → action execution

    CAM->>VENC: Send image data
    VENC->>LMINF: Visual feature vector
    LMINF->>PFC: Recognition result

    PFC->>RETR: Related-memory search request
    RETR->>EPI: Episodic memory query
    EPI-->>RETR: Return related episodes
    RETR-->>PFC: Integrated context information

    PFC->>PLANNER: Goal setting and plan generation
    PLANNER->>CTRL: Executable action plan
    CTRL->>MOTOR: Send motor command

    PFC->>EPI: Store new experience
    EPI-->>PFC: Confirm memory storage

Long-term memory integration flow

sequenceDiagram
    participant INPUT as Sensor Input
    participant ENCODER as Encoder
    participant INFERENCE as Inference Node
    participant PFC as PFC
    participant EPI as Episodic Memory
    participant SEM as Semantic Memory
    participant MINT as Memory Integrator
    participant RETR as Retriever

    Note over INPUT,RETR: Experience-learning and knowledge-accumulation cycle

    INPUT->>ENCODER: Raw data
    ENCODER->>INFERENCE: Feature vector
    INFERENCE->>PFC: Processing result

    PFC->>EPI: Store episodic memory
    PFC->>SEM: Extract and store knowledge

    Note right of PFC: Continuous learning cycle

    PFC->>MINT: Memory integration request
    MINT->>EPI: Episode search
    MINT->>SEM: Knowledge search
    EPI-->>MINT: Episode data
    SEM-->>MINT: Knowledge data
    MINT->>RETR: Update integrated memory index

    Note over RETR: Cross-modal association<br/>for better retrieval efficiency

Learning adaptation flow

sequenceDiagram
    participant PFC as PFC
    participant TRAIN as Trainer
    participant FED as Federator (Flower)
    participant RAGG as Result Aggregator
    participant ALL as All Nodes

    Note over PFC,ALL: Flower-based federated learning and model updates

    PFC->>TRAIN: Send training data
    TRAIN->>TRAIN: Compute local model update

    TRAIN->>FED: Initialize Flower FL client
    FED->>ALL: Request participation in federated learning round
    ALL-->>FED: Send local update parameters
    FED->>RAGG: Secure parameter aggregation (differential privacy)
    RAGG->>TRAIN: Aggregated global update

    TRAIN->>ALL: Distribute new global model
    ALL-->>TRAIN: Confirm distribution

    Note over ALL: Continuous performance improvement with privacy protection

System-wide execution cycle

stateDiagram-v2
    [*] --> Initialization
    Initialization --> SensorMonitoring: Zenoh connection established
    SensorMonitoring --> DataCollection: Input detected
    DataCollection --> FeatureExtraction: Encoder processing
    FeatureExtraction --> Inference: Cognition layer
    Inference --> DecisionMaking: PFC integration
    DecisionMaking --> MemoryRetrieval: Context request
    MemoryRetrieval --> PlanGeneration: Related information retrieved
    PlanGeneration --> ActionExecution: Controller
    ActionExecution --> ExperienceStorage: Episode storage
    ExperienceStorage --> LearningAdaptation: Continuous learning
    LearningAdaptation --> SensorMonitoring: Next cycle

    DecisionMaking --> DirectExecution: Simple task
    DirectExecution --> SensorMonitoring

    HealthCheck --> SensorMonitoring: Normal
    HealthCheck --> ErrorRecovery: Anomaly detected
    ErrorRecovery --> SensorMonitoring: Recovery complete

Main components

Sensor Hub:

Updated on December 12, 2025: The management of motor-cortex and sensor information has been split into separate hubs, yielding a more efficient architecture.

Sensor Hub:
  • Integrated management of all sensor inputs (visual, auditory, tactile)
  • Responsible for preprocessing and integration of sensor data
  • Provides integrated sensor data to the PFC

Motor Hub:
  • Integrated management of all motor outputs (trajectory control, cerebellar coordination, PWM control)
  • Converts commands from the PFC into actual motion control
  • Manages coordination of multiple motor subsystems

Advantages:
  • Parallel processing: sensor inputs can be processed simultaneously
  • Specialization: each hub handles specialized functions
  • Extensibility: new sensor/motion types are easy to add
  • Efficiency: separating sensing from motion enables per-path optimization

Data flow:

```
Sensor Hub  →    PFC      →  Motor Hub
    ↓             ↓              ↓
 Visual        Compute       Motor-Traj
 Auditory      Lang-Main     Motor-Cereb
 Speech                      Motor-PWM
```

### 1.5. Plugin Architecture & Microservices

**Added on December 20, 2025**: We have moved the system architecture from monolithic to **plugin-based** and **microservices**.

#### Architecture Overview

**Plugin system:**
- 7 types of plug-ins (NEURON, ENCODER, PLASTICITY, FUNCTIONAL, LEARNING, MONITORING, COMMUNICATION)
- Runtime extension with dynamic loading
- 9 built-in plugins (LIF, Izhikevich, EntangledSynchrony, Rate, TAS, Latency, STDP, MetaPlasticity, Homeostasis)
- **Feature addition time reduced by 70%** (4-5 days → 1-1.5 days)
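As a sketch of how such a typed plugin registry with dynamic loading might work (the registry API below is illustrative, not EvoSpikeNet's actual plugin interface; the plugin type names come from the spec):

```python
# Minimal typed plugin registry (illustrative sketch).
PLUGIN_TYPES = {"NEURON", "ENCODER", "PLASTICITY", "FUNCTIONAL",
                "LEARNING", "MONITORING", "COMMUNICATION"}
_registry: dict = {}

def register_plugin(ptype: str, name: str, factory) -> None:
    # Reject unknown plugin categories up front
    if ptype not in PLUGIN_TYPES:
        raise ValueError(f"unknown plugin type: {ptype}")
    _registry[(ptype, name)] = factory

def load_plugin(ptype: str, name: str):
    # Instantiate lazily at load time (runtime extension)
    return _registry[(ptype, name)]()

# Registering a built-in neuron model such as LIF might look like:
register_plugin("NEURON", "LIF", lambda: {"model": "LIF"})
```
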

**Microservices:**
- 5 independent services (Training, Inference, Model Registry, Monitoring, API Gateway)
- Independent scaling per service
- **80% scalability improvement** (Resource efficiency 60% → 85%)
- Fully containerized deployment with Docker Compose

For details, see [PLUGIN_MICROSERVICES_ARCHITECTURE.md](PLUGIN_MICROSERVICES_ARCHITECTURE.md).

### 1.6. P3 Functional Integration - Production Ready

**Updated December 12, 2025**: All seven P3 (low priority) features have been implemented and the system has all the advanced features required for production use.

#### Integrated P3 functionality

**🔄 End-to-end delay optimization (< 500ms)**
- Latency tracking of all components with `LatencyProfiler`
- Statistical analysis and target checking function
- Real-time performance monitoring

**💾 Snapshot/Disaster Recovery System**
- Complete system state saving with `SnapshotManager`
- Compression, checksum, and full recovery functions
- Supports geographical backup

**📊 Massive scalability verification**
- Supports more than 1000 nodes with `ScalabilityTester`
- Resource monitoring and stress testing
- Automatic bottleneck detection

**🔧 Hardware optimization**
- ONNX export/quantization using `HardwareOptimizer`
- Compatible with neuromorphic chips such as Loihi
- Automatic hardware adaptation

**🛡️ High Availability Monitoring (99.9%+)**
- Health check and automatic recovery using `AvailabilityMonitor`
- SLA guarantee function
- Downtime statistics tracking

**🌐 Asynchronous Zenoh communication integration**
- Structured messaging with `AsyncZenohCommunicator`
- Request/Response/Pub/Sub pattern
- High performance distributed communication

**⚖️ Distributed decision-making consensus**
- Autonomous motion control consensus by `DistributedMotorConsensus`
- Determining cooperative motion between multiple motor nodes
- 3-phase process: proposal, vote, and aggregation
- Quorum calculation: $\lceil N \times t \rceil$ (N: number of nodes, t: threshold)
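The quorum rule above can be exercised directly. A minimal sketch of the vote-aggregation phase (function names are illustrative, not the `DistributedMotorConsensus` API):

```python
import math

def quorum(num_nodes: int, threshold: float) -> int:
    # Quorum rule from the spec: ceil(N * t)
    return math.ceil(num_nodes * threshold)

def aggregate_votes(votes: dict, threshold: float = 0.5):
    # Phase 3 (aggregation): accept only if yes-votes reach quorum
    q = quorum(len(votes), threshold)
    yes = sum(1 for v in votes.values() if v)
    return ("accepted" if yes >= q else "rejected", q)
```
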

#### System-wide availability metrics

- **End-to-end delay**: < 500ms (95th percentile)
- **System availability**: 99.9%+ (annual downtime < 8.76 hours)
- **Scalability**: Supports more than 1000 nodes
- **Hardware compatibility**: CPU/GPU/TPU/neuromorphic chip
- **Disaster recovery time**: < 30 minutes (full recovery)

### 1.7. Plan D: Brain Language Architecture

**Added on December 12, 2025**: A next-generation architecture that significantly improves processing speed and transmission speed by implementing visual verbalization and motor instructions in the brain loop as a "special brain language with a small amount of data."

#### Conceptual background of brain language

- **Neuroscientific basis**: Human thinking is language-based (inner speech)
- **Information compression**: Visual data (millions of dimensions) → Language tokens (hundreds of dimensions)
- **Abstraction processing**: Convert concrete sensor data to conceptual level representation
- **Efficient transmission**: Low bandwidth and high speed communication between spiking networks

#### Architecture Overview

```mermaid
graph TD
    subgraph "Vision-to-Language"
        VISUAL[Visual Input] --> VFE[Visual Feature Extractor]
        VFE --> VLE[Vision-Language Encoder]
        VLE --> BLT[Brain Language Tokens]
    end

    subgraph "Brain Language Processing"
        BLT --> BLP[Brain Language Processor]
        BLP --> REASON[Reasoning & Inference]
        BLP --> MEMORY[Memory Integration]
        REASON --> DECISIONS[Decisions in Language]
    end

    subgraph "Language-to-Motor"
        DECISIONS --> LMD[Language-to-Motor Decoder]
        LMD --> MOTOR_CMD[Motor Commands]
        MOTOR_CMD --> EXEC[Execution]
    end

    MEMORY -.-> REASON
    EXEC -.-> FEEDBACK[Feedback Loop] -.-> BLP
```

Expected performance improvement

  • Data compression rate: Over 90% reduction (visual → verbal)
  • Processing speed: More than 50% improvement (< 250ms target)
  • Transmission efficiency: Bandwidth reduction of over 80%
  • Energy efficiency: More than 60% reduction in power consumption

Implementation Phase

  1. Phase 1 (2026 Q1): Proof of Concept - Vision-Language Conversion
  2. Phase 2 (2026 Q2-Q3): Core implementation - Brain Language Encoder/Processor/Decoder
  3. Phase 3 (2026 Q4): Optimization - Performance improvements and multimodality expansion
  4. Phase 4 (2027 Q1-Q2): Integration - Full integration with Plan B

This approach has the potential to create efficient AI systems within modern computational constraints while mimicking human-like cognitive processes.

1.8. System features

| Feature | Description | Technical elements |
|---|---|---|
| Asynchronous communication | Zenoh Pub/Sub model | Low latency (<1 ms), loose coupling, version compatibility |
| Quantum-inspired | Q-PFC feedback loop | Entropy → modulation coefficient α(t) |
| Temporal causality | ChronoSpikeAttention | Temporal proximity mask, causality guarantee |
| Hierarchical control | Top-down control via PFC | Task routing, resource allocation |
| Self-adaptability | Dynamic threshold adjustment | Exploration (low α) ↔ exploitation (high α) |
| Advanced decision making | Executive Control Engine | Metacognition, hierarchical planning |
| Dynamic node discovery | Real-time central monitoring service | Heartbeat, automatic fallback |
| Dynamic model loading | Model resolution via AutoModelSelector | Automatic download of the latest model via API |
| Simulation recording | Data storage via SimulationRecorder | Spikes, membrane potential, weights, control states |
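The document does not spell out the exact entropy-to-α mapping; as a minimal sketch, assume α(t) = 1 − normalized spike entropy, so high-entropy (uncertain) activity drives exploration (low α) and low-entropy activity drives exploitation (high α):

```python
import math

def spike_entropy(probs):
    # Shannon entropy of a spike-rate distribution, normalized to [0, 1]
    h = -sum(p * math.log2(p) for p in probs if p > 0)
    max_h = math.log2(len(probs))
    return h / max_h if max_h > 0 else 0.0

def modulation_alpha(probs):
    # Assumed mapping: alpha = 1 - normalized entropy (illustrative only)
    return 1.0 - spike_entropy(probs)
```
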

2. Architecture design

2.1. Overall architecture

sequenceDiagram
    participant UI as "Web UI: Dash"
    participant API as "API Server: FastAPI"
    participant ZR as "Zenoh Router"
    participant PFC as "PFC Node"
    participant LANG as "Lang-Main Node"
    participant AMS as "AutoModelSelector"

    UI->>API: 1. Send prompt

    alt Modern path: Zenoh
        API->>ZR: 2a. Publish to api/prompt
        ZR->>PFC: 3a. Forward prompt
    else Legacy path: file
        API->>PFC: 2b. Write /tmp/prompt.json
        PFC->>PFC: 3b. Poll the file system
    end

    PFC->>AMS: 4. Request latest model
    AMS->>API: 5. Download latest model
    API-->>AMS: 6. Model file
    AMS-->>PFC: 7. Model instance

    Note over PFC: 8. Task analysis and routing via the Q-PFC loop

    PFC->>ZR: 9. Publish to pfc/text_prompt
    ZR->>LANG: 10. Forward task

    Note over LANG: 11. Run inference with SpikingEvoSpikeNetLM

    alt Modern path: Zenoh
        LANG->>ZR: 12a. Publish to api/result
        ZR->>API: 13a. Forward result
    else Legacy path: file
        LANG->>API: 12b. Write /tmp/result.json
    end

    API->>UI: 14. Send result to UI (polling)

    LANG->>ZR: 15. Publish task/completion
    ZR->>PFC: 16. Notify task completion

2.2. Node configuration and naming conventions

A distributed brain simulation consists of the following node types. Node names can encode a hierarchical structure; the base type is resolved with the _get_base_module_type() function, e.g. lang-embed-18 → lang-main, vis-object-9 → visual.

PFC node (Prefrontal Cortex)

Role: Central control hub, task routing, cognitive control, advanced decision making

Implementation classes:
  • evospikenet.pfc.PFCDecisionEngine (basic PFC)
  • evospikenet.pfc.AdvancedPFCEngine (advanced PFC)
  • evospikenet.executive_control.ExecutiveControlEngine (executive control)

Main features:
  1. Receives tasks from both Zenoh and the file system
  2. Loads models dynamically with AutoModelSelector
  3. Self-modulation and routing via the Q-PFC feedback loop
  4. Dynamically discovers active nodes, with automatic fallback to lang-main

Lang-Main node (Language Main)

Role: Language processing, text generation. Default fallback destination for all systems.

Implementation class: evospikenet.models.SpikingEvoSpikeNetLM

Main features:
  1. Receives and tokenizes text prompts
  2. Spike-driven inference (runs in a background thread)
  3. Outputs results to both Zenoh and file

Visual node (Visual Processing)

Role: Processing visual information

Implementation class: SimpleLIFNode (basic)/Custom vision encoder

Main features:
  1. Visual data reception and spike encoding
  2. Sends feature spikes to the PFC

Motor node (Motor Control)

Role: Generation of motor output and distributed consensus building

Implementation class: evospikenet.motor_consensus.AutonomousMotorNode

Main features:
  1. Motor goal reception and distributed consensus
  2. Safety verification in conjunction with the FPGA Safety service

✨ NEW: Spatial Recognition & Generation

Spatial_Where node (Rank 12) - Where processing pathway (dorsal parietal lobe)

Role: Extract spatial position, distance, and direction from visual input

Implementation class: evospikenet.spatial_processing.where_pathway.SpatialWhereNode

Main features:
  1. Feature extraction from vision nodes
  2. Depth estimation and spatial coordinate calculation
  3. Conversion to allocentric coordinates
  4. Motion detection via optical flow calculation

Publishing topics:
  • spikes/spatial/where/depth (30 Hz) - depth map
  • spikes/spatial/where/coordinates (30 Hz) - 3D coordinates
  • spikes/spatial/where/optical_flow (30 Hz) - optical flow

Spatial_What node (Rank 13) - What processing pathway (visual cortex/temporal cortex)

Role: Generate 3D spatial scenes from text, visual generation

Implementation class: evospikenet.spatial_processing.what_pathway.SpatialWhatNode

Main features:
  1. Receives text descriptions from the language module
  2. Scene graph analysis and construction
  3. 3D space generation by VAE
  4. Time-series prediction (next-frame prediction)

Publishing topics:
  • spikes/spatial/what/scene_graph (10 Hz) - scene graph
  • spikes/spatial/what/voxel_grid (10 Hz) - 3D voxel representation
  • spikes/spatial/what/mesh (5 Hz) - 3D mesh

Spatial_Integration Node (Rank 14) - What-Where Integration (Occipito-Parietal Junction)

Role: Integrating what and where information, building a unified world model

Implementation class: evospikenet.spatial_processing.integration.SpatialIntegrationNode

Main features:
  1. Receives information from the What and Where pathways
  2. World model updates through time-series integration
  3. Spatial reasoning (object relationships, reachability, etc.)
  4. Egocentric view generation from multiple viewpoints

Publishing topics:
  • spikes/spatial/integration/world_model (10 Hz) - world model
  • spikes/spatial/integration/reasoning (10 Hz) - spatial reasoning results
  • spikes/spatial/integration/perspective (30 Hz) - egocentric view

Spatial_Attention Node (Rank 15) - Spatial attention control (fronto-orbital cortex cooperation)

Role: Control of spatial attention based on task signals from PFC

Implementation class: evospikenet.spatial_processing.attention_control.SpatialAttentionNode

Main features:
  1. Receives task-drive signals from the PFC
  2. Bottom-up saliency detection
  3. Top-down attention weight calculation
  4. Saccade (eye movement) goal planning

Publishing topics:
  • spikes/spatial/attention/weights (30 Hz) - attention weights
  • spikes/spatial/attention/saliency (30 Hz) - saliency map
  • spikes/spatial/attention/saccade (variable) - saccade target

2.3. Communication topology

evospikenet/
├── api/prompt              # API → PFC (prompt submission, via Zenoh)
├── api/result              # Function node → API (send results, via Zenoh)
├── pfc/text_prompt         # PFC → Lang-Main (text task)
├── pfc/visual_task         # PFC → Visual (visual task)
├── pfc/spatial_task        # PFC → Spatial (spatial task) ✨ NEW
├── pfc/audio_task          # PFC → Audio (audio task)
├── pfc/motor_goals         # PFC → Motor (motor goal)
├── pfc/spatial_attention   # PFC → Spatial_Attention (spatial attention signal)✨ NEW
├── pfc/add_goal            # Executive Control (Add Goal)
├── pfc/get_status          # Executive Control (status acquisition)
├── spikes/visual/pfc       # Visual → PFC (visual spike)
├── spikes/spatial/where/*  # Spatial_Where node output ✨ NEW
│   ├── depth              # Depth estimation (30 Hz)
│   ├── coordinates        # Spatial coordinates (30 Hz)
│   └── optical_flow       # Optical flow (30 Hz)
├── spikes/spatial/what/*   # Spatial_What node output ✨ NEW
│   ├── scene_graph        # Scene graph (10 Hz)
│   ├── voxel_grid         # 3D grid representation (10 Hz)
│   └── mesh               # 3D mesh (5Hz)
├── spikes/spatial/integration/*  # Spatial_Integration ✨ NEW
│   ├── world_model        # Integrated world model (10 Hz)
│   ├── reasoning          # Spatial inference results (10 Hz)
│   └── perspective        # Egocentric view (30 Hz)
├── spikes/spatial/attention/*    # Spatial_Attention ✨ NEW
│   ├── weights            # Attention weight (30 Hz)
│   ├── saliency           # Saliency map (30 Hz)
│   └── saccade            # Saccade target (variable)
├── spikes/audio/*          # Audio node output
├── spikes/spatial/pfc      # Spatial → PFC (spatial spike)
├── spikes/auditory/pfc     # Auditory → PFC (auditory spike)
├── ego_pose                # Self-position/posture information
├── task/completion         # Function node → PFC (completion notification)
├── heartbeat/{node_id}     # Each node → Discovery (liveness, every 2 seconds)
├── discovery/announce      # All nodes → Discovery (node discovery, at startup)
└── monitoring/*            # Monitoring/metrics distribution
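The heartbeat/{node_id} topic implies a periodic liveness publisher on every node. A minimal sketch, assuming a communicator object with a publish(topic, payload) method (the actual API lives in evospikenet.zenoh_comm):

```python
import json
import time

def heartbeat_loop(comm, node_id: str, interval_s: float = 2.0, beats: int = 1):
    # Publish a liveness message every interval_s seconds (2 s in the spec).
    # `comm.publish` is an assumed interface; `beats` bounds the loop for demos.
    topic = f"heartbeat/{node_id}"
    for _ in range(beats):
        payload = json.dumps({"node_id": node_id, "ts": time.time()})
        comm.publish(topic, payload)
        time.sleep(interval_s)
```
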

3. Zenoh communication system

3.1. What is Zenoh?

Zenoh (Zero Overhead Network protocol) is a next-generation communication protocol for robotics and IoT.

Features:
  • Low latency: sub-millisecond communication delay
  • High throughput: millions of messages/second
  • Flexibility: supports Pub/Sub, Request/Reply, and querying
  • Loose coupling: nodes can be added and removed dynamically

3.2. ZenohConfig data structure

# evospikenet/zenoh_comm.py
@dataclass
class ZenohConfig:
    mode: str = "peer"                      # "peer" or "client"
    connect: Optional[List[str]] = None     # Connection destination endpoint
    listen: Optional[List[str]] = None      # listen endpoint
    namespace: str = "evospikenet"          # Topic namespace
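For example, a client-mode node that dials a fixed router rather than relying on peer discovery might be configured as below (the dataclass is restated so the snippet is self-contained; the router address is a placeholder):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ZenohConfig:  # mirrors evospikenet/zenoh_comm.py (shown above)
    mode: str = "peer"
    connect: Optional[List[str]] = None
    listen: Optional[List[str]] = None
    namespace: str = "evospikenet"

# Client mode: connect to a known Zenoh router (address is an example value)
config = ZenohConfig(mode="client", connect=["tcp/192.168.1.10:7447"])
```
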

3.3. Implementation details

3.3.1. ZenohCommunicator

Provides basic Pub/Sub, Request/Reply functionality.

  • Version compatibility: Built-in compatibility layer to absorb API differences between Zenoh 0.6+ and 0.4.x.
  • Asynchronous queues: subscribe_queue() method allows receiving messages in the Queue object instead of a callback.
  • Request-Reply: Default timeout set to 5.0 seconds.
  • Compression support: Automatic compression/decompression function using IntelligentCompressor.
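The queue-based reception style can be pictured as follows; this stub only models the callback-to-queue bridge that subscribe_queue() provides and is not the real implementation:

```python
import queue

def make_queue_subscriber():
    # Bridge a push-style callback into a pull-style Queue
    q: "queue.Queue[bytes]" = queue.Queue()
    def on_message(payload: bytes) -> None:
        q.put(payload)            # callback side: enqueue the payload
    return q, on_message

q, cb = make_queue_subscriber()
cb(b"spike-frame-1")              # simulate a Zenoh delivery
msg = q.get(timeout=5.0)          # consumer side: blocking get (5 s, matching the default timeout)
```
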

3.3.2. Asynchronous communication extension

Implemented structured messaging with the AsyncZenohCommunicator class.

  • Pub/Sub pattern: Asynchronous message delivery
  • Request/Response: Synchronous query processing
  • High performance distributed communication: Realizes low latency and high throughput

3.3.3. Secure communication

Features:
  • PSK (Pre-Shared Key) encryption: a psk field on ZenohConfig enables encryption with a 256-bit (64-character hexadecimal) pre-shared key.
  • Diffie-Hellman key exchange: dynamic key exchange with 2048-bit DH parameters and HKDF key derivation via the DHKeyExchange class.
  • AES-256-GCM authenticated encryption: the authentication tag ensures confidentiality and integrity at the same time.
  • Session-based key management: per-node session keys via set_session_key()/get_session_key().
  • Forward secrecy: DH key exchange uses a different key per session, protecting past communications.
  • Backward compatibility: supports both the legacy format (embedded keys) and the secure format (PSK/session keys).

Implementation classes:
  • evospikenet.spike_encryption.DHKeyExchange: DH key exchange implementation
  • evospikenet.spike_encryption.SpikeEncryption: encryption/decryption engine
  • evospikenet.zenoh_comm.ZenohCommunicator: PSK configuration and DH key exchange integration

Usage example:

```python
# PSK mode
config = ZenohConfig(
    mode="peer",
    namespace="evospikenet",
    psk="a1b2c3d4...",  # 64-character hexadecimal
)
comm = ZenohCommunicator(config)

# DH key exchange mode
peer_public_key = comm.initiate_key_exchange("peer_node_id")
# After receiving the public key from the peer:
comm.complete_key_exchange(peer_public_key_bytes)
```

See [SECURE_DISTRIBUTED_BRAIN.md](SECURE_DISTRIBUTED_BRAIN.md) for details.

#### 3.3.4. Reliability improvement mechanism (MT25-EV016)

**ACK/Retry Mechanism:**
We implemented acknowledgment and retry functions in ZenohCommunicator and improved the reliability of inter-node communication from 99%+ to 99.9%+.
- Track send/receive correlation by Request ID
- Automatically adjust retry interval with exponential backoff
- Constantly monitor connection status
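A minimal sketch of the exponential-backoff retry idea (the function and its parameters are illustrative, not the ZenohCommunicator API):

```python
import time

def send_with_retry(send_fn, payload, max_attempts=5, base_delay=0.05):
    # send_fn should return True once an ACK is received for the payload.
    for attempt in range(max_attempts):
        if send_fn(payload):
            return attempt + 1                    # attempts actually used
        time.sleep(base_delay * (2 ** attempt))   # 0.05 s, 0.1 s, 0.2 s, ...
    raise TimeoutError(f"no ACK after {max_attempts} attempts")
```
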

**Structured Logging:**
Record all communication errors and performance events as structured logs for faster troubleshooting.
- Full tracking of request lifecycle
- Automatic recording of error type and number of attempts

See the StructuredLogger class in `evospikenet/zenoh_comm.py` for details.

### 3.4. Optimizing spike transmission

**SpikePacket data structure (PTPSpikePacket):**

```python
# evospikenet/ptp_sync.py
@dataclass
class PTPSpikePacket:
    timestamp_ns: int             # PTP-synchronized nanosecond timestamp
    modality: str
    data: torch.Tensor
    metadata: Dict
```

Highly accurate PTP-synchronized timestamps ensure temporal consistency between modules.
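With PTP-synchronized clocks, a receiver can compute one-way latency directly from `timestamp_ns` (a sketch; it assumes sender and receiver share the PTP timebase):

```python
# One-way latency in milliseconds from a packet's PTP timestamp.
def one_way_latency_ms(packet_timestamp_ns: int, now_ns: int) -> float:
    return (now_ns - packet_timestamp_ns) / 1e6
```
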

#### 3.4.1. AEG-Comm communication optimization

**AEG-Comm** (Adaptive Energy-based Gating for Communication) is an intelligent communication control system with a three-layer safety architecture.

**Features:**
- **Layer 1: Energy Gate** - Energy-based adaptive gating
- **Layer 2: Critical Override** - Priority transmission of important packets (force, safety modality, emergency keyword)
- **Layer 3: Timestamp Guarantee** - Maintain order by guaranteeing timestamps
- **Communication reduction rate**: 85-93% target achieved
- **Error Recovery**: Exponential backoff retry
- **Security**: Spike Encryption (MT25-EV015)
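Layers 1 and 2 can be sketched as a single gating predicate (field names and thresholds below are assumptions; Layer 3 timestamp ordering is omitted):

```python
def should_transmit(packet, energy_budget, threshold=0.5,
                    critical_modalities=("force", "safety")):
    # packet: dict with "modality", "energy" (estimated transmission cost),
    # and optional "priority" fields -- field names are illustrative.

    # Layer 2: critical override always passes force/safety/emergency traffic
    if packet["modality"] in critical_modalities or packet.get("priority") == "emergency":
        return True
    # Layer 1: energy gate drops costly, non-critical traffic
    return packet["energy"] <= energy_budget * threshold
```
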

### 3.5. Distributed Memory System

**Implementation completed: December 21, 2025**

A distributed sharing system for episodic memory using the Zenoh communication protocol. Realizes knowledge sharing and collaborative learning among multiple nodes.

#### 3.5.1. Architecture overview

```mermaid
graph TD
    A[Node A: EpisodicMemory] --> Z[Zenoh Router]
    B[Node B: EpisodicMemory] --> Z
    C[Node C: EpisodicMemory] --> Z

    Z --> A
    Z --> B
    Z --> C

    A --> T1[evolspikenet/memory/node_A]
    B --> T2[evolspikenet/memory/node_B]
    C --> T3[evolspikenet/memory/node_C]
```

#### 3.5.2. Main features

**1. Enabling distributed storage:**

```python
# Extension method on the EpisodicMemory class
def enable_distributed_memory(self, node_id: str,
                              zenoh_config: Optional[Dict[str, Any]] = None) -> bool:
    """Enable distributed memory over Zenoh communication."""
```

**2. Memory sharing:**

```python
def share_memory_with_node(self, target_node_id: str, memory_ids: List[str]) -> bool:
    """Share the specified memories with the target node."""
```

**3. Synchronization request:**

```python
def request_memory_sync(self, target_node_id: str, sync_criteria: Dict[str, Any]) -> bool:
    """Request memory synchronization from another node."""
```

#### 3.5.3. Communication topic structure
```
evospikenet/memory/{node_id}         # memory sharing topic
evospikenet/memory/sync/{node_id}    # synchronization request topic
```
#### 3.5.4. Message format

**Memory sharing message:**

```python
{
    "type": "memory_share",
    "source_node": "node_A",
    "memories": [
        {
            "source_node": "node_A",
            "memory_data": {...},  # EpisodicMemoryEntry.to_dict()
            "timestamp": "2025-12-21T10:30:00.000000"
        }
    ]
}
```

**Synchronization request message:**

```python
{
    "type": "sync_request",
    "source_node": "node_B",
    "criteria": {
        "importance_threshold": 0.7,
        "time_range": {"start": "2025-12-20", "end": "2025-12-21"}
    },
    "timestamp": "2025-12-21T10:30:00.000000"
}
```

#### 3.5.5. Memory merge strategy

**Intelligent merge:**
- Importance-based integration of duplicate memories
- Weighted average of access statistics
- Metadata integration

```python
def _merge_memory_entry(self, existing_id: str, new_entry: EpisodicMemoryEntry) -> None:
    """既存記憶と新規記憶のマージ"""
    existing = self.memories[existing_id]

    # weighted average of importance
    total_access = existing.access_count + new_entry.access_count
    if total_access > 0:
        existing.importance = (
            (existing.importance * existing.access_count +
             new_entry.importance * new_entry.access_count) / total_access
        )
```

#### 3.5.6. Performance characteristics

| Indicator | Value | Notes |
|---|---|---|
| Sync success rate | 95% | Based on Zenoh reliability |
| Latency | <10ms | Local network |
| Memory usage | +5% | Communication overhead |
| Scalability | 100 nodes | Theoretical upper limit |

#### 3.5.7. Usage example

```python
from evospikenet.episodic_memory import EpisodicMemory

# Enable distributed memory
memory = EpisodicMemory(embedding_dim=512, max_memories=1000)
memory.enable_distributed_memory("brain_node_01", {"port": 7447})

# Store an experience
memory_id = memory.store_experience(
    context={"task": "pattern_recognition"},
    action="classify_image",
    outcome="correct",
    reward=1.0
)

# Share memory with another node
memory.share_memory_with_node("brain_node_02", [memory_id])

# Synchronization request
memory.request_memory_sync("brain_node_03", {
    "importance_threshold": 0.8,
    "time_range": {"start": "2025-12-21"}
})
```

#### 3.5.8. Distributed learning effect

  • Collaborative learning: Accelerate learning by sharing knowledge between nodes
  • Redundancy: Increased data loss tolerance through distribution
  • Scalability: Supports dynamic node addition/deletion
  • Adaptability: Continuous learning in a distributed environment

### 3.6. Security System (Spike Encryption & Secure Communication)

**Implementation completed: January 24, 2026**

Encrypted communication system between distributed brain nodes. MT25-EV015 patented implementation ensures confidentiality and integrity of spike data and messages.

#### 3.6.1. Security architecture overview

```mermaid
graph TD
    A[Node A] -->|PSK/DH key exchange| B[Node B]
    A -->|encrypted spikes| C[Zenoh Router]
    C -->|encrypted spikes| B

    subgraph "SpikeEncryption System"
        PSK[Pre-Shared Key]
        DH[Diffie-Hellman Exchange]
        SESSION[Session Key Management]
    end

    PSK --> SESSION
    DH --> SESSION
    SESSION --> ENC[AES-256-GCM Encryption]
    ENC --> C
```

#### 3.6.2. Security features

**1. Pre-Shared Key (PSK)**

```python
import os

# Generate a 256-bit PSK (64 hex characters)
psk = os.urandom(32).hex()

# PSK settings via ZenohConfig
config = ZenohConfig(
    psk=psk,
    namespace="evospikenet"
)
```

**2. Diffie-Hellman key exchange**

```python
# Initialize DH key exchange
dh_exchange = DHKeyExchange()
public_key = dh_exchange.generate_keypair()

# Key exchange with the peer
shared_secret = dh_exchange.compute_shared_key(peer_public_key)
session_key = dh_exchange.derive_session_key(shared_secret)
```

**3. Session-based encryption**
- Forward secrecy: DH key exchange keeps past sessions secure even if a later key leaks
- AES-256-GCM: tamper detection via authenticated encryption
- Dynamic key management: a different key is used for each session
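To make the shared-secret derivation concrete, here is a toy Diffie-Hellman sketch using Python's built-in modular exponentiation. The modulus is deliberately tiny for readability and is NOT secure; the real implementation uses the 2048-bit RFC 3526 group and its own key-derivation step.

```python
import hashlib

# Toy DH parameters: a small 32-bit prime and generator (illustrative only,
# NOT secure; production uses the 2048-bit RFC 3526 group).
p, g = 0xFFFFFFFB, 5

# Each side picks a private exponent and publishes g^priv mod p
a_priv, b_priv = 271828, 314159
a_pub, b_pub = pow(g, a_priv, p), pow(g, b_priv, p)

# Both sides compute the same shared secret from the peer's public key
secret_a = pow(b_pub, a_priv, p)
secret_b = pow(a_pub, b_priv, p)

# Derive a 256-bit session key from the shared secret (illustrative KDF)
session_key = hashlib.sha256(str(secret_a).encode()).digest()
```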

#### 3.6.3. Encryption format

**Secure format (when using PSK/session key):**

```python
{
    "format": "secure",
    "nonce": "<12 bytes base64>",
    "ciphertext": "<encrypted spike data>",
    "tag": "<16 bytes auth tag>"
}
```

**Legacy format (backwards compatible):**

```python
{
    "format": "legacy",
    "key": "<embedded AES key>",
    "nonce": "<12 bytes base64>",
    "ciphertext": "<encrypted spike data>",
    "tag": "<16 bytes auth tag>"
}
```

#### 3.6.4. Communication topic structure

```
evospikenet/security/key_exchange/{node_id}    # DH public key exchange
evospikenet/spikes/{source}/{dest}             # Encrypted spike data
evospikenet/heartbeat/{node_id}                # Node liveness check
```

#### 3.6.5. Security metrics

| Indicator | Value | Notes |
|---|---|---|
| Encryption algorithm | AES-256-GCM | NIST recommended |
| Key length | 256-bit | Quantum resistance considered |
| DH key size | 2048-bit | RFC 3526 compliant |
| Encryption overhead | <5% | Benchmark measurements |
| Key exchange time | <100ms | Local network |
| Forward secrecy | Enabled | Rekey every session |

#### 3.6.6. Usage example

```python
from evospikenet.zenoh_comm import ZenohCommunicator, ZenohConfig

# PSK mode
config = ZenohConfig(
    psk="64-hex-character-pre-shared-key",
    namespace="evospikenet"
)
comm = ZenohCommunicator(config)

# DH key exchange mode
comm = ZenohCommunicator(ZenohConfig())
comm.initiate_key_exchange("peer_node_id")
# ... after key exchange completes, encrypt with the session key ...
```

#### 3.6.7. Security Best Practices

  • PSK Management: Stored in environment variables or secret management system
  • Key rotation: Periodic session key updates (recommended: every hour)
  • TLS integration: Can be used with Zenoh's TLS functionality
  • Audit Log: Logging of encryption events
  • Access Control: Node ID based authentication

Detailed documentation: SECURE_DISTRIBUTED_BRAIN.md


## 4. PFC and Q-PFC feedback loop

### 4.1. PFCDecisionEngine Overview

The PFC is responsible for the highest cognitive functions of the system.

  1. Working Memory: Short-term memory by LIF neuron layer
  2. Task Routing: Attention mechanism using ChronoSpikeAttention
  3. Entropy calculation: Quantifying decision uncertainty
  4. Self-modulating: Q-PFC feedback loop

### 4.2. Theory of the Q-PFC feedback loop

#### 4.2.1. Definition of cognitive entropy

\[ H = -\sum_{i=1}^{N} p_i \log p_i \]

#### 4.2.2. Quantum-inspired modulation

\[ \theta = \pi \cdot \frac{H}{H_{\max}}, \quad \alpha(t) = \cos^2\left(\frac{\theta}{2}\right) = \cos^2\left(\frac{\pi H}{2\log N}\right) \]

#### 4.2.3. Self-modulation mechanism

Dynamic threshold adjustment: $$ \text{threshold}(t) = \text{threshold}_{\text{base}} \cdot (0.5 + \alpha(t)) $$
- Low \(\alpha(t)\) (high entropy): the threshold is lowered and exploratory firing increases.
- High \(\alpha(t)\) (low entropy): the threshold is raised and stable, deterministic firing increases.

Routing temperature control: $$ T_{\text{routing}} = \frac{1}{\alpha(t) + \epsilon} \quad (\epsilon = 10^{-9}) $$
- Low \(\alpha(t)\): the temperature rises, the softmax approaches a uniform distribution, and routing is exploratory.
- High \(\alpha(t)\): the temperature approaches 1, the softmax concentrates on the maximum value, and routing is exploitative.
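The modulation rules above can be checked numerically with a small plain-Python sketch (this is not the QuantumModulationSimulator API, just the formulas from §4.2):

```python
import math

def qpfc_modulation(probs, base_threshold=1.0, eps=1e-9):
    """Compute alpha(t), the modulated threshold, and the routing temperature
    from a routing probability distribution, per the formulas above."""
    n = len(probs)
    h = -sum(p * math.log(p) for p in probs if p > 0)   # cognitive entropy H
    h_max = math.log(n)                                 # H_max = log N
    alpha = math.cos(math.pi * h / (2 * h_max)) ** 2    # alpha(t) = cos^2(pi*H / (2*log N))
    threshold = base_threshold * (0.5 + alpha)          # threshold modulation
    temperature = 1.0 / (alpha + eps)                   # routing temperature
    return alpha, threshold, temperature

# Uniform distribution: maximum entropy, alpha -> 0, exploratory regime
a_u, th_u, t_u = qpfc_modulation([0.25] * 4)
# Peaked distribution: low entropy, alpha -> 1, exploitative regime
a_p, th_p, t_p = qpfc_modulation([0.97, 0.01, 0.01, 0.01])
```

With a uniform distribution the temperature blows up (exploration) while the threshold drops to half its base value; a peaked distribution drives the temperature toward 1 and the threshold toward 1.5× base.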

### 4.3. PFC implementation (evospikenet/pfc.py)

#### 4.3.1. QuantumModulationSimulator

Computes \(\alpha(t)\) according to the formula above.

#### 4.3.2. PFCDecisionEngine

**Implementation features:**
- Standalone mode: when num_modules=0, max_entropy defaults to 1.0 so the engine works without error.
- Flexible input: forward() passes the input through the embedding layer when it is torch.long (token IDs), and uses it directly otherwise (spike trains).
- Fixed values: vocab_size defaults to 256. Text input to the PFC uses a simple character-code conversion (placeholder), ord(c) % 256.

Detailed flow diagram:

```mermaid
graph TD
    A["input data"] --> B{"Data type?"}
    B -->|"Token ID: torch.long"| C["Embedding: 256, size"]
    B -->|"spike: torch.float"| D["Use as is"]
    C --> E["Expand to time dimension"]
    D --> E

    E --> F["working memory update"]
    F --> G["ChronoSpikeAttention"]
    G --> H["Decision vector generation"]

    H --> I["Routing score calculation"]
    I --> J["Softmax"]
    J --> K["Entropy calculation H"]

    K --> L["Q-PFC: alpha_t generation"]
    L --> M["Routing temperature adjustment T"]
    M --> N["Final routing probability: softmax scores/T"]

    L --> O["Threshold modulation: threshold = base * (0.5 + alpha)"]
    O --> F

    N --> P["Output: route_probs"]
    K --> Q["Output: entropy"]
    F --> R["Output: spikes, potential"]
```

## 5. Advanced decision engine

### 5.1. AdvancedPFCEngine

Extends PFCDecisionEngine and integrates ExecutiveControlEngine to provide advanced cognitive control.

**Implementation features:**
- Dynamic goal addition: high-level goals can be added dynamically from outside via the add_goal() method.
- Placeholder implementation: the goal-embedding generation in add_goal() is a dummy, torch.randn(self.size), and will need to be replaced with a proper encoding method in the future.
- Performance tracking: the get_performance_stats() method returns metrics such as total decisions, success rate, and average entropy.

Details: ADVANCED_DECISION_ENGINE.md


## 6. Node discovery system

### 6.1. ZenohNodeDiscovery (evospikenet/node_discovery.py)

Implemented as a centralized service: a single instance (singleton) monitors the health of all nodes and scales to 100+ nodes.

**Main features:**
- Heartbeat monitoring: subscribes to the evospikenet/heartbeat/* topic and monitors the heartbeat of all nodes.
- State management: marks a node inactive after a timeout (default 5.0 seconds) since its last heartbeat. The monitoring loop runs every 1.0 second.
- UI integration: the export_for_ui() method provides formatted data, including status icons (🟢/🔴), for UI display.

**Improved scalability (MT25-EV016):** a node-grouping mechanism supports 100+ nodes and speeds up node lookup in large clusters from O(n) to O(1).
- Nodes are automatically categorized by module within sections
- get_nodes_by_type(module_type) returns the nodes for a module instantly
- Groups are updated automatically on node registration/deletion
- Per-module statistics are available via get_group_statistics()

### 6.2. Use by PFC

After determining the routing destination, the PFC queries the ZenohNodeDiscovery service via the _has_active_nodes_for_module() method to check whether the target module is active.

**Fallback logic:** if the target module (e.g. visual) has no active nodes, the task automatically falls back to the lang-main module. This keeps the system as a whole functioning even if some nodes go down.
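A minimal sketch of this fallback decision, with a hypothetical discovery stub standing in for ZenohNodeDiscovery (`resolve_route` and `_FakeDiscovery` are illustrative names; `get_nodes_by_type` is the lookup described in §6.1):

```python
def resolve_route(target_module: str, discovery, fallback: str = "lang-main") -> str:
    """Route to the target module if it has active nodes, else fall back."""
    if discovery.get_nodes_by_type(target_module):
        return target_module
    return fallback

class _FakeDiscovery:
    """Illustrative stand-in for the node discovery service."""
    def __init__(self, active):
        self.active = active  # module name -> list of active node ids

    def get_nodes_by_type(self, module):
        return self.active.get(module, [])

# "visual" has no active nodes, so the task falls back to lang-main
disc = _FakeDiscovery({"lang-main": ["node1"]})
route = resolve_route("visual", disc)
```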

Details: ADVANCED_NODE_DISCOVERY.md


## 7. ChronoSpikeAttention mechanism

### 7.1. Overview

ChronoSpikeAttention is a spiking attention mechanism that guarantees temporal causality.

  1. Causality Guarantee: Do not refer to future information
  2. Temporal proximity bias: The closer the time, the greater the weight
  3. Spike output: Supports multiple neuron models (LIF, Izhikevich, etc.)

### 7.2. Theoretical basis

Causal temporal proximity mask:

$$ \text{mask}(t, t') = \begin{cases} 0 & \text{if } t' > t \\ \exp\left(-\frac{t - t'}{\tau}\right) & \text{if } t' \leq t \end{cases} $$

Full expression:

$$ \text{Attention}_{\text{chrono}}(Q, K, V) = \text{SpikingNeuron}\left(\text{sigmoid}\left(\frac{QK^T}{\sqrt{d_k}}\right) \odot M \cdot V \cdot W_{\text{out}}\right) $$

### 7.3. Implementation details (evospikenet/attention.py)

  • Default value of time constant tau: If tau is not specified, it is set based on a fixed heuristic of time_steps / 4.0.
  • Diverse neuron types: the neuron_type argument allows switching between 'LIF' (snnTorch), 'EvoLIF' (custom integer-based LIF), and 'Izhikevich'.
  • Fixed scale factor: If neuron_type='EvoLIF', the input to the LIF layer is scaled by * 1000.0. This is a fixed value for converting the floating point output to the appropriate operating range for an integer-based neuron.
  • Data shape: The input is assumed to be a 4-dimensional tensor of (batch_size, time_steps, seq_len, input_dim).
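The causal mask from §7.2, with the tau = time_steps / 4.0 default noted above, can be sketched in plain Python (the real implementation operates on torch tensors):

```python
import math

def chrono_mask(time_steps, tau=None):
    """Build the causal temporal-proximity mask from §7.2.
    tau defaults to time_steps / 4.0, per the implementation notes."""
    if tau is None:
        tau = time_steps / 4.0
    mask = [[0.0] * time_steps for _ in range(time_steps)]
    for t in range(time_steps):
        for t2 in range(time_steps):
            if t2 <= t:  # never attend to the future (causality guarantee)
                mask[t][t2] = math.exp(-(t - t2) / tau)
    return mask

m = chrono_mask(4)
# Future entries stay 0; same-step weight is 1; older steps decay with tau.
```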

## 8. Execution flow of distributed brain nodes

### 8.1. Node initialization sequence

```mermaid
sequenceDiagram
    participant M as "Main"
    participant PTP as "PTP Manager"
    participant S as "Safety Controller"
    participant D as "Node Discovery"
    participant N as "ZenohBrainNode"

    M->>PTP: 1. init_ptp
    M->>S: 2. init_safety
    M->>D: 3. init_node_discovery
    M->>N: 4. ZenohBrainNode
    N->>N: 5. _create_model (uses AutoModelSelector)
    M->>N: 6. start (subscription setup)
```

### 8.2. Prompt processing flow (implementation-compliant)

PFC nodes receive prompts from the UI in two ways.

  1. Via Zenoh: API publishes to api/prompt topic and PFC's _on_api_prompt() callback is fired.
  2. Via file (legacy): API writes /tmp/evospikenet_prompt_*.json file, discovered by PFC's _process_pfc_timestep() polling in a 100Hz loop.

In either path, PFC will eventually analyze the task and publish it to the topic (e.g. pfc/text_prompt) of the appropriate functional module (e.g. lang-main).

### 8.3. Time step processing

Each node executes process_timestep() in a 100 Hz loop (every 10 ms). Within the loop:
1. Increment the step counter
2. Record control status to SimulationRecorder
3. For PFC nodes, update status to the API (2-second interval)
4. Send safety_heartbeat()
5. Module-specific processing (PFC file polling, etc.)
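The loop above can be sketched as follows; `_DummyNode` and its hook names are illustrative stand-ins for the real node interface, not the actual API:

```python
import time

class _DummyNode:
    """Minimal stand-in for a brain node (hypothetical hook names)."""
    def __init__(self, is_pfc=False):
        self.step_count = 0
        self.heartbeats = 0
        self.is_pfc = is_pfc

    def record_control_state(self):
        pass  # would write to SimulationRecorder

    def publish_status(self):
        pass  # would push status to the API

    def safety_heartbeat(self):
        self.heartbeats += 1

    def module_specific_processing(self):
        pass  # e.g. PFC file polling

def run_timestep_loop(node, steps=5, dt=0.01):
    """Run `steps` iterations of the 100 Hz (10 ms) timestep loop."""
    for _ in range(steps):
        start = time.perf_counter()
        node.step_count += 1                  # 1. increment step counter
        node.record_control_state()           # 2. record control state
        if node.is_pfc and node.step_count % 200 == 0:
            node.publish_status()             # 3. PFC status update (every 2 s = 200 steps)
        node.safety_heartbeat()               # 4. safety heartbeat
        node.module_specific_processing()     # 5. module-specific processing
        # Sleep off the remainder of the 10 ms budget
        remaining = dt - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)

node = _DummyNode(is_pfc=True)
run_timestep_loop(node, steps=5)
```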


## 9. Data structures and type definitions

### 9.1. PTPSpikePacket

Defined in evospikenet.ptp_sync. Spike packets containing PTP-synchronized high-precision timestamps.

### 9.2. MotorGoal

Defined in evospikenet.motor_consensus. Goal definition for motor control.

### 9.3. NodeInfo

Defined in evospikenet.node_discovery. Node information managed by the node discovery service.


## 10. Performance optimization and control

### 10.1. Fast Startup (FastStartupSequencer)

Used with the --fast-startup flag in examples/run_zenoh_distributed_brain.py.
- Goal: start all nodes within 15 seconds.
- Strategy: parallel initialization with max_workers=5 and prioritized startup with the PFC at priority=0.
- Fixed values: the PFC timeout is 5.0 seconds, other nodes 3.0 seconds.

### 10.2. PTP time synchronization

evospikenet.ptp_sync provides microsecond-level time synchronization.

### 10.3. Safety monitoring

evospikenet.fpga_safety monitors safety limits such as speed and temperature.

### 10.4. Memory monitoring and automatic management (MT25-EV016)

**MemoryMonitor class:** regularly monitors memory usage in the background for early leak detection and automatic GC triggering.
- Checks memory usage every 60 seconds
- Emits a warning log when RSS > 1000MB
- Detects and alerts when growth exceeds 100MB
- Automatically runs gc.collect() when memory growth exceeds 200MB

ZenohCommunicator integration: Limit memory usage of compression buffers and prevent OOM. Unnecessary buffers are automatically released.
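The monitoring policy can be sketched as follows. `MemoryMonitorSketch` and the injected RSS reader are illustrative (the real class samples process RSS itself); only the thresholds come from the description above:

```python
import gc

class MemoryMonitorSketch:
    """Illustrative sketch of the MemoryMonitor policy.
    rss_mb_fn is injected so the policy is testable without psutil."""
    def __init__(self, rss_mb_fn, warn_rss_mb=1000.0,
                 alert_growth_mb=100.0, gc_growth_mb=200.0):
        self.rss_mb_fn = rss_mb_fn
        self.baseline = rss_mb_fn()        # RSS at startup, in MB
        self.warn_rss_mb = warn_rss_mb
        self.alert_growth_mb = alert_growth_mb
        self.gc_growth_mb = gc_growth_mb

    def check(self):
        """One periodic (60 s) check; returns the actions taken."""
        rss = self.rss_mb_fn()
        growth = rss - self.baseline
        actions = []
        if rss > self.warn_rss_mb:
            actions.append("warn_rss")     # RSS > 1000MB -> warning log
        if growth > self.alert_growth_mb:
            actions.append("alert_growth") # growth > 100MB -> alert
        if growth > self.gc_growth_mb:
            gc.collect()                   # growth > 200MB -> forced GC
            actions.append("gc")
            self.baseline = rss            # reset baseline after GC
        return actions

# Simulated RSS readings: 500MB at startup, 1250MB at the first check
readings = iter([500.0, 1250.0])
mon = MemoryMonitorSketch(lambda: next(readings))
actions = mon.check()
```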

### 10.5. NTP synchronization monitoring (MT25-EV016)

**NTP verification in RaftConsensus:** verifies system clock synchronization before Raft leader election to ensure < 5-second failover under network partitions.
- Verifies clock synchronization status at 60-second intervals
- Warns when clock_sync_tolerance (default 0.1 seconds) is exceeded
- Integrates with timedatectl on Linux
- Tracks the history of out-of-sync states

### 10.6. Bottleneck analysis

  1. Network: Zenoh delay < 1ms (usually negligible)
  2. Computation: Inference with SpikingEvoSpikeNetLM.generate() (GPU recommended)
  3. Memory: Loading large models with AutoModelSelector + protection with memory monitoring
  4. Communication reliability: 99.9%+ ensured by ACK/retry
  5. Node discovery: Fast lookups in O(1) with grouping

## 11. Recording simulation data

### 11.1. SimulationRecorder

Available with the --enable-recording flag. Records internal state during simulation to an HDF5 file.
- Recorded: spikes, membrane potentials, weights, control states
- Settings: recording targets can be controlled in detail via command-line arguments.

For details, see SIMULATION_RECORDING_GUIDE.md.


## 12. Long-term memory system

### 12.1. Overview

Implementation completed on December 31, 2025: long-term memory functionality is integrated into EvoSpikeNet's distributed brain. FAISS-based vector search and Zenoh communication enable persistent knowledge retention and learning adaptation.

### 12.2. Architecture

```mermaid
graph TB
    subgraph "Long-term memory nodes"
        LTM["LongTermMemoryNode<br/>Base class"]
        EPI["EpisodicMemoryNode<br/>Time-series event memory"]
        SEM["SemanticMemoryNode<br/>Factual knowledge memory"]
        INT["MemoryIntegratorNode<br/>Memory integration/association"]
    end

    subgraph "Storage layer"
        FAISS["FAISS<br/>Vector Search Index"]
        ZCOMM["Zenoh Communicator<br/>Distributed communication"]
        PTP["PTP Sync<br/>Time synchronization"]
    end

    subgraph "Integration interface"
        STORE["store_memory()<br/>Memory storage"]
        QUERY["query_memory()<br/>Similarity search"]
        RETRIEVE["retrieve_memory()<br/>Specific retrieval"]
        ASSOCIATE["associate_memories()<br/>Cross-modal association"]
    end

    LTM --> FAISS
    EPI --> LTM
    SEM --> LTM
    INT --> EPI
    INT --> SEM

    LTM --> ZCOMM
    ZCOMM --> PTP

    STORE --> LTM
    QUERY --> LTM
    RETRIEVE --> LTM
    ASSOCIATE --> INT
```

### 12.3. Implemented features

Memory node class

  • LongTermMemoryNode: Base class for vector similarity search using FAISS
  • EpisodicMemoryNode: Sequence storage of time series events (store_episodic_sequence())
  • SemanticMemoryNode: Structured memory of concepts and knowledge (store_knowledge())
  • MemoryIntegratorNode: Integration and association of episodic and semantic memories

Core Features

  • Vector search: Fast neighborhood search with FAISS (cosine similarity)
  • Zenoh integration: Distributed memory operations with Pub/Sub
  • PTP Synchronization: Nanosecond precision timestamp (system time in test environment)
  • Importance Management: Memory retention based on access frequency and importance
  • Crossmodal associations: associations between different memory types
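A pure-Python sketch of the cosine-similarity search that query_memory() performs over a FAISS index (illustrative, stdlib only; the real implementation uses FAISS for fast neighborhood search):

```python
import math

def _norm(v):
    """Normalize a vector to unit length."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def query_memory(index, query, top_k=2, threshold=0.0):
    """Sketch of FAISS-style cosine-similarity search over stored vectors.
    index maps memory_id -> vector; returns [(memory_id, score), ...]."""
    q = _norm(query)
    scored = []
    for memory_id, vec in index.items():
        score = sum(a * b for a, b in zip(q, _norm(vec)))  # cosine similarity
        if score >= threshold:
            scored.append((memory_id, score))
    scored.sort(key=lambda t: t[1], reverse=True)
    return scored[:top_k]

index = {"m1": [1.0, 0.0], "m2": [0.7, 0.7], "m3": [0.0, 1.0]}
hits = query_memory(index, [1.0, 0.1], top_k=2, threshold=0.5)
# "m1" ranks first (closest direction to the query); "m3" falls below threshold
```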

Distributed brain integration

  • Added memory nodes to 29 node architecture (including spike compression node)
  • Real-time memory operations via Zenoh topics
  • Experience reproduction and knowledge retention for long-term learning

### 12.4. API Interface

**Basic operations**

```python
# Memory storage
memory_id = await memory_node.store_memory(content_vector, metadata, importance=0.8)

# Similarity search
results = await memory_node.query_memory(query_vector, top_k=5, threshold=0.7)
# Return value: [(memory_id, score, metadata), ...]

# Specific retrieval
entry = await memory_node.retrieve_memory(memory_id)
# Return value: MemoryEntry or None
```

**Special operations**

```python
# Episodic memory
await episodic_node.store_episodic_sequence(sequence_vectors, metadata)

# Semantic memory
await semantic_node.store_knowledge(concept, embedding, related_concepts)

# Memory consolidation
associations = await integrator.associate_memories(episodic_query, semantic_query)
```

### 12.5. Performance characteristics

  • Search speed: Fast vector search with FAISS (on the order of milliseconds)
  • Scalability: Efficient processing of millions of vectors
  • Memory Efficiency: Automatic organization/deletion based on importance
  • Distribution Tolerance: Inter-node synchronization with Zenoh

### 12.6. Test coverage

A complete test suite is implemented (9 test cases, 100% passing):
- Initialization test
- Memory store/query/retrieve tests
- Episodic sequence memory test
- Knowledge retention test
- Memory consolidation/association tests

### 12.7. Implementation files

  • evospikenet/memory_nodes.py: Memory node implementation
  • examples/run_zenoh_distributed_brain.py: Distributed brain integration
  • tests/test_memory_nodes.py: Test suite
  • requirements.txt: FAISS dependencies
  • Dockerfile: Container configuration

## 13. Feature 13: Advanced spatial processing node ✅ New implementation completed (2026-02-17)

### 13.1. Overview

An advanced spatial cognition and generation system added to EvoSpikeNet's distributed brain system. It consists of four proprietary nodes (Rank 12-15) that simulate the visual system of the biological brain (occipital, temporal, and parietal lobes).

- Implementation file: spatial_processing.py (3500+ lines)
- Test file: test_distributed_brain_simulation.py (2000+ lines)
- Detailed specifications: DISTRIBUTED_BRAIN_SPATIAL_NODES.md

### 13.2. Node configuration (Rank 12-15)

```
Visual processing flow:
┌─────────────────────────────────────────────┐
│ Rank 1: Vision (basic visual features)      │
└──────────┬──────────────────────────────────┘
           │
      ┌────┴────┐
      │         │
      ▼         ▼
┌──────────┐  ┌──────────────┐
│ Rank 12  │  │ Rank 13      │
│ "Where"  │  │ "What"       │
│ (space/  │  │ (object      │
│  depth)  │  │  recognition)│
└────┬─────┘  └──────┬───────┘
     │               │
     └───────┬───────┘
             │
             ▼
        ┌──────────────┐
        │ Rank 14      │
        │ Integration  │
        │ What-Where   │
        │ fusion       │
        └────┬─────────┘
             │
             ▼
        ┌──────────────┐
        │ Rank 15      │
        │ Attention    │
        │ Saccade      │
        │ planning     │
        └──────────────┘
```

| Rank | Node | Brain area | Processing content | Status |
|---|---|---|---|---|
| 12 | SPATIAL_WHERE | Dorsal parietal lobe | Spatial position/depth recognition, retinal coordinates | ✅ Completed |
| 13 | SPATIAL_WHAT | IT cortex | Object recognition, scene understanding, 100+ class classification | ✅ Completed |
| 14 | SPATIAL_INTEGRATION | Occipito-parietal junction | What-Where integration, world model | ✅ Completed |
| 15 | SPATIAL_ATTENTION | Fronto-orbital area | Attentional control, saccade planning | ✅ Completed |

### 13.3. Core Components

CoordinateTransformer

  • Function: Egocentric ↔ Allocentric coordinate system conversion
  • Input: Visual coordinates from Rank 12
  • Output: Transformed spatial coordinates, rotation matrix
  • Implementation: Quaternion/Euler angle support
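A minimal 2-D sketch of the egocentric → allocentric transform (illustrative only; the actual module supports quaternions and full 3-D rotation matrices):

```python
import math

def egocentric_to_allocentric(point, agent_pos, agent_yaw):
    """Rotate an egocentric 2-D point by the agent's heading and translate
    by its position (hypothetical helper illustrating the transform idea)."""
    x, y = point
    c, s = math.cos(agent_yaw), math.sin(agent_yaw)
    # 2-D rotation matrix applied to the egocentric vector, then translation
    wx = c * x - s * y + agent_pos[0]
    wy = s * x + c * y + agent_pos[1]
    return (wx, wy)

# A point 1 m ahead of an agent at (2, 3) facing +90 degrees lies at (2, 4)
pt = egocentric_to_allocentric((1.0, 0.0), (2.0, 3.0), math.pi / 2)
```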

DepthEstimationNetwork (depth estimation)

  • Model: CNN-based monocular depth estimation
  • Input: RGB image (H×W×3)
  • Output: Depth map (1×H×W)
  • Performance: < 50ms latency

SpatialCoordinateEncoder (spike transform)

  • Function: 3D coordinate → spike expression conversion
  • Encoding: Multiscale, LIF neuron
  • Output: Spike time series (T×N×D)

SpatialAttentionModule (Multi-head attention)

  • Number of heads: 8
  • Feature: Weighting of What/Where information
  • Output: Integrated attention map

DistributedSpatialCortex (integrated system)

  • Role: Integrated management of 4 nodes (Rank 12-15)
  • Communication: Zenoh Pub/Sub (spikes/spatial/*)
  • Performance monitoring: profile_section context manager integration

### 13.4. Performance characteristics

| Indicator | Goal | Result | Status |
|---|---|---|---|
| Rank 12 latency | < 50ms | 10-20ms avg | ✅ Achieved |
| Rank 13 latency | < 30ms | 8-15ms avg | ✅ Achieved |
| Rank 14 latency | < 50ms | 12-25ms avg | ✅ Achieved |
| Rank 15 latency | < 30ms | 7-12ms avg | ✅ Achieved |
| E2E pipeline | < 100ms | 37-72ms avg | ✅ Achieved |
| Throughput | > 100 msg/sec | 100+ msg/sec | ✅ Achieved |
| Scalability | 100+ nodes | Supports 100+ nodes | ✅ Verified |

### 13.5. Test implementation

Test file: test_distributed_brain_simulation.py

Test classes:

| Test class | Number of tests | Target | Status |
|---|---|---|---|
| TestSpatialNodeIntegration | 5 | Rank 12-15 nodes + E2E | ✅ Completed |
| TestMultiNodeCommunication | 3 | Zenoh communication, message flow | ✅ Completed |
| TestErrorRecovery | 3 | Node failure recovery | ✅ Completed |
| TestPerformance | 2 | Latency/throughput | ✅ Completed |
| TestScalability | 1 | 100+ node validation | ✅ Completed |
| TestRaftPerformanceProfiling | 3 | profile_section measurement | ✅ Completed |

Test statistics:
- Total number of tests: 17+
- Pass rate: 100%
- Total execution time: < 5 seconds

### 13.6. Performance measurement mechanism

**profile_section context manager (pfc.py)**

```python
# Automatic performance measurement
with profile_section("section_name", performance_stats, threshold_ms=100):
    ...  # process execution

# Features:
# - automatic measurement (perf_counter accuracy)
# - moving-average tracking (100 samples)
# - warning when the threshold is exceeded
# - statistics accumulation
```

Measured sections:
- ntp_check: NTP synchronization check
- election_init: leader election initialization
- vote_collection: vote collection
- election_finalization: election completion

### 13.7. Implementation completion summary

Implementation status:
- ✅ Phase 1: node initialization & Zenoh topic design
- ✅ Phase 2: SPATIAL_WHERE implementation (DepthEstimationNetwork, SpatialCoordinateEncoder)
- ✅ Phase 3: SPATIAL_WHAT implementation (object recognition, scene understanding)
- ✅ Phase 4: integration-layer implementation (What-Where integration, attention control)
- ✅ Phase 5: testing & optimization

Deliverables:
- Implementation code: 3500+ lines
- Test code: 2000+ lines
- Documentation: DISTRIBUTED_BRAIN_SPATIAL_NODES.md

Completion date: February 17, 2026. Acceleration rate: 7.1× versus plan (12 planned weeks completed in 3.5 weeks).


## Summary

In this document, we have detailed the following aspects of EvoSpikeNet's distributed brain simulation system according to the source code implementation:

  1. Zenoh Communication: Asynchronous Pub/Sub Architecture with Version Compatibility Layer
  2. Q-PFC Feedback Loop: Quantum-inspired self-modulation mechanism and implementation details
  3. ChronoSpikeAttention: Spiking attention mechanism that guarantees temporal causality and its fixed value
  4. Distributed Execution Flow: Robust execution model including dual input paths and automatic fallbacks
  5. Dynamic infrastructure: Centralized node discovery service and dynamic model loader
  6. Performance/auxiliary functions: Fast-start sequencer and simulation recording function
  7. Long-term storage system: FAISS-based vector memory and distributed integration
  8. Advanced Spatial Processing (Feature 13 ✅): Integrated processing of visual information using 4 layers of spatial cognition nodes (Rank 12-15)

Reference implementation files:
- evospikenet/pfc.py: PFC and Q-PFC feedback loop, profile_section measurement mechanism
- evospikenet/attention.py: ChronoSpikeAttention
- evospikenet/zenoh_comm.py: Zenoh communication layer
- evospikenet/node_discovery.py: node discovery service
- evospikenet/spatial_processing.py: Feature 13 spatial processing nodes (Rank 12-15) ✅
- evospikenet/memory_nodes.py: long-term memory nodes
- examples/run_zenoh_distributed_brain.py: distributed brain execution script

Test file:
- tests/integration/test_distributed_brain_simulation.py: multi-node integration testing (17+ tests) ✅

Documents:
- docs/DISTRIBUTED_BRAIN_SPATIAL_NODES.md: Feature 13 detailed specifications ✅
- docs/DISTRIBUTED_BRAIN_VALIDATION_REPORT.md: validation report (latest version) ✅
- docs/DISTRIBUTED_BRAIN_IMPLEMENTATION_VERIFICATION.md: implementation completion summary ✅
- docs/DISTRIBUTED_BRAIN_METRICS_UI.md: front-end metrics display (installation procedures, API, operation notes)


## 14. Biomimetic Overlay (BiomimeticAdapter) ⭐ NEW 2026-02-25

### 14.1. Overview

DistributedBrainExecutor in evospikenet/eeg_integration/distributed_brain_executor.py integrates BiomimeticAdapter and automatically applies biological adjustment coefficients when generating EEG→Brain commands.

```
EEG data
  ↓
BiomimeticAdapter
  ├─ rhythm_metrics()       … δ/α band power extraction
  ├─ modulatory_gain()      … dopamine/noradrenaline-equivalent gain (0.6–1.6)
  ├─ homeostasis_scale()    … homeostatic constraint scale (0.5–1.5)
  ├─ dev_gain()             … developmental-stage gain (0.5–1.5)
  └─ sleep_state()          … sleep buffer management
  ↓
EEGBrainCommand.metadata["biomimetic"]
  ↓
DistributedBrainNode (receiver)
  └─ attaches biomimetic_gain to the response
```

### 14.2. Configuration parameters

| Setting | Type | Default | Description |
|---|---|---|---|
| enable_biomimetic | bool | True | Enable the biomimetic overlay |
| low_latency_mode | bool | False | True skips the overlay for lowest latency |
| development_stage | float | 1.0 | Developmental stage from 0.0 (initial) to 1.0 (mature) |
| energy_budget | float | 1.0 | Available energy fraction (0 to 1) |
| sleep_buffer_seconds | float | 3.0 | Sleep-buffer retention time (seconds) |

### 14.3. Implementation files

- evospikenet/eeg_integration/distributed_brain_executor.py: contains BiomimeticAdapter and DistributedBrainConfig
- evospikenet/distributed_brain_node.py: adds biomimetic_gain to response metadata
- docs/DISTRIBUTED_BRAIN_EEG_INTEGRATION.md: detailed usage examples of BiomimeticAdapter

### 14.4. Test passing status (2026-02-25)

All 23 cases below passed in a Docker (dev, ubuntu:22.04, CPU) environment:
- tests/research/test_paper_1_2_distributed_brain_architecture.py
- tests/integration/test_brain_genome_integration.py
- tests/unit/test_hierarchical_plasticity.py


## 15. Genome-driven distributed inference (Phase D) ⭐ NEW 2026-03-11

### 15.1. Overview

With the Phase D implementation, the genome produced by DistributedEvolutionEngine can be deployed directly to a running DistributedBrainNode. After deployment, _process_brain_command() on the node runs inference on the real network (InstantiatedBrain).

### 15.2. Pipeline

```
DistributedEvolutionEngine.run_evolution(generations=N)
    └─→ best_genome
           │
           ▼  deploy_to_nodes([node1, node2, ...])
    DistributedBrainNode.deploy_genome(best_genome)
           │
           ▼  GenomeToBrainConverter().instantiate(genome) → InstantiatedBrain
    _process_brain_command()
           │
           ▼  InstantiatedBrain(input_tensor)  →  confidence correction
```

### 15.3. Code example

```python
import asyncio
from evospikenet.distributed_evolution_engine import DistributedEvolutionEngine
from evospikenet.distributed_brain_node import DistributedBrainNode

engine = DistributedEvolutionEngine(config={"population_size": 50})
best   = asyncio.run(engine.run_evolution(generations=50))

pfc_node   = DistributedBrainNode("pfc",   config={"neuron_count": 1000, "specialization": "pfc"})
motor_node = DistributedBrainNode("motor", config={"neuron_count": 512,  "specialization": "motor"})
engine.deploy_to_nodes([pfc_node, motor_node])

for node in [pfc_node, motor_node]:
    assert node.get_stats()["genome_deployed"] is True
```

| File | Contents |
|---|---|
| evospikenet/brain_simulation.py | BrainSimulation (BrainSimulationFramework) wrapper added |
| evospikenet/genome_to_brain.py | InstantiatedBrain.apply_weight_delta() added |
| evospikenet/distributed_brain_node.py | deploy_genome() + genome forward pass + get_stats() update |
| evospikenet/distributed_evolution_engine.py | deploy_to_nodes() added |

## 16. Connectome Integration (Phase E) ⭐ NEW 2026-03-19

### 16.1. Overview

Phase E applies real-world neural circuit data (connectomes) directly to EvoSpikeNet's 29-node topology. Structural connection weights and axonal conduction delays are obtained from C. elegans (302 neurons), FlyWire (≈140,000 neurons), MICrONS (≈65,000 neurons), and HCP (macroscale DTI), and injected as ConnectomeLIFLayer.structural_mask (PyTorch sparse COO).

**Phase E-0/E-1/E-2 were fully completed on 2026-03-19.** All 102 tests pass.

16.2. List of implemented modules

| Module | File | Main classes/functions | Notes |
| --- | --- | --- | --- |
| Connectome loader | evospikenet/connectome_loader.py | load_json, load_npz, save_npz, stratified_sample, spectral_coarsen, load, read_etag, write_etag | F-1/F-2 reduction, ETag+TTL cache |
| Node mapping | evospikenet/connectome/node_mapping.py | get_source_for_node, build_manifest, apply_to_layer | 29 nodes ↔ data source mapping |
| Sparse delay buffer | evospikenet/connectome/delay_buffer.py | SparseDelayBuffer | COO ring buffer [max_delay+1, n_neurons] |
| Zenoh metadata publishing | evospikenet/zenoh_connectome_publisher.py | ConnectomeMetadataPublisher | Topic connectome/metadata/{node_id} |
| ConnectomeLIF layer | evospikenet/core.py | ConnectomeLIFLayer | structural_mask + attach_sparse_delay_buffer |
| Connectome density calculation | evospikenet/forgetting_controller.py | compute_connectome_density | Cooperation with Meta-STDP |
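
The exact signature of compute_connectome_density is not shown here; as a sketch of the underlying idea, density is the fraction of realized synapses among all possible neuron pairs (the function name and dense/sparse handling below are assumptions):

```python
import torch

def connectome_density(mask: torch.Tensor) -> float:
    """Fraction of realized synapses among all N*N possible pairs (sketch)."""
    n = mask.shape[0]
    nnz = mask._nnz() if mask.is_sparse else int((mask != 0).sum())
    return nnz / (n * n)

mask = torch.zeros(4, 4)
mask[0, 1] = mask[2, 3] = 1.0
print(connectome_density(mask))  # 0.125
```

A scalar like this is cheap to recompute each step, which is what makes per-step cooperation with Meta-STDP forgetting practical.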

16.3. SparseDelayBuffer — Synaptic transmission with delay

We implemented SparseDelayBuffer to mimic the axonal conduction delay between biological neurons (0.1–20 ms). Internally it is a COO-style ring buffer of shape [max_delay+1, n_neurons] that holds only non-zero delays, saving memory.

INT16 scale factor: _INT16_SCALE = 512.0 (spike potentials in ±1.0 are stored in place as INT16 values in ±512)
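
The quantization round-trip implied by this scale factor can be sketched as follows (the helper names are illustrative, not the buffer's actual API):

```python
# Sketch of the stated INT16 quantization: spike potentials in ±1.0 are
# scaled by _INT16_SCALE = 512.0 before storage and divided back on read.
import numpy as np

_INT16_SCALE = 512.0

def to_int16(spikes: np.ndarray) -> np.ndarray:
    return np.clip(spikes * _INT16_SCALE, -32768, 32767).astype(np.int16)

def from_int16(stored: np.ndarray) -> np.ndarray:
    return stored.astype(np.float32) / _INT16_SCALE

s = np.array([1.0, -1.0, 0.5])
q = to_int16(s)
print(q.tolist())              # [512, -512, 256]
print(from_int16(q).tolist())  # [1.0, -1.0, 0.5]
```

Values that are multiples of 1/512 round-trip exactly, so binary spikes (0/±1) lose no precision while halving memory versus float32.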

Mathematical model:

\[\text{buf}[t \bmod (D_{\max}+1),\ j] = s_j(t)\]

Calling step_int16(spikes) at timestep \(t\) retrieves, for each synapse with delay \(d_{ij}\), the spike of presynaptic neuron \(i\) from the older buffer slot \(\text{buf}[(t - d_{ij}) \bmod (D_{\max}+1),\ i]\).
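
The write/read pattern above can be sketched with a minimal dense ring buffer (hypothetical code; the real buffer is COO-sparse and supports per-synapse delays, while this sketch uses one delay per presynaptic neuron for brevity):

```python
# Minimal ring-buffer sketch of the delay mechanism: write current spikes
# to slot t mod (D_max+1), read each delayed spike from (t - d_i) mod (D_max+1).
import numpy as np

n, d_max = 4, 3
buf = np.zeros((d_max + 1, n), dtype=np.int16)
delay = np.array([0, 1, 2, 3])  # per-neuron delay d_i (sketch simplification)

def step(t: int, spikes: np.ndarray) -> np.ndarray:
    buf[t % (d_max + 1)] = spikes          # write: buf[t mod (D+1), j] = s_j(t)
    slots = (t - delay) % (d_max + 1)      # slot holding each neuron's delayed spike
    return buf[slots, np.arange(n)]        # read: buf[(t - d_i) mod (D+1), i]

step(0, np.array([1, 1, 1, 1], dtype=np.int16))   # all neurons fire at t=0
out = step(1, np.zeros(n, dtype=np.int16))        # at t=1, only delay-1 spike arrives
print(out.tolist())  # [0, 1, 0, 0]
```

At t=1 only neuron 1 (delay 1) delivers its t=0 spike; neurons 2 and 3 deliver theirs at t=2 and t=3 respectively.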

16.4. Code example

```python
import torch

from evospikenet import ConnectomeLIFLayer, SparseDelayBuffer
from evospikenet.connectome_loader import load_json

# 1. Convert connectome data to COO tensors
data = load_json("data/connectome/celegans_cook2019_c_elegans_302n.json")

# 2. Inject the connectome into ConnectomeLIFLayer
layer = ConnectomeLIFLayer(n_neurons=256)
layer.attach_sparse_delay_buffer(
    delay_tensor=data["delays"],   # shape [N, N]
    weight_mask=data["mask"],      # shape [N, N], bool
)

# 3. Forward loop (SNNModel performs delay routing internally)
x = torch.zeros(1, 256)
for t in range(100):
    spikes, _ = layer(x)
    # Spike output is fed into the next step's input via SparseDelayBuffer
```

16.5. Node ↔ Connectome data source support

| EvoSpikeNet Node | Applicable Data Source | Reduction Method |
| --- | --- | --- |
| memory_spike | C. elegans (302 neurons, direct mapping) | None (1× scale) |
| visual | MICrONS V1 / FlyWire visual cortex homology | F-1 stratified sampling + F-2 spectral reduction |
| auditory | MICrONS / HCP macro DTI | F-1 stratified sampling |
| spatial / spatial_integration | MICrONS parietal lobe | F-1 stratified sampling |
| pfc | HCP macro / MICrONS dlPFC | F-3 cluster representative points |
| Other 24 nodes | HCP DTI macro ROI | Integrated average |
| File | Contents |
| --- | --- |
| evospikenet/connectome_loader.py | Phase E-1 new: JSON/NPZ load, ETag+TTL cache, F-1/F-2 reduction |
| evospikenet/connectome/__init__.py | Phase E-2 new: connectome package public symbols |
| evospikenet/connectome/node_mapping.py | Phase E-2 new: node ↔ data source mapping |
| evospikenet/connectome/delay_buffer.py | Phase E-2 new: SparseDelayBuffer COO ring buffer |
| evospikenet/zenoh_connectome_publisher.py | Phase E-2 new: Zenoh connectome metadata publication |
| evospikenet/core.py | Phase E-1/E-2 extension: ConnectomeLIFLayer + SNNModel.forward delayed routing |
| evospikenet/forgetting_controller.py | Phase E-2 extension: compute_connectome_density added |
| evospikenet/__init__.py | Phase E-2 extension: 8 E-2 public symbols added |
| config/connectome_config.yaml | Phase E-0 new: CAVE API, cache, rate limit, physiological constraints |
| data/connectome/ | Phase E-1 added: C. elegans JSON + NPZ cache directory |
- tests/unit/test_snn_memory_extension.py