Distributed brain node type specifications

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

This document describes the specification of node types in the EvoSpikeNet distributed brain system. Each node type corresponds to a functional specialization in the biological brain and defines its role in the distributed system.


Node type overview

The following 12 node types are defined in the distributed brain system:

| Node type | Rank | Corresponding brain region | Main functions | Implementation status |
| --- | --- | --- | --- | --- |
| executive | 0 | Prefrontal cortex (dlPFC) | Executive control, decision making, planning | ✅ Done |
| vision | 1 | Occipital lobe (V1-V5) | Visual processing, object recognition | ✅ Done |
| spatial | 2 | Parietal lobe + occipito-parietal junction | Spatial recognition/generation, spatial attention | ✅ Done |
| motor | 3 | Motor cortex (M1) + cerebellum + spinal cord | Motor control, coordinated movement | ✅ Done |
| auditory | 5 | Temporal lobe (A1-A2) | Auditory processing, speech recognition | ✅ Done |
| speech | 6 | Broca's area | Speech production, language output | ✅ Done |
| memory_spike | N/A | Hippocampal/cortical compression pathway | Spike storage, compression, forgetting control | ✅ New implementation |
| spatial_where | 12 | Dorsal parietal lobe (Where pathway) | Spatial position/distance recognition, depth estimation | ✅ New implementation |
| spatial_what | 13 | Visual/temporal cortex (What pathway) | Visual generation, object recognition, scene understanding | ✅ New implementation |
| spatial_integration | 14 | Occipito-parietal junction | What-Where integration, world model | ✅ New implementation |
| spatial_attention | 15 | Frontal eye fields (FEF) | Spatial attention control, saccade planning | ✅ New implementation |
| general | N/A | General purpose | General processing, auxiliary functions | ✅ Done |

Implementation note: These node types are defined in evospikenet/node_types.py; each node's rank determines its processing priority (a lower rank means a higher priority). The Rank 12-15 nodes added in Feature 13 (marked "New implementation") are implemented in spatial_processing.py. Memory extension note: memory_spike refers to the spike compression layer and forgetting control (evospikenet/snn_memory_extension.py, evospikenet/forgetting_controller.py); long-term memory consolidation is implemented in evospikenet/long_term_memory.py.
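The rank-to-priority relationship described above can be sketched as an enum. This is a hypothetical illustration, not the actual contents of evospikenet/node_types.py; memory_spike and general are omitted because their rank is N/A (dynamically allocated).

```python
from enum import Enum

class NodeType(Enum):
    """Hypothetical sketch of the rank table; real definitions live in
    evospikenet/node_types.py."""
    EXECUTIVE = 0
    VISION = 1
    SPATIAL = 2
    MOTOR = 3
    AUDITORY = 5
    SPEECH = 6
    SPATIAL_WHERE = 12
    SPATIAL_WHAT = 13
    SPATIAL_INTEGRATION = 14
    SPATIAL_ATTENTION = 15

def by_priority(node_types):
    """Sort node types by rank; a lower rank means a higher priority."""
    return sorted(node_types, key=lambda t: t.value)
```

For example, `by_priority([NodeType.SPEECH, NodeType.EXECUTIVE])` places the executive node first, matching its Rank 0 (highest priority) role.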

Detailed specifications for each node type

Executive Node

Rank: 0 (highest priority)
Corresponding brain region: Prefrontal cortex (dlPFC)
Main roles:
- High-level decision making
- Task planning and execution control
- Resource allocation
- Coordination of other nodes

Implementation class: ExecutiveController
Communication pattern: Broadcast control signals, consensus-based decisions
Dependencies: Works with all node types

Vision Node

Rank: 1
Corresponding brain region: Occipital lobe (V1-V5)
Main roles:
- Visual data processing
- Object detection and recognition
- Spatial feature extraction
- Visual feedback

Implementation class: VisionProcessor
Communication pattern: Sensor data reception, encoded data transmission
Dependencies: Sensing nodes, Encoder nodes

Spatial Node

Rank: 2
Corresponding brain regions: Parietal lobe (superior parietal lobule) + occipito-parietal junction
Main roles:
- Spatial awareness and mapping
- Spatial scene generation
- Control of spatial attention
- Visual-spatial integration

Implementation class: SpatialProcessor
Communication pattern: Visual feature reception, spatial data generation, attention control signal transmission
Dependencies: Vision nodes, Executive nodes, Motor coordination

Motor Node

Rank: 3
Corresponding brain regions: Motor cortex (M1) + cerebellum + spinal cord
Main roles:
- Movement command generation
- Motion coordination control
- Dynamics calculation
- Feedback control

Implementation class: MotorController
Communication pattern: Consensus-based cooperation
Dependencies: Executive nodes, Sensory feedback

Auditory Node

Rank: 5
Corresponding brain region: Temporal lobe (A1-A2)
Main roles:
- Audio signal processing
- Speech recognition
- Environmental sound analysis
- Auditory feedback

Implementation class: AuditoryProcessor
Communication pattern: Audio stream processing, feature extraction
Dependencies: Audio sensors, Encoder nodes

Speech Node (speech generation node)

Rank: 6
Corresponding brain region: Broca's area
Main roles:
- Language generation
- Speech synthesis
- Communication output
- Expression control

Implementation class: SpeechGenerator
Communication pattern: Text input, voice output
Dependencies: Language models, Motor coordination


Feature 13: Advanced spatial processing nodes (Rank 12-15) ✅ New implementation completed

An advanced spatial cognition and generation system added to EvoSpikeNet's distributed brain system. These nodes simulate the visual system of the biological brain (occipital lobe-temporal lobe-parietal lobe).

Implementation file: spatial_processing.py (3500+ lines)
Test file: test_distributed_brain_simulation.py (2000+ lines)
Implementation completed: February 17, 2026

Spatial Where Node

Rank: 12
Corresponding brain region: Dorsal parietal lobe (LIP, MT+, V5A)
Main roles:
- Recognition of spatial position, distance, and direction
- Focusing of visual attention
- Eye movement (saccade) planning
- Generation of an adaptive spatial coordinate system

Implementation class: SpatialWhereNode (spatial_processing.py)
Input: Visual features from Rank 1 (Vision)
Output: Spatial coordinates, depth map, retinal center coordinates
Communication pattern: Zenoh Pub/Sub (spikes/spatial/where/*)
Performance: < 50 ms average latency

Features:
- ✅ CoordinateTransformer: Transformation between coordinate systems (egocentric ↔ allocentric)
- ✅ DepthEstimationNetwork: Monocular depth estimation (CNN-based)
- ✅ SpatialCoordinateEncoder: 3D coordinate → spike representation conversion
- ✅ Retinotopic map: Simulates the retinotopic structure of the visual cortex

Dependencies: Vision nodes, Spatial Integration nodes
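The egocentric ↔ allocentric transformation handled by CoordinateTransformer amounts to a rigid-body transform. The sketch below is a minimal 2D illustration of that idea, not the actual CoordinateTransformer API; the function names and the 2D simplification are assumptions.

```python
import math

def ego_to_allo(agent_pos, agent_heading, ego_point):
    """Egocentric -> allocentric: rotate the agent-relative offset by the
    agent's heading, then translate by the agent's world position."""
    x, y = ego_point
    c, s = math.cos(agent_heading), math.sin(agent_heading)
    return (agent_pos[0] + c * x - s * y,
            agent_pos[1] + s * x + c * y)

def allo_to_ego(agent_pos, agent_heading, allo_point):
    """Allocentric -> egocentric: the inverse transform (translate back,
    then rotate by the negative heading)."""
    dx = allo_point[0] - agent_pos[0]
    dy = allo_point[1] - agent_pos[1]
    c, s = math.cos(-agent_heading), math.sin(-agent_heading)
    return (c * dx - s * dy, s * dx + c * dy)
```

Applying one transform and then the other recovers the original point, which is the round-trip property any such coordinate transformer must satisfy.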

Spatial What Node (Visual generation/recognition node)

Rank: 13
Corresponding brain region: Visual cortex/temporal cortex (IT cortex)
Main roles:
- Object recognition and classification
- Scene understanding
- Visual attribute extraction
- Processing of higher-order visual information

Implementation class: SpatialWhatNode (spatial_processing.py)
Input: Low-level visual features from Rank 1 (Vision)
Output: Object class probabilities, scene graph, attribute vector
Communication pattern: Zenoh Pub/Sub (spikes/spatial/what/*)
Performance: < 30 ms average latency

Features:
- ✅ Object recognition: 100+ class classification (ImageNet-compatible)
- ✅ Scene understanding: Inferring relationships between objects
- ✅ Attribute extraction: Features such as color, size, and orientation
- ✅ Multi-scale processing: Integration of local to global features

Dependencies: Vision nodes, Spatial Integration nodes

Spatial Integration Node (What-Where Integration Node)

Rank: 14
Corresponding brain region: Occipito-parietal junction (parieto-temporal junction)
Main roles:
- Integration of the Where and What pathways
- Generation of a unified visual representation
- Maintenance of the world model
- Predictive coding

Implementation class: SpatialIntegrationNode (spatial_processing.py)
Input: Outputs of both Rank 12 (Where) and Rank 13 (What)
Output: Integrated visual representation, world state, predictions
Communication pattern: Zenoh Pub/Sub (spikes/spatial/integration/*)
Performance: < 50 ms average latency

Features:
- ✅ Multi-head attention mechanism: Weighting of Where/What information
- ✅ Spatial structure encoding: Relative positions of objects
- ✅ World model update: Internal representation of the environment
- ✅ Prediction module: Next-frame prediction

Dependencies: Spatial Where/What nodes, Attention control nodes
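The attention-based weighting of Where/What information can be illustrated with a single-head toy version: score each stream against a query, softmax the scores, and take the convex combination. This is a pedagogical sketch, not the multi-head implementation in spatial_processing.py; `fuse_where_what` and its signature are assumptions.

```python
import math

def fuse_where_what(where_vec, what_vec, query):
    """Weight the Where and What streams by a softmax over query-stream
    dot products, then return the weighted combination (single-head toy
    version of the attention fusion described above)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    scores = [dot(query, where_vec), dot(query, what_vec)]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    w_where, w_what = exps[0] / z, exps[1] / z
    fused = [w_where * a + w_what * b for a, b in zip(where_vec, what_vec)]
    return fused, (w_where, w_what)
```

A query aligned with the Where stream yields a larger Where weight, which is how task context can bias the integration toward spatial or object information.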

Spatial Attention Control Node

Rank: 15
Corresponding brain region: Frontal eye fields (FEF)
Main roles:
- Directing spatial attention
- Saccade (rapid eye movement) planning and execution
- Setting attention priorities
- Motor modulation

Implementation class: SpatialAttentionControlNode (spatial_processing.py)
Input: Integrated representation from Rank 14 (Integration)
Output: Attention priority map, saccade target, modulation signal
Communication pattern: Zenoh Pub/Sub (spikes/spatial/attention/*)
Performance: < 30 ms average latency

Features:
- ✅ Attention priority map: Projection to the visual cortex
- ✅ Saccade planning: Determination of the target position
- ✅ Modulation strength: Gating signal
- ✅ Task-driven control: Integration of high-level task information

Dependencies: Spatial Integration nodes, Motor coordination nodes
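Determining the saccade target from an attention priority map can, in the simplest case, be a winner-take-all selection over the map. The helper below is a hypothetical illustration of that step, not the planner in spatial_processing.py.

```python
def select_saccade_target(priority_map):
    """Pick the next saccade target as the argmax of a 2D attention
    priority map (toy winner-take-all version of saccade planning)."""
    best, best_pos = float("-inf"), (0, 0)
    for r, row in enumerate(priority_map):
        for c, value in enumerate(row):
            if value > best:
                best, best_pos = value, (r, c)
    return best_pos  # (row, col) of the highest-priority location
```

In practice the real node would also apply inhibition-of-return and task-driven modulation before committing to a target.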


Spatial Processing System Integration

Integrated system class: DistributedSpatialCortex (spatial_processing.py)
Implementation status: ✅ Fully implemented

```
# Integrated system structure
DistributedSpatialCortex
  ├── spatial_where_node: SpatialWhereNode
  ├── spatial_what_node: SpatialWhatNode
  ├── spatial_integration_node: SpatialIntegrationNode
  ├── spatial_attention_node: SpatialAttentionControlNode
  └── performance_stats: Dict[str, Any]  # profile_section measurement data
```

E2E pipeline:
1. Visual input (Rank 1)
2. Where processing (Rank 12): spatial position/depth
3. What processing (Rank 13): object recognition
4. Integration (Rank 14): What-Where fusion
5. Attention control (Rank 15): saccade plan
6. Motor output (Rank 3)
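The six-stage flow can be sketched as a simple function chain. Every stage below is a placeholder standing in for the corresponding node; the dictionary contents are illustrative, not the real message schema.

```python
def run_spatial_pipeline(frame):
    """Toy end-to-end flow mirroring the six pipeline stages; each stage
    here is a placeholder, not the real node implementation."""
    features = {"frame": frame}                   # 1. visual input (Rank 1)
    where = {"depth": 1.5, **features}            # 2. Where: position/depth (Rank 12)
    what = {"label": "cup", **features}           # 3. What: object recognition (Rank 13)
    integrated = {**where, **what}                # 4. Integration: What-Where fusion (Rank 14)
    saccade = {"target": (0.2, -0.1)}             # 5. Attention control: saccade plan (Rank 15)
    return {"saccade": saccade,                   # 6. hand off to motor output (Rank 3)
            "context": integrated}
```

In the real system each arrow in this chain is a Zenoh Pub/Sub hop on the spikes/spatial/* key space rather than a direct function call.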

Test statistics:
- Total test cases: 17+
- Spatial integration tests: 5+
- Performance measurement tests: 3+
- Error recovery tests: 3+
- Pass rate: 100%


General Node

Rank: N/A (dynamic allocation)
Corresponding brain region: General purpose area
Main roles:
- Auxiliary processing
- Load balancing
- Backup function
- Experimental features

Implementation class: GeneralProcessor
Communication pattern: Flexible messaging
Dependencies: Dynamic, depending on the situation

    • Learning: Data augmentation/self-supervised filtering (improved noise tolerance)

  • Node 4: Vision Encoder

    • Role: Image → embed conversion
    • Model: ViT (Vision Transformer) series, ResNet → projection head, or Spiking-ViT for event data
    • Data: ImageNet, COCO, domain-specific data (with metadata at collection time)
    • Learning: Pre-training (large-scale data) → domain fine-tuning, continual learning in some cases
  • Node 5: Audio Encoder

    • Model: Wav2Vec2 / HuBERT → embeddings
    • Data: LibriSpeech, AudioSet, domain speech corpus
    • Learning: Pre-training + task fine-tuning (speech classification/transcription)
  • Node 6: Text Encoder

    • Model: SentenceTransformer (SBERT series) or a lightweight transformer embedder
    • Data: Wikipedia, CC-News, specialized domain corpus
    • Learning: Pre-training → task fine-tuning (for semantic search)
  • Node 7: Spiking Encoder

    • Model: SNN (Spiking Neural Network) based encoder (for event cameras)
    • Data: DVS (Dynamic Vision Sensor) dataset, etc.
    • Training: STDP / surrogate-gradient training / ANN-to-SNN conversion learning
  • Nodes 8-12: Inference nodes (Inference x5)

    • Node 8: LM-Inference (short text/dialogue)
      • Model: Small to medium-sized transformer LM (hundreds of millions to billions of parameters)
      • Data: conversation corpus, system prompts, history
      • Training: Online fine-tuning of pre-trained models (retraining managed by Trainer node)
  • Node 9: Classifier/Detector

    • Model: YOLOvX / Faster R-CNN / ResNet-based classifier
    • Data: COCO, OpenImages, dedicated annotations
    • Learning: Transfer learning + continuous labeling (human-in-the-loop)

  • Node 10: Spiking-LM (bio-inspired generation)

    • Model: Generation/memory interface using a spiking neural network
    • Data: Sensor time series + event history
    • Learning: Biomimetic online adaptation (fine-tuning with small amounts of data)

  • Node 11: Ensemble / Multimodal Inference

    • Role: Integrate outputs of encoder/inference nodes to generate highly reliable output
    • Method: Reliability estimation using weighted ensembles and meta-learning

  • Node 12: Retrieval-Augmented Generation (RAG)

    • Role: Support LM inference by adding context from memory nodes
    • Model: Lightweight retriever + LM

  • Nodes 13-14: Long-term memory nodes (Long-Term Memory x2)

    • Role: Manage episodic memory (event-based) and semantic memory (knowledge-based)
    • Model: FAISS-based vector search, Zenoh communication integration
    • Data: Spike embeddings, time-series events, metadata
    • Learning: Online adaptation, importance-based retention/forgetting
    • Features: Similarity search, associative recall, memory consolidation

  • Node 17-18: Decision node (Decision x2)

    • Node 17: High Level Planner
      • Role: Receive the goal and generate multiple candidates (subgoals)
      • Model: Reinforcement learning-based policy or Symbolic Planner + learned policy
      • Learning: Reinforcement learning in simulation (PPO/IMPALA, etc.) + on-site fine-tuning
  • Node 18: Execution Controller

    • Role: Convert plans into motor instructions (works with MotorConsensus)
    • Model: Existing motor model + actuator output coordination based on distributed consensus

  • Node 19-20: Memory node (Memory x2)

    • Node 19: Vector DB (Milvus/FAISS)
      • Role: Embedding storage and fast neighborhood search
      • Data: embeddings, metadata, signal time information
      • Operations: Replica, sharding, TTL policy
  • Node 20: Episodic memory (time-series storage)

    • Role: Raw event log/transaction storage, long-term history
    • Storage: MinIO / time-series DB

  • Node 21: Training node (Trainer x1)

    • Role: Batch model training / distributed training / federated aggregation
    • Method: PyTorch DDP / Horovod / Federated Averaging (server side)
    • Features: checkpoints, metric aggregation, model distribution for A/B testing
  • Nodes 22-23: Aggregation/Arbitration Node (Aggregator x2)

    • Node 22: Federator
      • Role: Federated learning aggregation (secure aggregation, applying differential privacy)
    • Node 23: Results Aggregator
      • Role: Integrate outputs from multiple nodes (weighting, reliability management), support policy decisions
  • Nodes 24-25: Management/Utilities (Management x2)

    • Node 24: Authentication/Authorization/Configuration Distribution
      • Role: API key management, RBAC, TLS certificate management
    • Node 25: Monitoring/Logging
      • Role: Metric visualization with Prometheus/Grafana, log aggregation with ELK
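The federated aggregation performed by the Trainer and Federator nodes above typically follows the Federated Averaging pattern: each client's parameters are weighted by its share of the total sample count. The function below is a minimal sketch of that server-side step; the name and the flat-list parameter representation are assumptions.

```python
def federated_average(client_weights, client_sizes):
    """Server-side Federated Averaging: average client parameter vectors,
    weighting each client by its fraction of the total sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]
```

With equal client sizes this reduces to a plain mean; in the secure-aggregation setting described for the Federator, the server would only ever see masked sums, not individual client updates.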

5. Model assignment and learning method for each node (details)

  • Prior principles:

    • The main models follow the path "Pretrain → Domain fine-tune → Continual learning during operation (Continual / Federated)".
    • Privacy protection is applied by default (differential privacy, secure aggregation, encrypted transfer).
  • Data pipeline:

    • The observation → encoder → vector DB/inference path runs as streaming; sampled data is simultaneously supplied to the Trainer in batches.
    • Annotations are added step by step using Human-in-the-loop, and only quality-guaranteed data is input into learning.
  • Learning method (by node):

    • Encoders (Vision/Audio/Text/Spiking): Large-scale pre-training (distributed GPUs) → domain fine-tuning (fast, with small amounts of data) → knowledge distillation at the edge
    • Inference nodes (LM, etc.): Fine-tune a subset of the pre-trained LM's parameters on a specific task. Online fine-tuning is done with the approval of the Trainer.
    • Trainer (Node 21): Model weight aggregation, validation, model signing, and distribution. During federated learning, aggregates securely together with the Aggregator.
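The differential-privacy step applied before aggregation usually means clipping each client update and adding calibrated Gaussian noise. This is a toy sketch of that idea under assumed names (`dp_gaussian`, `clip_norm`, `sigma`), not the project's actual privacy implementation.

```python
import random

def dp_gaussian(update, clip_norm, sigma):
    """Clip a client update to L2 norm clip_norm, then add Gaussian noise
    scaled by sigma * clip_norm (toy Gaussian-mechanism step applied
    before secure aggregation)."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + random.gauss(0.0, sigma * clip_norm) for x in clipped]
```

Clipping bounds each client's influence on the aggregate, which is what makes the added noise yield a formal privacy guarantee.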

6. Operational precautions (safety, redundancy, monitoring)

  • Redundancy: Critical nodes (memory, aggregator, auth) are operated with multiple replicas and automatic failover is configured.
  • Security: mTLS, RBAC, API key rotation, audit log storage
  • Monitoring: Collect latency, throughput, memory/GPU usage, and training metrics

7. Complete brain function (End-to-End)

  • Observation: Sensor collects data and passes it to encoder
  • Perception: Encoder extracts features, stores them in vector DB, and provides them to inference nodes at the same time
  • Inference: Perform contextual responses and classification with RAG and LM
  • Storage/retrieval: Vector DB returns similar context, long-term episodic memory provides history
  • Learning: Trainer updates the model with new data, aggregated and distributed securely in Aggregator
  • Decision making/action: Decision node creates action plan and controls actuator through MotorConsensus

  1. Create a candidate list of model artifact names that can be implemented for each node (e.g. resnet50-v2, wav2vec2-large, gpt-small-v1)
  2. Scheduling (K8s) template creation on 24 nodes (including CPU/GPU/memory requests)
  3. Full-scale load test planning and CI integration (automated benchmarking)

Based on this document, more detailed node implementation specifications (API, message schema, model versioning policy, etc.) can be created.


How to use

  • Each section indicates a classification (function).
  • The category examples are candidate labels that can be used during implementation or model registration.

Observation node (Sensing)

  • Category examples: vision, audio, sensor
  • Node function: Environmental data capture/preprocessing (image/audio/sensor values)
  • Input: Raw data stream (camera, microphone, IoT sensor)
  • Output: normalized features, encoder input (tensor/binary)
  • Required resources: CPU/GPU, low-latency I/O, conversion library (OpenCV, librosa, etc.)
  • Permissions: Authorization for sensor data acquisition, privacy control
  • Note: Filter/sampling and local privacy processing are recommended

Encoding node (Encoding / Feature Extractor)

  • Category examples: vision-encoder, audio-encoder, text-encoder, spiking-encoder
  • Node function: Embed raw data/convert to low-dimensional representation
  • Input: observation node output, raw data
  • Output: embedding vector, feature tensor
  • Required resources: GPU, model artifacts, batch processing pipeline
  • Permission: Permission to use the model (license/API key)
  • Notes: Manage compatibility of embedding dimensions and formats

Understanding/Inference node (Perception / Inference)

  • Category examples: classifier, detector, lm-inference, spiking-lm
  • Node functions: Inference processing such as classification, detection, generation, etc.
  • Input: Embeddings, prompts, context
  • Output: label/score, generated text, confidence level
  • Required resources: large model (GPU/TPU), low-latency inference environment
  • Authority: Confidential data handling control, authentication (X-API-Key, etc.)
  • Notes: Design distributed inference according to latency constraints

Decision / Actuation node

  • Category examples: planner, policy, controller, action-executor
  • Node functions: action decisions based on inference results, external system control
  • Input: inference results, goals, constraints
  • Output: control commands, action plans, API calls
  • Required resources: real-time communication, safeguards, authorization flows
  • Authorization: Multi-level authorization of actuator control
  • Notes: Failsafe, logging required

Storage node (Memory / Storage / Retriever)

  • Category examples: episodic-memory, vector-db, retriever, knowledge-base
  • Node function: Long and short-term memory retention/retrieval, history management
  • Input: Write requests for generation results, sensor history, and metadata
  • Output: Search results, context fragments (text/embeddings)
  • Required resources: Persistent storage (MinIO, DB), Vector DB (Milvus/FAISS)
  • Permissions: data access control, encryption, audit logging
  • Notes: Define TTL/sanitization, access control policy

Learning node (Learning / Trainer / Updater)

  • Category examples: trainer, federated-learner, fine-tuner
  • Node features: online/batch learning, model updates, federated learning
  • Input: training data, validation data, hyperparameters
  • Output: model artifacts, metrics, training logs
  • Required resources: GPU cluster, data transfer bandwidth, checkpoint area
  • Permissions: Model signing/approval workflow, upload permissions
  • Note: Implement safe model distribution and a rollback mechanism

Aggregator/Arbitration Node (Aggregator / Orchestrator)

  • Category examples: federator, aggregator, coordinator
  • Node functions: output aggregation, weighting, and consensus building of multiple nodes
  • Input: output, metastatus, health information for each node
  • Output: aggregate decisions, routing instructions, statistical metrics
  • Required resources: low-latency communication, state management, transaction control
  • Authority: Authentication of communication between nodes, trust policy
  • Notes: Includes failover strategy and version compatibility

Utility / Management Node

  • Category examples: monitoring, logging, health-check, auth
  • Node functions: logging, monitoring, health checks, API key management
  • Input: metrics, log events, configuration change requests
  • Output: alerts, dashboard data, authentication tokens
  • Required resources: time series DB, monitoring tools, authentication infrastructure
  • Permissions: administrator role control, audit log retention policy
  • Note: Prioritize security and observability


Creation date: 2025-12-27


Implemented Features

Long-Term Memory System

The following long-term memory related features have been implemented:

1. Memory node classes

  • LongTermMemoryNode: Base class, FAISS vector search and Zenoh communication integration
  • EpisodicMemoryNode: Time-series event-based episodic memory
  • SemanticMemoryNode: Factual knowledge-based semantic memory
  • MemoryIntegratorNode: Integration and association of episodic memory and semantic memory

2. Core features

  • Vector similarity search: Fast neighborhood search using FAISS
  • Zenoh distributed communication: Pub/Sub communication for inter-node memory operations
  • PTP time synchronization: generate timestamps with nanosecond precision (fallback to system time in test environments)
  • Memory consolidation: Importance-based retention/forget policy
  • Crossmodal associations: associations between different memory types
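The FAISS-based similarity search listed above boils down to nearest-neighbour lookup over stored embedding vectors. The pure-Python class below is a stand-in illustrating the semantics of a flat L2 index (mirroring the add/search shape of faiss.IndexFlatL2); it is not the memory-node implementation, and a real deployment would use FAISS for speed.

```python
class FlatL2Index:
    """Minimal brute-force stand-in for a flat L2 vector index,
    illustrating the nearest-neighbour lookup memory nodes rely on."""

    def __init__(self):
        self.vectors = []

    def add(self, vecs):
        """Append a batch of vectors (lists of floats) to the index."""
        self.vectors.extend(vecs)

    def search(self, query, k):
        """Return (distances, ids) of the k vectors closest to query
        under squared L2 distance, nearest first."""
        scored = [(sum((a - b) ** 2 for a, b in zip(query, v)), i)
                  for i, v in enumerate(self.vectors)]
        scored.sort()
        top = scored[:k]
        return [d for d, _ in top], [i for _, i in top]
```

The importance-based retention/forgetting policy would then operate on the stored ids, dropping or consolidating entries rather than letting the index grow without bound.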

3. Implementation file

  • evospikenet/memory_nodes.py: Memory node implementation
  • examples/run_zenoh_distributed_brain.py: Memory integration into distributed brains
  • tests/test_memory_nodes.py: Comprehensive test suite (9 test cases)
  • requirements.txt: Added FAISS dependency
  • Dockerfile: FAISS installation settings
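The PTP timestamping with system-time fallback (core feature 2 above) can be sketched as follows. `ptp_clock` is a hypothetical callable standing in for the real PTP source; only the fallback behaviour mirrors what the feature list describes.

```python
import time

def nanosecond_timestamp(ptp_clock=None):
    """Return a nanosecond-precision timestamp from a PTP clock source
    when one is provided, falling back to the system clock otherwise
    (e.g. in test environments without a PTP device)."""
    if ptp_clock is not None:
        try:
            return int(ptp_clock())
        except OSError:
            pass  # PTP device unavailable; fall back to system time
    return time.time_ns()
```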

4. Test results

```
================== test session starts ==================
collected 9 items

tests/test_memory_nodes.py::TestLongTermMemoryNode::test_initialization PASSED
tests/test_memory_nodes.py::TestLongTermMemoryNode::test_store_memory PASSED
tests/test_memory_nodes.py::TestLongTermMemoryNode::test_query_memory PASSED
tests/test_memory_nodes.py::TestLongTermMemoryNode::test_retrieve_memory PASSED
tests/test_memory_nodes.py::TestEpisodicMemoryNode::test_store_episodic_sequence PASSED
tests/test_memory_nodes.py::TestSemanticMemoryNode::test_store_knowledge PASSED
tests/test_memory_nodes.py::TestMemoryIntegratorNode::test_associate_memories PASSED
tests/test_memory_nodes.py::TestMemoryEntry::test_memory_entry_creation PASSED
tests/test_memory_nodes.py::TestMemoryEntry::test_memory_entry_serialization PASSED

================== 9 passed, 4 warnings in 0.33s ==================
```

5. Distributed brain integration

  • Addition of memory nodes to the 24-node architecture
  • Real-time memory operations using Zenoh communication protocol
  • Persistent knowledge retention for long-term learning and adaptation

Next implementation plan (Remaining Features)

  1. Vector DB integration: Migration to distributed vector databases such as Milvus
  2. Memory Optimization: Efficient processing and sharding of large memory sets
  3. Learning integration: Continuous learning by replaying experiences from memory
  4. Enhanced security: Memory data encryption and access control
  5. Performance monitoring: Latency and throughput monitoring of memory operations
  6. Distributed consensus: Consistency assurance protocol among multiple memory nodes