
Full brain operation flow/sequence of distributed brain simulation

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

This document describes in detail the operation flow and sequence of the full-brain (24-node) configuration in EvoSpikeNet's distributed brain simulation. It walks through the complete processing pipeline in chronological order, including the integration of the long-term memory nodes.

1. Overall architecture overview

Layer structure

  • Sensing Layer: Collection and initial processing of external inputs
  • Encoding Layer: Feature extraction and embedding generation
  • Cognition Layer: Semantic understanding and reasoning
  • Decision Layer: Generate action plan
  • Long-Term Memory Layer: Episodic/semantic memory management
  • Memory Layer: Short-term/working memory and retrieval
  • Learning Layer: Model adaptation and update (supports a per-node-type LLM training pipeline)
  • Management Layer: Monitoring and controlling the entire system

Communication protocol

  • Zenoh: Real-time communication between nodes (Pub/Sub)
  • AEG-Comm: Adaptive energy gating communication control (3-layer safety architecture)
  • REST/gRPC: Configuration updates and management operations
  • PTP: Time synchronization
  • Heartbeat: Node health monitoring
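
As a rough illustration of the pub/sub pattern the layers rely on, here is a minimal in-process stand-in for Zenoh-style topic matching (the `Bus` class is hypothetical; real nodes use the Zenoh client library and its key-expression syntax):

```python
import fnmatch
from collections import defaultdict

class Bus:
    """Minimal in-process stand-in for Zenoh-style pub/sub with wildcard topics."""
    def __init__(self):
        self._subs = defaultdict(list)  # pattern -> list of callbacks

    def subscribe(self, pattern, callback):
        self._subs[pattern].append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber whose pattern matches the topic.
        for pattern, callbacks in self._subs.items():
            if fnmatch.fnmatch(topic, pattern):
                for cb in callbacks:
                    cb(topic, payload)

bus = Bus()
received = []
bus.subscribe("input/*", lambda t, p: received.append((t, p)))
bus.publish("input/camera", b"frame-0")
bus.publish("decision/plan", b"ignored")  # no matching subscriber
```

In the real system, an encoding node would subscribe to `input/*` in exactly this spirit and republish its embeddings on `encoded/features`.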

2. Initialization sequence

Phase 1: Infrastructure startup (0-30 seconds)

  1. Start Zenoh Router
     • Start message routing on port 7447
     • Enable the node discovery service
  2. PTP time synchronization initialization
     • Synchronize all nodes to the master clock
     • Timestamp accuracy: <1μs
  3. Safety system activation
     • Enable the FPGA safety monitor
     • Prepare the emergency stop circuit
  4. Start node discovery
     • Each node self-registers with Zenoh
     • Build the network topology

Phase 2: Node initialization (30-90 seconds)

  1. Observation nodes (Nodes 1-3)
     • Sensor connection and calibration
     • Apply initial filters
  2. Encoding nodes (Nodes 4-7)
     • Model loading and warm-up
     • Embedding dimension verification
  3. Cognitive nodes (Nodes 8-12)
     • LLM/inference model initialization
     • RAG system connection
  4. Long-term memory nodes (Nodes 13-14)
     • FAISS index initialization (LongTermMemoryNode)
     • Spike compression layer initialization (LargeScaleSpikeReservoir + CompressedMemoryLayer)
     • Enable forgetting control/retention scoring (ForgettingController)
     • Load existing memory data
  5. Decision nodes (Nodes 15-16)
     • PFC engine start
     • Policy model load
  6. Storage nodes (Nodes 17-18)
     • Vector DB connection
     • Cache initialization
  7. Learning node (Node 19)
     • Distributed training environment preparation
     • Node-type-aware LLM training: the training script reads evospikenet/node_types.py and selects the collection/training targets automatically via the --node-type option; datasets and hyperparameters specific to each functional area are applied.
  8. Aggregation nodes (Nodes 20-21)
     • Federation settings
  9. Management nodes (Nodes 22-23)
     • Launch the monitoring dashboard
     • Start log aggregation

Phase 3: System Verification (90-120 seconds)

  1. Health check
     • Check Zenoh heartbeats on all nodes
     • Check memory usage and CPU load
  2. Connection test
     • End-to-end messaging validation
     • Check timeout settings
  3. Initial training data load
     • Inject base knowledge into the long-term memory nodes

3. Normal operation sequence

Input processing flow (real time)

#### Step 1: Observation and initial processing (0-10ms)

```
External input → Observation nodes → Filtering/sync → Zenoh: "input/raw"
```

1. **Camera/mic input**
   - Observation nodes 1-3 collect data
   - Denoising and normalization
   - Zenoh topics: `input/camera`, `input/audio`

2. **Sensor data integration**
   - Synchronize IMU/temperature data in chronological order
   - Missing value interpolation
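
The missing-value interpolation step can be sketched as a simple linear fill over a chronologically ordered sample series (`interpolate_missing` is an illustrative helper, not the project API):

```python
def interpolate_missing(samples):
    """Linearly interpolate None gaps in a chronologically ordered series."""
    out = list(samples)
    for i, v in enumerate(out):
        if v is None:
            # Find the nearest known neighbour on each side.
            lo = next((j for j in range(i - 1, -1, -1) if out[j] is not None), None)
            hi = next((j for j in range(i + 1, len(out)) if out[j] is not None), None)
            if lo is not None and hi is not None:
                frac = (i - lo) / (hi - lo)
                out[i] = out[lo] + frac * (out[hi] - out[lo])
            elif lo is not None:
                out[i] = out[lo]   # trailing gap: hold the last value
            elif hi is not None:
                out[i] = out[hi]   # leading gap: back-fill
    return out

imu = [0.0, None, 2.0, None, 4.0]
filled = interpolate_missing(imu)
```

A production node would additionally resample the IMU/temperature streams onto a common PTP-synchronized time base before interpolating.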

#### Step 2: Feature encoding (10-50ms)

```
Zenoh: "input/*" → Encoding nodes → Embedding generation → Zenoh: "encoded/features"
```

1. **Vision encoding**
   - Image feature extraction with ViT/ResNet
   - 768-dimensional embedding generation

2. **Audio encoding**
   - Wav2Vec/spectrogram conversion
   - 512-dimensional embedding generation

3. **Multimodal fusion**
   - Integration with cross-attention
   - Unified embedding output
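
The cross-attention fusion in step 3 can be sketched with NumPy. This single-head version assumes the audio embeddings have already been projected to the same 768-dimensional width as the vision tokens; it is a simplification, not the project's actual fusion module:

```python
import numpy as np

def cross_attention(query, keys, values):
    """Single-head cross-attention: each query token attends over key/value pairs."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)             # (nq, nk) scaled similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ values                          # (nq, d) fused output

rng = np.random.default_rng(0)
vision = rng.normal(size=(4, 768))   # 4 visual tokens, 768-d (per step 1)
audio = rng.normal(size=(6, 768))    # audio tokens, assumed projected to 768-d
fused = cross_attention(vision, audio, audio)
```

Here the vision tokens act as queries and the audio tokens as keys/values, so each fused vector is an audio-informed re-weighting of the visual token.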

#### Step 3: Cognitive/reasoning processing (50-200ms)

```
Zenoh: "encoded/features" → Cognitive nodes → Semantic understanding → Zenoh: "cognition/results"
```

1. **Semantic understanding**
   - Context analysis with LLM
   - Emotion/intention recognition

2. **RAG extension**
   - Retrieve related context from memory nodes
   - Add context to LLM inference

3. **Classification/Detection**
   - Object recognition/scene understanding
   - Event detection

#### Step 3.5: Spatial recognition/generation processing (150-250ms) - Feature 13 implementation ✅

**Feature 13: Advanced spatial processing node (Rank 12-15)** - Implementation completed on 2026-02-17
```
Zenoh: "vision/features"
  ├─ SpatialWhereNode (R12)            → Spatial position/depth  → <50ms
  ├─ SpatialWhatNode (R13)             → Object recognition      → <30ms
  ├─ SpatialIntegrationNode (R14)      → What-Where integration  → <50ms
  └─ SpatialAttentionControlNode (R15) → Attention control       → <30ms
→ Zenoh: "spatial/context"
```
**Processing steps:**

1. **Where path processing (Rank 12: SpatialWhereNode)** - <50ms
   - Input: Visual features from Vision node (Rank 1)
   - Processing:
     * `DepthEstimationNetwork`: CNN-based monocular depth estimation
     * `CoordinateTransformer`: Egocentric ↔ Allocentric coordinate system transformation
     * `SpatialCoordinateEncoder`: 3D coordinate → spike representation conversion
     * Retinotopic map: Simulates the retinotopic structure of the visual cortex
   - Output: spatial coordinates, depth map, retinal center coordinates
   - Zenoh send: `spikes/spatial/where/*` topic

2. **What route processing (Rank 13: SpatialWhatNode)** - <30ms
   - Input: Low-level visual features from Vision node (Rank 1)
   - Processing:
     * Object recognition/classification: 100+ classes (ImageNet compliant)
     * Scene understanding: Inferring relationships between objects
     * Multi-scale processing: local to global feature integration
     * Attribute extraction: feature calculation such as color, size, orientation, etc.
   - Output: class probabilities, scene graphs, attribute vectors
   - Zenoh send: `spikes/spatial/what/*` topic

3. **What-Where integration processing (Rank 14: SpatialIntegrationNode)** - <50ms
   - Input: Integration of Rank 12 (Where) and Rank 13 (What)
   - Processing:
     * Multi-head attention mechanism: weighting of Where/What information
     * Spatial structure encoding: relative positional relationship of objects
     * World model update: Build an internal representation of the environment
     * Prediction part: Visual prediction of next frame
   - Output: unified visual representation, world state, predictions
   - Zenoh sending: `spikes/spatial/integration/*` topic

4. **Attention Control/Saccade Plan (Rank 15: SpatialAttentionControlNode)** - <30ms
   - Input: Integrated representation of Rank 14 (Integration)
   - Processing:
     * Attentional priority map generation: projection to visual cortex
     * Saccade (rapid eye movement) planning: target position determination
     * Modulation intensity control: gating signal generation
     * Task-driven control: Integration of high-level task information
   - Output: attention priority map, saccade target, modulation signal
   - Zenoh send: `spikes/spatial/attention/*` topic
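
The egocentric ↔ allocentric conversion performed in the Where path (step 1 above) boils down to a rigid-body transform. A 2D sketch, with illustrative function names rather than the real `CoordinateTransformer` API:

```python
import math

def ego_to_allo(point, agent_pos, agent_yaw):
    """Transform an egocentric point (x forward, y left) into allocentric
    world coordinates, given the agent's position and heading (radians)."""
    x, y = point
    c, s = math.cos(agent_yaw), math.sin(agent_yaw)
    return (agent_pos[0] + c * x - s * y,
            agent_pos[1] + s * x + c * y)

def allo_to_ego(point, agent_pos, agent_yaw):
    """Inverse transform: world coordinates back into the agent's frame."""
    dx, dy = point[0] - agent_pos[0], point[1] - agent_pos[1]
    c, s = math.cos(agent_yaw), math.sin(agent_yaw)
    return (c * dx + s * dy, -s * dx + c * dy)

# An object 1 m ahead of an agent standing at (2, 3) and facing +y (yaw = 90°)
# should land at world coordinates (2, 4).
p_world = ego_to_allo((1.0, 0.0), (2.0, 3.0), math.pi / 2)
```

The real node extends this to 3D using the estimated depth map, then feeds the coordinates to the spike encoder.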

#### Step 3.6: SNN memory expansion and forgetting control (parallel, 100-200ms)

```
Zenoh: "spikes/*" → CompressedMemoryLayer → LargeScaleSpikeReservoir → retention scoring (ForgettingController)
```

1. **Spike compression/storage**
   - Input: spike train/buffer generated by each module
   - Processing: Compression judgment → Adaptive compression → Metadata addition (importance/access frequency)
   - Output: compressed spike record ID, retention meta information

2. **Forgetting control/capacity management**
   - Processing: `ForgettingController` scores based on importance, frequency, and plasticity load
   - When over capacity: prune low scoring records to avoid destructive forgetting

3. **Long-term memory consolidation**
   - `LongTermMemoryModule` associates episode/semantic tags with spike records
   - Compressed spikes are restored and reused during associative search
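
The retention scoring and capacity pruning in step 2 can be sketched as follows. The weights and score formula here are assumptions for illustration; the actual `ForgettingController` heuristics (including plasticity load) are not reproduced:

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    record_id: str
    importance: float      # assigned at storage time
    access_count: int = 0  # bumped on each retrieval

def retention_score(rec, w_importance=0.7, w_frequency=0.3):
    # Weighted blend of importance and a bounded access-frequency term.
    freq = min(rec.access_count / 10.0, 1.0)
    return w_importance * rec.importance + w_frequency * freq

def prune(records, capacity):
    """Keep only the top-`capacity` records by retention score."""
    ranked = sorted(records, key=retention_score, reverse=True)
    return ranked[:capacity]

records = [
    MemoryRecord("ep-1", importance=0.9, access_count=5),
    MemoryRecord("ep-2", importance=0.2, access_count=0),
    MemoryRecord("ep-3", importance=0.6, access_count=20),
]
kept = prune(records, capacity=2)  # low-scoring "ep-2" is dropped
```

Pruning by score rather than by age is what lets the system avoid destructive forgetting of rarely accessed but important memories.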

**Integrated system:** `DistributedSpatialCortex` (evospikenet/spatial_processing.py)
- Integrated management of 4 ranks 12-15 nodes
- Implementation size: 891 lines
- Test statistics: 17+ test cases, 100% pass rate

**Performance results:**
- Total latency of all paths: ~150ms (within target time)
- Peak throughput: 60+ fps
- System acceleration rate: 7.1x (12 weeks plan → 3.5 weeks implementation)

**Detailed specifications:** [DISTRIBUTED_BRAIN_SPATIAL_NODES.md](DISTRIBUTED_BRAIN_SPATIAL_NODES.md) | [DISTRIBUTED_BRAIN_NODE_TYPES.md](DISTRIBUTED_BRAIN_NODE_TYPES.md) | [DISTRIBUTED_BRAIN_SYSTEM.md](DISTRIBUTED_BRAIN_SYSTEM.md)

#### Step 3.5: Spatial recognition/generation processing (150-250ms)

```
Zenoh: "vision/features" → Spatial nodes → Spatial recognition/generation → Zenoh: "spatial/context"
```

1. **Spatial awareness**
   - Spatial mapping from visual features
   - Estimation of location, distance, and relationships

2. **Space generation**
   - Scene generation and mental-image creation
   - Optimization of 3D spatial layout

3. **Visual-spatial integration**
   - Occipito-parietal junction simulation
   - Generation of an integrated spatial representation

#### Step 4: Long-term memory consolidation (200-300ms)

```
Zenoh: "cognition/results" → Long-term memory nodes → Memory storage/retrieval → Zenoh: "memory/context"
```

**Updated December 31, 2025**: Implemented a new long-term storage system using FAISS-based vector search.

1. **Episodic memory storage**
   - Save time series events with `EpisodicMemoryNode.store_episodic_sequence()`
   - PTP synchronized timestamp added, sequence position information added to each event

2. **Semantic memory update**
   - Store concepts and knowledge with `SemanticMemoryNode.store_knowledge()`
   - Links to related concepts; importance is automatically set to 2.0

3. **Fast vector search**
   - Cosine similarity search using a FAISS index
   - Real-time similar-memory retrieval (a few milliseconds)

4. **Crossmodal association**
   - Memory integration with `MemoryIntegratorNode.associate_memories()`
   - Association between episodic and semantic memory

5. **Zenoh distributed communication**
   - Topics: `memory/episodic/store`, `memory/semantic/query` etc.
   - Real-time memory sharing between nodes
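
The episodic-storage step can be sketched with a toy store that stamps each event with a timestamp and a sequence position, mirroring what `EpisodicMemoryNode.store_episodic_sequence()` is described as doing (this class is illustrative, not the project API):

```python
import time

class EpisodicStore:
    """Toy episodic store: each event gets a timestamp and sequence position.
    (Illustrative only; the real node vectorizes events and persists them in FAISS,
    and timestamps come from the PTP-synchronized clock.)"""
    def __init__(self):
        self.episodes = []

    def store_episodic_sequence(self, events, clock=time.time):
        episode = [
            {"event": ev, "timestamp": clock(), "sequence_pos": i}
            for i, ev in enumerate(events)
        ]
        self.episodes.append(episode)
        return len(self.episodes) - 1  # episode id, acknowledged back over Zenoh

store = EpisodicStore()
ep_id = store.store_episodic_sequence(["door_opened", "person_entered", "greeting"])
```

Keeping the sequence position alongside the timestamp is what later allows ordered replay of an episode even if individual events are retrieved out of order.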

#### Step 5: Decision making (300-500ms)

```
Zenoh: "cognition/results" + "memory/context" → Decision-making nodes → Plan generation → Zenoh: "decision/plan"
```

1. **Situation assessment**
   - Integration of current state and past context
   - Goal setting

2. **Plan generation**
   - Create a multi-step action plan
   - Risk assessment

3. **Execution control**
   - Convert plans into motor instructions
   - Start the feedback loop

#### Step 6: Action execution and learning (500ms-continuous)

```
Zenoh: "decision/plan" → Motor nodes → Actuator control → Feedback collection
```

1. **Motor consensus**
   - Command coordination via distributed consensus
   - Cooperative movement execution

2. **Results monitoring**
   - Sensor feedback collection
   - Performance evaluation

3. **Online learning**
   - Save results to long-term memory
   - Model parameter updates

4. Consolidation flow of long-term memory

Updated December 31, 2025: Detailed sequence of new FAISS + Zenoh-based long-term storage system.

Memory storage sequence

  1. Event detection: Cognitive nodes detect important events
  2. Vectorization: Convert events to 128-768 dimensional vectors
  3. Add metadata: Add timestamp, importance, and context information
  4. Zenoh Send: Publish to memory/episodic/store topic
  5. FAISS index update: Save with LongTermMemoryNode.store_memory()
  6. Acknowledgment: Return the memory ID via Zenoh

Memory retrieval sequence

  1. Query generation: Cognitive node generates search vector
  2. Zenoh Query: Request to memory/episodic/query topic
  3. FAISS Search: Get top-k results with cosine similarity
  4. Filtering: Narrow down results by importance threshold
  5. Return context: Reply related memory with Zenoh
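
Steps 3-4 of the retrieval sequence (FAISS top-k search plus threshold filtering) behave like a cosine top-k over normalized vectors. A NumPy stand-in, assuming FAISS would use an inner-product index over L2-normalized vectors:

```python
import numpy as np

def cosine_top_k(index_vectors, query, k=5, min_score=0.0):
    """Top-k cosine-similarity search with a score threshold
    (illustrative stand-in for the FAISS index, not the project API)."""
    idx = index_vectors / np.linalg.norm(index_vectors, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = idx @ q                       # cosine similarity per stored memory
    order = np.argsort(-scores)[:k]        # best-first indices
    # The threshold corresponds to the importance filtering in step 4.
    return [(int(i), float(scores[i])) for i in order if scores[i] >= min_score]

memories = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
hits = cosine_top_k(memories, np.array([1.0, 0.1]), k=2)
```

The returned (index, score) pairs are what would be serialized and published back on the reply topic.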

Memory Consolidation Sequence

  1. Multimodal input: Receive episodes and semantic queries
  2. Parallel Search: Search performed on both storage types at the same time
  3. Association calculation: Score-based memory-to-memory association
  4. Integration result: Generate cross-modal context
  5. Zenoh Delivery: Deliver a unified memory context
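
The association step (3) can be sketched as scoring pairs of episodic/semantic hits and keeping the pairs whose combined score clears a threshold. The geometric-mean scoring rule here is a hypothetical choice for illustration:

```python
def associate(episodic_hits, semantic_hits, threshold=0.5):
    """Pair episodic and semantic search results whose combined relevance
    (geometric mean of the two similarity scores) clears a threshold."""
    pairs = []
    for e_id, e_score in episodic_hits:
        for s_id, s_score in semantic_hits:
            combined = (e_score * s_score) ** 0.5
            if combined >= threshold:
                pairs.append((e_id, s_id, combined))
    # Best associations first: this ordering forms the cross-modal context.
    return sorted(pairs, key=lambda p: -p[2])

episodic = [("ep-7", 0.9), ("ep-2", 0.3)]
semantic = [("concept-greeting", 0.8), ("concept-door", 0.4)]
links = associate(episodic, semantic)
```

Only the strong episode `ep-7` survives the threshold here, linked to both concepts; weakly matching pairs are dropped before delivery.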

Performance characteristics

  • Storage latency: <10ms (FAISS index update)
  • Search latency: <5ms (vector search)
  • Throughput: 1000+ queries/sec
  • Memory efficiency: importance-based automatic organization
  • Scalability: nodes can be added via distributed Zenoh communication


Memory association and forgetting sequence

  1. Relevance assessment: Association between episodic and semantic memory
  2. Associative generation: Graph-based chaining of related memories
  3. Importance update: Memory enhancement based on access frequency
  4. Forgetting process: automatic deletion of low importance memories

5. Error handling and recovery sequence

At node failure

  1. Failure detection: Detected by missing heartbeat
  2. Isolation: Remove the failed node from the network
  3. Redundancy switch: Start backup node
  4. State synchronization: State restoration from long-term memory
  5. Reintegration: Rejoin the network after recovery
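
Failure detection by missing heartbeat reduces to a timeout scan over last-seen timestamps. A minimal sketch (the node ids and the 3-second timeout are assumptions, not the system's actual configuration):

```python
def detect_failed_nodes(last_heartbeat, now, timeout=3.0):
    """Return the ids of nodes whose last heartbeat is older than `timeout` seconds."""
    return sorted(node for node, t in last_heartbeat.items() if now - t > timeout)

# Last heartbeat times (seconds, on the PTP-synchronized clock):
heartbeats = {"node-01": 100.0, "node-02": 97.5, "node-03": 95.0}
failed = detect_failed_nodes(heartbeats, now=100.5)  # only node-03 has timed out
```

A detected node id would then drive the isolation and redundancy-switch steps above.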

When memory overflows

  1. Usage monitoring: Regularly check memory usage
  2. Importance-based organization: Delete low importance entries
  3. Compression: Merging similar memories
  4. External Storage: Move old memories to permanent storage

6. Performance indicators

Latency goal

  • Observation → Encoding: <50ms
  • Encoding → Recognition: <150ms
  • Cognition → Decision making: <200ms
  • Decision → Action: <100ms
  • Total end-to-end: <500ms

Memory management

  • Long-term memory entries: up to 10,000
  • Vector dimension: 768
  • Search accuracy: >90% (Top-5)
  • Forgetting rate: automatic optimization

Scalability

  • Adding nodes: zero downtime
  • Load balancing: Zenoh automatic routing
  • Federation: multiple instance integration

With this operational flow, the full brain achieves adaptability and learning capabilities approaching those of a biological brain. Long-term memory consolidation lets the system leverage past experience to make smarter decisions.