# Full brain operation flow/sequence of distributed brain simulation

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).
This document describes in detail the operation flow and sequence of a full brain (24-node configuration) in EvoSpikeNet's distributed brain simulation. It walks through the complete processing pipeline in chronological order, including the integration of the long-term memory nodes.
## 1. Overall architecture overview

### Layer structure

- Sensing Layer: Collection and initial processing of external inputs
- Encoding Layer: Feature extraction and embedding generation
- Cognition Layer: Semantic understanding and reasoning
- Decision Layer: Action plan generation
- Long-Term Memory Layer: Episodic/semantic memory management
- Memory Layer: Short-term/working memory and retrieval
- Learning Layer: Model adaptation and updates (supports a per-node-type LLM training pipeline)
- Management Layer: Monitoring and control of the entire system

### Communication protocols

- Zenoh: Real-time inter-node communication (Pub/Sub)
- AEG-Comm: Adaptive energy-gating communication control (3-layer safety architecture)
- REST/gRPC: Configuration updates and management operations
- PTP: Time synchronization
- Heartbeat: Node health monitoring
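To make the Pub/Sub model concrete, here is a minimal in-process stand-in for Zenoh-style topic routing with trailing wildcards (e.g. `input/*`). This is an illustrative sketch, not the real `zenoh` client API; `MiniBus` and its methods are hypothetical names.

```python
# Toy in-process pub/sub bus illustrating Zenoh-style topic matching.
# Hypothetical API; the real system uses the Zenoh router on port 7447.
from collections import defaultdict
from fnmatch import fnmatch

class MiniBus:
    def __init__(self):
        self._subs = defaultdict(list)  # pattern -> list of callbacks

    def subscribe(self, pattern, callback):
        self._subs[pattern].append(callback)

    def publish(self, topic, payload):
        # Deliver to every subscriber whose pattern matches the topic.
        for pattern, callbacks in self._subs.items():
            if fnmatch(topic, pattern):
                for cb in callbacks:
                    cb(topic, payload)

bus = MiniBus()
received = []
bus.subscribe("input/*", lambda t, p: received.append((t, p)))
bus.publish("input/camera", {"frame": 1})
bus.publish("decision/plan", {"step": 0})  # not matched by "input/*"
```

A subscriber to `input/*` receives the camera message but not the decision-plan message, mirroring how the encoding layer listens only to observation topics.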
## 2. Initialization sequence

### Phase 1: Infrastructure startup (0-30 seconds)

1. **Start Zenoh Router**
   - Start message routing on port 7447
   - Enable node discovery service
2. **PTP time synchronization initialization**
   - Master clock synchronization on all nodes
   - Timestamp accuracy: <1 μs
3. **Safety system activation**
   - FPGA safety monitor enabled
   - Emergency stop circuit preparation
4. **Start node discovery**
   - Each node self-registers with Zenoh
   - Network topology construction
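The self-registration and topology-construction step can be sketched as a small registry that groups nodes by layer. Names such as `NodeRegistry` are illustrative and not taken from the EvoSpikeNet codebase.

```python
# Minimal sketch of Phase 1 node self-registration and topology construction.
# Hypothetical class; the real system registers nodes via Zenoh discovery.
from collections import defaultdict

class NodeRegistry:
    def __init__(self):
        self.nodes = {}  # node_id -> layer name

    def register(self, node_id, layer):
        self.nodes[node_id] = layer

    def topology(self):
        # Group registered nodes by layer, mirroring the layer structure above.
        layers = defaultdict(list)
        for node_id, layer in sorted(self.nodes.items()):
            layers[layer].append(node_id)
        return dict(layers)

reg = NodeRegistry()
for nid in (1, 2, 3):
    reg.register(nid, "sensing")
for nid in (13, 14):
    reg.register(nid, "long_term_memory")
topo = reg.topology()
```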
### Phase 2: Node initialization (30-90 seconds)

1. **Observation nodes (Nodes 1-3)**
   - Sensor connection and calibration
   - Apply initial filters
2. **Encoding nodes (Nodes 4-7)**
   - Model loading and warm-up
   - Embedding dimension verification
3. **Cognitive nodes (Nodes 8-12)**
   - LLM/inference model initialization
   - RAG system connection
4. **Long-term memory nodes (Nodes 13-14)**
   - FAISS index initialization (`LongTermMemoryNode`)
   - Spike compression layer initialization (`LargeScaleSpikeReservoir` + `CompressedMemoryLayer`)
   - Forgetting control/retention scoring enabled (`ForgettingController`)
   - Load existing memory data
5. **Decision nodes (Nodes 15-16)**
   - PFC engine start
   - Policy model load
6. **Storage nodes (Nodes 17-18)**
   - Vector DB connection
   - Cache initialization
7. **Learning node (Node 19)**
   - Distributed training environment preparation
   - Node-type-aware LLM training: the training script refers to `evospikenet/node_types.py` and automatically selects the collection/training targets via the `--node-type` option. Datasets and hyperparameters specific to each functional area are applied.
8. **Aggregation nodes (Nodes 20-21)**
   - Federation settings
9. **Management nodes (Nodes 22-23)**
   - Launch monitoring dashboard
   - Start log aggregation
### Phase 3: System verification (90-120 seconds)

1. **Health check**
   - Check Zenoh heartbeat on all nodes
   - Memory usage and CPU load check
2. **Connection test**
   - End-to-end messaging validation
   - Check timeout settings
3. **Initial training data load**
   - Inject base knowledge into long-term memory nodes
## 3. Normal operation sequence

### Input processing flow (real time)
#### Step 1: Observation and initial processing (0-10ms)

```
External input → Observation nodes → Filtering/sync → Zenoh: "input/raw"
```

1. **Camera/mic input**
   - Observation nodes 1-3 collect data
   - Denoising and normalization
   - Zenoh topics: `input/camera`, `input/audio`
2. **Sensor data integration**
   - Synchronize IMU/temperature data in chronological order
   - Missing value interpolation
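The missing-value interpolation in the sensor integration step can be sketched as simple linear interpolation over a synchronized series. This is an illustrative stand-in, assuming gaps occur only between known samples; the node's actual filter is not shown in this document.

```python
# Linearly fill interior None gaps in a synchronized sensor time series.
# Illustrative sketch only; assumes the first and last samples are present.
def interpolate_missing(series):
    out = list(series)
    for i, v in enumerate(out):
        if v is None:
            # Find the nearest known neighbours on each side.
            lo = next(j for j in range(i - 1, -1, -1) if out[j] is not None)
            hi = next(j for j in range(i + 1, len(out)) if out[j] is not None)
            frac = (i - lo) / (hi - lo)
            out[i] = out[lo] + frac * (out[hi] - out[lo])
    return out

filled = interpolate_missing([1.0, None, None, 4.0])
```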
#### Step 2: Feature encoding (10-50ms)

```
Zenoh: "input/*" → Encoding nodes → Embedding generation → Zenoh: "encoded/features"
```

1. **Vision encoding**
   - Image feature extraction with ViT/ResNet
   - 768-dimensional embedding generation
2. **Audio encoding**
   - Wav2Vec/spectrogram conversion
   - 512-dimensional embedding generation
3. **Multimodal fusion**
   - Integration via cross-attention
   - Unified embedding output
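As a shape-level illustration of the fusion step: the real system uses cross-attention, but a minimal sketch that just concatenates the 768-d vision and 512-d audio embeddings and L2-normalizes shows what the unified output looks like dimensionally. `fuse` is a hypothetical helper, not the project's API.

```python
# Simplified stand-in for multimodal fusion: concatenate 768-d vision and
# 512-d audio embeddings, then L2-normalize. The production path uses
# cross-attention instead of plain concatenation.
import math

def fuse(vision_emb, audio_emb):
    fused = list(vision_emb) + list(audio_emb)          # 768 + 512 = 1280 dims
    norm = math.sqrt(sum(x * x for x in fused)) or 1.0  # avoid division by zero
    return [x / norm for x in fused]

fused = fuse([0.1] * 768, [0.2] * 512)
```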
#### Step 3: Cognitive/reasoning processing (50-200ms)

```
Zenoh: "encoded/features" → Cognitive nodes → Semantic understanding → Zenoh: "cognition/results"
```

1. **Semantic understanding**
   - Context analysis with LLM
   - Emotion/intention recognition
2. **RAG extension**
   - Retrieve related context from memory nodes
   - Add context to LLM inference
3. **Classification/detection**
   - Object recognition/scene understanding
   - Event detection
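The RAG extension step can be sketched as: score stored snippets against the query, then prepend the best match to the LLM prompt. The toy token-overlap scorer below stands in for the real vector search, and all names (`rag_augment`, the prompt template) are illustrative.

```python
# Hedged sketch of the RAG extension: retrieve the most relevant stored
# snippet (here by token overlap, standing in for vector similarity) and
# prepend it as context to the LLM prompt.
def rag_augment(query, memory_snippets, top_k=1):
    q_tokens = set(query.lower().split())
    scored = sorted(
        memory_snippets,
        key=lambda s: len(q_tokens & set(s.lower().split())),
        reverse=True,
    )
    context = "\n".join(scored[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = rag_augment(
    "where is the charging station",
    ["the charging station is in room B", "today is sunny"],
)
```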
#### Step 3.5: Spatial recognition/generation processing (150-250ms) - Feature 13 implementation ✅
**Feature 13: Advanced spatial processing node (Rank 12-15)** - Implementation completed on 2026-02-17
**Processing steps:**
1. **Where path processing (Rank 12: SpatialWhereNode)** - <50ms
- Input: Visual features from Vision node (Rank 1)
- Processing:
* `DepthEstimationNetwork`: CNN-based monocular depth estimation
* `CoordinateTransformer`: Egocentric ↔ Allocentric coordinate system transformation
* `SpatialCoordinateEncoder`: 3D coordinate → spike representation conversion
* Retinotopic map: Simulates the retinotopic structure of the visual cortex
- Output: spatial coordinates, depth map, retinal center coordinates
- Zenoh send: `spikes/spatial/where/*` topic
2. **What path processing (Rank 13: SpatialWhatNode)** - <30ms
- Input: Low-level visual features from Vision node (Rank 1)
- Processing:
* Object recognition/classification: 100+ classes (ImageNet compliant)
* Scene understanding: Inferring relationships between objects
* Multi-scale processing: local to global feature integration
* Attribute extraction: feature calculation such as color, size, orientation, etc.
- Output: class probabilities, scene graphs, attribute vectors
- Zenoh send: `spikes/spatial/what/*` topic
3. **What-Where integration processing (Rank 14: SpatialIntegrationNode)** - <50ms
- Input: Integration of Rank 12 (Where) and Rank 13 (What)
- Processing:
* Multi-head attention mechanism: weighting of Where/What information
* Spatial structure encoding: relative positional relationship of objects
* World model update: Build an internal representation of the environment
* Prediction part: Visual prediction of next frame
- Output: unified visual representation, world state, predictions
   - Zenoh send: `spikes/spatial/integration/*` topic
4. **Attention control/saccade planning (Rank 15: SpatialAttentionControlNode)** - <30ms
   - Input: Integrated representation from Rank 14 (Integration)
   - Processing:
     * Attention priority map generation: projection to visual cortex
     * Saccade (rapid eye movement) planning: target position determination
     * Modulation intensity control: gating signal generation
     * Task-driven control: integration of high-level task information
   - Output: attention priority map, saccade target, modulation signal
   - Zenoh send: `spikes/spatial/attention/*` topic

#### Step 3.6: SNN memory expansion and forgetting control (parallel, 100-200ms)

1. **Spike compression/storage**
   - Input: spike trains/buffers generated by each module
   - Processing: compression decision → adaptive compression → metadata addition (importance/access frequency)
   - Output: compressed spike record ID, retention metadata
2. **Forgetting control/capacity management**
   - Processing: `ForgettingController` scores records based on importance, access frequency, and plasticity load
   - When over capacity: prune low-scoring records to avoid catastrophic forgetting
3. **Long-term memory consolidation**
   - `LongTermMemoryModule` associates episodic/semantic tags with spike recordings
   - Compressed spikes are restored and reused during associative search
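The scoring-and-pruning logic described for the forgetting control step can be sketched as below. The weights and record fields are illustrative assumptions; the real `ForgettingController` API is not reproduced here.

```python
# Illustrative retention scoring and capacity pruning in the spirit of
# ForgettingController: score = importance + access frequency, penalized by
# plasticity load; prune the lowest scorers when over capacity.
def retention_score(rec, w_imp=0.5, w_freq=0.4, w_load=0.1):
    # Higher plasticity load counts against retention (assumed weighting).
    return (w_imp * rec["importance"]
            + w_freq * rec["access_freq"]
            - w_load * rec["plasticity_load"])

def prune(records, capacity):
    if len(records) <= capacity:
        return records
    return sorted(records, key=retention_score, reverse=True)[:capacity]

records = [
    {"id": "a", "importance": 0.9, "access_freq": 0.8, "plasticity_load": 0.1},
    {"id": "b", "importance": 0.2, "access_freq": 0.1, "plasticity_load": 0.9},
    {"id": "c", "importance": 0.7, "access_freq": 0.5, "plasticity_load": 0.3},
]
kept = prune(records, capacity=2)
```

Record "b" scores lowest on all three factors and is pruned first, keeping the high-importance, frequently accessed records.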
**Integrated system:** `DistributedSpatialCortex` (evospikenet/spatial_processing.py)
- Integrated management of the four Rank 12-15 nodes
- Implementation size: 891 lines
- Test statistics: 17+ test cases, 100% pass rate
**Performance results:**
- Total latency of all paths: ~150ms (within target time)
- Peak throughput: 60+ fps
- System acceleration rate: 7.1x (12 weeks plan → 3.5 weeks implementation)
**Detailed specifications:** [DISTRIBUTED_BRAIN_SPATIAL_NODES.md](DISTRIBUTED_BRAIN_SPATIAL_NODES.md) | [DISTRIBUTED_BRAIN_NODE_TYPES.md](DISTRIBUTED_BRAIN_NODE_TYPES.md) | [DISTRIBUTED_BRAIN_SYSTEM.md](DISTRIBUTED_BRAIN_SYSTEM.md)
#### Step 3.5 flow summary: Spatial recognition/generation (150-250ms)

```
Zenoh: "vision/features" → Spatial nodes → Spatial recognition/generation → Zenoh: "spatial/context"
```

1. **Spatial awareness**
   - Spatial mapping from visual features
   - Estimation of location, distance, and relationships
2. **Space generation**
   - Scene generation and mental image creation
   - Optimization of 3D spatial layout
3. **Visual-spatial integration**
   - Occipito-parietal junction simulation
   - Generation of an integrated spatial representation
#### Step 4: Long-term memory consolidation (200-300ms)

```
Zenoh: "cognition/results" → Long-term memory nodes → Memory store/retrieve → Zenoh: "memory/context"
```

**Updated December 31, 2025**: Implemented a new long-term storage system using FAISS-based vector search.
1. **Episodic memory storage**
   - Save time-series events with `EpisodicMemoryNode.store_episodic_sequence()`
   - PTP-synchronized timestamps and sequence position information are added to each event
2. **Semantic memory update**
   - Store concepts and knowledge with `SemanticMemoryNode.store_knowledge()`
   - Links to related concepts; importance level automatically set to 2.0
3. **Fast vector search**
   - Cosine similarity search using a FAISS index
   - Real-time similar-memory retrieval (a few milliseconds)
4. **Crossmodal association**
- Memory integration with `MemoryIntegratorNode.associate_memories()`
- Association between episodic and semantic memory
5. **Zenoh distributed communication**
- Topics: `memory/episodic/store`, `memory/semantic/query` etc.
- Real-time memory sharing between nodes
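The store-and-search cycle of Step 4 can be illustrated with a pure-Python stand-in for the FAISS-backed index: vectors are stored with metadata and queried by cosine similarity. `TinyMemory` is a hypothetical class; the actual `LongTermMemoryNode` API and 128-768-dimensional vectors are not reproduced (short vectors are used for readability).

```python
# Pure-Python stand-in for FAISS-based episodic storage and cosine search.
import math

class TinyMemory:
    def __init__(self):
        self.entries = []  # (memory_id, vector, metadata)

    def store(self, vec, meta):
        memory_id = len(self.entries)
        self.entries.append((memory_id, vec, meta))
        return memory_id

    def search(self, query, top_k=1):
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        # Rank all stored vectors by similarity to the query, return top-k IDs.
        ranked = sorted(self.entries, key=lambda e: cos(query, e[1]), reverse=True)
        return [e[0] for e in ranked[:top_k]]

mem = TinyMemory()
mem.store([1.0, 0.0, 0.0], {"event": "saw door"})
mem.store([0.0, 1.0, 0.0], {"event": "heard bell"})
hits = mem.search([0.9, 0.1, 0.0], top_k=1)
```

A query vector close to the first stored event retrieves that event's memory ID, which would then be published back over Zenoh as `memory/context`.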
#### Step 5: Decision making (300-500ms)

```
Zenoh: "cognition/results" + "memory/context" → Decision nodes → Plan generation → Zenoh: "decision/plan"
```

1. **Situation assessment**
   - Integration of current state and past context
   - Goal setting
2. **Plan generation**
   - Create multi-step action plans
   - Risk assessment
3. **Execution controller**
   - Convert plans into motor instructions
   - Start the feedback loop
#### Step 6: Action execution and learning (500ms-continuous)

```
Zenoh: "decision/plan" → Motor nodes → Actuator control → Feedback collection
```

1. **Motor consensus**
   - Command coordination via distributed consensus
   - Cooperative movement execution
2. **Results monitoring**
   - Sensor feedback collection
   - Performance evaluation
3. **Online learning**
   - Save results in long-term memory
   - Model parameter updates
## 4. Consolidation flow of long-term memory

**Updated December 31, 2025**: Detailed sequence of the new FAISS + Zenoh-based long-term storage system.
### Memory storage sequence

1. **Event detection**: Cognitive nodes detect important events
2. **Vectorization**: Convert events to 128-768 dimensional vectors
3. **Metadata addition**: Add timestamp, importance, and context information
4. **Zenoh send**: Publish to the `memory/episodic/store` topic
5. **FAISS index update**: Save with `LongTermMemoryNode.store_memory()`
6. **Acknowledgment**: Reply with the memory ID over Zenoh
### Memory retrieval sequence

1. **Query generation**: A cognitive node generates a search vector
2. **Zenoh query**: Request to the `memory/episodic/query` topic
3. **FAISS search**: Get top-k results by cosine similarity
4. **Filtering**: Narrow down results by importance threshold
5. **Context return**: Reply with related memories over Zenoh
### Memory consolidation sequence

1. **Multimodal input**: Receive episodic and semantic queries
2. **Parallel search**: Search both storage types simultaneously
3. **Association calculation**: Score-based memory-to-memory association
4. **Integration**: Generate cross-modal context
5. **Zenoh delivery**: Deliver a unified memory context
### Performance characteristics

- Store latency: <10ms (FAISS index update)
- Retrieval latency: <5ms (vector search)
- Throughput: 1000+ queries/sec
- Memory efficiency: importance-based automatic organization
- Scalability: nodes can be added via distributed Zenoh communication
### Memory maintenance sequence

1. **Relevance assessment**: Association between episodic and semantic memory
2. **Association generation**: Graph-based chaining of related memories
3. **Importance update**: Memory reinforcement based on access frequency
4. **Forgetting process**: Automatic deletion of low-importance memories
## 5. Error handling and recovery sequence

### On node failure

1. **Failure detection**: Detected via missing heartbeats
2. **Isolation**: Remove the failed node from the network
3. **Redundancy switch**: Start a backup node
4. **State synchronization**: Restore state from long-term memory
5. **Reintegration**: Rejoin the network after recovery
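The heartbeat-based failure detection step can be sketched as a timeout check over the last heartbeat timestamps. The 3.0-second timeout and the timestamp values are illustrative assumptions, not documented system parameters.

```python
# Flag nodes whose last heartbeat is older than the timeout (illustrative
# values; the real monitor runs over Zenoh heartbeat topics).
def detect_failures(last_heartbeat, now, timeout=3.0):
    return sorted(node for node, ts in last_heartbeat.items() if now - ts > timeout)

# Node 2's heartbeat is 4.0s stale, exceeding the 3.0s timeout.
failed = detect_failures({1: 10.0, 2: 6.5, 3: 9.8}, now=10.5, timeout=3.0)
```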
### On memory overflow

1. **Usage monitoring**: Regularly check memory usage
2. **Importance-based organization**: Delete low-importance entries
3. **Compression**: Merge similar memories
4. **External storage**: Move old memories to permanent storage
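The "merge similar memories" compression step can be illustrated by averaging memory vectors whose cosine similarity exceeds a threshold. This is a sketch of the idea under an assumed 0.95 threshold, not the production merging logic.

```python
# Merge near-duplicate memory vectors into their element-wise mean.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def merge_similar(vectors, threshold=0.95):
    merged = []
    for v in vectors:
        for i, m in enumerate(merged):
            if cosine(v, m) >= threshold:
                merged[i] = [(a + b) / 2 for a, b in zip(m, v)]  # average duplicates
                break
        else:
            merged.append(list(v))
    return merged

# The first two vectors are nearly identical and collapse into one entry.
out = merge_similar([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]])
```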
## 6. Performance indicators

### Latency targets

- Observation → Encoding: <50ms
- Encoding → Cognition: <150ms
- Cognition → Decision making: <200ms
- Decision → Action: <100ms
- End-to-end total: <500ms
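A quick arithmetic check shows the per-stage budgets exactly fill the end-to-end target, leaving no slack (figures copied from the list above):

```python
# Per-stage latency budgets (ms) from this section; their sum should not
# exceed the <500ms end-to-end target.
stage_budgets_ms = {
    "observation_to_encoding": 50,
    "encoding_to_cognition": 150,
    "cognition_to_decision": 200,
    "decision_to_action": 100,
}
total_ms = sum(stage_budgets_ms.values())
within_budget = total_ms <= 500
```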
### Memory management

- Long-term memory entries: up to 10,000
- Vector dimension: 768
- Search accuracy: >90% (Top-5)
- Forgetting rate: automatically optimized
### Scalability

- Adding nodes: zero downtime
- Load balancing: Zenoh automatic routing
- Federation: multi-instance integration
This operational flow allows the full brain to achieve adaptability and learning capabilities approaching those of a biological brain. Long-term memory consolidation lets the system leverage past experience to make smarter decisions.