NEUROSCIENCE BRAIN SIMULATION PAPER
EvoSpikeNet Whole Brain Simulation: For Neuroscientists
Author: Masahiro Aoki
Copyright: 2026 Moonlight Technologies Inc.
- Document ID: MT2026-AI-01-001
- ORCID ID: 0009-0007-9222-4181
- Affiliation: Moonlight Technologies Co., Ltd.
Abstract
In this paper we present the design and implementation of "EvoSpikeNet", a distributed, modular simulation platform that approximates the whole human brain using spiking neural networks and a biologically inspired control architecture. Unlike conventional deep learning and reinforcement learning, our system is characterized by time-series spike propagation, plasticity-dependent self-regulation, and precise synchronization between geographically dispersed nodes. It targets capabilities where existing AI has been weak: maintaining temporal causality and a natural hierarchy up to higher-order spatial cognition, while incorporating the physical latency constraints of neural circuits into the evaluation metrics. The system consists of specialized nodes (visual, auditory, spatial, linguistic, motor, and memory) coupled through low-latency middleware (Zenoh) and supervised by a prefrontal control module equipped with quantum-inspired self-modulation. This paper describes the conceptual framework, functional layers, mathematical models, data flow, and cortical region correspondence; the remaining functions are listed in the specifications. Biological behaviors (hippocampus, amygdala, cerebellum, neuromodulation, etc.) have already been implemented. We present neuron models, plasticity rules, attention mechanisms, and time-synchronization equations, together with detailed block diagrams and brain region maps.
Keywords: Spiking neural network, distributed brain simulation, PFC, ChronoSpikeAttention, STDP, whole brain, cortical mapping
1. Introduction
The human brain contains approximately 86 billion neurons and 100 trillion synapses, differentiated into specialized areas for hearing, language, memory, and motor control. Existing computational models usually focus on a single function; EvoSpikeNet is an attempt to build "whole-brain simulation" as an extensible distributed software framework that integrates these multiple modules.
The purpose of this paper is as follows.
- Maintaining functional decomposition corresponding to cortical areas.
- Real-time operation (≤200ms end-to-end) using biologically plausible spiking models.
- Continuous learning and self-healing supported by plasticity (STDP, meta‑STDP) and an evolutionary genome layer.
- Exposing parameters, timing, and state variables as physiological indicators to enable neuroscientific verification.
- Implement biological domains including structures such as the hippocampus, amygdala, and basal ganglia.
The target audience is neuroscientists and system architects who want to understand this platform in depth.
Differences from conventional AI
Conventional neural networks are based mainly on dense vector computations over activation values, and are not well suited to tasks that require preserving the causal structure of time-series information, dynamic memory management, or tight synchronization between geographically distributed processes. EvoSpikeNet treats spike times as physical events on the time axis rather than as binary values. Using node synchronization with Zenoh and PTP, it has demonstrated performance in areas where conventional AI has been weak, such as robotics, spatial navigation, and real-time EEG closed loops.
Specific examples:
- ChronoSpikeAttention provides continuous attention that preserves temporal causality when tracking and predicting moving objects in 3D space.
- For low-latency (<50ms) motion control, PTP synchronization aligns the firing timing of each module at the nanosecond level, achieving reaction speeds not possible with traditional batch inference.
- In continuous learning, Meta‑STDP suppresses catastrophic forgetting, making long-term memory possible.
2. System architecture and biological correspondence
Every inter-node message carries a PTP timestamp. The spatial cognition node (Feature 13) sits in the cognitive layer and corresponds to the occipito-parietal cortex.
2.1 Block diagram
flowchart LR
subgraph "Sensory/Encoding"
CAM["Camera/Retina (V1‑V5)"]:::implemented
MIC["Microphone/A1"]:::implemented
TAS[TAS encoding]:::implemented
end
subgraph "Cognitive"
VIS["Visual module (occipital lobe/IT cortex)"]:::implemented
AUD["Auditory module<br/>(temporal lobe)"]:::implemented
SLM["Spiking LM<br/>(Language/Broca – Wernicke)"]:::implemented
RAG["Hybrid RAG index"]:::implemented
SPAT["Spatial layer<br/>(parietal lobe)"]:::implemented
BM["Biomimetic module group (emotions/rewards/sleep/rhythm etc.)"]:::biomim
end
subgraph "Control/PFC"
PFC["PFC / Q‑PFC<br/>(frontal cortex)"]:::implemented
end
subgraph "Memory/State"
EPI["Episodic memory<br/>(hippocampus)"]:::implemented
SEM["Semantic memory<br/>(temporal lobe)"]:::implemented
MINT["Memory integrator<br/>(cingulate gyrus)"]:::implemented
end
subgraph "Motor/Output"
MTR["Motor planner<br/>(motor cortex)"]:::implemented
end
subgraph "Infrastructure"
ZEN["Zenoh Pub/Sub"]
PTP["PTP time synchronization"]
end
CAM --> TAS --> VIS --> SLM
MIC --> TAS --> AUD --> SLM
SLM --> RAG --> PFC --> MTR
SPAT --> PFC
BM --> PFC
PFC <--> EPI
PFC <--> SEM
EPI <--> MINT
SEM <--> MINT
ZEN --- PTP
PFC -.-> ZEN
VIS -.-> ZEN
AUD -.-> ZEN
SPAT -.-> ZEN
MTR -.-> ZEN
EPI -.-> ZEN
SEM -.-> ZEN
MINT -.-> ZEN
BM -.-> ZEN
*Figure 1. High-level diagram showing cortical region correspondence. Solid lines are data flow; dashed lines are the Pub/Sub mesh.*
2.2 Node ranks and hub architecture
Nodes are assigned a rank from 1 to 15 based on functional complexity and connectivity requirements. Ranks 1–5 cover sensors such as vision and hearing and basic cognitive processing, while Ranks 12 and above implement advanced spatial awareness and integration. Rank also determines a node's hub role: Rank 1 nodes expose raw spikes as leaf nodes, Ranks 8–11 serve as intermediate hubs for LM inference, and Rank 12+ nodes are responsible for coordination across geographical areas.
A Node Hub is a logical structure within a Zenoh mesh that is responsible for data aggregation and redistribution.
When the measured delay \(L_{ij}\) exceeds the threshold, the ExecutiveControlEngine promotes another hub and keeps communication below 50ms. All spikes carry a 64-bit PTP timestamp and a variable-length payload in Protobuf format. Control messages follow a JSON schema of the form {type,source,target,payload}, which maintains compatibility between implementations in different languages.
Each hub holds an area descriptor listing the nodes assigned to its rank area. During operation this descriptor is broadcast at 1Hz, and other nodes adjust their routing tables accordingly. This mechanism eliminates the need for a central directory and makes collaboration across clusters possible.
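As a concrete illustration of the {type,source,target,payload} control-message schema, the sketch below builds and validates such a message. The helper names and the validation rules are illustrative assumptions, not part of the EvoSpikeNet API; only the four field names come from the text above.

```python
import json

# Hypothetical helpers around the {type, source, target, payload} JSON schema
# described in the paper. Field names follow the text; everything else
# (function names, validation policy) is illustrative.
REQUIRED_FIELDS = ("type", "source", "target", "payload")

def make_control_message(msg_type: str, source: str, target: str, payload: dict) -> str:
    """Serialize a control message to its JSON wire form."""
    return json.dumps({"type": msg_type, "source": source,
                       "target": target, "payload": payload})

def parse_control_message(raw: str) -> dict:
    """Parse a control message; raise ValueError if required fields are missing."""
    msg = json.loads(raw)
    missing = [f for f in REQUIRED_FIELDS if f not in msg]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return msg

raw = make_control_message("hub_switch", "ExecutiveControlEngine",
                           "node_07", {"reason": "latency>50ms"})
msg = parse_control_message(raw)
print(msg["type"])  # hub_switch
```

Because the schema is plain JSON, any language with a JSON library can produce and consume these messages, which is the cross-language compatibility the text refers to.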
2.3 Explanation of cortical region correspondence
In this version, the brain region mapping has been expanded with the addition of biomimetic functions. Emotion/reward circuits (amygdala/nucleus accumbens/VTA), sleep/rhythm synchronization (hippocampal theta waves + ACh), a mirror neuron system (premotor cortex vs. observation cortex), intention/motivation vectors (PFC/ACC/NAcc), and developmental dynamics (critical-period pruning/myelination) have been newly implemented and cooperate as follows.
- The PFC receives emotional signals from the amygdala and dopamine prediction errors from the VTA to adjust its learning rate, and simultaneously switches memory encoding via ACh release synchronized with theta-band EEG.
- The mirror neuron system is implemented as a bidirectional connection between the motor cortex and the visual cortex; the model internally generates motor commands and estimates rewards even while merely observing behavior.
- Developmental dynamics are epoch-dependent: a DevelopmentalSchedule controls plasticity, pruning, and conduction delay, and works with the curriculum scheduler to increase training difficulty gradually.
- Sensory preprocessing provides retinal DoG → V1 Gabor filtering, cochlear gammatone filtering, vestibular acceleration/angular-velocity normalization, and efference copies of motor commands.
These modules are aggregated into the evospikenet/biomimetic package and shared with all nodes via EEG metadata in the DistributedBrainExecutor. The best EvoGenome generated by the evolution engine can now be deployed directly to each node via DistributedBrainNode.deploy_genome(). After deployment, an InstantiatedBrain (the real network generated by GenomeToBrainConverter) executes a forward pass inside each node's command-processing loop, so the results of genome evolution are reflected in inference in real time. In addition, the ability to immediately apply an INT16 delta, computed from the STDP spike reinforcement history, to the nn.Linear weights via apply_weight_delta() has been implemented, closing the online plasticity loop.
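The INT16 delta path described above can be sketched as follows. This is a minimal illustration, not the real apply_weight_delta(): a NumPy array stands in for the nn.Linear weight matrix, and the dequantization scale is an assumed parameter.

```python
import numpy as np

# Illustrative sketch of applying a quantized INT16 weight delta.
# In EvoSpikeNet this targets nn.Linear weights; here a NumPy float32
# matrix stands in, and `scale` is an assumed quantization factor.
def apply_weight_delta(weights: np.ndarray, delta_int16: np.ndarray,
                       scale: float = 1e-4) -> np.ndarray:
    """Dequantize the INT16 delta and add it to the weights in place."""
    assert delta_int16.dtype == np.int16
    weights += delta_int16.astype(np.float32) * scale
    return weights

w = np.zeros((2, 2), dtype=np.float32)
delta = np.array([[100, -100], [0, 50]], dtype=np.int16)
apply_weight_delta(w, delta)
```

Shipping deltas as INT16 rather than float32 halves the payload size on the Zenoh mesh; the trade-off is the fixed quantization step set by `scale`.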
Furthermore, each node mimics the function of the corresponding cortical/subcortical area in software, with parameters (delays, neuron counts, plasticity coefficients, etc.) tuned toward actual physiological values.
| Node | Biological Domain | Main Features | Average Latency | Implementation Status |
|---|---|---|---|---|
| Vision | Occipital Cortex (V1–V5, IT) | Scene Analysis/Object Recognition | 45ms | Completed |
| Hearing | Primary auditory cortex (A1) | Frequency extraction/sound source localization | 40ms | Completed |
| Language/SLM | Broca–Wernicke circuit | Language representation generation in the brain | 60ms | Completed |
| Spatial | Superior parietal lobule, occipito-parietal cortex | Position/object recognition/attention control | 50ms | Feature 13 completed |
| PFC | Dorsolateral/orbitofrontal cortex | Decision making/goal management | 30ms | Core module completed |
| Movement | Primary motor cortex | Motor plan generation/output | 25ms | Completed |
| Episodic | Hippocampus | Temporal sequence memory | — | Completed |
| Semantic | Medial temporal lobe | Conceptual knowledge retention | 5ms search | Completed |
| Integrator | Cingulate/insular cortex | Intermodal integration | 10ms | Completed |
Each entry is accompanied by internal documentation citing major papers in the corresponding biological field together with implementation notes for the matching EvoSpikeNet module, referable as needed.
*Table 1. Correspondence between EvoSpikeNet modules and cortical regions.*
2.4 Connectome → Node automatic mapping (Addendum)
In this section, we present the principles, verification criteria, and experimental suggestions for automatically mapping public connectomes onto the rank structure of EvoSpikeNet while maintaining neuroscientific validity. The main purpose is to ensure reliability for research use.
- Priority of biological constraints: Keep E/I ratio, layer distribution, and cell type frequency as the highest priority. These are the key metrics that should be preserved during reduction, as they directly affect network dynamics.
- Applying reduction methods: Apply the multiple reduction methods of Policy F (stratified sampling, spectral reduction, cluster representation) and select the one that best matches the source distribution. Selection criteria are the E/I difference, a KS test on the degree distribution, and the significance level (p>0.05).
- Verification experiment: Use the generated `structural_mask` to compare learning curves on short-term learning tasks (visual discrimination, auditory discrimination, etc.). We report the correlation between biological indicators (kurtosis, synchrony, E/I ratio) and task performance.
- Output storage format: Assign `version_uuid` and `etag` to the NPZ file (COO-format arrays: `row_indices`, `col_indices`, `weights`, `delays`, `ei_mask`) to ensure reproducibility and version control.
- Ethics/contract: HCP and other subject data must undergo the procedures and approval required for research use in accordance with the DUC. When using public data, attach the procedural history in the appendix of the paper.
The above is an addendum for the neuroscience community; we recommend publishing it as an experiment note together with implementation results (E/I retention rate, KS p-value, task learning curves).
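The NPZ (COO) storage format listed above can be sketched as follows. Array names (`row_indices`, `col_indices`, `weights`, `delays`, `ei_mask`, `version_uuid`) follow the text; the helper name, the in-memory buffer, and the omission of `etag` are simplifications for illustration.

```python
import io
import uuid
import numpy as np

# Minimal sketch of the NPZ/COO connectome container described in the paper.
# Array key names come from the text; the function name is illustrative.
def save_connectome_npz(buf, row, col, weights, delays, ei_mask) -> str:
    """Write a COO connectome bundle plus a fresh version_uuid; return the uuid."""
    version_uuid = str(uuid.uuid4())
    np.savez(buf, row_indices=row, col_indices=col, weights=weights,
             delays=delays, ei_mask=ei_mask,
             version_uuid=np.array(version_uuid))
    return version_uuid

buf = io.BytesIO()
vid = save_connectome_npz(
    buf,
    row=np.array([0, 1]), col=np.array([1, 2]),          # synapse (i, j) pairs
    weights=np.array([0.5, -0.3]),                       # signed synaptic weights
    delays=np.array([2.0, 5.0]),                         # conduction delays (ms)
    ei_mask=np.array([True, False]))                     # True = excitatory
buf.seek(0)
data = np.load(buf)
```

Storing `version_uuid` inside the archive lets a loader verify which connectome version a node is running without consulting an external registry.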
2.4.1 Brain function integration derived from connectome
This section specifies how structural information extracted and reduced from public connectomes is integrated into EvoSpikeNet's functional modules, and clarifies the design guidelines and verification indicators that ensure reproducibility for research use.
- Structural hubs (rich‑club): High-degree nodes map to high-rank nodes equivalent to the PFC (Rank 12+) or to a central hub. Validation indicators: rich-club coefficient retention rate and a KS test on the degree distribution.
- Modularity/community structure: Group nodes via community detection, register each group as an area descriptor, and reflect it in the rank assignment/routing table. Validation metrics: comparison of modularity \(Q\), cluster consistency score.
- E/I balance and cell-type frequency: The E/I ratio and cell-type frequency at the local circuit level are retained and stored in `structural_mask` as `ei_mask`. Validation metrics: firing-rate distribution, conservation of the E/I ratio.
- Layered projection (interlayer connections): Layered connections are retained as meta attributes and mapped to layer parameters (e.g. L2/3→L5 feedforward).
- Delays (long‑range vs local): Propagation delays are kept in the `delays` array and used for time-synchronization evaluation and routing optimization.
- Synaptic weight distribution: The weight distribution is saved as an initial condition and updated during learning by the plasticity rules. Validation metric: statistical distance of the weight distribution (e.g. KL divergence).
- Circuit motifs (feedforward/feedback): Frequently occurring motifs are retained as templates and injected at the node-mapping stage. Oscillation/synchronization motifs are reflected in the design of the hippocampus and cerebellum modules.
- Subgraph injection: Semantic subgraphs such as hippocampal circuits and visual columns are partially injected into dedicated nodes to ensure functional reproducibility.
- Neuromodulation binding: neuromod metadata is mapped to global modulator channels (PFC etc.) and dynamically modulates plasticity gain and learning rate.
Output format and operating rules:
- Save format: Include `row_indices`, `col_indices`, `weights`, `delays`, `ei_mask`, `layers`, `region_tags`, `version_uuid`, and `etag` in NPZ (COO).
- Reduction policy: Default to Policy F (stratified sampling, spectral reduction, cluster representation), giving priority to preservation of the E/I and degree distributions.
- Validation set: Compare learning curves on visual and auditory short-term learning tasks and report the KS test, synchronization level (PLV), and memory usage.
- Reference implementation/configuration: For details, refer to `docs-dev/connectome_schema.md`, `config/connectome_config.yaml`, and the test group (`tests/test_connectome_loader.py`, etc.).
The guidelines in this section allow connectome-derived structural information to be integrated into EvoSpikeNet modules without compromising biological plausibility. In actual operation, explicitly record the reduction parameters and verification protocol required by each experiment to ensure reproducibility.
3. Mathematical models and formulas
This section summarizes the neuron models, learning rules, and attention mechanism.
3.1 Neuron mechanics
3.1.1 Leakage Integral Firing (LIF)
Parameters can be set for each node. See Table 2 for typical values.
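The LIF equation itself is not reproduced in this excerpt, so the sketch below uses the standard leaky integrate-and-fire form, \(\tau\,dV/dt = -(V - V_{rest}) + R I\), with spike-and-reset at threshold. All parameter values are illustrative defaults, not the node-specific values of Table 2.

```python
# Standard LIF dynamics (assumed form; the paper's own equation is elided).
# Euler integration with dt in ms; parameters are illustrative.
def lif_step(v, i_in, dt=1.0, tau=20.0, v_rest=-65.0,
             v_th=-50.0, v_reset=-70.0, r=1.0):
    """Advance the membrane potential one step; return (new_v, spiked)."""
    v = v + (dt / tau) * (-(v - v_rest) + r * i_in)
    if v >= v_th:            # threshold crossing: emit spike, reset
        return v_reset, True
    return v, False

v, spikes = -65.0, 0
for _ in range(100):         # 100 ms of constant suprathreshold drive
    v, s = lif_step(v, i_in=20.0)
    spikes += s
print(spikes > 0)  # True
```

With R·I = 20 mV the steady-state potential (-45 mV) sits above threshold (-50 mV), so the neuron fires tonically; dropping the drive below 15 mV silences it.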
3.1.2 Izhikevich model
The Izhikevich model can reproduce more than 20 types of firing patterns.
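The Izhikevich equations are also omitted in this excerpt; the sketch below uses the standard form (v' = 0.04v² + 5v + 140 − u + I, u' = a(bv − u), with v←c, u←u+d on spike) and the regular-spiking parameter set a=0.02, b=0.2, c=−65, d=8. Different firing patterns are obtained simply by changing (a, b, c, d).

```python
# Standard Izhikevich neuron (assumed form; the paper's equation is elided).
# Regular-spiking parameters; two 0.5 ms sub-steps per ms for stability,
# as in the reference implementation of the model.
def izhikevich_run(i_in=10.0, steps=1000, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate `steps` ms of constant input; return the spike count."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(steps):
        for _ in range(2):                          # 0.5 ms sub-steps for v
            v += 0.5 * (0.04 * v * v + 5 * v + 140 - u + i_in)
            if v >= 30.0:
                break                               # clip the spike peak
        u += a * (b * v - u)
        if v >= 30.0:                               # spike: reset v, bump u
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich_run() > 0)  # True
```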
3.1.3 ChronoSpikeAttention
Attention weight of spike time \(t_i,t_j\):
Future information is blocked, and the weight decays exponentially with the time distance.
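The exact ChronoSpikeAttention formula is not reproduced in this excerpt; the sketch below implements only the two stated properties, causal masking (zero weight when \(t_j > t_i\)) and exponential decay with time distance, as \(w_{ij} \propto e^{-(t_i - t_j)/\tau}\) with row normalization. The time constant and the normalization scheme are illustrative assumptions.

```python
import numpy as np

# Assumed-form sketch of a causal, exponentially decaying attention kernel
# over spike times (the paper's exact ChronoSpikeAttention weights are elided).
def chrono_attention_weights(spike_times, tau: float = 10.0) -> np.ndarray:
    t = np.asarray(spike_times, dtype=float)
    dt = t[:, None] - t[None, :]           # dt[i, j] = t_i - t_j
    w = np.exp(-dt / tau)                  # exponential decay with distance
    w[dt < 0] = 0.0                        # causal mask: block future spikes
    return w / w.sum(axis=1, keepdims=True)

w = chrono_attention_weights([0.0, 5.0, 12.0])
print(w[0, 1])  # 0.0 -- a spike cannot attend to a later one
```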
3.2 Plasticity Law
Spike-timing-dependent plasticity (STDP)
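The STDP window is not reproduced in this excerpt, so the sketch below uses the standard pair-based exponential form: \(\Delta w = A_+ e^{-\Delta t/\tau_+}\) for \(\Delta t = t_{post} - t_{pre} > 0\) (potentiation) and \(\Delta w = -A_- e^{\Delta t/\tau_-}\) otherwise (depression). The coefficients are illustrative.

```python
import math

# Standard pair-based STDP window (assumed form; the paper's equation is elided).
def stdp_delta(t_pre: float, t_post: float,
               a_plus: float = 0.01, a_minus: float = 0.012,
               tau_plus: float = 20.0, tau_minus: float = 20.0) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)    # pre before post: LTP
    return -a_minus * math.exp(dt / tau_minus)      # post before pre: LTD

print(stdp_delta(10.0, 15.0) > 0)  # True: causal pairing potentiates
```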
4. Connectome integration (CONNECTOME_INTEGRATION_PAPER integration)
This section integrates and augments the contents of docs/CONNECTOME_INTEGRATION_PAPER.md with the EvoSpikeNet system description. The figures use embedded Mermaid, with detailed explanations below each figure.
4.1 Integration overview
- Purpose: Initialize the connection topology inside EvoSpikeNet nodes with public connectome data and clearly separate the structural and functional layers to achieve both learning stability and biological validity.
- Implementation status (Updated on 2026-03-19 — Phase E-0/E-1/E-2 completed):
- ✅ config/connectome_config.yaml: Implemented.
- ✅ Document integration (this article): Implemented.
- ✅ evospikenet/connectome_loader.py: Phase E-1 completed. load_json · load_npz · save_npz · stratified_sample (F-1 stratified sampling) · spectral_coarsen (F-2 spectral reduction) · load (ETag+TTL cache) has been implemented.
- ✅ LIF extension of evospikenet/core.py (ConnectomeLIFLayer): Phase E-1 completed. Implemented structural_mask (bool COO tensor) buffer, connectome_weight parameter, attach_sparse_delay_buffer(), validate_ei_ratio(). Integrate lazy routing (step_int16()) into SNNModel.forward().
- ✅ evospikenet/connectome/node_mapping.py: Phase E-2 completed. get_source_for_node · build_manifest · apply_to_layer implementation (18 tests PASS).
- ✅ evospikenet/connectome/delay_buffer.py (SparseDelayBuffer): Phase E-2 completed. COO ring buffer format [max_delay+1, n_neurons], step / step_int16 / from_connectome_data implementation (22 tests PASS).
- ✅ evospikenet/zenoh_connectome_publisher.py (ConnectomeMetadataPublisher): Phase E-2 completed. Zenoh topic connectome/metadata/{node_id}, session=None (log-only) mode (30 tests PASS).
- ✅ evospikenet/forgetting_controller.py (compute_connectome_density): Phase E-2 completed.
- ⬜ scripts/sync_connectome.py: Phase E-3 (not yet started). An automatic differential synchronization pipeline will be implemented in the future.
4.2 Three-layer model (repost and explanation)
graph TB
subgraph "Layer 1: Structural Layer"
S1["Connectome-derived adjacency matrix A∈{0,1}^{N×N}"]
S2["E/I neuron type mask"]
S3["Synaptic delay table delay[i,j]"]
S1 --- S2 --- S3
end
subgraph "Layer 2: Functional Layer"
F1["STDP weight scalar w_scalar"]
F2["Plasticity update with Meta-STDP"]
F3["Integration with ChronoSpikeAttention"]
F1 --- F2 --- F3
end
subgraph "Layer 3: Evolutionary Layer"
E1["Structural mask coevolution with EvoGenome"]
E2["Evolutionary optimization of pruning rate"]
E3["Cross-scale adaptation"]
E1 --- E2 --- E3
end
S1 -->|"W_ij = A_ij × w_scalar"| F1
F2 -->|"Update only ΔSTDP × A_ij"| F1
E1 -.->|"Phase E+ only"| S1
Description: The structure layer maintains a Boolean mask generated from an external connectome (FlyWire/MICrONS/C.elegans/HCP) that limits weight updates during training. The functional layer manages plasticity scalars and attentional weights, and the evolutionary layer handles long-term modification of the mask itself (Phase E+: research use).
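The structural/functional separation above can be sketched directly: the boolean connectome mask \(A\) fixes which synapses exist, while STDP updates only the functional part, so \(W_{ij} = A_{ij}\cdot w_{scalar}\) and updates are gated by \(\Delta_{STDP}\times A_{ij}\). Shapes and magnitudes below are illustrative.

```python
import numpy as np

# Masked-update sketch of the three-layer model: the structural mask A is
# frozen, and plasticity can only modify synapses where A_ij = 1.
def masked_stdp_update(A: np.ndarray, W: np.ndarray,
                       delta_stdp: np.ndarray) -> np.ndarray:
    """Apply W_ij += ΔSTDP_ij only where the structural mask A_ij = 1."""
    return W + delta_stdp * A

A = np.array([[1, 0], [1, 1]], dtype=float)   # connectome-derived mask
W = A * 0.5                                   # initial W_ij = A_ij × w_scalar
delta = np.full((2, 2), 0.1)                  # uniform STDP update, for clarity
W = masked_stdp_update(A, W, delta)
print(W[0, 1])  # 0.0 -- a masked-out synapse never changes
```

This is what makes the structural layer "immutable" in the figure: no amount of functional learning can create a synapse the connectome does not contain; only the evolutionary layer (Phase E+) may edit \(A\) itself.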
4.3 Modules and data flows (integration)
The diagram below shows the end-to-end flow from connectome data acquisition to node initialization and distributed distribution. Describe the specific behavior after each block.
flowchart TB
subgraph "Connectome data layer"
CE_DB["C. elegans DB<br/>WormAtlas JSON"]
FLY_DB["FlyWire DB<br/>CAVE API"]
MIC_DB["MICrONS DB<br/>CAVEclient"]
HCP_DB["HCP S1200<br/>FSL/MRtrix3"]
end
subgraph "Connectome loader layer"
LOADER["connectome_loader.py<br/>Sparse COO transformation"]
SYNC["sync_connectome.py<br/>Auto-update pipeline"]
CONFIG["connectome_config.yaml<br/>Source control"]
end
subgraph "Structural layer (immutable)"
MASK_V["Visual structural mask<br/>structural_mask[V1]"]
MASK_A["Auditory structural mask<br/>structural_mask[A1]"]
MASK_M["Motor structural mask<br/>structural_mask[M1]"]
MASK_E["Episodic structural mask<br/>structural_mask[HPC]"]
ZENOH_TOPO["Zenoh connection priority<br/>HCP derived weight"]
end
subgraph "Functional layer (STDP-variable)"
LIF_V["LIFNeuronLayer<br/>Visual Node<br/>W = mask × scalar"]
LIF_A["LIFNeuronLayer<br/>Auditory Node"]
LIF_M["LIFNeuronLayer<br/>Motor Node"]
LIF_E["LIFNeuronLayer<br/>Episodic Node"]
end
CE_DB --> LOADER
FLY_DB --> LOADER
MIC_DB --> LOADER
HCP_DB --> LOADER
LOADER --> MASK_V & MASK_A & MASK_M & MASK_E & ZENOH_TOPO
SYNC -->|"Weekly Difference"| LOADER
MASK_V --> LIF_V
MASK_A --> LIF_A
MASK_M --> LIF_M
MASK_E --> LIF_E
LIF_V & LIF_A & LIF_M & LIF_E --> ZENOH_M["Zenoh Mesh"]
ZENOH_TOPO --> ZENOH_M
ZENOH_M --> PFC_C["PFC Control"]
PFC_C --> STDP_C["Meta-STDP"] --> LIF_V & LIF_A & LIF_M & LIF_E
EVO_C["EvoGenome"] -.->|"Structural mask update (Phase E+)"| MASK_V & MASK_A & MASK_M & MASK_E
Details: connectome_loader.py is responsible for authentication, acquisition, normalization, and COO conversion for each source. sync_connectome.py detects version differences and generates a difference COO for apply_weight_delta(). structural_mask represents the effective synapse set within each node, and LIFNeuronLayer uses that mask to generate initial weights as W = A × w_scalar.
4.4 Automatic updates and validation
An auto-update pipeline satisfies the following:
- Difference detection: Compare the version number of FlyWire etc. and ETag.
- Validation: Verify that the new ei_mask and E/I ratio are within the specified range (e.g. within ±0.5).
- Rollback: Return to previous version when verification fails.
- Distribution: Distribute weight_delta via Zenoh, and each node applies it after validation locally.
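The validation-and-rollback gate in the pipeline above can be sketched as follows. The tolerance value follows the "within ±0.5" example in the text; the helper name and version structure are illustrative.

```python
# Sketch of the E/I-ratio validation gate with rollback. The ±0.5 tolerance
# is the example value from the text; everything else is illustrative.
def validate_and_apply(current_version: dict, candidate: dict,
                       tol: float = 0.5):
    """Accept the candidate only if its E/I ratio stays within `tol`
    of the current version; otherwise keep (roll back to) the current one."""
    drift = abs(candidate["ei_ratio"] - current_version["ei_ratio"])
    if drift <= tol:
        return candidate, "applied"
    return current_version, "rolled_back"

current = {"etag": "v41", "ei_ratio": 4.0}
good = {"etag": "v42", "ei_ratio": 4.2}   # drift 0.2 -> accepted
bad = {"etag": "v43", "ei_ratio": 5.1}    # drift 1.1 -> rejected
print(validate_and_apply(current, good)[1])  # applied
print(validate_and_apply(current, bad)[1])   # rolled_back
```

In the distributed setting each node would run this check locally on the received weight_delta before applying it, which is why a bad upstream sync cannot corrupt the whole mesh.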
4.5 Initialization/PFC control sequence
In the initialization sequence, main.py reads connectome_config.yaml, initializes the connectome_loader, and retrieves micro/macro data in parallel. It then generates a ConnectomeBundle{mask, weight, delay, ei_mask, hcp} and instantiates LIFNeuronLayer(...bundle[node_type]) for each node type.
PFC control operates in a 50ms loop, aggregates the spike groups subscribed via Zenoh, and calculates route_probs and the cognitive entropy H_t. It adjusts the plasticity gain based on H_t, and in the learning state uses the ForgettingController to prune low-contribution connections against a threshold.
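The entropy computation in this loop is the \(H_t = -\sum_i p_i(t)\log p_i(t)\) of Section 3.3. The sketch below evaluates it over a routing distribution; the linear mapping from entropy to plasticity gain is an illustrative assumption, since the text only says the gain is "adjusted based on H_t".

```python
import math

# Cognitive entropy over the PFC routing distribution, per the paper's
# formula H_t = -Σ p_i log p_i. The gain mapping below is an assumed,
# illustrative policy (higher uncertainty -> higher plasticity gain).
def cognitive_entropy(route_probs) -> float:
    return -sum(p * math.log(p) for p in route_probs if p > 0)

def plasticity_gain(h: float, h_max: float,
                    g_min: float = 0.1, g_max: float = 1.0) -> float:
    """Map entropy linearly onto a plasticity gain in [g_min, g_max]."""
    return g_min + (g_max - g_min) * (h / h_max)

probs = [0.7, 0.2, 0.1]                 # example route_probs from the PFC
h = cognitive_entropy(probs)
h_max = math.log(len(probs))            # uniform distribution maximizes H_t
gain = plasticity_gain(h, h_max)
print(0.0 < h < h_max)  # True
```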
4.6 Mathematical notes regarding learning
- Definition of initial weights: \(W_{ij}^{(0)} = A_{ij}\cdot (s_{ij}\cdot \alpha\cdot e_{ij})\)
- STDP with structural constraints: \(\Delta W_{ij} = A_{ij} \cdot \Delta_{STDP}(t_i,t_j)\)
- E/I ratio constraint: Keep the local E/I ratio \(R = N_E/N_I\) from the source.
Meta‑STDP objective function
3.3 Control equation
Cognitive entropy: \(H_t = -\sum_i p_i(t) \log p_i(t)\).
The quantum modulation coefficient is obtained from a simulated quantum circuit with a Hamiltonian that depends on \(H_t\). Details are given in patent MT25‑EV003.
3.4 Communication and time synchronization
All spike events are timestamped with the PTP-synchronized clock. Inter-node delays \(\Delta_{ij}\) are measured, and routing is performed to minimize \(\sum_{i,j} \Delta_{ij} w_{ij}\).
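The delay-weighted routing objective above can be sketched as a hub-selection step: among candidate hubs, pick the one minimizing \(\sum_{ij}\Delta_{ij} w_{ij}\) over its links. The data layout and function name are illustrative.

```python
# Sketch of delay-aware hub selection minimizing Σ Δ_ij · w_ij.
# `candidates` maps a hub id to its (measured_delay_ms, traffic_weight) links;
# the table values are illustrative measurements.
def best_hub(candidates: dict) -> str:
    """Return the hub id with the minimum delay-weighted cost."""
    cost = {hub: sum(d * w for d, w in links)
            for hub, links in candidates.items()}
    return min(cost, key=cost.get)

hubs = {
    "hub_a": [(12.0, 1.0), (40.0, 0.5)],   # cost 12 + 20 = 32
    "hub_b": [(30.0, 1.0), (10.0, 0.5)],   # cost 30 + 5 = 35
}
print(best_hub(hubs))  # hub_a
```

In EvoSpikeNet this decision would be re-evaluated whenever the 1 Hz area-descriptor broadcasts report delay changes, which is how the ExecutiveControlEngine keeps end-to-end latency under its 50 ms budget.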
4. Full brain operation scenario
In the distributed brain simulation, multiple nodes operate in parallel and exchange data in an event-driven fashion through the Zenoh mesh. Below is a typical sequence of operations and the average latency required for each step.
sequenceDiagram
participant SENS as Sensory Nodes
participant ENC as TAS Encoder
participant COG as Cognitive Layer
participant PFC as PFC/Q-PFC
participant MEM as Memory Layer
participant MOT as Motor Node
participant ZEN as Zenoh Mesh
SENS->>ENC: Raw data acquisition (camera/mic) [5ms]
ENC->>COG: Spike train transmission [10ms]
COG->>ZEN: Inter-module spike exchange [2-8ms]
ZEN->>PFC: Integrated input arrival
PFC->>PFC: route_probs calculation, cognitive entropy measurement [30ms]
PFC->>PFC: Q-PFC quantum modulation execution
PFC->>MOT: Command transmission [25ms]
MOT->>ZEN: Result publication
PFC->>MEM: Experience storage request [5ms]
MEM->>COG: Past information retrieval request [5-10ms]
COG->>PFC: Complementary data feedback
PFC-->>ZEN: Learning signal (STDP/Meta-STDP)
ZEN->>COG: Weight update notification
- Initialization: The Zenoh network is formed by an auto-discovery protocol, and each node advertises its capabilities.
- Input: Raw signals from the camera/microphone are received in real time.
- Encoding: The TAS encoder generates spike trains at 1ms resolution and distributes them to multiple recognition modules.
- Processing: Cognitive-layer nodes fire independently and exchange spikes with other nodes via Zenoh as necessary. ChronoSpikeAttention and the spatial nodes come into play here.
- Decision making: The PFC integrates the received information and calculates route_probs and the cognitive entropy. Q-PFC generates gating parameters according to the confidence level and determines the final output command.
- Output: Motor nodes receive PFC commands and execute them or return a vector.
- Memory update: Experiences are recorded in the episodic/semantic stores; when a retrieval request is made, relevant data is provided through the RAG.
- Learning: Synapse weights are updated online based on STDP/Meta-STDP, and EvoGenome modifies the architecture offline when necessary.
Depending on task demands, memory retrieval and spatial attention may be interposed between steps 4–6. The current deployment supports a full-brain configuration of 21 nodes, including the hippocampus, amygdala, and cerebellum.
5. Residual biological behavior and planned expansion
5.1 EvoGenome configuration and collaboration
EvoGenome is a compact representation of the network architecture and plasticity parameters, and consists of the following three sections.
- Topology – Neighbor list including node ID, rank, edge weight \((n_i, r_i, w_{ij})\).
- Parameters – Node-specific dictionary such as time constants, thresholds, STDP coefficients, etc.
- Meta – Evolutionary metadata including fitness scores and mutation history.
During distributed operation, a two-phase commit is performed over Zenoh to synchronize the genome: a negotiator (usually the highest-ranked PFC) proposes the update, and follower nodes perform checks (e.g. that the rank does not exceed the current maximum). After the commit, all nodes read the new genome atomically, and a node still on the old version can request the difference from any peer.
The significance of EvoGenome lies in its ability to perform structural adaptation without centralized control. When a node detects persistent performance degradation, it mutates the genome locally and spreads the change by gossip. Other nodes accept or reject the proposal based on their own metrics. With this federated evolution, a distributed brain can adapt even on geographically and environmentally disparate hardware.
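The genome negotiation above can be sketched as a minimal two-phase commit: the coordinator proposes, followers vote using a local check, and the update commits only on unanimous agreement. All names here are illustrative; the real negotiation runs over Zenoh, and the rank check is the example given in the text.

```python
# Minimal two-phase-commit sketch of genome synchronization. The rank check
# is the follower check mentioned in the text; function names are illustrative.
MAX_RANK = 15   # current maximum rank in the deployment (see Section 2.2)

def follower_check(proposal: dict) -> bool:
    """Example follower check: proposed rank must not exceed the maximum."""
    return proposal["rank"] <= MAX_RANK

def two_phase_commit(proposal: dict, followers: list) -> str:
    votes = [check(proposal) for check in followers]   # phase 1: prepare/vote
    if all(votes):
        return "committed"                             # phase 2: atomic commit
    return "aborted"                                   # any veto aborts

followers = [follower_check, follower_check]
print(two_phase_commit({"rank": 13}, followers))  # committed
print(two_phase_commit({"rank": 16}, followers))  # aborted
```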
The EvoGenome deployment pipeline to distributed nodes has been implemented. After DistributedEvolutionEngine.run_evolution() finishes, deploy_to_nodes([node1, node2, ...]) expands the best_genome to all nodes at once. Each node's deploy_genome() method uses GenomeToBrainConverter and holds the result as an InstantiatedBrain instance. From then on, a genome-driven forward pass is executed in _process_brain_command() on all nodes.
Remaining_Functionality.md lists the following biologically motivated extensions.
- Hippocampus: Sequence encoding, pattern separation/completion, and theta-gamma coupling. Implemented as an episodic memory node; to model the coupling dynamics, the oscillation \(V_{hp}(t)=A\sin(2\pi \theta t)+B\sin(2\pi \gamma t)\) is introduced (\(\theta\approx8\,\mathrm{Hz}\), \(\gamma\approx40\,\mathrm{Hz}\)). Cell assemblies encode temporal order through LTP/LTD, and similar episodes are retrieved using cosine similarity.
- Amygdala: Emotional valence tagging and fear conditioning. Positive and negative valence is encoded in the spiking rate of sensory input, and memory salience is controlled by modulating the PFC gating coefficient as \(g_{PFC}=1+\alpha \cdot \mathrm{valence}\).
- Cerebellum: Its main function is rapid sensorimotor error correction. Sitting between the PFC and the motor node, it works as a supervised learning element with the delta rule \(\Delta w = \eta (r - \hat r) x\), calculating the prediction error \(r-\hat r\) in real time and updating parameters iteratively.
- Basal ganglia: Action selection in competitive spiking populations with Go/No‑Go pathways. Each pathway has the weight update rule \(\Delta w = \alpha R(t) - \beta\) according to the reward signal \(R(t)\), and triggers behavioral firing when the threshold is exceeded.
- Neuromodulators: Global time series of dopamine/serotonin levels \(D(t),S(t)\) modulate the STDP coefficients \(A_+,A_-\): \(A_+(t)=A_{+,0}(1+\gamma_D D(t)-\gamma_S S(t))\).
- Spatial/temporal hierarchy: Additional rank nodes for planning, language, and abstract reasoning. By introducing higher‑rank nodes, long-term dependencies can be solved and meta-learning becomes possible.
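The formulas in the list above can be evaluated directly; as one worked example, the neuromodulated STDP coefficient \(A_+(t)=A_{+,0}(1+\gamma_D D(t)-\gamma_S S(t))\) is sketched below. The gain values \(\gamma_D, \gamma_S\) are illustrative assumptions.

```python
# Direct evaluation of the neuromodulation formula from the list above:
#   A_+(t) = A_{+,0} (1 + γ_D D(t) - γ_S S(t))
# Gamma gains are illustrative; D(t), S(t) are normalized modulator levels.
def modulated_a_plus(a_plus_0: float, d_t: float, s_t: float,
                     gamma_d: float = 0.5, gamma_s: float = 0.3) -> float:
    """Potentiation coefficient under dopamine (D) and serotonin (S) levels."""
    return a_plus_0 * (1 + gamma_d * d_t - gamma_s * s_t)

base = 0.01
print(modulated_a_plus(base, d_t=1.0, s_t=0.0) > base)  # True: DA boosts LTP
print(modulated_a_plus(base, d_t=0.0, s_t=1.0) < base)  # True: 5-HT damps LTP
```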
5.2 Cortical/subcortical area composition diagram
flowchart TB
%% cortical modules already implemented
Occipital["Vision<br/>(V1‑V5, IT)"]
Temporal["Hearing<br/>(A1)"]
Broca["Language<br/>(Broca area)"]
Wernike["Language<br/>(Wernicke area)"]
Parietal["Space<br/>(parietal lobe)"]
Frontal["PFC<br/>(frontal cortex)"]
Motor["motor cortex"]
%% subcortical and neuromodulatory regions added
Hippocampus["Hippocampus<br/>(sequence memory / pattern separation/completion)"]
Amygdala["Amygdala<br/>(emotional tagging, fear conditioning)"]
Cerebellum["Cerebellum (rapid sensorimotor error correction)"]
BasalGanglia["Basal ganglia<br/>(Go/No‑Go behavior selection)"]
Neuromod["Neuromodulation<br/>(DA/5‑HT/ACh)"]
%% information flow
Occipital --> Parietal
Temporal --> Broca
Broca --> Wernike
Parietal --> Frontal
Frontal --> Motor
Hippocampus --> Frontal
Amygdala --> Frontal
Cerebellum --> Motor
BasalGanglia --> Frontal
BasalGanglia --> Motor
%% neuromodulators broadcast globally
Neuromod -.-> Occipital
Neuromod -.-> Temporal
Neuromod -.-> Parietal
Neuromod -.-> Frontal
Neuromod -.-> Motor
Neuromod -.-> Hippocampus
Neuromod -.-> Amygdala
Neuromod -.-> Cerebellum
Neuromod -.-> BasalGanglia
*Figure 2. Conceptual diagram of EvoSpikeNet modules and corresponding brain regions. Solid lines are information transmission paths; dashed lines indicate planned expansions. The figure reflects emotion tagging in the amygdala, error correction in the cerebellum, action selection in the basal ganglia, and how neuromodulators such as dopamine, serotonin, and acetylcholine act on the whole brain.*
6.7 Individual subsystem diagram and detailed explanation
Below is a block diagram (mermaid), data flow, cortical correspondence, and implementation file references for the main biomimetic subsystems.
6.7.1 Hippocampus — Sequence encoding and replay
flowchart LR
Input["Context / Sequence Input"] --> CA3["CA3 / Pattern Separation"]
CA3 --> CA1["CA1 / Sequence Output"]
CA1 --> Buffer["HippocampalBuffer\n(evospikenet/biomimetic/hippocampal_memory.py)"]
Buffer -->|prioritized_replay| Sleep["SleepConsolidation\n(evospikenet/biomimetic/sleep_consolidation.py)"]
Sleep --> Cortex["Cortical Targets (PFC/Temporal)"]
- Role: Time-series episode encoding, pattern separation/completion, prioritized replay.
- Implementation: `evospikenet/biomimetic/hippocampal_memory.py`, `evospikenet/biomimetic/sleep_consolidation.py`.
- Important parameters: replay batch size, priority criteria, SWR period (100–200 Hz imitation).
6.7.2 Prefrontal Cortex (PFC/Q‑PFC) — Intention/Decision Making and Gating
flowchart LR
SensoryFeat["Features (Visual/Auditory/Spatial)"] --> PFCcore["PFC Core\n(evospikenet/biomimetic/intention_module.py)"]
PFCcore --> Policy["Route_probs / Policy\n(Q‑PFC gating)"]
Policy --> Motor["Motor Planner & Efference"]
Reward["VTA TD Error\n(evospikenet/biomimetic/reward_circuit.py)"] --> PFCcore
Neuromod["DA / ACh / Oxytocin\n(evospikenet/biomimetic/neuromodulators.py)"] --- PFCcore
- Role: Goal management (IntentionModule), routing, plasticity gating (via PlasticityGate).
- Implementation: evospikenet/biomimetic/intention_module.py, evospikenet/biomimetic/modulatory.py.
- Important parameters: intention priority decay half-life, gating threshold, PFC inner-loop delay.
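The two PFC mechanisms named above — route gating and intention priority decay — can be sketched as follows. A softmax over route utilities and a half-life decay are illustrative assumptions, not the actual Q‑PFC implementation in `evospikenet/biomimetic/intention_module.py`.

```python
import math

def route_probs(utilities, temperature=1.0):
    # Softmax over candidate routes; lower temperature sharpens the gating.
    m = max(utilities.values())
    exps = {route: math.exp((u - m) / temperature) for route, u in utilities.items()}
    z = sum(exps.values())
    return {route: e / z for route, e in exps.items()}

def decayed_priority(initial, elapsed_s, half_life_s=30.0):
    # Exponential decay of an intention's priority with a configurable half-life.
    return initial * 0.5 ** (elapsed_s / half_life_s)

probs = route_probs({"motor": 2.0, "language": 1.0})
```

After one half-life (30 s under the assumed default), an intention's priority drops to exactly half its initial value, which is the knob the "intention priority decay half-life" parameter tunes.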
6.7.3 Sleep Consolidation — Offline playback and memory transfer
sequenceDiagram
participant H as HippocampalBuffer
participant S as SleepConsolidation
participant C as Cortex
H->>S: prioritized episodes
S->>C: replayed spike sequences (SWR bursts)
Note right of C: PlasticityGate opens during replay (ACh modulation)
- Role: Long-term memory transfer from hippocampus to cortex via prioritized replay; interaction between SWR and δ/θ rhythms.
- Implementation: evospikenet/biomimetic/sleep_consolidation.py.
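The replay-and-strengthen step in the sequence diagram can be illustrated with a small sketch. The `consolidate` function is hypothetical; it models one SWR burst transferring the top-priority episodes into a cortical store, not the actual SleepConsolidation API.

```python
def consolidate(episodes, cortex, batch_size=2):
    """One simulated SWR burst: replay the top-priority episodes into a
    cortical store, strengthening each replayed trace."""
    # episodes: list of (priority, episode_id) pairs from the hippocampal buffer
    batch = sorted(episodes, key=lambda e: e[0], reverse=True)[:batch_size]
    for _, episode in batch:
        cortex[episode] = cortex.get(episode, 0) + 1  # replay strengthens the trace
    return cortex

cortex = consolidate([(0.9, "A"), (0.1, "B"), (0.5, "C")], {})
```

Repeated sleep cycles would call this with the plasticity gate open (ACh modulation), incrementally building the cortical traces.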
6.7.4 Reward/Emotion Circuit (VTA/NAcc/Amygdala)
flowchart LR
Stimulus --> Amy["Amygdala\n(evospikenet/biomimetic/emotion_system.py)"]
Amy --> Valence["valence/arousal"]
RewardPredict["Value Estimator\n(VTADopamineModel)"] --> DA["Dopamine Signal"]
DA --> Plasticity["PlasticityGate\n(evospikenet/biomimetic/modulatory.py)"]
Plasticity --> Synapses
- Role: Dopamine release based on emotional tagging (valence/arousal) of stimuli and TD error, dynamic modulation of plasticity.
- Implementation: evospikenet/biomimetic/emotion_system.py, evospikenet/biomimetic/reward_circuit.py, evospikenet/biomimetic/modulatory.py.
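The TD-error-to-dopamine-to-plasticity path can be sketched with the classic TD formula and a thresholded gate. The function names, learning rate, and gate threshold are illustrative assumptions, not the actual VTADopamineModel or PlasticityGate interfaces.

```python
def td_error(reward, v_next, v_now, gamma=0.9):
    # Classic temporal-difference error: delta = r + gamma * V(s') - V(s).
    return reward + gamma * v_next - v_now

def gated_update(w, pre, post, dopamine, lr=0.01, gate_threshold=0.05):
    # Hebbian update applied only when |dopamine| opens the plasticity gate.
    if abs(dopamine) < gate_threshold:
        return w  # gate closed: synapse unchanged
    return w + lr * dopamine * pre * post

delta = td_error(reward=1.0, v_next=0.0, v_now=0.5)  # positive surprise
```

A strong positive TD error strengthens the co-active synapse, while a sub-threshold dopamine signal leaves the weight untouched — the "dynamic modulation of plasticity" described above.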
6.7.5 Rhythm synchronization (θ/γ/δ) and EEG bands
flowchart TB
HippocampusTheta["Hippocampal θ (4–8 Hz)"] --> AChTrigger["ACh Release\n(evospikenet/biomimetic/neuromodulators.py)"]
CortexGamma["Cortical γ (30–80 Hz)"] --> Coupling["θ–γ Coupling\n(evospikenet/biomimetic/rhythm_sync.py)"]
Coupling --> Memory
- Role: Sequence segmentation via θ–γ coupling; offline consolidation triggered by δ waves.
- Implementation: evospikenet/biomimetic/rhythm_sync.py; also serves as the trigger for the ACh module.
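θ–γ coupling is commonly modeled as phase–amplitude coupling: the gamma envelope follows the theta phase. The sketch below is a generic illustration under assumed frequencies and coupling strength, not the rhythm_sync.py implementation.

```python
import math

def theta_gamma(t, theta_hz=6.0, gamma_hz=40.0, coupling=0.8):
    """Gamma amplitude modulated by theta phase (phase-amplitude coupling)."""
    theta = math.sin(2 * math.pi * theta_hz * t)
    envelope = (1 + coupling * theta) / (1 + coupling)  # peaks at the theta crest
    gamma = envelope * math.sin(2 * math.pi * gamma_hz * t)
    return theta, gamma
```

Gamma bursts are strongest near the theta crest and suppressed in the trough, which is what segments a spike sequence into θ-cycle "chunks" of γ-paced items.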
6.7.6 Mirror neurons and motor output (imitation learning)
flowchart LR
ObservedAction["Observed Action Embedding"] --> Classifier["Action Classifier"]
Classifier --> Mirror["MirrorNeuronSystem\n(evospikenet/biomimetic/mirror_neurons.py)"]
Mirror --> MotorPrimitives["Motor Primitives (M1)"]
MotorPrimitives --> ImitationReward
- Role: Activate motor primitives from observation and generate imitative rewards to promote learning.
- Implementation: evospikenet/biomimetic/mirror_neurons.py.
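One simple way to generate an imitation reward is to score the overlap between observed and executed motor primitives. The Jaccard-based reward below is a toy stand-in for whatever similarity measure the MirrorNeuronSystem actually uses.

```python
def imitation_reward(observed, executed):
    """Jaccard overlap between observed and executed motor primitives,
    used here as a toy imitation reward in [0, 1]."""
    obs, exe = set(observed), set(executed)
    if not obs and not exe:
        return 0.0
    return len(obs & exe) / len(obs | exe)
```

Perfect imitation yields reward 1.0; a partial match (e.g. reaching without grasping) yields a proportionally smaller reward, giving the learner a graded training signal.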
Each subsystem diagram can be exported as a high-resolution figure (SVG/PNG) for publication. Suggested next steps:
- Add unit tests with expected input/output examples for each diagram to tests/unit/ (recommended)
- Generate SVGs of the diagrams and save them to docs/assets/
6.8 Data flow and communication
All inter-node traffic uses Zenoh Pub/Sub. Topic names follow the <node_type>/<region>/<signal> format; for example, visual spikes are published to vision/v1/spikes. Subscribers can filter by ID or neighborhood: the SpatialWhere node subscribes to vision/*/spikes and performs local processing.
Clock synchronization uses PTP grandmaster election, with jitter typically <1 µs. Spike events carry 64-bit timestamps; Zenoh guarantees ordering within a zone, but the mesh as a whole is asynchronous. A master director node (monitored by the ExecutiveControlEngine) logs the inter-node delays \(L_{ij}\) every second, and these are used for adaptive routing.
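The wildcard subscription pattern above can be illustrated with a segment-wise matcher. This is a simplified sketch of the matching semantics, not Zenoh's own key-expression engine, which is richer (e.g. `**` for multi-segment wildcards).

```python
def topic_matches(pattern, topic):
    """Segment-wise match for <node_type>/<region>/<signal> topics, where '*'
    matches exactly one segment. A simplified stand-in for Zenoh key
    expressions."""
    p_segs, t_segs = pattern.split("/"), topic.split("/")
    if len(p_segs) != len(t_segs):
        return False
    return all(p == "*" or p == t for p, t in zip(p_segs, t_segs))
```

Under these semantics, vision/*/spikes matches vision/v1/spikes (any visual region) but not audio/a1/spikes, which is exactly the filtering the SpatialWhere node relies on.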
7. Discussion and future prospects
EvoSpikeNet demonstrates that a comprehensive set of brain functions can be simulated in real time by a distributed spiking architecture running on commodity hardware. The hierarchical design and biologically plausible models facilitate neuroscientific testability: parameters such as membrane time constants, plasticity time windows, and attentional decay correspond directly to experimentally observable quantities.
- EvoGenome → distributed node deployment bridge: DistributedEvolutionEngine.deploy_to_nodes() allows evolution results to be deployed immediately and reflected in the real-time inference of each node.
- STDP delta real-weight application: InstantiatedBrain.apply_weight_delta() applies INT16 deltas in place to the nn.Linear layers, closing the online plasticity loop.
- BrainSimulation import error fixed: the coupling of DistributedBrainNode and BrainSimulationFramework is now complete and functional.
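The INT16-delta mechanism can be sketched as a dequantize-and-add step. The function below is illustrative only: a list-of-lists stands in for an nn.Linear weight matrix, and the scale factor is an assumed dequantization constant, not the value used by InstantiatedBrain.apply_weight_delta().

```python
def apply_weight_delta(weights, delta_int16, scale=1e-4):
    """Apply quantized INT16 deltas in place: w += delta * scale.
    A list-of-lists stands in for an nn.Linear weight matrix; the scale
    factor is an assumed dequantization constant."""
    for row_w, row_d in zip(weights, delta_int16):
        for j, d in enumerate(row_d):
            row_w[j] += d * scale  # in-place update keeps the online loop cheap
    return weights

w = [[0.1, 0.2]]
apply_weight_delta(w, [[100, -200]])  # deltas arrive as INT16 over the wire
```

Shipping INT16 deltas instead of full float weights keeps the plasticity traffic small, which matters when updates flow continuously between distributed nodes.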
A remaining limitation is that some subsystems require further expansion and stabilization; see Remaining_Functionality.md for detailed implementation status and phase information. (The main functions of the hippocampus, amygdala, cerebellum, and neuromodulatory system have already been implemented.) Large-scale scaling of PTP synchronization and fine-grained control of neuromodulators remain ongoing challenges.
Future work includes close examination of the connectome parameters, closed-loop experiments integrating EEG/BMI interfaces, deployment to geographically distributed clusters (federated simulation), and evaluation of fine-grained neuromodulator control and adaptive learning in complex environments.