
Biomimicry Enhancement Plan Implementation Record

> [!NOTE]
> For the latest implementation status, see Functional Implementation Status (Remaining Functionality).

Date of formulation: 2026-03-01
Last updated: 2026-03-11 (Phase D distributed node integration completed)
Reference documents: Remaining_Functionality.md Section 11, docs-dev/REMAINING_FEATURES.md
Owner: GitHub Copilot / Masahiro Aoki
Copyright 2026 Moonlight Technologies Inc. All Rights Reserved.


0. Executive Summary

0.1 Examination results of Section 11 (11-1 to 11-19)

| Status | Count | Notes |
|--------|-------|-------|
| ✅ Fully implemented | 19 (11-1 through 11-19) | Core functions are implemented in both source code and tests. Some modules carry next-phase TODOs (SNN core integration, UI, vestibular sense, etc.). |
| 🔶 Partially implemented (BiomimeticAdapter pilot) | 0 | |
| ❌ Not implemented / planned | 0 | |

Note: Even for items counted as ✅, unfinished work and next-phase TODOs are listed in the details of the main text. The counts in the table reflect item presence and are not a guarantee of complete functionality.

0.2 Phase A/B integration completed (new on 2026-03-06)

The integration gap ("the last mile") between the biomimetic/ module group and the SNN core has been fully closed. Detailed evaluation: docs-dev/biomimetic_integration_evaluation.md v2.0 (overall score 8.7/10).

| Status | Count | Breakdown |
|--------|-------|-----------|
| ✅ Phase A (required), fully completed | 4 | __init__.py API, BrainSimulationFramework, STDP⇔NeuromodulatorGate, SleepConsolidation STDP |
| ✅ Phase B (recommended), fully completed | 7 | Gate/Registry bridge, Izhikevich backend, CorticalTopology integration, gammatone anonymous, EfferenceCopy adaptive, MirrorNeuron default classifier, DMN idle |
| ✅ Phase D (distributed node integration), fully completed | 4 | BrainSimulation alias, deploy_genome() / deploy_to_nodes(), apply_weight_delta(), genome-driven forward pass |
| ⏳ Phase C (future) | 4 | Hippocampus→cortical transfer path, wake/sleep timeline, neuromod REST/Zenoh, CorticalTopology⇔HierarchicalRank |

The current BiomimeticAdapter (evospikenet/eeg_integration/distributed_brain_executor.py) provides numerical gains to the EEG command pipeline. With the Phase D integration (2026-03-11), DistributedBrainNode now executes a genome-driven InstantiatedBrain forward pass and, in addition to modulatory_gain, applies a confidence correction based on the genome output. BrainSimulationFramework is now a guide layer that exercises all biomimetic/ modules.
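How modulatory_gain and the genome output might combine into a corrected confidence can be pictured with a small sketch. The helper name corrected_confidence, the 0.1 genome weighting, and the clipping below are illustrative assumptions, not the adapter's actual API:

```python
def corrected_confidence(raw_confidence: float, modulatory_gain: float,
                         genome_output: float = 0.0) -> float:
    # Hypothetical combination: the gain scales the EEG confidence and the
    # genome forward-pass output nudges it further; clipped to [0, 1].
    c = raw_confidence * modulatory_gain + 0.1 * genome_output
    return max(0.0, min(1.0, c))

base = corrected_confidence(0.5, 1.0)          # unmodulated
boosted = corrected_confidence(0.5, 1.4, 0.5)  # gain plus genome output
```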


1. Detailed unimplemented analysis of each item

1.1 | 11-1: Introduction of delay and brain wave rhythm ✅ Fully implemented

Added:
- PLV is now computed in addition to full-band power and is available through the adapter.
- adapter.rhythm_metrics returns the PLV against a reference signal in its metadata.

Change: Added full-band power calculation, phase synchronization, and Zenoh delay tag methods to BrainRhythmSynchronizer. Enhanced BiomimeticAdapter to normalize metrics and trigger acetylcholine release.

Implemented:
- BiomimeticAdapter.rhythm_metrics() — δ (0.5–4 Hz) / α (8–13 Hz) band power calculation (FFT-based)

Implemented Core Features (evospikenet/biomimetic/rhythm_sync.py):

| Implementation item | Details |
|---------------------|---------|
| θ/γ/β band power calculation ✅ | BrainRhythmSynchronizer.compute_all_bands() — FFT-based power for all 5 bands (delta/theta/alpha/beta/gamma) |
| Phase synchronization API ✅ | compute_plv() — scipy.signal.hilbert with numpy fallback |
| Zenoh delay tag ✅ | zenoh_delay_tag() — axonal conduction delay tag with Gaussian jitter |
| Axonal conduction delay model ✅ | AxonalConductionModel — delay from distance, myelination, and developmental stage |
| Rhythm synchronization state machine ✅ | RhythmStateMachine — 3-state FSM: desynchronized / entraining / synchronized |

Remaining TODO:
- Add delay tags to Zenoh message headers (evospikenet/communication.py)
- Full integration with BiomimeticAdapter's rhythm_metrics()

Implemented files:
```
evospikenet/biomimetic/rhythm_sync.py  # BrainRhythmSynchronizer, AxonalConductionModel, RhythmStateMachine
```

**Code stub**:
```python
# evospikenet/biomimetic/rhythm_sync.py
import numpy as np
from dataclasses import dataclass, field
from typing import Dict, Optional

BANDS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta":  (13.0, 30.0),
    "gamma": (30.0, 80.0),
}

@dataclass
class RhythmState:
    band_power: Dict[str, float] = field(default_factory=dict)
    phase_lock_value: float = 0.0   # PLV between modules
    dominant_freq: float = 10.0     # Hz

class BrainRhythmSynchronizer:
    """Full-band rhythm computation and inter-module phase-sync management"""

    def compute_all_bands(self, eeg: np.ndarray, fs: float = 250.0) -> Dict[str, float]:
        freqs = np.fft.rfftfreq(len(eeg), d=1.0/fs)
        psd = np.abs(np.fft.rfft(eeg)) ** 2
        power = {}
        for band, (lo, hi) in BANDS.items():
            mask = (freqs >= lo) & (freqs < hi)
            power[band] = float(psd[mask].mean()) if mask.any() else 0.0
        return power

    def compute_plv(self, signal_a: np.ndarray, signal_b: np.ndarray) -> float:
        """Phase Locking Value between two signals"""
        from scipy.signal import hilbert  # the shipped module falls back to a numpy FFT Hilbert when scipy is unavailable
        phase_a = np.angle(hilbert(signal_a))
        phase_b = np.angle(hilbert(signal_b))
        return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

    def zenoh_delay_tag(self, base_delay_ms: float, variability_ms: float = 2.0) -> float:
        """Axonal conduction delay tag (ms)"""
        return max(0.0, base_delay_ms + np.random.randn() * variability_ms)

```

Test: tests/unit/test_rhythm_sync.py
Effort estimate: 2 weeks
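The numpy fallback mentioned for compute_plv() can be sketched with an FFT-based analytic signal. This standalone illustration mirrors the standard Hilbert-transform construction but is an assumption, not the module's exact code:

```python
import numpy as np

def analytic_signal(x: np.ndarray) -> np.ndarray:
    # numpy-only Hilbert transform: zero the negative-frequency half of the spectrum
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def plv(a: np.ndarray, b: np.ndarray) -> float:
    phase_a = np.angle(analytic_signal(a))
    phase_b = np.angle(analytic_signal(b))
    return float(np.abs(np.mean(np.exp(1j * (phase_a - phase_b)))))

t = np.linspace(0, 1, 250, endpoint=False)
locked = plv(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 10 * t + 0.5))  # constant offset
drifting = plv(np.sin(2 * np.pi * 10 * t), np.sin(2 * np.pi * 17 * t))      # drifting phase
```

A constant phase offset yields a PLV near 1.0; unrelated frequencies yield a PLV near 0.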


1.2 | 11-2: Cell/synapse diversification ✅ Complete implementation

Change: Added a Tsodyks-Markram short-term plasticity class and PV/SST/VIP inhibitory neuron stubs. Behavior is verified in tests/unit/test_synapses.py. The plasticity-control portion of BiomimeticAdapter is already implemented via the neuromodulator.

Implemented (evospikenet/synapses.py):
- AMPA, NMDA, GABA classes (τ-based dynamics)
- Astrocyte class (weight adjustment)

Implemented Core Features (evospikenet/synapses.py):

| Implementation item | Details |
|---------------------|---------|
| Short-term plasticity (STP) ✅ | TsodyksMarkramSynapse (L155–193) — facilitation / depression / recovery fully implemented |
| Inhibitory subtypes ✅ | PVInhibitoryNeuron (L194), SSTInhibitoryNeuron (L208), VIPInhibitoryNeuron (L217) implemented |

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| SNN core integration | synapses.py is not yet wired into plasticity.py / hierarchical_plasticity.py |
| Benchmark | No scale testing of the kinetic parameters |

Code stub:
```python
# Addition to evospikenet/synapses.py
from dataclasses import dataclass

@dataclass
class STPState:
    u: float = 0.2  # utilization
    x: float = 1.0  # available vesicles

class TsodyksMarkramSynapse(Synapse):
    """Tsodyks-Markram short-term plasticity model"""
    def __init__(self, U: float = 0.2, tau_rec: float = 200.0, tau_fac: float = 0.0):
        super().__init__()
        self.U = U
        self.tau_rec = tau_rec
        self.tau_fac = tau_fac
        self.stp = STPState(u=U, x=1.0)

    def transmit(self, pre_spike: float, dt: float = 1.0) -> float:
        s = self.stp
        # facilitation
        s.u += self.U * (1 - s.u)
        # transmission
        psr = s.u * s.x * pre_spike
        # depression
        s.x -= s.u * s.x
        # recovery
        s.x += (1 - s.x) * dt / self.tau_rec
        return psr

class PVInhibitoryNeuron:
    """Parvalbumin-positive interneuron (fast-spiking)"""
    pass  # Reuse the fast-spiking parameter set of IzhikevichNeuronLayer

class SSTInhibitoryNeuron:
    """Somatostatin-positive interneuron (dendrite-targeting)"""
    pass
```

**Effort estimate**: 1 week (STP) + 2 weeks (subtype integration) = 3 weeks
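The transmit() update order (facilitate, transmit, depress, recover) can be exercised standalone. This sketch repeats the same equations outside the Synapse class hierarchy:

```python
from dataclasses import dataclass

@dataclass
class STPState:
    u: float = 0.2  # utilization
    x: float = 1.0  # available vesicles

def tm_transmit(s: STPState, U: float, tau_rec: float,
                pre_spike: float, dt: float = 1.0) -> float:
    s.u += U * (1 - s.u)             # facilitation
    psr = s.u * s.x * pre_spike      # transmission
    s.x -= s.u * s.x                 # depression
    s.x += (1 - s.x) * dt / tau_rec  # recovery
    return psr

s = STPState(u=0.2, x=1.0)
responses = [tm_transmit(s, U=0.2, tau_rec=200.0, pre_spike=1.0) for _ in range(5)]
# With slow recovery (tau_rec = 200 ms) the vesicle pool depletes,
# so successive responses depress: responses[0] > responses[-1]
```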

---

### 1.3 | 11-3: Layered/hierarchical topology ✅ Complete implementation [Patent MT25-EV020]

`CorticalTopologyGenerator` fully supports standard layer template generation, duplication, long-distance joins, grids, and visualization. The structure and consistency of each method have been verified with unit tests.

**Implemented Core Features** (`evospikenet/biomimetic/cortical_topology.py`):
| Implementation items | Details |
|---------|------|
| Column/layer template generator ✅ | `build_column()` — Deep copy generation of L1/L2/3/L4/L5/L6 templates |
| Forward/reverse long-range connections ✅ | `connect_columns_long_range()` — Distance-based inter-column connection generation |
| Column duplication tool ✅ | `duplicate_column()` — Duplicate an existing column with a new ID |
| Topology visualization ✅ | `visualize_network()` — Wiring diagram rendering with networkx + matplotlib |

**Implementation destination**:
```
evospikenet/biomimetic/cortical_topology.py  # new
```

**Code stub**:
```python
# evospikenet/biomimetic/cortical_topology.py
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CorticalLayer:
    layer_id: str                 # "L2/3", "L4", "L5", "L6"
    neuron_count: int             # Typical: L4=4000, L2/3=2000, L5=1000, L6=2000
    exc_ratio: float = 0.8        # excitatory ratio
    inh_subtypes: List[str] = field(default_factory=lambda: ["PV", "SST", "VIP"])

@dataclass
class CorticalColumn:
    column_id: str
    layers: List[CorticalLayer]
    # Feedforward connections (L4→L2/3→L5→L6)
    feedforward_paths: List[Tuple[str, str]] = field(default_factory=list)
    # Feedback connections (L5/6→L4/L2/3)
    feedback_paths: List[Tuple[str, str]] = field(default_factory=list)

class CorticalTopologyGenerator:
    """Automatic generation of cortex-like hierarchical networks"""

    CANONICAL_TEMPLATE = [
        CorticalLayer("L1",   100,  0.0),
        CorticalLayer("L2/3", 2000, 0.8),
        CorticalLayer("L4",   4000, 0.85),
        CorticalLayer("L5",   1000, 0.8),
        CorticalLayer("L6",   2000, 0.8),
    ]

    def build_column(self, column_id: str) -> CorticalColumn:
        import copy
        layers = copy.deepcopy(self.CANONICAL_TEMPLATE)
        ff = [("L4", "L2/3"), ("L2/3", "L5"), ("L5", "L6")]
        fb = [("L5", "L4"), ("L6", "L2/3")]
        return CorticalColumn(column_id, layers, ff, fb)

    def replicate(self, template: CorticalColumn, n: int) -> List[CorticalColumn]:
        return [self.build_column(f"{template.column_id}_{i}") for i in range(n)]
```

**Effort estimate**: 3 weeks
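A quick standalone sanity check on the canonical template, tallying per-column neuron counts (round() is used because float excitatory ratios do not multiply exactly):

```python
import copy
from dataclasses import dataclass

@dataclass
class CorticalLayer:
    layer_id: str
    neuron_count: int
    exc_ratio: float

CANONICAL_TEMPLATE = [
    CorticalLayer("L1",   100,  0.0),
    CorticalLayer("L2/3", 2000, 0.8),
    CorticalLayer("L4",   4000, 0.85),
    CorticalLayer("L5",   1000, 0.8),
    CorticalLayer("L6",   2000, 0.8),
]

layers = copy.deepcopy(CANONICAL_TEMPLATE)
total = sum(l.neuron_count for l in layers)                     # 9100 neurons per column
exc = sum(round(l.neuron_count * l.exc_ratio) for l in layers)  # excitatory subset
```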


1.4 | 11-4: Modulators and plasticity gating ✅ Complete implementation [Patent MT25-EV018]

Learning-rate gating via NeuromodulatorGate and AcetylcholineModule has been implemented. The adapter uses modulatory_gain, and the reward signal is updated in conjunction with the VTA model. More recent extensions reflect the amygdala emotion coefficient and the nucleus accumbens motivation scale in the gate, so gate adjustment also occurs when BiomimeticAdapter updates its sleep state.

Implemented:
- BiomimeticAdapter.modulatory_gain() — dopamine/noradrenaline-equivalent gain (0.6–1.6) plus emotional/motivational boost

Implemented Core Features (evospikenet/biomimetic/neuromodulators.py):

| Implementation item | Details |
|---------------------|---------|
| Plasticity gate ✅ | NeuromodulatorGate.gated_learning_rate() — modulates the learning rate via DA/ACh scale multiplication |
| Acetylcholine (ACh) module ✅ | AcetylcholineModule provides theta-band-linked release and attention-level factors (11-16 integrated) |
| Reward delay handling ✅ | VTADopamineModel.update() — updates dopamine from TD error (reward_circuit.py) |

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| ~~plasticity.py direct integration~~ | ✅ Resolved 2026-03-11 — InstantiatedBrain.apply_weight_delta() reflects STDP deltas to nn.Linear weights in real time |
| Oxytocin model | No social bonding/trust reinforcement signals |
| Debug UI | No real-time visualization panel for modulator levels |

Code stub:
```python
# evospikenet/biomimetic/neuromodulators.py
from dataclasses import dataclass

@dataclass
class NeuromodulatorState:
    dopamine: float = 0.5       # [0,1] — reward prediction error
    noradrenaline: float = 0.5  # [0,1] — arousal/attention
    acetylcholine: float = 0.5  # [0,1] — encoding/theta waves
    serotonin: float = 0.5      # [0,1] — stability/mood
    oxytocin: float = 0.0      # [0,1] — social bonding

class NeuromodulatorGate:
    """Neuromodulator gate on the STDP learning rate"""
    def __init__(self, base_lr: float = 0.01):
        self.base_lr = base_lr
        self.state = NeuromodulatorState()

    def gated_learning_rate(self) -> float:
        """Scale the learning rate by dopamine/ACh"""
        da_scale = 0.5 + self.state.dopamine              # 0.5–1.5
        ach_scale = 0.8 + self.state.acetylcholine * 0.4  # 0.8–1.2
        return self.base_lr * da_scale * ach_scale

    def update_from_reward(self, reward: float, prediction: float) -> None:
        """Update dopamine from the TD error"""
        td_error = reward - prediction
        self.state.dopamine = max(0.0, min(1.0, self.state.dopamine + 0.1 * td_error))
```

Effort estimate: 2 weeks (gate integration) + 1 week (UI) = 3 weeks
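Gate behavior can be demonstrated with a flattened standalone sketch of the class above (state fields inlined as attributes for brevity):

```python
class NeuromodulatorGateSketch:
    """Flattened sketch of NeuromodulatorGate (state fields inlined)"""
    def __init__(self, base_lr: float = 0.01):
        self.base_lr = base_lr
        self.dopamine = 0.5
        self.acetylcholine = 0.5

    def gated_learning_rate(self) -> float:
        da_scale = 0.5 + self.dopamine              # 0.5–1.5
        ach_scale = 0.8 + self.acetylcholine * 0.4  # 0.8–1.2
        return self.base_lr * da_scale * ach_scale

    def update_from_reward(self, reward: float, prediction: float) -> None:
        td_error = reward - prediction
        self.dopamine = max(0.0, min(1.0, self.dopamine + 0.1 * td_error))

gate = NeuromodulatorGateSketch()
baseline = gate.gated_learning_rate()                # 0.01 at neutral modulator levels
gate.update_from_reward(reward=1.0, prediction=0.0)  # positive TD error raises dopamine
boosted = gate.gated_learning_rate()                 # larger than baseline
```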


1.5 | 11-5: Memory system expansion (hippocampus/frontal lobe) ✅ Implemented [Patent MT25-EV025]


Assumption: EpisodicMemoryNode / SemanticMemoryNode exist in evospikenet/memory_nodes.py, but hippocampal replay was not included.

Implemented Core Features:

| Feature | Details |
|---------|---------|
| Hippocampus-style episode buffer | HippocampalBuffer implements a short-term, high-capacity episodic ring buffer (~10,000 items) |
| Prioritized replay | prioritized_replay plus a replay wrapper |
| Replay scheduler | ReplayScheduler class added; works with the buffer |
| Frontal-lobe working memory | New WorkingMemoryBlock based on a PyTorch GRU |
| Cortical integration interface | HippocampalBuffer.transfer_to_semantic transfers episodes to the semantic node |
| Evaluation bench | MemoryEvaluator class with evaluate_replay_precision and forgetting_curve |

Implementation destination:
```
evospikenet/biomimetic/hippocampal_memory.py  # Functional enhancement
evospikenet/biomimetic/working_memory.py      # GRU module
```

**Code stub**:
```python
# evospikenet/biomimetic/hippocampal_memory.py
import collections
from dataclasses import dataclass, field
from typing import Any, Deque, List

@dataclass
class Episode:
    timestamp_ns: int
    context: Any
    reward: float = 0.0
    surprise: float = 0.0  # Prediction error (replay priority)

class HippocampalBuffer:
    """Hippocampus-style episode buffer + prioritized replay"""
    MAX_SIZE = 10_000

    def __init__(self):
        self._buffer: Deque[Episode] = collections.deque(maxlen=self.MAX_SIZE)

    def store(self, episode: Episode) -> None:
        self._buffer.append(episode)

    def prioritized_replay(self, n: int = 32, alpha: float = 0.6) -> List[Episode]:
        """Surprise-weighted sampling"""
        import random, math
        eps = self._buffer
        if not eps:
            return []
        weights = [max(e.surprise, 1e-6) ** alpha for e in eps]
        total = sum(weights)
        probs = [w / total for w in weights]
        idx = random.choices(range(len(eps)), weights=probs, k=min(n, len(eps)))
        return [list(eps)[i] for i in idx]

# evospikenet/biomimetic/working_memory.py
import torch, torch.nn as nn

class WorkingMemoryBlock(nn.Module):
    """Frontal-lobe working memory: GRU-based recurrent loop (capacity 7±2 chunks)"""
    CAPACITY = 7

    def __init__(self, chunk_size: int = 64):
        super().__init__()
        self.gru = nn.GRU(chunk_size, chunk_size, batch_first=True)
        self.chunk_size = chunk_size
        self._hidden = None

    def reset(self):
        self._hidden = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, self._hidden = self.gru(x, self._hidden)
        return out

```

Effort estimate: 3 weeks
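The surprise-weighted sampling in prioritized_replay() can be reproduced standalone; the episode fields below are trimmed to what the sampler needs:

```python
import collections
import random
from dataclasses import dataclass

@dataclass
class Episode:
    timestamp_ns: int
    context: str
    surprise: float = 0.0  # prediction error drives replay priority

def prioritized_replay(buf, n: int = 32, alpha: float = 0.6):
    eps = list(buf)
    weights = [max(e.surprise, 1e-6) ** alpha for e in eps]
    return random.choices(eps, weights=weights, k=min(n, len(eps)))

buffer = collections.deque(maxlen=10_000)
for i in range(100):
    buffer.append(Episode(i, f"ep{i}", surprise=0.01))
buffer.append(Episode(100, "salient", surprise=5.0))  # one high-surprise episode

random.seed(0)
sample = prioritized_replay(buffer)
# in expectation, the salient episode is drawn far more often
# than any single low-surprise episode
```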


1.6 | 11-6: Sensory-motor closed loop reinforcement ✅ Complete implementation [Patent MT25-EV027] [Patent MT25-EV022]

sensory_preprocessing.py (DoG/Gabor/Gammatone + SensoryPreprocessingModule) and motor_efference.py (EfferenceCopy / ProprioceptiveFeedback) have been implemented, complete with numpy fallbacks that avoid hard scipy/skimage dependencies. Tested with test_motor_efference.py / test_sensory_motor.py.

Implemented Core Features:

| Implementation item | Details |
|---------------------|---------|
| Visual DoG preprocessing ✅ | dog_filter() — Difference-of-Gaussians with scipy / numpy fallback |
| Visual Gabor filter ✅ | gabor_bank() — orientation × frequency filter bank via skimage, with numpy fallback |
| Auditory cochlear filter ✅ | gammatone_filterbank() — ERB-scale numpy implementation (32 filters by default) |
| Efference copy ✅ | EfferenceCopy.record() / predict_reafference() implemented |
| Proprioceptive feedback ✅ | ProprioceptiveFeedback.compare() — computes the deviation from the expected value |

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| Vestibular sense | Balance signal from acceleration and angular velocity (sensory_motor.py has room for extension) |

Implemented files:
```
evospikenet/biomimetic/sensory_preprocessing.py  # Fully implemented
evospikenet/biomimetic/motor_efference.py        # Fully implemented
```

**Code stub**:
```python
# evospikenet/biomimetic/sensory_preprocessing.py
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_filter(image: np.ndarray, sigma1: float = 1.0, sigma2: float = 2.0) -> np.ndarray:
    """Difference of Gaussians — retina/LGN preprocessing"""
    return gaussian_filter(image, sigma1) - gaussian_filter(image, sigma2)

def gabor_bank(thetas: int = 8, freqs: tuple = (0.1, 0.2, 0.4)) -> list:
    """V1 orientation/frequency filter bank"""
    from skimage.filters import gabor_kernel
    kernels = []
    for f in freqs:
        for t in range(thetas):
            theta = t * np.pi / thetas
            kernels.append(gabor_kernel(f, theta=theta))
    return kernels

def gammatone_filterbank(n_filters: int = 64, fs: float = 16000.0) -> np.ndarray:
    """Cochlea-mimicking gammatone filterbank"""
    # Stub only — the shipped version is the ERB-scale numpy implementation noted above
    raise NotImplementedError("gammatone_filterbank: install 'cochlea' package")

# evospikenet/biomimetic/motor_efference.py
class EfferenceCopy:
    """Efference copy — internal matching of motor commands"""
    def __init__(self):
        self._last_command: np.ndarray | None = None

    def record(self, motor_cmd: np.ndarray) -> None:
        self._last_command = motor_cmd.copy()

    def predict_reafference(self, sensory: np.ndarray) -> np.ndarray:
        """Remove the self-generated component from sensory feedback"""
        if self._last_command is None:
            return sensory
        predicted = self._last_command * 0.9  # Simple linear prediction (room for improvement)
        return sensory - predicted

```

Effort estimate: 4 weeks
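The reafference cancellation above can be checked end to end; this standalone sketch reuses the same 0.9 linear prediction:

```python
import numpy as np

class EfferenceCopySketch:
    """Same 0.9 linear prediction as the stub above"""
    def __init__(self):
        self._last_command = None

    def record(self, motor_cmd: np.ndarray) -> None:
        self._last_command = motor_cmd.copy()

    def predict_reafference(self, sensory: np.ndarray) -> np.ndarray:
        if self._last_command is None:
            return sensory
        return sensory - self._last_command * 0.9

ec = EfferenceCopySketch()
cmd = np.array([1.0, 0.5])
ec.record(cmd)
sensory = cmd * 0.9 + np.array([0.0, 0.1])  # self-generated reafference + external input
residual = ec.predict_reafference(sensory)  # only the external component remains
```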


1.7 | 11-7: Energy homeostasis constraints ✅ Fully implemented

The energy budget classes now account for the high-load factors below and are used by both the adapter and the evolutionary algorithm.

Implemented:
- Added a firing-rate penalty (firing_rate_penalty) to NodeEnergyBudget; energy_fitness_term() decreases at high firing rates.
- energy_fitness_term() considers both wattage and firing rate and returns a scale from 0.5 to 1.5.
- BiomimeticAdapter.homeostasis_scale() reads energy_consumption and firing_rate_hz from metadata and updates the evaluation.
- The EvoGenome.apply_energy_fitness() helper includes the energy term in the evolution score history.
- Added 21 unit tests to cover the behavior.

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| Per-node energy budget model | No consumption tracking for each rank node |
| Energy visualization | No per-node consumption dashboard |

```python
# evospikenet/biomimetic/energy_homeostasis.py
# (See current implementation: firing_rate_penalty(), energy_fitness_term(), etc.)
```
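Since no stub is reproduced here, a hedged sketch of how the pieces could fit: the function names match the text, but the target rate, slope, budget, and the exact combination formula below are assumptions, not the shipped implementation:

```python
def firing_rate_penalty(rate_hz: float, target_hz: float = 10.0, slope: float = 0.05) -> float:
    # Assumed shape: no penalty at or below the target rate, linear growth above it
    return max(0.0, (rate_hz - target_hz) * slope)

def energy_fitness_term(power_w: float, rate_hz: float, budget_w: float = 5.0) -> float:
    # Assumed combination of wattage and firing rate, clipped to the
    # 0.5–1.5 range stated in the text
    score = 1.5 - power_w / budget_w - firing_rate_penalty(rate_hz)
    return max(0.5, min(1.5, score))

ideal = energy_fitness_term(0.0, 10.0)       # within budget, at target rate → 1.5
wasteful = energy_fitness_term(10.0, 100.0)  # over budget, firing hot → 0.5
```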

Effort estimate: 2 weeks


1.8 | 11-8: Developmental dynamics (critical period/pruning) ✅ Complete implementation [Patent MT25-EV021]

Implemented:
- The DevelopmentalSchedule class provides a plasticity multiplier, pruning decisions, and conduction-delay reduction (myelination)
- Added a task-difficulty scheduler via CurriculumScheduler
- Unit test tests/unit/test_developmental_dynamics.py covers the behavior of each method

Supplementary note: All items previously considered "partially implemented" have been coded, and behavior has been verified through tests. BiomimeticAdapter also calls dev_gain(), which adjusts the gain according to the developmental stage.

Remaining TODO:
- More biological pruning algorithms (importance filters, etc.)
- Integrate the link between task difficulty and plasticity into the operational pipeline

```python
# evospikenet/biomimetic/developmental_dynamics.py

class DevelopmentalSchedule:
    def __init__(self, total_epochs: int = 1000):
        self.total_epochs = total_epochs

    def plasticity_multiplier(self, epoch: int) -> float:
        """Linear ramp 0.1 → 1.0 as epochs progress"""
        if epoch <= 0:
            return 0.1
        return float(min(1.0, 0.1 + 0.9 * (epoch / self.total_epochs)))

    def should_prune(self, epoch: int, weight: float) -> bool:
        """The pruning threshold relaxes as development proceeds"""
        threshold = 0.05 + 0.45 * (1.0 - epoch / self.total_epochs)
        return weight < threshold

    def conduction_delay_ms(self, epoch: int, base_delay_ms: float = 5.0) -> float:
        """Conduction delay drops to half with development (myelination)"""
        factor = 1.0 - 0.5 * min(1.0, epoch / self.total_epochs)
        return base_delay_ms * factor

class CurriculumScheduler:
    """[11-19] Difficulty staging"""
    def __init__(self, stages: int = 5):
        self.stages = stages

    def difficulty(self, epoch: int, total_epochs: int) -> float:
        stage = min(self.stages - 1, int(epoch / (total_epochs / self.stages)))
        return stage / float(max(1, self.stages - 1))
```
Effort estimate: 3 weeks
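Usage of DevelopmentalSchedule as defined above: plasticity ramps up with age while the pruning threshold relaxes, so the same weight survives later in development:

```python
class DevelopmentalSchedule:
    """Same formulas as the stub above"""
    def __init__(self, total_epochs: int = 1000):
        self.total_epochs = total_epochs

    def plasticity_multiplier(self, epoch: int) -> float:
        if epoch <= 0:
            return 0.1
        return float(min(1.0, 0.1 + 0.9 * (epoch / self.total_epochs)))

    def should_prune(self, epoch: int, weight: float) -> bool:
        threshold = 0.05 + 0.45 * (1.0 - epoch / self.total_epochs)
        return weight < threshold

sched = DevelopmentalSchedule(total_epochs=1000)
ramp = (sched.plasticity_multiplier(0), sched.plasticity_multiplier(1000))  # 0.1 → 1.0
# a weight of 0.2 is pruned early (threshold 0.5) but kept late (threshold 0.05)
early, late = sched.should_prune(0, 0.2), sched.should_prune(1000, 0.2)
```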


1.9 | 11-9: Purpose/intention expression module ✅ Complete implementation [Patent MT25-EV026]

The intention module is implemented as evospikenet/biomimetic/intention_module.py; its integration into BiomimeticAdapter and the related tests are complete. Brain areas: prefrontal cortex (PFC) + anterior cingulate cortex (ACC).

Implemented Core Features (evospikenet/biomimetic/intention_module.py):

| Implementation item | Details |
|---------------------|---------|
| Intention vector management API ✅ | set_goal(), goal_vector(), goal_priority(), decay_priority() — complete state/priority management |
| PFC fusion layer ✅ | pfc.py L1590 — forward(intention_priority=...) integrates the intention scalar into decision_vector |
| Intent history log ✅ | save_history() / load_history() — JSON persistence supported |

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| Intent-change UI | No intent-setting interface from the front end |

Implemented files:
```
evospikenet/biomimetic/intention_module.py  # Fully implemented
evospikenet/pfc.py                          # intention_priority supported
```

**Code stub**:
```python
# evospikenet/biomimetic/intention_module.py
import time
from dataclasses import dataclass, field
from typing import Any, List, Optional
import numpy as np

@dataclass
class IntentionState:
    goal_vector: np.ndarray      # Embedding vector (dim=128)
    subgoals: List[str]
    priority: float              # [0,1]
    timestamp: float = field(default_factory=time.time)
    source: str = "internal"     # "internal" | "external" | "reward"

class IntentionModule:
    """PFC+ACC intention representation and plan-driving module"""
    DIM = 128

    def __init__(self):
        self._current: Optional[IntentionState] = None
        self._history: List[IntentionState] = []

    def set_goal(self, goal_embedding: np.ndarray, subgoals: Optional[List[str]] = None,
                 priority: float = 0.5, source: str = "internal") -> None:
        assert goal_embedding.shape == (self.DIM,)
        state = IntentionState(goal_vector=goal_embedding.copy(),
                               subgoals=list(subgoals or []),  # avoids the mutable-default pitfall
                               priority=priority, source=source)
        if self._current:
            self._history.append(self._current)
        self._current = state

    def current_goal(self) -> Optional[IntentionState]:
        return self._current

    def goal_similarity(self, other: np.ndarray) -> float:
        if self._current is None:
            return 0.0
        dot = float(np.dot(self._current.goal_vector, other))
        norm = float(np.linalg.norm(self._current.goal_vector) *
                     np.linalg.norm(other) + 1e-8)
        return dot / norm

    def history_json(self) -> List[dict]:
        return [{"timestamp": s.timestamp, "priority": s.priority,
                 "source": s.source, "subgoals": s.subgoals}
                for s in self._history]

```

Effort estimate: 2 weeks
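The cosine-based goal_similarity() can be exercised with a trimmed standalone sketch of the module (only the two methods needed here):

```python
import numpy as np

class IntentionModuleSketch:
    """Trimmed to goal storage + cosine similarity"""
    DIM = 128

    def __init__(self):
        self._current = None

    def set_goal(self, goal_embedding: np.ndarray) -> None:
        assert goal_embedding.shape == (self.DIM,)
        self._current = goal_embedding.copy()

    def goal_similarity(self, other: np.ndarray) -> float:
        if self._current is None:
            return 0.0
        dot = float(np.dot(self._current, other))
        norm = float(np.linalg.norm(self._current) * np.linalg.norm(other) + 1e-8)
        return dot / norm

im = IntentionModuleSketch()
goal = np.zeros(128)
goal[0] = 1.0
im.set_goal(goal)
aligned = im.goal_similarity(goal)   # same direction → ~1.0
opposed = im.goal_similarity(-goal)  # conflicting intention → ~-1.0
```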


1.10 | 11-10: Creativity generation engine ✅ Complete implementation [Patent MT25-EV019]

Brain areas: temporal association cortex, hippocampus, DMN

Fully implemented as evospikenet/biomimetic/creativity_engine.py. NoveltyEvaluator and CreativityEngine work together and are called from the future_sim mode of DefaultModeNetwork (11-18). Tested in tests/unit/test_creativity_engine.py.

Implemented Core Features:

| Implementation item | Details |
|---------------------|---------|
| Memory recombination operation ✅ | CreativityEngine.recombine() — Dirichlet-weighted stochastic fragment mixing |
| Novelty evaluation function ✅ | NoveltyEvaluator.score() — mean cosine distance to the last 100 items |
| DMN feedback loop ✅ | DefaultModeNetwork._generate_activity() called in future_sim mode (11-18 integrated) |

Remaining TODO:

| Unimplemented item | Details |
|--------------------|---------|
| Generated-output audit | No integration with the safety/ethics filter (safety_filter.py) |

Code stub:
```python
# evospikenet/biomimetic/creativity_engine.py
import numpy as np
from typing import List

class NoveltyEvaluator:
    """Novelty scoring of generated outputs"""
    def __init__(self, history_size: int = 1000):
        self._embeddings: List[np.ndarray] = []
        self.history_size = history_size

    def score(self, embedding: np.ndarray) -> float:
        """Mean cosine distance to existing memories → novelty score [0,1]"""
        if not self._embeddings:
            return 1.0
        dists = [1 - float(np.dot(embedding, h) /
                           (np.linalg.norm(embedding) * np.linalg.norm(h) + 1e-8))
                 for h in self._embeddings[-100:]]
        return float(np.mean(dists))

    def update(self, embedding: np.ndarray) -> None:
        self._embeddings.append(embedding.copy())
        if len(self._embeddings) > self.history_size:
            self._embeddings.pop(0)

class CreativityEngine:
    """Emergent output generation via memory recombination"""
    def __init__(self, novelty_evaluator: NoveltyEvaluator):
        self.evaluator = novelty_evaluator

    def recombine(self, fragments: List[np.ndarray], temperature: float = 1.0) -> np.ndarray:
        """Stochastic mixing of fragments"""
        if not fragments:
            raise ValueError("fragments must not be empty")
        weights = np.random.dirichlet([1.0 / temperature] * len(fragments))
        combined = sum(w * f for w, f in zip(weights, fragments))
        norm = np.linalg.norm(combined)
        return combined / (norm + 1e-8)
```

Effort estimate: 3 weeks
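recombine() yields an (approximately) unit-norm output inside the span of its fragments; a standalone check, with an explicit seeded RNG added here for reproducibility:

```python
import numpy as np

def recombine(fragments, temperature: float = 1.0, rng=None):
    # Same Dirichlet mixing as CreativityEngine.recombine(), with an explicit
    # RNG added for reproducibility
    rng = rng or np.random.default_rng(0)
    weights = rng.dirichlet([1.0 / temperature] * len(fragments))
    combined = sum(w * f for w, f in zip(weights, fragments))
    return combined / (np.linalg.norm(combined) + 1e-8)

frags = [np.eye(4)[i] for i in range(3)]  # three orthonormal memory fragments
out = recombine(frags)
# the output is (approximately) unit norm and stays in the span of the fragments
```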


1.11 | 11-11: Self-awareness/reflection layer ✅ Fully implemented [Patent MT25-EV030]

Brain areas: medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC)

Fully implemented as evospikenet/biomimetic/introspection.py. The SelfState data class and IntrospectionLayer provide self-state recording, trend analysis, and report generation. Used from the self_ref mode of DefaultModeNetwork. Tested in tests/unit/test_introspection.py.

Implemented Core Features:
- SelfState — performance_score / confidence / error_rate / meta_vector
- IntrospectionLayer.update() — manages 100 history items
- IntrospectionLayer.trend() — improvement/deterioration trend via polyfit
- IntrospectionLayer.self_report() — summary output as a dict

Code (for reference):
```python
# evospikenet/biomimetic/introspection.py
import time
from dataclasses import dataclass, field
from typing import Any, Dict, List
import numpy as np

@dataclass
class SelfState:
    timestamp: float = field(default_factory=time.time)
    performance_score: float = 0.0   # Recent task performance
    confidence: float = 0.5          # Prediction confidence
    error_rate: float = 0.0          # Latest error rate
    meta_vector: np.ndarray = field(default_factory=lambda: np.zeros(64))

class IntrospectionLayer:
    """Meta-level evaluation of the system's own state and history"""
    HISTORY_LEN = 100

    def __init__(self):
        self._history: List[SelfState] = []

    def update(self, state: SelfState) -> None:
        self._history.append(state)
        if len(self._history) > self.HISTORY_LEN:
            self._history.pop(0)

    def trend(self, metric: str = "performance_score") -> float:
        """Trend over the last 10 entries (positive = improving)"""
        vals = [getattr(s, metric) for s in self._history[-10:]]
        if len(vals) < 2:
            return 0.0
        return float(np.polyfit(range(len(vals)), vals, 1)[0])

    def self_report(self) -> Dict[str, Any]:
        if not self._history:
            return {}
        s = self._history[-1]
        return {"performance": s.performance_score, "confidence": s.confidence,
                "error_rate": s.error_rate, "trend": self.trend()}
```

Effort estimate: 2 weeks
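The polyfit-based trend() reduces to a slope over the recent window; standalone:

```python
import numpy as np

def trend(vals) -> float:
    # Slope of a degree-1 polyfit over the window (positive = improving)
    if len(vals) < 2:
        return 0.0
    return float(np.polyfit(range(len(vals)), vals, 1)[0])

improving = trend([0.1, 0.2, 0.3, 0.4])  # slope of +0.1 per step
degrading = trend([0.9, 0.5, 0.1])       # negative slope
too_short = trend([0.5])                 # falls back to 0.0
```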


1.12 | 11-12: Dynamic target selector ✅ Complete implementation [Patent MT25-EV026]

Brain area: basal ganglia (caudate nucleus/striatum) + PFC loop

Fully implemented as evospikenet/biomimetic/goal_switcher.py. GoalCandidate and CaudateSelectorLayer (softmax goal selection) are implemented, and select() guards against division by zero and empty candidate lists. Tested in tests/unit/test_goal_switcher.py.

Implemented Core Features: - GoalCandidate — goal_id / value / cost - CaudateSelectorLayer.select() — Softmax stochastic selection with temperature

Code (for reference):
```python
# evospikenet/biomimetic/goal_switcher.py
import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class GoalCandidate:
    goal_id: str
    value: float       # expected value
    cost: float = 0.0  # switching cost

class CaudateSelectorLayer:
    """Caudate-nucleus analogue: basal-ganglia softmax goal selection"""
    def __init__(self, temperature: float = 1.0):
        self.temperature = temperature

    def select(self, candidates: List[GoalCandidate]) -> GoalCandidate:
        vals = np.array([c.value - c.cost for c in candidates])
        probs = np.exp(vals / self.temperature)
        probs /= probs.sum()
        idx = np.random.choice(len(candidates), p=probs)
        return candidates[int(idx)]
```

Effort estimate: 2 weeks
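The softmax selection in select() can be verified on its probability vector; this sketch adds a max-subtraction step for numerical stability, which the stub above omits:

```python
import numpy as np

def selection_probs(values, costs, temperature: float = 1.0) -> np.ndarray:
    # Softmax over net value (value - cost); max-subtraction guards against overflow
    vals = np.array(values, dtype=float) - np.array(costs, dtype=float)
    vals -= vals.max()
    probs = np.exp(vals / temperature)
    return probs / probs.sum()

p = selection_probs([1.0, 0.5], [0.0, 0.0], temperature=0.5)  # low T sharpens the choice
q = selection_probs([1.0, 0.5], [0.8, 0.0])                   # switching cost flips the ranking
```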


1.13 | 11-13: Emotional/emotional system (amygdala/limbic system) ✅ Implemented [Patent MT25-EV023]

Summary: Implemented the amygdala and insular cortex models to provide attention bias and learning-rate modulation. In BiomimeticAdapter, modulatory_gain and sleep_state consume these outputs and feed them into the neuromodulator gate update.

Implemented features:
- AmygdalaModel.plasticity_modulator() and attention_bias() are reflected in the attention bias of the neuromodulator gate and the adapter.
- InsulaCortex.update() generates the interoceptive state.
- Tests test_emotion_system.py and test_adapter_rhythm_emotion_reward.py verify the integrated behavior.

```python
# evospikenet/biomimetic/emotion_system.py
import numpy as np
from dataclasses import dataclass
from typing import List

@dataclass
class EmotionState:
    valence: float = 0.0
    arousal: float = 0.5
    fear: float = 0.0
    reward_signal: float = 0.0

class AmygdalaModel:
    """Amygdala model: emotional valuation and learning-rate modulation"""
    FEAR_THRESHOLD = 0.6

    def __init__(self):
        self.state = EmotionState()

    def evaluate(self, stimulus_embedding: np.ndarray,
                 threat_prototypes: List[np.ndarray]) -> EmotionState:
        """Update the emotion state from similarity to threat prototypes"""
        if threat_prototypes:
            sims = [float(np.dot(stimulus_embedding, tp) /
                          (np.linalg.norm(stimulus_embedding) * np.linalg.norm(tp) + 1e-8))
                    for tp in threat_prototypes]
            self.state.fear = float(max(sims))
        # Simple valence/arousal mapping
        self.state.arousal = 0.3 + 0.7 * self.state.fear
        self.state.valence = -self.state.fear
        return self.state

    def plasticity_modulator(self) -> float:
        """High emotion promotes memory consolidation (amygdala→hippocampus projection)"""
        return 1.0 + self.state.arousal * 0.5  # 1.0–1.5

    def attention_bias(self) -> float:
        """Attention bias toward threat/high-value stimuli [0,1]"""
        return min(1.0, self.state.fear + self.state.reward_signal)

class InsulaCortex:
    """Insular cortex model: interoceptive feedback"""
    def __init__(self):
        self.interoceptive_state: dict = {}

    def update(self, heart_rate: float, skin_conductance: float,
               respiration_rate: float) -> dict:
        self.interoceptive_state = {
            "heart_rate_norm": (heart_rate - 70) / 40,  # normalization
            "skin_conductance": skin_conductance,
            "respiration": (respiration_rate - 15) / 10,
        }
        return self.interoceptive_state
```

Effort estimate: 3 weeks


### 1.14 | 11-14: Sleep phase/memory consolidation cycle ✅ Implemented [Patent MT25-EV017]

Implemented:
- δ-wave synchronized batch replay via the SleepConsolidation class.
- The neuromodulator gate is switched to a plasticity-promoting state at the delta-wave peak.
- BiomimeticAdapter.sleep_state() updates the gate with emotional/motivational information.
- Uses the sharp-wave-ripple method and HippocampalBuffer.

Addendum: The offline integration loop has been tested (test_hippocampal_sleep.py).

Code stub:

```python
# evospikenet/biomimetic/sleep_consolidation.py
import asyncio, time
from typing import Any, List

class SleepConsolidation:
    """Sleep-phase memory consolidation: batch replay + δ-wave-triggered plasticity."""
    REPLAY_BATCH = 32
    SLOW_WAVE_HZ = 1.0  # delta wave: ~1 Hz

    def __init__(self, hippocampal_buffer, neuromod_gate):
        self.buffer = hippocampal_buffer
        self.gate   = neuromod_gate

    async def offline_consolidation(self, n_cycles: int = 10) -> None:
        """Offline consolidation loop, equivalent to non-REM sleep."""
        for cycle in range(n_cycles):
            # Wait for one δ-wave cycle
            await asyncio.sleep(1.0 / self.SLOW_WAVE_HZ)

            # δ-wave peak: switch to a highly plastic state
            self.gate.state.acetylcholine = 0.2   # low ACh → consolidation mode
            self.gate.state.dopamine      = 0.8   # high DA → potentiation

            # Batch replay
            episodes = self.buffer.prioritized_replay(n=self.REPLAY_BATCH)
            for ep in episodes:
                await self._replay_episode(ep)

            # Restore normal plasticity
            self.gate.state.acetylcholine = 0.5
            self.gate.state.dopamine      = 0.5

    async def sharp_wave_ripple(self, duration_ms: float = 100.0) -> None:
        """Sharp-wave ripple: high-frequency burst (mimics 100–200 Hz)."""
        n_bursts = int(duration_ms / 1000 * 150)   # 150 Hz
        for _ in range(n_bursts):
            await asyncio.sleep(1.0 / 150)
            # TODO: trigger hippocampal output spike to cortex

    async def _replay_episode(self, episode: Any) -> None:
        """Re-feed an episode into STDP learning (stub)."""
        pass  # Call plasticity.apply(episode.context)
```

Effort estimate: 3 weeks
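The δ-cycle gate-flip pattern used by offline_consolidation() can be sketched with stand-in classes. StubGate, StubBuffer, and consolidation_cycle below are illustrative names, with the wave period sped up so the demo runs quickly:

```python
import asyncio
from dataclasses import dataclass

# Stand-in state/gate/buffer so the snippet runs without the real classes
@dataclass
class GateState:
    acetylcholine: float = 0.5
    dopamine: float = 0.5

class StubGate:
    def __init__(self):
        self.state = GateState()

class StubBuffer:
    def prioritized_replay(self, n):
        return list(range(n))           # fake episode IDs

async def consolidation_cycle(gate, buffer, batch=4, wave_s=0.01):
    """One δ-wave cycle of the gate-flip pattern (period sped up for the demo)."""
    await asyncio.sleep(wave_s)         # wait one δ-wave period
    gate.state.acetylcholine = 0.2      # low ACh → consolidation mode
    gate.state.dopamine = 0.8           # high DA → potentiation
    replayed = buffer.prioritized_replay(batch)
    gate.state.acetylcholine = 0.5      # restore waking defaults
    gate.state.dopamine = 0.5
    return replayed

gate, buf = StubGate(), StubBuffer()
episodes = asyncio.run(consolidation_cycle(gate, buf))
print(len(episodes), gate.state.acetylcholine)
```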


### 1.15 | 11-15: Mirror neuron system ✅ Fully implemented 🟠 High priority [Patent MT25-EV024]

The MirrorNeuronSystem is implemented as evospikenet/biomimetic/mirror_neurons.py, with observation-to-motor mapping, imitation reward, and optional post-processor functionality. All relevant tests pass.

Brain areas: area F5, inferior frontal gyrus (IFG)

Code stub:

```python
# evospikenet/biomimetic/mirror_neurons.py
import numpy as np
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ObservedAction:
    action_embedding: np.ndarray  # representation of the observed behavior
    agent_id: str
    timestamp: float

class MirrorNeuronSystem:
    """Maps observed actions to the agent's own motor primitives."""

    def __init__(self, motor_dim: int = 64):
        self._motor_dim = motor_dim
        self._mapping: dict = {}  # action_class -> motor_primitive
        self._imitation_reward: float = 0.0

    def register_action_class(self, action_class: str,
                              motor_primitive: np.ndarray) -> None:
        assert motor_primitive.shape == (self._motor_dim,)
        self._mapping[action_class] = motor_primitive.copy()

    def observe_and_mirror(self, observed: ObservedAction,
                           classify_fn: Callable) -> Optional[np.ndarray]:
        """Observation → motor-primitive activation."""
        action_class = classify_fn(observed.action_embedding)
        motor = self._mapping.get(action_class)
        if motor is not None:
            # Imitation reward signal (similarity to the observation)
            self._imitation_reward = float(np.dot(
                observed.action_embedding[:self._motor_dim], motor
            ) / (np.linalg.norm(observed.action_embedding[:self._motor_dim]) *
                 np.linalg.norm(motor) + 1e-8))
        return motor

    @property
    def imitation_reward(self) -> float:
        return self._imitation_reward
```
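The observation-to-motor lookup and cosine imitation reward can be illustrated standalone. The mapping, classifier, and 4-dimensional motor space below are hypothetical values for the sketch, not the project defaults:

```python
import numpy as np

# Minimal sketch of observe-and-mirror: classify an observed embedding,
# look up the registered motor primitive, score imitation by cosine similarity.
motor_dim = 4
mapping = {"grasp": np.array([1.0, 0.0, 0.0, 0.0])}  # action_class -> primitive

def classify(embedding: np.ndarray) -> str:
    # Hypothetical classifier standing in for classify_fn
    return "grasp"

observed = np.array([1.0, 0.0, 0.0, 0.0])
motor = mapping[classify(observed)]
reward = float(np.dot(observed[:motor_dim], motor) /
               (np.linalg.norm(observed[:motor_dim]) * np.linalg.norm(motor) + 1e-8))
print(round(reward, 3))
```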

Effort estimate: 3 weeks


### 1.16 | 11-16: Acetylcholine module ✅ Fully implemented 🟢 Low priority

Brain area: basal forebrain, hippocampus

Implemented:
- Added an acetylcholine state to NeuromodulatorGate; gated_learning_rate is multiplied by the ACh scale.
- The AcetylcholineModule class triggers release based on theta-band power and returns a memory-encoding coefficient according to attention level.
- The BiomimeticAdapter's ach_module performs EEG-synchronized modulation (verified in tests/unit/test_biomimetic_adapter.py).
- Added theta and attention-modulation coverage to tests/unit/test_neuromodulators.py.

Remaining TODO:
- Hippocampal spike-timing-dependent release model
- End-to-end test of θ-ACh with sleep/memory consolidation

Code: see the existing neuromodulators.py.
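As an illustrative sketch of the theta-band trigger described above (not the AcetylcholineModule API): estimate theta (4–8 Hz) power from a short trace and map it to a release level. The sampling rate and mapping are assumptions for the demo:

```python
import numpy as np

# Illustrative: a pure 6 Hz theta component should drive release toward 1.0
fs = 250.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 6.0 * t)            # synthetic EEG trace

spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
theta_power = spectrum[(freqs >= 4) & (freqs <= 8)].sum() / spectrum.sum()

ach_release = min(1.0, theta_power)          # saturating mapping to [0, 1]
print(round(ach_release, 2))
```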

### 1.17 | 11-17: Nucleus accumbens/VTA loop (motivation/reward) ✅ Implemented 🟠 High priority [Patent MT25-EV023]

Brain areas: nucleus accumbens (NAcc), ventral tegmental area (VTA)

Implementation details:
- VTADopamineModel implements the TD update and stores the prediction error.
- NAccMotivationScaler provides a motivation scale based on the prediction error.
- BiomimeticAdapter's modulatory_gain and sleep_state read this scale and reflect it on the neuromodulator gate.
- Test test_reward_circuit.py and the adapter integration cases pass.

```python
# The following matches the actual implementation
from dataclasses import dataclass

@dataclass
class TDState:
    value: float = 0.0
    prediction_error: float = 0.0

class VTADopamineModel:
    LEARNING_RATE = 0.05
    DISCOUNT = 0.95

    def __init__(self):
        self._td = TDState()
        self._values: dict = {}

    def update(self, state_key: str, reward: float, next_state_key: str) -> float:
        v_s = self._values.get(state_key, 0.0)
        v_s1 = self._values.get(next_state_key, 0.0)
        td_error = reward + self.DISCOUNT * v_s1 - v_s
        self._values[state_key] = v_s + self.LEARNING_RATE * td_error
        self._td.value = self._values[state_key]
        self._td.prediction_error = td_error
        return td_error

class NAccMotivationScaler:
    def __init__(self, vta: VTADopamineModel):
        self.vta = vta

    def motivation_scale(self, base_drive: float = 0.5) -> float:
        prd_err = self.vta._td.prediction_error
        return max(0.0, base_drive + 0.3 * prd_err)
```
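A standalone usage sketch of the TD loop above, with the model condensed inline so the snippet runs by itself (it mirrors the source rather than importing the real module; the motivation formula follows NAccMotivationScaler with base_drive=0.5):

```python
# Condensed inline copy of the TD model above, for a runnable demo
class VTADopamineModel:
    LEARNING_RATE, DISCOUNT = 0.05, 0.95

    def __init__(self):
        self._values = {}
        self.prediction_error = 0.0

    def update(self, state_key, reward, next_state_key):
        v_s = self._values.get(state_key, 0.0)
        v_s1 = self._values.get(next_state_key, 0.0)
        td_error = reward + self.DISCOUNT * v_s1 - v_s
        self._values[state_key] = v_s + self.LEARNING_RATE * td_error
        self.prediction_error = td_error
        return td_error

vta = VTADopamineModel()
td1 = vta.update("s0", 1.0, "s1")                        # unexpected reward → positive TD error
motivation = max(0.0, 0.5 + 0.3 * vta.prediction_error)  # NAcc scale, base_drive=0.5
print(td1, round(motivation, 2))
```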

Effort estimate: 2 weeks


### 1.18 | 11-18: Default mode network (DMN) dedicated module ✅ Fully implemented 🟡 Medium priority [Patent MT25-EV019]

Brain areas: mPFC, PCC, angular gyrus, parahippocampal gyrus

Fully implemented as evospikenet/biomimetic/dmn.py. DefaultModeNetwork provides an asynchronous generator that uses IntrospectionLayer and CreativityEngine in three modes: self_ref / future_sim / social_sim. Constructor arguments default to None, so the module can be used standalone. run_idle_loop() is an async generator typed as AsyncIterator[DMNActivity]. Tested in tests/unit/test_dmn.py.

Implemented core features:
- DMNActivity — mode / content / salience
- DefaultModeNetwork.run_idle_loop() — async generator (0.1 Hz)
- stop() / deactivate() — stop control
- _generate_activity() — self_ref / future_sim / social_sim generation logic

Code (for reference):

```python
# evospikenet/biomimetic/dmn.py
import asyncio, random
import numpy as np
from dataclasses import dataclass, field
from typing import List

@dataclass
class DMNActivity:
    mode: str  # "idle" | "self_ref" | "future_sim" | "social_sim"
    content: np.ndarray = field(default_factory=lambda: np.zeros(128))
    salience: float = 0.0

class DefaultModeNetwork:
    """Spontaneous activity, self-reference, and future simulation during task-free periods."""
    IDLE_RATE_HZ = 0.1  # spontaneous rate of idle activity

    def __init__(self, introspection_layer, creativity_engine):
        self.introspection = introspection_layer
        self.creativity    = creativity_engine
        self._active = False

    async def run_idle_loop(self, episodes: list):
        """Generate spontaneous activity during task-free periods (async generator)."""
        self._active = True
        while self._active:
            await asyncio.sleep(1.0 / self.IDLE_RATE_HZ)
            mode = random.choice(["self_ref", "future_sim", "social_sim"])
            activity = self._generate_activity(mode, episodes)
            yield activity   # async generator

    def _generate_activity(self, mode: str, episodes: list) -> DMNActivity:
        if mode == "self_ref":
            report = self.introspection.self_report()
            vec = np.array(list(report.values()) if report else [0.0] * 4)
            vec = np.pad(vec, (0, 128 - len(vec)))
        elif mode == "future_sim" and episodes:
            fragments = [np.random.randn(128) for _ in episodes[:3]]
            vec = self.creativity.recombine(fragments)
        else:
            vec = np.random.randn(128) * 0.1
        norm = np.linalg.norm(vec)
        return DMNActivity(mode=mode, content=vec / (norm + 1e-8), salience=float(norm))

    def deactivate(self) -> None:
        self._active = False
```

Effort estimate: 3 weeks
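The run_idle_loop() pattern — a rate-limited async generator consumed with `async for` — in miniature, with stub content and a sped-up rate (names here are illustrative, not the project API):

```python
import asyncio

# Minimal rate-limited async generator, the same shape as run_idle_loop()
async def idle_loop(rate_hz: float, n: int):
    for i in range(n):
        await asyncio.sleep(1.0 / rate_hz)   # pace the spontaneous activity
        yield {"mode": "self_ref", "salience": 0.1 * i}

async def collect():
    out = []
    async for activity in idle_loop(rate_hz=100.0, n=3):
        out.append(activity["mode"])
    return out

modes = asyncio.run(collect())
print(modes)
```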


### 1.19 | 11-19: Curriculum learning scheduler ✅ Fully implemented 🟢 Low priority [Patent MT25-EV021]

Fully implemented as the CurriculumScheduler class in evospikenet/biomimetic/developmental_dynamics.py. It coexists in the same file as DevelopmentalSchedule, and the difficulty(epoch, total_epochs) method returns a difficulty scale in [0, 1] according to the epoch. Tested in tests/unit/test_developmental_dynamics.py.

Implemented core features:
- CurriculumScheduler.difficulty() — linear difficulty scale with stage divisions
- Same-file integration with DevelopmentalSchedule

Effort estimate: ✅ Completed (integrated with 11-8)
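A hedged sketch of the difficulty(epoch, total_epochs) contract described above — a linear, stage-divided scale in [0, 1]. The stage count and rounding scheme here are assumptions, not the actual CurriculumScheduler implementation:

```python
# Linear, stage-divided difficulty curve in [0, 1] (illustrative constants)
def difficulty(epoch: int, total_epochs: int, stages: int = 4) -> float:
    frac = epoch / max(1, total_epochs - 1)      # progress through training
    stage = min(stages - 1, int(frac * stages))  # which curriculum stage
    return stage / (stages - 1)

print([round(difficulty(e, 10), 2) for e in (0, 3, 6, 9)])
```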


## 2. Priority ranking

| Priority | Item ID | Name | Reason |
|---|---|---|---|
| 🔴 Top | 11-13 | Emotion/affect system | Directly linked to judgment, memory consolidation, and attention bias |
| 🔴 Top | 11-14 | Sleep phase/memory consolidation | Essential for stabilizing long-term learning |
| 🟠 High | 11-17 | Nucleus accumbens/VTA reward loop | Foundation of motivation/goal-driven behavior |
| 🟠 High | 11-15 | Mirror neurons | Foundation of social learning and imitation |
| 🟠 High | 11-4 complement | Plasticity gate integration | Practical application of BiomimeticAdapter |
| 🟠 High | 11-1 complement | Full-band phase synchronization | Rhythm–plasticity linkage |
| 🟡 Medium | 11-9 | Intention expression module | Plan/goal-driven behavior |
| 🟡 Medium | 11-18 | DMN dedicated module | Foundation for creativity and reflection |
| 🟡 Medium | 11-5 | Hippocampal memory expansion | Close collaboration with 11-14 |
| 🟡 Medium | 11-8 complement | Pruning/plasticity schedule | Developmental dynamics completed |
| 🟡 Medium | 11-3 | Layered topology | Cortex-like wiring structure |
| 🟡 Medium | 11-10 | Creativity generation engine | Depends on DMN |
| 🟡 Medium | 11-11 | Self-awareness/introspection | Prerequisite for complex tasks |
| 🟡 Medium | 11-12 | Dynamic goal selector | Integrated with 11-17 |
| 🟢 Low | 11-6 | Sensorimotor loop reinforcement | Requires robot integration |
| 🟢 Low | 11-7 complement | Energy firing penalty | EVO score linkage |
| 🟢 Low | 11-16 | Acetylcholine | Low cost by reusing the 11-4 framework |
| 🟢 Low | 11-2 complement | STP/inhibitory-subtype integration | Basic implementation completed |
| 🟢 Low | 11-19 | Curriculum scheduler | Extension of 11-8 |
---
## 3. Implementation roadmap by phase
```
~2026 Q2 (Phase 2A) — Emotion / reward / sleep foundations
┌─────────────────────────────────────────────────────────────────┐
│ Week 1–3  : 11-13 AmygdalaModel + InsulaCortex                  │
│ Week 4–6  : 11-17 VTADopamineModel + NAccMotivationScaler       │
│ Week 7–9  : 11-14 SleepConsolidation + HippocampalBuffer (11-5) │
│ Week 10–12: Integration tests + full BiomimeticAdapter wiring   │
└─────────────────────────────────────────────────────────────────┘

~2026 Q3 (Phase 2B) — Higher cognition / sociality
┌─────────────────────────────────────────────────────────────────┐
│ Week 1–3  : 11-1 Full-band rhythms + Zenoh latency tags         │
│ Week 4–6  : 11-4 NeuromodulatorGate → plasticity.py wiring      │
│ Week 7–9  : 11-15 MirrorNeuronSystem                            │
│ Week 10–12: 11-9 IntentionModule + PFC fusion extension         │
└─────────────────────────────────────────────────────────────────┘

~2026 Q4 (Phase 3) — Autonomy / architecture completion
┌─────────────────────────────────────────────────────────────────┐
│ Week 1–3  : 11-8 complement (pruning/myelination) + 11-19       │
│ Week 4–6  : 11-3 CorticalTopologyGenerator                      │
│ Week 7–9  : 11-18 DMN + 11-10 CreativityEngine                  │
│ Week 10–12: 11-11 IntrospectionLayer + 11-12 CaudateSelector    │
└─────────────────────────────────────────────────────────────────┘

~2027 Q1 (Phase 4) — Sensation / motor / energy
┌─────────────────────────────────────────────────────────────────┐
│ 11-6  Sensorimotor loop (DoG/Gabor/cochlea/vestibular)          │
│ 11-7  complement (firing-rate penalty + evolution-score link)   │
│ 11-16 Acetylcholine system (11-4 framework extension)           │
└─────────────────────────────────────────────────────────────────┘
```

---

## 4. Module dependency graph
```
BiomimeticAdapter (existing)
├── rhythm_metrics()     ──→ [11-1] BrainRhythmSynchronizer
├── modulatory_gain()    ──→ [11-4] NeuromodulatorGate
│                             └──→ [11-16] AcetylcholineModule
├── homeostasis_scale()  ──→ [11-7] NodeEnergyBudget
├── dev_gain()           ──→ [11-8] DevelopmentalSchedule
│                             └──→ [11-19] CurriculumScheduler
└── sleep_state()        ──→ [11-14] SleepConsolidation
                              ├──→ [11-5] HippocampalBuffer
                              └──→ [11-1] rhythm_metrics (δ wave)

[11-13] AmygdalaModel
├── plasticity_modulator() ──→ plasticity.py (STDP learning-rate modulation)
├── attention_bias()       ──→ spatial_processing.py (attention control)
└──→ [11-14] SleepConsolidation (emotion-weighted replay priority)

[11-17] VTADopamineModel
├──→ [11-4] NeuromodulatorGate.state.dopamine update
└──→ [11-12] CaudateSelectorLayer (dynamic goal selection)

[11-9] IntentionModule
├──→ evospikenet/pfc.py (PFC fusion-layer extension)
└──→ [11-12] CaudateSelectorLayer

[11-10] CreativityEngine
├──→ [11-5] HippocampalBuffer (memory-fragment retrieval)
└──→ [11-18] DefaultModeNetwork

[11-11] IntrospectionLayer
└──→ [11-18] DefaultModeNetwork (self-referential content generation)

[11-15] MirrorNeuronSystem
└──→ [11-17] NAccMotivationScaler (imitation reward signal)

[11-3] CorticalTopologyGenerator
├──→ [11-2] TsodyksMarkramSynapse (per-layer synapse types)
└──→ evospikenet/hierarchical_plasticity.py

[11-6] SensoryPreprocessing
└──→ evospikenet/spatial_processing.py (V1 preprocessing additions)
```

---

## 5. Implementation Guidelines

### 5.1 File structure
```
evospikenet/
└── biomimetic/                    # new package
    ├── __init__.py
    ├── rhythm_sync.py             # [11-1]
    ├── cortical_topology.py       # [11-3]
    ├── neuromodulators.py         # [11-4] + [11-16]
    ├── hippocampal_memory.py      # [11-5]
    ├── working_memory.py          # [11-5 continued]
    ├── sensory_preprocessing.py   # [11-6]
    ├── motor_efference.py         # [11-6 continued]
    ├── energy_homeostasis.py      # [11-7]
    ├── developmental_dynamics.py  # [11-8] + [11-19]
    ├── intention_module.py        # [11-9]
    ├── creativity_engine.py       # [11-10]
    ├── introspection.py           # [11-11]
    ├── goal_switcher.py           # [11-12]
    ├── emotion_system.py          # [11-13]
    ├── sleep_consolidation.py     # [11-14]
    ├── mirror_neurons.py          # [11-15]
    ├── reward_circuit.py          # [11-17]
    └── dmn.py                     # [11-18]

tests/unit/
├── test_rhythm_sync.py
├── test_emotion_system.py
├── test_sleep_consolidation.py
├── test_reward_circuit.py
├── test_mirror_neurons.py
├── test_intention_module.py
├── test_hippocampal_memory.py
├── test_developmental_dynamics.py
├── test_dmn.py
└── test_cortical_topology.py

tests/integration/
└── test_biomimetic_full_pipeline.py   # full module integration test
```

### 5.2 Integration policy into BiomimeticAdapter

The existing `BiomimeticAdapter` will be expanded gradually. The new modules operate independently; `BiomimeticAdapter` acts as a facade that invokes each module from its `apply()` method.

```python
# Sketch of the planned additions to evospikenet/eeg_integration/distributed_brain_executor.py
class BiomimeticAdapter:
    def __init__(self, config):
        ...
        # Phase 2A (Q2)
        self.emotion      = AmygdalaModel()            # [11-13]
        self.vta          = VTADopamineModel()         # [11-17]
        self.sleep_cons   = SleepConsolidation(...)    # [11-14]

        # Phase 2B (Q3)
        self.rhythm_sync  = BrainRhythmSynchronizer()  # [11-1]
        self.neuromod     = NeuromodulatorGate()        # [11-4]
        self.mirror       = MirrorNeuronSystem()        # [11-15]
        self.intention    = IntentionModule()           # [11-9]

    def apply(self, eeg_data, metadata, command) -> dict:
        """統合ゲイン計算 (後方互換)"""
        rhythms  = self.rhythm_sync.compute_all_bands(eeg_data)
        emotion  = self.emotion.evaluate(...)
        td_error = self.vta.update(...)
        ...
        return {"biomimetic_gain": ..., "emotion_state": ..., ...}
```

### 5.3 Testing strategy

  • Each module can be unit tested independently (replace dependencies with pytest.fixture)
  • Integration test test_biomimetic_full_pipeline.py is run on Docker (dev/ubuntu:22.04/CPU)
  • Performance test: BiomimeticAdapter.apply() execution time < 5 ms/call

### 5.4 Principles of implementation order

  1. Commit the code stub first → confirm that CI/CD passes
  2. Write unit tests first (TDD)
  3. Integration into BiomimeticAdapter will be done after testing each module.
  4. Pay attention to the order dependence of emotions (11-13) → sleep fixation (11-14)

## 6. Effort/resource estimate summary

| Phase | Items included | Total effort | Completion goal |
|---|---|---|---|
| 2A (emotion/reward/sleep) | 11-13, 11-17, 11-14, 11-5 | 11 weeks | End of 2026-Q2 |
| 2B (rhythm/plasticity/sociality) | 11-1 complement, 11-4 complement, 11-15, 11-9 | 10 weeks | End of 2026-Q3 |
| 3 (higher cognition/structure) | 11-8, 11-3, 11-18, 11-10, 11-11, 11-12 | 16 weeks | End of 2026-Q4 |
| 4 (sensation/motor/energy) | 11-6, 11-7, 11-16, 11-2 complement, 11-19 | 8 weeks | End of 2027-Q1 |
| Total | 19 items | ~45 weeks | 2027-Q1 |

## 7. Risks and mitigations

| Risk | Impact | Mitigation |
|---|---|---|
| Emotion state interferes with existing tests | 23 existing tests break | Disabled by default behind an emotion flag |
| Sleep buffer memory growth | OOM occurs | Enforce a buffer upper limit (MAX_SIZE=10,000) |
| δ-wave trigger loop deadlock | Distributed processing stalls | Require an asyncio timeout (10 s) |
| NeuromodulatorGate causes learning divergence | STDP runaway | Clamp gated_lr with an upper limit (≤5×base_lr) |
| Conflict with the PFC intent vector | Existing PFC tests fail | Keep the existing interface and only extend it |
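The gated_lr runaway guard from the mitigation table can be sketched directly (the function name is illustrative; the ≤5×base_lr cap is as stated above):

```python
# Clamp the gated learning rate to at most 5× the base rate
def clamp_gated_lr(gated_lr: float, base_lr: float, cap: float = 5.0) -> float:
    return min(gated_lr, cap * base_lr)

print(round(clamp_gated_lr(0.08, 0.01), 4))   # clamped to 5 × 0.01
print(round(clamp_gated_lr(0.02, 0.01), 4))   # within the cap, unchanged
```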

## 8. References/implementation standards

  • Tsodyks-Markram STP model: Tsodyks & Markram (1997), PNAS
  • Hippocampal sharp wave ripple: Buzsáki (2015), Neuron
  • Nucleus accumbens-VTA TD model: Schultz et al. (1997), Science
  • Mirror neurons: Rizzolatti & Craighero (2004), Annu. Rev. Neurosci.
  • DMN & spontaneous activities: Raichle et al. (2001), PNAS
  • Gabor/DoG visual preprocessing: Daugman (1985), JOSAA
  • Sleep Memory Fixation: Wilson & McNaughton (1994), Science

*This document was developed based on a review of Remaining_Functionality.md Section 11 and docs-dev/REMAINING_FEATURES.md. Please update it as implementation progresses.*


## 9. Phase A/B integration record (2026-03-06)

This section documents the “last mile” integration effort (Phase A/B) conducted after Section 11 was completed.

### 9.1 BrainSimulationFramework — biomimetic/ all-module integration layer

```python
from evospikenet.brain_simulation import BrainSimulationFramework

framework = BrainSimulationFramework(enable_biomimetic=True)
result = framework.run_simulation(duration=1000)    # 6 stages: development, control, STDP, energy, hippocampus, sleep

activities = await framework.run_idle_phase(duration_s=10.0)  # DMN idle cycle
status = framework.biomimetic_status()  # snapshot of all module statuses
```

Internal walkthrough:

```
BrainSimulationFramework.run_simulation(t)
 ├─ ① DevelopmentalSchedule.plasticity_multiplier(t)          ← biomimetic ✅
 ├─ ② NeuralCircuitModeler.simulate_timestep()                ← Izhikevich support (B-2) ✅
 ├─ ③ STDP × NeuromodulatorGate.gated_learning_rate()         ← biomimetic (A-3) ✅
 ├─ ④ NodeEnergyBudget.energy_fitness_term() → weight control ← biomimetic ✅
 ├─ ⑤ HippocampalBuffer.store(episode)                        ← biomimetic ✅
 └─ ⑥ SleepConsolidation.offline_consolidation()              ← biomimetic (A-4) ✅
```

### 9.2 Izhikevich backend (B-2)

```python
from evospikenet.brain_simulation import NeuralCircuitModeler, NeuralCircuitConfig

cfg = NeuralCircuitConfig(num_neurons=100, num_inputs=10, connectivity=0.2)
circuit = NeuralCircuitModeler(cfg, neuron_type="izhikevich")
spikes, membrane_v = circuit.simulate_timestep(input_current, t=0)
# Internally calls IzhikevichNeuron.step(input, dt) for all neurons
```

### 9.3 CorticalTopologyGenerator → BrainRegionIntegrator (B-3)

```python
from evospikenet.biomimetic import CorticalTopologyGenerator
from evospikenet.brain_simulation import BrainRegionIntegrator

gen = CorticalTopologyGenerator()
integrator = BrainRegionIntegrator()
added = integrator.add_cortical_topology(gen, nx_cols=4, ny_cols=4)
# 16 columns are registered as BrainRegionConfig with small-world connections within a √2 mm neighborhood
```
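The √2 mm neighborhood rule can be checked in isolation: on a 4×4 column grid with 1 mm spacing (an assumption for this sketch, not the project's actual geometry), each column connects to its 4-neighbors plus diagonals:

```python
import itertools
import math

# Columns within sqrt(2) distance get connected: 4-neighbors + diagonals
cols = list(itertools.product(range(4), range(4)))
edges = [(a, b) for a, b in itertools.combinations(cols, 2)
         if math.dist(a, b) <= math.sqrt(2) + 1e-9]
print(len(cols), len(edges))
```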

### 9.4 STDP ⇔ NeuromodulatorGate (A-3)

```python
from evospikenet.biomimetic import NeuromodulatorGate
from evospikenet.plasticity import STDP

gate = NeuromodulatorGate()
stdp = STDP.with_neuromodulation(gate)       # factory method
# or
stdp.connect_plasticity_gate(gate)           # retrofit connection
```

### 9.5 NeuromodulatorGate ⇔ NeuromodulatorRegistry (B-1)

```python
from evospikenet.biomimetic import NeuromodulatorGate, NeuromodulatorRegistry

registry = NeuromodulatorRegistry()
gate = NeuromodulatorGate()
gate.connect_to_registry(registry)   # establish the bidirectional bridge
gate.push_to_registry()              # gate state → registry
gate.pull_from_registry()            # registry values → gate state
```
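The push/pull bridge pattern in miniature, with stub classes standing in for the real NeuromodulatorGate and NeuromodulatorRegistry (whose APIs are richer than this sketch):

```python
class Registry:
    """Stub registry holding shared neuromodulator values."""
    def __init__(self):
        self.values = {"dopamine": 0.5}

class Gate:
    """Stub gate with local state and a registry bridge."""
    def __init__(self):
        self.dopamine = 0.7
        self._registry = None

    def connect_to_registry(self, registry):
        self._registry = registry

    def push_to_registry(self):
        self._registry.values["dopamine"] = self.dopamine   # gate → registry

    def pull_from_registry(self):
        self.dopamine = self._registry.values["dopamine"]   # registry → gate

gate, reg = Gate(), Registry()
gate.connect_to_registry(reg)
gate.push_to_registry()
print(reg.values["dopamine"])
```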

### 9.6 Test maintenance

| Test file | Contents |
|---|---|
| tests/unit/test_biomimetic_init_api.py | `__init__.py` all-symbol verification |
| tests/unit/test_stdp_neuromodulation.py | STDP ⇔ Gate wiring |
| tests/unit/test_sleep_consolidation_stdp.py | STDP replay, offline_consolidation(), stats |
| tests/unit/test_efference_copy_adaptive.py | Adaptive gain, reset() |
| tests/unit/test_mirror_neurons_default_classify.py | _default_classify(), backward compatibility |
| tests/integration/test_brain_simulation_biomimetic.py | BrainSimulationFramework full integration |
| tests/integration/test_dmn_idle_phase.py | run_idle_phase(), DMN stop confirmation |

```shell
# Docker
docker compose -f docker-compose.test.yml --profile biomimetic run --rm biomimetic-test
```

## 10. Phase D — Distributed node integration (2026-03-11)

This section records modifications made to make the biomimicry module work at the DistributedBrainNode level.

### 10.1 Adding a BrainSimulation alias

distributed_brain_node.py imported the class via `from evospikenet.brain_simulation import BrainSimulation`, but the class did not exist, causing an ImportError. The issue was resolved by adding a BrainSimulation(BrainSimulationFramework) wrapper class at the end of brain_simulation.py.

```python
# evospikenet/brain_simulation.py (added at the end)
class BrainSimulation(BrainSimulationFramework):
    """DistributedBrainNode-compatible alias."""
    def __init__(self, node_id: str = "node", config: dict | None = None, **kwargs):
        cfg = config or {}
        circuit_config = NeuralCircuitConfig(num_neurons=int(cfg.get("neuron_count", 1000)))
        super().__init__(circuit_config=circuit_config, **kwargs)
        self.node_id = node_id
        self.node_config = cfg
        self.specialization: str = str(cfg.get("specialization", "general"))
```

### 10.2 InstantiatedBrain.apply_weight_delta()

STDP's apply_plasticity_update() computes and returns an INT16 delta tensor, but there was no way to apply it to the actual weights. The problem was resolved by adding apply_weight_delta() to InstantiatedBrain.

```python
# evospikenet/genome_to_brain.py — addition to the InstantiatedBrain class
def apply_weight_delta(self, module_name: str, delta: torch.Tensor,
                       learning_rate: float = 1e-4) -> None:
    """Apply an INT16 STDP delta to nn.Linear weights in place."""
    # Scale by delta / 32767 × lr and update the first Linear layer matching the shape
```

Usage flow:

```python
delta = brain.apply_plasticity_update("pfc", spike_hist, syn_mat)
if delta is not None:
    brain.apply_weight_delta("pfc", delta)  # immediately reflected in the weights
```
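The scaling rule in the comment (delta / 32767 × lr) can be verified standalone; NumPy stands in for torch here purely for illustration:

```python
import numpy as np

# An INT16 delta is normalized by 32767 and applied in place with a learning rate
weight = np.zeros((2, 2), dtype=np.float32)
delta = np.array([[32767, 0], [0, -32767]], dtype=np.int16)
lr = 1e-4

weight += delta.astype(np.float32) / 32767.0 * lr
print(float(weight[0, 0]) > 0, float(weight[1, 1]) < 0)
```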

### 10.3 `DistributedBrainNode.deploy_genome()` and genome-driven forward pass

There was no mechanism for `DistributedBrainNode` to receive genomes, and it was working only with plain `BrainSimulation`. Added the `deploy_genome()` method and changed it to perform forward pass via `InstantiatedBrain` in `_process_brain_command()`.

```python
def deploy_genome(self, genome) -> None:
    """Deploy an EvoGenome to this node as an InstantiatedBrain."""
    from evospikenet.genome_to_brain import GenomeToBrainConverter
    self.instantiated_brain = GenomeToBrainConverter().instantiate(genome)
```

### 10.4 DistributedEvolutionEngine.deploy_to_nodes()

A bridge from the evolution engine to the distributed nodes was missing. Added a deploy_to_nodes(nodes) method.

```python
def deploy_to_nodes(self, nodes: list) -> None:
    """Deploy best_genome to every DistributedBrainNode in the list."""
    for node in nodes:
        if hasattr(node, "deploy_genome") and self.best_genome is not None:
            node.deploy_genome(self.best_genome)
```

### 10.5 List of changed files

| File | Changes |
|---|---|
| evospikenet/brain_simulation.py | BrainSimulation wrapper class added |
| evospikenet/genome_to_brain.py | InstantiatedBrain.apply_weight_delta() added |
| evospikenet/distributed_brain_node.py | deploy_genome() + genome-driven forward pass + get_stats() update |
| evospikenet/distributed_evolution_engine.py | deploy_to_nodes() added |