
Distributed Brain EEG Integration System

[!NOTE] For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

Overview

The system generates EEG in real time from EvoSpikeNet's distributed spiking neural network (SNN) simulation and delivers it via WebSocket.

Architecture

Components

  1. Distributed Brain Simulator (DistributedBrainSimulator)
     • Simulates multiple brain regions (prefrontal cortex, motor cortex, visual cortex, auditory cortex) in parallel
     • Implements each region as a population of LIF neurons
     • Generates spike trains in real time

  2. Spike-to-EEG Converter (SpikeToEEGConverter)
     • Converts spike trains into continuous EEG signals
     • Models postsynaptic potentials (PSPs) with alpha-function kernels
     • Adds background oscillations (alpha, beta, gamma waves)

  3. WebSocket Server (DistributedBrainEEGServer)
     • Streams in real time at 100 Hz
     • Delivers 4 channels of EEG data (one per brain region)
     • Integrates with the front-end EEG Visualizer

Mapping of brain regions and EEG channels

| Channel | Brain Region | Characteristic Frequency | Function |
| --- | --- | --- | --- |
| Ch1 | Prefrontal cortex (PFC) | 12 Hz (α wave) | Executive function, decision making |
| Ch2 | Motor cortex | 20 Hz (β wave) | Motor planning and execution |
| Ch3 | Visual cortex | 40 Hz (γ wave) | Visual processing |
| Ch4 | Auditory cortex | 30 Hz (high β / low γ) | Speech processing |
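For illustration, the mapping above can be kept as a small lookup table. This is a sketch: `CHANNEL_MAP` and its field names are hypothetical, not part of the EvoSpikeNet API.

```python
# Illustrative channel-to-region lookup table (names are hypothetical,
# not part of the actual EvoSpikeNet API).
CHANNEL_MAP = {
    "Ch1": {"region": "prefrontal_cortex", "base_frequency": 12.0, "band": "alpha"},
    "Ch2": {"region": "motor_cortex",      "base_frequency": 20.0, "band": "beta"},
    "Ch3": {"region": "visual_cortex",     "base_frequency": 40.0, "band": "gamma"},
    "Ch4": {"region": "auditory_cortex",   "base_frequency": 30.0, "band": "high-beta/low-gamma"},
}

def region_for_channel(channel: str) -> str:
    """Return the brain region that drives a given EEG channel."""
    return CHANNEL_MAP[channel]["region"]
```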

Implementation details

1. Spike train generation

Each brain region generates spikes using the LIF neuron model:

```text
# LIF neuron dynamics
V(t+1) = V(t) * leak + I_syn(t) + I_ext(t)
if V(t) >= threshold:
    spike = 1
    V(t) = reset_potential
```

Parameters:
  • threshold: 1024 (fixed-point representation)
  • leak: 230/256 ≈ 0.9 (decay coefficient)
  • reset_potential: 0
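The update rule and parameters above can be sketched in a few lines of Python. This is a minimal stand-in for illustration, not the actual DistributedBrainSimulator code.

```python
# Minimal fixed-point LIF update with the documented parameters
# (threshold=1024, leak=230/256, reset=0). A sketch, not the real
# DistributedBrainSimulator implementation.
def lif_step(v: int, i_syn: int, i_ext: int,
             threshold: int = 1024, leak_num: int = 230,
             leak_den: int = 256, reset: int = 0) -> tuple[int, int]:
    """One timestep of a leaky integrate-and-fire neuron.

    Returns (new_membrane_potential, spike), where spike is 0 or 1.
    """
    v = (v * leak_num) // leak_den + i_syn + i_ext  # leaky integration
    if v >= threshold:                              # threshold crossing
        return reset, 1                             # fire and reset
    return v, 0

# Drive one neuron with a constant external current and collect spikes.
v, spikes = 0, []
for _ in range(100):
    v, s = lif_step(v, i_syn=0, i_ext=120)
    spikes.append(s)
```

With a constant input of 120 the membrane potential climbs toward its fixed point above threshold, so the neuron fires periodically.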

2. Spike-EEG conversion

Spike trains are convolved with a physiologically plausible Alpha function kernel:

```text
kernel(t) = (t/τ) * exp(1 - t/τ)
EEG(t) = (spike_train ∗ kernel)(t)    # ∗ denotes convolution
```

  • τ: synaptic time constant (10 ms)
  • kernel_size: 50 ms
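A minimal sketch of the kernel and convolution, assuming a 1 ms simulation timestep; the function names are illustrative, not the SpikeToEEGConverter API.

```python
import numpy as np

# Alpha-function PSP kernel and spike-train convolution as described
# above (tau = 10 ms, kernel length = 50 ms). A sketch assuming a
# 1 ms timestep; not the actual SpikeToEEGConverter code.
def alpha_kernel(tau_ms: float = 10.0, size_ms: int = 50,
                 dt_ms: float = 1.0) -> np.ndarray:
    t = np.arange(dt_ms, size_ms + dt_ms, dt_ms)
    return (t / tau_ms) * np.exp(1.0 - t / tau_ms)  # peaks at t = tau

def spikes_to_eeg(spike_train: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    # 'full' convolution trimmed to the input length keeps causality:
    # each spike contributes a PSP-shaped bump starting at its own bin.
    return np.convolve(spike_train, kernel, mode="full")[: len(spike_train)]

spike_train = np.zeros(200)
spike_train[[20, 90, 150]] = 1.0
eeg = spikes_to_eeg(spike_train, alpha_kernel())
```

Each spike produces a bump that rises over ~10 ms and decays, so the resulting trace is a smooth, EEG-like signal rather than a train of impulses.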

3. Adding background oscillations

Each brain region's characteristic frequency component is added to the spike-derived signal:

```text
oscillation(t) = A * sin(2π * f * t + φ)
EEG_final(t) = EEG_spikes(t) + oscillation(t) + noise(t)
```
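The summation above can be sketched as follows; the amplitude, noise level, phase, and 100 Hz sampling rate defaults are illustrative assumptions, not values taken from the implementation.

```python
import numpy as np

# Adds a region-specific background oscillation plus Gaussian noise to
# the spike-derived EEG, per the formula above. Amplitude, noise level
# and the 100 Hz sampling rate are illustrative assumptions.
def add_background(eeg_spikes, freq_hz, fs=100.0, amplitude=0.5,
                   noise_std=0.05, phase=0.0, rng=None):
    rng = rng or np.random.default_rng(0)
    t = np.arange(len(eeg_spikes)) / fs
    oscillation = amplitude * np.sin(2 * np.pi * freq_hz * t + phase)
    noise = rng.normal(0.0, noise_std, size=len(eeg_spikes))
    return eeg_spikes + oscillation + noise

# e.g. a visual-cortex channel with a 40 Hz gamma background
eeg_final = add_background(np.zeros(500), freq_hz=40.0)
```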

4. Real-time streaming

The WebSocket server delivers data at 100Hz in the following format:

```json
{
  "timestamp": 1769832375.04,
  "data": {
    "Ch1": 1.23,
    "Ch2": -0.45,
    "Ch3": 2.11,
    "Ch4": 0.87
  }
}
```
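The message format and 100 Hz pacing can be sketched without a live socket. Here a stand-in `send` callable replaces the WebSocket send; all names are illustrative, not the DistributedBrainEEGServer API.

```python
import asyncio
import json
import time

# Builds one message in the wire format shown above and paces delivery
# at a fixed rate. A sketch: send() stands in for the WebSocket send of
# the real DistributedBrainEEGServer.
def make_message(sample: dict) -> str:
    return json.dumps({"timestamp": time.time(), "data": sample})

async def stream(send, get_sample, rate_hz: float = 100.0, n: int = 10):
    interval = 1.0 / rate_hz  # 0.01 s at 100 Hz
    for _ in range(n):
        await send(make_message(get_sample()))
        await asyncio.sleep(interval)

# Collect a few messages with a dummy sink instead of a live socket.
sent = []
async def sink(msg: str) -> None:
    sent.append(msg)

asyncio.run(stream(sink,
                   lambda: {"Ch1": 0.0, "Ch2": 0.0, "Ch3": 0.0, "Ch4": 0.0},
                   n=3))
```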

Implementation status (updated: 2026-03-11)

  • A stabilization patch has been applied to evospikenet/eeg_integration/comparative_analysis.py to improve the numerical stability of statistical processing (Rayleigh test, wPLI) for phase synchronization/comparative analysis.
  • EEG-related unit tests (including comparative analysis) have been executed in the container: tests/unit/eeg_integration/test_comparative_analysis.py → 16 passed, 20 warnings.
  • Improvements to existing EEG pipelines: enhanced data validation in eeg_translator.py, wavelet-name compatibility (morlet → morl) in spectrum_converter.py, and driver compatibility fixes in device_interface.py; many EEG integration tests now pass.
  • (Added on 2026-03-11) Phase D integration is complete, including BrainSimulation alias correction, deploy_genome() addition, genome-driven forward pass, and apply_weight_delta(), allowing DistributedBrainNode to directly utilize the results of genome evolution for inference (see the "Genome-driven inference pipeline" section for details).
  • Remaining work: One test remains to be fixed regarding the OpenBCI driver's disconnect() state transition (return to DISCONNECTED).

How to use

1. Launching the distributed brain EEG server

```shell
# Run inside a Docker container
docker-compose exec dev python scripts/start_distributed_brain_eeg_server.py

# or run in the background
docker-compose exec -d dev python scripts/start_distributed_brain_eeg_server.py
```

Note: Due to repository settings, the dev service has been moved to profiles: ["full"]. It will not start with a plain docker compose up. To start dev, specify the profile:

```shell
docker compose --profile full up -d dev
```

2. Display with EEG Visualizer

  1. Open http://localhost:8052/eeg-visualizer in your browser
  2. Set Connection Type to WebSocket
  3. Set the WebSocket URL to ws://evospikenet-dev:8765 (default)
  4. Click the Connect button

Note: evospikenet-dev is the container name. If you are connecting from the host, make sure the dev container is running and the port is exposed. If you have not started dev, start it with the profile specified above, or use ws://localhost:8765 (if port mapping to the host is enabled).

3. Testing on the command line

```python
import asyncio
import json
import websockets

async def test():
    async with websockets.connect('ws://localhost:8765') as ws:
        await ws.send(json.dumps({'type': 'start'}))

        # Receive 10 samples
        for i in range(10):
            msg = await ws.recv()
            data = json.loads(msg)
            print(f"Sample {i+1}: {data['data']}")

asyncio.run(test())
```

Performance

System requirements

  • CPU: 2 cores or more recommended
  • Memory: 2GB or more
  • Network: Low latency (<10ms)

Throughput

  • Simulation: 1000 timesteps/second
  • EEG generation: 100 samples/sec
  • WebSocket delivery: 100 Hz
  • Latency: <20ms (simulation → delivery)

Scalability

  • Number of neurons: up to 10,000 neurons/region
  • Number of brain regions: up to 8 regions
  • Simultaneous connections: up to 10 clients

Biological plausibility

1. Neuron model

The LIF model captures fundamental behaviors of real cortical neurons:
  • Leaky integration
  • Threshold firing
  • Refractory period

2. EEG generation mechanism

Actual EEG signals arise from the synchronized activity of thousands to millions of neurons:
  1. Postsynaptic potentials (PSPs) at dendrites
  2. Summation of currents toward the cortical surface
  3. Attenuation by the skull and scalp

This simulation implements:
  • Alpha-function PSP modeling
  • Linear summation of the activity of multiple neurons
  • Addition of background noise and oscillations

3. Frequency band

The characteristic frequencies of each brain region are based on neuroscientific findings:

| Frequency Band | Range | Associated Brain Area | Function |
| --- | --- | --- | --- |
| δ (Delta) | 0.5–4 Hz | Deep structures | Deep sleep |
| θ (Theta) | 4–8 Hz | Hippocampus | Memory formation |
| α (Alpha) | 8–13 Hz | Occipital lobe | Resting state |
| β (Beta) | 13–30 Hz | Motor cortex | Arousal, concentration |
| γ (Gamma) | 30–100 Hz | Widespread | Cognitive processing |
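The band boundaries above can be used directly to estimate per-band power from a simulated trace. This is a sketch using a plain FFT; production pipelines would more likely use Welch's method, and the function names here are illustrative.

```python
import numpy as np

# Estimating band power from an EEG trace with a plain FFT, using the
# band boundaries from the table above. A sketch (rectangular window,
# no averaging); real pipelines would typically use Welch's method.
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 50)}  # gamma capped at the 50 Hz Nyquist limit of fs=100

def band_powers(signal: np.ndarray, fs: float = 100.0) -> dict:
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    return {name: float(psd[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz tone should land entirely in the alpha band.
fs = 100.0
t = np.arange(1000) / fs
alpha_wave = np.sin(2 * np.pi * 10.0 * t)
powers = band_powers(alpha_wave, fs)
```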

Extensibility

Adding custom brain regions

```python
regions = [
    BrainRegionConfig(
        name="Hippocampus",
        num_neurons=150,
        base_frequency=7.0,  # Theta rhythm
        firing_rate=20.0,
        connectivity=0.20,
        region_type="excitatory",
    ),
    # ... other regions
]
```

Advanced neuron model

```python
# Switching to the Izhikevich model
from evospikenet.core import IzhikevichNeuronLayer

model = IzhikevichNeuronLayer(
    num_neurons=100,
    a=0.02,
    b=0.2,
    c=-65,
    d=8
)
```

Enabling distributed processing

```python
from evospikenet.eeg_integration.distributed_brain_executor import DistributedBrainExecutor

executor = DistributedBrainExecutor(
    node_id="eeg-coordinator",
    target_nodes=["pfc-0", "motor-0", "visual-0"]
)
await executor.start_eeg_processing(eeg_stream)
```

Biomimetic Overlay (BiomimeticAdapter) ⭐ NEW 2026-02-25

DistributedBrainExecutor integrates BiomimeticAdapter into the EEG processing pipeline. When enabled with enable_biomimetic=True (the default), it applies the following biological adjustments to each command:

Main methods

| Method | Input | Output | Description |
| --- | --- | --- | --- |
| rhythm_metrics(eeg_data) | EEG ndarray | delta_power, alpha_power | Extracts δ/α band power |
| modulatory_gain(conf, meta, rhythms) | Confidence / metadata / rhythms | Gain in [0.6, 1.6] | Dopamine/noradrenaline-like plasticity multiplier |
| homeostasis_scale(metadata) | Energy / cognitive load / developmental stage | Scale in [0.5, 1.5] | Homeostatic constraint scale factor |
| dev_gain() | config.development_stage | Gain in [0.5, 1.5] | Growth gain by developmental stage |
| sleep_state(metadata, now_ns) | sleep_pressure, attention | Sleep buffer size / pressure | Buffers commands under high sleep pressure |

Setting example

```python
from evospikenet.eeg_integration.distributed_brain_executor import (
    DistributedBrainConfig, DistributedBrainExecutor
)

config = DistributedBrainConfig(
    enable_biomimetic=True,      # Enable the biomimetic overlay
    low_latency_mode=False,      # Set to True to skip biomimetic processing and reduce latency
    development_stage=0.8,       # 0.0 (initial) to 1.0 (mature)
    energy_budget=1.0,           # 0.0-1.0: fraction of available energy
    sleep_buffer_seconds=3.0,    # Sleep buffer retention time (seconds)
)

executor = DistributedBrainExecutor(config=config)
```

executor = DistributedBrainExecutor(config=config)

Check biomimicry metadata

The command's metadata["biomimetic"] contains:

```json
{
  "delta_power": 0.12,
  "alpha_power": 0.34,
  "modulatory_gain": 1.18,
  "homeostasis_scale": 0.95,
  "dev_gain": 1.04,
  "sleep": { "size": 0, "pressure": 0.1 }
}
```

Low latency mode

low_latency_mode=True is recommended for applications such as real-time BCI; it skips the biomimetic calculations and minimizes EEG-to-command conversion latency.

```python
config = DistributedBrainConfig(low_latency_mode=True)
executor = DistributedBrainExecutor(config=config)
```

Troubleshooting

Connection error

```text
ConnectionClosedError: received 1011 (internal error)
```

Solutions:
  1. Check the server log: docker-compose exec dev tail -f /tmp/distributed_brain_eeg.log
  2. Check whether the server is running: docker-compose exec dev ps -ef | grep distributed_brain
  3. Check that port 8765 is not already in use

Low frame rate

If the frame rate is low in EEG Visualizer:

  1. Reduce the server sampling rate:

     ```python
     server = DistributedBrainEEGServer(sampling_rate=50.0)  # 100 → 50 Hz
     ```

  2. Reduce the number of neurons:

     ```python
     BrainRegionConfig(num_neurons=50)  # 100 → 50
     ```

Out of memory

```text
RuntimeError: CUDA out of memory
```

Solutions:
  1. Run in CPU mode
  2. Reduce the batch size
  3. Reduce the number of neurons

Biomimetics integration (BrainSimulationFramework)

BrainSimulationFramework is an integration layer between all biomimetic/ modules and DistributedBrainExecutor.

Quick Start

```python
from evospikenet.brain_simulation import BrainSimulationFramework

# Launch the distributed brain simulation in biomimetic mode
framework = BrainSimulationFramework(enable_biomimetic=True)
result = framework.run_simulation(duration=1000)
# → Six phases run in sequence: development, control, STDP, energy,
#   hippocampus, and sleep.

# DMN idle cycle (default mode network)
import asyncio
activities = asyncio.run(framework.run_idle_phase(duration_s=10.0))

# Get a snapshot of the status of all modules
status = framework.biomimetic_status()
print(status)
```

Izhikevich neuron circuit (B-2)

The NeuralCircuitModeler used by DistributedBrainEEGServer can be switched to the Izhikevich model backend by specifying neuron_type="izhikevich":

```python
from evospikenet.brain_simulation import NeuralCircuitModeler, NeuralCircuitConfig

cfg = NeuralCircuitConfig(num_neurons=100, num_inputs=10, connectivity=0.2)
circuit = NeuralCircuitModeler(cfg, neuron_type="izhikevich")
spikes, membrane_v = circuit.simulate_timestep(input_current=0.5, t=0)
```

| Model | Features | Typical Use |
| --- | --- | --- |
| `lif` (default) | Lightweight and fast | Large-scale brain-region simulation |
| `izhikevich` | Diverse firing patterns (RS/IB/CH/FS/LTS) | Realistic layered cortical circuits |

Cortical topology registration (B-3)

```python
from evospikenet.biomimetic import CorticalTopologyGenerator
from evospikenet.brain_simulation import BrainRegionIntegrator

gen = CorticalTopologyGenerator()
integrator = BrainRegionIntegrator()
added = integrator.add_cortical_topology(gen, nx_cols=4, ny_cols=4)
# Registers 16 columns as BrainRegionConfig and adds small-world
# connections between columns within √2 mm of each other
```

STDP ↔ NeuromodulatorGate (A-3)

```python
from evospikenet.plasticity import STDP
from evospikenet.biomimetic import NeuromodulatorGate

gate = NeuromodulatorGate()
stdp = STDP.with_neuromodulation(gate)
# Modulates the learning rate in real time based on dopamine/ACh levels
```

Key fields of biomimetic_status()

```json
{
  "stdp_connected_gate": true,
  "sleep_consolidation_replay": true,
  "izhikevich_circuits": 1,
  "cortical_columns_registered": 16,
  "neuromodulator_registry_linked": true,
  "efference_copy_adaptive": true,
  "mirror_neuron_default_classifier": true,
  "dmn_idle_phase_available": true
}
```

Evaluation score: docs-dev/biomimetic_integration_evaluation.md v2.0 — 8.7/10 (Phase A/B all 11 items completed)


Genome-driven inference pipeline (Phase D — 2026-03-11)

With Phase D integration, you can now deploy genomes generated by the evolution engine directly to DistributedBrainNode. This incorporates the results of genome evolution into the EEG → Distributed Brain inference loop.

Flow overview

```text
DistributedEvolutionEngine.run_evolution()
    └─→ best_genome (EvoGenome)
           │
           ▼  deploy_to_nodes([pfc, motor, memory])
    DistributedBrainNode.deploy_genome(genome)
           │
           ▼  GenomeToBrainConverter().instantiate(genome)
    InstantiatedBrain (nn.Module)
           │
           ▼  inside _process_brain_command()
    genome-driven forward pass  →  confidence correction
```

Code example

```python
import asyncio
from evospikenet.distributed_evolution_engine import DistributedEvolutionEngine
from evospikenet.distributed_brain_node import DistributedBrainNode

# Run evolution
engine = DistributedEvolutionEngine(config={"population_size": 50})
best = asyncio.run(engine.run_evolution(generations=50))

# Deploy to distributed brain nodes
pfc_node   = DistributedBrainNode("pfc",   config={"neuron_count": 1000})
motor_node = DistributedBrainNode("motor", config={"neuron_count": 512})
engine.deploy_to_nodes([pfc_node, motor_node])

# Confirm deployment
for node in [pfc_node, motor_node]:
    print(node.get_stats()["genome_deployed"])  # True
```

Instant reflection of STDP plasticity weights

You can apply STDP deltas from the EEG stream to the InstantiatedBrain weights in real time:

```python
delta = brain.apply_plasticity_update("pfc", spike_history, synapse_matrix)
if delta is not None:
    brain.apply_weight_delta("pfc", delta, learning_rate=1e-4)
```

BrainSimulation alias

The BrainSimulation class used internally by DistributedBrainNode is a wrapper alias for BrainSimulationFramework (defined at the end of brain_simulation.py). It provides the same biomimetic functionality as initializing BrainSimulationFramework with enable_biomimetic=True.


References

Papers

  • Dayan & Abbott (2001). Theoretical Neuroscience
  • Buzsáki (2006). Rhythms of the Brain
  • Izhikevich (2007). Dynamical Systems in Neuroscience

EvoSpikeNet Documentation

  • Brain Simulation Brief
  • Biomimetic Integration Plan
  • biomimetic_integration_evaluation.md
  • README.md
  • Distributed Brain Architecture

License

Copyright 2026 Moonlight Technologies Inc. All Rights Reserved.