# Simulation Data Recording/Analysis Guide
> [!NOTE]
> For the latest implementation status, refer to Functional Implementation Status (Remaining Functionality).
- Creation date: 2025-12-06
- Copyright: 2025 Moonlight Technologies Inc. All Rights Reserved.
- Author: Masahiro Aoki
- Target: EvoSpikeNet Zenoh Distributed Brain Simulation
## Purpose and Use of This Document
- Purpose: Explain how to set up the recording and analysis functions for distributed brain simulations and get them ready for use.
- Target audience: Engineers responsible for experiments, verification, and log analysis.
- Read first: Overview → Quick Start → Recording Options → Analysis Procedures.
- Related links: execution script `examples/run_zenoh_distributed_brain.py`; recording/analysis modules `evospikenet/sim_recorder.py` and `evospikenet/sim_analyzer.py`.
## Overview
The system can record and analyze the following data while a distributed brain simulation is running:
- Spike data: Spike trains from each neuron layer
- Membrane potential data: Neuron membrane potential (optional)
- Weight data: Network weight matrix snapshot (optional)
- Control data: Node state transition, task execution status
## Quick Start
### Run a simulation with recording enabled
```bash
# Basic recording (spikes + control data)
python examples/run_zenoh_distributed_brain.py \
--node-id pfc-0 \
--module-type pfc \
--enable-recording

# Record all data (including membrane potential + weights)
python examples/run_zenoh_distributed_brain.py \
--node-id visual-0 \
--module-type visual \
--enable-recording \
--record-membrane \
--record-weights \
--session-name my_experiment_001
```
### Analyze the recorded data
```bash
# Automatic analysis (report + graph generation)
python evospikenet/sim_analyzer.py ./sim_recordings/sim_20251206_001234

# Skip plot generation
python evospikenet/sim_analyzer.py ./sim_recordings/sim_20251206_001234 --no-plots
```
## Detailed Guide
### 1. Recording options
#### Command-line arguments
| Argument | Description | Default |
|---|---|---|
| `--enable-recording` | Enable recording | `False` |
| `--record-spikes` | Record spike data | `True` |
| `--record-membrane` | Record membrane potential | `False` |
| `--record-weights` | Record weight matrices | `False` |
| `--record-control` | Record control state | `True` |
| `--recording-dir` | Output directory for recordings | `./sim_recordings` |
| `--session-name` | Session name | Auto-generated (timestamp) |
#### Usage from the Python API
```python
from evospikenet.sim_recorder import SimulationRecorder, RecorderConfig, set_global_recorder

# Create recording settings
config = RecorderConfig(
    enable_recording=True,
    record_spikes=True,
    record_membrane=True,
    record_weights=False,
    output_dir="./my_recordings",
    session_name="experiment_xor_task",
    spike_subsample_rate=1.0,        # Record all spikes
    membrane_subsample_rate=0.1,     # Sample membrane potential at 10%
    max_recording_duration=300.0     # Record for up to 5 minutes
)

# Initialize the recorder
recorder = SimulationRecorder(config)

# Set as the global recorder (available to all nodes)
set_global_recorder(recorder)

# ... run the simulation ...

# End recording
recorder.close()
```
### 2. Recorded data structure
#### Directory structure
```text
sim_recordings/
└── sim_20251206_001234/            # Session directory
    ├── simulation_data.h5          # HDF5 data file (spikes, membrane potentials, weights)
    ├── control_states.jsonl        # Control states (JSONL format)
    ├── recording_statistics.json   # Recording statistics
    └── plots/                      # Automatically generated plots (after analysis)
        ├── pfc-0_lif_raster.png
        ├── pfc-0_lif_timeline.png
        └── ...
```
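Given this layout, recorded sessions can be listed directly from Python. The sketch below uses only the standard library and assumes the default `./sim_recordings` output directory:

```python
from pathlib import Path

recordings_dir = Path("./sim_recordings")  # default --recording-dir; adjust if you changed it

# Each subdirectory is one recording session
for session in sorted(p for p in recordings_dir.iterdir() if p.is_dir()):
    h5_file = session / "simulation_data.h5"
    size_mb = h5_file.stat().st_size / 1e6 if h5_file.exists() else 0.0
    print(f"{session.name}: {size_mb:.1f} MB of HDF5 data")
```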
#### HDF5 data structure
```text
simulation_data.h5
├── /spikes                              # Spike data
│   ├── /pfc-0
│   │   ├── /input
│   │   │   └── /t_1733420400000000000   # Timestamped dataset
│   │   └── /output
│   ├── /visual-0
│   └── ...
├── /membrane                            # Membrane potential data
│   ├── /pfc-0
│   │   └── /lif_layer
│   └── ...
├── /weights                             # Weight snapshots
│   └── /lang-main
│       └── /transformer_layer_0
└── /metadata                            # Metadata (configuration information, etc.)
```
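The analyzer wraps this file for you, but if you want to inspect it directly, the hierarchy above can be browsed with `h5py`. This is an illustrative sketch that assumes the session path from the earlier examples:

```python
import h5py

# Walk the HDF5 hierarchy and print every group/dataset path
with h5py.File("./sim_recordings/sim_20251206_001234/simulation_data.h5", "r") as f:
    def show(name, obj):
        kind = "dataset" if isinstance(obj, h5py.Dataset) else "group"
        shape = getattr(obj, "shape", "")
        print(f"{kind:7s} /{name} {shape}")
    f.visititems(show)
```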
### 3. Data analysis
#### Basic analysis
```python
from evospikenet.sim_analyzer import SimulationAnalyzer

analyzer = SimulationAnalyzer("./sim_recordings/sim_20251206_001234")

# Show recorded nodes
nodes = analyzer.get_recorded_nodes()
print(f"Recorded nodes: {nodes}")

# Show the layers recorded for each node
for node_id in nodes:
    layers = analyzer.get_recorded_layers(node_id)
    print(f"{node_id}: {layers}")

# Get spike data
timestamps, spike_arrays = analyzer.get_spike_data("pfc-0", "output")
print(f"Recorded {len(spike_arrays)} timesteps")

# Calculate firing rates
stats = analyzer.compute_firing_rate("pfc-0", "output")
print(f"Mean firing rate: {stats['mean_rate_hz']:.2f} Hz")
print(f"Total spikes: {stats['total_spikes']:,}")

# Finish the analysis
analyzer.close()
```
#### Visualization
```python
from evospikenet.sim_analyzer import SimulationAnalyzer

with SimulationAnalyzer("./sim_recordings/sim_20251206_001234") as analyzer:
    # Spike raster plot
    analyzer.plot_spike_raster(
        node_id="pfc-0",
        layer_name="output",
        max_neurons=100,              # Maximum number of neurons to display
        save_path="./pfc_raster.png"
    )

    # Firing rate time-series plot
    analyzer.plot_firing_rate_timeline(
        node_id="pfc-0",
        layer_name="output",
        bin_size_ms=50.0,             # Aggregate in 50 ms bins
        save_path="./pfc_timeline.png"
    )

    # Generate a summary report
    report = analyzer.generate_summary_report("./analysis_report.txt")
    print(report)
```
#### Analyzing node behavior
```python
from evospikenet.sim_analyzer import SimulationAnalyzer

analyzer = SimulationAnalyzer("./sim_recordings/sim_20251206_001234")

# Load control state records
control_states = analyzer.load_control_states()
print(f"Total control records: {len(control_states)}")

# Node operation statistics
behavior = analyzer.analyze_node_behavior("pfc-0")
print(f"Task active ratio: {behavior['task_active_ratio']:.2%}")
print(f"Unique statuses: {behavior['unique_statuses']}")
print(f"Step range: {behavior['step_range']}")

analyzer.close()
```
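Because `control_states.jsonl` is plain JSON Lines (see the directory structure above), the same records can also be inspected without the analyzer. A minimal sketch, assuming one JSON object per line:

```python
import json
from pathlib import Path

# Read control state records straight from the JSONL file
jsonl_path = Path("./sim_recordings/sim_20251206_001234/control_states.jsonl")
records = [json.loads(line) for line in jsonl_path.read_text().splitlines() if line.strip()]

print(f"Loaded {len(records)} control records")
```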
### 4. Advanced usage examples
#### Recording custom metadata
```python
# Custom recording within ZenohBrainNode
def _process_pfc_timestep(self):
    # Existing processing...

    # Record PFC-specific metadata
    if self.recorder and self.pfc_engine:
        # Record PFC entropy
        entropy = self.pfc_engine.calculate_entropy()
        self.recorder.record_control_state(
            node_id=self.node_id,
            module_type=self.module_type,
            status="Processing",
            active_task=self.active_task,
            step_count=self.step_count,
            metadata={
                "pfc_entropy": float(entropy),
                "working_memory_size": len(self.working_memory),
                "quantum_modulation": self.pfc_engine.alpha_t
            }
        )
```
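Custom fields recorded this way travel with the control state records, so they can be pulled back out during analysis. A sketch, assuming `control_states` is the list returned by `load_control_states()` in the earlier example and that each record is a dict carrying the `metadata` payload shown above:

```python
# Collect the custom pfc_entropy values recorded for pfc-0 (assumes dict-shaped records)
entropies = [
    rec["metadata"]["pfc_entropy"]
    for rec in control_states
    if rec.get("node_id") == "pfc-0" and "pfc_entropy" in rec.get("metadata", {})
]
if entropies:
    print(f"Mean PFC entropy over {len(entropies)} steps: {sum(entropies) / len(entropies):.3f}")
```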
#### Reduce storage with subsampling
```python
# Settings for long-running simulations
config = RecorderConfig(
    enable_recording=True,
    record_spikes=True,
    record_membrane=False,           # Do not record membrane potential
    spike_subsample_rate=0.1,        # Record only 10% of spikes
    max_recording_duration=3600.0,   # Up to 1 hour
    buffer_size=2000,                # Larger buffer to reduce the number of writes
    compression="gzip",              # GZIP compression
    compression_level=6              # Compression level (1-9; higher is smaller but slower)
)
```
#### Manual flushing of batched records
```python
recorder = SimulationRecorder(config)

for step in range(10000):
    # Run one simulation step
    process_timestep()

    # Flush manually every 100 steps
    if step % 100 == 0:
        recorder.flush_all()
        stats = recorder.get_statistics()
        print(f"Step {step}: {stats['total_spikes_recorded']:,} spikes recorded")

recorder.close()
```
### 5. Performance considerations
#### Memory usage
| Recording settings | Estimated memory usage (1 node, 1000 steps) |
|---|---|
| Spike only | ~10-50 MB |
| Spike + Membrane Potential | ~50-200 MB |
| All data (including weights) | ~500 MB - 2 GB |
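To see where an actual run falls in this table, peak process memory can be read with the standard library. This is a Unix-only sketch; on Linux `ru_maxrss` is reported in kilobytes:

```python
import resource

# Peak resident set size of the current process (KB on Linux, bytes on macOS)
peak_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
print(f"Peak memory: {peak_kb / 1024:.1f} MB")
```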
#### Storage requirements
```python
# Estimated storage size (without compression)
neurons = 1000
timesteps = 10000
nodes = 4

# Spikes: binary (1 byte/neuron/timestep)
spike_size = neurons * timesteps * nodes * 1      # ~40 MB

# Membrane potential: float32 (4 bytes/neuron/timestep)
membrane_size = neurons * timesteps * nodes * 4   # ~160 MB

# Weights: float32, e.g. a 1000x1000 matrix
weight_size = neurons * neurons * 4               # ~4 MB per snapshot

# Total (compression typically reduces this by 50-70%)
total_uncompressed = spike_size + membrane_size + weight_size
total_compressed = total_uncompressed * 0.3       # With GZIP compression
```
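The same estimate can be wrapped in a small helper that also accounts for spike subsampling. This is a rough sketch based only on the byte counts above, with the 0.3 factor taken from the compression comment:

```python
def estimate_storage_mb(neurons: int, timesteps: int, nodes: int,
                        spike_subsample_rate: float = 1.0,
                        record_membrane: bool = True,
                        compression_factor: float = 0.3) -> float:
    """Rough compressed storage estimate in MB, following the byte counts above."""
    spike_bytes = neurons * timesteps * nodes * 1 * spike_subsample_rate
    membrane_bytes = neurons * timesteps * nodes * 4 if record_membrane else 0
    weight_bytes = neurons * neurons * 4  # one weight snapshot
    total = (spike_bytes + membrane_bytes + weight_bytes) * compression_factor
    return total / 1e6

print(f"~{estimate_storage_mb(1000, 10000, 4, spike_subsample_rate=0.1):.0f} MB")
```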
#### Optimization tips
- Subsampling: use subsampling for long-running simulations, e.g. `spike_subsample_rate=0.1` (only 10% of spikes are recorded).
- Buffer size: increase the buffer size if there is enough memory, e.g. `buffer_size=5000` (fewer disk I/O operations).
- Compression: raise the compression level if storage is the priority, e.g. `compression="gzip"` with `compression_level=9` (highest compression, but slower).
- Selective recording: record only the data you need, e.g. `record_membrane=False` (membrane potential is usually not required) and `record_weights=False` (weights are only needed for periodic snapshots). A combined configuration is sketched below.
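Putting these tips together, a storage-conscious configuration might look like the following sketch; the parameters are the `RecorderConfig` fields used throughout this guide, and the values should be tuned to your run:

```python
# Storage-conscious recorder configuration combining the tips above
config = RecorderConfig(
    enable_recording=True,
    record_spikes=True,
    record_membrane=False,       # Usually not required
    record_weights=False,        # Only needed for periodic snapshots
    spike_subsample_rate=0.1,    # Record 10% of spikes
    buffer_size=5000,            # Fewer disk writes
    compression="gzip",
    compression_level=9          # Highest compression (slower)
)
```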
### 6. Troubleshooting
#### Problem: HDF5 file is corrupted
**Cause**: The buffer was not flushed when the simulation was interrupted.
**Solution**:
```python
# Use a context manager (automatically closes the recorder)
with SimulationRecorder(config) as recorder:
    # Run the simulation
    pass
# Automatically closed here

# Or use an explicit try-finally
recorder = SimulationRecorder(config)
try:
    # Simulation
    pass
finally:
    recorder.close()
```
#### Problem: Insufficient disk space
**Solution**:
```python
# Set a maximum recording duration
config = RecorderConfig(
    max_recording_duration=600.0,   # Automatically stops after 10 minutes
    ...
)

# Or check your storage regularly
import shutil

disk_usage = shutil.disk_usage(config.output_dir)
if disk_usage.free < 1e9:   # Less than 1 GB free
    recorder.close()
    logger.warning("Disk space low, stopped recording")
```
#### Problem: Recording affects performance
**Solution**:
```python
# More aggressive subsampling
config = RecorderConfig(
    spike_subsample_rate=0.05,   # Record only 5%
    auto_flush=False,            # Disable automatic flushing
    ...
)

# Flush manually at regular intervals
if step % 1000 == 0:
    recorder.flush_all()
```
## Use case examples
### Use case 1: Detailed recording for debugging
```bash
# Detailed recording in a short time (all data)
python examples/run_zenoh_distributed_brain.py \
--node-id pfc-0 \
--module-type pfc \
--enable-recording \
--record-spikes \
--record-membrane \
--record-weights \
--record-control \
--session-name debug_session
```

### Use case 2: Efficient recording of long-term experiments

```bash
# Long-term simulation (subsampling)
python examples/run_zenoh_distributed_brain.py \
--node-id visual-0 \
--module-type visual \
--enable-recording \
--record-spikes \
--session-name long_run_experiment
```

Corresponding Python configuration:

```python
config = RecorderConfig(
    enable_recording=True,
    spike_subsample_rate=0.1,
    max_recording_duration=7200.0,   # 2 hours
    compression_level=6
)
```
### Use case 3: Analysis of cooperative behavior of multiple nodes
```bash
# Terminal 1: PFC node (recording enabled)
python examples/run_zenoh_distributed_brain.py \
--node-id pfc-0 \
--module-type pfc \
--enable-recording \
--session-name multi_node_test
# Terminal 2: Visual node (same session)
python examples/run_zenoh_distributed_brain.py \
--node-id visual-0 \
--module-type visual \
--enable-recording \
--session-name multi_node_test
# Terminal 3: Lang-Main node (same session)
python examples/run_zenoh_distributed_brain.py \
--node-id lang-main-0 \
--module-type lang-main \
--enable-recording \
--session-name multi_node_test
```

Analysis:

```python
from evospikenet.sim_analyzer import SimulationAnalyzer

analyzer = SimulationAnalyzer("./sim_recordings/multi_node_test")

# Compare the behavior of all nodes
for node_id in analyzer.get_recorded_nodes():
    behavior = analyzer.analyze_node_behavior(node_id)
    print(f"\n{node_id}:")
    print(f"  Task active: {behavior['task_active_ratio']:.2%}")

    for layer in analyzer.get_recorded_layers(node_id):
        stats = analyzer.compute_firing_rate(node_id, layer)
        print(f"  {layer}: {stats['mean_rate_hz']:.2f} Hz")

analyzer.close()
```
## Summary
- ✅ Optional feature: easily enabled/disabled with `--enable-recording`
- ✅ Flexible recording: spikes, membrane potentials, weights, and control states are controlled separately
- ✅ Efficient: performance optimization via subsampling, compression, and buffering
- ✅ Analysis tools: automatic report generation, visualization, and statistical calculations
- ✅ Scalable: supports long-running and large-scale simulations
## Reference Materials
- `evospikenet/sim_recorder.py`: recorder implementation
- `evospikenet/sim_analyzer.py`: analysis tool implementation
- `examples/run_zenoh_distributed_brain.py`: integration example