# Whole Brain Simulation Query Response Analysis

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

Implementation notes (artifacts): see `docs/implementation/ARTIFACT_MANIFESTS.md` for the `artifact_manifest.json` output by the training script and the recommended CLI flags.

## Overview

This document examines the response content and data flow when a query is executed in a whole-brain simulation.

Creation date: December 13, 2025

## Purpose and use of this document

  • Objective: understand the data flow and artifacts of query responses, and identify bottlenecks and points of failure.
  • Target audience: API/backend implementers, QA, and operations personnel.
  • Read first: Overview → Data Flow → Log/Output Format → Failure/Delay Analysis.
  • Related links: `Examples/run_zenoh_distributed_brain.py` (distributed brain execution script), `docs/implementation/PFC_ZENOH_EXECUTIVE.md` (PFC/Zenoh/Executive details).

## 1. Response data flow

### 1.1 Query sending flow

```
User input (UI)
    ↓
Frontend (distributed_brain.py)
    ↓
API endpoint (/api/distributed_brain/prompt)
    ↓
Prompt file write (/tmp/evospikenet_prompt_*.json)
    ↓
Zenoh publish (evospikenet/api/prompt)
    ↓
Each node (run_zenoh_distributed_brain.py)
```

### 1.2 Response return flow

```
Each node (inference processing)
    ↓
Response generation
    ↓
Zenoh publish (evospikenet/api/result)
    ↓
API server (Zenoh subscriber)
    ↓
Result file write (/tmp/evospikenet_query_result_*.json)
    ↓
Frontend (polling)
    ↓
UI display (query-response-area)
```

## 2. Response structure

### 2.1 Basic structure

```json
{
    "response": "Generated text response",
    "prompt_id": "Prompt ID in UUID format",
    "timestamp": 1234567890.123
}
```

### 2.2 Field details

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `response` | string | Yes | Actual generated text or simulation results |
| `prompt_id` | string | Yes | UUID that uniquely identifies the query |
| `timestamp` | float | Yes | Response generation time (UNIX timestamp) |
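
For reference, the same schema expressed as a Python type — a documentation sketch only; the project does not necessarily define this class:

```python
# Sketch only: the documented response schema as a Python TypedDict.
from typing import TypedDict

class QueryResponse(TypedDict):
    response: str     # Generated text or simulation result
    prompt_id: str    # UUID that uniquely identifies the query
    timestamp: float  # Response generation time (UNIX timestamp)
```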

## 3. Response type and content

### 3.1 Generated by SpikingLM (lang-main node)

**Condition:**
- Node type: lang-main
- Model: SpikingEvoTextLM
- Tokenizer: available

**Process:**
```python
# Tokenize the prompt
inputs = tokenizer(prompt, return_tensors="pt")
input_ids = inputs["input_ids"]

# Run inference
generated_ids = model.generate(input_ids, max_new_tokens=20)

# Decode the generated tokens
response_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
```

**Response format:**
```
[HH:MM:SS] SpikingLM Generated: 'generated text'
```

**Example:**
```json
{
    "response": "[14:30:45] SpikingLM Generated: 'The capital of Japan is Tokyo.'",
    "prompt_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
}
```

### 3.2 Simulated responses (other nodes)

**Condition:**
- If SpikingLM is unavailable
- Other node types (visual, audio, motor, etc.)

**Response format:**
```
[HH:MM:SS] Zenoh Node Processed: 'prompt' (Simulated Response)
```

**Example:**
```json
{
    "response": "[14:30:47] Zenoh Node Processed: 'What is AI?' (Simulated Response)",
    "prompt_id": "b2c3d4e5-f6a7-8901-bcde-f23456789012"
}
```

### 3.3 Visual Node Response

**Process:**
```python
# Visual processing simulation
time.sleep(1)
response_text = f"Visual processing completed for: '{prompt}'"
```

**Example:**
```json
{
    "response": "Visual processing completed for: 'Analyze this image'",
    "prompt_id": "c3d4e5f6-a7b8-9012-cdef-345678901234"
}
```

### 3.4 Audio node response

**Process:**
```python
# Audio processing simulation
time.sleep(1)
response_text = f"Audio processing completed for: '{prompt}'"
```

**Example:**
```json
{
    "response": "Audio processing completed for: 'Transcribe this audio'",
    "prompt_id": "d4e5f6a7-b8c9-0123-def0-456789012345"
}
```

### 3.5 Error response

**Condition:**
- Error occurred during inference

**Response format:**
```
Error during inference: error message
```

**Example:**
```json
{
    "response": "Error during inference: CUDA out of memory",
    "prompt_id": "e5f6a7b8-c9d0-1234-ef01-567890123456"
}
```
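
On the node side, this format suggests the inference call is wrapped in a try/except. A minimal sketch of that pattern, with variable names following section 3.1 (the actual code is in `run_zenoh_distributed_brain.py` and may differ):

```python
# Sketch: surface inference failures in the documented
# "Error during inference: ..." format. Illustrative only.
try:
    generated_ids = model.generate(input_ids, max_new_tokens=20)
    response_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)
except Exception as e:  # e.g. a CUDA out-of-memory error
    response_text = f"Error during inference: {e}"
```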

---

## 4. API endpoint

### 4.1 Send Prompt

**Endpoint:** `POST /api/distributed_brain/prompt`

**Request body:**
```json
{
    "prompt": "Text prompt",
    "priority": 1,
    "session_id": "Session ID",
    "image": "Base64-encoded image data (optional)",
    "audio": "Base64-encoded audio data (optional)"
}
```

**Response:**
```json
{
    "message": "Prompt received and written to file.",
    "file": "/tmp/evospikenet_prompt_1234567890_uuid.json",
    "prompt_id": "uuid"
}
```

### 4.2 Get results

**Endpoint:** `GET /api/distributed_brain/result`

**Query parameters:**
- `prompt_id` (optional): get results for a specific prompt

**Response (with results):**
```json
{
    "response": "Generated text",
    "timestamp": 1234567890.123,
    "prompt_id": "uuid"
}
```

**Response (no results):**
```json
{
    "response": null,
    "timestamp": null,
    "prompt_id": "uuid"
}
```
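
Because `"response": null` simply means the query is still in flight, clients should poll until the field becomes non-null. A minimal sketch with `requests` (the `wait_for_result` helper is hypothetical; the SDK's `poll_for_result` in section 8 plays the same role):

```python
import time
import requests

def wait_for_result(api_base_url: str, prompt_id: str,
                    timeout: float = 120, interval: float = 5):
    """Poll GET /api/distributed_brain/result until 'response' is non-null."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        r = requests.get(f"{api_base_url}/api/distributed_brain/result",
                         params={"prompt_id": prompt_id})
        if r.status_code == 200:
            data = r.json()
            if data.get("response") is not None:
                return data
        time.sleep(interval)
    return None  # Timed out; see section 10.3, Error 3
```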

---

## 5. File system operations

### 5.1 Prompt File

**Location:** `/tmp/evospikenet_prompt_{timestamp}_{prompt_id}.json`

**Contents:**
```json
{
    "prompt": "Text prompt",
    "priority": 1,
    "session_id": "session_id",
    "timestamp": 1234567890.123,
    "prompt_id": "uuid",
    "image_path": "/tmp/{prompt_id}_image.png",
    "audio_path": "/tmp/{prompt_id}_audio.wav"
}
```

**TTL (Time To Live):**
- Default: 3600 seconds (1 hour)
- Automatic cleanup: runs in a background thread (see the sketch below)
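
A minimal sketch of what such a cleanup thread could look like. The actual implementation lives in `api.py`; only the file patterns and the 3600-second default are taken from this document:

```python
# Sketch: TTL-based cleanup of prompt/result files in a background thread.
import glob
import os
import threading
import time

TTL_SECONDS = 3600  # Documented default (1 hour)

def cleanup_loop(interval: float = 60.0) -> None:
    while True:
        now = time.time()
        for pattern in ("/tmp/evospikenet_prompt_*.json",
                        "/tmp/evospikenet_query_result_*.json"):
            for path in glob.glob(pattern):
                try:
                    if now - os.path.getmtime(path) > TTL_SECONDS:
                        os.remove(path)
                except OSError:
                    pass  # File may have been removed concurrently
        time.sleep(interval)

threading.Thread(target=cleanup_loop, daemon=True).start()
```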

### 5.2 Result file

**Location:** `/tmp/evospikenet_query_result_{prompt_id}.json`

**Contents:**
```json
{
    "response": "Generated text",
    "prompt_id": "uuid",
    "timestamp": 1234567890.123
}
```

**Features:**
- Automatically deleted after being read (see the sketch below)
- TTL: 3600 seconds
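
A sketch of this consume-on-read behavior (the `load_result` helper is hypothetical; the real logic is in `api.py`):

```python
# Sketch: read a result file once, deleting it immediately afterwards.
import json
import os

def load_result(prompt_id: str):
    path = f"/tmp/evospikenet_query_result_{prompt_id}.json"
    if not os.path.exists(path):
        return None  # Result not ready yet
    with open(path) as f:
        result = json.load(f)
    os.remove(path)  # Auto delete after loading
    return result
```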

### 5.3 Media Files

**Image:**
- Location: `/tmp/{prompt_id}_image.png`
- Format: Binary after Base64 decoding

**Audio:**
- Location: `/tmp/{prompt_id}_audio.wav`
- Format: Binary after Base64 decoding
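
How the Base64 request fields map onto these paths, as a sketch (the `save_media` helper is hypothetical):

```python
# Sketch: decode Base64 media fields into the documented /tmp paths.
import base64

def save_media(payload: dict, prompt_id: str) -> None:
    if payload.get("image"):
        with open(f"/tmp/{prompt_id}_image.png", "wb") as f:
            f.write(base64.b64decode(payload["image"]))
    if payload.get("audio"):
        with open(f"/tmp/{prompt_id}_audio.wav", "wb") as f:
            f.write(base64.b64decode(payload["audio"]))
```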

---

## 6. Zenoh Communication

### 6.1 Prompt Delivery

**Topic:** `evospikenet/api/prompt`

**Payload:**
```json
{
    "prompt": "Text prompt",
    "priority": 1,
    "session_id": "session_id",
    "timestamp": 1234567890.123,
    "prompt_id": "uuid",
    "image_path": "/tmp/{prompt_id}_image.png",
    "audio_path": "/tmp/{prompt_id}_audio.wav"
}
```

### 6.2 Results distribution

**Topic:** `evospikenet/api/result`

**Payload:**
```json
{
    "response": "Generated text",
    "prompt_id": "uuid"
}
```

### 6.3 Task completion notification

**Topic:** `task/completion`

**Payload:**
```json
{
    "node_id": "pfc-0",
    "prompt_id": "uuid"
}
```
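
For orientation, a pub/sub sketch over these key expressions using the eclipse-zenoh Python bindings. The exact payload accessors differ between zenoh releases, so treat this as illustrative rather than the project's actual code:

```python
# Sketch with the eclipse-zenoh Python bindings (API details vary by version).
import json
import time
import zenoh

session = zenoh.open(zenoh.Config())

# Node side: publish a result on the documented key expression.
session.put("evospikenet/api/result",
            json.dumps({"response": "generated text", "prompt_id": "uuid"}))

# API-server side: subscribe and handle incoming results.
def on_result(sample):
    # In zenoh 1.x the payload is a ZBytes (sample.payload.to_bytes());
    # in older releases it is raw bytes. Adjust to your version.
    data = json.loads(sample.payload.to_bytes())
    print("Result for prompt:", data["prompt_id"])

subscriber = session.declare_subscriber("evospikenet/api/result", on_result)
time.sleep(1)  # Keep the session alive long enough to receive the sample
```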


## 7. Response processing on the frontend

### 7.1 Sending a query (execute_query callback)

**Location:** `distributed_brain.py`

**Processing flow:**
```python
# Build the payload
payload = {
    "prompt": query,
    "session_id": sim_state.get('session_id'),
    "priority": int(priority)
}

# Attach media files if present
if image_contents:
    payload['image'] = content_string
if audio_contents:
    payload['audio'] = content_string

# Send to the API
response = requests.post(f"{api_base_url}/api/distributed_brain/prompt", json=payload)
response_data = response.json()
prompt_id = response_data.get("prompt_id")

# Save the prompt ID
return "", prompt_id  # Clear the input and keep the ID
```

### 7.2 Response display (update_visualizations callback)

**Location:** `distributed_brain.py`

**Polling logic:**
```python
# If you have the current prompt ID
if current_prompt_id:
    # Attempt to obtain results
    result_response = requests.get(
        f"{api_base_url}/api/distributed_brain/result",
        params={"prompt_id": current_prompt_id}
    )

    if result_response.status_code == 200:
        result_data = result_response.json()
        response_text = result_data.get('response')

        if response_text:  # If there is a valid response
            query_response_out = response_text
            new_prompt_id = None  # clear prompt id

```

**Display component:**
```python
dcc.Textarea(
    id='query-response-area',
    style={'width': '100%', 'height': 150},
    readOnly=True,
    placeholder="The simulation's response will appear here..."
)
```

---

## 8. Example of usage with SDK

### 8.1 Basic usage

**Location:** `run_simulation_query.py`

```python
from evospikenet.sdk import EvoSpikeNetAPIClient
# Example: use EvoSpikeNetAPIClient as implemented in evospikenet.sdk

# Client initialization
client = EvoSpikeNetAPIClient()

# Send the prompt
prompt_text = "What is the capital of Japan?"
submission_response = client.submit_prompt(prompt=prompt_text)

# Poll for results
result = client.poll_for_result(timeout=120, interval=5)

# Display the results
if result and result.get("response"):
    print(f"Response: {result['response']}")
    print(f"Timestamp: {result['timestamp']}")
```

### 8.2 Multimodal use

**Location:** `run_multimodal_simulation_query.py`

```python
# Sending multimodal prompts
submission_response = client.submit_prompt(
    prompt="Describe this image and audio",
    image_path=DUMMY_IMAGE_PATH,
    audio_path=DUMMY_AUDIO_PATH,
    priority=1
)

# poll for results
result = client.poll_for_result(timeout=120, interval=5)
```


## 9. Response timeline

### Typical execution sequence

```
T+0s:    User enters a query and clicks "Execute Query"
T+0.1s:  Frontend sends a POST request to the API
T+0.2s:  API creates the prompt file and publishes it to Zenoh
T+0.3s:  Each node receives the prompt from Zenoh
T+0.4s:  Lang-main node starts an inference thread
T+2s:    Inference completes
T+2.1s:  Lang-main node publishes the result to Zenoh
T+2.2s:  API server receives the result and writes it to a file
T+2-7s:  Frontend polls (5-second interval)
T+7s:    Frontend retrieves the result and displays it in the UI
```

### Performance factors

| Factor | Impact time | Notes |
|--------|-------------|-------|
| Network delay | 0.1-0.5 seconds | API request round trip |
| Zenoh communication | 0.01-0.1 seconds | Very fast |
| Model inference | 1-10 seconds | Depends on model size and GPU performance |
| Polling interval | 0-5 seconds | Default 5-second interval |
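
These factors are additive, so the expected end-to-end latency can be bounded with simple arithmetic:

```python
# Back-of-the-envelope latency bounds from the factors above (seconds).
factors = {
    "network": (0.1, 0.5),
    "zenoh": (0.01, 0.1),
    "inference": (1.0, 10.0),
    "polling_wait": (0.0, 5.0),
}
low = sum(lo for lo, _ in factors.values())
high = sum(hi for _, hi in factors.values())
print(f"Expected end-to-end latency: {low:.2f}s to {high:.1f}s")
# ~1.1 s best case, ~15.6 s worst case; consistent with the T+7s
# figure in the timeline above for a ~2 s inference.
```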

## 10. Debugging and troubleshooting

### 10.1 No response returned

**Confirmation points:**

1. Check whether the simulation is running:
   ```python
   status = client.get_simulation_status()
   print(json.dumps(status, indent=2))
   ```
2. Check that the prompt file exists:
   ```bash
   ls -la /tmp/evospikenet_prompt_*.json
   ```
3. Check the result file:
   ```bash
   ls -la /tmp/evospikenet_query_result_*.json
   ```
4. Verify the Zenoh connection:
   - Check for "✅ Zenoh session established in API" in the API log
   - Check for Zenoh subscription messages in the node logs
5. Check the node log:
   ```bash
   # For Docker environments
   docker-compose exec frontend cat /tmp/sim_rank_4.log  # Lang-main
   ```

The first three checks are combined into a single diagnostic sketch below.
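
A minimal diagnostic sketch bundling the status and file checks above. It assumes the SDK client from section 8; this script is not part of the repository:

```python
# Sketch: run checks 1-3 from the list above in one go.
import glob
import json

from evospikenet.sdk import EvoSpikeNetAPIClient

client = EvoSpikeNetAPIClient()

# 1. Is the simulation running?
status = client.get_simulation_status()
print(json.dumps(status, indent=2))

# 2./3. Do prompt and result files exist?
print("Prompt files:", glob.glob("/tmp/evospikenet_prompt_*.json"))
print("Result files:", glob.glob("/tmp/evospikenet_query_result_*.json"))
```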

### 10.2 Verifying response content

**Expected content:**
- ✅ Includes a timestamp
- ✅ Prompt ID matches
- ✅ Response text is not null

**Problematic example:**
```json
{
    "response": null,
    "timestamp": null,
    "prompt_id": "uuid"
}
```

**Cause:**
- Node has not completed inference
- Zenoh communication error
- Model loading error

### 10.3 Common error patterns

#### Error 1: "CUDA out of memory"

**Response:**
```json
{
    "response": "Error during inference: CUDA out of memory"
}
```

**Workaround:**
- Reduce the batch size
- Reduce the model parameter size
- Increase GPU memory

#### Error 2: "Could not write result file"

**Log:**
```
WARNING: Could not write result file (expected in remote mode): [Errno 2] No such file or directory
```

**Cause:**
- No access permission to /tmp directory on remote node

**Workaround:**
- Rely on Zenoh communication alone (no files are required)
- The warning can be ignored (this is normal in remote mode)

#### Error 3: Timeout

**Symptoms:**
```
Polling timed out. No result received.
```

**Workaround:**
- Increase the timeout value: `poll_for_result(timeout=300)`
- Check the model inference time
- Check whether the nodes are working properly


## 11. Response extensibility

### 11.1 Currently supported information

  • ✅ Text response
  • ✅ Prompt ID
  • ✅ Timestamp
  • ✅ Node ID (in task completion notification)

### 11.2 Future expansion candidates

**Detailed metadata:**

```json
{
    "response": "Generated text",
    "prompt_id": "uuid",
    "timestamp": 1234567890.123,
    "metadata": {
        "node_id": "lang-main-0",
        "model_type": "SpikingEvoTextLM",
        "inference_time_ms": 1234,
        "tokens_generated": 20,
        "energy_consumed": 0.5,
        "spike_count": 15000
    }
}
```

**Attach spike data:**

```json
{
    "response": "Generated text",
    "prompt_id": "uuid",
    "spike_data": {
        "total_spikes": 15000,
        "spike_rate": 75.5,
        "layer_stats": {
            "layer_0": 5000,
            "layer_1": 6000,
            "layer_2": 4000
        }
    }
}
```

**Multi-node cooperation results:**

```json
{
    "response": "Integrated final response",
    "prompt_id": "uuid",
    "node_responses": [
        {
            "node_id": "visual-0",
            "response": "Visual: The image shows a cat"
        },
        {
            "node_id": "lang-main-0",
            "response": "Lang: This is an image of a cat"
        },
        {
            "node_id": "pfc-0",
            "response": "PFC: Integrated final response"
        }
    ]
}
```

## 12. Best Practices

### 12.1 Efficient Polling

```python
# ✅ Recommended: appropriate timeout and interval
result = client.poll_for_result(timeout=120, interval=5)

# ❌ Not recommended: interval too short
result = client.poll_for_result(timeout=120, interval=1)  # Puts load on the API
```

### 12.2 Error Handling

```python
from requests.exceptions import RequestException

try:
    result = client.poll_for_result(timeout=120, interval=5)

    if result and result.get("response"):
        # Success
        print(f"Response: {result['response']}")
    else:
        # Timeout or no result
        print("No result received within timeout")

        # Get debug information
        status = client.get_simulation_status()
        print("Simulation status:", status)

except RequestException as e:
    # Network error
    print(f"Network error: {e}")
```

### 12.3 Managing prompt IDs

```python
# ✅ Recommended: save and track prompt IDs
submission = client.submit_prompt(prompt="...")
prompt_id = submission.get("prompt_id")

# Retrieve results later
result = client.get_simulation_result()  # Latest result
```

### 12.4 Multimodal input

```python
import os

# ✅ Recommended: validate all inputs
if os.path.exists(image_path) and os.path.exists(audio_path):
    response = client.submit_prompt(
        prompt="Describe this",
        image_path=image_path,
        audio_path=audio_path
    )
else:
    print("Error: Media files not found")
```

## 13. Related files

| File | Role |
|------|------|
| `api.py` | API endpoint definitions, Zenoh subscriber |
| `sdk.py` | Python SDK, polling logic |
| `distributed_brain.py` | UI, query sending, response display |
| `run_zenoh_distributed_brain.py` | Node implementation, inference processing, result publishing |
| `run_simulation_query.py` | SDK usage example |
| `run_multimodal_simulation_query.py` | Multimodal usage example |

## 14. Summary

### Response characteristics

  1. Asynchronous processing: Query sending and result retrieval are separated
  2. Polling-based: Frontend checks results periodically
  3. Zenoh Communications: Fast and reliable messaging
  4. File-based: Supports both local and remote execution
  5. UUID Tracking: Uniquely identifies each query

### Design Benefits

  • ✅ Scalability: parallel processing on multiple nodes
  • ✅ Flexibility: Local/remote compatible
  • ✅ Debuggability: Intermediate status can be checked on a file basis
  • ✅ Fault tolerance: Zenoh communication redundancy

### Room for improvement

  • ⚠️ Polling efficiency: consider WebSockets or Server-Sent Events
  • ⚠️ Missing metadata: no inference time, energy consumption, or similar information
  • ⚠️ Error details: provide more detailed error information