# Advanced Decision Engine - Implementation Guide

Author: Masahiro Aoki

Implementation notes (artifacts): See docs/implementation/ARTIFACT_MANIFESTS.md for the artifact_manifest.json output by the training script and recommended CLI flags.

## Purpose and use of this document

- **Purpose:** Enable implementers and verifiers to quickly understand the functions and usage of AdvancedPFCEngine/ExecutiveControl.
- **Target audience:** Engineers modifying PFC/Executive, implementers incorporating distributed brain nodes, QA.
- **Read first:** Overview → Architecture → Use with distributed brain nodes → Configuration options.
- **Related links:** The execution script is `examples/run_zenoh_distributed_brain.py`; implementation details are in `implementation/PFC_ZENOH_EXECUTIVE.md`.

## Overview

We have implemented an advanced decision-making engine in EvoSpikeNet's distributed brain simulation. This implementation extends the existing PFCDecisionEngine (Quantum Modulation Feedback Loop) and adds the following functionality:

  1. Hierarchical Planning: Perform complex tasks efficiently by breaking down high-level goals into subtasks and managing dependencies. Recursive decomposition with a maximum depth of 3 enables long-term goal achievement.

  2. Meta-Cognitive Monitoring: Monitor the system's own decision-making process and assess uncertainty and confidence to improve the quality of decisions. It features uncertainty estimation, confidence evaluation, and error detection, and classifies decision quality into high_quality, moderate, low_quality, and critical.

  3. Multi-Step Reasoning: Make smarter decisions by considering multiple steps in reasoning instead of a single decision. Generate actionable plans through goal management and prioritization.

  4. Dynamic Resource Allocation: Dynamically allocate compute resources to optimize efficiency based on task priority and situation. Automate resource allocation to neural modules.

  5. Error Detection & Recovery: Increase system robustness by detecting errors and automatically applying recovery strategies. RETRY, DEGRADATION, REPLAN, FAILOVER, and RESTART are supported as RecoveryStrategy values.

With this extension, EvoSpikeNet simulates more advanced cognitive functions and enables adaptive behavior in distributed brain environments.

## Architecture

### Main components

```
AdvancedPFCEngine
├── PFCDecisionEngine (Base)
│   ├── QuantumModulationSimulator
│   ├── WorkingMemory (LIF layer)
│   └── ChronoSpikeAttention
│
└── ExecutiveControlEngine
    ├── HierarchicalPlanner
    ├── MetaCognitiveMonitor
    ├── GoalManager
    └── ResourceAllocator
```

### New modules

#### 1. ExecutiveControlEngine (`evospikenet/executive_control.py`)

**Role:** Top-level executive control for whole-brain simulations

**Main features:**
- Goal management and prioritization
- Plan creation and execution tracking
- Performance monitoring and adaptation
- Context-aware decision making

**Class composition:**

```python
class ExecutiveControlEngine(nn.Module):
    def __init__(self, input_dim, num_modules, max_concurrent_goals=5): ...
    def add_goal(self, goal, goal_embedding): ...
    def select_next_action(self, current_state): ...
    def allocate_resources(self, action): ...
    def execute_step(self, action, result_state): ...
    def replan(self, failed_plan): ...
    def get_status_summary(self): ...
```

#### 2. MetaCognitiveMonitor

**Role:** Monitoring and evaluating the system's own decision-making process

**Features:**
- Uncertainty Estimation
- Confidence Assessment
- Error Detection
- Self-Assessment

**Neural network configuration:**
- `uncertainty_net`: estimates the uncertainty of a decision (0 = certain, 1 = uncertain)
- `confidence_net`: estimates confidence (0 = low, 1 = high)
- `error_detector`: computes the error probability

**Decision quality classification:**
- `high_quality`: low uncertainty and high confidence
- `moderate`: moderate uncertainty/confidence
- `low_quality`: high uncertainty or low confidence
- `critical`: high error probability or very low confidence
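As a concrete illustration, here is a minimal sketch of how such a classifier might map the three scores onto the four labels. The thresholds (0.1/0.3/0.7) are assumptions for illustration only; the actual cutoffs live inside `MetaCognitiveMonitor`:

```python
def classify_decision_quality(uncertainty: float,
                              confidence: float,
                              error_probability: float) -> str:
    """Map meta-cognitive scores to a quality label.

    The thresholds below are illustrative assumptions, not the
    values used by MetaCognitiveMonitor itself.
    """
    if error_probability > 0.7 or confidence < 0.1:
        return "critical"       # high error probability or very low confidence
    if uncertainty < 0.3 and confidence > 0.7:
        return "high_quality"   # low uncertainty and high confidence
    if uncertainty > 0.7 or confidence < 0.3:
        return "low_quality"    # high uncertainty or low confidence
    return "moderate"
```

The `critical` check runs first so that a high error probability always dominates the other scores.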

#### 3. HierarchicalPlanner

**Role:** Hierarchical task decomposition and dependency management of high-level goals

**Features:**
- Recursive decomposition of goals (maximum depth 3)
- Prediction of dependencies between subgoals
- Automatic priority assignment
- Generate executable plan

**Algorithm:**
1. Goal encoding
2. Subgoal generation (num_modules)
3. Activation filtering (keep only above threshold)
4. Dependency prediction (pairwise evaluation)
5. Priority assignment (CRITICAL/HIGH/NORMAL/LOW)
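The recursive, depth-limited shape of steps 1–3 can be sketched as follows. `decompose`, `SubGoal`, and the `activations` list are hypothetical stand-ins (the real planner predicts per-module activation scores with neural networks); only the depth limit of 3 and the activation-threshold filtering come from the algorithm above:

```python
from dataclasses import dataclass, field
from typing import List

MAX_DEPTH = 3  # matches the maximum decomposition depth described above

@dataclass
class SubGoal:
    name: str
    depth: int
    children: List["SubGoal"] = field(default_factory=list)

def decompose(goal: str, activations: List[float],
              threshold: float = 0.5, depth: int = 1) -> SubGoal:
    """Recursively split a goal into per-module subgoals.

    `activations` stands in for the per-module activation scores the
    real planner predicts; only modules whose score exceeds
    `threshold` are kept (step 3, activation filtering).
    """
    node = SubGoal(goal, depth)
    if depth >= MAX_DEPTH:
        return node  # recursion stops at the maximum depth
    for i, score in enumerate(activations):
        if score > threshold:  # activation filtering
            node.children.append(
                decompose(f"{goal}/module{i}", activations, threshold, depth + 1)
            )
    return node
```

Steps 4 and 5 (pairwise dependency prediction and priority assignment) would then run over the resulting subgoal tree.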

#### 4. AdvancedPFCEngine (added to `evospikenet/pfc.py`)

**Role:** Integration of basic PFC engine and Executive Control

**New methods:**

```python
def forward_with_planning(input_data, context) -> Dict:
    """
    Extended forward pass with planning & meta-cognition.

    Returns:
        - route_probs: routing probability distribution
        - entropy: cognitive entropy
        - spikes, potential: LIF state
        - decision_state: decision state vector
        - meta_assessment: meta-cognitive assessment (quality, uncertainty, confidence)
        - next_action: next action to execute (when planning is enabled)
        - resource_allocation: resource allocation
        - executive_status: executive status summary
    """

def add_goal(goal_description, priority, metadata) -> str:
    """Interface for adding a goal."""

def execute_step(action, result_state) -> bool:
    """Execute a step and update state."""

def get_performance_stats() -> Dict:
    """Retrieve performance statistics."""
```

## How to use

### 1. Basic usage (compatible with the standard PFC)

```python
import torch
from evospikenet.pfc import AdvancedPFCEngine

# Initialization
pfc = AdvancedPFCEngine(
    size=128,
    num_modules=4,
    n_heads=4,
    time_steps=16,
    enable_executive_control=True  # enable Executive Control
)

# Standard forward (compatible with existing code)
input_tensor = torch.randint(0, 256, (1, 32), dtype=torch.long)
route_probs, entropy, spikes, potential = pfc.forward(input_tensor)
```

### 2. Advanced use (planning & meta-cognition)

```python
# Extended forward with planning
result = pfc.forward_with_planning(
    input_tensor,
    context={"enable_planning": True}
)

# Inspect the meta-cognitive assessment
meta = result["meta_assessment"]
print(f"Decision Quality: {meta['quality']}")
print(f"Confidence: {meta['confidence']:.3f}")
print(f"Uncertainty: {meta['uncertainty']:.3f}")
print(f"Error Probability: {meta['error_probability']:.3f}")

# If a recommended action is present
if "next_action" in result:
    action = result["next_action"]
    allocation = result["resource_allocation"]
    print(f"Next Action: {action['step_id']}")
    print(f"Resource Allocation: {allocation}")
```

### 3. Goal management

```python
# Add a goal
goal_id = pfc.add_goal(
    goal_description="Run the image recognition task",
    priority="HIGH",
    metadata={"image_path": "/path/to/image.jpg"}
)

# Execute a step
if "next_action" in result:
    success = pfc.execute_step(
        action=result["next_action"],
        result_state=torch.randn(128)  # state after execution
    )
    print(f"Step executed: {success}")

# Check execution status
status = pfc.get_executive_status()
print(f"Active Goals: {status['goals']['in_progress']}")
print(f"Completed Goals: {status['goals']['completed']}")
print(f"Active Plans: {status['plans']['active']}")
```

### 4. Use with distributed brain nodes

Integration in `examples/run_zenoh_distributed_brain.py` (see `implementation/PFC_ZENOH_EXECUTIVE.md` for details):

```python
# During PFC node initialization
if module_type == "pfc":
    zenoh_cfg = {"connect": ["tcp/127.0.0.1:7447"]}
    self.advanced_pfc = AdvancedPFCEngine(
        size=config.get("d_model", 128),
        num_modules=len(self.module_mapping),
        n_heads=4,
        time_steps=16,
        enable_executive_control=True,
        node_id=node_id,
        zenoh_config=zenoh_cfg
    )

# When processing prompts
result = self.advanced_pfc.forward_with_planning(
    prompt_tensor,
    context={"enable_planning": False}
)

# ✅ Decision results are published to Zenoh automatically
# Topic: pfc/{node_id}/decisions
# Payload: route_probs, entropy, alpha_t, routing_temp, modulation_factor

# Meta-cognitive logging
if "meta_assessment" in result:
    meta = result["meta_assessment"]
    logger.info(
        f"[META-COGNITION] Quality={meta['quality']} | "
        f"Confidence={meta['confidence']:.3f} | "
        f"Uncertainty={meta['uncertainty']:.3f}"
    )
```

## Configuration options

### Distributed brain node configuration (`docker-compose.yml`)

```yaml
pfc-0:
  environment:
    - USE_ADVANCED_PFC=true  # enable Advanced PFC (default: true)
```

### Zenoh topics

Newly added topics:

- `pfc/add_goal`: goal addition request
- `pfc/goal_added`: goal addition completion notification
- `pfc/get_status`: execution status request
- `pfc/status_response`: execution status response

**Example of adding a goal:**

```python
comm.publish("pfc/add_goal", {
    "description": "Process visual information and recognize objects",
    "priority": "HIGH",
    "metadata": {"timeout": 30.0}
})
```

**Example of status acquisition:**

```python
comm.publish("pfc/get_status", {})
# Response arrives on pfc/status_response
```

## Performance statistics

Tracked metrics:

- `total_decisions`: total number of decisions made
- `successful_decisions`: number of successful decisions
- `failed_decisions`: number of failed decisions
- `average_entropy`: average entropy
- `success_rate`: success rate
```python
stats = pfc.get_performance_stats()
print(f"Success Rate: {stats['success_rate']:.2%}")
print(f"Average Entropy: {stats['average_entropy']:.3f}")
```
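A minimal sketch of how these metrics could be aggregated from the `decision_history` records described in the API notes below (each entry shaped like `{'success': bool, 'entropy': float, ...}`, with every metric reported as 0 for an empty history). `compute_performance_stats` is a hypothetical helper for illustration, not the engine's actual code:

```python
from typing import Any, Dict, List

def compute_performance_stats(decision_history: List[Dict[str, Any]]) -> Dict[str, float]:
    """Aggregate performance metrics from a list of decision records.

    Entry shape {'success': bool, 'entropy': float, ...} is an
    assumption mirroring the implementation notes; with an empty
    history every metric is returned as 0.
    """
    total = len(decision_history)
    if total == 0:
        return {"total_decisions": 0, "successful_decisions": 0,
                "failed_decisions": 0, "average_entropy": 0.0,
                "success_rate": 0.0}
    ok = sum(1 for d in decision_history if d["success"])
    return {
        "total_decisions": total,
        "successful_decisions": ok,
        "failed_decisions": total - ok,
        "average_entropy": sum(d["entropy"] for d in decision_history) / total,
        "success_rate": ok / total,
    }
```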

## Testing

**Test suite:** `tests/unit/test_advanced_pfc.py` (29 test cases)

**How to run:**

```bash
cd /Users/maoki/Documents/GitHub/EvoSpikeNet
python3 tests/unit/test_advanced_pfc.py
```

**Test coverage:**
- MetaCognitiveMonitor: Uncertainty/Confidence/Error Detection
- HierarchicalPlanner: Goal decomposition/dependencies/priorities
- ExecutiveControlEngine: Goal management/action selection/resource allocation
- AdvancedPFCEngine: Integration testing/performance tracking
- Integration Scenarios: Multi-step/Error Recovery

## Technical details

### Data structure

```python
@dataclass
class Goal:
    goal_id: str
    description: str
    priority: TaskPriority
    created_at: float
    deadline: Optional[float]
    parent_goal_id: Optional[str]
    status: TaskStatus
    progress: float  # 0.0 to 1.0
    metadata: Dict[str, Any]

@dataclass
class Plan:
    plan_id: str
    goal_id: str
    steps: List[Dict[str, Any]]
    current_step: int
    dependencies: Dict[str, List[str]]
    estimated_duration: float
    actual_duration: float
    status: TaskStatus
    metadata: Dict[str, Any]

@dataclass
class ExecutionContext:
    active_goals: List[Goal]
    active_plans: List[Plan]
    resource_allocation: Dict[str, float]
    performance_metrics: Dict[str, float]
    error_history: List[Dict[str, Any]]
```

### Decision flow

```
Input
  ↓
PFCDecisionEngine.forward()
  ↓
[Quantum Modulation] → generate α(t) → routing-temperature control
  ↓
obtain decision_state
  ↓
MetaCognitiveMonitor(decision_state)
  ↓
compute [uncertainty, confidence, error_prob]
  ↓
HierarchicalPlanner.select_next_action(decision_state)
  ↓
ResourceAllocator.allocate_resources(action)
  ↓
Output: {route_probs, meta_assessment, next_action, allocation}
```

## Future expansion

### Planned features

  1. Reinforcement Learning Integration: Self-improvement using metacognitive feedback
  2. Long-term memory: Episodic memory of goal history
  3. Multimodal integration: visual/auditory/linguistic integrated planning
  4. Distributed Planning: Collaborative decision making among multiple PFC nodes
  5. Extending the attention mechanism: Transformer-like hierarchical attention

## Implementation status

### ✅ Implemented features (v1.0)

| Feature | Status | Details |
|---------|--------|---------|
| ExecutiveControlEngine | ✅ Fully implemented | Goal management, plan creation, execution tracking |
| HierarchicalPlanner | ✅ Fully implemented | Goal decomposition, dependency prediction, prioritization |
| MetaCognitiveMonitor | ✅ Fully implemented | Uncertainty estimation, confidence evaluation, error detection |
| MetaCognitiveRLAgent | ✅ Fully implemented | Reinforcement learning agent with meta-cognitive feedback |
| EpisodicMemory integration | ✅ Fully implemented | Episodic memory and learning with long-term memory |
| add_goal() | ✅ Implemented | Add a goal, generate a plan |
| select_next_action() | ✅ Implemented | Select executable steps |
| allocate_resources() | ✅ Implemented | Resource allocation to neural modules |
| execute_step() | ✅ Implemented | Step execution and status update |
| replan() | ✅ Implemented | Replanning on failure |
| get_executive_status() | ✅ Implemented | Retrieve execution status |
| get_performance_stats() | ✅ Implemented | Performance statistics (decision count, success rate, entropy) |
| forward_with_planning() | ✅ Implemented | Planning-integrated forward path |
| distributed_brain integration | ✅ Implemented | Distributed brain node integration via Zenoh |
| Zenoh topic integration | ✅ Implemented | Automatic publication of decision results on pfc/{node_id}/decisions (via AsyncZenohCommunicator) |
| Enhanced error recovery | ✅ Implemented | RecoveryStrategy (retry/degradation/replan/failover/restart) with automatic recovery integration |

### ⏳ Planned / partially implemented

| Feature | Status | Description |
|---------|--------|-------------|
| Parallel execution switching | 📋 Planned | Parallel execution support for multiple plans |

## API documentation

### get_executive_status()

```python
def get_executive_status(self) -> Dict[str, Any]:
    """
    Retrieve the current state of the executive engine.
    Alias for get_status_summary().

    Returns:
        {
            'goals': {
                'total': int,           # total number of goals
                'completed': int,       # completed goals
                'failed': int,          # failed goals
                'in_progress': int      # goals in progress
            },
            'plans': {
                'total': int,           # total number of plans
                'active': int           # active plans
            },
            'errors': int,              # length of the error history
            'resource_allocation': dict # current resource allocation
        }
    """
    return self.get_status_summary()
```

### get_performance_stats()

```python
def get_performance_stats(self) -> Dict[str, Any]:
    """
    Retrieve performance metrics for the decision engine.

    Returns:
        {
            'total_decisions': int,          # total decisions made
            'successful_decisions': int,     # successful decisions
            'failed_decisions': int,         # failed decisions
            'average_entropy': float,        # average entropy (0-1)
            'success_rate': float            # success rate (0-1)
        }
    """
```

Implementation details:

- A `decision_history` field was added to `ExecutionContext`
- Each decision is recorded as `{'success': bool, 'entropy': float, ...}`
- With no decision history, all metrics are returned as 0

### Error recovery API (ExecutiveControl)

- `RecoveryStrategy`: RETRY, DEGRADATION, REPLAN, FAILOVER, RESTART
- `initiate_graceful_degradation(severity: str)`: gradual degradation (low/medium/high/critical)
- `attempt_automatic_recovery(error_info: Dict[str, Any]) -> RecoveryStrategy`: automatically selects a strategy based on error type and severity
- `execute_recovery_strategy(strategy, context) -> bool`: executes the chosen strategy (retry/degradation/replan/failover/restart)
- `execute_step()` attempts automatic recovery on failure and calls `replan()` when necessary
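As a sketch of the kind of type/severity mapping `attempt_automatic_recovery()` might perform: the `select_recovery_strategy` function and the specific error-type strings below are illustrative assumptions; only the `RecoveryStrategy` member names come from the API above, and the actual selection policy may differ:

```python
from enum import Enum, auto

class RecoveryStrategy(Enum):  # members mirror the list above
    RETRY = auto()
    DEGRADATION = auto()
    REPLAN = auto()
    FAILOVER = auto()
    RESTART = auto()

def select_recovery_strategy(error_type: str, severity: str) -> RecoveryStrategy:
    """Illustrative mapping from error info to a recovery strategy.

    Error-type strings and the ordering of checks are assumptions;
    this only shows the shape of a type/severity-driven selection.
    """
    if severity == "critical":
        return RecoveryStrategy.RESTART       # last resort for critical failures
    if error_type == "node_unreachable":
        return RecoveryStrategy.FAILOVER      # hand work to another node
    if error_type == "plan_failure":
        return RecoveryStrategy.REPLAN        # regenerate the plan
    if severity == "high":
        return RecoveryStrategy.DEGRADATION   # shed load, keep running
    return RecoveryStrategy.RETRY             # transient errors: try again
```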

## Research directions

- Automatic optimization of planning strategies via meta-learning
- Integration of causal inference
- Counterfactual reasoning
- Explainable AI

## License

Copyright 2026 Moonlight Technologies Inc.

## References

- `evospikenet/executive_control.py`: Executive Control implementation
- `evospikenet/pfc.py`: PFC Decision Engine & Advanced PFC
- `examples/run_zenoh_distributed_brain.py`: distributed brain integration
- `tests/unit/test_advanced_pfc.py`: test suite
- `docs/DISTRIBUTED_BRAIN_SYSTEM.md`: distributed brain system specification