Aggregation LLM selection strategy in federated learning
[!NOTE] For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).
Implementation notes (artifacts): See docs/implementation/ARTIFACT_MANIFESTS.md for the artifact_manifest.json output by the training script and the recommended CLI flags.
Overview
Federated learning in EvoSpikeNet's distributed brain architecture implements a dynamic aggregation LLM selection mechanism. This allows the best node (LLM) to be selected as the aggregation center in each learning round.
Importance of aggregation LLM selection
Traditional federated learning simply averages model parameters across all clients. In a distributed brain system, however:
- Nodes have different roles: PFC (central control), Visual, Audio, etc.
- Nodes differ in performance: GPU-equipped nodes versus CPU-only nodes, etc.
- Load fluctuates: processing load changes in real time
The aggregation center therefore needs to be chosen dynamically, selecting the best node for the current situation.
Implemented selection strategy
1. PFCentricStrategy
Application: Standard distributed brain architecture
from evospikenet.federated_strategy import create_dynamic_fedavg_strategy

# Always make the PFC node the center of aggregation
strategy = create_dynamic_fedavg_strategy(
strategy_type="pfc_centric",
pfc_node_id="pfc-0", # PFC node ID
min_fit_clients=2
)
Features: - PFC node coordinates all aggregations as a central control hub - The simplest and most stable strategy - Mimics the role of the brain's prefrontal cortex (PFC)
2. LoadBasedStrategy
Application: Environments that emphasize load balancing
# Make the least loaded node the center of aggregation
strategy = create_dynamic_fedavg_strategy(
strategy_type="load_based",
cpu_weight=0.6, # CPU usage weight
memory_weight=0.4, # Memory usage weight
min_fit_clients=2
)
Features: - Monitor CPU usage and memory usage - Select a node with sufficient resources - Optimize overall system throughput
3. PerformanceBasedStrategy
Application: Environments where model quality is given top priority
# Make the highest performing node the center of aggregation
strategy = create_dynamic_fedavg_strategy(
strategy_type="performance_based",
accuracy_weight=0.5, # accuracy weight
loss_weight=0.3, # loss weight
convergence_weight=0.2, # Convergence speed weight
min_fit_clients=2
)
Features: - Evaluate training accuracy, loss, and convergence speed - Select the node with the best performance - Focus on improving model quality
4. HybridStrategy ★Recommended★
Application: Balanced operation (recommended)
# Considering both load and performance
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.4, # load weight
performance_weight=0.6, # weight of performance
pfc_bonus=0.1, # Bonus to PFC nodes
aggregator_weight_multiplier=2.0, # Weight multiplier for selected nodes
min_fit_clients=2
)
Features: - Comprehensive evaluation of load status and performance - Grants a small bonus to PFC nodes - The most balanced strategy
5. QuantumModulatedStrategy
Applications: Leverage EvoSpikeNet's unique quantum modulation capabilities
# Considering the quantum modulation coefficient α(t) of PFC
strategy = create_dynamic_fedavg_strategy(
strategy_type="quantum_modulated",
entropy_threshold=0.5, # entropy threshold
min_fit_clients=2
)
Features: - Utilizing PFC's cognitive entropy and quantum coefficient α(t) - Prioritize PFC with low entropy (high confidence) state - Utilizes EvoSpikeNet's patented quantum modulation feedback loop
Usage example
Basic usage
# Server startup script (run_fl_server.py)
import flwr as fl
from evospikenet.federated_strategy import create_dynamic_fedavg_strategy
def main():
# Dynamically select aggregate LLM with hybrid strategy
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.4,
performance_weight=0.6,
pfc_bonus=0.1,
aggregator_weight_multiplier=2.0,
fraction_fit=0.8,
min_fit_clients=3,
min_available_clients=3
)
# Start Flower server
fl.server.start_server(
server_address="0.0.0.0:8080",
config=fl.server.ServerConfig(num_rounds=10),
strategy=strategy,
)
if __name__ == "__main__":
main()
Update node metrics
# Collect and send metrics on the client side
class EvoSpikeNetClient(fl.client.NumPyClient):
def fit(self, parameters, config):
# ... training process ...
# Collect metrics
metrics = {
'accuracy': 0.95,
'loss': 0.05,
'cpu_usage': 45.2,
'memory_usage': 62.8,
'convergence_rate': 0.8,
'entropy': 0.3, # For PFC
'alpha_t': 0.65, # For PFC
}
return parameters, num_examples, metrics
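The cpu_usage and memory_usage values above have to be measured somehow. A minimal sketch, assuming the psutil package is available (psutil is not part of EvoSpikeNet), could look like this:
# Minimal sketch: gather resource metrics for the fit() return value.
# Assumes psutil is installed; swap in your own monitoring otherwise.
import psutil

def collect_resource_metrics() -> dict:
    return {
        'cpu_usage': psutil.cpu_percent(interval=0.1),    # percent (0-100)
        'memory_usage': psutil.virtual_memory().percent,  # percent (0-100)
    }
The returned values can then be merged into the metrics dictionary returned from fit().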
Metrics update on the server side
# Update metrics with custom callbacks
class MetricsCallback:
def __init__(self, strategy):
self.strategy = strategy
def on_fit_end(self, server_round, results, failures):
# Incorporate metrics from each client into your strategy
for client_proxy, fit_res in results:
node_id = client_proxy.cid
if fit_res.metrics:
self.strategy.update_node_metrics(node_id, fit_res.metrics)
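If your server loop has no callback hook, the same update can be wired directly into the strategy. The sketch below is an illustrative alternative (not the documented API): it wraps aggregate_fit of whatever object create_dynamic_fedavg_strategy returns, relying on the update_node_metrics method described above.
# Hedged sketch: refresh node metrics just before each aggregation.
def with_metrics_updates(strategy):
    original_aggregate_fit = strategy.aggregate_fit

    def aggregate_fit(server_round, results, failures):
        # Feed each client's reported metrics into the selection strategy
        for client_proxy, fit_res in results:
            if fit_res.metrics:
                strategy.update_node_metrics(client_proxy.cid, fit_res.metrics)
        return original_aggregate_fit(server_round, results, failures)

    strategy.aggregate_fit = aggregate_fit  # per-instance override
    return strategy

# Usage:
# strategy = with_metrics_updates(create_dynamic_fedavg_strategy(strategy_type="hybrid"))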
Selection process details
1. Hybrid strategy selection process
In each round:
1. Collect training results from all clients
2. Compute a hybrid score for each node:
hybrid_score = load_weight × load_score +
performance_weight × performance_score +
pfc_bonus (if PFC)
where:
- load_score = 1.0 - (cpu_usage + memory_usage) / 200
- performance_score = (accuracy + 1/(1+loss)) / 2
3. Select the node with the highest score as the aggregation center
4. Aggregate with the selected node's weight multiplied by 2 (default)
5. Update the global model with the weighted average
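For illustration only, the hybrid score above can be written out as a small helper (the function name and defaults are hypothetical; the formula mirrors the one listed here):
def hybrid_score(metrics, is_pfc, load_weight=0.4, performance_weight=0.6, pfc_bonus=0.1):
    # 1.0 when the node is idle, 0.0 when CPU + memory are fully saturated
    load_score = 1.0 - (metrics['cpu_usage'] + metrics['memory_usage']) / 200
    # average of accuracy and an inverse-loss term
    performance_score = (metrics['accuracy'] + 1.0 / (1.0 + metrics['loss'])) / 2
    score = load_weight * load_score + performance_weight * performance_score
    return score + (pfc_bonus if is_pfc else 0.0)

# e.g. hybrid_score({'cpu_usage': 45.2, 'memory_usage': 62.8,
#                    'accuracy': 0.95, 'loss': 0.05}, is_pfc=True) ≈ 0.85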
2. Quantum modulation strategy selection process
In each round:
1. Obtain the cognitive entropy and α(t) from the PFC node
2. Compute the quantum score:
quantum_score = (1.0 - entropy) × alpha_t
- Low entropy = high-confidence state
- Moderate α(t) = well-balanced modulation
3. Evaluate spike-distilled knowledge from the functional modules
4. Select the node with the highest quantum score as the aggregation center
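The quantum score can be illustrated the same way (hypothetical helper, using the PFC metrics from the client example above):
def quantum_score(entropy, alpha_t):
    # Per the formula above: lower cognitive entropy (higher confidence)
    # and a larger alpha(t) both raise the score.
    return (1.0 - entropy) * alpha_t

# e.g. quantum_score(entropy=0.3, alpha_t=0.65) == 0.455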
Parameter tuning
aggregator_weight_multiplier (aggregation node weight multiplier)
- Default: 2.0
- Recommended range: 1.5 to 3.0
- Effect: Adjust influence of selected node
# Increase influence of selected nodes
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
aggregator_weight_multiplier=3.0 # stronger influence
)
# Moderate influence of selected nodes
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
aggregator_weight_multiplier=1.5 # modest influence
)
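To make the multiplier's effect concrete, here is a sketch of how the selected node's contribution could be scaled before the weighted average. It assumes FedAvg-style example-count weighting over flat parameter arrays; the names are illustrative, not the actual EvoSpikeNet implementation.
import numpy as np

def weighted_average(updates, aggregator_id, multiplier=2.0):
    # updates: list of (node_id, num_examples, flat_parameter_array) tuples.
    # The selected aggregator's example count is scaled by `multiplier`,
    # so its parameters contribute proportionally more to the global model.
    weights = np.array([n * (multiplier if node_id == aggregator_id else 1.0)
                        for node_id, n, _ in updates], dtype=float)
    stacked = np.stack([p for _, _, p in updates])  # shape: (num_nodes, dim)
    return (weights[:, None] * stacked).sum(axis=0) / weights.sum()

# With multiplier=2.0, a selected node that trained on 100 examples
# is counted as if it had trained on 200.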
Hybrid strategy weight balance
# Focus on load (prioritize resource efficiency)
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.7,
performance_weight=0.3
)
# Emphasis on performance (quality priority)
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.3,
performance_weight=0.7
)
# Balanced type (recommended)
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.4,
performance_weight=0.6
)
Check the aggregation results
Log output
2025-12-16 10:15:23 INFO Round 1: Aggregator selected - pfc-0
2025-12-16 10:15:23 INFO Selected aggregator: pfc-0 (hybrid score: 0.872)
2025-12-16 10:16:45 INFO Round 2: Aggregator selected - lang-main
2025-12-16 10:16:45 INFO Selected aggregator: lang-main (hybrid score: 0.895)
Check metrics
# aggregator_node is included in the aggregated metrics
{
'accuracy': 0.94,
'loss': 0.06,
'aggregator_node': 'pfc-0'
}
Best practices
1. Development environment
- Recommended strategy: PFC-centered strategy
- Why: Simple and predictable behavior
strategy = create_dynamic_fedavg_strategy(
strategy_type="pfc_centric",
min_fit_clients=2
)
2. Production environment (emphasis on balance)
- Recommended strategy: Hybrid strategy
- Reason: Good balance between load and performance
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
load_weight=0.4,
performance_weight=0.6,
pfc_bonus=0.1,
aggregator_weight_multiplier=2.0
)
3. Production environment (quality first)
- Recommended strategy: Performance-based strategy
- Why: Maximize model quality
strategy = create_dynamic_fedavg_strategy(
strategy_type="performance_based",
accuracy_weight=0.5,
loss_weight=0.3,
convergence_weight=0.2
)
4. Research/experimental environment
- Recommended strategy: Quantum modulation strategy
- Reason: Utilizes EvoSpikeNet's unique patented technology
strategy = create_dynamic_fedavg_strategy(
strategy_type="quantum_modulated",
entropy_threshold=0.5
)
Troubleshooting
Q1: The same node is always selected
Cause: Node metrics are not being updated. Solution: Send the latest metrics from the client side.
def fit(self, parameters, config):
# Be sure to include metrics
metrics = {
'accuracy': current_accuracy,
'loss': current_loss,
'cpu_usage': get_cpu_usage(), # e.g. psutil.cpu_percent(); see the metrics sketch above
'memory_usage': get_memory_usage() # e.g. psutil.virtual_memory().percent
}
return parameters, num_examples, metrics
Q2: Aggregation is unstable
Cause: aggregator_weight_multiplier is too high. Solution: Lower the weight multiplier.
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
aggregator_weight_multiplier=1.5 # Lower from 2.0
)
Q3: PFC node is not selected
Cause: pfc_bonus is too small. Solution: Increase the bonus.
strategy = create_dynamic_fedavg_strategy(
strategy_type="hybrid",
pfc_bonus=0.2 # Increase from 0.1
)
Summary
By using the hybrid strategy, aggregation LLM selection in federated learning can:
- dynamically select the best node
- balance load and performance
- give moderate preference to PFC nodes
- optimize overall system efficiency
Choose the appropriate strategy for your use case.