# EvoSpikeNet Python SDK
> [!NOTE]
> For the latest implementation status, see Functional Implementation Status (Remaining Functionality).
The EvoSpikeNet Python SDK is a high-level API client for the EvoSpikeNet distributed brain simulation system. Text generation, distributed brain simulation, spatial cognitive processing (Feature 13), and artifact management are all available through a few lines of Python.
Last updated: 2026-02-19 🎯 Feature 13 + Genome management SDK added
## 🚀 Quick start

### Installation

```bash
pip install -e .
```
### Basic usage example

```python
from evospikenet.sdk import EvoSpikeNetAPIClient

# Initialize the client
client = EvoSpikeNetAPIClient()

# Text generation
result = client.generate("What is artificial intelligence?")
print(result['generated_text'])

# Distributed brain simulation
response = client.submit_prompt(prompt="Explain the brain's learning mechanisms")
result = client.poll_for_result(timeout=60)
if result:
    print(result.get('response', 'No response'))
```
## ✨ Features

### Core features

- 🔄 Text generation: High-quality text generation
- 🧠 Distributed brain simulation: Multimodal query processing
- 📦 Artifact management: Manage models, logs, and settings
- 📊 Monitoring: Real-time statistics and health checks
- 🧬 Evolution/genome management: List, edit, and apply genomes (`/api/evolution/genomes`)
### Advanced features

- 💾 Snapshot management: Save and restore system state
- 📈 Scalability testing: Performance evaluation
- 🌐 Zenoh communication: Asynchronous distributed communication
- 🎯 AEG-Comm communication optimization: Intelligent communication gating (85-93% traffic reduction)
- ⚖️ Consensus: Distributed decision-making system
- Distributed coordinator: Distributed processing with Zenoh DDS + Raft consensus (built-in tasks: `federated_learning` = average aggregation, `distributed_inference` = immediate completion, `model_aggregation` = weight averaging; node discovery cleans up stale heartbeats via Zenoh)
- 🛡️ High availability: Automatic failover and monitoring
- 🔧 Hardware optimization: Model optimization and benchmarking
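To make the AEG-Comm idea above concrete, the sketch below shows activity-based communication gating in its simplest form: messages below an activity threshold are simply not sent. This is an illustrative toy, not the SDK's actual gating algorithm; the `activity` field and threshold value are assumptions.

```python
# Illustrative sketch only -- NOT the actual AEG-Comm implementation.
# Assumption: each outgoing message carries an "activity" score, and the
# gate forwards only messages whose activity clears a threshold.

def gate_messages(messages, threshold=0.5):
    """Return only the messages whose activity meets the threshold."""
    return [m for m in messages if m["activity"] >= threshold]

msgs = [{"id": i, "activity": i / 10} for i in range(10)]
kept = gate_messages(msgs, threshold=0.8)
print(f"sent {len(kept)}/{len(msgs)} messages "
      f"({100 * (1 - len(kept) / len(msgs)):.0f}% reduction)")
```

In the real system the threshold would adapt to network load and node activity; the point here is only that suppressing low-activity traffic is what produces the quoted reduction.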
### Developer experience

- 🔒 Type safety: Full type hinting
- ⚡ Asynchronous support: async/await support
- 📚 Jupyter integration: Rich display in notebook environments
- 🧪 Comprehensive testing: More than 95% coverage
- 📖 Detailed documentation: Tutorials and API reference
## 📦 Installation

### System requirements

- Python 3.8+
- 4GB RAM (8GB+ recommended)
- Internet connection (for API server access)

### Basic installation

```bash
# From the project root
pip install -e .
```
### Installing optional features

```bash
# Jupyter Notebook integration
pip install -e ".[jupyter]"

# Asynchronous processing
pip install -e ".[async]"

# WebSocket communication
pip install -e ".[websocket]"

# All options
pip install -e ".[all]"
```
### Installation using Docker

```bash
# Build the Docker image
docker build -t evospikenet-sdk .

# Run the container
docker run -it evospikenet-sdk
```
## 🎯 How to use

### Initialization

```python
from evospikenet.sdk import EvoSpikeNetAPIClient

# Basic settings
client = EvoSpikeNetAPIClient()

# Custom settings
client = EvoSpikeNetAPIClient(
    base_url="http://localhost:8000",
    api_key="your-api-key",
    timeout=120
)
```
### Text generation

```python
# Simple generation
result = client.generate("Explain machine learning")
print(result['generated_text'])

# Batch generation
prompts = ["The future of AI", "Quantum computing", "Brain science"]
results = client.batch_generate(prompts, max_length=100)
```
### Distributed brain simulation

```python
# Text query
response = client.submit_prompt(prompt="Explain the mechanisms of memory")
result = client.poll_for_result(timeout=120)

# Multimodal (image + text)
response = client.submit_prompt(
    prompt="Analyze this image",
    image_path="./brain_scan.jpg"
)

# Monitor simulation status
status = client.get_simulation_status()
print(f"Active nodes: {status['active_nodes']}")
```
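The submit-then-poll pattern above is common enough to wrap in a helper. The sketch below is not part of the SDK; it assumes only the `submit_prompt()` and `poll_for_result()` methods shown in this README, so it works with any object exposing those two calls.

```python
import time

def run_query(client, prompt, timeout=120, poll_interval=2.0, **kwargs):
    """Submit a prompt and poll until a result arrives or the timeout expires.

    Sketch only: assumes the client exposes submit_prompt() and
    poll_for_result() as shown in this README. Extra keyword arguments
    (e.g. image_path) are forwarded to submit_prompt().
    """
    client.submit_prompt(prompt=prompt, **kwargs)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = client.poll_for_result(timeout=poll_interval)
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError(f"no result within {timeout}s for prompt: {prompt!r}")
```

Raising `TimeoutError` rather than returning `None` makes missed deadlines impossible to silently ignore.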
### Genome management (save/apply)

```python
genomes = client.list_genomes()
target = genomes[0]["name"]

genome = client.get_genome(target)
genome.setdefault("metadata", {})["note"] = "edited via SDK"

client.save_genome(target, genome, make_active=True)
client.apply_genome(target)
```

Sample: `examples/genome_management_sdk.py`
### Artifact management

```python
import io

# Create a session
session = client.create_log_session("experiment session")

# Upload a model
with open('model.pkl', 'rb') as f:
    client.upload_artifact(
        session_id=session['session_id'],
        artifact_type="model",
        name="trained_model",
        file=io.BytesIO(f.read())
    )

# List artifacts
artifacts = client.list_artifacts(artifact_type="model")

# Download (artifact_id taken from the list above)
client.download_artifact(artifact_id, "downloaded_model.pkl")
```
### Advanced features

```python
# Create a snapshot
snapshot = client.create_snapshot("backup_2026", include_models=True)

# Scalability test
test_result = client.run_scalability_test(max_nodes=50)

# Zenoh communication
client.connect_zenoh(node_id="my_client")
client.publish_zenoh_message("topic/test", {"data": "hello"})

# Consensus
proposal = client.propose_consensus_decision(
    "resource_allocation",
    {"gpu": 2, "hours": 1}
)
```
## 📚 API Reference

For detailed API specifications, refer to the following documents:

- API Reference: Complete method specifications
- Tutorial: Step-by-step guide
- Installation Guide: Detailed setup instructions
- Developer Guide: Extension and development information
### Main classes

| Class | Description |
|---|---|
| `EvoSpikeNetAPIClient` | Main API client |
| `JupyterAPIClient` | Jupyter Notebook integration client |
| `WebSocketClient` | Real-time communication client |
### Main methods

#### Basic operations

- `generate(prompt, max_length)` - Text generation
- `submit_prompt(prompt, image_path, audio_path)` - Multimodal query
- `get_simulation_status()` - Get simulation status
- `poll_for_result(timeout)` - Result polling

#### Artifact management

- `upload_artifact(session_id, artifact_type, name, file)` - Upload
- `list_artifacts(artifact_type)` - List artifacts
- `download_artifact(artifact_id, path)` - Download

#### Advanced features

- `create_snapshot(name, include_models)` - Create snapshot
- `run_scalability_test(max_nodes)` - Scalability test
- `connect_zenoh(node_id)` - Zenoh connection
- `propose_consensus_decision(type, payload)` - Consensus proposal
## 💡 Sample code

The SDK ships with extensive sample code:

### Basic samples

- `sdk_quickstart.py` - Quickstart
- `simple_generation.py` - Basic text generation
- `multimodal_generation.py` - Multimodal processing
- `run_simulation_query.py` - Simulation execution
### 🎯 Feature 13 - Spatial cognition processing samples

Implementation examples for the spatial cognition system (Rank 12-15 spatial processing nodes):

- `spatial_processing_example.py` ⭐ NEW
  - Features: text, multimodal, batch processing, status monitoring
  - Implementation examples:
    - Text prompts → spatial analysis (location, depth, object recognition)
    - Image analysis → Rank 12-15 activation
    - Batch generation → parallel processing of multiple queries
    - Health check → system status check
  - How to run: `python examples/spatial_processing_example.py`
- `async_spatial_processing_example.py` ⭐ NEW
  - Features: asynchronous processing, parallel execution, rate limiting, error handling
  - Implementation examples:
    - Concurrent tasks (simultaneous processing of multiple prompts)
    - Rate-limited transmission (throughput management)
    - Asynchronous health checks
    - Error scenario handling
  - How to run: `python examples/async_spatial_processing_example.py`
  - Benefits:
    - High-throughput processing (simultaneous execution of multiple requests)
    - Resource efficiency (processing continues while waiting on I/O)
    - Suited to distributed systems (scalable)
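The bounded-concurrency pattern the async sample describes can be sketched with stdlib `asyncio` alone. This is an illustrative sketch, not the sample's actual code; `blocking_call` stands in for any synchronous SDK method such as `client.generate`, and `asyncio.to_thread` (Python 3.9+) keeps the event loop free while it runs.

```python
import asyncio

async def generate_all(blocking_call, prompts, max_concurrent=5):
    """Run a blocking call for each prompt with bounded concurrency.

    Sketch only: blocking_call stands in for a synchronous SDK method
    (e.g. client.generate). A semaphore caps how many calls are in
    flight at once, which is the simplest form of rate limiting.
    """
    sem = asyncio.Semaphore(max_concurrent)

    async def one(prompt):
        async with sem:  # at most max_concurrent calls in flight
            return await asyncio.to_thread(blocking_call, prompt)

    # gather() preserves input order in the result list.
    return await asyncio.gather(*(one(p) for p in prompts))

# Usage with the real client (untested assumption):
# results = asyncio.run(generate_all(client.generate, ["AI", "quantum"], 3))
```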
### Advanced samples

- `advanced_features_demo.py` - Advanced features demo
- `aeg_comm_demo.py` - AEG-Comm communication optimization demo
- `async_operations_demo.py` - Asynchronous processing
- `batch_generation.py` - Batch processing
- `error_handling_example.py` - Error handling
### Jupyter sample

```python
from evospikenet.sdk import JupyterAPIClient

client = JupyterAPIClient()
client.set_display_mode("html")

# View results with rich display
client.show_server_info()
client.show_stats()
```
## 🛠️ Developer Guide
For information on extending and developing the SDK, please refer to the [Developer Guide](SDK_DEVELOPER_GUIDE.md).
### Development environment setup
```bash
# Install development dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Coverage report
pytest --cov=evospikenet --cov-report=html

# Code quality checks
black evospikenet/
isort evospikenet/
mypy evospikenet/
flake8 evospikenet/
```
### Contribution flow

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Implement the changes
4. Add tests
5. Commit (`git commit -m 'Add amazing feature'`)
6. Push (`git push origin feature/amazing-feature`)
7. Create a pull request
## 🧪 Testing

```bash
# Run all tests
pytest

# Run a specific test
pytest tests/unit/test_sdk.py

# Run with coverage
pytest --cov=evospikenet --cov-report=term-missing

# Run integration tests
pytest tests/integration/
```
## 📊 Performance

### Benchmark results

- Response time: < 500ms (average)
- Availability: 99.9%+
- Scalability: Supports more than 1000 nodes
- Parallel processing: Supports 100 simultaneous requests
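Client-side latency figures like those above can be reproduced with a small measurement harness. A stdlib-only sketch; `call` stands in for a real SDK method bound to a fixed prompt (e.g. `lambda: client.generate("ping")`), and the thresholds you compare against are up to you.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def measure(call, n_requests=20, workers=10):
    """Fire n_requests concurrently and report per-request latency in ms.

    Sketch only: `call` stands in for a real SDK method bound to a
    fixed prompt. Threads are appropriate here because the work is
    I/O-bound (waiting on the API server).
    """
    def timed(_):
        start = time.perf_counter()
        call()
        return (time.perf_counter() - start) * 1000.0

    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = list(pool.map(timed, range(n_requests)))

    latencies.sort()
    return {
        "mean_ms": statistics.mean(latencies),
        "p95_ms": latencies[int(0.95 * len(latencies))],
    }
```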
### System requirements

- Minimum: 4GB RAM, 2GHz CPU
- Recommended: 8GB RAM, 3GHz CPU
- Optimal: 16GB RAM, multi-core CPU
## 🔒 Security

- Authentication via API key
- SSL/TLS encryption
- Timeouts and retry limits
- Input validation and sanitization
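The timeout-and-retry behaviour listed above can also be applied on the caller's side. A sketch of exponential backoff around any SDK call; the exception types to retry on depend on the SDK, so `retry_on` defaults to a placeholder you should narrow to the SDK's transient error types.

```python
import time

def with_retries(call, attempts=3, base_delay=0.5, retry_on=(Exception,)):
    """Invoke call(); on failure retry with exponential backoff.

    Sketch only: narrow retry_on to the SDK's transient error types
    (connection/timeout errors), never bare Exception in real code.
    Delays are base_delay, 2*base_delay, 4*base_delay, ...
    """
    for attempt in range(attempts):
        try:
            return call()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** attempt))

# Usage (untested assumption about the client):
# result = with_retries(lambda: client.generate("ping"),
#                       attempts=4, base_delay=0.5)
```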
## 🤝 Contribution

Contributions are welcome! For details, see the Contributing Guide.

### Coding standards

- Style: Black + isort
- Type checking: mypy
- Lint: flake8
- Tests: pytest (80%+ coverage)
- Documentation: Google-style docstrings
## 📄 License

This project is released under the MIT License. See the LICENSE file for details.

## 📞 Support

- Documentation: docs/
- Issues: GitHub Issues
- Discussions: GitHub Discussions
## 🙏 Acknowledgments

- The EvoSpikeNet team
- The open-source community
- All contributors

**EvoSpikeNet SDK** - The power of distributed brain simulation at your fingertips.