EvoSpikeNet User Manual
[!NOTE] For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).
Creation date: February 17, 2026
Version: 1.1.0 🎯 Compatible with Feature 13
Author: Masahiro Aoki
Affiliation: Moonlight Technologies Inc.
Table of contents
- 1. Introduction
- 2. System startup
- 3. Frontend UI functions
- 3.1 Navigation
- 3.2 Data creation
- 3.3 Distributed Brain Simulation
- 3.4 EEG Visualizer
- 3.5 Motor Cortex
- 3.6 Language Model
- 3.7 Multimodal LM
- 3.8 RAG System
- 3.9 Spiking LM
- 3.10 Hyperparameter Tuning
- 3.11 Model Management
- 3.12 Vision Encoder
- 3.13 Audio Encoder
- 3.14 Knowledge Distillation
- 3.15 Audio-text integration
- 3.16 Speech synthesis
- 3.17 Comprehensive Test
- 3.18 Settings Management
- 3.19 Visualization
- 3.20 Test Evaluation
- 3.21 Evolution Dashboard
- 3.22 Genome Visualizer
- 3.23 Unified Log Viewer
- 3.24 Backpropagation Verification
- 3.25 Ultra-large-scale AI system
- 3.26 Spatial Cognitive Processing System
- 3.27 Complete Brain Simulation (29 nodes)
- 4. How to use CLI
- 5. How to use API
- 6. Troubleshooting
1. Introduction
EvoSpikeNet is a distributed brain simulation framework based on spiking neural networks (SNNs). This manual explains each function of the front-end UI in detail.
Main features
- 🧠 Distributed brain simulation: Hierarchical brain function model with 24 nodes
- 🎯 Spatial cognitive processing system (Feature 13): spatial processing nodes at Rank 12-15 (Where/What integration and attention control)
- 🤖 Multimodal AI: Integrated processing of vision, hearing, language, and movement
- 📚 RAG System: Knowledge retrieval using vector database
- 🧬 Evolutionary learning: Autonomous optimization using genetic algorithms
- 📊 Comprehensive Visualization: Real-time metrics monitoring and graph display
- 🔍 Backpropagation verification: Accuracy verification of SNN training
2. System startup
Frontend startup
cd /Users/maoki/Documents/GitHub/EvoSpikeNet/frontend
python app.py
Access http://localhost:8050 in your browser.
Docker start
docker-compose up -d
3. Frontend UI functions
3.1 Navigation
Feature overview
- Icon-based navigation to each feature page
- Function name display with tooltip
- Sort by drag and drop
- Additional feature access with "More" dropdown
How to use
- Click the icon to go to each page
- Drag the icons to rearrange them in your preferred order
- Return to the default order using the reset button (↻)
3.2 Data creation
Icon: 🗄️ Database
Path: /data-creation
Feature overview
This page is used to create and manage training data.
Main features
Create dataset
- Dataset Name: Enter a unique identifying name
- Data type: Select from text/image/audio
- Number of samples: Specify the number of data to generate
- Create button: Click to generate dataset
Data upload
- File selection: Drag and drop or click to select files
- Supported formats: CSV, JSON, TXT, JSONL
- Preview: Check data content before uploading
- Upload button: Register to database
Data management
- Dataset list: Display created datasets
- Editing function: Modify the contents of the dataset
- Delete function: Delete unnecessary datasets
- Export: Download dataset as JSON/CSV
3.3 Distributed brain simulation
Icon: 🧠 Brain
Path: /distributed-brain
Feature overview
This is a simulation page using a 24-node hierarchical brain function model.
Main features
Node configuration
- PFC (Prefrontal Cortex): Central control hub, decision making
- Visual system: V1 (early vision) → IT (object recognition)
- Auditory system: A1 (early hearing) → STS (speech understanding)
- Language system: Wernicke (understanding) → Broca (generation)
- Motor system: M1 (motor command) → SMA (motor planning)
Simulation execution
- Prompt Input: Enter multimodal prompt
- Text: Language instructions
- Image: File upload
- Audio: microphone input or file upload
- Run button: Start simulation
- Progress display: Displays processing status in real time
Real-time monitoring
- Node Activity: Activation status of each node
- Spike Rate: firing rate of neurons
- Connection Strength: Synaptic connection strength between nodes
- Energy Consumption: Real-time power consumption
Results display
- Tab format: Output/Graphs/Metrics/Logs
- Output tab: Text display of simulation results
- Graph tab: Time series graph of node activity
- Metrics tab: Quantitative evaluation metrics
- Log Tab: Detailed system log
3.4 EEG Visualizer
Icon: 🧠 Brain
Path: /eeg-visualizer
Feature overview
Real-time EEG data visualization system. Streaming display and analysis of brain wave data. It supports both Mock data and WebSocket connections.
Main features
Connection settings
- Connection Type: Data source selection
- WebSocket: Real-time data from external EEG device or simulator
- Mock Data: Generate pseudo EEG data for simulation
- WebSocket URL: Server connection settings (default: ws://localhost:8765)
- Connect/Disconnect: Connection control button (always visible)
- Buffer Size: Data buffer size (100-5000 samples)
- Sampling Rate: Sampling rate setting (100-10000Hz)
- Refresh Rate: UI refresh interval (0.1-5 seconds)
Status display
- Connection status: Display of Connected/Disconnected
- Number of received samples: counter for the number of data samples received
- Number of errors: Counter of communication error occurrences
Visualization tab
Waveforms tab
Displays EEG waveforms of multiple channels in real time.
- Number of channels: 4 channels (Ch1-Ch4)
- Time axis: automatic scaling
- Refresh frequency: follows the configured Refresh Rate
- Feature: automatic detection and highlighting of spike events
Spectrum tab
Displays frequency spectrum analysis.
- FFT analysis: frequency decomposition using the fast Fourier transform
- Frequency range: 0.5-100Hz
- Resolution: depends on the sampling rate
- Feature: real-time spectrum updates
3D View tab
Displays a 3D brain model based on electrode positions.
- Electrode arrangement: compliant with the international 10-20 system
- Activity mapping: activity level of each electrode expressed by color
- Interaction: rotate, zoom, and pan operations
- Feature: real-time activity updates
Band Powers tab
Displays power comparison for each frequency band.
- Frequency bands:
  - Delta waves (0.5-4Hz): deep sleep/meditation
  - Theta waves (4-8Hz): relaxation/creativity
  - Alpha waves (8-13Hz): relaxed alertness
  - Beta waves (13-30Hz): concentration/active state
  - Gamma waves (30-100Hz): higher cognitive processing
- Comparison format: grouped bar chart
- By channel: band power comparison for each channel
- Feature: real-time power calculation
Control function
- Clear Buffer: Clear accumulated data
- Export Data: Export data in CSV format
- Auto-refresh: Automatic refresh at set interval
How to use
Using in WebSocket mode
- Set Connection Type to "WebSocket"
- Check the WebSocket URL (default: ws://localhost:8765)
- Click the Connect button
- After successful connection, real-time data will be displayed
- Switch between tabs to see different visualizations
Use in Mock data mode
- Set Connection Type to "Mock Data"
- Click the Connect button
- Automatically generated pseudo-EEG data will be displayed
- Adjust parameters to change data characteristics
Data analysis workflow
- Connect and start data streaming
- Check the raw waveform in the Waveforms tab
- Analyze frequency characteristics in the Spectrum tab
- Evaluate your brainwave status with the Band Powers tab
- Visualize spatial distribution with 3D View tab
Technical specifications
- Data format: JSON message protocol
- Sampling rate: up to 1000Hz
- Number of channels: Supports 4 channels
- Buffer size: up to 5000 samples
- Communication Protocol: WebSocket (RFC 6455)
- UI Framework: Dash + Plotly.js
- Real-time performance: Latency less than 1ms
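As a rough sketch of the WebSocket mode described above, the example below connects to the default URL and parses incoming JSON messages. The `websockets` dependency and the payload fields (`timestamp`, `channels`) are assumptions for illustration; the actual message schema is defined by the EEG server.
```python
# Minimal sketch of a WebSocket client for the EEG stream (assumed payload schema)
import asyncio
import json

import websockets  # pip install websockets


async def stream_eeg(url: str = "ws://localhost:8765", n_messages: int = 10):
    async with websockets.connect(url) as ws:
        for _ in range(n_messages):
            raw = await ws.recv()
            msg = json.loads(raw)
            # Hypothetical fields: adapt to the actual server payload
            channels = msg.get("channels", {})
            print(msg.get("timestamp"), {ch: len(v) for ch, v in channels.items()})


if __name__ == "__main__":
    asyncio.run(stream_eeg())
```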
Notes
- When connecting via WebSocket, the corresponding EEG server must be running
- Be careful of memory usage when buffering large amounts of data
- Browser resource consumption increases when viewing in real time
3.5 Motor cortex
Icon: 🏃 Running
Path: /motor-cortex
Feature overview
This is a simulation page for motion control and robot control.
Main features
Movement pattern generation
- Goal setting: Specify coordinates or movement pattern
- Trajectory planning: Calculate the optimal motion trajectory
- Execution: Simulation or real machine control
Robot control
- 7 degrees of freedom arm: Coordinated control of 7 joints
- Parameter settings: Position, speed, acceleration, torque
- Waypoint: Trajectory generation up to 50 points
- Real-time feedback: Integration of sensor information
Visualization
- 3D Viewer: 3D display of motion trajectory
- Joint angle graph: Angle change over time
- Torque graph: Torque transition of each joint
3.6 Language model
Icon: 💬 Language
Path: /evospikenet-lm
Feature overview
EvoSpikeNet-based language model function.
Main features
Text generation
- Prompt input: Enter the starting sentence
- Parameter settings:
- Maximum number of tokens: maximum length to generate
- Temperature: Degree of creativity (0.1-2.0)
- Top-p: Threshold of probability distribution
- Top-k: number of candidate tokens
- Generation button: Start text generation
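For reference, a generation request can also be issued through the Python SDK. The sketch below mirrors the UI parameters above; passing `temperature`, `top_p`, and `top_k` as keyword arguments is an assumption for illustration, so check the SDK signature for the exact names.
```python
# Minimal sketch of text generation via the SDK (sampling kwargs are assumed names)
from evospikenet.sdk import EvoSpikeNetAPIClient

client = EvoSpikeNetAPIClient(base_url="http://localhost:8000")
result = client.generate(
    "スパイキングニューラルネットワークの特徴を説明してください",
    max_length=128,      # maximum number of tokens to generate
    temperature=0.8,     # degree of creativity (0.1-2.0)
    top_p=0.9,           # nucleus sampling threshold (assumed parameter name)
    top_k=40,            # number of candidate tokens (assumed parameter name)
)
print(result.get("generated_text"))
```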
Conversation mode
- Multi-turn dialogue: Continuous dialogue that preserves context
- History management: Save and load conversation history
- Export: Save conversation content to text file
Evaluation function
- Perplexity: Language model performance indicator
- BLEU Score: Machine translation evaluation
- ROUGE score: Summary evaluation
3.7 Multimodal LM
Icon: 📚 Layer
Path: /multi-modal-lm
Feature overview
This is a multimodal AI page that integrates vision, hearing, and language.
Main features
Multimodal input
- Image input: Upload image file
- Text input: Enter your question or instructions
- Audio input: Audio file or microphone input
- Integrated processing: Integrate and process all modals
Image understanding
- Object detection: Detect objects in images
- Caption generation: Generate a description of the image
- Visual Question Answering: Answer questions about images
Speech understanding
- Voice Recognition: Convert speech to text
- Speaker identification: Identify the speaker
- Emotion Recognition: Estimating emotions from audio
Integrated output
- Text: Text output of integration results
- Speech synthesis: Output the result as voice
- Visualization: Display attention map
3.8 RAG System
Icon: 📖 Reading
Path: /rag
Feature overview
Knowledge retrieval and generation using Retrieval-Augmented Generation (RAG).
Main features
Document management
- Upload: Drag and drop PDF/TXT/DOCX and other various formats
- Preview: The first few lines of text, the first page of PDF, and 200x200 thumbnails of images are displayed.
- Progress: Progress bar/status will be updated while uploading
- Chunk splitting: Automatic splitting based on tokens (default 512 tokens, 128 duplicate tokens)
- Vectorization: Vector generation in embedded models
- Index registration: Register with Milvus/Elasticsearch and version control (using VersionManager)
- Batch processing: Multiple files can be submitted to a queue, monitored for progress, and canceled at once
Knowledge Search
- Query input: Enter the question you want to search for
- Perform search: Search for similar documents
- Result display: Display in order of relevance, sorting and filtering possible
- Details view: Check the contents of each document
- Version history: Quickly switch between past versions
- Show differences: Check the text differences between the selected versions
- Continuous version download/rollback function included
RAG generation
- Question input: Question for the content you want to generate
- Search integration: Automatically search for relevant documents
- Context generation: Consolidate search results
- Answer generation: Answer generation by LLM
Settings
- Vector DB: Select from Milvus/FAISS
- Embedded model: all-MiniLM-L6-v2 etc.
- Chunk size: 512 to 2048 tokens
- Overlap: 0 to 200 tokens
- Search number: Top-k setting (1 to 20)
- API key: If no authentication token is set, only valid for local host
- Timeout: For long-running processing, the server-side safe_request manages the timeout (default 30 seconds)
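To make the chunking defaults above concrete, here is a minimal sketch of token-based splitting with overlap (512-token chunks, 128-token overlap). The whitespace tokenizer is a stand-in; the actual pipeline chunks with the embedding model's tokenizer.
```python
# Minimal sketch of token-based chunking with overlap (whitespace tokens as a stand-in)
def chunk_tokens(text: str, chunk_size: int = 512, overlap: int = 128):
    tokens = text.split()  # placeholder tokenization
    step = chunk_size - overlap
    chunks = []
    for start in range(0, max(len(tokens) - overlap, 1), step):
        chunk = tokens[start:start + chunk_size]
        if chunk:
            chunks.append(" ".join(chunk))
    return chunks

# Example: a long document becomes overlapping chunks ready for vectorization
print(len(chunk_tokens("word " * 2000)))
```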
3.9 Spiking LM
Icon: ⚡ Lightning bolt
Path: /spiking-lm
Feature overview
A spiking neural network-based language model.
Main features
SNN settings
- Neuron model: Select from LIF/ALIF/Izhikevich
- Membrane potential threshold: Threshold setting for spike firing
- Time constant: Decay time constant of membrane potential
- Reset potential: Reset value after spike
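To illustrate how the neuron parameters above interact, here is a minimal discrete-time LIF update (leak, integrate, fire, reset). The numeric values are illustrative, not the framework's defaults.
```python
# Minimal leaky integrate-and-fire (LIF) update step; values are illustrative
import torch

def lif_step(v, input_current, tau=20.0, v_threshold=1.0, v_reset=0.0, dt=1.0):
    """One discrete-time LIF update: leak, integrate, fire, reset."""
    # Leaky integration of the membrane potential toward the reset potential
    v = v + (dt / tau) * (-(v - v_reset) + input_current)
    # Spike wherever the membrane potential crosses the threshold
    spikes = (v >= v_threshold).float()
    # Reset the membrane potential of neurons that fired
    v = torch.where(spikes.bool(), torch.full_like(v, v_reset), v)
    return v, spikes

v = torch.zeros(8)                                    # 8 neurons at rest
v, s = lif_step(v, input_current=torch.rand(8) * 2.0)
print(s)                                              # binary spike vector
```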
Spike encoding
- Rate Encoding: Encoding by firing rate
- Temporal Encoding: Encoding by temporal timing
- TAS Encoding: Time adaptive encoding
- Population Encoding: Population encoding
Learning settings
- STDP: Spike timing dependent plasticity
- Meta-STDP: Meta-learning STDP
- Energy-STDP: Energy constraint STDP
- Learning rate: 0.0001 to 0.01
Execution/Evaluation
- Text Generation: Spike-based generation
- Energy measurement: Measuring power consumption
- Spike Rate: Analysis of firing frequency
- Performance comparison: Comparison with ANN
3.10 Hyperparameter tuning
Icon: 🎚️ Slider
Path: /tuning
Feature overview
Automatic hyperparameter optimization tool.
Main features
Tuning settings
- Optimization method: Grid Search/Random Search/Bayesian Optimization/Evolutionary
- Parameter space: Setting the range of optimization target parameters
- Evaluation index: Accuracy/Loss/F1-Score etc.
- Number of attempts: Setting the maximum number of attempts
Parameter settings
- Learning rate: Specify range on logarithmic scale
- Batch size: 8, 16, 32, 64, 128
- Number of epochs: 10 to 200
- Regularization coefficient: Strength of L1/L2 regularization
- Dropout rate: 0.0~0.5
Execution/Monitoring
- Start tuning: Start the optimization process
- Progress display: Progress display in real time
- Result display: Display the results of each trial in table format
- Best Parameters: Display the best parameter set
Visualization
- Parameter importance: Influence of each parameter
- Optimization history: Transition of objective function value
- Parallel coordination: Visualization of multidimensional parameters
- Heatmap: interaction between parameters
3.11 Model management
Icon: 📦 Box
Path: /model-management
Feature overview
A function to manage and deploy trained models.
Main features
Model registration
- Model Name: Unique identification name
- Version: Semantic versioning (v1.0.0 format)
- Description: General explanation of the model
- Tags: Tagging for search
- Upload: Upload model file
Model list
- Search filter: Search by name/tag/date
- Sort: Sort by name/date/version
- Preview: Display model information
- Download: Download model file
Version control
- Version history: List of all versions
- Difference display: Difference comparison between versions
- Rollback: Revert to previous version
- Metadata: Detailed information for each version
Deployment management
- Deployment destination selection: Production/Staging/Development
- Endpoint settings: REST API endpoint
- Scaling: Setting the number of replicas
- Health Check: Monitor deployment status
3.12 Vision encoder
Icon: 👁️ Eye
Path: /vision-encoder
Feature overview
Visual information encoding and feature extraction.
Main features
Image input
- File upload: JPEG/PNG image
- Camera input: Real-time input from web camera
- Batch processing: Batch processing of multiple images
Encoder settings
- Architecture: ResNet/VGG/EfficientNet/Vision Transformer
- Pre-training: ImageNet/COCO, etc.
- Output dimensions: 128/256/512/1024
- Normalization: L2/BatchNorm
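As one concrete example of the encoder settings above, the sketch below extracts a 512-dimensional global feature with a pretrained ResNet-18 from torchvision; the file name and the backbone choice are placeholders.
```python
# Minimal sketch of global feature extraction with a pretrained ResNet-18
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Identity()   # drop the classifier head to expose the 512-d features
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("example.jpg").convert("RGB")        # placeholder image path
with torch.no_grad():
    features = model(preprocess(img).unsqueeze(0))    # shape: [1, 512]
print(features.shape)
```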
Feature extraction
- Global Features: Feature vector of the entire image
- Local Features: Features for each region
- Attention Map: Attention visualization
- Feature dimension reduction: Visualization by PCA/t-SNE
Evaluation/Analysis
- Feature distribution: Visualization in t-SNE/UMAP
- Similarity search: Search for similar images
- Clustering: Grouping of images
3.13 Audio encoder
Icon: 🎤 Microphone
Path: /audio-encoder
Feature overview
Audio data encoding and feature extraction.
Main features
Voice input
- File upload: WAV/MP3/FLAC
- Microphone recording: Real-time recording
- Long duration audio: automatic splitting process
Encoder settings
- Feature extraction: MFCC/Mel-Spectrogram/Wave2Vec
- Sampling rate: 8kHz/16kHz/44.1kHz
- Frame length: 25ms to 50ms
- Hop length: 10ms to 25ms
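As an illustration of the settings above, the sketch below extracts MFCC features with librosa using a 25 ms frame and a 10 ms hop at 16 kHz; librosa and the file name are assumptions for illustration.
```python
# Minimal sketch of MFCC extraction with librosa (assumed dependency)
import librosa

y, sr = librosa.load("example.wav", sr=16000)         # resample to 16 kHz
mfcc = librosa.feature.mfcc(
    y=y,
    sr=sr,
    n_mfcc=13,
    n_fft=int(0.025 * sr),       # 25 ms frame length
    hop_length=int(0.010 * sr),  # 10 ms hop length
)
print(mfcc.shape)                # (13, number_of_frames)
```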
Preprocessing
- Noise Removal: Spectral Subtraction
- Volume normalization: RMS normalization
- Silence Removal: VAD (Voice Activity Detection)
- Augmentation: Pitch Shift/Time Stretch
Analysis/Visualization
- Waveform display: Time domain waveform
- Spectrogram: Time-frequency analysis
- Mel Spectrogram: Mel scale spectrum
- Feature Vector: Visualization of encoding results
3.14 Knowledge Distillation
Icon: 🧪 Flask
Path: /distillation
Feature overview
Knowledge transfer from large-scale models to small-scale models.
Main features
Model settings
- Teacher model: Large pre-trained model
- Student model: Small model for weight reduction
- Distillation temperature: Soft label temperature parameters (1.0 to 10.0)
- α coefficient: Balance between distillation loss and task loss
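The distillation temperature and α coefficient above combine in the standard response-based distillation loss. The following PyTorch sketch shows one common formulation; it is an illustration, not the framework's exact implementation.
```python
# Minimal sketch of a response-based knowledge distillation loss
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft-label term: KL divergence between temperature-scaled distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-label term: standard cross-entropy on the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    # alpha balances distillation loss against the task loss
    return alpha * soft + (1.0 - alpha) * hard

loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10), torch.randint(0, 10, (8,)))
print(float(loss))
```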
Distillation method
- Response-based: Imitation of output probability distribution
- Feature-based: Imitation of hidden layer features
- Relation-based: Imitation of relationships between samples
- Attention-based: Imitation of attention patterns
Learning execution
- Dataset selection: Distillation dataset
- Number of epochs: Setting the number of learning times
- Start learning: Start the distillation process
- Progress monitoring: Real-time learning curve
Evaluation/Comparison
- Accuracy Comparison: Teacher vs Student Model
- Size reduction rate: Number of parameters/file size
- Inference Speed: Latency comparison
- Compression Ratio: Overall compression effect
3.15 Audio-text integration
Icon: 🌊 Waveform
Path: /audio-text-integration
Feature overview
It is an integrated function of speech recognition, speech synthesis, and text processing.
Main features
Voice Recognition (ASR)
- Audio input: Microphone or file upload
- Run recognition: Convert speech to text
- Language selection: Japanese/English/Chinese, etc.
- Post-processing: Add punctuation/capitalization
Speech synthesis (TTS)
- Text input: Text you want to synthesize
- Speaker selection: Select audio type
- Speed adjustment: 0.5 to 2.0 times based on 1.0 times speed
- Pitch Adjustment: Raise or lower pitch
Integration process
- Voice→Text→Speech: Voice conversion
- Text → Audio → Text: Synthesis verification
- Multilingual translation: Voice translation
- Summary: Summary of audio content
3.16 Speech synthesis
Icon: 🔊 Speaker
Path: /speech-synthesis
Feature overview
A high-quality text-to-speech (TTS) system.
Main features
Composition settings
- Text input: Text to be synthesized (up to 500 characters)
- Speaker model: Select from multiple speakers
- Emotion: Neutral/Joy/Anger/Sadness
- Speed: 0.5~2.0x speed
- Pitch: -12 to +12 semitones
- Energy: Volume adjustment
Advanced settings
- Pause insertion: Pause at sentence break
- Emphasis: Emphasize specific words
- Intonation: Intonation of interrogative/declarative sentences
- Noise Removal: Noise removal of synthesized speech
Preview/Output
- Synthesis execution: Audio generation
- Preview: Play in browser
- Download: Save in WAV/MP3 format
- Batch composition: Batch composition of multiple texts
3.17 Comprehensive Test
Icon: 🧪 Test tube
Path: /integrated-testing
Feature overview
Integration testing and evaluation of the entire system.
Main features
Test suite
- Unit testing: Testing individual components
- Integration testing: Testing inter-module cooperation
- E2E testing: End-to-end scenario testing
- Performance Test: Performance Benchmark
Test execution
- Test Selection: Select the test suite to run
- Start Execution: Start the test process
- Progress display: Real-time progress
- Result display: Success/failure details
Report
- Coverage: Code coverage rate
- Success rate: Test success rate
- Failure details: error log and stack trace
- Performance: Execution time and memory usage
3.18 Settings Management
Icon: ⚙️ Gear
Path: /settings
Feature overview
Configuration management for the entire system.
Main features
Basic settings
- Language: Japanese/English
- Time Zone: UTC/JST/Other
- Theme: Light/Dark
- Notifications: Email/Slack notification settings
System settings
- Log level: DEBUG/INFO/WARNING/ERROR
- Memory Limit: Maximum memory usage
- CPU Limit: Maximum CPU usage
- GPU Settings: GPU device to use
Database settings
- Vector DB: Milvus/FAISS connection information
- PostgreSQL: Relational DB connection
- Redis: Cache server settings
- Elasticsearch: Log search engine settings
Security settings
- Authentication: OAuth/JWT settings
- Access Control: Role-based access control
- Encryption: Data encryption settings
- Audit log: Record of operation history
3.19 Visualization
Icon: 📊 Graph
Path: /visualization
Feature overview
Visualization of the learning process and metrics.
Main features
Learning curve
- Loss function: Train/Validation Loss
- Accuracy: Accuracy/F1-Score trends
- Gradient norm: Change in gradient magnitude
- Learning rate: Learning rate scheduling
Network visualization
- Architecture diagram: Visualization of layer structure
- Activation: Activation pattern of the middle layer
- Weight distribution: weight histogram
- Gradient Flow: Visualizing gradient propagation
Spike pattern
- Raster plot: Spike timing
- Firing rate heatmap: Neuron activity
- Membrane potential: Change in membrane potential over time
- Synchrony: degree of synchrony between neurons
Custom Dashboard
- Add graph: Select the metrics you want to display
- Layout settings: Adjust grid layout
- Update frequency: Real time/1 second/5 seconds/10 seconds
- Save: Save dashboard settings
3.20 Test evaluation
Icon: ✅ Check
Path: /test-evaluation
Feature overview
Model performance evaluation and benchmarking.
Main features
Evaluation dataset
- Dataset selection: Test dataset
- Batch size: Batch size during evaluation
- Device: CPU/GPU selection
- Start evaluation: Start the evaluation process
Evaluation metrics
- Classification: Accuracy/Precision/Recall/F1-Score
- Regression: MSE/MAE/R²
- Generation: BLEU/ROUGE/Perplexity
- Search: MRR/NDCG/Recall@K
Confusion matrix
- Performance by class: Accuracy for each class
- Misclassification patterns: Analysis of common mistakes
- Heatmap: Visualization of confusion matrix
Detailed analysis
- Error analysis: Details of misclassified samples
- Confidence Distribution: Histogram of prediction confidence.
- ROC curve: TPR-FPR curve
- PR curve: Precision-Recall curve
3.21 Evolution Dashboard
Icon: 🌳 Project Diagram
Path: /evolution
Feature overview
Monitoring of evolutionary processes using genetic algorithms.
Main features
Evolution settings
- Population size: 10-1000 individuals
- Number of generations: Maximum number of generations
- Crossover rate: 0.0~1.0
- Mutation rate: 0.0-1.0
- Selection method: Tournament/Roulette/Rank selection
- Save Elite: Save top N individuals
Evolution execution
- Initialization: Generate random population
- Evaluation: Evaluation by fitness function
- Selection: Selection of parent individuals
- Crossover/Mutation: Generation of the next generation
- Evolution Loop: Iterate until termination condition
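The execution steps above correspond to a standard generational loop. The toy sketch below shows the idea on a real-valued genome; the selection and mutation operators here are simplified stand-ins for the framework's own operators.
```python
# Toy generational genetic algorithm: initialize, evaluate, select, crossover/mutate, iterate
import random

def evolve(fitness, genome_len=8, pop_size=50, generations=100,
           crossover_rate=0.8, mutation_rate=0.05, elite=2):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = [list(ind) for ind in scored[:elite]]      # elitism: keep top individuals
        while len(next_pop) < pop_size:
            a, b = random.sample(scored[:pop_size // 2], 2)   # truncation selection of parents
            if random.random() < crossover_rate:
                child = [x if random.random() < 0.5 else y for x, y in zip(a, b)]  # uniform crossover
            else:
                child = list(a)
            child = [g + random.gauss(0, 0.1) if random.random() < mutation_rate else g
                     for g in child]                          # Gaussian mutation
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = evolve(lambda g: -sum(x * x for x in g))               # maximize -> minimize sum of squares
print(best)
```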
Real-time monitoring
- Best Fitness: Best score for each generation
- Average Fitness: Average score of the population
- Diversity: Trends in genetic diversity
- Convergence status: Evolution convergence judgment
Visualization
- Fitness transition graph: Transition by generation
- Individual distribution: Visualization with t-SNE
- Phylogenetic tree: Evolutionary phylogenetic relationships
- Gene frequency: Gene frequency
3.22 Genome Visualizer
Icon: 🧬 DNA
Path: /genome-visualizer
Feature overview
Visualization of the genome (structure) of a neural network.
Main features
Genome loading
- File selection: Select genome file
- Individual ID: Specification of specific individual
- Generation: Specify generation number
- Load: Load genome data
Structure visualization
- Node graph: structure of neurons and layers
- Connection matrix: Matrix display of synaptic connections
- Hierarchy display: Visualization of layer hierarchy
- 3D display: Network display in three dimensions
Genetic information
- Number of layers: Total number of layers
- Number of neurons: Total number of neurons
- Number of connections: Total number of synapses
- Number of parameters: Number of learnable parameters
Comparison function
- Two-individual comparison: Difference between two genomes
- Intergenerational comparison: Evolutionary changes between generations
- Top N comparison: Comparison of top individuals
- Difference Highlight: Highlight changes
3.23 Unified Log Viewer
Icon: 🖥️ Terminal
Path: /log-viewer
Feature overview
This page displays and searches the logs of the entire system in an integrated manner.
Main features
Log display
- Real-time update: Automatic update every 1 second
- Level filter: DEBUG/INFO/WARNING/ERROR/CRITICAL
- Service filter: Show only logs of specific services
- Time range: Filter by start time to end time
Search function
- Keyword search: String search within log messages
- Regular Expressions: Advanced pattern matching
- Multiple conditions: Search with AND/OR conditions
- Exclusion Search: Excluding specific patterns
Filter
- Service: frontend/backend/database/worker
- Log level: Filter by importance level
- Host: Logs for a specific node
- Tags: Filter by custom tags
Export
- Text Download: Save log to text file
- JSON download: Save as structured data
- CSV output: Output for spreadsheet
- Period specification: Export only logs for a specific period
Analysis function
- Anomaly Detection: Detection of error patterns
- Frequency analysis: Log appearance frequency statistics
- Trend: Log trend over time
- Alert: Notification when specific pattern is detected
3.24 Backpropagation verification
Icon: 🧮 Calculator
Path: /backprop-verification
Feature overview
This is a system that comprehensively verifies the accuracy, numerical stability, and convergence of backpropagation in spiking neural networks (SNN).
Main features
Parameter setting panel
Basic parameters
- Gradient Verification Epsilon (ε)
  - Application: small perturbation used in the finite difference method
  - Default: 1e-5
  - Recommended range: 1e-7 (high accuracy) to 1e-4 (high speed)
  - Description: the smaller the value, the higher the accuracy, but the computation cost increases
- Slope Tolerance
  - Usage: acceptance criterion for gradient validation
  - Default: 1e-3
  - Recommended range: 1e-5 (strict) to 1e-2 (relaxed)
  - Description: maximum allowed difference between the analytical and numerical gradients
- Convergence threshold
  - Application: criterion for judging learning convergence
  - Default: 1e-4
  - Recommended range: 1e-6 (strict) to 1e-3 (relaxed)
  - Description: convergence is determined when the loss improvement falls below the threshold
- Number of verification iterations
  - Usage: number of iterations of the validation run
  - Default: 100
  - Recommended range: 10 (fast) to 1000 (precise)
  - Description: more iterations give higher statistical reliability but take more time
Surrogate gradient function
Since the SNN's spike function is non-differentiable, we use a surrogate gradient function:
- Fast Sigmoid
  - Formula: \(g(x) = \frac{\beta e^{-\beta x}}{(1 + e^{-\beta x})^2}\)
  - Features: smooth and computationally efficient
  - Recommended use: general-purpose use, initial experiments
- Triangular
  - Formula: \(g(x) = \max(0, 1 - \frac{|x|}{w})\)
  - Features: linear near zero, smooth at the boundaries
  - Recommended use: emphasis on gradient stability
- Rectangular
  - Formula: \(g(x) = \mathbb{1}_{|x| < w}\)
  - Features: constant within the window, simple to implement
  - Recommended use: fast experiments, proof of concept
- Exponential
  - Formula: \(g(x) = \alpha e^{-\alpha |x|}\)
  - Features: exponential decay, non-zero over a wide range
  - Recommended use: models with long-range dependence
- SuperSpike
  - Formula: \(g(x) = \frac{\beta}{(\beta |x| + 1)^2}\)
  - Features: sharp peak, high biological plausibility
  - Recommended use: biologically faithful simulations
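In PyTorch-style frameworks, a surrogate gradient is typically implemented as a custom autograd function: the forward pass emits a hard spike while the backward pass substitutes the smooth derivative. The sketch below uses the Fast Sigmoid formula listed above; the β value is an assumption for illustration, not the framework's internal setting.
```python
# Minimal sketch of a Fast Sigmoid surrogate gradient as a custom autograd function
import torch

class FastSigmoidSpike(torch.autograd.Function):
    beta = 10.0  # surrogate sharpness; assumed value

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Hard spike when the (threshold-shifted) membrane potential crosses zero
        return (membrane_potential > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        beta = FastSigmoidSpike.beta
        # g(x) = beta * exp(-beta x) / (1 + exp(-beta x))^2, i.e. d/dx sigmoid(beta x)
        sig = torch.sigmoid(beta * v)
        surrogate = beta * sig * (1.0 - sig)
        return grad_output * surrogate

# Usage: spikes = FastSigmoidSpike.apply(v - v_threshold)
```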
Verification mode
- Complete verification: Execute all verification items
- Gradient only: Only verifies the accuracy of the gradient calculation
- Stability only: Numerical stability only verified
- Convergence only: Verify convergence only
Running process
- Start verification button: Click to start verification
- Progress bar: Displays progress in real time (0-100%)
- Status message: Displays the current processing status in text
- Completion notification: ✅ mark and completion message when verification is completed
Results display (5 tabs)
Tab 1: Summary
Get an overview with 4 status cards:
- Gradient Verification Card (✅ Checkmark)
  - Status: ✅ Pass / ❌ Fail
  - Maximum error: maximum difference between analytical and numerical gradients (e.g. 1.23e-04)
  - Relative error: error relative to the gradient magnitude (e.g. 2.11e-06)
  - Acceptance criterion: maximum error < slope tolerance
- Numerical Stability Card (⚖️ Balance)
  - Status: ✅ Stable / ⚠️ Unstable
  - Condition number: maximum/minimum ratio (e.g. 1.23e+02)
  - Anomaly detection: presence or absence of NaN (not a number) / Inf (infinity)
  - Stability criteria: no NaN/Inf, condition number < 1e6
- Convergence Card (📈 Graph)
  - Convergence: ✅ Converged / ⏳ Not converged
  - Iterations: number of iterations until convergence (e.g. 456)
  - Final loss: loss value at convergence (e.g. 0.1234)
  - Convergence criterion: loss improvement < convergence threshold
- Run Time Card (⏱️ Clock)
  - Total time: overall execution time (e.g. 45.67 seconds)
  - Gradient validation: time taken to validate the gradients (e.g. 12.34 seconds)
  - Stability test: stability test time (e.g. 23.45 seconds)
Tab 2: Gradient analysis
Detailed gradient validation results:
- Slope Error Gauge Chart
  - Visualizes the maximum error with a gauge display
  - Color coding:
    - Green (0-10): excellent
    - Yellow (10-50): caution
    - Red (50-100): problems
  - Unit: ×10⁻⁵ scale
- Gradient distribution plot
  - Comparison of the analytical and numerical gradient distributions
  - Differences visualized with a histogram
  - Ideal: the two distributions overlap perfectly
- Detailed data table
  - Parameter name: name of each layer parameter
  - Analytical gradient: calculated by backpropagation
  - Numerical gradient: calculated using the finite difference method
  - Absolute error: |analytical - numerical|
  - Relative error: absolute error / |numerical|
Tab 3: Numerical Stability
Analysis of numerical behavior during learning:
- Gradient norm transition graph
  - Horizontal axis: iteration
  - Vertical axis: L2 norm of the gradient
  - Normal: stable value or slow decline
  - Anomalies: sudden increase (gradient explosion), convergence to zero (vanishing gradient)
- Weight norm transition graph
  - Horizontal axis: training steps
  - Vertical axis: L2 norm of the weights
  - Normal: gradual change
  - Anomalies: sudden change, divergence
- Stability metrics table
  - Gradient norm statistics: mean, standard deviation, minimum, maximum
  - Weight norm statistics: mean, standard deviation, minimum, maximum
  - Condition number: indicator of numerical stability
  - NaN/Inf detection: number of occurrences of abnormal values
Tab 4: Convergence analysis
Convergence behavior of the learning process:
- Loss trend graph
  - Line graph (with markers)
  - Horizontal axis: iteration
  - Vertical axis: loss value
  - Ideal: monotonically decreasing, or decreasing with oscillations
- Convergence rate graph
  - Loss reduction per iteration
  - Positive value: training is making progress
  - Zero: converged state
  - Negative value: possible overfitting
- Convergence statistics table
  - Initial loss: loss at the start of training
  - Final loss: loss at convergence or termination
  - Improvement rate: (initial - final) / initial
  - Average convergence rate: improvement per iteration
  - Patience count: number of iterations to wait before early stopping
Tab 5: Report
Comprehensive text report:
Report contents:
```
Backpropagation Verification Report

Gradient verification
  Status: Pass
  Maximum error: 1.23e-04
  Average error: 3.45e-05
  Relative error: 2.11e-06
  Execution time: 12.34 s

Gradient stability
  Status: Stable
  Minimum gradient: -5.67e-03
  Maximum gradient: 6.78e-03
  Condition number: 1.23e+02
  NaN detected: none
  Inf detected: none

Weight stability
  Status: Stable
  Minimum weight norm: 0.89
  Maximum weight norm: 2.34
  Condition number: 2.63e+00

Convergence analysis
  Converged: Yes
  Convergence rate: -1.23e-03
  Iterations: 456
  Final loss: 0.1234
================================================================================
```
**Export function**:
1. **JSON download button**:
- Save as structured data
- File name: `backprop_verification_results.json`
- Usage: Programmatic reanalysis
2. **Text download button**:
- Human readable format
- File name: `backprop_verification_report.txt`
- Usage: Documentation, sharing
##### Use cases and workflows
###### Basic usage flow
1. **Initial settings**:
- Start with default settings (recommended)
- Or adjust parameters according to your application
2. **Selection of surrogate gradient function**:
- First time: Fast Sigmoid (general purpose)
- Focus on stability: Triangular
- Biological Validity: SuperSpike
3. **Verification mode selection**:
- First time: Complete verification (all items)
- When debugging: Only applicable items
4. **Verification run**:
- Click the "Start verification" button
- Check progress with progress bar
- Wait until completion (usually 30 seconds to 2 minutes)
5. **Check results**:
- Overall overview in summary tab
- If there is a problem, check the details on the relevant tab
- Check text on report tab
6. **Save report**:
- JSON: To re-parse later
- Text: when attaching to a document
###### Troubleshooting
**In case of gradient validation failure**:
1. Make epsilon smaller (1e-7)
2. Try different surrogate gradient functions
3. Check the model architecture
**In case of numerical instability**:
1. Reduce learning rate
2. Enable gradient clipping
3. Add batch normalization
**If training does not converge**:
1. Relax the convergence threshold (1e-3)
2. Increase the number of iterations
3. Introduce learning rate scheduling
##### Technical details
###### Verification using finite difference method
Formula for calculating numerical slope:
$$
\frac{\partial L}{\partial w_i} \approx \frac{L(w_i + \epsilon) - L(w_i - \epsilon)}{2\epsilon}
$$
- $L$: Loss function
- $w_i$: i-th parameter
- $\epsilon$: Minute change amount (configurable)
Definition of error:
- **Absolute error**: $|g_{analytical} - g_{numerical}|$
- **Relative error**: $\frac{|g_{analytical} - g_{numerical}|}{\max(|g_{numerical}|, \epsilon)}$
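The central-difference formula above can be turned into a simple gradient check. The sketch below compares analytical and numerical gradients for a set of parameters; it is a conceptual illustration of the verification, not the backend's actual routine.
```python
# Minimal central-difference gradient check against backpropagated gradients
import torch

def numerical_gradient_check(loss_fn, params, epsilon=1e-5):
    """loss_fn: callable returning a scalar loss tensor; params: tensors with requires_grad=True.
    Assumes the .grad buffers are empty before the call."""
    # Analytical gradients via backpropagation
    loss_fn().backward()
    max_abs_err = 0.0
    for p in params:
        grad_analytical = p.grad.clone()
        grad_numerical = torch.zeros_like(p)
        flat = p.data.view(-1)
        for i in range(flat.numel()):
            orig = flat[i].item()
            flat[i] = orig + epsilon
            loss_plus = loss_fn().item()
            flat[i] = orig - epsilon
            loss_minus = loss_fn().item()
            flat[i] = orig
            # Central finite difference for the i-th parameter
            grad_numerical.view(-1)[i] = (loss_plus - loss_minus) / (2 * epsilon)
        max_abs_err = max(max_abs_err, (grad_analytical - grad_numerical).abs().max().item())
    return max_abs_err

w = torch.randn(4, requires_grad=True)
x = torch.randn(4)
print(numerical_gradient_check(lambda: torch.sum((w * x) ** 2), [w]))
```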
###### Meaning of condition number
Condition Number:
$$
\kappa = \frac{\lambda_{max}}{\lambda_{min}}
$$
- $\lambda_{max}$: Maximum eigenvalue
- $\lambda_{min}$: Minimum eigenvalue (non-zero)
Interpretation:
- $\kappa < 10^3$: good condition
- $10^3 \leq \kappa < 10^6$: Caution required
- $\kappa \geq 10^6$: numerically unstable
###### Convergence judgment algorithm
Early Stopping with Patience:
```python
# Early stopping with patience: convergence is judged when the loss stops improving
if current_loss < best_loss - threshold:
    best_loss = current_loss
    patience_counter = 0
else:
    patience_counter += 1
    if patience_counter >= patience:
        # Judge the run as converged
        converged = True
```
3.25 Ultra-large scale AI system
Icon: 🏗️ Construction Crane
Path: /ultra-large-scale-ai
Feature overview
A construction platform for huge AI systems on the scale of 1000 nodes. It democratizes large-scale AI development through distributed training and fault-tolerance techniques.
Main features
System construction panel
Node settings
- Number of sensor nodes
  - Usage: number of input data processing nodes
  - Default: 1000
  - Range: 100 to 10000
  - Description: processes sensor data such as vision, hearing, and touch
- Number of movement nodes
  - Usage: number of control nodes for output actions
  - Default: 1000
  - Range: 100 to 10000
  - Description: processes robot control and decision output
- Number of frontal lobe nodes
  - Usage: number of decision and control nodes
  - Default: 100
  - Range: 10-1000
  - Description: advanced control with the Q-PFC feedback loop
- Number of language nodes
  - Usage: number of language processing nodes
  - Default: 50
  - Range: 10-500
  - Description: communication through Brain Language
Training settings
Distribution parameters
- Number of PCs
  - Usage: number of PCs used for distributed execution
  - Default: 10
  - Range: 2-100
  - Description: parallel processing in a multi-PC environment
- GPUs per PC
  - Usage: number of GPUs per PC
  - Default: 4
  - Range: 1-8
  - Description: fast training with GPU acceleration
- Batch size
  - Usage: batch size during training
  - Default: 1024
  - Range: 64-8192
  - Description: efficient learning with large batches
- Number of epochs
  - Usage: number of training iterations
  - Default: 1000
  - Range: 100 to 10000
  - Description: ensures sufficient training time
Fault tolerance settings
Recovery options
- Checkpoint interval
  - Usage: autosave interval (minutes)
  - Default: 30
  - Range: 5-120
  - Description: minimizes data loss in case of failure
- Retry count
  - Usage: number of automatic retries when a failure occurs
  - Default: 3
  - Range: 1-10
  - Description: automatic recovery from temporary failures
- Timeout setting
  - Usage: node response timeout (seconds)
  - Default: 300
  - Range: 60-1800
  - Description: detection and exclusion of slow nodes
Monitoring Dashboard
Real-time metrics
- Training progress
  - Current epoch / total epochs
  - Loss value transition graph
  - Automatic learning rate adjustment status
- System status
  - Liveness status of each node
  - CPU/GPU usage
  - Memory usage
  - Network bandwidth usage
- Performance indicators
  - Training speed (samples/sec)
  - Communication latency
  - Number of fault occurrences
Operation buttons
Main operations
- Start system construction
  - Initializes the giant AI system based on the settings
  - Creates sensor nodes, motor nodes, frontal lobe nodes, and language nodes via the API
- Start training
  - Performs distributed training
  - Starts parallel learning in a multi-PC environment
- Stop/Pause
  - Safely stops training
  - Saves checkpoints
- Run recovery
  - Automatic recovery in case of failure
  - Restarts from the last checkpoint
Use cases and workflows
Basic usage flow
- System settings:
  - Adjust the number of nodes according to your purpose
  - Set the number of PCs and GPUs according to the environment
- Training preparation:
  - Prepare the dataset
  - Set the batch size and number of epochs
- System construction:
  - Click the "Start system construction" button
  - Wait until the huge system of 2000 nodes is initialized
- Training run:
  - Click the "Start Training" button
  - Monitor progress in real time
- Monitoring and management:
  - Check system status on the dashboard
  - Adjust parameters as needed
- Completion and evaluation:
  - Evaluate the model after training is complete
  - Save or export the system
Advanced usage examples
Robot control system construction:
- Sensor nodes: 1500 (camera, LiDAR, tactile sensors)
- Movement nodes: 800 (joint control, gripper operation)
- Frontal lobe nodes: 200 (decision making, route planning)
- Language nodes: 100 (voice commands, situational explanations)
Self-driving AI construction:
- Sensor nodes: 2000 (multiple cameras, radar, GPS)
- Movement nodes: 500 (steering, accelerator, brake)
- Frontal lobe nodes: 300 (situational judgment, prediction)
- Language nodes: 50 (navigation guidance)
Troubleshooting
In case of system construction failure:
1. Check the network connection between PCs
2. Check whether the API server is running
3. Check whether there is enough memory/disk space
If training is slow:
1. Increase the number of GPUs
2. Optimize the batch size
3. Streamline data transfer
If a fault occurs:
1. Adjust the timeout settings
2. Improve network stability
3. Check for hardware failure
3.26 Spatial cognitive processing system
Icon: 🎯 Spatial Perception
Path: /spatial-cognition
Version: 1.1.0 or higher (Feature 13)
Feature overview
The spatial perception and generation system (Feature 13) integrates visual and spatial information to achieve brain-like spatial understanding and attention control. Four spatial processing nodes, Rank 12-15, operate hierarchically and provide object location awareness (Where), object recognition (What), What-Where integration (Integration), and attention control (Attention).
Spatial processing node specifications
Rank 12: SpatialWhereNode (dorsal visual pathway)
- Function: Processing of spatial position and depth information
- Processing target: Object coordinates, distance estimation, motion trajectory
- Response speed: < 50ms
- Output: Location coordinates (x, y, z), confidence score
Rank 13: SpatialWhatNode (ventral visual pathway)
- Function: Object recognition and scene generation
- Processing target: Object classification (100+ categories), visual feature extraction
- Response speed: < 30ms
- Output: Object class, confidence level, visual features
Rank 14: SpatialIntegrationNode (pillow parietal junction)
- Features: What-Where integration and world model building
- Processing target: Combination of objects and positions, environment representation
- Response speed: < 50ms
- Output: Integration result, updated world model
Rank 15: SpatialAttentionControlNode (frontal eye field)
- Function: Saccade planning and attentional control
- Processing target: Gaze direction, attention map, prioritization
- Response speed: < 30ms
- Output: Target coordinates, attention weight map
How to use
Basic text-based spatial reasoning
# Perform spatial reasoning with text prompts
prompt = "部屋にある椅子とテーブルの相対的な位置関係を説明してください"
response = client.submit_prompt(prompt=prompt)
result = client.poll_for_result(timeout=60)
# Get spatial analysis results
if 'spatial_analysis' in result:
    spatial = result['spatial_analysis']
    # Access processing results for Rank 12-15
    where = spatial.get('rank_12_where')               # location information
    what = spatial.get('rank_13_what')                 # object recognition
    integration = spatial.get('rank_14_integration')   # What-Where integration
    attention = spatial.get('rank_15_attention')       # attention control
Multimodal spatial analysis (recommended)
# Perform spatial analysis on images + text
prompt = "この画像内の物体の位置を特定し、相互関係を分析してください"
response = client.submit_prompt(
    prompt=prompt,
    image_path="./scene_image.jpg"
)
result = client.poll_for_result(timeout=60)
# Handle the detailed spatial analysis
if result and 'spatial_analysis' in result:
    spatial = result['spatial_analysis']
    # Rank 12: location estimation
    positions = spatial['rank_12_where']['positions']
    print(f"検出位置: {positions}")
    # Rank 13: object recognition
    objects = spatial['rank_13_what']['objects']
    print(f"認識物体: {objects}")
    # Rank 14: integrated analysis
    if 'what_where_fusion' in spatial['rank_14_integration']:
        fusion_result = spatial['rank_14_integration']['what_where_fusion']
        print(f"物体-位置関係: {fusion_result}")
    # Rank 15: attention control
    saccades = spatial['rank_15_attention']['saccade_targets']
    print(f"視点移動先: {saccades}")
Spatial analysis in batch processing
# Perform spatial analysis on multiple scene images
image_paths = ["scene1.jpg", "scene2.jpg", "scene3.jpg"]
prompts = [
    "この画像内の主要な物体の位置を列挙"
    for _ in image_paths
]
# Batch processing (image paths are processed separately)
results = client.batch_generate(prompts, max_length=200)
# Process the spatial analysis for each result
for i, result in enumerate(results):
    if 'spatial_analysis' in result:
        spatial = result['spatial_analysis']
        print(f"Image {i+1}: Detected {len(spatial['rank_12_where']['positions'])} objects")
Spatial scene generation service
# Generate VR layout from natural language description of scene (supports model version switching)
scene = client.spatial_generate(
    input_text="広い屋内のカフェ、木製テーブルとソファ付き",
    cognitive_context={"attention_level": 0.4},
    model_version="v2"
)
print("High precision?", scene.get("high_precision"), "version", scene.get("model_version"))
# Enhance the previous scene with quantum-assisted inference if necessary
enhanced = client.spatial_quantum_infer(scene=scene)
print("Quantum alpha", enhanced.get("metadata", {}).get("quantum_modulation_alpha"))
Spatial cognition in asynchronous processing
import asyncio
from evospikenet.sdk import EvoSpikeNetAPIClient

async def analyze_scenes():
    client = EvoSpikeNetAPIClient()
    # Parallel processing of multiple scenes
    tasks = [
        client.submit_prompt_async("シーン1の空間構造を分析"),
        client.submit_prompt_async("シーン2の物体配置を解析"),
        client.submit_prompt_async("シーン3の相互関係を説明"),
    ]
    responses = await asyncio.gather(*tasks)
    # Process the results
    for i, resp in enumerate(responses):
        result = await client.poll_for_result_async()
        if result and 'spatial_analysis' in result:
            spatial = result['spatial_analysis']
            # Process the spatial analysis
            pass

# Execution
asyncio.run(analyze_scenes())
Output format
Spatial analysis results are returned in the following structure:
{
  "response": "分析結果のテキスト",
  "spatial_analysis": {
    "rank_12_where": {
      "positions": [[x1, y1, z1], [x2, y2, z2], ...],
      "depth": [d1, d2, ...],
      "confidence": [c1, c2, ...],
      "latency_ms": 45
    },
    "rank_13_what": {
      "objects": ["object1", "object2", ...],
      "confidence": [conf1, conf2, ...],
      "features": [...],
      "latency_ms": 28
    },
    "rank_14_integration": {
      "what_where_fusion": {
        "associations": [{"object": "obj1", "position": [x1, y1, z1]}, ...],
        "world_model": {...}
      },
      "latency_ms": 48
    },
    "rank_15_attention": {
      "saccade_targets": [[sx1, sy1], [sx2, sy2], ...],
      "attention_map": [...],
      "priority_scores": [...],
      "latency_ms": 29
    }
  },
  "processing_time_ms": 185
}
Performance indicators
| Metrics | Target value | Actual value |
|---|---|---|
| Rank 12 Latency | <50ms | 45ms |
| Rank 13 Latency | <30ms | 28ms |
| Rank 14 Latency | <50ms | 48ms |
| Rank 15 Latency | <30ms | 29ms |
| Total processing time | <200ms | 185ms |
| Throughput | 100/sec | 105/sec |
| Accuracy | >90% | 92.3% |
Technical specifications
- Neural basis: Biological model of the dorsal and ventral pathways of the visual cortex
- Architecture: Spiking Neural Network (SNN)
- Number of nodes: 4 (Rank 12-15)
- Number of parameters: 1.2M (Rank 12-15 total)
- Learning algorithm: Spike time-dependent plasticity (STDP)
- Communication: Zenoh distributed communication protocol
Sample code
Please see below for detailed sample code:
- spatial_processing_example.py - synchronous processing
- async_spatial_processing_example.py - asynchronous processing
Troubleshooting
If no spatial analysis is returned:
- Check whether the prompt contains spatial questions
- If images are provided, make sure they are in a supported format (JPG/PNG)
- Try increasing the timeout value (recommended: 120 seconds)
If the precision is low:
- Use more detailed written prompts
- Try with multiple images
- Use images with clearly visible objects
If processing is slow:
- Adjust the batch size
- Run in parallel using asynchronous processing
- Check execution in a GPU environment
3.27 Complete brain simulation (29 nodes)
Icon: 🧠 Complete Brain
Path: /distributed-brain
Version: 1.1.0 or higher (Feature 13 integrated version)
Feature overview
The complete brain simulation (29 nodes) is EvoSpikeNet's most comprehensive distributed brain architecture. Centered on the PFC (prefrontal cortex), 33 nodes (including the PFC), organized into five layers (sensation, cognition, decision-making, memory, and movement), operate hierarchically to simulate complex cognitive tasks.
New feature: the SpatialProcess-Hub (Rank 12-15) of Feature 13 (spatial cognitive processing) is fully integrated.
Node configuration
Node allocation for each layer
| Layers | Number of nodes | Description |
|---|---|---|
| PFC execution layer | 1 | PFC (master node, Rank 0) |
| Sensory layer | 3 | Vision, Auditory, Environment (Rank 1-3) |
| Encoding layer | 4 | VisionEnc, AudioEnc, TextEnc, SpikingEnc (Rank 4-7) |
| Cognitive layer | 6 | LM-Inference, Classifier, Spiking-LM, Ensemble, RAG-System (Rank 8-12) |
| Spatial cognition layer | 4 | SpatialWhere, SpatialWhat, SpatialIntegration, SpatialAttention (Rank 12-15, Feature 13) |
| Decision layer | 3 | PFC-Executive, HighPlanner, ExecControl (Rank 17-19) |
| Long-term memory layer | 2 | EpisodicMem, SemanticMem (Rank 20-21) |
| Memory/Search layer | 5 | VectorDB, EpiStorage, Retriever, Knowledge, MemIntegrator (Rank 22-26) |
| Learning layer | 1 | Trainer (Rank 27) |
| Aggregation layer | 2 | Federator, ResultAgg (Rank 28-29) |
| Management layer | 2 | AuthMgr, Monitoring (Rank 30-31) |
| Motor output layer | 1 | Motor-Control (Rank 32) |
| Total | 33 | (9 hub nodes are for organizational structure) |
How to select on UI
- Open the simulation type dropdown
  - Click the "Select Simulation Type" dropdown in the top left
  - Select "🎯 Complete Brain (24-Node) - Feature 13"
  - The 29-node configuration will be loaded
- Check the node configuration
  - The tree view displays each layer and its corresponding nodes
  - Hub nodes (Sensor-Hub, etc.) represent the organizational structure
- Node allocation
  - Specify the host in the "Host" dropdown on the right side of each node (default: localhost)
  - Select a model in the "Model" dropdown
- Start the simulation
  - Click the "Start Nodes" button
  - The 33 nodes are started in order
Programmatic usage
Running 29 nodes with Python SDK
from evospikenet.sdk import EvoSpikeNetAPIClient

# Client initialization
client = EvoSpikeNetAPIClient(base_url="http://localhost:8000")
# Start the 29-node complete brain simulation
config = {
    "nodes": "complete_brain_29",   # automatic selection of the 29-node configuration
    "duration": 30,                 # 30-second simulation
    "enable_recording": True,       # enable data recording
}
result = client.simulate_distributed_brain(**config)
print(f"Simulation completed: {result}")
# Running a text prompt
prompt = "複雑な推論タスクを実行"
response = client.submit_prompt(
    prompt=prompt,
    simulation_mode="complete_brain_29"
)
# Get results
result = client.poll_for_result(timeout=120)
if result:
    print(f"Response: {result['response']}")
    # Display processing time for each layer
    if 'layer_timing' in result:
        print("\nLayer Processing Times:")
        for layer, ms in result['layer_timing'].items():
            print(f"  {layer}: {ms}ms")
Executing multiple tasks in asynchronous processing
import asyncio
from evospikenet.sdk import EvoSpikeNetAPIClient

async def run_multi_task_simulation():
    client = EvoSpikeNetAPIClient()
    # Run multiple prompts in parallel
    prompts = [
        "与えられた状況で最適な行動計画を立ててください",
        "複数の情報源から矛盾する情報を受け取った場合の対応",
        "空間的に複雑な問題の解決手順"
    ]
    # Send multiple tasks asynchronously
    tasks = [
        client.submit_prompt_async(
            prompt=p,
            simulation_mode="complete_brain_29"
        )
        for p in prompts
    ]
    responses = await asyncio.gather(*tasks)
    # Process the results of each task
    for i, prompt in enumerate(prompts):
        resp = responses[i]
        print(f"Prompt {i+1}: {prompt[:50]}...")
        print(f"Response received: {resp.get('prompt_id')}")

# Execution
asyncio.run(run_multi_task_simulation())
Performance Estimate
| Indicator | Value |
|---|---|
| Total number of nodes | 33 |
| Number of processing layers | 12 |
| Minimum latency | 5 layers × 15 ms/layer = 75 ms (estimated) |
| Maximum throughput | 20-50 queries/sec (configuration dependent) |
| Memory usage | 4-8 GB (all nodes total) |
| GPU recommended | RTX 3090 or higher |
Feature 13 Benefits of integration
With the integration of Feature 13 in a 29-node configuration:
- Complete implementation of spatial cognition
  - Dorsal visual pathway (Where): spatial location recognition
  - Ventral visual pathway (What): object recognition
  - Integration processing: What-Where binding
  - Attention control: selection of the next visual/auditory target
- Execution of complex cognitive tasks
  - Multimodal processing (visual + auditory + language)
  - Spatial reasoning
  - Planning and execution control
  - Memory consolidation and retrieval
- Biological plausibility
  - Mimics the hierarchical structure of the cerebral cortex
  - Uses spiking neural networks
  - Learning via spike-timing-dependent plasticity (STDP)
Troubleshooting
If the 29 nodes do not start:
- Check the log of each node: tail -f /tmp/sim_rank_*.log
- Check whether the Zenoh router is running
- Check whether there are sufficient resources (CPU/memory)
If a specific node fails:
- Check the backend _get_base_module_type mapping
- Check whether a model is available for that node type
- Check whether communication between hosts is possible
If the complete brain simulation is slow:
- Check execution in a GPU environment
- Reduce the batch size
- Disable unnecessary layers (on the frontend)
Sample code
Please see below for detailed sample code:
- run_zenoh_distributed_brain.py - complete brain simulation implementation
- sdk_distributed_brain.py - SDK usage example
4. How to use CLI
Basic commands
# training run
evospikenet train --config config/training_config.yaml
# Inference execution
evospikenet infer --model saved_models/model.pt --input data/test.json
# Evaluation execution
evospikenet eval --model saved_models/model.pt --dataset data/test_dataset
# distributed brain simulation
evospikenet simulate --nodes 24 --duration 10
Advanced options
# GPU specification
evospikenet train --config config.yaml --gpu 0,1,2,3
# distributed training
evospikenet train --distributed --nodes 4 --gpus-per-node 4
# Hyperparameter tuning
evospikenet tune --config config.yaml --trials 100 --method bayesian
# model export
evospikenet export --model model.pt --format onnx --output model.onnx
5. How to use API
REST API
Basic endpoint
# health check
curl http://localhost:8000/health
# text generation
curl -X POST http://localhost:8000/api/v1/generate \
-H "Content-Type: application/json" \
-d '{"prompt": "こんにちは", "max_tokens": 50}'
# multimodal inference
curl -X POST http://localhost:8000/api/v1/multimodal \
-F "text=この画像は何ですか?" \
-F "image=@image.jpg"
# RAG search
curl -X POST http://localhost:8000/api/v1/rag/search \
-H "Content-Type: application/json" \
-d '{"query": "スパイキングニューラルネットワークとは", "top_k": 5}'
> UI & SDK: the RAG dashboard now includes a **version switcher** (the dropdown shows indexed timestamp and a truncated checksum). Hover the checksum to see the full value and use the copy button to copy the complete checksum to clipboard. The upload component displays a **preview** (text snippet, first PDF page, image thumbnail) and a progress bar. Multiple documents can be enqueued via the batch panel; jobs are created using `/batch/create` and may be cancelled via `/batch/cancel`. The UI also supports **chunk preview** so you can inspect historical document versions and token-aware chunks. Programmatic access is available via the SDK helpers `evospikenet.rag_client.get_document_versions(doc_key)`, `...get_document_chunks(doc_key, version)`, and new batch helpers `client.create_batch_job(...)` / `client.cancel_batch_job(job_id)`.
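A short sketch of the SDK helpers named in the note above is given below. The document key, file list, and the fields of the returned objects are placeholders; check the SDK for the exact signatures and return structures.
```python
# Sketch of the RAG version and batch helpers mentioned above (placeholder arguments)
from evospikenet.sdk import EvoSpikeNetAPIClient
from evospikenet.rag_client import get_document_versions, get_document_chunks

client = EvoSpikeNetAPIClient(base_url="http://localhost:8000")

# Inspect the indexed history of a document and its token-aware chunks
versions = get_document_versions("my_document")             # "my_document" is a placeholder key
chunks = get_document_chunks("my_document", versions[0]) if versions else []
print(len(versions), len(chunks))

# Enqueue several documents as one batch job, then cancel it if needed
job = client.create_batch_job(["a.pdf", "b.txt"])           # placeholder file list
client.cancel_batch_job(job["job_id"])                      # assumed response field
```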
Python SDK
from evospikenet.sdk import EvoSpikeNetAPIClient
from evospikenet.rag_client import rag
# Example: use the stable SDK client for generation and RAG queries
client = EvoSpikeNetAPIClient(base_url="http://localhost:8000")
result = client.generate("EvoSpikeNetとは", max_length=100)
print(result.get("generated_text"))
# Simple multimodal example (server-side endpoint expected)
try:
    mm_response = client.spatial_generate("Describe this image", image_path="path/to/image.jpg")
    print(mm_response)
except Exception:
    print("Multimodal endpoint not available on this server")
# RAG query via rag_client
response, context = rag("スパイキングニューラルネットワークの利点", k=5)
print("RAG response:", response)
# distributed brain simulation
simulation = client.simulate(
    prompt="コーヒーを入れる",
    duration=10,
    nodes=24
)
print(simulation.results)
6. Troubleshooting
Common problems and solutions
Problem: Frontend does not start
Symptom: An error occurs even when running python app.py
Solution:
1. Check installation of dependencies:
   pip install -r frontend/requirements.txt
2. Check that port 8050 is not in use:
   lsof -i :8050
3. Check the Python version (3.9 or higher required):
   python --version
Problem: Unable to connect to Elasticsearch
Symptom: No data displayed in log viewer
Solution:
1. Check whether Elasticsearch is running:
   docker ps | grep elasticsearch
2. Check the connection settings
   - Frontend: evospikenet-es:9200
   - Local: localhost:9200
3. Check the Docker network:
   docker network ls
Problem: GPU not recognized
Symptom: CUDA related errors
Solution:
1. Confirm GPU support in PyTorch:
   import torch
   print(torch.cuda.is_available())
2. Check the CUDA version:
   nvidia-smi
3. Reinstall PyTorch (CUDA-compatible build):
   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
Problem: Out of memory error
Symptom: OOM (Out of Memory) error
Solution:
1. Reduce batch size
2. Reduce model size
3. Use gradient accumulation:
   accumulation_steps = 4
4. Enable mixed-precision training:
   from torch.cuda.amp import autocast, GradScaler
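Continuing from the imports above, a single mixed-precision training step looks roughly as follows; the toy model and data are placeholders for illustration.
```python
# Minimal sketch of one mixed-precision training step with autocast and GradScaler
import torch
from torch.cuda.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torch.nn.Linear(32, 10).to(device)                 # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scaler = GradScaler(enabled=(device == "cuda"))

inputs = torch.randn(16, 32, device=device)                # placeholder batch
targets = torch.randint(0, 10, (16,), device=device)

optimizer.zero_grad()
with autocast(enabled=(device == "cuda")):                 # forward pass in float16 on GPU
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()                              # scale the loss to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
print(float(loss))
```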
Problem: Backpropagation validation fails
Symptom: Gradient validation becomes "fail"
Solution:
1. Adjust epsilon (reduce it to 1e-7)
2. Try different surrogate gradient functions
3. Relax the slope tolerance (1e-2)
4. Check the model implementation (especially custom layers)
Appendix
Shortcut keys
| Key | Function |
|---|---|
| Ctrl/Cmd + S | Save settings |
| Ctrl/Cmd + R | Reload page |
| Ctrl/Cmd + L | Open log viewer |
| Esc | Close modal |
Glossary
- SNN: Spiking Neural Network
- STDP: Spike-Timing-Dependent Plasticity
- RAG: Retrieval-Augmented Generation
- PFC: Prefrontal Cortex
- LIF: Leaky Integrate-and-Fire
- Q-PFC: Quantum-inspired PFC Loop
Support information
- Official documentation: https://evospikenet.readthedocs.io/
- GitHub: https://github.com/moonlight-tech/evospikenet
- Email: maoki@moonlight-tech.biz
- Slack: evospikenet.slack.com
Copyright 2026 Moonlight Technologies Inc.
Author: Masahiro Aoki
Version: 1.0.0
Last Updated: 2026-01-23