
# 🚀 EvoSpikeNet GPU/CPU Quick Start Guide

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

## Overview

EvoSpikeNet is designed so that the GPU version and the CPU version can be launched separately. Select the version that matches your hardware environment.

## 🎯 How to start

### 1. GPU version (for fast training)

When running in an environment equipped with an NVIDIA GPU:

```bash
# Recommended: use the dedicated script
./launch.sh gpu

# Or use the Makefile
make train-gpu

# Build the GPU variant explicitly with docker compose
BASE_IMAGE=nvidia/cuda:12.4.1-base-ubuntu22.04 ENABLE_GPU=true \
  docker compose up -d api frontend
```

Requirements:

- NVIDIA GPU (CUDA compatible)
- NVIDIA driver installed
- Docker NVIDIA runtime
- `BASE_IMAGE=nvidia/cuda:12.4.1-base-ubuntu22.04` (build ARG)

Access: http://localhost:8000

### 2. CPU version (emphasis on compatibility)

When running in a CPU-only environment:

```bash
# Recommended: use the dedicated script
./launch.sh cpu

# Or use the Makefile
make train-cpu

# Build the CPU variant with docker compose (no CUDA image required)
BASE_IMAGE=ubuntu:22.04 ENABLE_GPU=false \
  docker compose up -d api frontend
```

Features:

- GPU/CUDA not required (base image is `ubuntu:22.04`)
- Multi-core CPU optimization
- Low power consumption
- CUDA-dependent packages such as `bitsandbytes` are not installed

Access: http://localhost:8001

## 📊 Comparison table

| Item | GPU version | CPU version |
|------|-------------|-------------|
| Base image | `nvidia/cuda:12.4.1-base-ubuntu22.04` | `ubuntu:22.04` |
| `ENABLE_GPU` | `true` | `false` |
| `bitsandbytes` | Installed | Not installed |
| Startup time | Fast | Standard |
| Training speed | 10-50x | Baseline |
| Memory usage | GPU VRAM | System RAM |
| Power consumption | High | Low |
| Parallel processing | CUDA cores | CPU threads |
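If you are unsure which column of the table applies to your machine, the choice can be automated: the presence of a working `nvidia-smi` is a reasonable proxy for the GPU version's requirements. The helper below is a minimal sketch, not part of the repository; only `./launch.sh` itself comes from this guide.

```bash
#!/usr/bin/env bash
# pick_mode: hypothetical helper that selects the launch variant.
# A working `nvidia-smi` is used as a proxy for a usable NVIDIA driver.
pick_mode() {
  if command -v nvidia-smi >/dev/null 2>&1 && nvidia-smi >/dev/null 2>&1; then
    echo "gpu"
  else
    echo "cpu"
  fi
}

MODE="$(pick_mode)"
echo "Selected mode: ${MODE}"
# Then launch with: ./launch.sh "${MODE}"
```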

## 🛠️ Management commands

```bash
# Status check
./launch.sh status
make train-status

# Log display
./launch.sh logs
make train-logs

# Stop
./launch.sh stop
make train-stop

# Help
./launch.sh help
```
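The status command tells you whether the containers are running, but not whether the API inside them is ready to answer requests. The polling helper below is a sketch, not part of the repository; the `/docs` URLs are taken from the Access lines earlier in this guide.

```bash
#!/usr/bin/env bash
# wait_for_api: poll a URL until it answers with HTTP 2xx or a timeout expires.
# Usage: wait_for_api <url> [timeout_seconds]
wait_for_api() {
  local url="$1" timeout="${2:-60}" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if curl -sf -o /dev/null "$url"; then
      echo "ready: $url"
      return 0
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Example: wait_for_api http://localhost:8000/docs 120   # GPU version
# Example: wait_for_api http://localhost:8001/docs 120   # CPU version
```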

## 🔧 Troubleshooting

### If the GPU version does not start

```bash
# Check the NVIDIA driver
nvidia-smi

# Check the Docker NVIDIA runtime
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```

### In case of port conflict

```bash
# Check ports in use
lsof -i :8000
lsof -i :8001

# Stop the containers
./launch.sh stop
```

### In case of insufficient memory

```bash
# Check system resources
docker system df

# Remove unused images, containers, and build cache
docker system prune -a
```

## 📚 Detailed documentation

- [BUILD_GUIDE.md](BUILD_GUIDE.md) - List of Compose variants and services (including RAG/Microservices/GPU overlay)

## 🧠 When starting the RAG system at the same time (simple)

```bash
# RAG dependency set (includes Milvus/Elasticsearch)
docker compose --profile rag up -d rag-api milvus-standalone elasticsearch

# RAG API logs
docker compose --profile rag logs -f rag-api

# Stop
docker compose --profile rag down
```

Points:

- `rag-system/data` is mounted inside the container at `/home/appuser/app/rag-system/data`.
- Environment variables: `MILVUS_HOST=milvus-standalone`, `ELASTICSEARCH_HOST=elasticsearch`, and the authentication key `EVOSPIKENET_API_KEY(S)`.
- See [BUILD_GUIDE.md](BUILD_GUIDE.md) for details.
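Before pointing `rag-api` at the backends, it is worth confirming that Milvus and Elasticsearch are actually listening. The probe below is a minimal sketch assuming the services' default ports (19530 for Milvus, 9200 for Elasticsearch); verify these against your compose file, since this guide does not state the mapped ports.

```bash
#!/usr/bin/env bash
# check_port: return 0 if a TCP connection to host:port succeeds within 2s.
# Uses bash's built-in /dev/tcp, so no extra tools are required.
check_port() {
  local host="$1" port="$2"
  timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null
}

# Default ports are assumptions; adjust to your docker compose configuration.
check_port localhost 19530 && echo "Milvus reachable" || echo "Milvus not reachable"
check_port localhost 9200  && echo "Elasticsearch reachable" || echo "Elasticsearch not reachable"
```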

## 🎉 Next steps

1. Startup confirmation: http://localhost:8000/docs (GPU) or http://localhost:8001/docs (CPU)
2. API test: check the provided endpoints
3. Start training: upload data and train a model
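For the API test step, the list of provided endpoints can be pulled from the shell instead of the browser. This assumes the API exposes an `openapi.json` schema alongside `/docs` (typical for FastAPI-style services, but not confirmed by this guide); the helper itself is hypothetical.

```bash
#!/usr/bin/env bash
# list_endpoints: read an OpenAPI JSON document on stdin and print its paths.
list_endpoints() {
  python3 -c '
import json, sys
spec = json.load(sys.stdin)
for path in sorted(spec.get("paths", {})):
    print(path)
'
}

# Example (openapi.json location is an assumption):
# curl -s http://localhost:8001/openapi.json | list_endpoints
```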

💡 Tip: If this is your first time, try the CPU version first. We recommend switching to the GPU version for full-scale training.