SWITCH TO EVOSPIKENET LM

EvoSpikeNet LM Migration Guide

  • Purpose: Procedures for performing inference, learning, and evolutionary learning using a local SpikingEvo* model rather than an external LLM.

  • Inference

  • API: Use the newly added /lm/generate endpoint.
  • Input JSON: { "prompt": "...", "max_new_tokens": 64, "temperature": 1.0, "task_type": "text"}
  • Example: curl -X POST -H "Content-Type: application/json" -d '{"prompt":"Hello"}' http://localhost:8000/lm/generate
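The same request can be made from Python. A minimal client sketch using only the standard library, with the payload fields taken from the guide above (the shape of the response JSON is not specified here, so it is returned as a plain dict):

```python
import json
import urllib.request

def build_lm_request(prompt, max_new_tokens=64, temperature=1.0, task_type="text"):
    """Build the JSON payload for /lm/generate (field names from the guide)."""
    return {
        "prompt": prompt,
        "max_new_tokens": max_new_tokens,
        "temperature": temperature,
        "task_type": task_type,
    }

def generate(prompt, base_url="http://localhost:8000", **kwargs):
    """POST the payload to /lm/generate and return the decoded JSON response."""
    payload = json.dumps(build_lm_request(prompt, **kwargs)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/lm/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```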

  • Learning

  • Simple training script: tools/train_spiking_lm.py is provided.
  • For production use, add distributed training, checkpointing, learning-rate schedules, and so on.
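As one illustration of the additions mentioned above, here is a minimal sketch of wrapping a plain training loop with checkpointing and a learning-rate schedule. The model/dataloader interfaces are placeholders, not the actual tools/train_spiking_lm.py API:

```python
import torch

def train_with_checkpoints(model, dataloader, epochs=3, ckpt_path="ckpt.pt"):
    """Plain loop plus two production additions: a cosine LR schedule
    and a per-epoch checkpoint written with torch.save."""
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)
    for epoch in range(epochs):
        for inputs, targets in dataloader:
            optimizer.zero_grad()
            loss = torch.nn.functional.cross_entropy(model(inputs), targets)
            loss.backward()
            optimizer.step()
        scheduler.step()
        # Save enough state to resume: epoch, model, and optimizer.
        torch.save({"epoch": epoch,
                    "model": model.state_dict(),
                    "optimizer": optimizer.state_dict()}, ckpt_path)
```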

  • Model management

  • Load/initialize the model through AutoModelSelector.
  • Manage trained weights using the standard torch.save/torch.load.
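The standard torch.save/torch.load pattern mentioned above, sketched as two small helpers (saving the state dict rather than the whole model object, which is the more portable convention):

```python
import torch

def save_weights(model, path):
    """Persist only the state dict -- portable across code refactors."""
    torch.save(model.state_dict(), path)

def load_weights(model, path, device="cpu"):
    """map_location lets a checkpoint trained on GPU load on CPU."""
    state = torch.load(path, map_location=device)
    model.load_state_dict(state)
    return model
```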

  • Evolutionary learning

  • Evolutionary learning loops can be implemented as wrappers around learning scripts (mutation, crossover, evaluation functions, generation management).
  • Going forward, we recommend extending tools/train_spiking_lm.py with an --evolve flag that adds generation management.
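The wrapper structure described above (mutation, crossover, an evaluation function, and generation management) can be sketched as a small elitist loop over flat parameter lists. Here `evaluate` stands in for a run of the training/evaluation script; any callable returning a fitness score (higher is better) works:

```python
import random

def crossover(a, b, rng):
    """Uniform crossover: each gene comes from either parent."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(base_params, evaluate, generations=5, population=8, sigma=0.1, seed=0):
    """Elitist evolutionary loop: mutate/recombine the current best,
    evaluate children, keep the fittest across generations."""
    rng = random.Random(seed)
    best, best_fit = list(base_params), evaluate(base_params)
    for _gen in range(generations):
        for _ in range(population):
            # Mutation: Gaussian perturbation of each parameter,
            # recombined with the current best via uniform crossover.
            mutant = [p + rng.gauss(0.0, sigma) for p in best]
            child = crossover(best, mutant, rng)
            fit = evaluate(child)
            if fit > best_fit:
                best, best_fit = child, fit
    return best, best_fit
```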

  • Points to note

  • Containerization is recommended because PyTorch / NumPy / OpenMP behavior can differ across environments.
  • If you use evospikenet/model_selector.py, the optimal device (cuda/mps/cpu) will be automatically selected.
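The cuda/mps/cpu preference order described above can be sketched as follows; the real logic lives in evospikenet/model_selector.py, so this is an illustration, not its actual implementation:

```python
import torch

def pick_device():
    """Prefer CUDA, then Apple MPS, then CPU."""
    if torch.cuda.is_available():
        return torch.device("cuda")
    mps = getattr(torch.backends, "mps", None)  # absent on older torch builds
    if mps is not None and mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")
```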

API: About LM summaries for /process

  • POST /process synchronously processes video and audio and returns the following fields in addition to the existing timeline / summary / counts:
  • lm_summary: A short Japanese summary generated by the EvoSpikeNet internal LM (e.g. "Person moved 5 times, audio event detected"). An empty string if no summary was generated.
  • lm_backend: The LM backend name used (usually evospikenet_lm).

  • LM summarization is enabled by default. To disable it, set the environment variable VIDEO_ANALYSIS_ENABLE_LM=0.

  • LM summaries undergo a quick cleanup (clean_lm_text in evospikenet/video_analysis/postprocess.py) in the postprocessor. Extend this step as needed.
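A minimal sketch of what such a quick cleanup step might look like — the actual rules in evospikenet/video_analysis/postprocess.py may differ; this only illustrates the kind of normalization you might extend:

```python
import re

def clean_lm_text(text):
    """Illustrative cleanup: strip control characters, collapse whitespace."""
    text = re.sub(r"[\x00-\x08\x0b-\x1f]", "", text)  # drop stray control chars
    text = re.sub(r"\s+", " ", text).strip()          # collapse runs of whitespace
    return text
```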

Example (curl):

curl -X POST -H "Content-Type: application/json" -d '{"frames_path":"./data/frames.npy","audio_path":"./data/audio.npy"}' http://localhost:8000/process

The response includes lm_summary, which clients of the API can use for UI display and logging.
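On the client side, handling the response can be as simple as the sketch below. It falls back gracefully when LM summarization is disabled (lm_summary is then an empty string, per the note above); the formatting of the display string is a hypothetical choice:

```python
def summarize_process_response(resp):
    """Extract the LM fields from a /process response dict for UI or logging."""
    summary = resp.get("lm_summary", "")
    backend = resp.get("lm_backend", "")
    if not summary:
        return "No LM summary available."
    return f"[{backend}] {summary}"
```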