
# Test marker guide

> [!NOTE]
> For the latest implementation status, please refer to Functional Implementation Status (Remaining Functionality).

## Overview

The EvoSpikeNet project uses pytest's marker feature to categorize tests and automatically skip those whose dependencies are missing. This makes it safe to run the test suite in CI or in environments where some dependencies are not installed.

## Available markers

### Dependency markers

#### `@pytest.mark.requires_torch`
- **Description**: Tests that require PyTorch
- **Auto-skip**: If PyTorch is not installed
- **Usage example**:
```python
import pytest

@pytest.mark.requires_torch
def test_neural_network():
    import torch
    model = torch.nn.Linear(10, 5)
    assert model.out_features == 5
```

#### `@pytest.mark.requires_gpu`
- **Description**: Tests that require GPU/CUDA
- **Auto-skip**: If CUDA is not available
- **Usage example**:
```python
@pytest.mark.requires_torch
@pytest.mark.requires_gpu
def test_cuda_acceleration():
    import torch
    device = torch.device('cuda')
    tensor = torch.randn(100, 100, device=device)
    assert tensor.is_cuda
```

#### `@pytest.mark.requires_milvus`
- **Description**: Tests that require the Milvus vector database
- **Usage example**:
```python
@pytest.mark.requires_milvus
@pytest.mark.integration
def test_vector_search():
    from pymilvus import connections
    connections.connect()
    # test code
```

#### `@pytest.mark.requires_elasticsearch`
- **Description**: Tests that require Elasticsearch
- **Usage example**:
```python
@pytest.mark.requires_elasticsearch
@pytest.mark.integration
def test_text_search():
    from elasticsearch import Elasticsearch
    es = Elasticsearch()
    # test code
```

#### `@pytest.mark.requires_zenoh`
- **Description**: Tests that require the Zenoh communication library
- **Usage example**:
```python
@pytest.mark.requires_zenoh
@pytest.mark.integration
def test_distributed_communication():
    import zenoh
    # test code
```
### Test category markers

#### `@pytest.mark.unit`
- **Description**: Unit test (verifies the behavior of a single component)
- **Recommended**: No external dependencies, fast execution

#### `@pytest.mark.integration`
- **Description**: Integration test (verifies the cooperation of multiple components)
- **Recommended**: Only the minimum necessary external dependencies

#### `@pytest.mark.e2e`
- **Description**: End-to-end test (verifies the behavior of the entire system)
- **Recommended**: Conditions close to the production environment

#### `@pytest.mark.slow`
- **Description**: Long-running test (typically more than 1 second)
- **Usage example**:
```python
@pytest.mark.slow
@pytest.mark.integration
def test_long_running_training():
    # Training process that takes a long time
    pass
```

#### `@pytest.mark.patent`
- **Description**: Validation tests for patent implementations
- **Target**: Implementation verification of EvoSpikeNet patented technologies (MT25-EV001 to MT25-EV016)
- **Usage example**:
```python
@pytest.mark.patent
def test_chrono_spike_attention():
    """Verify the ChronoSpikeAttention (MT25-EV001) implementation."""
    model = ChronoSpikeAttention(embed_dim=256, num_heads=8)
    # test code
```

## Applying a marker to an entire module

You can apply a marker to all tests in a module at once using the `pytestmark` variable:

```python
import pytest

# Apply the requires_torch marker to all tests in this module
pytestmark = pytest.mark.requires_torch

def test_example_1():
    # requires_torch is automatically applied to this test
    pass

def test_example_2():
    # requires_torch is automatically applied to this test as well
    pass
```

When applying multiple markers:

```python
pytestmark = [
    pytest.mark.requires_torch,
    pytest.mark.requires_gpu,
    pytest.mark.slow,
]
```

## Test execution examples

### Run all tests
```bash
pytest tests/
```

### Run unit tests only
```bash
pytest -m unit tests/
```

### Run tests that do not require a GPU
```bash
pytest -m "not requires_gpu" tests/
```

### Run only tests that require PyTorch
```bash
pytest -m requires_torch tests/
```

### Exclude integration and E2E tests
```bash
pytest -m "not integration and not e2e" tests/
```

### Exclude slow tests
```bash
pytest -m "not slow" tests/
```

### Combine multiple conditions
```bash
# Unit tests that also require PyTorch
pytest -m "unit and requires_torch" tests/

# Integration tests that do not require a GPU
pytest -m "integration and not requires_gpu" tests/
```

### List tests with a specific marker
```bash
pytest --collect-only -m requires_gpu tests/
```

## Use in CI/CD environments

### CPU-only environment
```bash
# Skip tests that require a GPU
pytest -m "not requires_gpu" tests/
```

### Lightweight environment (minimal dependencies)
```bash
# Skip tests that require external services
pytest -m "not requires_milvus and not requires_elasticsearch and not requires_zenoh" tests/
```

### Fast tests only
```bash
# Skip slow tests
pytest -m "not slow" tests/
```
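These filters combine naturally in CI configuration. As an illustrative sketch (the step below is hypothetical, not taken from the project's actual workflow files):

```yaml
# Illustrative CI step for a CPU-only runner (e.g. a GitHub Actions job)
- name: Run tests (skip GPU-dependent and slow tests)
  run: pytest -m "not requires_gpu and not slow" tests/
```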

## Best practices

### 1. Choose the right markers

```python
# ✅ Good: clear and specific markers
@pytest.mark.unit
@pytest.mark.requires_torch
def test_lif_neuron():
    pass

# ❌ Bad: no markers (dependencies are unclear)
def test_something():
    import torch  # may fail unexpectedly when PyTorch is missing
    pass
```

### 2. Make dependencies explicit

```python
# ✅ Good: declare the GPU requirement with markers
@pytest.mark.requires_torch
@pytest.mark.requires_gpu
def test_cuda_kernel():
    device = torch.device('cuda')
    # GPU processing

# ❌ Bad: checking CUDA availability inside the test body
def test_maybe_cuda():
    if torch.cuda.is_available():
        pass  # GPU processing
    else:
        pytest.skip("CUDA not available")
```

### 3. Test layering

```python
# Unit test: fast, few dependencies
@pytest.mark.unit
@pytest.mark.requires_torch
def test_neuron_activation():
    pass

# Integration test: medium speed, multiple components
@pytest.mark.integration
@pytest.mark.requires_torch
def test_network_training():
    pass

# E2E test: slow, full system
@pytest.mark.e2e
@pytest.mark.slow
@pytest.mark.requires_torch
@pytest.mark.requires_gpu
def test_full_pipeline():
    pass
```

### 4. Patent test management

```python
@pytest.mark.patent
@pytest.mark.integration
@pytest.mark.requires_torch
def test_patent_implementation_mt25_ev001():
    """Verify the implementation of patent MT25-EV001."""
    pass
```

## How automatic skip works

The `pytest_collection_modifyitems` hook defined in `tests/conftest.py` automatically adds skip markers when collecting tests:

```python
def pytest_collection_modifyitems(config, items):
    skip_torch = pytest.mark.skip(reason="PyTorch not available")
    skip_gpu = pytest.mark.skip(reason="GPU/CUDA not available")

    for item in items:
        if "requires_torch" in item.keywords and not _HAS_TORCH:
            item.add_marker(skip_torch)

        if "requires_gpu" in item.keywords:
            if not _HAS_TORCH or not torch.cuda.is_available():
                item.add_marker(skip_gpu)
```

## Troubleshooting

### Marker not recognized
```bash
# List the available markers
pytest --markers

# Detect unregistered markers with strict-markers mode
pytest --strict-markers tests/
```
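For `--strict-markers` to pass, every custom marker must be registered. A sketch of the registration, using the marker names from this guide (the project's actual `pytest.ini` or `pyproject.toml` may differ):

```ini
# pytest.ini (sketch)
[pytest]
markers =
    requires_torch: tests that require PyTorch
    requires_gpu: tests that require GPU/CUDA
    requires_milvus: tests that require the Milvus vector database
    requires_elasticsearch: tests that require Elasticsearch
    requires_zenoh: tests that require the Zenoh communication library
    unit: unit tests
    integration: integration tests
    e2e: end-to-end tests
    slow: long-running tests (more than 1 second)
    patent: validation tests for patent implementations
```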

### Tests that should be skipped are executed

- Check the auto-skip logic in `tests/conftest.py`
- Check whether the markers are applied correctly: `pytest --collect-only -m requires_gpu`

### Unexpected tests are skipped

- Check the marker combinations
- Show skip reasons in the test summary with the `-rs` option: `pytest -rs tests/`