
Batch/Sequence Shape Management (Developer Guide)

Purpose

Describe the conventions used in evospikenet for handling batched spike-train tensors and the rules to follow when adding new modules.

Conventions

Canonical internal tensor shape: (batch, time, seq, dim), where:

  • batch: independent parallel sequences
  • time: simulation timesteps
  • seq: sequence length (e.g., tokens or spatial positions)
  • dim: feature dimension (model hidden size)

  • Neuron layers and synapse matrices may expect 2D inputs (batch*seq, dim) or 1D vectors depending on the implementation. To ensure compatibility:
    1. Flatten batch and seq into a single leading dimension before calling neuron layers that operate per-neuron group: x_flat = x.reshape(batch*seq, dim).
    2. Run neuron processing on x_flat.
    3. Reshape outputs back to (batch, time, seq, dim) before returning from higher-level modules.
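The flatten/process/reshape convention above can be sketched as follows; torch.relu stands in for a real neuron layer, and the per-timestep loop is illustrative:

```python
import torch

batch, time_steps, seq, dim = 2, 5, 3, 8
x = torch.randn(batch, time_steps, seq, dim)

outputs = []
for t in range(time_steps):
    x_t = x[:, t]                               # (batch, seq, dim) at timestep t
    x_flat = x_t.reshape(batch * seq, dim)      # flatten for per-neuron-group layers
    y_flat = torch.relu(x_flat)                 # stand-in for neuron processing
    outputs.append(y_flat.reshape(batch, seq, dim))

y = torch.stack(outputs, dim=1)                 # back to (batch, time, seq, dim)
assert y.shape == (batch, time_steps, seq, dim)
```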

Guidelines

  • Prefer explicit reset(batch_size) calls for stateful layers before stepping through time.
  • Use num_workers=0 in DataLoader for reliable terminal logging during interactive runs.
  • When adding new modules, include a small unit test that exercises batch_size in {1, 2, 4} and verifies output shapes.
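The shape-test guideline might look like this in practice; torch.nn.Linear stands in for a new module:

```python
import torch

def test_output_shapes():
    # Stand-in for a new evospikenet module; inputs follow (batch, time, seq, dim)
    layer = torch.nn.Linear(8, 8)
    for batch_size in (1, 2, 4):
        x = torch.randn(batch_size, 5, 3, 8)
        y = layer(x)
        assert y.shape == (batch_size, 5, 3, 8)

test_output_shapes()
```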

Compatibility

When integrating 3rd-party neuron layers (e.g., snntorch.Leaky) that accept/return a mem context, use adapter logic in higher-level modules to initialise and pass the mem only when required.
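A minimal sketch of such adapter logic; the layer classes here are illustrative stand-ins, not the actual snntorch or evospikenet APIs:

```python
class StatelessLayer:
    """Layer that takes only the input."""
    def __call__(self, x):
        return [v * 2 for v in x]

class StatefulLayer:
    """Mimics a snntorch.Leaky-style layer that threads a mem state."""
    requires_mem = True

    def init_mem(self):
        return 0.0

    def __call__(self, x, mem):
        mem = 0.9 * mem + sum(x)
        spk = [1.0 if mem > 1.0 else 0.0 for _ in x]
        return spk, mem

def step_layer(layer, x, mem=None):
    """Adapter: initialise and pass mem only when the layer requires it."""
    if getattr(layer, "requires_mem", False):
        if mem is None:
            mem = layer.init_mem()
        out, mem = layer(x, mem)
        return out, mem
    return layer(x), None

out, mem = step_layer(StatefulLayer(), [0.5, 0.7])  # mem crosses 1.0, so both spike
```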


Advanced Batch Shaping Management System

EvoSpikeNet now includes a comprehensive batch shaping management system that provides adaptive batch sizing, memory-aware tensor reshaping, and performance optimization capabilities.

Core Components

1. AdaptiveBatchSizer

Dynamically adjusts batch sizes based on memory usage and performance metrics.

> [!NOTE]
> For the latest implementation status, please refer to [Functional Implementation Status (Remaining Functionality)](REMAINING_FUNCTIONALITY.md).

from evospikenet.batch_shaping import AdaptiveBatchSizer

# Initialize with target memory usage (80% of available GPU memory)
sizer = AdaptiveBatchSizer(target_memory_usage=0.8)

# Get optimal batch size for current conditions
optimal_batch_size = sizer.get_optimal_batch_size(
    model=model,
    input_shape=(batch_size, seq_len, hidden_dim),
    device='cuda'
)

# Adaptive sizing during training
for epoch in range(num_epochs):
    current_batch_size = sizer.adapt_batch_size(
        current_loss=loss.item(),
        current_memory_usage=torch.cuda.memory_allocated() / torch.cuda.get_device_properties(0).total_memory,
        performance_metrics={'throughput': samples_per_sec}
    )
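The policy behind adapt_batch_size might look roughly like this; a simplified pure-Python sketch, not the actual implementation:

```python
def adapt_batch_size(current, memory_usage, target=0.8,
                     min_size=1, max_size=128):
    """Halve the batch when memory exceeds the target; double it
    when usage is comfortably below (under half the target)."""
    if memory_usage > target:
        return max(min_size, current // 2)
    if memory_usage < 0.5 * target:
        return min(max_size, current * 2)
    return current

assert adapt_batch_size(32, 0.95) == 16   # over budget: shrink
assert adapt_batch_size(32, 0.20) == 64   # plenty of headroom: grow
assert adapt_batch_size(32, 0.60) == 32   # within band: keep
```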

Key Features:

  • Memory-aware batch size optimization
  • Performance-based adaptation
  • Gradient accumulation support
  • Multi-GPU coordination

2. MemoryAwareShaper

Reshapes tensors while considering GPU memory constraints and computational efficiency.

from evospikenet.batch_shaping import MemoryAwareShaper

shaper = MemoryAwareShaper(memory_threshold=0.9)

# Reshape tensor with memory constraints
reshaped_tensor = shaper.reshape_with_memory_check(
    tensor=input_tensor,
    target_shape=(new_batch, seq_len, hidden_dim),
    preserve_gradients=True
)

# Optimize tensor layout for computation
optimized_tensor = shaper.optimize_tensor_layout(
    tensor=input_tensor,
    operation_type='attention'  # or 'convolution', 'recurrent'
)
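The budget check behind reshape_with_memory_check can be illustrated with a pure-Python sketch (not the actual implementation):

```python
def bytes_needed(shape, dtype_size=4):
    """Bytes required by a float32 tensor of the given shape."""
    n = 1
    for s in shape:
        n *= s
    return n * dtype_size

def fits_in_budget(target_shape, free_bytes, threshold=0.9):
    """Allow the reshape only if the result stays within the memory budget."""
    return bytes_needed(target_shape) <= threshold * free_bytes

assert fits_in_budget((64, 128, 256), free_bytes=16e6)      # ~8.4 MB: fits
assert not fits_in_budget((64, 128, 256), free_bytes=8e6)   # budget 7.2 MB: too big
```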

Key Features:

  • Memory usage prediction
  • Automatic shape optimization
  • Gradient preservation
  • Operation-specific optimizations

3. PerformanceMonitor

Monitors batch processing performance and provides optimization recommendations.

from evospikenet.batch_shaping import PerformanceMonitor

monitor = PerformanceMonitor(window_size=100)

# Track batch processing performance
monitor.record_batch(
    batch_size=current_batch_size,
    processing_time=time_elapsed,
    memory_usage=memory_used,
    loss_value=loss.item()
)

# Get performance analysis
analysis = monitor.get_performance_analysis()
print(f"Average throughput: {analysis['avg_throughput']:.2f} samples/sec")
print(f"Memory efficiency: {analysis['memory_efficiency']:.2%}")

# Get optimization recommendations
recommendations = monitor.get_optimization_recommendations()
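Internally, a sliding-window monitor like this could back those statistics; a simplified sketch, not the real PerformanceMonitor:

```python
from collections import deque

class WindowedMonitor:
    def __init__(self, window_size=100):
        self.records = deque(maxlen=window_size)  # keep only recent batches

    def record_batch(self, batch_size, processing_time):
        self.records.append((batch_size, processing_time))

    def avg_throughput(self):
        total_samples = sum(b for b, _ in self.records)
        total_time = sum(t for _, t in self.records)
        return total_samples / total_time if total_time else 0.0

m = WindowedMonitor(window_size=2)
m.record_batch(32, 1.0)
m.record_batch(32, 1.0)
m.record_batch(64, 1.0)   # evicts the oldest record
assert m.avg_throughput() == 48.0
```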

Key Features:

  • Real-time performance tracking
  • Memory usage analysis
  • Bottleneck identification
  • Optimization suggestions

4. BatchShapeOptimizer

Recommends optimal batch shapes for different model architectures and hardware configurations.

from evospikenet.batch_shaping import BatchShapeOptimizer

optimizer = BatchShapeOptimizer(
    model=model,
    hardware_specs={'gpu_memory': 24e9, 'cpu_memory': 128e9}
)

# Find optimal batch configuration
optimal_config = optimizer.optimize_batch_shape(
    input_shapes=[(batch, seq, dim) for batch in [8, 16, 32, 64]],
    constraints={'max_memory': 0.8, 'min_throughput': 100}
)

print(f"Recommended batch size: {optimal_config['batch_size']}")
print(f"Recommended sequence length: {optimal_config['seq_length']}")
print(f"Expected throughput: {optimal_config['throughput']}")
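In the simplest case, constraint-based search reduces to picking the largest candidate batch size that satisfies the limits; a sketch, not the real optimizer:

```python
def pick_batch_size(candidates, mem_per_sample_bytes, memory_budget_bytes):
    """Largest candidate batch size whose memory footprint fits the budget."""
    feasible = [b for b in candidates
                if b * mem_per_sample_bytes <= memory_budget_bytes]
    return max(feasible) if feasible else None

assert pick_batch_size([8, 16, 32, 64], 1e6, 40e6) == 32   # 64 would need 64 MB
assert pick_batch_size([8, 16, 32, 64], 1e6, 4e6) is None  # nothing fits
```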

Key Features:

  • Hardware-aware optimization
  • Multi-objective optimization (memory, throughput, accuracy)
  • Architecture-specific recommendations
  • Constraint-based optimization

5. BatchShapeValidator

Validates batch shapes for compatibility and performance.

from evospikenet.batch_shaping import BatchShapeValidator

validator = BatchShapeValidator()

is_valid, issues = validator.validate_batch_shape(
    tensor_shape=(batch_size, seq_len, hidden_dim),
    model_requirements=model.get_shape_requirements(),
    hardware_limits={'max_batch_size': 128, 'max_seq_len': 2048}
)

if not is_valid:
    print("Shape validation issues:")
    for issue in issues:
        print(f"- {issue}")

# Validate performance implications
performance_check = validator.check_performance_implications(
    batch_shape=(batch_size, seq_len, hidden_dim),
    operation_sequence=['embedding', 'attention', 'feedforward']
)
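The shape checks amount to comparing each dimension against its limit and collecting any violations; a simplified sketch of the idea:

```python
def validate_batch_shape(shape, max_batch_size=128, max_seq_len=2048):
    """Return (is_valid, issues) for a (batch, seq, dim) shape."""
    batch, seq, dim = shape
    issues = []
    if batch > max_batch_size:
        issues.append(f"batch size {batch} exceeds limit {max_batch_size}")
    if seq > max_seq_len:
        issues.append(f"sequence length {seq} exceeds limit {max_seq_len}")
    if dim <= 0:
        issues.append("feature dimension must be positive")
    return (not issues), issues

ok, issues = validate_batch_shape((256, 4096, 512))  # two limits violated
assert not ok and len(issues) == 2
```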

Key Features:

  • Shape compatibility validation
  • Hardware constraint checking
  • Performance impact analysis
  • Automatic issue detection

6. BatchShapeManager

Integrated manager that coordinates all batch shaping components.

from evospikenet.batch_shaping import BatchShapeManager

manager = BatchShapeManager(
    model=model,
    device=device
)

# Initialize for training
manager.initialize_for_training(
    initial_batch_size=16,
    max_batch_size=128,
    adaptation_interval=10
)

# Process batch with automatic optimization
optimized_batch = manager.process_batch(
    input_batch=raw_batch,
    training_step=step,
    current_loss=loss.item()
)

# Get current status
status = manager.get_status()
print(f"Current batch size: {status['current_batch_size']}")
print(f"Memory usage: {status['memory_usage']:.2%}")
print(f"Performance trend: {status['performance_trend']}")

Key Features:

  • Unified batch shaping interface
  • Automatic component coordination
  • Real-time adaptation
  • Comprehensive monitoring

Integration Examples

Training Loop Integration

import time

from evospikenet.batch_shaping import BatchShapeManager

# Initialize batch shape manager
batch_manager = BatchShapeManager(model=model, device=device)

# Training loop with adaptive batch shaping
for epoch in range(num_epochs):
    for step, batch in enumerate(dataloader):
        start_time = time.time()
        # Adapt batch size based on current conditions
        adapted_batch = batch_manager.adapt_and_process_batch(
            batch=batch,
            step=step,
            current_metrics={'loss': loss.item(), 'memory': get_memory_usage()}
        )

        # Forward pass
        outputs = model(adapted_batch['inputs'])
        loss = criterion(outputs, adapted_batch['targets'])

        # Backward pass
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Update batch manager with results
        batch_manager.update_metrics(
            loss=loss.item(),
            processing_time=time.time() - start_time,
            memory_usage=get_memory_usage()
        )

Memory-Constrained Inference

# Configure for inference with memory constraints
inference_manager = BatchShapeManager(
    model=model,
    device=device,
    memory_target=0.9,  # Use more memory for inference
    inference_mode=True
)

# Process large dataset with automatic batching
results = inference_manager.process_large_dataset(
    dataset=large_dataset,
    batch_size_strategy='memory_adaptive',
    max_concurrent_batches=4
)

Configuration Options

batch_shaping:
  adaptive_sizing:
    enabled: true
    target_memory_usage: 0.8
    min_batch_size: 1
    max_batch_size: 128
    adaptation_interval: 10
    performance_window: 50

  memory_aware_shaping:
    memory_threshold: 0.9
    preserve_gradients: true
    optimization_hints:
      attention: "contiguous"
      convolution: "channels_last"
      recurrent: "packed_sequence"

  performance_monitoring:
    enabled: true
    metrics_window: 100
    alert_thresholds:
      memory_usage: 0.95
      throughput_drop: 0.2

  validation:
    strict_mode: false
    hardware_checks: true
    performance_validation: true

Best Practices

  1. Initialization: Always initialize BatchShapeManager before training/inference
  2. Memory Monitoring: Enable memory monitoring for adaptive sizing
  3. Performance Tracking: Use PerformanceMonitor for bottleneck identification
  4. Validation: Run BatchShapeValidator before deploying to production
  5. Hardware Awareness: Configure hardware-specific optimizations
  6. Testing: Test with various batch sizes and shapes during development

Troubleshooting

Common Issues:

  • Memory allocation errors: reduce memory_target or enable gradient accumulation.
  • Performance degradation: check PerformanceMonitor for bottlenecks.
  • Shape validation failures: review model requirements and input shapes.
  • Adaptation instability: increase adaptation_interval or adjust thresholds.
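Gradient accumulation, suggested above as a remedy for memory allocation errors, trades memory for step frequency: process small micro-batches but call optimizer.step() only every few batches. A self-contained sketch with a toy model:

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = torch.nn.MSELoss()
accumulation_steps = 4  # one optimizer step per 4 micro-batches

data = [(torch.randn(2, 4), torch.randn(2, 1)) for _ in range(8)]
for step, (inputs, targets) in enumerate(data):
    # Scale the loss so accumulated gradients match one large batch
    loss = criterion(model(inputs), targets) / accumulation_steps
    loss.backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```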

Debug Mode:

batch_manager.enable_debug_mode()
batch_manager.log_detailed_metrics()