============================================================
Job ID: 14360867
Node: gpub040
Start time: Sat Dec 27 06:04:46 CST 2025
============================================================
Configuration:
  EPOCHS: 30
  DEPTHS: 2 4 6 8 10 12
  HIDDEN_DIM: 128
  LAMBDA_REG: 0.1
  LR: 0.001
  USE_SYNTHETIC: true
============================================================
NVIDIA A40, 46068 MiB
============================================================
Running: python files/experiments/depth_comparison.py --epochs 30 --depths 2 4 6 8 10 12 --hidden_dim 128 --lambda_reg 0.1 --lr 0.001 --seed 42 --out_dir runs/depth_comparison --device cuda --synthetic
============================================================
======================================================================
Experiment: Vanilla vs Lyapunov-Regularized SNN
======================================================================
Depths:     [2, 4, 6, 8, 10, 12]
Hidden dim: 128
Epochs:     30
Lambda_reg: 0.1
Device:     cuda
Using SYNTHETIC data for quick testing
Data: T=50, D=100, classes=10

==================================================
Depth = 2 layers
==================================================
Training VANILLA...
  Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
  Final: loss=0.0087 acc=1.000 val_acc=1.000 λ=0.295 ∇=0.00

==================================================
Depth = 4 layers
==================================================
Training VANILLA...
  Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
  Final: loss=0.0429 acc=1.000 val_acc=1.000 λ=0.654 ∇=0.07

==================================================
Depth = 6 layers
==================================================
Training VANILLA...
  Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
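[Note: the λ column reported for the LYAPUNOV runs looks like a finite-time Lyapunov-exponent estimate, and LAMBDA_REG=0.1 its penalty weight. The following is a minimal numpy sketch of one plausible estimator and penalty; the function names, the exact estimator, and the zero target are assumptions, not taken from depth_comparison.py.]

```python
import numpy as np

def finite_time_lyapunov(deltas):
    """One common finite-time Lyapunov-exponent estimate (an assumption here):
    track a small perturbation delta_t over T steps and average the
    per-step log growth of its norm,
        lambda = (1/(T-1)) * sum_t log(||delta_{t+1}|| / ||delta_t||).
    Positive lambda means exponentially diverging trajectories."""
    norms = np.linalg.norm(deltas, axis=1)
    return float(np.mean(np.log(norms[1:] / norms[:-1])))

def lyapunov_penalty(deltas, lambda_reg=0.1, target=0.0):
    """Hypothetical regularizer: quadratic penalty pulling the estimated
    exponent toward `target` (0 = edge of stability), weighted by
    lambda_reg (LAMBDA_REG=0.1 in the run above)."""
    lam = finite_time_lyapunov(deltas)
    return lambda_reg * (lam - target) ** 2

# Toy check: a perturbation that doubles each step has exponent log(2).
deltas = np.array([[2.0 ** t, 0.0] for t in range(6)])
print(round(finite_time_lyapunov(deltas), 4))  # -> 0.6931
```

In a real training loop this penalty would be added to the task loss, which is consistent with the LYAPUNOV runs showing slightly higher final loss at shallow depths while keeping λ below the vanilla-unstable regime.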
  Final: loss=0.0746 acc=1.000 val_acc=1.000 λ=0.859 ∇=0.20

==================================================
Depth = 8 layers
==================================================
Training VANILLA...
  Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
  Final: loss=0.1389 acc=0.986 val_acc=0.978 λ=1.003 ∇=1.11

==================================================
Depth = 10 layers
==================================================
Training VANILLA...
  Final: loss=2.3013 acc=0.107 val_acc=0.084 λ=N/A ∇=0.59
Training LYAPUNOV...
  Final: loss=0.8785 acc=0.667 val_acc=0.666 λ=1.116 ∇=7.17

==================================================
Depth = 12 layers
==================================================
Training VANILLA...
  Final: loss=2.3013 acc=0.107 val_acc=0.084 λ=N/A ∇=0.63
Training LYAPUNOV...
  Final: loss=2.4441 acc=0.107 val_acc=0.084 λ=1.196 ∇=0.63

======================================================================
SUMMARY: Final Validation Accuracy by Depth
======================================================================
Depth    Vanilla    Lyapunov    Difference
----------------------------------------------------------------------
    2      1.000       1.000         0.000
    4      1.000       1.000         0.000
    6      1.000       1.000         0.000
    8      1.000       0.978        -0.022
   10      0.084       0.666        +0.582
   12      0.084       0.084         0.000

======================================================================
Gradient Norm Analysis (final epoch):
----------------------------------------------------------------------
Depth    Vanilla ∇    Lyapunov ∇
----------------------------------------------------------------------
    2         0.00          0.00
    4         0.00          0.07
    6         0.00          0.20
    8         0.00          1.11
   10         0.59          7.17
   12         0.63          0.63

Results saved to runs/depth_comparison/20251227-071838
============================================================
Generating plots for: runs/depth_comparison/20251227-071838/
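[Note: the ∇ column is most plausibly a global gradient norm measured at the final epoch. The definition below (l2 norm over all parameter gradients concatenated, as used by common gradient-clipping utilities) is an assumption; the script's actual metric is not shown in this log.]

```python
import numpy as np

def global_grad_norm(grads):
    """Global l2 gradient norm over a list of gradient arrays, treated as
    one flat vector: sqrt(sum over tensors of ||g||_2^2). Values near 0
    at shallow depths are consistent with converged (or vanished)
    gradients; large values (e.g. 7.17 at depth 10) suggest instability."""
    return float(np.sqrt(sum(np.sum(g ** 2) for g in grads)))

# Example: two gradient tensors with norms 3 and 4 -> global norm 5.
g1 = np.array([3.0, 0.0])
g2 = np.array([[0.0, 4.0]])
print(global_grad_norm([g1, g2]))  # -> 5.0
```

Under this reading, the table says the vanilla network's gradients effectively vanish through depth 8 and its training collapses to chance accuracy (0.107 ≈ 1/10 classes) at depths 10-12, while the Lyapunov-regularized model keeps usable gradients up to depth 10 before both fail at depth 12.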