| field | value | date |
|---|---|---|
| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:49:05 -0600 |
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:49:05 -0600 |
| commit | cd99d6b874d9d09b3bb87b8485cc787885af71f1 (patch) | |
| tree | 59a233959932ca0e4f12f196275e07fcf443b33f /runs/slurm_logs/14360854_test.out | |
init commit
Diffstat (limited to 'runs/slurm_logs/14360854_test.out')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | runs/slurm_logs/14360854_test.out | 86 |
1 file changed, 86 insertions, 0 deletions
diff --git a/runs/slurm_logs/14360854_test.out b/runs/slurm_logs/14360854_test.out
new file mode 100644
index 0000000..44463f3
--- /dev/null
+++ b/runs/slurm_logs/14360854_test.out
@@ -0,0 +1,86 @@
+============================================================
+Quick Test: SNN with Lyapunov Regularization
+Job ID: 14360854 | Node: gpub040
+============================================================
+NVIDIA A40, 46068 MiB
+============================================================
+Test 1: Model and Lyapunov computation...
+  Logits shape: torch.Size([8, 10])
+  Lyapunov exponent: 0.3758
+  Spikes shape: torch.Size([8, 50, 64])
+  PASSED
+
+Test 2: Training loop with Lyapunov regularization...
+  Step 1: loss=5.7394, lyap=0.4977
+  Step 2: loss=4.2242, lyap=0.4870
+  Step 3: loss=3.0435, lyap=0.4873
+  Step 4: loss=2.5410, lyap=0.4870
+  Step 5: loss=2.1885, lyap=0.4874
+  PASSED
+
+Test 3: Depth comparison (2 epochs, depths 1,2,4)...
+======================================================================
+Experiment: Vanilla vs Lyapunov-Regularized SNN
+======================================================================
+Depths: [1, 2, 4]
+Hidden dim: 64
+Epochs: 2
+Lambda_reg: 0.1
+Device: cuda
+
+Using SYNTHETIC data for quick testing
+Data: T=50, D=100, classes=10
+
+==================================================
+Depth = 1 layers
+==================================================
+
+  Training VANILLA...
+    Final: loss=0.1148 acc=0.979 val_acc=0.998 λ=N/A ∇=0.99
+
+  Training LYAPUNOV...
+    Final: loss=0.1152 acc=0.979 val_acc=1.000 λ=-0.063 ∇=0.99
+
+==================================================
+Depth = 2 layers
+==================================================
+
+  Training VANILLA...
+    Final: loss=0.0306 acc=0.999 val_acc=1.000 λ=N/A ∇=0.24
+
+  Training LYAPUNOV...
+    Final: loss=0.0424 acc=1.000 val_acc=1.000 λ=0.285 ∇=0.28
+
+==================================================
+Depth = 4 layers
+==================================================
+
+  Training VANILLA...
+    Final: loss=0.7594 acc=0.758 val_acc=0.826 λ=N/A ∇=1.03
+
+  Training LYAPUNOV...
+    Final: loss=0.7975 acc=0.774 val_acc=0.828 λ=0.638 ∇=1.02
+
+======================================================================
+SUMMARY: Final Validation Accuracy by Depth
+======================================================================
+Depth    Vanilla    Lyapunov    Difference
+----------------------------------------------------------------------
+1        0.998      1.000       +0.002
+2        1.000      1.000       0.000
+4        0.826      0.828       +0.002
+======================================================================
+
+Gradient Norm Analysis (final epoch):
+----------------------------------------------------------------------
+Depth    Vanilla ∇    Lyapunov ∇
+----------------------------------------------------------------------
+1        0.99         0.99
+2        0.24         0.28
+4        1.03         1.02
+
+Results saved to runs/test_output/20251227-055553
+
+============================================================
+All tests PASSED
+============================================================
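The log's Test 1 reports a single Lyapunov exponent computed from the model's forward pass, but the estimator itself is not shown in this commit. Below is a minimal sketch of one common way to obtain such a number: evolve a reference and a slightly perturbed membrane-potential trajectory through the same recurrent spiking dynamics and average the per-step log growth of their separation, with Benettin-style renormalization to keep the perturbation small. The sizes (T=50, input 100, hidden 64, batch 8) come from the log; the leak factor, surrogate slope, weight scales, and the estimator itself are illustrative assumptions, not the repository's implementation.

```python
# Sketch only (assumed estimator, not the repo's code): finite-time Lyapunov
# exponent of a recurrent spiking layer via divergence of twin trajectories.
import torch

torch.manual_seed(0)

T, B, D_in, H = 50, 8, 100, 64    # time steps, batch, input dim, hidden dim (sizes from the log)
beta, eps = 0.9, 1e-6             # membrane leak and initial perturbation size (assumed)

W_in = torch.randn(D_in, H) * 0.1
W_rec = torch.randn(H, H) * 0.1

def step(v, x):
    """One leaky-integrate step; a sigmoid surrogate stands in for the hard spike threshold."""
    s = torch.sigmoid(10.0 * (v - 1.0))       # surrogate spikes from the previous potential
    return beta * v + x @ W_in + s @ W_rec    # leaky integration of input + recurrent drive

x_seq = torch.randn(T, B, D_in)

v_ref = torch.zeros(B, H)                     # reference trajectory
v_pert = v_ref + eps * torch.randn(B, H)      # perturbed copy of the initial state
d_prev = (v_pert - v_ref).norm()

log_growth = []
for t in range(T):
    v_ref = step(v_ref, x_seq[t])
    v_pert = step(v_pert, x_seq[t])
    d = (v_pert - v_ref).norm() + 1e-12
    log_growth.append(torch.log(d / d_prev))
    # Benettin-style renormalization keeps the perturbation infinitesimal.
    v_pert = v_ref + (v_pert - v_ref) * (d_prev / d)

lyap = torch.stack(log_growth).mean()         # average log growth rate per time step
print(f"Estimated Lyapunov exponent: {lyap.item():.4f}")
```

A positive estimate (like the 0.3758 in Test 1) indicates locally expansive dynamics; a negative one indicates contraction toward the reference trajectory.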

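Tests 2 and 3 train with the estimated exponent folded into the objective (Lambda_reg: 0.1, synthetic data with T=50, D=100, 10 classes). The sketch below shows the general shape of such a loop under the assumption that the model returns both logits and a differentiable Lyapunov estimate; the TinySNN module, the twin-trajectory estimate inside forward, and the relu-gated penalty are illustrative stand-ins, not the repository's actual model or regularizer.

```python
# Sketch only (assumed architecture and penalty): Lyapunov-regularized training
# step where loss = task loss + lambda_reg * penalty(estimated exponent).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
T, B, D_in, H, C = 50, 8, 100, 64, 10   # sizes from the log's synthetic setting
lambda_reg = 0.1                        # regularization weight from the log

class TinySNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.w_in = nn.Linear(D_in, H, bias=False)
        self.w_rec = nn.Linear(H, H, bias=False)
        self.readout = nn.Linear(H, C)

    def forward(self, x):                       # x: (B, T, D_in)
        v = torch.zeros(x.size(0), H, device=x.device)
        v_pert = v + 1e-4                       # twin trajectory for a differentiable Lyapunov proxy
        spikes, log_dist = [], []
        for t in range(x.size(1)):
            v = 0.9 * v + self.w_in(x[:, t]) + self.w_rec(torch.sigmoid(10 * (v - 1)))
            v_pert = 0.9 * v_pert + self.w_in(x[:, t]) + self.w_rec(torch.sigmoid(10 * (v_pert - 1)))
            spikes.append(torch.sigmoid(10 * (v - 1)))
            log_dist.append(torch.log((v_pert - v).norm() + 1e-12))
        # Average log divergence rate; without renormalization this is only a
        # short-horizon, small-perturbation proxy, which suffices for a sketch.
        lyap = (log_dist[-1] - log_dist[0]) / (len(log_dist) - 1)
        logits = self.readout(torch.stack(spikes, dim=1).mean(dim=1))   # rate-coded readout
        return logits, lyap

model = TinySNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(B, T, D_in)
y = torch.randint(0, C, (B,))

for it in range(5):
    logits, lyap = model(x)
    # relu gates the penalty so only expansive (positive) exponents are punished.
    loss = F.cross_entropy(logits, y) + lambda_reg * F.relu(lyap)
    opt.zero_grad()
    loss.backward()
    opt.step()
    print(f"Step {it + 1}: loss={loss.item():.4f}, lyap={lyap.item():.4f}")
```

Gating the penalty with relu is one plausible design: it discourages chaotic, expansive dynamics without forcing the exponent to an exact target. Whether the repository's regularizer does this or penalizes the raw exponent is not determinable from the log.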