path: root/runs/slurm_logs/14632871_test_opt.out
author    YurenHao0426 <blackhao0426@gmail.com>  2026-01-13 23:50:59 -0600
committer YurenHao0426 <blackhao0426@gmail.com>  2026-01-13 23:50:59 -0600
commit    00cf667cee7ffacb144d5805fc7e0ef443f3583a (patch)
tree      77d20a3adaecf96bf3aff0612bdd3b5fa1a7dc7e /runs/slurm_logs/14632871_test_opt.out
parent    c53c04aa1d6ff75cb478a9498c370baa929c74b6 (diff)
parent    cd99d6b874d9d09b3bb87b8485cc787885af71f1 (diff)
Merge master into main
Diffstat (limited to 'runs/slurm_logs/14632871_test_opt.out')
-rw-r--r--  runs/slurm_logs/14632871_test_opt.out  29
1 file changed, 29 insertions(+), 0 deletions(-)
diff --git a/runs/slurm_logs/14632871_test_opt.out b/runs/slurm_logs/14632871_test_opt.out
new file mode 100644
index 0000000..5478903
--- /dev/null
+++ b/runs/slurm_logs/14632871_test_opt.out
@@ -0,0 +1,29 @@
+Testing optimized SpikingVGG forward...
+Testing on device: cuda
+Using: Global delta + Global renorm (Option 1 - textbook LE)
+Model depth: 6 conv layers
+Parameters: 1,187,274
+
+[Test 1] Forward without Lyapunov...
+ Logits shape: torch.Size([8, 10]) ✓
+
+[Test 2] Forward with Lyapunov...
+ Logits shape: torch.Size([8, 10]) ✓
+ Lyapunov exponent: 3.3281 ✓
+
+[Test 3] Backward pass...
+ Loss: 5.3691 ✓
+ Gradient norm: 25.8283 ✓
+ No NaN gradients ✓
+
+[Test 4] Multiple training steps...
+ Step 1: loss=7.0680, λ=3.0538
+ Step 2: loss=7.3570, λ=3.2788
+ Step 3: loss=8.4584, λ=2.8295
+ Step 4: loss=5.7036, λ=3.3123
+ Step 5: loss=7.4161, λ=3.4701
+
+==================================================
+ALL TESTS PASSED!
+==================================================
+Done!
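
Note: the "Global delta + Global renorm (Option 1 - textbook LE)" line above refers to the standard (Benettin-style) finite-time Lyapunov exponent estimate: evolve the network state alongside a slightly perturbed copy, measure how the perturbation norm grows over the whole state at each timestep, and rescale the perturbation back to its original size before the next step. The sketch below is only a minimal illustration of that scheme under a generic PyTorch setup; step_fn, the flattened state layout, and every other name in it are hypothetical and not taken from this repository.

    import torch

    def lyapunov_estimate(step_fn, state, T, eps=1e-6):
        # Finite-time Lyapunov exponent via global delta + global renormalization.
        # step_fn(state) -> next_state advances the (flattened) network state one
        # timestep; `state` is a 1-D tensor holding all membrane states.
        # Hypothetical sketch, not this repository's implementation.

        # Perturbed copy of the trajectory, offset by a small random delta of norm eps.
        delta = torch.randn_like(state)
        delta = eps * delta / delta.norm()
        perturbed = state + delta

        log_growth = torch.zeros((), device=state.device)
        for _ in range(T):
            state = step_fn(state)
            perturbed = step_fn(perturbed)

            # Global delta: a single perturbation vector over the entire state.
            delta = perturbed - state
            growth = delta.norm() / eps
            log_growth = log_growth + torch.log(growth + 1e-12)

            # Global renorm: rescale the perturbation to eps before the next step.
            perturbed = state + eps * delta / (delta.norm() + 1e-12)

        # Average exponential growth rate per timestep (the reported lambda).
        return log_growth / T

In a training loop, the per-step average log growth returned here would be the λ logged next to the loss, as in the Test 4 lines above.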