============================================================
Job ID: 14360867
Node: gpub040
Start time: Sat Dec 27 06:04:46 CST 2025
============================================================
Configuration:
EPOCHS: 30
DEPTHS: 2 4 6 8 10 12
HIDDEN_DIM: 128
LAMBDA_REG: 0.1
LR: 0.001
USE_SYNTHETIC: true
============================================================
NVIDIA A40, 46068 MiB
============================================================
Running: python files/experiments/depth_comparison.py --epochs 30 --depths 2 4 6 8 10 12 --hidden_dim 128 --lambda_reg 0.1 --lr 0.001 --seed 42 --out_dir runs/depth_comparison --device cuda --synthetic
============================================================
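The command above can be reconstructed as an argparse-based CLI. This is a hypothetical sketch inferred only from the logged flags; the real `depth_comparison.py` may define defaults and help text differently.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical reconstruction of the CLI seen in the log; flag names
    # come from the logged command, defaults are guesses.
    p = argparse.ArgumentParser(
        description="Vanilla vs Lyapunov-regularized SNN depth comparison"
    )
    p.add_argument("--epochs", type=int, default=30)
    p.add_argument("--depths", type=int, nargs="+", default=[2, 4, 6, 8, 10, 12])
    p.add_argument("--hidden_dim", type=int, default=128)
    p.add_argument("--lambda_reg", type=float, default=0.1)
    p.add_argument("--lr", type=float, default=1e-3)
    p.add_argument("--seed", type=int, default=42)
    p.add_argument("--out_dir", type=str, default="runs/depth_comparison")
    p.add_argument("--device", type=str, default="cuda")
    p.add_argument("--synthetic", action="store_true",
                   help="use synthetic data for quick testing")
    return p

args = build_parser().parse_args(
    "--epochs 30 --depths 2 4 6 8 10 12 --hidden_dim 128 "
    "--lambda_reg 0.1 --lr 0.001 --seed 42 --synthetic".split()
)
print(args.depths, args.lambda_reg, args.synthetic)
```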
======================================================================
Experiment: Vanilla vs Lyapunov-Regularized SNN
======================================================================
Depths: [2, 4, 6, 8, 10, 12]
Hidden dim: 128
Epochs: 30
Lambda_reg: 0.1
Device: cuda
Using SYNTHETIC data for quick testing
Data: T=50, D=100, classes=10
==================================================
Depth = 2 layers
==================================================
Training VANILLA...
Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
Final: loss=0.0087 acc=1.000 val_acc=1.000 λ=0.295 ∇=0.00
==================================================
Depth = 4 layers
==================================================
Training VANILLA...
Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
Final: loss=0.0429 acc=1.000 val_acc=1.000 λ=0.654 ∇=0.07
==================================================
Depth = 6 layers
==================================================
Training VANILLA...
Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
Final: loss=0.0746 acc=1.000 val_acc=1.000 λ=0.859 ∇=0.20
==================================================
Depth = 8 layers
==================================================
Training VANILLA...
Final: loss=0.0000 acc=1.000 val_acc=1.000 λ=N/A ∇=0.00
Training LYAPUNOV...
Final: loss=0.1389 acc=0.986 val_acc=0.978 λ=1.003 ∇=1.11
==================================================
Depth = 10 layers
==================================================
Training VANILLA...
Final: loss=2.3013 acc=0.107 val_acc=0.084 λ=N/A ∇=0.59
Training LYAPUNOV...
Final: loss=0.8785 acc=0.667 val_acc=0.666 λ=1.116 ∇=7.17
==================================================
Depth = 12 layers
==================================================
Training VANILLA...
Final: loss=2.3013 acc=0.107 val_acc=0.084 λ=N/A ∇=0.63
Training LYAPUNOV...
Final: loss=2.4441 acc=0.107 val_acc=0.084 λ=1.196 ∇=0.63
======================================================================
SUMMARY: Final Validation Accuracy by Depth
======================================================================
Depth Vanilla Lyapunov Difference
----------------------------------------------------------------------
2 1.000 1.000 0.000
4 1.000 1.000 0.000
6 1.000 1.000 0.000
8 1.000 0.978 -0.022
10 0.084 0.666 +0.582
12 0.084 0.084 0.000
======================================================================
Gradient Norm Analysis (final epoch):
----------------------------------------------------------------------
Depth Vanilla ∇ Lyapunov ∇
----------------------------------------------------------------------
2 0.00 0.00
4 0.00 0.07
6 0.00 0.20
8 0.00 1.11
10 0.59 7.17
12 0.63 0.63
Results saved to runs/depth_comparison/20251227-071838
============================================================
Generating plots for: runs/depth_comparison/20251227-071838/
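The λ column reported for the LYAPUNOV runs above is presumably a finite-time Lyapunov-exponent estimate of the network dynamics. As a hedged illustration only (the actual estimator in `depth_comparison.py` is not shown in this log), one standard approach is the Benettin-style method: push a unit perturbation through the per-step Jacobians, accumulate the log growth rate, and renormalize each step. The `lyapunov_estimate` function below is an assumed name, not taken from the experiment code.

```python
import numpy as np

def lyapunov_estimate(jacobians):
    # Finite-time Lyapunov exponent: average log growth rate of a unit
    # perturbation propagated through the per-step Jacobians, with
    # renormalization at each step to avoid overflow (Benettin's trick).
    rng = np.random.default_rng(0)
    v = rng.standard_normal(jacobians[0].shape[1])
    v /= np.linalg.norm(v)
    log_growth = 0.0
    for J in jacobians:
        v = J @ v
        n = np.linalg.norm(v)
        log_growth += np.log(n)
        v /= n
    return log_growth / len(jacobians)

# A uniformly contracting map (scale 0.5) yields a negative exponent,
# an expanding one (scale 2.0) a positive exponent.
contracting = [0.5 * np.eye(4)] * 10
expanding = [2.0 * np.eye(4)] * 10
print(lyapunov_estimate(contracting))  # ≈ log(0.5) ≈ -0.693
print(lyapunov_estimate(expanding))    # ≈ log(2.0) ≈ 0.693
```

A regularizer consistent with the logged configuration would then add a penalty such as `lambda_reg * max(lam, 0)**2` to the task loss, discouraging exponentially diverging dynamics; whether the experiment penalizes positive λ, deviation from a target, or something else cannot be determined from this log.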