============================================================
POST-HOC FINE-TUNING Experiment
Job ID: 15261462 | Node: gpub032
Start: Sat Jan 3 09:41:02 CST 2026
============================================================
NVIDIA A40, 46068 MiB
============================================================
================================================================================
POST-HOC LYAPUNOV FINE-TUNING EXPERIMENT
================================================================================
Dataset: cifar100
Depths: [4, 8, 12, 16]
Pretrain: 100 epochs (vanilla, lr=0.001)
Finetune: 50 epochs (Lyapunov, lr=0.0001)
Lyapunov: reg_type=extreme, λ_reg=0.1, threshold=2.0
================================================================================
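The regularizer itself is not shown in the log; a minimal sketch of what reg_type=extreme with lambda_reg=0.1 and threshold=2.0 could compute, assuming a hinge penalty on the largest estimated Lyapunov exponent (the function name, hinge form, and exponent-list input are illustrative assumptions, not taken from this run):

```python
def lyapunov_penalty(exponents, lambda_reg=0.1, threshold=2.0):
    # Hypothetical "extreme" regularizer: penalize only the most
    # extreme estimated Lyapunov exponent, and only the part of it
    # that exceeds the threshold (2.0 in this experiment).
    lam_max = max(exponents)
    return lambda_reg * max(0.0, lam_max - threshold)

# The depth-4 run settles near lambda = 1.999, just under the
# threshold, so under this sketch its penalty would be ~0.
```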
============================================================
POST-HOC FINE-TUNING: Depth = 4
Pretrain: 100 epochs (vanilla)
Finetune: 50 epochs (Lyapunov, reg_type=extreme)
============================================================
Parameters: 1,756,836
--- Phase 1: Vanilla Pre-training (100 epochs) ---
Epoch 10: train=0.494 test=0.429
Epoch 20: train=0.629 test=0.503
Epoch 30: train=0.707 test=0.521
Epoch 40: train=0.763 test=0.561
Epoch 50: train=0.815 test=0.562
Epoch 60: train=0.851 test=0.567
Epoch 70: train=0.884 test=0.588
Epoch 80: train=0.901 test=0.601
Epoch 90: train=0.912 test=0.601
Epoch 100: train=0.916 test=0.603
Best pretrain acc: 0.610
--- Phase 2: Lyapunov Fine-tuning (50 epochs) ---
reg_type=extreme, lambda_reg=0.1, threshold=2.0
Epoch 110: train=0.911 test=0.606 λ=1.995
Epoch 120: train=0.921 test=0.602 λ=1.995
Epoch 130: train=0.925 test=0.611 λ=1.999
Epoch 140: train=0.926 test=0.609 λ=1.996
Epoch 150: train=0.929 test=0.608 λ=1.999
Best finetune acc: 0.615
Final λ: 1.999
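The two-phase schedule logged for each depth (100 vanilla epochs at lr=0.001, then 50 Lyapunov fine-tuning epochs at lr=0.0001, with epoch numbering continuing through 150) can be sketched as a generator; the function and phase names are illustrative, not from the training code:

```python
def two_phase_schedule(pretrain_epochs=100, finetune_epochs=50):
    # Yield (epoch, phase, lr) tuples matching the logged schedule:
    # epochs 1-100 are vanilla pre-training, 101-150 add the
    # Lyapunov penalty at a 10x smaller learning rate.
    for e in range(1, pretrain_epochs + 1):
        yield e, "vanilla", 1e-3
    for e in range(pretrain_epochs + 1,
                   pretrain_epochs + finetune_epochs + 1):
        yield e, "lyapunov", 1e-4
```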
============================================================
POST-HOC FINE-TUNING: Depth = 8
Pretrain: 100 epochs (vanilla)
Finetune: 50 epochs (Lyapunov, reg_type=extreme)
============================================================
Parameters: 4,892,196
--- Phase 1: Vanilla Pre-training (100 epochs) ---
Epoch 10: train=0.392 test=0.358
Epoch 20: train=0.545 test=0.423
Epoch 30: train=0.642 test=0.483
Epoch 40: train=0.716 test=0.504
Epoch 50: train=0.779 test=0.502
Epoch 60: train=0.830 test=0.529
Epoch 70: train=0.870 test=0.532
Epoch 80: train=0.898 test=0.538
Epoch 90: train=0.911 test=0.540
Epoch 100: train=0.913 test=0.532
Best pretrain acc: 0.545
--- Phase 2: Lyapunov Fine-tuning (50 epochs) ---
reg_type=extreme, lambda_reg=0.1, threshold=2.0
Epoch 110: train=0.062 test=0.038 λ=2.002
Epoch 120: train=0.085 test=0.016 λ=1.950
Epoch 130: train=0.099 test=0.016 λ=1.923
Epoch 140: train=0.104 test=0.016 λ=1.911
Epoch 150: train=0.106 test=0.014 λ=1.907
Best finetune acc: 0.516
Final λ: 1.907
============================================================
POST-HOC FINE-TUNING: Depth = 12
Pretrain: 100 epochs (vanilla)
Finetune: 50 epochs (Lyapunov, reg_type=extreme)
============================================================
Parameters: 8,027,556
--- Phase 1: Vanilla Pre-training (100 epochs) ---
Epoch 10: train=0.213 test=0.087
Epoch 20: train=0.289 test=0.069
Epoch 30: train=0.346 test=0.083
Epoch 40: train=0.388 test=0.076
Epoch 50: train=0.430 test=0.076
Epoch 60: train=0.467 test=0.087
Epoch 70: train=0.502 test=0.101