author    YurenHao0426 <blackhao0426@gmail.com>  2026-01-13 23:50:59 -0600
committer YurenHao0426 <blackhao0426@gmail.com>  2026-01-13 23:50:59 -0600
commit    00cf667cee7ffacb144d5805fc7e0ef443f3583a (patch)
tree      77d20a3adaecf96bf3aff0612bdd3b5fa1a7dc7e /runs/slurm_logs/15261462_posthoc.out
parent    c53c04aa1d6ff75cb478a9498c370baa929c74b6 (diff)
parent    cd99d6b874d9d09b3bb87b8485cc787885af71f1 (diff)
Merge master into main
Diffstat (limited to 'runs/slurm_logs/15261462_posthoc.out')
-rw-r--r--  runs/slurm_logs/15261462_posthoc.out  |  92 ++++++++
1 file changed, 92 insertions(+), 0 deletions(-)
diff --git a/runs/slurm_logs/15261462_posthoc.out b/runs/slurm_logs/15261462_posthoc.out
new file mode 100644
index 0000000..f20cf31
--- /dev/null
+++ b/runs/slurm_logs/15261462_posthoc.out
@@ -0,0 +1,92 @@
+============================================================
+POST-HOC FINE-TUNING Experiment
+Job ID: 15261462 | Node: gpub032
+Start: Sat Jan 3 09:41:02 CST 2026
+============================================================
+NVIDIA A40, 46068 MiB
+============================================================
+================================================================================
+POST-HOC LYAPUNOV FINE-TUNING EXPERIMENT
+================================================================================
+Dataset: cifar100
+Depths: [4, 8, 12, 16]
+Pretrain: 100 epochs (vanilla, lr=0.001)
+Finetune: 50 epochs (Lyapunov, lr=0.0001)
+Lyapunov: reg_type=extreme, λ_reg=0.1, threshold=2.0
+================================================================================
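The regularizer itself is not shown in this log, only its configuration (reg_type=extreme, λ_reg=0.1, threshold=2.0) and the logged λ values hovering just under 2.0, which is consistent with a hinge penalty active at that threshold. A minimal sketch of what such an "extreme"-type penalty might look like, assuming λ is estimated as the largest per-layer expansion rate (singular value) and only its excess over the threshold is penalized; all function and variable names here are hypothetical, not taken from the actual codebase:

```python
import numpy as np

def largest_singular_value(W, n_iter=50, seed=0):
    """Estimate sigma_max(W) by power iteration on W^T W."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(W.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = W @ v          # forward pass through the layer
        v = W.T @ u        # pull back: one step of power iteration
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

def extreme_lyapunov_penalty(weights, threshold=2.0, lambda_reg=0.1):
    """Hinge penalty on the most expansive layer ("extreme" reg, assumed form).

    lam is the largest expansion rate across layers; the penalty is zero
    while lam <= threshold and grows linearly once it exceeds it.
    """
    lam = max(largest_singular_value(W) for W in weights)
    return lambda_reg * max(0.0, lam - threshold), lam

# Toy check: a layer with sigma_max = 3 exceeds the threshold of 2.0,
# so the penalty is 0.1 * (3 - 2) = 0.1.
W = np.diag([3.0, 1.0])
penalty, lam = extreme_lyapunov_penalty([W, 0.5 * np.eye(2)])
```

Under this reading, a trained network that satisfies the constraint sits exactly at the hinge boundary, which would explain why the logged λ stabilizes near 1.99 rather than shrinking further.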
+
+============================================================
+POST-HOC FINE-TUNING: Depth = 4
+Pretrain: 100 epochs (vanilla)
+Finetune: 50 epochs (Lyapunov, reg_type=extreme)
+============================================================
+Parameters: 1,756,836
+
+--- Phase 1: Vanilla Pre-training (100 epochs) ---
+ Epoch 10: train=0.494 test=0.429
+ Epoch 20: train=0.629 test=0.503
+ Epoch 30: train=0.707 test=0.521
+ Epoch 40: train=0.763 test=0.561
+ Epoch 50: train=0.815 test=0.562
+ Epoch 60: train=0.851 test=0.567
+ Epoch 70: train=0.884 test=0.588
+ Epoch 80: train=0.901 test=0.601
+ Epoch 90: train=0.912 test=0.601
+ Epoch 100: train=0.916 test=0.603
+ Best pretrain acc: 0.610
+
+--- Phase 2: Lyapunov Fine-tuning (50 epochs) ---
+ reg_type=extreme, lambda_reg=0.1, threshold=2.0
+ Epoch 110: train=0.911 test=0.606 λ=1.995
+ Epoch 120: train=0.921 test=0.602 λ=1.995
+ Epoch 130: train=0.925 test=0.611 λ=1.999
+ Epoch 140: train=0.926 test=0.609 λ=1.996
+ Epoch 150: train=0.929 test=0.608 λ=1.999
+ Best finetune acc: 0.615
+ Final λ: 1.999
+
+============================================================
+POST-HOC FINE-TUNING: Depth = 8
+Pretrain: 100 epochs (vanilla)
+Finetune: 50 epochs (Lyapunov, reg_type=extreme)
+============================================================
+Parameters: 4,892,196
+
+--- Phase 1: Vanilla Pre-training (100 epochs) ---
+ Epoch 10: train=0.392 test=0.358
+ Epoch 20: train=0.545 test=0.423
+ Epoch 30: train=0.642 test=0.483
+ Epoch 40: train=0.716 test=0.504
+ Epoch 50: train=0.779 test=0.502
+ Epoch 60: train=0.830 test=0.529
+ Epoch 70: train=0.870 test=0.532
+ Epoch 80: train=0.898 test=0.538
+ Epoch 90: train=0.911 test=0.540
+ Epoch 100: train=0.913 test=0.532
+ Best pretrain acc: 0.545
+
+--- Phase 2: Lyapunov Fine-tuning (50 epochs) ---
+ reg_type=extreme, lambda_reg=0.1, threshold=2.0
+ Epoch 110: train=0.062 test=0.038 λ=2.002
+ Epoch 120: train=0.085 test=0.016 λ=1.950
+ Epoch 130: train=0.099 test=0.016 λ=1.923
+ Epoch 140: train=0.104 test=0.016 λ=1.911
+ Epoch 150: train=0.106 test=0.014 λ=1.907
+ Best finetune acc: 0.516
+ Final λ: 1.907
+
+============================================================
+POST-HOC FINE-TUNING: Depth = 12
+Pretrain: 100 epochs (vanilla)
+Finetune: 50 epochs (Lyapunov, reg_type=extreme)
+============================================================
+Parameters: 8,027,556
+
+--- Phase 1: Vanilla Pre-training (100 epochs) ---
+ Epoch 10: train=0.213 test=0.087
+ Epoch 20: train=0.289 test=0.069
+ Epoch 30: train=0.346 test=0.083
+ Epoch 40: train=0.388 test=0.076
+ Epoch 50: train=0.430 test=0.076
+ Epoch 60: train=0.467 test=0.087
+ Epoch 70: train=0.502 test=0.101