| | | |
|---|---|---|
| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
| commit | 00cf667cee7ffacb144d5805fc7e0ef443f3583a (patch) | |
| tree | 77d20a3adaecf96bf3aff0612bdd3b5fa1a7dc7e /runs/slurm_logs/15348086_scaled_grid_d12.out | |
| parent | c53c04aa1d6ff75cb478a9498c370baa929c74b6 (diff) | |
| parent | cd99d6b874d9d09b3bb87b8485cc787885af71f1 (diff) | |
Merge master into main
Diffstat (limited to 'runs/slurm_logs/15348086_scaled_grid_d12.out')
| -rw-r--r-- | runs/slurm_logs/15348086_scaled_grid_d12.out | 151 |
1 file changed, 151 insertions, 0 deletions
diff --git a/runs/slurm_logs/15348086_scaled_grid_d12.out b/runs/slurm_logs/15348086_scaled_grid_d12.out
new file mode 100644
index 0000000..43d72c4
--- /dev/null
+++ b/runs/slurm_logs/15348086_scaled_grid_d12.out
@@ -0,0 +1,151 @@
+============================================================
+SCALED REGULARIZATION GRID SEARCH - DEPTH 12
+Job ID: 15348086 | Node: gpub017
+Start: Mon Jan 5 13:08:27 CST 2026
+============================================================
+Grid: λ_reg=[0.01, 0.05, 0.1, 0.3] × reg_type=[mult_linear, mult_log]
+Total: 8 experiments
+============================================================
+NVIDIA A40, 46068 MiB
+============================================================
+======================================================================
+SCALED REGULARIZATION GRID SEARCH
+======================================================================
+Depth: 12
+Epochs: 100
+Device: cuda
+GPU: NVIDIA A40
+======================================================================
+
+Grid: 4 λ_reg × 2 reg_types = 8 experiments
+λ_reg values: [0.01, 0.05, 0.1, 0.3]
+reg_types: ['mult_linear', 'mult_log']
+
+Loading CIFAR-100...
+Train: 50000, Test: 10000
+
+============================================================
+Config: depth=12, reg_type=mult_linear, λ_reg=0.01
+============================================================
+  Training Vanilla...
+    Epoch 10: test=0.228
+    Epoch 20: test=0.358
+    Epoch 30: test=0.429
+    Epoch 40: test=0.445
+    Epoch 50: test=0.421
+    Epoch 60: test=0.447
+    Epoch 70: test=0.463
+    Epoch 80: test=0.456
+    Epoch 90: test=0.466
+    Epoch 100: test=0.462
+  Training Lyapunov (mult_linear, λ_reg=0.01)...
+    Epoch 10: test=0.010 λ=1.696
+    Epoch 20: test=0.010 λ=1.740
+    Epoch 30: test=0.010 λ=1.622
+    Epoch 40: test=0.010 λ=1.614
+    Epoch 50: test=0.010 λ=1.609
+    Epoch 60: test=0.010 λ=1.606
+    Epoch 70: test=0.010 λ=1.620
+    Epoch 80: test=0.010 λ=1.625
+    Epoch 90: test=0.010 λ=1.630
+    Epoch 100: test=0.010 λ=1.636
+  Result: Vanilla=0.466, Lyap=0.010, Δ=-0.456
+
+============================================================
+Config: depth=12, reg_type=mult_log, λ_reg=0.01
+============================================================
+  Training Vanilla...
+    Epoch 10: test=0.210
+    Epoch 20: test=0.290
+    Epoch 30: test=0.370
+    Epoch 40: test=0.441
+    Epoch 50: test=0.464
+    Epoch 60: test=0.448
+    Epoch 70: test=0.458
+    Epoch 80: test=0.469
+    Epoch 90: test=0.473
+    Epoch 100: test=0.482
+  Training Lyapunov (mult_log, λ_reg=0.01)...
+    Epoch 10: test=0.013 λ=1.679
+    Epoch 20: test=0.010 λ=1.636
+    Epoch 30: test=0.010 λ=1.613
+    Epoch 40: test=0.012 λ=1.598
+    Epoch 50: test=0.010 λ=1.628
+    Epoch 60: test=0.011 λ=1.620
+    Epoch 70: test=0.010 λ=1.604
+    Epoch 80: test=0.010 λ=1.585
+    Epoch 90: test=0.010 λ=1.588
+    Epoch 100: test=0.010 λ=1.578
+  Result: Vanilla=0.482, Lyap=0.013, Δ=-0.470
+
+============================================================
+Config: depth=12, reg_type=mult_linear, λ_reg=0.05
+============================================================
+  Training Vanilla...
+    Epoch 10: test=0.260
+    Epoch 20: test=0.325
+    Epoch 30: test=0.377
+    Epoch 40: test=0.406
+    Epoch 50: test=0.455
+    Epoch 60: test=0.457
+    Epoch 70: test=0.485
+    Epoch 80: test=0.467
+    Epoch 90: test=0.466
+    Epoch 100: test=0.467
+  Training Lyapunov (mult_linear, λ_reg=0.05)...
+    Epoch 10: test=0.010 λ=1.630
+    Epoch 20: test=0.010 λ=1.633
+    Epoch 30: test=0.010 λ=1.637
+    Epoch 40: test=0.010 λ=1.650
+    Epoch 50: test=0.010 λ=1.615
+    Epoch 60: test=0.010 λ=1.601
+    Epoch 70: test=0.010 λ=1.596
+    Epoch 80: test=0.010 λ=1.589
+    Epoch 90: test=0.010 λ=1.582
+    Epoch 100: test=0.010 λ=1.588
+  Result: Vanilla=0.485, Lyap=0.010, Δ=-0.475
+
+============================================================
+Config: depth=12, reg_type=mult_log, λ_reg=0.05
+============================================================
+  Training Vanilla...
+    Epoch 10: test=0.236
+    Epoch 20: test=0.344
+    Epoch 30: test=0.414
+    Epoch 40: test=0.462
+    Epoch 50: test=0.457
+    Epoch 60: test=0.461
+    Epoch 70: test=0.483
+    Epoch 80: test=0.488
+    Epoch 90: test=0.498
+    Epoch 100: test=0.496
+  Training Lyapunov (mult_log, λ_reg=0.05)...
+    Epoch 10: test=0.010 λ=1.625
+    Epoch 20: test=0.012 λ=1.626
+    Epoch 30: test=0.010 λ=1.597
+    Epoch 40: test=0.010 λ=1.627
+    Epoch 50: test=0.010 λ=1.589
+    Epoch 60: test=0.010 λ=1.572
+    Epoch 70: test=0.010 λ=1.569
+    Epoch 80: test=0.010 λ=1.560
+    Epoch 90: test=0.010 λ=1.563
+    Epoch 100: test=0.010 λ=1.561
+  Result: Vanilla=0.498, Lyap=0.012, Δ=-0.486
+
+============================================================
+Config: depth=12, reg_type=mult_linear, λ_reg=0.1
+============================================================
+  Training Vanilla...
+    Epoch 10: test=0.180
+    Epoch 20: test=0.291
+    Epoch 30: test=0.316
+    Epoch 40: test=0.406
+    Epoch 50: test=0.452
+    Epoch 60: test=0.446
+    Epoch 70: test=0.464
+    Epoch 80: test=0.472
+    Epoch 90: test=0.455
+    Epoch 100: test=0.468
+  Training Lyapunov (mult_linear, λ_reg=0.1)...
+    Epoch 10: test=0.018 λ=1.644
+    Epoch 20: test=0.010 λ=1.600
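The grid-search driver itself is not part of this commit; only its log output is added. For orientation, below is a minimal Python sketch of the loop implied by the log header (outer loop over λ_reg, inner loop over reg_type, 4 × 2 = 8 runs). The names `train_vanilla` and `train_lyapunov` and their signatures are hypothetical placeholders, not code from this repository.

```python
# Hypothetical sketch of a grid-search driver that would emit a log like the
# one above. train_vanilla / train_lyapunov are assumed helpers that train a
# model and return its final test accuracy; they are not defined in this diff.
import itertools

LAMBDA_REG_VALUES = [0.01, 0.05, 0.1, 0.3]   # λ_reg grid from the log header
REG_TYPES = ["mult_linear", "mult_log"]      # regularizer variants from the log header
DEPTH = 12
EPOCHS = 100


def run_grid(train_vanilla, train_lyapunov):
    """Run all 4 λ_reg x 2 reg_type = 8 configurations and report Δ = Lyap - Vanilla."""
    results = []
    # Outer loop over λ_reg, inner loop over reg_type, matching the order in the log.
    for lam_reg, reg_type in itertools.product(LAMBDA_REG_VALUES, REG_TYPES):
        print(f"Config: depth={DEPTH}, reg_type={reg_type}, λ_reg={lam_reg}")
        vanilla_acc = train_vanilla(depth=DEPTH, epochs=EPOCHS)
        lyap_acc = train_lyapunov(depth=DEPTH, epochs=EPOCHS,
                                  reg_type=reg_type, lam_reg=lam_reg)
        delta = lyap_acc - vanilla_acc
        print(f"  Result: Vanilla={vanilla_acc:.3f}, Lyap={lyap_acc:.3f}, Δ={delta:.3f}")
        results.append((reg_type, lam_reg, vanilla_acc, lyap_acc, delta))
    return results
```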

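Once a run finishes, the per-configuration outcomes can be read straight off the `Config:` and `Result:` lines. Below is a small parsing sketch that assumes only the line format visible in this log; no such script is included in the commit, and the path used is simply the log file added by this diff.

```python
# Minimal sketch for tabulating completed configurations from a log in the
# format shown above. The regexes mirror the "Config:" and "Result:" lines
# exactly as they appear; anything else about the pipeline is an assumption.
import re
from pathlib import Path

CONFIG_RE = re.compile(r"Config: depth=(\d+), reg_type=(\w+), λ_reg=([\d.]+)")
RESULT_RE = re.compile(r"Result: Vanilla=([\d.]+), Lyap=([\d.]+), Δ=(-?[\d.]+)")


def parse_results(log_path):
    """Yield (depth, reg_type, lam_reg, vanilla, lyap, delta) for each finished config."""
    config = None
    for line in Path(log_path).read_text(encoding="utf-8").splitlines():
        m = CONFIG_RE.search(line)
        if m:
            config = (int(m.group(1)), m.group(2), float(m.group(3)))
            continue
        m = RESULT_RE.search(line)
        if m and config is not None:
            yield (*config, float(m.group(1)), float(m.group(2)), float(m.group(3)))
            config = None


if __name__ == "__main__":
    for row in parse_results("runs/slurm_logs/15348086_scaled_grid_d12.out"):
        print(row)
```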