| Age | Commit message | Author |
|---|---|---|
| 35 hours | fp16 tied embedding + lr/warmdown tuning — val_bpb 1.2197 (#42) | Renier Velazco |
| | keep tok_emb.weight in fp16 during int8 export (kills the quant gap), shrink MLP hidden to 992 to fit under 16MB, bump warmdown to 3600 and matrix LR to 0.06. tested on 8xH100 SXM (2 seeds) and 8xH200 SXM (3 seeds). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> | |
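The key change in this commit — quantizing weights to int8 for export while leaving the tied token embedding in fp16 — can be sketched roughly as below. This is an illustrative reconstruction, not the repo's actual export code: only the name `tok_emb.weight` comes from the commit message; the function name, the per-tensor symmetric int8 scheme, and the rule of keeping 1-D tensors (biases, norms) in fp16 are assumptions.

```python
import torch
import torch.nn as nn

def export_state_dict(model: nn.Module, keep_fp16=("tok_emb.weight",)):
    """Hypothetical int8 export: quantize 2-D weights to int8 with a
    per-tensor symmetric scale, but keep the tied token embedding (and
    all 1-D tensors) in fp16 to avoid the embedding quantization gap."""
    out = {}
    for name, w in model.state_dict().items():
        if name in keep_fp16 or w.ndim < 2:
            # fp16 path: tied embedding stays precise; biases/norms too
            out[name] = w.half()
        else:
            # int8 path: store quantized payload plus its dequant scale
            scale = w.abs().max() / 127.0
            q = torch.clamp((w / scale).round(), -127, 127).to(torch.int8)
            out[name] = (q, scale)
    return out
```

Keeping `tok_emb.weight` out of the int8 path costs extra bytes, which is presumably why the MLP hidden size was shrunk to 992 in the same commit to stay under the 16MB budget.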
