Age         Commit message  [Author]
2026-03-19  Int6 + MLP 3x + sliding window: val_bpb=1.1574 (#61)  [Sam Larson]
    * Warmdown-quantization co-optimization, val_bpb=1.2154
      Novel finding: aggressive LR decay (WARMDOWN_ITERS=20000) reduces the int8 quantization penalty from 0.014 to 0.005 BPB. Combined with FP16 tied embeddings and moderate NTK-RoPE extrapolation (eval@1408). Full warmdown sweep across 10 values and detailed analysis in the README.
    * Breakthrough: 1.1574 BPB via int6 + MLP 3x + sliding window stride=256
    ---------
    Co-authored-by: Sam Larson <saml212@users.noreply.github.com>
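The warmdown sweep above varies how long the learning rate decays at the end of training. A minimal sketch of a constant-then-linear-decay schedule consistent with a `WARMDOWN_ITERS` knob (the function name, signature, and exact decay shape are assumptions for illustration; the repo's actual schedule is not shown in this log):

```python
def lr_at(step: int, base_lr: float, total_iters: int, warmdown_iters: int) -> float:
    """Constant LR, then linear warmdown to 0 over the final
    `warmdown_iters` steps. A larger warmdown_iters (e.g. 20000)
    means a longer, gentler decay tail."""
    decay_start = total_iters - warmdown_iters
    if step < decay_start:
        return base_lr
    # Fraction of the warmdown still remaining, decaying linearly to 0.
    return base_lr * (total_iters - step) / warmdown_iters
```

Under this sketch, stretching the warmdown spends more steps at small learning rates, which plausibly settles weights into flatter minima that round to the int8/int6 grid with less loss, matching the reduced quantization penalty reported above.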
2026-03-19  Update README.md  [Will DePue]
2026-03-19  Record: Sliding Window + FP16 Embed + 10L + Muon WD + Overtone Init (val_bpb=1.1748) (#60)  [notapplica]
    * Add NTK Eval + Overtone Init submission (1.2160 BPB)
      Train@1024 with overtone embedding init and phase-transition residual mixing, eval@2048 with NTK-aware dynamic RoPE scaling. Mean val_bpb 1.2160 across 3 seeds (p=0.0012 for a 0.0194-nat improvement over baseline).
      Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
    * Update submission: Muon WD + NTK Eval + Overtone Init (1.2094 BPB, p=0.0002)
    * Update submission: 10-Layer + Muon WD + NTK Eval + Overtone Init (1.2029 BPB, p=0.0006)
    * Update submission: FP16 Embed + 10L + Muon WD + NTK + Overtone (1.2008 BPB)
    * Update submission: 1.2000 BPB: FP16 Embed + 10L + Muon WD + NTK@1408 + Overtone
    * Update: 1.1748 BPB: Sliding Window + FP16 Embed + 10L + Muon WD + Overtone
    ---------
    Co-authored-by: notapplica <notapplica@users.noreply.github.com>
    Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-19  Update README.md  [Will DePue]
2026-03-19  Update README.md  [Will DePue]
2026-03-19  Update README.md  [Will DePue]
2026-03-19  New SOTA attempt (#52)  [spokane-way]
    Co-authored-by: spokane-way <spokane@way>
2026-03-19  Update README.md  [Will DePue]
2026-03-19  Fix: score final partial window in sliding window eval (#124)  [Matthew Li]
    The window_starts filter dropped windows shorter than stride, silently skipping up to (stride-1) tokens at the end of the validation set. The eval now includes every window with >= 1 scoreable token and clamps the score start for short final windows.
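The fixed behavior can be sketched as a window-planning helper (the function and variable names here are hypothetical; only the behavior described in the commit message is taken from the log):

```python
def sliding_window_plan(n_tokens: int, window: int, stride: int):
    """Plan (start, end, score_start) triples for sliding-window eval.

    Each window sees tokens [start, end) as context but only scores
    [score_start, end), so every token is scored exactly once. The buggy
    version dropped final windows with fewer than `stride` new tokens,
    silently skipping up to stride-1 tokens at the end of the set; here
    any window with >= 1 scoreable token is kept, and score_start is
    clamped to the number of tokens already scored.
    """
    assert 0 < stride <= window
    plan, covered, start = [], 0, 0
    while covered < n_tokens:
        end = min(start + window, n_tokens)
        score_start = max(start, covered)  # clamp for short final windows
        plan.append((start, end, score_start))
        covered = end
        start += stride
    return plan
```

For example, `sliding_window_plan(5, 4, 2)` keeps the short final window `(2, 5, 4)` instead of dropping it, so token 4 is still scored.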
2026-03-19  Add record: Sliding Window Eval (stride=64), val_bpb=1.1925 (#50)  [Matthew Li]
2026-03-19  Update README.md  [Will DePue]
2026-03-19  SOTA attempt (val_bpb=1.2064) (#49)  [spokane-way]
    * SOTA attempt
    * Improve score on SXM
    ---------
    Co-authored-by: spokane-way <spokane@way>
2026-03-19  clarify torch version  [Alex]
2026-03-19  Update README.md (#105)  [Will DePue]
2026-03-19  fp16 tied embedding + lr/warmdown tuning: val_bpb 1.2197 (#42)  [Renier Velazco]
    Keep tok_emb.weight in fp16 during the int8 export (kills the quant gap), shrink the MLP hidden size to 992 to fit under 16 MB, and bump warmdown to 3600 and the matrix LR to 0.06. Tested on 8xH100 SXM (2 seeds) and 8xH200 SXM (3 seeds).
    Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
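A minimal sketch of the mixed-precision export described above (illustrative only, not the repo's actual exporter; the shapes and the per-tensor symmetric int8 scheme are assumptions): matrices are quantized to int8 with one fp32 scale each, while the tied token embedding stays fp16, because the output head shares that matrix and quantizing it degrades the logits directly, which is the "quant gap" the commit refers to.

```python
import numpy as np

def export_int8_keep_fp16_embed(params: dict) -> dict:
    """Quantize every matrix to int8 with a per-tensor scale, except the
    tied token embedding, which is stored in fp16 (2 bytes/param instead
    of 4) to preserve output-head precision."""
    out = {}
    for name, w in params.items():
        if name == "tok_emb.weight":
            out[name] = w.astype(np.float16)  # tied embedding: keep fp16
        else:
            scale = float(np.abs(w).max()) / 127.0
            q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
            out[name] = (q, np.float32(scale))
    return out

rng = np.random.default_rng(0)
params = {
    "tok_emb.weight": rng.standard_normal((8192, 384)).astype(np.float32),
    "mlp.fc1.weight": rng.standard_normal((992, 384)).astype(np.float32),
}
exported = export_int8_keep_fp16_embed(params)
```

The trade-off is straightforward: fp16 costs one extra byte per embedding parameter versus int8, which is what forces the MLP hidden size down (to 992 here) to stay under the 16 MB budget.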
2026-03-19  Merge pull request #100 from sandsevenone/mlx_eager_eval  [Will DePue]
    Use eager mx.eval() to fix running the train script on 16GB Mac devices
2026-03-20  Add MLX_EAGER_EVAL flag to further reduce memory pressure by force-evaluating the graph after each sub-batch step  [sandrone]
2026-03-18  Update README.md  [Will DePue]
2026-03-18  Update train_gpt_mlx.py  [Will DePue]
2026-03-18  Update train_gpt.py  [Will DePue]
2026-03-18  Merge pull request #35 from openai/0hq-patch-1  [Will DePue]
    Update README.md
2026-03-18  Update README.md  [Will DePue]
2026-03-18  Merge pull request #32 from yhn112/fix-mlx-eval-memory-growth  [Will DePue]
    Fix MLX multi-batch validation memory growth
2026-03-18  Merge pull request #9 from oof-baroomf/patch-1  [Will DePue]
    Update README typo
2026-03-18  Merge pull request #18 from berniwal/main  [Will DePue]
    MLX Timing Mismatch with Main Script
2026-03-19  Log MLX validation progress  [Michael Diskin]
2026-03-19  Fix MLX validation loss accumulation  [Michael Diskin]
2026-03-18  match timing to main script to exclude eval timing  [bernhardwalser]
2026-03-18  Update README typo  [Dhruv Saini]
2026-03-18  Remove scripts  [Will DePue]
2026-03-18  Update README.md  [Will DePue]
2026-03-18  Launch snapshot  [Will DePue]