author	Will DePue <williamd@openai.com>	2026-03-18 16:33:01 -0700
committer	GitHub <noreply@github.com>	2026-03-18 16:33:01 -0700
commit	0f9518abc65c2d596ded2455b9974e93125be544 (patch)
tree	0ec3d87e765a23ac9594cc8b76c5a67ac38efdd5
parent	825357724a36e54bb61dca99700b21b07aaa8c47 (diff)
parent	9f170d4c818840e417c69d9148eb63eedd916861 (diff)
Merge pull request #35 from openai/0hq-patch-1
Update README.md
-rw-r--r--	README.md	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/README.md b/README.md
index de856a3..4fdb3e6 100644
--- a/README.md
+++ b/README.md
@@ -144,7 +144,7 @@ There's no perfectly clear answer here and it's hard to draw a clean line around
**What are the restrictions on evaluation?**
-We won't accept submissions that take more than 10 minutes on 8xH100 to evaluate, but otherwise you're free to evaluate however. As with modded-nanogpt, we allow evaluation at any sequence length. And, obviously, you aren't allowed to access any training data during evaluation, unless you pay for those bits in the <16MB limit. We encourage competitors to push the bounds of evaluation methods as aggressively as with training methods.
+We won't accept submissions that take more than 10 minutes on 8xH100 to evaluate (Note: This limit is in addition to the 10 minutes of training time allowed!), but otherwise you're free to evaluate however. As with modded-nanogpt, we allow evaluation at any sequence length. And, obviously, you aren't allowed to access any training data during evaluation, unless you pay for those bits in the <16MB limit. We encourage competitors to push the bounds of evaluation methods as aggressively as with training methods.
## Submission Process