## Quick Start

1. Download benchmark datasets:

   ```bash
   cd OpenCodeEval/data
   bash dataset.sh
   ```

2. Install dependencies:

   ```bash
   # Editable install; run from the directory that contains the project's setup.py / pyproject.toml
   pip install -e .
   ```

3. Configure the evaluation scripts:
   - Replace the placeholders in the evaluation scripts with your actual model name and checkpoint path (a hypothetical sketch follows below).
   - Adjust any other settings you need (e.g., evaluation parameters, output paths) to suit your requirements.
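
   A minimal sketch of what this might look like inside one of the scripts. The variable names below (`MODEL_NAME`, `MODEL_PATH`, `OUTPUT_DIR`) are hypothetical placeholders, not the scripts' actual settings; check each script for the exact names it uses:

   ```bash
   # Hypothetical configuration block from an evaluation script such as test-humaneval-ckpt-list.sh.
   # Names are illustrative only; substitute whatever placeholders the script actually defines.
   MODEL_NAME="my-model"                       # label used for this run's results
   MODEL_PATH="/path/to/checkpoints/my-model"  # local checkpoint directory (or model identifier)
   OUTPUT_DIR="results/${MODEL_NAME}"          # where evaluation outputs are written
   ```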

4. Execute the evaluation script for your desired benchmark. For example, to evaluate using the `test-humaneval-ckpt-list.sh` script:

   ```bash
   bash test-humaneval-ckpt-list.sh
   ```

   > **Note**: Ensure that all configurations are correctly set before running the script to avoid errors.
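
   As a quick pre-flight check, one option is to grep the script for anything that still looks like an unfilled placeholder. The pattern below is only a guess at how placeholders might be marked (e.g., `<...>` tokens or `TODO` comments); adjust it to match the scripts you are using:

   ```bash
   # Flag lines that may still contain an unfilled placeholder (pattern is an assumption).
   grep -nE '<[A-Za-z_]+>|TODO|FIXME' test-humaneval-ckpt-list.sh
   ```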