Diffstat (limited to 'code_eval/README.md')
-rw-r--r--  code_eval/README.md | 27
1 files changed, 27 insertions, 0 deletions
diff --git a/code_eval/README.md b/code_eval/README.md
new file mode 100644
index 0000000..32daaa9
--- /dev/null
+++ b/code_eval/README.md
@@ -0,0 +1,27 @@
+## Quick Start
+
+1. Download the benchmark datasets:
+
+```bash
+cd OpenCodeEval/data
+bash dataset.sh
+```
+
+2. Install dependencies:
+
+```bash
+pip install -e .
+```
+
+3. Configure the evaluation scripts:
+   - Replace the placeholders in the evaluation scripts with your actual model name and path (a minimal sketch is shown below).
+   - Adjust any other settings (e.g., evaluation parameters, output paths) to suit your requirements.
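+
+A minimal sketch of such a configuration, assuming the scripts read shell variables for the model; the names `MODEL_NAME`, `MODEL_PATH`, and `OUTPUT_DIR` below are hypothetical and should be matched to the placeholders actually used in the scripts:
+
+```bash
+# Hypothetical placeholder block near the top of an evaluation script.
+# Replace the values (and variable names, if they differ) with your own.
+MODEL_NAME="my-model"                        # label used in result files
+MODEL_PATH="/path/to/checkpoints/my-model"   # local checkpoint path or model id
+OUTPUT_DIR="results/${MODEL_NAME}"           # where evaluation output is written
+```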
+
+4. Run the evaluation script for your desired benchmark. For example, to evaluate on HumanEval with the `test-humaneval-ckpt-list.sh` script:
+
+```bash
+bash test-humaneval-ckpt-list.sh
+```
+
+ > **Note**: Ensure that all configurations are correctly set before running the script to avoid errors.