From 947d9dfdf16ae37109898111a5caacae7377b96d Mon Sep 17 00:00:00 2001
From: = <=>
Date: Wed, 4 Jun 2025 11:49:37 +0800
Subject: update code and kk eval

---
 code_eval/README.md | 27 +++++++++++++++++++++++++++
 1 file changed, 27 insertions(+)
 create mode 100644 code_eval/README.md

diff --git a/code_eval/README.md b/code_eval/README.md
new file mode 100644
index 0000000..32daaa9
--- /dev/null
+++ b/code_eval/README.md
@@ -0,0 +1,27 @@
+## Quick Start
+
+1. Download benchmark datasets:
+
+```bash
+cd OpenCodeEval/data
+bash dataset.sh
+```
+
+2. Install dependencies:
+
+```bash
+pip install -e .
+```
+
+3. **Configure Evaluation Scripts**
+   - Replace placeholders in the evaluation scripts with the actual model name and path.
+   - Adjust any other necessary settings (e.g., evaluation parameters, output paths) to suit your requirements.
+
+4. Execute the evaluation script for your desired benchmark.
+
+   For example, to evaluate using the `test-humaneval-ckpt-list.sh` script:
+```bash
+bash test-humaneval-ckpt-list.sh
+```
+
+ > **Note**: Ensure that all configurations are correctly set before running the script to avoid errors.
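
Step 3 of the README asks you to fill in the model name and path placeholders in the evaluation scripts. The exact placeholder names depend on the scripts shipped in this repository; as a minimal sketch, assuming the script exposes plain shell variables (the variable names below are illustrative, not taken from the actual `test-humaneval-ckpt-list.sh`), the configuration block might look like this:

```bash
# Hypothetical configuration block near the top of an evaluation script.
# The real test-humaneval-ckpt-list.sh may use different placeholder names;
# edit whatever variables your copy actually defines.
MODEL_NAME="my-model"                          # label used in result file/path names
MODEL_PATH="/path/to/checkpoints/my-model"     # local path to the checkpoint to evaluate
OUTPUT_DIR="./results/${MODEL_NAME}"           # where evaluation outputs are written

mkdir -p "${OUTPUT_DIR}"                       # ensure the output directory exists
```

After editing the placeholders, run the script as shown in step 4.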