## Quick Start

1. Download benchmark datasets:

```bash
cd OpenCodeEval/data
bash dataset.sh
```

2. Install dependencies:

```bash
# install the package and its dependencies in editable mode
pip install -e .
```

3. **Configure Evaluation Scripts**  
   - Replace the placeholders in the evaluation scripts with the actual model name and path (see the sketch below).  
   - Adjust any other settings as needed (e.g., evaluation parameters, output paths).
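
For illustration, a filled-in configuration block in one of the evaluation scripts might look like the following sketch. The variable names (`MODEL_NAME`, `MODEL_PATH`, `OUTPUT_DIR`) are hypothetical; match them to the placeholders actually used in the scripts:

```bash
# Hypothetical example of replaced placeholders in an evaluation script;
# adapt the variable names and values to the actual script in this repository.
MODEL_NAME="my-model-7b"                       # label used for the results
MODEL_PATH="/path/to/checkpoints/my-model-7b"  # local checkpoint path or model identifier
OUTPUT_DIR="./results/${MODEL_NAME}"           # where evaluation outputs are written
```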

4. Execute the evaluation script for your desired benchmark. For example, to run the HumanEval evaluation with the `test-humaneval-ckpt-list.sh` script:

```bash
bash test-humaneval-ckpt-list.sh
```

   > **Note**: Ensure that all configurations are correctly set before running the script to avoid errors.
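
As an optional sanity check before launching a long run, you can confirm that the placeholders have actually been replaced. The variable names below are the hypothetical ones from the configuration sketch above; adjust the pattern to whatever the script really uses:

```bash
# List the model-related settings in the script to verify that no placeholders remain.
# (MODEL_NAME / MODEL_PATH are assumed names; adapt the pattern to the actual script.)
grep -nE "MODEL_NAME|MODEL_PATH" test-humaneval-ckpt-list.sh
```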