# One-shot Entropy Minimization

### Installation

```bash
pip install torch transformers accelerate deepspeed psutil pandas numpy wandb
```

---

### Reproducing One-shot EM Training (SOTA)

```bash
accelerate launch train.py \
  --model_name Qwen2.5-Math-7B \
  --model_path /path/to/Qwen2.5-Math-7B \
  --train_data dataset/1shot_rlvr/pi1_r1280.parquet \
  --effective_batch 64 \
  --micro_batch_size auto \
  --temperature 0.5 \
  --learning_rate 2e-5 \
  --max_steps 50 \
  --log_steps 1 \
  --save_steps 1 \
  --run_name one_shot \
  --wandb_project one-shot-em
```

---

### Reproducing Multi-shot EM Training

```bash
accelerate launch train.py \
  --model_name Qwen2.5-Math-7B \
  --model_path /path/to/Qwen2.5-Math-7B \
  --train_data dataset/numina/numina_00.parquet \
  --effective_batch 64 \
  --micro_batch_size auto \
  --temperature 0.5 \
  --learning_rate 2e-5 \
  --max_steps 50 \
  --log_steps 1 \
  --save_steps 1 \
  --run_name multi_shot \
  --wandb_project one-shot-em
```

---

### Evaluation

```bash
cd Qwen2.5-Eval/evaluation
bash sh/eval_all_math.sh
```

---

### Acknowledgements

Our dataset references and builds upon the following open-source contributions:

- [NuminaMath-CoT](https://huggingface.co/datasets/AI-MO/NuminaMath-CoT)
- [DeepScaler](https://github.com/agentica-project/deepscaler)
- [One-shot RLVR](https://github.com/ypwang61/One-Shot-RLVR/) – for data selection strategies
- [Qwen2.5-Eval](https://github.com/QwenLM/Qwen2.5-Math/) – for evaluation benchmarks

We sincerely thank the authors and maintainers of these projects for their excellent contributions to the research community!

---

### Citation

```bibtex
@misc{gao2025oneshotentropyminimization,
      title={One-shot Entropy Minimization},
      author={Zitian Gao and Lynx Chen and Joey Zhou and Bryan Dai},
      year={2025},
      eprint={2505.20282},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.20282},
}
```
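
---

### Appendix: Entropy-Minimization Objective (Sketch)

For readers who want a concrete picture of what the training commands above optimize, here is a minimal sketch of a token-level entropy-minimization loss. The function name, tensor shapes, and temperature handling are illustrative assumptions, not the repo's verified implementation; see `train.py` for the authoritative objective.

```python
import torch
import torch.nn.functional as F

def token_entropy_loss(logits: torch.Tensor,
                       gen_mask: torch.Tensor,
                       temperature: float = 0.5) -> torch.Tensor:
    """Mean Shannon entropy of the next-token distribution (hypothetical helper).

    logits:   (batch, seq_len, vocab) raw model outputs
    gen_mask: (batch, seq_len) 1.0 for model-generated tokens, 0.0 for prompt/padding
    """
    # Temperature scaling of the logits mirrors the --temperature flag above (assumption).
    log_probs = F.log_softmax(logits / temperature, dim=-1)
    # Per-position Shannon entropy: H(p) = -sum_v p_v * log p_v.
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)
    # Average over generated tokens only; minimizing this sharpens the model
    # toward its own most confident continuations.
    return (entropy * gen_mask).sum() / gen_mask.sum().clamp_min(1)
```

In entropy minimization, a loss of this form, computed on the model's own sampled continuations of the training prompt(s), serves as the training signal in place of ground-truth labels or reward models.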