| | | |
|---|---|---|
| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-27 09:57:37 -0600 |
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-27 09:57:37 -0600 |
| commit | dc801c07cf38b0c495686463e6ca6f871a64440e (patch) | |
| tree | 599f03114775921dbc472403c701f4a3a8ea188a /collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml | |
| parent | e43b3f8aa36c198b95c1e46bea2eaf3893b13dc3 (diff) | |
Add collaborativeagents module and update gitignore
- Add collaborativeagents subproject with adapters, agents, and evaluation modules
- Update .gitignore to exclude large binary files (.whl, .tar), wandb logs, and results
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Diffstat (limited to 'collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml')
| mode | file | lines changed |
|---|---|---|
| -rw-r--r-- | collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml | 47 |

1 file changed, 47 insertions, 0 deletions
    diff --git a/collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml b/collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml
    new file mode 100644
    index 0000000..8c6f184
    --- /dev/null
    +++ b/collaborativeagents/training/grpo_verl/outputs/2026-01-11/03-50-42/.hydra/overrides.yaml
    @@ -0,0 +1,47 @@
    +- algorithm.adv_estimator=grpo
    +- data.train_files=/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training/grpo_verl/data/session_level_reflection_grpo_train.parquet
    +- data.val_files=/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training/grpo_verl/data/session_level_reflection_grpo_train.parquet
    +- data.train_batch_size=64
    +- data.max_prompt_length=2048
    +- data.max_response_length=1024
    +- data.filter_overlong_prompts=True
    +- data.truncation=error
    +- data.prompt_key=prompt
    +- data.reward_fn_key=data_source
    +- actor_rollout_ref.model.path=/work/nvme/bfqt/yurenh2/sft_checkpoints/checkpoint-200
    +- actor_rollout_ref.actor.optim.lr=1e-6
    +- actor_rollout_ref.model.use_remove_padding=True
    +- actor_rollout_ref.actor.ppo_mini_batch_size=8
    +- actor_rollout_ref.actor.ppo_micro_batch_size_per_gpu=4
    +- actor_rollout_ref.actor.use_kl_loss=True
    +- actor_rollout_ref.actor.kl_loss_coef=0.003
    +- actor_rollout_ref.actor.kl_loss_type=low_var_kl
    +- actor_rollout_ref.actor.entropy_coeff=0
    +- actor_rollout_ref.model.enable_gradient_checkpointing=True
    +- actor_rollout_ref.actor.fsdp_config.model_dtype=bfloat16
    +- actor_rollout_ref.actor.fsdp_config.param_offload=False
    +- actor_rollout_ref.actor.fsdp_config.optimizer_offload=False
    +- actor_rollout_ref.rollout.log_prob_micro_batch_size_per_gpu=4
    +- actor_rollout_ref.rollout.tensor_model_parallel_size=1
    +- actor_rollout_ref.rollout.name=vllm
    +- actor_rollout_ref.rollout.gpu_memory_utilization=0.5
    +- actor_rollout_ref.rollout.n=8
    +- actor_rollout_ref.ref.log_prob_micro_batch_size_per_gpu=4
    +- actor_rollout_ref.ref.fsdp_config.model_dtype=bfloat16
    +- actor_rollout_ref.ref.fsdp_config.param_offload=True
    +- actor_rollout_ref.rollout.temperature=0.9
    +- actor_rollout_ref.rollout.top_p=0.9
    +- custom_reward_function.path=/projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training/grpo_verl/verl_reward_functions.py
    +- custom_reward_function.name=compute_score
    +- algorithm.use_kl_in_reward=False
    +- trainer.critic_warmup=0
    +- trainer.val_before_train=False
    +- trainer.logger=["console"]
    +- trainer.project_name=collaborative-agent-reflection-grpo
    +- trainer.experiment_name=llama3.1-8b-grpo
    +- trainer.n_gpus_per_node=2
    +- trainer.nnodes=1
    +- trainer.save_freq=50
    +- trainer.test_freq=100
    +- trainer.total_epochs=1
    +- trainer.default_local_dir=/scratch/bfqt/yurenh2/grpo_outputs
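The overrides point `custom_reward_function.path` and `custom_reward_function.name` at a `compute_score` callable in `verl_reward_functions.py`, which is not part of this commit. The sketch below shows what such an entry point typically looks like, assuming verl's usual custom-reward signature; the exact-match scoring body is purely illustrative and is not the project's actual reward logic.

```python
# Hypothetical sketch of the entry point named by
# custom_reward_function.name=compute_score. The real implementation in
# verl_reward_functions.py is not included in this diff, so the body
# below is a placeholder.
def compute_score(data_source, solution_str, ground_truth, extra_info=None) -> float:
    """Return a scalar reward for one sampled response."""
    # Illustrative only: 1.0 for an exact match with the reference answer, else 0.0.
    return 1.0 if solution_str.strip() == str(ground_truth).strip() else 0.0
```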

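For context on `algorithm.adv_estimator=grpo` combined with `actor_rollout_ref.rollout.n=8`: each training prompt is sampled 8 times, and GRPO scores every response relative to its own group of samples rather than against a learned value baseline. The following is a minimal, framework-independent sketch of that group-relative advantage, not verl's actual implementation.

```python
import numpy as np

def grpo_advantages(group_rewards, eps=1e-6):
    """Group-relative advantage: normalize each response's reward by the
    mean and std of all responses sampled for the same prompt."""
    r = np.asarray(group_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# rollout.n=8 -> one group of 8 rewards per prompt (values are made up).
print(grpo_advantages([1.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 0.0]))
```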