### LLaMA-Factory SFT Training Config
### For session-level reflection training

### Model
model_name_or_path: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/models/llama-3.1-8b-instruct

### Method
stage: sft
do_train: true
finetuning_type: full
deepspeed: ds_z3_config.json

### Dataset
dataset: sft_reflection
dataset_dir: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training
template: llama3
cutoff_len: 4096

### Output
output_dir: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training/outputs/sft_reflection_lf

### Training
per_device_train_batch_size: 1
gradient_accumulation_steps: 16
learning_rate: 1.0e-6
num_train_epochs: 4.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true
ddp_timeout: 3600

### Logging
logging_steps: 10
save_steps: 100
save_total_limit: 3