### LLaMA-Factory SFT Training Config - QLoRA (Ultra Minimal Hardware)
### For session-level reflection training
### Can run on a single GPU with ~12GB VRAM

### Model
model_name_or_path: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/models/llama-3.1-8b-instruct

### Method - QLoRA (4-bit quantization + LoRA)
stage: sft
do_train: true
finetuning_type: lora
quantization_bit: 4
quantization_method: bitsandbytes

### LoRA Config
lora_rank: 64
lora_alpha: 128
lora_dropout: 0.05
lora_target: all

### Dataset
dataset: sft_reflection
dataset_dir: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training
template: llama3
cutoff_len: 4096

### Output
output_dir: /projects/bfqt/users/yurenh2/ml-projects/personalization-user-model/collaborativeagents/training/outputs/sft_reflection_qlora

### Training - Single GPU friendly
per_device_train_batch_size: 1
gradient_accumulation_steps: 64
learning_rate: 2.0e-5
num_train_epochs: 4.0
lr_scheduler_type: cosine
warmup_ratio: 0.1
bf16: true

### Logging
logging_steps: 10
save_steps: 100
save_total_limit: 3
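
### Usage (a minimal sketch, assuming LLaMA-Factory's CLI is installed;
### the filename below is illustrative, not fixed by this config):
###   llamafactory-cli train sft_reflection_qlora.yaml
###
### Notes:
### - The dataset name "sft_reflection" must be registered in the
###   dataset_info.json inside dataset_dir, per the usual LLaMA-Factory
###   dataset convention.
### - Effective batch size on a single GPU: per_device_train_batch_size (1)
###   x gradient_accumulation_steps (64) = 64 sequences per optimizer step.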