2026-04-05  Tune PromptTuning/PrefixTuning hyperparams: lr=1e-3/5e-4, steps=100  (YurenHao0426)

    The previous lr=0.01 gave R-L=0.01 (broken output). Reduced to lr=1e-3 for
    PromptTuning and lr=5e-4 for PrefixTuning, and increased steps from 30 to 100.
    Also made the steps parameter configurable in _run_peft.

    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-05  Fix PromptTuning/PrefixTuning cleanup crash and tune learning rates  (YurenHao0426)

    - peft_baseline.py: Fix cleanup() to handle PromptTuning/PrefixTuning,
      which don't support unload(); falls back to base_model access.
    - run_all_methods.py: Reduce lr from 0.3 to 0.01 for PromptTuning and
      from 0.01 to 0.001 for PrefixTuning. The previous lr caused R-L=0.03
      (broken output).

    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
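    The cleanup fallback described in this commit might look like the sketch
    below. It assumes Hugging Face peft's PeftModel interface, where unload()
    works for LoRA-style adapters but is unavailable for prompt-learning
    methods; the exact exception types caught are an assumption.

    ```python
    def cleanup(peft_model):
        """Return the unwrapped base model so adapter state can be released."""
        try:
            # LoRA-style adapters merge-free unload back to the base model.
            return peft_model.unload()
        except (AttributeError, NotImplementedError):
            # PromptTuning/PrefixTuning don't support unload();
            # fall back to accessing the wrapped base model directly.
            return peft_model.base_model
    ```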
2026-04-05  Add Prompt Tuning and Prefix Tuning baselines  (YurenHao0426)

    - peft_baseline.py: Add PromptTuningConfig (L=5, 10, 20) and
      PrefixTuningConfig (L=5, 10).
    - run_all_methods.py: Add 5 new methods to dispatch (prompt_tuning_5/10/20,
      prefix_tuning_5/10) with per-method directory output structure.

    Prompt Tuning: params = L * H (e.g. 10 * 1536 = 15360 params = 30KB).
    Prefix Tuning: params = L * num_layers * 2 * H (much larger).

    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
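    The parameter-count formulas above can be checked with a few lines of
    arithmetic. H=1536 comes from the commit; the 30KB figure implies 2 bytes
    per parameter (fp16/bf16), and the 28-layer count used for the Prefix
    Tuning example is an illustrative assumption, not from the commit.

    ```python
    def prompt_tuning_params(L, H):
        # One trainable embedding vector per virtual token.
        return L * H

    def prefix_tuning_params(L, num_layers, H):
        # A key and a value prefix vector per token at every transformer layer.
        return L * num_layers * 2 * H

    print(prompt_tuning_params(10, 1536))       # 15360 (~30 KB at 2 bytes/param)
    print(prefix_tuning_params(10, 28, 1536))   # 860160, ~56x larger
    ```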
2026-04-05  Add PEFT baselines, ICL baselines, profile-based, and unified pipeline  (YurenHao0426)

    New baselines:
    - baselines/peft_baseline.py: LoRA, Tiny LoRA, VeRA (per-user PEFT adaptation)
    - baselines/dense_retrieval.py: Dense-retrieval ICL (sentence-transformers)
    - baselines/profile_based.py: LLM-generated user profile conditioned generation

    New scripts:
    - scripts/run_all_methods.py: Unified pipeline running all 9 methods with
      per-method directory output structure (method/per_user.json)
    - scripts/run_peft_baselines.py: PEFT-only evaluation (legacy)
    - scripts/run_significance.py: Significance tests (UPH+Base per-user)
    - scripts/run_uph_base_per_user.py: UPH+Base with full per-user data
    - scripts/compute_bertscore.py: BERTScore from saved predictions
    - scripts/significance_test.py: Standalone significance-test framework

    Updated .gitignore to exclude the outputs/ directory.

    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
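    The per-method output layout (method/per_user.json) mentioned above could
    be written as in this sketch; the function name and the score fields are
    hypothetical, only the directory layout is from the commit.

    ```python
    import json
    from pathlib import Path

    def save_per_user(out_dir, method, per_user_scores):
        """Write per-user results under <out_dir>/<method>/per_user.json."""
        method_dir = Path(out_dir) / method
        method_dir.mkdir(parents=True, exist_ok=True)
        out_path = method_dir / "per_user.json"
        out_path.write_text(json.dumps(per_user_scores, indent=2))
        return out_path
    ```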
2026-04-03  Initial commit: UPH project codebase and experiment results  (YurenHao0426)

    Includes model code, evaluation scripts, configs, analysis outputs, and
    experiment results for the User Prior Head personalization method.

    Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>