|
Root cause: get_peft_model() modifies the model in-place. After LoRA/TinyLoRA
cleanup, the model's modules are altered, so VeRA cannot find its target_modules.
Fix: reload AutoModelForCausalLM from scratch before each PEFT method.
Slower but reliable; no more cross-contamination between PEFT methods.
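A minimal sketch of the pattern, assuming a Qwen2.5-1.5B-class checkpoint and a
hypothetical run_method() runner (neither is named in this message):

    import torch
    from transformers import AutoModelForCausalLM

    BASE = "Qwen/Qwen2.5-1.5B-Instruct"  # assumed checkpoint

    def fresh_base_model():
        # Reload from scratch so get_peft_model()'s in-place module surgery from a
        # previous method (e.g. LoRA) cannot leak into the next one (e.g. VeRA).
        return AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

    def run_method(method: str, model) -> None:
        ...  # stand-in for the real per-method train/eval loop

    for method in ("lora", "tiny_lora", "vera"):
        model = fresh_base_model()   # slower, but no cross-contamination
        run_method(method, model)
        del model
        torch.cuda.empty_cache()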
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
Major rewrite of run_all_methods.py:
- Each example appends to progress.jsonl immediately (crash-safe)
- per_user.json is written after each method completes (not at the end of the script)
- Resume support: re-running skips already-complete methods and resumes
  partially-complete ones from the JSONL checkpoint (see the sketch below)
- is_method_complete() checks for an existing per_user.json before running
Also includes previous fixes:
- peft_baseline.py: save the original model ref, restore it in cleanup()
- fit_theta.py: reduce CHUNK_SIZE from 128 to 32 to fix OOM at K=16
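A minimal sketch of the checkpoint/resume logic, with illustrative record fields
and helper names (the real ones aren't fully specified here):

    import json, os

    def is_method_complete(method_dir):
        # per_user.json is only written once a method finishes.
        return os.path.exists(os.path.join(method_dir, "per_user.json"))

    def run_with_checkpoint(method_dir, examples, generate):
        progress = os.path.join(method_dir, "progress.jsonl")
        done = set()
        if os.path.exists(progress):          # resume: collect already-scored example ids
            with open(progress) as f:
                done = {json.loads(line)["id"] for line in f}
        with open(progress, "a") as f:
            for ex in examples:
                if ex["id"] in done:
                    continue
                rec = {"id": ex["id"], "prediction": generate(ex)}
                f.write(json.dumps(rec) + "\n")
                f.flush()                     # crash-safe: record hits disk immediately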
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
Supports d=8,16,32,64,128. UPH with non-default d saves to uph_d{d}/
directory. MethodRunner passes d to UnconditionalHead and fit_theta.
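A sketch of the directory rule; which d counts as the default is an assumption here:

    from pathlib import Path

    def uph_output_dir(out_root, d, default_d=8):
        # Non-default d gets its own directory (uph_d16/, uph_d32/, ...).
        name = "uph" if d == default_d else f"uph_d{d}"
        return Path(out_root) / name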
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
Previous lr=0.01 gave ROUGE-L=0.01 (broken output). Reduced to lr=1e-3 for
PromptTuning and lr=5e-4 for PrefixTuning; increased steps from 30 to 100.
Also made the steps parameter configurable in _run_peft (sketch below).
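A guess at the shape of the change; the real _run_peft signature isn't recorded
in this log:

    import torch
    from itertools import cycle, islice
    from peft import get_peft_model

    PEFT_HPARAMS = {                                # values from this commit
        "prompt_tuning": dict(lr=1e-3, steps=100),  # was lr=0.01, 30 steps
        "prefix_tuning": dict(lr=5e-4, steps=100),
    }

    def _run_peft(base_model, peft_config, batches, lr, steps=100):
        # steps is now a parameter rather than a hard-coded 30.
        model = get_peft_model(base_model, peft_config)
        trainable = (p for p in model.parameters() if p.requires_grad)
        opt = torch.optim.AdamW(trainable, lr=lr)
        model.train()
        for batch in islice(cycle(batches), steps):  # batches assumed to carry labels
            loss = model(**batch).loss
            loss.backward()
            opt.step()
            opt.zero_grad()
        return model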
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
- peft_baseline.py: Fix cleanup() to handle PromptTuning/PrefixTuning, which
  don't support unload(); falls back to base_model access (sketch below).
- run_all_methods.py: Reduce lr from 0.3 to 0.01 for PromptTuning and from
  0.01 to 0.001 for PrefixTuning. The previous lr caused ROUGE-L=0.03 (broken output).
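A minimal sketch of the fallback; the exact exception raised by unload() here
is an assumption:

    from peft import PeftModel

    def cleanup(model):
        # Recover the plain transformer from a PeftModel after evaluation.
        if not isinstance(model, PeftModel):
            return model
        try:
            return model.unload()            # works for LoRA/VeRA-style adapters
        except (AttributeError, ValueError):
            # Prompt-learning methods can't unload(); for them base_model
            # is the unmodified underlying model.
            return model.base_model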
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
- peft_baseline.py: Add PromptTuningConfig (L=5,10,20) and PrefixTuningConfig (L=5,10)
- run_all_methods.py: Add 5 new methods to the dispatch (prompt_tuning_5/10/20,
  prefix_tuning_5/10) with per-method directory output structure
Prompt Tuning: params = L * H (e.g. 10 * 1536 = 15360 params, ~30 KB at fp16)
Prefix Tuning: params = L * num_layers * 2 * H (much larger); see the sketch below
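The configs and the parameter math, assuming Qwen2.5-1.5B-class dims (H=1536,
28 layers; only H appears in this message):

    from peft import PrefixTuningConfig, PromptTuningConfig, TaskType

    H, N_LAYERS = 1536, 28  # assumed model dims

    prompt_cfgs = {L: PromptTuningConfig(task_type=TaskType.CAUSAL_LM,
                                         num_virtual_tokens=L) for L in (5, 10, 20)}
    prefix_cfgs = {L: PrefixTuningConfig(task_type=TaskType.CAUSAL_LM,
                                         num_virtual_tokens=L) for L in (5, 10)}

    for L in (5, 10, 20):
        print(f"prompt_tuning_{L}: {L * H} params")                 # 10*1536 = 15360
    for L in (5, 10):
        print(f"prefix_tuning_{L}: {L * N_LAYERS * 2 * H} params")  # 5*28*2*1536 = 430080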
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|
|
New baselines:
- baselines/peft_baseline.py: LoRA, Tiny LoRA, VeRA (per-user PEFT adaptation)
- baselines/dense_retrieval.py: Dense retrieval ICL (sentence-transformers)
- baselines/profile_based.py: Generation conditioned on an LLM-generated user profile
New scripts:
- scripts/run_all_methods.py: Unified pipeline running all 9 methods with
  per-method directory output structure (method/per_user.json; sketch below)
- scripts/run_peft_baselines.py: PEFT-only evaluation (legacy)
- scripts/run_significance.py: Significance tests (UPH+Base per-user)
- scripts/run_uph_base_per_user.py: UPH+Base with full per-user data
- scripts/compute_bertscore.py: BERTScore from saved predictions
- scripts/significance_test.py: Standalone significance test framework
Updated .gitignore to exclude outputs/ directory.
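A minimal sketch of the per-method output convention; the helper name is
hypothetical:

    import json
    from pathlib import Path

    def write_per_user(out_root, method, per_user_results):
        # Layout: <out_root>/<method>/per_user.json
        method_dir = Path(out_root) / method
        method_dir.mkdir(parents=True, exist_ok=True)
        (method_dir / "per_user.json").write_text(json.dumps(per_user_results, indent=2))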
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
|