<feed xmlns='http://www.w3.org/2005/Atom'>
<title>uph.git/scripts/run_all_methods.py, branch master</title>
<subtitle>Unnamed repository; edit this file 'description' to name the repository.
</subtitle>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/'/>
<entry>
<title>Fix VeRA crash: reload model fresh before each PEFT method</title>
<updated>2026-04-13T05:22:32+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-13T05:22:32+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=3b2b49845a256fcabc55af789562ca034bb69ebe'/>
<id>3b2b49845a256fcabc55af789562ca034bb69ebe</id>
<content type='text'>
Root cause: get_peft_model() modifies the model in place. After LoRA/TinyLoRA
cleanup, the model's modules are altered, so VeRA can't find its target_modules.
Fix: reload AutoModelForCausalLM from scratch before each PEFT method.
Slower but reliable — no more cross-contamination between PEFT methods.
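
A minimal sketch of the new flow, assuming a load_model factory that wraps
AutoModelForCausalLM.from_pretrained (run_all and the helper names here are
illustrative, not the script's actual API):

```python
# Illustrative fix: because adapter wrapping mutates the model in place,
# hand every PEFT method a freshly constructed model instead of a shared one.
def run_all(methods, load_model):
    results = {}
    for name, apply_adapter in methods.items():
        model = load_model()  # fresh weights per method: no cross-contamination
        results[name] = apply_adapter(model)
    return results
```

In the real script, load_model would call AutoModelForCausalLM.from_pretrained
each time, trading reload time for isolation between methods.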

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Crash-safe incremental saving: never lose data again</title>
<updated>2026-04-11T21:41:19+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-11T21:41:19+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=bfdcc36c0e31adfa95410ce87e7da646e0b948fe'/>
<id>bfdcc36c0e31adfa95410ce87e7da646e0b948fe</id>
<content type='text'>
Major rewrite of run_all_methods.py:
- Each example appends to progress.jsonl immediately (crash-safe)
- per_user.json written after each method completes (not at end of script)
- Resume support: re-running skips already-complete methods, resumes
  partially-complete ones from the JSONL checkpoint
- is_method_complete() checks existing per_user.json before running
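
The checkpointing pattern can be sketched as follows (append_progress and
load_done_ids are illustrative helper names, not necessarily the script's own):

```python
# Illustrative crash-safe checkpointing: one JSON line per example,
# flushed and fsync'd immediately, plus id recovery for resume.
import json
import os

def append_progress(path, record):
    # A crash loses at most the record currently being written.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
        f.flush()
        os.fsync(f.fileno())

def load_done_ids(path):
    # Resume support: ids already present in the JSONL checkpoint are skipped.
    if not os.path.exists(path):
        return set()
    with open(path) as f:
        return {json.loads(line)["id"] for line in f if line.strip()}
```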

Also includes previous fixes:
- peft_baseline.py: save original model ref, restore in cleanup()
- fit_theta.py: CHUNK_SIZE 128→32 for K=16 OOM fix

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Add --d parameter for UPH theta dimension ablation</title>
<updated>2026-04-06T01:05:31+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-06T01:05:31+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=86a7ef5b8d12cea1032602f30c18d52392f1cc42'/>
<id>86a7ef5b8d12cea1032602f30c18d52392f1cc42</id>
<content type='text'>
Supports d=8,16,32,64,128. UPH with non-default d saves to uph_d{d}/
directory. MethodRunner passes d to UnconditionalHead and fit_theta.
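
Hedged sketch of the wiring (the --d flag, its value grid, and the uph_d{d}/
scheme are from this commit; the helper names are made up for illustration):

```python
import argparse

def parse_d(argv):
    # --d selects the UPH theta dimension for the ablation
    parser = argparse.ArgumentParser()
    parser.add_argument("--d", type=int, choices=[8, 16, 32, 64, 128])
    return parser.parse_args(argv).d

def uph_out_dir(d):
    # non-default theta dimensions get their own output directory
    return f"uph_d{d}"
```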

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Tune PromptTuning/PrefixTuning hyperparams: lr=1e-3/5e-4, steps=100</title>
<updated>2026-04-05T23:10:38+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T23:10:38+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=107155532f2683198aac33c9bb3bd647d357a80a'/>
<id>107155532f2683198aac33c9bb3bd647d357a80a</id>
<content type='text'>
Previous lr=0.01 gave R-L=0.01 (broken output). Reduced lr to 1e-3 for
PromptTuning and 5e-4 for PrefixTuning, and increased steps from 30 to 100.
Also made steps parameter configurable in _run_peft.
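
The resulting settings, written out as a config table (the dict layout is
illustrative; the values and method names follow the commit):

```python
# Tuned hyperparameters from this commit; steps is now a parameter of
# _run_peft rather than a hard-coded 30.
PEFT_HPARAMS = {
    "prompt_tuning": {"lr": 1e-3, "steps": 100},
    "prefix_tuning": {"lr": 5e-4, "steps": 100},
}
```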

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Fix PromptTuning/PrefixTuning cleanup crash and tune learning rates</title>
<updated>2026-04-05T22:41:30+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T22:41:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=ab5fae5decb7d24aafd16d855885c1c99e51cf7f'/>
<id>ab5fae5decb7d24aafd16d855885c1c99e51cf7f</id>
<content type='text'>
- peft_baseline.py: Fix cleanup() to handle PromptTuning/PrefixTuning
  which don't support unload(). Falls back to base_model access.
- run_all_methods.py: Reduce lr from 0.3 to 0.01 for PromptTuning,
  0.01 to 0.001 for PrefixTuning. Previous lr caused R-L=0.03 (broken).
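
A sketch of the fallback, assuming only LoRA-style adapters expose unload()
and prompt-based wrappers raise on it (cleanup's exact signature here is
hypothetical):

```python
def cleanup(peft_model, original_model):
    # LoRA-style adapters can be unloaded to recover the base model.
    try:
        return peft_model.unload()
    except (AttributeError, ValueError, NotImplementedError):
        # PromptTuning/PrefixTuning: fall back to the saved base-model reference.
        return original_model
```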

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Add Prompt Tuning and Prefix Tuning baselines</title>
<updated>2026-04-05T21:20:20+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T21:20:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=6139a848c3b9d5d6c1322cf8acadf2baacee9e8a'/>
<id>6139a848c3b9d5d6c1322cf8acadf2baacee9e8a</id>
<content type='text'>
- peft_baseline.py: Add PromptTuningConfig (L=5,10,20) and PrefixTuningConfig (L=5,10)
- run_all_methods.py: Add 5 new methods to dispatch (prompt_tuning_5/10/20, prefix_tuning_5/10)
  with per-method directory output structure

Prompt Tuning: params = L * H (e.g. 10*1536 = 15360 params = 30KB)
Prefix Tuning: params = L * num_layers * 2 * H (much larger)
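
A quick sanity check of those counts (the 30KB figure assumes fp16, i.e.
2 bytes per parameter, which the message doesn't state explicitly):

```python
def prompt_tuning_params(L, H):
    # one trained embedding of width H per virtual token
    return L * H

def prefix_tuning_params(L, num_layers, H):
    # a key and a value vector per layer per prefix token
    return L * num_layers * 2 * H

# 10 * 1536 = 15360 params; at 2 bytes each that is 30720 bytes, about 30KB
```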

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Add PEFT baselines, ICL baselines, profile-based, and unified pipeline</title>
<updated>2026-04-05T15:31:36+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T15:31:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=ea4a8f837e81b5e5fab6086cb3014c711c5e58e9'/>
<id>ea4a8f837e81b5e5fab6086cb3014c711c5e58e9</id>
<content type='text'>
New baselines:
- baselines/peft_baseline.py: LoRA, Tiny LoRA, VeRA (per-user PEFT adaptation)
- baselines/dense_retrieval.py: Dense retrieval ICL (sentence-transformers)
- baselines/profile_based.py: LLM-generated user profile conditioned generation

New scripts:
- scripts/run_all_methods.py: Unified pipeline running all 9 methods with
  per-method directory output structure (method/per_user.json)
- scripts/run_peft_baselines.py: PEFT-only evaluation (legacy)
- scripts/run_significance.py: Significance tests (UPH+Base per-user)
- scripts/run_uph_base_per_user.py: UPH+Base with full per-user data
- scripts/compute_bertscore.py: BERTScore from saved predictions
- scripts/significance_test.py: Standalone significance test framework

Updated .gitignore to exclude outputs/ directory.

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
</feed>
