<feed xmlns='http://www.w3.org/2005/Atom'>
<title>uph.git/baselines, branch master</title>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/'/>
<entry>
<title>Fix two bugs: PEFT cleanup model corruption and K=16 OOM</title>
<updated>2026-04-10T19:50:22+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-10T19:50:22+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=112c5d354f36d6ea6e8049cf1aeaebeb9944aa02'/>
<id>112c5d354f36d6ea6e8049cf1aeaebeb9944aa02</id>
<content type='text'>
Bug 1: PEFTBaseline.cleanup() corrupted wrapper.model after LoRA unload,
causing 'Qwen2Model has no attribute prepare_inputs_for_generation'
errors for every subsequent method. Fix: save a reference to the
original model before wrapping, and restore that reference directly in
cleanup() instead of relying on unload().
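
A minimal sketch of the save-and-restore pattern (class and attribute
names here are illustrative stand-ins, not the project's actual API):

```python
class ModelWrapper:
    # Stand-in for the project's wrapper object that holds a base model.
    def __init__(self, model):
        self.model = model

class PEFTBaselineSketch:
    def __init__(self, wrapper):
        self.wrapper = wrapper
        # Save a reference BEFORE PEFT wrapping mutates wrapper.model.
        self._original_model = wrapper.model

    def attach_adapter(self):
        # PEFT wrapping replaces wrapper.model (details elided).
        self.wrapper.model = ("peft-wrapped", self._original_model)

    def cleanup(self):
        # Restore the saved reference directly; relying on unload()
        # can leave a bare inner module (e.g. Qwen2Model) behind.
        self.wrapper.model = self._original_model
```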

Bug 2: fit_theta ran out of GPU memory at K=16 due to large logit
chunks (128 positions × a 151936-entry vocabulary). Fix: reduce
CHUNK_SIZE from 128 to 32 (4x less memory per chunk).
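
Back-of-envelope arithmetic for the chunk sizes above (fp32 logits
assumed; the actual dtype is not stated in the commit):

```python
VOCAB_SIZE = 151936      # Qwen2 vocabulary size
BYTES_PER_LOGIT = 4      # assuming fp32 logits

def logit_chunk_bytes(chunk_size):
    # One chunk of logits is chunk_size positions x VOCAB_SIZE entries.
    return chunk_size * VOCAB_SIZE * BYTES_PER_LOGIT

old_chunk = logit_chunk_bytes(128)  # 77,791,232 bytes, ~78 MB per chunk
new_chunk = logit_chunk_bytes(32)   # 19,447,808 bytes, ~19 MB, 4x smaller
```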

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Fix PromptTuning/PrefixTuning cleanup crash and tune learning rates</title>
<updated>2026-04-05T22:41:30+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T22:41:30+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=ab5fae5decb7d24aafd16d855885c1c99e51cf7f'/>
<id>ab5fae5decb7d24aafd16d855885c1c99e51cf7f</id>
<content type='text'>
- peft_baseline.py: Fix cleanup() to handle PromptTuning/PrefixTuning,
  which don't support unload(); fall back to accessing base_model directly.
- run_all_methods.py: Reduce lr from 0.3 to 0.01 for PromptTuning and
  from 0.01 to 0.001 for PrefixTuning. The previous rates drove ROUGE-L
  down to 0.03 (degenerate output).

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Add Prompt Tuning and Prefix Tuning baselines</title>
<updated>2026-04-05T21:20:20+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T21:20:20+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=6139a848c3b9d5d6c1322cf8acadf2baacee9e8a'/>
<id>6139a848c3b9d5d6c1322cf8acadf2baacee9e8a</id>
<content type='text'>
- peft_baseline.py: Add PromptTuningConfig (L=5,10,20) and PrefixTuningConfig (L=5,10)
- run_all_methods.py: Add 5 new methods to dispatch (prompt_tuning_5/10/20, prefix_tuning_5/10)
  with per-method directory output structure

Prompt Tuning: params = L * H (e.g. 10*1536 = 15360 params, ~30KB at fp16)
Prefix Tuning: params = L * num_layers * 2 * H (much larger)
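
The formulas above, worked through in code (H=1536 matches a
Qwen2-1.5B-class model; the layer count of 28 is an assumption for
illustration, not stated in the commit):

```python
L = 10           # number of virtual tokens
H = 1536         # hidden size
NUM_LAYERS = 28  # assumed; depends on the base model

prompt_params = L * H                   # 15,360 trainable params
prefix_params = L * NUM_LAYERS * 2 * H  # 860,160 (key + value per layer)

# At 2 bytes per param (fp16), prompt tuning stores ~30 KB per user.
prompt_kb = prompt_params * 2 / 1024    # 30.0
```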

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Add PEFT baselines, ICL baselines, profile-based, and unified pipeline</title>
<updated>2026-04-05T15:31:36+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-05T15:31:36+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=ea4a8f837e81b5e5fab6086cb3014c711c5e58e9'/>
<id>ea4a8f837e81b5e5fab6086cb3014c711c5e58e9</id>
<content type='text'>
New baselines:
- baselines/peft_baseline.py: LoRA, Tiny LoRA, VeRA (per-user PEFT adaptation)
- baselines/dense_retrieval.py: Dense retrieval ICL (sentence-transformers)
- baselines/profile_based.py: LLM-generated user profile conditioned generation
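
As a rough, dependency-free illustration of what dense-retrieval ICL
selection does (the actual baseline uses sentence-transformers
embeddings; the toy vectors and function names below are hypothetical):

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k_examples(query_vec, corpus_vecs, k=2):
    # Rank the user's history by similarity to the query embedding,
    # then take the top-k items as in-context examples.
    ranked = sorted(range(len(corpus_vecs)),
                    key=lambda i: cosine(query_vec, corpus_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

In the actual pipeline the retrieved items would then be formatted into
the prompt as few-shot examples.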

New scripts:
- scripts/run_all_methods.py: Unified pipeline running all 9 methods with
  per-method directory output structure (method/per_user.json)
- scripts/run_peft_baselines.py: PEFT-only evaluation (legacy)
- scripts/run_significance.py: Significance tests (UPH+Base per-user)
- scripts/run_uph_base_per_user.py: UPH+Base with full per-user data
- scripts/compute_bertscore.py: BERTScore from saved predictions
- scripts/significance_test.py: Standalone significance test framework

Updated .gitignore to exclude outputs/ directory.

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
<entry>
<title>Initial commit: UPH project codebase and experiment results</title>
<updated>2026-04-03T20:12:34+00:00</updated>
<author>
<name>YurenHao0426</name>
<email>Blackhao0426@gmail.com</email>
</author>
<published>2026-04-03T20:12:34+00:00</published>
<link rel='alternate' type='text/html' href='https://git.blackhao.com/uph.git/commit/?id=8fe28101366dd32562b8c5534d7fe359b252bdf3'/>
<id>8fe28101366dd32562b8c5534d7fe359b252bdf3</id>
<content type='text'>
Includes model code, evaluation scripts, configs, analysis outputs,
and experiment results for the User Prior Head personalization method.

Co-Authored-By: Claude Opus 4.6 (1M context) &lt;noreply@anthropic.com&gt;
</content>
</entry>
</feed>
