- src/trainers.py: removed class VanillaGrAPETrainer (~30 lines) and
  cleaned up the module-level docstring that lists the compared methods.
- experiments/run_ablation_20seeds.py: dropped the VanillaGrAPE row from the
  ablation grid; the sweep is now the 4-method BP → DFA → DFA-GNN → KAFT.
Smoke test: BPTrainer / DFATrainer / DFAGNNTrainer / KAFTTrainer all train
cleanly on a 4-layer GCN (Cora, 50 epochs; test acc 77.3 / 76.6 / 78.4 / 79.0%).
|
- src/trainers.py: renamed GraphGrAPETrainer → KAFTTrainer; updated module docstring and comments.
VanillaGrAPETrainer kept as-is (it is a separate control method, not KAFT).
- experiments/: all 19 runners pick up the new class name; result keys
  ('Cora_GRAFT', etc.) become 'Cora_KAFT'; OUT_DIRs renamed (e.g.
  bp_graft_depth_20seeds → bp_kaft_depth_20seeds).
- figures/: data-lookup keys and display labels are now both 'KAFT'; output
  filenames graft_depth_sweep.{pdf,png} → kaft_depth_sweep.{pdf,png}.
- File rename: experiments/run_bp_graft_depth.py → run_bp_kaft_depth.py;
figures/graft_depth_sweep.pdf → kaft_depth_sweep.pdf.
- README updated to match.
Imports verified: from src.trainers import KAFTTrainer succeeds.
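The result-key migration described above amounts to a suffix substitution. A minimal sketch of that mapping (rename_result_keys is a hypothetical helper for illustration; the actual runners may rename keys inline):

```python
import re

def rename_result_keys(results: dict) -> dict:
    # Map legacy '*_GRAFT' result keys to the new '*_KAFT' naming;
    # keys without the _GRAFT suffix (e.g. 'Cora_BP') pass through unchanged.
    return {re.sub(r"_GRAFT$", "_KAFT", key): val for key, val in results.items()}

legacy = {"Cora_GRAFT": 0.790, "Cora_BP": 0.773}
print(rename_result_keys(legacy))  # {'Cora_KAFT': 0.79, 'Cora_BP': 0.773}
```

Anchoring the pattern with `$` keeps the substitution from touching any key where 'GRAFT' appears mid-string.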
|
Topology-factorized Jacobian-aligned feedback for deep GNNs. Includes:
- src/: GraphGrAPETrainer (KAFT) + BP / DFA / DFA-GNN / VanillaGrAPE baselines
+ multi-probe alignment estimator + dataset / sparse-mm utilities.
- experiments/: 19 runners reproducing every figure / table in the paper.
- figures/: 4 generators + the 4 PDFs cited in the report.
- paper/: NeurIPS .tex and consolidated experiments_master notes.
Smoke test: 50-epoch Cora GCN L=4 gives BP 77.3% / KAFT 79.0%.