| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-02-09 11:00:39 -0600 |
|---|---|---|
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-02-09 11:00:39 -0600 |
| commit | 13ddc8dc583d8b1355909970cb8c27f85b7d3c8b (patch) | |
| tree | 073534138604c1c49021ca7e334322262129f6ac /scripts/slurm_train.sh | |
Initial implementation: DAGFormer Phase 1
- olmo_graph.py: Modified OLMo2-1B forward with per-head routing via 256x256 adjacency matrix A
- Proportional attribution for post-norm decomposition
- All 6 GPU sanity checks pass (baseline diff = 0.000001)
- predictor.py: Qwen3-Embedding encoder + MLP decoder + Gumbel-Sigmoid + cascading gate (see the sketch after this message)
- pipeline.py: End-to-end glue (predictor -> A -> OLMo -> NLL)
- trainer.py: Full training loop with DDP, gradient accumulation, eval, checkpointing
- dolma.py: Streaming Dolma v1.7 with sequence packing
- 43/43 unit tests pass
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
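The predictor bullet above compresses several moving parts, so here is a minimal sketch of the Gumbel-Sigmoid step only, assuming a pooled prompt embedding as input and a 256x256 head-to-head adjacency matrix A as output (the same A the modified OLMo forward consumes as a per-head routing mask). The class name, dimensions, and MLP shape are hypothetical; the Qwen3-Embedding encoder and cascading gate are omitted, and this is not the repository's actual predictor.py.

```python
# Hypothetical sketch of a Gumbel-Sigmoid adjacency predictor (illustrative only,
# not the DAGFormer predictor.py). It maps a prompt embedding to a 256x256
# adjacency matrix A via a differentiable, approximately binary relaxation.
import torch
import torch.nn as nn

NUM_HEADS = 256  # A is 256x256, as described in the commit message


class AdjacencyPredictor(nn.Module):
    def __init__(self, embed_dim: int = 1024, hidden_dim: int = 512):
        super().__init__()
        # MLP decoder: prompt embedding -> one logit per (source, target) head pair.
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, NUM_HEADS * NUM_HEADS),
        )

    def forward(self, prompt_emb: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
        # prompt_emb: (batch, embed_dim), e.g. a pooled encoder embedding.
        logits = self.decoder(prompt_emb).view(-1, NUM_HEADS, NUM_HEADS)
        if self.training:
            # Gumbel-Sigmoid (binary-concrete) relaxation: add logistic noise,
            # then squash, so edge sampling stays differentiable w.r.t. the logits.
            u = torch.rand_like(logits).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log1p(-u)
            return torch.sigmoid((logits + noise) / tau)
        # At eval time, threshold to hard 0/1 edges.
        return (logits > 0).float()


# Usage: a random vector stands in for the Qwen3-Embedding output.
predictor = AdjacencyPredictor()
A = predictor(torch.randn(2, 1024))  # (2, 256, 256), soft values in (0, 1)
print(A.shape, A.min().item(), A.max().item())
```

During training the logistic-noise relaxation keeps edge sampling differentiable with respect to the logits, and annealing `tau` pushes samples toward hard 0/1 edges, which the eval branch approximates with a plain threshold.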
Diffstat (limited to 'scripts/slurm_train.sh')
| -rw-r--r-- | scripts/slurm_train.sh | 30 |
1 file changed, 30 insertions, 0 deletions
diff --git a/scripts/slurm_train.sh b/scripts/slurm_train.sh
new file mode 100644
index 0000000..6b283ea
--- /dev/null
+++ b/scripts/slurm_train.sh
@@ -0,0 +1,30 @@
+#!/bin/bash
+#SBATCH --partition=gpuA40x4
+#SBATCH --account=bfqt-delta-gpu
+#SBATCH --nodes=1
+#SBATCH --gpus-per-node=1
+#SBATCH --time=02:00:00
+#SBATCH --mem=64g
+#SBATCH --job-name=dagformer-sanity
+#SBATCH --output=logs/sanity_%j.out
+#SBATCH --error=logs/sanity_%j.err
+
+export HF_HOME=/projects/bfqt/users/yurenh2/hf_cache
+export TRANSFORMERS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/transformers
+export HF_HUB_CACHE=/projects/bfqt/users/yurenh2/hf_cache/hub
+export HF_DATASETS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/datasets
+
+export PYTHONPATH=/projects/bfqt/users/yurenh2/ml-projects/DAGFormer:$PYTHONPATH
+export PATH=$HOME/.local/bin:$PATH
+
+cd /projects/bfqt/users/yurenh2/ml-projects/DAGFormer
+mkdir -p logs checkpoints
+
+echo "=== Job Info ==="
+echo "Job ID: $SLURM_JOB_ID"
+echo "Node: $SLURM_NODELIST"
+echo "GPU: $(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)"
+echo ""
+
+echo "=== Starting training ==="
+python3 scripts/train.py --config configs/sanity_check.yaml
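For reference, on a cluster where the `gpuA40x4` partition and `bfqt-delta-gpu` account exist, this job would be submitted from the repository root with `sbatch scripts/slurm_train.sh` and followed with `squeue -u $USER` or `tail -f logs/sanity_<jobid>.out`. Note that SLURM opens the `--output`/`--error` files before the script body runs, so `logs/` should already exist at submission time; the `mkdir -p logs checkpoints` inside the script only helps later steps and subsequent runs. The absolute project paths and Hugging Face cache locations are specific to this user's environment and would need to be adapted on another system.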
