| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-02-10 09:50:33 -0600 |
|---|---|---|
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-02-10 09:50:33 -0600 |
| commit | 039c12d3cf7178db6a7d80b02cf022d67231014e | |
| tree | b3104310bfaced0d992729f59f1a7ef2e769c6bd /scripts | |
| parent | 80579d6cc254d337a23e71404ae7ecab1849d1e5 | |
Add auto-resume checkpointing, S1/S2 configs, and experiment results
- Auto-resume: find latest checkpoint in save_dir on startup
- SIGUSR1 handler: save checkpoint before SLURM timeout
- S1 config (constant tau=5, identity init verification)
- S2 config (constant tau=2, gradient flow check)
- Experiment results tracker with S0/S1 data
- Speed estimates and experiment plan
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Diffstat (limited to 'scripts')
| -rw-r--r-- | scripts/slurm_s1.sh | 23 |
| -rw-r--r-- | scripts/slurm_s2.sh | 23 |
| -rw-r--r-- | scripts/slurm_train.sh | 4 |
3 files changed, 49 insertions, 1 deletion
diff --git a/scripts/slurm_s1.sh b/scripts/slurm_s1.sh
new file mode 100644
index 0000000..8a13e9b
--- /dev/null
+++ b/scripts/slurm_s1.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+#SBATCH --signal=SIGUSR1@120
+export HF_HOME=/projects/bfqt/users/yurenh2/hf_cache
+export TRANSFORMERS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/transformers
+export HF_HUB_CACHE=/projects/bfqt/users/yurenh2/hf_cache/hub
+export HF_DATASETS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/datasets
+export TOKENIZERS_PARALLELISM=false
+
+export PYTHONPATH=/projects/bfqt/users/yurenh2/ml-projects/DAGFormer:$PYTHONPATH
+export PATH=$HOME/.local/bin:$PATH
+
+cd /projects/bfqt/users/yurenh2/ml-projects/DAGFormer
+mkdir -p logs checkpoints/s1
+
+echo "=== Job Info ==="
+echo "Job ID: $SLURM_JOB_ID"
+echo "Node: $SLURM_NODELIST"
+echo "GPU: $(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)"
+echo ""
+
+echo "=== Starting S1: identity init training ==="
+echo " Auto-resume enabled: will pick up from latest checkpoint in checkpoints/s1/"
+python3 -u scripts/train.py --config configs/s1_identity_init.yaml
diff --git a/scripts/slurm_s2.sh b/scripts/slurm_s2.sh
new file mode 100644
index 0000000..fe00552
--- /dev/null
+++ b/scripts/slurm_s2.sh
@@ -0,0 +1,23 @@
+#!/bin/bash
+#SBATCH --signal=SIGUSR1@120
+export HF_HOME=/projects/bfqt/users/yurenh2/hf_cache
+export TRANSFORMERS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/transformers
+export HF_HUB_CACHE=/projects/bfqt/users/yurenh2/hf_cache/hub
+export HF_DATASETS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/datasets
+export TOKENIZERS_PARALLELISM=false
+
+export PYTHONPATH=/projects/bfqt/users/yurenh2/ml-projects/DAGFormer:$PYTHONPATH
+export PATH=$HOME/.local/bin:$PATH
+
+cd /projects/bfqt/users/yurenh2/ml-projects/DAGFormer
+mkdir -p logs checkpoints/s2
+
+echo "=== Job Info ==="
+echo "Job ID: $SLURM_JOB_ID"
+echo "Node: $SLURM_NODELIST"
+echo "GPU: $(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)"
+echo ""
+
+echo "=== Starting S2: gradient flow check ==="
+echo " Auto-resume enabled: will pick up from latest checkpoint in checkpoints/s2/"
+python3 -u scripts/train.py --config configs/s2_gradient_flow.yaml
diff --git a/scripts/slurm_train.sh b/scripts/slurm_train.sh
index e1df687..361ba94 100644
--- a/scripts/slurm_train.sh
+++ b/scripts/slurm_train.sh
@@ -1,4 +1,5 @@
 #!/bin/bash
+#SBATCH --signal=SIGUSR1@120
 export HF_HOME=/projects/bfqt/users/yurenh2/hf_cache
 export TRANSFORMERS_CACHE=/projects/bfqt/users/yurenh2/hf_cache/transformers
 export HF_HUB_CACHE=/projects/bfqt/users/yurenh2/hf_cache/hub
@@ -18,4 +19,5 @@ echo "GPU: $(nvidia-smi --query-gpu=name,memory.total --format=csv,noheader)"
 echo ""
 
 echo "=== Starting training ==="
-python3 -u scripts/train.py --config configs/sanity_check.yaml
+echo " Auto-resume enabled: will pick up from latest checkpoint"
+python3 -u scripts/train.py --config ${CONFIG:-configs/sanity_check.yaml}
