| field | value | date |
|---|---|---|
| author | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
| committer | YurenHao0426 <blackhao0426@gmail.com> | 2026-01-13 23:50:59 -0600 |
| commit | 00cf667cee7ffacb144d5805fc7e0ef443f3583a (patch) | |
| tree | 77d20a3adaecf96bf3aff0612bdd3b5fa1a7dc7e /runs/slurm_logs/14632859_speedup.err | |
| parent | c53c04aa1d6ff75cb478a9498c370baa929c74b6 (diff) | |
| parent | cd99d6b874d9d09b3bb87b8485cc787885af71f1 (diff) | |
Merge master into main
Diffstat (limited to 'runs/slurm_logs/14632859_speedup.err')
| mode | file | lines |
|---|---|---|
| -rw-r--r-- | runs/slurm_logs/14632859_speedup.err | 12 |

1 file changed, 12 insertions, 0 deletions
diff --git a/runs/slurm_logs/14632859_speedup.err b/runs/slurm_logs/14632859_speedup.err
new file mode 100644
index 0000000..3d75acb
--- /dev/null
+++ b/runs/slurm_logs/14632859_speedup.err
@@ -0,0 +1,12 @@
+[2025-12-30T06:43:27.005] error: Prolog hung on node gpub073
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] torch._dynamo hit config.recompile_limit (8)
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] function: '_base_sub' (/u/yurenh2/.local/lib/python3.9/site-packages/snntorch/_neurons/leaky.py:242)
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] last reason: 5/7: tensor 'input_' size mismatch at index 2. expected 128, actual 256
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
+Traceback (most recent call last):
+  File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 638, in <module>
+    run_scaling_test(args.device)
+  File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 596, in run_scaling_test
+    model_base = BaselineSNN(**cfg).to(device)
+TypeError: __init__() got an unexpected keyword argument 'batch_size'
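The log records two distinct problems, both of which have conventional mitigations. The recompile warnings show torch._dynamo re-tracing snnTorch's `_base_sub` once per distinct size of `input_` at dim 2 (128 vs. 256) until it hit `config.recompile_limit` (8). A minimal sketch, assuming the scaling test sweeps layer widths and that tracing with symbolic shapes is acceptable for this benchmark (the function names below are illustrative, not part of the repo):

```python
import torch

def compile_dynamic(model: torch.nn.Module) -> torch.nn.Module:
    # dynamic=True asks torch.compile to trace with symbolic shapes instead of
    # specializing on the first concrete size, avoiding per-width recompiles.
    return torch.compile(model, dynamic=True)

def mark_width_dynamic(x: torch.Tensor) -> torch.Tensor:
    # Alternatively, mark only the dimension that varies (index 2 in the warning)
    # as dynamic so the remaining dimensions stay specialized.
    torch._dynamo.mark_dynamic(x, 2)
    return x
```

The fatal error is a `TypeError`: the benchmark unpacks a config dict into `BaselineSNN(**cfg)`, but the dict carries a `batch_size` key the constructor does not accept. A hypothetical workaround (the `filter_kwargs` helper is not in the repo) is to filter the dict against the constructor's signature before unpacking it:

```python
import inspect

def filter_kwargs(cls, cfg: dict) -> dict:
    """Keep only the keys that cls.__init__ explicitly accepts."""
    params = inspect.signature(cls.__init__).parameters
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(cfg)  # __init__ takes **kwargs; pass everything through
    return {k: v for k, v in cfg.items() if k in params and k != "self"}

# Usage, with the names taken from the traceback:
# model_base = BaselineSNN(**filter_kwargs(BaselineSNN, cfg)).to(device)
```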
