path: root/runs/slurm_logs/14632859_speedup.err
Diffstat (limited to 'runs/slurm_logs/14632859_speedup.err')
-rw-r--r--  runs/slurm_logs/14632859_speedup.err | 12
1 file changed, 12 insertions(+), 0 deletions(-)
diff --git a/runs/slurm_logs/14632859_speedup.err b/runs/slurm_logs/14632859_speedup.err
new file mode 100644
index 0000000..3d75acb
--- /dev/null
+++ b/runs/slurm_logs/14632859_speedup.err
@@ -0,0 +1,12 @@
+[2025-12-30T06:43:27.005] error: Prolog hung on node gpub073
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] torch._dynamo hit config.recompile_limit (8)
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] function: '_base_sub' (/u/yurenh2/.local/lib/python3.9/site-packages/snntorch/_neurons/leaky.py:242)
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] last reason: 5/7: tensor 'input_' size mismatch at index 2. expected 128, actual 256
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
+W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
+Traceback (most recent call last):
+ File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 638, in <module>
+ run_scaling_test(args.device)
+ File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 596, in run_scaling_test
+ model_base = BaselineSNN(**cfg).to(device)
+TypeError: __init__() got an unexpected keyword argument 'batch_size'
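The repeated `torch._dynamo` warnings above report that compilation hit `config.recompile_limit (8)`: Dynamo guards each compiled version of `_base_sub` on the observed tensor sizes, and the benchmark fed inputs whose size at index 2 varied (128 vs. 256), forcing a fresh recompile per shape until the limit was reached and the function fell back to eager execution. A minimal pure-Python analogy of that caching behavior (not the real `torch._dynamo` implementation) is:

```python
# Analogy for "hit config.recompile_limit (8)": one specialized version is
# cached per observed input shape; once too many distinct shapes appear,
# the compiler gives up and runs the function uncompiled (eager).
RECOMPILE_LIMIT = 8

def make_compiled(fn):
    cache = {}          # one entry per input-shape "guard"
    def wrapper(x):
        shape = len(x)  # stand-in for a tensor's size at some dimension
        if shape not in cache:
            if len(cache) >= RECOMPILE_LIMIT:
                return fn(x)      # limit hit: fall back to eager
            cache[shape] = fn     # "recompile" for this new shape
        return cache[shape](x)
    wrapper.cache = cache
    return wrapper

@make_compiled
def total(x):
    return sum(x)

# Nine distinct lengths: the ninth exceeds the limit and runs eagerly.
for n in range(1, 10):
    total([1] * n)
print(len(total.cache))  # -> 8
```

In the real benchmark, the usual remedies are to compile with `torch.compile(dynamic=True)` or to mark the varying dimension with `torch._dynamo.mark_dynamic`, so one compiled graph covers all batch sizes instead of one per size.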
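The crash itself is the `TypeError` at the end: `run_scaling_test` expands the whole config dict into `BaselineSNN(**cfg)`, but the constructor does not accept `batch_size`. One defensive fix is to filter the dict against the constructor's signature before unpacking. A sketch, using a stand-in `BaselineSNN` with a hypothetical signature (the real one is not shown in the log):

```python
import inspect

# Stand-in for the real BaselineSNN from lyapunov_speedup_benchmark.py;
# the parameter names here are hypothetical.
class BaselineSNN:
    def __init__(self, num_inputs, num_hidden, beta=0.9):
        self.num_inputs = num_inputs
        self.num_hidden = num_hidden
        self.beta = beta

def filtered_kwargs(cls, cfg):
    """Keep only the keys that cls.__init__ actually accepts."""
    params = inspect.signature(cls.__init__).parameters
    return {k: v for k, v in cfg.items() if k in params}

cfg = {"num_inputs": 784, "num_hidden": 128, "batch_size": 256}

# BaselineSNN(**cfg) would raise:
#   TypeError: __init__() got an unexpected keyword argument 'batch_size'
model = BaselineSNN(**filtered_kwargs(BaselineSNN, cfg))
print(model.num_hidden)  # -> 128
```

Alternatively, `batch_size` could simply be popped from `cfg` before construction; the signature-based filter just avoids hard-coding which keys the model does not take.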