[2025-12-30T06:43:27.005] error: Prolog hung on node gpub073
W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] torch._dynamo hit config.recompile_limit (8)
W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8]    function: '_base_sub' (/u/yurenh2/.local/lib/python3.9/site-packages/snntorch/_neurons/leaky.py:242)
W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8]    last reason: 5/7: tensor 'input_' size mismatch at index 2. expected 128, actual 256
W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To log all recompilation reasons, use TORCH_LOGS="recompiles".
W1230 06:50:52.141555 1118861 torch/_dynamo/convert_frame.py:1016] [5/8] To diagnose recompilation issues, see https://pytorch.org/docs/main/torch.compiler_troubleshooting.html.
Traceback (most recent call last):
  File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 638, in <module>
    run_scaling_test(args.device)
  File "/projects/bfqt/users/yurenh2/ml-projects/snn-training/files/experiments/lyapunov_speedup_benchmark.py", line 596, in run_scaling_test
    model_base = BaselineSNN(**cfg).to(device)
TypeError: __init__() got an unexpected keyword argument 'batch_size'
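
A minimal sketch of how one might address both failures, assuming a stand-in TinySNN class and an illustrative cfg (neither is the benchmark's actual BaselineSNN or config): filtering cfg against the constructor signature avoids the unexpected 'batch_size' keyword, and marking the dimension that changes between runs as dynamic keeps torch.compile from recompiling per input size.

    # Hypothetical sketch only: TinySNN and cfg are illustrative stand-ins,
    # not the benchmark script's BaselineSNN or its real configuration.
    import inspect
    import torch

    class TinySNN(torch.nn.Module):
        """Minimal stand-in for BaselineSNN (assumption)."""
        def __init__(self, num_inputs=128, num_hidden=64):
            super().__init__()
            self.fc = torch.nn.Linear(num_inputs, num_hidden)

        def forward(self, x):
            return self.fc(x)

    cfg = {"num_inputs": 128, "num_hidden": 64, "batch_size": 256}  # extra key on purpose

    # 1) TypeError fix: pass only the keywords the constructor actually accepts.
    accepted = inspect.signature(TinySNN.__init__).parameters
    model = TinySNN(**{k: v for k, v in cfg.items() if k in accepted})

    # 2) Recompile-limit warning: when one input dimension varies between calls
    #    (here the batch dimension), mark it dynamic so torch.compile traces a
    #    single size-agnostic graph instead of recompiling for every new shape.
    compiled = torch.compile(model)
    for bs in (128, 256):
        x = torch.randn(bs, 128)
        torch._dynamo.mark_dynamic(x, 0)  # dim 0 changes across runs
        out = compiled(x)

The same idea applies to the logged mismatch at index 2: whichever dimension of 'input_' flips between 128 and 256 would be the one to mark dynamic (or pass dynamic=True to torch.compile) before the compiled call.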