path: root/collaborativeagents/slurm/logs/vllm_bench_70b_8b_14367370.err

Fetching 19 files:   0%|          | 0/19 [00:00<?, ?it/s]
Fetching 19 files: 100%|██████████| 19/19 [01:36<00:00,  5.08s/it]
/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/transformers/utils/hub.py:110: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers. Use `HF_HOME` instead.
  warnings.warn(
(APIServer pid=3643829) 
Parse safetensors files:   0%|          | 0/9 [00:00<?, ?it/s]
Parse safetensors files: 100%|██████████| 9/9 [00:00<00:00, 46.86it/s]
(EngineCore_DP0 pid=3644234) 
Loading safetensors checkpoint shards:   0% Completed | 0/4 [00:00<?, ?it/s]
(EngineCore_DP0 pid=3644257) 
Loading safetensors checkpoint shards:   0% Completed | 0/9 [00:00<?, ?it/s]
(EngineCore_DP0 pid=3644234) 
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:26<00:00,  6.60s/it]
(EngineCore_DP0 pid=3644257) 
Loading safetensors checkpoint shards: 100% Completed | 9/9 [00:36<00:00,  4.06s/it]
(EngineCore_DP0 pid=3644234) 
Capturing CUDA graphs (mixed prefill-decode, PIECEWISE):   0%|          | 0/51 [00:00<?, ?it/s]
Capturing CUDA graphs (mixed prefill-decode, PIECEWISE): 100%|██████████| 51/51 [00:04<00:00, 11.63it/s]
(EngineCore_DP0 pid=3644234) 
Capturing CUDA graphs (decode, FULL):   0%|          | 0/35 [00:00<?, ?it/s]
Capturing CUDA graphs (decode, FULL): 100%|██████████| 35/35 [00:03<00:00, 11.22it/s]
(APIServer pid=3643830) INFO:     Started server process [3643830]
(APIServer pid=3643830) INFO:     Waiting for application startup.
(APIServer pid=3643830) INFO:     Application startup complete.
(EngineCore_DP0 pid=3644257) Process EngineCore_DP0:
(EngineCore_DP0 pid=3644257) Traceback (most recent call last):
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
(EngineCore_DP0 pid=3644257)     self.run()
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/multiprocessing/process.py", line 108, in run
(EngineCore_DP0 pid=3644257)     self._target(*self._args, **self._kwargs)
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 870, in run_engine_core
(EngineCore_DP0 pid=3644257)     raise e
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 857, in run_engine_core
(EngineCore_DP0 pid=3644257)     engine_core = EngineCoreProc(*args, **kwargs)
(EngineCore_DP0 pid=3644257)                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 637, in __init__
(EngineCore_DP0 pid=3644257)     super().__init__(
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 109, in __init__
(EngineCore_DP0 pid=3644257)     num_gpu_blocks, num_cpu_blocks, kv_cache_config = self._initialize_kv_caches(
(EngineCore_DP0 pid=3644257)                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core.py", line 240, in _initialize_kv_caches
(EngineCore_DP0 pid=3644257)     available_gpu_memory = self.model_executor.determine_available_memory()
(EngineCore_DP0 pid=3644257)                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/executor/abstract.py", line 126, in determine_available_memory
(EngineCore_DP0 pid=3644257)     return self.collective_rpc("determine_available_memory")
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/executor/uniproc_executor.py", line 75, in collective_rpc
(EngineCore_DP0 pid=3644257)     result = run_method(self.driver_worker, method, args, kwargs)
(EngineCore_DP0 pid=3644257)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/serial_utils.py", line 461, in run_method
(EngineCore_DP0 pid=3644257)     return func(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(EngineCore_DP0 pid=3644257)     return func(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/worker/gpu_worker.py", line 340, in determine_available_memory
(EngineCore_DP0 pid=3644257)     self.model_runner.profile_run()
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4474, in profile_run
(EngineCore_DP0 pid=3644257)     hidden_states, last_hidden_states = self._dummy_run(
(EngineCore_DP0 pid=3644257)                                         ^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 120, in decorate_context
(EngineCore_DP0 pid=3644257)     return func(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/worker/gpu_model_runner.py", line 4198, in _dummy_run
(EngineCore_DP0 pid=3644257)     outputs = self.model(
(EngineCore_DP0 pid=3644257)               ^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/compilation/cuda_graph.py", line 220, in __call__
(EngineCore_DP0 pid=3644257)     return self.runnable(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1775, in _wrapped_call_impl
(EngineCore_DP0 pid=3644257)     return self._call_impl(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1786, in _call_impl
(EngineCore_DP0 pid=3644257)     return forward_call(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/model_executor/models/llama.py", line 623, in forward
(EngineCore_DP0 pid=3644257)     model_output = self.model(
(EngineCore_DP0 pid=3644257)                    ^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/compilation/decorators.py", line 526, in __call__
(EngineCore_DP0 pid=3644257)     output = TorchCompileWithNoGuardsWrapper.__call__(self, *args, **kwargs)
(EngineCore_DP0 pid=3644257)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/compilation/wrapper.py", line 218, in __call__
(EngineCore_DP0 pid=3644257)     return self._call_with_optional_nvtx_range(
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/compilation/wrapper.py", line 109, in _call_with_optional_nvtx_range
(EngineCore_DP0 pid=3644257)     return callable_fn(*args, **kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 845, in compile_wrapper
(EngineCore_DP0 pid=3644257)     raise e.remove_dynamo_frames() from None  # see TORCHDYNAMO_VERBOSE=1
(EngineCore_DP0 pid=3644257)     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 990, in _compile_fx_inner
(EngineCore_DP0 pid=3644257)     raise InductorError(e, currentframe()).with_traceback(
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 974, in _compile_fx_inner
(EngineCore_DP0 pid=3644257)     mb_compiled_graph = fx_codegen_and_compile(
(EngineCore_DP0 pid=3644257)                         ^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1695, in fx_codegen_and_compile
(EngineCore_DP0 pid=3644257)     return scheme.codegen_and_compile(gm, example_inputs, inputs_to_check, graph_kwargs)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/compile_fx.py", line 1505, in codegen_and_compile
(EngineCore_DP0 pid=3644257)     compiled_module = graph.compile_to_module()
(EngineCore_DP0 pid=3644257)                       ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2319, in compile_to_module
(EngineCore_DP0 pid=3644257)     return self._compile_to_module()
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2325, in _compile_to_module
(EngineCore_DP0 pid=3644257)     self.codegen_with_cpp_wrapper() if self.cpp_wrapper else self.codegen()
(EngineCore_DP0 pid=3644257)                                                              ^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/graph.py", line 2271, in codegen
(EngineCore_DP0 pid=3644257)     result = self.wrapper_code.generate(self.is_inference)
(EngineCore_DP0 pid=3644257)              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1552, in generate
(EngineCore_DP0 pid=3644257)     return self._generate(is_inference)
(EngineCore_DP0 pid=3644257)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1615, in _generate
(EngineCore_DP0 pid=3644257)     self.generate_and_run_autotune_block()
(EngineCore_DP0 pid=3644257)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/torch/_inductor/codegen/wrapper.py", line 1695, in generate_and_run_autotune_block
(EngineCore_DP0 pid=3644257)     raise RuntimeError(f"Failed to run autotuning code block: {e}") from e
(EngineCore_DP0 pid=3644257) torch._inductor.exc.InductorError: RuntimeError: Failed to run autotuning code block: CUDA out of memory. Tried to allocate 1.96 GiB. GPU 0 has a total capacity of 39.49 GiB of which 1.86 GiB is free. Including non-PyTorch memory, this process has 37.63 GiB memory in use. Of the allocated memory 37.11 GiB is allocated by PyTorch, and 20.90 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.  See documentation for Memory Management  (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
[rank0]:[W1229 07:04:13.476894153 ProcessGroupNCCL.cpp:1524] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())
(APIServer pid=3643829) Traceback (most recent call last):
(APIServer pid=3643829)   File "<frozen runpy>", line 198, in _run_module_as_main
(APIServer pid=3643829)   File "<frozen runpy>", line 88, in _run_code
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 1469, in <module>
(APIServer pid=3643829)     uvloop.run(run_server(args))
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/uvloop/__init__.py", line 92, in run
(APIServer pid=3643829)     return runner.run(wrapper())
(APIServer pid=3643829)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/asyncio/runners.py", line 118, in run
(APIServer pid=3643829)     return self._loop.run_until_complete(task)
(APIServer pid=3643829)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=3643829)     return await main
(APIServer pid=3643829)            ^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 1398, in run_server
(APIServer pid=3643829)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 1417, in run_server_worker
(APIServer pid=3643829)     async with build_async_engine_client(
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/contextlib.py", line 210, in __aenter__
(APIServer pid=3643829)     return await anext(self.gen)
(APIServer pid=3643829)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 172, in build_async_engine_client
(APIServer pid=3643829)     async with build_async_engine_client_from_engine_args(
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/contextlib.py", line 210, in __aenter__
(APIServer pid=3643829)     return await anext(self.gen)
(APIServer pid=3643829)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 213, in build_async_engine_client_from_engine_args
(APIServer pid=3643829)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=3643829)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/async_llm.py", line 215, in from_vllm_config
(APIServer pid=3643829)     return cls(
(APIServer pid=3643829)            ^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/async_llm.py", line 134, in __init__
(APIServer pid=3643829)     self.engine_core = EngineCoreClient.make_async_mp_client(
(APIServer pid=3643829)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 121, in make_async_mp_client
(APIServer pid=3643829)     return AsyncMPClient(*client_args)
(APIServer pid=3643829)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 820, in __init__
(APIServer pid=3643829)     super().__init__(
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/core_client.py", line 477, in __init__
(APIServer pid=3643829)     with launch_core_engines(vllm_config, executor_class, log_stats) as (
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/contextlib.py", line 144, in __exit__
(APIServer pid=3643829)     next(self.gen)
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/utils.py", line 903, in launch_core_engines
(APIServer pid=3643829)     wait_for_engine_startup(
(APIServer pid=3643829)   File "/u/yurenh2/miniforge3/envs/eval/lib/python3.11/site-packages/vllm/v1/engine/utils.py", line 960, in wait_for_engine_startup
(APIServer pid=3643829)     raise RuntimeError(
(APIServer pid=3643829) RuntimeError: Engine core initialization failed. See root cause above. Failed core proc(s): {}
[2025-12-29T07:04:21.056] error: *** JOB 14367370 ON gpua051 CANCELLED AT 2025-12-29T07:04:21 DUE to SIGNAL Terminated ***
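The root cause above is the second engine core (pid 3644257) hitting CUDA OOM during its memory-profiling pass: the first vLLM server had already claimed most of the ~40 GiB card, so Inductor autotuning could not allocate its 1.96 GiB scratch buffer. A possible remedy is to give each co-located server an explicit memory fraction instead of letting both default to 90% of the GPU. This is a hedged sketch, not the job's actual launch script: the model paths, ports, and fractions are illustrative assumptions; `--gpu-memory-utilization` and `--port` are real vLLM CLI flags, and the `PYTORCH_CUDA_ALLOC_CONF` setting is the one suggested by the OOM message itself.

```shell
# Assumption: both servers share one GPU, so their memory fractions must sum
# to well under 1.0 to leave room for the profiling pass and CUDA graphs.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True  # fragmentation hint from the OOM message

# $MODEL_BIG / $MODEL_SMALL and the ports/fractions below are hypothetical placeholders.
vllm serve "$MODEL_BIG"   --port 8000 --gpu-memory-utilization 0.65 &
vllm serve "$MODEL_SMALL" --port 8001 --gpu-memory-utilization 0.25 &
wait
```

Alternatively, if the node has more than one GPU, pinning each server to its own device via `CUDA_VISIBLE_DEVICES=0` / `CUDA_VISIBLE_DEVICES=1` avoids splitting a single card at all.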