Commit 200db3b

authored by qqiao
[None][infra] Waive failed tests on main branch (#7201)
Signed-off-by: qqiao <[email protected]>
1 parent bea5e07

File tree

2 files changed: +12 -0 lines changed


tests/integration/test_lists/waives.txt

Lines changed: 8 additions & 0 deletions
@@ -321,3 +321,11 @@
 full:L40S/accuracy/test_disaggregated_serving.py::TestLlama3_1_8BInstruct::test_tp_pp_symmetric[MMLU-tp2pp2] SKIP (https://nvbugs/5471108)
 test_e2e.py::test_multi_nodes_eval[llama4-models/nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8-tp8pp2-mmlu] SKIP (https://nvbugs/5473781)
 accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[moe_backend=CUTLASS-mtp_nextn=0-tp4-fp8kv=True-attention_dp=True-cuda_graph=True-overlap_scheduler=True-torch_compile=True] SKIP (https://nvbugs/5476580)
+disaggregated/test_disaggregated_single_gpu.py::test_disaggregated_llama_context_capacity[False-False-DeepSeek-V3-Lite-fp8/fp8] SKIP (https://nvbugs/5477404)
+triton_server/test_triton.py::test_python_bls_unit_tests[python-bls-unit-tests] SKIP (https://nvbugs/5477392)
+triton_server/test_triton.py::test_mistral_ib[mistral-ib] SKIP (https://nvbugs/5477399)
+triton_server/test_triton.py::test_eagle[eagle] SKIP (https://nvbugs/5477378)
+examples/test_mixtral.py::test_llm_mixtral_moe_plugin_lora_4gpus[Mixtral-8x7B-v0.1-chinese-mixtral-lora] SKIP (https://nvbugs/5477421)
+accuracy/test_llm_api_pytorch.py::TestDeepSeekR1::test_nvfp4_multi_gpus[throughput_tp8] SKIP (https://nvbugs/5455140)
+unittest/_torch/multi_gpu_modeling/test_llama4.py::test_llama4[pp1-ep4-enable_adp-enable_graph-tp8-trtllm-scout] SKIP (https://nvbugs/5477730)
+test_e2e.py::test_openai_chat_example[trt] SKIP (https://nvbugs/5477444)
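Each waive entry pairs a pytest node ID with a SKIP action and a tracking bug URL. A minimal sketch of how such a line could be split into its parts — the function name and parsing approach are illustrative assumptions, not the repo's actual waive-list reader:

```python
def parse_waive(line: str):
    """Split '<test-id> SKIP (<bug-url>)' into (test_id, bug_url).

    Returns None for blank lines, comments, or lines that do not
    match the expected shape.
    """
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    # The test node ID is everything before the ' SKIP ' keyword;
    # the bug URL is the parenthesized remainder.
    test_id, _, rest = line.partition(" SKIP ")
    if not rest.startswith("(") or not rest.endswith(")"):
        return None
    return test_id, rest[1:-1]
```

A test runner could match collected pytest node IDs against the first element of each parsed tuple and report the bug URL as the skip reason.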

tests/unittest/llmapi/test_executor.py

Lines changed: 4 additions & 0 deletions
@@ -277,6 +277,7 @@ def create_rsp(id, finished: bool = False):
     return tllm.Response(request_id=0, result=result, client_id=0)
 
 
+@pytest.mark.skip(reason="https://nvbugs/5477359")
 def test_GenerationResultBase():
     sampling_params = SamplingParams(max_tokens=4)
     result = GenerationResultBase(
@@ -291,6 +292,7 @@ def test_GenerationResultBase():
     assert result._done
 
 
+@pytest.mark.skip(reason="https://nvbugs/5477359")
 def test_GenerationResult():
     request = GenerationRequest(prompt_token_ids=[12, 23, 34],
                                 sampling_params=SamplingParams(max_tokens=4))
@@ -303,6 +305,7 @@ def test_GenerationResult():
     assert result._done
 
 
+@pytest.mark.skip(reason="https://nvbugs/5477359")
 def test_DetokenizedGenerationResultBase():
     sampling_params = SamplingParams(max_tokens=4)
     model_path = llm_models_root() / "llama-models/llama-7b-hf"
@@ -434,6 +437,7 @@ def ResponsePostprocessWorker_worker_task(pull_pipe_addr, push_pipe_addr,
     worker.start()
 
 
+@pytest.mark.skip(reason="https://nvbugs/5477369")
 def test_ResponsePostprocessWorker():
 
     input_pipe = ZeroMqQueue(is_server=True)
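The waive mechanism in this file is pytest's unconditional skip marker: the decorated test is collected but never executed, and the reason string (here, the tracking bug URL) shows up in `pytest -rs` output. A minimal self-contained illustration with a hypothetical test name:

```python
import pytest

# Unconditional skip, as used in the commit above. The body is never run,
# so the failing assert below is unreachable.
@pytest.mark.skip(reason="https://nvbugs/5477359")
def test_waived_example():
    assert False  # would fail if the test were ever executed
```

Marks applied this way are recorded on the function's `pytestmark` attribute, which is how pytest's collector later decides to skip it.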
