
Commit 8445416

Waive L0 tests (#5233)
Signed-off-by: Yiqing Yan <[email protected]>
1 parent b6ca677 commit 8445416

File tree

tests/integration/test_lists/waives.txt
tests/unittest/_torch/test_attention_mla.py

2 files changed: 4 additions, 0 deletions

tests/integration/test_lists/waives.txt

Lines changed: 3 additions & 0 deletions
@@ -457,3 +457,6 @@
 disaggregated/test_disaggregated.py::test_disaggregated_deepseek_v3_lite_fp8_attention_dp_one[DeepSeek-V3-Lite-fp8] SKIP (https://nvbugs/5340905)
 disaggregated/test_disaggregated.py::test_disaggregated_deepseek_v3_lite_fp8_attention_dp_one_mtp[DeepSeek-V3-Lite-fp8] SKIP (https://nvbugs/5340905)
 disaggregated/test_disaggregated.py::test_disaggregated_cache_aware_balance[TinyLlama-1.1B-Chat-v1.0] SKIP (https://nvbugs/5340905)
+examples/test_mistral.py::test_llm_mistral_v1_1gpu[mistral-7b-v0.1-float16-max_attention_window_size_4096-summarization_long] SKIP (https://nvbugs/5324976)
+triton_server/test_triton_llm.py::test_llava[False-1---False-True-False-0-128-enableDecoupleMode-inflight_fused_batching-disableTrtOverlap-0.7-max_utilization---1-1-1-False-tensorrt_llm_bls] SKIP (https://nvbugs/5308432)
+accuracy/test_llm_api_pytorch.py::TestDeepSeekV3Lite::test_nvfp4_4gpus[ep4-fp8kv=False-attention_dp=False-cuda_graph=False-overlap_scheduler=False-torch_compile=False] SKIP (https://nvbugs/5336321)
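
The waives.txt entries above follow the pattern "<test id> SKIP (<bug URL>)". This commit does not show how the test harness consumes that file, so the following is only a hedged sketch: parse_waives and the conftest hook are hypothetical helpers used to illustrate how such a list could be applied as pytest skips, not the repository's actual mechanism.

# Hypothetical sketch only -- not the actual TensorRT-LLM test harness.
# Parses waives.txt entries of the form "<test id> SKIP (<reason>)" and
# shows how a pytest hook could apply them as skip markers.
import re
import pytest

WAIVE_PATTERN = re.compile(r"^(?P<test_id>\S+)\s+SKIP\s+\((?P<reason>[^)]+)\)\s*$")

def parse_waives(path):
    """Return a dict mapping waived test ids to skip reasons (bug URLs)."""
    waives = {}
    with open(path) as f:
        for raw in f:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            match = WAIVE_PATTERN.match(line)
            if match:
                waives[match.group("test_id")] = match.group("reason")
    return waives

def pytest_collection_modifyitems(config, items):
    # Illustrative conftest.py hook: skip any collected test whose node id
    # appears in the waive list, reporting the tracking bug as the reason.
    waives = parse_waives("tests/integration/test_lists/waives.txt")
    for item in items:
        if item.nodeid in waives:
            item.add_marker(pytest.mark.skip(reason=waives[item.nodeid]))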

tests/unittest/_torch/test_attention_mla.py

Lines changed: 1 addition & 0 deletions
@@ -357,6 +357,7 @@ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
 def test_attention_mla(scenario: Scenario, context_sequence_lengths: List[int],
                        generation_seq_len_q: int,
                        num_generation_steps: List[int]):
+    pytest.skip("https://nvbugs/5344366")
     """Test MLA computation for both context and generation phases"""
     num_heads = scenario.num_heads
     num_kv_heads = scenario.num_kv_heads
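
The unit-test waive above works by calling pytest.skip at the top of the test body, which aborts the test at runtime and records the bug URL as the skip reason. Below is a minimal, self-contained illustration of that pattern; the test name is made up for the example and is not part of the repository.

import pytest

def test_waived_example():
    # Calling pytest.skip() inside the body skips the test at runtime and
    # reports the given reason (here, the tracking bug URL).
    pytest.skip("https://nvbugs/5344366")
    assert False  # never reached while the waive is in place

An equivalent alternative is decorating the test with pytest.mark.skip(reason=...), which skips at collection time rather than at runtime; the in-body call used in this commit is easy to revert by deleting the single added line once the bug is fixed.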
