Commit 6ea8189

isururanawaka authored and facebook-github-bot committed
Fix test failure: torchrec/distributed/tests/test_quant_sequence_model_parallel.py::QuantSequenceModelParallelTest::test_sharded_quant_kv_zch (#3194)
Summary: Pull Request resolved: #3194

Fixed the test torchrec/distributed/tests/test_quant_sequence_model_parallel.py::QuantSequenceModelParallelTest::test_sharded_quant_kv_zch by removing the @settings decorator. Since the test does not draw any Hypothesis parameters, it can run as a default unit test without Hypothesis.

Reviewed By: aporialiao

Differential Revision: D78356143

fbshipit-source-id: 404c2466edd2c10019554befe52ba604df379d1e
1 parent fd9d78a commit 6ea8189

File tree

1 file changed: 0 additions, 1 deletion

torchrec/distributed/tests/test_quant_sequence_model_parallel.py

Lines changed: 0 additions & 1 deletion
@@ -213,7 +213,6 @@ def test_quant_pred_shard(
         torch.cuda.device_count() <= 1,
         "Not enough GPUs available",
     )
-    @settings(verbosity=Verbosity.verbose, max_examples=1, deadline=None)
     def test_sharded_quant_kv_zch(self) -> None:
         device = torch.device("cuda:0")
         num_features = 4
