Commit c672b60

Update docs/source/en/model_doc/bert-generation.md
Co-authored-by: Steven Liu <[email protected]>
1 parent 4734300 commit c672b60

File tree

1 file changed: +1 −1 lines changed


docs/source/en/model_doc/bert-generation.md

Lines changed: 1 addition & 1 deletion
@@ -83,7 +83,7 @@ echo -e "Plants create energy through " | transformers run --task text2text-gene
 Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
 
-The example below uses [BitsAndBytesConfig](../main_classes/quantization#transformers.BitsAndBytesConfig) to quantize the weights to 4-bit.
+The example below uses [BitsAndBytesConfig](../quantization/bitsandbytes) to quantize the weights to 4-bit.
 
 ```python
 from transformers import BertGenerationEncoder, BertTokenizer, BitsAndBytesConfig
