2 parents 47d4231 + dd1d51a commit b1ba92c
labml_nn/lora/__init__.py
@@ -14,7 +14,7 @@
Low-Rank Adaptation (LoRA) freezes pre-trained model weights and injects
trainable rank decomposition matrices into each layer of the transformer.
- This makes it possible to efficiently fine-tune large langauge models by
+ This makes it possible to efficiently fine-tune large language models by
reducing trainable parameters by a large factor.

Here's [the training code](experiment.html) for training a GPT2 model with LoRA
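For context on what "rank decomposition matrices" means here, below is a minimal sketch of a LoRA-style linear layer: the pre-trained weight is frozen and a trainable low-rank update B·A (scaled by alpha/r) is added to its output. The class name `LoRALinear` and the hyperparameters `r` and `alpha` are illustrative assumptions, not the actual module defined in `labml_nn/lora/__init__.py`.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable rank-r update (illustrative sketch)."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Pre-trained weight: kept frozen, never updated during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B is initialized to zero so the layer starts identical to the frozen one.
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + (alpha / r) * x A^T B^T
        return x @ self.weight.T + self.scaling * ((x @ self.lora_a.T) @ self.lora_b.T)
```

With rank r much smaller than the layer dimensions, only the A and B factors are trained, which is what gives the large reduction in trainable parameters mentioned in the docstring.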