Commit b1ba92c

Merge pull request #280 from kommentlezz/patch-1
Fix typo: language
2 parents: 47d4231 + dd1d51a

File tree

1 file changed (+1 -1 lines changed)


labml_nn/lora/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -14,7 +14,7 @@
 
 Low-Rank Adaptation (LoRA) freezes pre-trained model weights and injects
 trainable rank decomposition matrices into each layer of the transformer.
-This makes it possible to efficiently fine-tune large langauge models by
+This makes it possible to efficiently fine-tune large language models by
 reducing trainable parameters by a large factor.
 
 Here's [the training code](experiment.html) for training a GPT2 model with LoRA
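The docstring above describes the core LoRA idea: keep the pre-trained weight frozen and add a trainable low-rank update. A minimal NumPy sketch of such a layer, assuming the usual LoRA conventions (this is an illustration of the technique, not the labml_nn implementation; the class and attribute names are assumptions):

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA linear layer: y = x W^T + (alpha / r) * x A^T B^T.

    The pre-trained weight W stays frozen; only the rank-r factors
    A (r x in_features) and B (out_features x r) would be trained.
    B is initialised to zero, so at the start the layer behaves exactly
    like the frozen pre-trained layer.
    """

    def __init__(self, weight, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        out_features, in_features = weight.shape
        self.weight = weight                          # frozen pre-trained weight
        self.lora_a = rng.standard_normal((r, in_features)) * 0.01
        self.lora_b = np.zeros((out_features, r))     # zero init: no change at start
        self.scaling = alpha / r

    def __call__(self, x):
        base = x @ self.weight.T                      # frozen path
        delta = (x @ self.lora_a.T) @ self.lora_b.T   # low-rank update
        return base + self.scaling * delta

# "Reducing trainable parameters by a large factor": for a 768x768 layer,
# full fine-tuning trains out*in weights, LoRA trains only r*(out + in).
w = np.zeros((768, 768))
layer = LoRALinear(w, r=4)
full = w.size                                  # 768 * 768 = 589824
lora = layer.lora_a.size + layer.lora_b.size   # 4 * (768 + 768) = 6144
```

With these (hypothetical) shapes the trainable parameter count drops by roughly two orders of magnitude, which is the reduction the docstring refers to.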
