diff --git a/labml_nn/lora/__init__.py b/labml_nn/lora/__init__.py
index 47d3c4571..bd3c42c7b 100644
--- a/labml_nn/lora/__init__.py
+++ b/labml_nn/lora/__init__.py
@@ -14,7 +14,7 @@
 
 Low-Rank Adaptation (LoRA) freezes pre-trained model weights and injects
 trainable rank decomposition matrices into each layer of the transformer.
-This makes it possible to efficiently fine-tune large langauge models by
+This makes it possible to efficiently fine-tune large language models by
 reducing trainable parameters by a large factor.
 
 Here's [the training code](experiment.html) for training a GPT2 model with LoRA
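
The docstring above describes LoRA as freezing the pre-trained weights and injecting trainable rank decomposition matrices into each layer. As a rough illustration of that idea (not the actual implementation in `labml_nn/lora/__init__.py`; the class name `LoRALinear` and the `r`/`alpha` hyperparameters here are hypothetical), a minimal LoRA linear layer in PyTorch could look like:

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Sketch of a linear layer with a frozen base weight and a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, r: int = 8, alpha: int = 16):
        super().__init__()
        # Frozen pre-trained weight (in practice copied from the pre-trained model)
        self.weight = nn.Parameter(torch.zeros(out_features, in_features), requires_grad=False)
        # Trainable rank decomposition matrices: the update is (alpha / r) * B @ A
        self.lora_a = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the low-rank adaptation
        return x @ self.weight.T + self.scaling * (x @ self.lora_a.T) @ self.lora_b.T
```

Since `lora_b` starts at zero, the adaptation initially contributes nothing and the layer behaves like the frozen pre-trained projection; only `lora_a` and `lora_b` receive gradients, which is what reduces the trainable parameter count by a large factor.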