LoRA (Low-Rank Adaptation)

Training

A parameter-efficient fine-tuning method that freezes the pretrained model weights and trains small low-rank adapter matrices injected into selected layers (typically the attention projections). Each adapted weight matrix W is replaced by W + BA, where B and A are low-rank factors with rank r much smaller than the dimensions of W, so only a small fraction of the parameters receive gradients. This cuts the trainable parameter count, optimizer state, and gradient memory substantially compared with full fine-tuning.
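A minimal sketch of the idea in PyTorch, assuming a frozen base linear layer; the class name `LoRALinear` and the rank and scaling defaults here are illustrative, not taken from any particular library:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # A projects down to rank r, B projects back up. B starts at zero so the
        # adapted layer initially matches the frozen base layer exactly.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # 12,288 adapter params vs 590,592 in the base layer
```

With rank 8 on a 768-by-768 layer, the adapters hold 2 * 768 * 8 = 12,288 trainable parameters, roughly 2% of the base layer, which is where the memory and compute savings come from.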
