Parameter Efficient Fine-Tuning for Large Language Models: Theory and Applications
Subject
Statistics
Creator
Yikuan Li
Date
2025
Contributor
Dr Fanghui Liu (Supervisor)
Abstract
Large Language Models (LLMs) achieve outstanding performance but are expensive to fine-tune because of their size. Parameter-efficient fine-tuning (PEFT) methods such as Low-Rank Adaptation (LoRA) address this by updating only small low-rank matrices while keeping the pretrained weights frozen. This project explores LoRA and its variants—QLoRA, AdaLoRA, and LoRA-One—through theoretical analysis and experiments on the GLUE benchmark using T5-base. Results show that LoRA approaches full fine-tuning performance while training under 1% of the parameters. The study highlights LoRA’s efficiency–expressiveness trade-off and contributes to developing more scalable, adaptive fine-tuning strategies for large models.
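The mechanism described in the abstract—a frozen pretrained weight adapted by a trainable low-rank update—can be sketched as follows. This is a minimal illustration, not the project's actual code: the shapes, rank, and scaling factor `alpha` are assumptions chosen for clarity.

```python
import numpy as np

# Hypothetical LoRA sketch: the frozen pretrained weight W is adapted by
# a low-rank product B @ A with rank r much smaller than the layer size.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 4, 8  # illustrative dimensions (assumed)

W = rng.standard_normal((d_out, d_in))      # pretrained weight (frozen)
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A; only A and B train.
    return (W + (alpha / r) * B @ A) @ x

x = rng.standard_normal(d_in)
# With B initialised to zero, the adapted layer reproduces the
# pretrained output exactly, so training starts from the base model.
assert np.allclose(lora_forward(x), W @ x)

# Trainable fraction: r*(d_in + d_out) parameters instead of d_in*d_out.
frac = r * (d_in + d_out) / (d_in * d_out)
print(f"trainable fraction per layer: {frac:.3f}")
```

The zero initialisation of `B` is what lets fine-tuning begin from the unmodified pretrained model, and the parameter count `r*(d_in + d_out)` versus `d_in*d_out` is the source of the sub-1% trainable-parameter figure reported in the abstract.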
Citation
Yikuan Li, “Parameter Efficient Fine-Tuning for Large Language Models: Theory and Applications,” URSS SHOWCASE, accessed November 2, 2025, https://linen-dog.lnx.warwick.ac.uk/items/show/989.