🤗 PEFT
State-of-the-art Parameter-Efficient Fine-Tuning methods
PEFT methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters.
It only fine-tunes a small number of (extra) model parameters, thereby greatly decreasing computational and storage costs. According to the official repo, recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
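As a minimal sketch of this idea, the snippet below wraps a pre-trained model with a LoRA adapter using the peft library so that only the small adapter matrices are trained; the model checkpoint and the LoRA hyperparameters (r, lora_alpha, dropout) are illustrative choices, not required settings.

```python
# pip install peft transformers
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a full pre-trained model, then wrap it with a LoRA adapter so that
# only the small adapter matrices are trainable.
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
peft_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # sequence-to-sequence fine-tuning
    inference_mode=False,
    r=8,             # rank of the LoRA update matrices (illustrative)
    lora_alpha=32,   # scaling factor for the LoRA updates (illustrative)
    lora_dropout=0.1,
)
model = get_peft_model(model, peft_config)

# Reports the trainable-parameter count, typically well under 1% of the model.
model.print_trainable_parameters()
```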
Use Cases
Get comparable performance to full fine-tuning by adapting LLMs to downstream tasks using consumer hardware.
PEFT methods only fine-tune a small number of (extra) model parameters, significantly decreasing computational and storage costs, as the sketch below illustrates.
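Continuing the previous sketch, the storage-side benefit comes from saving only the small adapter checkpoint (typically a few megabytes) rather than a full copy of the base model; the output directory name here is hypothetical.

```python
# Assuming `model` is the PEFT-wrapped model from the previous sketch,
# this writes only the adapter weights, not the full base model.
model.save_pretrained("mt0-large-lora")  # hypothetical output path

# Later, the adapter can be re-attached to the original base model.
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
model = PeftModel.from_pretrained(base_model, "mt0-large-lora")
```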
Reference
https://github.com/huggingface/peft