# PEFT

{% hint style="info" %}
PEFT methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters.
{% endhint %}

PEFT fine-tunes only a small number of (extra) model parameters, greatly decreasing computational and storage costs. According to the official repository, recent state-of-the-art PEFT techniques achieve performance comparable to that of full fine-tuning.
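
To make the idea concrete, here is a minimal sketch using LoRA, one of the PEFT methods the library implements. It assumes the Hugging Face `peft` and `transformers` packages are installed; the checkpoint name is only an illustrative choice.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# Load a pre-trained base model (checkpoint chosen for illustration).
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

# Configure LoRA: small low-rank adapter matrices are injected into the
# attention layers, and only those adapters receive gradient updates.
config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,             # rank of the low-rank update matrices
    lora_alpha=32,   # scaling factor applied to the adapter output
    lora_dropout=0.05,
)

model = get_peft_model(model, config)
# Reports trainable vs. total parameters -- typically well under 1% trainable.
model.print_trainable_parameters()
```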

### Use Cases

{% hint style="info" %}
Get performance comparable to full fine-tuning by adapting LLMs to downstream tasks on consumer hardware.
{% endhint %}

PEFT methods fine-tune only a small number of (extra) model parameters, significantly decreasing computational and storage costs. The storage saving shows up at checkpoint time: only the small adapter is written to disk, as sketched below.
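
A sketch of the checkpoint workflow, assuming the same `peft` and `transformers` setup as above (`"lora-adapter"` is a hypothetical output directory):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
peft_model = get_peft_model(base, LoraConfig(task_type=TaskType.CAUSAL_LM))

# save_pretrained on a PEFT model stores only the adapter weights
# (typically megabytes), not a full copy of the base model.
peft_model.save_pretrained("lora-adapter")

# Later: reload the frozen base model once and attach the saved adapter.
base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
restored = PeftModel.from_pretrained(base, "lora-adapter")
```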

### Reference

<https://huggingface.co/docs/peft/index#peft>

