# Parameter-Efficient Fine-Tuning

- [P-Tuning](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/p-tuning.md): The widely cited "GPT Understands, Too" paper, first submitted in March 2021, which introduced P-Tuning
- [The Power of Scale for Parameter-Efficient Prompt Tuning](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/p-tuning/the-power-of-scale-for-parameter-efficient-prompt-tuning.md)
- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/prefix-tuning-optimizing-continuous-prompts-for-generation.md)
- [Harnessing the Power of PEFT: A Smarter Approach to Fine-tuning Pre-trained Models](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/harnessing-the-power-of-peft-a-smarter-approach-to-fine-tuning-pre-trained-models.md): Parameter-Efficient Fine-Tuning (PEFT) is a technique used to fine tune neural language models
- [What is Low-Rank Adaptation (LoRA) - explained by the inventor](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/what-is-low-rank-adaptation-lora-explained-by-the-inventor.md): Edward Hu, an author of the LoRA paper, explains the technique
- [Low-Rank Adaptation (LoRA)](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/low-rank-adaptation-lora.md)
- [Practical Tips for Fine-tuning LMs Using LoRA (Low-Rank Adaptation)](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/practical-tips-for-fine-tuning-lms-using-lora-low-rank-adaptation.md)
- [QLORA: Efficient Finetuning of Quantized LLMs](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-efficient-finetuning-of-quantized-llms.md)
- [Bits and Bytes](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/bits-and-bytes.md): Tim Dettmers (PhD candidate, University of Washington) presents "8-bit Methods for Efficient Deep Learning" in this Cohere For AI Technical Talk.
- [The Magic behind Qlora](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/the-magic-behind-qlora.md)
- [Practical Guide to LoRA: Tips and Tricks for Effective Model Adaptation](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/practical-guide-to-lora-tips-and-tricks-for-effective-model-adaptation.md): A range of practical tips and answers to common questions about using LoRA
- [The quantization constant](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/the-quantization-constant.md)
- [QLORA: Efficient Finetuning of Quantized Language Models](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-efficient-finetuning-of-quantized-language-models.md)
- [QLORA and Fine-Tuning of Quantized Language Models (LMs)](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-and-fine-tuning-of-quantized-language-models-lms.md)
- [ReLoRA: High-Rank Training Through Low-Rank Updates](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/relora-high-rank-training-through-low-rank-updates.md)
- [SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/slora-federated-parameter-efficient-fine-tuning-of-language-models.md): Leveraging LoRA
- [GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection](https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning/galora-memory-efficient-llm-training-by-gradient-low-rank-projection.md)


---

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available on this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://training.continuumlabs.ai/training/the-fine-tuning-process/parameter-efficient-fine-tuning.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response contains a direct answer to the question along with relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present on the current page, when you need clarification or additional context, or when you want to retrieve related documentation sections.
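As a minimal sketch, the request above can be issued from Python's standard library. The `ask` parameter is the one documented on this page; the example question and variable names are illustrative, and the question must be URL-encoded before it is placed in the query string:

```python
from urllib.parse import urlencode
from urllib.request import urlopen  # used for the actual GET request

BASE_URL = (
    "https://training.continuumlabs.ai/training/"
    "the-fine-tuning-process/parameter-efficient-fine-tuning.md"
)

def build_ask_url(question: str) -> str:
    """Build the ?ask= query URL; urlencode makes the question URL-safe."""
    return f"{BASE_URL}?{urlencode({'ask': question})}"

url = build_ask_url("How does QLoRA reduce memory use during fine-tuning?")
print(url)

# Perform the GET (requires network access); the body is the answer text:
# answer = urlopen(url).read().decode("utf-8")
```

Encoding the question with `urlencode` rather than string concatenation keeps spaces and punctuation (e.g. `?`) from corrupting the query string.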
