# Parameter-Efficient Fine-Tuning

- [P-Tuning](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/p-tuning.md): The highly cited "GPT Understands, Too" paper (first submitted March 2021), which introduced P-Tuning
- [The Power of Scale for Parameter-Efficient Prompt Tuning](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/p-tuning/the-power-of-scale-for-parameter-efficient-prompt-tuning.md)
- [Prefix-Tuning: Optimizing Continuous Prompts for Generation](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/prefix-tuning-optimizing-continuous-prompts-for-generation.md)
- [Harnessing the Power of PEFT: A Smarter Approach to Fine-tuning Pre-trained Models](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/harnessing-the-power-of-peft-a-smarter-approach-to-fine-tuning-pre-trained-models.md): Parameter-Efficient Fine-Tuning (PEFT) is a technique used to fine tune neural language models
- [What is Low-Rank Adaptation (LoRA) - explained by the inventor](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/what-is-low-rank-adaptation-lora-explained-by-the-inventor.md): A talk by Edward Hu, lead author of the LoRA paper
- [Low-Rank Adaptation (LoRA)](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/low-rank-adaptation-lora.md)
- [Practical Tips for Fine-tuning LMs Using LoRA (Low-Rank Adaptation)](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/practical-tips-for-fine-tuning-lms-using-lora-low-rank-adaptation.md)
- [QLoRA: Efficient Finetuning of Quantized LLMs](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-efficient-finetuning-of-quantized-llms.md)
- [Bits and Bytes](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/bits-and-bytes.md): Tim Dettmers (PhD candidate, University of Washington) presents "8-bit Methods for Efficient Deep Learning" in this Cohere For AI Technical Talk.
- [The Magic Behind QLoRA](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/the-magic-behind-qlora.md)
- [Practical Guide to LoRA: Tips and Tricks for Effective Model Adaptation](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/practical-guide-to-lora-tips-and-tricks-for-effective-model-adaptation.md): A range of practical tips and common questions about using LoRA
- [The quantization constant](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/the-quantization-constant.md)
- [QLoRA: Efficient Finetuning of Quantized Language Models](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-efficient-finetuning-of-quantized-language-models.md)
- [QLoRA and Fine-Tuning of Quantized Language Models (LMs)](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/qlora-and-fine-tuning-of-quantized-language-models-lms.md)
- [ReLoRA: High-Rank Training Through Low-Rank Updates](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/relora-high-rank-training-through-low-rank-updates.md)
- [SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/slora-federated-parameter-efficient-fine-tuning-of-language-models.md): Applies LoRA in a federated learning setting
- [GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection](/training/the-fine-tuning-process/parameter-efficient-fine-tuning/galora-memory-efficient-llm-training-by-gradient-low-rank-projection.md)
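Most of the entries above build on the same core idea from the LoRA paper: freeze the pretrained weight matrix and train only a low-rank additive update. A minimal NumPy sketch of that update (function and variable names here are illustrative, not from any of the linked resources):

```python
import numpy as np

def lora_linear(x, W, A, B, alpha=16, r=8):
    """Forward pass of a LoRA-adapted linear layer.

    W (d_out x d_in) is the frozen pretrained weight. The trainable
    update is the low-rank product B @ A, with A (r x d_in) and
    B (d_out x r), scaled by alpha / r as in the LoRA paper.
    """
    return W @ x + (alpha / r) * (B @ (A @ x))

rng = np.random.default_rng(0)
d_in, d_out, r = 64, 32, 8
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init

x = rng.normal(size=d_in)
# With B initialised to zero the update B @ A vanishes, so the adapted
# layer exactly reproduces the base model before any fine-tuning.
assert np.allclose(lora_linear(x, W, A, B, r=r), W @ x)
```

Because only `A` and `B` are trained, the number of trainable parameters drops from `d_out * d_in` to `r * (d_in + d_out)`; QLoRA combines this with a quantized (e.g. 4-bit) frozen base model to cut memory further.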
