Parameter Efficient Fine Tuning
P-Tuning
Prefix-Tuning: Optimizing Continuous Prompts for Generation
Harnessing the Power of PEFT: A Smarter Approach to Fine-tuning Pre-trained Models
What is Low-Rank Adaptation (LoRA) - explained by the inventor
Low-Rank Adaptation (LoRA)
Practical Tips for Fine-tuning LLMs Using LoRA (Low-Rank Adaptation)
QLoRA: Efficient Finetuning of Quantized LLMs
Bits and Bytes
The Magic behind QLoRA
Practical Guide to LoRA: Tips and Tricks for Effective Model Adaptation
The quantization constant
QLoRA: Efficient Finetuning of Quantized Language Models
QLoRA and Fine-Tuning of Quantized Language Models (LLMs)
ReLoRA: High-Rank Training Through Low-Rank Updates
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
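Several of the resources above center on LoRA-style adapters. As a minimal sketch of the core idea (not taken from any of the linked resources), the snippet below adds a trainable low-rank update to a frozen linear layer in PyTorch; the class and parameter names (LoRALinear, r, alpha) are illustrative assumptions, not an official API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA sketch: y = W0 x + (alpha / r) * B(A x), with W0 frozen."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # freeze the pretrained weight
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)     # update starts as a no-op
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * self.lora_B(self.lora_A(x))

# Usage: wrap an existing projection layer; only the A/B factors receive gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
out = layer(torch.randn(4, 768))
```

Because only the two small factor matrices are trained, the number of optimizer states and gradients scales with the rank r rather than with the full weight matrix, which is the memory saving the LoRA and QLoRA resources above discuss.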