Parameter Efficient Fine Tuning

This section collects the following articles on parameter-efficient fine-tuning (PEFT) methods; an illustrative sketch of the shared low-rank idea follows the list.

  • P-Tuning
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation
  • Harnessing the Power of PEFT: A Smarter Approach to Fine-tuning Pre-trained Models
  • What is Low-Rank Adaptation (LoRA) - explained by the inventor
  • Low Rank Adaptation (LoRA)
  • Practical Tips for Fine-tuning LMs Using LoRA (Low-Rank Adaptation)
  • QLoRA: Efficient Finetuning of Quantized LLMs
  • Bits and Bytes
  • The Magic behind QLoRA
  • Practical Guide to LoRA: Tips and Tricks for Effective Model Adaptation
  • The quantization constant
  • QLoRA: Efficient Finetuning of Quantized Language Models
  • QLoRA and Fine-Tuning of Quantized Language Models (LMs)
  • ReLoRA: High-Rank Training Through Low-Rank Updates
  • SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
  • GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection
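Most of the methods listed above (LoRA, QLoRA, ReLoRA, SLoRA, GaLore) build on the same core idea: freeze the pretrained weights and train only a small low-rank update. As an orienting illustration only, and not code drawn from any of the linked articles, here is a minimal PyTorch sketch of a LoRA-style linear layer; the class name LoRALinear and the hyperparameter values (r=8, alpha=16) are hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal illustrative sketch of a LoRA-adapted linear layer.

    The frozen base weight W is augmented with a trainable low-rank
    update (alpha / r) * B @ A, so only r * (d_in + d_out) parameters
    are trained instead of d_in * d_out.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        d_in, d_out = base.in_features, base.out_features
        # A starts with small random values and B with zeros, so the
        # adapter is a no-op at initialization (as in the LoRA paper).
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap one projection of a pretrained model, then train only A and B.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
x = torch.randn(2, 768)
print(layer(x).shape)  # torch.Size([2, 768])
```

The quantized variants (QLoRA and relatives) keep this same adapter structure but store the frozen base weights in a compressed 4-bit format; the articles above cover those details.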