Parameter Efficient Fine Tuning

  • P-Tuning
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation
  • Harnessing the Power of PEFT: A Smarter Approach to Fine-tuning Pre-trained Models
  • What is Low-Rank Adaptation (LoRA) - explained by the inventor
  • Low Rank Adaptation (LoRA)
  • Practical Tips for Fine-tuning LMs Using LoRA (Low-Rank Adaptation)
  • QLORA: Efficient Finetuning of Quantized LLMs
  • Bits and Bytes
  • The Magic behind QLoRA
  • Practical Guide to LoRA: Tips and Tricks for Effective Model Adaptation
  • The quantization constant
  • QLORA: Efficient Finetuning of Quantized Language Models
  • QLORA and Fine-Tuning of Quantized Language Models (LMs)
  • ReLoRA: High-Rank Training Through Low-Rank Updates
  • SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
  • GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection