Instruction Tuning
Inspired by the Self-Instruct Paper
Instruction tuning is a specialised form of fine-tuning in which a model is trained on instruction-output pairs, enabling it to learn specific tasks guided by natural language instructions.
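For concreteness, here is a minimal sketch of what a single instruction-tuning example might look like; the field names and the prompt template are illustrative assumptions rather than a fixed standard.

```python
# A minimal, illustrative instruction-tuning example. The field names
# ("instruction", "input", "output") follow a common convention but are
# an assumption here, not a fixed standard.
example = {
    "instruction": "Summarise the following article in one sentence.",
    "input": "The city council voted on Tuesday to expand the bike-lane "
             "network after months of public consultation...",
    "output": "The city council approved an expansion of the bike-lane network.",
}

# For a causal language model, the pair is typically flattened into a
# single prompt/target string before fine-tuning.
prompt = f"{example['instruction']}\n\n{example['input']}\n\n"
target = example["output"]
print(prompt + target)
```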
Instruction tuning strategies are techniques used to refine a language model's understanding of, and response to, instructions. Unlike pre-training, these strategies focus on efficiency and targeted improvement.
Below are concise explanations of these strategies:
Balancing Data Distribution
This involves ensuring a proportional representation of tasks during instruction tuning to prevent any single dataset from dominating.
Techniques include examples-proportional mixing, where instances are sampled in proportion to each dataset's size, and imposing a maximum cap on the number of examples drawn from any one dataset to prevent data imbalance.
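A minimal sketch of this scheme, in the spirit of the mixing strategy used in the FLAN work; the dataset sizes and the cap value below are illustrative assumptions.

```python
# Sketch of examples-proportional mixing with a per-dataset cap.
# Dataset sizes and the cap are invented for illustration.
dataset_sizes = {"summarisation": 500_000, "classification": 80_000,
                 "translation": 2_000_000, "qa": 30_000}
CAP = 100_000  # maximum number of examples any one dataset contributes

# Each dataset's sampling weight is proportional to its size, clipped
# at the cap, so very large datasets cannot dominate the mixture.
effective = {name: min(size, CAP) for name, size in dataset_sizes.items()}
total = sum(effective.values())
weights = {name: size / total for name, size in effective.items()}

for name, w in weights.items():
    print(f"{name}: {w:.2%} of each training batch")
```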
Combining Instruction Tuning with Pre-Training
This method enhances tuning effectiveness by mixing pre-training data (plain texts) with instruction-tuning data (formatted datasets), which serves as regularisation to prevent overfitting.
This approach can either integrate instruction data during pre-training or combine both phases into one, using multi-task learning to benefit from both pre-training and instruction tuning simultaneously.
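As a rough illustration, the sketch below draws mixed batches of instruction examples and plain-text examples; the 80/20 split and the helper's name are assumptions, not a prescribed recipe.

```python
import random

def mixed_batch(instruction_data, pretrain_data, batch_size=32,
                instruction_fraction=0.8):
    """Draw a batch that is mostly instruction examples, with a slice of
    plain pre-training text acting as a regulariser against overfitting."""
    n_inst = int(batch_size * instruction_fraction)
    batch = random.sample(instruction_data, n_inst)
    batch += random.sample(pretrain_data, batch_size - n_inst)
    random.shuffle(batch)
    return batch
```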
Multi-stage Instruction Tuning
A phased approach where the model is initially fine-tuned on task-formatted instructions (usually more abundant) and then on daily chat instructions.
To mitigate the potential loss of previously learned information (capacity forgetting), task-formatted instructions are reintroduced in later stages. This tiered tuning can progressively introduce more complex and difficult tasks to incrementally challenge and improve the model.
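A schematic of the phased schedule might look like the following; `fine_tune` is a hypothetical stand-in for a supervised fine-tuning pass, and the 10% replay fraction is an illustrative assumption.

```python
import random

def fine_tune(model, data):
    """Hypothetical stand-in for one supervised fine-tuning pass."""
    ...  # training loop elided
    return model

def multi_stage_tune(model, task_data, chat_data, replay_fraction=0.1):
    # Stage 1: fine-tune on the abundant task-formatted instructions.
    model = fine_tune(model, task_data)
    # Stage 2: daily chat instructions, plus a replayed slice of
    # stage-1 data to mitigate capacity forgetting.
    replay = random.sample(task_data, int(len(task_data) * replay_fraction))
    model = fine_tune(model, chat_data + replay)
    return model
```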
Data Augmentation
Augmenting the data, such as by inverting inputs and outputs (e.g., turning a question-answering task into a question-generation task), is also beneficial.
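A minimal sketch of such an inversion, assuming a simple question-answering pair; the instruction template is invented for illustration.

```python
def invert_qa(question, answer):
    """Recast a question-answering pair as a question-generation example.
    The instruction template is an illustrative assumption."""
    return {
        "instruction": "Write a question whose answer is the text below.",
        "input": answer,     # the original answer becomes the input
        "output": question,  # the original question becomes the target
    }

print(invert_qa("What is the capital of France?", "Paris"))
```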
Neural language models can be enhanced to follow instructions on new, unseen tasks through a process called instruction tuning.
The concept of instruction tuning was first introduced in a widely cited paper from Google called "Finetuned Language Models Are Zero-Shot Learners".
The paper focuses on enhancing the ability of neural language models to perform tasks they haven't been explicitly trained on, known as zero-shot learning.
This is where the term 'instruction tuning' was first coined; it has since become the foundation for creating datasets used to fine-tune foundation language models.
The process has since been further refined and developed, and comprehensive surveys trace this history in detail.
Task Datasets
These datasets are analogous to comprehensive workbooks comprising a variety of tasks—such as text summarisation, classification, and translation—each prefaced with natural language instructions.
The role of these instructions is to clearly communicate the task's objective to the model, equipping it to perform the required function. This method closely mimics a supervised learning environment where the presence of instructions is integral to guiding the model’s understanding and response to a task.
Daily Chat Data
To mimic real-world interaction and capture the diversity of human communication, training data is also sourced from everyday dialogues.
These include a range of user queries that language models encounter when interacting with people in different contexts.
This not only facilitates the instruction-following capability of the models but also aids in the refinement of their responses to align with actual human inquiries and needs.
Additionally, this dataset includes human-generated instructions for a variety of real-life scenarios and the corresponding responses to construct a realistic dialogue training environment.
Synthetic Data
Recognising the limitations of solely relying on human-generated data, synthetic approaches are employed to augment training datasets.
Language models are prompted to generate new task instructions and associated input-output pairs based on existing instances.
This semi-automated process allows for the expansion of training materials without the excessive demand for human annotation, promoting a cost-effective method of enhancing the model's learning and generative capabilities.
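A minimal sketch of this bootstrapping loop, in the spirit of Self-Instruct: seed examples are packed into a prompt and the model is asked to propose a new task. The seed tasks are invented for illustration, and `complete` stands in for whatever text-completion API is available.

```python
# Seed demonstrations, invented for illustration.
seed_tasks = [
    "Instruction: Translate the sentence into French.\n"
    "Input: Good morning.\nOutput: Bonjour.",
    "Instruction: Classify the sentiment of the review.\n"
    "Input: The film was dull.\nOutput: Negative.",
]

def generation_prompt(seeds):
    """Pack seed examples into a prompt that asks the model for a new task."""
    demos = "\n\n".join(seeds)
    return f"{demos}\n\nCome up with a new task in the same format.\n\nInstruction:"

# new_example = complete(generation_prompt(seed_tasks))  # hypothetical API call
print(generation_prompt(seed_tasks))
```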
These diverse data sources collectively contribute to the robustness of language models, ensuring that they are well-versed in both structured task-oriented interactions and the flexible, unpredictable nature of human dialogue.
Instruction tuning is a method of training large language models (LLMs) that involves augmenting traditional input-output data with explicit instructions, enhancing the model's ability to generalize to new tasks.
Below are the key instruction-tuning datasets and the background research:
Natural Instructions
Developed by Mishra et al. (2022), this dataset comprises 193,000 instruction-output examples derived from 61 English NLP tasks.
The uniqueness lies in its structured approach, where instructions from each dataset are aligned to a common schema, including definitions, things to avoid, and examples.
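An instance in this schema might look roughly like the following; the contents are invented for illustration and the field names are an assumption, so consult the dataset itself for the exact layout.

```python
# An illustrative instance in the spirit of the Natural Instructions
# schema (definition, things to avoid, examples). All contents here are
# invented for demonstration purposes.
instance = {
    "definition": "Given a news headline, label it as 'sarcastic' "
                  "or 'not sarcastic'.",
    "things_to_avoid": "Do not output anything other than the two labels.",
    "positive_example": {
        "input": "Area man wins argument with toddler.",
        "output": "sarcastic",
    },
    "negative_example": {  # shows an incorrect behaviour to avoid
        "input": "Stocks fell sharply on Monday.",
        "output": "sarcastic",
    },
}
```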
Super-Natural Instructions
Created by Wang et al. (2022), it is an extension of the Natural Instructions dataset.
It includes 5 million examples spanning 76 task types in 55 languages, with instructions simplified to include task definitions and positive and negative examples with explanations.
Unnatural Instructions
Introduced by Honovich et al. (2023), this dataset contains 240,000 examples generated by prompting InstructGPT (text-davinci-002) with Super-Natural Instructions examples.
It covers a broader range of tasks than its predecessors and includes creative tasks beyond classical NLP challenges.
Self-Instruct
Similar to Unnatural Instructions, this dataset by Wang et al. (2023) consists of 82,000 examples generated using GPT-3.
It decouples example generation into three steps: generating the instruction, then the input, and finally the output (output-first for classification tasks), aiming to reduce bias in the generated data.
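A schematic of that three-step decoupling might look like the following, where `complete` is a hypothetical stand-in for a text-completion call and the prompt wording is an assumption.

```python
def generate_example(complete, seed_demos):
    """Generate one synthetic example in three decoupled steps.
    `complete` is a hypothetical text-completion function."""
    # Step 1: propose a new instruction from seed demonstrations.
    instruction = complete(f"{seed_demos}\n\nCome up with a new task instruction:")
    # Step 2: write an input suited to that instruction.
    task_input = complete(f"Instruction: {instruction}\nWrite a suitable input:")
    # Step 3: produce the output (Self-Instruct generates the output
    # first for classification tasks, to reduce label bias).
    output = complete(f"Instruction: {instruction}\nInput: {task_input}\nOutput:")
    return {"instruction": instruction, "input": task_input, "output": output}
```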
These datasets and approaches to instruction tuning highlight the evolving landscape of LLM training.
By integrating explicit instructions into training data, instruction tuning enables models to understand and perform tasks more effectively, bridging the gap between machine understanding and human-like comprehension and reasoning.