How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
This January 2024 paper investigates how the composition of supervised fine-tuning (SFT) data affects the abilities of large language models (LLMs) in mathematical reasoning, code generation, and general human-aligning tasks.
The authors explore four key research questions concerning how data amount, composition ratio, model size, and SFT strategy affect model performance.
The authors explore four SFT strategies: multi-task learning, sequential training, mixed sequential training, and Dual-stage Mixed Fine-tuning (DMT).
Multi-task learning leads to performance conflicts between abilities, while sequential training results in catastrophic forgetting of abilities learned earlier.
The proposed DMT strategy effectively alleviates both performance conflicts and catastrophic forgetting by balancing general and specialized abilities.
Collect the required datasets (GSM8K RFT, Code Alpaca, and ShareGPT) and evaluation benchmarks (GSM8K test set, HumanEval, and MT-Bench).
Prepare the pre-trained LLaMA models (7B, 13B, 33B).
For each research question:
a. Create the necessary data subsets according to the experimental design.
b. Fine-tune the LLaMA models using the specified training strategies and hyperparameters.
c. Evaluate the fine-tuned models on the corresponding benchmarks.
d. Analyse the results and compare performance across the different settings (a sketch of this loop follows).
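Wired together, the replication loop might look like the sketch below. Every helper here (`load_pretrained_llama`, `fine_tune`, the three `evaluate_*` functions) is a hypothetical stub standing in for your own training and benchmark harnesses, not a library API:

```python
# Hypothetical orchestration of one experiment run. All helpers are stubs.
from dataclasses import dataclass, field

def load_pretrained_llama(size: str): ...                 # load the base checkpoint
def fine_tune(model, data, strategy, *, epochs, lr): ...  # your SFT loop
def evaluate_gsm8k(model) -> float: return 0.0            # accuracy, GSM8K test set
def evaluate_humaneval(model) -> float: return 0.0        # pass@1, HumanEval
def evaluate_mt_bench(model) -> float: return 0.0         # general alignment score

@dataclass
class ExperimentConfig:
    model_size: str                                    # "7B", "13B", or "33B"
    data_subsets: dict = field(default_factory=dict)   # name -> SFT pairs
    strategy: str = "multi_task"   # or "sequential", "mixed_sequential", "dmt"
    epochs: int = 3
    learning_rate: float = 2e-5

def run_experiment(cfg: ExperimentConfig) -> dict:
    """Fine-tune one LLaMA model and score it on the three benchmarks."""
    model = load_pretrained_llama(cfg.model_size)
    model = fine_tune(model, cfg.data_subsets, cfg.strategy,
                      epochs=cfg.epochs, lr=cfg.learning_rate)
    return {"gsm8k": evaluate_gsm8k(model),
            "humaneval": evaluate_humaneval(model),
            "mt_bench": evaluate_mt_bench(model)}
```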
The authors visualise the semantic representations of different SFT abilities using t-SNE.
They observe a collapse phenomenon in the semantic representations of both the original LLaMA-13B and LLaMA-13B with DMT.
While there is some separation in the mathematical data representations, there is still overlap between code and general samples.
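A minimal sketch of this kind of visualisation, assuming you have already mean-pooled final-layer hidden states for a sample of math, code, and general instructions into NumPy arrays (the toy data below stands in for real embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

def plot_sft_representations(samples_by_ability):
    """samples_by_ability maps 'math'/'code'/'general' to (n_i, d) arrays."""
    labels, chunks = [], []
    for name, emb in samples_by_ability.items():
        labels.extend([name] * len(emb))
        chunks.append(emb)
    # Project all embeddings jointly so the abilities share one 2-D space.
    coords = TSNE(n_components=2, perplexity=30, init="pca",
                  random_state=0).fit_transform(np.vstack(chunks))
    labels = np.array(labels)
    for name in samples_by_ability:
        mask = labels == name
        plt.scatter(coords[mask, 0], coords[mask, 1], s=6, label=name)
    plt.legend()
    plt.title("t-SNE of SFT sample representations")
    plt.show()

if __name__ == "__main__":
    # Toy stand-ins for mean-pooled final-layer hidden states.
    rng = np.random.default_rng(0)
    plot_sft_representations({
        "math": rng.normal(0.0, 1.0, (200, 64)),
        "code": rng.normal(1.0, 1.0, (200, 64)),
        "general": rng.normal(2.0, 1.0, (200, 64)),
    })
```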
The authors investigate the impact of removing code and math-related samples from the ShareGPT dataset on the performance gains observed in mixed data settings.
They want to determine whether the performance improvements in low-resource scenarios are solely due to the presence of code and math samples in ShareGPT or if other factors contribute to these gains.
To do this, they use an open-set tagger (InsTag) to annotate samples in ShareGPT and remove instances containing the keywords "code" or "math" using regular expression matching. This process reduces the ShareGPT dataset from 86K to 63K samples.
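A minimal sketch of that filtering step, assuming each ShareGPT sample has already been annotated by InsTag and carries a `tags` list (the field name is an assumption):

```python
import re

# The keywords the paper filters on; matched case-insensitively.
CODE_OR_MATH = re.compile(r"code|math", re.IGNORECASE)

def strip_code_and_math(samples: list) -> list:
    """Drop samples whose InsTag annotations mention code or math."""
    return [s for s in samples
            if not any(CODE_OR_MATH.search(tag) for tag in s.get("tags", []))]

# In the paper, this filtering shrinks ShareGPT from ~86K to ~63K samples.
demo = [{"text": "write a sort function", "tags": ["code generation"]},
        {"text": "plan a trip to Kyoto", "tags": ["travel advice"]}]
print(len(strip_code_and_math(demo)))  # -> 1
```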
They then conduct experiments using different proportions of the modified ShareGPT dataset (without code and math) mixed with the GSM8K and Code Alpaca datasets.
The results show that removing code and math samples from ShareGPT not only mitigates performance conflicts among different abilities under high-resource conditions but also maintains stable gains in low-resource settings.
This finding suggests that the diversity and variability of the data in ShareGPT, rather than the specific code and math samples, contribute to the performance improvements in low-resource scenarios.
In other words, the code and math data within ShareGPT are not the key factor driving the performance gains identified in Section 3.3, which underscores the generality of the paper's conclusions.
In this section, the authors explore how different values of k (the proportion of specialised data) influence model performance in the Dual-stage Mixed Fine-tuning (DMT) strategy.
They adjust the value of k from 0 to 1 and observe the following:
When k increases from 0 to 1/256, the SFT models show significant improvements in both specialised ability and general human-aligning ability.
As k increases from 1/256 to 1/4, there is a roughly linear trade-off between the two: specialised ability improves while general ability declines.
As k increases further from 1/4 to 1, general ability continues to decline, consistent with the finding that high-resource settings lead to performance conflicts.
These observations suggest that the value of k needs to be tuned based on specific requirements to achieve a balance between multiple abilities.
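As a rough illustration, the Stage 2 mixture for a given k can be built as follows. This assumes k is the fraction of the combined specialised set (math plus code) carried over into the mix, which matches the paper's description at a high level; the toy datasets are placeholders:

```python
import random

def build_stage2_mixture(general, specialised, k, seed=0):
    """Mix all general data with a fraction k of the specialised data."""
    rng = random.Random(seed)
    recall = rng.sample(specialised, int(len(specialised) * k))
    mixture = general + recall
    rng.shuffle(mixture)
    return mixture

# Sweep k as in the ablation; the datasets here are toy stand-ins.
general = [("general", i) for i in range(1024)]
specialised = [("specialised", i) for i in range(1024)]
for k in (0, 1/256, 1/16, 1/4, 1):
    print(k, len(build_stage2_mixture(general, specialised, k)))
```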
The paper's findings point to several practical directions:
Improving the performance of LLMs in specialised domains by leveraging the DMT strategy and carefully tuning the amount of specialised data.
Developing more efficient and effective training strategies for LLMs to acquire multiple abilities while minimising performance conflicts and catastrophic forgetting.
Enhancing the adaptability of LLMs to low-resource settings by leveraging diverse and variable data sources.
Designing LLMs that can effectively balance and switch between general and specialised abilities based on the specific requirements of the task at hand.
Dual-stage Mixed Fine-tuning (DMT) is a training strategy proposed in the paper to address the challenges of ability conflicts during multi-task learning and catastrophic forgetting during sequential training.
The key idea behind DMT is to first learn from a large amount of specialised data, and then to add a small amount of that specialised data back into the general data during the final fine-tuning stage to prevent forgetting.
Stage 1
Fine-tune the pre-trained language model (LLaMA) on the specialised datasets (e.g., GSM8K RFT for math reasoning and Code Alpaca for code generation) using supervised fine-tuning (SFT).
This stage is similar to the first stage of the mixed sequential training strategy.
Stage 2
Fine-tune the model from Stage 1 on a mixed data source that combines the general data (e.g., ShareGPT) with a proportion k of the specialised data (code and math).
The values of k can be 1, 1/2, 1/4, 1/8, 1/16, or 1/32. Adding a small amount of specialised data at this stage helps the model recall the specialised abilities learned in Stage 1.
Here's a simplified sketch of the DMT process in Python. The `train_sft` helper is a stub standing in for whatever supervised fine-tuning routine you use, the dataset arguments are assumed to be lists of instruction-response pairs, and the mixing step mirrors the Stage 2 sketch above:
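```python
import random

def train_sft(model, dataset, epochs=3, lr=2e-5):
    """Stub for one supervised fine-tuning run; returns the tuned model."""
    ...  # tokenise instruction-response pairs and run your usual SFT loop
    return model

def dmt(base_model, math_data, code_data, general_data, k=0.25, seed=0):
    # Stage 1: fine-tune on the specialised sources only (math + code),
    # mirroring the first stage of mixed sequential training.
    specialised = math_data + code_data
    model = train_sft(base_model, specialised)

    # Stage 2: fine-tune on general data mixed with a fraction k of the
    # specialised data, so the model recalls the abilities learned in Stage 1.
    rng = random.Random(seed)
    recall = rng.sample(specialised, int(len(specialised) * k))
    mixture = general_data + recall
    rng.shuffle(mixture)
    return train_sft(model, mixture)
```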
Note: The code above is a simplified representation of the process and would need to be adapted to work with the specific libraries and frameworks used for fine-tuning and evaluation.
By following this process, you can emulate the DMT strategy and investigate its effectiveness in mitigating catastrophic forgetting and achieving a balance between specialised and general abilities in the fine-tuned language models.