# Enhanced Supervised Fine Tuning

This <mark style="color:blue;">**January 2024**</mark> paper investigates how the composition of supervised fine-tuning (SFT) data affects the abilities of large language models (LLMs) in *<mark style="color:yellow;">**mathematical reasoning, code generation, and general human-aligning tasks.**</mark>*

{% embed url="https://arxiv.org/abs/2310.05492" %}
How Abilities in Large Language Models are Affected by Supervised Fine-tuning Data Composition
{% endembed %}

The authors explore four key research questions on how model performance is affected by data amount, composition ratio, model size, and SFT strategy.

### <mark style="color:purple;">Impact of different Supervised Fine Tuning strategies</mark>

The authors explore four SFT strategies: multi-task learning, sequential training, mixed sequential training, and <mark style="color:yellow;">**Dual-stage Mixed Fine-tuning (DMT).**</mark>

* Multi-task learning leads to conflicts, while sequential training results in catastrophic forgetting.
* The proposed *<mark style="color:yellow;">**DMT strategy effectively alleviates both performance conflicts and catastrophic forgetting by balancing general and specialized abilities.**</mark>*

<figure><img src="/files/Ht0luzuuwYopBt4c6WHN" alt=""><figcaption><p>The illustration of four different training strategies in this paper</p></figcaption></figure>
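As a rough illustration of how the four strategies differ in the ordering of specialised (math, code) and general (ShareGPT) data, the sketch below uses placeholder `load`, `mix`, `sample`, and `fine_tune` helpers. It is a schematic under those assumptions, not the authors' implementation.

```python
# Schematic of the four training strategies (all helper functions are placeholders).
math, code, general = load("gsm8k_rft"), load("code_alpaca"), load("sharegpt")

# 1. Multi-task learning: all data sources mixed into a single training run.
multi_task_model = fine_tune(base_model, mix(math, code, general))

# 2. Sequential training: one source after another (prone to catastrophic forgetting).
sequential_model = fine_tune(fine_tune(fine_tune(base_model, math), code), general)

# 3. Mixed sequential training: specialised sources mixed first, general data last.
mixed_sequential_model = fine_tune(fine_tune(base_model, mix(math, code)), general)

# 4. DMT: same first stage as mixed sequential, but the final stage re-injects a
#    small proportion k of the specialised data alongside the general data.
dmt_model = fine_tune(fine_tune(base_model, mix(math, code)),
                      mix(general, sample(math, k), sample(code, k)))
```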

### <mark style="color:purple;">Process for emulating their experiment</mark>

1. Collect the required datasets (GSM8K RFT, Code Alpaca, and ShareGPT) and evaluation benchmarks (GSM8K test set, HumanEval, and MT-Bench).
2. Prepare the pre-trained LLaMA models (7B, 13B, 33B).
3. For each research question:
   * Create the necessary data subsets according to the experimental design.
   * Fine-tune the LLaMA models using the specified training strategies and hyperparameters.
   * Evaluate the fine-tuned models on the corresponding benchmarks.
   * Analyze the results and compare the performance across different settings.
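Step 1 could look like the sketch below, which assumes the Hugging Face `datasets` library. The local JSON file names for GSM8K RFT, Code Alpaca, and ShareGPT are illustrative placeholders, while the GSM8K test set and HumanEval identifiers are the public Hub datasets.

```python
from datasets import load_dataset  # pip install datasets

# SFT data sources (file names are placeholders; substitute the actual exports)
gsm8k_rft   = load_dataset("json", data_files="gsm8k_rft.json", split="train")    # math reasoning
code_alpaca = load_dataset("json", data_files="code_alpaca.json", split="train")  # code generation
sharegpt    = load_dataset("json", data_files="sharegpt.json", split="train")     # general alignment

# Evaluation benchmarks
gsm8k_test_set = load_dataset("gsm8k", "main", split="test")
human_eval     = load_dataset("openai_humaneval", split="test")
# MT-Bench is usually run through its own judging harness rather than loaded as a dataset.

# Example grouping used by the fine-tuning loop below
data_subsets = {"gsm8k_rft": gsm8k_rft, "code_alpaca": code_alpaca, "sharegpt": sharegpt}
```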

```python
# Pseudo-code for fine-tuning and evaluation.
# `load_pretrained_model`, `fine_tune`, `evaluate`, `save_model`, and
# `log_results` are placeholders for the actual training and evaluation code.

fine_tuned_models = {}

# Fine-tuning: one run per model size and data subset
for model_size in [7, 13, 33]:
    for subset_name, data_subset in data_subsets.items():
        model = load_pretrained_model(f"llama-{model_size}b")
        fine_tuned_model = fine_tune(model, data_subset, epochs=3, learning_rate=2e-5, batch_size=16)
        save_model(fine_tuned_model, f"llama-{model_size}b_{subset_name}")
        fine_tuned_models[(model_size, subset_name)] = fine_tuned_model

# Evaluation: score every fine-tuned checkpoint on the three benchmarks
for (model_size, subset_name), fine_tuned_model in fine_tuned_models.items():
    gsm8k_score = evaluate(fine_tuned_model, gsm8k_test_set)   # math reasoning
    human_eval_score = evaluate(fine_tuned_model, human_eval)  # code generation
    mt_bench_score = evaluate(fine_tuned_model, mt_bench)      # general alignment
    log_results(model_size, subset_name, gsm8k_score, human_eval_score, mt_bench_score)
```

### <mark style="color:purple;">Discussion</mark>

#### <mark style="color:green;">Visualisation of Different SFT Abilities</mark>

* The authors visualise the semantic representations of different SFT abilities using <mark style="color:blue;">**t-SNE**</mark>.
* They observe a collapse phenomenon in the semantic representations of both the original LLaMA-13b and LLaMA-13b with DMT.
* While there is some separation in the mathematical data representations, there is still overlap between code and general samples.
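As a rough idea of how this visualisation could be reproduced, the sketch below mean-pools last-layer hidden states for samples from each data source and projects them with scikit-learn's t-SNE. The checkpoint name and the pooling choice are assumptions rather than the authors' exact setup.

```python
import numpy as np
import torch
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint; the paper visualises LLaMA-13b representations.
checkpoint = "huggyllama/llama-13b"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint, torch_dtype=torch.float16, device_map="auto")

def embed(texts):
    """Mean-pool last-layer hidden states as a per-sample semantic representation."""
    vectors = []
    for text in texts:
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).to(model.device)
        with torch.no_grad():
            hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_dim)
        vectors.append(hidden.mean(dim=1).squeeze(0).float().cpu().numpy())
    return vectors

# math_samples, code_samples, general_samples: lists of instruction strings per source
points = np.array(embed(math_samples) + embed(code_samples) + embed(general_samples))
labels = (["math"] * len(math_samples) + ["code"] * len(code_samples)
          + ["general"] * len(general_samples))

coords = TSNE(n_components=2, perplexity=30).fit_transform(points)
for name in ("math", "code", "general"):
    idx = [i for i, label in enumerate(labels) if label == name]
    plt.scatter(coords[idx, 0], coords[idx, 1], label=name, s=5)
plt.legend()
plt.show()
```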

#### <mark style="color:green;">Ablation of the Specialised Domains in ShareGPT</mark>

The authors investigate the impact of removing code and math-related samples from the ShareGPT dataset on the performance gains observed in mixed data settings.&#x20;

They want to determine whether the performance improvements in low-resource scenarios are solely due to the presence of code and math samples in ShareGPT or if other factors contribute to these gains.

To do this, they use an open-set tagger (InsTag) to annotate samples in ShareGPT and remove instances containing the keywords "code" or "math" using regular expression matching. This process *<mark style="color:yellow;">**reduces the ShareGPT dataset from 86K to 63K samples**</mark>*.
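A minimal sketch of this filtering step, assuming each ShareGPT sample has already been annotated by InsTag and carries its tags in a `tags` field (the field name is an assumption):

```python
import re

# Match the keywords "code" or "math" in the InsTag annotations
pattern = re.compile(r"\b(code|math)\b", re.IGNORECASE)

def keep(sample):
    """Keep a sample only if none of its InsTag tags mention code or math."""
    return not any(pattern.search(tag) for tag in sample["tags"])

filtered_sharegpt = [s for s in sharegpt_samples if keep(s)]
# In the paper, this filtering reduces ShareGPT from roughly 86K to 63K samples.
print(f"{len(sharegpt_samples)} -> {len(filtered_sharegpt)} samples")
```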

They then conduct experiments using different proportions of the modified ShareGPT dataset (without code and math) mixed with the GSM8K and Code Alpaca datasets.

The results show that removing code and math samples from ShareGPT not only mitigates performance conflicts among different abilities under high-resource conditions but also maintains stable gains in low-resource settings.

This finding suggests that the diversity and variability of the data in ShareGPT, rather than the specific code and math samples, contribute to the performance improvements in low-resource scenarios.&#x20;

In other words, the presence of code and math data within ShareGPT is not the key factor driving the performance gains identified in Section 3.3, which reinforces the generality of that conclusion.

#### <mark style="color:green;">Specialised Data Amount in Dual-stage Mixed Fine-tuning (DMT)</mark>

In this section, the authors explore how different values of k (<mark style="color:yellow;">the proportion of specialised data mixed into the general data in the second stage</mark>) influence model performance in the Dual-stage Mixed Fine-tuning (DMT) strategy.

They adjust the value of k from 0 to 1 and observe the following:

1. When k increases from 0 to 1/256, the SFT models show significant improvements in both specialised ability and general human-aligning ability.
2. As k increases from 1/4 to 1, the model exhibits a decline in general ability, consistent with the findings that high-resource settings lead to conflicts.
3. When k increases from 1/256 to 1/4, there is a roughly linear inverse relationship between general and specialised ability, with an increase in general ability coinciding with a decrease in specialised ability.

These observations suggest that the value of k needs to be tuned based on specific requirements to achieve a balance between multiple abilities.&#x20;

### <mark style="color:purple;">Practical applications of the paper's findings</mark>

1. Improving the performance of LLMs in specialised domains by leveraging the DMT strategy and carefully tuning the amount of specialised data.
2. Developing more efficient and effective training strategies for LLMs to acquire multiple abilities while minimising performance conflicts and catastrophic forgetting.
3. Enhancing the adaptability of LLMs to low-resource settings by leveraging diverse and variable data sources.
4. Designing LLMs that can effectively balance and switch between general and specialised abilities based on the specific requirements of the task at hand.

### <mark style="color:purple;">Dual-stage Mixed Fine-tuning (DMT)</mark>&#x20;

Dual-stage Mixed Fine-tuning (DMT) is a training strategy proposed in the paper to *<mark style="color:yellow;">**address the challenges of ability conflicts during multi-task learning and catastrophic forgetting during sequential training.**</mark>*&#x20;

The key idea behind DMT is to <mark style="color:yellow;">**first fine-tune on a large amount of specialised data**</mark> and then mix a small amount of that specialised data into the general data during the final stage of fine-tuning to prevent forgetting.

#### <mark style="color:green;">**The DMT process consists of two stages**</mark>

<mark style="color:purple;">**Stage 1**</mark>

Fine-tune the pre-trained language model (LLaMA) on the specialised datasets (e.g., GSM8K RFT for math reasoning and Code Alpaca for code generation) using supervised fine-tuning (SFT). &#x20;

This stage is similar to the first stage of the mixed sequential training strategy.

<mark style="color:purple;">**Stage 2**</mark>

Fine-tune the model from Stage 1 using a mixed data source that combines the general data (e.g., ShareGPT) with varying proportions (k) of the specialized data (code and math).&#x20;

The values of k can be 1, 1/2, 1/4, 1/8, 1/16, or 1/32. Adding a small amount of specialized data in this stage helps the model recall the specialized abilities learned in Stage 1.

Here's a simplified code structure to emulate the DMT process:

```python
# Simplified DMT pseudo-code. `pretrained_llama_model`, `fine_tune`,
# `create_mixed_dataset`, `evaluate`, `save_model`, and `load_model` are
# placeholders for the actual training and evaluation code.

# Prepare the datasets
specialized_datasets = ["gsm8k_rft", "code_alpaca"]
general_dataset = "sharegpt"

# Benchmark used to test each specialised ability (MT-Bench covers general ability)
benchmarks = {"gsm8k_rft": "gsm8k_test", "code_alpaca": "human_eval"}

# Stage 1: fine-tune on the specialised data only
for dataset in specialized_datasets:
    model = pretrained_llama_model()
    fine_tuned_model = fine_tune(model, dataset)
    save_model(fine_tuned_model, f"{dataset}_stage1")

# Stage 2: fine-tune the Stage 1 model on the general data mixed with a
# proportion k of the specialised data
k_values = [1, 1/2, 1/4, 1/8, 1/16, 1/32]
for dataset in specialized_datasets:
    for k in k_values:
        mixed_dataset = create_mixed_dataset(general_dataset, dataset, k)
        model = load_model(f"{dataset}_stage1")
        fine_tuned_model = fine_tune(model, mixed_dataset)
        save_model(fine_tuned_model, f"{dataset}_stage2_k{k}")

# Evaluation: specialised benchmark plus MT-Bench for general ability
for dataset in specialized_datasets:
    for k in k_values:
        model = load_model(f"{dataset}_stage2_k{k}")
        specialised_score = evaluate(model, benchmarks[dataset])
        general_score = evaluate(model, "mt_bench")
        print(dataset, k, specialised_score, general_score)
```

Note: The code above is a simplified representation of the process and would need to be adapted to work with the specific libraries and frameworks used for fine-tuning and evaluation.
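For completeness, here is a minimal sketch of what `create_mixed_dataset` might look like, assuming the datasets are plain in-memory lists of examples rather than the string identifiers used in the pseudo-code above; the uniform sampling scheme is an assumption.

```python
import random

def create_mixed_dataset(general_data, specialized_data, k, seed=42):
    """Combine the full general dataset with a proportion k of the specialised dataset."""
    rng = random.Random(seed)
    n_specialized = int(len(specialized_data) * k)
    mixed = list(general_data) + rng.sample(list(specialized_data), n_specialized)
    rng.shuffle(mixed)  # interleave general and specialised examples
    return mixed
```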

By following this process, you can emulate the DMT strategy and investigate its effectiveness in mitigating catastrophic forgetting and achieving a balance between specialized and general abilities in the fine-tuned language models.

