Phi-3 Technical Report
A Highly Capable Language Model Locally on Your Phone
This April 2024 paper from the team at Microsoft introduces phi-3-mini, a 3.8 billion parameter language model that achieves performance rivalling much larger models like Mixtral 8x7B and GPT-3.5, despite being small enough to run on a phone.
This feat was achieved solely by improving the training data, not by increasing model size.
This continues Microsoft's line of work on training smaller language models, such as phi-1.5 and phi-2.
Phi-3-mini's small size (3.8 billion parameters) enables it to run on devices like smartphones, opening up possibilities for local, private, and efficient applications.
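As a rough illustration of why 3.8 billion parameters is phone-sized, the weight memory at common precisions can be estimated with simple arithmetic. This is a back-of-the-envelope sketch only; real on-device memory use also includes the KV cache, activations, and runtime overhead.

```python
# Back-of-the-envelope weight-memory estimate for a 3.8B-parameter model.
# Illustrative only: actual on-device memory also includes the KV cache,
# activations, and runtime overhead.
params = 3.8e9

for precision, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{precision}: ~{gib:.1f} GiB of weights")
# fp16: ~7.1 GiB, int8: ~3.5 GiB, int4: ~1.8 GiB
```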
The innovation lies in the dataset used for training phi-3-mini, which is a scaled-up version of the one used for phi-2. It consists of heavily filtered web data and synthetic data generated by language models.
phi-3-mini is a transformer decoder with a default context length of 4K tokens.
It has a similar block structure to Llama-2 and uses the same tokenizer, with a vocabulary size of 32,064. The model has a hidden dimension of 3072, 32 attention heads, and 32 layers.
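The headline hyperparameters above can be collected into a small configuration sketch. This is not the official model code; it simply restates the reported numbers, with the head dimension derived from them.

```python
from dataclasses import dataclass

# Illustrative summary of the reported phi-3-mini hyperparameters
# (not the official configuration class).
@dataclass
class Phi3MiniConfig:
    vocab_size: int = 32_064              # Llama-2 tokenizer, extended vocabulary
    hidden_size: int = 3_072
    num_hidden_layers: int = 32
    num_attention_heads: int = 32
    max_position_embeddings: int = 4_096  # default 4K context window

    @property
    def head_dim(self) -> int:
        return self.hidden_size // self.num_attention_heads

print(Phi3MiniConfig().head_dim)  # 96
```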
The authors focused on the quality of data for a given scale, aiming to calibrate the training data to be closer to the "data optimal" regime for small models.
They filtered web data to contain the correct level of "knowledge" and prioritised web pages that could potentially improve the model's reasoning ability.
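The report does not publish its filtering pipeline, but the idea of keeping only pages judged useful for reasoning can be sketched with a generic classifier-based filter. The quality_classifier callable and the 0.8 threshold below are hypothetical placeholders, not details from the paper.

```python
# Hypothetical sketch of classifier-based web filtering: score each document
# with some quality/reasoning-value classifier and keep only high scorers.
# Both `quality_classifier` and the threshold are illustrative assumptions.
def filter_corpus(documents, quality_classifier, threshold=0.8):
    kept = []
    for doc in documents:
        score = quality_classifier(doc["text"])  # e.g. P(page improves reasoning)
        if score >= threshold:
            kept.append(doc)
    return kept

# Usage: filtered = filter_corpus(web_docs, my_classifier)
```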
The authors also provided initial parameter-scaling results with 7B and 14B models (phi-3-small and phi-3-medium) trained on 4.8T tokens. These models significantly outperform phi-3-mini on benchmarks like MMLU and MT-bench.
phi-3-mini underwent supervised fine-tuning (SFT) and direct preference optimization (DPO) to improve its performance in math, coding, reasoning, robustness, and safety. This process also transformed the language model into an AI assistant suitable for user interaction.
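The alignment code is not part of the report, but the DPO objective it refers to is standard. A minimal PyTorch sketch of the loss, assuming per-sequence log-probabilities of the chosen and rejected responses under the policy and a frozen reference model, looks like this (beta=0.1 is a typical default, not a value from the paper):

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over summed per-sequence log-probabilities
    (a sketch, not the phi-3 training code)."""
    # Implicit rewards: log-ratio of policy vs. reference for each response.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Push the preferred response to out-score the dispreferred one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Dummy log-probabilities for a batch of two preference pairs.
loss = dpo_loss(torch.tensor([-12.0, -9.5]), torch.tensor([-15.0, -11.0]),
                torch.tensor([-12.5, -9.8]), torch.tensor([-14.0, -10.5]))
print(loss.item())
```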
A long-context version of phi-3-mini (phi-3-mini-128K) was developed using LongRoPE, extending the context length to 128K tokens while maintaining performance on par with the 4K version.
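LongRoPE itself searches for non-uniform, per-dimension rescale factors for the rotary position embedding, which is beyond a short example; the sketch below only illustrates the simpler underlying idea of rescaling positions so that a longer window maps into the frequency range seen during 4K training.

```python
import torch

def rope_angles(positions, head_dim=96, base=10_000.0, scale=1.0):
    """Rotary-embedding angles with a uniform position rescaling factor.

    Illustrative only: LongRoPE uses searched, non-uniform per-dimension
    factors rather than the single uniform `scale` used here.
    """
    inv_freq = 1.0 / (base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    return torch.outer(positions.float() / scale, inv_freq)

# A 4K-trained model stretched toward a 128K window with a uniform 32x factor.
angles_4k = rope_angles(torch.arange(4_096))
angles_128k = rope_angles(torch.arange(131_072), scale=131_072 / 4_096)
print(angles_4k.shape, angles_128k.shape)  # (4096, 48) (131072, 48)
```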
The achievement of creating a highly capable language model that can run on a phone is surprising because it challenges the assumption that larger models are always better.
By focusing on data quality and optimizing the training process, the researchers have demonstrated that smaller models can achieve impressive results when trained on the right data.
The next section of the paper discusses the performance of phi-3-mini on various academic benchmarks and compares it with other models such as phi-2, Mistral-7b, Mixtral-8x7b, Gemma 7B, Llama-3-instruct-8b, and GPT-3.5.
General observation: we want to flag our doubts about using these academic benchmarks to assess model quality. Developers may be inflating benchmark scores through contamination, i.e. by including benchmark material (deliberately or inadvertently) in the training data.
The models are evaluated on a wide range of tasks, including common-sense reasoning (e.g., PIQA, SociQA), logical and mathematical reasoning (e.g., ANLI, GSM-8K), and domain-specific knowledge (e.g., MedQA, TriviaQA). The evaluation uses few-shot prompts (varying from 0 to 10 shots) at temperature 0.
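The evaluation setup (k-shot prompts, greedy decoding at temperature 0) can be sketched generically. The prompt template and the model_generate callable below are illustrative placeholders, not the harness used in the paper.

```python
# Illustrative k-shot, temperature-0 evaluation loop.
# `model_generate(prompt, temperature)` stands in for whatever model API is
# used; the prompt template is a generic placeholder, not the paper's format.
def build_few_shot_prompt(exemplars, question, k=5):
    shots = "\n\n".join(
        f"Question: {ex['question']}\nAnswer: {ex['answer']}" for ex in exemplars[:k]
    )
    return f"{shots}\n\nQuestion: {question}\nAnswer:"

def evaluate(model_generate, exemplars, test_set, k=5):
    correct = 0
    for item in test_set:
        prompt = build_few_shot_prompt(exemplars, item["question"], k=k)
        prediction = model_generate(prompt, temperature=0.0)  # greedy decoding
        correct += prediction.strip() == item["answer"]
    return correct / len(test_set)
```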
phi-3-mini (3.8B parameters) achieves impressive results across most benchmarks, often outperforming larger models like Mistral-7b and Llama-3-instruct-8b.
It even rivals the performance of GPT-3.5 on some tasks (e.g., MMLU, HellaSwag, ANLI).
The preview results for phi-3-small (7B) and phi-3-medium (14B) show further improvements in performance, with phi-3-medium achieving an average score of 78.2% across the benchmarks, surpassing GPT-3.5's average of 75.3%.
phi-3-mini performs exceptionally well on coding tasks like HumanEval (59.1%) and MBPP (70.0%), outperforming larger models like Mixtral-8x7b and GPT-3.5.
Overall, the paper demonstrates that phi-3-mini achieves remarkable performance on a wide range of benchmarks while maintaining a strong focus on safety and responsible AI principles.
The model's ability to rival larger models in terms of both performance and safety is a testament to the effectiveness of the training methodology and data optimisation techniques employed by the researchers.
The main weaknesses of phi-3-mini are:
Limited capacity for storing factual knowledge due to its small size, resulting in lower performance on tasks like TriviaQA that require a vast amount of factual information.
Restricted language capabilities, as the model is mostly trained on English data, limiting its multilingual performance.
Challenges common to most LMs, such as factual inaccuracies (hallucinations), reproduction or amplification of biases, inappropriate content generation, and safety issues, despite the efforts made to mitigate these problems.
phi-3-mini is a ground-breaking language model that demonstrates the potential of optimising training data and methodology to achieve impressive performance in a compact model size.
Despite its limitations in storing factual knowledge and multilingual capabilities, phi-3-mini rivals the performance of much larger models on a wide range of benchmarks while maintaining a strong focus on safety and responsible AI principles.
The model's ability to run on a phone while delivering high-quality results opens up new possibilities for applications that require on-device processing and privacy.
phi-3-mini's small size and strong performance make it an ideal candidate for a personal AI assistant that can run on a smartphone.
Users can interact with the model directly on their devices without the need for internet connectivity, ensuring privacy and faster response times. The assistant can help with tasks such as answering questions, providing recommendations, and offering creative writing suggestions.
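A minimal sketch of this kind of local use with the Hugging Face transformers library is shown below. The checkpoint name microsoft/Phi-3-mini-4k-instruct is taken from the public release and is assumed here; an actual phone deployment would more likely run a 4-bit quantised build through a mobile runtime rather than this Python stack.

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes the publicly released "microsoft/Phi-3-mini-4k-instruct" checkpoint;
# a real phone deployment would typically use a quantised mobile runtime.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

messages = [{"role": "user",
             "content": "Suggest three titles for a short story about tide pools."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True,
                                          return_tensors="pt")
output = model.generate(input_ids, max_new_tokens=120, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```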
phi-3-mini's strong performance on coding tasks like HumanEval and MBPP suggests that it can be used as an educational tool for students learning programming.
The model can provide explanations, generate code snippets, and offer guidance on coding best practices. Its ability to run on a phone makes it accessible to students in regions with limited internet access.
phi-3-mini can be integrated into customer support applications that run on smartphones, allowing users to receive instant assistance without the need for an internet connection.
The model can answer common queries, provide troubleshooting steps, and guide users through various processes. Its strong language understanding and reasoning abilities ensure that users receive accurate and helpful responses, improving customer satisfaction and reducing the workload on human support staff.
The phi-3-mini paper references a diverse set of prior work, including research on language model scaling, training methodologies, benchmarking, and responsible AI. The key areas covered by the referenced papers are:
Scaling laws for neural language models [KMH+20]
Training compute-optimal large language models [HBM+22]
Scaling data-constrained language models [MRB+23]
Attention is all you need [VSP+17]
LongRoPE: Extending LLM context window beyond 2 million tokens [DZZ+24]
Textbooks are all you need [GZA+23]
Textbooks are all you need II: phi-1.5 technical report [LBE+23]
phi-2: The surprising power of small language models [JBA+23]
MMLU [HBK+21], HellaSwag [ZHB+19], ANLI [NWD+20], GSM-8K [CKB+21], MedQA [JPO+20], AGIEval [ZCG+23], TriviaQA [JCWZ17], Arc-C/Arc-E [CCE+18], PIQA/SociQA [BZGC19], BigBench-Hard [SRR+22, SSS+22], WinoGrande [SLBBC19], OpenBookQA [MCKS18], BoolQ [CLC+19], CommonSenseQA [THLB19], TruthfulQA [LHE22], HumanEval [CTJ+21], MBPP [AON+21], GPQA [RHS+23], MTBench [ZCS+23]
Training a helpful and harmless assistant with reinforcement learning from human feedback [BJN+22]
Beavertails: Towards improved safety alignment of LLM via a human-preference dataset [JLD+23]
Safety-tuned LLaMas: Lessons from improving the safety of large language models that follow instructions [BSA+24]
GPT-2 [RWC+19], Llama [TLI+23], Mistral [JSM+23], Mixtral [JSR+24], Gemma [TMH+24]