Pre-Training Data
Training Foundation Models
Foundation models underpin the generative AI industry. Since the release of the 2017 paper "Attention Is All You Need", which introduced the Transformer architecture, foundation model development has continued to grow.
The first public release of a pre-trained large language model based on the Transformer was OpenAI's 117-million-parameter GPT model. Following this, models of increasing size were developed by many different companies, including OpenAI, Google, Meta, Microsoft, and Nvidia.
Foundation models are trained on extensive corpora, often encompassing billions of documents. Most of this text data has been derived from public sources, but there is an increasing demand for proprietary or private datasets.
The capacity of these models to accurately predict subsequent elements in a sequence is based on their understanding of language patterns and context learnt from the datasets on which they are trained.
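To make next-token prediction concrete, the minimal sketch below uses the Hugging Face transformers library to load a small pre-trained causal language model and read off the most likely next tokens for a prompt. The "gpt2" checkpoint and the prompt text are illustrative choices only, not something prescribed here.

```python
# Minimal sketch of next-token prediction with a small pre-trained causal LM.
# Assumes the `transformers` and `torch` packages are installed; "gpt2" is used
# purely as an illustrative checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Foundation models are trained on large amounts of"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# The distribution over the next token is read from the final position.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = next_token_probs.topk(5)

for prob, token_id in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(token_id.item())!r}: {prob.item():.3f}")
```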
These models have been the bedrock of the generative AI revolution, offering both proprietary and increasingly open-source options for a range of applications.
Their development over recent years has underscored the potential of large-scale models to transform various sectors by providing advanced capabilities in language understanding and generation.
Sources of Training Data
The Internet
The Internet has been a rich resource for pre-training LLMs, offering a breadth of linguistic knowledge due to the wide range of content available online.
However, the quality of such data varies greatly, ranging from high-quality sources such as Wikipedia to low-quality sources such as spam emails.
Web data will remain an important source, but it is critical that it is cleaned and filtered to improve its quality.
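To give a sense of what this cleaning involves, the sketch below applies a few simple heuristic filters (minimum length, symbol ratio, obvious boilerplate phrases) to raw web documents. The thresholds and phrase list are assumptions made for the example, not values from any particular production pipeline.

```python
# Illustrative heuristics for filtering raw web text before pre-training.
# The thresholds and boilerplate phrases below are assumptions for this sketch.
BOILERPLATE_PHRASES = ("click here", "subscribe now", "accept all cookies")


def is_high_quality(text: str,
                    min_words: int = 50,
                    max_symbol_ratio: float = 0.1) -> bool:
    """Return True if a document passes some simple quality heuristics."""
    words = text.split()
    if len(words) < min_words:  # drop very short pages
        return False

    # Drop pages dominated by punctuation, markup remnants, or other symbols.
    symbols = sum(1 for ch in text if not ch.isalnum() and not ch.isspace())
    if symbols / max(len(text), 1) > max_symbol_ratio:
        return False

    # Drop pages containing obvious boilerplate or spam phrases.
    lowered = text.lower()
    if any(phrase in lowered for phrase in BOILERPLATE_PHRASES):
        return False

    return True


raw_documents = [
    "A long, well-formed article about language models ...",
    "click here to WIN $$$ !!!",
]
cleaned = [doc for doc in raw_documents if is_high_quality(doc)]
```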
Conversational Text
Conversational text from sources like Reddit and other social media platforms is used to improve a model's ability to engage in dialogue and to perform question-answering tasks.
The primary issue with this source of data is the invasion of privacy: conversational text, although posted on public forums, was never intended to be used as training data for artificial intelligence.
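A common, if only partial, mitigation is to scrub obvious personally identifiable information from conversational text before it is used for training. The sketch below does this with simple regular expressions for e-mail addresses and phone numbers; real pipelines use far more thorough approaches, so treat the patterns as illustrative assumptions.

```python
# Illustrative PII scrubbing for conversational text. These regular expressions
# only catch obvious e-mail addresses and phone numbers; they are assumptions
# for this sketch, not a complete privacy solution.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")


def scrub_pii(text: str) -> str:
    """Replace obvious e-mail addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text


print(scrub_pii("Email jane.doe@example.com or call +1 (555) 123-4567."))
# Expected: "Email [EMAIL] or call [PHONE]."
```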
Books
Book data provides formal and coherent long texts, contributing to a model's ability to understand complex linguistic structures and dependencies.
Open-source datasets such as Books3 and BookCorpus2, both included in The Pile, are common sources for this type of data, which aids LLMs in generating narrative texts and understanding formal language.
The Pile
The most famous source of training data for foundation models is known as "The Pile".
The Pile is an 825 GiB, diverse, open-source language-modelling dataset that consists of 22 smaller, high-quality datasets combined.
The primary constituents of "The Pile"
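For readers who want to inspect The Pile directly, the sketch below streams records with the Hugging Face `datasets` library. The dataset identifier "EleutherAI/pile" and the record fields ("text", "meta") are assumptions based on how the corpus has commonly been distributed; hosting has changed over time, so substitute whichever mirror or subset is actually available.

```python
# Sketch: streaming a Pile-style corpus without downloading all 825 GiB.
# The dataset identifier and field names are assumptions; hosting of The Pile
# has changed over time, so substitute an available mirror or subset.
from datasets import load_dataset

pile = load_dataset("EleutherAI/pile", split="train", streaming=True)

for i, record in enumerate(pile):
    # Each record typically holds the raw text plus metadata naming its
    # source subset (e.g. Wikipedia, Books3, PubMed).
    print(record.get("meta"), str(record.get("text", ""))[:200])
    if i >= 2:
        break
```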