A Survey on Retrieval-Augmented Text Generation
Huayang Li, Yixuan Su, Deng Cai, Yan Wang, Lemao Liu
This February 2022 paper surveys retrieval-augmented text generation, a technique that combines deep learning with traditional retrieval methods to improve the performance and utility of large language model applications.
The approach has demonstrated superior performance by leveraging existing human-written texts or other external knowledge sources to guide the generation process, enhancing both the quality and relevance of the generated content.
Formulation
Retrieval-augmented text generation is described as an approach in which the model, in addition to the usual input sequence (x), also leverages a set of relevant instances (z) retrieved from the training set or from external data sources.
This additional information (z) is intended to enrich the model's output (y), improving the relevance and accuracy of the generation process.
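In symbols, this setup can be paraphrased as follows; this is a summary sketch rather than the paper's exact equation, with D standing for the retrieval corpus:

```latex
% Standard generation conditions only on the input x:
%   y^{*} = \arg\max_{y} \, p(y \mid x)
% Retrieval-augmented generation additionally conditions on a set z of
% instances retrieved from a corpus D by similarity to x:
y^{*} = \arg\max_{y} \; p\bigl(y \mid x, z\bigr),
\qquad z = \{\langle x^{r}, y^{r} \rangle\} \subseteq \mathcal{D}
```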
Retrieval Sources
Training Corpus: The model retrieves relevant examples from its training data, using these instances as references to guide the generation process and reduce uncertainty.
External Data: Retrieving from external datasets provides additional information not contained in the training set, aiding in scenarios like domain adaptation or keeping the model's knowledge up to date.
Unsupervised Data: Particularly in machine translation, the approach involves retrieving target language sentences directly from unsupervised (monolingual) corpora, aligning source and target data in a dense vector space to enhance translation accuracy without relying on parallel text pairs.
Retrieval Metrics
Sparse-vector Retrieval: Techniques like TF-IDF and BM25, which rely on keyword matching, are used to fetch relevant instances based on lexical similarities.
Dense-vector Retrieval: This method retrieves semantically relevant instances, not just lexically similar ones, by representing text as dense vectors and computing retrieval scores through vector inner products (see the sketch after this list).
Task-specific Retrieval: Rather than just relying on generic textual similarity, some methods optimise retrieval metrics for specific tasks, ensuring the retrieved content genuinely enhances the generation outcome.
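As a concrete illustration, here is a minimal dense-retrieval sketch in Python. The embed function is a hypothetical stand-in for a trained bi-encoder (the random projection exists only so the snippet runs); in a real system the scores below would reflect semantic similarity:

```python
import numpy as np

def embed(texts):
    # Hypothetical encoder: a real system would use a trained bi-encoder
    # (e.g. a BERT-style model). Random vectors are a placeholder only.
    rng = np.random.default_rng(0)
    return rng.standard_normal((len(texts), 128))

def dense_retrieve(query, corpus, k=3):
    # Score every corpus entry by the inner product of its embedding with
    # the query embedding, then return the indices of the top-k entries.
    q = embed([query])[0]            # shape (d,)
    docs = embed(corpus)             # shape (n, d)
    scores = docs @ q                # inner-product retrieval scores
    return np.argsort(-scores)[:k]   # indices of the k highest scores

corpus = ["how to book a flight", "weather in Paris", "flight booking tips"]
print(dense_retrieve("book a plane ticket", corpus, k=2))
```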
Integration Methods
Data Augmentation: The retrieved content is combined with the original input to create augmented training instances, helping the model learn to use the retrieved information effectively (see the sketch after this list).
Attention Mechanisms: Leveraging attention mechanisms allows the model to focus on and integrate useful information from the retrieved content, enhancing the generation process.
Skeleton Extraction: This approach involves extracting and integrating only the most relevant portions of the retrieved content, allowing the model to focus on useful information while discarding the irrelevant.
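A minimal sketch of the data-augmentation idea, assuming a plain string-concatenation scheme; the [SEP] marker and the source => target format are illustrative assumptions, not notation from the survey:

```python
def augment_input(source, retrieved_pairs, sep="[SEP]"):
    # Concatenate the original input with retrieved (source, target) pairs
    # so the generator can attend to both during training and inference.
    parts = [source]
    for r_src, r_tgt in retrieved_pairs:
        parts.append(f"{r_src} => {r_tgt}")
    return f" {sep} ".join(parts)

# Usage: the augmented string replaces the raw input fed to the generator.
augmented = augment_input(
    "How do I reset my password?",
    [("Forgot my password", "Use the reset link on the login page")],
)
print(augmented)
```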
Dialogue Systems Classification
Task-Oriented Systems: These are designed to accomplish specific user tasks, like booking tickets.
Chit-Chat Systems: These aim to generate engaging and relevant responses without a fixed objective; they face the one-to-many problem, where multiple responses can be suitable for a single dialogue history.
Dialogue Response Generation Models
Retrieval-Based Models: These models fetch an existing response from a dataset, ensuring informativeness and grammatical correctness. However, they struggle when the dialogue history differs from anything seen in the dataset.
Generation-Based Models: Capable of generating new responses, these models offer better generalisation but often produce generic and less informative replies.
Integration Approaches
Shallow Integration: Early attempts combined the outputs of retrieval-based and generation-based models, aiming to leverage the strengths of both; re-ranking the candidates from both models was one such technique (see the sketch after this section).
Deep Integration: More sophisticated methods integrate retrieval results directly into the generation process. For example, some models use an additional encoder for the retrieval result or construct an edit vector to account for context differences between the dialogue history and the retrieved response. This approach aims to refine the generation process by incorporating relevant retrieved content.
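A minimal sketch of shallow integration via re-ranking; score_fn is a hypothetical relevance scorer (a learned response matcher in practice), and len is used below only so the snippet runs:

```python
def rerank(candidates, score_fn):
    # Shallow integration: pool candidate responses from a retrieval model
    # and a generation model, then keep the one the scorer prefers.
    return max(candidates, key=score_fn)

# Usage with stand-in candidates and a placeholder scorer.
retrieved = ["Sure, your table for two at 7pm is booked."]
generated = ["Okay, I have booked it."]
print(rerank(retrieved + generated, score_fn=len))  # `len` is a placeholder
```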
Knowledge-Enhanced Generation
Retrieval-augmented dialogue systems can also leverage external knowledge sources, not just dialogue corpora, to enrich responses. This inclusion of varied knowledge forms aims to produce more grounded and contextually appropriate responses.
Limitations and Future Directions
Current dialogue response generation frameworks typically use a single retrieved response, potentially limiting the richness of the output. Future research could explore integrating multiple retrieved responses.
Customised retrieval metrics could offer more tailored and relevant responses, especially for generating responses with specific characteristics like persona or emotion.
Expanding the retrieval pool beyond dialogue corpora to include diverse domains or modalities could provide a broader context and enhance the response generation process.