# LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders

This <mark style="color:blue;">**April 2024**</mark> paper explores how to effectively transform decoder-only Large Language Models (LLMs) into powerful text encoders for generating rich, contextualised text embeddings.&#x20;

This transformation is done through a process called <mark style="color:blue;">**LLM2Vec**</mark>.

LLM2Vec provides an effective method for transforming generic, decoder-only large language models into powerful text encoders that <mark style="color:yellow;">generate rich, general-purpose embeddings which can then be adapted to specific domains</mark>.&#x20;

This capability is important for industries that require nuanced understanding and processing of text specific to particular fields such as legal, medical, financial, or technical documents.

{% embed url="https://arxiv.org/abs/2404.05961" %}
LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders
{% endembed %}

### <mark style="color:purple;">Core Approach of LLM2Vec</mark>

LLM2Vec modifies decoder-only LLMs using three main steps:

<mark style="color:blue;">**Enabling Bidirectional Attention**</mark><mark style="color:blue;">:</mark> This step overcomes the inherent limitation of decoder-only models, which traditionally use causal attention that only allows each token to attend to previous tokens. By enabling bidirectional attention, *<mark style="color:yellow;">each token can now attend to all other tokens in the sequence</mark>*, improving the model's ability to generate more contextually rich embeddings.

<details>

<summary><mark style="color:green;"><strong>How LLM2Vec Enables Bidirectional Attention in Decoder-Only Models</strong></mark></summary>

In decoder-only language models like GPT and LLaMA, the <mark style="color:purple;">**attention mechanism**</mark> is typically <mark style="color:blue;">**causal**</mark>, meaning that each token can only attend to itself and previous tokens in the sequence.&#x20;

This unidirectional (left-to-right) attention is essential for tasks like text generation, where predicting the next word should not be influenced by future tokens.

However, for tasks that require a deep understanding of the entire sequence—such as generating high-quality text embeddings for semantic similarity, clustering, or classification—it's beneficial for tokens to attend to both past and future tokens. This is known as **bidirectional attention**.

LLM2Vec modifies the attention mechanism of decoder-only models to enable bidirectional attention, allowing each token to consider the full context of the sequence.&#x20;

Here's how this is achieved:

***

#### <mark style="color:blue;">**Modifying the Attention Mechanism to Bidirectional**</mark>

**1. Understanding Self-Attention and Attention Masks**

* **Self-Attention Mechanism:**
  * In transformer models, the self-attention mechanism computes representations of each token by attending to other tokens in the sequence.
  * The attention scores determine how much influence each token has on others when computing these representations.
* **Attention Masks:**
  * An **attention mask** is used to control which tokens can attend to which others.
  * In matrix form, the attention mask has dimensions `[sequence_length, sequence_length]`.
  * For **causal attention**, the mask is a lower-triangular matrix that blocks each token from attending to future tokens (their attention scores are masked out before the softmax); the sketch below contrasts it with a fully open mask.
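
Here is a minimal PyTorch illustration of the two mask types (the tensor names are ours, not taken from the paper's code):

```python
import torch

seq_len = 5

# Causal mask used by decoder-only LLMs: 1 = "may attend", 0 = "blocked".
# Token i can only attend to tokens 0..i (a lower-triangular matrix).
causal_mask = torch.tril(torch.ones(seq_len, seq_len))

# Fully open mask after the LLM2Vec modification: every token may attend
# to every other token in the sequence.
bidirectional_mask = torch.ones(seq_len, seq_len)

print(causal_mask)
print(bidirectional_mask)
```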

**2. Changing to Bidirectional Attention**

* **Replacing the Causal Mask:**
  * LLM2Vec replaces the causal attention mask with a **fully open mask** (an all-ones matrix or a zero mask, depending on implementation conventions).
  * This means that during the self-attention computation, every token can attend to every other token in the sequence, regardless of their position.
* **Implementation Steps:**
  * **Modify the Attention Mask:**
    * In the model's code, adjust the function or module that generates the attention mask to produce an all-ones matrix instead of a lower-triangular matrix.
    * This effectively removes any restrictions on token-to-token attention.
* **Resulting Changes:**
  * Each token's representation is now computed using information from all tokens in the sequence.
  * The model gains the ability to capture richer contextual information, as tokens can be influenced by both preceding and succeeding tokens (the small attention computation sketched below makes this concrete).
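
To make the effect concrete, here is a small, self-contained toy computation (not the paper's implementation): positions blocked by the mask are set to negative infinity before the softmax, so under the causal mask row *i* mixes only tokens 0..*i*, while under the open mask it mixes every token.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
seq_len, d = 5, 8
q, k, v = torch.randn(seq_len, d), torch.randn(seq_len, d), torch.randn(seq_len, d)

scores = (q @ k.T) / d ** 0.5            # raw attention scores, [seq_len, seq_len]

def attend(scores, mask, values):
    # Blocked positions (mask == 0) get -inf, so they receive zero weight.
    masked = scores.masked_fill(mask == 0, float("-inf"))
    weights = F.softmax(masked, dim=-1)
    return weights @ values

causal_mask = torch.tril(torch.ones(seq_len, seq_len))
open_mask = torch.ones(seq_len, seq_len)

out_causal = attend(scores, causal_mask, v)        # row i uses only tokens 0..i
out_bidirectional = attend(scores, open_mask, v)   # row i uses all tokens
```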

**3. Adapting the Model to the New Attention Mechanism**

* **Why Adaptation Is Necessary:**
  * The model was originally trained with causal attention; simply changing the mask may not yield optimal results.
  * The model needs to learn how to utilize the additional context provided by bidirectional attention.
* **Fine-Tuning with Masked Next Token Prediction (MNTP):**
  * **Objective:**
    * Train the model to predict masked tokens using the full bidirectional context.
  * **Process:**
    * Randomly mask a fraction of tokens in input sequences.
    * Use the modified model (with bidirectional attention) to predict these masked tokens.
    * This helps the model adjust its internal representations to leverage both past and future tokens effectively (a compressed sketch of this objective follows below).
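
As a rough sketch of what the MNTP objective can look like in code, the function below masks a fraction of tokens and computes a cross-entropy loss only at the masked positions, using the logits from the preceding position (matching the paper's description of MNTP). It assumes a Hugging Face-style causal language model whose attention has already been made bidirectional; `mask_token_id` is a placeholder for whichever token is chosen to stand in for masked positions.

```python
import torch
import torch.nn.functional as F

def mntp_loss(model, input_ids, mask_token_id, mask_prob=0.2):
    """Masked next token prediction: mask some tokens and predict each one
    from the full bidirectional context. The loss for a masked token at
    position i is taken from the logits at position i - 1."""
    labels = input_ids.clone()
    mask = torch.rand(input_ids.shape, device=input_ids.device) < mask_prob
    masked_inputs = input_ids.masked_fill(mask, mask_token_id)

    logits = model(input_ids=masked_inputs).logits   # [batch, seq, vocab]

    # Shift so that logits at position i - 1 predict the (masked) token at i.
    shifted_logits = logits[:, :-1, :]
    shifted_labels = labels[:, 1:]
    shifted_mask = mask[:, 1:]

    # Only masked positions contribute to the loss.
    return F.cross_entropy(shifted_logits[shifted_mask], shifted_labels[shifted_mask])
```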

***

#### <mark style="color:blue;">**Why This Modification Enhances Contextual Understanding**</mark>

* **Rich Contextual Embeddings:**
  * Bidirectional attention allows the model to capture dependencies and relationships that span the entire sequence.
  * This results in embeddings that are more representative of the overall meaning of the text.
* **Alignment with Encoder Models:**
  * Encoder models like BERT use bidirectional attention and are known for their strong performance in understanding tasks.
  * By enabling bidirectional attention, decoder-only models can achieve similar capabilities in terms of contextual understanding.

***

#### **Key Considerations**

* **Compatibility with Model Architecture:**
  * The modification is compatible with transformer architectures used in decoder-only models.
  * It requires access to the model's attention mechanism code to adjust the mask.
* **No Additional Parameters Introduced:**
  * Changing the attention mask does not increase the number of model parameters.
  * The model's capacity remains the same; only the way it attends to input tokens changes.
* **Effectiveness Demonstrated:**
  * Experiments have shown that this modification, combined with appropriate fine-tuning, significantly improves the model's performance on tasks requiring rich contextual embeddings.

</details>

<mark style="color:purple;">**Masked Next Token Prediction (MNTP)**</mark><mark style="color:purple;">:</mark> After enabling bidirectional attention, the next step involves adapting the model to effectively use this new capability through MNTP. This training technique combines aspects of masked language modeling and next token prediction, allowing the model to predict a token based on both previous and following context.

<mark style="color:yellow;">**Unsupervised Contrastive Learning**</mark><mark style="color:yellow;">:</mark> The final step involves using an unsupervised learning approach called [<mark style="color:blue;">SimCSE</mark>](https://training.continuumlabs.ai/knowledge/vector-databases/simcse-simple-contrastive-learning-of-sentence-embeddings), where the model learns to pull closer the embeddings of the same text with different dropout masks, and push apart embeddings of different texts. This method helps in refining the sequence-level embeddings without needing labeled data.

<figure><img src="https://1839612753-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FpV8SlQaC976K9PPsjApL%2Fuploads%2FYZc9NEdIJGdGZs0EON4Z%2Fimage.png?alt=media&#x26;token=7d0829fa-6f07-4df4-82f1-e4aceb45bc2d" alt=""><figcaption><p>The 3 steps of LLM2Vec. First, we enable bidirectional attention to overcome the restrictions of causal attention (Bi). Second, we adapt the model to use bidirectional attention by masked next token prediction training (MNTP). Third, we apply unsupervised contrastive learning with mean pooling to learn better sequence representations (SimCSE)</p></figcaption></figure>

### <mark style="color:purple;">Unsupervised and supervised learning</mark>

<mark style="color:blue;">**Unsupervised Learning**</mark>

* **Definition**: In unsupervised learning, the model is trained on data without labels. The goal is often to identify patterns or structures within the data without any guidance on what outcomes should be predicted.
* **Application in LLM2Vec**: Unsupervised training, such as using SimCSE, involves training the model to learn effective text representations by maximising the similarity between differently augmented views of the same data while minimising similarity with other contrasting examples in the dataset. This is done without specific target labels indicating the correct answer, focusing purely on the data's inherent characteristics.

<mark style="color:blue;">**Supervised Learning**</mark>

* **Definition**: Supervised learning involves training a model on a labeled dataset, where each training example includes an input paired with a correct label (or outcome). The model learns to map inputs to outputs based on this data.
* **Application in LLM2Vec**: Supervised aspects of training involve fine-tuning the model on tasks where the outcomes are known, for example on labeled datasets where the embeddings are optimised for predefined tasks such as text classification or entity recognition. In the paper's own supervised setting, the labels take the form of annotated query-passage pairs used for contrastive training (a minimal sketch of such an objective follows below).
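
A minimal sketch of a supervised contrastive objective over labeled (query, positive) pairs with in-batch negatives might look like the following; the random tensors stand in for mean-pooled model outputs, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(query_emb, positive_emb, temperature=0.05):
    """InfoNCE over labeled (query, positive) pairs: each query should be
    closest to its own labeled positive, with the rest of the batch as negatives."""
    q = F.normalize(query_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    sim = q @ p.T / temperature                   # [batch, batch] similarities
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)

# Toy usage with random embeddings standing in for pooled model outputs
queries = torch.randn(4, 16)
positives = torch.randn(4, 16)
print(supervised_contrastive_loss(queries, positives))
```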

<mark style="color:blue;">**Key Differences in Context**</mark>

* **Nature of Data**: Unsupervised training does not require labels and works by extracting patterns directly from the data. Supervised training relies on labeled data to learn the mapping from inputs to outputs.
* **Objective**: The training objective in unsupervised learning typically involves learning good data representations or clustering, whereas in supervised learning, it is about predicting the label correctly.
* **Use Cases**: Unsupervised learning is often used for exploring data, dimensionality reduction, or initial learning of data representation. Supervised learning is used when specific tasks need to be performed, like classification or regression.
* **Dependency on Data Labelling**: Supervised learning's effectiveness heavily depends on the quantity and quality of the labelling in the training data, which can be costly and time-consuming. Unsupervised learning, however, can leverage unlabelled data, which is more abundantly available.

### <mark style="color:purple;">Evaluation and Contributions</mark>

* The paper evaluates LLM2Vec on several NLP tasks, including word-level tasks like chunking, named-entity recognition, and part-of-speech tagging, and more complex sequence-level tasks using the Massive Text Embeddings Benchmark (MTEB).
* LLM2Vec outperforms traditional encoder-only models in these tasks, setting new benchmarks for unsupervised performance on MTEB.

<details>

<summary><mark style="color:green;"><strong>MTEB (Massive Text Embeddings Benchmark) tasks</strong></mark></summary>

The MTEB (Massive Text Embeddings Benchmark) tasks span classification, clustering, pair classification, reranking, retrieval, semantic textual similarity, and related tasks designed to evaluate text embedding models across diverse scenarios. Here's a breakdown of these tasks:

#### <mark style="color:blue;">Classification Tasks</mark>

* **AmazonCounterfactualClassification**: Determine if an Amazon review contains counterfactual statements (hypothetical or contrary-to-fact statements).
* **AmazonPolarityClassification**: Classify Amazon reviews as having positive or negative sentiment.
* **AmazonReviewsClassification**: Categorize Amazon reviews based on a rating scale.
* **Banking77Classification**: Identify specific intents from online banking queries.
* **EmotionClassification**: Classify Twitter messages into one of six emotions (anger, fear, joy, love, sadness, surprise).
* **ImdbClassification**: Classify movie reviews from IMDB as positive or negative.
* **MassiveIntentClassification**: Identify user intents from user utterances.
* **MassiveScenarioClassification**: Determine scenarios from user utterances.
* **MTOPDomainClassification**: Classify the domain of an intent in task-oriented conversations.
* **MTOPIntentClassification**: Classify the intent of an utterance in task-oriented conversations.
* **ToxicConversationsClassification**: Classify comments as toxic or non-toxic.
* **TweetSentimentClassification**: Classify tweets' sentiment as positive, negative, or neutral.

#### <mark style="color:blue;">Clustering Tasks</mark>

* **Arxiv/Biorxiv/Medrxiv Clustering**: Identify primary and secondary categories of scientific papers based on titles and abstracts, applied to different repositories.
* **RedditClustering**: Cluster Reddit posts based on topics or themes from titles or full posts.
* **StackExchangeClustering**: Cluster StackExchange posts by topic or theme from titles or specific paragraphs.
* **TwentyNewsgroupsClustering**: Cluster news articles into topic categories.

#### <mark style="color:blue;">Retrieval Tasks</mark>

* **SprintDuplicateQuestions**: Retrieve duplicate questions from a Sprint forum.
* **TwitterSemEval2015/TwitterURLCorpus**: Retrieve tweets that are semantically similar.
* **AskUbuntuDupQuestions**: Find duplicate questions on AskUbuntu forums.
* **MindSmallReranking/SciDocsRR/StackOverflowDupQuestions**: Retrieve and rank documents or questions based on relevance to a given query.
* **ArguAna/ClimateFEVER/CQADupstackRetrieval/DBPedia/FEVER/FiQA2018/HotpotQA/MSMARCO/NFCorpus/NQ/QuoraRetrieval/SCIDOCS/SciFact/Touche2020/TRECCOVID**: Various retrieval tasks where the goal is to find documents or answers that are relevant or directly answer a given query or claim.

#### <mark style="color:blue;">Reranking Tasks</mark>

* **STS**\*: Semantic Textual Similarity tasks where the goal is to assess the degree of semantic similarity between pairs of text segments.
* **BUCC/Tatoeba**: Tasks involved in finding parallel sentences across bilingual corpora.
* **SummEval**: Evaluate the similarity of different text summaries.

These tasks are crucial for testing the capabilities of text embeddings to understand and process natural language in a way that mimics human understanding and categorisation.&#x20;

They cover a broad spectrum from understanding user intents and emotional sentiments to retrieving and organizing knowledge across various domains. This comprehensive evaluation helps in improving and fine-tuning the models for practical applications in real-world scenarios.

</details>

The performance scores of the LLM2Vec-transformed models on the Massive Text Embeddings Benchmark (MTEB) provide insights into the capabilities and areas of strength of each model variant (S-LLaMA-1.3B, LLaMA-2-7B, and Mistral-7B).&#x20;

<figure><img src="https://1839612753-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FpV8SlQaC976K9PPsjApL%2Fuploads%2FYnnclF9VWOQXEdpLv9VU%2Fchrome_0agodQJGSX.png?alt=media&#x26;token=7a79fd30-50a4-4dbe-8431-8025c5d68350" alt=""><figcaption><p>Top-10 models on the MTEB leader board as of April 2024.LLM2Vec achieves the 6th rank overall, and the top rank among models trained with only publicly available data</p></figcaption></figure>

Here are the key points based on the reported scores:

<mark style="color:blue;">**Overall Superiority of Mistral-7B**</mark>

Across nearly all tasks, Mistral-7B consistently scores the highest, indicating its superior ability to generate and use text embeddings.&#x20;

<mark style="color:blue;">**Strength in Classification and Retrieval Tasks**</mark>

All three models demonstrate strong performance in various classification and retrieval tasks. Notably, Mistral-7B excels in classifying intent in contextual queries, which is critical for applications in digital customer service and chatbots.

<mark style="color:blue;">**High Performance in Semantic Textual Similarity (STS) Tasks**</mark>

The models perform well in semantic similarity tasks, which are crucial for applications involving text comparison such as document deduplication, information retrieval, and summarisation.&#x20;

<mark style="color:blue;">**Variability Across Different Tasks**</mark>

There is noticeable variability in performance across different task types and datasets. This highlights the models' adaptability and potential need for domain-specific tuning to achieve optimal performance across various NLP tasks.

The <mark style="color:yellow;">**absolute scores reported for the MTEB tasks**</mark> represent each model's performance on the corresponding task.&#x20;

### <mark style="color:purple;">How to interpret the scores</mark>

These scores are likely percentages representing the accuracy or a similar measure (such as F1-score, precision, recall, or a custom metric relevant to the task) that quantifies how well the model is performing against the benchmark's standards.

<figure><img src="https://1839612753-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FpV8SlQaC976K9PPsjApL%2Fuploads%2FDIMSDaYbW87ZAC2lmMco%2Fchrome_NrnrrnLxGV.png?alt=media&#x26;token=b8117936-e90d-4790-9350-69fd5213d80a" alt=""><figcaption><p>Unsupervised results of LLM2Vec transformed models on MTEB</p></figcaption></figure>

1. <mark style="color:blue;">**Accuracy**</mark><mark style="color:blue;">:</mark> This is a straightforward measurement where the score indicates the percentage of instances that the model predicted correctly out of the total number of instances evaluated.
2. <mark style="color:blue;">**F1-Score**</mark><mark style="color:blue;">:</mark> This is a harmonic mean of precision and recall and is often used in cases where the balance between precision and recall is critical, especially in datasets with uneven class distributions.
3. <mark style="color:blue;">**Precision and Recall**</mark><mark style="color:blue;">:</mark> Precision measures the accuracy of the positive predictions made by the model, while recall measures the ability of the model to find all the relevant cases (true positives). A small worked example of these metrics is given below.
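
The following self-contained calculation illustrates how accuracy, precision, recall, and F1 relate on a toy set of binary predictions; the labels and predictions are invented purely for illustration.

```python
def classification_metrics(y_true, y_pred, positive=1):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return accuracy, precision, recall, f1

# Toy example: 1 = positive sentiment, 0 = negative sentiment
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 1, 1]
print(classification_metrics(y_true, y_pred))
# ≈ (0.667, 0.75, 0.75, 0.75); benchmark tables typically report these as percentages
```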

Each task might use one of these or other specialised metrics depending on its requirements and the nature of the data.&#x20;

For instance, classification tasks might use accuracy or F1-score, while retrieval tasks are typically scored with ranking metrics such as nDCG at a fixed cut-off (for example, nDCG@10) or mean average precision.

The scores thus provide a quantitative measure of how well the model handles the task's challenges, like understanding context, distinguishing between nuanced categories, or retrieving the most relevant documents.&#x20;

A higher score indicates better performance, showing that the model's embeddings effectively capture and utilize the linguistic and semantic nuances necessary for the task.

### <mark style="color:purple;">Significance and Novelty</mark>

* **Data and Parameter Efficiency**: LLM2Vec is highlighted for its efficiency in terms of data use and minimal adaptation required, making it suitable for scenarios with limited computational resources.
* **Universal Text Encoding**: The approach demonstrates that decoder-only LLMs can be converted into universal text encoders capable of handling a wide variety of NLP tasks effectively, challenging the traditional use of encoder-only or encoder-decoder models for generating text embeddings.

### <mark style="color:purple;">**Why It's Better Than Other Embedding Models**</mark>

**Contextualised Representations**

By converting decoder-only models to use bidirectional attention, LLM2Vec allows the model to consider full contextual information from both prior and subsequent tokens, leading to richer text embeddings.
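
As a rough illustration of how one might extract sequence embeddings from such a transformed model, the sketch below mean-pools the final hidden states over non-padding tokens using the Hugging Face `transformers` API. The checkpoint path is a placeholder, and the official LLM2Vec release wraps these steps in its own library, so treat this as a conceptual sketch rather than the project's actual usage code.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Placeholder path: substitute an actual LLM2Vec-transformed checkpoint.
model_name = "path/to/llm2vec-transformed-model"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

if tokenizer.pad_token is None:          # many decoder tokenizers have no pad token
    tokenizer.pad_token = tokenizer.eos_token

texts = [
    "Contracts must be signed by both parties.",
    "Both parties are required to sign the agreement.",
]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state           # [batch, seq, dim]
    mask = batch["attention_mask"].unsqueeze(-1).float()
    embeddings = (hidden * mask).sum(1) / mask.sum(1)   # mean pooling

similarity = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(similarity.item())
```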

**Efficiency**

LLM2Vec is described as highly efficient in terms of data and computational resources. It does not require extensive retraining or fine-tuning on large labeled datasets, which is often a significant limitation in deploying advanced NLP models.

**Versatility and Adaptability**

The transformed models can be adapted to a wide range of NLP tasks and domains with minimal additional investment, making it an economical choice for businesses.

**State-of-the-Art Performance**

LLM2Vec has demonstrated superior performance on benchmarks like the Massive Text Embeddings Benchmark (MTEB), outperforming existing encoder-only models, which emphasizes its capability to deliver high-quality embeddings that can significantly enhance various NLP applications.

### <mark style="color:purple;">Implications</mark>

This research suggests a paradigm shift in how we use LLMs for text embeddings, proposing a method that leverages the inherent strengths of decoder-only models (like training efficiency and robustness) and transforms them into versatile tools for embedding generation.&#x20;

The simplicity and effectiveness of LLM2Vec potentially lower the barriers for adopting advanced NLP technologies in resource-constrained settings and broaden the scope of applications for LLMs in industry and academia.

