# BM25 - Search Engine Ranking Function

<mark style="color:blue;">**BM25**</mark>, which stands for <mark style="color:blue;">**Best Matching 25**</mark>, is a ranking function used by search engines to estimate the relevance of documents to a given search query.&#x20;

<mark style="color:blue;">**BM25**</mark> is lauded for its simplicity, effectiveness, scalability, and its approach to addressing document length bias.

Nonetheless, the algorithm's focus on term frequency and document length means it *<mark style="color:yellow;">**can overlook semantic meaning, context, and term dependencies**</mark>*, which newer Transformer-based models such as BERT can capture.

<mark style="color:blue;">**BM25**</mark> is based on the probabilistic information retrieval model and is considered <mark style="color:green;">**an extension**</mark> of the <mark style="color:yellow;">TF-IDF (Term Frequency-Inverse Document Frequency)</mark> approach. &#x20;

<mark style="color:blue;">**BM25**</mark> is designed to address some of the limitations of TF-IDF by providing a more sophisticated scoring system.

Here's a detailed breakdown of how it works:

### <mark style="color:green;">**Term Frequency (TF)**</mark>

Similar to TF-IDF, <mark style="color:blue;">**BM25**</mark> considers the frequency of each query term in a document.&#x20;

However, instead of using the raw frequency, <mark style="color:blue;">**BM25**</mark>**&#x20;applies a saturation function** to prevent a term's frequency from disproportionately influencing the document's relevance. &#x20;

This saturation function ensures that after a certain number of occurrences, additional appearances of the term have a diminishing effect on the score.&#x20;

The term frequency component in <mark style="color:blue;">**BM25**</mark> is calculated using the formula:

$$
TF = \frac{f \cdot (k_1 + 1)}{f + k_1 \cdot \left(1 - b + b \cdot \frac{d}{\text{avgdl}}\right)}
$$

where:

* $$f$$ is the <mark style="color:blue;">**term frequency**</mark> in the document.
* $$d$$ is the <mark style="color:blue;">**document length**</mark> (the number of words).
* $$\text{avgdl}$$ is the <mark style="color:blue;">**average document length**</mark> in the corpus.
* $$k_1$$ and $$b$$ are <mark style="color:blue;">**free parameters**</mark>, usually chosen empirically. $$k_1$$ controls the saturation point, and $$b$$ controls the degree of length normalisation.
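As a minimal sketch, the saturation behaviour can be demonstrated in a few lines of Python (the parameter values below are just common defaults, and the document length is illustrative):

```python
def tf_component(f, doc_len, avgdl, k1=1.5, b=0.75):
    """BM25 term-frequency component: saturating in f, normalised by length."""
    return (f * (k1 + 1)) / (f + k1 * (1 - b + b * doc_len / avgdl))

# For an average-length document, each extra occurrence adds less than the last:
scores = [tf_component(f, doc_len=100, avgdl=100) for f in range(1, 6)]
gains = [later - earlier for earlier, later in zip(scores, scores[1:])]
```

Each successive gain is smaller than the previous one, and the component never exceeds $$k_1 + 1$$, which is exactly the saturation the formula is designed to produce.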

### <mark style="color:green;">The role of the k1 and b parameters in BM25</mark>

The $$k1$$ and $$b$$ parameters in <mark style="color:blue;">**BM25**</mark> are free parameters that are chosen empirically, which means they are *<mark style="color:yellow;">**determined based on experimentation and observation rather than theory**</mark>*.&#x20;

These parameters control the term frequency saturation and the document length normalization, respectively.

### <mark style="color:blue;">**k1 parameter**</mark>

* k1 controls the term frequency saturation, which is the point at which increasing the frequency of a term in a document has diminishing returns on the document's relevance score.
* Higher values of k1 result in delayed saturation, meaning that higher term frequencies will have a more significant impact on the relevance score before reaching the saturation point.
* Lower values of k1 result in earlier saturation, meaning that the impact of term frequency on the relevance score will diminish more quickly.

### <mark style="color:blue;">**b parameter**</mark>

* b controls the degree of document length normalisation applied to the relevance score.
* Document length normalisation aims to balance the scores of shorter and longer documents, as longer documents naturally tend to have higher term frequencies.
* Higher values of b (closer to 1) increase the impact of document length normalisation, giving more weight to shorter documents.
* Lower values of b (closer to 0) decrease the impact of document length normalisation, giving more weight to longer documents.

The optimal values for k1 and b are typically determined through experimentation and tuning on a specific dataset or application.&#x20;

Common values for k1 range from 1.2 to 2.0, and common values for b range from 0.5 to 0.8, but these can vary depending on the characteristics of the data and the specific retrieval task.
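To see the effect of $$b$$ numerically, compare the TF component of a document five times the average length under full versus no length normalisation (a small sketch; the frequencies and lengths are arbitrary):

```python
def tf_component(f, doc_len, avgdl, k1, b):
    # BM25 term-frequency component with length normalisation
    return (f * (k1 + 1)) / (f + k1 * (1 - b + b * doc_len / avgdl))

# Same raw term frequency, document five times the average length:
penalised = tf_component(3, doc_len=500, avgdl=100, k1=1.5, b=1.0)  # full normalisation
raw       = tf_component(3, doc_len=500, avgdl=100, k1=1.5, b=0.0)  # no normalisation
```

With $$b = 1$$ the long document's score is pushed down relative to $$b = 0$$, illustrating why higher $$b$$ favours shorter documents.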

### <mark style="color:green;">**Inverse Document Frequency (IDF)**</mark>

Inverse Document Frequency (IDF) is a measure that *<mark style="color:yellow;">assigns more weight to rarer terms across the document corpus</mark>*.

BM25 incorporates a *<mark style="color:yellow;">variant of IDF</mark>* to measure how much information the word provides, i.e., whether it's common or rare across all documents.&#x20;

Unlike the standard IDF in TF-IDF,  BM25's IDF is calculated in a way that prevents negative values (which can happen when a term appears in more than half of the documents).&#x20;

The IDF component in BM25 is typically calculated as:

$$
IDF = \log \left( \frac{N - n + 0.5}{n + 0.5} + 1 \right)
$$

where:

* $$N$$ is the <mark style="color:blue;">**total number of documents**</mark> in the collection.
* $$n$$ is the <mark style="color:blue;">**number of documents containing the term**</mark>.
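A short Python sketch of this IDF variant shows why the $$+1$$ inside the logarithm matters:

```python
import math

def bm25_idf(n, N):
    """BM25's IDF variant; the +1 inside the log keeps it non-negative."""
    return math.log((N - n + 0.5) / (n + 0.5) + 1)

# Rare terms get large weights, and even a term appearing in 9 of 10
# documents keeps a small positive weight instead of going negative:
rare = bm25_idf(1, 10)
common = bm25_idf(9, 10)
```

Without the $$+1$$, the log's argument drops below 1 whenever a term appears in more than half the documents, producing the negative weights mentioned above.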

### <mark style="color:green;">**Scoring**</mark>

The final BM25 score for a document given a query is the <mark style="color:yellow;">**sum, over the query terms present in the document, of the product of the IDF and TF components**</mark>:

$$
Score = \sum_{i=1}^{n} IDF_i \times TF_i
$$

where $$n$$ is the number of <mark style="color:blue;">**unique terms in the query**</mark> that also appear in the document, $$IDF_i$$ is the IDF for term $$i$$, and $$TF_i$$ is the term frequency component for term $$i$$.

BM25's effectiveness comes from its balance between term frequency and document frequency, as well as its normalisation for document length.&#x20;

This makes it robust in handling a variety of search queries and document types, making it a popular choice in information retrieval systems.
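Putting the two components together, a minimal end-to-end scorer might look like this (a sketch over pre-tokenised documents, not a production implementation):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.5, b=0.75):
    """Score one tokenised document against a query, given the whole corpus."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    counts = Counter(doc)
    score = 0.0
    for term in set(query_terms):
        f = counts[term]
        if f == 0:
            continue  # term absent from this document: contributes nothing
        n = sum(1 for d in corpus if term in d)  # documents containing the term
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
        tf = (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * tf
    return score

corpus = [["quick", "brown", "fox", "jumps"], ["lazy", "dog", "sleeps"]]
hit = bm25_score(["quick", "fox"], corpus[0], corpus)
miss = bm25_score(["quick", "fox"], corpus[1], corpus)
```

A real system would precompute document frequencies and average length in an inverted index rather than rescanning the corpus per query.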

### <mark style="color:purple;">Example of BM25 scoring</mark>

Let's consider a simple example to demonstrate how BM25 scores a document given a specific query.

Suppose we have a document collection containing <mark style="color:yellow;">**three documents**</mark>:

* <mark style="color:blue;">**D1:**</mark> "The quick brown fox jumps over the lazy dog"
* <mark style="color:blue;">**D2:**</mark> "A quick brown fox quickly jumps over the lazy dog"
* <mark style="color:blue;">**D3:**</mark> "The lazy dog sleeps all day long"

And we have a <mark style="color:blue;">**query**</mark>: "quick fox"

To calculate the BM25 scores for each document, we need to compute the <mark style="color:blue;">**term frequency (TF)**</mark> and <mark style="color:blue;">**inverse document frequency (IDF)**</mark> components for each query term.

Assuming $$k_1 = 1.5$$ and $$b = 0.75$$, with document lengths of 9, 10, and 7 terms, the <mark style="color:blue;">**average document length**</mark> is $$26/3 \approx 8.67$$ terms. We also treat "quickly" as stemming to "quick", so it counts towards TF("quick") in D2. Both "quick" and "fox" then appear in 2 of the 3 documents, so for each term $$IDF = \ln\left(\frac{3 - 2 + 0.5}{2 + 0.5} + 1\right) = \ln(1.6) \approx 0.47$$.

<mark style="color:blue;">**For D1**</mark> (length 9)

* $$TF("quick") = 1, TF("fox") = 1$$
* TF component per term: $$\frac{1 \cdot 2.5}{1 + 1.5 \cdot (1 - 0.75 + 0.75 \cdot 9 / 8.67)} \approx 0.98$$
* $$BM25\ score \approx 0.47 \cdot 0.98 + 0.47 \cdot 0.98 \approx 0.92$$

<mark style="color:blue;">**For D2**</mark> (length 10)

* $$TF("quick") = 2, TF("fox") = 1$$
* TF components: $$\frac{2 \cdot 2.5}{2 + 1.5 \cdot (1 - 0.75 + 0.75 \cdot 10 / 8.67)} \approx 1.36$$ and $$\frac{1 \cdot 2.5}{1 + 1.5 \cdot (1 - 0.75 + 0.75 \cdot 10 / 8.67)} \approx 0.94$$
* $$BM25\ score \approx 0.47 \cdot (1.36 + 0.94) \approx 1.08$$

<mark style="color:blue;">**For D3**</mark>

* $$TF("quick") = 0, TF("fox") = 0$$
* $$BM25 score = 0$$

Based on the BM25 scores, the ranking of the documents for the query "quick fox" would be:&#x20;

D2, D1, D3.

This example illustrates how BM25 takes into account both the term frequency within documents and the inverse document frequency across the collection to estimate the relevance of documents to a given query.
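The worked example can be checked with a short, self-contained script. As a sketch, tokenisation here is a simple lowercase split, with "quickly" mapped to "quick" to mimic the stemming assumed above:

```python
import math
from collections import Counter

docs = [
    "The quick brown fox jumps over the lazy dog",        # D1
    "A quick brown fox quickly jumps over the lazy dog",  # D2
    "The lazy dog sleeps all day long",                   # D3
]
query = "quick fox"
k1, b = 1.5, 0.75

def tokenize(text):
    # crude normalisation: lowercase split, mapping "quickly" -> "quick"
    # to mimic the stemming assumed in the worked example
    return ["quick" if t == "quickly" else t for t in text.lower().split()]

corpus = [tokenize(d) for d in docs]
N = len(corpus)
avgdl = sum(len(d) for d in corpus) / N

def bm25(doc):
    counts = Counter(doc)
    score = 0.0
    for term in set(tokenize(query)):
        f = counts[term]
        if f == 0:
            continue
        n = sum(1 for d in corpus if term in d)  # documents containing term
        idf = math.log((N - n + 0.5) / (n + 0.5) + 1)
        tf = (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
        score += idf * tf
    return score

scores = [bm25(d) for d in corpus]  # ranking: D2 > D1 > D3
```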

### <mark style="color:purple;">Where can you find this algorithm?</mark> <a href="#where-you-can-find-this-algorithm" id="where-you-can-find-this-algorithm"></a>

The BM25 algorithm can be found and applied in various domains where information retrieval and search functionality are required.

<mark style="color:green;">**Web Search Engines**</mark>

Web search engines have long used BM25-style ranking functions as a core relevance signal, and open-source search platforms such as Apache Lucene, Solr, and Elasticsearch use BM25 as their default scoring function.

<mark style="color:green;">**Enterprise Search Systems**</mark>

In large organisations, enterprise search systems use BM25 to provide employees with relevant documents, files, and information from internal databases.

<mark style="color:green;">**E-commerce Websites**</mark>

Online shopping platforms often use BM25 or similar algorithms to rank products by relevance to user queries and to provide personalised product recommendations.

### <mark style="color:purple;">Comparison with other methods</mark>

While BM25 is a highly effective and widely used ranking algorithm in information retrieval, it has some limitations compared to more recent methods, particularly those based on deep learning, such as <mark style="color:blue;">**BERT (Bidirectional Encoder Representations from Transformers)**</mark>.

BERT and other transformer-based models have the *<mark style="color:yellow;">ability to capture semantic meaning and context</mark>* more effectively than BM25.&#x20;

### <mark style="color:green;">**Some key differences between BM25 and BERT-like models**</mark>

<mark style="color:purple;">**Semantic understanding**</mark>

BERT can better understand the semantic meaning of words and phrases in the context of the surrounding text. It can capture synonyms, polysemy (words with multiple meanings), and other linguistic nuances that BM25 may struggle with.

<mark style="color:purple;">**Contextual relevance**</mark>

BERT takes into account the context in which words appear, both within the query and the documents. This allows it to disambiguate the meaning of words based on their context and provide more accurate relevance judgments.

<mark style="color:purple;">**Query-document interaction**</mark>

BERT can model the interaction between the query and the document in a more sophisticated manner, considering the relationship between query terms and document terms at a deeper level than BM25's term frequency and inverse document frequency approach.

<mark style="color:purple;">**Language understanding**</mark>

BERT's pre-training on a large corpus enables it to have a better general understanding of language, which can be beneficial for tasks that require language comprehension, such as question answering or sentiment analysis.

However, it's important to note that *<mark style="color:yellow;">**BERT and similar models have higher computational requirements compared to BM25**</mark>*, both in terms of training and inference time.&#x20;

BM25 remains a popular choice in many information retrieval systems due to its simplicity, efficiency, and effectiveness, especially when computational resources are limited, or the dataset is relatively small.

### <mark style="color:purple;">Conclusion</mark> <a href="#conclusion" id="conclusion"></a>

In conclusion, BM25 is a powerful ranking algorithm and a valuable tool for enhancing search relevance and delivering accurate, useful results to users.

While BM25 remains a widely used and effective ranking function in information retrieval, the field has seen significant advancements, particularly with the introduction of machine learning and deep learning techniques.&#x20;

These newer methods can sometimes outperform BM25, especially in *<mark style="color:yellow;">**contexts where understanding the semantic meaning and context of the text is crucial**</mark>*.

For example, models like BERT (Bidirectional Encoder Representations from Transformers), when applied to information retrieval, can significantly improve the relevance of search results by understanding the intent behind a query and the content of the documents.
