BM25 - Search Engine Ranking Function
BM25, which stands for Best Matching 25, is a ranking function used by search engines to estimate the relevance of documents to a given search query.
BM25 is lauded for its simplicity, effectiveness, scalability, and its approach to addressing document length bias.
Nonetheless, the algorithm's focus on term frequency and document length can overlook semantic meaning, context, and term dependencies, which newer Transformer-based models such as BERT can capture.
BM25 is based on the probabilistic information retrieval model and is considered an extension of the TF-IDF (Term Frequency-Inverse Document Frequency) approach.
BM25 is designed to address some of the limitations of TF-IDF by providing a more sophisticated scoring system.
Here's a detailed breakdown of how it works:
Term Frequency (TF)
Similar to TF-IDF, BM25 considers the frequency of each query term in a document.
However, instead of using the raw frequency, BM25 applies a saturation function to prevent a term's frequency from disproportionately influencing the document's relevance.
This saturation function ensures that after a certain number of occurrences, additional appearances of the term have a diminishing effect on the score.
The term frequency component in BM25 is calculated using the formula:

TF(t, D) = f(t, D) × (k1 + 1) / ( f(t, D) + k1 × (1 − b + b × |D| / avgdl) )

where:
f(t, D) is the term frequency in the document.
|D| is the document length (the number of words).
avgdl is the average document length in the corpus.
k1 and b are free parameters, usually chosen empirically. k1 controls the saturation point, and b controls the degree of length normalisation.
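As an illustrative sketch (using the common defaults k1 = 1.2 and b = 0.75), the saturation behaviour can be seen directly: doubling the raw frequency does not double the contribution, and the value can never exceed k1 + 1.

```python
def bm25_tf(freq: float, doc_len: int, avg_doc_len: float,
            k1: float = 1.2, b: float = 0.75) -> float:
    """BM25 term-frequency component with saturation and length normalisation."""
    norm = 1 - b + b * (doc_len / avg_doc_len)   # document length normalisation
    return freq * (k1 + 1) / (freq + k1 * norm)

# Saturation: for an average-length document, extra occurrences help
# less and less (the value approaches k1 + 1 = 2.2 but never reaches it).
print(bm25_tf(1, 10, 10))   # 1.0
print(bm25_tf(2, 10, 10))   # 1.375
print(bm25_tf(10, 10, 10))  # ~1.96
```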
The role of the k1 and b parameters in BM25
The k1 and b parameters in BM25 are free parameters chosen empirically, meaning they are determined through experimentation and observation rather than derived from theory.
These parameters control the term frequency saturation and the document length normalization, respectively.
k1 parameter
k1 controls the term frequency saturation, which is the point at which increasing the frequency of a term in a document has diminishing returns on the document's relevance score.
Higher values of k1 result in delayed saturation, meaning that higher term frequencies will have a more significant impact on the relevance score before reaching the saturation point.
Lower values of k1 result in earlier saturation, meaning that the impact of term frequency on the relevance score will diminish more quickly.
b parameter
b controls the degree of document length normalisation applied to the relevance score.
Document length normalisation aims to balance the scores of shorter and longer documents, as longer documents naturally tend to have higher term frequencies.
Higher values of b (closer to 1) increase the impact of document length normalisation, giving more weight to shorter documents.
Lower values of b (closer to 0) decrease the impact of document length normalisation, giving more weight to longer documents.
The optimal values for k1 and b are typically determined through experimentation and tuning on a specific dataset or application.
Common values for k1 range from 1.2 to 2.0, and common values for b range from 0.5 to 0.8, but these can vary depending on the characteristics of the data and the specific retrieval task.
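A quick parameter sweep makes both effects concrete (a sketch; the function name and parameter values are illustrative):

```python
def tf_component(freq, doc_len, avg_doc_len, k1, b):
    """BM25 term-frequency component, with k1 and b passed explicitly."""
    norm = 1 - b + b * doc_len / avg_doc_len
    return freq * (k1 + 1) / (freq + k1 * norm)

# Higher k1 delays saturation: the same frequency (5) contributes more.
for k1 in (0.5, 1.2, 2.0):
    print("k1 =", k1, "->", round(tf_component(5, 10, 10, k1, 0.75), 3))

# Higher b penalises long documents more (here the document is twice
# the average length), so the component shrinks as b grows.
for b in (0.0, 0.5, 1.0):
    print("b =", b, "->", round(tf_component(5, 20, 10, 1.2, b), 3))
```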
Inverse Document Frequency (IDF)
Inverse Document Frequency (IDF) is a measure that assigns more weight to rarer terms across the document corpus.
BM25 incorporates a variant of IDF to measure how much information the word provides, i.e., whether it's common or rare across all documents.
Unlike the standard IDF in TF-IDF, BM25's IDF is calculated in a way that prevents negative values (which can happen when a term appears in more than half of the documents).
The IDF component in BM25 is typically calculated as:

IDF(t) = ln( 1 + (N − n(t) + 0.5) / (n(t) + 0.5) )

where:
N is the total number of documents in the collection.
n(t) is the number of documents containing the term t.
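This variant can be sketched in a few lines; note how the +1 inside the logarithm keeps the weight positive even for a term that appears in most documents:

```python
import math

def bm25_idf(n_docs: int, doc_freq: int) -> float:
    """BM25 IDF variant; the +1 inside the log keeps the value
    non-negative even when a term appears in more than half of
    the documents."""
    return math.log(1 + (n_docs - doc_freq + 0.5) / (doc_freq + 0.5))

print(bm25_idf(1000, 5))    # rare term -> large weight (~5.2)
print(bm25_idf(1000, 900))  # common term -> small but still positive weight
```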
Scoring
The final BM25 score for a document D given a query Q is the sum, over the query terms present in the document, of the product of the IDF and TF components:

score(D, Q) = Σ_{i=1..n} IDF(q_i) × TF(q_i, D)

where n is the number of unique terms in the query that also appear in the document, IDF(q_i) is the IDF for term q_i, and TF(q_i, D) is the term frequency component for term q_i.
BM25's effectiveness comes from its balance between term frequency and document frequency, as well as its normalisation for document length.
This makes it robust in handling a variety of search queries and document types, making it a popular choice in information retrieval systems.
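Putting the two components together, a minimal sketch of a full scoring function might look like this (function and variable names are illustrative; real systems also handle tokenisation, stemming, and inverted indexing):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one tokenised document against a query, with document
    frequencies and average length taken from the tokenised corpus."""
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    freqs = Counter(doc_terms)
    score = 0.0
    for term in set(query_terms):
        f = freqs[term]
        if f == 0:
            continue  # only terms present in the document contribute
        df = sum(1 for d in corpus if term in d)  # document frequency
        idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
        norm = 1 - b + b * len(doc_terms) / avg_len
        score += idf * f * (k1 + 1) / (f + k1 * norm)
    return score

corpus = [["red", "apple", "pie"], ["green", "apple"], ["banana", "bread"]]
print(bm25_score(["apple"], corpus[1], corpus))  # matching document: positive score
print(bm25_score(["apple"], corpus[2], corpus))  # no match: 0.0
```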
Example of BM25 scoring
Let's consider a simple example to demonstrate how BM25 scores a document given a specific query.
Suppose we have a document collection containing three documents:
D1: "The quick brown fox jumps over the lazy dog"
D2: "A quick brown fox quickly jumps over the lazy dog"
D3: "The lazy dog sleeps all day long"
And we have a query: "quick fox"
To calculate the BM25 scores for each document, we need to compute the term frequency (TF) and inverse document frequency (IDF) components for each query term.
Assuming k1 = 1.2 and b = 0.75, an average document length of 10 terms, and stemming so that "quickly" in D2 counts as an occurrence of "quick", both query terms appear in two of the three documents, giving each an IDF of ln(1 + (3 − 2 + 0.5)/(2 + 0.5)) = ln(1.6) ≈ 0.47. The BM25 scores would be calculated as follows:
For D1 (length 9, one occurrence of each query term): TF = 1 × 2.2 / (1 + 1.2 × (0.25 + 0.75 × 9/10)) ≈ 1.04 per term, so score(D1) ≈ 2 × 0.47 × 1.04 ≈ 0.98.
For D2 (length 10, two occurrences of "quick" and one of "fox"): TF("quick") = 2 × 2.2 / (2 + 1.2) = 1.375 and TF("fox") = 2.2 / 2.2 = 1.0, so score(D2) ≈ 0.47 × (1.375 + 1.0) ≈ 1.12.
For D3 (length 7, no query terms): score(D3) = 0.
Based on the BM25 scores, the ranking of the documents for the query "quick fox" would be:
D2, D1, D3.
This example illustrates how BM25 takes into account both the term frequency within documents and the inverse document frequency across the collection to estimate the relevance of documents to a given query.
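The arithmetic can be checked with a few lines of code, under the same assumptions as the worked example (k1 = 1.2, b = 0.75, an average document length of 10, and "quickly" in D2 stemmed to "quick"):

```python
import math

k1, b, avgdl, N = 1.2, 0.75, 10, 3

def idf(df):
    # BM25 IDF for a term appearing in df of the N documents
    return math.log(1 + (N - df + 0.5) / (df + 0.5))

def tf(f, dl):
    # BM25 TF component for frequency f in a document of length dl
    return f * (k1 + 1) / (f + k1 * (1 - b + b * dl / avgdl))

# Term counts: D1 (len 9): quick=1, fox=1;
# D2 (len 10): quick=2 (incl. stemmed "quickly"), fox=1; D3: no query terms.
d1 = idf(2) * tf(1, 9) + idf(2) * tf(1, 9)
d2 = idf(2) * tf(2, 10) + idf(2) * tf(1, 10)
d3 = 0.0
print(round(d1, 2), round(d2, 2), d3)  # D2 ranks first, then D1, then D3
```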
Where can you find this algorithm?
The BM25 algorithm can be found and applied in various domains where information retrieval and search functionality are required.
Web Search Engines
Many popular web search engines, such as Google, Bing, or Yahoo, have employed BM25 or similar ranking functions as one of many signals for determining the relevance of search results for a given query.
Enterprise Search Systems
In large organisations, enterprise search systems use BM25 to provide employees with relevant documents, files, and information from internal databases.
E-commerce Websites
Online shopping platforms often use BM25 or similar algorithms to rank products by relevance to user queries and to provide personalised product recommendations.
Comparison with other methods
While BM25 is a highly effective and widely used ranking algorithm in information retrieval, it has some limitations compared to more recent methods, particularly those based on deep learning, such as BERT (Bidirectional Encoder Representations from Transformers).
BERT and other transformer-based models have the ability to capture semantic meaning and context more effectively than BM25.
Some key differences between BM25 and BERT-like models
Semantic understanding
BERT can better understand the semantic meaning of words and phrases in the context of the surrounding text. It can capture synonyms, polysemy (words with multiple meanings), and other linguistic nuances that BM25 may struggle with.
Contextual relevance
BERT takes into account the context in which words appear, both within the query and the documents. This allows it to disambiguate the meaning of words based on their context and provide more accurate relevance judgments.
Query-document interaction
BERT can model the interaction between the query and the document in a more sophisticated manner, considering the relationship between query terms and document terms at a deeper level than BM25's term frequency and inverse document frequency approach.
Language understanding
BERT's pre-training on a large corpus enables it to have a better general understanding of language, which can be beneficial for tasks that require language comprehension, such as question answering or sentiment analysis.
However, it's important to note that BERT and similar models have higher computational requirements compared to BM25, both in terms of training and inference time.
BM25 remains a popular choice in many information retrieval systems due to its simplicity, efficiency, and effectiveness, especially when computational resources are limited, or the dataset is relatively small.
Conclusion
In conclusion, BM25 is a powerful ranking algorithm and valuable tool for enhancing search relevance and delivering accurate and useful user results.
While BM25 remains a widely used and effective ranking function in information retrieval, the field has seen significant advancements, particularly with the introduction of machine learning and deep learning techniques.
These newer methods can sometimes outperform BM25, especially in contexts where understanding the semantic meaning and context of the text is crucial. Here are some developments that have expanded upon or complemented BM25 in information retrieval:
For example, models like BERT (Bidirectional Encoder Representations from Transformers), when applied to information retrieval, can significantly improve the relevance of search results by understanding the intent behind a query and the content of the documents.