Model-driven recommendation systems
Copyright Continuum Labs - 2023
This May 2023 paper introduces an approach to developing recommendation models that leverages large language models (LLMs) by treating recommendation as an instruction-following task.
The key idea is that user preferences or needs can be expressed in natural language descriptions, which are referred to as instructions. By understanding and executing these instructions, LLMs can fulfill the recommendation task more effectively.
The authors propose a framework called InstructRec, which consists of two main components:
The authors formulate a general instruction format that comprehensively describes the user's preferences, intentions, task form, and context in natural language.
They manually design 39 instruction templates and automatically generate a large amount of user-personalised instruction data (252K instructions) with varying types of preferences and intentions.
The authors of InstructRec take a novel approach to adapting Large Language Models (LLMs) for recommender systems.
Instead of relying on public LLM APIs, they fine-tune an open-source model, specifically the 3B Flan-T5-XL. This process, known as instruction tuning, optimises the LLM to better understand and execute user instructions in the context of recommendations.
The key innovation lies in bridging the gap between LLMs, which are trained on natural language text, and recommender systems, which typically deal with user behaviour data. By reformulating recommendation tasks as natural language instructions, the authors effectively transform them into language processing tasks that LLMs are well-suited to handle.
This approach offers several advantages:
Personalisation: Users can express their specific needs and preferences through natural language instructions, allowing for more nuanced and personalised recommendations.
Flexibility: The system can handle a wide range of recommendation scenarios, from traditional item suggestions to more complex, context-dependent recommendations.
Enhanced user control: By allowing users to articulate their needs directly, the system gives them more control over the recommendation process.
Leveraging LLM capabilities: The approach harnesses the powerful language understanding and generation capabilities of LLMs for recommendation tasks.
To evaluate the effectiveness of their method, the authors constructed various interaction scenarios using real-world datasets. Their experiments demonstrate that InstructRec outperforms several competitive baselines, including the powerful GPT-3.5 model, in these recommendation tasks.
This instruction tuning approach represents a step towards more user-centric and flexible recommender systems, potentially opening up new possibilities for human-AI interaction in recommendation contexts.
The methodology of the proposed InstructRec approach consists of three main components:
The authors design a unified instruction format that integrates three key aspects of expressing a user's needs: preference, intention, and task form.
Preference refers to a user's personalised tastes towards item attributes or characteristics, and can be none (cold-start scenarios), implicit (inferred from the user's context information), or explicit (directly expressed by the user).
Intention refers to a user's more immediate demand for certain types of items, and can be none (exploratory interaction), vague (an ambiguous idea of the target), or specific (a clear idea of the target).
Task form specifies the task the LLM should execute: pointwise (examining a single candidate item), pairwise (comparing a pair of items), matching (generating potential candidates), or reranking (reordering retrieved candidate items).
The instruction format also allows the inclusion of context information about the user's situation.
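As a rough sketch of this format, the three aspects plus context can be modelled as fields that render into a natural-language instruction. The field names and template wording below are illustrative assumptions, not the authors' 39 hand-crafted templates:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instruction:
    preference: Optional[str] = None   # none / implicit / explicit preference text
    intention: Optional[str] = None    # none / vague / specific intention text
    task_form: str = "reranking"       # pointwise | pairwise | matching | reranking
    context: Optional[str] = None      # optional situational information

    def render(self, candidates=None) -> str:
        # Assemble the natural-language instruction from whichever
        # aspects are present for this user and scenario.
        parts = []
        if self.preference:
            parts.append(f"The user's preference: {self.preference}.")
        if self.intention:
            parts.append(f"The user now wants: {self.intention}.")
        if self.context:
            parts.append(f"Context: {self.context}.")
        if self.task_form == "reranking" and candidates:
            items = "; ".join(candidates)
            parts.append(f"Rerank the following candidates: {items}.")
        elif self.task_form == "matching":
            parts.append("Generate a list of items the user may like.")
        elif self.task_form == "pointwise" and candidates:
            parts.append(f"Would the user like '{candidates[0]}'? Answer yes or no.")
        return " ".join(parts)

instr = Instruction(
    preference="enjoys story-driven RPG games",
    intention="a co-op game to play with friends",
    task_form="reranking",
)
print(instr.render(candidates=["Game A", "Game B", "Game C"]))
```

Leaving `preference` or `intention` unset naturally covers the cold-start and exploratory cases described above.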
The authors generate instruction data by simulating user preferences and intentions based on available interaction data, using GPT-3.5 as a teacher-LLM to generate personalised information for each user based on their historical interactions and reviews.
Preference annotation: For implicit preferences, item titles from the user's historical interactions are used. For explicit preferences, GPT-3.5 generates preference expressions based on those historical interactions.
Intention annotation: For vague intentions, GPT-3.5 extracts them from reviews of the target item. For specific intentions, the target item's category information is used.
Task form annotation: Instructions are formulated for the pointwise recommendation, matching, and reranking tasks.
The authors also propose strategies to increase the diversity of instructions, such as turning the task around, enforcing relatedness between preference and intention, and chain-of-thought-like reasoning.
Instruction Tuning for Recommendations
The authors use the 3B Flan-T5-XL as the backbone model for instruction tuning.
Optimisation via Instruction Tuning: The LLM is optimised using the generated instructions in a supervised fine-tuning manner, with the desired system responses annotated as the target output.
Inference: The instruction-tuned LLM is employed as a reranker to produce the final ranking of candidates based on users' instructions. An appropriate coarse-grained instruction template is selected according to the user's instruction and other available information, and the LLM then executes the resulting instruction, which specifies the user's needs.
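The inference flow can be sketched as follows. This is a hedged illustration, not the authors' implementation: `generate` is a stub standing in for a call to the fine-tuned Flan-T5-XL, and the template wording is invented.

```python
def build_reranking_instruction(preference, query, candidates):
    # Fill a coarse-grained reranking template with the user's
    # historical preference, current query, and retrieved candidates.
    numbered = ", ".join(f"{i}. {c}" for i, c in enumerate(candidates, 1))
    return (f"The user has interacted with: {preference}. "
            f"The user is searching for: {query}. "
            f"Rerank these candidates: {numbered}.")

def generate(prompt):
    # Stand-in for the instruction-tuned LLM; a real system would call
    # the fine-tuned seq2seq model here. This stub returns a fixed
    # ranking of candidate indices.
    return "2, 1, 3"

def rerank(preference, query, candidates):
    # The LLM acts purely as a reranker over candidates retrieved by an
    # upstream matching module.
    prompt = build_reranking_instruction(preference, query, candidates)
    order = [int(tok) - 1 for tok in generate(prompt).split(",")]
    return [candidates[i] for i in order]

print(rerank("wireless earbuds", "noise-cancelling headphones",
             ["Model X", "Model Y", "Model Z"]))
# With the stub above this prints ['Model Y', 'Model X', 'Model Z']
```

Parsing a ranking out of generated text (rather than scoring items one by one) is one plausible design; the key point is that the LLM reorders a small candidate pool rather than searching the full catalogue.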
The proposed InstructRec approach differs from traditional recommendation methods and existing LLM-based recommendation approaches by focusing on aligning LLMs with more diverse and detailed user needs in practical scenarios through the design and generation of recommendation-oriented instructions containing different types of preferences, intentions, and task forms.
The experiments conducted in this study demonstrate the effectiveness of the proposed InstructRec approach in accommodating diverse user information needs and its generalisation ability across different scenarios and datasets.
The authors evaluate InstructRec's performance in various interaction scenarios, including sequential recommendation, product search, and personalised search. They compare InstructRec with baseline models specific to each scenario and the universal GPT-3.5 model.
In the sequential recommendation scenario, InstructRec outperforms baselines such as BERT4Rec and SASRec, while GPT-3.5 performs unsatisfactorily because its general textual knowledge does not capture the domain-specific behavioural signals found in private interaction data.
For product search, InstructRec achieves superior or comparable performance to the DSSM baseline, especially in top-ranking metrics, due to its vast amount of encoded general knowledge.
In personalised search scenarios, InstructRec outperforms the TEM baseline by a large margin, effectively bridging the gap between universal semantic knowledge and personalised user behaviours.
Discriminating Hard Negative Item Candidates
InstructRec demonstrates a strong ability to discriminate and select items that align with user information needs among similar items retrieved by a strong matching module.
Discriminating More Candidate Items: InstructRec exhibits a considerable performance advantage over the traditional baseline when reranking a larger pool of candidate items (100 items).
Effects of Instructions: Increasing the diversity of instructions during instruction tuning improves InstructRec's performance on held-out interaction scenarios, demonstrating its generalisation capabilities.
Generalisation across Datasets: InstructRec outperforms powerful LLMs in a zero-shot setting when transferring from the "Games" dataset to the "CDs" dataset, indicating its ability to capture universal knowledge across distinct domains.
Measuring Success: The authors measure the success of InstructRec using top-K hit ratio (HR) and top-K normalised discounted cumulative gain (NDCG) metrics, with K set to 1, 3, and 5. These metrics evaluate the model's ability to rank the ground-truth item higher among the candidate items.
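These metrics can be computed directly. A minimal sketch for a single ranked list, assuming exactly one ground-truth item per candidate list (so the ideal DCG is 1 and NDCG reduces to the discounted gain at the target's rank):

```python
import math

def hit_ratio_at_k(ranked_items, target, k):
    # 1 if the ground-truth item appears in the top-K, else 0.
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    # With a single relevant item, NDCG@K is 1/log2(rank + 1) if the
    # target sits at `rank` within the top-K, else 0.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

ranking = ["item_c", "item_a", "item_b", "item_d", "item_e"]
print(hit_ratio_at_k(ranking, "item_a", 1))  # 0.0: target not at rank 1
print(hit_ratio_at_k(ranking, "item_a", 3))  # 1.0: target within top 3
print(round(ndcg_at_k(ranking, "item_a", 3), 4))  # 1/log2(3) ≈ 0.6309
```

Averaging these per-user values over the test set gives the reported HR@K and NDCG@K figures.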
InstructRec can be applied in real-world recommender systems to accommodate diverse user information needs and provide personalised recommendations.
The approach can be particularly useful in the following scenarios:
E-commerce platforms: InstructRec can help users find desired products by understanding their preferences, intentions, and contextual information through natural language instructions.
Content recommendation: The model can be applied to recommend articles, videos, or music based on user instructions, enhancing the user experience and engagement.
Personalised search engines: InstructRec can improve the relevance of search results by incorporating user preferences and intentions, making the search experience more tailored to individual needs.
To apply InstructRec in real-world systems, practitioners would need to collect user interaction data, generate instruction data using the proposed methodology, and fine-tune the language model using the generated instructions.
The fine-tuned model can then be deployed as a reranker in the recommendation pipeline to provide personalised and relevant recommendations based on user instructions.
To make this concrete, consider a hypothetical application: a wedding planning recommendation model has significant commercial value for businesses in the wedding industry, such as wedding planning websites, bridal magazines, or e-commerce stores selling wedding-related products.
By providing personalised recommendations based on user preferences and interactions, these businesses can:
Enhance user engagement: Personalised recommendations keep users interested and encourage them to explore more content or products related to their preferences, leading to increased website traffic and user satisfaction.
Boost sales: Recommending products or services that align with a user's preferences and current needs increases the likelihood of conversions, resulting in higher sales and revenue.
Improve customer loyalty: By providing valuable and relevant recommendations, businesses can build trust and establish long-term relationships with customers, encouraging repeat visits and purchases.
Streamline the wedding planning process: Personalised recommendations help users navigate the overwhelming amount of wedding-related information and products, making the planning process more efficient and less stressful.
Recording User Queries: To record user queries and interactions, the recommendation system would be integrated with the business's website or application. User actions, such as searches, page views, and purchases, would be logged and stored in a database. For example:
When a user searches for "affordable wedding favours," their search query would be recorded along with a timestamp and any relevant user information (e.g., user ID, session ID).
As the user interacts with the website, their actions (e.g., clicking on a product, adding an item to the cart, or making a purchase) would be logged and associated with their user profile.
User preferences can be inferred from their interactions, such as the types of products they view or purchase, the content they engage with, and any explicit feedback they provide (e.g., ratings, reviews).
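A minimal sketch of such an interaction log: the field names and event types here are illustrative assumptions, and a production system would persist these records to a database rather than an in-memory list.

```python
from collections import Counter
from datetime import datetime, timezone

event_log = []

def log_event(user_id, event_type, payload):
    # Record a user action with a timestamp, as described above.
    event_log.append({
        "user_id": user_id,
        "event_type": event_type,   # e.g. search, view, add_to_cart, purchase
        "payload": payload,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

def inferred_preferences(user_id, top_n=3):
    # Infer implicit preferences from the categories a user views or buys.
    counts = Counter(
        e["payload"].get("category")
        for e in event_log
        if e["user_id"] == user_id
        and e["event_type"] in ("view", "purchase")
        and e["payload"].get("category")
    )
    return [cat for cat, _ in counts.most_common(top_n)]

log_event("u1", "search", {"query": "affordable wedding favours"})
log_event("u1", "view", {"item": "Mini candle set", "category": "favours"})
log_event("u1", "view", {"item": "Seed packets", "category": "favours"})
log_event("u1", "purchase", {"item": "Table runner", "category": "decor"})
print(inferred_preferences("u1"))  # ['favours', 'decor']
```

Explicit feedback such as ratings or reviews could be logged through the same interface and weighted more heavily when building the user profile.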
Using the Model for Recommendations: To generate recommendations for a user, the recommendation system would follow these steps:
Retrieve the user's profile, including their historical interactions, preferences, and current query (if applicable).
Format the user's information into an instruction that matches the model's expected input structure, such as: "The user has previously purchased the following items: {1. 'Personalised Wedding Favour Ideas on a Budget' 2. 'DIY Wedding Decoration Ideas'}. The user is now searching for {'affordable wedding centerpieces'}. Please provide recommendations."
Feed the formatted instruction into the trained recommendation model, which will generate a set of recommendations based on the user's preferences and current needs.
Display the generated recommendations to the user on the website or application, typically as a list of suggested products, services, or content.
Monitor the user's interaction with the recommendations (e.g., clicks, purchases) and update their profile accordingly, allowing the model to continuously learn and improve its recommendations over time.
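The first three steps above can be sketched as follows, reusing the instruction template quoted earlier. The profile fields are illustrative, and the stub callable stands in for the fine-tuned model:

```python
def format_instruction(history, query):
    # Reproduce the template: "The user has previously purchased the
    # following items: {1. '...' 2. '...'}. The user is now searching
    # for {'...'}. Please provide recommendations."
    numbered = " ".join(f"{i}. '{t}'" for i, t in enumerate(history, 1))
    return (f"The user has previously purchased the following items: "
            f"{{{numbered}}}. The user is now searching for {{'{query}'}}. "
            f"Please provide recommendations.")

def recommend(model, profile):
    # `model` is any callable mapping an instruction string to item
    # titles; in deployment it would be the fine-tuned LLM reranker.
    instruction = format_instruction(profile["history"], profile["query"])
    return model(instruction)

profile = {
    "history": ["Personalised Wedding Favour Ideas on a Budget",
                "DIY Wedding Decoration Ideas"],
    "query": "affordable wedding centerpieces",
}
stub_model = lambda instr: ["Rustic mason jar centerpiece", "Paper flower set"]
print(recommend(stub_model, profile))
```

Steps 4 and 5 (display and monitoring) would feed the user's reaction back into the interaction log, closing the loop.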
By implementing this recommendation model, businesses in the wedding industry can provide a more personalised and efficient experience for their users, ultimately leading to increased engagement, sales, and customer satisfaction.
The model's ability to process natural language instructions and generate relevant recommendations based on user preferences and interactions makes it a valuable tool for businesses looking to improve their online presence and drive growth.
The model learns to make recommendations based on the user query through the process of instruction tuning.
During training, the model is provided with a large number of instruction-output pairs, where the instructions contain information about the user's preferences, historical interactions, and current intentions.
By learning from these examples, the model develops an understanding of how to map user queries and contextual information to appropriate recommendations.
Training such a model requires a set of instruction-output pairs that map user context and queries to appropriate recommendations.
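A couple of illustrative instruction-output pairs, serialised as JSON, might look like the following. The items, wording, and field names are hypothetical, not drawn from the paper:

```python
import json

# Hypothetical training examples for a wedding planning model; a full
# dataset would contain many such pairs covering varied preferences,
# intentions, and task forms.
examples = [
    {
        "instruction": ("The user has previously viewed: 'Outdoor Wedding "
                        "Venue Checklist'. The user is now searching for "
                        "'budget-friendly garden venues'. Please provide "
                        "recommendations."),
        "output": "1. Community botanical gardens 2. Local park pavilions",
    },
    {
        "instruction": ("The user prefers handmade decorations and is "
                        "searching for 'rustic table centerpieces'. Please "
                        "provide recommendations."),
        "output": "1. Mason jar wildflower arrangements 2. Wood slice candle holders",
    },
]

print(json.dumps(examples, indent=2))
```

Each pair pairs a rendered instruction with the desired system response, which becomes the supervised target during fine-tuning.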
Traditional recommendation systems often rely on structured data, such as user ratings, item features, or transaction history, to make recommendations.
The proposed model leverages natural language to process and understand user queries, preferences, and interactions in natural language format, allowing for a more intuitive and human-like interaction between the user and the recommendation system.
Existing recommendation systems typically use techniques such as collaborative filtering, content-based filtering, or matrix factorization to learn user preferences and generate recommendations.
The proposed model employs instruction tuning, a novel approach where the model is trained on a large set of instruction-output pairs, enabling it to understand and follow instructions in natural language to generate personalised recommendations.
Traditional recommendation systems are often designed for specific domains or use cases, such as movie recommendations or product recommendations, and may struggle to adapt to new domains or changing user preferences.
The instruction-tuned model can handle a wide range of user queries, preferences, and interactions, making it more flexible and adaptable to various scenarios and domains.
Many existing recommendation systems act as "black boxes," providing limited insight into why a particular recommendation was made.
The proposed model, by generating recommendations in natural language, can provide more explainable and transparent recommendations, as the output can include reasoning or justification for the suggested items or content.
Handling Cold Start
Cold start is a common problem in traditional recommendation systems, where the system struggles to make accurate recommendations for new users or items with limited data.
The instruction-tuned model can potentially handle cold start situations better by leveraging the knowledge encoded in the pre-trained language model and the ability to process natural language instructions, even with limited user data.
While existing recommendation systems aim to provide personalised recommendations, they often rely on implicit feedback or user-item interactions.
The proposed model can incorporate explicit user preferences and intents expressed through natural language, enabling a higher level of personalisation in the recommendations.
It's important to note that the proposed wedding planning recommendation model is still a conceptual approach, and its effectiveness compared to existing recommendation systems would need to be validated through rigorous testing and evaluation.
However, the use of instruction tuning and natural language processing in recommendation systems is a promising avenue for future research and development, with the potential to create more user-centric, adaptable, and explainable recommendation experiences.
Based on the survey paper, some of the top LLM-based recommendation systems highlighted include:
M6-Rec (Cui et al., 2022) - Pre-trains the entire model on massive recommendation datasets using transformer-based models for next-item prediction and language modeling tasks.
TALLRec (Bao et al., 2023b) - Uses LoRA, a parameter-efficient tuning method, for two-stage tuning of LLMs, first on general data and then on user historical information.
BIGRec (Bao et al., 2023a) - Fine-tunes LLMs through instruction tuning to produce tokens symbolizing items and aligns them with real items using statistical data like popularity.
RecAgent (Wang et al., 2023b) - Treats each user as an FM-based autonomous agent within a virtual simulator, allowing for free interaction, behavior, and evolution of agents.
Agent4Rec (Zhang et al., 2023a) - A recommender system simulator with 1,000 LLM-empowered generative agents interacting with personalised movie recommendations.
VIP5 (Geng et al., 2023) - Focuses on visual and textual modalities in recommendation tasks, incorporating multi-modal personalized prompts and parameter-efficient training.
P5 (Geng et al., 2022) - "Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm" - presents a unified paradigm for using foundation models in recommendation systems.
Exploring Adapter-based Transfer Learning for Recommender Systems (Fu et al., 2023a) - Investigates the efficacy of adapter-based learning for cross-domain recommendation.
A Unified Framework for Multi-Domain CTR Prediction via Large Language Models (Fu et al., 2023b) - Introduces a prompting strategy to convert features into a prompt sequence for semantic representation in multi-domain recommendation.