Foundation Models for Recommender Systems
Copyright Continuum Labs - 2023
This February 2024 paper provides a comprehensive survey of the emerging field of Foundation Model-based Recommender Systems (FM4RecSys).
While I am not a fan of this acronym they have created, it is an important and growing field.
The authors review the research background, provide a taxonomy of existing FM4RecSys works, and discuss open problems and opportunities in this area.
The taxonomy is divided into four parts: data characteristics, representation learning, model type, and downstream tasks.
Key developments in each area are reviewed, with a focus on representative models and their characteristics.
The authors also classify FM4RecSys frameworks by model type, including language foundation models, personalised agents, and multi-modal foundation models.
They then discuss applications of FM4RecSys in tasks such as top-k recommendation, context-aware recommendation, interactive recommendation, cross-domain recommendation, and interpretability and fairness.
Motivation for FM4RecSys
Enhanced generalisation capabilities: FMs can learn from large-scale data and generalise better to new, unseen data, improving recommendations for new users or items with limited information.
Elevated recommendation experience: FMs enable more dynamic and unstructured conversational interactions, offering enhanced interactivity and flexibility in user engagements.
Improved explanation and reasoning capabilities: FMs can generate more coherent and logically sound explanations by leveraging a comprehensive grasp of commonsense and user-specific context.
Distinguishing features from recent LLM-based RecSys surveys
The authors provide a unique viewpoint for examining FM4RecSys, systematically outlining the framework from data characteristics to downstream tasks.
The survey categorises FM4RecSys based on both the types of models used and the recommendation tasks themselves.
The survey covers a wider array of foundation models, not just large language models (LLMs).
Data characteristics: User/Item ID, user profile, item side information, and external knowledge bases.
Representation learning: ID-based representation, multi-modal representation, and hybrid representation.
Model type: Language FM, personalised agent FM, and multi-modal FM.
Downstream tasks: Top-K recommendation, interpretability in recommendation, context-aware recommendation, fairness in recommendation, and interactive recommendation.
Large Language Models (LLMs) have shown potential in enhancing recommendation systems (RSs) by leveraging their extensive knowledge, reasoning capabilities, and generative power.
Here are some key ways LLMs can be used to create better recommendation systems, along with implementation details:
LLMs can be fine-tuned on user-specific data, such as user profiles, item interactions, and reviews, to generate personalised recommendations.
Models like InstructRec, TALLRec, and BIGRec demonstrate the effectiveness of fine-tuning LLMs for recommendation tasks.
Implementation: Collect user data, preprocess it into a suitable format (e.g., user-item interaction sequences), and fine-tune the LLM using techniques like instruction tuning or parameter-efficient methods such as LoRA.
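As a rough illustration of this workflow, the sketch below assumes the Hugging Face transformers and peft libraries, a small placeholder base model, and a TALLRec-style yes/no instruction format; none of these specific choices come from the survey itself.

```python
# Sketch: attach LoRA adapters to a causal LM and format user-item data as instructions.
# Model name and prompt template are illustrative placeholders, not the paper's exact setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model = "facebook/opt-350m"                      # placeholder; swap in your own base LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],              # attention projections; names vary by architecture
)
model = get_peft_model(model, lora_config)            # only the adapter weights will be trained
model.print_trainable_parameters()

def format_example(history: list[str], candidate: str, liked: bool) -> dict:
    """Turn one user-item interaction into an instruction-tuning example."""
    prompt = (
        "The user recently interacted with the following items:\n"
        f"{', '.join(history)}\n"
        f"Would the user enjoy '{candidate}'? Answer Yes or No.\nAnswer:"
    )
    return {"prompt": prompt, "completion": " Yes" if liked else " No"}

example = format_example(["The Matrix", "Inception"], "Interstellar", liked=True)
print(example["prompt"])
# The formatted examples would then be tokenized and passed to a standard
# causal-LM fine-tuning loop (e.g. transformers.Trainer).
```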
LLMs can leverage their pre-trained knowledge to provide accurate recommendations even for new users or items with limited data.
By using item titles, descriptions, or other side information, LLMs can infer user preferences and generate relevant recommendations.
Implementation: Represent new users or items using available side information (e.g., item titles) and feed this into the fine-tuned LLM to generate recommendations without relying on extensive interaction data.
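A minimal sketch of this idea follows, assuming a textual user profile and item titles are the only available inputs; `call_llm` is a hypothetical stand-in for whichever fine-tuned or hosted model is used.

```python
# Sketch: cold-start recommendation from side information only.
# `call_llm` is a stand-in for your fine-tuned or hosted model's text-generation call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM call")

def cold_start_prompt(user_profile: str, candidate_titles: list[str]) -> str:
    """Build a ranking prompt from a textual user profile and item titles (no interaction data)."""
    items = "\n".join(f"{i + 1}. {title}" for i, title in enumerate(candidate_titles))
    return (
        f"A new user describes their interests as: {user_profile}\n"
        "Rank the following items from most to least relevant for this user.\n"
        f"{items}\n"
        "Return only the item numbers, best first."
    )

prompt = cold_start_prompt(
    "sci-fi films with strong world-building",
    ["Dune", "Notting Hill", "Blade Runner 2049"],
)
# ranking = call_llm(prompt)  # e.g. "3, 1, 2"
print(prompt)
```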
LLMs can generate natural language explanations for recommended items, improving transparency and user trust.
Prompting techniques, such as asking the LLM to explain why an item is recommended to a specific user, can be used to generate coherent and informative explanations.
Implementation: Design prompts that include user and item information, and use the LLM to generate explanations. Optionally, incorporate item features or continuous prompt vectors to guide the explanation process.
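For example, a prompt of this kind might be assembled as in the sketch below; the template, feature list, and `call_llm` stand-in are illustrative rather than taken from any specific paper.

```python
# Sketch: generate a natural-language explanation for a recommended item.
# The template and attributes are illustrative; `call_llm` is a stand-in for a real model call.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM call")

def explanation_prompt(user_history: list[str], item: str, item_features: list[str]) -> str:
    """Combine user history and item attributes into an explanation request."""
    return (
        f"The user previously enjoyed: {', '.join(user_history)}.\n"
        f"We recommended '{item}', which has these attributes: {', '.join(item_features)}.\n"
        "In two sentences, explain to the user why this item was recommended, "
        "referring to their past preferences."
    )

prompt = explanation_prompt(
    ["The Martian", "Gravity"],
    "Interstellar",
    ["space exploration", "hard sci-fi", "Christopher Nolan"],
)
# explanation = call_llm(prompt)
print(prompt)
```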
LLMs can integrate various contextual information, such as user preferences, item attributes, and external knowledge, to provide more relevant recommendations.
Techniques like knowledge prompt tuning and hybrid item representation can be used to incorporate structured knowledge and bridge the semantic gap between traditional RSs and LLMs.
Implementation: Construct prompts that include contextual information (e.g., user preferences, item attributes) and use the LLM to generate context-aware recommendations. Optionally, integrate external knowledge bases or adapt the LLM's input space to facilitate knowledge transfer.
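One way this could look in practice is sketched below, where hypothetical knowledge-graph triples are verbalised into the prompt alongside the user's context; the triples and template are placeholders, not a prescribed format.

```python
# Sketch: knowledge-prompt construction that verbalises structured facts (e.g. knowledge-graph
# triples) alongside user context. Triples and template are illustrative placeholders.
def verbalise_triples(triples: list[tuple[str, str, str]]) -> str:
    """Turn (head, relation, tail) triples into short natural-language facts."""
    return " ".join(f"{h} {r.replace('_', ' ')} {t}." for h, r, t in triples)

def context_aware_prompt(user_context: str, candidate: str,
                         triples: list[tuple[str, str, str]]) -> str:
    facts = verbalise_triples(triples)
    return (
        f"User context: {user_context}\n"
        f"Background knowledge: {facts}\n"
        f"Given this context, rate how suitable '{candidate}' is for the user on a 1-5 scale "
        "and briefly justify the score."
    )

prompt = context_aware_prompt(
    "browsing for a weekend family activity in rainy weather",
    "Natural History Museum visit",
    [("Natural History Museum", "located_in", "London"),
     ("Natural History Museum", "suitable_for", "children")],
)
print(prompt)
```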
LLMs can engage in multi-turn conversations with users, refining preferences and providing personalised recommendations based on the conversation history.
Techniques like role-playing prompts and augmentations such as Retrieval-Augmented Generation (RAG) and Chain-of-Thought (CoT) can enhance the LLM's conversational abilities.
Implementation: Design conversational prompts and fine-tune the LLM on conversation datasets. Use the fine-tuned LLM to engage in multi-turn conversations with users, updating recommendations based on the evolving user preferences.
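A minimal sketch of such a loop is shown below; it assumes the common role/content chat-message convention and a hypothetical `chat_llm` call standing in for the fine-tuned conversational model.

```python
# Sketch: multi-turn conversational recommendation loop. The message format follows the common
# chat convention (role/content dicts); `chat_llm` is a stand-in for a real chat-model call.
def chat_llm(messages: list[dict]) -> str:
    raise NotImplementedError("replace with a real chat-model call")

system_prompt = (
    "You are a recommendation assistant. Ask clarifying questions when the user's "
    "preferences are unclear, and recommend specific items once they are."
)
messages = [{"role": "system", "content": system_prompt}]

def user_turn(user_message: str) -> str:
    """Append the user's message, query the model, and keep the full history for later turns."""
    messages.append({"role": "user", "content": user_message})
    reply = chat_llm(messages)
    messages.append({"role": "assistant", "content": reply})
    return reply

# Example dialogue (each call refines the recommendation using the accumulated history):
# user_turn("I'm looking for a book for a long flight.")
# user_turn("Something lighter than that -- maybe humorous non-fiction.")
```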
LLMs can be guided to generate recommendations that are fair and unbiased towards different user and item groups.
Techniques like Counterfactually-Fair-Prompting (CFP) and conditional ranking can be used to promote fairness in LLM-based RSs.
Implementation: Craft prompts that include fairness constraints or sensitive attributes, and train the LLM to generate recommendations that satisfy these constraints. Evaluate the fairness of the generated recommendations using appropriate metrics.
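As a simple illustration in the spirit of counterfactual prompting (not the exact CFP method), the sketch below generates recommendations with and without a sensitive attribute and measures how much the two rankings overlap; `recommend` is a hypothetical stand-in for an LLM-backed recommender.

```python
# Sketch: a counterfactual-style fairness probe. Recommendations are generated with and without
# a sensitive attribute in the prompt, and list overlap is measured as a crude neutrality signal.
# `recommend` is a stand-in for an LLM-backed recommender returning a ranked list of item ids.
def recommend(profile: str, k: int = 10) -> list[str]:
    raise NotImplementedError("replace with a real LLM-backed recommender")

def jaccard(a: list[str], b: list[str]) -> float:
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def counterfactual_overlap(base_profile: str, sensitive_attribute: str, k: int = 10) -> float:
    """Higher overlap suggests the sensitive attribute has less influence on the ranking."""
    with_attr = recommend(f"{base_profile} The user is {sensitive_attribute}.", k)
    without_attr = recommend(base_profile, k)
    return jaccard(with_attr, without_attr)

# overlap = counterfactual_overlap("Enjoys documentaries and long-distance running.", "female")
# print(f"Top-10 overlap with/without the attribute: {overlap:.2f}")
```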
LLMs can leverage their pre-trained knowledge across various domains to provide recommendations in data-scarce target domains.
Techniques like domain-specific adapters, behavior mixing, and knowledge adaptation can be used to facilitate cross-domain knowledge transfer.
Implementation: Fine-tune the LLM on data from multiple domains, using techniques like adapter-based learning or prompting strategies that capture domain-specific characteristics. Use the fine-tuned LLM to generate recommendations in the target domain.
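The sketch below illustrates the adapter-based variant, assuming the Hugging Face peft library's multi-adapter support; the base model, domain names, and training steps are placeholders rather than a prescribed recipe.

```python
# Sketch: one shared base model with separate LoRA adapters per domain (adapter-based
# cross-domain transfer). Assumes the Hugging Face peft library's multi-adapter support;
# model name and adapter names are placeholders.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # placeholder base LLM

lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16,
                  target_modules=["q_proj", "v_proj"])

# Source-domain adapter (e.g. trained on abundant book interactions).
model = get_peft_model(base, lora, adapter_name="books")
# Target-domain adapter (e.g. fine-tuned on scarce movie interactions).
model.add_adapter("movies", lora)

model.set_adapter("books")   # fine-tune / serve with the source-domain adapter
# ... train on book-domain data ...
model.set_adapter("movies")  # switch to the data-scarce target domain
# ... fine-tune briefly on movie-domain data, reusing the shared base knowledge ...
```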
To implement these ideas effectively, it is important to have access to high-quality user and item data, design appropriate prompts and fine-tuning strategies, and carefully evaluate the generated recommendations using relevant metrics and user feedback.
Collaboration between domain experts, data scientists, and ML engineers is essential to successfully integrate LLMs into recommendation systems and create a more personalised, explainable, and fair user experience.
To better understand the practical applications of Foundation Model-based Recommender Systems (FM4RecSys), it is essential to examine specific examples of models and their characteristics.
This section will explore various types of LLM-based recommendation models.
By reviewing their workings, architecture, and best applications of these models, we can gain insights into how FM4RecSys can be effectively implemented to enhance the performance and user experience of recommendation systems.
The first category comprises recommendation foundation models pre-trained from scratch on massive recommendation datasets using transformer-based architectures, employing language modeling objectives such as masked language modeling and permutation language modeling.
They require a large amount of domain data and have high training costs. They are best suited for scenarios where extensive domain-specific data is available.
The second category fine-tunes pre-trained LLMs using techniques like instruction tuning and parameter-efficient methods such as LoRA.
These models can understand and follow different instructions for recommendations and are effective for cold-start recommendations. They are best applied in situations where specific instructions or user history is available for fine-tuning.
A third approach focuses on extracting knowledge from LLMs using prompting strategies, without fine-tuning the model parameters.
These models design appropriate prompts to stimulate the recommendation abilities of LLMs. They are suitable for scenarios where fine-tuning is not feasible or when quick adaptation to new recommendation tasks is required.
Another line of work treats users as FM-based autonomous agents within a virtual simulator.
These agents can simulate complex user behaviours and interactions within the recommendation system.
They are best applied in scenarios where gathering real user behaviour data is expensive or ethically complex, and when simulating realistic user interactions is crucial for training and evaluation.
Finally, agent-style recommenders leverage the reasoning, reflection, and tool-usage capabilities of FMs for recommendation.
They treat FMs as the brain and recommendation models as tools that supply domain-specific knowledge. These models are suitable for scenarios that require advanced reasoning and tool integration for personalised recommendations.
While the examples of Foundation Model-based Recommender models demonstrate the potential for improved recommendation systems, it is important to acknowledge and address the open problems and opportunities in this field.
By identifying and tackling these challenges, researchers and practitioners can further advance the development and deployment of FM4RecSys, ensuring that these systems continue to evolve and meet the growing demands of businesses and users alike.
In the following section, we will explore some of the key open problems and opportunities in FM4RecSys.
Issue: FMs have fixed context window limitations, which can impact their effectiveness in tasks requiring extensive context, such as context-aware recommendations.
Resolution: Adapt techniques from NLP, including segmenting and summarising inputs to fit within the context window, and employ strategies like attention mechanisms and memory augmentation to focus on relevant input segments. Rotary position embedding (RoPE) also shows promise in managing long inputs.
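A minimal sketch of the segment-and-summarise strategy is shown below, with a hypothetical `summarise` call and illustrative sizes; it keeps recent interactions verbatim and compresses older ones.

```python
# Sketch: fit a long interaction history into a limited context window by chunking and
# summarising older interactions, keeping the most recent ones verbatim.
# `summarise` is a stand-in for an LLM summarisation call; sizes are illustrative.
def summarise(text: str) -> str:
    raise NotImplementedError("replace with a real LLM summarisation call")

def compress_history(interactions: list[str], keep_recent: int = 20, chunk_size: int = 50) -> str:
    """Summarise older interactions chunk by chunk; leave the newest ones untouched."""
    older, recent = interactions[:-keep_recent], interactions[-keep_recent:]
    summaries = [
        summarise("; ".join(older[i:i + chunk_size]))
        for i in range(0, len(older), chunk_size)
    ]
    return (
        "Summary of earlier activity: " + " ".join(summaries) + "\n"
        "Recent activity: " + "; ".join(recent)
    )
```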
Issue: The complexity and size of FMs introduce new hurdles in explaining FM4RecSys, making it challenging to enhance explainability and trustworthiness.
Resolution: Align FMs with explicit knowledge bases like knowledge graphs to make the decision-making process traceable. Techniques like Chain/Tree of Thoughts can further enhance the explanations. Generate natural language explanations for recommendations to improve transparency.
Issue: Time series modeling in recommendation systems has not fully benefited from large-scale pretraining due to its unique challenges, such as variable scales, sampling rates, and data gaps.
Resolution: Leverage approaches like LLMTime, which encodes time series as numerical strings and treats forecasting as a next-token prediction task. Transform token distributions into continuous densities to facilitate the application of LLMs to time series forecasting without specialised knowledge or high computational costs.
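A simplified sketch of an LLMTime-style encoding is shown below; the actual method's rescaling and digit-level handling are more involved, so this is illustrative only.

```python
# Sketch: a simplified LLMTime-style encoding -- scale a series, render values as fixed-precision
# strings separated by commas, and ask the model to continue the sequence.
def encode_series(values: list[float], precision: int = 2) -> tuple[str, float]:
    scale = max(abs(v) for v in values) or 1.0          # simple rescaling into a small range
    encoded = ", ".join(f"{v / scale:.{precision}f}" for v in values)
    return encoded, scale

def decode_series(text: str, scale: float) -> list[float]:
    return [float(tok) * scale for tok in text.split(",") if tok.strip()]

series = [112.0, 118.5, 121.0, 130.2, 128.9]
encoded, scale = encode_series(series)
prompt = f"Continue this sequence with the next 3 values:\n{encoded},"
# completion = call_llm(prompt)            # stand-in for a real model call
# forecast = decode_series(completion, scale)
print(prompt)
```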
Issue: Integrating multi-modal understanding and interaction in recommendation systems to achieve more intelligent and personalised recommendations.
Resolution: Develop agent AI systems that can perceive and act in various domains, process visual and contextual data, understand user actions and behaviours, and produce meaningful responses. Use these agents as simulators for both RSs and users to collect data and train in an offline environment, reducing real-world A/B testing costs.
Issue: FMs may have outdated knowledge, generate incorrect information, or lack domain expertise, which can negatively impact the quality of recommendations.
Resolution: Integrate RAG techniques into FM4RecSys to enhance the generative capability by incorporating external data retrieval into the generative process. Use RAG to selectively extract relevant portions of a user's interaction history and associated external knowledge, ensuring up-to-date and reliable recommendations.
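A minimal sketch of this retrieval step follows, assuming the sentence-transformers library and a placeholder embedding model; only the few most relevant past interactions are placed in the prompt.

```python
# Sketch: retrieval-augmented recommendation -- embed the user's past interactions, retrieve the
# few most relevant to the current request, and place only those in the prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")      # placeholder embedding model

def retrieve_relevant(history: list[str], query: str, k: int = 5) -> list[str]:
    """Return the k past interactions most similar to the current request (cosine similarity)."""
    doc_emb = encoder.encode(history, normalize_embeddings=True)
    query_emb = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_emb @ query_emb
    top = np.argsort(-scores)[:k]
    return [history[i] for i in top]

history = ["bought trail running shoes", "read reviews of hiking backpacks",
           "watched a video on winter camping", "ordered a phone case"]
relevant = retrieve_relevant(history, "recommend gear for a multi-day hike", k=2)
prompt = (
    "Relevant past activity: " + "; ".join(relevant) + "\n"
    "Request: recommend gear for a multi-day hike."
)
print(prompt)
```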
Issue: The development of FM-based recommendation systems involves costs related to training, inference, and API usage, which can escalate with more complex or extensive usage.
Resolution: Employ targeted solutions such as data selection for pre-training and fine-tuning, parameter-efficient fine-tuning methods (e.g., LoRA), embedding caching, lightweight FMs (e.g., distillation, pruning, quantization), prompt selection, and adaptive RAG to reduce training costs, inference latency, and API costs.
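As one concrete and deliberately simple illustration of the embedding-caching idea, the sketch below embeds each distinct text only once; `embed` is a hypothetical stand-in for whichever paid or local embedding call is in use.

```python
# Sketch: an in-memory embedding cache so repeated items are embedded only once,
# trimming inference and API costs. `embed` is a stand-in for the real embedding call.
import hashlib

def embed(text: str) -> list[float]:
    raise NotImplementedError("replace with a real (paid or local) embedding call")

_cache: dict[str, list[float]] = {}

def cached_embed(text: str) -> list[float]:
    key = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if key not in _cache:
        _cache[key] = embed(text)       # only pay for unseen texts
    return _cache[key]
```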
Issue: The current benchmarks for FM4RecSys are limited in terms of datasets, recommendation tasks, and evaluation metrics, hindering comprehensive evaluation and comparison of different models.
Resolution: Create a holistic and diversified benchmark that includes a variety of datasets, diverse recommendation tasks, and adaptable metrics for different models. Devise new benchmarks and evaluation metrics specifically for multi-modal and personalised agent FMs in recommendation scenarios.
Issue: FMs in RSs face challenges related to safety (e.g., red teaming attacks, undesirable content generation) and privacy (e.g., potential access to sensitive user data through prompt injection).
Resolution: Align FMs with human values by gathering relevant negative data and employing supervised fine-tuning techniques like online and offline human preference training. Incorporate approaches such as federated learning and machine unlearning into FM4RecSys to enhance user privacy protection.
By addressing these key issues and implementing the proposed resolutions, researchers and practitioners can further advance the field of FM4RecSys, improving the effectiveness, explainability, trustworthiness, and efficiency of foundation model-based recommender systems.
The field of Foundation Model-based Recommender Systems (FM4RecSys) has shown significant promise in enhancing the capabilities and performance of recommendation systems.
By leveraging the power of large language models and other foundation models, FM4RecSys can provide more personalised, explainable, and context-aware recommendations, even in scenarios with limited data.
However, the development and deployment of FM4RecSys come with their own set of challenges, such as handling long sequences, ensuring explainability and trustworthiness, and addressing privacy and safety concerns.
By focusing on these open problems and opportunities, researchers and practitioners can further advance the field of FM4RecSys and create more effective, efficient, and user-centric recommendation systems that cater to the diverse needs of businesses and consumers alike.
The most promising commercial applications of FM4RecSys likely lie in domains where personalisation, explainability, and cross-domain knowledge integration are crucial, such as e-commerce, content streaming, and digital marketing.
Here is a summary of the papers referenced in this article, including their titles, topics, authors, and publication years:
A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems
Topic: The paper proposes a new paradigm to integrate large language models effectively in recommendation systems.
Authors: Keqin Bao, Jizhi Zhang, Wenjie Wang, Yang Zhang, Zhengyi Yang, Yancheng Luo, Fuli Feng, Xiangnan He, Qi Tian
Year: 2023
TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation
Topic: This study introduces TALLRec, an efficient tuning framework that aligns large language models to enhance recommendation systems.
Authors: Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He
Year: 2023
Large Language Models for Recommendation: Progresses and Future Directions
Topic: A review of the current state and future directions of using large language models in recommendation systems.
Authors: Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, Xiangnan He
Year: 2023
Longformer: The Long-Document Transformer
Topic: Introduction of Longformer, a transformer model designed to efficiently process longer documents.
Authors: Iz Beltagy, Matthew E. Peters, Arman Cohan
Year: 2020
On the Opportunities and Risks of Foundation Models
Topic: An extensive discussion on the potential benefits and risks associated with foundation models in AI.
Authors: Rishi Bommasani and colleagues
Year: 2021
Language Models are Few-Shot Learners
Topic: An exploration of the capabilities of language models to perform tasks with minimal task-specific data.
Authors: Tom Brown and colleagues
Year: 2020
Instruction Mining: High-Quality Instruction Data Selection for Large Language Models
Topic: The paper focuses on selecting high-quality instructional data to improve the performance of large language models.
Authors: Yihan Cao, Yanbin Kang, Lichao Sun
Year: 2023
The Lottery Ticket Hypothesis for Pretrained BERT Networks
Topic: Investigates the "lottery ticket hypothesis" in the context of pretrained BERT networks, suggesting that certain subnetworks are especially effective.
Authors: Tianlong Chen, Jonathan Frankle, Shiyu Chang, Sijia Liu, Yang Zhang, Zhangyang Wang, Michael Carbin
Year: 2020
Recommendation Unlearning
Topic: Discusses methods for 'unlearning' in recommendation systems to potentially improve model adaptability and data privacy.
Authors: Chong Chen, Fei Sun, Min Zhang, Bolin Ding
Year: 2022
Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning
Topic: Explores the efficiency of using significantly reduced datasets for tuning large language models.
Authors: Hao Chen, Yiming Zhang, Qi Zhang, Hantao Yang, Xiaomeng Hu, Xuetao Ma, Yifan Yanggong, Junbo Zhao
Year: 2023
M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems
Topic: Introduces M6-Rec, a framework that utilizes generative pretrained language models as flexible, open-ended recommender systems.
Authors: Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, Hongxia Yang
Year: 2022
Uncovering ChatGPT’s Capabilities in Recommender Systems
Topic: An investigation into how ChatGPT can be utilized to enhance the functionality of recommender systems.
Authors: Sunhao Dai and colleagues
Year: 2023
A Unified Search and Recommendation Foundation Model for Cold-Start Scenario
Topic: This paper presents a foundation model designed to address the cold-start problem in search and recommendation systems.
Authors: Yuqi Gong, Xichen Ding, Yehui Su, Kaiming Shen, Zhongyi Liu, Guannan Zhang
Year: 2023
Large Language Models are Zero-Shot Time Series Forecasters
Topic: Exploration of the capability of large language models to perform zero-shot forecasting for time series data.
Authors: Nate Gruver, Marc Anton Finzi, Shikai Qiu, Andrew Gordon Wilson
Year: 2023
Leveraging Large Language Models for Sequential Recommendation
Topic: Investigates how large language models can be utilized for making sequential recommendations.
Authors: Jesse Harte, Wouter Zorgdrager, Panos Louridas, Asterios Katsifodimos, Dietmar Jannach, Marios Fragkoulis
Year: 2023
Large Language Models as Zero-Shot Conversational Recommenders
Topic: Discusses the use of large language models for conversational recommendation without specific training on the task.
Authors: Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian J. McAuley
Year: 2023
Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders
Topic: The study focuses on developing transferable sequential recommendation systems using vector-quantized item representation.
Authors: Yupeng Hou, Zhankui He, Julian McAuley, Wayne Xin Zhao
Year: 2023
Large Language Models are Zero-Shot Rankers for Recommender Systems
Topic: Examines the potential of large language models to function as zero-shot rankers in recommender systems.
Authors: Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, Wayne Xin Zhao
Year: 2023
LoRA: Low-Rank Adaptation of Large Language Models
Topic: Introduces a method for adapting large language models using low-rank matrices to improve efficiency and performance.
Authors: Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen
Year: 2022
In-Context Analogical Reasoning with Pre-Trained Language Models
Topic: This paper explores how pre-trained language models can be used for analogical reasoning within a given context.
Authors: Xiaoyang Hu, Shane Storks, Richard L. Lewis, Joyce Chai
Year: 2023
UP5: Unbiased Foundation Model for Fairness-Aware Recommendation
Topic: Proposes a new foundation model aimed at ensuring fairness in recommendation systems by addressing bias.
Authors: Wenyue Hua, Yingqiang Ge, Shuyuan Xu, Jianchao Ji, Yongfeng Zhang
Year: 2023
Tutorial on Large Language Models for Recommendation
Topic: A comprehensive tutorial on employing large language models for various recommendation tasks.
Authors: Wenyue Hua, Lei Li, Shuyuan Xu, Li Chen, Yongfeng Zhang
Year: 2023
How to Index Item IDs for Recommendation Foundation Models
Topic: Discusses techniques for effectively indexing item IDs to optimize foundation models for recommendation.
Authors: Wenyue Hua, Shuyuan Xu, Yingqiang Ge, Yongfeng Zhang
Year: 2023
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Topic: Introduces an AI agent that integrates large language models to provide interactive and dynamic recommendations.
Authors: Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
Year: 2023
LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models
Topic: Presents LoftQ, a quantization approach that considers fine-tuning aspects of LoRA to enhance the performance of large language models.
Authors: Yixiao Li, Yifan Yu, Chen Liang, Pengcheng He, Nikos Karampatziakis, Weizhu Chen, Tuo Zhao
Year: 2023
LLaRA: Aligning Large Language Models with Sequential Recommenders
Topic: Introduces LLaRA, a methodology for integrating large language models with sequential recommendation systems to improve alignment.
Authors: Jiayi Liao, Sihang Li, Zhengyi Yang, Jiancan Wu, Yancheng Yuan, Xiang Wang
Year: 2023
Sparks of Artificial General Recommender (AGR): Early Experiments with ChatGPT
Topic: Discusses early experimental results using ChatGPT in developing an Artificial General Recommender system.
Authors: Guo Lin, Yongfeng Zhang
Year: 2023
AWQ: Activation-Aware Weight Quantization for LLM Compression and Acceleration
Topic: Explores AWQ, a technique for quantizing the weights of large language models based on their activation characteristics to optimize performance.
Authors: Ji Lin, Jiaming Tang, Haotian Tang, Shang Yang, Xingyu Dang, Song Han
Year: 2023
How Can Recommender Systems Benefit from Large Language Models: A Survey
Topic: A comprehensive survey on the integration of large language models into recommender systems, highlighting benefits and methodologies.
Authors: Jianghao Lin, Xinyi Dai, Yunjia Xi, Weiwen Liu, Bo Chen, Xiangyang Li, Chenxu Zhu, Huifeng Guo, Yong Yu, Ruiming Tang, Weinan Zhang
Year: 2023
ReLLa: Retrieval-Enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation
Topic: Proposes ReLLa, a retrieval-enhanced approach for large language models aimed at comprehending sequential behaviors across a user's lifetime for recommendations.
Authors: Jianghao Lin, Rong Shan, Chenxu Zhu, Kounianhua Du, Bo Chen, Shigang Quan, Ruiming Tang, Yong Yu, Weinan Zhang
Year: 2023
A Multi-Facet Paradigm to Bridge Large Language Model and Recommendation
Topic: Introduces a multi-facet paradigm that bridges the gap between large language models and recommendation systems, focusing on various integration aspects.
Authors: Xinyu Lin, Wenjie Wang, Yongqi Li, Fuli Feng, See-Kiong Ng, Tat-Seng Chua
Year: 2023
Is ChatGPT a Good Recommender? A Preliminary Study
Topic: This preliminary study investigates the effectiveness of ChatGPT as a recommender system.
Authors: Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, Yan Zhang
Year: 2023
LLMRec: Benchmarking Large Language Models on Recommendation Task
Topic: LLMRec benchmarks the performance of large language models on recommendation tasks, providing insights into their capabilities and limitations.
Authors: Junling Liu, Chao Liu, Peilin Zhou, Qichen Ye, Dading Chong, Kang Zhou, Yueqi Xie, Yuwei Cao, Shoujin Wang, Chenyu You, Philip S. Yu
Year: 2023
Pre-Train, Prompt, and Recommendation: A Comprehensive Survey of Language Modelling Paradigm Adaptations in Recommender Systems
Topic: This survey reviews how language modeling paradigms, particularly pre-training and prompting, are adapted for use in recommender systems.
Authors: Peng Liu, Lemei Zhang, Jon Atle Gulla
Year: 2023
MISSRec: Pre-training and Transferring Multi-modal Interest-Aware Sequence Representation for Recommendation
Topic: This paper presents MISSRec, a model that pre-trains and transfers multi-modal, interest-aware sequence representations for improving recommendation systems.
Authors: Jinpeng Wang, Ziyun Zeng, Yunxiao Wang, Yuting Wang, Xingyu Lu, Tianxiang Li, Jun Yuan, Rui Zhang, Hai-Tao Zheng, Shu-Tao Xia
Year: 2023
RecAgent: A Novel Simulation Paradigm for Recommender Systems
Topic: Introduces RecAgent, a novel simulation-based framework to evaluate and enhance the performance of recommender systems.
Authors: Lei Wang, Jingsen Zhang, Xu Chen, Yankai Lin, Ruihua Song, Wayne Xin Zhao, Ji-Rong Wen
Year: 2023
Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models
Topic: Explores new methodologies for evaluating conversational recommender systems in light of advancements in large language models.
Authors: Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, Ji-Rong Wen
Year: 2023
Enhancing Recommender Systems with Large Language Model Reasoning Graphs
Topic: Discusses the integration of reasoning graphs derived from large language models into recommender systems to enhance performance.
Authors: Yan Wang, Zhixuan Chu, Xin Ouyang, Simeng Wang, Hongyan Hao, Yue Shen, Jinjie Gu, Siqiao Xue, James Y. Zhang, Qing Cui, Longfei Li, Jun Zhou, Sheng Li
Year: 2023
RecMind: Large Language Model Powered Agent for Recommendation
Topic: Describes RecMind, an agent powered by large language models to provide dynamic and effective recommendation services.
Authors: Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang
Year: 2023
DRDT: Dynamic Reflection with Divergent Thinking for LLM-Based Sequential Recommendation
Topic: Introduces DRDT, a novel approach combining dynamic reflection and divergent thinking strategies for enhancing sequential recommendations using large language models.
Authors: Yu Wang, Zhiwei Liu, Jianguo Zhang, Weiran Yao, Shelby Heinecke, Philip S. Yu
Year: 2023
Aligning Large Language Models with Human: A Survey
Topic: This survey reviews the strategies and methodologies for aligning large language models with human-like reasoning and decision-making processes.
Authors: Yufei Wang, Wanjun Zhong, Liangyou Li, Fei Mi, Xingshan Zeng, Wenyong Huang, Lifeng Shang, Xin Jiang, Qun Liu
Year: 2023
PTUM: Pretraining User Model from Unlabeled User Behaviors via Self-Supervision
Topic: Discusses PTUM, a method for pretraining user models using self-supervised learning from unlabeled user behavior data.
Authors: Chuhan Wu, Fangzhao Wu, Tao Qi, Jianxun Lian, Yongfeng Huang, Xing Xie
Year: 2020
Personalized Prompts for Sequential Recommendation
Topic: Explores the use of personalized prompts to improve the efficacy of sequential recommendation systems.
Authors: Yiqing Wu, Ruobing Xie, Yongchun Zhu, Fuzhen Zhuang, Xu Zhang, Leyu Lin, Qing He
Year: 2022
A Survey on Large Language Models for Recommendation
Topic: Provides a comprehensive survey on the application of large language models in recommendation systems, highlighting key advancements and future directions.
Authors: Likang Wu, Zhi Zheng, Zhaopeng Qiu, Hao Wang, Hongchao Gu, Tingjia Shen, Chuan Qin, Chen Zhu, Hengshu Zhu, Qi Liu, Hui Xiong, Enhong Chen
Year: 2023