This paper explores how AI-driven decision-making affects consumer satisfaction, introducing the concept of autonomy-technology tension: the risk that deploying AI undermines consumers' sense of choice and control.
The research underscores a critical conflict in the digital age: while AI offers personalised experiences and convenience, it may also infringe on users' autonomy, leading to dissatisfaction.
The paper suggests that AI's utility isn't universally positive or negative but depends on the alignment between recommendations and individual preferences.
This relationship is mediated by performance expectancy, highlighting that consumers' expectations play a crucial role in how they perceive and react to AI-driven recommendations.
The paper contributes to the broader discourse on AI in consumer decision-making, offering valuable insights for streaming services and technology firms aiming to balance AI's advantages with the imperative of maintaining user autonomy.
It suggests that while AI can streamline decision-making and enhance user experience, a deep understanding of consumer expectations and preferences is vital to avoid diminishing user satisfaction.
Conclusions
For practitioners, the study offers insights into how AI can be optimised to enhance user experiences without compromising their sense of autonomy.
It underscores the need for streaming companies and other AI-using businesses to consider carefully how their AI systems interact with consumers, ensuring that these technologies deliver personalised recommendations without eroding user control and decision-making.
The authors suggest that future AI systems should not only be adept at aligning recommendations with user preferences but also be adaptable and transparent, allowing consumers to adjust the level of AI involvement in their decisions.
Bring in a generative AI model on the side of the consumer
Integrating a private generative AI model, trained to understand an individual's preferences, with the platform's existing recommendation system is one way to address recommendations that miss the mark or arrive with no context for the user.
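As a rough sketch of what such an arrangement could look like in code (all class and method names below are hypothetical, not part of the paper), the platform's existing recommender and the consumer-side model can be treated as two components that meet through a thin interface:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Recommendation:
    """A single content suggestion, with a score and a human-readable rationale."""
    content_id: str
    score: float
    rationale: str = ""


class PlatformRecommender(Protocol):
    """The streaming platform's existing recommendation system."""

    def candidates(self, user_id: str, limit: int) -> list[Recommendation]:
        ...


class PrivatePreferenceModel(Protocol):
    """The consumer-side generative AI model holding user-specific insights."""

    def refine(self, user_id: str, candidates: list[Recommendation]) -> list[Recommendation]:
        ...
```

The sketches in the following sections build on these two interfaces.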
Data Collection and Model Training
Initially, the private generative AI model would gather data specific to an individual user's interactions, preferences, and behaviours within the streaming platform.
This data collection would be privacy-centric, ensuring the user's data is securely stored and processed. The model would learn from various data points, such as watched content, search queries, interaction times, ratings given, and possibly even contextual information like time of day or device used.
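A minimal sketch of what one locally stored data point might look like, assuming a simple event record; the field names are illustrative rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class InteractionEvent:
    """One privacy-centric data point the private model can learn from, kept on the user's side."""
    user_id: str                      # pseudonymous identifier
    content_id: Optional[str]         # the title watched, rated, or skipped, if any
    event_type: str                   # e.g. "watched", "searched", "rated", "skipped"
    timestamp: datetime               # when the interaction happened (captures time of day)
    device: str = "unknown"           # contextual signal such as "tv", "mobile", "web"
    search_query: Optional[str] = None
    rating: Optional[float] = None    # explicit rating, if the user gave one
    watch_seconds: int = 0            # how long the item was actually watched
```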
Continuous Learning and Refinement
The model would continually update its understanding of the user's preferences by incorporating new data from ongoing interactions. This continuous learning ensures the model stays relevant and adapts to any changes in user preferences over time.
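One way the continuous refinement could be sketched, building on the InteractionEvent record above: new events update a lightweight running profile immediately, and a heavier re-fit runs periodically. The update rule here is an arbitrary illustration, not the paper's method:

```python
class PrivatePreferenceModelImpl:
    """Illustrative consumer-side model that keeps a running preference profile."""

    def __init__(self, refit_every: int = 50):
        self.events: list[InteractionEvent] = []    # assumes the InteractionEvent sketch above
        self.refit_every = refit_every              # how many new events trigger a deeper re-fit
        self.genre_affinity: dict[str, float] = {}  # running per-genre preference scores

    def observe(self, event: InteractionEvent, genres: list[str]) -> None:
        """Record a new interaction and refresh the profile as data arrives."""
        self.events.append(event)
        # Lightweight online update: nudge affinities towards or away from the item's genres.
        if event.event_type == "skipped":
            weight = -0.5
        elif event.rating is not None:
            weight = (event.rating - 3.0) / 2.0     # map a 1-5 rating to roughly [-1, 1]
        else:
            weight = 1.0
        for genre in genres:
            current = self.genre_affinity.get(genre, 0.0)
            self.genre_affinity[genre] = 0.9 * current + 0.1 * weight
        if len(self.events) % self.refit_every == 0:
            self._refit()

    def _refit(self) -> None:
        """Placeholder for a periodic, heavier re-training pass over the stored events."""
        pass
```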
Integration with Existing Systems
The private generative AI model would not replace but rather augment the existing recommendation algorithms.
When a user interacts with the platform, the existing system would generate initial content suggestions based on broader user data, trends, and possibly collaborative filtering methods.
Then, the private generative AI model would refine these suggestions by applying its deep, user-specific insights, ensuring the recommendations are highly personalised and relevant.
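Under the interfaces sketched earlier, the integration step can be expressed as a simple re-ranking pass: the platform proposes a broad candidate pool and the private model reorders it. This is an assumed wiring, not a description of any particular platform's pipeline:

```python
def personalised_recommendations(
    user_id: str,
    platform: PlatformRecommender,
    private_model: PrivatePreferenceModel,
    limit: int = 20,
) -> list[Recommendation]:
    """Fetch broad candidates from the platform, then let the private model refine them."""
    # Step 1: the existing system generates initial suggestions from global data and trends.
    candidates = platform.candidates(user_id, limit=limit * 3)
    # Step 2: the consumer-side model applies its deep, user-specific insights.
    refined = private_model.refine(user_id, candidates)
    # Keep only the top of the refined list for the presentation layer.
    return sorted(refined, key=lambda r: r.score, reverse=True)[:limit]
```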
Output Presentation
The final output that the user sees would be a blend of general recommendations fine-tuned by the user-specific insights from the generative AI model.
This hybrid approach balances the broad appeal of popular or trending content with the nuanced understanding of an individual's unique preferences.
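One hypothetical way to express that blend is a weighted score that mixes the platform's trend-driven signal with the private model's personal affinity; the default weight used here is arbitrary:

```python
def blended_score(global_score: float, personal_score: float, personal_weight: float = 0.6) -> float:
    """Mix the platform's trend-driven score with the private model's affinity score.

    personal_weight is a tunable value between 0 and 1; exposing it to the user
    becomes the control lever discussed under Transparency and Control below.
    """
    personal_weight = min(max(personal_weight, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - personal_weight) * global_score + personal_weight * personal_score
```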
User Feedback Loop
User interactions with the refined recommendations (such as selecting a suggested movie, skipping it, or rating it) would feed back into the system, helping both the generative AI model and the broader recommendation system to learn and improve over time.
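A sketch of how that loop could be wired, reusing the event record and model classes above; the consent flag and the platform-side hook are assumptions for illustration:

```python
from datetime import datetime, timezone
from typing import Optional


def record_feedback(
    user_id: str,
    content_id: str,
    event_type: str,                      # "selected", "skipped", or "rated"
    private_model: PrivatePreferenceModelImpl,
    genres: list[str],
    rating: Optional[float] = None,
    share_with_platform: bool = False,    # user-controlled consent flag
) -> None:
    """Feed the user's reaction back so both layers can improve over time."""
    event = InteractionEvent(
        user_id=user_id,
        content_id=content_id,
        event_type=event_type,
        timestamp=datetime.now(timezone.utc),
        rating=rating,
    )
    private_model.observe(event, genres=genres)
    if share_with_platform:
        # Only a consented, aggregated signal would leave the user's side;
        # the platform-side API is deliberately left as a placeholder here.
        pass
```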
Transparency and Control
To address potential concerns about autonomy and trust, users could be given insights into why certain recommendations were made and control over how much their data influences the recommendations.
This transparency and control mechanism could alleviate concerns about autonomy loss and enhance user trust in the AI system.
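One illustrative shape for that mechanism, building on the earlier sketches: a per-user settings object whose personalisation weight feeds the blending step, plus a plain-language rationale attached to each suggestion. The setting names are assumptions, not a product specification:

```python
from dataclasses import dataclass


@dataclass
class PersonalisationSettings:
    """User-facing controls over how much the private model shapes recommendations."""
    personal_weight: float = 0.6     # 0.0 = platform trends only, 1.0 = personal model only
    use_watch_history: bool = True   # data categories the user allows the model to learn from
    use_search_queries: bool = True
    use_device_context: bool = False


def explain(rec: Recommendation, settings: PersonalisationSettings) -> str:
    """Produce a rationale the user can inspect alongside each suggestion."""
    basis = "your viewing history" if settings.use_watch_history else "general trends"
    return (
        f"Suggested based on {basis}; "
        f"personalisation is set to {int(settings.personal_weight * 100)}%. "
        f"{rec.rationale}"
    ).strip()
```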
In this integrated setup, the private generative AI acts as a personalisation layer that tailors the platform's recommendations to suit individual tastes, enhancing satisfaction and engagement without compromising user autonomy.