Scheduling on a budget: Avoiding stale recommendations with timely updates

Machine Learning with Applications (2023)

Abstract
Recommendation systems usually create static models from historical data. Due to concept drift and changes in the environment, such models are doomed to become stale, which causes their performance to degrade. In live production environments, models are therefore typically retrained at fixed time intervals. Of course, every retraining comes at a significant computational cost, making very frequent model updates unrealistic in practice. In some cases the cost is worth it, but in others an update may be redundant and the cost an unnecessary expense. The research question is then how to find an acceptable update schedule for a recommendation system, given a limited budget. This work provides a pragmatic analysis of model staleness for a variety of collaborative filtering algorithms in the news and retail domains, where concept drift is a known impediment. We highlight that the rate at which models become stale depends strongly on the environment they operate in, and that this property can be derived from data. These findings are corroborated by empirical observations from four large-scale online experiments. Instead of retraining at regular intervals, we propose an adaptive scheduling method that aims to maximise the accuracy of the recommendations within a fixed resource budget. Offline experiments show that our proposed approach improves recommendation performance while keeping the cost constant. Our findings can guide practitioners to spend their available resources more efficiently.
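
To make the budgeted-scheduling idea concrete, the sketch below contrasts fixed-interval retraining with an adaptive trigger that spends a fixed retraining budget only when the deployed model appears stale. This is an illustrative assumption, not the method proposed in the paper: the class name AdaptiveRetrainScheduler, the relative-degradation trigger, and the numbers used (budget of 4, threshold of 0.05) are all hypothetical.

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AdaptiveRetrainScheduler:
    """Spend a fixed retraining budget adaptively instead of at fixed intervals.

    Hypothetical sketch: the trigger rule below is an assumption for
    illustration, not the scheduling policy from the paper.
    """

    budget: int                # total number of retrainings allowed over the horizon
    horizon: int               # total number of evaluation periods
    threshold: float           # relative accuracy drop that triggers a retrain
    retrains_used: int = 0
    baseline_score: Optional[float] = None
    decisions: List[bool] = field(default_factory=list)

    def should_retrain(self, current_score: float, period: int) -> bool:
        """Return True if one unit of the budget should be spent in this period."""
        if self.baseline_score is None:
            self.baseline_score = current_score

        budget_left = self.budget - self.retrains_used
        periods_left = self.horizon - period
        if budget_left <= 0:
            self.decisions.append(False)
            return False

        # Relative degradation of the live model vs. its score right after training.
        degradation = (self.baseline_score - current_score) / max(self.baseline_score, 1e-9)

        # Retrain if the model looks stale, or if waiting longer would leave
        # part of the budget unused at the end of the horizon.
        trigger = degradation >= self.threshold or budget_left >= periods_left
        self.decisions.append(trigger)
        return trigger

    def record_retrain(self, new_score: float) -> None:
        """Call after an actual retrain to reset the staleness baseline."""
        self.retrains_used += 1
        self.baseline_score = new_score


if __name__ == "__main__":
    import random

    random.seed(0)
    scheduler = AdaptiveRetrainScheduler(budget=4, horizon=30, threshold=0.05)
    score = 0.30  # hypothetical offline accuracy of the deployed model (e.g. recall@20)
    for t in range(scheduler.horizon):
        score -= random.uniform(0.0, 0.01)  # simulated drift-induced decay
        if scheduler.should_retrain(score, t):
            score = 0.30                    # assume retraining restores accuracy
            scheduler.record_retrain(score)
    print(f"Retrained {scheduler.retrains_used}/{scheduler.budget} times over {scheduler.horizon} periods")

In the offline setting described in the abstract, current_score would come from evaluating the deployed model on the most recent interactions; the design point illustrated here is simply that the budget, rather than the calendar, governs when a retrain is allowed.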
Keywords
Recommender systems, Scheduling, Online trials