Title
Revisiting offline evaluation for implicit-feedback recommender systems

Author

Abstract
Recommender systems are typically evaluated in an offline setting. A subset of the available user-item interactions is sampled to serve as a test set, and a model trained on the remaining data points is then evaluated on its ability to predict which interactions were left out. Alternatively, in an online evaluation setting, multiple versions of the system are deployed and various metrics for those systems are recorded. Systems that score better on these metrics are then typically preferred. Online evaluation is effective, but inefficient for a number of reasons. Offline evaluation is much more efficient, but current methodologies often fail to accurately predict online performance. In this work, we identify three ways to improve and extend current work on offline evaluation methodologies. More specifically, we believe there is much room for improvement in temporal evaluation, off-policy evaluation, and moving beyond using just clicks to evaluate performance.
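
Note: the publication itself contains no code. The following Python fragment is a minimal sketch, added to this record for illustration, of the first distinction the abstract draws: the classic random hold-out of interactions versus a temporal split, in which models train strictly on past interactions and are tested on future ones. The column names "user", "item", and "timestamp" are illustrative assumptions, not taken from the paper.

import pandas as pd

def random_split(interactions: pd.DataFrame, test_frac: float = 0.2, seed: int = 42):
    # Classic offline protocol: hold out a uniformly random subset of
    # the interactions as the test set; train on whatever remains.
    test = interactions.sample(frac=test_frac, random_state=seed)
    train = interactions.drop(test.index)
    return train, test

def temporal_split(interactions: pd.DataFrame, test_frac: float = 0.2):
    # Temporal protocol: choose a cutoff so that roughly test_frac of the
    # interactions fall after it; train on the past, test on the future.
    cutoff = interactions["timestamp"].quantile(1.0 - test_frac)
    train = interactions[interactions["timestamp"] <= cutoff]
    test = interactions[interactions["timestamp"] > cutoff]
    return train, test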

Language
English

Source (book)
Proceedings of the 13th ACM Conference on Recommender Systems (RecSys '19), September 16-20, 2019, Copenhagen, Denmark

Publication
New York: Association for Computing Machinery, 2019

ISBN
978-1-4503-6243-6

DOI
10.1145/3298689.3347069

Volume/pages
(2019), p. 596-600

ISI
000557263400119