Evaluating Recommender Systems

Asela Gunawardana, Guy Shani, Sivan Yogev

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Recommender systems are now popular both commercially and in the research community, where many approaches have been suggested for providing recommendations. In many cases a system designer who wishes to employ a recommender system must choose between a set of candidate approaches. A first step towards selecting an appropriate algorithm is to decide which properties of the application to focus upon when making this choice. Indeed, recommender systems have a variety of properties that may affect user experience, such as accuracy, robustness, scalability, and so forth. In this paper we discuss how to compare recommenders based on a set of properties that are relevant for the application. We focus on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms. We describe experimental settings appropriate for making choices between algorithms. We review three types of experiments, starting with an offline setting, where recommendation approaches are compared without user interaction, then reviewing user studies, where a small group of subjects experiment with the system and report on the experience, and finally describing large scale online experiments, where real user populations interact with the system. In each of these cases we describe the types of questions that can be answered, and suggest protocols for experimentation. We also discuss how to draw trustworthy conclusions from the conducted experiments. We then review a large set of properties, and explain how to evaluate systems given relevant properties. We also survey a large set of evaluation metrics in the context of the property that they evaluate.
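To make the offline comparative setting described in the abstract concrete, the following is a minimal sketch of comparing two candidate recommenders on held-out interactions using precision@k. All data, recommender outputs, and function names here are illustrative assumptions, not taken from the chapter itself.

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommended items that appear in the
    user's held-out (relevant) items."""
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)


def mean_precision_at_k(recs, relevant_by_user, k):
    """Average precision@k over all users in the held-out test set."""
    users = list(relevant_by_user)
    return sum(
        precision_at_k(recs[u], relevant_by_user[u], k) for u in users
    ) / len(users)


# Hypothetical held-out test data: user -> items they actually consumed.
test_relevant = {
    "u1": {"a", "b"},
    "u2": {"c"},
}

# Two hypothetical candidate recommenders, each producing a ranked list.
recs_popularity = {"u1": ["a", "c", "d"], "u2": ["a", "c", "d"]}
recs_personalized = {"u1": ["a", "b", "d"], "u2": ["c", "e", "a"]}

print(mean_precision_at_k(recs_popularity, test_relevant, 2))    # 0.5
print(mean_precision_at_k(recs_personalized, test_relevant, 2))  # 0.75
```

In a real comparative study, as the chapter emphasizes, the choice of metric (accuracy, coverage, novelty, etc.) should be driven by the properties that matter for the target application, and differences between candidates should be checked for statistical significance before drawing conclusions.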

Original language: American English
Title of host publication: Recommender Systems Handbook
Subtitle of host publication: Third Edition
Publisher: Springer US
Pages: 547-601
Number of pages: 55
ISBN (Electronic): 9781071621974
ISBN (Print): 9781071621967
DOIs
State: Published - 1 Jan 2022

All Science Journal Classification (ASJC) codes

  • General Computer Science
