Abstract
Many websites allow users to rate items and share their ratings with others, for social or personalisation purposes. In recommender systems in particular, personalised suggestions are generated by predicting ratings for items that users are unaware of, based on the ratings those users provided for other items. Explicit user ratings are collected by means of graphical widgets referred to as ‘rating scales’. Each system or website normally uses a specific rating scale, which in many cases differs from the scales used by other systems in its granularity, visual metaphor, numbering or availability of a neutral position. Although many works in the field of survey design have reported on the effects of rating scales on user ratings, such scales are normally regarded as neutral tools in the context of recommender systems. In this paper, we challenge this view and provide new empirical evidence about the impact of rating scales on user ratings, presenting the results of three new studies carried out in different domains. Based on these results, we demonstrate that a static mathematical mapping is not the best method to compare ratings coming from scales with different features, and we suggest when it is possible to use linear functions instead.
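As a minimal sketch of the kind of static mapping the abstract refers to (the specific mappings studied in the paper are not reproduced here; the scale bounds below are assumed purely for illustration), a linear rescaling between two rating scales can be written as:

```python
def rescale(rating, src_min, src_max, dst_min, dst_max):
    """Linearly map a rating from a source scale to a target scale.

    This is a static, feature-agnostic mapping of the sort the paper
    argues is not always appropriate when scales differ in granularity,
    visual metaphor, numbering or neutral position.
    """
    if not src_min <= rating <= src_max:
        raise ValueError("rating outside the source scale")
    # Preserve the rating's proportional position on the source scale,
    # re-anchored on the target scale.
    return dst_min + (rating - src_min) * (dst_max - dst_min) / (src_max - src_min)

# Example (assumed scales): a 3 on a 1-5 star scale maps to 5.5 on a 1-10 scale.
print(rescale(3, 1, 5, 1, 10))  # → 5.5
```

Such a function maps scale endpoints to endpoints and midpoints to midpoints; the paper's point is that, because scale features themselves bias users' ratings, this purely mathematical correspondence does not by itself make ratings from different scales comparable.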
| Original language | American English |
| --- | --- |
| Pages (from-to) | 985-1004 |
| Number of pages | 20 |
| Journal | Behaviour and Information Technology |
| Volume | 36 |
| Issue number | 10 |
| DOIs | |
| State | Published - 3 Oct 2017 |
Keywords
- Rating scales
- human–machine interface
- recommender system
- user studies
All Science Journal Classification (ASJC) codes
- Developmental and Educational Psychology
- Arts and Humanities (miscellaneous)
- Human-Computer Interaction
- General Social Sciences