How to Compare Summarizers without Target Length? Pitfalls, Solutions and Re-Examination of the Neural Summarization Literature

Simeng Sun, Ori Shapira, Ido Dagan, Ani Nenkova

Research output: Chapter in Book / Report / Conference proceeding › Conference proceeding publication › Peer-reviewed

Abstract

We show that plain ROUGE F1 scores are not ideal for comparing current neural systems, which on average produce summaries of different lengths. This is due to a non-linear relationship between ROUGE F1 and summary length. To alleviate the effect of length during evaluation, we propose a new method that normalizes the ROUGE F1 score of a system by that of a random system with the same average output length. A pilot human evaluation shows that, while humans prefer shorter summaries in terms of verbosity, they overall judge longer summaries to be of higher quality. Although human evaluations are more expensive in time and resources, it is clear that normalization, such as the one we propose for automatic evaluation, will make human evaluations more meaningful.
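The normalization the abstract describes can be read as scaling a system's ROUGE F1 by the F1 of a length-matched random baseline. The sketch below illustrates that reading with hypothetical helper names and made-up scores; the paper's exact formulation may differ.

```python
def length_normalized_rouge(system_f1: float, random_f1_same_length: float) -> float:
    """Divide a system's ROUGE F1 by the F1 of a random summarizer
    whose outputs have the same average length (one plausible reading
    of the normalization; names and numbers here are illustrative)."""
    if random_f1_same_length <= 0:
        raise ValueError("baseline F1 must be positive")
    return system_f1 / random_f1_same_length

# A longer-output system's raw F1 advantage can shrink or reverse once
# each system is scaled by a random baseline of matching length.
short_sys = length_normalized_rouge(0.30, 0.20)  # shorter summaries
long_sys = length_normalized_rouge(0.33, 0.25)   # longer summaries
```

Here the longer-output system wins on raw F1 (0.33 vs. 0.30) but loses after normalization (1.32 vs. 1.50), which is the kind of length-driven reversal the paper warns about.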
Original language: English
Title of host publication: Proceedings of the Workshop on Methods for Optimizing and Evaluating Neural Language Generation
Editors: Thomas Wolf, Hannah Rashkin, Urvashi Khandelwal, Srinivasan Iyer, Marjan Ghazvininejad, Asli Celikyilmaz, Antoine Bosselut
Pages: 21-29
Number of pages: 9
Digital Object Identifiers (DOIs)
Publication status: Published - 1 Jun 2019

Publication series

Name: ACL Anthology
