TY - GEN
T1 - Are Layout-Infused Language Models Robust to Layout Distribution Shifts? A Case Study with Scientific Documents
AU - Chen, Catherine
AU - Shen, Zejiang
AU - Klein, Dan
AU - Stanovsky, Gabriel
AU - Downey, Doug
AU - Lo, Kyle
N1 - Publisher Copyright: © 2023 Association for Computational Linguistics.
PY - 2023
Y1 - 2023
N2 - Recent work has shown that infusing layout features into language models (LMs) improves processing of visually-rich documents such as scientific papers. Layout-infused LMs are often evaluated on documents with familiar layout features (e.g., papers from the same publisher), but in practice models encounter documents with unfamiliar distributions of layout features, such as new combinations of text sizes and styles, or new spatial configurations of textual elements. In this work, we test whether layout-infused LMs are robust to layout distribution shifts. As a case study, we use the task of scientific document structure recovery, segmenting a scientific paper into its structural categories (e.g., TITLE, CAPTION, REFERENCE). To emulate distribution shifts that occur in practice, we re-partition the GROTOAP2 dataset. We find that under layout distribution shifts model performance degrades by up to 20 F1. Simple training strategies, such as increasing training diversity, can reduce this degradation by over 35% relative F1; however, models fail to reach in-distribution performance in any tested out-of-distribution conditions. This work highlights the need to consider layout distribution shifts during model evaluation, and presents a methodology for conducting such evaluations.
UR - http://www.scopus.com/inward/record.url?scp=85175416790&partnerID=8YFLogxK
U2 - 10.18653/v1/2023.findings-acl.844
DO - 10.18653/v1/2023.findings-acl.844
M3 - Conference contribution
T3 - Proceedings of the Annual Meeting of the Association for Computational Linguistics
SP - 13345
EP - 13360
BT - Findings of the Association for Computational Linguistics, ACL 2023
PB - Association for Computational Linguistics (ACL)
T2 - Findings of the Association for Computational Linguistics, ACL 2023
Y2 - 9 July 2023 through 14 July 2023
ER -