Abstract
This work investigates the most basic units that underlie contextualized word embeddings such as BERT: the so-called word pieces. In Morphologically-Rich Languages (MRLs), which exhibit morphological fusion and non-concatenative morphology, the different units of meaning within a word may be fused and intertwined, and cannot be separated linearly. Therefore, when using word pieces in MRLs, we must consider that: (1) a linear segmentation into sub-word units might not capture the full morphological complexity of words; and (2) representations that leave morphological knowledge on sub-word units inaccessible might negatively affect performance. Here we empirically examine the capacity of word pieces to capture morphology by investigating the task of multi-tagging in Hebrew, as a proxy to evaluating the underlying segmentation. Our results show that, while models trained to predict multi-tags for complete words outperform models tuned to predict the distinct tags of word pieces (WPs), we can improve the WP tag prediction by purposefully constraining the word pieces to reflect their internal functions. We conjecture that this is due to the naïve linear tokenization of words into word pieces, and suggest that linguistically-informed word-piece schemes, which make morphological knowledge explicit, might boost performance for MRLs.
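The "naïve linear tokenization" at issue is the left-to-right sub-word splitting used by standard BERT tokenizers. A minimal sketch of what this looks like in practice, assuming the HuggingFace `transformers` package and the public `bert-base-multilingual-cased` checkpoint (the Hebrew word and its split are illustrative, not taken from the paper):

```python
# Sketch: how a standard WordPiece tokenizer splits a fused Hebrew word.
# Assumes `transformers` is installed and the multilingual BERT checkpoint
# is available; the exact pieces depend on that model's vocabulary.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")

# A fused Hebrew token such as "וכשהלכנו" ("and when we went") packs several
# units of meaning (conjunction, subordinator, verb stem, agreement) into
# a single space-delimited word.
word = "וכשהלכנו"
pieces = tokenizer.tokenize(word)

# The tokenizer splits the surface string purely left-to-right into known
# vocabulary items; the resulting boundaries are not guaranteed to align
# with the word's morpheme boundaries.
print(pieces)
```

Because each output piece receives its own contextualized vector, any morphological tag predicted per piece inherits this potentially misaligned segmentation, which is what the multi-tagging experiments probe.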
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, SIGMORPHON 2020, Online, July 10, 2020 |
| Editors | Garrett Nicolai, Kyle Gorman, Ryan Cotterell |
| Pages | 204-209 |
| Number of pages | 6 |
| DOIs | |
| State | Published - 2020 |