Latent trees for compositional generalization

Jonathan Herzig, Jonathan Berant, Ben Bogin

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

Abstract

Despite the success of neural networks in many natural language processing tasks, recent work has shown that they often fail in compositional generalization, i.e., the ability to generalize to new structures built from components observed during training. In this chapter, we posit that this behavior, in standard architectures such as LSTMs and Transformers, stems from the fact that fragments on the output side are not explicitly tied to fragments on the input side. To address this, we introduce models that explicitly construct latent trees over the input, which are used to compositionally compute representations necessary for predicting the output. We show the compositional generalization abilities of our models exceed the abilities of pre-trained Transformer models on several datasets for both semantic parsing and grounded question answering.
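The core idea in the abstract, building representations bottom-up over a tree spanning the input so that output fragments are tied to input fragments, can be illustrated with a minimal sketch. This is not the authors' implementation: the names (`Node`, `embed`, `compose`, `encode`) are hypothetical, the embeddings are toy deterministic vectors, and the composition function stands in for a learned network.

```python
# Illustrative sketch only (not the chapter's actual model): computing a
# representation for each span of the input by recursing over a binary
# tree, so every internal node's vector is built compositionally from
# the vectors of the input fragments it covers.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    token: Optional[str] = None  # set only for leaf nodes

def embed(token: str, dim: int = 4) -> List[float]:
    # Toy deterministic "embedding" derived from character codes;
    # a real model would look up learned vectors here.
    s = sum(ord(c) for c in token)
    return [(s * (i + 1) % 97) / 97.0 for i in range(dim)]

def compose(left_vec: List[float], right_vec: List[float]) -> List[float]:
    # Toy composition: elementwise average. In practice this would be a
    # learned composition network applied at each tree node.
    return [(a + b) / 2.0 for a, b in zip(left_vec, right_vec)]

def encode(node: Node) -> List[float]:
    # Bottom-up recursion: a leaf yields its token embedding; an internal
    # node composes the representations of its two children, so each
    # span's vector depends only on the fragment it spans.
    if node.token is not None:
        return embed(node.token)
    return compose(encode(node.left), encode(node.right))
```

Because every span's representation is a function of the input fragment beneath it, predictions made from a node can generalize to new combinations of fragments seen during training, which is the intuition the chapter develops.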

Original language: English
Title of host publication: Compendium of Neurosymbolic Artificial Intelligence
Editors: Pascal Hitzler, Aaron Eberhart, Md Kamruzzaman Sarker
Publisher: IOS Press
Pages: 631-664
Number of pages: 34
ISBN (Electronic): 9781643684079
ISBN (Print): 9781643684062
DOIs
State: Published - 4 Aug 2023

Publication series

Name: 369

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • General Computer Science

