Code2Seq: Generating sequences from structured representations of code

Uri Alon, Omer Levy, Shaked Brody, Eran Yahav

Research output: Contribution to conference › Paper › peer-review

Abstract

The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present CODE2SEQ: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to 16M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models. An online demo of our model is available at http://code2seq.org. Our code, data and trained models are available at http://github.com/tech-srl/code2seq.
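To make the representation described in the abstract concrete, the following is a minimal sketch of extracting leaf-to-leaf AST paths, the path-contexts code2seq attends over. It is not the authors' implementation (which targets Java and C# and feeds these contexts to an attentional encoder-decoder); Python's built-in ast module stands in for the parser, and the names leaf_to_leaf_paths and token are illustrative.

```python
import ast
import itertools

def token(node):
    # Simplification: code2seq uses the leaf's token value (identifiers
    # split into subtokens); fall back to the node type when none exists.
    return getattr(node, "arg", None) or getattr(node, "id", None) \
        or type(node).__name__

def leaf_to_leaf_paths(tree):
    """Pair every two AST leaves through their lowest common ancestor,
    yielding (token, path-of-node-types, token) contexts."""
    leaves = []

    def walk(node, prefix):
        prefix = prefix + [node]
        # Ignore expression-context markers (Load/Store), an artifact of
        # Python's AST that the paper's Java/C# grammars do not have.
        children = [c for c in ast.iter_child_nodes(node)
                    if not isinstance(c, ast.expr_context)]
        if not children:
            leaves.append(prefix)
        for child in children:
            walk(child, prefix)

    walk(tree, [])

    contexts = []
    for p1, p2 in itertools.combinations(leaves, 2):
        # Length of the shared prefix, comparing node identities so that
        # sibling leaves of the same type are not conflated.
        i = 0
        while i < min(len(p1), len(p2)) and p1[i] is p2[i]:
            i += 1
        # Walk up from the first leaf to the common ancestor (kept once),
        # then down to the second leaf.
        up = [type(n).__name__ for n in reversed(p1[i - 1:])]
        down = [type(n).__name__ for n in p2[i:]]
        contexts.append((token(p1[-1]), up + down, token(p2[-1])))
    return contexts

snippet = "def add(a, b):\n    return a + b"
for left, path, right in leaf_to_leaf_paths(ast.parse(snippet)):
    print(left, "->", "|".join(path), "->", right)
```

Each printed triple corresponds to one path-context the model would embed; the full model additionally splits identifier tokens into subtokens and, per the paper, samples a fixed number of such contexts per example rather than enumerating all pairs.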

Original language: English
State: Published - 2019
Event: 7th International Conference on Learning Representations, ICLR 2019 - New Orleans, United States
Duration: 6 May 2019 – 9 May 2019

Conference

Conference: 7th International Conference on Learning Representations, ICLR 2019
Country/Territory: United States
City: New Orleans
Period: 6/05/19 – 9/05/19

All Science Journal Classification (ASJC) codes

  • Education
  • Computer Science Applications
  • Linguistics and Language
  • Language and Linguistics
