TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models

Zuxin Liu, Jesse Zhang, Kavosh Asadi, Yao Liu, Ding Zhao, Shoham Sabach, Rasool Fakoor

Research output: Contribution to conference › Paper › peer-review

Abstract

The full potential of large pretrained models remains largely untapped in control domains like robotics. This is mainly due to the scarcity of data and the computational challenges associated with training or fine-tuning large models for such applications. Prior work mainly emphasizes either effective pretraining of large models for decision-making or single-task adaptation. However, real-world problems require data-efficient, continual adaptation to new control tasks. Recognizing these constraints, we introduce TAIL (Task-specific Adapters for Imitation Learning), a framework for efficient adaptation to a stream of new control tasks. Inspired by recent advancements in parameter-efficient fine-tuning in language domains, we explore efficient fine-tuning techniques, e.g., Bottleneck Adapters, P-Tuning, and Low-Rank Adaptation (LoRA), in TAIL to adapt large pretrained models to new tasks with limited demonstration data. Our extensive experiments comparing prevalent parameter-efficient fine-tuning techniques and adaptation baselines suggest that TAIL with LoRA achieves the best post-adaptation performance with only 1% of the trainable parameters of full fine-tuning, while avoiding catastrophic forgetting and preserving adaptation plasticity in continual learning settings.
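To make the LoRA idea concrete, the sketch below shows how a low-rank adapter of the kind the abstract describes can wrap a frozen pretrained linear layer, so that only the small task-specific matrices are trained. This is a minimal illustrative example, not the paper's implementation; the class name, rank, and layer dimensions are assumptions chosen for the demonstration.

# Minimal LoRA-style adapter sketch (illustrative; not the authors' code).
# A frozen pretrained linear layer W is augmented with a trainable low-rank
# update, effectively W + (alpha/r) * B @ A, so only A and B are learned
# per task while the pretrained weights stay fixed.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        # A is small random, B is zero, so the adapter starts as a no-op.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus a low-rank task-specific correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

layer = LoRALinear(nn.Linear(1024, 1024), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")  # ~1.5% trainable

For a 1024-dimensional layer with rank 8, the adapter adds roughly 2 × 1024 × 8 ≈ 16K trainable parameters against ~1M frozen ones, which is consistent in spirit with the abstract's figure of about 1% of the parameters of full fine-tuning; training a separate adapter per task is also what lets such a scheme sidestep catastrophic forgetting, since earlier tasks' weights are never overwritten.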

Original language: English
State: Published - 2024
Externally published: Yes
Event: 12th International Conference on Learning Representations, ICLR 2024 - Hybrid, Vienna, Austria
Duration: 7 May 2024 – 11 May 2024

Conference

Conference: 12th International Conference on Learning Representations, ICLR 2024
Country/Territory: Austria
City: Hybrid, Vienna
Period: 7/05/24 – 11/05/24

All Science Journal Classification (ASJC) codes

  • Language and Linguistics
  • Computer Science Applications
  • Education
  • Linguistics and Language
