CALVIN: Improved Contextual Video Captioning via Instruction Tuning

Gowthami Somepalli, Arkabandhu Chowdhury, Ronen Basri, Jonas Geiping, Tom Goldstein, David Jacobs

Research output: Contribution to journal › Conference article › Peer-reviewed

Abstract

The recent emergence of powerful Vision-Language Models (VLMs) has significantly improved image captioning, and some of these models have been extended to caption videos as well. However, their ability to understand complex scenes is limited, and the descriptions they produce tend to be overly verbose and focused on the superficial appearance of objects. Unlike general-purpose video captioning, scene descriptions, especially in movies, require a deeper contextual understanding. To address this challenge, we propose CALVIN, a specialized video LLM that leverages previous movie context to generate fully “contextual” scene descriptions. To achieve this, we train our model on a suite of tasks that integrate both image-based question answering and video captioning within a unified framework, before applying instruction tuning to refine the model's ability to provide scene captions. Lastly, we observe that our model responds well to prompt engineering and few-shot in-context learning techniques, enabling a user to adapt it to a new movie with very little additional annotation.
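To make the few-shot adaptation idea concrete, the sketch below shows one plausible way to assemble an in-context prompt from a handful of annotated scenes plus the running movie context. This is a hypothetical illustration, not CALVIN's actual interface: the `Scene` structure, the prompt layout, and the way the prompt is handed to the video LLM are all assumptions.

```python
# Hypothetical sketch of few-shot in-context prompting for contextual
# scene captioning, in the spirit described in the abstract. The data
# structures and prompt format here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Scene:
    context: str   # summary of the movie so far
    caption: str   # human-written contextual description of the scene

def build_prompt(examples: list[Scene], movie_context: str) -> str:
    """Assemble a few-shot prompt: annotated example scenes first,
    then the query scene's context with the description left blank."""
    parts = ["Describe the current scene using the movie context provided."]
    for ex in examples:
        parts.append(f"Context: {ex.context}\nDescription: {ex.caption}")
    parts.append(f"Context: {movie_context}\nDescription:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # A single annotated shot; a real setup would use a few per movie.
    shots = [
        Scene("Act 1: the detective arrives in a small coastal town.",
              "She surveys the empty station, wary of being followed."),
    ]
    prompt = build_prompt(
        shots, "Act 2: the detective enters the suspect's office at night.")
    print(prompt)  # this text prompt, plus the scene frames, would be
                   # passed to the video LLM to produce the new caption
```

Because the model is adapted purely through the prompt, adding a new movie under this scheme requires only a few annotated scenes rather than any further fine-tuning.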

Original language: English
Number of pages: 28
Journal: Advances in Neural Information Processing Systems
Volume: 37
State: Published - 25 Sep 2024
Event: 38th Conference on Neural Information Processing Systems, NeurIPS 2024 - Vancouver, Canada
Duration: 9 Dec 2024 - 15 Dec 2024

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
