Coarse-to-Fine Multi-Scene Pose Regression With Transformers

Yoli Shavit, Ron Ferens, Yosi Keller

Research output: Contribution to journal › Article › peer-review

Abstract

Absolute camera pose regressors estimate the position and orientation of a camera given the captured image alone. Typically, a convolutional backbone with a multi-layer perceptron (MLP) head is trained using images and pose labels to embed a single reference scene at a time. Recently, this scheme was extended to learn multiple scenes by replacing the MLP head with a set of fully connected layers. In this work, we propose to learn multi-scene absolute camera pose regression with Transformers, where encoders aggregate activation maps with self-attention and decoders transform latent features and scene encodings into pose predictions. This allows our model to focus on general features that are informative for localization while embedding multiple scenes in parallel. We extend our previous MS-Transformer approach (Shavit et al., 2021) by introducing a mixed classification-regression architecture that improves localization accuracy. Our method is evaluated on commonly benchmarked indoor and outdoor datasets and is shown to exceed both multi-scene and state-of-the-art single-scene absolute pose regressors.
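The sketch below illustrates the general idea described in the abstract, not the authors' released implementation: a convolutional backbone produces activation maps, a Transformer encoder aggregates them with self-attention, a Transformer decoder attends over learned per-scene queries, a coarse classification head picks the scene, and fine regression heads output a 3D position and a 4D orientation quaternion. All layer sizes, module names, and the backbone itself are illustrative assumptions.

```python
# Minimal sketch of a coarse-to-fine multi-scene pose regressor with Transformers.
# This is an assumption-based illustration, not the paper's architecture verbatim.
import torch
import torch.nn as nn


class MultiScenePoseTransformer(nn.Module):
    def __init__(self, num_scenes: int, d_model: int = 256):
        super().__init__()
        # Small convolutional backbone standing in for the paper's CNN encoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(14),  # keep the token sequence short
        )
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=8, num_encoder_layers=4,
            num_decoder_layers=4, batch_first=True,
        )
        # One learned query per scene; the decoder turns scene encodings into latents.
        self.scene_queries = nn.Embedding(num_scenes, d_model)
        # Coarse step: classify which scene the image belongs to.
        self.scene_cls = nn.Linear(d_model, num_scenes)
        # Fine step: regress a 3D position and a 4D orientation quaternion.
        self.pos_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 3))
        self.rot_head = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 4))

    def forward(self, images: torch.Tensor):
        feats = self.backbone(images)                        # (B, C, H, W)
        tokens = feats.flatten(2).transpose(1, 2)            # (B, H*W, C) activation-map tokens
        queries = self.scene_queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        latents = self.transformer(tokens, queries)          # (B, num_scenes, C)
        scene_logits = self.scene_cls(latents.mean(dim=1))   # coarse scene classification
        # Select the latent of the most likely scene and regress its pose (fine step).
        idx = scene_logits.argmax(dim=1)
        selected = latents[torch.arange(images.size(0)), idx]
        position = self.pos_head(selected)
        orientation = torch.nn.functional.normalize(self.rot_head(selected), dim=1)
        return scene_logits, position, orientation


if __name__ == "__main__":
    # Usage example with a random batch of two images.
    model = MultiScenePoseTransformer(num_scenes=7)
    logits, position, quaternion = model(torch.randn(2, 3, 224, 224))
    print(logits.shape, position.shape, quaternion.shape)  # (2, 7) (2, 3) (2, 4)
```

In this sketch the scene classification supplies the coarse estimate and the per-scene decoder latent supplies the fine pose, which mirrors the mixed classification-regression idea at a high level.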

Original language: English
Pages (from-to): 14222-14233
Number of pages: 12
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 12
Early online date: 31 Aug 2023
DOIs
State: Published - 1 Dec 2023

Keywords

  • Absolute pose regression
  • coarse-to-fine camera localization
  • deep learning

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition
  • Computational Theory and Mathematics
  • Artificial Intelligence
  • Applied Mathematics
