Deep fused two-step cross-modal hashing with multiple semantic supervision

Peipei Kang, Zehang Lin, Zhenguo Yang, Alexander M. Bronstein, Qing Li, Wenyin Liu

Research output: Contribution to journal › Article › peer-review

Abstract

Existing cross-modal hashing methods overlook informative multimodal joint information and cannot fully exploit semantic labels. In this paper, we propose a deep fused two-step cross-modal hashing (DFTH) framework with multiple semantic supervision. In the first step, DFTH learns unified hash codes for instances through a fusion network; semantic label reconstruction and similarity reconstruction are introduced to obtain binary codes that are informative, discriminative, and semantic-similarity preserving. In the second step, two modality-specific hash networks are learned under the supervision of common hash code reconstruction, label reconstruction, and intra-modal and inter-modal semantic similarity reconstruction, so that they can generate semantics-preserving binary codes for out-of-sample queries. To deal with the vanishing gradients caused by binarization, the continuous, differentiable tanh function is introduced to approximate the discrete sign function, enabling the networks to back-propagate via automatic differentiation. Extensive experiments on MIRFlickr25K and NUS-WIDE show the superiority of DFTH over state-of-the-art methods.
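The tanh relaxation mentioned in the abstract can be made concrete with a minimal PyTorch sketch. The module name TanhHashHead, the beta sharpness parameter and its annealing, and all dimensions below are illustrative assumptions, not the authors' released implementation; the sketch only shows the general technique of replacing sign(x) with tanh(beta * x) during training so gradients can flow.

```python
# Minimal sketch (assumed, not the authors' code): tanh(beta * x) approximates
# sign(x) increasingly well as beta grows, keeping the hash layer differentiable
# during training while discrete codes are taken with sign() at test time.
import torch
import torch.nn as nn

class TanhHashHead(nn.Module):
    """Maps features to K relaxed hash bits in (-1, 1)."""
    def __init__(self, in_dim: int, n_bits: int, beta: float = 1.0):
        super().__init__()
        self.fc = nn.Linear(in_dim, n_bits)
        self.beta = beta  # sharpness; typically annealed upward so tanh -> sign

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.beta * self.fc(x))

head = TanhHashHead(in_dim=512, n_bits=64)
features = torch.randn(8, 512)                      # e.g. fused image-text features
relaxed_codes = head(features)                      # differentiable, in (-1, 1)
binary_codes = torch.sign(relaxed_codes.detach())   # discrete codes for retrieval
```

Because tanh is smooth everywhere, losses defined on the relaxed codes (label reconstruction, similarity reconstruction) back-propagate through automatic differentiation, which the hard sign function would block.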

Original language: English
Pages (from-to): 15653-15670
Number of pages: 18
Journal: Multimedia Tools and Applications
Volume: 81
Issue number: 11
State: Published - May 2022

Keywords

  • Cross-modal hashing
  • Deep fusion network
  • Semantic reconstruction
  • Supervised learning
  • Two-step learning

All Science Journal Classification (ASJC) codes

  • Software
  • Media Technology
  • Hardware and Architecture
  • Computer Networks and Communications
