Abstract
Existing cross-modal hashing methods ignore informative multimodal joint information and cannot fully exploit semantic labels. In this paper, we propose a deep fused two-step cross-modal hashing (DFTH) framework with multiple semantic supervision. In the first step, DFTH learns unified hash codes for instances through a fusion network; semantic label reconstruction and similarity reconstruction are introduced so that the binary codes are informative, discriminative, and semantic-similarity preserving. In the second step, two modality-specific hash networks are learned under the supervision of common hash code reconstruction, label reconstruction, and intra-modal and inter-modal semantic similarity reconstruction; these networks can then generate semantics-preserving binary codes for out-of-sample queries. To deal with the vanishing-gradient problem of binarization, the continuous, differentiable tanh function is introduced to approximate the discrete sign function, enabling the networks to back-propagate via automatic gradient computation. Extensive experiments on MIRFlickr25K and NUS-WIDE show the superiority of DFTH over state-of-the-art methods.
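The tanh relaxation mentioned above is the standard trick for keeping hash-code learning differentiable. The sketch below illustrates it in PyTorch, paired with an illustrative similarity-reconstruction loss; the scaling factor `beta`, the inner-product loss form, and the {-1, +1} similarity convention are assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def relaxed_hash_codes(logits: torch.Tensor, beta: float = 1.0) -> torch.Tensor:
    """Continuous relaxation of sign(logits) via tanh(beta * logits).

    As beta grows, tanh(beta * x) approaches sign(x), yet the function
    stays differentiable, so gradients flow during back-propagation.
    """
    return torch.tanh(beta * logits)

def similarity_reconstruction_loss(codes: torch.Tensor,
                                   sim: torch.Tensor) -> torch.Tensor:
    """Encourage normalized inner products of K-bit codes to match a
    pairwise semantic similarity matrix with entries in {-1, +1}
    (a common convention; the paper's exact loss may differ)."""
    k = codes.size(1)
    inner = codes @ codes.t() / k  # normalized to [-1, 1]
    return F.mse_loss(inner, sim)

# Toy usage: 4 instances, 16-bit codes, two semantic groups.
logits = torch.randn(4, 16, requires_grad=True)
sim = torch.tensor([[ 1.,  1., -1., -1.],
                    [ 1.,  1., -1., -1.],
                    [-1., -1.,  1.,  1.],
                    [-1., -1.,  1.,  1.]])
codes = relaxed_hash_codes(logits, beta=2.0)
loss = similarity_reconstruction_loss(codes, sim)
loss.backward()                      # gradients exist despite the binary target
binary = torch.sign(codes.detach())  # hard codes used for retrieval at test time
```

At test time the relaxation is dropped and codes are binarized with `sign`, so only training relies on the smooth approximation.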
| Original language | English |
| --- | --- |
| Pages (from-to) | 15653-15670 |
| Number of pages | 18 |
| Journal | Multimedia Tools and Applications |
| Volume | 81 |
| Issue number | 11 |
| DOIs | |
| State | Published - May 2022 |
Keywords
- Cross-modal hashing
- Deep fusion network
- Semantic reconstruction
- Supervised learning
- Two-step learning
All Science Journal Classification (ASJC) codes
- Software
- Media Technology
- Hardware and Architecture
- Computer Networks and Communications