Abstract

Cross-modal retrieval across image and text modalities is a challenging task due to its inherent ambiguity: an image often exhibits various situations, and a caption can be coupled with diverse images. Set-based embedding has been studied as a solution to this problem; it encodes a sample into a set of different embedding vectors that capture different semantics of the sample. In this paper, we present a novel set-based embedding method, which is distinct from previous work in two aspects. First, we present a new similarity function, called smooth-Chamfer similarity, designed to alleviate the side effects of existing similarity functions for set-based embedding. Second, we propose a novel set prediction module that produces a set of embedding vectors capturing diverse semantics of the input via the slot attention mechanism. Our method is evaluated on the COCO and Flickr30K datasets across different visual backbones, where it outperforms existing methods, including those that demand substantially larger computation at inference.
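As a rough illustration of the first contribution, the sketch below shows how a smoothed, Chamfer-style similarity between two embedding sets can be computed by replacing the hard max of the classic Chamfer similarity with a log-sum-exp over the other set. This is a minimal sketch, not the paper's reference implementation; the function name and the temperature value `alpha` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def smooth_chamfer_similarity(set_a, set_b, alpha=16.0):
    """Smoothed Chamfer-style similarity between two embedding sets.

    set_a: (K, D) tensor, set_b: (M, D) tensor.
    The hard max of Chamfer similarity is replaced by a log-sum-exp over the
    other set, controlled by the temperature `alpha` (illustrative value).
    """
    a = F.normalize(set_a, dim=-1)   # unit vectors so that dot product = cosine similarity
    b = F.normalize(set_b, dim=-1)
    cos = a @ b.t()                  # (K, M) pairwise cosine similarities

    # For each element of one set, softly aggregate its similarities to the other set,
    # then average over the elements of that set.
    a_to_b = torch.logsumexp(alpha * cos, dim=1).mean()
    b_to_a = torch.logsumexp(alpha * cos, dim=0).mean()

    return (a_to_b + b_to_a) / (2.0 * alpha)
```

For example, with a visual embedding set of K = 4 slots and a textual set of M = 4 slots of dimension D, the function returns a scalar that grows as elements of one set align with some element of the other, without forcing every element to match.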

Overall Architecture of DivE

Figure 1. An overview of our model. (a) The overall framework of our model. The model consists of three parts: a visual feature extractor, a textual feature extractor, and set prediction modules $f^V$ and $f^T$. First, the feature extractor of each modality extracts local and global features from the input sample. Then, the features are fed to the set prediction modules to produce embedding sets $S^V$ and $S^T$. The model is trained with a loss based on our smooth-Chamfer similarity. (b) Details of our set prediction module and the attention maps produced by the slots at each iteration. A set prediction module consists of multiple aggregation blocks that share their weights. Note that $f^V$ and $f^T$ have the same architecture.
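The set prediction module in (b) stacks weight-shared aggregation blocks built on slot attention. Below is a minimal sketch of how such a block could look; the class name `SlotAggregationBlock`, the dimensions, the number of iterations, and the GRU-based update follow the generic slot attention recipe of Locatello et al. and are assumptions rather than the exact block used in the paper.

```python
import torch
import torch.nn as nn

class SlotAggregationBlock(nn.Module):
    """Slot-attention-style aggregation over local features (illustrative sketch).

    Each slot competes for input features via a softmax over the slot axis and
    is then updated with a GRU cell. Dimensions and the update rule are
    assumptions, not the paper's exact aggregation block.
    """

    def __init__(self, dim=512, num_slots=4):
        super().__init__()
        self.num_slots = num_slots
        self.scale = dim ** -0.5
        self.slots_init = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, features, num_iters=3):
        """features: (B, N, D) local features -> (B, num_slots, D) embedding set."""
        B, N, D = features.shape
        feats = self.norm_inputs(features)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_init.unsqueeze(0).expand(B, -1, -1)

        for _ in range(num_iters):  # weight-shared iterations
            q = self.to_q(self.norm_slots(slots))
            attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=1)
            attn = attn / (attn.sum(dim=-1, keepdim=True) + 1e-8)  # weighted mean over inputs
            updates = attn @ v                                     # (B, num_slots, D)
            slots = self.gru(
                updates.reshape(-1, D), slots.reshape(-1, D)
            ).view(B, self.num_slots, D)
        return slots
```

Applying the softmax over the slot axis makes the slots compete for input features, which is what encourages each element of the resulting embedding set to attend to a different part of the input.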

Experimental results

1. Performance comparison with other methods

Table 1. Recall@K (\%) and RSUM on the COCO dataset. Evaluation results are presented for both the 1K test setting (averaged over the 5-fold test splits) and the 5K test setting. The best RSUM scores are marked in bold. CA and † indicate models using cross-attention and ensembles of two models, respectively.
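For reference, Recall@K is the percentage of queries whose ground-truth match appears among the top-K retrieved items, and RSUM is the sum of Recall@{1, 5, 10} over both retrieval directions. The sketch below is a minimal illustration that assumes a precomputed similarity matrix and a single ground-truth gallery index per query; the actual COCO protocol pairs each image with five captions, so the image-to-text direction counts a hit if any of them is retrieved.

```python
import numpy as np

def recall_at_k(sim, gt_index, k):
    """sim: (num_queries, num_gallery) similarity matrix.
    gt_index: (num_queries,) ground-truth gallery index per query.
    Returns the percentage of queries whose ground truth ranks in the top k."""
    ranking = np.argsort(-sim, axis=1)  # gallery items sorted by descending similarity
    hits = (ranking[:, :k] == gt_index[:, None]).any(axis=1)
    return 100.0 * hits.mean()

def rsum(sim_i2t, sim_t2i, gt_i2t, gt_t2i):
    """Sum of Recall@{1, 5, 10} over image-to-text and text-to-image retrieval."""
    return sum(recall_at_k(s, g, k)
               for s, g in [(sim_i2t, gt_i2t), (sim_t2i, gt_t2i)]
               for k in (1, 5, 10))
```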

2. Qualitative results regarding elements of the visual embedding set

Figure 2. For each element of the image embedding set, we present its attention map and the caption nearest to the element in the embedding space. Matching captions are colored in green. Entities corresponding to the attention maps are underlined.

Acknowledgements

This work was supported by NRF grants and IITP grants funded by the Ministry of Science and ICT, Korea (NRF-2018R1A5-A1060031-20%, NRF-2021R1A2C3012728-50%, IITP-2019-0-01906-10%, IITP-2022-0-00290-20%).

Paper

Improving Cross-Modal Retrieval With Set of Diverse Embeddings
Dongwon Kim, Namyup Kim, and Suha Kwak
CVPR (Highlight), 2023
[paper] [arXiv] [poster]

Code

The code is currently being refactored and will be made available online soon. Until then, please refer to the codebase we attached as supplementary material at submission. It is a little messy but reproduces most of the core experiments in the paper.