Cross-Modal Guidance Network For Sketch-Based 3d Shape Retrieval

2020 IEEE International Conference on Multimedia and Expo (ICME), 2020

Cited by 5 | Viewed 7
Abstract
The main challenge of sketch-based 3D shape retrieval is the large cross-modal difference between 2D sketches and 3D shapes. Most recent works employ two heterogeneous networks and a shared loss to directly map features from the two modalities into a common feature space, which fails to reduce the cross-modal difference effectively. In this paper, we propose a novel method that adopts a teacher-student strategy to learn an aligned cross-modal feature space indirectly. Specifically, our method first employs a classification network to learn discriminative features of 3D shapes. The pre-learned features then serve as a teacher to guide the feature learning of 2D sketches. To align the cross-modal features, 2D sketch features are transferred into the pre-learned 3D feature space. Experiments on two benchmark datasets demonstrate that our method achieves superior retrieval performance compared with state-of-the-art approaches.
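The guidance idea in the abstract — freeze pre-learned 3D shape features as a teacher, then train a sketch-side mapping to land in that space — can be illustrated with a minimal NumPy sketch. All names, dimensions, and the simple linear student are illustrative assumptions, not the paper's actual networks or training procedure; the real method uses deep feature extractors on both sides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-learned 3D shape features (the "teacher"), one centroid
# per class, assumed fixed after the classification-network stage.
num_classes, dim_3d, dim_sketch = 4, 8, 6
teacher_centroids = rng.normal(size=(num_classes, dim_3d))

# Synthetic 2D sketch features with class labels (the "student" inputs).
n = 64
labels = rng.integers(0, num_classes, size=n)
sketch_feats = rng.normal(size=(n, dim_sketch))

# Linear map standing in for the sketch network's projection head: it
# transfers sketch features into the pre-learned 3D feature space.
W = np.zeros((dim_sketch, dim_3d))

lr = 0.1
for _ in range(200):
    pred = sketch_feats @ W                      # projected sketch features
    target = teacher_centroids[labels]           # teacher feature per sample
    grad = sketch_feats.T @ (pred - target) / n  # gradient of 0.5 * MSE
    W -= lr * grad

final_mse = float(np.mean((sketch_feats @ W - teacher_centroids[labels]) ** 2))
print(final_mse)
```

After training, projected sketch features sit closer to the matching 3D class features than before, so retrieval can be done by nearest-neighbor search in the shared (3D) feature space — the alignment happens indirectly, without jointly optimizing both modalities against a shared loss.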
Keywords
sketch, 3D shape retrieval, cross-modal differences, guidance network, feature alignment