Enhancing 2D Representation via Adjacent Views for 3D Shape Retrieval

2019 IEEE/CVF International Conference on Computer Vision (ICCV 2019)

Abstract
Multi-view shape descriptors obtained from various 2D images are commonly adopted in 3D shape retrieval. One major challenge is that significant shape information is discarded during 2D view rendering through projection. In this paper, we propose a convolutional neural network based method, the Neighbor-Center Enhanced Network, which enhances each 2D view using its neighboring ones. By exploiting cross-view correlations, the Neighbor-Center Enhanced Network learns how to maximally incorporate adjacent views into an enhanced 2D representation that effectively describes shapes. We observe that a small number of enhanced 2D views, e.g., six, is already sufficient for panoramic shape description. Thus, by simply aggregating features from six enhanced 2D views, we arrive at a highly compact yet discriminative shape descriptor. The proposed shape descriptor significantly outperforms state-of-the-art 3D shape retrieval methods on the ModelNet and ShapeNet-Core55 benchmarks, and also exhibits robustness against object occlusion.
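
To make the pipeline described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the overall idea: extract a feature per rendered view, enhance each view's feature with its two adjacent views, then aggregate six enhanced views into one compact descriptor. The module names, backbone, fusion layer, and pooling choice are illustrative assumptions, not the authors' released implementation.

```python
# Hypothetical sketch of neighbor-enhanced multi-view aggregation.
# All layer choices and dimensions are assumptions for illustration only.
import torch
import torch.nn as nn


class NeighborEnhancedDescriptor(nn.Module):
    """Enhances each rendered view with its adjacent views, then pools
    the per-view features into a single compact shape descriptor."""

    def __init__(self, feat_dim=512):
        super().__init__()
        # Per-view CNN backbone (placeholder: a tiny conv stack).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Fuses a center-view feature with its left/right neighbors.
        self.enhance = nn.Sequential(
            nn.Linear(3 * feat_dim, feat_dim), nn.ReLU(),
        )

    def forward(self, views):
        # views: (batch, n_views, 3, H, W), with n_views e.g. 6 around the shape
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, v, -1)
        left = feats.roll(shifts=1, dims=1)    # adjacent view on one side
        right = feats.roll(shifts=-1, dims=1)  # adjacent view on the other side
        enhanced = self.enhance(torch.cat([left, feats, right], dim=-1))
        # Aggregate the six enhanced views (max-pool) into one descriptor.
        return enhanced.max(dim=1).values       # (batch, feat_dim)


# Usage: six rendered views per shape
# model = NeighborEnhancedDescriptor()
# desc = model(torch.randn(2, 6, 3, 224, 224))  # -> (2, 512)
```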
Keywords
2D representation,adjacent views,convolutional neural network based method,CenterNet,cross-view correlations,panoramic shape description,discriminative shape descriptor,3D shape retrieval methods,multi-view shape descriptors