Semantically Grounded Visual Embeddings for Zero-Shot Learning

IEEE Conference on Computer Vision and Pattern Recognition (2022)

Abstract
Zero-shot learning methods rely on fixed visual and semantic embeddings extracted from independent vision and language models, each pre-trained for other large-scale tasks. This is a weakness of current zero-shot learning frameworks, as such disjoint embeddings fail to adequately associate visual and textual information with their shared semantic content. We therefore propose to learn semantically grounded and enriched visual representations by training a joint image-and-text model with a two-stream network on a proxy task. To improve the alignment between images and the textual representations provided by attributes, we leverage ancillary captions as a source of grounded semantic information. Our method, dubbed joint embeddings for zero-shot learning, is evaluated on several benchmark datasets, improving over existing state-of-the-art methods in both standard (+1.6% on aPY, +2.6% on FLO) and generalized (+2.1% on AWA2, +2.2% on CUB) zero-shot recognition.
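
The abstract only sketches the architecture. As a rough illustration of the two-stream idea, below is a minimal PyTorch sketch of a joint image-text embedding trained with a contrastive proxy objective on image-caption pairs. All module names, dimensions, and the InfoNCE-style loss here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoStreamJointEmbedding(nn.Module):
    # Two streams project pre-extracted visual features and caption
    # (or attribute) embeddings into a shared joint space.
    def __init__(self, vis_dim=2048, txt_dim=768, joint_dim=512):
        super().__init__()
        self.vis_proj = nn.Sequential(
            nn.Linear(vis_dim, joint_dim), nn.ReLU(),
            nn.Linear(joint_dim, joint_dim),
        )
        self.txt_proj = nn.Sequential(
            nn.Linear(txt_dim, joint_dim), nn.ReLU(),
            nn.Linear(joint_dim, joint_dim),
        )

    def forward(self, vis_feats, txt_feats):
        # L2-normalize so similarity is a cosine score.
        v = F.normalize(self.vis_proj(vis_feats), dim=-1)
        t = F.normalize(self.txt_proj(txt_feats), dim=-1)
        return v, t

def contrastive_alignment_loss(v, t, temperature=0.07):
    # Symmetric InfoNCE over matched image-caption pairs in a batch:
    # a common proxy objective for grounding, assumed here for illustration.
    logits = v @ t.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Usage: a batch of 32 image features (e.g. 2048-d ResNet features)
# paired with 32 caption embeddings (e.g. 768-d language-model encodings).
model = TwoStreamJointEmbedding()
vis = torch.randn(32, 2048)
txt = torch.randn(32, 768)
v, t = model(vis, txt)
loss = contrastive_alignment_loss(v, t)
loss.backward()

After such training, the grounded visual stream can replace the fixed visual embedding in a standard zero-shot pipeline; the specific proxy task and caption source are design choices described in the paper itself.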
Keywords
visual embeddings, learning, zero-shot