Multimodal Prototypical Networks For Few-Shot Learning

2021 IEEE Winter Conference on Applications of Computer Vision (WACV 2021)

Cited by 29 | Views 14
Abstract
Although providing exceptional results for many computer vision tasks, state-of-the-art deep learning algorithms catastrophically struggle in low-data scenarios. However, if data from additional modalities (e.g. text) are available, they can compensate for the lack of data and improve classification results. To overcome this data scarcity, we design a cross-modal feature generation framework capable of enriching the sparsely populated embedding space in few-shot scenarios by leveraging data from the auxiliary modality. Specifically, we train a generative model that maps text data into the visual feature space to obtain more reliable prototypes. This allows us to exploit data from additional modalities (e.g. text) during training, while the ultimate task at test time remains classification with exclusively visual data. We show that in such cases nearest neighbor classification is a viable approach and outperforms state-of-the-art single-modal and multimodal few-shot learning methods on the CUB-200 and Oxford-102 datasets.
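The abstract describes the method only at a high level; the sketch below illustrates the idea rather than the authors' implementation. It assumes a pretrained visual feature extractor and a text-to-visual-feature generator G (here an untrained stand-in; the paper's generator is trained, and such generators are typically noise-conditioned): class prototypes average the few real support features with features generated from the paired text, and queries are labeled by their nearest prototype.

```python
# Minimal sketch of multimodal prototype construction and nearest-prototype
# classification (hypothetical names, not the authors' code). G stands in for
# the trained text-to-visual-feature generator described in the abstract.
import torch

def multimodal_prototypes(support_feats, support_text, labels, n_classes, G, n_gen=10):
    """support_feats: (N, D) visual features of the few labeled support images.
    support_text: (N, T) text embeddings paired with those images.
    G: maps a text embedding (T,) to a visual feature (D,)."""
    protos = []
    for c in range(n_classes):
        idx = labels == c
        real = support_feats[idx]  # the few real visual features for class c
        # Enrich the sparsely populated visual space with generated features.
        # (The paper's generator is stochastic; this stand-in is deterministic,
        # so the n_gen copies per caption are identical here.)
        gen = torch.stack([G(t) for t in support_text[idx] for _ in range(n_gen)])
        protos.append(torch.cat([real, gen]).mean(dim=0))
    return torch.stack(protos)  # (n_classes, D)

def classify(query_feats, protos):
    # Nearest-neighbor classification against the multimodal prototypes.
    return torch.cdist(query_feats, protos).argmin(dim=1)

if __name__ == "__main__":
    D, T, n_way, k_shot = 64, 32, 5, 1  # a 5-way 1-shot episode
    G = torch.nn.Sequential(torch.nn.Linear(T, D), torch.nn.ReLU(),
                            torch.nn.Linear(D, D))  # untrained stand-in generator
    labels = torch.arange(n_way).repeat_interleave(k_shot)
    feats = torch.randn(n_way * k_shot, D)
    text = torch.randn(n_way * k_shot, T)
    protos = multimodal_prototypes(feats, text, labels, n_way, G)
    print(classify(torch.randn(8, D), protos))  # predicted classes for 8 queries
```

At test time only the visual branch is used: the generator contributes to prototype construction from the support set, so queries are still classified from purely visual features.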
Keywords
few-shot learning methods, multimodal prototypical networks, computer vision tasks, state-of-the-art deep learning algorithms, low data scenarios, additional modalities, data scarcity, cross-modal feature generation framework, sparsely populated embedding space, few-shot scenarios, auxiliary modality, generative model, visual feature space, reliable prototypes, exclusively visual data, nearest neighbor classification