Distribution and gradient constrained embedding model for zero-shot learning with fewer seen samples

Knowledge-Based Systems (2022)

Abstract
Zero-Shot Learning (ZSL), which aims to recognize unseen classes with no training data, has made great progress in recent years. However, established ZSL methods implicitly assume that sufficient labeled samples exist for each seen class, which is quite idealistic in general: collecting sufficient labeled samples is labor-intensive and may even be naturally impractical for some low-probability events. Accordingly, we investigate how to perform ZSL with fewer seen samples. Specifically, we propose a Distribution and Gradient constrained Embedding Model (DGEM), which predicts the visual prototypes (means) for the given semantic vectors of seen classes. We summarize the main challenges brought by limited seen samples as the representation bias problem and the over-fitting problem, and propose two regularizers to address them: (1) a prototype refinement loss, which uses the relative distribution of class semantics to constrain that of the predicted visual prototypes; and (2) a projection smoothing constraint, which prevents the model from forming sharp decision boundaries. We validate the effectiveness of DGEM on five ZSL datasets and compare it with several representative ZSL methods. Experimental results show that DGEM outperforms the other established methods when each seen class has only one or five samples.
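The abstract names the two regularizers but gives no formulas. The following is a minimal, hypothetical PyTorch sketch of how such an objective might be assembled; the MLP architecture, the cosine-similarity matrix used as a stand-in for the "relative distribution" of class vectors, and the L2 weight penalty standing in for the projection smoothing constraint are all assumptions for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical DGEM-style objective: map class semantic vectors to
# visual prototypes, with two regularizers in the spirit of the abstract.

class PrototypePredictor(nn.Module):
    def __init__(self, sem_dim, vis_dim, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sem_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, vis_dim),
        )

    def forward(self, semantics):  # (C, sem_dim) -> (C, vis_dim)
        return self.net(semantics)

def pairwise_sim(x):
    """Cosine-similarity matrix over rows; an illustrative proxy for
    the 'relative distribution' of a set of class vectors."""
    x = F.normalize(x, dim=1)
    return x @ x.t()

def dgem_loss(model, semantics, sample_prototypes,
              lam_refine=1.0, lam_smooth=1e-3):
    pred = model(semantics)  # predicted visual prototypes, one per seen class
    # Base regression loss: match prototypes estimated from the few samples.
    base = F.mse_loss(pred, sample_prototypes)
    # (1) Prototype refinement: the similarity structure of the predicted
    # prototypes should mirror that of the class semantics.
    refine = F.mse_loss(pairwise_sim(pred), pairwise_sim(semantics))
    # (2) Projection smoothing: an L2 penalty discouraging sharp decision
    # boundaries (assumed form; the paper's constraint may differ).
    smooth = sum(p.pow(2).sum() for p in model.parameters())
    return base + lam_refine * refine + lam_smooth * smooth
```

At inference, unseen-class semantics would be passed through the trained predictor and test images classified by nearest predicted prototype, the standard recipe for embedding-based ZSL.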
Keywords
Zero-shot learning, Fewer seen samples, Representation bias, Over-fitting