Discriminative Regularized Deep Generative Models for Semi-Supervised Learning

2019 IEEE International Conference on Data Mining (ICDM), 2019

Cited by 6 | Views 74
Abstract
Deep generative models (DGMs) have shown strong performance in semi-supervised learning (SSL) by incorporating discrete class information into the learning process. However, existing methods tend to overfit the given labeled data because they consider only the conditional probability of the labels. In this paper, we propose a novel discriminatively regularized deep generative method for SSL that fully exploits the discriminative and geometric information of the data to address this issue. Our method introduces the cluster and manifold assumptions, which maximize the classification margin between clusters while smoothing the predictions of data points that lie close together on the sub-manifold of each cluster, to regularize the learning of the classifier in DGMs. To derive the regularization from these assumptions, we use the data generated by the DGM, together with the labeled and unlabeled data, to model the data manifold and to form clusters based on the Gumbel-softmax distribution. Experimental results on both text and image datasets demonstrate the effectiveness and flexibility of our method and show that the two assumptions are complementary in guiding the classification boundary, thereby improving the discriminative ability of the classifier.
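The abstract does not give the exact loss terms, but the two assumptions it describes can be illustrated with a minimal, hypothetical PyTorch sketch: a manifold-smoothness term that penalizes prediction differences between unlabeled points and nearby DGM-generated points, and a cluster-margin term driven by Gumbel-softmax samples of the class posterior. The function name, the pairing of unlabeled and generated batches, and the weighting parameters are assumptions made for illustration, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def discriminative_regularizer(classifier, x_unlabeled, x_generated,
                               tau=0.5, lam_manifold=1.0, lam_cluster=1.0):
    """Hypothetical sketch of the two regularizers described in the abstract.

    Assumes x_generated[i] is a DGM sample lying near x_unlabeled[i] on the
    data manifold; the paper's true pairing and weighting may differ.
    """
    logits_u = classifier(x_unlabeled)   # predictions on real unlabeled data
    logits_g = classifier(x_generated)   # predictions on generated data

    # Manifold assumption: predictions should vary smoothly between nearby
    # points, enforced here as a KL consistency term.
    manifold_loss = F.kl_div(F.log_softmax(logits_g, dim=-1),
                             F.softmax(logits_u, dim=-1),
                             reduction="batchmean")

    # Cluster assumption: draw soft cluster assignments with Gumbel-softmax
    # and encourage confident (large-margin) predictions on each cluster.
    y_soft = F.gumbel_softmax(logits_u, tau=tau, hard=False, dim=-1)
    cluster_loss = -(y_soft * F.log_softmax(logits_u, dim=-1)).sum(dim=-1).mean()

    return lam_manifold * manifold_loss + lam_cluster * cluster_loss
```

In such a sketch, the regularizer would simply be added to the supervised cross-entropy on the labeled data when training the classifier of the DGM.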
Keywords
Deep Generative Models, Discriminative Regularization, Semi-supervised Learning