Learning Multi-Domain Adversarial Neural Networks for Text Classification.

IEEE Access (2019)

Abstract
Deep neural networks have been applied to learn transferable features for adapting text classification models from a source domain to a target domain. Conventional domain adaptation adapts a model from a single specific source domain with sufficient labeled data to a single specific target domain with little or no labeled data. However, this paradigm loses sight of the correlations among different domains, where common knowledge could be shared to improve the performance of both the source domain and the target domain. Multi-domain learning proposes learning sharable features from multiple source domains and the target domain. However, previous work mainly focuses on improving the performance of the target domain and lacks an effective mechanism to ensure that the shared feature space is not contaminated by domain-specific features. In this paper, we use an adversarial training strategy and orthogonality constraints to guarantee that the private and shared features do not collide with each other, which improves the performance of both the source domains and the target domain. Experimental results on a standard sentiment domain adaptation dataset and a consumption intention identification dataset labeled by us show that our approach dramatically outperforms state-of-the-art baselines and is general enough to be applied to more scenarios.
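The abstract describes a shared-private setup in which an adversarial domain classifier keeps the shared features domain-invariant while an orthogonality constraint keeps private and shared features apart. Below is a minimal sketch of that idea, not the authors' released code: the framework (PyTorch), the layer sizes, the module names, and the loss weight are all assumptions made for illustration.

```python
# Sketch of a shared-private model with adversarial training and an
# orthogonality penalty, as described in the abstract. All names, dimensions,
# and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; reverses and scales gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class SharedPrivateModel(nn.Module):
    def __init__(self, input_dim, hidden_dim, num_classes, num_domains):
        super().__init__()
        # Shared encoder: intended to capture domain-invariant features.
        self.shared = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        # One private encoder per domain: captures domain-specific features.
        self.private = nn.ModuleList(
            [nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
             for _ in range(num_domains)]
        )
        # Task classifier uses both shared and private features.
        self.task_clf = nn.Linear(2 * hidden_dim, num_classes)
        # Domain classifier is the adversary over the shared features.
        self.domain_clf = nn.Linear(hidden_dim, num_domains)

    def forward(self, x, domain_id, lambd=1.0):
        h_s = self.shared(x)
        h_p = self.private[domain_id](x)
        task_logits = self.task_clf(torch.cat([h_s, h_p], dim=-1))
        # Gradient reversal trains the shared encoder to fool the domain classifier.
        domain_logits = self.domain_clf(GradientReversal.apply(h_s, lambd))
        # Orthogonality penalty: squared Frobenius norm of H_s^T H_p,
        # discouraging overlap between shared and private feature subspaces.
        ortho = torch.norm(h_s.t() @ h_p, p="fro") ** 2
        return task_logits, domain_logits, ortho


# Usage sketch: total loss = task loss + adversarial domain loss + gamma * ortho.
model = SharedPrivateModel(input_dim=300, hidden_dim=128, num_classes=2, num_domains=4)
x = torch.randn(16, 300)                          # a batch of (assumed) 300-d text features
y_task = torch.randint(0, 2, (16,))
y_domain = torch.full((16,), 1, dtype=torch.long)  # all examples from domain 1 in this batch
task_logits, domain_logits, ortho = model(x, domain_id=1)
loss = (nn.functional.cross_entropy(task_logits, y_task)
        + nn.functional.cross_entropy(domain_logits, y_domain)
        + 0.01 * ortho)
loss.backward()
```

The gradient reversal layer lets a single backward pass both train the domain classifier and push the shared encoder toward domain-invariant representations, while the orthogonality term penalizes any component of the private features that the shared encoder also captures.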
Keywords
Adversarial learning, domain adaptation, consumption intention, text classification