Simultaneous Deep Transfer Across Domains and Tasks

2015 IEEE International Conference on Computer Vision (ICCV)

Abstract
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.
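As a rough illustration of the two objectives described above, the sketch below gives, in PyTorch-style Python, one plausible form of a domain confusion loss (cross-entropy between the domain classifier's prediction and a uniform distribution over domains) and a soft label distribution matching loss (cross-entropy between a temperature-softened target prediction and a per-class average source softmax). The function names, temperature value, and toy shapes are assumptions for illustration only, not the paper's released implementation.

# Minimal sketch (not the authors' code) of the two auxiliary losses described in the abstract.
import torch
import torch.nn.functional as F


def domain_confusion_loss(domain_logits: torch.Tensor) -> torch.Tensor:
    """Domain-invariance term: cross-entropy against a uniform distribution
    over domains, so features become uninformative about the source domain."""
    log_probs = F.log_softmax(domain_logits, dim=1)
    return -log_probs.mean()  # uniform target => average negative log-probability


def soft_label_loss(target_logits: torch.Tensor,
                    class_soft_labels: torch.Tensor,
                    target_labels: torch.Tensor,
                    temperature: float = 2.0) -> torch.Tensor:
    """Soft label distribution matching: align a target example's softened
    prediction with the average source softmax ('soft label') of its class."""
    log_probs = F.log_softmax(target_logits / temperature, dim=1)
    soft_targets = class_soft_labels[target_labels]  # (batch, num_classes)
    return -(soft_targets * log_probs).sum(dim=1).mean()


if __name__ == "__main__":
    # Toy shapes: 4 examples, 2 domains, 10 classes (all values are placeholders).
    d_logits = torch.randn(4, 2)
    t_logits = torch.randn(4, 10)
    soft_labels = F.softmax(torch.randn(10, 10), dim=1)  # per-class average source softmax
    labels = torch.randint(0, 10, (4,))
    print(domain_confusion_loss(d_logits), soft_label_loss(t_logits, soft_labels, labels))

In practice these terms would be added, with suitable weights, to the standard supervised classification loss on the labeled data; the weighting scheme is not specified in the abstract and is left out here.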
Keywords
classification, semi-supervised adaptation setting, visual domain adaptation tasks, soft label distribution matching loss, domain transfer, domain invariance, fine-tuning deep models, generic supervised deep CNN model, simultaneous deep transfer