Foreground object structure transfer for unsupervised domain adaptation

Jieren Cheng, Le Liu, Boyi Liu, Ke Zhou, Qiaobo Da, Yue Yang

International Journal of Intelligent Systems (2022)

Abstract
Unsupervised domain adaptation aims to train a classification model on a labeled source domain for use on an unlabeled target domain. Because the data distributions of the two domains differ, the model often performs poorly on the target domain. Existing methods align the global features of the source and target domains and learn domain-invariant features to improve performance, but they ignore the difference between foreground and background features and do not consider the structural information of the foreground object in an image. We therefore propose foreground object structure transfer (FOST), which avoids conflating the structural information of foreground and background features, enhances foreground features during source-to-target transfer, and uses a structural contrastive loss to drive domain alignment. FOST relies on prior knowledge to distinguish foreground from background features and exploits the structural information of the object, making the intra-class spatial distribution more compact and the inter-class spatial distribution more separated, which improves both transferability and classification performance. Extensive experiments on various benchmarks under different domain adaptation settings show that FOST compares favorably against state-of-the-art domain adaptation methods, achieving accuracies of 95.3%, 91.3%, 76.6%, and 87.55% on the ImageCLEF-DA, Office-31, Office-Home, and VisDA-2017 data sets, respectively.
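The abstract describes a structural contrastive loss that pulls intra-class features together and pushes inter-class features apart. As a rough illustration only, and not the paper's actual FOST objective, a generic InfoNCE-style supervised contrastive loss over labeled features can be sketched as follows; the function name, temperature value, and all other details here are assumptions:

```python
import numpy as np

def structural_contrastive_loss(features, labels, temperature=0.5):
    """InfoNCE-style supervised contrastive loss over L2-normalized features.

    Same-class pairs (positives) are pulled together; all other samples
    act as negatives in the denominator. This is a generic sketch of the
    kind of contrastive objective the abstract alludes to, not the
    paper's exact formulation.
    """
    # L2-normalize so similarities are cosine similarities
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature  # pairwise scaled similarities
    n = len(labels)
    loss, count = 0.0, 0
    for i in range(n):
        # positives: other samples sharing sample i's label
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # denominator sums over all other samples (positives and negatives)
        others = [j for j in range(n) if j != i]
        denom = np.sum(np.exp(sim[i, others]))
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)
```

Under this kind of loss, a feature space where classes form tight, well-separated clusters scores lower than one where classes overlap, which matches the compactness/separation behavior the abstract claims for FOST.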
Keywords
contrastive learning, object structure, unsupervised domain adaptation