VisDA: A Synthetic-to-Real Benchmark for Visual Domain Adaptation

IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (2018)

Abstract
The success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. However, real training images are expensive to collect and annotate for both computer vision and robotics applications. Synthetic images are easy to generate, but model performance often drops significantly on data from a new deployment domain, a problem known as dataset shift or dataset bias. Changes in the visual domain can include lighting, camera pose, and background variation, as well as general changes in how the image data is collected. While this problem has been studied extensively in the domain adaptation literature, progress has been limited by the lack of large-scale challenge benchmarks.
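
The sketch below is a minimal, hypothetical illustration of the synthetic-to-real evaluation protocol the abstract describes: train a classifier on a "synthetic" source domain, measure the accuracy drop on a shifted "real" target domain, and try one classic correction baseline (CORAL-style mean and covariance alignment). It is not the paper's code or data; the features are simulated with scikit-learn rather than drawn from VisDA images, and the helper names `align` and `matrix_power` are invented for this example.

```python
# Hypothetical sketch of the synthetic-to-real protocol (not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A shared 12-class task standing in for VisDA's object categories.
X, y = make_classification(n_samples=4000, n_features=50, n_informative=30,
                           n_classes=12, n_clusters_per_class=1, random_state=0)
Xs, ys = X[:2000], y[:2000]   # "synthetic" source domain
Xt, yt = X[2000:], y[2000:]   # "real" target domain, before the shift below

# Simulate dataset shift on the target: a random linear distortion plus an offset.
A = np.eye(50) + 0.3 * rng.standard_normal((50, 50))
Xt = Xt @ A + 0.5

clf = LogisticRegression(max_iter=2000).fit(Xs, ys)
print("source -> source accuracy:", clf.score(Xs, ys))
print("source -> target accuracy:", clf.score(Xt, yt))  # the gap reflects dataset shift

def matrix_power(C, p):
    """Symmetric positive-definite matrix power via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return vecs @ np.diag(vals ** p) @ vecs.T

def align(Xs, Xt, eps=1.0):
    """CORAL-style alignment: match source mean and covariance to the target's."""
    Xs_c = Xs - Xs.mean(axis=0)
    Cs = np.cov(Xs_c, rowvar=False) + eps * np.eye(Xs.shape[1])
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(Xt.shape[1])
    return Xs_c @ matrix_power(Cs, -0.5) @ matrix_power(Ct, 0.5) + Xt.mean(axis=0)

clf_adapted = LogisticRegression(max_iter=2000).fit(align(Xs, Xt), ys)
print("adapted source -> target accuracy:", clf_adapted.score(Xt, yt))
```

In practice, VisDA-style experiments apply this train-on-source, test-on-target comparison to deep image features rather than simulated vectors, and the alignment step would be replaced by whichever adaptation method is being benchmarked.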
Keywords
machine learning,large labeled datasets,real training images,camera pose,large-scale challenge benchmarks,domain adaptation literature,background variation,dataset bias,dataset shift,deployment domain,synthetic images,robotic applications,computer vision,labeled datasets,visual recognition tasks,visual domain adaptation,synthetic-to-real benchmark,VisDA