VTNCT: an image-based virtual try-on network by combining feature with pixel transformation

The Visual Computer (2022)

Abstract
Image-based virtual try-on, which aims to transfer a target clothing item onto the corresponding region of a person, has attracted increasing research attention recently. However, most existing image-based virtual try-on methods fall short in detail generation and preservation. To address these issues, we propose a novel virtual try-on network that generates photo-realistic try-on images while preserving the details of both the clothes and the non-target regions. We introduce two key innovations. The first is the clothing warping module, which uses a warping strategy that combines feature-level with pixel-level transformation to obtain warped clothes with realistic texture and robust alignment. The second is the arm generation module, a novel module that is highly effective at handling occlusion and generating the details of the arm region. In addition, we use a distillation strategy to mitigate the degradation caused by incorrect parsing, which further demonstrates the effectiveness of our components. Extensive experiments on a public fashion dataset show that our system achieves state-of-the-art virtual try-on performance both qualitatively and quantitatively. The code is available at https://github.com/changyuan96/VTNCT .
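The abstract describes fusing a pixel-level warp (which keeps sharp texture) with a feature-level warp (which gives robust alignment). A minimal NumPy sketch of one common way such a combination is fused, via a predicted soft composition mask; the function names `bilinear_warp` and `compose`, the flow convention, and the mask-based blending are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def bilinear_warp(img, flow):
    """Pixel-level transformation: warp an H x W x C image by a per-pixel
    flow field via bilinear sampling. flow[..., 0] is the x-offset and
    flow[..., 1] the y-offset (hypothetical convention)."""
    H, W, _ = img.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    x = np.clip(xs + flow[..., 0], 0, W - 1)
    y = np.clip(ys + flow[..., 1], 0, H - 1)
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = np.clip(x0 + 1, 0, W - 1), np.clip(y0 + 1, 0, H - 1)
    wx, wy = (x - x0)[..., None], (y - y0)[..., None]
    # Blend the four neighboring pixels with bilinear weights.
    return ((1 - wx) * (1 - wy) * img[y0, x0]
            + wx * (1 - wy) * img[y0, x1]
            + (1 - wx) * wy * img[y1, x0]
            + wx * wy * img[y1, x1])

def compose(pixel_warped, feature_decoded, mask):
    """Fuse the pixel-transformed cloth (sharp texture) with an image decoded
    from feature-level warping (robust alignment) using a soft mask in [0, 1]:
    mask -> 1 trusts the pixel warp, mask -> 0 trusts the decoded result."""
    m = mask[..., None]
    return m * pixel_warped + (1 - m) * feature_decoded
```

In a full network, the flow field, the feature-decoded image, and the mask would all be predicted by learned modules; here they are stand-in inputs.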
Keywords
Virtual try-on, Image-based, Feature transformation, Occlusion handling