Hairstyle Transfer between Portrait Images

Semantic Scholar (2021)

Abstract
This thesis proposes a compact solution for high-fidelity hairstyle transfer between portrait images. Given a hair image and a face image, our network produces an output image in which the input hair and face are seamlessly merged. The architecture consists of two encoders and a tiny mapping network that together map the two inputs into the latent space of a pretrained StyleGAN2, which then generates a high-quality image. The method requires neither annotated data nor an external dataset; the whole pipeline is trained using only images synthetically generated by StyleGAN2. We demonstrate additional applications of the proposed framework, e.g., hairstyle manipulation and hair generation for 3D morphable model renderings. An extensive evaluation shows that our network is robust to various challenging conditions in which head pose, face size, gender, ethnicity, and illumination differ between the inputs. The hairstyle transfer fidelity is assessed through a user study and a trained hair similarity metric.
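
Since the abstract describes the architecture only at a high level, the following is a minimal sketch of how two image encoders and a small mapping network might feed a frozen, pretrained StyleGAN2 generator. All class names, layer sizes, and the W+ latent convention (18 x 512) are illustrative assumptions rather than the authors' exact design; the pretrained generator itself is assumed to be supplied externally.

import torch
import torch.nn as nn

class ImageEncoder(nn.Module):
    # Toy convolutional encoder producing a 512-d code per input image.
    def __init__(self, code_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(256, code_dim)

    def forward(self, x):
        h = self.backbone(x).flatten(1)
        return self.fc(h)

class MappingNetwork(nn.Module):
    # Tiny MLP that fuses the face and hair codes into a W+ latent.
    def __init__(self, code_dim=512, num_ws=18):
        super().__init__()
        self.num_ws = num_ws
        self.mlp = nn.Sequential(
            nn.Linear(2 * code_dim, 512), nn.LeakyReLU(0.2),
            nn.Linear(512, num_ws * 512),
        )

    def forward(self, face_code, hair_code):
        w = self.mlp(torch.cat([face_code, hair_code], dim=1))
        return w.view(-1, self.num_ws, 512)  # (B, 18, 512) W+ latents

class HairstyleTransfer(nn.Module):
    # Combines the two encoders and the mapping network with a pretrained,
    # frozen StyleGAN2 generator (passed in by the caller; assumed to
    # accept W+ latents and return an RGB image).
    def __init__(self, stylegan2_generator):
        super().__init__()
        self.face_enc = ImageEncoder()
        self.hair_enc = ImageEncoder()
        self.mapper = MappingNetwork()
        self.generator = stylegan2_generator
        for p in self.generator.parameters():
            p.requires_grad_(False)  # only encoders and mapper are trained

    def forward(self, face_img, hair_img):
        w_plus = self.mapper(self.face_enc(face_img), self.hair_enc(hair_img))
        return self.generator(w_plus)

In this reading, only the encoders and the mapping network receive gradients, which is consistent with the abstract's claim that the pipeline is trained purely on images sampled from the fixed StyleGAN2 generator.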