A Virtual 3D Hair Reconstruction Method from a 2D Picture

Weng Zufeng, Wang Shoujue, Huang Lianfen

semanticscholar(2016)

Abstract
In this paper, we present a method for reconstructing virtual 3D hair from a 2D human picture. The proposed method only requires the user to provide a single frontal 2D picture as the reference texture. A Canny operator combined with hair-region color is used to detect the edges of the hair in the picture. The three axis parameters of an ellipsoidal head model are then calculated from the edge contour. A multi-triangle mesh is constructed by adding point sets on the edge and in the interior. A Cartesian-to-triangular coordinate transform is then applied, and a depth value is assigned to each texture pixel to obtain the initial 3D hair shape. To simulate a multi-level, realistic hair-strip effect, a label map is used to grow hair strips from the edge of the given frontal picture. A B-spline function is then applied to refine the preliminary hair strips. Finally, 3D hair carrying the texture features of the given picture is modeled. The experimental results show that this method can model 3D hair strips from a 2D picture with very few manual operations and produces a good stereoscopic effect.
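
For concreteness, the following is a minimal sketch (not the authors' implementation) of the edge-detection step described above: a Canny edge map intersected with a rough hair-color mask to recover the hair contour. It assumes OpenCV; the HSV thresholds, dilation kernel size, and input file name are illustrative assumptions rather than values from the paper.

import cv2
import numpy as np

def hair_edge_contour(image_bgr: np.ndarray) -> np.ndarray:
    """Return the dominant hair contour found by combining a Canny edge map
    with a dark (hair-colored) region mask."""
    # Rough hair-color mask: low-brightness pixels in HSV space (assumed thresholds).
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hair_mask = cv2.inRange(hsv, np.array([0, 0, 0]), np.array([180, 255, 80]))

    # Canny edge map of the grayscale image.
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)

    # Keep only the edges that fall on a (slightly dilated) hair-color region.
    dilated = cv2.dilate(hair_mask, np.ones((5, 5), np.uint8))
    hair_edges = cv2.bitwise_and(edges, edges, mask=dilated)

    # Take the largest external contour as the hair boundary.
    contours, _ = cv2.findContours(hair_edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else np.empty((0, 1, 2), dtype=np.int32)

if __name__ == "__main__":
    img = cv2.imread("frontal_portrait.jpg")  # hypothetical input image
    contour = hair_edge_contour(img)
    print("hair contour points:", len(contour))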
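
Likewise, a sketch of the B-spline refinement step, assuming each preliminary hair strip is available as a short 3D polyline; it uses SciPy's cubic B-spline fitting, and the sample points below are made up for illustration.

import numpy as np
from scipy.interpolate import splprep, splev

def refine_strip(strip_points: np.ndarray, n_samples: int = 100) -> np.ndarray:
    """Fit a cubic B-spline through a coarse hair-strip polyline (N x 3)
    and resample it into a smooth curve of n_samples points."""
    tck, _ = splprep(strip_points.T, s=0.0, k=3)   # interpolating cubic B-spline
    u = np.linspace(0.0, 1.0, n_samples)
    return np.stack(splev(u, tck), axis=1)         # shape (n_samples, 3)

# Example: a coarse 5-point strip refined into 100 smooth samples.
coarse = np.array([[0.0, 0.0, 0.0], [0.2, 0.5, 0.10], [0.4, 1.0, 0.15],
                   [0.5, 1.6, 0.10], [0.55, 2.2, 0.0]])
print(refine_strip(coarse).shape)  # (100, 3)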