Pose Guided Human Motion Transfer by Exploiting 2D and 3D Information

2022 International Conference on 3D Vision (3DV)

Abstract
Human motion transfer aims to animate a person in a source image, driven by the poses of a person in a target video. To warp (transfer) human poses, most existing methods use optical flow or affine transformations as an intermediate representation, followed by a generator module that performs the motion transfer. Existing methods perform well in terms of reconstruction quality, but the quality of the pose transfer itself has received less attention, although it is an important part of the motion transfer process. Therefore, in this paper, we propose a method that focuses on both reconstruction quality and pose consistency. In contrast to existing methods, which perform warping in either 2D or 3D space, we introduce a strategy that combines the warped features from both 2D and 3D space to alleviate the self-occlusion problem. In this way, our method benefits from both 2D (robustness) and 3D (steering) information to guide the generation process. To reduce the pose error caused by inaccurate 3D estimation, we propose a method that maintains semantic consistency between predictions and target images in the arm and leg regions. Experiments on large-scale datasets show that the proposed method outperforms existing methods. Ablation studies clarify the benefits of feature fusion and semantic consistency.
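The abstract does not specify how the 2D- and 3D-warped features are combined. A minimal sketch of one plausible fusion scheme, assuming a per-pixel visibility weight derived from the 3D estimate (all names and the blending rule here are hypothetical, not the paper's actual architecture):

```python
import numpy as np

def fuse_warped_features(feat_2d, feat_3d, visibility):
    """Blend 2D- and 3D-warped feature maps (H, W, C) with a per-pixel
    visibility weight in [0, 1]. Where the 3D warp is reliable (visible,
    well-estimated geometry) the 3D branch dominates; in self-occluded
    regions the more robust 2D branch takes over.

    Hypothetical illustration only; the paper's fusion module is likely
    learned rather than a fixed convex blend.
    """
    w = np.clip(visibility, 0.0, 1.0)[..., None]  # broadcast over channels
    return w * feat_3d + (1.0 - w) * feat_2d

# Toy example: 4x4 feature maps with 8 channels.
rng = np.random.default_rng(0)
f2d = rng.standard_normal((4, 4, 8))
f3d = rng.standard_normal((4, 4, 8))
vis = np.zeros((4, 4))
vis[:, :2] = 1.0  # left half: trust the 3D warp
fused = fuse_warped_features(f2d, f3d, vis)
assert np.allclose(fused[:, :2], f3d[:, :2])  # visible -> 3D features
assert np.allclose(fused[:, 2:], f2d[:, 2:])  # occluded -> 2D features
```

In a real model the visibility map would typically come from the 3D body fit (e.g. rendered self-occlusion), and the convex blend would be replaced by a learned gating network.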