1Tsinghua University 2Tencent AI Lab
Previous portrait image generation methods roughly fall into two categories: 2D GANs and 3D-aware GANs. 2D GANs can generate high-fidelity portraits, but with low view consistency. 3D-aware GAN methods can maintain view consistency, but their generated images are not locally editable. To overcome these limitations, we propose FENeRF, a 3D-aware generator that can produce view-consistent and locally editable portrait images. Our method uses two decoupled latent codes to generate corresponding facial semantics and texture in a spatially aligned 3D volume with shared geometry. Benefiting from such an underlying 3D representation, FENeRF can jointly render the boundary-aligned image and semantic mask, and use the semantic mask to edit the 3D volume via GAN inversion. We further show that such a 3D representation can be learned from widely available monocular image and semantic mask pairs. Moreover, we reveal that jointly learning semantics and texture helps to generate finer geometry. Our experiments demonstrate that FENeRF outperforms state-of-the-art methods in various face editing tasks.
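To make the shared-geometry idea concrete, below is a minimal sketch (hypothetical PyTorch, not the authors' released code) of volume rendering in which one set of density-derived weights composites both the color field and the semantic field, so the rendered image and semantic mask are boundary-aligned by construction. The function name and all tensor inputs (sigma, rgb, sem, deltas) are illustrative assumptions.

```python
import torch

def render_shared_density(sigma, rgb, sem, deltas):
    """Composite color and semantics with ONE set of alpha weights.

    sigma:  (R, S)    densities at S samples along R rays
    rgb:    (R, S, 3) per-sample colors
    sem:    (R, S, C) per-sample semantic logits (C classes)
    deltas: (R, S)    distances between consecutive samples
    """
    alpha = 1.0 - torch.exp(-sigma * deltas)            # per-sample opacity
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)  # accumulated transparency
    trans = torch.cat([torch.ones_like(trans[:, :1]),
                       trans[:, :-1]], dim=-1)          # shift so T_i = prod_{j<i}
    weights = alpha * trans                             # (R, S)

    # The SAME weights composite both fields, so edges in the rendered
    # semantic mask coincide with edges in the rendered image.
    image = (weights[..., None] * rgb).sum(dim=1)       # (R, 3)
    mask  = (weights[..., None] * sem).sum(dim=1)       # (R, C)
    return image, mask
```

Because the semantic branch has no density of its own, any latent-code edit that moves the density field moves the image and the mask together, which is what makes mask-driven editing via GAN inversion well posed.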
Figure 2. Overall pipeline of FENeRF. Our generator produces spatially aligned density, semantic, and texture fields conditioned on the disentangled latent codes z_s and z_t. The positional feature embedding e_coord is also injected into the network, together with the view direction, for color prediction, to preserve high-frequency details in the generated image. Because both fields share the same density, the rendered RGB image and semantic map are aligned. Finally, two discriminators D_s and D_c are fed with semantic map/image pairs and real/fake image pairs, and are trained with the adversarial objectives L_Ds and L_Dc, respectively.
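The sketch below illustrates one plausible form of the two adversarial objectives L_Ds and L_Dc described in the caption, using the standard non-saturating GAN loss; the paper's exact loss formulation may differ, and D_s, D_c, and all tensor arguments are assumed placeholders. Following the caption, D_s scores concatenated image/semantic-map pairs while D_c scores images alone.

```python
import torch
import torch.nn.functional as F

def discriminator_losses(D_s, D_c, fake_img, fake_sem, real_img, real_sem):
    """Sketch of the two discriminator objectives (non-saturating GAN loss).

    D_c sees images alone (real vs. generated);
    D_s sees (image, semantic map) pairs to enforce image/mask alignment.
    All tensors are assumed NCHW; generated inputs are detached so only
    the discriminators are updated by these losses.
    """
    # L_Dc: real/fake image pairs
    loss_Dc = (F.softplus(-D_c(real_img)).mean()
               + F.softplus(D_c(fake_img.detach())).mean())

    # L_Ds: semantic map / image pairs, concatenated along channels
    real_pair = torch.cat([real_img, real_sem], dim=1)
    fake_pair = torch.cat([fake_img.detach(), fake_sem.detach()], dim=1)
    loss_Ds = (F.softplus(-D_s(real_pair)).mean()
               + F.softplus(D_s(fake_pair)).mean())
    return loss_Ds, loss_Dc
```

Pairing the image with its semantic map in D_s penalizes misaligned boundaries, complementing D_c's photorealism objective.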
@inproceedings{sun2022fenerf,
title={{FENeRF}: Face editing in neural radiance fields},
author={Sun, Jingxiang and Wang, Xuan and Zhang, Yong and Li, Xiaoyu and Zhang, Qi and Liu, Yebin and Wang, Jue},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={7672--7682},
year={2022}
}
This paper is sponsored by NSFC No. 62125107.