Next3D

Generative Neural Texture Rasterization for 3D-Aware Head Avatars

arXiv

Tsinghua University

Tencent AI Lab

Tsinghua University

Tencent AI Lab

Tencent AI Lab

Tsinghua University

Tsinghua University

Paper
Code
Data

Overview Video

Abstract & Method

We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images. To achieve both deformation accuracy and topological flexibility, we present a 3D representation called Generative Texture-Rasterized Tri-planes.

The proposed representation learns Generative Neural Textures on top of parametric mesh templates and then projects them into three orthogonal-viewed feature planes through rasterization, forming a tri-plane feature representation for volume rendering. In this way, we combine the fine-grained expression control of mesh-guided explicit deformation with the flexibility of an implicit volumetric representation. We further propose specific modules for modeling the mouth interior, which is not taken into account by 3DMMs.
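To make the idea concrete, below is a minimal sketch (ours, not the released code) of texture-rasterized tri-planes in PyTorch: per-vertex neural texture features on the deformed mesh are projected onto three orthogonal axis-aligned planes, and a 3D query point is then decoded by sampling all three planes, as in standard tri-plane volume rendering. Nearest-cell point splatting stands in for the mesh rasterization used in the paper, and all names, shapes, and resolutions are illustrative assumptions.

```python
# Sketch only: splat per-vertex neural texture features onto three orthogonal
# planes (a simplified stand-in for mesh rasterization), then sample the
# resulting tri-plane at 3D query points for volume rendering.
import torch
import torch.nn.functional as F


def splat_to_planes(verts, feats, resolution=64):
    """Scatter per-vertex features onto XY, XZ, and YZ feature planes.

    verts: (N, 3) mesh vertices in [-1, 1]^3 (already deformed by the 3DMM expression).
    feats: (N, C) neural texture features attached to the vertices.
    Returns three (C, R, R) planes. Real rasterization would interpolate over
    triangles with z-buffering; nearest-cell splatting keeps the sketch short.
    """
    n, c = feats.shape
    planes = []
    for axes in [[0, 1], [0, 2], [1, 2]]:           # XY, XZ, YZ projections
        plane = torch.zeros(c, resolution, resolution)
        uv = ((verts[:, axes] * 0.5 + 0.5) * (resolution - 1)).long().clamp(0, resolution - 1)
        plane[:, uv[:, 1], uv[:, 0]] = feats.t()    # last write wins; no z-buffer
        planes.append(plane)
    return planes


def sample_triplane(planes, points):
    """Query tri-plane features for 3D points (M, 3) in [-1, 1]^3 and sum them."""
    out = 0.0
    for plane, axes in zip(planes, [[0, 1], [0, 2], [1, 2]]):
        grid = points[:, axes].view(1, -1, 1, 2)                      # (1, M, 1, 2)
        sampled = F.grid_sample(plane.unsqueeze(0), grid,
                                mode='bilinear', align_corners=True)  # (1, C, M, 1)
        out = out + sampled[0, :, :, 0].t()                           # (M, C)
    return out


# Toy usage: 5k "vertices" with 32-channel neural texture features.
verts = torch.rand(5000, 3) * 2 - 1
feats = torch.randn(5000, 32)
planes = splat_to_planes(verts, feats)
query = torch.rand(128, 3) * 2 - 1
features = sample_triplane(planes, query)   # fed to an MLP for density/color in practice
print(features.shape)                       # torch.Size([128, 32])
```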

Facial Animation Results

Geometry

Here we visualize the animated shapes with the camera pose fixed.

One-Shot Facial Avatars

Next3D can create 3D-aware facial avatars from a single real portrait image via GAN inversion.
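For reference, GAN inversion in this setting typically means optimizing a latent code so that the generator's rendering from the portrait's estimated camera pose reconstructs the input image; a minimal sketch under that assumption follows. `generator`, its `mean_latent`/`synthesis` interface, `target_image`, and `camera_pose` are hypothetical placeholders, not the released API.

```python
# Generic latent-optimization GAN inversion loop (an assumed procedure, not the
# authors' released code): fit a latent code w so the generator reproduces the
# target portrait from its estimated camera pose.
import torch


def invert(generator, target_image, camera_pose, steps=500, lr=1e-2):
    # Start from the average latent and optimize it directly (w-space inversion).
    w = generator.mean_latent().detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        rendering = generator.synthesis(w, camera_pose)   # image rendered for this pose
        loss = torch.nn.functional.l1_loss(rendering, target_image)
        # In practice a perceptual (LPIPS) term and latent regularization are added.
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()   # reusable latent: re-render under new poses and expressions
```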

3D-Aware Stylization

Next3D can create out-of-domain facial avatars through 3D-aware stylization.

Citation

@article{sun2022next,
  title={Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars},
  author={Sun, Jingxiang and Wang, Xuan and Wang, Lizhen and Li, Xiaoyu and Zhang, Yong and Zhang, Hongwen and Liu, Yebin},
  journal={arXiv preprint arXiv:2205.15517},
  year={2022},
}
