PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN


Computer Graphics Forum (EGSR 2023)

Kai-En Lin1 Alex Trevithick1 Keli Cheng2 Michel Sarkis2
Mohsen Ghafoorian2 Ning Bi2 Gerhard Reitmayr2 Ravi Ramamoorthi1
1University of California, San Diego 2Qualcomm Technologies Inc.




Abstract:

Portrait synthesis creates realistic digital avatars that enable users to interact with others in a compelling way. Recent advances in StyleGAN and its extensions have shown promising results in synthesizing photorealistic images and accurate reconstructions of human faces. However, previous methods often focus on frontal face synthesis, and most cannot handle large head rotations due to the training data distribution of StyleGAN. In this work, our goal is to take as input a monocular video of a face and create an editable dynamic portrait that can handle extreme head poses. The user can create novel viewpoints, edit the appearance, and animate the face. Our method utilizes pivotal tuning inversion (PTI) to learn a personalized video prior from a monocular video sequence. We can then feed pose and expression coefficients to MLPs and manipulate the latent vectors to synthesize different viewpoints and expressions of the subject. We also propose novel loss functions to further disentangle pose and expression in the latent space. Our algorithm performs much better than previous approaches on monocular video datasets, and it runs in real time at 54 FPS on an RTX 3080.
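The latent-manipulation idea in the abstract can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: we assume a StyleGAN2-style W+ latent of shape 18×512, small assumed pose and expression coefficient sizes, and a toy two-layer MLP per condition whose outputs offset the PTI pivot latent.

```python
import numpy as np

# Illustrative sketch: MLPs map pose and expression coefficients to
# offsets in StyleGAN's W+ latent space. All dimensions and the network
# architecture here are assumptions, not the paper's actual design.

LATENT_LAYERS, LATENT_DIM = 18, 512   # typical StyleGAN2 W+ shape
POSE_DIM, EXPR_DIM = 6, 64            # assumed coefficient sizes

rng = np.random.default_rng(0)

def make_mlp(in_dim, hidden, out_dim):
    """Initialize weights for a toy 2-layer MLP (assumed architecture)."""
    w1 = rng.standard_normal((in_dim, hidden)) * 0.02
    b1 = np.zeros(hidden)
    w2 = rng.standard_normal((hidden, out_dim)) * 0.02
    b2 = np.zeros(out_dim)
    return w1, b1, w2, b2

def mlp_forward(params, x):
    w1, b1, w2, b2 = params
    h = np.maximum(x @ w1 + b1, 0.0)  # ReLU hidden layer
    return h @ w2 + b2

pose_mlp = make_mlp(POSE_DIM, 256, LATENT_LAYERS * LATENT_DIM)
expr_mlp = make_mlp(EXPR_DIM, 256, LATENT_LAYERS * LATENT_DIM)

def edit_latent(w_pivot, pose, expr):
    """Offset the PTI pivot latent by pose- and expression-conditioned deltas."""
    dw = mlp_forward(pose_mlp, pose) + mlp_forward(expr_mlp, expr)
    return w_pivot + dw.reshape(LATENT_LAYERS, LATENT_DIM)

# The edited latent would then be fed to the (PTI-tuned) StyleGAN generator.
w_pivot = rng.standard_normal((LATENT_LAYERS, LATENT_DIM))
w_edited = edit_latent(w_pivot, np.zeros(POSE_DIM), np.zeros(EXPR_DIM))
print(w_edited.shape)  # (18, 512)
```

With zero coefficients and zero biases the offsets vanish, so the edited latent equals the pivot; nonzero pose or expression inputs shift it away from the pivot, which is the mechanism the abstract describes.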






Bibtex

    @article{lin2023pvp,
      journal = {Computer Graphics Forum},
      title = {PVP: Personalized Video Prior for Editable Dynamic Portraits using StyleGAN},
      author = {Kai-En Lin and Alex Trevithick and Keli Cheng and Michel Sarkis and Mohsen Ghafoorian and Ning Bi and Gerhard Reitmayr and Ravi Ramamoorthi},
      year = {2023},
    }