Makeup Prior Models for 3D Facial Makeup Estimation and Applications

1 CyberAgent, AI Lab  2 University of Tsukuba
CVPR 2024

Banner image

Example of 3D facial makeup estimation and applications using makeup prior models. Top left: the effectiveness of our prior models (PCA and StyleGAN2) for estimating 3D facial makeup layers. Bottom left: the result of 3D face reconstruction using makeup prior models. Our method accurately recovers the makeup of 3D faces and is compatible with existing 3D face reconstruction frameworks. Right: 3D makeup interpolation and transfer applications using the PCA-based prior model. Note that the StyleGAN2-based prior model has equivalent functionality.

Abstract

In this work, we introduce two types of makeup prior models to extend existing 3D face prior models: PCA-based and StyleGAN2-based priors. The PCA-based prior model is a linear model that is easy to construct and is computationally efficient. However, it retains only low-frequency information. Conversely, the StyleGAN2-based model can represent high-frequency information, at a relatively higher computational cost than the PCA-based model. Although there is a trade-off between the two models, both are applicable to 3D facial makeup estimation and related applications. By leveraging makeup prior models and designing a makeup consistency module, we effectively address the challenges that previous methods faced in robustly estimating makeup, particularly when handling self-occluded faces. In our experiments, we demonstrate that our approach reduces computational costs by several orders of magnitude, achieving speeds up to 180 times faster. In addition, by improving the accuracy of the estimated makeup, we confirm that our methods are highly advantageous for various 3D facial makeup applications such as 3D makeup face reconstruction, user-friendly makeup editing, makeup transfer, and interpolation.
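To illustrate the linear nature of the PCA-based prior described above, here is a minimal sketch of how such a prior could be built and used. This is not the paper's implementation: the toy data, component count, and function names are all assumptions; in practice the prior would be fitted on flattened makeup UV textures from the makeup-extract-dataset.

```python
import numpy as np

def fit_pca_prior(textures, n_components=5):
    """Fit a linear (PCA) prior on flattened makeup textures.

    textures: (N, D) array, one flattened makeup layer per row.
    Returns the mean texture and an orthonormal basis (K, D).
    """
    mean = textures.mean(axis=0)
    centered = textures - mean
    # SVD of the centered data; rows of Vt are principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    basis = Vt[:n_components]
    return mean, basis

def reconstruct(coeffs, mean, basis):
    """Linear model: texture = mean + coeffs @ basis."""
    return mean + coeffs @ basis

# Toy example with random stand-in "textures" (100 samples, 64 dims).
rng = np.random.default_rng(0)
data = rng.normal(size=(100, 64))
mean, basis = fit_pca_prior(data, n_components=5)

# Project one sample onto the prior and reconstruct it.
coeffs = (data[0] - mean) @ basis.T
approx = reconstruct(coeffs, mean, basis)
```

Because the model is linear, interpolating between two faces' makeup reduces to blending their coefficient vectors, which is what makes applications like makeup interpolation and transfer straightforward with this prior; it also keeps only the low-frequency structure captured by the leading components.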

Datasets Used for Makeup Prior Model Construction

This project is based on and extends the following project and uses some of its dataset, code, and models:

Makeup Extraction of 3D Representation via Illumination-Aware Image Decomposition,
Xingchao Yang, Takafumi Taketomi, Yoshihiro Kanamori,
Computer Graphics Forum (Proc. of Eurographics 2023)

The makeup-extract-dataset is used to build the makeup prior models.

If you find our models and dataset useful, please consider citing the following papers:

BibTeX

@inproceedings{yang2024makeuppriors,
  author = {Yang, Xingchao and Taketomi, Takafumi and Endo, Yuki and Kanamori, Yoshihiro},
  title = {Makeup Prior Models for {3D} Facial Makeup Estimation and Applications},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2024}
}
@article{makeup-extraction,
  author = {Yang, Xingchao and Taketomi, Takafumi and Kanamori, Yoshihiro},
  title = {Makeup Extraction of {3D} Representation via Illumination-Aware Image Decomposition},
  journal = {Computer Graphics Forum},
  volume = {42},
  number = {2},
  pages = {293--307},
  year = {2023}
}