
Mesoscopic Facial Geometry Inference Using Deep Neural Networks

Take a look at a novel approach to synthesizing facial geometry proposed by researchers from USC Institute for Creative Technologies, University of Southern California, Google, and Pinscreen.

The paper presents a learning-based approach for synthesizing facial geometry at medium and fine scales from diffusely-lit facial texture maps. Notably, the method avoids a common pitfall of previous approaches: incorrectly interpreting dark features, such as moles, hair stubble, and occluded pores, as concavities.

Abstract

We present a learning-based approach for synthesizing facial geometry at medium and fine scales from diffusely-lit facial texture maps. When applied to an image sequence, the synthesized detail is temporally coherent. Unlike current state-of-the-art methods, which assume "dark is deep", our model is trained with measured facial detail collected using polarized gradient illumination in a Light Stage. This enables us to produce plausible facial detail across the entire face, including where previous approaches may incorrectly interpret dark features as concavities such as at moles, hair stubble, and occluded pores. Instead of directly inferring 3D geometry, we propose to encode fine details in high-resolution displacement maps which are learned through a hybrid network adopting the state-of-the-art image-to-image translation network and super-resolution network. To effectively capture geometric detail at both mid- and high frequencies, we factorize the learning into two separate sub-networks, enabling the full range of facial detail to be modeled. Results from our learning-based approach compare favorably with a high-quality active facial scanning technique and require only a single passive lighting condition without a complex scanning setup.
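For readers who want a concrete picture of the two-branch idea, here is a minimal PyTorch sketch of a network that maps a diffusely-lit texture map to a displacement map through two separate sub-networks, one for mid-frequency and one for high-frequency detail. The layer counts, channel widths, and simple encoder-decoder blocks are illustrative assumptions only; the paper's actual hybrid model builds on an image-to-image translation network and a super-resolution network and is trained against Light Stage measurements, none of which is reproduced here.

```python
# Minimal sketch (not the authors' code): predict mid- and high-frequency
# displacement maps from a facial texture map using two separate sub-networks,
# then combine them into a single displacement map.
import torch
import torch.nn as nn


def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv + ReLU; a stand-in for the heavier blocks a real model would use."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )


class DisplacementBranch(nn.Module):
    """Tiny encoder-decoder mapping an RGB texture map to a 1-channel displacement map."""

    def __init__(self, base_ch: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            conv_block(3, base_ch),
            nn.MaxPool2d(2),                 # downsample to gather spatial context
            conv_block(base_ch, base_ch * 2),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(base_ch * 2, base_ch),
            nn.Conv2d(base_ch, 1, kernel_size=3, padding=1),  # scalar displacement per texel
        )

    def forward(self, texture: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(texture))


class TwoScaleDisplacementNet(nn.Module):
    """Factorizes detail inference into mid- and high-frequency sub-networks,
    mirroring the paper's idea of two separate branches whose outputs together
    cover the full range of facial detail."""

    def __init__(self):
        super().__init__()
        self.mid_branch = DisplacementBranch()
        self.high_branch = DisplacementBranch()

    def forward(self, texture: torch.Tensor) -> torch.Tensor:
        mid = self.mid_branch(texture)    # coarser, medium-scale surface detail
        high = self.high_branch(texture)  # fine detail such as pores and stubble
        return mid + high                 # combined high-resolution displacement map


if __name__ == "__main__":
    # Dummy 256x256 texture map; a real pipeline would feed UV-space texture maps
    # and supervise against displacement maps measured in a Light Stage.
    net = TwoScaleDisplacementNet()
    fake_texture = torch.rand(1, 3, 256, 256)
    displacement = net(fake_texture)
    print(displacement.shape)  # torch.Size([1, 1, 256, 256])
```

The key design point the sketch tries to capture is the factorization: instead of asking one network to model everything from broad wrinkles to individual pores, each sub-network handles one band of geometric frequency and the results are summed into the final displacement map.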

Are you excited? You can find the full paper and study it here.
