Text2Tex: A Novel Method for Generating 3D Textures From Text Prompts

The technique incorporates inpainting into a pre-trained image diffusion model to synthesize partial textures from multiple viewpoints.

A team of researchers from the Technical University of Munich and Snap Research has published a paper unveiling Text2Tex, a novel method for generating textures for 3D models from text prompts.

According to the team, the technique incorporates inpainting into a pre-trained depth-aware image diffusion model to synthesize high-resolution partial textures from multiple viewpoints. Additionally, the researchers introduced an automatic view sequence generator that determines the next best view for updating the partial texture, helping to avoid artifacts in the generated result.
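To give a rough sense of what a "next best view" selector might look like, here is a minimal Python sketch that scores candidate viewpoints by how much missing or low-quality texture they would expose. The quality_mask_for_view helper is a hypothetical placeholder, and the coverage-counting rule is an assumption for illustration, not the paper's exact criterion.

```python
# Hypothetical sketch of an automatic next-best-view selector.
# quality_mask_for_view is a placeholder supplied by the caller; it should
# return a boolean array that is True wherever the texels visible from a
# given viewpoint are still missing, stretched, or blurry.
import numpy as np

def next_best_view(candidate_views, quality_mask_for_view):
    """Return the candidate viewpoint that covers the most low-quality texels."""
    def coverage(view):
        # Count how many problematic texels this viewpoint would let us fix.
        return int(np.count_nonzero(quality_mask_for_view(view)))
    return max(candidate_views, key=coverage)
```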

"In progressive texture generation, we start by rendering the object from an initial preset viewpoint. We generate a new appearance according to the input prompt via a depth-to-image diffusion model, and project the generated image back to the partial texture. Then, we repeat this process until the last preset viewpoint to output the initial textured mesh," commented the team. "In the subsequent texture refinement, we update the initial texture from a sequence of automatically selected viewpoints to refine the stretched and blurry artifacts."

You can learn more here.
