Artomatix: Revolutionising Texture Production

Dr. Eric Risser and lead artist Seán Walsh talked about the power of Artomatix, a new service that uses deep learning and clever algorithms to create fantastic modern textures from different sources.

Artomatix

Dr. Eric Risser: Artomatix was founded by Dr. Eric Risser after he finished his PhD on the topic of creative AI in early 2012. At the time there was no industry surrounding this topic and Eric thought there should be.

Today the main people behind Artomatix are Eric along with his co-founder Bart Kiss and the company’s chairman Steve Collins, who also founded Havok and Swrve.

Lead artist Seán Walsh is on hand to answer artist-specific questions.

What are the main technical developments driving the adoption of AI and procedural texture work?

Dr. Eric Risser: That’s a great question! There are a few reasons for the surge in creative AI recently. I think the key developments have been:

(1) Nvidia’s commitment to supporting Deep Learning and all the tools they’ve built to help people turn their GPU into a neural network supercomputer. This super powerful, super parallel hardware gave AI programmers the critical mass of neurons they needed to make Deep Learning perform well on visual tasks.

(2) A recent cultural shift in academia toward being significantly more forthcoming with data, know-how and source code. When I finished my PhD only five years ago, it was normal to publish your work around once a year, and sharing code was very uncommon. This made it slow and difficult to extend other people’s work or prototype new ideas quickly. Recently a website called arXiv has made it standard practice for researchers to pre-publish their ideas, which has taken the flow of interesting new papers from every few months to every few days. People are also far more likely to share their training data and source code on GitHub now, which means that the moment a researcher comes up with an idea and implements it, a week later they can publish a paper on the topic. The next day another researcher can read that paper, get a great follow-up idea, start from the first researcher’s code and have a follow-up paper published two weeks later. At Artomatix we really love this recent trend towards global collaboration and sharing of ideas. We’ve decided to give back to the research community by publishing our recent ideas as well!

Algorithm

Dr. Eric Risser: Artomatix is capable of performing a number of seemingly different operations on a texture, but under the hood there’s actually a lot of machinery in common between all of these features.

Typically when we get a texture from the user, our first step is to build a set of textures that highlight features from the input at different frequencies. The next step is to filter those images in a way that replaces the color information we’d typically have in a pixel with values that correspond to a semantically meaningful feature. So instead of colors, we think of images as collections of overlapping shapes. Next we do some kind of statistical process that figures out how all these various features relate to each other globally. These three things pretty much always happen.
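As a rough illustration of that first multi-frequency step, here is a minimal sketch assuming a simple Laplacian-pyramid-style decomposition. Artomatix’s actual filters and learned feature extractors are not public, so the function name and parameters below are illustrative only.

```python
# Hedged sketch: split a texture into frequency bands, one common way to
# "highlight features from the input at different frequencies".
import numpy as np
from scipy.ndimage import gaussian_filter

def build_frequency_bands(texture, levels=4):
    """texture: (H, W, 3) float array. Returns a list of band-pass images,
    each highlighting detail at one spatial frequency, plus a residual."""
    bands = []
    current = texture.astype(np.float64)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=(2, 2, 0))  # blur rows/cols only
        bands.append(current - blurred)   # detail lost to blurring = this band
        current = blurred[::2, ::2]       # downsample; next band is coarser
    bands.append(current)                 # low-frequency residual image
    return bands
```

Summing the upsampled bands back together would recover the original image, which is what makes this kind of decomposition a convenient workspace for editing features at one scale without disturbing the others.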

Color pixels can be thought of as three-dimensional vectors, or as points in 3D space where one axis represents the red value, another the green and the last the blue. Once we filter the textures, each pixel contains hundreds of dimensions because they encode perceptually meaningful features. In reality this number changes throughout the algorithm, but let’s say that each feature has 500 dimensions. There are typically millions of pixels in an image. If we were to plot these millions of points in 500-dimensional space, we’d get a point cloud. This cloud would have a unique shape. Every process we do is essentially playing with these point clouds. If we want to grow a texture to a larger size, that’s really just increasing the density of the cloud. If we want to transfer a texture into the style of another, we’re just combining the fine-scale shape of one cloud with the coarse shape of another cloud. When we draw masks to guide the synthesis process, we’re really just telling the AI that one area of the cloud is more or less important than the rest. Once we’re done creating a new point cloud that has the properties we’re looking for, we invert the first process and turn everything back into color textures.
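To make the point-cloud picture concrete, here is a hedged sketch of one classical way to “combine the shapes of two clouds”: aligning the mean and covariance of one feature cloud with another’s via whitening and coloring. This is a standard statistical trick, not necessarily the process Artomatix uses.

```python
# Hedged sketch: reshape a "content" point cloud so its global statistics
# (mean, covariance) match those of a "style" point cloud.
import numpy as np

def match_cloud_shape(content, style):
    """content, style: (num_pixels, num_dims) feature point clouds.
    Returns the content points with the style cloud's global shape."""
    mu_c, mu_s = content.mean(0), style.mean(0)
    eye = np.eye(content.shape[1])
    cov_c = np.cov(content - mu_c, rowvar=False) + 1e-5 * eye
    cov_s = np.cov(style - mu_s, rowvar=False) + 1e-5 * eye
    # Whiten the content cloud, then color it with the style covariance.
    ec, vc = np.linalg.eigh(cov_c)
    es, vs = np.linalg.eigh(cov_s)
    whiten = vc @ np.diag(ec.clip(1e-8) ** -0.5) @ vc.T
    color = vs @ np.diag(es.clip(0) ** 0.5) @ vs.T
    return (content - mu_c) @ whiten @ color + mu_s
```

In practice the rows of these arrays would come from flattening filtered feature maps, one row per pixel, exactly the millions-of-points-in-500-dimensions picture described above.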

This doesn’t properly explain everything we do, and of course I’m leaving out a lot of details, but this could be seen as the theoretical core of creative AI.

Changes

Seán Walsh: So far, our main algorithm can remove seams, remove specific texture features, grow a texture to any specified size (up to 8k), mutate a texture to create multiple variations of itself and make Infinity Tiles. Our newer neural-network-based features include Style Transfer, in which the user supplies two input images and the result combines the content of one with the appearance or ‘style’ of the other.

We also have non-machine-learning features such as Shape From Shading (not unlike Allegorithmic’s Bitmap 2 Material, Quixel’s NDO or Knald) that we are currently improving in some interesting ways using neural networks.
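Shape From Shading proper, inferring geometry from a single lit image, is the hard part, and Artomatix’s approach to it is not described here. As a loose illustration, the sketch below shows only the final, well-known step such tools share: converting a recovered height map into a tangent-space normal map. The function name and strength parameter are assumptions for the example.

```python
# Hedged sketch: derive a tangent-space normal map from a height map,
# the common last step of shape-from-shading style tools.
import numpy as np

def height_to_normal_map(height, strength=1.0):
    """height: (H, W) float array. Returns an (H, W, 3) normal map
    encoded in the usual [0, 1] range for an 8-bit texture."""
    dy, dx = np.gradient(height)              # surface slopes per pixel
    normals = np.dstack((-dx * strength, -dy * strength, np.ones_like(height)))
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals * 0.5 + 0.5                # map [-1, 1] to [0, 1]
```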

Output

Seán Walsh: Our algorithm does its best to retain the highest level of detail, and quite often does so even better than a manual effort would. Furthermore, we fully support physically based rendering (PBR), which has been a trend in game development over the last few years. This means our output supplies all the image maps required to slot readily into a modern game production pipeline.

All of our features provide artistic controls to help sculpt the output you desire. This is in line with our idea of ‘example-based content creation’, in which an artist specifies the input, controls some influencing factors and lets our AI technology complete the task. That being said, it is an AI, and there is currently a limit to its ability to create new features in an image: it just changes, adapts and grows what’s there. Essentially it is statistically, but not artistically, aware, and as such still requires a human to creatively review, iterate and adapt (for now!).

Style Transfer

Dr. Eric Risser: Style Transfer is a simple concept: you have two images, one that contains a distinct composition and another that contains a distinct texture. Going back to the earlier question about how our AI works under the hood, we essentially just map the texture features from one image over the composition features of the other. The tricky part is melding them together so they look natural and artistic.
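One widely published way to formalize “texture features” for this kind of melding is the Gram matrix of neural feature maps (Gatys et al., 2015). The sketch below shows that statistic and a loss built on it; Artomatix’s production approach is proprietary and may well differ, so treat this as background rather than their method.

```python
# Hedged sketch: the Gram-matrix style statistic from the Gatys-style
# formulation of neural style transfer.
import numpy as np

def gram_matrix(features):
    """features: (H, W, C) feature maps from one network layer.
    The Gram matrix captures which features co-occur, i.e. the 'style'."""
    flat = features.reshape(-1, features.shape[-1])   # (H*W, C) point cloud
    return flat.T @ flat / flat.shape[0]

def style_loss(generated, style):
    """Squared distance between the style statistics of two feature maps;
    minimizing this pushes the generated image toward the style's texture."""
    diff = gram_matrix(generated) - gram_matrix(style)
    return float((diff ** 2).mean())
```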

Not every image is a good candidate for style transfer. For example, if the image you want to change the style of doesn’t really have complex features or a distinct composition that can be preserved (e.g. grass, asphalt, sand), then style transfer will produce results similar to our texture mutation feature.

Unlike apps such as Prisma, our style transfer technology was designed for maximum quality and flexibility. As such, you can supply any style image or material. We’re currently working on extending our approach to work on the entire PBR map set.

Details

Dr. Eric Risser: Our technology is great for detailed objects. We support up to 8k textures right now and we’ll start supporting 16k with our next release. In general the level of detail the artist puts in is the level of detail they get out.

Sign up

Seán Walsh: You can sign up for a trial at www.artomatix.com! We realize that there are many types of studios out there looking to level up their texture-processing workflow, so we’re offering a few solutions depending on who you are and what you need.

(1) A tool: We launched a web-browser based prototype of this tool last March at GDC to garner interest and feedback. We’re now working on the next evolution of that tool. We’re moving away from the browser and offering a more powerful, more responsive local application. We’re currently testing our alpha with several industry partners and we’ll be opening up our beta in October. If you’d like early access, please reach out!

(2) An API: For the more technical studios out there with an automated texture processing pipeline, we offer an API to help your programmers integrate Artomatix directly into your infrastructure.

(3) A Service: Don’t have artists or programmers on staff, but do have a backlog of hundreds to thousands of textures that need to be fixed up? We employ in-house artists to process your textures so you don’t have to. We’re already working with some of the top Fortune 500 companies in the industrial design space using this model.

Dr. Eric Risser and Seán Walsh, the Artomatix team

Interview conducted by Kirill Tokarev
