Kieran Fogarty showed us how Stable Diffusion can be used for creating 3D textures and spoke about the ethical side of using AI generators in production.
Hi, my name is Kieran Fogarty, and I have worked in the games industry since 2005. Starting at EA Sports, I worked on numerous projects as an Environment and Lighting Artist and took on Lead roles at companies around Vancouver.
These days, I wear the Art Director and Game Designer hats, working on a social multiplayer game named Moonland. I have also done quite a bit of Photogrammetry and Digitization work and ran a 3D printing store for a few years.
Using Stable Diffusion for Texturing
I have been interested in AI art and AI chatbots since the early Alice chatbot days, so it's always been part of my feed in one shape or form. I recently saw a YouTube video from Default Cube, where he was able to create an AC unit texture on just a cube. As he added cuts to the object and extruded faces, the Blender add-on seemed to adapt and create better and more accurate textures. I had to try this.
The first thing I tried was different types of city buildings and city streets, realizing early on that the texture can only be camera-projected. The add-on seems to cut the 3D model out as a 2D silhouette, render a 2D image of whatever you request onto that silhouette, and re-project it back onto the 3D object. This made me realize that an object can be mirrored to get a better result. At first, I mirrored a model of a Hovercar I had downloaded from Blendswap. It was messy, but I was impressed. I then tried quartering the model and mirroring it on both the X and Y axes, which produces a symmetrical object quite quickly.
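The re-projection step described above can be sketched in a few lines: each vertex gets the UV coordinate where it lands on the camera's image plane, so sampling the generated image at that UV paints the picture back onto the mesh. This is a minimal NumPy sketch under my own assumptions (a simple pinhole camera, camera-space vertices), not the add-on's actual code; the function and parameter names are illustrative.

```python
import numpy as np

def project_uvs(vertices, fov_deg=60.0, aspect=1.0):
    """Project camera-space vertices (-Z forward) to [0, 1] UV coordinates.

    A vertex's UV is where it falls on the image plane, so sampling the
    generated texture at that UV "re-projects" the 2D image onto the mesh.
    """
    f = 1.0 / np.tan(np.radians(fov_deg) / 2.0)   # focal length from field of view
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    # Perspective divide: normalized device coordinates in [-1, 1]
    ndc_x = (f / aspect) * x / -z
    ndc_y = f * y / -z
    # Remap to [0, 1] texture space
    return np.stack([(ndc_x + 1) / 2, (ndc_y + 1) / 2], axis=1)

# A unit quad 2 units in front of the camera
quad = np.array([[-1.0, -1.0, -2.0],
                 [ 1.0, -1.0, -2.0],
                 [ 1.0,  1.0, -2.0],
                 [-1.0,  1.0, -2.0]])
uvs = project_uvs(quad)
```

Because the projection only sees the front-facing silhouette, surfaces angled away from the camera get stretched texels, which is exactly why mirroring a half or quarter of the model helps.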
This started with a large oil container in my texture-generated city scene and led me to a 5-minute model of a pumpkin with a quality shocking enough for me to post on social media. Does it look amazing? No. But was it created with a string of text? Mostly yes. I want to point out that this is the first version of this tech.
Within 20 minutes, I built a simple scene with a tower, a rocky field, and a path and bridge. Count me impressed.
If you want to play with the add-on yourself, one thing to remember is to note your seed value. After generating, each texture is named after its seed, and reusing a seed lets you make similar textures. Even so, consistency is still hard to achieve when generating textures.
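The reason the seed matters is that diffusion starts from random latent noise, and the seed pins that noise down. Here is a conceptual sketch using NumPy as a stand-in for the sampler's random source (Stable Diffusion's actual latents are tensors, but the seed mechanics are the same idea); the function name is my own.

```python
import numpy as np

def initial_noise(seed, shape=(64, 64, 4)):
    """Return the starting noise a diffusion run would denoise into an image.

    Same seed -> same starting noise -> a similar result for the same prompt,
    which is why saving the seed lets you regenerate related textures.
    """
    rng = np.random.default_rng(seed)
    return rng.standard_normal(shape)

a = initial_noise(1234)
b = initial_noise(1234)   # reusing the seed reproduces the noise exactly
c = initial_noise(5678)   # a different seed gives different noise, so a different texture
```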
I look forward to the future of this tech and want to try training my own model weights on all of my own art to see what comes out. Other than that, I feel we have to wait and see what happens next while AI and humanity become more complexly intertwined.
The Ethical Side of Things
Technology is a fast-moving, ever-changing two-headed beast, a double-edged sword. Can you ask it to paint you a portrait of your dog in 30 seconds with a few source images? Yes. Does it use copyrighted work as its learning material to construct the images for you? Yes. Jaron Lanier's opinion on the matter aligns with my own. During one of his talks about early VR and whether it would replace the outside world for people, he said that VR makes the real world that much more real when you take the headset off.
And I feel the same about AI-generated media. It truly shows the beauty in work created and shared by the individual human imagination.
At the moment, Adobe is already experimenting with NVIDIA and others to create AI-generated landscapes using simple brush strokes. With large libraries of textures and objects such as rocks, plants, and base materials becoming available on platforms like Quixel and Sketchfab, I see the potential for using AI tools to blend between library assets to create new and original works.
Humanity has always sought to bring technology to more people. These tools and their use seem to follow the same path as any invented technology that alters our progress on this planet and beyond.