
Look at This Volumetric Lego Generator Created In Substance 3D Designer

Maxime Guyard-Morin has walked us through the Lego Generator project, providing a detailed overview of the pipeline involved in incorporating volume and depth, and sharing his workflow in Substance 3D Designer.


Introduction

Hi! My name is Maxime Guyard-Morin. I like cats, metal music, and video games, and I try to find time to paint my small plastic dudes. I make a living by making nice pixels for 3D objects and surfaces, mostly textures and shaders. I also teach game art and texturing. If you're looking to get someone like me for your team, I'm currently available.



I directly contributed to an unannounced Gearbox project, F1 Manager 2022 & 2023, Cellyon: Boss Confrontation, Dark Age, and the initial series of 6th sense VR projects. Additionally, my Non-Uniform Edge Detect, a custom Substance 3D Designer node, is used by lots of well-known game companies I'm not able to disclose here.


I don't recall the exact circumstances of how I started my journey toward material art. As far as I remember, I got bored of having untextured 3D models after 3-4 weeks of learning 3D art. It then turned out I had more fun playing with pixels in Substance 3D Painter than messing with vertices in 3ds Max (it was a long time ago; I use Blender nowadays). Otherwise, I started painting miniatures around age 13, so maybe my passion for putting colors on shapes started there. Who knows?

Then I started using Substance 3D Designer to make custom tools, grunges, and patterns for Substance 3D Painter and quickly ended up starting to make full materials. I guess I simply never stopped from there.

Some of my custom Substance 3D Painter tools.

The Starting Brick

I've seen multiple Lego SD materials these past few years, notably Daniel Olondriz's and Luke Marchant's. While they do look cool, I thought they were missing the volume and depth you would expect from building things with Lego. So, I gave it a try. That, plus it would hurt if you stepped on it.

Originally, I wanted to buy the Lego piece and display a real-life version of it in a frame before I realized that it was going to cost a bit too much money for my liking. That's the reason why my renders are in a wooden frame like you would have for paintings.

References

As I like to say, the quality of your work can never exceed that of your references. Not using references is either flexing or complete inexperience, sometimes both.

Unluckily for my wallet, I recently got my hands on some decorative Lego plant sets, so most of the bricks I'm using were already at my disposal for measurement purposes. I jumped in and made a quick diagram of each piece.

A real-life reference

Art direction-wise I was looking for a museum look, so a bit of dust on a clean piece.

World Unit Matching

For the sake of readability and simplicity, I will use a hemisphere shape and a 32 x 32 x 7 Lego block size in all examples. All of the artworks I made are 128 x 128 x whatever was good for the job. Also, I won't explain in detail how the function/pixel processors work and will stick to the main concept of what I'm doing. If you're curious about how they work in detail, I highly encourage you to copy them and play with them.


Our goal here is to make bricks as close as possible to reality, which involves making sure the Normal Map will always be accurate as well as the Displacement one.

To obtain accurate Normal Maps, I recommend the Height to Normal World Units node. To put it simply, as long as you're aware of the real-world scale of your material, it will always output a Normal Map with the correct intensity.
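The node's internals aren't public, but conceptually a world-unit height-to-normal conversion boils down to a finite-difference gradient scaled by real-world height and texel size. This is a simplified illustration (function and parameter names are mine, not Substance 3D Designer's); the key point is that normal intensity depends on the ratio of real-world height to real-world surface size, which is why the node needs to know the material's scale:

```python
import math

def height_to_normal(height, texel_size_cm, height_depth_cm):
    # Central-difference normals from a tiling, square height grid (values in [0, 1]).
    n = len(height)
    normals = [[None] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # Slope in cm-per-cm, sampling neighbors with wrap-around (tiling texture)
            dx = (height[y][(x + 1) % n] - height[y][x - 1]) * height_depth_cm / (2 * texel_size_cm)
            dy = (height[(y + 1) % n][x] - height[y - 1][x]) * height_depth_cm / (2 * texel_size_cm)
            length = math.sqrt(dx * dx + dy * dy + 1.0)
            # Steeper real-world slopes tilt the normal further from straight up
            normals[y][x] = (-dx / length, -dy / length, 1.0 / length)
    return normals
```

A flat height map yields straight-up normals regardless of scale, while the same height gradient produces a stronger tilt on a physically smaller surface.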



In our case, we can control how many bricks are used for the depth and the tiling of the overall result with exposed parameters, so it can't be a fixed value.


Our basic brick size is 0.8 x 0.8 x 1 cm, so the Surface Size needs to be Bricks_Tiling * 0.8. That goes into a function.

If you're unfamiliar with this concept, functions are like standard Substance 3D Designer graphs but output a single value rather than pixels. You have to right-click a node and choose "Set as Output" so its output value is used as the graph output (the node turns orange).

Turning a parameter into a function

Bricks_Tiling (converted to float) * 0.8; the Multiplication node is orange, showing its output is the graph output value

Our basic brick height is 1 cm, so the Height_Depth parameter is essentially equal to our depth in centimeters. The RTAO node, which generates the graph's AO, uses a similar setting.

Matching the height is a bit different. It's not possible to control the Tessellation Scale in the 3D view directly from the graph, so I approached it differently: I scaled down the Height Map.

I set the 3D view Tessellation Scale to 100 using the High-rez Plane mesh. With a uniform white color as the Height Map, I'm able to achieve a 100 x 100 x 100 cm cube.

It looks weird because there is no Normal Map, but it's indeed a 100 x 100 x 100 cm cube

Currently, our Height Map is read as a 100 x 100 x 100 cm texture. However, in reality, our Height Map corresponds to Bricks_Depth x (Lego_Bricks_Tiling * 0.8) x (Lego_Bricks_Tiling * 0.8) cm.

To accurately represent this, we need to adjust our height range pixel values from [0 - 1] to [0 - Bricks_Depth / (Lego_Bricks_Tiling * 0.8)]. This can be achieved by using the Height Range parameter in a Base Material node and converting it into a function.

We now have a Height Map that dynamically scales based on how many bricks we have and will always display correctly in the 3D viewport.
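That height-range math can be sketched in a few lines (the function and default values are illustrative, using the 0.8 cm brick footprint and 1 cm brick height from the measurements above):

```python
def height_range(bricks_depth, bricks_tiling,
                 brick_width_cm=0.8, brick_height_cm=1.0):
    # The viewport is calibrated so a uniform white Height Map displaces by
    # exactly one surface-width (a 100 cm cube on a 100 cm plane). To represent
    # the real brick stack, the maximum height value must equal the ratio of
    # the stack's height to the surface's real-world size.
    surface_cm = bricks_tiling * brick_width_cm
    stack_cm = bricks_depth * brick_height_cm
    return stack_cm / surface_cm
```

For the 32 x 32 x 7 example used throughout the article, this gives 7 / 25.6 ≈ 0.273, so the Height Map only ever uses the bottom ~27% of its range.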



Defining the Base Bricks



The first step now is turning the input picture into a grid of blocks, each with a uniform value. This is achieved using the Flood Fill to Grayscale node.


Usually, I would use a Tile Sampler + Edge Detect + Flood Fill combination to feed the Flood Fill to Grayscale as the first input, and I had planned to do so. However, in this case, detecting 16k+ shapes became impractical and severely impacted performance, not to mention the errors in shape detection. I needed to create my own map that perfectly mimics a Flood Fill without freezing the computer indefinitely.

Essentially, a Flood Fill generates a map structured as follows:

  1. Red: Left to right gradient within each shape
  2. Green: Top-to-bottom gradient within each shape
  3. Blue: Bounding box size on the y-axis
  4. Alpha: Bounding box size on the x-axis

Since we are dealing exclusively with squares, it was simple to replicate this using a pixel processor node. If you are unfamiliar with the Pixel Processor node, it functions similarly to function graphs but outputs pixels instead of a single value.

Here's the function breakdown: X, Y, Z, and W correspond to R, G, B, and A respectively. $pos outputs UVs, resulting in a left-to-right gradient on the X-axis and a top-to-bottom gradient on the Y-axis. Frac is used to eliminate the integer part of a number, leaving only the fractional part (e.g., 1.256 becomes 0.256, 1861.648 becomes 0.648). And there you have it, my custom Flood Fill Map.
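Assuming every cell is a uniform square spanning 1/tiling of UV space, the per-pixel logic can be sketched like this (a simplified stand-in for the actual pixel processor, with `flood_fill_map` as an illustrative name):

```python
def flood_fill_map(u, v, tiling):
    # Replicates a Flood Fill output for a uniform square grid.
    frac = lambda x: x % 1.0       # keep only the fractional part, like Frac
    r = frac(u * tiling)           # left-to-right gradient within each cell
    g = frac(v * tiling)           # top-to-bottom gradient within each cell
    b = 1.0 / tiling               # bounding-box size on the y-axis
    a = 1.0 / tiling               # bounding-box size on the x-axis
    return (r, g, b, a)
```

Because every square is identical, the bounding-box channels collapse to a constant, which is exactly why this shortcut is so much cheaper than detecting 16k+ shapes.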

Back to our soon-to-be brick! We now want a bit of free space. Currently, our pixels are in the [0-1] range, so we'd be unable to add the characteristic dots on top of the Lego bricks, as pixel values can't go above 1. So we're going to free a tiny bit of space on the histogram using a Level. That is done through a function again to make sure it will scale properly. I'm changing Level Out Low rather than Level Out High because that avoids problems during the next step.

The smallest Lego bricks I'll be using are 1/3 cm tall. Therefore, we will consider a layer to be 1/3 of the exposed graph parameter Depth. For instance, if Bricks_Depth = 5, we would have 15 layers in total. Now, our goal is to "snap" each "brick" to a specific layer, essentially creating a step function. In the given example, a value of 0.893 is converted to 0.8666666, which corresponds to the height of the 13th layer.
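This snapping step can be sketched as follows (an illustrative function; rounding to the nearest layer is one way to get the half-layer offset behavior the article describes):

```python
def snap_to_layer(height, bricks_depth, layers_per_brick=3):
    # The smallest brick is 1/3 of a full brick, so each brick of depth
    # contributes 3 layers; snap each height value to the nearest layer.
    layers = bricks_depth * layers_per_brick
    return round(height * layers) / layers
```

With Bricks_Depth = 5 there are 15 layers, and the article's example value of 0.893 snaps to 13/15 ≈ 0.8666666, the height of the 13th layer.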

Our current range is [something - 1], but we want [0 - something] so that we can add value on top. Again, I'm using a pixel processor, moving the range down by half a layer.

Placing the Slope

Now that the base is set up, it's time to solve a new problem: how do we place the sloped bricks? How do we sample slope bricks in a correct orientation, and with the correct slope?

Instinctively, we would likely think that we need to look at the surrounding bricks like we would do in real life. This divides into two parts. First, we pick the slope brick we want to use, so we compare each brick's height with its surroundings to pick the right slope depth. Then, we define which direction the slope should face, and repeat for each slope brick type.


That will give us 2 maps that we will be using as Rotation Map Input and Pattern Distribution Map Input for the Tile Sampler node that will scatter our slope brick.



Sadly, we can't ask a brick to look at its neighbors directly. But we can offset the texture by one brick and then compare that new texture with the original, un-offset one. In a way, this is very similar to how kernel processing works.

So the first thing is a mask. The concept is quite simple: if one of the surrounding bricks is taller by at least the height of a slope brick, return a white value.


Similarly, for the slope direction: if a surrounding brick is taller by the height of a slope brick, return a value based on which neighbor it is. This way, we have one value when the brick on the left is higher, another when the brick on the right is higher, and so on.
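As a rough sketch of the offset-and-compare idea (the direction encoding values and the tie-breaking are my own placeholders, not the actual graph's; the real setup handles ties with random selection, as described next):

```python
def slope_mask_and_direction(heights, slope_height):
    # heights: square 2D grid of per-brick height values in [0, 1]
    n = len(heights)
    mask = [[0.0] * n for _ in range(n)]
    direction = [[0.0] * n for _ in range(n)]
    # Placeholder rotation-map encoding: one value per cardinal direction,
    # 0.0 meaning "no slope here"
    neighbors = [((0, -1), 0.25), ((1, 0), 0.5), ((0, 1), 0.75), ((-1, 0), 1.0)]
    for y in range(n):
        for x in range(n):
            for (dx, dy), value in neighbors:
                # "Offset the texture by one brick": sample the neighboring cell,
                # wrapping around since the material tiles
                taller = heights[(y + dy) % n][(x + dx) % n] - heights[y][x]
                if taller >= slope_height:
                    mask[y][x] = 1.0         # a slope brick belongs here
                    direction[y][x] = value  # last match wins here; see below
    return mask, direction
```

The two returned grids play the role of the mask and the Rotation Map feeding the Tile Sampler.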

The most observant individuals may have noticed that I have only been testing the horizontal and vertical axes, deliberately excluding cases where multiple directions would be possible. These cases need a bit more care to look right and natural.

The main idea is that if multiple directions are possible, we randomly select one of the correct directions. Also, I created a mask to identify where these cases occur, allowing me to blend this information with the previously generated rotation and pattern Distribution Map.

This process is then repeated for each type of brick slope. While it could be combined into a single-pixel processor, I prefer managing multiple smaller systems instead of one excessively large system. Compartmentalization is crucial, as it facilitates easy maintenance, updates, debugging, and editing.

At this stage, each type of slope brick mask is assigned a distinct color, which subsequently determines the slope brick that the Tile Sampler should select for each location.

That concludes the most complex and interesting part. What's left is to give each brick a bit of personality and variation: adding the dots on each brick (or sometimes not, for flat bricks), adding other specific bricks for variation, and marking the separation between bricks a bit more. All of this is done using Tile Sampler and Tile Generator nodes with random masking.

Choosing the Color

That's an interesting one. When I planned this artwork, I thought getting the color right would be easy, but I was wrong. I started with a Quantize Color node, but it did not do well. Quantize Color does exactly what its grayscale counterpart does and processes each color channel individually. That way, we end up with unpredictable, unnatural results because of what RGB values represent.


Original / Standard Quantize node

Luckily, there are other ways of storing/describing colors than the RGBA format. You may be familiar with HSL filters, which allow for predictable shifts in color value. HSL (hue, saturation, lightness) is another color representation compared to the RGB color model. In our case, it makes more sense as it describes the appearance of a color rather than the blend needed to create it. You can explore this topic more on Wikipedia.

However, I don't like it. It requires adjusting multiple sliders to modify what should be a single property. Instead, I opt for the HSV model (hue, saturation, value), which is more predictable and controllable.

In Substance 3D Designer, I use a pixel processor to convert the RGB input into HSV, then manipulate the H, S, and V values individually before converting the entire result back to RGB. I place this pixel processor in a sub-graph for easier implementation in the main graph.
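Using Python's `colorsys` module as a stand-in for the pixel processor's conversion, the HSV quantize idea looks roughly like this (the per-channel step count is an assumption; the article doesn't state the exact values used):

```python
import colorsys

def quantize_hsv(rgb, steps):
    # Quantize in HSV space instead of per-RGB-channel, so posterization
    # affects perceived hue, saturation, and value predictably.
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    snap = lambda x: round(x * (steps - 1)) / (steps - 1)
    return colorsys.hsv_to_rgb(snap(h), snap(s), snap(v))
```

Quantizing each RGB channel independently can shift a color's hue arbitrarily; snapping H, S, and V instead keeps the result on a nearby, natural-looking color.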

The W is the RGBA alpha channel and thus isn't modified

And voilà, a better result. The main downside of that method is that it tends to slightly tint the final image with red. However, this can easily be fixed using an HSL node.

Original / Standard Quantize node / Custom HSV Quantize + red tint correction

The base color is later detailed using a combo of two dirt nodes and a few grunges, adding some dust here and there, defining most of my Roughness Map at the same time.

Almost Barbecue Time

After some polishing and a huge amount of time searching for a nice part of the moon, it was time to jump into the rendering boat.

As you know, material artists like to crank up the subdivision setting to the maximum in their renders to get the best result possible. That's initially what I did, and it worked great within Substance 3D Designer. It made sense, since the 3D view renderer is designed for such purposes.

However, Marmoset Toolbag 4 was not suitable for this approach. While it technically supports subdivisions, the performance cost is significant. As I mentioned in a previous article, subdivisions are directly applied to the mesh rather than in the shader, leading to numerous performance issues. Consequently, I reverted to using Marmoset Toolbag 3. 

Even in Toolbag 3, using tessellation alone was insufficient. We were dealing with nearly 90° angles, causing strange artifacts unless the polygon count was 10-20 times higher than the number of bricks. Unfortunately, I cannot provide an exact value as my laptop went into take-off mode and crashed before reaching a stable solution.

There is not enough subdivision near edges

Subdivision, in summary, evenly adds polygon definition based on the mesh topology by dividing each square into four smaller squares. This means that we have control over where the subdivision generates polygons by strategically placing squares. Instead of subdividing the entire plane, resulting in an astronomical number of polygons, we can selectively add polygons where needed, while keeping the overall polygon count at a reasonable and manageable level.
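The arithmetic behind this is worth seeing: each subdivision level multiplies the quad count by four, which is why restricting subdivision to quads placed along brick borders keeps the total manageable (an illustrative sketch):

```python
def subdivided_quads(base_quads, levels):
    # Each subdivision level splits every quad into four smaller quads
    return base_quads * 4 ** levels
```

For example, three levels of subdivision turn a 10,000-quad plane into 640,000 quads, while the same three levels applied only to 1,000 strategically placed border quads cost just 64,000.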

That's when I turned to Blender. I created a custom plane and added geometry specifically where each brick's border would be.

Custom plane made in Blender

Then it subdivided nicely with no artifacts.

Smooth edges with the custom plane mesh vs. simple subdivided plane

Otherwise, the final render setup is quite simple, a custom plane for the material, a frame, a wall, a directional light, and that's very much it.

Learning Substance 3D Designer

If you're looking to start or improve, my advice would be to learn the logic behind graph creation. Instead of blindly following tutorials node by node, watch to understand the process. That's where the true value of a tutorial lies – in the logic behind it. While you may discover new nodes along the way, the core skill remains understanding why things work.

What tools are used to make this or that effect? How can it be tweaked to make something different? What does it technically do? What description of it would work in all cases? That's what you want to know. Reading the filter nodes' documentation can greatly help with that.

Next, practice consistently. Challenge yourself to open Substance 3D Designer every single day. The hardest part is getting started, so simply opening the software is a significant accomplishment. Some days, you may only work on 3-4 nodes, while other days you may create full materials. The pace is up to you, but make it a habit to open Substance 3D Designer daily. By doing this, you should start seeing pleasing results within 3-6 weeks. However, perfecting your art will take much longer, so keep pushing forward, and don't be afraid to try new things.

Afterword

Special thanks to those who took the time to read my article, to Théa Dorangeon and Louis Cocquet for their feedback, and to 80 Level for this amazing opportunity.



I would like to say that you're all welcome to ask any questions about my material. You can find me on LinkedIn and ArtStation.

I wish you a nice day, and see you next time!

Maxime Guyard-Morin, Material & Tech Artist

Interview conducted by Theodore McKenzie
