Carlos Lemos has recently finished his 4-part tutorial on normal mapping and kindly allowed us to repost it on 80 Level (originally posted on Artstation). In this article, read about what normal maps are and the process of baking them.
Part 1. What Normal Maps Are and How They Work
Throughout the years, I have been trying to understand normal mapping and the problems that usually appear when working with it.
Most explanations I found were too technical, incomplete, or too hard to understand for my taste, so I decided to give it a try and explain what I have gathered so far. I recognize that these explanations may also be incomplete or not 100% accurate, but I want to try anyway.
The first 3D models ever made looked something like this:
They had an obvious limitation: they looked too polygonal.
The first obvious solution was to add more polygons, making the surface more even and smooth, up to the point where the polygons would look like a single, smooth surface. It turns out this required a huge number of polygons (especially at that time) to make surfaces such as a sphere look smooth.
Another solution was needed, and thus normals were invented (not really, but it's easier to explain and understand that way).
Let's draw a line from the center of a polygon, completely perpendicular to its surface. We will give this line a super confusing name: normal.
The purpose of this normal is to control which direction a surface is pointing, so that when light bounces off the surface, the normal is used to calculate the resulting bounce.
When light hits a polygon, we compare the angle of the light ray to the normal of the polygon. Light will get bounced back using the same angle relative to the normal direction:
In other words, the light bounce will be symmetrical relative to the polygon normal. This is how most bounces work in the real world.
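If you find code easier to follow, here's a tiny Python sketch of that mirror rule (the names and vectors are mine, just for illustration; real renderers do this inside shaders):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(incoming, normal):
    # Mirror the incoming direction around the surface normal:
    # R = D - 2 * (D . N) * N, assuming N is unit length.
    d = dot(incoming, normal)
    return tuple(i - 2.0 * d * n for i, n in zip(incoming, normal))

# A ray coming in at 45º onto a floor whose normal points straight up
# bounces out at the same 45º on the other side of the normal:
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # -> (1.0, 1.0, 0.0)
```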
By default, every polygon's normal is perpendicular to its surface, so each flat polygon bounces light exactly as it would in real life. If there is a gap between the normals of two connected polygons, we see them as separate surfaces, since light bounces in either one direction or the other.
Now, if we have two faces connected, we can tell the computer to smooth the transition between the normal of one polygon and the other so that the normal gradually aligns with the closest polygon normal. This way, when light hits one polygon directly in its center, the light will bounce straight, following the normal direction. But, in between polygons, this normal direction is smoothed, bending how light bounces.
We will perceive the transition as a single surface, as light will bounce from one polygon to the other smoothly, with no gaps. Effectively, light bounces off these polygons as it would if the surface had a ton of polygons.
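Here's a minimal sketch of that smoothing, assuming we already have the unit normals of two neighbouring polygons and a value t that goes from the center of one polygon to the center of the other:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def smooth_normal(n0, n1, t):
    # Blend between two neighbouring polygon normals (t = 0 at the first
    # polygon's center, t = 1 at the second's) and renormalize, so the
    # shading direction bends gradually instead of jumping at the edge.
    return normalize(tuple((1.0 - t) * a + t * b for a, b in zip(n0, n1)))

# Halfway between a face pointing up and a face pointing right, the
# interpolated normal points diagonally up-right:
print(smooth_normal((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), 0.5))
```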
This is what we are controlling when we set smoothing groups (3ds Max, Blender) or set edges to be hard or smooth (Modo, Maya): we are telling the program which transitions between faces we want to be smooth and which we want to be hard.
Here's a comparison of the same sphere with hard and smooth transitions, both with 288 polys:
We could potentially set something like a box so that all its vertices have averaged normals. The 3D software will then try to smooth its surface, showing it as a single, smooth surface. This makes perfect sense to the 3D program, but it looks very weird, because something that should have several separate surfaces (each face of the box) is being displayed as one continuous surface.
This is why we usually have a smoothing angle setting in 3D software: if the angle between the normals of 2 connected polygons is smaller than this smoothing angle, their transition will be smooth, and if it is greater than the smoothing angle, the transition will be hard. This way, extreme angles between surfaces are shown as different surfaces, as they would be in the real world.
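In code terms, the rule looks roughly like this (a simplified sketch, assuming unit-length face normals; every 3D app implements the details a bit differently):

```python
import math

def edge_is_smooth(normal_a, normal_b, smoothing_angle_deg):
    # Angle between the two face normals: 0º for coplanar faces,
    # 90º for the corner of a box, and so on.
    cos_angle = sum(a * b for a, b in zip(normal_a, normal_b))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    # At or below the threshold the transition is smooth; above it, hard.
    return angle <= smoothing_angle_deg

# With a 45º smoothing angle, a box corner (90º between normals) stays hard:
print(edge_is_smooth((0.0, 1.0, 0.0), (1.0, 0.0, 0.0), 45.0))  # -> False
```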
So, we used normals to control the transitions between faces in our model, but we can take this a step further.
Since we are changing how light bounces from an object, we can also make a very simple object bounce light as a complex one would. We use a texture to bend the direction of the light that bounces from a 3D object, making it look more complex than it really is.
A real-life example of this would be those holograms that came as gifts in bags of potato chips back in the day (at least here in Spain). They are completely flat, but they bounce light in a way similar to how a 3D object would, making them look more complex than they really are. In the 3D world, this works even better, but it still has some limitations (since the surface is still flat).
While we do use polygon normals for some other black-magic-related things, we don't actually control the smoothing of our model's surface with the polygon normals. We use the vertex normals instead. This is basically the same idea, but a little bit more complex.
Each vertex can have one or more normals associated with it. If it has a single normal, we call it an averaged vertex normal; if it has more than one, we call them split vertex normals.
Let's take two polygons connected by an edge. If the transition between the two faces is smooth (we set it to smooth in Maya/Modo, or they both have the same smoothing group in Max/Blender), each vertex has a single normal, which is the average of the polygon normals (this is why it's called averaged vertex normal).
Important note: up until very recently, each 3D program used its own method of calculating averaged vertex normals, which meant that normal maps calculated in one program might look completely different in another 3D program. I explain more about this in the second part of this tutorial (see below in this article).
If the transition is hard (a hard edge, or different smoothing groups), each vertex has several normals: one for each connected face, aligned with that face's normal. This leaves a gap in the normals, which looks like 2 different surfaces. This is what we call a split vertex normal.
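Here's a small sketch of both cases, assuming we already have the unit normals of the faces that share the vertex:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def vertex_normals(face_normals, smooth):
    if smooth:
        # Averaged vertex normal: a single normal, the renormalized
        # average of every face that shares this vertex.
        summed = tuple(sum(column) for column in zip(*face_normals))
        return [normalize(summed)]
    # Split vertex normals: one normal per connected face, each a copy
    # of that face's normal, leaving a hard gap between them.
    return list(face_normals)

corner = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0)]
print(vertex_normals(corner, smooth=True))   # one diagonal normal
print(vertex_normals(corner, smooth=False))  # two separate normals
```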
As you can probably guess, controlling the vertex normals is vital if we want to control our normal maps. Fortunately, we don't really need to directly modify the normals or even see them, but knowing how this works will help us understand why we do things the way we do and know more about the problems we may see.
When baking a normal map, we are basically telling the baking program to modify the direction of the low-poly normals so that they match the direction of the high-poly model's surface; as a result, the low-poly model bounces light as the high-poly would. All this information is stored in a texture called a normal map. Let's see an example.
Let's say that we have a low-poly model like this one - a flat plane with 4 vertices and a UV set that our baking program will use to create the normal map.
And it has to receive the normal information from this high-poly model, whose normals are more complex.
Keep in mind that we are only transferring normal information, so the UVs, material, topology, transformations, etc. are completely irrelevant. Rule of thumb: if your high-poly looks good, it means that its normals are good and should be fine enough for baking.
Our baking program will take the low-poly and cast rays following its normal directions (this is why we need to control the low-poly normals). Those rays have a limited length to avoid picking up normal information from faraway faces (this limit is usually named bake distance or cage distance). When those rays collide with the high-poly, the baker calculates how the low-poly normals must be bent to match the high-poly normals at the hit point, and stores that information in a normal map.
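Here's a very rough sketch of that loop. It's illustrative only: real bakers work per pixel against real geometry and store the result in tangent space, and the `raycast` function here is a stand-in I made up:

```python
def bake_normal_map(texels, raycast, max_distance=0.1):
    # texels: iterable of (uv, surface position, low-poly normal),
    # one sample per pixel of the target texture.
    # raycast: function(origin, direction, max_distance) that returns the
    # high-poly normal at the hit point, or None if nothing is hit in range.
    normal_map = {}
    for uv, position, low_normal in texels:
        # Cast a ray from the low-poly surface along its vertex normal,
        # limited to the bake distance so faraway faces are ignored.
        hit_normal = raycast(position, low_normal, max_distance)
        if hit_normal is not None:
            # A real baker would store this relative to the low-poly normal
            # (tangent space); here we just keep the high-poly normal.
            normal_map[uv] = hit_normal
        else:
            # No hit within range: keep the low-poly normal unchanged.
            normal_map[uv] = low_normal
    return normal_map

# A made-up "high-poly" whose surface tilts slightly to the right:
def fake_raycast(origin, direction, max_distance):
    return (0.2, 0.0, 0.98)

texels = [((0.25, 0.5), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))]
print(bake_normal_map(texels, fake_raycast))
```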
Here's the bake result of this example:
We have a texture that our engine uses to modify the low-poly normals, so light bounces from this low-poly model the way it would if we had the high-poly version. Keep in mind that this is a texture, and can't affect the silhouette of your low-poly (you can't modify how the light bounces from your model if it doesn't hit your model).
As you may have noticed, normal maps are not regular textures: they don't carry color information, but normal information. This also means that normal maps should not be treated as regular textures; they need special compression and gamma correction settings, as we will see.
You can think of a normal map as a set of 3 greyscale textures, stored in a single image:
- The first image is telling the engine how this model should bounce light when lit from the right side, and it's stored in the red channel of the normal map texture.
- The second image is telling the engine how the model should bounce light when lit from below*, and it's stored in the green channel of the normal map texture.
*Some programs use above instead of below, so we can have "left-handed" and "right-handed" normal maps, and this can cause some problems as we will see later.
- The third image is telling the engine how the model should bounce light when lit from the front, and it's stored in the blue channel of the normal map texture. Since most things look white when lit from the front, normal maps usually look blueish.
When we combine all three images in a single one, we have a normal map. Please keep in mind that this explanation is not 100% correct, but it will hopefully help you understand the information inside a normal map and have a better understanding of what it does.
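As a rough sketch of how that packing works: each normal component lives in the -1 to 1 range, while a pixel channel stores 0 to 255, so the baker remaps the values. The `flip_green` option mirrors the left-handed/right-handed difference mentioned above:

```python
def encode_normal(normal, flip_green=False):
    x, y, z = normal
    if flip_green:
        # Flipping the green channel converts between the "left-handed"
        # and "right-handed" conventions mentioned above.
        y = -y
    # Remap each component from the -1..1 range into a 0..255 channel.
    return tuple(int(round((c + 1.0) * 0.5 * 255)) for c in (x, y, z))

# A normal pointing straight out of the surface becomes the typical
# light-blue color of a flat normal map:
print(encode_normal((0.0, 0.0, 1.0)))  # -> (128, 128, 255)
```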
So, in conclusion:
Normals are vectors that we use to define how light bounces from a surface. They can be used to control the transition between faces (by averaging the normals of connected vertices to make a smooth transition or splitting them to make a hard transition), but they can also be reoriented, to make a low-poly model bounce light the way a much more complex model would.
This information is stored in 3 separate channels of an image, and the 3D program reads it to understand which direction each point of the model's surface should face.
Now that we have a general idea of what normals are, and how a normal map works, let's talk about how we can bake these details from high-poly to low-poly.
Part 2. Baking Normal Maps
The general idea of baking a normal map is relatively simple: you have a low-poly with UVs and a high-poly model, and you transfer the normal information from the high-poly to the low-poly. This way, the low-poly will bounce light as the high-poly would.
During this process, the baking program will basically cast rays from the low-poly, following the vertex normals and searching for the high-poly. This is the most important aspect of normal mapping, and most problems people have when working with normal maps are related to this.
If you don't control the vertex normals of your low-poly model, you will lose control over your normal map.
Bad Normal Map Correlation
In order to control the smoothing of our low-poly model, we can have split vertex normals (to create hard edges) or averaged vertex normals (to create soft edges).
It turns out that not all 3D programs use the same calculations to average vertex normals. This means that your low-poly will look different, with its vertex normals pointing in slightly different directions, depending on the 3D program. This usually isn't a big problem, since the deviations are very small, but it does affect how your model looks, and the differences are exaggerated when using normal maps, since the normal map is modifying low-poly normals that change between applications.
The 3D industry is working on fixing this problem, and one solution has appeared, called MikkTSpace: a standardized way of calculating the tangent space used to interpret normal maps, which all 3D apps can implement so that results don't change between programs. Keep in mind that not all 3D apps use it yet.
Another way to reduce this effect is to not rely too much on the normal map when baking: match your low-poly more closely to the high-poly and use more hard edges on flat surfaces. This way, your normal map won't have to do all the work, and these small deviations will be less noticeable.
Normal Map Detail Skewing
When the computer averages your low-poly vertex normals, big changes in the angles of your surface can "skew" the normals so that they are no longer perpendicular to the low-poly surface.
Since the normal map baker casts rays along the low-poly normal directions when searching for the high-poly details, if these directions are skewed, the baked details will appear skewed in the normal map:
This is a very common problem, and several solutions exist. There isn't a single best one, though; it really depends on the geometry.
- Some 3D bakers have the option to re-bake these areas with the low-poly normals temporarily modified, so they bake without skewing. Marmoset Toolbag has this option. Reddit user Tanagashi kindly explained to me that some programs, such as xNormal, can tessellate the low-poly to add new vertices and make the normals perpendicular to the low-poly surface, bake an object-space normal map, and then convert it to tangent space using the original low-poly normals. Using this new normal map, the program can create masks to control where to use the original normal map and where to use the one created from the tessellated low-poly.
- Adding vertices will make the transition between the vertex normals less skewed, as one 90º angle can be split into several smaller ones, making each transition less extreme. This obviously increases your polycount; since you are adding geometry anyway, I recommend using it to give your model a more interesting silhouette.
- Split the averaged vertex normals (make the edge hard / use separate smoothing groups): this way, each vertex will have several normals, each one perpendicular to the low-poly surface. Keep in mind that when the 3D program has a split vertex normal, it actually creates a duplicate of the vertex, so this will increase your vertex count and slightly decrease performance. Additionally, hard edges will also give you a "black edge" problem, as we will see later.
- We can modify the normals of our low-poly, bending them so that they better match the high-poly details. Keep in mind that not all programs allow modified normals (ZBrush only has averaged normals, OBJ and older FBX files don't have custom normal information). There are basically 2 ways of modifying the normals:
1. Weighted normals: this is an automatic method, similar to averaged vertex normals. The idea is that when averaging the vertex normal, not all faces pull with the same strength: larger faces "pull" the vertex normal towards them more strongly than smaller faces do. This way, larger faces, which are usually more important, get better detail projection (see the sketch after this list). This works especially well with high-poly panels.
2. Custom normals: using your 3D software's tools, you can bend the low-poly normals by hand. This is a relatively new idea and there are no standardized tools for it yet. Keep in mind that bending normals can create very weird, unintended shading on other parts of your model, so this technique is usually combined with bevels. Some people call this technique "mid-poly modeling".
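To illustrate the weighted normals idea from point 1, here's a minimal sketch where each face's pull on the vertex normal is scaled by its area (the face data is made up):

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v))
    return tuple(x / length for x in v)

def weighted_vertex_normal(faces):
    # faces: list of (unit face normal, face area) sharing this vertex.
    # Bigger faces pull the averaged normal towards themselves.
    total = [0.0, 0.0, 0.0]
    for normal, area in faces:
        for i in range(3):
            total[i] += area * normal[i]
    return normalize(total)

# One big face pointing up and one small face pointing right:
# the result leans heavily towards the big face.
faces = [((0.0, 1.0, 0.0), 4.0), ((1.0, 0.0, 0.0), 1.0)]
print(weighted_vertex_normal(faces))  # roughly (0.24, 0.97, 0.0)
```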
Cage/Baking Distance
By default, the rays cast from the low-poly surface travel a limited distance, to prevent the low-poly from receiving normal information from faraway parts of the high-poly. This distance is usually called the "frontal/rear distance", as rays can be cast towards the outside of the model, the inside, or both. You can see this distance represented in red in the following image:
Some 3D apps (3ds Max, for instance) also allow us to use a cage. A cage is a "copy" of our low-poly model that we can modify so that it encapsulates the high-poly completely. In some cases (not all), it also lets us change the direction of the rays without changing the original low-poly vertex normals. This can help us cover the whole high-poly and avoid skewing, but keep in mind that even though you are not casting the rays along your vertex normals, in the end the normal map will still be applied on top of the actual low-poly normals, so the result could look strange.
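Here's a minimal sketch of how a cage changes the rays, assuming the cage is simply a pushed-out copy with one vertex per low-poly vertex (the names are mine, not any particular baker's):

```python
def cage_rays(low_poly_verts, cage_verts):
    # Instead of starting on the low-poly surface and following its vertex
    # normals, each ray starts on the cage and aims back at the matching
    # low-poly vertex, so editing the cage redirects the rays without
    # touching the low-poly's actual normals.
    rays = []
    for low, cage in zip(low_poly_verts, cage_verts):
        direction = tuple(l - c for l, c in zip(low, cage))
        rays.append((cage, direction))
    return rays

# A single vertex with its cage counterpart pushed out along +Z:
print(cage_rays([(0.0, 0.0, 0.0)], [(0.0, 0.0, 0.5)]))
# -> [((0.0, 0.0, 0.5), (0.0, 0.0, -0.5))]
```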
Edge Seams
As we have seen, if your model has hard edges (faces in different smoothing groups, or edges set to "hard"), the vertices along those edges have split normals. This has good and bad effects.
The good effect is that the normals are not averaged, so there is less normal map distortion: the vertex normals are completely perpendicular to the low-poly surface. It can also make your low-poly look better if it has faces meeting at extreme angles, where hard edges are more appropriate.
The bad effect is that there is now a gap between the normals, which can mean losing information if the gap prevents the low-poly from picking up high-poly details. Furthermore, some parts of your low-poly projection could intersect and compete for the same UV space. Both effects leave a seam along the edge that is more or less noticeable depending on your engine.
This, however, can be easily avoided using a simple trick: if you have two faces separated by a hard edge, put each face on a different UV island, with some space between them.
If both faces are connected in the UVs, there are drastic changes in color from one face to the other, and the colors can bleed into each other (because of texture filtering and mipmapping), which is extremely noticeable in a normal map. By separating the faces in the UVs, the baker can add padding between them and avoid this color bleeding. The video below might help you understand this process:
In conclusion:
Once I have my low-poly model ready and matched to the high-poly as closely as possible, I start working on the smoothing, before doing the UVs.
I set the smoothing for the low-poly (if it's organic, I start with a completely smooth model; if it's hard-surface, I start with a smoothing angle of 30-60º) and tweak the model's smoothing until it looks good.
Once I have the smoothing for the model set, I work on the UVs, making sure that all hard edges are split into separate UV islands (to avoid edge seams).
If I have skewing errors, I add additional edges (usually bevels, to keep a more rounded silhouette). This works for most of my models, but I could also fix the skewing errors in Marmoset Toolbag if I used it for baking, or by using custom/weighted normals.
If there are projection errors, I modify the baking distance/cage, modify the low-poly/high-poly so that they fit each other better for baking, or erase the normal map on certain really difficult parts, such as the tip of a cone.
In the next part, I'll be making a troubleshooting guide for normal map baking and discuss some of the most common problems and solutions. If you are enjoying this tutorial so far, please comment here or below the posts on ArtStation (Part 1 & Part 2) - I'm looking forward to some feedback, even if it's negative. I'm doing this so that I could learn and improve, but a complete silence can be discouraging. Thank you for your time!
Carlos Lemos, 3D Artist