Working on Realistic Skin & Hair For a 3D Guns N’ Roses Girl Project

Peter Stumpf, a 3D and VFX hobbyist, has shared a detailed breakdown of his Blake project, created with ZBrush, Substance 3D Painter, and XGen, covering everything from sculpting to texturing, shading, grooming, and LookDev.

Introduction

Hi everyone, I’m Peter Stumpf, a 3D and VFX hobbyist. Last year, I graduated from high school, and I will soon move to Cologne, Germany, to attend a VFX university there.

Since my first encounter with 3D software, approximately 3 years ago, I’ve been fascinated by it and cannot wait to take my skills to the next level. 

Before heading to university, though, I planned to undertake a few larger projects to prepare myself. With that being said, I have recently released Blake, a personal project that I’ve been working on in my spare time for the last 5 months, and I want to briefly break down the main steps I took to achieve the final result.

The Blake Project

Working on her was exciting and fun but also quite challenging because it was my first attempt to create a realistic portrait from sculpting to texturing, shading, grooming, and LookDev.

Before I started this project, I took some time to gather references. On ArtStation, I found some artworks that were very similar to my idea, so they made a good starting point for my overall character. I really liked one artwork by Rajitha Naranpanawa in particular, so I stuck to it and used it as my main inspiration for the whole project.

I believe it’s crucial to include various real-life references alongside anatomical ones.

Here is a small collection of what has inspired me:

To be honest, I wasn’t trying to make an exact copy of Rajitha’s work. I was more interested in capturing the character’s texture and appearance.

I’m planning to divide the work into five main parts (sculpting, texturing, grooming, creating clothes, and LookDev) despite working on all the elements at the same time.

Sculpting the Face

I kicked off the process with a DynaMeshed sphere in ZBrush and gradually shaped her facial features. Since my goal was to hand-sculpt the entire portrait without relying on any pre-made base mesh, this stage took longer than expected. There isn’t a lot to elaborate on other than trying to perfect the anatomy first before moving on to finer details such as wrinkles and pores.

Sculpting relies more on developing a sense of what looks aesthetically pleasing and is anatomically accurate, rather than following a gradual procedure.

Remember: There’s no special trick or hidden formula to sculpting, especially when it comes to creating a likeness. It’s all about refining your ability to sculpt based on what you see, so it is crucial to observe your references as much as possible. Avoid getting caught up in tiny details too early: if your proportions are off, the details won’t enhance the realism of the face, and it will just feel like something is missing.

Once I was satisfied with most of the shapes, I moved on to the tertiary and micro details. For this, I decided to wrap clean topology onto my sculpt using R3DS Wrap. This approach provided me with an improved base for sculpting. You can find more information about this here. Not only does it provide excellent animatable topology, but it also allows you to generate UVs directly within Wrap.

Once I had good enough topology, I re-imported my head into ZBrush with correct scaling, to generate finer details such as wrinkles around the eyebags, neck area, and an overall noise layer to get rid of uniformity.

Unfortunately, at this point, I made a mistake by not keeping things symmetrical, as I later chose to retopologize the head in Maya manually.

To capture the necessary micro details, I used a variety of techniques and brushes. Ultimately, I had both a ZBrush Displacement Map with larger, harsher details and a multichannel XYZ Map, which I could then combine in Maya later to get those tiny pores.

My ZBrush map consists of various layers of noise and larger wrinkles. In contrast, the XYZ Map provides three different types of maps, which you can then play around with. They can serve as a standard Multi-Channel Displacement texture, or alternatively, you can isolate and shuffle the channels out so they can function as Utility Maps to enhance your roughness, coat, or SSS Map inside of your shading network.

For transferring the actual XYZ Displacement Map onto my sculpture I used the projection tool inside of Mari. This was very straightforward. I just imported a lower-resolution version of my head into Mari and painted the map as a Multi-Channel EXR texture onto my sculpture.

I brought her into my Maya scene quite early, to see how she would look with a basic lighting scenario. Here is a closeup shot of the displacement only, rendered with Arnold.

Texturing

For the texturing phase, I heavily relied on Mari. Before starting the actual painting process, I used the same R3DS Wrap method as I had done previously. This technique allowed me to make use of TexturingXYZ’s high-quality VFace packs.

Since I created my own mesh, I had to wrap the VFace-provided model onto mine to transfer the textures.

Note: It is important to keep in mind that if you’re using the VFace model, this wrapping step is not required, as the model is already set up well for sculpting. In my case, I wanted to start with a blank canvas in ZBrush, mainly to study the anatomy, so I didn’t use the provided mesh. But if you like, you can choose to start with the mesh included in the pack.

After wrapping, I took both meshes (the wrapped VFace and mine) into Mari and transferred all the maps that I required. Since I chose a high resolution of 8K for all of them, I had to wait around 5-6 hours for Mari to compile all the textures. I expected this process to be faster, but in the end, I achieved a fairly clean result from it.

Once that was done, I loaded all the data into a new Mari scene to start adding textures. My initial wrapping attempt was okay, but there were small errors, like visible UV seam lines, dirt patches, and some unwanted moles. So, I had to go in and clean them up, while also adding multiple layers of hand-painted details to get the look I wanted.

I made lots of changes to the overall color. Once more, I started by making simple changes and gradually went into more detailed ones, like adding darker and lighter patches, moles, and freckles as well as adjusting the warmth or coolness of different parts of the face.

To create the Specular Roughness Map, I painted all my UDIMs with a greyish base value of 0.6. Then, I made some parts glossier and rougher by using darker or lighter values. A neat little trick to make things look more detailed is to merge one of your displacement channels (in this case, I used the Cavity Map) on top of your painted roughness map. This isn’t limited to your Specular Map.

I use this technique in nearly all my maps. Sometimes I combine them directly in Mari, but here I chose to merge them in Maya’s Hypershade with some grade nodes applied to the initial texture. This made things more flexible, so I didn’t have to switch between different programs too often. Some might consider this overkill, since the displacement map from TexturingXYZ is already really good straight out of the box, so you might not necessarily need it for your additional maps.
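The idea of layering a cavity channel over a painted roughness base can be sketched outside of any DCC as a simple image operation. Below is a minimal NumPy sketch, not the actual Hypershade node setup; the function name, the blend formula, and the strength value are illustrative assumptions:

```python
import numpy as np

def add_cavity_detail(roughness, cavity, strength=0.3):
    """Overlay cavity detail onto a painted roughness base.

    roughness, cavity: float arrays in [0, 1], same shape.
    Darker cavity values (pores, creases) push roughness up slightly,
    breaking the uniformity of the hand-painted base.
    """
    # Centre the cavity map around zero so it both adds and subtracts detail
    detail = (0.5 - cavity) * strength
    return np.clip(roughness + detail, 0.0, 1.0)

# Example: a flat 0.6 base (the greyish value mentioned above)
base = np.full((4, 4), 0.6)
cavity = np.random.default_rng(0).random((4, 4))
result = add_cavity_detail(base, cavity)
```

A grade node in front of the cavity texture plays the role of the `strength` parameter here, which is what makes the Hypershade route flexible: you can retune the detail contribution without repainting anything.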

I opted to paint roughness only for the Specular and Coat Maps. Instead of creating a map for the SSS, I used a reddish color in the radius of the aiStandardSurface shader and played around with the scale until I found something that I liked.

The final step of the texturing process involved generating multiple ISO Maps (Isolation Maps) to aid in the LookDev process. I achieved this by painting various black and white masks on my face and then combining them using a custom channel packer gizmo I created in Mari, which then gave me four different RGB or ID Maps, serving the purpose of isolating specific areas on the face.

These maps can be created and utilized in various ways. In my case, I shuffled them out in Maya and used them as masks to control the roughness, oiliness, and sweat regions of the face.
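The channel-packing idea behind those ISO Maps can be illustrated with a small sketch. This is plain NumPy, not the Mari gizmo itself, and the mask names are hypothetical examples:

```python
import numpy as np

def pack_masks(r_mask, g_mask, b_mask):
    """Pack three greyscale masks into one RGB ID map (one texture
    instead of three, which keeps the shading network tidy)."""
    return np.stack([r_mask, g_mask, b_mask], axis=-1)

def shuffle_out(id_map, channel):
    """Recover a single mask from a packed ID map ('r', 'g' or 'b'),
    mirroring what a shuffle/channel node does downstream."""
    return id_map[..., "rgb".index(channel)]

# Three hypothetical face masks: lips, oily T-zone, sweat region
lips = np.zeros((4, 4)); lips[0, 0] = 1.0
tzone = np.zeros((4, 4)); tzone[1, 1] = 1.0
sweat = np.zeros((4, 4)); sweat[2, 2] = 1.0

packed = pack_masks(lips, tzone, sweat)  # a single 3-channel ID texture
oily = shuffle_out(packed, "g")          # isolated again for the shader
```

Each recovered channel can then drive a single shader attribute, such as roughness in the oily regions, without a separate texture file per mask.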

Creating Hair

Grooming can be a bit challenging, but I’m generally happy with how it turned out. I knew it would take quite some time to get the desired look, so I took my time without rushing it. It’s easy to end up lacking realism, so I decided to start the groom by blocking out the primary shapes of the hair in ZBrush. This helped me a lot when it came to placing the guides, since I could use the snapping tool inside of Maya to quickly place my curves.

I chose to use XGen for grooming, as it’s the tool I’m most familiar with. Essentially with XGen, you place guide curves and shape them to achieve the desired style. The tool then utilizes these guide curves to generate the groom.

Quick advice: do not hesitate to add more detail. Do as much separation as you deem necessary, as it will help you to maintain a clean and organized scene. By breaking down your hair into several collections and descriptions, you’ll gain better control over your groom. This way, you can tweak specific areas without affecting the rest, especially if you want to have different CV counts or modifiers.

Ultimately, I had a total of 10 descriptions divided into 2 collections, each controlling a separate part of the groom.

Here are all the guides and descriptions I used to create my final groom.

For almost all my descriptions, I used a variety of density and width masks to achieve a more authentic and natural outcome. These can help to nicely blend your hair with the skin. I know that Maya’s paint tools are not the best out there, so I would suggest switching to Photoshop when it comes to creating those masks.

It’s worth noting that there are numerous ways of breaking up and blending hair. To integrate your groom better, you could also add even more descriptions on top of your already existing ones to blend your hair with the skin. Just make sure these have different values, like lower melanin, lower width, and overall less opacity.

Once I had placed all my guide curves, it was time to break them up using modifiers. I kept it simple in most cases, sticking to a “clump-cut-noise” structure. It’s pretty straightforward.

First, you break up your hair by adding Clumping modifiers (I wouldn’t go above four; two or three are usually enough). I used my first Clumping modifier to clump to guides, while the others break up the first one. After that, I used a Cut modifier to soften the ends, and finally, I added three different Noise modifiers, each with different values. For example, one has a high frequency but low magnitude, and another has a low frequency but high magnitude for creating flyaways. I also used an assortment of expressions for each modifier.
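The intuition behind stacking noise modifiers with different frequency/magnitude pairs can be shown with a small sketch. This is plain NumPy with sine-based pseudo-noise, not the XGen modifier API; the strand representation and all values are illustrative:

```python
import numpy as np

def add_noise(strand, frequency, magnitude, seed=0):
    """Displace points along a strand with sine-based pseudo-noise.

    strand: (n, 3) array of points from root to tip.
    High frequency + low magnitude -> fine breakup along the strand;
    low frequency + high magnitude -> broad, flyaway-style drift.
    Displacement is faded in from the root so the scalp stays clean.
    """
    n = len(strand)
    t = np.linspace(0.0, 1.0, n)             # 0 at root, 1 at tip
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2 * np.pi, 3)   # per-strand variation
    offset = magnitude * t[:, None] * np.sin(
        frequency * t[:, None] * 2 * np.pi + phase)
    return strand + offset

# A straight 10-unit strand sampled at 20 points
strand = np.column_stack(
    [np.zeros(20), np.zeros(20), np.linspace(0.0, 10.0, 20)])

fine = add_noise(strand, frequency=8.0, magnitude=0.05)   # subtle breakup
flyaway = add_noise(strand, frequency=1.0, magnitude=0.8) # big, slow drift
```

Stacking both kinds on one description is what keeps the groom from reading as either too uniform (only low-frequency noise) or too frizzy (only high-frequency noise).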

If you are not familiar with XGen expressions and how they can help you achieve a more natural-looking groom, check out Jesus Fernandez’s website. I highly recommend his classes on groom fundamentals and XGen – an excellent starting point.

Last but not least, I converted all my descriptions into an interactive groom, providing a more convenient way to handle shaders and textures compared to the original XGen primitive splines.

For shading the hair, I used multiple aiStandardHair shaders and played around with the melanin, melanin redness, specular roughness, and IOR. Additionally, I introduced variations in color and roughness to the fringe, as it would set the main focus of the overall groom. To achieve this effect, I essentially generated a per-strand ID and a per-clump ID, merged them together, and added noise to break them up.

For the final look, I created two different ID masks and incorporated two noise layers to generate the hair shader. However, the customization possibilities are endless, allowing you to introduce as much variation as you want. Below you will find the shading network I used for my final hair:

The Outfit

Regarding the clothes, I quickly sketched out the base forms in ZBrush, refined the topology for cloth simulation by triangulating my mesh using Decimation Master, and imported the pieces into Marvelous Designer for a basic sim. After exporting, I added more details by sculpting cloth folds and stitches onto them.

Read more about the workflow I followed here. The founder, Laura Gallagher, made a fantastic cheat sheet for the pipeline between Marvelous Designer and ZBrush.

I did my final retopology and UVs in Maya before moving on to texturing them in Substance 3D Painter this time. There is not much to say about the texturing process. I worked with two files, one for the Guns N’ Roses T-shirt and another for the leather jacket. Let’s take a closer look at one of them.

To create my Guns N’ Roses T-shirt, I initially started by adding a fill layer and experimenting with different fabric Normal Maps. I adjusted the parameters until I found something that I liked and thought would look quite good on her, duplicated that layer, introduced a different Normal Map, and blended them. Next, I painted a mask isolating the neckband from the rest of the shirt.

Again, I created a fill layer, applied a different Normal Map to it, and used this mask to restrict its area, so it would only appear on the white part.

Following that pattern, I moved on to add color and incorporated details such as stitches and patches with a more weathered look.

For my neck choker, I modeled the basic shape in Maya, sculpted details in ZBrush, and used Mari for texturing.

Below you can see my progress of this asset and the node graph inside of Mari.

LookDev and Final Presentation

As I already mentioned, I created several isolation masks in Mari to manage various facial regions. Two of these masks were dedicated to makeup, and two were for adjusting the specular. I linked multiple shaders to an aiLayerShader node, placing the skin shader as the primary layer. The masks painted in Mari were then used to control the shaders layered on top of the skin.

The makeup shader is quite straightforward. I used an aiFlakes node to generate the glitter effect, connecting its output to the Normal Camera attribute within the shader.

When working on your LookDev, it’s a good idea to experiment with different lighting setups. Try using various rigs to see how your materials respond to different types of light. This process helps identify and correct any issues or errors in your shaders, making them look more realistic and polished overall.

For the lighting, I mostly used area lights in Maya. My lighting scene consists of three area lights and an additional HDRI skydome light that serves as an overall atmospheric light, giving my reflections a more natural look. It took some time to finalize the lighting, but from my main references I already had a vision of how I wanted the final image to look.

Also, bear in mind that a poor lighting setup can make the render appear dull, even if the model is well-made. So, take your time and experiment with different rigs, HDRIs, and even entire light compositions.

I rendered everything out using Arnold with the ACES colorspace.

The final compositing took place in Nuke. This was the first time I worked with Nuke, and I must admit, I found it to be very intuitive. I really liked the node-based system, since I was already familiar with Mari’s nodes, making it relatively easy for me to grasp.

Having said that, the first thing I did in Nuke was to balance out my light groups to emphasize the central focus on her face. I also exported several AOVs (render passes) out of Maya, which I added to my composition to help me dial in different shaders without having to re-render the entire image all over again.

I did a basic beauty rebuild of the raw render I got out of Arnold, which allowed me to apply multiple grade and color correction nodes to specific AOVs and make minor tweaks without affecting other shader values.
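A beauty rebuild boils down to summing the additive light-contribution AOVs back into the full image, which is exactly what makes per-AOV grading possible. Here is a minimal sketch of that idea; the AOV names follow Arnold’s common diffuse/specular/SSS split, and the grade value is an illustrative assumption:

```python
import numpy as np

def rebuild_beauty(aovs, grades=None):
    """Sum additive light AOVs back into a beauty image.

    aovs: dict of name -> (h, w, 3) float arrays.
    grades: optional per-AOV multipliers, letting you tweak e.g. the
    specular contribution without re-rendering the whole frame.
    """
    grades = grades or {}
    beauty = np.zeros_like(next(iter(aovs.values())))
    for name, layer in aovs.items():
        beauty += layer * grades.get(name, 1.0)
    return beauty

rng = np.random.default_rng(1)
aovs = {name: rng.random((2, 2, 3)) for name in ("diffuse", "specular", "sss")}
plain = rebuild_beauty(aovs)                          # matches the raw render
brighter_spec = rebuild_beauty(aovs, {"specular": 1.2})
```

With all grades at 1.0 the rebuild reproduces the raw beauty, which is the usual sanity check before any per-pass tweaking starts.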

Since rendering depth of field directly in Maya would have taken too long, I instead rendered out a depth layer, which I could then convert to a ZDepth pass inside of Nuke.

There are numerous ways to use this pass. In my case, I added depth of field, giving a nice bokeh effect to the background as well as the foreground. To enhance realism, I added a custom lens kernel image for my bokeh shapes.
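Conceptually, depth-based defocus maps each pixel’s depth to a blur radius (circle of confusion): pixels on the focal plane stay sharp, and the blur grows in front of and behind it. A minimal sketch of that mapping follows; the linear falloff formula and all parameter values are illustrative assumptions, not Nuke’s internals:

```python
import numpy as np

def coc_radius(depth, focus_dist, max_radius=10.0, falloff=1.0):
    """Map a ZDepth value to a blur radius in pixels.

    Pixels at the focus distance stay sharp; blur grows with distance
    from the focal plane, front and back, clamped to max_radius.
    """
    return np.minimum(max_radius, falloff * np.abs(depth - focus_dist))

depth = np.array([2.0, 5.0, 5.5, 30.0])   # distances from camera
radius = coc_radius(depth, focus_dist=5.0, falloff=2.0)
# the pixel at the focal plane (5.0) gets radius 0 and stays sharp;
# the far background clamps to max_radius
```

The custom kernel image mentioned above then determines the shape each blurred pixel spreads into, which is what gives the bokeh its character.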

Additionally, I added lens distortion, chromatic aberration, and grain to the image.

Once finished with all my compositing in Nuke, I focused on refining the colors in DaVinci Resolve, applying a final color correction, and adding vignetting.

Conclusion

I gained a ton of experience from working on this project, as this was my first time creating a portrait. If you decide to take on a similar project, here are a couple of tips I’d suggest:

  • In all stages of your artwork creation, especially when sculpting a convincing human face, gathering references is crucial. You have probably heard this advice countless times before, as have I. However, it proves incredibly beneficial when starting a new project. There is nothing more frustrating than sitting in front of your computer, unsure of what to do next or what to add to your sculpt. Counteract this by collecting all your references right at the beginning.
  • Use ISOs. Even if you decide not to use them in the end, it is always good to have them. They allow you to tweak shader values without the need to go back to a different program.
  • Keep everything organized and easy by doing all your Look Development in separate files.

Thank you for reading this interview. I hope I could give some insight into the creation of Blake and maybe you guys even learned something.  

Peter Stumpf, 3D and VFX Artist

Interview conducted by Theodore McKenzie
