Environment Artist Gaetan Osman talked about the project DESOLATE, discussed the workflow, and gave advice on time management when working as a team.
Hi! My name is Gaetan Osman. I am a 3D Environment Modeler, Texture, and Lighting Artist, and I just recently graduated from Vancouver Film School. Originally, I was pursuing a degree in the film production stream. But then – there was an assignment to create our set using cardboard. Because I had some basic skills in Blender, I decided to create the set in 3D. I remembered how much fun it was to virtually create ANYTHING, so it hit me right then and there that film production wasn’t for me and that I wanted to do something more in the 3D creative space!
A week later, I applied for a program transfer, and before I knew it, I was creating all sorts of crazy stuff all within a meter distance of my computer.
I am currently working as a freelancer, slowly building up my portfolio so that I have something impressive to show the day I apply for my dream job at Santa Monica Studio.
I graduated as a Software Engineer from the University of Balamand in Lebanon and then worked in the family business as a software developer for 5 years. While working there, I always found myself leaning towards the creative aspects of the projects. So, one day, I stumbled upon a video by Blender Guru, the famous YouTube Blender channel owned by Andrew Price, and I became fascinated with what was possible using open-source 3D software such as Blender. All this time, I had wanted to create and let my intuition flow, but I lacked the tools and skills to express it until I began to learn 3D. Thanks, Andrew, you’re the man!
Once I decided to fully commit myself to this new realm of possibilities, I applied and got accepted for the 3D Animation and VFX program at VFS in Canada. From there on, it was like a rocket set for take-off! There is something about being surrounded by mentors who are experts in the field and hundreds of other aspiring creative students, and receiving constant feedback, that leaps your progress forward by years compared to doing things the self-taught way. Let me clarify that I do not think there is anything negative about being self-taught – not at all! I encourage and am inspired by it! But as a beginner with no clear roadmap, there’s definitely an added benefit to having mentors who lay down the fundamental groundwork for you, so that later on you have a clear vision of whether you’d like to be an animator, a rigger, a 3D modeler, a lighter, etc.
The Desolate Project
I simply adore steampunk movies. If my fiancée were here, she’d roll her eyes right now – the entire genre is all I talk about! So of course, I knew I wanted to incorporate that genre into my modeling demo reel. However, I wanted it to be grounded and a little more realistic-looking than the over-the-top whimsical steampunk art I often see online. So we took references from Wild Wild West, Captain Nemo, The Golden Compass, and The Time Machine. One of my team members, the brilliantly talented concept artist Daniel Satiwan, helped translate my creative vision into a visual medium that was then shared with the other team members so that our idea of the project was aligned.
Using proxy renders and the proper scale as a base allowed the concept artist, Daniel Satiwan, to focus more on the mood and art and worry less about technical aspects such as perspective and scale.
Last frame from our reel:
Time machine's blueprint and render:
We eventually deviated quite a bit from the animatic, but it was important to have a clear storyboard represented visually. We did this through a combination of proxy modeling and sketches, and then took snippets of them and placed them in our storyboard.
Once we had a rough sketch of our animatic, we began to pull up references for the large assets first, such as buildings, environments, large-scale objects, etc., and began proxy modeling them into our scenes.
You’ll often find that the steampunk genre uses a lot of industrialized elements infused into the Neo-Victorian era, so we used that time period as our main source of inspiration.
Blockout and Modeling
Something I severely underestimated was the importance of having an animatic. We were too eager to model whatever caught our interest, causing our scene and camera placement to lose a sense of coherence. We reverted to our storyboard and began proxy modeling each scene based on what is seen by the camera. There is such a thing as “over modeling” – modeling too much detail in areas that are barely seen by the camera – causing the viewport to bog down and renders to take much longer than they should. Using this approach, we would only increase geometric detail on assets close to the camera and use normal maps and displacement maps for objects further away. For some assets, such as the vehicles and the mansion, we had a proxy version and a high-detail one, so that once the camera changed position, we’d swap out the proxy one for the detailed one.
The general blockout was first done in Maya; once we entered the detail phase, we brought objects that needed to be destroyed into ZBrush. Some of you are probably wondering why we didn’t use Houdini – trust me, we would’ve if any of us knew how to use it! I found that Blender had additional tools for modeling and was easier to use for complicated work such as ornamentation and the more intricately designed assets seen mostly in the interiors of our demo reel.
Proxy – simple light pass:
Light detail – refined light pass:
Heavy detail phase – Finalized light set up:
Once our animatic was locked, we started to UV unwrap most of the assets in their light-detailed shape. In any 3D software, I found it better to UV unwrap a little prior to adding too much detail, because once I subdivide the asset, it becomes a mess to UV unwrap. As long as proper edge loops were placed, the texture distortion was barely noticeable in case I needed to subdivide the asset later on.
No edge loops holding the texture in place, texture distorts when entering SubDiv mode:
Proper edge loops holding the texture in place, no texture distortion when entering SubDiv mode:
Most of the large-scale environment textures, such as the interior of the mansion and the broken asphalt of the exterior, were done procedurally. It can be daunting and overwhelming at first, but I’ve found that if you approach it like a puzzle, it becomes a fun challenge to take on. It’s all about black and white values and how they affect your shaders and masks. You’re either subtracting or adding values between black and white. The following were done procedurally in Arnold:
Procedural dust (whenever an object is placed on the floor, dust accumulates at its feet). I then created a slider for the dust amount and a slider for the dust noise and turbulence.
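The dust setup above really is just mask arithmetic: a contact-occlusion value scaled by an "amount" slider and broken up by a "noise" slider. A toy Python stand-in for that node graph, assuming hypothetical inputs (the real version lives in Arnold shading nodes):

```python
import random

def dust_mask(contact_occlusion, amount=0.5, noise_strength=0.2, seed=0):
    """Build a dust mask from per-sample contact occlusion (0..1, 1 = touching the floor).

    `amount` and `noise_strength` play the role of the two sliders
    mentioned above: overall dust level and turbulence break-up.
    """
    rng = random.Random(seed)
    out = []
    for occ in contact_occlusion:
        noise = (rng.random() - 0.5) * 2.0 * noise_strength
        # Add/subtract values, then clamp back into the 0..1 mask range.
        out.append(min(1.0, max(0.0, occ * amount + noise)))
    return out
```

The clamp at the end mirrors what a shading network does implicitly: masks only ever live between pure black and pure white.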
To create the puddle effect, I simply duplicated the plane with the cobblestone material, applied a new aiStandardSurface material, and gave it water-like settings. I then dragged the new water-shaded plane ever so slightly below the cobblestone. Next, I soft-selected a few vertices on the cobblestone and dragged them downwards so that they sit lower than the duplicated plane with the water shader. Now there are sections where the water-shaded plane intersects the cobblestone-shaded plane, giving the puddle-like appearance. Then, using the aiCurvature node, the aiAmbientOcclusion node, and some noise, you can control the roughness surrounding those puddles so that there is a roughness falloff between the middle of the road and those puddles.
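The roughness falloff around the puddles can be sketched as a blend driven by the curvature, occlusion, and noise masks. This is a hedged pure-Python illustration of the principle, not the actual Arnold network; the specific roughness values are made up.

```python
def puddle_roughness(curvature, occlusion, noise,
                     dry_roughness=0.85, wet_roughness=0.05):
    """Blend roughness between dry cobblestone and wet puddle edges.

    All inputs are 0..1 mask values for a single shading sample.
    """
    # Cavities (high occlusion, low curvature) read as wetter;
    # noise breaks up the transition so it doesn't look mechanical.
    wetness = min(1.0, max(0.0, occlusion * (1.0 - curvature) + noise))
    # Linear falloff from dry road to near-mirror puddle.
    return dry_roughness + (wet_roughness - dry_roughness) * wetness
```

Fully occluded, flat areas return the wet value (0.05), sharp dry ridges return the dry value (0.85), and everything in between falls off smoothly.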
Nearly everything that wasn’t done procedurally was textured within Substance Painter. There were multiple passes on the assets: a clean texture pass, a light dirt-and-grime pass, and finally a heavy worn-out pass for assets that are enveloped in rubble and dirt.
About 5% of the assets in the project were pre-made assets from websites such as TurboSquid and 3Dsky, used to fill up empty spaces in the background with things that are very repetitive and simple, such as debris, trees, and plants. However, since this is a modeling demo reel, most of the assets clearly seen in the shots were built by my team and me. We created a few kitbashes for the repetitive steampunk elements such as gears, knots, ornamentation, pipes, and beams.
A gears kitbash that we created proved to be very useful later on when we started to create props for our scenes:
To be honest, we didn’t use Marvelous Designer that much because the licenses were very limited at VFS, and people with characters had priority over environment artists – which makes sense, or else they’d have naked characters in their reels. We did have a few flapping cloth simulations in the exterior scene. This could’ve been achieved within Maya, but the desired effect was more easily achieved within MD. It was then just a matter of importing the Alembic file into Maya for scene integration and texturing.
If there’s one section I am completely going to geek out on, it’s lighting! As a freelance photographer and cinematographer prior to traveling to Canada, I was already quite familiar with the technicalities of composition and lighting and how massively they affect the story. Those lighting fundamentals operate very similarly in the virtual world, so I just had to apply what I already knew. Since the animatic was already locked, I didn’t have to worry about camera placement; the priority was to make sure the lighting was consistent and conveyed the right mood for the shot. An over-exposed shot is not always a bad thing – though I doubt that applies often – it is all about having intentional lighting. Every light placement should have a purpose. I know that sounds a bit abstract, so I’ll use the exterior scene of our reel as an example.
After opening the exterior scene, the very first thing I’ll often do is place a simple dome light and try out different HDRIs. It’s a quick way to get an idea of what sort of light setup you’re looking for.
Once I’ve found something interesting, I’ll place my first key light. The key light represents my main source of light; in the exterior scene, for instance, it was the sun. Because our reel is set in a post-war era, I wanted to convey a sense of desolation without making it feel grim or sad. So, the time of day is important when considering your lights – in our case, it was around midday with a blown-out cloudy sky. Once the key light is placed, I’ll often look at where to place the rim light. The rim light is that nice edge of light on the backside of your subject that creates separation between your subject and the background.
Keeping in mind that our eyes are drawn to the highest ratio of contrast, I would then add fill lights to areas that were a little too dark for my liking. When making a modeling/texturing/lighting reel, the most challenging part was making sure everything is visible and shows enough detail without making the shot appear flat. Most of the time, all I am using are the principles of a three-point light setup, or a variant of it, to light my scenes.
Another key element of good lighting is ACES! It is the industry standard for color. As opposed to sRGB, ACES has far fewer restrictions on the color space and is able to capture light and color values far beyond the standard sRGB gamut. Theoretically, it can capture everything our eyes can see; however, we haven’t yet developed the tech to display that range of color and light on our monitors or screens. I’ll be honest, it’s a little too complex to cover ACES in a few words, but for those interested, I definitely suggest reading more into it because it will most certainly elevate your renders to the next level.
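To give a flavor of why a wide-range workflow matters on limited displays: scene-linear values can go well above 1.0, and a tone curve has to roll them off rather than clip them. Below is one widely used single-channel approximation of the ACES filmic curve (Krzysztof Narkowicz's fit) – a sketch for intuition, not the full ACES transform, which involves color-space conversions as well.

```python
def aces_filmic(x):
    """Narkowicz's fit of the ACES filmic tone curve for one channel.

    Maps a scene-linear value (which may exceed 1.0) into display 0..1,
    so bright highlights roll off smoothly instead of clipping.
    """
    a, b, c, d, e = 2.51, 0.03, 2.43, 0.59, 0.14
    tone = (x * (a * x + b)) / (x * (c * x + d) + e)
    return min(1.0, max(0.0, tone))
```

Feeding in a very bright value like 10.0 still lands at the top of the displayable range instead of blowing out the pixel.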
And finally, once I’ve set all my lights, I separate them into AOV light groups by enabling the “All Light Groups” option.
By separating all lights into AOV light groups, you can then pick and choose which light source you’d like to manipulate in NUKE (or any other compositing software) using a Shuffle node. The “dust effect” in the study room is done by simply isolating one of the lights using that Shuffle node and then multiplying that by an animating noise node in NUKE. No simulations, no extra render times, and therefore less of a headache. There is no such thing as “no headache” in 3D, unfortunately.
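The dust trick above is conceptually just a per-pixel multiply of one isolated light group by a noise pattern that changes every frame. A hypothetical Python stand-in for that Shuffle-plus-noise Nuke setup (pixel lists and the noise function are placeholders for real image data):

```python
import random

def animate_dust(light_aov, frame, noise_scale=0.3, seed=7):
    """Multiply one light group's per-pixel values by per-frame noise.

    `light_aov` is a flat list of pixel intensities from the isolated
    light (what the Shuffle node would extract); re-seeding with the
    frame number makes the noise animate over time.
    """
    rng = random.Random(seed + frame)
    return [px * (1.0 - noise_scale + rng.random() * noise_scale)
            for px in light_aov]
```

Because only the isolated light's contribution is modulated before the beauty is reassembled, the rest of the image stays untouched – hence no simulation and no extra render time.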
Two words… render layers! Creating separate render layers and matting out the rest according to category and material will most certainly optimize your renders. Let’s take the study room, for example: in one render layer, I would override the material on all the props with a matte material and then only render out the study room set (floor, walls, curtains, etc.). In another render layer, I would do the exact same but in reverse. Among the props, however, I would create multiple layers to separate the transmissive assets (glass, bottles, etc.) and the subsurface assets (couch, globe, paper boxes, etc.) from the rest. This way, I would be able to override the sample settings according to the materials contained in that render layer.
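The layer split described above amounts to grouping assets by material category so each group can get its own sampling overrides. A minimal sketch of that bookkeeping – asset names and categories here are invented for illustration:

```python
def build_render_layers(assets):
    """Group assets into render layers by material category.

    `assets` maps asset name -> category; transmissive and subsurface
    materials get dedicated layers so their sample settings can be
    raised independently of everything else.
    """
    layers = {"set": [], "transmissive": [], "subsurface": [], "props": []}
    for name, category in assets.items():
        # Anything without a dedicated layer falls back to "props".
        layers.get(category, layers["props"]).append(name)
    return layers

layers = build_render_layers({
    "walls": "set", "floor": "set", "bottle": "transmissive",
    "couch": "subsurface", "globe": "subsurface", "gear_pile": "props",
})
```

In Maya this mapping would correspond to render setup layers with matte and sampling overrides rather than a plain dictionary, but the grouping logic is the same.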
Finally, once I have everything rendered out into separate layers, I drop them into a compositing software such as Nuke. It’s true that taking this approach won’t decrease overall render time, since I’m rendering more frames, but the individual renders finish much quicker, allowing me to optimize the samples and make quick changes if I need to. There is also the huge added benefit of having much more flexibility in post.
A lot of 3D artists, modelers in particular, shy away from extra post-production work, but in doing so, they’re missing a huge opportunity to create more complex environments and leaving themselves less flexibility in case they want to make changes. Especially when you’re new to the CG industry and trying to get an entry-level job, the ability to present your work in an aesthetically pleasing way is, in my opinion, very important.
Nuances of Teamwork
If I take any pride in my skills as a team leader, it was thanks to my team - Kevin Edyanto, Cherry Lau, and Daniel Satiwan. As focused and determined as I was, they’re the ones who kept me accountable when I went off track, and they never lost faith in me. Thank you guys, love you all!
For me, the best way for a team to collaborate is to involve everyone in the creative process. Most of us want to create the stuff that we come up with, so listening to what they have to say – their ideas and suggestions – was crucial for the health of the collaborative spirit. Plus, they enhanced the original idea I had – exponentially!
At first, there was an influx of ideas and suggestions; we slowly navigated through it as a team, and eventually, as creative director, I made the final call on what got incorporated into the DESOLATE project.
Once the idea was set, the project was divided into multiple phases with time slots assigned to each block. I can’t stress enough that having self-set deadlines with clear requirements is probably the key to success. It creates accountability among the team, and everyone is clear on what they’re supposed to do. The following is a rough outline of our entire project from beginning to end:
- Proxy SET blockout + rough animatic + storyboard + large reference library split into categories + mood board + color palette references + lighting board + a big list of props to be modeled for every individual team member, categorized as “proxy”, “detailed”, “UV unwrapped”, and “textured”, and by the SET they belong to. (70 days)
- Proxy PROP blockout + light detailing of certain assets + refinement of the animatic + kitbash creation for repetitive components of our assets (UV unwrapped). (45 days)
- Heavy detail Phase + UV unwrapping certain assets + finished animatic (not locked yet, but no more major changes, shots are set). (45 days)
- Texture test for one asset per team member to make sure the look and feel of our assets are aligned + UV unwrapping remaining assets + refined lighting pass + locked animatic (no more changes!) + optimizing all scenes, removing unnecessary cached data, etc. (30 days)
- Texturing all assets per scene + Textured scene is prepped for lighting and divided into render layers + Scenes are prepped to be uploaded to the render farm. (45 days)
- The remaining scenes are uploaded to the render farm + Composition and post-processing. (21 days)
If there’s one takeaway from this, it’s that clarity, a clear end goal, and a list of weekly/monthly deliverables are what kept us on track and consistent throughout the project’s development.
I would say my biggest challenge was finding consistent motivation because of the pandemic. At school, I was surrounded by so many other creative people, and as someone who is very competitive, working in that environment is very motivational for me. So, when we had to work from home, continuously finding ways to motivate myself was probably the hardest part. One of the best ways to remedy that was through Discord. My team and I had weekly Discord meet-ups which almost felt like “class time,” so it was a great way to stay connected and accountable. In hindsight, this was probably a good way to prep my self-discipline skills and dip my toes into the work-from-home life – with COVID, home offices could be the future of the field! Another thing I did to overcome my lack of motivation was joining a bunch of other 3D Discord channels and immersing myself even further in the field. One specific channel I love is by Arvid Schneider, an amazing lookdev artist. He has quite an extensive library on Arnold and Houdini on his YouTube channel – I highly recommend checking his videos out!
As for what’s next - I’m not too sure! I am currently learning Houdini which is quite a journey on its own! Life is up in the air these days with Covid, but what I’m certain of is that I’ve only just dipped my toes into my career within 3D…this is just the beginning.