From AAA Gaming to Journalism, Assisted Creation Is Helping Artists Create in 3D

Gui Rambelli tells us about his career pivot from EA DICE to The New York Times and how he’s leveraging procedural and AI tools, such as Unity ArtEngine, to create interactive 3D experiences.

Having spent years in the gaming industry at companies such as EA DICE (Digital Illusions CE AB), where he worked on Battlefield 1 and Battlefield V, 3D artist Gui Rambelli was invited to join a news organization. As an artist developing the photogrammetry pipeline at The New York Times, Gui sat down with us (virtually, of course) to share a bit about his career journey and current work, including how he’s leveraging tools such as ArtEngine to speed up his creative processes.

Tell us a bit about yourself. How’d you get into gaming and 3D art?

My passion for video games started all the way back when I was 2. My parents got me a Sega Genesis with a game called “Golden Axe,” and I was hooked. Since then, games have always been present in my life. When I was 15, my mother encouraged me to study and eventually work on something related to games, since I was already so caught up in that world.

After studying Game Art for a few years in Brazil, I decided to move to the US to continue my studies at the New York School of the Arts (then called the National Academy School). While in New York, I picked up a few freelance gigs as a 3D artist. The work made me realize my interests were more on the digital side of art. So in 2012, I packed up again and headed out to California to take classes at the Gnomon School of Visual Effects.

What were you doing before your current role?

After taking a few classes at Gnomon, I landed a role as a 3D generalist at a marketing agency called There, where I worked for a few years while wrapping up my education. My end goal was always to work in gaming, but in the short term, I was open to opportunities in any industry. As I learned about new tools and workflows, I always considered how they could be applied to gaming.

In 2015, I started getting really into scanning. We were using LiDAR to create digital assets for movies and commercials. Until this point, I’d been modeling from scratch, focused on mastering tools like ZBrush. But things clicked when I started working with scans. I remember thinking “Why would I ever do things by hand if I can’t match the quality of a scanned model?”

Around this time, Star Wars™ Battlefront™ came out. The realism of the game was unprecedented, and generated a lot of buzz among the 3D art community, all thanks to EA DICE’s heavy investment in a photogrammetry pipeline. So I thought: “EA DICE already sees the value of scanning, and they’re creating awesome games. I want to work there.” I reached out to them and got a job.

I spent three years at EA DICE working on titles like Battlefield 1, Battlefield V, and more. I worked closely with the Tech Art team, Art Directors, and Art Leads to understand their needs, organize scanning trips, and plan which assets we’d scan and process for use in our game levels. I worked with Kyle Nikolich, Jesse Yerkes, and our Technical Art Director, Anders Caspersson.

What were you doing at The New York Times?

As a Senior 3D Artist in the R&D department at The New York Times, I worked with the newsroom team to help them leverage the power of 3D to tell stories in a more engaging way.

Today’s attention economy means it’s becoming increasingly important for traditional media companies like the NYT to publish content in new, immersive, bite-sized formats. Newspapers are competing with platforms like YouTube and Instagram for the next generation of readers, many of whom don’t have the time to read a traditional long-form article, or simply prefer not to.

At the NYT, our team produced a series of interactive articles using this new medium we developed. As an initial explainer piece, we reconstructed an artist’s loft in Providence, Rhode Island, in 3D. In this article (or rather, experience), readers can explore the studio space and interact with and read about different objects.

Our format was adopted immediately, leading the R&D team to collaborate with the NYT Culture Desk on more pieces. We created virtual walking tours of Chinatown in NYC, the new FaZe Clan esports compound in Los Angeles, and others.

Did your games industry experience help you with the photogrammetry research you were doing at The New York Times?

I believe the biggest challenge we had was creating a robust framework that we could use as a template for generating multiple stories in a short amount of time, while still enabling a variety of ways to explore the captured environments.

That’s where we took advantage of game development techniques: polycount and texture constraints, color correcting our linear-color scenes to sRGB using LUTs, and adding the stylization each piece needed within the respective editors. We also standardized our capture techniques, so that we could estimate ahead of time how long a given environment would take to capture.
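For readers curious what that color correction step involves, here is a minimal sketch of the underlying math: the standard linear-to-sRGB transfer function baked into a 1D lookup table, written in Python with NumPy. The actual pipeline applied this inside the engine via a LUT; this version is purely illustrative.

```python
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Standard sRGB transfer function for linear values in [0, 1]."""
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

# Precompute a 1D LUT so each pixel becomes a cheap table lookup.
LUT_SIZE = 1024
LUT = linear_to_srgb(np.linspace(0.0, 1.0, LUT_SIZE))

def apply_lut(linear_image: np.ndarray) -> np.ndarray:
    """Map a float linear-color image (values in [0, 1]) through the LUT."""
    idx = np.clip((linear_image * (LUT_SIZE - 1)).astype(int), 0, LUT_SIZE - 1)
    return LUT[idx]
```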

Establishing a baseline in terms of technical constraints was crucial for us. It enabled us to quickly process our scenes with RealityCapture using AWS (Amazon Web Services) instances and have the results meet our expectations.

Do you make art in your personal time?

Definitely. I’m always looking to learn new things, and my personal projects enable that. I often go on scanning trips by myself to expand my knowledge. Sometimes I work on projects with friends, so that we can do research together and exchange experiences.

Pat Goodwin is a great friend of mine, and our common interest in photorealism and advanced photogrammetry techniques has always drawn us together. We go on asset capture trips and work on side projects that allow us to test the limits of what’s possible in 3D. Pat works at Unity as a Senior 3D Artist, involved in R&D for photogrammetry. He and the Demo Team are constantly raising the bar for visuals.

Can you tell us about a personal project you worked on?

For this project, Pat and I went to Glacier Park in Montana, in September 2019.

We set out to create a library of 3D assets and assemble a scene. Our goal was to improve our asset capture technique, learn more about processing data in Houdini, and gain a better understanding of recreating vegetation captured using photometric stereo analysis.

Render in Unity - Glacier Park - Gui Rambelli, Pat Goodwin

Pat and I worked together on the project, from asset capture during our trip all the way to the final render. We used RealityCapture, Houdini, Unity ArtEngine, and Substance Painter to create the 3D assets and tiling textures.

The final scene was created in Unity. I set up the terrain, did the level art dressing, and ran the Houdini erosion simulation that beds all of the props into the terrain. Pat did the final pass, fine-tuning the placement of the scatter meshes, vista assets, and VFX, and took care of the lighting setup.

For the final render, we began with a scene based on the key reference photo that we took. During the lighting pass, Pat pushed for a more stylized look, increasing the contrast between the lights and shadows to create a more interesting composition.

Key reference image for scene assembly

What tools do you use in your pipeline for creating these scenes?

I tend to take a slightly different approach depending on the type of 3D content I’m working on.

If I’m creating a 3D prop, I will likely scan the item with photogrammetry, and process the data through RealityCapture and Houdini to create a real-time asset. I use Substance Painter and xNormal to bake all the maps I need, and then use the Unity De-lighting tool to remove any shadows, highlights, and global illumination data that could be present in the base color texture.
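The Unity de-lighting tool works from baked maps (ambient occlusion, normals, positions), so the toy sketch below is only meant to convey the core idea: dividing an estimate of the baked-in lighting out of the captured color to approximate a flat albedo.

```python
import numpy as np

def naive_delight(captured_rgb: np.ndarray,
                  lighting_estimate: np.ndarray,
                  eps: float = 1e-4) -> np.ndarray:
    """captured_rgb: HxWx3 linear colors; lighting_estimate: HxW grayscale,
    e.g. a baked ambient-occlusion map standing in for the shading."""
    albedo = captured_rgb / np.maximum(lighting_estimate[..., None], eps)
    return np.clip(albedo, 0.0, 1.0)
```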

For tileable materials, I can just export an orthographic projection for the base color texture and heightmap from RealityCapture straight to Unity ArtEngine. With ArtEngine, I can automatically generate all the other PBR maps and tile the material seamlessly so that it can be used for terrain or architectural pieces in the scene.
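One of the conversions ArtEngine automates here is deriving a normal map from the heightmap. A common way to do that by hand, sketched below, is finite differences over the height field; ArtEngine’s own implementation is proprietary.

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """height: HxW float array in [0, 1]. Returns HxWx3 normals in [0, 1]."""
    dz_dy, dz_dx = np.gradient(height * strength)
    # The surface normal is perpendicular to the height gradient.
    n = np.dstack((-dz_dx, -dz_dy, np.ones_like(height)))
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n * 0.5 + 0.5  # remap [-1, 1] to [0, 1] for an 8-bit texture
```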

For landscapes, it depends. When I have access to good height data, a simple simulation pass in Houdini can add enough detail to hold up as a good base for the terrain.
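In Houdini terms, that pass can be as small as a heightfield import followed by an erode node. The Python sketch below shows the general shape of such a setup; the node and parameter names are from recent Houdini builds and may vary by version, and the file path is hypothetical.

```python
import hou  # only available inside a Houdini session

geo = hou.node("/obj").createNode("geo", "terrain")

# Bring the source heightmap in as a heightfield.
hf = geo.createNode("heightfield_file", "load_heightmap")
hf.parm("filename").set("$HIP/tex/glacier_height.exr")  # hypothetical path

# Layer hydraulic erosion on top to add believable surface detail.
erode = geo.createNode("heightfield_erode", "erode_pass")
erode.setFirstInput(hf)
erode.setDisplayFlag(True)
erode.setRenderFlag(True)
```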

When I don’t have access to that data, I can capture the topography of the area I want to recreate from Google Earth and reconstruct it with RealityCapture, then extract my own heightmap from that data. Once I bake the color texture for the landscape, I can use Unity ArtEngine to automatically de-light it with the Hard Shadow Removal node.
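Extracting a heightmap from reconstructed geometry is typically done by rendering an orthographic depth pass, but the idea can be shown in a few lines: bin the mesh’s vertex heights into a regular grid. A toy NumPy version:

```python
import numpy as np

def vertices_to_heightmap(verts: np.ndarray, resolution: int = 512) -> np.ndarray:
    """verts: Nx3 array of (x, y, z) positions, with z as the up axis."""
    xy = verts[:, :2]
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    cells = ((xy - mins) / (maxs - mins + 1e-9) * (resolution - 1)).astype(int)

    heightmap = np.full((resolution, resolution), -np.inf, dtype=np.float64)
    # Keep the highest sample that lands in each grid cell.
    np.maximum.at(heightmap, (cells[:, 1], cells[:, 0]), verts[:, 2])

    # Fill cells that received no samples with the lowest observed height.
    finite = np.isfinite(heightmap)
    heightmap[~finite] = heightmap[finite].min()

    # Normalize to [0, 1] for export as a grayscale texture.
    return (heightmap - heightmap.min()) / (np.ptp(heightmap) + 1e-9)
```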

How does ArtEngine help you in this and other projects?

ArtEngine is a huge time saver. Very often, I need to quickly conceptualize a scene. If I can’t find a specific material that I need online at a high enough quality, then I need to create it, either procedurally from scratch (using Substance Designer), or by going out and scanning it myself. It would take hours to produce quality content with either of these workflows.

That’s where ArtEngine comes in. With it, I can import the Color/Height maps from the raw scan, and using only a few nodes I can generate a material complete with all the PBR maps. I can quickly iterate on this material using a reference image, as ArtEngine can automatically match the coloring of the texture to the reference.
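ArtEngine’s color matching node is proprietary, but a common technique with a similar effect is per-channel histogram matching. A minimal sketch with scikit-image (file names are hypothetical; channel_axis requires scikit-image 0.19 or newer):

```python
import numpy as np
from skimage import io
from skimage.exposure import match_histograms

texture = io.imread("scan_basecolor.png")      # hypothetical inputs
reference = io.imread("reference_photo.png")

# Match the scan's per-channel color distribution to the reference photo.
matched = match_histograms(texture, reference, channel_axis=-1)
io.imsave("scan_basecolor_matched.png",
          np.clip(matched, 0, 255).astype(np.uint8))
```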

What’s your typical workflow for processing materials in ArtEngine?

When working with a typical material, I’ll de-light it using Albedo Generation, then use Seam Removal to make it tile. If I want to tweak things or remove specific features, I’ll use the Mutation node, and that’s about it for a basic material.

If there’s significant lighting information in the input, I’ll use Hard Shadow Removal coupled with the heightmap to get rid of harsh shadow edges. Once I have that, getting the material to tile is really simple.
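Seam Removal itself is an ArtEngine node, but the classic trick behind many tiling workflows is easy to show: wrap the texture by half its size in both axes so the seams land in the middle of the image, where they can be inspected and patched. A minimal NumPy version:

```python
import numpy as np

def wrap_offset(texture: np.ndarray) -> np.ndarray:
    """Shift an HxWxC texture by half its size in both axes, wrapping around
    the edges; any tiling seams end up in the center of the result."""
    h, w = texture.shape[:2]
    return np.roll(texture, shift=(h // 2, w // 2), axis=(0, 1))
```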

If I want to blend one material with another, I’ll use a Blend by Height node and set a threshold for where I want the two materials to intersect.
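Conceptually, a height-based blend is simple: wherever one material’s height clears the other’s by more than the threshold, it wins, with a soft transition band in between. A minimal sketch of that logic (not ArtEngine’s actual implementation):

```python
import numpy as np

def blend_by_height(color_a, color_b, height_a, height_b,
                    threshold: float = 0.0, softness: float = 0.05):
    """color_*: HxWx3 arrays; height_*: HxW height fields in [0, 1]."""
    # Positive where material A sits above material B (offset by the threshold).
    delta = (height_a - height_b) - threshold
    mask = np.clip(delta / softness * 0.5 + 0.5, 0.0, 1.0)[..., None]
    return color_a * mask + color_b * (1.0 - mask)
```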

In general though, I keep things simple. I don’t like to go too crazy.

ArtEngine tileable material from Glacier Park, rendered in Marmoset

Do you have a favorite tool?

I’m a big fan of Houdini, and more generally, any tools that let me create 3D content quickly, with little manual touching up needed. ArtEngine is definitely approaching that realm, so I’m excited to see how that keeps evolving in the near future.

Looking to the future of digital art, what are some key trends you’re following?

I’m quite interested to see how augmented reality (AR) plays out as a technology for the consumer market. It’s simple, phone-based, and you can get decent results with LiDAR. It has mass consumption potential but hasn’t quite gotten there yet.

Mesh streaming will also be big, and the platforms that do it well will rise to the top. Everything is headed to the cloud now, so it makes sense that streaming will only become more popular.

And I’m always keeping up with reality capture techniques that I can take advantage of to create assets for real-time rendering.

What are you doing next?

After being part of The New York Times R&D team and successfully developing a versatile, streamlined photogrammetry and LiDAR workflow, I’ve decided to return to the games industry as an Art Specialist at Activision’s Treyarch. I’ll be focusing on photogrammetry, LiDAR, and other techniques that enable reality capture for real-time rendering.

If you’d like to learn more about my work, you can visit my website or ArtStation, or connect with me on LinkedIn.

If you’d like to give ArtEngine a try, Unity is offering the tool for $19/mo (vs. the regular price of $95/mo) until May 17; you can check it out here. Thanks for reading about my journey and workflows!

Gui Rambelli, Senior 3D Artist
