Building and Optimizing 3D Scans in UE4 Scenes

A look at how you can use photogrammetry to get top-notch assets, and at ways to optimize these complex objects and textures for your game.

Introduction

Hi! My name is Danny Ivan Flu. I am an Environment Artist student from the Netherlands. I started my studies at Grafisch Lyceum Rotterdam, and after graduating four years later as a Game Artist, I followed the four-year Visual Artist program at NHTV – International Game Architecture and Design. Over the past years, I have had the opportunity to work on several cool indie games, VR games and a AAA game, and to work with some amazing people in the industry. On most of these projects, I was responsible for prop creation, environment building and optimization.

For my graduation project, I wanted to explore the world of photogrammetry and all the techniques that go with it. It needed to give me a solid base to build up from. I also wanted to learn new programs to improve my workflow and overall working speed. At the start of the project, I had to set some guidelines. I did not want to create a photorealistic result, but nice, game-ready visuals. The final scene needed to be something creative, not a 1:1 copy of a real-life location. I also did not want to use any downloaded, non-self-made resources, or fully procedural methods when creating assets.


I mostly looked at existing research and techniques in the field of photogrammetry that Epic and DICE have developed in the past years, which are currently applied both in and outside of games. Google and YouTube were also a big source for my initial research, since they contain lots of images and videos of other people's final work, and sometimes the initial steps they have taken in the world of photogrammetry.

Project

For this project, I wanted to go outside and scan parts of the world. Rocks, vegetation and trees were easy to find and capture, but I needed something special, something to break up the nature here and there. In the initial documentation, I wrote that I also wanted to scan crabs, ruins, crocodiles, a ship, statues, small fish and bones of (extinct) animals (like the leviathan whale) and some dinosaurs. Sadly, the only animal in that list that I was able to start working on was a real crocodile, but it never made it into the final scene.


For the first time ever, I worked with a Color Checker to match real-life color values with the colors on screen. Normally I would just guess the values, but this technique was, for me, a real eye-opener. I also started working in RealityCapture. This was the biggest time saver ever. Normally I would sculpt something while looking at research images. The result mostly matched the shape and quality, but it never looked like a full 1:1 copy of the reference; it was always something creative, an artist's approach to the object. The meshes out of RealityCapture were a full 1:1 copy of the "reference" subjects I found. No more constantly looking at reference images while working, since the final scan result is your reference.


Traveling

Most of the assets were not concentrated in one particular place. I had to travel all over the Netherlands to find different types of rocks, trees, ground surfaces and man-made objects. This made it all the more interesting to work on this project: it gave me the opportunity to visit the zoo and the beach, walk through forests, travel on my bike through the city, and visit museums after traveling for hours by train. Yes, in general it would have been better to visit a single location, capture some elements there and recreate the environment to look like a full copy of that single location. But is it fun, working in and on a single location for months?

No, for my graduation project, I wanted to have some fun. Walking on, in and around a real United East India Company (VOC) ship made me feel like a real pirate. I had the feeling the guards and staff there could walk up to you at any moment and ask: "What aaaaarrrrr you doin' here, matey?". Normally, I would never visit locations like this, and activities like visiting a forest, beach or zoo are usually saved for a "vacation". Constantly traveling to different locations and finding different "assets" was more than worth it, simply because it made me look at my surroundings in a different way.

3D Scans

After creating the high-density point cloud for the meshes, creating the raw textures and exporting it all, I moved over to MeshLab and imported the high poly mesh. I then used the quadric edge collapse decimation option to lower the poly count to a stable working amount. That mesh was then imported into Maya, where I used the Quad Draw tools to retopologize it. For stones, some of the vegetation, the Buddha statue and most of the trees, this was the least time-consuming solution. The ship was also retopologized by hand, and after days of work, I had to find a better and faster solution to this process.
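The decimation step can also be scripted. Below is a minimal sketch assuming MeshLab's pymeshlab Python bindings; the filter and parameter names follow recent pymeshlab releases, and the file paths and target face count are placeholder example values, not the values used in this project.

```python
# Minimal sketch: quadric edge collapse decimation via pymeshlab
# (MeshLab's Python bindings). Paths and counts are placeholders.
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("rock_scan_highpoly.obj")

# Reduce the raw scan to a face count that is stable to work with.
ms.meshing_decimation_quadric_edge_collapse(
    targetfacenum=100_000,   # working resolution, not the final low poly
    preservenormal=True,     # avoid flipped faces during collapse
    preservetopology=True,   # keep the mesh manifold for retopology later
)

ms.save_current_mesh("rock_scan_decimated.obj")
```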

After some searching on the web, I found a program named Instant Meshes (Instant Field-Aligned Meshes). It created a nicely retopologized mesh just by dragging some lines over the imported mesh. It was a real eye-opener and a huge time saver for me. After importing that mesh back into Maya, I unwrapped it in Roadkill UV. I have been using that program for years, since back in the day it was the only Maya tool I knew of that resembled Blender's way of unwrapping. When the unwrap was done and the right settings were in place, I just clicked "Optimize", "Unfold", "Straighten UV Border" and "Layout" in Maya, and the UVs for the low poly mesh were created.
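Besides the interactive line-dragging workflow, Instant Meshes also ships with a command-line batch mode. A hedged sketch of driving it from Python is below; the flag names are as I recall them from the project's README, and the executable and file paths are placeholders.

```python
# Hedged sketch: running Instant Meshes in batch mode from Python.
# -o (output) and -f (target face count) are from the project's README;
# the binary path is a placeholder for your install location.
import subprocess

subprocess.run(
    [
        "./Instant Meshes",          # path to the Instant Meshes binary
        "rock_scan_decimated.obj",   # decimated scan from the previous step
        "-o", "rock_lowpoly.obj",    # write the result without opening the GUI
        "-f", "5000",                # desired face count for the low poly
        "-d",                        # deterministic output (repeatable runs)
    ],
    check=True,
)
```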

I then created a cage by opening the Transfer Maps panel in Maya, setting the display to "both" in the target meshes section, and increasing the search envelope. This process took minutes instead of hours by hand, and gave a very good result. The low poly mesh, the cage, and the raw high poly mesh with its base texture were then imported into xNormal. After baking the Base, Normal, Height and sometimes AO textures, I imported all of the textures and the low poly mesh into Substance Painter.
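For repeated bakes, xNormal can also be launched from the command line with a saved .xml settings file (exported from its UI), which runs the bake unattended. A hedged sketch, with placeholder paths for the executable and settings file:

```python
# Hedged sketch: headless xNormal bake driven by a saved .xml settings
# file. Both paths below are placeholders for your own install and project.
import subprocess

subprocess.run(
    [r"C:\Program Files\xNormal\x64\xNormal.exe", r"D:\bakes\rock_bake.xml"],
    check=True,
)
```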

Substance Painter was only used to fix the holes, gaps and blurry parts of the mesh/texture, since there were situations where I could not capture the full mesh in 360 degrees. I used the clone stamp to sample from various parts of the texture to close the gaps and create a nice-looking asset. For example, I was unable to capture the lower part of the ship, since it was submerged in water. I created the lower part by extracting it from a scale model of another ship, pasting that onto my full-scale ship, retopologizing the mesh, texturing it, and fixing the seams and gaps by sampling from the imported textures and some close-up images of the hull.

To add variation, mask in-engine seams and break up tiling, I created a tri-planar material and mostly applied it when painting moss and dirt in Unreal Engine 4. This way, I could import vertex-painted meshes or paint vertex colors on meshes directly in UE4. Another quick "trick" I used in this project was for branches, twigs and plants that were too "bendy" and too big to move to a wind-still location. I just took a picture of a twig or branch, converted it to a high-contrast black-and-white image, and used the image-to-mesh tools in Maya to form twigs and, from those, 3D plants. I just had to add polycards of leaves and small branches to form a nice-looking shrub or tree.
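At its core, a tri-planar material samples the texture on three world-aligned planes and blends the results by the surface normal. The sketch below illustrates that blend-weight math in NumPy; the sharpness exponent is an example value of mine, not a figure from the project, and the UE4 material graph computes the same thing per pixel with nodes.

```python
# Illustration of tri-planar blend weights, mirroring the math a
# tri-planar material performs per pixel. The sharpness exponent is an
# example value; higher values tighten the transition between projections.
import numpy as np

def triplanar_weights(normal, sharpness=4.0):
    """Return (wx, wy, wz) blend weights for the three projection planes."""
    w = np.abs(np.asarray(normal, dtype=float)) ** sharpness
    return w / w.sum()  # normalize so the three samples sum to one

# Each weight scales the texture sample projected along that world axis:
# color = wx * tex(YZ plane) + wy * tex(XZ plane) + wz * tex(XY plane)
print(triplanar_weights([0.2, 0.9, 0.4]))
```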


For optimization purposes, I just used "game-ready" meshes, nothing too low or too high. If, for some reason, the poly count was too low and obvious edges were visible in the silhouette of a mesh, tessellation kicked in. I used the previously baked heightmaps as an input in the UE4 material to create nice surface/silhouette details and to remove all "low poly looking shapes". This, however, had some impact on performance, so the tessellation multiplier and the distance at which tessellation was used were kept close to the camera. The moment LOD1 or higher kicked in, the material was swapped for a material (instance) that had the multiplier value set to "0" or had tessellation disabled completely.
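Zeroing the multiplier on the LOD1+ material instance is something you can also do from UE4's editor Python API rather than by hand. A hedged sketch is below; the asset path and the exposed scalar parameter name ("TessMultiplier") are placeholders for whatever your own master material exposes.

```python
# Hedged sketch (UE4 editor Python): zero out the tessellation multiplier
# on the material instance used from LOD1 onward. Asset path and parameter
# name are placeholders for your own setup.
import unreal

inst = unreal.EditorAssetLibrary.load_asset(
    "/Game/Materials/MI_Rock_NoTess.MI_Rock_NoTess"
)
unreal.MaterialEditingLibrary.set_material_instance_scalar_parameter_value(
    inst, "TessMultiplier", 0.0
)
unreal.EditorAssetLibrary.save_loaded_asset(inst)
```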

Using Substance Painter

I did not make any (smart) materials in Substance for this project, simply because it was not described as one of the objectives at the start. Substance Painter was only used to fix minor errors. After exporting the meshes and textures out of RealityCapture, some holes, blurriness or other small mistakes were visible. I then used the clone stamp brush to paint out those mistakes by sampling from other parts of the baked texture(s). I applied this method to almost every object, simply because it was almost impossible to capture an object completely in one run. Creating materials and texturing in Substance is great, but for this project I wanted to focus on photogrammetry and the techniques behind it only.


Setting up materials and assets in Unreal Engine

Initially, some quick test assets were made: a single rock, a tree (log), some moss, small ground vegetation and a dirt floor material. This was to see if I could make a nice tri-planar material that blended everything together. I first applied vertex paint in places where a mesh ends and the ground begins. After loading all meshes and textures into Unreal, I also loaded in a test grid image (in my case, the Blender UV test image). I created a tri-planar material, hooked up the Albedo, MRHA, Normal and Detail Normal textures to the black vertex colors, and added that test grid image to the Red (or Green, or Blue) vertex color. This way the mesh itself shows correctly and has a tri-planar material at the base.

For variation, I blended in a black-and-white dirt texture and subtracted it from the Red (or Green, or Blue) vertex color to make the blend less straight and more random. After everything was working correctly, I added some (blend and variation) parameters, removed the test grid image(s) and replaced them with the right PBR materials (a dirt ground material, or a forest floor material).
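As a small NumPy sketch of the per-pixel logic described above (all names here are illustrative, not the project's actual parameter names): the painted red vertex color drives the blend between the mesh's own scanned textures and the tri-planar ground material, and the grayscale dirt mask is subtracted to roughen the transition.

```python
# Illustrative sketch of the vertex-color blend described above.
import numpy as np

def blend_mask(vertex_red, dirt_mask, dirt_strength=0.5):
    """0 = the mesh's own scanned textures, 1 = the tri-planar ground material."""
    return np.clip(vertex_red - dirt_mask * dirt_strength, 0.0, 1.0)

def shade(mesh_color, ground_color, vertex_red, dirt_mask):
    # Lerp between the scanned mesh texture and the tri-planar ground
    # material, with the dirt mask breaking up the painted transition.
    a = blend_mask(vertex_red, dirt_mask)
    return (1.0 - a) * np.asarray(mesh_color) + a * np.asarray(ground_color)
```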

Time costs

This graduation project was made in six months. The first month was there to experiment with and learn all of these new programs. The last month was filled with rendering, making the cinematics and getting the required documentation up to date. The rest of the time was used to capture all the assets and create the scene.

For the process, I am going to take a (simple) stone as an example. I went outside and captured that stone. This took maybe 15 minutes and got me around 175 images, taken by walking around the stone two or three times, starting from the top and working all the way down to the bottom. I then loaded all the images into RealityCapture and let it calculate the high-detail point cloud, which took around 4 hours. After that, the program created the texture, which took around half an hour to complete. The mesh and texture were then exported and loaded into Maya. Retopologizing a simple rock took around 10 minutes, unwrapping around 5, and applying vertex paint around 10 minutes. Everything was then loaded into xNormal and baked, which took up to 15 minutes to calculate.

When all of that was completed, the mesh and (raw) baked textures were loaded into Substance Painter. After painting and sampling for around 20 minutes, the fixed textures were saved and imported, together with the mesh, into Unreal. An instance of the Master material created earlier was applied to the mesh, the Simplygon LOD settings were set, and the asset was saved and ready to be placed in the scene.

In total, I could easily create a single asset per day, and sometimes, if the asset was easy enough, finish the day with two. I do have to mention that some complex meshes in this project, like the ship, took several days to retopologize manually. This was long before I started using Instant Meshes to create nice topology for complex meshes.

Building the scene

When the project was completed, I concluded that this was not the easiest way of working. I made a full asset and placed it in the scene, then made another one and added that to the scene as well. This process repeated itself throughout the entire project. I should have made all the assets first, and then started with a blockout before building the final scene. As stated before, that would have been the correct way when working on a single location, but I never knew what type of assets I would end up with after visiting a different location every other week.

I ended up with 14 maps, 72 meshes, 228 textures and 125 material instances. Not all textures and material instances were used, so the numbers are not completely accurate. The large number of maps was simply because I saved all my experiments and tests in different maps, to make sure I would not destroy all the work or corrupt a map file after a crash.

Applying Simplygon

The biggest joy when working on this project was the launch of Simplygon Connect. "In the olden days", I used the standalone version of Simplygon to generate the LODs. That process of loading in, generating LODs and exporting every single asset, each and every time, was a very time-consuming way of working. Nowadays, Simplygon is built into UE4 (just like it was in UDK), and with a few sliders and numbers, you can easily create, adjust and see the LODs on screen. After applying these settings to several meshes, you can directly see the framerate in the scene rise again.

One of the earlier test scenes averaged 11.5 million triangles on screen, running at around 2 FPS. After applying normal LOD settings (LOD0 – 100%, LOD1 – 50% and LOD2 – 25/15%), the triangle count was lowered to 3.3 million triangles, running at 23/24 FPS. This took only several minutes of work and gained a lot of performance. These settings were applied across the board, and with some more optimization and tweaking, these numbers could only have gotten better.
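Those reduction percentages can also be applied per mesh through UE4's editor Python API instead of the details panel. A hedged sketch is below; the asset path and screen-size thresholds are placeholder example values, and the percentages mirror the LOD0/LOD1/LOD2 settings quoted above.

```python
# Hedged sketch (UE4 editor Python): apply LOD reduction percentages to a
# static mesh. Asset path and screen sizes are placeholders.
import unreal

def lod(percent, screen_size):
    s = unreal.EditorScriptingMeshReductionSettings()
    s.percent_triangles = percent   # fraction of LOD0 triangles to keep
    s.screen_size = screen_size     # when this LOD becomes active
    return s

mesh = unreal.EditorAssetLibrary.load_asset("/Game/Meshes/SM_Rock.SM_Rock")

options = unreal.EditorScriptingMeshReductionOptions()
options.reduction_settings = [
    lod(1.00, 1.00),   # LOD0 - 100%
    lod(0.50, 0.50),   # LOD1 - 50%
    lod(0.25, 0.25),   # LOD2 - 25%
]
options.auto_compute_lod_screen_size = False

unreal.EditorStaticMeshLibrary.set_lods(mesh, options)
unreal.EditorAssetLibrary.save_loaded_asset(mesh)
```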

The final scene now displays around 9 million triangles in total. This is with LODs, with quite a lot of complex materials applied, and running with partially baked lighting. Some simple math would (theoretically) give me around 31.4 million triangles on screen if I had not chosen to work with Simplygon. As far as I know, that would have ruined the performance and the project as a whole, probably making it run at 0 or 1 FPS and making it impossible for me to work in the editor.


Lighting and shadows

I started out in a default Unreal template scene, with no changes or adjustments to the lights or sky. After testing all the assets and building the playground scene, I searched the internet for images of dawn/dusk skies, took my own photos of the sky outside, and sampled the average color values from them. These values were then used for the sky and light colors you see in the final result. I also made some small tweaks in the Post Process volume to make it a bit more vibrant. Sampling colors from photos was a huge improvement for me and the project; normally I would have just guessed the values.
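Sampling an average color from a photo is easy to script. A minimal sketch with Pillow and NumPy is below; the file name is a placeholder, and cropping the photo to the sky region first keeps ground pixels from skewing the average.

```python
# Minimal sketch: average the color of a sky photo to use as a sky or
# light color value. The file name is a placeholder.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("dusk_sky_photo.jpg").convert("RGB"), dtype=float)
avg = img.reshape(-1, 3).mean(axis=0)  # mean R, G, B over all pixels

print("Average sky color (0-255):", avg.round().astype(int))
print("As 0-1 sRGB values:", (avg / 255.0).round(3))
```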

At some point, after tweaking the lightmap density for most of the assets, Lightmass crashed when baking on Medium, High or Production quality. This was because the system was running out of physical RAM. I had to bake the scene on Preview, but I changed some of the lights to dynamic, making them cast nice-looking shadows, while the rest of the (distant) scene stayed static and was baked with low quality settings. A simple but effective solution when you run into these types of problems.

Danny Ivan Flu, Environment Artist

Interview conducted by Kirill Tokarev
