Rhythm Gandhi and Vivek Surve showed the work behind the Flora in the Woods project, explained how RealityCapture helped capture the data, and discussed the rendering process.
Rhythm Gandhi: Hi! My name is Rhythm Gandhi, and I go by my art name Blackfang Wolf. I'm currently an Unreal Engine Generalist at Green Rain Studio, based in Mumbai, India. I'm also the founder of the Unreal Engine India Community, which I run in my country alongside my wonderful admins. One of them is Vivek Surve, who collaborated with me on this project.
Vivek Surve: Hi! I am Vivek, and I don't have a cool art name like Rhythm. I am the founder of Parallax Labs, an immersive tech startup based in Mumbai, India. We have been providing solutions such as flight simulators for Indian defense, planet-scale digital twins that scientists and astronauts at the Indian Space Research Organization can work with, and VR exposure therapy to acclimatize specially-abled individuals to challenging environments. I am also the community manager for the Unreal India community, as Rhythm mentioned. I am an Unreal Authorized Instructor and have been teaching Unreal for the past six years.
The Flora in the Woods Project
Rhythm Gandhi: Vivek and I wanted to do something related to photogrammetry. We wanted to motivate people to utilize the power of photogrammetry with Unreal Engine by using RealityCapture. Vivek had already done a scan of the Flora Fountain at that time and that model became an inspiration for me to put it inside Unreal Engine and create a short cinematic.
The Flora Fountain is an architectural heritage monument and a renowned tourist spot in Mumbai. Vivek got lucky to capture it during the pandemic with no crowd around; usually, the place is very crowded.
NVIDIA Canvas and Photoshop
Rhythm Gandhi: When I started this project, I wanted to do quick cinematic shots, and for that, I needed a quick concept of what I had in mind. I had heard of NVIDIA Canvas before, and this was the moment to put it to use. Within five minutes I already had something close to what I imagined; it was only a matter of touching up what I did in Photoshop.
Though I wanted to put in more ideas and make it much darker, I decided to stick with a lighter concept since it is an important spot. In the future, I may add some variations.
Vivek Surve: The fountain was the central piece. The data was captured with two types of sensors:
- DJI Mavic Mini to capture the angles that I couldn't get access to using a handheld camera.
- My smartphone camera with a tripod and manual settings to avoid blurry images.
I took around 2,000 images to capture the subject from all possible angles. I also made sure to take a lot of close-up photos to capture the fine details. The data was processed in RealityCapture for a high-fidelity reconstruction of the subject.
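As a back-of-the-envelope sketch of why the image count climbs so quickly, here is a hypothetical calculation of shots needed per orbit ring for a given frame overlap. The FOV and overlap figures below are illustrative assumptions, not the actual capture settings:

```python
import math

def shots_per_orbit(fov_deg: float, overlap: float) -> int:
    """Camera positions needed on one circular orbit so that
    consecutive frames share at least `overlap` of the horizontal FOV."""
    step = fov_deg * (1.0 - overlap)  # angular advance between consecutive shots
    return math.ceil(360.0 / step)

# e.g. a ~66-degree phone FOV with 80% overlap between neighbouring frames
print(shots_per_orbit(66.0, 0.8))  # 28 shots for a single ring
```

Multiply that by rings at several heights and distances, plus dedicated close-up passes, and a single monument easily reaches a couple of thousand frames.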
Mesh & Textures
Vivek Surve: The high poly 3D model and the textures were then brought into Blender for clean-up and retopo. I tried a few methods to retopologize the model but I ended up using the built-in Voxel Remesher in Blender.
I tried my best to represent the silhouette of the geometry with as few polygons as possible. Then I transferred the high-frequency detail from the high poly to the low poly with the help of a normal map, while still letting the 3D model take care of the low-frequency detail.
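For readers unfamiliar with how a normal map stores that high-frequency detail, here is a minimal sketch of the standard texel encoding (the helper function is illustrative, not part of any baking tool): each component of a unit normal in [-1, 1] is remapped to an 8-bit channel in [0, 255].

```python
def encode_normal(n):
    """Remap a unit normal vector from [-1, 1] per axis to 8-bit RGB,
    the encoding used by a tangent-space normal map texel."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

print(encode_normal((0.0, 0.0, 1.0)))  # (128, 128, 255): the familiar normal-map blue
```

A perfectly flat region encodes to that uniform blue, while baked-in surface detail perturbs the red and green channels.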
I would have optimized the model even further if the asset were meant for a real-time game scenario. But since I had already decided to use this asset for cinematic purposes, I tried to maintain a good balance between asset handling (poly count and texture size) and quality.
You can find the link to the 3D model on Sketchfab here.
Rhythm Gandhi: This whole project was about the scan, and with time constraints on my side, I didn't have a lot of time to spend on it. I wanted to do something short and quick, so I used Megascans assets (trees, ferns, and grass) and the Redwood Forest asset.
The auto material, ground mist, and godray material were all made by hand. I used Unreal Engine's modeling tools to make the godray cone from a cylinder, deleting the faces I didn't need, and applied the godray material to it.
Rhythm Gandhi: The lighting setup was quite simple. I used two directional light sources, one being the main one, and the other was added as a bounce with no shadows. One of my colleagues, Shravankumar Rawal, advised a bit on the lighting as well. I learned some great lighting tips while working with him on previous projects.
I added a sky light along with volumetric fog, and some dust particles using Niagara; the fog added a little mood to the environment. I added some fog sheets and some local ground mist around the scene.
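The ground mist described above rests on the same idea as Unreal's Exponential Height Fog: density falls off exponentially with height, so the fog pools near the ground. A minimal sketch of that falloff, with parameter values made up purely for illustration:

```python
import math

def fog_density(height_m: float, base_density: float = 0.02,
                height_falloff: float = 0.2) -> float:
    """Exponential height fog: density decays exponentially with height,
    which is what concentrates the mist near the ground."""
    return base_density * math.exp(-height_falloff * height_m)

print(fog_density(0.0))   # 0.02 at ground level
print(fog_density(10.0))  # roughly 0.0027 ten metres up
```

Raising the falloff constant flattens the mist into a thin ground-hugging layer; lowering it spreads the fog up through the whole scene.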
At this point, my GPU almost gave up; according to Unreal's warnings, I was exhausting my video memory. I didn't even know if my GPU could handle rendering the whole scene. Luckily, it did, and it took around 8 hours to render the whole scene on my RTX 2060 Super.
In the post-process volume, I only activated Lumen and applied slight color correction in the film settings; the rest was done in After Effects.
Rhythm Gandhi: NVIDIA Canvas definitely made concepting fast: a few brush strokes, each mapped to a different element by brush selection, were enough. Anyone looking to do a quick concept landscape or hunting for ideas can use NVIDIA Canvas. RealityCapture saves a lot of modeling time, and with Nanite, it's just a matter of drag and drop. Unreal Engine has been my go-to for showing off cinematics quickly, thanks to its real-time ray tracing and Lumen capabilities.