
Recreating Death Stranding Odradek Terrain Scanner in Unity

Sichen Liu explained how they recreated the Odradek Terrain Scanner effect from Death Stranding using Unity and showed how the visual components and icons were set up. 

Introduction

I’m Sichen Liu. I’m originally from Chengdu, China. I graduated from the University of Southern California’s Computer Science Games program in winter 2021. I am currently working at Naughty Dog as a Technical Artist, developing shaders, tools, and workflows for the art team.

In my junior year of college, I played one of my most memorable games – Death Stranding. The game has a stunning VFX called the Odradek Terrain Scanner – a device that scans for terrain traversal data. The effect not only looks visually impressive but also serves a gameplay purpose – revealing where the player could fall on slippery surfaces, hide from enemies, or be swept away in deep water.

For my last project at college, I decided to dissect and recreate this effect faithfully in Unity HDRP. The process revealed just how intricate this effect is, and it was a great learning experience. So I’d like to share how I made it using Unity’s custom Volume Component and VFX Graph.

Part I: Terrain Scanner Postprocessing

Let’s start with the most basic and noticeable element – the scan. The scan spans about 120 degrees and presents four visual components:

  • Scan lines - repeated blue lines along the scan, where the furthest line appears white

  • Edge glow - a thick blue gradient at the far edge

  • Darkening - a darkened color growing from the far edge

  • Object silhouette - a silhouette around any object within the arc

These components can all be achieved using Unity’s custom post-processing volume and a custom post-processing shader. Knowing the scanner position and direction, we can construct a UV where U goes from 0 on the forward axis to 1 at the edge of the arc, and V is the distance from the origin. In the GIF, V is repeated from 0 to 1 for visual clarity.
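
As a rough illustration, here is how that scan-space UV might be computed in the custom post-processing shader, assuming the pixel’s world position has already been reconstructed from depth. The property names (_ScannerPosition, _ScannerForward, _ScanRadius, _ScanHalfAngleDeg) are placeholders of mine, not names from the game or the project:

```hlsl
// Assumed shader properties, set from the scanner script every frame.
float3 _ScannerPosition;    // world-space scanner origin
float3 _ScannerForward;     // scan direction, roughly on the ground plane
float  _ScanRadius;         // current radius of the expanding scan
float  _ScanHalfAngleDeg;   // ~60 for the 120-degree arc

// Scan-space UV: U = 0 on the forward axis, 1 at the edge of the arc;
// V = distance from the scanner origin, normalized by the scan radius.
float2 GetScanUV(float3 positionWS)
{
    float3 toPixel = positionWS - _ScannerPosition;
    float2 dir = normalize(toPixel.xz);
    float2 fwd = normalize(_ScannerForward.xz);

    float angleDeg = degrees(acos(clamp(dot(dir, fwd), -1.0, 1.0)));
    float u = angleDeg / _ScanHalfAngleDeg;

    float v = length(toPixel.xz) / _ScanRadius;
    return float2(u, v);
}
```

Anything with U or V above 1 falls outside the arc and is left untouched by the effect.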

From this UV and a few other parameters, we can extract the masks for the first three components: scan lines, edge glow, and darkening.
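
The exact curves are a matter of taste; the sketch below shows one way the three masks could fall out of that UV, again with made-up parameter names:

```hlsl
float _LineCount;     // number of repeated scan lines
float _GlowWidth;     // width of the edge glow, in normalized distance
float _DarkenWidth;   // width of the darkened band behind the edge

// Returns (scan lines, edge glow, darkening) masks for one pixel.
float3 GetScanMasks(float2 scanUV)
{
    // Only pixels inside the arc and inside the current radius contribute.
    float inArc = step(scanUV.x, 1.0) * step(scanUV.y, 1.0);

    // Scan lines: thin repeated rings along the distance axis.
    float scanLines = smoothstep(0.9, 1.0, frac(scanUV.y * _LineCount)) * inArc;

    // Edge glow: a thick gradient hugging the far edge (V close to 1).
    float edgeGlow = smoothstep(1.0 - _GlowWidth, 1.0, scanUV.y) * inArc;

    // Darkening: a wider band growing from the far edge.
    float darkening = smoothstep(1.0 - _DarkenWidth, 1.0, scanUV.y) * inArc;

    return float3(scanLines, edgeGlow, darkening);
}
```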

For the last component, the silhouette, we primarily use the depth texture. We can derive the silhouette by running a depth-based Sobel filter, which essentially calculates the difference between a given pixel’s depth and those of its neighboring pixels. The Sobel filter is very commonly used to create silhouettes. Here’s a very good article about it if you want to learn more: Sobel Outline with Unity Post-Processing · Vertex Fragment.
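
Below is a minimal depth-based Sobel sketch, not the project’s exact shader; the _DepthTex texture and _OutlineThreshold parameter are assumed names, and the depth is assumed to already be linearized:

```hlsl
Texture2D    _DepthTex;          // camera depth, assumed already linearized
SamplerState sampler_DepthTex;
float        _OutlineThreshold;  // assumed edge-detection threshold

float SampleLinearDepth(float2 uv)
{
    return _DepthTex.SampleLevel(sampler_DepthTex, uv, 0).r;
}

// Returns 1 where depth changes sharply (an object silhouette), 0 elsewhere.
float SobelSilhouette(float2 uv, float2 texelSize)
{
    float d00 = SampleLinearDepth(uv + texelSize * float2(-1, -1));
    float d10 = SampleLinearDepth(uv + texelSize * float2( 0, -1));
    float d20 = SampleLinearDepth(uv + texelSize * float2( 1, -1));
    float d01 = SampleLinearDepth(uv + texelSize * float2(-1,  0));
    float d21 = SampleLinearDepth(uv + texelSize * float2( 1,  0));
    float d02 = SampleLinearDepth(uv + texelSize * float2(-1,  1));
    float d12 = SampleLinearDepth(uv + texelSize * float2( 0,  1));
    float d22 = SampleLinearDepth(uv + texelSize * float2( 1,  1));

    // Standard 3x3 Sobel kernels applied to depth.
    float gx = (d20 + 2.0 * d21 + d22) - (d00 + 2.0 * d01 + d02);
    float gy = (d02 + 2.0 * d12 + d22) - (d00 + 2.0 * d10 + d20);

    return step(_OutlineThreshold, sqrt(gx * gx + gy * gy));
}
```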

Compositing all the masks together, we get the final result for the scan effect.

Part II: Icons

The icons are the more challenging part of the effect. Let’s first look at the various icons needed. According to the in-game Tips menu, the Odradek Terrain Scanner is described as follows:

Summarizing the description in more technical terms, each icon has two properties: graphic and color.

The graphic represents what type of object is underneath the icon – terrain, water, or foliage – revealing where the player could trip, drown, or hide. The color represents how dangerous the underlying surface is: how deep the water is, or how slippery the ground is. Special icons have constant, pre-assigned colors.

The in-game screenshots reveal a few more specifics. Some bumpy rocks are intentionally marked slippery, so they get the distinct “Dangerous/Treacherous Terrain” icon. The foliage icons are elevated above the tall grass. In addition, the red dangerous icons have a larger kill radius than the rest.

The on-screen information from the player camera is not enough to gather all this data. Therefore, every time the scan kicks off, an orthographic camera placed above the scan area captures a few textures, including world-space normals, opaque z-depth, object IDs, and water z-depth. These textures are then sent to a particle system that actually spawns the icons. Unity’s older Shuriken particle system did not have an easy interface for working with external data, but the newer VFX Graph makes it easy to feed arbitrary parameters and textures to GPU particles. So as soon as the data is captured, it is immediately fed into the VFX Graph.

I wrote three simple shaders to capture the necessary data. The first shader outputs to RT_NormalDepth – world-space normal to RGB, and z-depth to A. The second one simply outputs z-depth as RGBA to the RT_WaterDepth texture. The third outputs each mesh instance’s ID attribute, which I manually tag on each mesh renderer beforehand, to RT_ID.
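
For reference, the fragment outputs of those three passes might look roughly like this; the Varyings fields and _ObjectID property are my own naming, and I’m assuming float render-texture formats so the normals and depth don’t need extra encoding:

```hlsl
// Assumed interpolants coming out of a standard vertex stage.
struct Varyings
{
    float4 positionCS  : SV_POSITION;
    float3 normalWS    : TEXCOORD0;   // world-space normal
    float  linearDepth : TEXCOORD1;   // z-depth, normalized to the scan camera's range
};

float _ObjectID;   // tagged per renderer, e.g. via a MaterialPropertyBlock

// Pass 1 -> RT_NormalDepth: world-space normal in RGB, z-depth in A.
float4 FragNormalDepth(Varyings i) : SV_Target
{
    return float4(normalize(i.normalWS), i.linearDepth);
}

// Pass 2 -> RT_WaterDepth: water-surface z-depth broadcast to all channels.
float4 FragWaterDepth(Varyings i) : SV_Target
{
    return float4(i.linearDepth, i.linearDepth, i.linearDepth, i.linearDepth);
}

// Pass 3 -> RT_ID: per-renderer ID, encoded as id / 256 so the VFX Graph
// can recover the integer by multiplying the sample back by 256.
float4 FragObjectID(Varyings i) : SV_Target
{
    float id = _ObjectID / 256.0;
    return float4(id, id, id, id);
}
```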

In addition to the three shaders, I wrote a component called “Replacement Pass”, which is attached to the overhead scan camera. This component renders the camera view with a different shader – a technique commonly known as “replacement shaders”. I can easily customize which shader and shader pass to render, and in what order. I can also specify the render queue and culling masks to render objects selectively.

Immediately after the replacement passes are complete, the icons are spawned on a grid. Each icon is assigned a planar-projected UV that matches the scan camera’s frustum. This UV is then used to sample the aforementioned render textures – RT_ID, RT_NormalDepth, and RT_WaterDepth.
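
In HLSL terms, the VFX Graph operators that build this planar UV boil down to something like the following; _ScanCamCenterXZ and _ScanCamExtentsXZ are assumed exposed properties describing the orthographic scan camera’s footprint:

```hlsl
float2 _ScanCamCenterXZ;    // world-space XZ center of the overhead scan camera
float2 _ScanCamExtentsXZ;   // half-size of the orthographic view on each axis

// Projects an icon's world position into the 0..1 UV space of the capture textures.
float2 IconToScanUV(float3 iconPositionWS)
{
    float2 offset = iconPositionWS.xz - _ScanCamCenterXZ;
    return offset / (2.0 * _ScanCamExtentsXZ) + 0.5;
}
```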

The single-channel RT_ID texture identifies the type of surface underneath each icon. The ID is stored as a float value in an 8-bit channel, which means we get up to 256 unique IDs. In the VFX Graph, we can sample the texture, multiply the sampled value by 256, and round it to retrieve the original integer ID. In my project, IDs are defined as follows:

0 = Default, 1 = Slippery surface, 2 = Water, 3 = Foliage.
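
As a sketch, the decode step in the graph is equivalent to this, assuming RT_ID was written as id / 256 as in the capture shader above:

```hlsl
Texture2D    RT_ID;
SamplerState sampler_RT_ID;

// Recovers the integer surface ID for one icon from its scan UV.
int DecodeSurfaceID(float2 scanUV)
{
    float idSample = RT_ID.SampleLevel(sampler_RT_ID, scanUV, 0).r;
    return (int)round(idSample * 256.0);   // 0 = Default, 1 = Slippery, 2 = Water, 3 = Foliage
}
```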

RT_NormalDepth and RT_WaterDepth determine the steepness of the underlying surface. Again, RT_NormalDepth is an RGBA texture where RGB is the world-space normal and A is the z-depth, while RT_WaterDepth is a single-channel texture where R is the water z-depth. By taking the dot product of the world-space normal and the up vector, we can measure how upward-pointing the surface is – the less it points up, the more slippery and dangerous it is. Since my RT_NormalDepth texture only contains opaque-surface normals, as specified in the Replacement Pass, I also make sure the normal is flattened wherever RT_WaterDepth indicates water, so underwater slippery surfaces are ignored.
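
A rough HLSL equivalent of that per-icon test is shown below; I’m assuming the water pass clears RT_WaterDepth to 0 where there is no water, which is my convention rather than something stated above:

```hlsl
Texture2D    RT_NormalDepth;
Texture2D    RT_WaterDepth;
SamplerState sampler_linear_clamp;

// 0 = flat and safe, 1 = vertical and dangerous.
float GetSlipperiness(float2 scanUV)
{
    float3 normalWS = RT_NormalDepth.SampleLevel(sampler_linear_clamp, scanUV, 0).rgb;
    float  water    = RT_WaterDepth.SampleLevel(sampler_linear_clamp, scanUV, 0).r;

    // Flatten the normal wherever there is water, so underwater slopes are ignored.
    normalWS = (water > 0.0) ? float3(0, 1, 0) : normalWS;

    // Dot with the up vector: 1 when the surface points straight up.
    float upness = saturate(dot(normalize(normalWS), float3(0, 1, 0)));
    return 1.0 - upness;
}
```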

Similarly, the depth of the water can be obtained by subtracting RT_WaterDepth from the z-depth component (A) of RT_NormalDepth. This depth can then be converted to meters, so we can flag a certain water depth as the “very deep”, dangerous water level.
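
The corresponding depth math, again as a hedged sketch using the textures declared above and an assumed _ScanCamDepthRange property for converting the normalized depths back to meters:

```hlsl
float _ScanCamDepthRange;   // far - near distance of the overhead scan camera, in meters

// Water depth in meters under one icon: opaque ground depth minus water-surface depth.
float GetWaterDepthMeters(float2 scanUV)
{
    float groundDepth = RT_NormalDepth.SampleLevel(sampler_linear_clamp, scanUV, 0).a;
    float waterDepth  = RT_WaterDepth.SampleLevel(sampler_linear_clamp, scanUV, 0).r;
    return max(groundDepth - waterDepth, 0.0) * _ScanCamDepthRange;
}
```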

Knowing the type of surface underneath and how dangerous it is, we can now assign the correct graphic and color. The graphics are all contained in a flipbook texture, where the UV is laid out based on the danger level and surface type index of the underlying surface. Likewise, there is a similar flipbook texture for the colors. The graphic and color textures are intentionally separated so that colors can be tweaked via HSV adjustment before outputting the final particle. If the textures were combined into a single RGBA texture, the colors would be baked in, making it more difficult to fine-tune the final colors.
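
One way to lay out such a flipbook lookup, purely as an illustration (the real atlas layout and cell counts are my guesses):

```hlsl
float _DangerLevels;    // number of columns (danger levels) in the flipbook
float _SurfaceTypes;    // number of rows (surface types) in the flipbook

// Maps a per-quad 0..1 UV into the flipbook cell for this icon.
float2 GetIconFlipbookUV(int surfaceID, int dangerLevel, float2 cellUV)
{
    float2 gridSize   = float2(_DangerLevels, _SurfaceTypes);
    float2 cellOrigin = float2(dangerLevel, surfaceID) / gridSize;
    return cellOrigin + cellUV / gridSize;
}
```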

Finally, any icons outside of the radius and arc angle are killed. Note that the dangerous icons have a larger kill radius than the rest.
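
The kill logic amounts to a radius-and-angle check per particle, roughly like this (reusing the scanner properties sketched in Part I; _KillRadius and _DangerKillRadius are assumed names):

```hlsl
float _KillRadius;          // kill radius for ordinary icons
float _DangerKillRadius;    // larger kill radius for the red, dangerous icons

// True when an icon should be culled from the effect.
bool ShouldKillIcon(float3 iconPositionWS, bool isDangerous)
{
    float3 toIcon   = iconPositionWS - _ScannerPosition;
    float  dist     = length(toIcon.xz);
    float2 dir      = normalize(toIcon.xz);
    float2 fwd      = normalize(_ScannerForward.xz);
    float  angleDeg = degrees(acos(clamp(dot(dir, fwd), -1.0, 1.0)));

    float killRadius = isDangerous ? _DangerKillRadius : _KillRadius;
    return dist > killRadius || angleDeg > _ScanHalfAngleDeg;
}
```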

Combining the post-processing effect and icon particles, we get the final result.

Once the icons are spawned, animating them is easy. The icons first spawn in sync with the advancing scan. Once that first burst completes, the icons start to pulse from front to back – except for the red icons. After six intervals, including the first burst, the icons fade out.

Conclusion

The Odradek Terrain Scanner proved to be quite complex, involving a fair amount of interdependent data and interesting visual qualities. Every time I thought the project was complete, I was caught off guard by more missing details. The darkened color, while visually appealing, also helps reveal terrain silhouettes during blinding snowstorms. The white line on the front edge, distinct from the blue lines, sells the feeling of the progressing scan. The red icons stand out by consistently flickering and remaining visible, providing clear visual clues for dangerous areas. The list goes on.

Compared to linear media like videos, video games thrive on interactivity. The video game magic comes alive when game components meaningfully interact with each other, such as NPCs engaging with each other, characters leaving trails in the snow, or surfaces changing appearance with the weather. As such, Death Stranding delivers a context-aware, believable terrain scanner that belongs in a world where people’s lifelines rely on packages delivered through treacherous territories.

The entire VFX graph can be quite a handful, and there are lots of details I omitted for brevity. So here is the full screenshot if you want to learn more.

In addition, you can find the sample project on GitHub. Feel free to download it, play around, or use it for your own projects. Knock yourselves out! 

Finally, to all the Porters out there, keep on keeping on!

Sichen Liu, Technical Artist
