Horizon Zero Dawn: Interview With the Team

The people behind the development of Horizon Zero Dawn describe how they coped with the various technical challenges they encountered during production.

There are a lot of game studios out there, but some stand out, and not just because of their technical expertise, dedication, or revolutionary gameplay design. They stand out because of their inner freedom and tremendous courage. Guerrilla Games is one of those truly courageous studios, because Horizon Zero Dawn is an incredibly brave game. It pushes boundaries in terms of graphics, production techniques, design, and even story.

Back when the whole thing started, the idea of robotic dinosaurs roaming a vast and lush open world seemed a bit odd. I remember listening to a talk by Angie Smets, where she discussed the production and all the troubles the team faced, and it felt incredible that the game was actually shipped. There were doubts and fears, but the team overcame all of these problems and turned their open world experiment into a magnificent triumph. Today we present our interview with the people behind this massive title, celebrating its amazing technical and artistic achievements.

Here are the people from Guerrilla who took part in this talk:

Misja Baas – Art Director
Michiel van der Leeuw – Technical Director
Roderick van der Steen – Lead Lighting Artist
Gilbert Sanders – Principal Artist
Arjan Bak – Game Designer

We also extend our sincere thanks to Jan-Bart van Beek for his amazing vision, and to all of the wonderful people at Guerrilla Games.

Dungeons

MISJA BAAS – Art Director: Horizon Zero Dawn actually has two types of ‘dungeons’: there are bunkers, which are manmade, and then there are Cauldrons, which are AI-controlled assembly lines.

We designed the bunkers to provide a strong contrast to the natural world above ground. The bunkers were key in providing story beats and emotional connection to the old world, while the Cauldrons were important for gaining skills and giving the player a view into the utterly alien inner workings of an AI-controlled robot factory. In that sense, the two environments are almost diametrical opposites of each other.

The dungeons came fairly late in the project, so we had to be smart about how we built them. The biggest challenge we faced was the limited set of components we had to build these environments with, since the amount of real estate they covered was tiny compared to the world above ground. That made it tricky to create enough variation and standout moments, which could potentially have led to dull environments.


ENVIRONMENTS 
Gary Buchanan – Principal 
Steven de Vries 
Bo van Oord 
Leon Voorrips

LIGHTING 
Roderick van der Steen – Lead 
Kristal Plain – Bunker Interiors 
Julian Fries – Cinematics

SHADING/TEXTURING 
Maarten van der Gaag – Art Direction / Tech Wizard 
Lucas Bramlage 
Stefan Groenewoud

Images from Misja Baas's ArtStation portfolio.

To mitigate that risk, we covered the bunkers in stalactites and calcite or ice and snow. This let us create all kinds of formations, which help to break things up a bit and keep each space interesting. Our other solution was to add large holographic interfaces throughout the spaces to provide lighting and context. So even when consoles or machines in a room were mostly covered in gunk or ice, there would still be holographic displays floating over them to suggest the function of that room.  

For the Cauldrons, it was basically a matter of "an AI constructed this thing, so go big, go crazy, and treat the environment like a giant sculpture!" We didn't have a human element there to limit us, which was very nice.

Reinventing the pipeline

MICHIEL VAN DER LEEUW – Technical Director: We certainly had to code a lot of new stuff! Of course, we already had a rendering engine that we could use, and we'd already started working on drawing larger distances and vegetation/trees when we were doing Killzone Shadow Fall. However, most systems needed work; scripting, streaming, level editing, and placement of assets all needed attention. For each subsystem, we looked at what was required to improve it, and either killed it and started from scratch or slowly iterated towards a point where it was up to par for an open world.

Preparing Decima Engine for a new open world concept

MICHIEL VAN DER LEEUW: This was one of the areas where we started from scratch. Our old workflow involved a lot of exporting from DCC (Digital Content Creation) packages, reloading the game, and playing up to the point where you left off or flying around to find where you were making your art. It certainly wasn't up to par with the state of the art in the field and didn't map very well to open world development.

We took a lot of inspiration from commercial/public engines, but also found areas where we thought we could improve on them. We designed a new tools framework that integrated all of our workflows, with little to no file management and the ability to play directly in the editor, and we built it. It sounds easier than it was, of course. In reality, some of the tools came late (or not at all), and we made some mistakes, but in the end, we pulled it all together and now we have a very decent toolset which makes us much more efficient than we ever were before.


Here are some images that can help you understand what the environments actually looked like during 2014-2015. The images are taken from the ArtStation portfolio of Jonathan Benainous, who worked on the game during that period.

His responsibilities on the “Art Benchmark” scene shown here were to prototype modular assets and create prefabs built from these building blocks. He also covered the buildings with vegetation and worked on the overall layout of the city. Other artists who worked on this scene include Ben Sprout, Gary Buchanan, Ryan Spinney, Sandra Parling, Amir Abdaoui, Tiffany Vongerichten, Desmond van den Berg, Sander Vereecken, Derk Over, Roderick Van Der Steen, Kim Van Heest and Jan-Bart Van Beek.


Content optimization

MICHIEL VAN DER LEEUW: We have many in-game debugging tools for performance (for both the CPU and GPU side of things) and a lot of our artists and coders are very familiar with performance optimization. Towards the end of the project, we would hold specific meetings for CPU and GPU, look at badly performing parts of the game, and discuss ways to improve performance.

One thing I can recall is that during development we had some issues with mid-distance rock faces (say, at around the 300 m mark). The faces were made out of thousands of individual rocks that looked super-detailed up close. In the far distance, they were reduced to a height map, which is cheap to draw and looks identical from far away. At mid-distance, they were collapsed into groups, but we had many of them on screen at the same time, and they could still be 10-20K triangles each, because they were just merged versions of low-LOD rocks. Many of the triangles were unnecessary, because they penetrated the ground or intersected each other, and they were becoming a real performance bottleneck. In the end, somebody did an experiment where they turned the rock faces into a 3D voxel structure, traced a new low-poly mesh over this voxel structure, and then re-projected the material information of the landscape back onto it. We already had the material information from our radiosity voxel lighting bake. The resulting mesh was cheap to draw, cost little memory, and looked much better.
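Guerrilla's in-house tooling is not public, but the voxelize-and-remesh experiment described above can be sketched with the open-source trimesh library; the asset path, voxel size, and triangle budget below are made up for illustration:

```python
# Sketch of the voxelize-and-remesh trick using the open-source trimesh
# library; asset path, voxel size, and triangle budget are assumptions.
import trimesh

# A merged mid-LOD rock group: thousands of interpenetrating rocks.
rock_group = trimesh.load("rock_group.obj", force="mesh")

# 1. Convert the mesh soup into a solid voxel grid; buried faces and
#    mutually penetrating triangles vanish in this representation.
voxels = rock_group.voxelized(pitch=0.5).fill()

# 2. Trace a new watertight low-poly surface over the voxels and
#    decimate it (needs a decimation backend, e.g. fast_simplification).
proxy = voxels.marching_cubes
proxy = proxy.simplify_quadric_decimation(face_count=2000)

# 3. Re-project surface data: for each new vertex, find the closest
#    point on the source mesh and transfer its material attributes.
closest, distance, triangle_id = trimesh.proximity.closest_point(
    rock_group, proxy.vertices)
```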

There are many more of these stories – performance optimization takes a good few months of the project!

Using algorithms

MICHIEL VAN DER LEEUW: The main areas where we used more and more procedural content generation were the landscape itself, the placement of vegetation, the environment sounds, and the shaders. Nature itself is procedural; it's built out of simple rules applied at a very large scale, so it's very amenable to procedural generation. We also created a system for storing input data for procedural systems, so-called "World Data Maps", so that each shader, placement system, or sound could query the current humidity, temperature, distance to the nearest river, etc., and make decisions based on that. We'd spawn fireflies when the player was close to bushes at night. We'd spawn salmon, swimming against the current at various elevations, when the player was in a river.
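As a toy illustration of the World Data Maps idea, here is a minimal sketch; the map set, resolution, and spawn rule are invented, not Guerrilla's actual data or API:

```python
# Toy "World Data Map" query; map names, resolution, and the firefly
# rule are assumptions made up for illustration.
import numpy as np

WORLD_SIZE = 4096.0  # metres covered by each map (assumed)
rng = np.random.default_rng(0)
world_data = {
    "humidity": rng.random((512, 512), dtype=np.float32),
    "temperature": rng.random((512, 512), dtype=np.float32),
    "river_distance": rng.random((512, 512), dtype=np.float32) * 200.0,
}

def sample(name: str, x: float, z: float) -> float:
    """Nearest-texel lookup of a world data map at a world-space XZ."""
    grid = world_data[name]
    u = min(int(x / WORLD_SIZE * grid.shape[1]), grid.shape[1] - 1)
    v = min(int(z / WORLD_SIZE * grid.shape[0]), grid.shape[0] - 1)
    return float(grid[v, u])

def should_spawn_fireflies(x: float, z: float, hour: float,
                           bush_distance: float) -> bool:
    """Example rule: fireflies near bushes, at night, in humid spots."""
    is_night = hour < 6.0 or hour > 20.0
    return is_night and bush_distance < 5.0 and sample("humidity", x, z) > 0.6
```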

80 Level: If you are interested in learning more about the proceduralism in Horizon Zero Dawn, don't forget to check out this talk by Jaap van Muijden. There's also an official post about GPU-based procedural placement in Horizon Zero Dawn on the company's website.

 

MICHIEL VAN DER LEEUW: The biggest challenge was keeping track of all of the things that needed to tie together. Most teams that make open world games are working on their second or third one or more, but we were quite inexperienced. We knew a lot of things that we needed to do, but we didn't know what we didn't know, and that makes you a little insecure.

Lighting

RODERICK VAN DER STEEN – Lead Lighting Artist: Lighting is crucial to the look and feel and the overall visual experience of any form of art or entertainment. I will start with some history, so people can understand where we came from. The base lighting (direct and indirect) at Guerrilla has historically been about getting the best bang for your buck in terms of performance vs. memory vs. quality – the latter always being the leading factor.

For Killzone Shadow Fall (PS4, 2013) we used per-pixel lighting (spot, omni, directional, area light disk, rectangular, and point/sphere, with an option for textured area lights). We used a custom BRDF for direct lighting, spherical harmonics for dynamic lighting, and directional lightmaps for the indirect lighting (using all the tricks we had learned previously, but this time also occasionally using half floats to get much better range and HDR-like levels for our baked lighting).

For Horizon Zero Dawn (PS4, 2017) we used per-pixel lighting with all the Killzone Shadow Fall tricks, but with GGX as the BRDF of choice to give our materials a wider and more accurate range of material expression. We also added a dynamic skylight to the mix, which lights everything based on the sky color from all angles.
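For reference, the GGX (Trowbridge-Reitz) normal distribution at the heart of that BRDF is a short, well-known formula; here it is as a Python function, using the common remapping of perceptual roughness to α:

```python
import math

def ggx_ndf(n_dot_h: float, roughness: float) -> float:
    """GGX / Trowbridge-Reitz normal distribution function:
    D(h) = a^2 / (pi * ((n.h)^2 * (a^2 - 1) + 1)^2),
    with a = roughness^2 (a common perceptual-roughness remapping)."""
    a = roughness * roughness
    a2 = a * a
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)
```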


Images from the ArtStation profile of Lucas Bolt.

For our indirect lighting solution, we used irradiance volumes. You can think of them as multilayered lightmaps, where each pixel of each layer has a 3D location in space and multiple directions based on probe positions. We used this for both our static and dynamic objects, giving us a fully unified way of doing indirect lighting for all objects. Doing so also finally moved us away from lightmaps, which meant we no longer had to create lightmap UVs (both a time and a memory win). It also allowed us to have a dynamic time of day. We baked the indirect lighting of our sunlight at 4 times of our day/night cycle, to 4 different sets of irradiance volume textures, which we then blended over time to give the illusion of accurate indirect lighting across all times of the day.
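A minimal sketch of that blend, assuming four evenly spaced bake times (the exact times Guerrilla used aren't stated in the interview):

```python
# Blending pre-baked irradiance volumes over the day/night cycle;
# bake times and volume dimensions are assumptions.
import numpy as np

BAKE_HOURS = np.array([0.0, 6.0, 12.0, 18.0])
# One (X, Y, Z, RGB) irradiance volume per bake; random stand-ins here.
bakes = [np.random.rand(32, 8, 32, 3).astype(np.float32) for _ in BAKE_HOURS]

def irradiance_volume(hour: float) -> np.ndarray:
    """Linearly blend the two nearest bakes, wrapping around midnight."""
    hour = hour % 24.0
    i = int(np.searchsorted(BAKE_HOURS, hour, side="right")) - 1
    j = (i + 1) % len(BAKE_HOURS)
    span = (BAKE_HOURS[j] - BAKE_HOURS[i]) % 24.0
    t = ((hour - BAKE_HOURS[i]) % 24.0) / span
    return (1.0 - t) * bakes[i] + t * bakes[j]

# At 20:00 this returns a 2/3 : 1/3 mix of the 18:00 and 00:00 bakes.
evening = irradiance_volume(20.0)
```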

Next to this, we rendered a static indirect pass, which was stored in its own irradiance volume texture. It contained all static non-moving/non-destructible lights in the world. The third thing we baked was the sky visibility term, which is like a volumetric 3D AO map. We used it to mask out the skylight on all static objects, allowing us to have things like slightly darker lighting in forests (even at a large distance). The reason we chose to once again use a prebaked solution instead of a fully real-time solution is simple: the quality is higher, and the memory and runtime performance costs are really low compared to a fully real-time solution. We can also stack multiple irradiance volumes, giving us higher fidelity and resolution.

Another reason for us not to use a fully real-time solution is that we can add a lot of non-runtime lights into the equation in order to really paint with light and create a more balanced, stage-lit experience. These 'bake-only' lights get baked down completely (primary and secondary rays) and provide a richer-looking lighting scheme with a lot of gradients in both color and intensity. This lets us create a slightly more unique version of reality and makes our games stand out a little bit. We can also flag lights as sun bounce lights, which means we render them into the sun bounce passes and thus have them affected by time-of-day scaling and recoloring.

Our indirect-lighting renderer is built into our engine and works by voxelizing the world and baking information like albedo, normals, and translucency into a voxel cache, which is then re-lit for each time of day and for the static indirect bake. This way we get material information and correct color bounce across the game.
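A rough sketch of what re-lighting such a cache could look like, with single-bounce Lambertian shading and invented array shapes standing in for the real voxel data:

```python
# Re-lighting a baked voxel cache for different times of day; shapes,
# sun directions, and the simple Lambert model are assumptions.
import numpy as np

# Baked voxel cache: albedo and world-space normals, shape (X, Y, Z, 3).
albedo = np.random.rand(16, 16, 16, 3).astype(np.float32)
normals = np.random.randn(16, 16, 16, 3).astype(np.float32)
normals /= np.linalg.norm(normals, axis=-1, keepdims=True)

def relight(sun_dir, sun_color):
    """Single-bounce Lambert term per voxel for one time of day."""
    sun_dir = np.asarray(sun_dir, dtype=np.float32)
    sun_dir /= np.linalg.norm(sun_dir)
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)[..., None]
    return albedo * n_dot_l * np.asarray(sun_color, dtype=np.float32)

# e.g. two of the bake-time sun setups:
morning = relight([0.7, 0.3, 0.0], [1.0, 0.6, 0.4])
noon = relight([0.0, 1.0, 0.0], [1.0, 0.95, 0.9])
```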

Rendering the indirect lighting can either be done locally or on our render farm, which, like most things at Guerrilla, runs on our own custom distribution system and enables fast turnaround times.

When it comes to real-time lighting and its associated effects, we support various options. On the volumetric end, we can enable volumetric lighting per light (we do not have a placeable volumetric volume) and choose different sampling options per light (so we can adjust performance or visual quality on a per-light basis).

Shadows also work with volumetrics, as do the shaders and projectors. For sunlight, we have a volumetric height fog solution, which can be used to create expressions of weather and mood. It also allows for volumetric god rays, which we can tweak for each environment and time of day. The volumetric fog quality can be tweaked per cascade.

All volumetric rendering goes through the same pipeline and gets rendered to depth, making it easy to integrate particles, forward-rendered assets, and other depth-based effects. It’s a very flexible system, which in turn enables us to use a lot of volumetric effects at very little cost.

When you talk about real-time lights, shadows are very important as well – which is why a big push was made for Horizon Zero Dawn to have visually appealing and stable real-time shadows. Because the sun moves all the time, we experienced a lot of aliasing artifacts. To circumvent these, we temporally stabilized the shadows, resulting in completely stable moving shadows. Work was also done on the filtering side to achieve more pleasing soft-shadow effects and remove any signs of pixelation artifacts.
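The interview doesn't detail the stabilization method, but one standard ingredient of stable sun shadow maps is snapping the shadow camera's window to whole-texel increments so the rasterization pattern doesn't crawl; a generic sketch, not necessarily Guerrilla's exact approach:

```python
# Snap a shadow cascade's centre (in light space) to texel-sized steps,
# a common trick for shimmer-free cascaded shadow maps.
import numpy as np

def snap_shadow_origin(light_space_center: np.ndarray,
                       ortho_width: float,
                       shadowmap_res: int) -> np.ndarray:
    """Quantize the cascade centre to whole-texel increments."""
    texel_size = ortho_width / shadowmap_res   # world units per texel
    return np.floor(light_space_center / texel_size) * texel_size

# e.g. a 50 m cascade rendered into a 2048x2048 map:
center = np.array([123.456, -7.89, 1013.2])
print(snap_shadow_origin(center, ortho_width=50.0, shadowmap_res=2048))
```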

Shadow quality also plays a major part in volumetric rendering, as shadow artifacts get magnified when rendered into a volumetric system. So by improving shadows, the overall visual quality went up as well.

Most of the lights in Horizon Zero Dawn cast shadows, and we had a lot of ways to optimize their performance per light. We lowered resolution over distance and faded them out at a certain distance (or light size in screen space), all while trying to keep the image as rich as we possibly could.

The most important feature we used was the shadow cache, which enabled various schemes for caching a light’s shadowmap: static only, static and dynamic, and so on. This meant we only had to update the parts of the shadowmap that were most important.
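A hedged sketch of such a caching scheme; the renderer and light objects below are hypothetical stand-ins invented for illustration, not Decima's actual API:

```python
# Shadow-cache idea: bake the static casters once per light, then each
# frame start from that cached map and re-render only dynamic casters.
# The Renderer/light API here is hypothetical.
class CachedShadowLight:
    def __init__(self, renderer, light):
        self.renderer = renderer
        self.light = light
        self.static_map = None  # baked once, reused across frames

    def render_shadow_map(self, static_casters, dynamic_casters):
        if self.static_map is None or self.light.moved_since_bake:
            # Expensive path: redraw all static geometry for this light.
            self.static_map = self.renderer.draw_depth(self.light,
                                                       static_casters)
            self.light.moved_since_bake = False
        # Cheap path: copy the cached depth, add only dynamic casters.
        shadow_map = self.renderer.copy_depth(self.static_map)
        self.renderer.draw_depth_into(shadow_map, self.light,
                                      dynamic_casters)
        return shadow_map
```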

Another important part is reflections, and although we have had real-time raytraced reflections since Killzone Shadow Fall, cubemaps remain a very important part of the pipeline. They come in two flavors: local and global. Local cubemaps can be perspective-corrected, while global ones are projected at an infinite distance. For outdoor environments, we renormalize the cubemaps to fit any time of day and weather, while indoors we usually use them as-is, due to the static nature of indoor scenes.
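One plausible reading of that renormalization, sketched very simply: divide out the average lighting the cubemap was baked under, then multiply in the current ambient color. This is a simplification, not necessarily Guerrilla's exact math:

```python
# Rescale a baked HDR cubemap to match the current time of day.
import numpy as np

def renormalize_cubemap(baked_faces: np.ndarray,
                        current_ambient: np.ndarray) -> np.ndarray:
    """baked_faces: (6, H, W, 3) HDR cubemap; current_ambient: (3,) RGB."""
    bake_ambient = baked_faces.mean(axis=(0, 1, 2))  # avg bake lighting
    scale = current_ambient / np.maximum(bake_ambient, 1e-6)
    return baked_faces * scale
```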

To enable certain special effects we have the ability to create custom light shaders, which are then projected by our lights. Because they use the same feature set as all other shaders in the game, we are able to create very complex light shaders and read certain buffers to change the way a projection is done or deform the projection based on UVs.

We can also leverage the power of animated textures/flipbooks/volumetric textures in order to create effects like caustics, fire, or cloud projections. Through light shaders, we can control the projection based on distance, which means we can have a certain effect appear 80 meters away without interfering with the light close by. The possibilities are endless.

One of the benefits of working in a less destructive pipeline (irradiance volumes versus lightmaps and separate probes) is that we spend less time fixing content (broken or out-of-date lightmap UVs, anyone?) and more time lighting. That is not to say things always work perfectly. Our system deals very well with most situations, but in challenging spots it needs help placing the probes in the right positions. We have hint planes and exclusion zones to help us focus the probes where we need them to be.

Balancing all lighting and post effects is achieved through our Ambience Manager, which is arguably the most powerful visual tool we have in our engine when it comes to post effects, color correction, and balancing all lighting and effects over time. The Ambience Manager controls, amongst other things, sky rendering, atmospherics, fog, the sun/moon and their trajectories, direct and indirect lighting, cloud rendering, color correction (both 2D and 3D), exposure, and more exotic things like auroras (used in the Horizon Zero Dawn: The Frozen Wilds expansion).

The system enables us to change these settings based on time of day and blend between different settings automatically. Ambience cycles are then hooked up to climates, which we use throughout the game to give variation to each ecotope.

80 Level: There's also a good talk by Michał Drobot about the lighting in Killzone Shadow Fall, which you can find here.

Vegetation

GILBERT SANDERS – Principal Artist: When creating vegetation for Horizon Zero Dawn, we start out with a speed model, which tells us whether we are making the asset at the correct scale and density. This speed model will most likely end up as the lowest geometry LOD in-game. As soon as we have this model in place and are happy with how it looks, we start detailing it. We build high-resolution meshes that we can bake into the UV space we already tested with our speed model. For this, we go back and forth between Maya, Photoshop, and SpeedTree.

During the development of Horizon, Team Green was responsible for the creation of all vegetation assets. Another part of our time went into creating and maintaining the rule-sets used for populating the world with vegetation. Team Green consists of Nicholas Watkins, Mas Hein, and Gilbert Sanders. Images taken from the ArtStation portfolio of Nicholas Watkins.

The placement system came online really early; we already had it up and running in Killzone Shadow Fall! We knew we needed a system like this because some quick napkin math showed that the placement data needed to bring the lush world of Horizon Zero Dawn to life would be more than we could fit on a disk.

So the placement system that we have right now only has to deal with a couple of texture lookups as data inputs, which we can easily store. Authoring the placement system is like building a shader: you read the textures, do a bit of math on top of that, and you have your placement logic.
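As a toy version of that "texture lookups plus a bit of math" recipe, here is a sketch; the data maps, species rule, and tile size are invented for illustration:

```python
# Toy placement logic: sample data maps, combine them into a density,
# keep grid points where density beats a random threshold.
import numpy as np

rng = np.random.default_rng(7)
humidity = rng.random((256, 256), dtype=np.float32)  # stand-in data maps
slope = rng.random((256, 256), dtype=np.float32)

# Candidate grid: one potential plant every 2 m across a 512 m tile.
ys, xs = np.mgrid[0:256, 0:256]

# "A bit of math on top": this fern species likes humid, flat ground.
density = np.clip(humidity * (1.0 - slope) * 1.5, 0.0, 1.0)

# Threshold against per-point random values: the number of survivors is
# proportional to density, and repeatable for a fixed seed.
keep = rng.random(density.shape) < density
plant_positions = np.stack([xs[keep] * 2.0, ys[keep] * 2.0], axis=-1)
print(f"placed {len(plant_positions)} plants")
```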


This is a really over-simplified way of explaining it – the system is more elaborate than that! I did a presentation at GDC back in March called ‘Between Tech and Art: The Vegetation of Horizon Zero Dawn’, covering all things related to the creation and rendering of our vegetation. It should be available from the GDC Vault. Here’s the link to the presentation file itself!

Anyway, for rendering our vegetation we render out all our alpha-tested meshes in two passes.

First as early occluders; after that, we render them normally. In this initial depth-only pass, we do our alpha test. Rendering this pass first gives us all the depth information we need, after which we only have to do a depth comparison when rendering our geometry pass, without an alpha test. This is important because the alpha test is incredibly expensive, but this way it runs in a very cheap depth-only shader. The geometry pass shader is much more expensive, but is now accelerated by a very efficient fixed-function depth test.
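In pseudocode form, the two passes could look like this; the `gpu` module and its calls are a hypothetical wrapper standing in for a real graphics API, not Decima's actual interface:

```python
# Two-pass rendering of alpha-tested vegetation; `gpu` is hypothetical.
import gpu  # hypothetical graphics-API wrapper

def draw_vegetation(meshes):
    # Pass 1: depth-only prepass with the alpha test. The shader is
    # tiny, so paying the expensive alpha test here is cheap overall.
    gpu.set_depth_test(gpu.LESS, write=True)
    gpu.set_color_writes(False)
    gpu.bind_shader("depth_only_alpha_test")  # discards alpha < cutoff
    for mesh in meshes:
        gpu.draw(mesh)

    # Pass 2: full geometry pass, no alpha test. Depth already holds
    # the exact visible surface, so an EQUAL depth test rejects hidden
    # and alpha-discarded fragments via fixed-function hardware.
    gpu.set_depth_test(gpu.EQUAL, write=False)
    gpu.set_color_writes(True)
    gpu.bind_shader("full_material")          # expensive, no discard
    for mesh in meshes:
        gpu.draw(mesh)
```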

Landscape generation

ARJAN BAK – Game Designer: Landscape creation involved various steps and processes. First, we did a rough pass to establish distances and the layout of our intended design. At this stage, we defined our natural landmarks and made sure the paths players could take through the world on their various quests would be interesting and varied. Next, we iterated over this rough pass many times, using a combination of 2D maps we created in our editor and captured satellite data.

After that, we moved on to a first detailing pass. When it came to the scale of the world and its landmark features, we aimed to retain the visual essence of each feature while condensing it, so that travel distances wouldn't become overly long. This way, we could make the world feel rich with activities and side content.

The landscape dressing and detailing was done by a team of artists specialized in the various aspects of our landscape, such as rock formations, rivers, and vegetation. We used a combination of procedural systems and hand dressing for our vegetation and rivers, but our rock formations and mountains were entirely crafted by hand. We did various experiments and prototypes with placement methods and procedural systems in the early stages of development, but we found that the level of control and art direction we required was worth the marginal increase in creation time.

The Team of Guerrilla Games

Interview conducted by Kirill Tokarev
