
Weapon Art: Plasma AKM Production Breakdown

Ilya Vasilenko shared the production details of his gun, Plasma AKM, made for a VR game, discussed his modeling and retopology workflows, and mentioned a few useful tutorials. 



Hello! My name is Ilya Vasilenko. I’m a self-taught 3D Environment and Material artist from Moscow, Russia. 

Before I had any working experience in the game industry, I got into art in 2008 and drew some anime girls, but I realized that 2D wasn't something I really liked doing. I started looking for other disciplines and found game design interesting.

I joined the game development industry in 2010 as a Game Designer and had the opportunity to work on projects such as War Thunder, Royal Quest Online, and a few more. In 2013, I moved to Seoul, South Korea, for 2 years, gained some priceless working experience there, and then came back to Moscow.

I felt that game design wasn't enough for me to reveal my creative potential the way I wanted, so I tried 3D art. In 2012, before I left Russia, I started studying 3D as a hobby. It was hard at the beginning, so I dropped it for a while. In Korea, I started spending a couple of hours per week on it and later increased that time.

And in 2015, when I came back to Russia, I decided to become a professional 3D artist. 

I worked on some AA and AAA projects like World of Tanks, Apex, and others as a freelance/outsource artist, trying both indie and big studios.

But my all-time favorite project is Mutant Year Zero from the Swedish studio The Bearded Ladies.

My last job was Lead Environment Artist at ANVIO VR studio in Moscow.

As an Environment and Material Artist, I'm not a big fan of making guns, but I've made some at work. No choice.

About the Plasma AKM

As I mentioned, the Plasma AKM was a work task for a VR game, and my responsibility was to come up with a new pipeline for weapon creation that we also planned to use for environments later on.

Our goal was to achieve high texel density in the engine, save memory budget, and make the shader flexible but not complicated for other artists (I will explain this later).

We had a team working on the project, and it all started with concept art. The game's story takes place in the near future, so this design, a combination of "plasma" and the classic AKM, worked pretty well for us.

My salvation was that our freelance concept artist used CAD for his concepts, and once we were happy with the gun design, I got a CAD model.

But here the battle starts. Cleaning is the boring part for me. It's good to have a base from CAD, but you need to do a lot of work to prepare it for the next steps:

First, I imported the CAD model into MoI3D and exported a triangulated model from it

(thanks to this tutorial; I was new to CAD and MoI and had no idea how to export models from CAD).

Models from CAD are a real PAIN. I spent tons of hours cleaning the model, retopologizing some parts, and fixing the design and shapes because some elements didn't work for our gameplay. In the end, I had a mid-poly version of the gun and was ready to start on the high poly.


I think the hard-surface high-poly workflow in ZBrush is well known nowadays. I first saw this pipeline when Ubisoft explained how they work with weapons in The Division.

And now there are a lot of tutorials and new features for this kind of workflow.

I broke the model into a few parts (front, middle, and back) because a full gun is too heavy for ZBrush (more than 20 million points in the final version).

I prepared the model by separating meshes for easy SubTool conversion and making UVs based on the hard edges where I wanted to break polygroups and polish the edges.

You can check out a similar workflow here.

I created SubTools for each element in ZBrush. Then I DynaMeshed all the elements (I highly recommend the DynaMesh Master plugin for ZBrush), softening edges and achieving the shapes I needed. I decided to make the edges softer and wider, which made the gun look bulkier, and that is exactly what I was aiming for.

It's common to re-import a lot of parts during this process, remodel things, etc.

We wanted to make it look damaged in high poly, and I did about 3-4 iterations until we were happy with it.

I kept the damage mostly soft with a few accents so the model wouldn't be overloaded with details.

The tape was the most complicated part because a tight wrap didn't work for us: it left holes between the rifle butt and the tape. So we made a visual compromise and placed the tape as close to the butt as possible.

I've heard a lot of comments about the tape being weird and looking stupid.

For the tape, I used the same technique you can find in this video, but I did it in MODO, which was my main modeling software at the time. It was pretty easy: I made some meshes in MODO, imported them into ZBrush, added thickness, sculpted some details, and tuned positions here and there to achieve the shapes I needed.

Later, I used Decimation Master to reduce the polycount of my high poly and exported it to MODO to start the low-poly retopology phase.



At first, I planned a low poly of about 100k triangles but ended up with about 40k.

Retopology is the most straightforward part. Some parts of the model were retopologized, and some were just cleaned up from the mid-poly version.

Actually, I can't say a lot about it: use more polygons for curved shapes and fewer for flat surfaces, where details can be baked in.

I think the main rule is to put more detail where the player's camera can see it; the farther a part is from the camera, the less detail it needs. But we were making a VR game, where the player can see more detail compared to PC/console FPS games, so I tried to keep as much detail as I could.

UVs were the easiest part for me. I unwrapped in MODO and packed everything in RizomUV. Visible parts got more UV space and parts underneath got less, so the most important areas of the model got higher texel density.

Of course, I overlapped the UVs of parts that aren't visible in the player's camera during gameplay.

After retopology and UVs, I prepared the model for baking, set up correct naming for all parts of the high- and low-poly meshes, and exported everything to Marmoset.


I used a Marmoset and Designer combo for baking. I've used this workflow for years and am pretty happy with it. I made a tutorial on my baking workflow, which you can find here.

The main advantage for me is that I don't even need to worry about clean normals or anything else, because an object-space normal map can handle it all after conversion to tangent space (if you set everything up correctly).

You can see how much cleaner the result is on the first iteration. After setting everything up, including the cage range, I baked the normal map in one click. Done.
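The object-space-to-tangent-space conversion mentioned above can be sketched per pixel. This is a toy illustration, not the Designer node itself: the function name and the hardcoded TBN basis are mine, and a real bake does this for every texel using the low poly's interpolated tangent frame.

```python
import numpy as np

def object_to_tangent(n_os, tangent, bitangent, normal):
    """Transform an object-space normal into the tangent space defined by
    the surface's TBN basis, then remap from [-1, 1] to [0, 1] for storage."""
    tbn = np.array([tangent, bitangent, normal], dtype=float)  # rows: T, B, N
    n_ts = tbn @ np.asarray(n_os, dtype=float)  # dot with each basis vector
    n_ts /= np.linalg.norm(n_ts)                # renormalize after transform
    return n_ts * 0.5 + 0.5                     # encode for an 8-bit texture

# A point whose TBN equals the world axes: the object-space normal
# (0, 0, 1) encodes to the "flat" tangent-space color (0.5, 0.5, 1.0).
flat = object_to_tangent([0, 0, 1], [1, 0, 0], [0, 1, 0], [0, 0, 1])
```

Because the tangent-space result is derived from the object-space bake, any skew or smoothing issue in the low poly's normals is absorbed by the basis change, which is why the bake needs so little manual cleanup.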

I also baked a few additional maps, such as Ambient Occlusion and Curvature.

Now let's talk about the most interesting part of creating a model: texturing.

Together with technical artist Ilya Kuzmichev, we planned our pipeline and how to achieve the results we wanted.

First, we broke the model into a few material IDs. A few additional draw calls were not as big a problem as texture space, so instead of using two 4K texture sets, we decided to use some semi-procedural texturing in the engine.

In the end, we settled on the following texture set:

1024px base color map:

  • RGB - albedo
  • A - global roughness

4096px normal map:

  • RG - normal map (XY)
  • B - ambient occlusion
  • A - edge/dirt mask

512px grunge map for mask details

32px or 64px normal detail maps

This saved us about 50-60% of memory with the same effective texel density.
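Since the normal map only stores X and Y, the shader has to rebuild the Z component when it reads the packed texture. Here is a minimal sketch of that unpacking for one texel; the function name is mine, and reconstructing Z this way assumes the normal always points into the +Z hemisphere, which holds for a tangent-space map.

```python
import math

def unpack_normal_texel(r, g, b, a):
    """Unpack one texel of the packed 4096px map:
    RG = normal XY, B = ambient occlusion, A = edge/dirt mask.
    Z is reconstructed from X and Y (unit-length, +Z hemisphere)."""
    nx = r * 2.0 - 1.0
    ny = g * 2.0 - 1.0
    nz = math.sqrt(max(0.0, 1.0 - nx * nx - ny * ny))
    return (nx, ny, nz), b, a

# A "flat" texel: RG at mid-grey decodes to the up-facing normal (0, 0, 1),
# while B and A pass through as AO and the wear mask.
normal, ao, wear = unpack_normal_texel(0.5, 0.5, 0.8, 0.25)
```

Dropping the B and A channels of the normal and reusing them for AO and the mask is what makes the memory savings possible: two extra grayscale maps disappear at the cost of one square root per pixel.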

Some may say that packing data into normal map channels isn't rational quality-wise, but for us it was a justifiable decision, and we are pretty happy with the result.

Our main goal was to create a simple and understandable workflow for other artists.

The base color map was just the albedo after texturing in Substance Painter, without edges, dirt, or any wear details. Later, in the engine, we scaled the albedo resolution down to 1024px because there was no important visible difference.

Using a classic albedo is a good way to add some love to textures, and it's easy to work with.

And since we wanted a lot of color sliders in the shader, we decided to export just the albedo itself, without any effects, and add those in the engine later.

The edge and dirt masks are a bit tricky. We combined both in one grayscale texture and extracted the values for each mask in the shader.

Edge and dirt masks combined

As you can see, we used a mid-grey value as the base and added both masks on top: white values for edges, black values for dirt. Then we applied a grunge map to both masks to break up the smooth low-resolution mask edges, which gave us a much better and sharper look.
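Extracting the two masks from one grayscale channel can be sketched like this. The exact shader math isn't shown in the article; a simple version that matches the description treats 0.5 as neutral, maps values above it to edge wear and values below it to dirt, and multiplies each by a grunge sample to break up the soft edges (the function name and the multiply are my assumptions).

```python
def split_edge_dirt(value, grunge):
    """Split the combined mask: mid-grey (0.5) is neutral, values above
    are edge wear, values below are dirt. The grunge sample modulates
    both masks to break up smooth low-resolution edges."""
    edge = max(0.0, (value - 0.5) * 2.0) * grunge
    dirt = max(0.0, (0.5 - value) * 2.0) * grunge
    return edge, dirt

# A fully white texel is pure edge wear; a fully black one is pure dirt;
# mid-grey produces neither.
e, d = split_edge_dirt(1.0, 1.0)
```

Because the neutral value sits in the middle of the range, the two masks can never overlap on the same texel, which keeps the shader logic trivial.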

Roughness works per material from 0 to 1, and we added a global tiling roughness map over the whole model, so we can tune each surface individually while keeping everything consistent with the global roughness.
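How the per-material roughness and the global tiling map are combined isn't specified in the article; one common choice that keeps a mid-grey global sample neutral is an overlay blend, sketched here (both function names are mine for illustration).

```python
def overlay(base, blend):
    """Photoshop-style overlay blend for one channel in [0, 1]:
    blend values above 0.5 brighten, below 0.5 darken, 0.5 is neutral."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

def final_roughness(material_roughness, global_tile_sample):
    """Layer the tiling global roughness detail over a material's base value."""
    return overlay(material_roughness, global_tile_sample)

# A mid-grey global sample leaves the material's roughness unchanged.
r = final_roughness(0.3, 0.5)
```

A neutral-at-mid-grey blend is what lets artists author each material's roughness independently while the shared tiling map only adds surface variation on top.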

Here is a small example from the prototyping phase.

We can scale the grunges and normal details and control the wear amount, color, roughness, normal intensity, etc.

During the prototyping phase, we also thought about some basic customization, like color gradients.


I don't think it matters where you present your work; it can be any real-time engine. I mostly used Marmoset, Unreal Engine, or even Iray in Painter.

In my opinion, lighting is 80% of what makes a shot beautiful. Even the worst asset can look juicy and amazing with good lighting. 

I added two main light sources for the front and back parts of the gun: a warmer, yellowish one at the front and a colder, bluish one at the back.

And then I started adding some light accents. My goal was to hide any of my mistakes and highlight the strong parts.

I added some spotlights to show how the roughness works and made the front part a bit brighter to keep the details readable there.

Then I added some rim lights to give my shapes contrast and make them more readable. I think you need to be gentle here and avoid overloading the asset with too many accents.

And that works pretty much the same for any angle. For me, lighting is a tool like color: you can paint with it, add accents, contrast your strongest parts, and make your model look better and more artistic.

After everything was set up, I just tuned the intensity, radius, and placement of the light sources, and when I was happy with the overall look, I took the shot.

The Main Challenges

I can't say it was a hard task for me because I had done weapons before. The most complicated part was the number of iterations involved in creating the pipeline: which details to remove, which to keep. I faced problems at every step of this workflow, most of them technical. For me, the hardest part is the most boring one, and the easiest is the most creative one.

When you work in a team, you have a lot of limitations, and time is your worst enemy. There were many things I wanted to improve, but I had a deadline.

Anyway, it worked for us. We were happy with the result, and we made the pipeline easy to understand for every artist on the team. I think that is the most important thing in our job.

Ilya Vasilenko, Senior Environment Artist

Interview conducted by Arti Sergeev
