3D Artist Igor Witkowski has shared the workflow behind his recent animation for Pwnisher's Moving Meditations challenge, spoken about setting up the environment and animations, and explained how KIRI Engine was used to create some of the assets.
Introduction
My name is Igor Witkowski, I’m 22 and I have just finished my last year of university in Poland. In my free time, I love to work on 3D projects in Blender, which is why I’m thinking of pursuing another degree in graphic design and 3D modeling.
While I started learning Blender around 4 years ago, I’m still relatively new to this environment. I paused for about 2 years; it took me a while to get the right hardware, and it’s been a steep learning curve. I actually felt stuck for some time and didn’t really know how to make progress. But I started noticing real improvement early this year – I just kept going and some projects started to make me think, "Hey, that’s not bad!"
So I don’t yet have professional experience in 3D, but it’s definitely a passion that I want to turn into a career. Sadly, it’s difficult in my country to find a stable 3D-related job without more relevant experience. I’m going to keep learning and gaining new skills until I achieve my goal.
Joining Pwnisher's Moving Meditations Challenge
I've been participating in Pwnisher's challenges for over a year now. His contribution to CGI-related topics is a huge inspiration for me, and I learn something new during each of these challenges.
This year, I had to take a few months’ break from CGI, and the Moving Meditations challenge was my first 3D project after coming back. It was my third time participating in one of his month-long challenges, but Pwnisher also sends out smaller, weekly challenges which I try to join whenever I can. It’s a great opportunity to learn new tips and tricks, and always a fun experience!
Choosing the Idea
The main idea was to portray a robot begging for oil and doing some street performance, and this idea actually came to me pretty fast. At the time, Stray (the video game) was a huge trend; it has a sort of hazy, dimly-lit alley vibe to it. And I had always wanted to create something with a similar sci-fi/cyberpunk environment. I remembered a robot model I had already made for one of the previous challenges, so I tweaked it a bit by adding fingers to its hands and gave it a go.
Producing the Environment
Making the environment was a ton of fun for this challenge. I started by looking at reference images from Stray and modeled a back wall and two pillars in Blender. Adaptive Subdivision in Blender allowed me to create realistic displacement on the pillars while keeping both the viewport and rendering smooth.
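A minimal bpy sketch of the same idea – the object and material names below are placeholders, and on recent Blender versions the displacement setting lives directly on the material rather than its Cycles settings:

```python
import bpy

# Adaptive subdivision requires Cycles with the Experimental feature set
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.feature_set = 'EXPERIMENTAL'
scene.cycles.dicing_rate = 1.0  # screen-space dicing rate in pixels

# Hypothetical pillar object name
pillar = bpy.data.objects["Pillar"]

# Add a Subdivision Surface modifier and mark the object as adaptive,
# so Cycles dices the mesh per pixel at render time instead of keeping
# a heavy, pre-subdivided mesh in memory
pillar.modifiers.new(name="Subdivision", type='SUBSURF')
pillar.cycles.use_adaptive_subdivision = True

# The actual height comes from a Displacement node in the pillar's material;
# switch the material from bump-only to true displacement (Blender 2.8x–3.x API)
mat = pillar.active_material
mat.cycles.displacement_method = 'DISPLACEMENT'
```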
The lighting was tricky work, too. At first, I added two spot lights, as it was supposed to be a daylight scene. But it didn't quite fit the style I was going for, so it ended up as a dark alley with orange/pink lanterns and a rainy atmosphere.
The most time-consuming part was filling in the environment with things like cloth simulation, an electricity box, an empty paint can, a trash can, an oil can, graffiti on the wall, and so on. Many of the assets were 3D scanned with the KIRI Engine app or downloaded from the internet. Others, like the air conditioners or neon signs, I made from scratch.
Creating Assets With KIRI Engine
A little under a year ago, I was looking for an alternative to Polycam; it was iOS-exclusive at the time, and I’m on Android. I heard about an app in beta testing called KIRI Engine and it was said to work on Android devices.
So I tried it out and the results were more than satisfying. Their support team was also very friendly and helpful with any bugs/tips, so I stayed. I like that the app is constantly being updated and that you can chat with the developers or share new projects with the rest of the app’s users on Discord. The community has been great! There are always new functions being added and I can’t wait to see what else they are planning.
While walking through the city or wandering in the forest, I often stop to scan anything I find interesting. So over time, I’ve accumulated a lot of different assets from KIRI scans, and I don’t recall scanning anything specifically for the challenge.
The scanning process is pretty easy and doesn’t take much time. Not every scan will be perfect at the beginning, but with a little practice, the results can be amazing. You just need to know the basics of how the tech works to get good results. You walk around an object while taking a bunch of photos from different angles and heights. The app automatically sets focus and other parameters, so you only need to concentrate on keeping a stable hand to take sharp photos (blurry photos will highly reduce your scan quality).
You have to take at least 20 pictures to get a model – the more, the better – and pay attention to lighting. If your light source is only coming in from one side of the object, then there will be shadows baked into the model’s textures. More importantly, you really need to know which surfaces work and which don’t. Anything reflective, like a window, a car chassis, a mirror, glossy finishes, and so on, will just mess with the algorithm. And last but not least, the better your camera is, the better your results will be.
Once you’re done taking the pictures and your model has been processed on KIRI’s servers, you just export the model by generating a download link. You can start and process as many projects as you like, but you can only download a few of them per week if you’re on the free version. KIRI Engine automatically recharges your account with three free export coupons each week. Once you’ve used a coupon on a given model, you can download the same model as many times as you want.
If a scanned model requires cleaning, I generally do it with Blender’s editing tools. I don’t really have a standard/universal process for this part, since all models are different. Sometimes I have to remove some faces, sometimes I have to smooth it out with a sculpting tool, and sometimes I just leave it as it is – it all depends on the model and what I need to use it for.
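There is no single recipe here, but as a rough illustration, a quick non-destructive cleanup pass on a dense photogrammetry scan could look like this in bpy (the object name and values are placeholders, not a fixed workflow):

```python
import bpy

# Hypothetical name for an imported KIRI scan
scan = bpy.data.objects["Scan"]

# Reduce the raw polycount first; photogrammetry meshes are usually very dense
decimate = scan.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 0.25  # keep roughly a quarter of the original faces

# A gentle smoothing pass to soften scanning noise without losing the overall shape
smooth = scan.modifiers.new(name="Smooth", type='SMOOTH')
smooth.factor = 0.5
smooth.iterations = 5
```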
The Animations
Unless it’s for something simple like adding noise to moving grass or making a door open, I’m not very skilled at 3D animation yet, especially for a realistic creature or human. For that, I generally like to use the free Mixamo library by Adobe. It’s a virtual library of pre-made animations, ranging from sitting to running, jumping, kicking, zombie-walking, and even dancing. They also have a bunch of variations like “angry” or “sad” walking, “joyful” jumping, and so on.
You just need to upload your 3D model and then indicate where certain body parts are (forearms, neck, etc.) with pins so the algorithm can automatically rig the model. It adds a sort of skeleton to the character, to which you can apply any of the animations from the library, and then download the final model. It’s an amazing tool and completely free to use, you just need to log in with your Adobe account or create one.
For the Moving Meditations challenge, however, the main robot animation was already provided by Pwnisher in collaboration with Tai-Chi master Lu Junchang, known for his stunt work on Shang-Chi. He prepared 27 animation files, all with the same start and end pose, but with different moves in between. I chose one, imported it into the provided Blender template file, and retargeted the animation to my robot model.
Many participants encountered issues with that last part because of faulty weights; this is when, for example, you try to move your character’s foot and it somehow also moves the arm at the same time. Fortunately, my main character was made from a lot of individual parts, so I just manually parented each limb and other segments of the body to the animation skeleton.
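A minimal bpy sketch of parenting a mesh segment to a single bone – the armature, object, and bone names here are made up for the example, and in the viewport the same thing is done with Ctrl+P > Bone, which also handles the transform offset:

```python
import bpy

# Hypothetical names; the actual rig came from the challenge template
armature = bpy.data.objects["RobotRig"]
forearm = bpy.data.objects["Robot_Forearm_L"]

# Parent the mesh segment directly to one bone so it follows that bone
# rigidly, with no weight painting involved
forearm.parent = armature
forearm.parent_type = 'BONE'
forearm.parent_bone = "forearm.L"
```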
Unfortunately, both the provided animations and the one I made with Mixamo weren’t always clean, and there was some clipping in a few of the frames. For example, in my video, the passerby character seems like he’s holding a phone with both hands when one hand is actually just in the air and barely makes contact with the phone. Thank goodness for the camera perspective, which was the perfect workaround for this issue.
The worst part of the fixing I had to do was with the robot. The animation I chose from Lu’s files had major clipping issues on the arms and thighs. In this kind of contest, you’re allowed to change the animation a bit as long as it still has the same flow as before. But I didn’t want to accidentally ruin everything, so my safest option was to have a car drive by at the right moment. It was a nice, natural-looking touch to the scene. It worked as a sort of distraction and allowed me to hide the robot for just a few frames where there were clipping issues.
This impromptu move ended up as one of my favorite additions to the video, right behind my friend’s character and the little cockroach crawling out of the garbage.
Lighting and Rendering
After spending quite a bit of time on different kinds of lighting, I went for a dark alley mood. The first image that pops into my mind when I hear "cyberpunk" is a modern city with a lot of neon signs and lights. Since I wanted my scene to have more of a working-class district vibe, the only "cyberpunk-ish" elements I added are these lights, neon signs, and of course orange/pink lanterns.
I also added a cube with a foggy volume texture to set a misty, gloomy atmosphere. With an atmosphere like that, I just felt obliged to add rain using Blender’s particle system. To make it rain, I created a droplet-shaped object and set it as the object instanced by the particle system. I then set the number of particles (droplets) I wanted, being careful not to go too high. My PC only has 6GB of VRAM, and adding rain can consume a lot of VRAM depending on how dense you make it. I think I ended up choosing 80,000 particles, and then came the problem of keeping the viewport usable.
The particles are simulated by Blender and can then be played back in real time or baked. A baked simulation plays back more smoothly because the results are stored in a cache, and this is what I chose to do. Whichever option you choose, every single particle has to be generated, and each instance carries the face count of the instanced object. In my case, each droplet had around 80 faces. Had I forgotten to decimate (which I often do), that would have meant 6.4M faces to generate on top of the existing scene and characters!
What’s fun, though, is that thanks to motion blur and the rain’s movement, you can’t really tell whether the instanced particles (here, the droplets) are low or high poly. Since a poly count that high was simply out of reach for my laptop, I decimated the original droplet to just 12 faces. It looked quite bad as a still object, but you can’t see that once it’s all in motion. My rain therefore only added up to 960k faces instead of 6.4M, and still looked great.
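As a rough bpy sketch of that setup – the object names, lifetime, and other values below are assumptions, not the exact settings from the project:

```python
import bpy

# Hypothetical names for the droplet mesh and the emitter plane above the scene
droplet = bpy.data.objects["Droplet"]
emitter = bpy.data.objects["RainEmitter"]

# Decimate the droplet first: ~12 faces is plenty once motion blur kicks in
decimate = droplet.modifiers.new(name="Decimate", type='DECIMATE')
decimate.ratio = 12 / 80  # from ~80 faces down to ~12

# Add a particle system to the emitter and instance the droplet on it
mod = emitter.modifiers.new(name="Rain", type='PARTICLE_SYSTEM')
settings = mod.particle_system.settings
settings.count = 80000               # 80,000 droplets ≈ 960k faces at 12 faces each
settings.render_type = 'OBJECT'      # instance a mesh instead of halos
settings.instance_object = droplet
settings.lifetime = 50               # frames each droplet stays alive (assumed value)
settings.normal_factor = 0.0         # don't shoot particles along the emitter normals
settings.effector_weights.gravity = 1.0  # let gravity pull the drops down
```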
After that, I added a wind force field to influence the rain’s falling angle. I also wanted realistic splashes on the ground, so I simulated a fluid splash in a separate Blender file. I made two variations of the splash and set them as the particle objects for the ground. Then, I used the image sequence generated from the rain particle system to tell the ground’s splash particle system where to splash. I set a much lower quantity of splashes than raindrops, so they were just visible enough to be noticed. This is something I’d never done before – I learned it from a great tutorial by CG Geek.
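The wind force field part is simple to sketch in bpy; the strength and tilt below are placeholder values to be tuned by eye:

```python
import bpy

# Add a Wind force field above the scene and angle it so the rain
# falls slightly sideways instead of straight down
bpy.ops.object.effector_add(type='WIND', location=(0.0, 0.0, 5.0))
wind = bpy.context.object
wind.field.strength = 2.0               # assumed strength
wind.rotation_euler = (0.3, 0.0, 0.0)   # tilt the field off the vertical axis
```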
The whole scene was then rendered as an image sequence with the Filmic view transform and sRGB display settings in Blender. I thought the final image was too soft, so I added a sharpening filter inside my video editor to get a crisper look, and added a car splash simulation as a separate video layer, but it’s barely noticeable.
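For reference, the corresponding output settings can be configured in bpy roughly like this – the output path and frame range are placeholders:

```python
import bpy

scene = bpy.context.scene

# Filmic view transform with the standard sRGB display device
scene.display_settings.display_device = 'sRGB'
scene.view_settings.view_transform = 'Filmic'

# Write the animation out as a numbered PNG image sequence
scene.render.image_settings.file_format = 'PNG'
scene.render.filepath = "//renders/moving_meditations_"  # hypothetical blend-relative path
scene.frame_start = 1
scene.frame_end = 120  # assumed frame range

# bpy.ops.render.render(animation=True)  # kick off the full sequence render
```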
The Main Challenges
The main challenges I had to face were for sure PC memory usage and finding new ways to further optimize certain assets or simulations. Although I could have had more PC power if I had chosen a desktop setup, I’m really happy with my laptop and the fact that I can work on my projects from anywhere. But it also means I need to optimize everything as much as possible so my computer doesn’t lag or freeze. As I improve my skills, I’ll probably want a more efficient machine in the future, but I’m good for now!
Something that I improved on compared to the previous Pwnisher challenge, but that remains a challenge nonetheless, is time management. Previously, I spent way too many hours (and nights) on things and ended up unsatisfied with the final result.
This time, I decided that if and when I ran into hurdles, I’d just take a few days’ break or go on a stroll with my fiancé. I found that it really helped clear my mind, and I always came back with a fresh set of eyes. I also tried to reasonably estimate how much time each step would take and to stick to those estimates. This Moving Meditations project took me 32 hours of work and 35 hours to render, and, contrary to the previous challenge, I’m super happy with the result and got enough sleep!
The last major issue I had was making my back wall too short in the top left corner. There was an empty, black void visible in the first few frames. I didn’t notice it at first because the render was dark, and when I finally did, it was too late for rerenders. So I just masked out a part of the wall in post-production and positioned it so it covered the hole.
Tips and Tricks for Participating in Pwnisher's Contests
To anyone willing to participate in Pwnisher’s challenges, I’d say don’t do it to win, do it for yourself and the experience you’ll gain. For this project, I honestly didn’t even think about the top 100 at first.
Of course, when you’re done and proud of your work, you might get your hopes up. And that’s OK! Just don’t give up when it turns out your project didn’t make it. I think a lot of people starting out with 3D think that if they didn’t get to the top of something, it means that their work is bad and they lose heart.
There are so many renders in the full compilation (over four hours of work from 3,600 artists) that I thought were great, so I believe I can honestly say: do not think that! Of course, some of them don’t stand out as much as others, but that’s the main purpose of these challenges: to learn new skills and hone older ones.
Even though this time my work was selected, my 2 previous submissions weren’t. When I look back on them, I think they’re really bad, but I also appreciate them because I learned a lot from both.
On a more practical side, my final advice is to always plan for render time! I think that’s something not many new people pay attention to. It’s super important to schedule and finish your work at least a couple of days ahead of the deadline, so you still have time to render the scene.
I’ve seen too many comments saying, "Guys! My render is going to take 12 hours and the deadline is in 8 hours, what do I do?" and I’ve been there before myself. I even recall asking for help with rendering, and a kind soul rendered part of my frames on a render farm. This time, I left a whole week just for renders for this challenge and it really paid off.
TL;DR: Don’t give up, keep moving forward, and have fun. And always leave some extra time for the final render.