RADiCAL: Creating AI-Powered 3D Motion Animation

RADiCAL's CEO Gavan Gravesen has told us about the company's 3D motion capture solution, shared how the platform's AI was developed, and spoken about the recent investment from Autodesk.

Introduction

80.lv: Please introduce yourself and your leadership team. Where did you study? What companies have you worked for? What projects have you contributed to?

Gavan Gravesen: My name is Gavan Gravesen, and I am RADiCAL’s co-founder and CEO. Over the past 10 years or so, following a corporate career, I have led, invested in, and advised a number of companies across technology and content creation. Among others, I co-founded Slated.com, the world’s leading online film packaging and financing marketplace. I hold master's degrees from the Berlin Conservatory of Music and the NYU School of Law.

Matteo Giuberti is our co-founder and CTO. Matteo was previously a lead developer at Xsens Technologies, the world’s leader in inertial motion capture solutions, where he was responsible for the latest developments of the Xsens human motion capture suit. He holds a Ph.D. from the University of Parma.

RADiCAL

80.lv: Now, let's discuss your 3D motion capture solution. How did you come up with the idea for it? How does it work? What are its advantages? How did you manage to make it accessible on any device?

Gavan Gravesen: The idea for RADiCAL was born out of another project I was running between 2017 and 2018. We were experimenting with computer vision, human body detection, and reconstruction, as well as 3D computer graphics. That work led to the foundational technology architecture for RADiCAL. 

The impulse for RADiCAL is my deep affection for all content creation, be it visual or otherwise. I believe storytelling is the creative engine for filmmaking/VFX, gaming, and social media alike. It’s just that one form of storytelling is narrative, scripted and linear, whereas the others are dynamically evolving in real-time. And the common delivery vehicle between dynamic and linear storytelling across VFX, gaming, and social media is computer graphics. 

Human representation is central to nearly all storytelling, but it’s also the most difficult and demanding to get right.  That’s true for real-life actors just as much as it is for virtual characters. We judge by their appearance and expression whether they deliver a story plausibly, and ideally in a convincing, compelling way.

And yet, modeling and animating human representation is really hard. Prohibitively so, especially for real-time use cases. So what better thing to do with my life than to try to make that better?

RADiCAL’s mission is to democratize scalable 3D animation and human virtualization for everyone, everywhere. We want to empower millions of independent creatives, serve billions of consumers, drive smart human analytics and accelerate autonomous systems that serve the public interest. 

With RADiCAL’s technology, all you do is point a regular 2D camera at yourself or an actor.  Our AI does the rest in the cloud, in real-time: we output 3D animation data in the form of skeletal joint rotations that you can stream or export into any 3D software client and any content creation pipeline.
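As a rough illustration of what consuming that output could look like on the client side, here is a minimal sketch. The joint names and the frame layout are invented for the example; RADiCAL's actual export schema is not public. A client would typically normalize each incoming quaternion before applying it to a rig, since streamed values can drift slightly from unit length:

```python
import math

# Hypothetical frame format: maps joint names to quaternions (w, x, y, z).
# The real RADiCAL schema may differ; this is purely illustrative.
frame = {
    "hips":      (0.7071, 0.0, 0.7071, 0.0),
    "left_knee": (1.0, 0.0, 0.0, 0.0),
}

def normalize(q):
    """Return the unit quaternion, guarding against zero-length input."""
    w, x, y, z = q
    n = math.sqrt(w*w + x*x + y*y + z*z)
    if n == 0:
        return (1.0, 0.0, 0.0, 0.0)  # identity rotation as a safe fallback
    return (w/n, x/n, y/n, z/n)

def apply_frame(frame):
    """Normalize every joint rotation before handing it to a 3D client."""
    return {joint: normalize(q) for joint, q in frame.items()}

rig_pose = apply_frame(frame)
```

In a real pipeline, `rig_pose` would be forwarded per frame to whichever 3D software client receives the stream.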

We also have a fundamental belief that, within just a few years, content creation within the 3D industry will experience the democratization and explosive growth already witnessed within the 2D computer graphics industry over the past 25 years. 

The AI

80.lv: And what about your AI? How was it developed? How does it help creators who use RADiCAL? What are the challenges of creating a mocap AI?

Gavan Gravesen: We do things differently, in two important respects. First, we aim to deliver the strongest possible science, without compromise, that will produce the most advanced quality across AI-powered single-camera platforms.

To achieve that, we layer the fundamental biomechanical science of human skeletal joint rotations, expressed as quaternions, into an advanced, deeply customized deep learning architecture. Beyond understanding 3D space in the context of skeletal biomechanics, our AI explicitly considers the temporal dynamics of human motion, i.e., we embed human movement over time into our analysis. Lastly, we do all that by training our AI on actual human motion, augmented with synthetic data for scale and robustness.
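The quaternion representation mentioned here is standard rotation math, not anything specific to RADiCAL's internals. As a self-contained illustration, this is how a single joint rotation expressed as a unit quaternion acts on a vector:

```python
import math

def q_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (
        aw*bw - ax*bx - ay*by - az*bz,
        aw*bx + ax*bw + ay*bz - az*by,
        aw*by - ax*bz + ay*bw + az*bx,
        aw*bz + ax*by - ay*bx + az*bw,
    )

def q_from_axis_angle(axis, angle):
    """Unit quaternion for a rotation of `angle` radians about a unit axis."""
    s = math.sin(angle / 2.0)
    return (math.cos(angle / 2.0), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * v * q^-1."""
    w, x, y, z = q
    qv = (0.0, v[0], v[1], v[2])
    conj = (w, -x, -y, -z)
    return q_mul(q_mul(q, qv), conj)[1:]

# Example: a 90-degree knee rotation about the x-axis maps +y to
# (approximately) +z, up to floating-point error.
knee = q_from_axis_angle((1.0, 0.0, 0.0), math.pi / 2)
rotated = rotate(knee, (0.0, 1.0, 0.0))
```

Quaternions avoid the gimbal-lock problems of Euler angles and interpolate smoothly, which is why they are the common currency for skeletal animation data.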

Second, we strive for "massive scale," which means that everyone, everywhere has to be able to use our platform, at low cost, on any device, in any environment.  We, therefore, provide access to our AI through a cloud-first, end-to-end, real-time, multiplayer AI-based 3D motion capture platform that requires no coding, investment, designated hardware, or training.  

Promoting the Tool

80.lv: How do you approach the business side of things and promote the tool? What are the main challenges? How do you work with the community? How is your solution monetized?

Gavan Gravesen: Our delivery model is cloud-first and web-first. Our business model, therefore, is rather like that of a conventional SaaS. Essentially, we leverage cloud resources and WebSocket communications.

In terms of marketing and pricing, we believe product-led growth and user-generated content will drive adoption across two user segments.

First, at the heart of our awareness strategy is a vibrant community to which we provide a low- or no-cost real-time multiplayer motion capture product, coupled with a new web-based, high-end, collaborative editor we call "Canvas".

Second, the community sandbox creates awareness among gaming, music, and social media publishers, TV and film studios, as well as industry partners to whom we offer scalable licensing arrangements, so they can enable new use cases at scale. 

Investment

80.lv: And what about the recent investment from Autodesk, could you tell us about it? What does it mean for RADiCAL and your users? How do you plan to use the investment to enhance the tool?

Gavan Gravesen: The partnership with Autodesk is essentially the perfect launchpad from which to enhance our product and increase adoption. Autodesk supports our strategy to empower a growing grassroots community with a low-cost, democratized, cloud-first platform.  

Specifically, the new capital coupled with technical industry domain expertise will enable us to: 

  • Expand our collaborative web editor platform to include animation tools traditionally only available to sophisticated and highly resourced professional users;
  • Enhance our AI with more data and even more advanced training algorithms;
  • Expand low-cost, and even free, use of our cloud platform around the world and across products;
  • Drive community engagement and accelerate enterprise adoption.

RADiCAL's Roadmap

80.lv: Please tell us about the company's future plans. What is your current roadmap? What can your users look forward to?

Gavan Gravesen: The immediate aim is to release RADiCAL Live, our real-time, multiplayer platform to the general public. RADiCAL Live is already licensed to a number of enterprise customers, but we’re looking to open the system up to everyone in 2022. 

However, our R&D efforts will mostly be concentrated on pushing our AI to the next level. Within the next year we plan to release significant qualitative improvements, especially in terms of fidelity (detail), motion domain (to support more challenging motion patterns), footlock (strengthening the relationship with the floor, a key feature in professional animation pipelines), and input tolerance (by making the AI more robust to challenging video inputs). We’re also expanding our capabilities by adding face, hand, and finger tracking, as well as support for multiple actors within the same video frame, all in real-time.

Gavan Gravesen, Co-Founder and CEO of RADiCAL

Interview conducted by Theodore McKenzie
