Meta's New Framework for Full-Body Tracking via Quest

The system requires only the position and orientation of the headset and controllers to generate full-body avatars.

A team of researchers from Meta has introduced QuestSim, a new framework capable of creating full-body avatars for VR/AR experiences. According to the team, the proposed system needs only the position and orientation of the Quest VR headset and controllers to construct an accurate digital representation of the wearer and simulate plausible, physically valid full-body motions.

According to the team, QuestSim's AI was trained on artificially generated sets of movements based on eight hours' worth of motion-capture clips that included walking, jogging, balancing, and similar activities. After training, the system could recognize which movement a person is performing based solely on real Quest headset and controller data. Using AI prediction, QuestSim can also simulate the movements of body parts that provide no real-time sensor data, such as the legs.
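As a rough intuition for how untracked body parts can be filled in from sparse headset data, here is a toy heuristic in Python. The function name, the height ratios, and the crouch logic are all illustrative assumptions invented for this sketch; QuestSim itself learns this kind of mapping with an AI policy trained on motion-capture data rather than hand-written rules.

```python
import numpy as np

def estimate_pelvis_from_headset(head_pos, user_height=1.7):
    """Toy heuristic for inferring an untracked body part (the pelvis)
    from headset position alone. All ratios here are assumptions for
    illustration; QuestSim learns such mappings from motion capture."""
    head_pos = np.asarray(head_pos, dtype=float)
    # Approximate standing head height as a fixed fraction of body height (assumption).
    standing_head_z = 0.94 * user_height
    # How far the head has dropped below standing height -> crouch amount in [0, 1].
    crouch = float(np.clip(1.0 - head_pos[2] / standing_head_z, 0.0, 1.0))
    # Place the pelvis below the head; crouching shortens the vertical offset.
    offset = 0.30 * user_height * (1.0 - 0.5 * crouch)
    pelvis = head_pos - np.array([0.0, 0.0, offset])
    return pelvis, crouch

# A standing headset height yields no crouch; a lowered one yields some.
pelvis_standing, crouch_standing = estimate_pelvis_from_headset([0.0, 0.0, 1.60])
pelvis_crouched, crouch_crouched = estimate_pelvis_from_headset([0.0, 0.0, 1.00])
```

The appeal of a learned policy over heuristics like this one is exactly what the article describes: the model can produce physically valid motion for the whole body, not just a plausible static offset for one joint.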

"In the future, we want to supply the policy with more detailed skeleton and body shape information. Finally, we want to increase the diversity of motions the avatars can imitate. This could be achieved using mixture-of-expert policies, pre-trained low-level controllers that facilitate learning high-level tasks, or more informative observation representations," comments the team.

You can learn more here. Also, don't forget to join our Reddit page and our Telegram channel, and follow us on Instagram and Twitter, where we share breakdowns, the latest news, awesome artworks, and more.
