Meta Connect 2022: Photorealistic AI-powered avatars

Zuckerberg scans a teddy bear, plus 3D maps for the visually impaired

Ben Wodecki

October 11, 2022



Researchers at Meta are experimenting with AI to make improvements to metaverse avatars and interfaces.

During the company’s annual Connect event, Meta showcased ongoing research combining AI and electromyography to build more intuitive interfaces and more realistic-looking avatars.

Meta researchers previously unveiled Pixel Codec Avatars (PiCA), a deep generative model capable of producing realistic 3D renderings of human faces.

At Connect 2022, Meta showcased further work on Codec Avatars – including Instant Codec Avatars, designed to be created with just a smartphone in far less time.

Despite being billed as ‘instant,’ the generation process still takes a few hours, though Meta said at the event that it aims to cut that time down in the future.

The technology could also be similarly applied to generating models of objects for use in VR.

Related stories:

Meta Connect 2022: Meta partners with Microsoft in metaverse pivot to business

Meta Connect 2022: Introducing next-gen VR headset Quest Pro

Meta Connect 2022: Reporting live from the metaverse

At the event, Meta CEO Mark Zuckerberg used the tech to scan a teddy bear using a smartphone. After some processing time, a model of the bear was generated and could be imported into VR. The result was a high-fidelity model of the bear, with which users could interact.

“Neither approach is real time yet and each has its limitations,” said Michael Abrash, the chief scientist at Meta’s Reality Labs. “But the goal is to let you quickly and easily make physical objects a part of your virtual world.”

Carnegie Mellon alliance

Meta also revealed a partnership with Carnegie Mellon University to develop tools for visually impaired individuals.

On display were technologies that create virtual spaces to give visually impaired people better directions and navigation to their destinations.

Scientists from both organizations created a 3D map of Pittsburgh International Airport using techniques including neural radiance fields and inverse rendering. The map can be accessed via a smartphone app.

The research is largely conducted through the company’s Reality Labs, a Meta Platforms business tasked with producing the next generation of VR and AR hardware and software.

“With Reality Labs, we’re inventing a new computing platform — one built around people, connections and the relationships that matter,” the company said.

Meta described its research work as developing “foundational technologies for future devices and the metaverse.”

Cover image: Meta CEO Mark Zuckerberg

About the Authors

Ben Wodecki

Assistant Editor
