Meta Connect 2022: Photorealistic AI-powered Avatars

Zuckerberg scans a teddy bear, plus 3D maps for the visually impaired

Ben Wodecki, Jr. Editor

October 11, 2022

2 Min Read
Codec Avatar of Meta CEO Mark Zuckerberg

Researchers at Meta are experimenting with AI to make improvements to metaverse avatars and interfaces.

During the company’s annual Connect event, Meta showcased ongoing work in which researchers combined AI and electromyography to build more intuitive interfaces and more realistic-looking avatars.

Earlier this year, Meta researchers unveiled Pixel Codec Avatars (PiCA), a deep generative model capable of producing realistic 3D renderings of human faces.

At Connect 2022, Meta showcased further work on Codec Avatars – including Instant Codec Avatars, which are designed to be created with a smartphone in far less time.

Despite being billed as ‘instant,’ the generation process still takes a few hours, though Meta said at the event that it aims to cut that time down in the future.

The technology could be similarly applied to generating models of physical objects for use in VR.

At the event, Meta CEO Mark Zuckerberg used the tech to scan a teddy bear using a smartphone. After some processing time, a model of the bear was generated and could be imported into VR. The result was a high-fidelity model of the bear, with which users could interact.

“Neither approach is real time yet and each has its limitations,” said Michael Abrash, the chief scientist at Meta’s Reality Labs. “But the goal is to let you quickly and easily make physical objects a part of your virtual world.”


Carnegie Mellon alliance

Meta also revealed a partnership with Carnegie Mellon University to develop tools for visually impaired individuals.

On display were technologies that create virtual spaces to give visually impaired people better directions and navigation to their destinations.

Researchers from both organizations created a 3D map of Pittsburgh International Airport using techniques including neural radiance fields and inverse rendering. The map can be accessed via a smartphone app.

The research is largely conducted through the company’s Reality Labs, a Meta Platforms business tasked with producing the next generation of VR and AR hardware and software.

“With Reality Labs, we’re inventing a new computing platform — one built around people, connections and the relationships that matter,” the company said.

Meta described its research work as developing “foundational technologies for future devices and the metaverse.”

About the Author(s)

Ben Wodecki

Jr. Editor

Ben Wodecki is the Jr. Editor of AI Business, covering a wide range of AI content. Ben joined the team in March 2021 as assistant editor and was promoted to Jr. Editor. He has written for The New Statesman, Intellectual Property Magazine, and The Telegraph India, among others. He holds an MSc in Digital Journalism from Middlesex University.

