Instagram Uses AI to Verify User Ages

App uses tech from Yoti as pressure grows from parents

Ben Wodecki

November 9, 2022

2 Min Read

Social media platform Instagram is tightening its user controls by deploying AI-enabled age verification tools from Yoti.

Minors using Instagram in the U.K. and EU who attempt to change their date of birth to 18 or older will now have to go through Yoti’s tool, which estimates a user’s age from their face.

The system analyzes a selfie to estimate age from facial features. In a white paper, Yoti claimed its tool correctly identifies users aged 13 to 17 as being under 23 years old 99.65% of the time, and 6- to 11-year-olds as being under 13 98.91% of the time.

Yoti said on its website that all images are deleted once someone receives their estimated age: “nothing is ever viewed by a human.” The company also said the model, which uses a neural network, cannot infer anything else about a person or otherwise identify them.

Yoti’s digital verification and biometrics tools have previously been deployed in British Post Offices and the National Health Service (NHS), as well as in supermarkets for consumers attempting to purchase alcohol.

Instagram began exploring age verification options with Yoti back in June, with U.S. users given the option of using the age estimation tool, uploading an ID document or having a parent vouch for them.

“Knowing people’s age allows us to provide appropriate experiences to different age groups, specifically teens,” according to a June blog post from Instagram.

Impressionable youths

The age verification system arrives in the wake of the suicide of Molly Russell, a British 14-year-old.

In September, U.K. coroner Andrew Walker concluded a five-year inquest into her death, finding in his report that she killed herself “whilst suffering from depression and the negative effects of on-line content.”

Walker said Russell’s depression worsened after viewing content from some online sites that “were not safe as they allowed access to adult content that should not have been available for a 14-year-old child to see.”

Some of the content “romanticized acts of self-harm by young people on themselves,” Walker said. In some cases, the content was “particularly graphic” and portrayed suicide as “inevitable.” Walker said that this content “contributed to her death in a more than minimal way.”

Her father, Ian Russell, said social media platforms including Instagram, Pinterest and WhatsApp “played a part” in his daughter’s suicide, according to a 2019 interview on ITV’s daytime TV show, This Morning.

Facebook, now Meta, said at the time that Instagram was conducting a full review of suicide-related content and that engineers were working to make it harder for users to find self-harm posts that violate the platform’s policies.

About the Author

Ben Wodecki

Assistant Editor
