Coming later this year to the iPhone and iPad, Personal Voice uses on-device machine learning to clone a user's voice

Deborah Yao, Editor

May 17, 2023


At a Glance

  • Apple's Personal Voice feature lets users clone their voice if they are in danger of losing the ability to speak.
  • Users need only read text prompts for 15 minutes for the on-device machine learning model to create their synthesized voice.

Apple is planning to roll out a new feature on the iPhone and iPad that will let users clone their voices.

With the feature, called Personal Voice, users can create a synthesized voice that sounds like them by reading a random set of text prompts for 15 minutes. The audio is recorded on an iPhone or iPad.

An on-device machine learning model will train on the recorded voice to create a clone so “you can ‘speak’ even if you eventually lose your speech,” according to an Apple blog post.
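
For app developers, the likeliest home for the feature is AVFoundation's existing speech-synthesis classes. The Swift sketch below is speculative: the `requestPersonalVoiceAuthorization` call and the `.isPersonalVoice` voice trait are assumptions about how Apple might expose user-cloned voices, not API the company has detailed.

```swift
import AVFoundation

/// A minimal sketch of speaking through a user's cloned voice.
/// ASSUMPTION: the authorization call and the `.isPersonalVoice` trait
/// are guesses at how Apple might surface Personal Voice; they are not
/// confirmed by the announcement this article covers.
final class PersonalVoiceSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // A personal voice is private to its owner, so an app would
        // presumably need explicit permission before using one.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
            guard status == .authorized else { return }

            // Pick the first installed voice tagged as a personal
            // (user-cloned) voice, falling back to a stock system voice.
            let personal = AVSpeechSynthesisVoice.speechVoices()
                .first { $0.voiceTraits.contains(.isPersonalVoice) }

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personal ?? AVSpeechSynthesisVoice(language: "en-US")

            DispatchQueue.main.async {
                self.synthesizer.speak(utterance)
            }
        }
    }
}
```

Because the model trains and runs on the device, a flow like this would never need to send the user's voice recordings off the phone.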

Apple unveiled this and other accessibility features this week, ahead of their release later this year.

Apple also introduced Assistive Access, which simplifies the user interface on iPhones and iPads to make them easier for people with cognitive disabilities to use.

With Live Speech, speech-impaired users can type to speak during calls and conversations on the iPhone, iPad and Mac. Users can also save commonly used phrases as shortcuts.

Point and Speak reads aloud the text that blind or visually impaired users point toward. For example, to read the controls on a microwave, a user moves a finger across the keypad and the device speaks the text on each button. Point and Speak is built into the Magnifier app on the iPhone and iPad. It uses the device’s camera, machine learning and LiDAR scanner, which is built into iPhone 12 Pro and later Pro models as well as recent iPad Pro models.

About the Author(s)

Deborah Yao

Editor

Deborah Yao runs the day-to-day operations of AI Business. She is a Stanford graduate who has worked at Amazon, the Wharton School and the Associated Press.

