Coming later this year to the iPhone and iPad, Personal Voice uses on-device ML
Apple is planning to roll out a new feature on the iPhone and iPad that will let users clone their voices.
With the feature, called Personal Voice, users can create a synthesized voice that sounds like them by reading a randomized set of text prompts aloud for 15 minutes. The audio is recorded on an iPhone or iPad.
An on-device machine learning model will train on the recorded voice to create a clone so “you can ‘speak’ even if you eventually lose your speech,” according to an Apple blog post.
Apple unveiled Personal Voice alongside several other accessibility features this week; all are slated to arrive later this year.
Apple also introduced Assistive Access, which simplifies the user interface on iPhones and iPads to make the devices easier for people with disabilities to use.
With Live Speech, speech-impaired users can type to speak during calls and conversations on the iPhone, iPad and Mac. Users can also save commonly used phrases as shortcuts.
Point and Speak reads aloud the text that blind or visually impaired users point at. For example, to read the controls on a microwave, users simply move a finger across the keypad and hear each label spoken. Point and Speak is built into the Magnifier app on the iPhone and iPad, and it uses the device’s camera, machine learning and LiDAR scanner (embedded in iPhone 12 Pro and later Pro models and in LiDAR-equipped iPad Pro models).