With iOS 17, you can make a voice that sounds like you in just 15 minutes
Apple this week previewed new accessibility features for the iPhone, iPad, and Mac that will be available later this year. One feature, Personal Voice, has drawn particular interest because it will let people who are at risk of losing their ability to speak "create a voice that sounds like them" for communicating with family, friends, and others.
Users of an iPhone, iPad, or newer Mac will be able to create a Personal Voice by reading a randomized set of text prompts aloud until 15 minutes of audio have been recorded on the device. Apple says the feature uses on-device machine learning to keep users' information private and secure, and it will initially be available in English only.
Personal Voice integrates with another new accessibility feature called Live Speech, which lets iPhone, iPad, and Mac users type what they want to say and have it spoken aloud during phone calls, FaceTime calls, and in-person conversations.
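For developers, Apple exposes Personal Voice through AVFoundation's speech-synthesis APIs in the iOS 17 SDK. The following Swift sketch shows the general shape: request authorization, find a voice flagged as a Personal Voice, and speak an utterance with it. This is a minimal illustration based on the iOS 17 SDK, assuming the user has already created a Personal Voice in Settings and approves the authorization prompt, not a complete implementation.

```swift
import AVFoundation

// Sketch: speak typed text using the user's Personal Voice (iOS 17+).
// Assumes the user has created a Personal Voice in Settings > Accessibility
// and grants this app access when prompted.
let synthesizer = AVSpeechSynthesizer()

AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else { return }

    // Personal Voices appear alongside system voices, marked by a trait.
    let personalVoice = AVSpeechSynthesisVoice.speechVoices()
        .first { $0.voiceTraits.contains(.isPersonalVoice) }

    let utterance = AVSpeechUtterance(string: "Hello from my Personal Voice.")
    utterance.voice = personalVoice
    synthesizer.speak(utterance)
}
```

If no Personal Voice exists or authorization is denied, `personalVoice` is `nil` and the system falls back to a default voice, so a real app would surface that state to the user.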
According to Apple, Personal Voice is designed for people who may eventually lose the ability to speak, such as those recently diagnosed with ALS (amyotrophic lateral sclerosis) or other conditions that can progressively affect speech. Like other accessibility features, however, Personal Voice will be available to all users. The feature is expected to come to the iPhone with iOS 17, which will likely be unveiled next month and released in September.
Being able to communicate with friends and family is ultimately what matters most, according to Philip Green, a member of the ALS advocacy group Team Gleason who received his ALS diagnosis in 2018. "It makes all the difference in the world if you can tell them you love them in a voice that sounds like you," Green said. "Being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary."