Here's Why You Would Want Your iPhone to Talk in Your Own Voice

Apple's new accessibility features are all about AI

  • Apple introduces new AI-based accessibility features. 
  • Your iPhone will be able to learn how to speak using your voice.
  • For people at risk of losing the ability to speak, it means they can keep talking in their own voice.
Someone wearing headphones speaking into a microphone, recording their voice on a smartphone.

Soundtrap / Unsplash

Soon, your iPhone will be able to speak using your voice, finally using AI for good instead of evil. 

Apple just published its roadmap for upcoming accessibility features, and AI is sneaking in. One notable new feature is Personal Voice, which lets users train their iPhones to speak in their voice. Why? So that people at risk of losing their voice can preserve it for future communication with their loved ones. But, of course, it could go much further than that. 

"AI-based voice cloning is a very potent new technology that promises to transform lives—in many cases for the better," Dr. Mohamed Lazzouni, CTO of biometrics company Aware, told Lifewire via email. "Imagine being able to create artificial voices for people who are unable to talk without assistance."

No, Apple's Not Ignoring AI

The current AI hype megawave seems to be washing past Apple. Partly that's because the company usually takes a long time to get stuff right and never announces anything until it's ready. And it could also be because the kind of open-ended AI that's exploding—where chatbots advise you to leave your spouse and anyone can rip off an artist's lifelong work with a short text prompt—doesn't really fit into Apple's way of doing things.


Also, Apple has been using AI for years; it just sticks with the older term for it: machine learning. Whenever your phone camera blurs the background in Portrait Mode, recognizes the faces of your friends and family members and groups them into albums, or identifies a plant in a photo—that's AI. And Apple is so all-in on AI that its M-series and A-series systems-on-a-chip (SoCs) devote a significant chunk of silicon to dedicated AI hardware in the form of the Neural Engine. 
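
If you write apps for Apple's platforms, that on-device machine learning is already a public API. Here's a minimal Swift sketch using the Vision framework; it's an illustration of local image classification, not the actual pipeline the Photos app uses:

    import Foundation
    import Vision

    // A minimal sketch of the kind of on-device machine learning
    // described above, using Apple's public Vision framework.
    func classify(imageAt url: URL) throws {
        let request = VNClassifyImageRequest()
        let handler = VNImageRequestHandler(url: url, options: [:])

        // Inference runs entirely on the device, using the Neural
        // Engine where one is available.
        try handler.perform([request])

        // Print the top few labels, e.g. "plant" or "dog".
        for observation in (request.results ?? []).prefix(5) {
            print(observation.identifier, observation.confidence)
        }
    }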

We've gotten so used to this year's trend of writing prompts to coax machine-learning models into cobbling together text, images, and videos that we might forget there are other ways to use AI. These AI bots are incredible at pretending to be intelligent assistants, but the same generative machine-learning techniques can also be put to work in focused, less open-ended settings, like learning a specific person's voice. 

Cloning Your Own Personal Voice

Amyotrophic lateral sclerosis (ALS) is a progressive neurodegenerative disease, and one of its effects is the gradual loss of the ability to speak. If you capture your voice soon after diagnosis, you can later use it to speak through text-to-speech. 

Personal Voice is Apple's take on an existing practice called voice banking. But instead of requiring somebody to spend weeks reading thousands of phrases into a computer, Apple's version needs only 15 minutes of audio, and the training happens locally on your iPhone, iPad, or Apple Silicon Mac for privacy reasons. 
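
For developers, Personal Voice is exposed through the AVSpeechSynthesizer API that arrived with iOS 17. Here's a minimal Swift sketch of an app requesting access and looking for a personal voice; a real app would also handle the denied and unsupported cases properly:

    import AVFoundation

    // A minimal sketch of how a third-party app asks to use a
    // Personal Voice. The voice's owner has to grant access; apps
    // never get the raw voice model itself.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        guard status == .authorized else {
            print("Personal Voice not available: \(status)")
            return
        }

        // Personal voices appear alongside the system voices,
        // flagged with the .isPersonalVoice trait.
        let personalVoices = AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.voiceTraits.contains(.isPersonalVoice) }
        print("Found \(personalVoices.count) personal voice(s).")
    }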

A screenshot of the voice-training step in Apple's Personal Voice.

Apple

"I've been thinking a lot about the privacy implications, but the implementation of this feature certainly seems that it will be harder to abuse than something like ElevenLabs's voice cloning tech," writes long-time Apple reporter Dan Moren on the Six Colors blog. "For example, just having to spend fifteen minutes training the model with a random set of words is going to make it a lot harder to create a model of someone else's voice without their knowledge."

Once you have your voice, you can use it with another iPhone accessibility feature called Live Speech. Say you're on a FaceTime call with your family. You can type in your part of the conversation, and Live Speech will read it to the other participants using your Personal Voice. 
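
Under the hood, that's ordinary text-to-speech pointed at your own voice model. Here's a minimal Swift sketch of the same idea, again using the iOS 17 AVSpeechSynthesizer API; this isn't Live Speech's actual implementation, just the developer-facing equivalent:

    import AVFoundation

    // A sketch of the idea behind Live Speech: typed text read aloud
    // in the user's own voice. Keep the synthesizer alive so playback
    // isn't cut off when it goes out of scope.
    let synthesizer = AVSpeechSynthesizer()

    func speak(_ typedText: String) {
        let utterance = AVSpeechUtterance(string: typedText)

        // Prefer a Personal Voice if the user has made one; otherwise
        // fall back to a standard system voice.
        let personalVoice = AVSpeechSynthesisVoice.speechVoices()
            .first { $0.voiceTraits.contains(.isPersonalVoice) }
        utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")

        synthesizer.speak(utterance)
    }

    speak("Sorry, I'm on mute. Be right back.")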

Other possible uses for your voice include reading stories to your kids while you're away traveling, or reading your own notes back to you. And here's one that would be really neat: having incoming text messages read aloud in the sender's voice. Siri already reads messages in its own voice; personalized voices would be so much better. 

A FaceTime call using Apple's Live Speech accessibility feature.

Apple

This feature is not only far more focused than most of today's AI tools, but also much more private. That privacy goes beyond Apple's usual practice of keeping your data and biometrics on-device instead of in the cloud: you also stay in control of your voice after it has been learned. Only you can use it, even in group chat situations. 

It could be that Apple never markets an AI product. AI might currently be the hottest thing in tech, but it is also the most controversial, the most feared, and definitely the most misunderstood. And the AI backlash has already begun. Perhaps Apple will wait it out, adding AI features like Personal Voice but not actually calling them AI.

And who knows, maybe one day, it will use AI to fix Siri.
