Apple today previewed a range of new features to improve cognitive, visual, and speech accessibility. These tools will arrive on the iPhone, iPad, and Mac later this year. Apple, long a leader in mainstream tech accessibility, emphasizes that the tools were built with feedback from disabled communities.
Assistive Access, coming soon to iOS and iPadOS, is designed for people with cognitive disabilities. It streamlines the iPhone and iPad interface, focusing on making it easier to talk with loved ones, share photos, and listen to music. The Phone and FaceTime apps, for example, are merged into one.
The design is also more digestible, with large icons, higher contrast, and clearer text labels simplifying the screen. Users can customize these visual features to their liking, and those preferences carry over to any app compatible with Assistive Access.
Blind and low-vision users can already use the existing Magnifier tool to locate nearby doors, people, or signs. Now Apple is introducing a feature called Point and Speak, which uses the device’s camera and LiDAR scanner to help visually impaired people interact with physical objects that carry multiple text labels.
So if a visually impaired user wants to heat up food in the microwave, they can use Point and Speak to tell the difference between the “popcorn”, “pizza” and “power level” buttons: as the device identifies each label, it reads it aloud. Point and Speak will be available in English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese and Ukrainian.
One particularly interesting feature of the bunch is Personal Voice, which creates an automated voice that sounds like you, rather than like Siri. The tool is designed for people who may be at risk of losing their ability to speak due to conditions such as ALS. To generate a Personal Voice, the user spends about fifteen minutes clearly reading randomly chosen text prompts into the microphone. The audio is then processed locally on the iPhone, iPad, or Mac, using machine learning to create the personalized voice. The result sounds similar to what Acapela has done with its “my own voice” service, which works with other tools.
It’s easy to see how a repository of unique, well-trained text-to-speech models could be dangerous in the wrong hands. But according to Apple, this custom voice data is never shared with anyone, not even Apple itself. In fact, Apple says it doesn’t even associate your Personal Voice with your Apple ID, since some households share a login. Instead, users must opt in if they want a Personal Voice they create on their Mac to be accessible on their iPhone, or vice versa.
At launch, Personal Voice is only available for English speakers and can only be created on devices with Apple silicon.
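For those curious how a voice like this might actually be used by an app, Apple’s long-standing AVSpeechSynthesizer API is the natural playback path. The sketch below is a rough Swift illustration, assuming the Personal Voice authorization call and voice trait that AVFoundation gained in iOS 17; Apple’s announcement doesn’t detail the developer surface, so treat the specifics as assumptions.

```swift
import AVFoundation

/// Speaks a line of text with the user’s Personal Voice, if one exists
/// and the user has granted this app access (an iOS 17+ API).
final class PersonalVoiceSpeaker {
    // Keep a strong reference so playback isn’t cut off mid-utterance.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        // Personal Voice is opt-in per app; the system prompts the user.
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard status == .authorized else { return }
            // Voices created by the user carry the .isPersonalVoice trait.
            guard let voice = AVSpeechSynthesisVoice.speechVoices()
                .first(where: { $0.voiceTraits.contains(.isPersonalVoice) })
            else { return }
            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = voice
            self?.synthesizer.speak(utterance)
        }
    }
}

// Usage:
// let speaker = PersonalVoiceSpeaker()
// speaker.speak("Running ten minutes late; see you soon.")
```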
Whether users speak in Siri’s voice or their AI voice twin, Apple is making it easier for nonverbal people to communicate. Live Speech, coming to Apple devices, lets people type what they want to say so the device can speak it aloud. The tool is available right on the lock screen, but it can also be used in other apps, such as FaceTime. And if users find themselves repeating the same phrases often, such as a regular coffee order, they can save preset phrases within Live Speech.
Apple’s existing speech-to-text tools are also getting an upgrade. Voice Control now includes phonetic text editing, which makes it easier for people who type with their voice to quickly correct mistakes. So if you see the computer transcribe “great” when you meant “grey”, the fix is easier to make. This feature, Phonetic Suggestions, will initially be available in English, Spanish, French, and German.
These accessibility features are expected to roll out to several Apple products this year. As for existing offerings, Apple will expand SignTime access to Germany, Italy, Spain and South Korea on Thursday. SignTime provides users with on-demand sign language interpretation for Apple Store and Apple Support customers.