Speech impairments aren’t a problem for Google’s new voice app

It understands what users are trying to say even when other people can’t.

Google is developing an app that can recognize what people with speech impairments are trying to say, making it easier for them to both talk to other people and use voice-controlled technology.

The challenge: About 7.5 million people in the U.S. have difficulty speaking and being understood due to a brain injury, a disease like amyotrophic lateral sclerosis (ALS), or some other condition.

Not only can this make it hard for them to communicate with other people, but it can also hinder their ability to use speech-recognition technology, which is particularly disheartening given that many people with speech impairments could greatly benefit from the tech.

“For example, people who have ALS often have speech impairments and mobility impairments as the disease progresses,” Julie Cattiau, a project manager at Google AI, told the New York Times in September. “So it would be helpful for them to be able to use the technology to turn the lights on and off or change the temperature without having to move around the house.”


Project Relate: Google’s Project Relate app aims to solve both of these problems. 

The AI can be individually trained to understand what people with speech impairments are trying to say — this allows them to take advantage of Google’s voice-controlled Assistant. 

The app also has a “Listen” feature that transcribes users’ speech to text. They can then show the text to someone who’s having difficulty understanding them, or use the “Repeat” feature to repeat what they’re trying to say with a synthesized voice.

Beta testers: Google is currently seeking beta testers for the app: adults whose speech impairments make it difficult for them to be understood. Testers must use an Android phone, speak English, and live in Australia, Canada, New Zealand, or the United States.

Each tester will need to record themselves saying a list of 500 phrases to train the app to recognize their particular speech — that’ll take about 30 to 90 minutes, but it doesn’t have to be done in one session. They’ll then be asked to use the app and provide feedback on it to Google.

The big picture: People with speech impairments aren’t the only ones being left behind by speech-recognition tech.

Today’s systems are typically trained to understand “the average North American English voice,” Frank Rudzicz, a computer scientist at the University of Toronto, told the NYT. As a result, they aren’t as adept at understanding some English speakers, such as those with accents or who speak African American Vernacular English.

To make speech-recognition tech more universal, AI researchers will need to prioritize collecting data from people who speak "non-standard" English, and Google's Project Relate is a step in the right direction.

