Startup breaks through “accent barrier” with real-time translator

The accent translator can integrate into Zoom, WhatsApp, or phone calls.

After struggling to understand each other’s accents, three Stanford students — from China, Russia, and Venezuela — developed a technology that can listen to English spoken with one accent and replay it with another.

They’ve now formed a startup, Sanas, to release the tech, which they say is the world’s first real-time speech accent translator.

The challenge: Of the 1.5 billion people who know English, more than 1 billion speak it as a second language. Those who speak it as a first language hail from the U.S., the U.K., Ireland, Australia, and other regions with their own unique pronunciations of English words.

Given all of that, it’s easy to see how two people can both be speaking English and still have difficulty communicating, thanks to accents shaped by their home region or their first language.


Speech therapy can help non-native speakers lose their accents, but it takes a long time, it doesn’t work for everyone, and some people would rather not “fake” a local accent.

“[W]e knew from our own experience that forcing a different accent on yourself is uncomfortable,” Sanas CFO Andres Perez Soderi told IEEE Spectrum. “I went to a British high school and tried to force a British accent; it was an experience that was hard to digest.”

How it works: Rather than trying to change how people speak, the students decided to train an accent translator algorithm. First, they had to feed it a lot of recordings of the exact same phrases spoken with different accents.

“You aren’t just doing audio signal processing, changing the pitch and tone — you have to change the phonetics,” Sanas CTO Shawn Zhang explained.

“So we really needed parallel data sets, created by readers using the same source material, so the neural network could learn to map from one to the other, examining both to learn how to transform the pronunciation,” he continued.
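Sanas hasn’t published its pipeline, but the “parallel data” idea Zhang describes is easy to sketch. In the hypothetical Python below, the directory layout, accent names, and file-naming scheme are all assumptions made for illustration; the pairing logic is the only part his description actually implies:

```python
# Hypothetical sketch of assembling a parallel accent corpus: the same
# scripted sentences read by speakers with different accents, matched up
# by utterance ID. Layout and names are illustrative, not Sanas' own.
from pathlib import Path

DATA_ROOT = Path("recordings")  # assumed layout: recordings/<accent>/<utterance_id>.wav
SOURCE_ACCENT = "filipino"
TARGET_ACCENT = "american"

def parallel_pairs(root: Path, src: str, tgt: str):
    """Yield (source, target) paths for recordings of the same sentence."""
    src_files = {p.stem: p for p in (root / src).glob("*.wav")}
    tgt_files = {p.stem: p for p in (root / tgt).glob("*.wav")}
    # Only utterance IDs present in both accents form a training pair.
    for utt_id in sorted(src_files.keys() & tgt_files.keys()):
        yield src_files[utt_id], tgt_files[utt_id]

# Each pair is one training example: the network hears the source accent
# and learns to map its phonetics onto the target accent's.
for src_wav, tgt_wav in parallel_pairs(DATA_ROOT, SOURCE_ACCENT, TARGET_ACCENT):
    print(f"train on: {src_wav} -> {tgt_wav}")
```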

A preview of what users see when using the accent translator. Credit: Sanas

Their finished accent translator works for five accents: American, British, Australian, Filipino, and Spanish — you could say something in Spanish-accented English, for example, and have it translated into a British accent.

It has a 150-millisecond delay (about one-sixth of a second), runs directly on a person’s computer (not in the cloud), and can integrate into apps such as Zoom and WhatsApp.

The total delay experienced while using the tech depends on the app you’re communicating with — Zoom, for example, averages a 50-millisecond delay itself, so someone using the accent translator with that service would experience a total delay of 200 milliseconds.

That still puts it comfortably under the limit: Perez Soderi told IEEE Spectrum that delays below 300 milliseconds are generally imperceptible.
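For the arithmetic-minded, the latency budget is simple addition; this trivial sketch just restates the article’s figures (the 150 ms, 50 ms, and 300 ms numbers come from the reporting above, and the app-delay table is otherwise hypothetical):

```python
# Back-of-envelope latency budget using the figures reported above.
SANAS_DELAY_MS = 150         # accent conversion, running locally
APP_DELAY_MS = {"zoom": 50}  # Zoom's average delay, per the article
PERCEPTIBLE_ABOVE_MS = 300   # delays under ~300 ms generally go unnoticed

total = SANAS_DELAY_MS + APP_DELAY_MS["zoom"]
verdict = "imperceptible" if total < PERCEPTIBLE_ABOVE_MS else "noticeable"
print(f"total delay: {total} ms ({verdict})")  # -> total delay: 200 ms (imperceptible)
```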


The next steps: Sanas has secured $5.5 million in funding, which the students will use to expand their team and further develop the tech. In addition to adding other accents within English, they plan to start translating other languages, too (Spanish spoken in various accents, for example).

While the students’ personal lives may have inspired them to develop the accent translator, they think it could be a boon to many businesses, particularly those that provide customer service and technical support over the phone — they already have seven such companies piloting the tech.

“There are also creative use cases such as those in entertainment and media where producers can make their films and programs understandable in different parts of the world by matching accents to localities,” Sanas CEO Maxim Serebryakov said.

“We are also exploring how machines can better interpret what people are saying,” he continued. “We’ve only begun to explore the possibilities.”

