Self-driving cars can now tell passengers what they’re thinking

The same type of AI behind ChatGPT is now in Wayve’s autonomous vehicles.

Microsoft-backed autonomous vehicle (AV) startup Wayve has given its cars the ability to explain their decisions in conversational language — a move that could accelerate their development and increase public trust in self-driving cars.

AI’s black box: Given enough training data, AIs can learn to create art, detect diseases, and even read our minds, but explaining how they do any of these things is often beyond their grasp. Sometimes, even the people who built an AI cannot explain why it made a particular decision.

This is known as AI’s “black box problem,” and it can prevent developers from understanding why their AIs made mistakes, which makes it harder to correct them. Users may also be hesitant to trust an AI if they don’t understand how it works.

A lack of trust in AI is a particularly big problem for the AV industry — just 9% of respondents to a 2023 AAA survey said they trusted self-driving cars, compared to the 68% who said they feared them.

Because AVs remove human error from the equation, they have the potential to dramatically reduce the number of accidents on our roads, but if the AV industry can’t change the public’s perception of self-driving cars, they may not get a chance to make our roads safer.

Talking cars: In an attempt to get more people to feel comfortable in AVs, and to improve the vehicles’ performance, Wayve has launched LINGO-1, a self-driving AI that can explain its “thought process” in easy-to-understand language.

“LINGO-1 opens up many possibilities for self-driving, improving the intelligence of our end-to-end AI Driver as well as bridging the gap of public trust — and this is just the beginning of maximizing its potential,” said CEO Alex Kendall.

How it works: To train an AV, developers typically feed the systems tons of driving data, collected by cameras and sensors. The AIs learn the right actions to take based on what they see in the data.

They can’t easily explain why they make the decisions they do, though — so Wayve added another kind of data to its training: verbal commentary.

This commentary was provided by expert drivers as they navigated roads in the UK: they explained why they were taking certain actions as they took them. A driver might say they were slowing down because a car was merging into their lane, for example.

The drivers were told to follow certain protocols while providing this commentary to make it as uniform and easy to aggregate as possible.

Wayve then combined its self-driving software with a large language model (LLM) — a type of AI that can understand and respond to prompts in conversational language — to create LINGO-1, a self-driving AI that can explain itself the same way a human driver might.

“LINGO-1 can generate a continuous commentary that explains the reasoning behind driving actions,” writes Wayve. “This can help us understand in natural language what the model is paying attention to and what it is doing.”
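To make the idea concrete, here is a minimal Python sketch of the kind of paired supervision the article describes: each moment of driving (sensor and action data) is labeled with the expert driver’s spoken explanation. Everything here — the field names, the sample commentary, the `commentary_stream` helper — is hypothetical illustration, not Wayve’s actual data format or API; in the real system, a trained language model generates the commentary rather than replaying labels.

```python
from dataclasses import dataclass

@dataclass
class AnnotatedFrame:
    """One moment of driving, paired with the expert's spoken explanation."""
    speed_mph: float   # vehicle speed from sensors
    brake: float       # braking input, 0.0 (off) to 1.0 (full)
    steering: float    # steering angle, negative = left, positive = right
    commentary: str    # the driver's explanation for this moment

# Toy examples of the paired driving-data + language samples described above.
dataset = [
    AnnotatedFrame(28.0, 0.4, 0.0,
                   "Slowing down because a car is merging into my lane."),
    AnnotatedFrame(0.0, 1.0, 0.0,
                   "Stopped because the traffic light ahead is red."),
    AnnotatedFrame(22.0, 0.2, -0.3,
                   "Turning left and easing off because of a cyclist."),
]

def commentary_stream(frames):
    """Yield a running natural-language commentary for a drive.

    Here we simply replay the human labels to show the shape of the
    supervision signal; a trained model would generate this text from
    the sensor and action data instead.
    """
    for frame in frames:
        yield frame.commentary

for line in commentary_stream(dataset):
    print(line)
```

The design point the sketch illustrates: because every action is tied to a sentence explaining it, a model trained on such pairs can learn to emit explanations alongside its driving decisions.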

That information can help Wayve improve the system and also help passengers feel more comfortable in its AVs. Instead of wondering — and worrying — about the car’s actions, a person could just ask for an explanation.

“This unique dialogue between passengers and autonomous vehicles could increase transparency, making it easier for people to understand and trust these systems,” writes Wayve. 

Looking ahead: While Cruise and Waymo are already carrying passengers in fully autonomous cars, Wayve is still testing its AVs with safety drivers behind the wheel in the UK. However, it’s hopeful that LINGO-1 will allow it to make up some ground on industry frontrunners — and earn the trust of future customers.

“Adding natural language as a modality will accelerate the development of this technology while building trust in AI decision-making, and this is vital for widespread adoption,” writes Wayve.
