Self-driving cars can now tell passengers what they’re thinking

The same type of AI behind ChatGPT is now in Wayve’s autonomous vehicles.

Microsoft-backed autonomous vehicle (AV) startup Wayve has given its cars the ability to explain their decisions in conversational language — a move that could accelerate their development and increase public trust in self-driving cars.

AI’s black box: Given enough training data, AIs can learn to create art, detect diseases, and even read our minds, but explaining how they do any of these things is often beyond their grasp. Sometimes, even the people who made the AIs cannot explain why they made a decision.

This is known as AI’s “black box problem,” and it can prevent developers from understanding why their AIs made mistakes, which makes it harder to correct them. Users may also be hesitant to trust an AI if they don’t understand how it works.

A lack of trust in AI is a particularly big problem for the AV industry — just 9% of respondents to a 2023 AAA survey said they trusted self-driving cars, while 68% said they feared them.

Because AVs remove human error from the equation, they have the potential to dramatically reduce the number of accidents on our roads, but if the AV industry can’t change the public’s perception of self-driving cars, they may not get a chance to make our roads safer.

Talking cars: In an attempt to get more people to feel comfortable in AVs and improve their performance, Wayve has launched LINGO-1, a self-driving AI that can explain its “thought process” in easy-to-understand language.

“LINGO-1 opens up many possibilities for self-driving, improving the intelligence of our end-to-end AI Driver as well as bridging the gap of public trust — and this is just the beginning of maximizing its potential,” said CEO Alex Kendall.

How it works: To train an AV, developers typically feed the systems tons of driving data, collected by cameras and sensors. The AIs learn the right actions to take based on what they see in the data.

They can’t easily explain why they make the decisions they do, though — so Wayve added another kind of data to its training: verbal commentary.

This commentary was provided by expert drivers as they navigated roads in the UK and consisted of them explaining why they were taking certain actions — a driver might say they were slowing down because a car was merging into their lane, for example.

The drivers were told to follow certain protocols while providing this commentary to make it as uniform and easy to aggregate as possible.
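The pairing described above — sensor data plus a driver's spoken explanation for each action — can be sketched as a simple data structure. This is a minimal, hypothetical illustration, not Wayve's actual training schema; all field and function names are assumptions.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical sketch: each training example pairs sensor frames
# with the action taken and the expert driver's commentary.
@dataclass
class AnnotatedClip:
    frame_ids: List[str]   # camera/sensor frames covering the clip
    action: str            # the driving action the expert took
    commentary: str        # the driver's explanation, in plain language

def make_sample(frames: List[str], action: str, commentary: str) -> AnnotatedClip:
    """Bundle frames, action, and commentary into one training example."""
    return AnnotatedClip(frame_ids=frames, action=action, commentary=commentary)

sample = make_sample(
    frames=["cam_0412.jpg", "cam_0413.jpg"],
    action="slow_down",
    commentary="Slowing down because a car is merging into our lane.",
)
print(sample.action)  # slow_down
```

The point of the uniform protocols mentioned above is visible here: if every driver phrases commentary the same way, the text can be aggregated and aligned with the sensor data at scale.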

Wayve then combined its self-driving software with a large language model (LLM) — a type of AI that can understand and respond to prompts in conversational language — to create LINGO-1, a self-driving AI that can explain itself the same way a human driver might.

“LINGO-1 can generate a continuous commentary that explains the reasoning behind driving actions,” writes Wayve. “This can help us understand in natural language what the model is paying attention to and what it is doing.”

That information can help Wayve improve the system and also help passengers feel more comfortable in its AVs. Instead of wondering — and worrying — about the car’s actions, a person could just ask for an explanation.
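A passenger-facing exchange like the one described above might look like the following toy sketch. The function is a stand-in for the real vision-language model's text interface — the logic, names, and state fields are all hypothetical, not Wayve's API.

```python
# Toy stand-in for a driving model's natural-language explanation head.
# Real systems would condition on camera input and a learned model;
# this sketch just maps a simple state dict to a canned explanation.
def explain(state: dict, question: str) -> str:
    """Return a plain-language answer to a passenger's question."""
    if "slow" in question.lower() and state.get("merging_vehicle"):
        return "I'm slowing down because a car is merging into our lane."
    return "I'm maintaining speed; the road ahead is clear."

state = {"merging_vehicle": True, "speed_mph": 28}
print(explain(state, "Why are we slowing down?"))
```

However it is implemented, the interaction pattern is the same: the passenger asks in ordinary language, and the system answers from whatever it is currently attending to.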

“This unique dialogue between passengers and autonomous vehicles could increase transparency, making it easier for people to understand and trust these systems,” writes Wayve. 

Looking ahead: While Cruise and Waymo are already carrying passengers in fully autonomous cars, Wayve is still testing its AVs with safety drivers behind the wheel in the UK. However, it’s hopeful that LINGO-1 will allow it to make up some ground on industry frontrunners — and earn the trust of future customers.

“Adding natural language as a modality will accelerate the development of this technology while building trust in AI decision-making, and this is vital for widespread adoption,” writes Wayve.
