Self-driving cars can now tell passengers what they’re thinking

The same type of AI behind ChatGPT is now in Wayve’s autonomous vehicles.

Microsoft-backed autonomous vehicle (AV) startup Wayve has given its cars the ability to explain their decisions in conversational language — a move that could accelerate their development and increase public trust in self-driving cars.

AI’s black box: Given enough training data, AIs can learn to create art, detect diseases, and even read our minds, but explaining how they do any of these things is often beyond their grasp. Sometimes, even the people who built an AI cannot explain why it made a particular decision.

This is known as AI’s “black box problem,” and it can prevent developers from understanding why their AIs made mistakes, which makes it harder to correct them. Users may also be hesitant to trust an AI if they don’t understand how it works.

A lack of trust in AI is a particularly big problem for the AV industry — just 9% of respondents to a 2023 AAA survey said they trusted self-driving cars, compared to the 68% who said they feared them.

Because AVs remove human error from the equation, they have the potential to dramatically reduce the number of accidents on our roads, but if the AV industry can’t change the public’s perception of self-driving cars, they may not get a chance to make our roads safer.

Talking cars: In an attempt to get more people to feel comfortable in AVs, and to improve the vehicles’ performance, Wayve has launched LINGO-1, a self-driving AI that can explain its “thought process” in easy-to-understand language.

“LINGO-1 opens up many possibilities for self-driving, improving the intelligence of our end-to-end AI Driver as well as bridging the gap of public trust — and this is just the beginning of maximizing its potential,” said CEO Alex Kendall.

How it works: To train an AV, developers typically feed the systems tons of driving data, collected by cameras and sensors. The AIs learn the right actions to take based on what they see in the data.

They can’t easily explain why they make the decisions they do, though — so Wayve added another kind of data to its training: verbal commentary.

This commentary was provided by expert drivers as they navigated roads in the UK and consisted of them explaining why they were taking certain actions — a driver might say they were slowing down because a car was merging into their lane, for example.

The drivers were told to follow certain protocols while providing this commentary to make it as uniform and easy to aggregate as possible.
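Wayve hasn’t published its data format, but the basic idea of pairing each moment of driving with a verbal explanation can be sketched roughly. All of the names below are hypothetical illustrations, not Wayve’s actual code:

```python
# Hypothetical sketch: bundling one moment of driving data with the
# expert driver's spoken commentary, so a model can learn to connect
# actions with explanations. None of these names come from Wayve.
from dataclasses import dataclass


@dataclass
class TrainingExample:
    camera_frame: bytes   # raw image data from the car's camera
    speed_mps: float      # vehicle speed at that moment, in m/s
    action: str           # what the expert driver did
    commentary: str       # the driver's spoken explanation


def make_example(frame: bytes, speed: float,
                 action: str, commentary: str) -> TrainingExample:
    """Pair one moment of sensor data and driving action with its commentary."""
    return TrainingExample(frame, speed, action, commentary)


example = make_example(
    frame=b"<jpeg bytes>",
    speed=12.5,
    action="slow_down",
    commentary="Slowing down because a car is merging into my lane.",
)
print(example.action, "->", example.commentary)
```

The point of the sketch is simply that the commentary becomes one more field in each training example, alongside the usual sensor and action data, which is what lets the system learn to talk about what it is doing.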

Wayve then combined its self-driving software with a large language model (LLM) — a type of AI that can understand and respond to prompts in conversational language — to create LINGO-1, a self-driving AI that can explain itself the same way a human driver might.

“LINGO-1 can generate a continuous commentary that explains the reasoning behind driving actions,” writes Wayve. “This can help us understand in natural language what the model is paying attention to and what it is doing.”

That information can help Wayve improve the system and also help passengers feel more comfortable in its AVs. Instead of wondering — and worrying — about the car’s actions, a person could just ask for an explanation.

“This unique dialogue between passengers and autonomous vehicles could increase transparency, making it easier for people to understand and trust these systems,” writes Wayve. 

Looking ahead: While Cruise and Waymo are already carrying passengers in fully autonomous cars, Wayve is still testing its AVs with safety drivers behind the wheel in the UK. However, it’s hopeful that LINGO-1 will allow it to make up some ground on industry frontrunners — and earn the trust of future customers.

“Adding natural language as a modality will accelerate the development of this technology while building trust in AI decision-making, and this is vital for widespread adoption,” writes Wayve.
