We can now hear an AI robot’s thought process 

This Pepper bot “thinks out loud” to help people understand its actions.

Italian researchers have given one of SoftBank’s robots, called Pepper, the ability to share its internal monologue with humans while completing tasks.

Trusting the bots: AI systems can do a lot — drive cars, cool cities, and even save lives — but the systems usually can’t explain exactly how they do those things. Sometimes, their developers don’t even know the answer.

This is known as AI’s black box problem, and it’s holding back the mainstream adoption of AI.

“Trust is a major issue with artificial intelligence because people are the end-users, and they can never have full trust in it if they do not know how AI processes information,” Sambit Bhattacharya, a professor of computer science at Fayetteville State University, told the Economic Times in 2019.

The idea: University of Palermo researchers thought it might be easier for people to trust an AI bot if they knew the robot’s thought process during interactions.

“The robots will be easier to understand for laypeople, and you don’t need to be a technician or engineer,” study co-author Antonio Chella said in a press release. “In a sense, we can communicate and collaborate with the robot better.”

For a study published in iScience, Chella and first author Arianna Pipitone trained a Pepper robot in table-setting etiquette. They then gave it the ability to say, in plain English, what it was “thinking” while executing a task.

Inside a robot’s head: In a recorded demo, when asked to hand over a napkin, the robot starts talking in what sounds like a stream of consciousness: “Where is the napkin? The object is in the box. … I am using my right arm to get the object.”

This gave the person working with Pepper an understanding of the robot’s thought process.
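In software terms, this kind of “inner speech” amounts to verbalizing each decision step before acting on it. The sketch below is a minimal, hypothetical illustration of that idea — the class and method names are invented for this example and are not the study’s actual implementation:

```python
# Hypothetical sketch of "inner speech": the robot narrates each
# decision step before acting, so a human partner can follow along.
# NarratingRobot and fetch() are illustrative names, not the study's API.

class NarratingRobot:
    def __init__(self):
        self.transcript = []  # record of everything the robot "says"

    def say(self, thought):
        # Speak the thought aloud and keep it in the transcript.
        self.transcript.append(thought)
        print(thought)

    def fetch(self, obj, location, arm="right"):
        # Verbalize the search, the finding, and the chosen action,
        # mirroring the demo's stream-of-consciousness narration.
        self.say(f"Where is the {obj}?")
        self.say(f"The object is in the {location}.")
        self.say(f"I am using my {arm} arm to get the object.")
        return f"{obj} delivered"


robot = NarratingRobot()
robot.fetch("napkin", "box")
```

The design point is simply that every internal decision produces a human-readable utterance before the action is taken, which is what lets an observer follow the robot’s reasoning in real time.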

“The robot is no longer a black box, but it is possible to look at what happens inside it and why some decisions are (made),” the researchers wrote in their study.

The cold water: Equipping collaborative AI robots like Pepper with the ability to “think out loud” could make them more trustworthy — but it’s still only possible if the bot’s designer knows what’s going on inside the AI’s brain, or can program the system to figure it out.

Every explanation Pepper gave during the demo was something the researchers had trained it to say — they didn’t learn anything new about the robot’s thought process, though the rest of us would.


Bonus round: The experiment did yield one surprise, though: Pepper was better at solving problems — such as what to do when a command goes against the established rules — when it could think through them out loud with a human partner, who could then provide feedback.

When faced with 30 of these dilemmas, the Pepper that could share its thought process completed 26, while the one that couldn’t talk it out finished only 18 (a task was considered incomplete when the robot ran out of possible actions).

“Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations,” Pipitone said in the press release.

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at tips@freethink.com.
