We can now hear an AI robot’s thought process 

This Pepper bot “thinks out loud” to help people understand its actions.

Italian researchers have given one of SoftBank’s robots, called Pepper, the ability to share its internal monologue with humans while completing tasks.

Trusting the bots: AI systems can do a lot — drive cars, cool cities, and even save lives — but the systems usually can’t explain exactly how they do those things. Sometimes, their developers don’t even know the answer.

This is known as AI’s black box problem, and it’s holding back the mainstream adoption of AI.

“Trust is a major issue with artificial intelligence because people are the end-users, and they can never have full trust in it if they do not know how AI processes information,” Sambit Bhattacharya, a professor of computer science at Fayetteville State University, told the Economic Times in 2019.

The idea: University of Palermo researchers thought it might be easier for people to trust an AI bot if they knew the robot’s thought process during interactions.

“The robots will be easier to understand for laypeople, and you don’t need to be a technician or engineer,” study co-author Antonio Chella said in a press release. “In a sense, we can communicate and collaborate with the robot better.”

For a study published in iScience, Chella and first author Arianna Pipitone trained a Pepper robot in table-setting etiquette. They then gave it the ability to say, in plain English, what it was "thinking" while executing a task.

Inside a robot’s head: In a recorded demo, when asked to hand over a napkin, the robot starts talking in what sounds like a stream of consciousness: "Where is the napkin? The object is in the box… I am using my right arm to get the object."

This gave the person working with Pepper an understanding of the robot’s thought process.

“The robot is no longer a black box, but it is possible to look at what happens inside it and why some decisions are (made),” the researchers wrote in their study.
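The basic pattern is easy to picture. Here is a minimal sketch in Python, with made-up class and function names rather than the researchers' actual code, of a "think out loud" wrapper that announces each step of a plan before carrying it out, the way Pepper narrates fetching the napkin.

```python
# Minimal "inner speech" sketch (hypothetical names; not the study's actual code).
# The idea: before executing each step of a plan, the robot verbalizes it,
# so a human partner can follow the decision process in real time.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Step:
    thought: str                 # what the robot says out loud
    action: Callable[[], None]   # what the robot actually does


def say(text: str) -> None:
    """Stand-in for the robot's text-to-speech output."""
    print(f"[Pepper] {text}")


def run_with_inner_speech(plan: List[Step]) -> None:
    """Execute a plan, narrating each step before performing it."""
    for step in plan:
        say(step.thought)   # inner speech made audible
        step.action()       # the corresponding physical action


# Toy plan for "hand me the napkin", mirroring the recorded demo.
plan = [
    Step("Where is the napkin? The object is in the box.", lambda: None),
    Step("I am using my right arm to get the object.", lambda: None),
    Step("Here is the napkin.", lambda: None),
]

run_with_inner_speech(plan)
```

In a sketch like this, the narration is just a mirror of whatever the planner already decided, which is exactly the caveat the researchers raise next.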

The cold water: Equipping collaborative AI robots like Pepper with the ability to “think out loud” could make them more trustworthy — but it’s still only possible if the bot’s designer knows what’s going on inside the AI’s brain, or can program the system to figure it out.

Every explanation Pepper gave during the demo was something the researchers trained it to be able to say — they didn’t learn anything new about the robot’s thought process, though the rest of us would.

Bonus round: The experiment did yield one surprise, though: Pepper was better at solving problems — such as what to do when a command goes against the established rules — when it could think through them out loud with a human partner, who could then provide feedback.

When faced with 30 of these dilemmas, the Pepper that could share its thought process completed 26, while the one that couldn’t talk it out finished only 18 (a task was considered incomplete when the robot ran out of possible actions).

“Inner speech enables alternative solutions for the robots and humans to collaborate and get out of stalemate situations,” Pipitone said in the press release.
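To see how that might play out, here is a rough sketch, again with invented names and simplified logic rather than the study's implementation, in which the robot voices a rule conflict and asks its human partner for feedback instead of running out of options and stopping.

```python
# Rough sketch of the "stalemate" idea (invented names; not the study's code):
# when a request conflicts with a learned etiquette rule, the robot says so
# out loud and asks the human whether to override the rule.

ETIQUETTE_RULES = {"napkin": "to the left of the plate"}


def say(text: str) -> None:
    print(f"[Pepper] {text}")


def ask_human(question: str) -> bool:
    """Stand-in for spoken dialogue; here we just read yes/no from the console."""
    answer = input(f"[Pepper] {question} (yes/no) ")
    return answer.strip().lower().startswith("y")


def handle_request(item: str, requested_placement: str) -> str:
    rule_placement = ETIQUETTE_RULES.get(item)
    if rule_placement is None or requested_placement == rule_placement:
        say(f"Placing the {item} {requested_placement}.")
        return "completed"

    # The request contradicts a learned rule: think out loud and ask for
    # feedback instead of running out of possible actions.
    say(f"This is confusing. The rule says the {item} goes {rule_placement}, "
        f"but you asked for it {requested_placement}.")
    if ask_human("Should I break the rule this time?"):
        say(f"Okay, placing the {item} {requested_placement}.")
        return "completed with override"
    say(f"Keeping the rule and placing the {item} {rule_placement}.")
    return "completed by rule"


if __name__ == "__main__":
    print(handle_request("napkin", "to the right of the plate"))
```

Either answer from the human lets the task finish; the silent robot in the comparison condition had no such escape hatch, which is one plausible reading of why it completed fewer of the dilemmas.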

We’d love to hear from you! If you have a comment about this article or if you have a tip for a future Freethink story, please email us at [email protected].
