One robot was able to watch another bot and predict its actions

This could be the first demonstration of a robot showing basic empathy.

We humans start out as self-centered little things. It’s only around the age of three that most of us realize that other people have feelings, wants, and needs different from our own.

Not long after developing that skill — called “theory of mind” — we learn another: empathy. That’s the ability to put ourselves in another’s shoes, to understand their perspective even when it differs from our own.

It turns out, robots may be capable of displaying a kind of empathy, too — a discovery that could help teams of bots better serve us in the future.

Empathetic Robots

Many experts predict that we’re headed toward a future in which scores of AI robots live among us — they’ll be our colleagues, our cooks, our maids, and even our drivers.

If those robots know to some degree what the bots around them are going to do, it will help them work together and also stay out of one another’s way.

“Self-driving cars, for example, can better plan ahead if they can understand what other autonomous vehicles will do next,” Columbia University engineer Boyuan Chen told Freethink. “When two robots are tasked to assemble a table, if one anticipates that the other is going to put on the leg, it can help by picking up the table leg outside the reachable space of that robot.”

But training every robot to anticipate every other robot's behavior in every possible situation wouldn't be feasible, and equipping every bot with the systems needed for real-time communication with all the others would be expensive.

If a robot were able to demonstrate theory of mind and empathize with other robots — naturally putting itself in their shoes and predicting their actions — it could learn how to work within the larger network just by observing.

Now, a new study by Chen and his colleagues suggests that empathy between robots may be possible.

Predicting the Future

For their study, the Columbia researchers started by building a six-square-foot “playpen” for the robots.

One of the bots could roll around on its wheels inside the playpen and was trained to move toward any green circle it saw on the playpen's floor.

However, a red cube in the playpen would sometimes block the robot’s view of a green circle. In those instances, the bot either wouldn’t move, or it would move toward a different circle that it could see.

The other robot in the experiment was positioned above the center of the playpen. It couldn’t move, but it could see everything happening down below: the other robot, the cube, and every green circle.

For two hours, the “observer” robot watched the bot below as it rolled toward one green circle after another or stood still.

After that, it was able to predict the path of its partner robot 98 out of 100 times — even though it had never been told that the robot was programmed to move toward green circles or that it couldn’t see past the red cube.
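The observer's task can be framed as supervised learning from demonstrations: watch many episodes pairing what the actor's scene looks like with what the actor then did, and predict by recalling what usually followed. The researchers used a neural network operating on raw video; the sketch below is only a toy frequency-table stand-in for that idea, with all class and method names invented for illustration.

```python
# Toy stand-in for the observer robot: learn outcome frequencies per
# observed scene, then predict the most common outcome for a new scene.
# (The actual study used a deep network predicting video frames.)
from collections import Counter, defaultdict

class Observer:
    def __init__(self):
        # Maps each observed scene description to a tally of outcomes.
        self.history = defaultdict(Counter)

    def watch(self, scene: tuple, outcome: str) -> None:
        """Record one episode: the scene and what the actor did next."""
        self.history[scene][outcome] += 1

    def predict(self, scene: tuple) -> str:
        """Predict the most frequent outcome previously seen for this scene."""
        if scene not in self.history:
            return "unknown"
        return self.history[scene].most_common(1)[0][0]
```

Crucially, such an observer is never told the actor's rule ("go to green circles") or its limitation ("can't see past the cube") — it infers the pattern purely from correlations between scenes and actions, which is what made the 98%-accurate predictions notable.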

“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a press release.

“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” he continued.
