One robot was able to watch another bot and predict its actions

This could be the first demonstration of a robot showing basic empathy.

We humans start out as self-centered little things. It’s only around the age of three that most of us realize that other people have feelings, wants, and needs different from our own.

Not long after developing that skill — called “theory of mind” — we learn another: empathy. That’s the ability to put ourselves in another’s shoes, to understand their perspective even when it differs from our own.

It turns out, robots may be capable of displaying a kind of empathy, too — a discovery that could help teams of bots better serve us in the future.

Empathetic Robots

Many experts predict that we’re headed toward a future in which scores of AI robots live among us — they’ll be our colleagues, our cooks, our maids, and even our drivers.

If those robots know to some degree what the bots around them are going to do, it will help them work together and also stay out of one another’s way.

“Self-driving cars, for example, can better plan ahead if they can understand what other autonomous vehicles will do next,” Columbia University engineer Boyuan Chen told Freethink. “When two robots are tasked to assemble a table, if one anticipates that the other is going to put on the leg, it can help by picking up the table leg outside the reachable space of that robot.”

But training every robot to anticipate every other robot in every possible situation wouldn't be feasible, and equipping every bot with the systems needed for real-time communication with all the others would be expensive.

If a robot were able to demonstrate theory of mind and empathize with other robots — naturally putting itself in their shoes and predicting their actions — it could learn how to work within the larger network just by observing.

Now, a new study by Chen and his colleagues suggests that empathy between robots may be possible.

Predicting the Future

For their study, the Columbia researchers started by building a six-square-foot “playpen” for the robots.

One of the bots could roll around the playpen on its wheels and was trained to move toward any green circle it saw on the floor.

However, a red cube in the playpen would sometimes block the robot’s view of a green circle. In those instances, the bot either wouldn’t move, or it would move toward a different circle that it could see.

The other robot in the experiment was positioned above the center of the playpen. It couldn’t move, but it could see everything happening down below: the other robot, the cube, and every green circle.

For two hours, the “observer” robot watched the bot below as it rolled toward one green circle after another or stood still.

After that, it was able to predict the path of its partner robot 98 out of 100 times — even though it had never been told that the robot was programmed to move toward green circles or that it couldn’t see past the red cube.

“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a press release.

“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” he continued.

