One robot was able to watch another bot and predict its actions

This could be the first demonstration of a robot showing basic empathy.


We humans start out as self-centered little things. It’s only around the age of three that most of us realize that other people have feelings, wants, and needs different from our own.

Not long after developing that skill — called “theory of mind” — we learn another: empathy. That’s the ability to put ourselves in another’s shoes, to understand their perspective even when it differs from our own.

It turns out, robots may be capable of displaying a kind of empathy, too — a discovery that could help teams of bots better serve us in the future.

Empathetic Robots

Many experts predict that we’re headed toward a future in which scores of AI robots live among us — they’ll be our colleagues, our cooks, our maids, and even our drivers.

If those robots know to some degree what the bots around them are going to do, it will help them work together and also stay out of one another’s way.

“Self-driving cars, for example, can better plan ahead if they can understand what other autonomous vehicles will do next,” Columbia University engineer Boyuan Chen told Freethink. “When two robots are tasked to assemble a table, if one anticipates that the other is going to put on the leg, it can help by picking up the table leg outside the reachable space of that robot.”

But training every robot to anticipate what every other robot will do in every situation wouldn’t be feasible, and equipping every bot with the hardware needed for real-time communication with all the others would be expensive.

If a robot were able to demonstrate theory of mind and empathize with other robots — naturally putting itself in their shoes and predicting their actions — it could learn how to work within the larger network just by observing.

Now, a new study by Chen and his colleagues suggests that empathy between robots may be possible.

Predicting the Future

For their study, the Columbia researchers started by building a six-square-foot “playpen” for the robots.

One of the bots could roll around the playpen on its wheels and was trained to move toward any green circle it saw on the playpen floor.

However, a red cube in the playpen would sometimes block the robot’s view of a green circle. In those instances, the bot either wouldn’t move, or it would move toward a different circle that it could see.

The other robot in the experiment was positioned above the center of the playpen. It couldn’t move, but it could see everything happening down below: the other robot, the cube, and every green circle.

For two hours, the “observer” robot watched the bot below as it rolled toward one green circle after another or stood still.

After that, it was able to predict the path of its partner robot 98 out of 100 times — even though it had never been told that the robot was programmed to move toward green circles or that it couldn’t see past the red cube.
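The setup lends itself to a toy illustration. The sketch below is a hypothetical simplification, not the Columbia team’s system (which reportedly learned from raw overhead video): a simulated “observer” records many scenes together with what the actor did, then predicts the actor’s choice in scenes it has never watched, without ever being told the actor’s rules. The arena dimensions, the nearest-neighbor learner, and every function name here are assumptions for illustration only, and the toy’s accuracy won’t match the study’s reported figure.

```python
# Hypothetical sketch of "behavior prediction by observation" -- not the
# researchers' actual model. The observer never sees the actor's rules; it
# only watches scenes and outcomes, then predicts new outcomes.

import numpy as np

rng = np.random.default_rng(0)

def blocked(actor, circle, cube, radius=0.35):
    """True if the cube sits close enough to the actor->circle sight line to hide the circle."""
    d = circle - actor
    t = np.clip(np.dot(cube - actor, d) / np.dot(d, d), 0.0, 1.0)
    return np.linalg.norm(actor + t * d - cube) < radius

def actor_choice(actor, circles, cube):
    """The actor's hidden rule: go to the nearest *visible* green circle, else stay put (-1)."""
    visible = [i for i, c in enumerate(circles) if not blocked(actor, c, cube)]
    if not visible:
        return -1
    return min(visible, key=lambda i: np.linalg.norm(circles[i] - actor))

def random_scene():
    actor = rng.uniform(0, 3, 2)          # actor position in a small arena (assumed size)
    circles = rng.uniform(0, 3, (2, 2))   # two candidate green circles
    cube = rng.uniform(0, 3, 2)           # the red occluder
    return actor, circles, cube

def features(actor, circles, cube):
    # The observer's "view from above": raw positions only, no rules.
    return np.concatenate([actor, circles.ravel(), cube])

# 1. The observer watches many episodes and records (scene, what the actor did).
X, y = [], []
for _ in range(5000):
    actor, circles, cube = random_scene()
    X.append(features(actor, circles, cube))
    y.append(actor_choice(actor, circles, cube))
X, y = np.array(X), np.array(y)

# 2. A deliberately simple learner: copy the actor's choice from the most similar past scene.
def predict(x):
    return y[np.argmin(np.linalg.norm(X - x, axis=1))]

# 3. Test on fresh scenes the observer has never watched.
trials, correct = 1000, 0
for _ in range(trials):
    actor, circles, cube = random_scene()
    if predict(features(actor, circles, cube)) == actor_choice(actor, circles, cube):
        correct += 1
print(f"predicted the actor's choice in {correct}/{trials} unseen scenes")
```

The point of the sketch is the same as the experiment’s: the prediction rule is never handed to the observer; it is recovered entirely from watching behavior.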

“Our findings begin to demonstrate how robots can see the world from another robot’s perspective,” Chen said in a press release.

“The ability of the observer to put itself in its partner’s shoes, so to speak, and understand, without being guided, whether its partner could or could not see the green circle from its vantage point, is perhaps a primitive form of empathy,” he continued.
