Meet MIT’s Kate Darling: Why we should rethink our relationship with robots

Robots may be more like animals than humans.

We’re nearly a quarter of the way into the 21st century, and by now the Terminator-style portrayal of robots taking over the world has become a tired cliché. Seductive as it is, most of us are aware that this likely isn’t the future of intelligent life. But what will that future look like?

According to Kate Darling, a robot ethicist at MIT and author of “The New Breed: What Our History With Animals Reveals About Our Future with Robots,” the answer is right in front of us: animals.

While we have traditionally viewed robots as human-like, Darling believes the more apt comparison is to a different kind of “animal.”


Robots will increasingly occupy shared spaces with humans, social robots will take off, and the questions around how humans should treat and interact with robots have never been more critical, Darling argues.

Her point isn’t that robots and animals are the same or that they should be used exactly the same way, but that we should be open to the different ways we can collaborate with robots, harnessing their diverse range of skills and abilities — as we do with animals.

I spoke to Darling about how a robot’s design affects our interaction with it, why we should stop worrying about robots replacing humans, and more. Here is our conversation, edited and condensed for clarity.

Why have robots traditionally been designed to look like humans? What is the thinking behind that?

We’ve always been fascinated with recreating ourselves. We had automata back in ancient times that were recreations of human bodies that could move around. Even the earliest artificial intelligence researchers started out with a goal of recreating human intelligence.

Robots and AI, in particular, are machines that can sense, think, make autonomous decisions, and learn. So we automatically compare them to ourselves, because of our inherent tendency to compare everything to ourselves. And traditionally, a lot of robots have been human-shaped — even though that’s not necessarily the most practical form.

What are the problems with this human-like design?

So there’s this subconscious comparison of robots to humans that has been enhanced by the design. I think it doesn’t make sense. First of all, AI is not like human intelligence — robots don’t have the same skills as people. So oftentimes when we expect a robot to behave like a human, it’s a very disappointing experience. That’s not to say that robots and AI aren’t smart, just that they have a very different type of intelligence and skill than people do.


Also, this comparison really limits us. The early AI researchers were trying to recreate a human brain and human intelligence, but that’s not where we’ve ended up. And so our question shouldn’t be, “at what point can we recreate human ability and human skill in a robot?” The question is, “why would we want to do that in the first place when we can create something different?” Robots and AI don’t think or behave like us, but they are very useful and very smart.

Instead, you suggest using animals as a way to think about robots. What are the parallels here?

There are so many fun parallels. For thousands of years, we’ve used animals as a supplement to human ability. Not because they do what we do, but because their skill sets are so different from ours.

We used oxen to plow our fields, and we’ve used horses to let us travel around in new ways. In some ways, a horse-drawn carriage is the original semi-autonomous vehicle. We’ve used pigeons to carry mail or deliver medicine in ways similar to how we’re using drones today. We used them to take aerial photographs, so they were the original hobby photography drone. We’ve used dolphins in the Navy to detect mines underwater or locate lost underwater equipment, which is a similar function to how we’re starting to use underwater robots today.

But animals have feelings, and robots don’t. How does this affect the way we do, or should, treat robots?

Right. So this is something that has always really fascinated me about human-robot interaction: what it actually says about how we treat other entities. Because in many cases, we have not treated animals very well in partnering with them. And in fact, in Western society, we’re often quite hypocritical about how we think we want to treat other beings versus how we actually treat them.

So a lot of us think that we care about whether other beings feel or whether they have intelligence or whether they can suffer, but if you look at the history of animal rights in Western society, it quickly becomes apparent that we have only protected the animals that are cute or that we care about culturally, or that we have some emotional relationship to.

What’s so interesting about human-robot interaction research is that it’s showing we treat robots in very similar ways: the ones we have no emotional connection to, we treat as tools and products, while others we treat as companions or develop emotional attachments to.

So it’s entirely possible that if we don’t stop and think about this, we may default to caring more about a robot that feels nothing than about a slimy slug in our backyard. It’s actually a unique moment in time where we could stop and think, and maybe nudge our behavior in a way that’s more consistent with what we feel our values are.

It’s interesting you say that because I was thinking the opposite — that we might treat animals kindly, but we sometimes treat robots (especially social ones) with detachment. And there can be harmful effects of this. For instance, if we talk “down” to an Amazon Alexa, it has implications for the way we might treat women in our lives.

So I do think there actually is possibly an argument for treating technology with some kindness, as ridiculous as that sounds, even though the technology can’t feel and we’re not anywhere close to having sentient robots or robot consciousness.


But there are questions around our own behavior. If you get used to barking commands at Alexa, or your kid gets used to barking commands at Alexa, you could get used to barking commands at women, or women named Alexa, or other people. Parents have raised enough concern about this that a lot of these home voice assistant companies have released a “magic word” feature that makes Alexa respond only if you say “please” and “thank you,” for example.

But then you get into all sorts of questions with different designs of robots. Increasingly, we’re seeing certain robots designed in a very lifelike way, robots that can respond to being kicked, for example, with a simulation of pain. And one question is: even though the robot can’t feel, should we let people kick them?

And what if we had a real-life Westworld theme park, where people could go and do anything they want to lifelike robots? Is that a healthy outlet for violent behavior, or does it train people’s cruelty muscles? I don’t have an answer to the question, but it is a question that is going to be raised very soon.

Kate Darling, author and robot ethicist at MIT. Credit: Kate Darling

Right. So on the flip side, maybe we could go in the other direction and make robots seem less lifelike, more like a neutral object that we don’t associate with any kind of life?

We can try. What we’re also seeing in the research is that it’s really hard to turn off this tendency that we have to treat robots like living things. Even something as simple as the Roomba vacuum cleaner — just because it’s moving around on its own, people will name the Roomba. People will feel bad for the Roomba when it gets stuck. So it’s a very difficult human tendency to counteract.

And in fact, a lot of animal researchers and nature researchers have moved away from the idea that we have to get rid of how we project ourselves onto animals and have said, “Okay, this is something that is there, we just need to be very aware of it, and we can nudge our behavior in certain directions, but we’re not going to get rid of the tendency entirely.”

And maybe that’s a good thing because it means that we can relate to animals in certain ways that might be actually beneficial for humans. So having therapy dogs or having pets as companionship can actually be a very positive thing for people.

Or military robots, where soldiers are becoming emotionally attached to the bomb disposal units that they work with. Which, at first blush, you would say, “Okay, that’s terrible. We don’t want soldiers to be risking their lives or behaving in an inefficient way on a battlefield because they’ve developed an emotional connection to a robot.”

But at the same time, if you look at the history of the role that animals have played in war, yes, soldiers sometimes made bad decisions based on wanting to save their dog or their horse on the battlefield. But the animals brought so much emotional comfort to soldiers in very stressful situations that it’s not clear to me that it’s necessarily a bad thing, even if we could prevent it.

Many people fear that robots are going to threaten us in some way, or replace us. How does shifting to the view of robots as animals change the way we look at that issue?

Particularly in Western society, we have this idea that there’s a constant threat of robots rising up against us or coming to replace us. In part, that comes from this comparison of robots to humans — and it’s very limiting. It influences a lot of our conversations, from what intelligence is to whether robots will take jobs and replace people one-to-one.

Using the animal analogy helps us step away from this fear of being replaced. Animals obviously have not replaced us. They have disrupted society. They have created completely different workplaces for people. They have revolutionized farming and transportation and all sorts of things that technology is also going to disrupt. But we’ve never had the same type of fear about animals rising up against us.

That fear of robots is also quite misplaced, given that we’re not anywhere close to having artificial superintelligence or any of the science-fiction scenarios that get so much attention in the press. It’s actually the wrong question to be worried about.

What’s been the driving force behind your research? The question you are most interested in?

The thing that blows my mind is our tendency to treat robots like they’re alive, even when we know perfectly well that they’re just machines. Just a few weeks ago, I got this baby harp seal robot called PARO. It’s a medical device used with dementia patients in nursing homes, and it looks like a very cute baby seal. It doesn’t do very much; it just responds to touch and makes these little movements and sounds. I was showing it off to the group of roboticists that I work with. They create social robots — they specifically design robots that give off cues like this.

They were all like, “Oh, it’s so cute. Oh, look, it’s doing XYZ!” So even the people who build these programs are not immune. In fact, they’re still very susceptible to being swayed by the artificial cues that we’ve programmed into these machines. It seems to be such a deep biological tendency that we have. It always surprises me, even though I’ve seen it happen and there’s so much research on it.

I think we’re not talking about this enough, and not acknowledging how much this incredible social tendency is going to impact how we integrate these machines, because we treat them so differently than other devices.
