Kate Darling: Social Robotics #98

Transcript

00:00:00 The following is a conversation with Kate Darling, a researcher at MIT,

00:00:04 interested in social robotics, robot ethics, and generally how technology intersects with society.

00:00:11 She explores the emotional connection between human beings and lifelike machines,

00:00:15 which for me is one of the most exciting topics in all of artificial intelligence.

00:00:21 As she writes in her bio, she is a caretaker of several domestic robots,

00:00:26 including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti.

00:00:33 She is one of the funniest and brightest minds I’ve ever had the fortune to talk to.

00:00:37 This conversation was recorded recently, but before the outbreak of the pandemic.

00:00:42 For everyone feeling the burden of this crisis, I’m sending love your way.

00:00:46 This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,

00:00:51 review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter

00:00:56 at Lex Fridman, spelled F R I D M A N. As usual, I’ll do a few minutes of ads now and never any

00:01:03 ads in the middle that can break the flow of the conversation. I hope that works for you and

00:01:08 doesn’t hurt the listening experience. Quick summary of the ads. Two sponsors,

00:01:13 Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to

00:01:19 Masterclass at masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash Lex

00:01:27 Pod. This show is sponsored by Masterclass. Sign up at masterclass.com slash Lex to get a discount

00:01:35 and to support this podcast. When I first heard about Masterclass, I thought it was too good to

00:01:40 be true. For $180 a year, you get an all access pass to watch courses from, to list some of my

00:01:47 favorites. Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and

00:01:53 communication, Will Wright, creator of SimCity and Sims, love those games, on game design,

00:02:00 Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more.

00:02:07 Chris Hadfield explaining how rockets work and the experience of being launched into space alone

00:02:12 is worth the money. By the way, you can watch it on basically any device. Once again,

00:02:18 sign up on masterclass.com slash Lex to get a discount and to support this podcast.

00:02:25 This show is sponsored by ExpressVPN. Get it at expressvpn.com slash Lex Pod to get a discount

00:02:33 and to support this podcast. I’ve been using ExpressVPN for many years. I love it. It’s easy

00:02:39 to use, press the big power on button, and your privacy is protected. And, if you like, you can

00:02:45 make it look like your location is anywhere else in the world. I might be in Boston now, but I can

00:02:50 make it look like I’m in New York, London, Paris, or anywhere else. This has a large number of

00:02:56 obvious benefits. Certainly, it allows you to access international versions of streaming websites

00:03:01 like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I

00:03:08 use it on Linux. Shout out to Ubuntu 20.04, Windows, Android, but it’s available everywhere else too.

00:03:17 Once again, get it at expressvpn.com slash Lex Pod to get a discount and to support this podcast.

00:03:26 And now, here’s my conversation with Kate Darling.

00:03:31 You co-taught robot ethics at Harvard. What are some ethical issues that arise

00:03:35 in the world with robots?

00:03:39 Yeah, that was a reading group that I did when I, like, at the very beginning,

00:03:44 first became interested in this topic. So, I think if I taught that class today,

00:03:48 it would look very, very different. Robot ethics, it sounds very science fictiony,

00:03:54 especially did back then, but I think that some of the issues that people in robot ethics are

00:04:01 concerned with are just around the ethical use of robotic technology in general. So, for example,

00:04:06 responsibility for harm, automated weapon systems, things like privacy and data security,

00:04:11 things like, you know, automation and labor markets. And then personally, I’m really

00:04:19 interested in some of the social issues that come out of our social relationships with robots.

00:04:23 One-on-one relationships with robots.

00:04:25 Yeah.

00:04:26 I think most of the stuff we have to talk about is like one-on-one social stuff. That’s what I

00:04:30 love. I think that’s what you love as well and are an expert in. But at a societal level,

00:04:35 there’s like, there’s a presidential candidate now, Andrew Yang running,

00:04:41 concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI,

00:04:48 universal basic income, where everybody gets 1,000 bucks, as a way to sort of save you if you lose

00:04:55 your job from automation to allow you time to discover what it is that you would like to or

00:05:02 even love to do.

00:05:04 Yes. So I lived in Switzerland for 20 years and universal basic income has been more of a topic

00:05:12 there separate from the whole robots and jobs issue. So it’s so interesting to me to see kind

00:05:19 of these Silicon Valley people latch onto this concept that came from a very kind of

00:05:26 left wing socialist, kind of a different place in Europe. But on the automation labor markets

00:05:37 topic, I think that it’s very, so sometimes in those conversations, I think people overestimate

00:05:44 where robotic technology is right now. And we also have this fallacy of constantly comparing robots

00:05:51 to humans and thinking of this as a one to one replacement of jobs. So even like Bill Gates a few

00:05:57 years ago said something about, maybe we should have a system that taxes robots for taking people’s

00:06:03 jobs. And it just, I mean, I’m sure that was taken out of context, he’s a really smart guy,

00:06:10 but that sounds to me like kind of viewing it as a one to one replacement versus viewing this

00:06:15 technology as kind of a supplemental tool that of course is going to shake up a lot of stuff.

00:06:21 It’s going to change the job landscape, but I don’t see, you know, robots taking all the

00:06:27 jobs in the next 20 years. That’s just not how it’s going to work.

00:06:30 Right. So maybe drifting into the land of more personal relationships with robots and

00:06:36 interaction and so on. I’ve got to warn you, I may ask some silly philosophical questions.

00:06:43 I apologize.

00:06:43 Oh, please do.

00:06:45 Okay. Do you think humans will abuse robots in their interactions? So you’ve had a lot of,

00:06:52 and we’ll talk about it sort of anthropomorphization and this intricate dance,

00:07:00 emotional dance between human and robot, but there seems to be also a darker side where people, when

00:07:06 they treat the other as servants, especially, they can be a little bit abusive or a lot abusive.

00:07:13 Do you think about that? Do you worry about that?

00:07:16 Yeah, I do think about that. So, I mean, one of my main interests is the fact that people

00:07:22 subconsciously treat robots like living things. And even though they know that they’re interacting

00:07:28 with a machine and what it means in that context to behave violently. I don’t know if you could say

00:07:35 abuse because you’re not actually abusing the inner mind of the robot. The robot doesn’t have

00:07:42 any feelings.

00:07:42 As far as you know.

00:07:44 Well, yeah. It also depends on how we define feelings and consciousness. But I think that’s

00:07:50 another area where people kind of overestimate where we currently are with the technology.

00:07:54 Right.

00:07:54 The robots are not even as smart as insects right now. And so I’m not worried about abuse

00:08:00 in that sense. But it is interesting to think about what does people’s behavior towards these

00:08:05 things mean for our own behavior? Is it desensitizing the people to be verbally abusive

00:08:13 to a robot or even physically abusive? And we don’t know.

00:08:17 Right. It’s a similar connection from like if you play violent video games, what connection does

00:08:22 that have to desensitization to violence? I haven’t read literature on that. I wonder about that.

00:08:32 Because everything I’ve heard, people don’t seem to any longer be so worried about violent video

00:08:37 games.

00:08:38 Correct. The research on it is, it’s a difficult thing to research. So it’s sort of inconclusive,

00:08:46 but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When

00:08:53 it’s something on a screen and you’re shooting a bunch of characters or running over people with

00:08:58 your car, that doesn’t necessarily translate to you doing that in real life. We do, however,

00:09:04 have some concerns about children playing violent video games. And so we do restrict it there.

00:09:09 I’m not sure that’s based on any real evidence either, but it’s just the way that we’ve kind of

00:09:14 decided we want to be a little more cautious there. And the reason I think robots are a little

00:09:19 bit different is because there is a lot of research showing that we respond differently

00:09:23 to something in our physical space than something on a screen. We will treat it much more viscerally,

00:09:29 much more like a physical actor. And so it’s totally possible that this is not a problem.

00:09:38 And it’s the same thing as violence in video games. Maybe restrict it with kids to be safe,

00:09:43 but adults can do what they want. But we just need to ask the question again because we don’t

00:09:48 have any evidence at all yet. Maybe there’s an intermediate place too. I did my research

00:09:55 on Twitter. By research, I mean scrolling through your Twitter feed.

00:10:00 You mentioned that you were going at some point to an animal law conference.

00:10:04 So I have to ask, do you think there’s something that we can learn

00:10:07 from animal rights that guides our thinking about robots?

00:10:12 Oh, I think there is so much to learn from that. I’m actually writing a book on it right now. That’s

00:10:17 why I’m going to this conference. So I’m writing a book that looks at the history of animal

00:10:22 domestication and how we’ve used animals for work, for weaponry, for companionship.

00:10:27 And one of the things the book tries to do is move away from this fallacy that I talked about

00:10:33 of comparing robots and humans because I don’t think that’s the right analogy. But I do think

00:10:39 that on a social level, even on a social level, there’s so much that we can learn from looking

00:10:43 at that history because throughout history, we’ve treated most animals like tools, like products.

00:10:49 And then some of them we’ve treated differently and we’re starting to see people treat robots in

00:10:53 really similar ways. So I think it’s a really helpful predictor to how we’re going to interact

00:10:57 with the robots. Do you think we’ll look back at this time like 100 years from now and see

00:11:05 what we do to animals as like similar to the way we view like the Holocaust in World War II?

00:11:13 That’s a great question. I mean, I hope so. I am not convinced that we will. But I often wonder,

00:11:22 you know, what are my grandkids going to view as, you know, abhorrent that my generation did

00:11:28 that they would never do? And I’m like, well, what’s the big deal? You know, it’s a fun question

00:11:33 to ask yourself. It always seems that there’s atrocities that we discover later. So the things

00:11:41 that at the time people didn’t see as, you know, you look at everything from slavery to any kinds

00:11:49 of abuse throughout history to the kind of insane wars that were happening to the way war was carried

00:11:56 out and rape and the kind of violence that was happening during war that we now, you know,

00:12:05 we see as atrocities, but at the time perhaps didn’t as much. And so now I have this intuition

00:12:12 that I have this worry, maybe you’re going to probably criticize me, but I do anthropomorphize

00:12:20 robots. I don’t see a fundamental philosophical difference between a robot and a human being

00:12:31 in terms of once the capabilities are matched. So the fact that we’re really far away doesn’t,

00:12:39 in terms of capabilities and then that from natural language processing, understanding

00:12:43 and generation to just reasoning and all that stuff. I think once you solve it, I see though,

00:12:48 this is a very gray area and I don’t feel comfortable with the kind of abuse that people

00:12:53 throw at robots. Subtle, but I can see it becoming, I can see basically a civil rights movement for

00:13:01 robots in the future. Do you think, let me put it in the form of a question, do you think robots

00:13:07 should have some kinds of rights? Well, it’s interesting because I came at this originally

00:13:13 from your perspective. I was like, you know what, there’s no fundamental difference between

00:13:19 technology and like human consciousness. Like we, we can probably recreate anything. We just don’t

00:13:24 know how yet. And so there’s no reason not to give machines the same rights that we have once,

00:13:32 like you say, they’re kind of on an equivalent level. But I realized that that is kind of a

00:13:38 far future question. I still think we should talk about it because I think it’s really interesting.

00:13:41 But I realized that it’s actually, we might need to ask the robot rights question even sooner than

00:13:47 that while the machines are still, quote unquote, really dumb and not on our level because of the

00:13:56 way that we perceive them. And I think one of the lessons we learned from looking at the history of

00:14:00 animal rights and one of the reasons we may not get to a place in a hundred years where we view

00:14:05 it as wrong to, you know, eat or otherwise, you know, use animals for our own purposes is because

00:14:11 historically we’ve always protected those things that we relate to the most. So one example is

00:14:17 whales. No one gave a shit about the whales. Am I allowed to swear? Yeah, swear freely.

00:14:26 No one gave a shit about the whales until someone recorded them singing. And suddenly

00:14:31 people were like, oh, this is a beautiful creature and now we need to save the whales. And that

00:14:35 started the whole Save the Whales movement in the 70s. So as much as I, and I think a lot of people

00:14:45 want to believe that we care about consistent biological criteria, that’s not historically

00:14:52 how we formed our alliances. Yeah, so what, why do we, why do we believe that all humans are created

00:15:00 equal? Killing of a human being, no matter who the human being is, that’s what I meant by equality,

00:15:07 is bad. And then, because I’m connecting that to robots and I’m wondering whether mortality,

00:15:14 so the killing act is what makes something, that’s the fundamental first right. So I am currently

00:15:21 allowed to take a shotgun and shoot a Roomba. I think, I’m not sure, but I’m pretty sure it’s not

00:15:29 considered murder, right. Or even shutting them off. So that’s, that’s where the line appears to

00:15:36 be, right? Is this mortality a critical thing here? I think here again, like the animal analogy is

00:15:44 really useful because you’re also allowed to shoot your dog, but people won’t be happy about it.

00:15:49 So we give, we do give animals certain protections from like, you’re not allowed to torture your dog

00:15:56 and set it on fire, at least in most states and countries, but you’re still allowed to treat it

00:16:04 like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time.

00:16:11 And, you know, there’s a lot of philosophical thought on why viewing humans as something unique

00:16:22 is not, is just speciesism and not, you know, based on any criteria that would actually justify

00:16:31 making a difference between us and other species. Do you think in general people, most people are

00:16:38 good? Or do you think there’s evil and good in all of us that’s revealed through

00:16:49 our circumstances and through our interactions? I like to view myself as a person who like believes

00:16:55 that there’s no absolute evil and good and that everything is, you know, gray. But I do think it’s

00:17:03 an interesting question. Like when I see people being violent towards robotic objects, you said

00:17:08 that bothers you because the robots might someday, you know, be smart. And is that why?

00:17:15 Well, it bothers me because it reveals, so I personally believe, because I’ve studied way too,

00:17:21 so I’m Jewish. I studied the Holocaust and World War II exceptionally well. I personally believe

00:17:26 that most of us have evil in us. That what bothers me is the abuse of robots reveals the evil in

00:17:35 human beings. And it’s, I think it doesn’t just bother me. It’s, I think it’s an opportunity for

00:17:44 roboticists to make, help people find the better sides, the angels of their nature, right? That

00:17:53 abuse isn’t just a fun side thing. That’s a, you revealing a dark part that you shouldn’t,

00:17:59 that should be hidden deep inside. Yeah. I mean, you laugh, but some of our research does indicate

00:18:07 that maybe people’s behavior towards robots reveals something about their tendencies for

00:18:12 empathy generally, even using very simple robots that we have today that like clearly don’t feel

00:18:16 anything. So, you know, Westworld is maybe, you know, not so far off and it’s like, you know,

00:18:27 depicting the bad characters as willing to go around and shoot and rape the robots and the good

00:18:32 characters as not wanting to do that. Even without assuming that the robots have consciousness.

00:18:37 So there’s an opportunity, it’s interesting, there’s an opportunity to almost practice empathy.

00:18:42 Interacting with robots is an opportunity to practice empathy.

00:18:47 I agree with you. Some people would say, why are we practicing empathy on robots instead of,

00:18:54 you know, on our fellow humans or on animals that are actually alive and experienced the world?

00:18:59 And I don’t agree with them because I don’t think empathy is a zero sum game. And I do

00:19:03 think that it’s a muscle that you can train and that we should be doing that. But some people

00:19:09 disagree. So the interesting thing, you’ve heard, you know, raising kids sort of asking them or

00:19:20 telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during

00:19:28 the requests. I don’t know, I’m a huge fan of that idea because, yeah, that’s towards the idea of

00:19:34 practicing empathy. I feel like politeness, I’m always polite to all the, all the systems that we

00:19:39 build, especially anything that’s speech interaction based. Like when we talk to the car, I’ll always

00:19:44 have a pretty good detector for please. I feel like there should be room for encouraging empathy

00:19:51 in those interactions. Yeah. Okay. So I agree with you. So I’m going to play devil’s advocate. Sure.

00:19:58 So what is the, what is the devil’s advocate argument there? The devil’s advocate argument

00:20:02 is that if you are the type of person who has abusive tendencies or needs to get some sort of

00:20:08 like behavior like that out, needs an outlet for it, that it’s great to have a robot that you can

00:20:14 scream at so that you’re not screaming at a person. And we just don’t know whether that’s true,

00:20:19 whether it’s an outlet for people or whether it just kind of, as my friend once said,

00:20:23 trains their cruelty muscles and makes them more cruel in other situations.

00:20:26 Oh boy. Yeah. And that expands to other topics, which I, I don’t know, you know, there’s a,

00:20:36 there’s the topic of sex, which is a weird one that I tend to avoid from a robotics perspective.

00:20:42 And most of the general public doesn’t, they talk about sex robots and so on. Is that an area you’ve

00:20:50 touched at all research wise? Like the way, cause that’s what people imagine sort of any kind of

00:20:57 interaction between human and robot that shows any kind of compassion. They immediately think

00:21:04 from a product perspective in the near term is sort of expansion of what pornography is and all

00:21:10 that kind of stuff. Yeah. Do researchers touch this? Well that’s kind of you to like characterize

00:21:16 it as though there’s thinking rationally about product. I feel like sex robots are just such a

00:21:20 like titillating news hook for people that they become like the story. And it’s really hard to

00:21:27 not get fatigued by it when you’re in the space because you tell someone you do human robot

00:21:32 interaction. Of course, the first thing they want to talk about is sex robots. Yeah, it happens a

00:21:37 lot. And it’s, it’s unfortunate that I’m so fatigued by it because I do think that there

00:21:42 are some interesting questions that become salient when you talk about, you know, sex with robots.

00:21:48 See what I think would happen when people get sex robots, like if it’s some guys, okay, guys get

00:21:54 female sex robots. What I think there’s an opportunity for is an actual, like, like they’ll

00:22:03 actually interact. What I’m trying to say is that the stuff outside of the sex would be the most

00:22:09 fulfilling part, the interaction. It’s like the folks, there’s movies about this, right,

00:22:15 who pay a prostitute and then end up just talking to her the whole time. So I feel like

00:22:21 there’s an opportunity. It’s like most guys and people in general joke about this, the sex act,

00:22:27 but really people are just lonely inside and they’re looking for connection. Many of them.

00:22:32 And it’d be unfortunate if that connection is established through the sex industry. I feel like

00:22:40 it should go into the front door of like, people are lonely and they want a connection.

00:22:46 Well, I also feel like we should kind of, you know, destigmatize the sex industry because,

00:22:54 you know, even prostitution, like there are prostitutes that specialize in disabled people

00:22:59 who don’t have the same kind of opportunities to explore their sexuality. So it’s, I feel like we

00:23:07 should like destigmatize all of that generally. But yeah, that connection and that loneliness is

00:23:13 an interesting topic that you bring up because while people are constantly worried about robots

00:23:19 replacing humans and oh, if people get sex robots and the sex is really good, then they won’t want

00:23:23 their, you know, partner or whatever. But we rarely talk about robots actually filling a hole where

00:23:29 there’s nothing and what benefit that can provide to people. Yeah, I think that’s an exciting,

00:23:37 there’s a whole giant, there’s a giant hole that’s unfillable by humans. It’s asking too much of

00:23:43 people, your friends and people you’re in a relationship with and your family, to fill that

00:23:47 hole. There’s, because, you know, it’s exploring the full, like, you know, exploring the full

00:23:54 complexity and richness of who you are. Like who are you really? Like people, your family doesn’t

00:24:02 have enough patience to really sit there and listen to who are you really. And I feel like

00:24:06 there’s an opportunity to really make that connection with robots. I just feel like we’re

00:24:11 complex as humans and we’re capable of lots of different types of relationships. So whether that’s,

00:24:18 you know, with family members, with friends, with our pets, or with robots, I feel like

00:24:23 there’s space for all of that and all of that can provide value in a different way.

00:24:29 Yeah, absolutely. So I’m jumping around. Currently most of my work is in autonomous vehicles.

00:24:35 So the most popular topic among the general public is the trolley problem.

00:24:45 Most roboticists kind of hate this question, but what do you think of this thought experiment?

00:24:52 What do you think we can learn from it outside of the silliness of

00:24:56 the actual application of it to the autonomous vehicle? I think it’s still an interesting

00:25:00 ethical question. And that in itself, just like much of the interaction with robots

00:25:06 has something to teach us. But from your perspective, do you think there’s anything there?

00:25:10 Well, I think you’re right that it does have something to teach us because,

00:25:14 but I think what people are forgetting in all of these conversations is the origins of the trolley

00:25:19 problem and what it was meant to show us, which is that there is no right answer. And that sometimes

00:25:25 our moral intuition that comes to us instinctively is not actually what we should follow if we care

00:25:34 about creating systematic rules that apply to everyone. So I think that as a philosophical

00:25:40 concept, it could teach us at least that, but that’s not how people are using it right now.

00:25:48 These are friends of mine and I love them dearly and their project adds a lot of value. But if

00:25:54 we’re viewing the moral machine project as what we can learn from the trolley problems, the moral

00:25:59 machine is, I’m sure you’re familiar, it’s this website that you can go to and it gives you

00:26:04 different scenarios like, oh, you’re in a car, you can decide to run over these two people or

00:26:10 this child. What do you choose? Do you choose the homeless person? Do you choose the person who’s

00:26:15 jaywalking? And so it pits these like moral choices against each other and then tries to

00:26:21 crowdsource the quote unquote correct answer, which is really interesting and I think valuable data,

00:26:29 but I don’t think that’s what we should base our rules in autonomous vehicles on because

00:26:34 it is exactly what the trolley problem is trying to show, which is your first instinct might not

00:26:39 be the correct one if you look at rules that then have to apply to everyone and everything.

00:26:45 So how do we encode these ethical choices in interaction with robots? For example,

00:26:50 autonomous vehicles, there is a serious ethical question of do I protect myself?

00:26:58 Does my life have higher priority than the life of another human being? Because that changes

00:27:05 certain control decisions that you make. So if your life matters more than other human beings,

00:27:11 then you’d be more likely to swerve out of your current lane. So currently automated emergency

00:27:16 braking systems that just brake, they don’t ever swerve. So swerving into oncoming traffic or

00:27:25 no, just in a different lane can cause significant harm to others, but it’s possible that it causes

00:27:31 less harm to you. So that’s a difficult ethical question. Do you have a hope that

00:27:41 like the trolley problem is not supposed to have a right answer, right? Do you hope that

00:27:46 when we have robots at the table, we’ll be able to discover the right answer for some of these

00:27:50 questions? Well, what’s happening right now, I think, is this question that we’re facing of

00:27:58 what ethical rules should we be programming into the machines is revealing to us that

00:28:03 our ethical rules are much less programmable than we probably thought before. And so that’s a really

00:28:11 valuable insight, I think, that these issues are very complicated and that in a lot of these cases,

00:28:19 it’s you can’t really make that call, like not even as a legislator. And so what’s going to

00:28:25 happen in reality, I think, is that car manufacturers are just going to try and avoid

00:28:31 the problem and avoid liability in any way possible. Or like they’re going to always protect

00:28:36 the driver because who’s going to buy a car if it’s programmed to kill someone?

00:28:40 Yeah.

00:28:41 Kill you instead of someone else. So that’s what’s going to happen in reality.

00:28:47 But what did you mean by like once we have robots at the table, like do you mean when they can help

00:28:51 us figure out what to do?

00:28:54 No, I mean when robots are part of the ethical decisions. So no, no, no, not that they help us. Well.

00:29:04 Oh, you mean when it’s like, should I run over a robot or a person?

00:29:08 Right. That kind of thing. So what, no, no, no. So when you, it’s exactly what you said, which is

00:29:15 when you have to encode the ethics into an algorithm, you start to try to really understand

00:29:22 what are the fundamentals of the decision making process you make to make certain decisions.

00:29:28 Should you, like capital punishment, should you take a person’s life or not to punish them for

00:29:34 a certain crime? Sort of, you can use, you can develop an algorithm to make that decision, right?

00:29:42 And the hope is that the act of making that algorithm, however you make it, so there’s a few

00:29:49 approaches, will help us actually get to the core of what is right and what is wrong under our current

00:29:59 societal standards.

00:30:00 But isn’t that what’s happening right now? And we’re realizing that we don’t have a consensus on

00:30:05 what’s right and wrong.

00:30:06 You mean in politics in general?

00:30:08 Well, like when we’re thinking about these trolley problems and autonomous vehicles and how to

00:30:12 program ethics into machines and how to, you know, make AI algorithms fair and equitable, we’re

00:30:22 realizing that this is so complicated and it’s complicated in part because there doesn’t seem

00:30:28 to be a one right answer in any of these cases.

00:30:30 Do you have a hope for, like one of the ideas of the moral machine is that crowdsourcing can help

00:30:35 us converge towards, like democracy can help us converge towards the right answer.

00:30:42 Do you have a hope for crowdsourcing?

00:30:43 Well, yes and no. So I think that in general, you know, I have a legal background and

00:30:49 policymaking is often about trying to suss out, you know, what rules does this particular society

00:30:55 agree on and then trying to codify that. So the law makes these choices all the time and then

00:31:00 tries to adapt according to changing culture. But in the case of the moral machine project,

00:31:06 I don’t think that people’s choices on that website necessarily reflect what laws they would

00:31:12 want in place. I think you would have to ask them a series of different questions in order to get

00:31:18 at what their consensus is.

00:31:20 I agree, but that has to do more with the artificial nature of, I mean, they’re showing

00:31:25 some cute icons on a screen. That’s almost, so if you, for example, we do a lot of work in virtual

00:31:32 reality. And so if you put those same people into virtual reality where they have to make that

00:31:38 decision, their decision would be very different, I think.

00:31:42 I agree with that. That’s one aspect. And the other aspect is it’s a different question to ask

00:31:47 someone, would you run over the homeless person or the doctor in this scene? Or do you want cars to

00:31:55 always run over the homeless people?

00:31:57 I think, yeah. So let’s talk about anthropomorphism. To me, anthropomorphism, if I can

00:32:04 pronounce it correctly, is one of the most fascinating phenomena from like both the

00:32:09 engineering perspective and the psychology perspective, machine learning perspective,

00:32:14 and robotics in general. Can you step back and define anthropomorphism, how you see it in

00:32:23 general terms in your work?

00:32:25 Sure. So anthropomorphism is this tendency that we have to project human like traits and

00:32:32 behaviors and qualities onto nonhumans. And we often see it with animals, like we’ll project

00:32:38 emotions on animals that may or may not actually be there. We often see that we’re trying to

00:32:43 interpret things according to our own behavior when we get it wrong. But we do it with more

00:32:49 than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in

00:32:53 the headlights of cars. And we do it with robots very, very extremely.

00:32:59 You think that can be engineered? Can that be used to enrich an interaction between an AI

00:33:05 system and the human?

00:33:07 Oh, yeah, for sure.

00:33:08 And do you see it being used that way often? Like, I don’t, I haven’t seen, whether it’s

00:33:17 Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization.

00:33:26 You said you haven’t seen?

00:33:27 I haven’t seen. They keep moving away from that. I think they’re afraid of that.

00:33:32 They actually, so I only recently found out, but did you know that Amazon has like a whole

00:33:38 team of people who are just there to work on Alexa’s personality?

00:33:44 So I know that depends on what you mean by personality. I didn’t know that exact thing.

00:33:50 But I do know that how the voice is perceived is worked on a lot, whether if it’s a pleasant

00:33:59 feeling about the voice, but that has to do more with the texture of the sound and the

00:34:04 audio and so on. But personality is more like…

00:34:08 It’s like, what’s her favorite beer when you ask her? And the personality team is different

00:34:13 for every country too. Like there’s a different personality for German Alexa than there is

00:34:17 for American Alexa. That said, I think it’s very difficult to, you know, use the, really,

00:34:26 really harness the anthropomorphism with these voice assistants because the voice interface

00:34:34 is still very primitive. And I think that in order to get people to really suspend their

00:34:40 disbelief and treat a robot like it’s alive, less is sometimes more. You want them to project

00:34:47 onto the robot and you want the robot to not disappoint their expectations for how it’s

00:34:51 going to answer or behave in order for them to have this kind of illusion. And with Alexa,

00:34:57 I don’t think we’re there yet, or Siri, that they’re just not good at that. But if you

00:35:03 look at some of the more animal like robots, like the baby seal that they use with the

00:35:08 dementia patients, it’s a much more simple design. It doesn’t try to talk to you. It

00:35:12 can’t disappoint you in that way. It just makes little movements and sounds and people

00:35:17 stroke it and it responds to their touch. And that is like a very effective way to harness

00:35:23 people’s tendency to kind of treat the robot like a living thing.

00:35:28 Yeah. So you bring up some interesting ideas in your paper chapter, I guess,

00:35:35 Anthropomorphic Framing in Human-Robot Interaction, that I read the last time we scheduled this.

00:35:40 Oh my God, that was a long time ago.

00:35:42 Yeah. What are some good and bad cases of anthropomorphism in your perspective?

00:35:49 Like when is the good ones and bad?

00:35:52 Well, I should start by saying that, you know, while design can really enhance the

00:35:56 anthropomorphism, it doesn’t take a lot to get people to treat a robot like it’s alive. Like

00:36:01 people will, over 85% of Roombas have a name, which I’m, I don’t know the numbers for your

00:36:07 regular type of vacuum cleaner, but they’re not that high, right? So people will feel bad for the

00:36:12 Roomba when it gets stuck, they’ll send it in for repair and want to get the same one back. And

00:36:15 that’s, that one is not even designed to like make you do that. So I think that some of the cases

00:36:23 where it’s maybe a little bit concerning that anthropomorphism is happening is when you have

00:36:28 something that’s supposed to function like a tool and people are using it in the wrong way.

00:36:32 And one of the concerns is military robots where, so gosh, 2000, like early 2000s, which is a long

00:36:44 time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq

00:36:51 and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally

00:36:59 attached to the robots. And that’s fine until a soldier risks his life to save a robot, which

00:37:08 you really don’t want. But they were treating them like pets. Like they would name them,

00:37:12 they would give them funerals with gun salutes, they would get really upset and traumatized when

00:37:17 the robot got broken. So in situations where you want a robot to be a tool, in particular,

00:37:23 when it’s supposed to like do a dangerous job that you don’t want a person doing,

00:37:26 it can be hard when people get emotionally attached to it. That’s maybe something that

00:37:32 you would want to discourage. Another case for concern is maybe when companies try to

00:37:39 leverage the emotional attachment to exploit people. So if it’s something that’s not in the

00:37:45 consumer’s interest, trying to like sell them products or services or exploit an emotional

00:37:51 connection to keep them paying for a cloud service for a social robot or something like that might be,

00:37:57 I think that’s a little bit concerning as well.

00:37:59 Yeah, the emotional manipulation, which probably happens behind the scenes now with some like

00:38:04 social networks and so on, but making it more explicit. What’s your favorite robot?

00:38:12 Fictional or real?

00:38:13 No, real. Real robot, which you have felt a connection with or not like, not anthropomorphic

00:38:23 connection, but I mean like you sit back and say, damn, this is an impressive system.

00:38:32 Wow. So two different robots. So the Pleo baby dinosaur robot that is no longer sold that

00:38:38 came out in 2007, that one I was very impressed with. It was, but, but from an anthropomorphic

00:38:45 perspective, I was impressed with how much I bonded with it, how much I like wanted to believe

00:38:50 that it had this inner life.

00:38:51 Can you describe Pleo, can you describe what it is? How big is it? What can it actually do?

00:38:58 Yeah. Pleo is about the size of a small cat. It had a lot of like motors that gave it this kind

00:39:06 of lifelike movement. It had things like touch sensors and an infrared camera. So it had all

00:39:11 these like cool little technical features, even though it was a toy. And the thing that really

00:39:18 struck me about it was that it, it could mimic pain and distress really well. So if you held

00:39:24 it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing

00:39:28 and it would start to squirm and cry out. If you hit it too hard, it would start to cry.

00:39:34 So it was very impressive in design.

00:39:38 And what’s the second robot that you were, you said there might’ve been two that you liked.

00:39:43 Yeah. So the Boston Dynamics robots are just impressive feats of engineering.

00:39:49 Have you met them in person?

00:39:51 Yeah. I recently got a chance to go visit and I, you know, I was always one of those people who

00:39:55 watched the videos and was like, this is super cool, but also it’s a product video. Like,

00:39:59 I don’t know how many times that they had to shoot this to get it right.

00:40:02 Yeah.

00:40:03 But visiting them, I, you know, I’m pretty sure that I was very impressed. Let’s put it that way.

00:40:10 Yeah. And in terms of the control, I think that was a transformational moment for me

00:40:15 when I met Spot Mini in person.

00:40:17 Yeah.

00:40:18 Because, okay, maybe this is a psychology experiment, but I anthropomorphized the,

00:40:26 the crap out of it. So I immediately, it was like my best friend, right?

00:40:30 I think it’s really hard for anyone to watch Spot move and not feel like it has agency.

00:40:35 Yeah. This movement, especially the arm on Spot Mini really obviously looks like a head.

00:40:44 Yeah.

00:40:44 They say, no, they didn’t mean it that way, but obviously, it looks exactly like that.

00:40:51 And so it’s almost impossible to not think of it as a, almost like the baby dinosaur,

00:40:57 but slightly larger. And this movement of the, of course, the intelligence is,

00:41:02 their whole idea is that it’s not supposed to be intelligent. It’s a platform on which you build

00:41:08 higher intelligence. It’s actually really, really dumb. It’s just a basic movement platform.

00:41:13 Yeah. But even dumb robots can, like, we can immediately respond to them in this visceral way.

00:41:19 What are your thoughts about Sophia the robot? This kind of mix of some basic natural language

00:41:26 processing and basically an art experiment.

00:41:31 Yeah. An art experiment is a good way to characterize it. I’m much less impressed

00:41:35 with Sophia than I am with Boston Dynamics.

00:41:37 She said she likes you. She said she admires you.

00:41:40 Yeah. She followed me on Twitter at some point. Yeah.

00:41:44 She tweets about how much she likes you.

00:41:45 So what does that mean? I have to be nice or?

00:41:48 No, I don’t know. I was emotionally manipulating you. No. How do you think of

00:41:55 that? What I think of the whole thing that happened with Sophia is that quite a large number of people

00:42:01 kind of immediately had a connection and thought that maybe we’re far more advanced with robotics

00:42:06 than we are or actually didn’t even think much. I was surprised how little people cared

00:42:13 that they kind of assumed that, well, of course AI can do this.

00:42:19 Yeah.

00:42:19 And then if they assume that, I felt they should be more impressed.

00:42:26 Well, people really overestimate where we are. And so when something, I don’t even think Sophia

00:42:33 was very impressive or is very impressive. I think she’s kind of a puppet, to be honest. But

00:42:38 yeah, I think people are a little bit influenced by science fiction and pop culture to

00:42:43 think that we should be further along than we are.

00:42:45 So what’s your favorite robots in movies and fiction?

00:42:48 WALL-E.

00:42:49 WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception control systems

00:42:58 operating on WALL-E that makes it all work? Just in general?

00:43:02 The design of WALL-E the robot, I think that animators figured out, starting in the 1940s,

00:43:10 how to create characters that don’t look real, but look like something that’s even better than real,

00:43:19 that we really respond to and think is really cute. They figured out how to make them move

00:43:23 and look in the right way. And WALL-E is just such a great example of that.

00:43:27 You think eyes, big eyes or big something that’s kind of eyeish. So it’s always playing on some

00:43:35 aspect of the human face, right?

00:43:36 Often. Yeah. So big eyes. Well, I think one of the first animations to really play with this was

00:43:44 Bambi. And they weren’t originally going to do that. They were originally trying to make the

00:43:48 deer look as lifelike as possible. They brought deer into the studio and had a little zoo there

00:43:53 so that the animators could work with them. And then at some point they were like,

00:43:57 if we make really big eyes and a small nose and big cheeks, kind of more like a baby face,

00:44:02 then people like it even better than if it looks real. Do you think the future of things like

00:44:10 Alexa in the home has possibility to take advantage of that, to build on that, to create

00:44:18 these systems that are better than real, that create a close human connection? I can pretty

00:44:25 much guarantee you without having any knowledge that those companies are going to make these

00:44:32 things. And companies are working on that design behind the scenes. I’m pretty sure.

00:44:37 I totally disagree with you.

00:44:38 Really?

00:44:39 So that’s what I’m interested in. I’d like to build such a company. I know

00:44:43 a lot of those folks and they’re afraid of that because how do you make money off of it?

00:44:49 Well, but even just making Alexa look a little bit more interesting than just a cylinder

00:44:54 would do so much.

00:44:55 It’s an interesting thought, but I don’t think people from an Amazon perspective are looking

00:45:02 for that kind of connection. They want you to be addicted to the services provided by Alexa,

00:45:08 not to the device. So with the device itself, it’s felt that you can lose a lot, because if you create a

00:45:17 connection, then it creates more opportunity for frustration and negative stuff than it does

00:45:26 for positive stuff. That’s, I think, the way they think about it.

00:45:29 That’s interesting. Like I agree that it’s very difficult to get right and you have to get it

00:45:35 exactly right. Otherwise you wind up with Microsoft’s Clippy.

00:45:40 Okay, easy now. What’s your problem with Clippy?

00:45:43 You like Clippy? Is Clippy your friend?

00:45:45 Yeah, I like Clippy. I just talked to Microsoft’s CTO, we just had this argument, and he said

00:45:51 he’s not bringing Clippy back. They’re not bringing

00:45:57 Clippy back and that’s very disappointing. I think Clippy was the greatest assistant

00:46:05 we’ve ever built. It was a horrible attempt, of course, but it’s the best we’ve ever done

00:46:10 because it was a real attempt to have like an actual personality. I mean, obviously the

00:46:17 technology was way not there at the time, of being able to be a recommender system for assisting you

00:46:25 in anything, in typing in Word or any other kind of application, but still it was an attempt

00:46:30 at personality that was legitimate, which I thought was brave.

00:46:34 Yes, yes. Okay. You know, you’ve convinced me I’ll be slightly less hard on Clippy.

00:46:39 And I know I have like an army of people behind me who also miss Clippy.

00:46:43 Really? I want to meet these people. Who are these people?

00:46:47 It’s the people who like to hate stuff when it’s there and miss it when it’s gone.

00:46:55 So everyone.

00:46:56 It’s everyone. Exactly. All right. So Anki and Jibo, the two companies,

00:47:04 the two amazing companies, the social robotics companies that have recently been closed down.

00:47:10 Yes.

00:47:12 Why do you think it’s so hard to create a personal robotics company? So making a business

00:47:17 out of essentially something that people would anthropomorphize, have a deep connection with.

00:47:23 Why is it so hard to make it work? Is the business case not there or what is it?

00:47:28 I think it’s a number of different things. I don’t think it’s going to be this way forever.

00:47:35 I think at this current point in time, it takes so much work to build something that only barely

00:47:43 meets people’s minimal expectations because of science fiction and pop culture giving people

00:47:49 this idea that we should be further than we already are. Like when people think about a robot

00:47:53 assistant in the home, they think about Rosie from the Jetsons or something like that. And

00:48:00 Anki and Jibo did such a beautiful job with the design and getting that interaction just right.

00:48:06 But I think people just wanted more. They wanted more functionality. I think you’re also right that

00:48:11 the business case isn’t really there because there hasn’t been a killer application that’s

00:48:17 useful enough to get people to adopt the technology in great numbers. I think what we did see from the

00:48:23 people who did get Jibo is a lot of them became very emotionally attached to it. But that’s not,

00:48:31 I mean, it’s kind of like the Palm Pilot back in the day. Most people are like, why do I need this?

00:48:35 Why would I? They don’t see how they would benefit from it until they have it or some

00:48:40 other company comes in and makes it a little better. Yeah. Like how far away are we, do you

00:48:45 think? How hard is this problem? It’s a good question. And I think it has a lot to do with

00:48:50 people’s expectations and those keep shifting depending on what science fiction that is popular.

00:48:56 But also it’s two things. It’s people’s expectation and people’s need for an emotional

00:49:01 connection. Yeah. And I believe the need is pretty high. Yes. But I don’t think we’re aware of it.

00:49:10 That’s right. There’s like, I really think this is like the life as we know it. So we’ve just kind

00:49:16 of gotten used to it of really, I hate to be dark because I have close friends, but we’ve gotten

00:49:24 used to really never being close to anyone. Right. And we’re deeply, I believe, okay, this is

00:49:32 hypothesis. I think we’re deeply lonely, all of us, even those in deep fulfilling relationships.

00:49:37 In fact, what makes those relationship fulfilling, I think is that they at least tap into that deep

00:49:43 loneliness a little bit. But I feel like there’s more opportunity to explore that, that doesn’t

00:49:49 inter, doesn’t interfere with the human relationships you have. It expands more on the,

00:49:55 that, yeah, the rich deep unexplored complexity that’s all of us, weird apes. Okay.

00:50:02 I think you’re right. Do you think it’s possible to fall in love with a robot?

00:50:05 Oh yeah, totally. Do you think it’s possible to have a long-term committed monogamous relationship

00:50:13 with a robot? Well, yeah, there are lots of different types of long-term committed monogamous

00:50:18 relationships. I think monogamous implies like, you’re not going to see other humans sexually or

00:50:26 like you basically on Facebook have to say, I’m in a relationship with this person, this robot.

00:50:32 I just don’t like, again, I think this is comparing robots to humans when I would rather

00:50:37 compare them to pets. Like you get a robot, it fulfills this loneliness that you have

00:50:46 in maybe not the same way as a pet, maybe in a different way that is even supplemental in a

00:50:52 different way. But I’m not saying that people won’t like do this, be like, oh, I want to marry

00:50:58 my robot or I want to have like a sexual relation, monogamous relationship with my robot. But I don’t

00:51:05 think that that’s the main use case for them. But you think that there’s still a gap between

00:51:11 human and pet. So between a husband and pet, there’s a different relationship. It’s engineering.

00:51:24 So that’s a gap that can be closed through engineering. I think it could be closed someday, but why

00:51:30 would we close that? Like, I think it’s so boring to think about recreating things that we already

00:51:34 have when we could create something that’s different. I know you’re thinking about the

00:51:43 people who like don’t have a husband and like, what could we give them? Yeah. But I guess what

00:51:50 I’m getting at is maybe not. So like the movie Her. Yeah. Right. So a better husband. Well,

00:52:01 maybe better in some ways. Like it’s, I do think that robots are going to continue to be a different

00:52:07 type of relationship, even if we get them like very human looking or when, you know, the voice

00:52:13 interactions we have with them feel very like natural and human like, I think there’s still

00:52:18 going to be differences. And there were in that movie too, like towards the end, it kind of goes

00:52:22 off the rails. But it’s just a movie. So your intuition is that, because you kind of said

00:52:30 two things, right? So one is why would you want to basically replicate the husband? Yeah. Right.

00:52:39 And the other is kind of implying that it’s kind of hard to do. So like anytime you try,

00:52:46 you might build something very impressive, but it’ll be different. I guess my question is about

00:52:51 human nature. It’s like, how hard is it to satisfy that role of the husband? So removing any of

00:53:01 the sexual stuff aside, it’s more like the mystery, the tension, the dance of relationships

00:53:08 you think with robots, that’s difficult to build. What’s your intuition? I think that, well, it also

00:53:16 depends on are we talking about robots now, in 50 years, or in an indefinite amount of time. I’m

00:53:22 thinking like five or 10 years. Five or 10 years. I think that robots at best will be like, it’s

00:53:29 more similar to the relationship we have with our pets than relationship that we have with other

00:53:33 people. I got it. So what do you think it takes to build a system that exhibits greater and greater

00:53:41 levels of intelligence? Like it impresses us with its intelligence. A Roomba, so you talk about

00:53:47 anthropomorphization that doesn’t, I think intelligence is not required. In fact, intelligence

00:53:52 probably gets in the way sometimes, like you mentioned. But what do you think it takes to

00:54:00 create a system where we sense that it has a human level intelligence? So something that,

00:54:07 probably something conversational, human level intelligence. How hard do you think that problem

00:54:11 is? It’d be interesting to sort of hear your perspective, not just purely, so I talk to a lot

00:54:18 of people, how hard are conversational agents? How hard is it to pass the Turing test? But my

00:54:24 sense is it’s easier than just solving, it’s easier than solving the pure natural language

00:54:33 processing problem. Because I feel like you can cheat. Yeah. So how hard is it to pass the Turing

00:54:41 test in your view? Well, I think again, it’s all about expectation management. If you set up

00:54:47 people’s expectations to think that they’re communicating with, what was it, a 13 year old

00:54:52 boy from the Ukraine? Yeah, that’s right. Then they’re not going to expect perfect English,

00:54:56 they’re not going to expect perfect, you know, understanding of concepts or even like being on

00:55:00 the same wavelength in terms of like conversation flow. So it’s much easier to pass in that case.

00:55:08 Do you think, you kind of alluded this too with audio, do you think it needs to have a body?

00:55:14 I think that we definitely have, so we treat physical things with more social agency,

00:55:21 because we’re very physical creatures. I think a body can be useful.

00:55:29 Does it get in the way? Are there negative aspects, like…

00:55:33 Yeah, there can be. So if you’re trying to create a body that’s too similar to something that people

00:55:38 are familiar with, like I have this robot cat at home that Hasbro makes.

00:55:44 And it’s very disturbing to watch because I’m constantly assuming that it’s

00:55:50 going to move like a real cat and it doesn’t because it’s like a $100 piece of technology.

00:55:57 So it’s very like disappointing and it’s very hard to treat it like it’s alive. So you can get a lot

00:56:04 wrong with the body too, but you can also use tricks, same as, you know, the expectation

00:56:09 management of the 13 year old boy from the Ukraine. If you pick an animal that people

00:56:13 aren’t intimately familiar with, like the baby dinosaur, like the baby seal that people have

00:56:17 never actually held in their arms, you can get away with much more because they don’t have these

00:56:22 preformed expectations. Yeah, I remember from a TED Talk or something, it clicked

00:56:27 for me that nobody actually knows what a dinosaur looks like. So you can actually get away with a

00:56:34 lot more. That was great. So what do you think about consciousness and mortality

00:56:46 being displayed in a robot? So not actually having consciousness, but having these kind

00:56:55 of human elements that are much more than just the interaction, much more than just,

00:57:01 like you mentioned with a dinosaur moving kind of in an interesting ways, but really being worried

00:57:07 about its own death and really acting as if it’s aware and self-aware and has an identity. Have you seen

00:57:16 that done in robotics? What do you think about doing that? Is that a powerful good thing?

00:57:24 Well, I think it can be a design tool that you can use for different purposes. So I can’t say

00:57:29 whether it’s inherently good or bad, but I do think it can be a powerful tool. The fact that the

00:57:36 Pleo mimics distress when you quote unquote hurt it is a really powerful tool to get people to

00:57:46 engage with it in a certain way. I had a research partner that I did some of the empathy work with

00:57:52 named Palash Nandi and he had built a robot for himself that had like a lifespan and that would

00:57:57 stop working after a certain amount of time just because he was interested in whether he himself

00:58:02 would treat it differently. And we know from Tamagotchis, those little games that we used to

00:58:10 have that were extremely primitive, that people respond to this idea of mortality and you can get

00:58:17 people to do a lot with little design tricks like that. Now, whether it’s a good thing depends on

00:58:21 what you’re trying to get them to do. Have a deeper relationship, have a deeper connection,

00:58:27 design a relationship. If it’s for their own benefit, that sounds great. Okay. You could do that for a

00:58:34 lot of other reasons. I see. So what kind of stuff are you worried about? So is it mostly about

00:58:39 manipulation of your emotions for like advertisement and so on, things like that? Yeah, or data

00:58:44 collection or, I mean, you could think of governments misusing this to extract information

00:58:51 from people. It’s, you know, just like any other technological tool, it just raises a lot of

00:58:57 questions. If you look at Facebook, if you look at Twitter and social networks, there’s a lot

00:59:02 of concern about data collection now. What, from the legal perspective or in general,

00:59:12 how do we prevent these companies from crossing a line? It’s a gray area,

00:59:19 but crossing a line they shouldn’t, in terms of manipulating, like we’re talking about,

00:59:24 manipulating our emotions, manipulating our behavior, using tactics that are not so savory.

00:59:32 Yeah. It’s really difficult because we are starting to create technology that relies on

00:59:38 data collection to provide functionality. And there’s not a lot of incentive,

00:59:44 even on the consumer side, to curb that because the other problem is that the harms aren’t

00:59:49 tangible. They’re not really apparent to a lot of people because they kind of trickle down on a

00:59:55 societal level. And then suddenly we’re living in like 1984, which, you know, sounds extreme,

01:00:02 but that book was very prescient and I’m not worried about, you know, these systems. I have,

01:00:11 you know, Amazon’s Echo at home and tell Alexa all sorts of stuff. And it helps me because,

01:00:19 you know, Alexa knows what brand of diaper we use. And so I can just easily order it again.

01:00:25 So I don’t have any incentive to ask a lawmaker to curb that. But when I think about that data

01:00:30 then being used against low income people to target them for scammy loans or education programs,

01:00:39 that’s then a societal effect that I think is very severe and, you know,

01:00:45 legislators should be thinking about.

01:00:47 But yeah, the gray area is removing ourselves from, like,

01:00:55 explicitly defining objectives and instead just saying,

01:00:58 well, we want to maximize engagement in our social network.

01:01:03 Yeah.

01:01:04 And then just, because you’re not actually doing a bad thing. It makes sense. You want people to

01:01:11 keep a conversation going, to have more conversations, to keep coming back

01:01:16 again and again, to have conversations. And whatever happens after that,

01:01:21 you’re kind of not exactly directly responsible. You’re only indirectly responsible. So I think

01:01:28 it’s a really hard problem. Are you optimistic about us ever being able to solve it?

01:01:37 You mean the problem of capitalism? It’s like, because the problem is that the companies

01:01:43 are acting in the company’s interests and not in people’s interests. And when those interests are

01:01:47 aligned, that’s great. But the completely free market doesn’t seem to work because of this

01:01:53 information asymmetry.

01:01:55 But it’s hard to know how to, so say you were trying to do the right thing. I guess what I’m

01:02:01 trying to say is it’s not obvious for these companies what the good thing for society is to

01:02:07 do. Like, I don’t think they sit there with, I don’t know, with a glass of wine and a cat,

01:02:14 like petting a cat, evil cat. And there’s two decisions and one of them is good for society.

01:02:21 One is good for the profit and they choose the profit. I think they actually, there’s a lot of

01:02:26 money to be made by doing the right thing for society. Because Google, Facebook have so much cash

01:02:36 that they actually, especially Facebook, would significantly benefit from making decisions that

01:02:40 are good for society. It’s good for their brand. But I don’t know if they know what’s good for

01:02:46 society. I don’t think we know what’s good for society in terms of how we manage the

01:02:56 conversation on Twitter or how we design, we’re talking about robots. Like, should we

01:03:06 emotionally manipulate you into having a deep connection with Alexa or not?

01:03:10 Yeah. Yeah. Do you have optimism that we’ll be able to solve some of these questions?

01:03:17 Well, I’m going to say something that’s controversial, like in my circles,

01:03:22 which is that I don’t think that companies who are reaching out to ethicists and trying to create

01:03:28 interdisciplinary ethics boards are totally just trying to whitewash

01:03:32 the problem so that they look like they’ve done something. I think that a lot of companies

01:03:36 actually do, like you say, care about what the right answer is. They don’t know what that is,

01:03:42 and they’re trying to find people to help them find it. Not in every case, but I think

01:03:48 it’s much too easy to just vilify the companies as, like you say, sitting there with their cat

01:03:52 going, heh, heh, heh, $1 million. That’s not what happens. A lot of people are well meaning even

01:03:59 within companies. I think that what we do absolutely need is more interdisciplinarity,

01:04:09 both within companies, but also within the policymaking space because we’ve hurtled into

01:04:17 a world where technological progress is much faster, or at least seems much faster, than it was, and

01:04:23 things are getting very complex. And you need people who understand the technology, but also

01:04:28 people who understand what the societal implications are, and people who are thinking

01:04:33 about this in a more systematic way to be talking to each other. There’s no other solution, I think.

01:04:39 You’ve also done work on intellectual property, so if you look at the algorithms that these

01:04:45 companies are using, like YouTube, Twitter, Facebook, so on, I mean that’s kind of,

01:04:51 those are mostly secretive, the recommender systems behind these algorithms. Do you think

01:04:58 about IP and the transparency of algorithms like this? Like what is the responsibility of

01:05:04 these companies to open source the algorithms or at least reveal to the public how these

01:05:11 algorithms work? So I personally don’t work on that. There are a lot of people who do though,

01:05:16 and there are a lot of people calling for transparency. In fact, Europe’s even trying

01:05:19 to legislate transparency, maybe they even have at this point, where like if an algorithmic system

01:05:26 makes some sort of decision that affects someone’s life, that you need to be able to see how that

01:05:31 decision was made. It’s a tricky balance because obviously companies need to have some sort of

01:05:41 competitive advantage and you can’t take all of that away or you stifle innovation. But yeah,

01:05:46 for some of the ways that these systems are already being used, I think it is pretty important that

01:05:51 people understand how they work. What are your thoughts in general on intellectual property in

01:05:56 this weird age of software, AI, robotics? Oh, that it’s broken. I mean, the system is just broken. So

01:06:04 can you describe, I actually, I don’t even know what intellectual property is in the space of

01:06:11 software, what it means to, I mean, so I believe I have a patent on a piece of software from my PhD.

01:06:20 You believe? You don’t know? No, we went through a whole process. Yeah, I do. You get the spam

01:06:26 emails like, we’ll frame your patent for you. Yeah, it’s much like a thesis. But that’s useless,

01:06:36 right? Or not? Where does IP stand in this age? What’s the right way to do it? What’s the right

01:06:43 way to protect and own ideas when it’s just code and this mishmash of something that feels much

01:06:51 softer than a piece of machinery? Yeah. I mean, it’s hard because there are different types of

01:06:58 intellectual property and they’re kind of these blunt instruments. It’s like patent law is like

01:07:03 a wrench. It works really well for an industry like the pharmaceutical industry. But when you

01:07:07 try and apply it to something else, it’s like, I don’t know, I’ll just hit this thing with a wrench

01:07:12 and hope it works. So software, you have a couple of different options. Any code that’s written down

01:07:21 in some tangible form is automatically copyrighted. So you have that protection, but that doesn’t do

01:07:27 much because if someone takes the basic idea that the code is executing and just does it in a

01:07:35 slightly different way, they can get around the copyright. So that’s not a lot of protection.

01:07:40 Then you can patent software, but that’s kind of, I mean, getting a patent costs,

01:07:47 I don’t know if you remember what yours cost or like, was it through an institution?

01:07:51 Yeah, it was through a university. It was insane. There were so many lawyers, so many meetings.

01:07:57 It made me feel like it must’ve been hundreds of thousands of dollars. It must’ve been something

01:08:02 crazy. Oh yeah. It’s insane the cost of getting a patent. And so this idea of protecting the

01:08:07 inventor in their own garage who came up with a great idea is kind of a thing of the

01:08:12 past. It’s all just companies trying to protect things and it costs a lot of money. And then

01:08:18 with code, it’s oftentimes by the time the patent is issued, which can take like five years,

01:08:25 probably your code is obsolete at that point. So it’s a very, again, a very blunt instrument that

01:08:31 doesn’t work well for that industry. And so at this point we should really have something better,

01:08:37 but we don’t. Do you like open source? Yeah. Is open source good for society?

01:08:41 You think all of us should open source code? Well, so at the Media Lab at MIT, we have an

01:08:48 open source default because what we’ve noticed is that people will come in, they’ll write some code

01:08:54 and they’ll be like, how do I protect this? And we’re like, that’s not your problem right now.

01:08:58 Your problem isn’t that someone’s going to steal your project. Your problem is getting people to

01:09:02 use it at all. There’s so much stuff out there. We don’t even know if you’re going to get traction

01:09:07 for your work. And so open sourcing can sometimes help, you know, get people’s work out there,

01:09:12 and ensure that they get attribution for the work that they’ve done. So like,

01:09:17 I’m a fan of it in a lot of contexts. Obviously it’s not like a one size fits all solution.

01:09:23 So what I gleaned from your Twitter is, you’re a mom. I saw a quote, a reference to baby bot.

01:09:32 What have you learned about robotics and AI from raising a human baby bot?

01:09:42 Well, I think that my child has made it more apparent to me that the systems we’re currently

01:09:48 creating aren’t like human intelligence. Like there’s not a lot to compare there.

01:09:54 It’s just, he has learned and developed in such a different way than a lot of the AI systems

01:09:59 we’re creating that that’s not really interesting to me to compare. But what is interesting to me

01:10:07 is how these systems are going to shape the world that he grows up in. And so I’m like even more

01:10:13 concerned about kind of the societal effects of developing systems that, you know, rely on

01:10:19 massive amounts of data collection, for example. So is he going to be allowed to use like Facebook or

01:10:26 Facebook? Facebook is over. Kids don’t use that anymore. Snapchat. What do they use? Instagram?

01:10:33 Snapchat’s over too. I don’t know. I just heard that TikTok is over, which I’ve never even seen.

01:10:38 So I don’t know. No. We’re old. We don’t know. I need to, I’m going to start gaming and streaming

01:10:44 my gameplay. So what do you see as the future of personal robotics, social robotics, interaction

01:10:52 with other robots? Like what are you excited about if you were to sort of philosophize about what

01:10:58 might happen in the next five, 10 years that would be cool to see? Oh, I really hope that we get kind

01:11:05 of a home robot that’s a social robot and not just Alexa. Like it’s, you know,

01:11:12 I really love the Anki products. I thought Jibo had some really great aspects. So I’m hoping

01:11:19 that a company cracks that. Me too. So Kate, it was a wonderful talking to you today. Likewise.

01:11:26 Thank you so much. It was fun. Thanks for listening to this conversation with Kate Darling.

01:11:32 And thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the

01:11:37 podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at

01:11:45 expressvpn.com slash LexPod. If you enjoy this podcast, subscribe on YouTube, review it with

01:11:52 five stars on Apple podcast, support it on Patreon, or simply connect with me on Twitter

01:11:57 at Lex Friedman. And now let me leave you with some tweets from Kate Darling. First tweet is

01:12:05 the pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of

01:12:11 the cereal bowl. Second tweet is I came on here to complain that I had a really bad day and saw that

01:12:19 a bunch of you are hurting too. Love to everyone. Thank you for listening. I hope to see you next

01:12:26 time.