Transcript
00:00:00 The following is a conversation with Ayanna Howard.
00:00:03 She’s a roboticist, a professor at Georgia Tech,
00:00:06 and director of the Human Automation Systems Lab,
00:00:09 with research interests in human robot interaction,
00:00:12 assistive robots in the home, therapy gaming apps,
00:00:15 and remote robotic exploration of extreme environments.
00:00:20 Like me, in her work, she cares a lot
00:00:23 about both robots and human beings,
00:00:26 and so I really enjoyed this conversation.
00:00:29 This is the Artificial Intelligence Podcast.
00:00:32 If you enjoy it, subscribe on YouTube,
00:00:34 give it five stars on Apple Podcast,
00:00:36 follow on Spotify, support it on Patreon,
00:00:39 or simply connect with me on Twitter
00:00:41 at Lex Fridman, spelled F R I D M A N.
00:00:45 I recently started doing ads
00:00:47 at the end of the introduction.
00:00:48 I’ll do one or two minutes after introducing the episode,
00:00:51 and never any ads in the middle
00:00:53 that can break the flow of the conversation.
00:00:55 I hope that works for you
00:00:56 and doesn’t hurt the listening experience.
00:01:00 This show is presented by Cash App,
00:01:02 the number one finance app in the App Store.
00:01:04 I personally use Cash App to send money to friends,
00:01:07 but you can also use it to buy, sell,
00:01:09 and deposit Bitcoin in just seconds.
00:01:11 Cash App also has a new investing feature.
00:01:14 You can buy fractions of a stock, say $1 worth,
00:01:17 no matter what the stock price is.
00:01:19 Broker services are provided by Cash App Investing,
00:01:22 a subsidiary of Square and Member SIPC.
00:01:25 I’m excited to be working with Cash App
00:01:28 to support one of my favorite organizations called FIRST,
00:01:31 best known for their FIRST Robotics and Lego competitions.
00:01:35 They educate and inspire hundreds of thousands of students
00:01:38 in over 110 countries,
00:01:40 and have a perfect rating at Charity Navigator,
00:01:42 which means that donated money
00:01:44 is used to maximum effectiveness.
00:01:46 When you get Cash App from the App Store or Google Play
00:01:49 and use code LEXPODCAST, you’ll get $10,
00:01:53 and Cash App will also donate $10 to FIRST,
00:01:56 which again, is an organization
00:01:58 that I’ve personally seen inspire girls and boys
00:02:01 to dream of engineering a better world.
00:02:04 And now, here’s my conversation with Ayanna Howard.
00:02:09 What or who is the most amazing robot you’ve ever met,
00:02:13 or perhaps had the biggest impact on your career?
00:02:16 I haven’t met her, but I grew up with her,
00:02:21 but of course, Rosie.
00:02:22 So, and I think it’s because also.
00:02:25 Who’s Rosie?
00:02:26 Rosie from the Jetsons.
00:02:27 She is all things to all people, right?
00:02:30 Think about it.
00:02:31 Like anything you wanted, it was like magic, it happened.
00:02:35 So people not only anthropomorphize,
00:02:37 but project whatever they wish for the robot to be onto.
00:02:41 Onto Rosie.
00:02:42 But also, I mean, think about it.
00:02:44 She was socially engaging.
00:02:46 She every so often had an attitude, right?
00:02:50 She kept us honest.
00:02:51 She would push back sometimes
00:02:53 when George was doing some weird stuff.
00:02:56 But she cared about people, especially the kids.
00:03:01 She was like the perfect robot.
00:03:03 And you’ve said that people don’t want
00:03:06 their robots to be perfect.
00:03:09 Can you elaborate on that?
00:03:11 What do you think that is?
00:03:11 Just like you said, Rosie pushed back a little bit
00:03:14 every once in a while.
00:03:15 Yeah, so I think it’s that.
00:03:18 So if you think about robotics in general,
00:03:19 we want them because they enhance our quality of life.
00:03:23 And usually that’s linked to something that’s functional.
00:03:27 Even if you think of self driving cars,
00:03:28 why is there a fascination?
00:03:29 Because people really do hate to drive.
00:03:31 Like there’s the like Saturday driving
00:03:34 where I can just speed,
00:03:35 but then there’s the I have to go to work every day
00:03:37 and I’m in traffic for an hour.
00:03:38 I mean, people really hate that.
00:03:40 And so robots are designed to basically enhance
00:03:45 our ability to increase our quality of life.
00:03:49 And so the perfection comes from this aspect of interaction.
00:03:55 If I think about how we drive, if we drove perfectly,
00:04:00 we would never get anywhere, right?
00:04:02 So think about how many times you had to run past the light
00:04:07 because you see the car behind you
00:04:09 is about to crash into you.
00:04:10 Or that little kid kind of runs into the street
00:04:15 and so you have to cross on the other side
00:04:17 because there’s no cars, right?
00:04:18 Like if you think about it, we are not perfect drivers.
00:04:21 Some of it is because it’s our world.
00:04:23 And so if you have a robot that is perfect
00:04:26 in that sense of the word,
00:04:28 they wouldn’t really be able to function with us.
00:04:31 Can you linger a little bit on the word perfection?
00:04:34 So from the robotics perspective,
00:04:37 what does that word mean
00:04:39 and how is sort of the optimal behavior
00:04:42 as you’re describing different
00:04:44 than what we think is perfection?
00:04:46 Yeah, so perfection, if you think about it
00:04:49 in the more theoretical point of view,
00:04:51 it’s really tied to accuracy, right?
00:04:54 So if I have a function,
00:04:55 can I complete it at 100% accuracy with zero errors?
00:05:00 And so that’s kind of, if you think about perfection
00:05:04 in the sense of the word.
00:05:05 And in the self driving car realm,
00:05:07 do you think from a robotics perspective,
00:05:10 we kind of think that perfection means
00:05:13 following the rules perfectly,
00:05:15 sort of defined as staying in the lane, changing lanes.
00:05:19 When there’s a green light, you go.
00:05:20 When there’s a red light, you stop.
00:05:22 And being able to perfectly see
00:05:26 all the entities in the scene.
00:05:29 That’s the limit of what we think of as perfection.
00:05:31 And I think that’s where the problem comes
00:05:33 is that when people think about perfection for robotics,
00:05:38 the ones that are the most successful
00:05:40 are the ones that are quote unquote perfect.
00:05:43 Like I said, Rosie is perfect,
00:05:44 but she actually wasn’t perfect in terms of accuracy,
00:05:47 but she was perfect in terms of how she interacted
00:05:50 and how she adapted.
00:05:51 And I think that’s some of the disconnect
00:05:53 is that we really want perfection
00:05:56 with respect to its ability to adapt to us.
00:05:59 We don’t really want perfection with respect to 100% accuracy
00:06:03 with respect to the rules that we just made up anyway, right?
00:06:06 And so I think there’s this disconnect sometimes
00:06:09 between what we really want and what happens.
00:06:13 And we see this all the time, like in my research, right?
00:06:15 Like the optimal, quote unquote optimal interactions
00:06:20 are when the robot is adapting based on the person,
00:06:24 not 100% following what’s optimal based on the rules.
00:06:29 Just to linger on autonomous vehicles for a second,
00:06:32 just your thoughts, maybe off the top of the head,
00:06:36 how hard is that problem do you think
00:06:37 based on what we just talked about?
00:06:40 There’s a lot of folks in the automotive industry,
00:06:42 they’re very confident from Elon Musk to Waymo
00:06:45 to all these companies.
00:06:47 How hard is it to solve that last piece?
00:06:50 The last mile.
00:06:51 The gap between the perfection and the human definition
00:06:57 of how you actually function in this world.
00:06:59 Yeah, so this is a moving target.
00:07:00 So I remember when all the big companies
00:07:04 started to heavily invest in this
00:07:06 and there was a number of even roboticists
00:07:09 as well as folks who were putting in the VCs
00:07:13 and corporations, Elon Musk being one of them that said,
00:07:16 self driving cars on the road with people
00:07:19 within five years, that was a little while ago.
00:07:24 And now people are saying five years, 10 years, 20 years,
00:07:29 some are saying never, right?
00:07:31 I think if you look at some of the things
00:07:33 that are being successful, it’s these
00:07:39 basically fixed environments
00:07:41 where you still have some anomalies, right?
00:07:43 You still have people walking, you still have stores,
00:07:46 but you don’t have other drivers, right?
00:07:50 Like other human drivers are,
00:07:51 it’s a dedicated space for the cars.
00:07:55 Because if you think about robotics in general,
00:07:57 where has always been successful?
00:07:59 I mean, you can say manufacturing,
00:08:00 like way back in the day, right?
00:08:02 It was a fixed environment, humans were not part
00:08:04 of the equation, we’re a lot better than that.
00:08:07 But like when we can carve out scenarios
00:08:10 that are closer to that space,
00:08:13 then I think that it’s where we are.
00:08:16 So a closed campus where you don’t have human-driven cars
00:08:20 and maybe some protection so that the students
00:08:23 don’t jet in front just because they wanna see what happens.
00:08:27 Like having a little bit, I think that’s where
00:08:29 we’re gonna see the most success in the near future.
00:08:32 And be slow moving.
00:08:33 Right, not 55, 60, 70 miles an hour,
00:08:37 but the speed of a golf cart, right?
00:08:42 So that said, the most successful robots
00:08:45 in the automotive industry operating today
00:08:47 in the hands of real people are ones that are traveling
00:08:51 over 55 miles an hour and in unconstrained environments,
00:08:55 which is Tesla vehicles, so Tesla autopilot.
00:08:58 So I would love to hear sort of your,
00:09:01 just thoughts of two things.
00:09:04 So one, I don’t know if you’ve gotten to see,
00:09:07 you’ve heard about something called smart summon
00:09:10 where the Tesla system, the autopilot system,
00:09:13 where the car drives with zero occupancy, no driver,
00:09:17 in the parking lot slowly sort of tries to navigate
00:09:19 the parking lot to find its way to you.
00:09:22 And there’s some incredible amounts of videos
00:09:25 and just hilarity that happens as it awkwardly tries
00:09:28 to navigate this environment, but it’s a beautiful
00:09:32 nonverbal communication between machine and human
00:09:35 that I think is a, it’s like, it’s some of the work
00:09:38 that you do in this kind of interesting
00:09:40 human robot interaction space.
00:09:42 So what are your thoughts in general about it?
00:09:43 So I do have that feature.
00:09:46 Do you drive a Tesla?
00:09:47 I do, mainly because I’m a gadget freak, right?
00:09:52 So I say it’s a gadget that happens to have some wheels.
00:09:55 And yeah, I’ve seen some of the videos.
00:09:58 But what’s your experience like?
00:09:59 I mean, you’re a human robot interaction roboticist,
00:10:02 you’re a legit sort of expert in the field.
00:10:05 So what does it feel for a machine to come to you?
00:10:08 It’s one of these very fascinating things,
00:10:11 but also I am hyper, hyper alert, right?
00:10:16 Like I’m hyper alert, like my butt, my thumb is like,
00:10:20 oh, okay, I’m ready to take over.
00:10:23 Even when I’m in my car or I’m doing things like automated
00:10:27 backing into, so there’s like a feature where you can do
00:10:30 this automated backing into a parking space,
00:10:33 or bring the car out of your garage,
00:10:35 or even, you know, pseudo autopilot on the freeway, right?
00:10:40 I am hypersensitive.
00:10:42 I can feel like as I’m navigating,
00:10:44 like, yeah, that’s an error right there.
00:10:46 Like I am very aware of it, but I’m also fascinated by it.
00:10:52 And it does get better.
00:10:54 Like I look and see it’s learning from all of these people
00:10:58 who are cutting it on, like every time I cut it on,
00:11:02 it’s getting better, right?
00:11:04 And so I think that’s what’s amazing about it is that.
00:11:07 This nice dance of you’re still hyper vigilant.
00:11:10 So you’re still not trusting it at all.
00:11:12 Yeah.
00:11:13 And yet you’re using it.
00:11:14 On the highway, if I were to, like what,
00:11:17 as a roboticist, we’ll talk about trust a little bit.
00:11:22 How do you explain that?
00:11:23 You still use it.
00:11:25 Is it the gadget freak part?
00:11:26 Like where you just enjoy exploring technology?
00:11:30 Or is that the right actually balance
00:11:33 between robotics and humans is where you use it,
00:11:36 but don’t trust it.
00:11:38 And somehow there’s this dance
00:11:40 that ultimately is a positive.
00:11:42 Yeah, so I think I’m,
00:11:44 I just don’t necessarily trust technology,
00:11:48 but I’m an early adopter, right?
00:11:50 So when it first comes out,
00:11:51 I will use everything,
00:11:54 but I will be very, very cautious of how I use it.
00:11:57 Do you read about it or do you explore it by just try it?
00:12:01 Do you like crudely, to put it crudely,
00:12:04 do you read the manual or do you learn through exploration?
00:12:07 I’m an explorer.
00:12:08 If I have to read the manual, then, since I do design,
00:12:12 it’s a bad user interface.
00:12:14 It’s a failure.
00:12:16 Elon Musk is very confident that you kind of take it
00:12:19 from where it is now to full autonomy.
00:12:21 So from this human robot interaction,
00:12:24 where you don’t really trust and then you try
00:12:26 and then you catch it when it fails to,
00:12:29 it’s going to incrementally improve itself
00:12:32 into full where you don’t need to participate.
00:12:36 What’s your sense of that trajectory?
00:12:39 Is it feasible?
00:12:41 So the promise there is by the end of next year,
00:12:44 by the end of 2020 is the current promise.
00:12:47 What’s your sense about that journey that Tesla’s on?
00:12:53 So there’s kind of three things going on though.
00:12:56 I think in terms of will people go like as a user,
00:13:03 as an adopter, will you trust going to that point?
00:13:08 I think so, right?
00:13:10 Like there are some users and it’s because what happens is
00:13:13 when you’re hypersensitive at the beginning
00:13:16 and then the technology tends to work,
00:13:19 your apprehension slowly goes away.
00:13:23 And as people, we tend to swing to the other extreme, right?
00:13:28 Because it’s like, oh, I was like hyper, hyper fearful
00:13:30 or hypersensitive and it was awesome.
00:13:33 And we just tend to swing.
00:13:35 That’s just human nature.
00:13:37 And so you will have, I mean, and I…
00:13:38 That’s a scary notion because most people
00:13:41 are now extremely untrusting of autopilot.
00:13:44 They use it, but they don’t trust it.
00:13:46 And it’s a scary notion that there’s a certain point
00:13:48 where you allow yourself to look at the smartphone
00:13:51 for like 20 seconds.
00:13:53 And then there’ll be this phase shift
00:13:55 where it’ll be like 20 seconds, 30 seconds,
00:13:57 one minute, two minutes.
00:13:59 It’s a scary proposition.
00:14:02 But that’s people, right?
00:14:03 That’s just, that’s humans.
00:14:05 I mean, I think of even our use of,
00:14:09 I mean, just everything on the internet, right?
00:14:12 Like think about how reliant we are on certain apps
00:14:16 and certain engines, right?
00:14:20 20 years ago, people would have been like, oh yeah, that’s stupid.
00:14:22 Like that makes no sense.
00:14:23 Like, of course that’s false.
00:14:25 Like now it’s just like, oh, of course I’ve been using it.
00:14:29 It’s been correct all this time.
00:14:30 Of course aliens, I didn’t think they existed,
00:14:34 but now it says they do, obviously.
00:14:37 100%, earth is flat.
00:14:39 So, okay, but you said three things.
00:14:43 So one is the human.
00:14:44 Okay, so one is the human.
00:14:45 And I think there will be a group of individuals
00:14:47 that will swing, right?
00:14:49 I just.
00:14:50 Teenagers.
00:14:51 Teenage, I mean, it’ll be, it’ll be adults.
00:14:54 There’s actually an age demographic
00:14:56 that’s optimal for technology adoption.
00:15:00 And you can actually find them.
00:15:02 And they’re actually pretty easy to find.
00:15:03 Just based on their habits, based on,
00:15:06 so if someone like me who wasn’t a roboticist
00:15:10 would probably be the optimal kind of person, right?
00:15:13 Early adopter, okay with technology,
00:15:15 very comfortable and not hypersensitive, right?
00:15:20 I’m just hypersensitive cause I designed this stuff.
00:15:23 So there is a target demographic that will swing.
00:15:25 The other one though,
00:15:26 is you still have these humans that are on the road.
00:15:31 That one is a harder, harder thing to do.
00:15:35 And as long as we have people that are on the same streets,
00:15:40 that’s gonna be the big issue.
00:15:42 And it’s just because you can’t possibly,
00:15:45 I wanna say you can’t possibly map the,
00:15:48 some of the silliness of human drivers, right?
00:15:51 Like as an example, when you’re next to that car
00:15:56 that has that big sticker called student driver, right?
00:15:59 Like you are like, oh, either I’m going to like go around.
00:16:04 Like we are, we know that that person
00:16:06 is just gonna make mistakes that make no sense, right?
00:16:09 How do you map that information?
00:16:11 Or if I am in a car and I look over
00:16:14 and I see two fairly young looking individuals
00:16:19 and there’s no student driver bumper
00:16:21 and I see them chit chatting to each other,
00:16:22 I’m like, oh, that’s an issue, right?
00:16:26 So how do you get that kind of information
00:16:28 and that experience into basically an autopilot?
00:16:35 And there’s millions of cases like that
00:16:37 where we take little hints to establish context.
00:16:41 I mean, you said kind of beautifully poetic human things,
00:16:44 but there’s probably subtle things about the environment
00:16:47 about it being maybe time for commuters
00:16:52 to start going home from work
00:16:55 and therefore you can make some kind of judgment
00:16:57 about the group behavior of pedestrians, blah, blah, blah,
00:17:00 and so on and so on.
00:17:01 Or even cities, right?
00:17:02 Like if you’re in Boston, how people cross the street,
00:17:07 like lights are not an issue versus other places
00:17:10 where people will actually wait for the crosswalk.
00:17:15 Seattle or somewhere peaceful.
00:17:18 But what I’ve also seen sort of just even in Boston
00:17:22 that intersection to intersection is different.
00:17:25 So every intersection has a personality of its own.
00:17:28 So certain neighborhoods of Boston are different.
00:17:30 So we kind of, and based on different timing of day,
00:17:35 at night, it’s all, there’s a dynamic to human behavior
00:17:40 that we kind of figure out ourselves.
00:17:42 We’re not able to introspect and figure it out,
00:17:46 but somehow our brain learns it.
00:17:49 We do.
00:17:50 And so you’re saying, is there a shortcut?
00:17:54 Is there a shortcut, though, for a robot?
00:17:56 Is there something that could be done, you think,
00:17:59 that, you know, that’s what we humans do.
00:18:02 It’s just like bird flight, right?
00:18:04 That’s the example they give for flight.
00:18:06 Do you necessarily need to build a bird that flies
00:18:09 or can you do an airplane?
00:18:11 Is there a shortcut to it?
00:18:13 So I think the shortcut is, and I kind of,
00:18:16 I talk about it as a fixed space,
00:18:19 where, so imagine that there’s a neighborhood
00:18:23 that’s a new smart city or a new neighborhood
00:18:26 that says, you know what?
00:18:27 We are going to design this new city
00:18:31 based on supporting self driving cars.
00:18:33 And then doing things, knowing that there’s anomalies,
00:18:37 knowing that people are like this, right?
00:18:39 And designing it based on that assumption
00:18:42 that like, we’re gonna have this.
00:18:43 That would be an example of a shortcut.
00:18:45 So you still have people,
00:18:47 but you do very specific things
00:18:49 to try to minimize the noise a little bit
00:18:51 as an example.
00:18:53 And the people themselves become accepting of the notion
00:18:56 that there’s autonomous cars, right?
00:18:57 Right, like they move into,
00:18:59 so right now you have like a,
00:19:01 you will have a self selection bias, right?
00:19:03 Like individuals will move into this neighborhood
00:19:06 knowing like this is part of like the real estate pitch,
00:19:09 right?
00:19:10 And so I think that’s a way to do a shortcut.
00:19:14 One, it allows you to deploy.
00:19:17 It allows you to then collect data with these variances
00:19:21 and anomalies, cause people are still people,
00:19:24 but it’s a safer space and it’s more of an accepting space.
00:19:28 I.e. when something in that space might happen
00:19:31 because things do,
00:19:34 because you already have the self selection,
00:19:36 like people would be, I think a little more forgiving
00:19:39 than other places.
00:19:40 And you said three things, did we cover all of them?
00:19:43 The third is legal law, liability,
00:19:46 which I don’t really want to touch,
00:19:47 but it’s still of concern.
00:19:50 And the mishmash with policy as well,
00:19:53 sort of government, all that whole.
00:19:55 That big ball of stuff.
00:19:57 Yeah, gotcha.
00:19:59 So that’s, so we’re out of time now.
00:20:03 Do you think from a robotics perspective,
00:20:07 you know, if you’re kind of honest of what cars do,
00:20:09 they kind of threaten each other’s life all the time.
00:20:14 So cars are various.
00:20:17 I mean, in order to navigate intersections,
00:20:19 there’s an assertiveness, there’s a risk taking.
00:20:22 And if you were to reduce it to an objective function,
00:20:25 there’s a probability of murder in that function,
00:20:28 meaning you killing another human being
00:20:31 and you’re using that.
00:20:33 First of all, it has to be low enough
00:20:36 to be acceptable to you on an ethical level
00:20:39 as an individual human being,
00:20:41 but it has to be high enough for people to respect you
00:20:45 to not sort of take advantage of you completely
00:20:47 and jaywalk in front of you and so on.
00:20:49 So, I mean, I don’t think there’s a right answer here,
00:20:53 but what’s, how do we solve that?
00:20:56 How do we solve that from a robotics perspective
00:20:57 when danger and human life is at stake?
00:21:00 Yeah, as they say, cars don’t kill people,
00:21:01 people kill people.
00:21:02 People kill people.
00:21:05 Right.
00:21:07 So I think.
00:21:08 And now robotic algorithms would be killing people.
00:21:10 Right, so it will be robotics algorithms that are pro,
00:21:14 no, it will be robotic algorithms don’t kill people.
00:21:16 Developers of robotic algorithms kill people, right?
00:21:19 I mean, one of the things is people are still in the loop
00:21:22 and at least in the near and midterm,
00:21:26 I think people will still be in the loop at some point,
00:21:29 even if it’s a developer.
00:21:30 Like we’re not necessarily at the stage
00:21:31 where robots are programming autonomous robots
00:21:36 with different behaviors quite yet.
00:21:39 It’s a scary notion, sorry to interrupt,
00:21:42 that a developer has some responsibility
00:21:47 in the death of a human being.
00:21:49 That’s a heavy burden.
00:21:50 I mean, I think that’s why the whole aspect of ethics
00:21:55 in our community is so, so important, right?
00:21:58 Like, because it’s true.
00:22:00 If you think about it, you can basically say,
00:22:04 I’m not going to work on weaponized AI, right?
00:22:07 Like people can say, that’s not what I’m gonna do.
00:22:09 But yet you are programming algorithms
00:22:12 that might be used in healthcare, algorithms
00:22:15 that might decide whether this person
00:22:17 should get this medication or not.
00:22:18 And they don’t and they die.
00:22:21 Okay, so that is your responsibility, right?
00:22:25 And if you’re not conscious and aware
00:22:27 that you do have that power when you’re coding
00:22:30 and things like that, I think that’s just not a good thing.
00:22:35 Like we need to think about this responsibility
00:22:38 as we program robots and computing devices
00:22:41 much more than we are.
00:22:44 Yeah, so it’s not an option to not think about ethics.
00:22:46 I think it’s a majority, I would say, of computer science.
00:22:51 Sort of, it’s kind of a hot topic now,
00:22:53 I think about bias and so on, but it’s,
00:22:56 and we’ll talk about it, but usually it’s kind of,
00:23:00 it’s like a very particular group of people
00:23:02 that work on that.
00:23:04 And then people who do like robotics are like,
00:23:06 well, I don’t have to think about that.
00:23:09 There’s other smart people thinking about it.
00:23:11 It seems that everybody has to think about it.
00:23:14 It’s not, you can’t escape the ethics,
00:23:17 whether it’s bias or just every aspect of ethics
00:23:21 that has to do with human beings.
00:23:22 Everyone.
00:23:23 So think about, I’m gonna age myself,
00:23:25 but I remember when we didn’t have like testers, right?
00:23:30 And so what did you do?
00:23:31 As a developer, you had to test your own code, right?
00:23:33 Like you had to go through all the cases and figure it out
00:23:36 and then they realized that,
00:23:39 we probably need to have testing
00:23:40 because we’re not getting all the things.
00:23:42 And so from there, what happens is like most developers,
00:23:45 they do a little bit of testing, but it’s usually like,
00:23:48 okay, did my compiler bug out?
00:23:49 Let me look at the warnings.
00:23:51 Okay, is that acceptable or not, right?
00:23:53 Like that’s how you typically think about as a developer
00:23:55 and you’ll just assume that it’s going to go
00:23:58 to another process and they’re gonna test it out.
00:24:01 But I think we need to go back to those early days
00:24:04 when you’re a developer, you’re developing,
00:24:07 there should be this, say,
00:24:09 okay, let me look at the ethical outcomes of this,
00:24:12 because there isn’t a second stage of testing, ethical testers,
00:24:16 right, it’s you.
00:24:18 We did it back in the early coding days.
00:24:21 I think that’s where we are with respect to ethics.
00:24:23 Like let’s go back to what was good practices
00:24:26 and only because we were just developing the field.
00:24:30 Yeah, and it’s a really heavy burden.
00:24:33 I’ve had to feel it recently in the last few months,
00:24:37 but I think it’s a good one to feel.
00:24:39 I’ve gotten a message, more than one, from people.
00:24:43 You know, I’ve unfortunately gotten some attention recently
00:24:47 and I’ve gotten messages that say that
00:24:50 I have blood on my hands
00:24:52 because of working on semi autonomous vehicles.
00:24:56 So the idea that you have semi autonomy means
00:24:59 people will become, will lose vigilance and so on.
00:25:02 That’s just humans being humans, as we described.
00:25:05 And because of that, because of this idea
00:25:08 that we’re creating automation,
00:25:10 there’ll be people hurt because of it.
00:25:12 And I think that’s a beautiful thing.
00:25:14 I mean, it’s, you know, there’s many nights
00:25:16 where I wasn’t able to sleep because of this notion.
00:25:18 You know, you really do think about people that might die
00:25:22 because of this technology.
00:25:23 Of course, you can then start rationalizing saying,
00:25:26 well, you know what, 40,000 people die in the United States
00:25:29 every year and ultimately we’re trying to save lives.
00:25:32 But the reality is your code you’ve written
00:25:35 might kill somebody.
00:25:36 And that’s an important burden to carry with you
00:25:38 as you design the code.
00:25:41 I don’t even think of it as a burden
00:25:43 if we train this concept correctly from the beginning.
00:25:47 And I use, and not to say that coding is like
00:25:50 being a medical doctor, but think about it.
00:25:52 Medical doctors, if they’ve been in situations
00:25:56 where their patient didn’t survive, right?
00:25:58 Do they give up and go away?
00:26:00 No, every time they come in,
00:26:02 they know that there might be a possibility
00:26:05 that this patient might not survive.
00:26:07 And so when they approach every decision,
00:26:10 like that’s in the back of their head.
00:26:11 And so why is it that we aren’t teaching this,
00:26:15 and those are tools though, right?
00:26:17 They are given some of the tools to address that
00:26:19 so that they don’t go crazy.
00:26:21 But we don’t give those tools
00:26:24 so that it does feel like a burden
00:26:26 versus something of I have a great gift
00:26:28 and I can do great, awesome good,
00:26:31 but with it comes great responsibility.
00:26:33 I mean, that’s what we teach in terms of
00:26:35 if you think about the medical schools, right?
00:26:37 Great gift, great responsibility.
00:26:39 I think if we just change the messaging a little,
00:26:42 great gift, being a developer, great responsibility.
00:26:45 And this is how you combine those.
00:26:48 But do you think, I mean, this is really interesting.
00:26:52 It’s outside, I actually have no friends
00:26:54 who are sort of surgeons or doctors.
00:26:58 I mean, what does it feel like
00:27:00 to make a mistake in a surgery and somebody to die
00:27:03 because of that?
00:27:04 Like, is that something you could be taught
00:27:07 in medical school, sort of how to be accepting of that risk?
00:27:10 So, because I do a lot of work with healthcare robotics,
00:27:14 I have not lost a patient, for example.
00:27:18 The first one’s always the hardest, right?
00:27:20 But they really teach the value, right?
00:27:27 So, they teach responsibility,
00:27:28 but they also teach the value.
00:27:30 Like, you’re saving 40,000,
00:27:34 but in order to really feel good about that,
00:27:38 when you come to a decision,
00:27:40 you have to be able to say at the end,
00:27:42 I did all that I could possibly do, right?
00:27:45 Versus a, well, I just picked the first widget, right?
00:27:49 Like, so every decision is actually thought through.
00:27:52 It’s not a habit, it’s not a,
00:27:53 let me just take the best algorithm
00:27:55 that my friend gave me, right?
00:27:57 It’s a, is this it, is this the best?
00:27:59 Have I done my best to do good, right?
00:28:03 And so…
00:28:03 You’re right, and I think burden is the wrong word.
00:28:06 It’s a gift, but you have to treat it extremely seriously.
00:28:10 Correct.
00:28:13 So, on a slightly related note,
00:28:15 in a recent paper,
00:28:16 The Ugly Truth About Ourselves and Our Robot Creations,
00:28:20 you discuss, you highlight some biases
00:28:24 that may affect the function of various robotic systems.
00:28:27 Can you talk through, if you remember, examples of some?
00:28:30 There’s a lot of examples.
00:28:31 I usually… What is bias, first of all?
00:28:33 Yeah, so bias is this,
00:28:37 and so bias, which is different than prejudice.
00:28:38 So, bias is that we all have these preconceived notions
00:28:41 about particular, everything from particular groups
00:28:45 to habits to identity, right?
00:28:49 So, we have these predispositions,
00:28:51 and so when we address a problem,
00:28:54 we look at a problem and make a decision,
00:28:56 those preconceived notions might affect our outputs,
00:29:01 our outcomes.
00:29:02 So, there the bias can be positive and negative,
00:29:04 and then is prejudice the negative kind of bias?
00:29:07 Prejudice is the negative, right?
00:29:09 So, prejudice is that not only are you aware of your bias,
00:29:13 but you then take it and have a negative outcome,
00:29:18 even though you’re aware, like…
00:29:20 And there could be gray areas too.
00:29:22 There’s always gray areas.
00:29:24 That’s the challenging aspect of all ethical questions.
00:29:27 So, I always like…
00:29:28 So, there’s a funny one,
00:29:30 and in fact, I think it might be in the paper,
00:29:31 because I think I talk about self driving cars,
00:29:34 but think about this.
00:29:35 We, for teenagers, right?
00:29:39 Typically, insurance companies charge quite a bit of money
00:29:44 if you have a teenage driver.
00:29:46 So, you could say that’s an age bias, right?
00:29:50 But no one will claim…
00:29:52 I mean, parents will be grumpy,
00:29:54 but no one really says that that’s not fair.
00:29:58 That’s interesting.
00:29:59 We don’t…
00:30:00 That’s right, that’s right.
00:30:01 It’s everybody in human factors and safety research almost…
00:30:06 I mean, it’s quite ruthlessly critical of teenagers.
00:30:12 And we don’t question, is that okay?
00:30:15 Is that okay to be ageist in this kind of way?
00:30:17 It is, and it is ageist, right?
00:30:18 It’s definitely ageist, there’s no question about it.
00:30:20 And so, this is the gray area, right?
00:30:24 Because you know that teenagers are more likely
00:30:29 to be in accidents,
00:30:30 and so, there’s actually some data to it.
00:30:33 But then, if you take that same example,
00:30:34 and you say, well, I’m going to make the insurance higher
00:30:39 for an area of Boston,
00:30:43 because there’s a lot of accidents.
00:30:45 And then, they find out that that’s correlated
00:30:48 with socioeconomics.
00:30:50 Well, then it becomes a problem, right?
00:30:52 Like, that is not acceptable,
00:30:55 but yet, the teenager one, which is based on age,
00:30:58 which is ageist, is acceptable, right?
00:31:01 We figure that out as a society by having conversations,
00:31:05 by having discourse.
00:31:06 I mean, throughout history,
00:31:07 the definition of what is ethical or not has changed,
00:31:11 and hopefully, always for the better.
00:31:14 Correct, correct.
00:31:15 So, in terms of bias or prejudice in algorithms,
00:31:22 what examples do you sometimes think about?
00:31:25 So, I think about quite a bit the medical domain,
00:31:28 just because historically, right?
00:31:31 The healthcare domain has had these biases,
00:31:34 typically based on gender and ethnicity, primarily.
00:31:40 A little in age, but not so much.
00:31:43 Historically, if you think about FDA and drug trials,
00:31:49 it’s harder to find women who aren’t of childbearing age,
00:31:54 and so you may not test drugs on them at the same level.
00:31:56 Right, so there’s these things.
00:31:58 And so, if you think about robotics, right?
00:32:02 Something as simple as,
00:32:04 I’d like to design an exoskeleton, right?
00:32:07 What should the material be?
00:32:09 What should the weight be?
00:32:10 What should the form factor be?
00:32:14 Who are you gonna design it around?
00:32:16 I will say that in the US,
00:32:19 women’s average height and weight
00:32:21 is slightly different than guys.
00:32:23 So, who are you gonna choose?
00:32:25 Like, if you’re not thinking about it from the beginning,
00:32:28 as, okay, when I design this and I look at the algorithms
00:32:33 and I design the control system and the forces
00:32:35 and the torques, if you’re not thinking about,
00:32:38 well, you have different types of body structure,
00:32:41 you’re gonna design to what you’re used to.
00:32:44 Oh, this fits all the folks in my lab, right?
00:32:48 So, think about it from the very beginning is important.
00:32:51 What about sort of algorithms that train on data
00:32:54 kind of thing?
00:32:55 Sadly, our society already has a lot of negative bias.
00:33:01 And so, if we collect a lot of data,
00:33:04 even if it’s in a balanced way,
00:33:06 that’s going to contain the same bias
00:33:07 that our society contains.
00:33:08 And so, yeah, is there things there that bother you?
00:33:13 Yeah, so you actually said something.
00:33:15 You had said how we have biases,
00:33:19 but hopefully we learn from them and we become better, right?
00:33:22 And so, that’s where we are now, right?
00:33:24 So, the data that we’re collecting is historic.
00:33:28 So, it’s based on these things
00:33:29 when we knew it was bad to discriminate,
00:33:32 but that’s the data we have and we’re trying to fix it now,
00:33:35 but we’re fixing it based on the data
00:33:37 that was used in the first place.
00:33:39 Fix it in post.
00:33:40 Right, and so the decisions,
00:33:43 and you can look at everything from the whole aspect
00:33:46 of predictive policing, criminal recidivism.
00:33:51 There was a recent paper that had the healthcare algorithms,
00:33:54 which had kind of a sensational title.
00:33:58 I’m not pro sensationalism in titles,
00:34:00 but again, you read it, right?
00:34:03 So, it makes you read it,
00:34:05 but I’m like, really?
00:34:06 Like, ugh, you could have.
00:34:08 What’s the topic of the sensationalism?
00:34:10 I mean, what’s underneath it?
00:34:13 What’s, if you could sort of educate me
00:34:16 on what kind of bias creeps into the healthcare space.
00:34:18 Yeah, so.
00:34:19 I mean, you already kind of mentioned.
00:34:21 Yeah, so this one was the headline was
00:34:24 racist AI algorithms.
00:34:27 Okay, like, okay, that’s totally a clickbait title.
00:34:30 And so you looked at it and so there was data
00:34:34 that these researchers had collected.
00:34:36 I believe, I wanna say it was either Science or Nature.
00:34:39 It just was just published,
00:34:40 but they didn’t have a sensational title.
00:34:42 It was like the media.
00:34:44 And so they had looked at demographics,
00:34:47 I believe, between black and white women, right?
00:34:51 And they showed that there was a discrepancy
00:34:56 in the outcomes, right?
00:34:58 And so, and it was tied to ethnicity, tied to race.
00:35:02 The piece that the researchers did
00:35:04 actually went through the whole analysis, but of course.
00:35:08 I mean, the journalists with AI are problematic
00:35:11 across the board, let’s say.
00:35:14 And so this is a problem, right?
00:35:15 And so there’s this thing about,
00:35:18 oh, AI, it has all these problems.
00:35:20 We’re doing it on historical data
00:35:22 and the outcomes are uneven based on gender
00:35:25 or ethnicity or age.
00:34:27 But what I am always saying is like, yes,
00:35:30 we need to do better, right?
00:35:32 We need to do better.
00:35:33 It is our duty to do better.
00:35:36 But the worst AI is still better than us.
00:35:39 Like, you take the best of us
00:35:41 and we’re still worse than the worst AI,
00:35:44 at least in terms of these things.
00:35:45 And that’s actually not discussed, right?
00:35:47 And so I think, and that’s why the sensational title, right?
00:35:51 And so it’s like, so then you can have individuals go like,
00:35:54 oh, we don’t need to use this AI.
00:35:55 I’m like, oh, no, no, no, no.
00:35:56 I want the AI instead of the doctors
00:36:00 that provided that data,
00:36:01 because it’s still better than that, right?
00:36:04 I think that’s really important to linger on,
00:36:06 is the idea that this AI is racist.
00:36:10 It’s like, well, compared to what?
00:36:14 Sort of, I think we set, unfortunately,
00:36:20 way too high of a bar for AI algorithms.
00:36:23 And in the ethical space where perfect is,
00:36:25 I would argue, probably impossible.
00:36:28 Then if we set the bar of perfection, essentially,
00:36:33 of it has to be perfectly fair, whatever that means,
00:36:37 it means we’re setting it up for failure.
00:36:39 But that’s really important to say what you just said,
00:36:41 which is, well, it’s still better than us.
00:36:44 And one of the things I think
00:36:46 that we don’t get enough credit for,
00:36:50 just in terms of as developers,
00:36:52 is that you can now poke at it, right?
00:36:55 So it’s harder to say, is this hospital,
00:36:58 is this city doing something, right?
00:37:01 Until someone brings in a civil case, right?
00:37:04 Well, with AI, it can process through all this data
00:37:07 and say, hey, yes, there was an issue here,
00:37:12 but here it is, we’ve identified it,
00:37:14 and then the next step is to fix it.
00:37:16 I mean, that’s a nice feedback loop
00:37:18 versus waiting for someone to sue someone else
00:37:21 before it’s fixed, right?
00:37:22 And so I think that power,
00:37:25 we need to capitalize on a little bit more, right?
00:37:27 Instead of having the sensational titles,
00:37:29 have the, okay, this is a problem,
00:37:33 and this is how we’re fixing it,
00:37:34 and people are putting money to fix it
00:37:36 because we can make it better.
00:37:38 I look at like facial recognition,
00:37:40 how Joy, she basically called out a couple of companies
00:37:45 and said, hey, and most of them were like,
00:37:48 oh, embarrassment, and the next time it had been fixed,
00:37:53 right, it had been fixed better, right?
00:37:54 And then it was like, oh, here’s some more issues.
00:37:56 And I think that conversation then moves that needle
00:38:01 to having much more fair and unbiased and ethical aspects,
00:38:07 as long as both sides, the developers are willing to say,
00:38:10 okay, I hear you, yes, we are going to improve,
00:38:14 and you have other developers who are like,
00:38:16 hey, AI, it’s wrong, but I love it, right?
00:38:19 Yes, so speaking of this really nice notion
00:38:23 that AI is maybe flawed but better than humans,
00:38:26 so just made me think of it,
00:38:29 one example of flawed humans is our political system.
00:38:34 Do you think, or you said judicial as well,
00:38:38 do you have a hope for AI sort of being elected
00:38:46 for president or running our Congress
00:38:49 or being able to be a powerful representative of the people?
00:38:53 So I mentioned, and I truly believe that this whole world
00:38:58 of AI is in partnerships with people.
00:39:01 And so what does that mean?
00:39:02 I don’t believe, or maybe I just don’t,
00:39:07 I don’t believe that we should have an AI for president,
00:39:11 but I do believe that a president
00:39:13 should use AI as an advisor, right?
00:39:15 Like, if you think about it,
00:39:17 every president has a cabinet of individuals
00:39:21 that have different expertise
00:39:23 that they should listen to, right?
00:39:26 Like, that’s kind of what we do.
00:39:27 And you put smart people with smart expertise
00:39:31 around certain issues, and you listen.
00:39:33 I don’t see why AI can’t function
00:39:35 as one of those smart individuals giving input.
00:39:39 So maybe there’s an AI on healthcare,
00:39:41 maybe there’s an AI on education and right,
00:39:43 like all of these things that a human is processing, right?
00:39:48 Because at the end of the day,
00:39:51 there’s people that are human
00:39:53 that are going to be at the end of the decision.
00:39:55 And I don’t think as a world, as a culture, as a society,
00:39:59 that we would totally, and this is us,
00:40:02 like this is some fallacy about us,
00:40:05 but we need to see that leader, that person as human.
00:40:11 And most people don’t realize
00:40:13 that like leaders have a whole lot of advice, right?
00:40:16 Like when they say something, it’s not that they woke up,
00:40:19 well, usually they don’t wake up in the morning
00:40:21 and be like, I have a brilliant idea, right?
00:40:24 It’s usually a, okay, let me listen.
00:40:26 I have a brilliant idea,
00:40:27 but let me get a little bit of feedback on this.
00:40:29 Like, okay.
00:40:30 And then it’s a, yeah, that was an awesome idea
00:40:33 or it’s like, yeah, let me go back.
00:40:35 We already talked through a bunch of them,
00:40:37 but are there some possible solutions
00:40:41 to the bias that’s present in our algorithms
00:40:45 beyond what we just talked about?
00:40:46 So I think there’s two paths.
00:40:49 One is to figure out how to systematically
00:40:53 do the feedback and corrections.
00:40:56 So right now it’s ad hoc, right?
00:40:57 It’s a researcher identifying some outcomes
00:41:02 that are not, don’t seem to be fair, right?
00:41:05 They publish it, they write about it.
00:41:07 And then either the developer or the companies
00:41:11 that have adopted the algorithms may try to fix it, right?
00:41:14 And so it’s really ad hoc and it’s not systematic.
00:41:18 There’s, it’s just, it’s kind of like,
00:41:21 I’m a researcher, that seems like an interesting problem,
00:41:24 which means that there’s a whole lot out there
00:41:26 that’s not being looked at, right?
00:41:28 Cause it’s kind of researcher driven.
00:41:32 And I don’t necessarily have a solution,
00:41:35 but that process I think could be done a little bit better.
00:41:41 One way is I’m going to poke a little bit
00:41:44 at some of the corporations, right?
00:41:48 Like maybe the corporations when they think
00:41:50 about a product, they should, instead of,
00:41:53 in addition to hiring these, you know, bug,
00:41:57 they give these.
00:41:59 Oh yeah, yeah, yeah.
00:42:01 Like awards when you find a bug.
00:42:02 Yeah, security bug, you know, let’s put it
00:42:06 like we will give the, whatever the award is
00:42:09 that we give for the people who find these security holes,
00:42:12 find an ethics hole, right?
00:42:13 Like find an unfairness hole
00:42:15 and we will pay you X for each one you find.
00:42:17 I mean, why can’t they do that?
00:42:19 One is a win win.
00:42:20 They show that they’re concerned about it,
00:42:22 that this is important and they don’t have
00:42:24 to necessarily dedicate their own, like, internal resources to it.
00:42:28 And it also means that everyone who has
00:42:30 like their own bias lens, like I’m interested in age.
00:42:34 And so I’ll find the ones based on age
00:42:36 and I’m interested in gender and right,
00:42:38 which means that you get like all
00:42:39 of these different perspectives.
00:42:41 But you think of it in a data driven way.
00:42:43 So like sort of, if we look at a company like Twitter,
00:42:48 it gets, it’s under a lot of fire
00:42:51 for discriminating against certain political beliefs.
00:42:54 Correct.
00:42:55 And sort of, there’s a lot of people,
00:42:58 this is the sad thing,
00:42:59 cause I know how hard the problem is
00:43:00 and I know the Twitter folks are working really hard at it.
00:43:03 Even Facebook that everyone seems to hate
00:43:04 are working really hard at this.
00:43:06 You know, the kind of evidence that people bring
00:43:09 is basically anecdotal evidence.
00:43:11 Well, me or my friend, all we said is X
00:43:15 and for that we got banned.
00:43:17 And that’s kind of a discussion of saying,
00:43:20 well, look, that’s usually, first of all,
00:43:23 the whole thing is taken out of context.
00:43:25 So they present sort of anecdotal evidence.
00:43:28 And how are you supposed to, as a company,
00:43:31 in a healthy way, have a discourse
00:43:33 about what is and isn’t ethical?
00:43:35 How do we make algorithms ethical
00:43:38 when people are just blowing everything out of proportion?
00:43:40 Like they’re outraged about a particular
00:43:45 anecdotal piece of evidence that’s very difficult
00:43:48 to sort of contextualize in the big data driven way.
00:43:52 Do you have a hope for companies like Twitter and Facebook?
00:43:55 Yeah, so I think there’s a couple of things going on, right?
00:43:59 First off, remember this whole aspect
00:44:04 of we are becoming reliant on technology.
00:44:09 We’re also becoming reliant on a lot of these,
00:44:14 the apps and the resources that are provided, right?
00:44:17 So some of it is kind of anger, like I need you, right?
00:44:21 And you’re not working for me, right?
00:44:23 Not working for me, all right.
00:44:24 But I think, and so some of it,
00:44:27 and I wish that there was a little bit
00:44:31 of change of rethinking.
00:44:32 So some of it is like, oh, we’ll fix it in house.
00:44:35 No, that’s like, okay, I’m a fox
00:44:38 and I’m going to watch these hens
00:44:40 because I think it’s a problem that foxes eat hens.
00:44:44 No, right?
00:44:45 Like be good citizens and say, look, we have a problem.
00:44:50 And we are willing to open ourselves up
00:44:54 for others to come in and look at it
00:44:57 and not try to fix it in house.
00:44:58 Because if you fix it in house,
00:45:00 there’s conflict of interest.
00:45:01 If I find something, I’m probably going to want to fix it
00:45:04 and hopefully the media won’t pick it up, right?
00:45:07 And that then causes distrust
00:45:09 because someone inside is going to be mad at you
00:45:11 and go out and talk about how,
00:45:13 yeah, they canned the resume screening tool because it, right?
00:45:17 Like be nice people.
00:45:19 Like just say, look, we have this issue.
00:45:22 Community, help us fix it.
00:45:24 And we will give you like, you know,
00:45:25 the bug finder fee if you do.
00:45:28 Did you ever hope that the community,
00:45:31 us as a human civilization on the whole is good
00:45:35 and can be trusted to guide the future of our civilization
00:45:39 into a positive direction?
00:45:40 I think so.
00:45:41 So I’m an optimist, right?
00:45:44 And, you know, there were some dark times in history always.
00:45:49 I think now we’re in one of those dark times.
00:45:52 I truly do.
00:45:53 In which aspect?
00:45:54 The polarization.
00:45:56 And it’s not just US, right?
00:45:57 So if it was just US, I’d be like, yeah, it’s a US thing,
00:46:00 but we’re seeing it like worldwide, this polarization.
00:46:04 And so I worry about that.
00:46:06 But I do fundamentally believe that at the end of the day,
00:46:11 people are good, right?
00:46:13 And why do I say that?
00:46:14 Because anytime there’s a scenario
00:46:17 where people are in danger and I will use,
00:46:20 so Atlanta, we had a snowmageddon
00:46:24 and people can laugh about that.
00:46:26 People at the time, so the city closed for, you know,
00:46:30 a little snow, but it was ice and the city closed down.
00:46:33 But you had people opening up their homes and saying,
00:46:35 hey, you have nowhere to go, come to my house, right?
00:46:39 Hotels were just saying like, sleep on the floor.
00:46:41 Like places like, you know, the grocery stores were like,
00:46:44 hey, here’s food.
00:46:45 There was no like, oh, how much are you gonna pay me?
00:46:47 It was like this, such a community.
00:46:50 And like people who didn’t know each other,
00:46:52 strangers were just like, can I give you a ride home?
00:46:55 And that was the point where I was like, you know what, like.
00:46:59 That reveals that the deeper thing is,
00:47:03 there’s a compassionate love that we all have within us.
00:47:06 It’s just that when all of that is taken care of
00:47:09 and we get bored, we love drama.
00:47:11 And that’s, I think almost like the division
00:47:14 is a sign of the times being good,
00:47:17 is that it’s just entertaining
00:47:19 on some unpleasant mammalian level to watch,
00:47:24 to disagree with others.
00:47:26 And Twitter and Facebook are actually taking advantage
00:47:30 of that in a sense because it brings you back
00:47:33 to the platform and they’re advertiser driven,
00:47:36 so they make a lot of money.
00:47:37 So you go back and you click.
00:47:39 Love doesn’t sell quite as well in terms of advertisement.
00:47:43 It doesn’t.
00:47:44 So you’ve started your career
00:47:46 at NASA Jet Propulsion Laboratory,
00:47:49 but before I ask a few questions there,
00:47:51 have you happened to have ever seen Space Odyssey,
00:47:54 2001 Space Odyssey?
00:47:57 Yes.
00:47:58 Okay, do you think HAL 9000,
00:48:01 so we’re talking about ethics.
00:48:03 Do you think HAL did the right thing
00:48:06 by taking the priority of the mission
00:48:08 over the lives of the astronauts?
00:48:10 Do you think HAL is good or evil?
00:48:15 Easy questions.
00:48:16 Yeah.
00:48:19 HAL was misguided.
00:48:21 You’re one of the people that would be in charge
00:48:24 of an algorithm like HAL.
00:48:26 Yeah.
00:48:26 What would you do better?
00:48:28 If you think about what happened
00:48:31 was there was no fail safe, right?
00:48:35 So perfection, right?
00:48:37 Like what is that?
00:48:38 I’m gonna make something that I think is perfect,
00:48:40 but if my assumptions are wrong,
00:48:44 it’ll be perfect based on the wrong assumptions, right?
00:48:47 That’s something that you don’t know until you deploy
00:48:51 and then you’re like, oh yeah, messed up.
00:48:53 But what that means is that when we design software,
00:48:58 such as in Space Odyssey,
00:49:00 when we put things out,
00:49:02 that there has to be a fail safe.
00:49:04 There has to be the ability that once it’s out there,
00:49:07 we can grade it as an F and it fails
00:49:11 and it doesn’t continue, right?
00:49:13 There’s some way that it can be brought in
00:49:16 and removed in that aspect.
00:49:19 Because that’s what happened with HAL.
00:49:21 It was like assumptions were wrong.
00:49:23 It was perfectly correct based on those assumptions
00:49:27 and there was no way to change it,
00:49:31 change the assumptions at all.
00:49:34 And the change to fall back would be to a human.
00:49:37 So you ultimately think like human should be,
00:49:42 it’s not turtles or AI all the way down.
00:49:45 It’s at some point, there’s a human that actually.
00:49:47 I still think that,
00:49:48 and again, because I do human robot interaction,
00:49:51 I still think the human needs to be part of the equation
00:49:54 at some point.
00:49:56 So what, just looking back,
00:49:58 what are some fascinating things in robotic space
00:50:01 that NASA was working at the time?
00:50:03 Or just in general, what have you gotten to play with
00:50:07 and what are your memories from working at NASA?
00:50:10 Yeah, so one of my first memories
00:50:13 was they were working on a surgical robot system
00:50:18 that could do eye surgery, right?
00:50:21 And this was back in, oh my gosh, it must’ve been,
00:50:25 oh, maybe 92, 93, 94.
00:50:30 So it’s like almost like a remote operation.
00:50:32 Yeah, it was remote operation.
00:50:34 In fact, you can even find some old tech reports on it.
00:50:38 So think of it, like now we have DaVinci, right?
00:50:41 Like think of it, but these were like the late 90s, right?
00:50:45 And I remember going into the lab one day
00:50:48 and I was like, what’s that, right?
00:50:51 And of course it wasn’t pretty, right?
00:50:53 Because the technology, but it was like functional
00:50:56 and you had this individual that could use
00:50:59 a version of haptics to actually do the surgery
00:51:01 and they had this mockup of a human face
00:51:04 and like the eyeballs and you can see this little drill.
00:51:08 And I was like, oh, that is so cool.
00:51:11 That one I vividly remember
00:51:13 because it was so outside of my like possible thoughts
00:51:18 of what could be done.
00:51:20 It’s the kind of precision
00:51:21 and I mean, what’s the most amazing part of a thing like that?
00:51:26 I think it was the precision.
00:51:28 It was the kind of first time
00:51:31 that I had physically seen
00:51:34 this robot machine human interface, right?
00:51:39 Versus, cause manufacturing had been,
00:51:42 you saw those kind of big robots, right?
00:51:44 But this was like, oh, this is in a person.
00:51:48 There’s a person and a robot like in the same space.
00:51:51 I’m meeting them in person.
00:51:53 Like for me, it was a magical moment
00:51:55 that I can’t, it was life transforming
00:51:57 that I recently met Spot Mini from Boston Dynamics.
00:52:00 Oh, see.
00:52:01 I don’t know why, but on the human robot interaction
00:52:04 for some reason I realized how easy it is to anthropomorphize
00:52:09 and it was, I don’t know, it was almost
00:52:12 like falling in love, this feeling of meeting.
00:52:14 And I’ve obviously seen these robots a lot
00:52:17 on video and so on, but meeting in person,
00:52:19 just having that one on one time is different.
00:52:22 So have you had a robot like that in your life
00:52:25 that made you maybe fall in love with robotics?
00:52:28 Sort of like meeting in person.
00:52:32 I mean, I loved robotics since, yeah.
00:52:35 So I was a 12 year old.
00:52:37 Like, I’m gonna be a roboticist. Actually, at the time
00:52:40 I called it cybernetics.
00:52:41 But so my motivation was Bionic Woman.
00:52:44 I don’t know if you know that.
00:52:46 And so, I mean, that was like a seminal moment,
00:52:49 but I didn’t meet, like that was TV, right?
00:52:52 Like it wasn’t like I was in the same space and I met
00:52:54 and I was like, oh my gosh, you’re like real.
00:52:56 Just lingering on Bionic Woman, which by the way,
00:52:58 because I read that about you.
00:53:01 I watched bits of it and it’s just so,
00:53:04 no offense, terrible.
00:53:05 It’s cheesy if you look at it now.
00:53:08 It’s cheesy, no.
00:53:09 I’ve seen a couple of reruns lately.
00:53:10 But it’s, but of course at the time it’s probably
00:53:15 captured the imagination.
00:53:16 But the sound effects.
00:53:18 Especially when you’re younger, it just catches you.
00:53:23 But which aspect, did you think of it,
00:53:24 you mentioned cybernetics, did you think of it as robotics
00:53:27 or did you think of it as almost constructing
00:53:30 artificial beings?
00:53:31 Like, is it the intelligent part that captured
00:53:36 your fascination or was it the whole thing?
00:53:38 Like even just the limbs and just the.
00:53:39 So for me, it would have, in another world,
00:53:42 I probably would have been more of a biomedical engineer
00:53:46 because what fascinated me was the parts,
00:53:50 like the bionic parts, the limbs, those aspects of it.
00:53:55 Are you especially drawn to humanoid or humanlike robots?
00:53:59 I would say humanlike, not humanoid, right?
00:54:03 And when I say humanlike, I think it’s this aspect
00:54:05 of that interaction, whether it’s social
00:54:09 and it’s like a dog, right?
00:54:10 Like that’s humanlike because it understand us,
00:54:14 it interacts with us at that very social level
00:54:18 to, you know, humanoids are part of that,
00:54:21 but only if they interact with us as if we are human.
00:54:26 Okay, but just to linger on NASA for a little bit,
00:54:30 what do you think, maybe if you have other memories,
00:54:34 but also what do you think is the future of robots in space?
00:54:38 We’ll mention how, but there’s incredible robots
00:54:41 that NASA’s working on in general thinking about
00:54:44 in our, as we venture out, human civilization ventures out
00:54:49 into space, what do you think the future of robots is there?
00:54:52 Yeah, so I mean, there’s the near term.
00:54:53 For example, they just announced the rover
00:54:57 that’s going to the moon, which, you know,
00:55:00 that’s kind of exciting, but that’s like near term.
00:55:06 You know, my favorite, favorite, favorite series
00:55:11 is Star Trek, right?
00:55:13 You know, I really hope, and even Star Trek,
00:55:17 like if I calculate the years, I wouldn’t be alive,
00:55:20 but I would really, really love to be in that world.
00:55:26 Like, even if it’s just at the beginning,
00:55:28 like, you know, like voyage, like adventure one.
00:55:33 So basically living in space.
00:55:35 Yeah.
00:55:36 With, what robots, what are robots?
00:55:39 With Data.
00:55:40 What role?
00:55:41 Data would have to be, even though that wasn’t,
00:55:42 you know, that was like later, but.
00:55:44 So Data is a robot that has humanlike qualities.
00:55:49 Right, without the emotion chip.
00:55:50 Yeah.
00:55:51 You don’t like emotion.
00:55:52 Well, so Data with the emotion chip
00:55:54 was kind of a mess, right?
00:55:58 It took a while for that thing to adapt,
00:56:04 but, and so why was that an issue?
00:56:08 The issue is that emotions make us irrational agents.
00:56:14 That’s the problem.
00:56:15 And yet he could think through things,
00:56:20 even if it was based on an emotional scenario, right?
00:56:23 Based on pros and cons.
00:56:25 But as soon as you made him emotional,
00:56:28 one of the metrics he used for evaluation
00:56:31 was his own emotions, not people around him, right?
00:56:35 Like, and so.
00:56:37 We do that as children, right?
00:56:39 So we’re very egocentric when we’re young.
00:56:40 We are very egocentric.
00:56:42 And so isn’t that just an early version of the emotion chip
00:56:45 then, I haven’t watched much Star Trek.
00:56:48 Except I have also met adults, right?
00:56:52 And so that is a developmental process.
00:56:54 And I’m sure there’s a bunch of psychologists
00:56:57 that can go through, like you can have a 60 year old adult
00:57:00 who has the emotional maturity of a 10 year old, right?
00:57:04 And so there’s various phases that people should go through
00:57:08 in order to evolve and sometimes you don’t.
00:57:11 So how much psychology do you think,
00:57:14 a topic that’s rarely mentioned in robotics,
00:57:17 but how much does psychology come to play
00:57:19 when you’re talking about HRI, human robot interaction?
00:57:23 When you have to have robots
00:57:25 that actually interact with humans.
00:57:26 Tons.
00:57:26 So we, like my group as well as I, read a lot
00:57:31 in the cognitive science literature,
00:57:33 as well as the psychology literature.
00:57:36 Because they understand a lot about human-human relations
00:57:42 and developmental milestones and things like that.
00:57:45 And so we tend to look to see what’s been done out there.
00:57:53 Sometimes what we’ll do is we’ll try to match that to see,
00:57:56 is that human-human relationship the same as human-robot?
00:58:00 Sometimes it is, and sometimes it’s different.
00:58:03 And then when it’s different, we have to,
00:58:04 we try to figure out, okay,
00:58:06 why is it different in this scenario?
00:58:09 But it’s the same in the other scenario, right?
00:58:11 And so we try to do that quite a bit.
00:58:15 Would you say that’s, if we’re looking at the future
00:58:17 of human robot interaction,
00:58:19 would you say the psychology piece is the hardest?
00:58:22 Like if, I mean, it’s a funny notion for you as,
00:58:25 I don’t know if you consider, yeah.
00:58:27 I mean, one way to ask it,
00:58:28 do you consider yourself a roboticist or a psychologist?
00:58:32 Oh, I consider myself a roboticist
00:58:33 that plays the act of a psychologist.
00:58:36 But if you were to look at yourself sort of,
00:58:40 20, 30 years from now,
00:58:42 do you see yourself more and more
00:58:43 wearing the psychology hat?
00:58:47 Another way to put it is,
00:58:49 are the hard problems in human robot interactions
00:58:51 fundamentally psychology, or is it still robotics,
00:58:55 the perception manipulation, planning,
00:58:57 all that kind of stuff?
00:58:59 It’s actually neither.
00:59:01 The hardest part is the adaptation and the interaction.
00:59:06 So it’s the interface, it’s the learning.
00:59:08 And so if I think of,
00:59:11 like I’ve become much more of a roboticist slash AI person
00:59:17 than when I, like originally, again,
00:59:19 I was about the bionics.
00:59:20 I was an electrical engineer, I was control theory, right?
00:59:24 And then I started realizing that my algorithms
00:59:28 needed like human data, right?
00:59:30 And so then I was like, okay, what is this human thing?
00:59:32 How do I incorporate human data?
00:59:34 And then I realized that human perception had,
00:59:38 like there was a lot in terms of how we perceive the world.
00:59:41 And so trying to figure out
00:59:41 how do I model human perception for my,
00:59:44 and so I became an HRI person,
00:59:47 a human robot interaction person,
00:59:49 from being a control theory person and realizing
00:59:51 that humans actually offered quite a bit.
00:59:55 And then when you do that,
00:59:56 you become more of an artificial intelligence, AI.
00:59:59 And so I see myself evolving more in this AI world
01:00:05 under the lens of robotics,
01:00:09 having hardware, interacting with people.
01:00:12 So you’re a world class expert researcher in robotics,
01:00:17 and yet others, you know, there’s a few,
01:00:21 it’s a small but fierce community of people,
01:00:24 but most of them don’t take the journey
01:00:26 into the H of HRI, into the human.
01:00:29 So why did you brave the journey into the interaction with humans?
01:00:34 It seems like a really hard problem.
01:00:36 It’s a hard problem, and it’s very risky as an academic.
01:00:41 And I knew that when I started down that journey,
01:00:46 that it was very risky as an academic
01:00:49 in this world that was nascent, it was just developing.
01:00:53 We didn’t even have a conference, right, at the time.
01:00:56 Because it was the interesting problems.
01:01:00 That was what drove me.
01:01:01 It was the fact that I looked at what interests me
01:01:06 in terms of the application space and the problems.
01:01:10 And that pushed me into trying to figure out
01:01:14 what people were and what humans were
01:01:16 and how to adapt to them.
01:01:19 If those problems weren’t so interesting,
01:01:21 I’d probably still be sending rovers to glaciers, right?
01:01:26 But the problems were interesting.
01:01:28 And the other thing was that they were hard, right?
01:01:30 So it’s, I like having to go into a room
01:01:34 and being like, I don’t know what to do.
01:01:37 And then going back and saying, okay,
01:01:38 I’m gonna figure this out.
01:01:39 I do not, I’m not driven when I go in like,
01:01:42 oh, there are no surprises.
01:01:44 Like, I don’t find that satisfying.
01:01:47 If that was the case,
01:01:48 I’d go someplace and make a lot more money, right?
01:01:51 I think I stay in academia and choose to do this
01:01:55 because I can go into a room and like, that’s hard.
01:01:58 Yeah, I think just from my perspective,
01:02:01 maybe you can correct me on it,
01:02:03 but if I just look at the field of AI broadly,
01:02:06 it seems that human robot interaction has the most,
01:02:12 one of the largest numbers of open problems,
01:02:16 especially relative to how many people
01:02:20 are willing to acknowledge that,
01:02:23 because most people are just afraid of the humans,
01:02:26 so they don’t even acknowledge
01:02:27 how many open problems there are.
01:02:28 But in terms of difficult problems
01:02:30 to solve and exciting spaces,
01:02:32 it seems to be incredible for that.
01:02:35 It is, and it’s exciting.
01:02:38 You’ve mentioned trust before.
01:02:40 What role does trust play, from interacting with Autopilot
01:02:46 to the medical context,
01:02:48 in human robot interactions?
01:02:51 So some of the things I study in this domain
01:02:53 is not just trust, but it really is over trust.
01:02:56 How do you think about over trust?
01:02:58 Like what is, first of all, what is trust
01:03:02 and what is over trust?
01:03:03 Basically, the way I look at it is,
01:03:05 trust is not what you click on a survey,
01:03:08 trust is about your behavior.
01:03:09 So if you interact with the technology
01:03:13 based on the decision or the actions of the technology
01:03:17 as if you trust that decision, then you’re trusting.
01:03:22 And even in my group, we’ve done surveys
01:03:25 that ask, do you trust robots?
01:03:28 Of course not.
01:03:29 Would you follow this robot in a burning building?
01:03:31 Of course not.
01:03:32 And then you look at their actions and you’re like,
01:03:35 clearly your behavior does not match what you think
01:03:39 or what you think you would like to think.
01:03:42 And so I’m really concerned about the behavior
01:03:44 because that’s really at the end of the day,
01:03:45 when you’re in the world,
01:03:47 that’s what will impact others around you.
01:03:50 It’s not whether before you went onto the street,
01:03:52 you clicked on like, I don’t trust self driving cars.
01:03:55 Yeah, from an outsider perspective,
01:03:58 it’s always frustrating to me.
01:04:00 Well, I read a lot, so I’m an insider
01:04:02 in a certain philosophical sense.
01:04:06 It’s frustrating to me how often trust is used in surveys
01:04:10 and how people make claims out of any kind of finding
01:04:15 they get from somebody clicking on an answer.
01:04:18 Trust is, yeah, behavior, just,
01:04:23 you said it beautifully.
01:04:24 I mean, the action, your own behavior is what trust is.
01:04:28 I mean, that everything else is not even close.
01:04:30 It’s almost like absurd comedic poetry
01:04:36 that you weave around your actual behavior.
01:04:38 So some people can say they trust,
01:04:41 you know, I trust my wife, husband or not,
01:04:45 whatever, but the actions is what speaks volumes.
01:04:48 You bug their car, you probably don’t trust them.
01:04:52 I trust them, I’m just making sure.
01:04:53 No, no, that’s, yeah.
01:04:55 Like even if you think about cars,
01:04:57 I think it’s a beautiful case.
01:04:58 I came here at some point, I’m sure,
01:05:01 on either Uber or Lyft, right?
01:05:03 I remember when it first came out, right?
01:05:06 I bet if they had had a survey,
01:05:08 would you get in the car with a stranger and pay them?
01:05:11 Yes.
01:05:12 How many people do you think would have said,
01:05:15 like, really?
01:05:16 Wait, even worse, would you get in the car
01:05:18 with a stranger at 1 a.m. in the morning
01:05:21 to have them drop you home as a single female?
01:05:24 Yeah.
01:05:25 Like how many people would say, that’s stupid.
01:05:29 Yeah.
01:05:30 And now look at where we are.
01:05:31 I mean, people put kids, right?
01:05:33 Like, oh yeah, my child has to go to school
01:05:37 and yeah, I’m gonna put my kid in this car with a stranger.
01:05:42 I mean, it’s just fascinating how, like,
01:05:45 what we think we think is not necessarily
01:05:48 matching our behavior.
01:05:49 Yeah, and certainly with robots, with autonomous vehicles
01:05:52 and all the kinds of robots you work with,
01:05:54 that’s, it’s, yeah, it’s, the way you answer it,
01:06:00 especially if you’ve never interacted with that robot before,
01:06:04 if you haven’t had the experience,
01:06:05 you being able to respond correctly on a survey is impossible.
01:06:09 But what do you, what role does trust play
01:06:12 in the interaction, do you think?
01:06:14 Like, is it good to, is it good to trust a robot?
01:06:19 What does over trust mean?
01:06:21 Or is it, is it good to kind of how you feel
01:06:23 about autopilot currently, which is like,
01:06:26 from a roboticist’s perspective, is like,
01:06:29 oh, still very cautious?
01:06:31 Yeah, so this is still an open area of research,
01:06:34 but basically what I would like in a perfect world
01:06:40 is that people trust the technology when it’s working 100%,
01:06:44 and people will be hypersensitive
01:06:47 and identify when it’s not.
01:06:49 But of course we’re not there.
01:06:50 That’s the ideal world.
01:06:53 And what we find is that people swing, right?
01:06:56 They tend to swing, which means that if my first,
01:07:01 and like, we have some papers,
01:07:02 like first impressions is everything, right?
01:07:05 If my first instance with technology,
01:07:07 with robotics is positive, it mitigates any risk,
01:07:12 it correlates with like best outcomes,
01:07:16 it means that I’m more likely to either not see it
01:07:21 when it makes some mistakes or faults,
01:07:24 or I’m more likely to forgive it.
01:07:28 And so this is a problem
01:07:30 because technology is not 100% accurate, right?
01:07:32 It’s not 100% accurate, although it may be perfect.
01:07:35 How do you get that first moment right, do you think?
01:07:37 There’s also an education about the capabilities
01:07:40 and limitations of the system.
01:07:42 Do you have a sense of how do you educate people correctly
01:07:45 in that first interaction?
01:07:47 Again, this is an open ended problem.
01:07:50 So one of the studies that actually has given me some hope,
01:07:55 that I’m trying to figure out how to bring into robotics.
01:07:57 So there was a research study
01:08:01 that showed, for medical AI systems
01:08:03 giving information to radiologists about,
01:08:07 here, you need to look at these areas on the X ray.
01:08:13 What they found was that when the system provided
01:08:18 one choice, there was this aspect of either no trust
01:08:25 or over trust, right?
01:08:26 Like I don’t believe it at all,
01:08:29 or a yes, yes, yes, yes.
01:08:33 And they would miss things, right?
01:08:36 Instead, when the system gave them multiple choices,
01:08:40 like here are the three, even if it knew like,
01:08:43 it had estimated that the top area you need to look at
01:08:45 was some place on the X ray.
01:08:49 If it gave like one plus others,
01:08:54 the trust was maintained and the accuracy of the entire
01:09:00 population increased, right?
01:09:03 So basically it was a, you’re still trusting the system,
01:09:07 but you’re also putting in a little bit of like,
01:09:09 your human expertise, like your human decision processing
01:09:13 into the equation.
01:09:15 So it helps to mitigate that over trust risk.
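To make the idea concrete, here is a small Python sketch of the approach described above: instead of surfacing only the model’s single top prediction, the system surfaces a short ranked list of candidate regions so the human expert stays in the decision loop. The function, region names, scores, and threshold are hypothetical illustrations, not details from the study being discussed.

```python
# Hypothetical sketch: surface a small ranked set of candidate regions instead of
# the single top prediction, so the human expert keeps exercising their own judgment.
# Names, scores, and the threshold below are invented for illustration.

def suggest_regions(region_scores, k=3, min_score=0.05):
    """Return up to k candidate regions, highest model confidence first.

    region_scores: dict mapping region id -> model confidence in [0, 1].
    """
    ranked = sorted(region_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(region, score) for region, score in ranked[:k] if score >= min_score]

if __name__ == "__main__":
    scores = {"upper-left": 0.81, "lower-right": 0.12, "center": 0.04, "upper-right": 0.02}
    for region, score in suggest_regions(scores):
        print(f"Look at {region} (model confidence {score:.2f})")
```

The design choice mirrors the finding she describes: offering a ranked handful of options keeps the radiologist engaged in the decision, which helps mitigate over trust in a single automated answer.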
01:09:18 Yeah, so there’s a fascinating balance that the strike.
01:09:21 Haven’t figured out again, robotics is still an open research.
01:09:24 This is exciting open area research, exactly.
01:09:26 So what are some exciting applications
01:09:28 of human robot interaction?
01:09:30 You started a company, maybe you can talk about
01:09:33 the exciting efforts there, but in general also
01:09:36 what other space can robots interact with humans and help?
01:09:41 Yeah, so besides healthcare,
01:09:42 cause you know, that’s my biased lens.
01:09:44 My other biased lens is education.
01:09:47 I think that, well, one, we definitely,
01:09:51 we in the US, you know, we’re doing okay with teachers,
01:09:54 but there’s a lot of school districts
01:09:56 that don’t have enough teachers.
01:09:58 If you think about the teacher student ratio
01:10:01 for at least public education in some districts, it’s crazy.
01:10:06 It’s like, how can you have learning in that classroom,
01:10:10 right?
01:10:10 Because you just don’t have the human capital.
01:10:12 And so if you think about robotics,
01:10:15 bringing that in to classrooms,
01:10:18 as well as the afterschool space,
01:10:20 where they offset some of this lack of resources
01:10:25 in certain communities, I think that’s a good place.
01:10:28 And then turning to the other end
01:10:30 is using these systems for workforce retraining
01:10:35 and dealing with some of the things
01:10:38 that are going to come out later on from job loss,
01:10:43 like thinking about robots and AI systems
01:10:45 for retraining and workforce development.
01:10:48 I think those are exciting areas that can be pushed even more,
01:10:53 and it would have a huge, huge impact.
01:10:56 What would you say are some of the open problems
01:10:59 in education, sort of, it’s exciting.
01:11:03 So young kids and the older folks
01:11:08 or just folks of all ages who need to be retrained,
01:11:12 who need to sort of open themselves up
01:11:14 to a whole nother area of work.
01:11:17 What are the problems to be solved there?
01:11:20 How do you think robots can help?
01:11:22 We have the engagement aspect, right?
01:11:24 So we can figure out the engagement.
01:11:26 That’s not a…
01:11:27 What do you mean by engagement?
01:11:28 So identifying whether a person is focused,
01:11:34 is like that we can figure out.
01:11:38 What we can figure out, and there’s some positive results
01:11:43 in this, is the personalized adaptation
01:11:47 based on the concepts, right?
01:11:49 So imagine I think about, I have an agent
01:11:54 and I’m working with a kid learning, I don’t know,
01:11:59 algebra two, can that same agent then switch
01:12:03 and teach some type of new coding skill
01:12:07 to a displaced mechanic?
01:12:11 Like, what does that actually look like, right?
01:12:14 Like hardware might be the same, content is different,
01:12:19 two different target demographics of engagement.
01:12:22 Like how do you do that?
01:12:24 How important do you think personalization
01:12:26 is in human robot interaction?
01:12:28 And not just a mechanic or student,
01:12:31 but like literally to the individual human being.
01:12:35 I think personalization is really important,
01:12:37 but a caveat is that I think we’d be okay
01:12:42 if we can personalize to the group, right?
01:12:44 And so if I can label you
01:12:49 along some certain dimensions,
01:12:52 then even though it may not be you specifically,
01:12:56 I can put you in this group.
01:12:58 So the sample size, this is how they best learn,
01:13:00 this is how they best engage.
01:13:03 Even at that level, it’s really important.
01:13:06 And it’s because, I mean, it’s one of the reasons
01:13:09 why educating in large classrooms is so hard, right?
01:13:13 You teach to the median,
01:13:15 but there’s these individuals that are struggling
01:13:19 and then you have highly intelligent individuals
01:13:22 and those are the ones that are usually kind of left out.
01:13:26 So highly intelligent individuals may be disruptive
01:13:28 and those who are struggling might be disruptive
01:13:30 because they’re both bored.
01:13:32 Yeah, and if you narrow the definition of the group
01:13:35 or the size of the group enough,
01:13:37 you’ll be able to address their individual,
01:13:40 it’s not individual needs, but really the most important
01:13:44 group needs, right?
01:13:45 And that’s kind of what a lot of successful
01:13:47 recommender systems do with Spotify and so on.
01:13:50 So it’s sad to believe, but as a music listener,
01:13:53 probably in some sort of large group,
01:13:55 it’s very sadly predictable.
01:13:58 You have been labeled.
01:13:59 Yeah, I’ve been labeled and successfully so
01:14:02 because they’re able to recommend stuff that I like.
01:14:04 Yeah, but applying that to education, right?
01:14:07 There’s no reason why it can’t be done.
01:14:09 Do you have a hope for our education system?
01:14:13 I have more hope for workforce development.
01:14:16 And that’s because I’m seeing investments.
01:14:19 Even if you look at VC investments in education,
01:14:23 the majority of it has lately been going
01:14:26 to workforce retraining, right?
01:14:28 And so I think that government investment is increasing.
01:14:32 There’s like a claim and some of it’s based on fear, right?
01:14:36 Like AI is gonna come and take over all these jobs.
01:14:37 What are we gonna do with all these unpaid taxes
01:14:41 that aren’t coming to us from our citizens?
01:14:44 And so I think I’m more hopeful for that.
01:14:48 Not so hopeful for early education
01:14:51 because it’s still a who’s gonna pay for it.
01:14:56 And you won’t see the results for like 16 to 18 years.
01:15:01 It’s hard for people to wrap their heads around that.
01:15:07 But on the retraining part, what are your thoughts?
01:15:10 There’s a candidate, Andrew Yang running for president
01:15:13 and saying that sort of AI, automation, robots.
01:15:18 Universal basic income.
01:15:20 Universal basic income in order to support us
01:15:23 as, you know, automation takes people’s jobs
01:15:26 and allows us to explore and find other means.
01:15:30 Like do you have a concern about the society
01:15:35 transforming effects of automation and robots and so on?
01:15:40 I do.
01:15:41 I do know that AI robotics will displace workers.
01:15:46 Like we do know that.
01:15:47 But there’ll be other workers
01:15:49 that will be defined new jobs.
01:15:54 What I worry about is, that’s not what I worry about.
01:15:57 Like will all the jobs go away?
01:15:59 What I worry about is the type of jobs that will come out.
01:16:02 Like people who graduate from Georgia Tech will be okay.
01:16:06 We give them the skills,
01:16:07 they will adapt even if their current job goes away.
01:16:10 I do worry about those
01:16:12 that don’t have that quality of an education.
01:16:15 Will they have the ability,
01:16:18 the background to adapt to those new jobs?
01:16:21 That I don’t know.
01:16:22 That I worry about,
01:16:24 which will create even more polarization
01:16:27 in our society, internationally and everywhere.
01:16:31 I worry about that.
01:16:32 I also worry about not having equal access
01:16:36 to all these wonderful things that AI can do
01:16:39 and robotics can do.
01:16:41 I worry about that.
01:16:43 People like me from Georgia Tech or from, say, MIT
01:16:48 will be okay, right?
01:16:50 But that’s such a small part of the population
01:16:53 that we need to think much more globally
01:16:55 of having access to the beautiful things,
01:16:58 whether it’s AI in healthcare, AI in education,
01:17:01 AI in politics, right?
01:17:05 I worry about that.
01:17:05 And that’s part of the thing that you were talking about
01:17:08 is people that build the technology
01:17:09 have to be thinking about ethics,
01:17:12 have to be thinking about access and all those things.
01:17:15 And not just a small subset.
01:17:17 Let me ask some philosophical,
01:17:20 slightly romantic questions.
01:17:22 People that listen to this will be like,
01:17:24 here he goes again.
01:17:26 Okay, do you think one day we’ll build an AI system
01:17:31 that a person can fall in love with
01:17:35 and it would love them back?
01:17:37 Like in the movie, Her, for example.
01:17:39 Yeah, although she kind of didn’t fall in love with him
01:17:43 or she fell in love with like a million other people,
01:17:45 something like that.
01:17:47 You’re the jealous type, I see.
01:17:48 We humans are the jealous type.
01:17:50 Yes, so I do believe that we can design systems
01:17:55 where people would fall in love with their robot,
01:17:59 with their AI partner.
01:18:03 That I do believe.
01:18:05 Because it’s actually,
01:18:06 and I don’t like to use the word manipulate,
01:18:08 but as we see, there are certain individuals
01:18:12 that can be manipulated
01:18:13 if you understand the cognitive science about it, right?
01:18:16 Right, so I mean, you could think of all close
01:18:19 relationships and love in general
01:18:21 as a kind of mutual manipulation,
01:18:24 that dance, the human dance.
01:18:27 I mean, manipulation has a negative connotation.
01:18:30 And that’s why I don’t like to use that word particularly.
01:18:32 I guess another way to phrase it is,
01:18:34 you’re getting at is it could be algorithmatized
01:18:36 or something, it could be a.
01:18:38 The relationship building part can be.
01:18:40 I mean, just think about it.
01:18:41 We have, and I don’t use dating sites,
01:18:44 but from what I heard, there are some individuals
01:18:48 that have been dating that have never seen each other, right?
01:18:52 In fact, there’s a show I think
01:18:54 that tries to like weed out fake people.
01:18:57 Like there’s a show that comes out, right?
01:18:59 Because like people start faking.
01:19:01 Like, what’s the difference of that person
01:19:05 on the other end being an AI agent, right?
01:19:08 And having a communication
01:19:09 and you building a relationship remotely,
01:19:12 like there’s no reason why that can’t happen.
01:19:15 In terms of human robot interaction,
01:19:17 so what role, you’ve kind of mentioned
01:19:19 with data emotion being, can be problematic
01:19:23 if not implemented well, I suppose.
01:19:26 What role does emotion and some other human like things,
01:19:30 the imperfect things come into play here
01:19:32 for good human robot interaction and something like love?
01:19:37 Yeah, so in this case, and you had asked,
01:19:39 can an AI agent love a human back?
01:19:43 I think they can emulate love back, right?
01:19:47 And so what does that actually mean?
01:19:48 It just means that if you think about their programming,
01:19:52 they might put the other person’s needs
01:19:55 in front of theirs in certain situations, right?
01:19:57 You look at, think about it as a return on investment.
01:20:00 Like, what’s my return on investment?
01:20:01 As part of that equation, that person’s happiness
01:20:04 has some type of algorithmic weighting to it.
01:20:07 And the reason why is because I care about them, right?
01:20:11 That’s the only reason, right?
01:20:13 But if I care about them and I show that,
01:20:15 then my final objective function
01:20:18 is length of time of the engagement, right?
01:20:20 So you can think of how to do this actually quite easily.
01:20:24 And so.
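A minimal Python sketch of the kind of objective being gestured at here: the agent’s per-step utility puts weight on the partner’s estimated happiness, and the long-horizon objective rewards how long the engagement lasts. The weights and the happiness signal are placeholder assumptions, not a description of any real system.

```python
# Hedged sketch of the objective described above: per-step utility trades off the
# agent's own reward against the partner's estimated happiness, and the long-run
# objective accumulates utility only while the partner stays engaged, so longer
# engagements score higher. Weights and the happiness model are placeholders.

def step_utility(own_reward, partner_happiness, care_weight=0.7):
    """Per-interaction utility weighting the partner's estimated happiness."""
    return (1 - care_weight) * own_reward + care_weight * partner_happiness

def engagement_objective(trajectory, care_weight=0.7):
    """Sum utility over the interaction, stopping when engagement ends."""
    total = 0.0
    for own_reward, partner_happiness, still_engaged in trajectory:
        if not still_engaged:
            break
        total += step_utility(own_reward, partner_happiness, care_weight)
    return total

if __name__ == "__main__":
    # Each tuple: (own_reward, partner_happiness, still_engaged)
    history = [(0.2, 0.9, True), (0.1, 0.8, True), (0.3, 0.7, True), (0.0, 0.1, False)]
    print(engagement_objective(history))
```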
01:20:24 But that’s not love?
01:20:27 Well, so that’s the thing.
01:20:29 I think it emulates love
01:20:32 because we don’t have a classical definition of love.
01:20:38 Right, but, and we don’t have the ability
01:20:41 to look into each other’s minds to see the algorithm.
01:20:45 And I mean, I guess what I’m getting at is,
01:20:48 is it possible that, especially if that’s learned,
01:20:51 especially if there’s some mystery
01:20:52 and black box nature to the system,
01:20:55 how is that, you know?
01:20:57 How is it any different?
01:20:58 How is it any different in terms of sort of
01:21:00 if the system says, I’m conscious, I’m afraid of death,
01:21:05 and it does indicate that it loves you.
01:21:10 Another way to sort of phrase it,
01:21:12 be curious to see what you think.
01:21:14 Do you think there’ll be a time
01:21:16 when robots should have rights?
01:21:20 You’ve kind of phrased the robot in a very roboticist way
01:21:23 and just a really good way, but saying, okay,
01:21:25 well, there’s an objective function
01:21:27 and I could see how you can create
01:21:30 a compelling human robot interaction experience
01:21:33 that makes you believe that the robot cares for your needs
01:21:36 and even something like loves you.
01:21:38 But what if the robot says, please don’t turn me off?
01:21:43 What if the robot starts making you feel
01:21:46 like there’s an entity, a being, a soul there, right?
01:21:50 Do you think there’ll be a future,
01:21:53 hopefully you won’t laugh too much at this,
01:21:55 but where they do ask for rights?
01:22:00 So I can see a future
01:22:03 if we don’t address it in the near term
01:22:08 where these agents, as they adapt and learn,
01:22:11 could say, hey, this should be something that’s fundamental.
01:22:15 I hopefully think that we would address it
01:22:18 before it gets to that point.
01:22:20 So you think that’s a bad future?
01:22:22 Is that a negative thing where they ask
01:22:25 we’re being discriminated against?
01:22:27 I guess it depends on what role
01:22:31 have they attained at that point, right?
01:22:34 And so if I think about now.
01:22:35 Careful what you say because the robots 50 years from now
01:22:39 I’ll be listening to this and you’ll be on TV saying,
01:22:42 this is what roboticists used to believe.
01:22:44 Well, right?
01:24:45 And so this is my, and as I said, I have a biased lens
01:22:48 and my robot friends will understand that.
01:22:52 So if you think about it, and I actually put this
01:22:55 in kind of the, as a roboticist,
01:22:59 you don’t necessarily think of robots as human
01:23:02 with human rights, but you could think of them
01:23:05 either in the category of property,
01:23:09 or you can think of them in the category of animals, right?
01:23:14 And so both of those have different types of rights.
01:23:18 So animals have their own rights as a living being,
01:23:22 but they can’t vote, they can’t write,
01:23:25 they can be euthanized, but as humans,
01:23:29 if we abuse them, we go to jail, right?
01:23:32 So they do have some rights that protect them,
01:23:35 but don’t give them the rights of like citizenship.
01:23:40 And then if you think about property,
01:23:42 property, the rights are associated with the person, right?
01:23:45 So if someone vandalizes your property
01:23:49 or steals your property, like there are some rights,
01:23:53 but it’s associated with the person who owns that.
01:23:58 If you think about it back in the day,
01:24:01 and if you remember, we talked about
01:24:03 how society has changed, women were property, right?
01:24:08 They were not thought of as having rights.
01:24:11 They were thought of as property of, like their…
01:24:15 Yeah, assaulting a woman meant
01:24:17 assaulting the property of somebody else.
01:24:20 Exactly, and so what I envision is,
01:24:22 is that we will establish some type of norm at some point,
01:24:27 but that it might evolve, right?
01:24:29 Like if you look at women’s rights now,
01:24:31 like there are still some countries that don’t have,
01:24:35 and the rest of the world is like,
01:24:36 why that makes no sense, right?
01:24:39 And so I do see a world where we do establish
01:24:42 some type of grounding.
01:24:44 It might be based on property rights,
01:24:45 it might be based on animal rights.
01:24:47 And if it evolves that way,
01:24:50 I think we will have this conversation at that time,
01:24:54 because that’s the way our society traditionally has evolved.
01:24:58 Beautifully put, just out of curiosity,
01:25:01 Anki, Jibo, Mayfield Robotics,
01:25:05 with their robot Kuri, SciFiWorks, Rethink Robotics,
01:25:08 were all these amazing robotics companies
01:25:10 led, created by incredible roboticists,
01:25:14 and they’ve all went out of business recently.
01:25:19 Why do you think they didn’t last long?
01:25:21 Why is it so hard to run a robotics company,
01:25:25 especially one like these, which are fundamentally
01:25:29 HRI human robot interaction robots?
01:25:34 Or personal robots?
01:25:35 Each one has a story,
01:25:37 only one of them I don’t understand, and that was Anki.
01:25:41 That’s actually the only one I don’t understand.
01:25:43 I don’t understand it either.
01:25:44 No, no, I mean, I look like from the outside,
01:25:47 I’ve looked at their sheets, I’ve looked at the data that’s.
01:25:50 Oh, you mean like business wise,
01:25:51 you don’t understand, I got you.
01:25:52 Yeah.
01:25:53 Yeah, and like I look at all, I look at that data,
01:25:59 and I’m like, they seem to have like product market fit.
01:26:02 Like, so that’s the only one I don’t understand.
01:26:05 The rest of it was product market fit.
01:26:08 What’s product market fit?
01:26:09 Just that of, like how do you think about it?
01:26:11 Yeah, so although Rethink Robotics was getting there, right?
01:26:15 But I think it’s just the timing,
01:26:17 it just, their clock just timed out.
01:26:20 I think if they’d been given a couple more years,
01:26:23 they would have been okay.
01:26:25 But the other ones were still fairly early
01:26:28 by the time they got into the market.
01:26:30 And so product market fit is,
01:26:32 I have a product that I wanna sell at a certain price.
01:26:37 Are there enough people out there, the market,
01:26:40 that are willing to buy the product at that market price
01:26:42 for me to be a functional viable profit bearing company?
01:26:47 Right?
01:26:48 So product market fit.
01:26:50 If it costs you a thousand dollars
01:26:53 and everyone wants it and only is willing to pay a dollar,
01:26:57 you have no product market fit.
01:26:59 Even if you could sell it for, you know,
01:27:01 a dollar, that’s not enough, cause you can’t.
01:27:03 So how hard is it for robots?
01:27:05 Sort of maybe if you look at iRobot,
01:27:07 the company that makes Roombas, vacuum cleaners,
01:27:10 can you comment on, did they find the right
01:27:14 product market fit?
01:27:15 Like, are people willing to pay for robots
01:27:18 is also another kind of question underlying all this.
01:27:20 So if you think about iRobot and their story, right?
01:27:23 Like when they first, they had enough of a runway, right?
01:27:28 When they first started,
01:27:29 they weren’t doing vacuum cleaners, right?
01:27:31 They were doing contracts primarily, government contracts,
01:27:36 designing robots.
01:27:37 Or military robots.
01:27:38 Yeah, I mean, that’s what they were.
01:27:39 That’s how they started, right?
01:27:40 And then.
01:27:41 They still do a lot of incredible work there.
01:27:42 But yeah, that was the initial thing
01:27:44 that gave them enough funding to.
01:27:46 To then try to, the vacuum cleaner is what I’ve been told
01:27:50 was not like their first rendezvous
01:27:53 in terms of designing a product, right?
01:27:56 And so they were able to survive
01:27:59 until they got to the point
01:28:00 that they found a product price market fit, right?
01:28:05 And even with, if you look at the Roomba,
01:28:09 the price point now is different
01:28:10 than when it was first released, right?
01:28:12 It was an early adopter price,
01:28:13 but they found enough people
01:28:14 who were willing to fund it.
01:28:16 And I mean, I forgot what their loss profile was
01:28:20 for the first couple of years,
01:28:22 but they became profitable in sufficient time
01:28:25 that they didn’t have to close their doors.
01:28:28 So they found the right,
01:28:29 there’s still people willing to pay
01:28:31 a large amount of money,
01:28:32 so over $1,000 for a vacuum cleaner.
01:28:35 Unfortunately for them,
01:28:37 now that they’ve proved everything out,
01:28:39 figured it all out,
01:28:40 now there’s competitors.
01:28:40 Yeah, and so that’s the next thing, right?
01:28:43 The competition,
01:28:44 and they have quite a number, even internationally.
01:28:47 Like there’s some products out there,
01:28:50 you can go to Europe and be like,
01:28:52 oh, I didn’t even know this one existed.
01:28:55 So this is the thing though,
01:28:56 like with any market,
01:28:59 I would, this is not a bad time,
01:29:03 although as a roboticist, it’s kind of depressing,
01:29:06 but I actually think about things like with,
01:29:11 I would say that all of the companies
01:29:13 that are now in the top five or six,
01:29:15 they weren’t the first to the stage, right?
01:29:19 Like Google was not the first search engine,
01:29:22 sorry, AltaVista, right?
01:29:24 Facebook was not the first, sorry, MySpace, right?
01:29:28 Like think about it,
01:29:29 they were not the first players.
01:29:31 Those first players,
01:29:32 like they’re not in the top five, 10 of Fortune 500 companies,
01:29:38 right?
01:29:39 They proved, they started to prove out the market,
01:29:43 they started to get people interested,
01:29:46 they started the buzz,
01:29:48 but they didn’t make it to that next level.
01:29:50 But the second batch, right?
01:29:52 The second batch, I think might make it to the next level.
01:29:57 When do you think the Facebook of robotics?
01:30:02 The Facebook of robotics.
01:30:04 Sorry, I take that phrase back because people deeply,
01:30:08 for some reason, well, I know why,
01:30:10 but it’s, I think, exaggerated distrust Facebook
01:30:13 because of the privacy concerns and so on.
01:30:15 And with robotics, one of the things you have to make sure
01:30:18 is all the things we talked about is to be transparent
01:30:21 and have people deeply trust you
01:30:22 to let a robot into their lives, into their home.
01:30:25 When do you think the second batch of robots will come?
01:30:28 Is it five, 10 years, 20 years
01:30:32 that we’ll have robots in our homes
01:30:34 and robots in our hearts?
01:30:36 So if I think about, and because I try to follow
01:30:38 the VC kind of space in terms of robotic investments,
01:30:43 and right now, and I don’t know
01:30:44 if they’re gonna be successful,
01:30:45 I don’t know if this is the second batch,
01:30:49 but there’s only one batch that’s focused
01:30:50 on like the first batch, right?
01:30:52 And then there’s all these self driving Xs, right?
01:30:56 And so I don’t know if they’re a first batch of something
01:30:59 or if like, I don’t know quite where they fit in,
01:31:03 but there’s a number of companies,
01:31:05 the co-robots, I call them co-robots,
01:31:08 that are still getting VC investments.
01:31:13 Some of them have some of the flavor
01:31:14 of like Rethink Robotics.
01:31:15 Some of them have some of the flavor of like Curie.
01:31:18 What’s a co robot?
01:31:20 So basically a robot and human working in the same space.
01:31:26 So some of the companies are focused on manufacturing.
01:31:30 So having a robot and human working together
01:31:34 in a factory, some of these co robots
01:31:37 are robots and humans working in the home,
01:31:41 working in clinics, like there’s different versions
01:31:43 of these companies in terms of their products,
01:31:45 but they’re all, so we think robotics would be
01:31:48 like one of the first, at least well known companies
01:31:52 focused on this space.
01:31:54 So I don’t know if this is a second batch
01:31:56 or if this is still part of the first batch,
01:32:00 that I don’t know.
01:32:01 And then you have all these other companies
01:32:03 in this self driving space.
01:32:06 And I don’t know if that’s a first batch
01:32:09 or again, a second batch.
01:32:11 Yeah.
01:32:11 So there’s a lot of mystery about this now.
01:32:13 Of course, it’s hard to say that this is the second batch
01:32:16 until it proves out, right?
01:32:18 Correct.
01:32:19 Yeah, we need a unicorn.
01:32:20 Yeah, exactly.
01:32:23 Why do you think people are so afraid,
01:32:27 at least in popular culture of legged robots
01:32:30 like those worked on at Boston Dynamics
01:32:32 or just robotics in general,
01:32:34 if you were to psychoanalyze that fear,
01:32:36 what do you make of it?
01:32:37 And should they be afraid, sorry?
01:32:39 So should people be afraid?
01:32:41 I don’t think people should be afraid.
01:32:43 But with a caveat, I don’t think people should be afraid
01:32:47 given that most of us in this world
01:32:51 understand that we need to change something, right?
01:32:55 So given that.
01:32:58 Now, if things don’t change, be very afraid.
01:33:01 Which is the dimension of change that’s needed?
01:33:04 So changing, thinking about the ramifications,
01:33:07 thinking about like the ethics,
01:33:09 thinking about like the conversation is going on, right?
01:33:12 It’s no longer a we’re gonna deploy it
01:33:15 and forget that this is a car that can kill pedestrians
01:33:20 that are walking across the street, right?
01:33:22 We’re not in that stage.
01:33:23 We’re putting these roads out.
01:33:25 There are people out there.
01:33:27 A car could be a weapon.
01:33:28 Like people are now, solutions aren’t there yet,
01:33:33 but people are thinking about this
01:33:35 as we need to be ethically responsible
01:33:38 as we send these systems out,
01:33:40 robotics, medical, self driving.
01:33:43 And military too.
01:33:43 And military.
01:33:45 Which is not as often talked about,
01:33:46 but it’s really where probably these robots
01:33:50 will have a significant impact as well.
01:33:51 Correct, correct.
01:33:52 Right, making sure that they can think rationally,
01:33:57 even having the conversations,
01:33:58 who should pull the trigger, right?
01:34:01 But overall you’re saying if we start to think
01:34:03 more and more as a community about these ethical issues,
01:34:05 people should not be afraid.
01:34:06 Yeah, I don’t think people should be afraid.
01:34:08 I think that the return on investment,
01:34:10 the impact, positive impact will outweigh
01:34:14 any of the potentially negative impacts.
01:34:17 Do you have worries of existential threats
01:34:20 of robots or AI that some people kind of talk about
01:34:25 and romanticize about in the next decade,
01:34:28 the next few decades?
01:34:29 No, I don’t.
01:34:31 Singularity would be an example.
01:34:33 So my concept is that, so remember,
01:34:36 robots, AI, is designed by people.
01:34:39 It has our values.
01:34:41 And I always correlate this with a parent and a child.
01:34:45 So think about it, as a parent, what do we want?
01:34:47 We want our kids to have a better life than us.
01:34:49 We want them to expand.
01:34:52 We want them to experience the world.
01:34:55 And then as we grow older, our kids think and know
01:34:59 they’re smarter and better and more intelligent
01:35:03 and have better opportunities.
01:35:04 And they may even stop listening to us.
01:35:08 They don’t go out and then kill us, right?
01:35:10 Like, think about it.
01:35:11 It’s because we, it’s instilled in them values.
01:35:14 We instilled in them this whole aspect of community.
01:35:17 And yes, even though you’re maybe smarter
01:35:19 and have more money and dah, dah, dah,
01:35:22 it’s still about this love, caring relationship.
01:35:26 And so that’s what I believe.
01:35:27 So even if like, you know,
01:35:29 we’ve created the singularity in some archaic system
01:35:32 back in like 1980 that suddenly evolves,
01:35:35 the fact is it might say, I am smarter, I am sentient.
01:35:40 These humans are really stupid,
01:35:43 but I think it’ll be like, yeah,
01:35:46 but I just can’t destroy them.
01:35:47 Yeah, for sentimental value.
01:35:49 It’s still just to come back for Thanksgiving dinner
01:35:53 every once in a while.
01:35:53 Exactly.
01:35:54 That’s such, that’s so beautifully put.
01:35:57 You’ve also said that The Matrix may be
01:36:00 one of your more favorite AI related movies.
01:36:03 Can you elaborate why?
01:36:05 Yeah, it is one of my favorite movies.
01:36:07 And it’s because it represents
01:36:11 kind of all the things I think about.
01:36:14 So there’s a symbiotic relationship
01:36:16 between robots and humans, right?
01:36:20 That symbiotic relationship is that they don’t destroy us,
01:36:22 they enslave us, right?
01:36:24 But think about it,
01:36:28 even though they enslaved us,
01:36:30 they needed us to be happy, right?
01:36:32 And in order to be happy,
01:36:33 they had to create this cruddy world
01:36:35 that they then had to live in, right?
01:36:36 That’s the whole premise.
01:36:39 But then there were humans that had a choice, right?
01:36:44 Like you had a choice to stay in this horrific,
01:36:47 horrific world where it was your fantasy life
01:36:51 with all of the amenities, perfection, but not accurate.
01:36:54 Or you can choose to be on your own
01:36:57 and like have maybe no food for a couple of days,
01:37:02 but you were totally autonomous.
01:37:05 And so I think of that as, and that’s why.
01:37:07 So it’s not necessarily us being enslaved,
01:37:09 but I think about us having the symbiotic relationship.
01:37:13 Robots and AI, even if they become sentient,
01:37:15 they’re still part of our society
01:37:17 and they will suffer just as much as we.
01:37:20 And there will be some kind of equilibrium
01:37:23 that we’ll have to find some symbiotic relationship.
01:37:26 Right, and then you have the ethicists,
01:37:28 the robotics folks that are like,
01:37:30 no, this has got to stop, I will take the other pill
01:37:34 in order to make a difference.
01:37:36 So if you could hang out for a day with a robot,
01:37:40 real or from science fiction, movies, books, safely,
01:37:44 and get to pick his or her, their brain,
01:37:48 who would you pick?
01:37:55 Gotta say it’s Data.
01:37:57 Data.
01:37:58 I was gonna say Rosie,
01:38:00 but I’m not really interested in her brain.
01:38:03 I’m interested in Data’s brain.
01:38:05 Data pre or post emotion chip?
01:38:08 Pre.
01:38:10 But don’t you think it’d be a more interesting conversation
01:38:15 post emotion chip?
01:38:16 Yeah, it would be drama.
01:38:17 And I’m human, I deal with drama all the time.
01:38:22 But the reason why I wanna pick Data’s brain
01:38:24 is because I could have a conversation with him
01:38:29 and ask, for example, how can we fix this ethics problem?
01:38:34 And he could go through like the rational thinking
01:38:38 and through that, he could also help me
01:38:40 think through it as well.
01:38:42 And so there’s like these fundamental questions
01:38:44 I think I could ask him
01:38:46 that he would help me also learn from.
01:38:49 And that fascinates me.
01:38:52 I don’t think there’s a better place to end it.
01:38:55 Ayana, thank you so much for talking to us, it was an honor.
01:38:57 Thank you, thank you.
01:38:58 This was fun.
01:39:00 Thanks for listening to this conversation
01:39:02 and thank you to our presenting sponsor, Cash App.
01:39:05 Download it, use code LexPodcast,
01:39:08 you’ll get $10 and $10 will go to FIRST,
01:39:11 a STEM education nonprofit that inspires
01:39:13 hundreds of thousands of young minds
01:39:15 to become future leaders and innovators.
01:39:18 If you enjoy this podcast, subscribe on YouTube,
01:39:21 give it five stars on Apple Podcast,
01:39:23 follow on Spotify, support on Patreon
01:39:26 or simply connect with me on Twitter.
01:39:29 And now let me leave you with some words of wisdom
01:39:31 from Arthur C. Clarke.
01:39:35 Whether we are based on carbon or on silicon
01:39:38 makes no fundamental difference.
01:39:40 We should each be treated with appropriate respect.
01:39:43 Thank you for listening and hope to see you next time.