Transcript
00:00:00 The following is a conversation with Rosalind Picard.
00:00:02 She’s a professor at MIT,
00:00:04 director of the Affective Computing Research Group
00:00:06 at the MIT Media Lab,
00:00:08 and cofounder of two companies, Affectiva and Empatica.
00:00:12 Over two decades ago,
00:00:13 she launched the field of affective computing
00:00:15 with her book of the same name.
00:00:17 This book described the importance of emotion
00:00:20 in artificial and natural intelligence.
00:00:23 It described the vital role emotional communication
00:00:25 has in the relationship between people in general
00:00:28 and in human robot interaction.
00:00:30 I really enjoyed talking with Roz over so many topics,
00:00:34 including emotion, ethics, privacy, wearable computing,
00:00:37 and her recent research in epilepsy,
00:00:39 and even love and meaning.
00:00:42 This conversation is part
00:00:43 of the Artificial Intelligence Podcast.
00:00:46 If you enjoy it, subscribe on YouTube, iTunes,
00:00:48 or simply connect with me on Twitter at Lex Fridman,
00:00:51 spelled F R I D M A N.
00:00:53 And now, here’s my conversation with Rosalind Picard.
00:00:59 More than 20 years ago,
00:01:00 you coined the term affective computing
00:01:03 and led a lot of research in this area since then.
00:01:06 As I understand, the goal is to make the machine detect
00:01:09 and interpret the emotional state of a human being
00:01:12 and adapt the behavior of the machine
00:01:14 based on the emotional state.
00:01:16 So how is your understanding of the problem space
00:01:19 defined by affective computing changed in the past 24 years?
00:01:25 So it’s the scope, the applications, the challenges,
00:01:28 what’s involved, how has that evolved over the years?
00:01:32 Yeah, actually, originally,
00:01:33 when I defined the term affective computing,
00:01:36 it was a bit broader than just recognizing
00:01:40 and responding intelligently to human emotion,
00:01:42 although those are probably the two pieces
00:01:44 that we’ve worked on the hardest.
00:01:47 The original concept also encompassed machines
00:01:50 that would have mechanisms
00:01:52 that functioned like human emotion does inside them.
00:01:55 It would be any computing that relates to, arises from,
00:01:59 or deliberately influences human emotion.
00:02:02 So the human computer interaction part
00:02:05 is the part that people tend to see,
00:02:07 like if I’m really ticked off at my computer
00:02:11 and I’m scowling at it and I’m cursing at it
00:02:13 and it just keeps acting smiling and happy
00:02:15 like that little paperclip used to do,
00:02:17 dancing, winking, that kind of thing
00:02:22 just makes you even more frustrated, right?
00:02:24 And I thought that stupid thing needs to see my affect.
00:02:29 And if it’s gonna be intelligent,
00:02:30 which Microsoft researchers had worked really hard on,
00:02:33 it actually had some of the most sophisticated AI
00:02:34 in it at the time,
00:02:36 that thing’s gonna actually be smart.
00:02:38 It needs to respond to me and you,
00:02:41 and we can send it very different signals.
00:02:45 So by the way, just a quick interruption,
00:02:47 the Clippy, maybe it’s in Word 95, 98,
00:02:52 I don’t remember when it was born,
00:02:54 but many people, do you find yourself with that reference
00:02:58 that people recognize what you’re talking about
00:03:00 still to this point?
00:03:01 I don’t expect the newest students to know it these days,
00:03:05 but I’ve mentioned it to a lot of audiences,
00:03:07 like how many of you know this Clippy thing?
00:03:09 And still the majority of people seem to know it.
00:03:11 So Clippy kind of looks at maybe natural language processing
00:03:15 where you were typing and tries to help you complete,
00:03:18 I think.
00:03:19 I don’t even remember what Clippy was, except annoying.
00:03:22 Yeah, some people actually liked it.
00:03:25 I would hear those stories.
00:03:27 You miss it?
00:03:28 Well, I miss the annoyance.
00:03:31 It felt like there was an element.
00:03:34 Someone was there.
00:03:34 Somebody was there and we were in it together
00:03:36 and they were annoying.
00:03:37 It’s like a puppy that just doesn’t get it.
00:03:40 They keep ripping up the couch kind of thing.
00:03:42 And in fact, they could have done it smarter like a puppy.
00:03:44 If they had done, like if when you yelled at it
00:03:48 or cursed at it,
00:03:49 if it had put its little ears back and its tail down
00:03:51 and slunk off,
00:03:52 probably people would have wanted it back, right?
00:03:55 But instead, when you yelled at it, what did it do?
00:03:58 It smiled, it winked, it danced, right?
00:04:01 If somebody comes to my office and I yell at them,
00:04:03 they start smiling, winking and dancing.
00:04:04 I’m like, I never want to see you again.
00:04:06 So Bill Gates got a standing ovation
00:04:08 when he said it was going away
00:04:10 because people were so ticked.
00:04:12 It was so emotionally unintelligent, right?
00:04:15 It was intelligent about whether you were writing a letter,
00:04:18 what kind of help you needed for that context.
00:04:20 It was completely unintelligent about,
00:04:23 hey, if you’re annoying your customer,
00:04:25 don’t smile in their face when you do it.
00:04:28 So that kind of mismatch was something
00:04:32 the developers just didn’t think about.
00:04:35 And intelligence at the time was really all about math
00:04:39 and language and chess and games,
00:04:44 problems that could be pretty well defined.
00:04:47 Social emotional interaction is much more complex
00:04:50 than chess or Go or any of the games
00:04:53 that people are trying to solve.
00:04:56 And in order to understand that required skills
00:04:58 that most people in computer science
00:05:00 actually were lacking personally.
00:05:02 Well, let’s talk about computer science.
00:05:03 Have things gotten better since the work,
00:05:06 since the message,
00:05:07 since you’ve really launched the field
00:05:09 with a lot of research work in this space?
00:05:11 I still find as a person like yourself,
00:05:14 who’s deeply passionate about human beings
00:05:16 and yet am in computer science,
00:05:18 there still seems to be a lack of,
00:05:02 sorry to say, empathy in us computer scientists.
00:05:26 Yeah, well.
00:05:27 Or hasn’t gotten better.
00:05:28 Let’s just say there’s a lot more variety
00:05:30 among computer scientists these days.
00:05:32 Computer scientists are a much more diverse group today
00:05:35 than they were 25 years ago.
00:05:37 And that’s good.
00:05:39 We need all kinds of people to become computer scientists
00:05:41 so that computer science reflects more what society needs.
00:05:45 And there’s brilliance among every personality type.
00:05:49 So it need not be limited to people
00:05:52 who prefer computers to other people.
00:05:54 How hard do you think it is?
00:05:55 Your view of how difficult it is to recognize emotion
00:05:58 or to create a deeply emotionally intelligent interaction.
00:06:03 Has it gotten easier or harder
00:06:06 as you’ve explored it further?
00:06:07 And how far away are we from cracking this?
00:06:12 If you think of the Turing test solving the intelligence,
00:06:16 looking at the Turing test for emotional intelligence.
00:06:20 I think it is as difficult as I thought it was gonna be.
00:06:25 I think my prediction of its difficulty is spot on.
00:06:29 I think the time estimates are always hard
00:06:33 because they’re always a function of society’s love
00:06:37 and hate of a particular topic.
00:06:39 If society gets excited and you get thousands of researchers
00:06:45 working on it for a certain application,
00:06:49 that application gets solved really quickly.
00:06:52 The general intelligence,
00:06:54 the computer’s complete lack of ability
00:06:58 to have awareness of what it’s doing,
00:07:03 the fact that it’s not conscious,
00:07:05 the fact that there’s no signs of it becoming conscious,
00:07:08 the fact that it doesn’t read between the lines,
00:07:11 those kinds of things that we have to teach it explicitly,
00:07:15 what other people pick up implicitly.
00:07:17 We don’t see that changing yet.
00:07:20 There aren’t breakthroughs yet that lead us to believe
00:07:23 that that’s gonna go any faster,
00:07:25 which means that it’s still gonna be kind of stuck
00:07:28 with a lot of limitations
00:07:31 where it’s probably only gonna do the right thing
00:07:34 in very limited, narrow, prespecified contexts
00:07:37 where we can prescribe pretty much
00:07:40 what’s gonna happen there.
00:07:42 So I don’t see the,
00:07:46 it’s hard to predict a date
00:07:47 because when people don’t work on it, it’s infinite.
00:07:51 When everybody works on it, you get a nice piece of it
00:07:56 well solved in a short amount of time.
00:07:58 I actually think there’s a more important issue right now
00:08:01 than the difficulty of it.
00:08:04 And that’s causing some of us
00:08:05 to put the brakes on a little bit.
00:08:07 Usually we’re all just like step on the gas,
00:08:09 let’s go faster.
00:08:11 This is causing us to pull back and put the brakes on.
00:08:14 And that’s the way that some of this technology
00:08:18 is being used in places like China right now.
00:08:21 And that worries me so deeply
00:08:24 that it’s causing me to pull back myself
00:08:27 on a lot of the things that we could be doing.
00:08:30 And try to get the community to think a little bit more
00:08:33 about, okay, if we’re gonna go forward with that,
00:08:36 how can we do it in a way that puts in place safeguards
00:08:39 that protects people?
00:08:41 So the technology we’re referring to is
00:08:43 just when a computer senses the human being,
00:08:46 like the human face, right?
00:08:48 So there’s a lot of exciting things there,
00:08:51 like forming a deep connection with the human being.
00:08:53 So what are your worries, how that could go wrong?
00:08:57 Is it in terms of privacy?
00:08:59 Is it in terms of other kinds of more subtle things?
00:09:02 But let’s dig into privacy.
00:09:04 So here in the US, if I’m watching a video
00:09:07 of say a political leader,
00:09:09 and in the US we’re quite free as we all know
00:09:13 to even criticize the president of the United States, right?
00:09:17 Here that’s not a shocking thing.
00:09:19 It happens about every five seconds, right?
00:09:22 But in China, what happens if you criticize
00:09:27 the leader of the government, right?
00:09:30 And so people are very careful not to do that.
00:09:34 However, what happens if you’re simply watching a video
00:09:37 and you make a facial expression
00:09:40 that shows a little bit of skepticism, right?
00:09:45 Well, and here we’re completely free to do that.
00:09:47 In fact, we’re free to fly off the handle
00:09:50 and say anything we want, usually.
00:09:54 I mean, there are some restrictions
00:09:56 when the athlete does this
00:09:58 as part of the national broadcast.
00:10:00 Maybe the teams get a little unhappy
00:10:03 about picking that forum to do it, right?
00:10:05 But that’s more a question of judgment.
00:10:08 We have these freedoms,
00:10:11 and in places that don’t have those freedoms,
00:10:14 what if our technology can read
00:10:17 your underlying affective state?
00:10:19 What if our technology can read it even noncontact?
00:10:22 What if our technology can read it
00:10:24 without your prior consent?
00:10:28 And here in the US,
00:10:30 in my first company we started, Affectiva,
00:10:32 we have worked super hard to turn away money
00:10:35 and opportunities that try to read people’s affect
00:10:38 without their prior informed consent.
00:10:41 And even the software that is licensable,
00:10:45 you have to sign things saying
00:10:46 you will only use it in certain ways,
00:10:48 which essentially is get people’s buy in, right?
00:10:52 Don’t do this without people agreeing to it.
00:10:56 There are other countries where they’re not interested
00:10:58 in people’s buy in.
00:10:59 They’re just gonna use it.
00:11:01 They’re gonna inflict it on you.
00:11:03 And if you don’t like it,
00:11:04 you better not scowl in the direction of any sensors.
00:11:08 So one, let me just comment on a small tangent.
00:11:11 Do you know with the idea of adversarial examples
00:11:15 and deep fakes and so on,
00:11:18 what you bring up is actually,
00:11:20 in that one sense, deep fakes provide
00:11:23 a comforting protection that you can no longer really trust
00:11:30 that the video of your face was legitimate.
00:11:34 And therefore you always have an escape clause
00:11:37 if a government is trying,
00:11:38 if a less than stable, balanced, ethical government
00:11:44 is trying to accuse you of something,
00:11:46 at least you have protection.
00:11:47 You can say it was fake news, as is a popular term now.
00:11:50 Yeah, that’s the general thinking of it.
00:11:52 We know how to go into the video
00:11:54 and see, for example, your heart rate and respiration
00:11:58 and whether or not they’ve been tampered with.
00:12:02 And we also can put like fake heart rate and respiration
00:12:05 in your video now too.
00:12:06 We decided we needed to do that.
00:12:10 After we developed a way to extract it,
00:12:12 we decided we also needed a way to jam it.
00:12:15 And so the fact that we took time to do that other step too,
00:12:20 that was time that I wasn’t spending
00:12:22 making the machine more affectively intelligent.
00:12:25 And there’s a choice in how we spend our time,
00:12:28 which is now being swayed a little bit less by this goal
00:12:32 and a little bit more like by concern
00:12:34 about what’s happening in society
00:12:36 and what kind of future do we wanna build.
00:12:38 And as we step back and say,
00:12:41 okay, we don’t just build AI to build AI
00:12:44 to make Elon Musk more money
00:12:46 or to make Amazon Jeff Bezos more money.
00:12:48 Good gosh, you know, that’s the wrong ethic.
00:12:52 Why are we building it?
00:12:54 What is the point of building AI?
00:12:57 It used to be, it was driven by researchers in academia
00:13:01 to get papers published and to make a career for themselves
00:13:04 and to do something cool, right?
00:13:05 Like, cause maybe it could be done.
00:13:08 Now we realize that this is enabling rich people
00:13:12 to get vastly richer, the poor are,
00:13:17 the divide is even larger.
00:13:19 And is that the kind of future that we want?
00:13:22 Maybe we wanna think about, maybe we wanna rethink AI.
00:13:25 Maybe we wanna rethink the problems in society
00:13:29 that are causing the greatest inequity
00:13:32 and rethink how to build AI
00:13:35 that’s not about a general intelligence,
00:13:36 but that’s about extending the intelligence
00:13:39 and capability of the have nots
00:13:41 so that we close these gaps in society.
00:13:43 Do you hope that kind of stepping on the brake
00:13:46 happens organically?
00:13:47 Because I think still majority of the force behind AI
00:13:51 is the desire to publish papers,
00:13:52 is to make money without thinking about the why.
00:13:55 Do you hope it happens organically?
00:13:57 Is there room for regulation?
00:14:01 Yeah, yeah, yeah, great questions.
00:14:02 I prefer the, you know,
00:14:05 they talk about the carrot versus the stick.
00:14:07 I definitely prefer the carrot to the stick.
00:14:09 And, you know, in our free world,
00:14:12 we, there’s only so much stick, right?
00:14:14 You’re gonna find a way around it.
00:14:17 I generally think less regulation is better.
00:14:21 That said, even though my position is classically carrot,
00:14:24 no stick, no regulation,
00:14:26 I think we do need some regulations in this space.
00:14:29 I do think we need regulations
00:14:30 around protecting people with their data,
00:14:33 that you own your data, not Amazon, not Google.
00:14:38 I would like to see people own their own data.
00:14:40 I would also like to see the regulations
00:14:42 that we have right now around lie detection
00:14:44 being extended to emotion recognition in general,
00:14:48 that right now you can’t use a lie detector on an employee
00:14:50 when you’re, on a candidate
00:14:52 when you’re interviewing them for a job.
00:14:54 I think similarly, we need to put in place protection
00:14:57 around reading people’s emotions without their consent
00:15:00 and in certain cases,
00:15:02 like characterizing them for a job and other opportunities.
00:15:06 So I’m also, I also think that when we’re reading emotion
00:15:09 that’s predictive around mental health,
00:15:11 that that should, even though it’s not medical data,
00:15:14 that that should get the kinds of protections
00:15:16 that our medical data gets.
00:15:18 What most people don’t know yet
00:15:19 is right now with your smartphone use,
00:15:22 and if you’re wearing a sensor
00:15:25 and you wanna learn about your stress and your sleep
00:15:27 and your physical activity
00:15:28 and how much you’re using your phone
00:15:30 and your social interaction,
00:15:32 all of that nonmedical data,
00:15:34 when we put it together with machine learning,
00:15:37 now called AI, even though the founders of AI
00:15:40 wouldn’t have called it that,
00:15:42 that capability can not only tell that you’re calm right now
00:15:48 or that you’re getting a little stressed,
00:15:50 but it can also predict how you’re likely to be tomorrow.
00:15:53 If you’re likely to be sick or healthy,
00:15:55 happy or sad, stressed or calm.
00:15:58 Especially when you’re tracking data over time.
00:16:00 Especially when we’re tracking a week of your data or more.
00:16:03 Do you have an optimism towards,
00:16:05 you know, a lot of people on our phones
00:16:07 are worried about this camera that’s looking at us.
00:16:10 For the most part, on balance,
00:16:12 are you optimistic about the benefits
00:16:16 that can be brought from that camera
00:16:17 that’s looking at billions of us?
00:16:19 Or should we be more worried?
00:16:24 I think we should be a little bit more worried
00:16:28 about who’s looking at us and listening to us.
00:16:32 The device sitting on your countertop in your kitchen,
00:16:36 whether it’s, you know, Alexa or Google Home or Apple’s Siri,
00:16:42 these devices want to listen
00:16:47 while they say ostensibly to help us.
00:16:49 And I think there are great people in these companies
00:16:52 who do want to help people.
00:16:54 Let me not brand them all bad.
00:16:56 I’m a user of products from all of these companies
00:16:59 I’m naming all the A companies, Alphabet, Apple, Amazon.
00:17:04 They are awfully big companies, right?
00:17:09 They have incredible power.
00:17:11 And you know, what if China were to buy them, right?
00:17:17 And suddenly all of that data
00:17:19 were not part of free America,
00:17:22 but all of that data were part of somebody
00:17:24 who just wants to take over the world
00:17:26 and you submit to them.
00:17:27 And guess what happens if you so much as smirk the wrong way
00:17:32 when they say something that you don’t like?
00:17:34 Well, they have reeducation camps, right?
00:17:37 That’s a nice word for them.
00:17:39 By the way, they have a surplus of organs
00:17:41 for people who have surgery these days.
00:17:43 They don’t have an organ donation problem
00:17:45 because they take your blood and they know you’re a match.
00:17:48 And the doctors are on record of taking organs
00:17:51 from people who are perfectly healthy and not prisoners.
00:17:55 They’re just simply not the favored ones of the government.
00:17:59 And you know, that’s a pretty freaky evil society.
00:18:04 And we can use the word evil there.
00:18:06 I was born in the Soviet Union.
00:18:07 I can certainly connect to the worry that you’re expressing.
00:18:13 At the same time, probably both you and I
00:18:15 and you very much so,
00:18:19 you know, there’s an exciting possibility
00:18:23 that you can have a deep connection with a machine.
00:18:27 Yeah, yeah.
00:18:28 Right, so.
00:18:30 Those of us, I’ve admitted students who say that they,
00:18:35 you know, when you list like,
00:18:36 who do you most wish you could have lunch with
00:18:39 or dinner with, right?
00:18:41 And they’ll write like, I don’t like people.
00:18:43 I just like computers.
00:18:44 And one of them said to me once
00:18:46 when I had this party at my house,
00:18:49 I want you to know,
00:18:51 this is my only social event of the year,
00:18:53 my one social event of the year.
00:18:55 Like, okay, now this is a brilliant
00:18:57 machine learning person, right?
00:18:59 And we need that kind of brilliance in machine learning.
00:19:01 And I love that computer science welcomes people
00:19:04 who love people and people who are very awkward
00:19:07 around people.
00:19:08 I love that this is a field that anybody could join.
00:19:12 We need all kinds of people
00:19:14 and you don’t need to be a social person.
00:19:16 I’m not trying to force people who don’t like people
00:19:19 to suddenly become social.
00:19:21 At the same time,
00:19:23 if most of the people building the AIs of the future
00:19:26 are the kind of people who don’t like people,
00:19:29 we’ve got a little bit of a problem.
00:19:31 Well, hold on a second.
00:19:31 So let me push back on that.
00:19:33 So don’t you think a large percentage of the world
00:19:38 can, you know, there’s loneliness.
00:19:40 There is a huge problem with loneliness that’s growing.
00:19:44 And so there’s a longing for connection.
00:19:47 Do you…
00:19:49 If you’re lonely, you’re part of a big and growing group.
00:19:51 Yes.
00:19:52 So we’re in it together, I guess.
00:19:54 If you’re lonely, join the group.
00:19:56 You’re not alone.
00:19:56 You’re not alone.
00:19:57 That’s a good line.
00:20:00 But do you think there’s…
00:20:03 You talked about some worry,
00:20:04 but do you think there’s an exciting possibility
00:20:07 that something like Alexa and these kinds of tools
00:20:11 can alleviate that loneliness
00:20:14 in a way that other humans can’t?
00:20:16 Yeah, yeah, definitely.
00:20:18 I mean, a great book can kind of alleviate loneliness
00:20:22 because you just get sucked into this amazing story
00:20:25 and you can’t wait to go spend time with that character.
00:20:27 And they’re not a human character.
00:20:30 There is a human behind it.
00:20:33 But yeah, it can be an incredibly delightful way
00:20:35 to pass the hours and it can meet needs.
00:20:39 Even, you know, I don’t read those trashy romance books,
00:20:43 but somebody does, right?
00:20:44 And what are they getting from this?
00:20:46 Well, probably some of that feeling of being there, right?
00:20:50 Being there in that social moment,
00:20:52 that romantic moment or connecting with somebody.
00:20:56 I’ve had a similar experience
00:20:57 reading some science fiction books, right?
00:20:59 And connecting with the character.
00:21:00 Orson Scott Card, you know, just amazing writing
00:21:04 and Ender’s Game and Speaker for the Dead, terrible title.
00:21:07 But those kind of books that pull you into a character
00:21:11 and you feel like you’re, you feel very social.
00:21:13 It’s very connected, even though it’s not responding to you.
00:21:17 And a computer, of course, can respond to you.
00:21:19 So it can deepen it, right?
00:21:21 You can have a very deep connection,
00:21:25 much more than the movie Her, you know, plays up, right?
00:21:29 Well, much more.
00:21:30 I mean, movie Her is already a pretty deep connection, right?
00:21:34 Well, but it’s just a movie, right?
00:21:36 It’s scripted.
00:21:37 It’s just, you know, but I mean,
00:21:39 like there can be a real interaction
00:21:42 where the character can learn and you can learn.
00:21:46 You could imagine it not just being you and one character.
00:21:49 You could imagine a group of characters.
00:21:51 You can imagine a group of people and characters,
00:21:53 human and AI connecting,
00:21:56 where maybe a few people can’t sort of be friends
00:22:00 with everybody, but the few people
00:22:02 and their AIs can befriend more people.
00:22:07 There can be an extended human intelligence in there
00:22:10 where each human can connect with more people that way.
00:22:14 But it’s still very limited, but there are just,
00:22:19 what I mean is there are many more possibilities
00:22:21 than what’s in that movie.
00:22:22 So there’s a tension here.
00:22:24 So one, you expressed a really serious concern
00:22:27 about privacy, about how governments
00:22:29 can misuse the information,
00:22:31 and there’s the possibility of this connection.
00:22:34 So let’s look at Alexa.
00:22:36 So personal assistance.
00:22:37 For the most part, as far as I’m aware,
00:22:40 they ignore your emotion.
00:22:42 They ignore even the context or the existence of you,
00:22:47 the intricate, beautiful, complex aspects of who you are,
00:22:52 except maybe aspects of your voice
00:22:54 that help it recognize for speech recognition.
00:22:58 Do you think they should move towards
00:23:00 trying to understand your emotion?
00:23:03 All of these companies are very interested
00:23:04 in understanding human emotion.
00:23:07 They want, more people are telling Siri every day
00:23:11 they want to kill themselves.
00:23:13 Apple wants to know the difference between
00:23:15 if a person is really suicidal versus if a person
00:23:18 is just kind of fooling around with Siri, right?
00:23:21 The words may be the same, the tone of voice
00:23:25 and what surrounds those words is pivotal to understand
00:23:31 if they should respond in a very serious way,
00:23:34 bring help to that person,
00:23:35 or if they should kind of jokingly tease back,
00:23:40 ah, you just want to sell me for something else, right?
00:23:44 Like, how do you respond when somebody says that?
00:23:47 Well, you do want to err on the side of being careful
00:23:51 and taking it seriously.
00:23:53 People want to know if the person is happy or stressed
00:23:59 in part, well, so let me give you an altruistic reason
00:24:03 and a business profit motivated reason.
00:24:08 And there are people in companies that operate
00:24:11 on both principles.
00:24:12 The altruistic people really care about their customers
00:24:16 and really care about helping you feel a little better
00:24:19 at the end of the day.
00:24:20 And it would just make those people happy
00:24:22 if they knew that they made your life better.
00:24:24 If you came home stressed and after talking
00:24:27 with their product, you felt better.
00:24:29 There are other people who maybe have studied
00:24:32 the way affect affects decision making
00:24:35 and prices people pay.
00:24:36 And they know, I don’t know if I should tell you,
00:24:38 like the work of Jen Lerner on heartstrings and purse strings,
00:24:43 you know, if we manipulate you into a slightly sadder mood,
00:24:47 you’ll pay more, right?
00:24:50 You’ll pay more to change your situation.
00:24:53 You’ll pay more for something you don’t even need
00:24:55 to make yourself feel better.
00:24:58 So, you know, if they sound a little sad,
00:25:00 maybe I don’t want to cheer them up.
00:25:01 Maybe first I want to help them get something,
00:25:04 a little shopping therapy, right?
00:25:07 That helps them.
00:25:08 Which is really difficult for a company
00:25:09 that’s primarily funded on advertisement.
00:25:12 So they’re encouraged to get you to offer you products
00:25:16 or Amazon that’s primarily funded
00:25:17 on you buying things from their store.
00:25:20 So I think we should be, you know,
00:25:22 maybe we need regulation in the future
00:25:24 to put a little bit of a wall between these agents
00:25:27 that have access to our emotion
00:25:29 and agents that want to sell us stuff.
00:25:32 Maybe there needs to be a little bit more
00:25:35 of a firewall in between those.
00:25:38 So maybe digging in a little bit
00:25:40 on the interaction with Alexa,
00:25:42 you mentioned, of course, a really serious concern
00:25:44 about like recognizing emotion,
00:25:46 if somebody is speaking of suicide or depression and so on,
00:25:49 but what about the actual interaction itself?
00:25:55 Do you think, so if I, you know,
00:25:57 you mentioned Clippy and being annoying,
00:26:01 what is the objective function we’re trying to optimize?
00:26:04 Is it minimize annoyingness or minimize or maximize happiness?
00:26:09 Or if we look at human to human relations,
00:26:12 I think that push and pull, the tension, the dance,
00:26:15 you know, the annoying, the flaws, that’s what makes it fun.
00:26:19 So is there a room for, like what is the objective function?
00:26:24 There are times when you want to have a little push and pull,
00:26:26 I think of kids sparring, right?
00:26:29 You know, I see my sons and they,
00:26:31 one of them wants to provoke the other to be upset
00:26:33 and that’s fun.
00:26:34 And it’s actually healthy to learn where your limits are,
00:26:38 to learn how to self regulate.
00:26:40 You can imagine a game where it’s trying to make you mad
00:26:43 and you’re trying to show self control.
00:26:45 And so if we’re doing a AI human interaction
00:26:48 that’s helping build resilience and self control,
00:26:51 whether it’s to learn how to not be a bully
00:26:54 or how to turn the other cheek
00:26:55 or how to deal with an abusive person in your life,
00:26:58 then you might need an AI that pushes your buttons, right?
00:27:04 But in general, do you want an AI that pushes your buttons?
00:27:10 Probably depends on your personality.
00:27:12 I don’t, I want one that’s respectful,
00:27:15 that is there to serve me
00:27:18 and that is there to extend my ability to do things.
00:27:23 I’m not looking for a rival,
00:27:25 I’m looking for a helper.
00:27:27 And that’s the kind of AI I’d put my money on.
00:27:30 Your sense is for the majority of people in the world,
00:27:33 in order to have a rich experience,
00:27:35 that’s what they’re looking for as well.
00:27:37 So they’re not looking,
00:27:37 if you look at the movie Her, spoiler alert,
00:27:40 I believe the program, the woman in the movie Her,
00:27:46 leaves the person for somebody else,
00:27:51 says they don’t wanna be dating anymore, right?
00:27:54 Like, do you, your sense is if Alexa said,
00:27:58 you know what, I’m actually had enough of you for a while,
00:28:02 so I’m gonna shut myself off.
00:28:04 You don’t see that as…
00:28:07 I’d say you’re trash, cause I paid for you, right?
00:28:10 You, we’ve got to remember,
00:28:14 and this is where this blending human AI
00:28:18 as if we’re equals is really deceptive
00:28:22 because AI is something at the end of the day
00:28:26 that my students and I are making in the lab.
00:28:28 And we’re choosing what it’s allowed to say,
00:28:33 when it’s allowed to speak, what it’s allowed to listen to,
00:28:36 what it’s allowed to act on given the inputs
00:28:40 that we choose to expose it to,
00:28:43 what outputs it’s allowed to have.
00:28:45 It’s all something made by a human.
00:28:49 And if we wanna make something
00:28:50 that makes our lives miserable, fine.
00:28:52 I wouldn’t invest in it as a business,
00:28:56 unless it’s just there for self regulation training.
00:28:59 But I think we need to think about
00:29:01 what kind of future we want.
00:29:02 And actually your question, I really like the,
00:29:05 what is the objective function?
00:29:06 Is it to calm people down?
00:29:09 Sometimes.
00:29:10 Is it to always make people happy and calm them down?
00:29:14 Well, there was a book about that, right?
00:29:16 Brave New World, make everybody happy,
00:29:18 take your Soma if you’re unhappy, take your happy pill.
00:29:22 And if you refuse to take your happy pill,
00:29:24 well, we’ll threaten you by sending you to Iceland
00:29:28 to live there.
00:29:29 I lived in Iceland three years.
00:29:30 It’s a great place.
00:29:31 Don’t take your Soma, then go to Iceland.
00:29:35 A little TV commercial there.
00:29:37 Now I was a child there for a few years.
00:29:39 It’s a wonderful place.
00:29:40 So that part of the book never scared me.
00:29:43 But really like, do we want AI to manipulate us
00:29:46 into submission, into making us happy?
00:29:49 Well, if you are a, you know,
00:29:52 like a power obsessed sick dictator individual
00:29:56 who only wants to control other people
00:29:57 to get your jollies in life, then yeah,
00:29:59 you wanna use AI to extend your power and your scale
00:30:03 to force people into submission.
00:30:07 If you believe that the human race is better off
00:30:10 being given freedom and the opportunity
00:30:12 to do things that might surprise you,
00:30:15 then you wanna use AI to extend people’s ability to build,
00:30:20 you wanna build AI that extends human intelligence,
00:30:22 that empowers the weak and helps balance the power
00:30:27 between the weak and the strong,
00:30:28 not that gives more power to the strong.
00:30:32 So in this process of empowering people and sensing people,
00:30:39 what is your sense on emotion
00:30:41 in terms of recognizing emotion?
00:30:42 The difference between emotion that is shown
00:30:44 and emotion that is felt.
00:30:46 So yeah, emotion that is expressed on the surface
00:30:52 through your face, your body, and various other things,
00:30:56 and what’s actually going on deep inside
00:30:58 on the biological level, on the neuroscience level,
00:31:01 or some kind of cognitive level.
00:31:03 Yeah, yeah.
00:31:05 Whoa, no easy questions here.
00:31:07 Well, yeah, I’m sure there’s no definitive answer,
00:31:11 but what’s your sense?
00:31:12 How far can we get by just looking at the face?
00:31:16 We’re very limited when we just look at the face,
00:31:18 but we can get further than most people think we can get.
00:31:21 People think, hey, I have a great poker face,
00:31:25 therefore all you’re ever gonna get from me is neutral.
00:31:28 Well, that’s naive.
00:31:30 We can read with the ordinary camera
00:31:32 on your laptop or on your phone.
00:31:34 We can read from a neutral face if your heart is racing.
00:31:39 We can read from a neutral face
00:31:41 if your breathing is becoming irregular
00:31:44 and showing signs of stress.
00:31:46 We can read under some conditions
00:31:50 that maybe I won’t give you details on,
00:31:53 how your heart rate variability power is changing.
00:31:57 That could be a sign of stress,
00:31:58 even when your heart rate is not necessarily accelerating.
00:32:02 So…
00:32:03 Sorry, from physio sensors or from the face?
00:32:06 From the color changes that you cannot even see,
00:32:09 but the camera can see.
00:32:11 That’s amazing.
00:32:12 So you can get a lot of signal, but…
00:32:15 So we get things people can’t see using a regular camera.
00:32:18 And from that, we can tell things about your stress.
00:32:21 So if you were just sitting there with a blank face
00:32:25 thinking nobody can read my emotion, well, you’re wrong.
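As a rough, illustrative sketch of the kind of signal processing being described, the snippet below estimates a pulse rate from subtle color changes averaged over a face region in ordinary video. It is not Affectiva's or the Media Lab's actual method; the green-channel input, the filter band, and the function name are assumptions made for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate from the mean green-channel value of a face
    region, one value per video frame (a hypothetical input)."""
    # Remove slow drift from lighting changes and head motion.
    signal = green_means - np.mean(green_means)
    # Band-pass 0.7-3.5 Hz, i.e. roughly 42-210 beats per minute.
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.5 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)
    # The dominant frequency of the filtered pulse signal gives the rate.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0  # Hz to beats per minute
```

Tracking how that band-limited pulse signal varies over time is also how heart rate variability estimates can be made from video of a face that looks completely neutral.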
00:32:30 Right, so that’s really interesting,
00:32:31 but that’s from sort of visual information from the face.
00:32:34 That’s almost like cheating your way
00:32:37 to the physiological state of the body,
00:32:39 by being very clever with what you can do with vision.
00:32:42 With signal processing.
00:32:43 With signal processing.
00:32:44 So that’s really impressive.
00:32:45 But if you just look at the stuff we humans can see,
00:32:49 the poker, the smile, the smirks,
00:32:52 the subtle, all the facial actions.
00:32:54 So then you can hide that on your face
00:32:55 for a limited amount of time.
00:32:57 Now, if you’re just going in for a brief interview
00:33:00 and you’re hiding it, that’s pretty easy for most people.
00:33:03 If you are, however, surveilled constantly everywhere you go,
00:33:08 then it’s gonna say, gee, you know, Lex used to smile a lot
00:33:13 and now I’m not seeing so many smiles.
00:33:15 And Roz used to laugh a lot
00:33:20 and smile a lot very spontaneously.
00:33:22 And now I’m only seeing
00:33:23 these not so spontaneous looking smiles.
00:33:26 And only when she’s asked these questions.
00:33:28 You know, that’s something’s changed here.
00:33:31 Probably not getting enough sleep.
00:33:33 We could look at that too.
00:33:35 So now I have to be a little careful too.
00:33:37 When I say we, you think we can’t read your emotion
00:33:40 and we can, it’s not that binary.
00:33:42 What we’re reading is more some physiological changes
00:33:45 that relate to your activation.
00:33:48 Now, that doesn’t mean that we know everything
00:33:51 about how you feel.
00:33:52 In fact, we still know very little about how you feel.
00:33:54 Your thoughts are still private.
00:33:56 Your nuanced feelings are still completely private.
00:34:01 We can’t read any of that.
00:34:02 So there’s some relief that we can’t read that.
00:34:07 Even brain imaging can’t read that.
00:34:09 Wearables can’t read that.
00:34:12 However, as we read your body state changes
00:34:16 and we know what’s going on in your environment
00:34:18 and we look at patterns of those over time,
00:34:21 we can start to make some inferences
00:34:24 about what you might be feeling.
00:34:26 And that is where it’s not just the momentary feeling
00:34:31 but it’s more your stance toward things.
00:34:34 And that could actually be a little bit more scary
00:34:37 with certain kinds of governmental control freak people
00:34:42 who want to know more about are you on their team
00:34:46 or are you not?
00:34:48 And getting that information through over time.
00:34:50 So you’re saying there’s a lot of signal
00:34:51 by looking at the change over time.
00:34:53 Yeah.
00:34:54 So you’ve done a lot of exciting work
00:34:56 both in computer vision
00:34:57 and physiological sense like wearables.
00:35:00 What do you think is the best modality for,
00:35:03 what’s the best window into the emotional soul?
00:35:08 Is it the face?
00:35:09 Is it the voice?
00:35:10 Depends what you want to know.
00:35:11 It depends what you want to know.
00:35:13 It depends what you want to know.
00:35:13 Everything is informative.
00:35:15 Everything we do is informative.
00:35:17 So for health and wellbeing and things like that,
00:35:20 do you find the wearable tech,
00:35:22 measuring physiological signals
00:35:24 is the best for health based stuff?
00:35:29 So here I’m going to answer empirically
00:35:31 with data and studies we’ve been doing.
00:35:34 We’ve been doing studies.
00:35:36 Now these are currently running
00:35:38 with lots of different kinds of people
00:35:39 but where we’ve published data
00:35:41 and I can speak publicly to it,
00:35:44 the data are limited right now
00:35:45 to New England college students.
00:35:47 So that’s a small group.
00:35:50 Among New England college students,
00:35:52 when they are wearing a wearable
00:35:55 like the Empatica Embrace here
00:35:57 that’s measuring skin conductance, movement, temperature.
00:36:01 And when they are using a smartphone
00:36:05 that is collecting their time of day
00:36:09 of when they’re texting, who they’re texting,
00:36:12 their movement around it, their GPS,
00:36:14 the weather information based upon their location.
00:36:18 And when it’s using machine learning
00:36:19 and putting all of that together
00:36:20 and looking not just at right now
00:36:22 but looking at your rhythm of behaviors
00:36:26 over about a week.
00:36:28 When we look at that,
00:36:29 we are very accurate at forecasting tomorrow’s stress,
00:36:33 mood and happy, sad mood and health.
00:36:38 And when we look at which pieces of that are most useful,
00:36:43 first of all, if you have all the pieces,
00:36:45 you get the best results.
00:36:48 If you have only the wearable,
00:36:50 you get the next best results.
00:36:52 And that’s still better than 80% accurate
00:36:56 at forecasting tomorrow’s levels.
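Below is a hedged, illustrative sketch of the kind of forecasting pipeline described here: roughly a week of daily wearable and phone features, fed to a standard classifier to predict tomorrow's mood. This is not the lab's published model; the feature set, the synthetic data, and the choice of classifier are all assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Each row: 7 days x [mean skin conductance, movement, skin temperature,
# phone screen time, texting activity], flattened into one feature vector.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 7 * 5))     # placeholder data, 200 participant-weeks
y = rng.integers(0, 2, size=200)      # 1 = good mood tomorrow, 0 = low mood

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```

With real labeled data in place of the placeholders, the same structure (weekly rhythm features in, next-day label out) is what "forecasting tomorrow's levels" refers to.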
00:37:00 Isn’t that exciting because the wearable stuff
00:37:02 with physiological information,
00:37:05 it feels like it violates privacy less
00:37:08 than the noncontact face based methods.
00:37:12 Yeah, it’s interesting.
00:37:14 I think what people sometimes don’t,
00:37:16 it’s funny in the early days people would say,
00:37:18 oh, wearing something or giving blood is invasive, right?
00:37:22 Whereas a camera is less invasive
00:37:24 because it’s not touching you.
00:37:26 I think on the contrary,
00:37:28 the things that are not touching you are maybe the scariest
00:37:31 because you don’t know when they’re on or off.
00:37:33 And you don’t know who’s behind it, right?
00:37:39 A wearable, depending upon what’s happening
00:37:43 to the data on it, if it’s just stored locally
00:37:46 or if it’s streaming and what it is being attached to,
00:37:52 in a sense, you have the most control over it
00:37:54 because it’s also very easy to just take it off, right?
00:37:59 Now it’s not sensing me.
00:38:01 So if I’m uncomfortable with what it’s sensing,
00:38:05 now I’m free, right?
00:38:07 If I’m comfortable with what it’s sensing,
00:38:09 then, and I happen to know everything about this one
00:38:12 and what it’s doing with it,
00:38:13 so I’m quite comfortable with it,
00:38:15 then I have control, I’m comfortable.
00:38:20 Control is one of the biggest factors for an individual
00:38:24 in reducing their stress.
00:38:26 If I have control over it,
00:38:28 if I know all there is to know about it,
00:38:30 then my stress is a lot lower
00:38:32 and I’m making an informed choice
00:38:34 about whether to wear it or not,
00:38:36 or when to wear it or not.
00:38:38 I wanna wear it sometimes, maybe not others.
00:38:40 Right, so that control, yeah, I’m with you.
00:38:42 That control, even if, yeah, the ability to turn it off,
00:38:47 that is a really important thing.
00:38:49 It’s huge.
00:38:49 And we need to, maybe, if there’s regulations,
00:38:53 maybe that’s number one to protect
00:38:55 is people’s ability to opt out, that it’s as easy to opt out as to opt in.
00:38:59 Right, so you’ve studied a bit of neuroscience as well.
00:39:04 How have looking at our own minds,
00:39:08 sort of the biological stuff or the neurobiological,
00:39:12 the neuroscience to get the signals in our brain,
00:39:17 helped you understand the problem
00:39:18 and the approach of affective computing, so?
00:39:21 Originally, I was a computer architect
00:39:23 and I was building hardware and computer designs
00:39:26 and I wanted to build ones that worked like the brain.
00:39:28 So I’ve been studying the brain
00:39:29 as long as I’ve been studying how to build computers.
00:39:33 Have you figured out anything yet?
00:39:36 Very little.
00:39:37 It’s so amazing.
00:39:39 You know, they used to think like,
00:39:40 oh, if you remove this chunk of the brain
00:39:42 and you find this function goes away,
00:39:44 well, that’s the part of the brain that did it.
00:39:45 And then later they realized
00:39:46 if you remove this other chunk of the brain,
00:39:48 that function comes back and,
00:39:50 oh no, we really don’t understand it.
00:39:52 Brains are so interesting and changing all the time
00:39:56 and able to change in ways
00:39:58 that will probably continue to surprise us.
00:40:02 When we were measuring stress,
00:40:04 you may know the story where we found
00:40:07 an unusual big skin conductance pattern on one wrist
00:40:10 in one of our kids with autism.
00:40:14 And in trying to figure out how on earth
00:40:15 you could be stressed on one wrist and not the other,
00:40:17 like how can you get sweaty on one wrist, right?
00:40:20 When you get stressed
00:40:21 with that sympathetic fight or flight response,
00:40:23 like you kind of should like sweat more
00:40:25 in some places than others,
00:40:26 but not more on one wrist than the other.
00:40:27 That didn’t make any sense.
00:40:30 We learned that what had actually happened
00:40:33 was a part of his brain had unusual electrical activity
00:40:37 and that caused an unusually large sweat response
00:40:41 on one wrist and not the other.
00:40:44 And since then we’ve learned
00:40:45 that seizures cause this unusual electrical activity.
00:40:49 And depending where the seizure is,
00:40:51 if it’s in one place and it’s staying there,
00:40:53 you can have a big electrical response
00:40:55 we can pick up with a wearable at one part of the body.
00:40:58 You can also have a seizure
00:40:59 that spreads over the whole brain,
00:41:00 generalized grand mal seizure.
00:41:02 And that response spreads
00:41:04 and we can pick it up pretty much anywhere.
00:41:07 As we learned this and then later built Embrace
00:41:10 that’s now FDA cleared for seizure detection,
00:41:13 we have also built relationships
00:41:15 with some of the most amazing doctors in the world
00:41:18 who not only help people
00:41:20 with unusual brain activity or epilepsy,
00:41:23 but some of them are also surgeons
00:41:24 and they’re going in and they’re implanting electrodes,
00:41:27 not just to momentarily read the strange patterns
00:41:31 of brain activity that we’d like to see return to normal,
00:41:35 but also to read out continuously what’s happening
00:41:37 in some of these deep regions of the brain
00:41:39 during most of life when these patients are not seizing.
00:41:41 Most of the time they’re not seizing,
00:41:42 most of the time they’re fine.
00:41:44 And so we are now working on mapping
00:41:47 those deep brain regions
00:41:49 that you can’t even usually get with EEG scalp electrodes
00:41:53 because the changes deep inside don’t reach the surface.
00:41:58 But interesting when some of those regions
00:42:00 are activated, we see a big skin conductance response.
00:42:04 Who would have thunk it, right?
00:42:05 Like nothing here, but something here.
00:42:07 In fact, right after seizures
00:42:10 that we think are the most dangerous ones
00:42:12 that precede what’s called SUDEP,
00:42:14 Sudden Unexpected Death in Epilepsy,
00:42:16 there’s a period where the brainwaves go flat
00:42:19 and it looks like the person’s brain has stopped,
00:42:21 but it hasn’t.
00:42:23 The activity has gone deep into a region
00:42:26 that can make the cortical activity look flat,
00:42:29 like a quick shutdown signal here.
00:42:32 It can unfortunately cause breathing to stop
00:42:35 if it progresses long enough.
00:42:38 Before that happens, we see a big skin conductance response
00:42:42 in the data that we have.
00:42:43 The longer this flattening, the bigger our response here.
00:42:46 So we have been trying to learn, you know, initially,
00:42:49 like why are we getting a big response here
00:42:51 when there’s nothing here?
00:42:52 Well, it turns out there’s something much deeper.
00:42:55 So we can now go inside the brains
00:42:57 of some of these individuals, fabulous people
00:43:01 who usually aren’t seizing,
00:43:03 and get this data and start to map it.
00:43:05 So that’s the active research that we’re doing right now
00:43:07 with top medical partners.
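As an illustrative sketch only, the snippet below flags unusually large skin conductance responses in a wearable EDA trace, in the spirit of the responses described above. It is not the FDA-cleared Embrace detection algorithm; the sampling rate, prominence threshold, and synthetic trace are assumptions for the example.

```python
import numpy as np
from scipy.signal import find_peaks

def large_scr_events(eda: np.ndarray, fs: float, min_rise_microsiemens: float = 1.0):
    """Return sample indices of large skin conductance responses (SCRs)."""
    peaks, props = find_peaks(eda, prominence=min_rise_microsiemens,
                              distance=int(1.0 * fs))  # at least 1 s apart
    return peaks, props["prominences"]

# Example with a synthetic trace: a flat baseline plus one big response.
fs = 4.0                                  # many EDA wearables sample at a few Hz
t = np.arange(0, 300, 1.0 / fs)
eda = 2.0 + 0.05 * np.random.randn(t.size)
eda[600:640] += np.linspace(0, 3.0, 40)   # one large, fast rise
peaks, sizes = large_scr_events(eda, fs)
print(peaks, sizes)
```

In a real device, responses like these would be combined with movement and other signals before any alert is raised; this sketch only shows the basic peak-picking idea.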
00:43:09 So this wearable sensor that’s looking at skin conductance
00:43:12 can capture sort of the ripples of the complexity
00:43:17 of what’s going on in our brain.
00:43:18 So this little device, you have a hope
00:43:22 that you can start to get the signal
00:43:24 from the interesting things happening in the brain.
00:43:27 Yeah, we’ve already published the strong correlations
00:43:30 between the size of this response
00:43:32 and the flattening that happens afterwards.
00:43:35 And unfortunately, also in a real SUDEP case
00:43:38 where the patient died because the, well, we don’t know why.
00:43:42 We don’t know if, had somebody been there,
00:43:43 it would have definitely been prevented.
00:43:45 But we know that most SUDEPs happen
00:43:47 when the person’s alone.
00:43:48 And in this case, a SUDEP is an acronym, S U D E P.
00:43:53 And it’s actually the number two cause
00:43:56 of years of life lost
00:43:58 among all neurological disorders.
00:44:01 Stroke is number one, SUDEP is number two,
00:44:03 but most people haven’t heard of it.
00:44:05 Actually, I’ll plug my TED talk,
00:44:07 it’s on the front page of TED right now
00:44:09 that talks about this.
00:44:11 And we hope to change that.
00:44:13 I hope everybody who’s heard of SIDS and stroke
00:44:17 will now hear of SUDEP
00:44:18 because we think in most cases it’s preventable
00:44:21 if people take their meds and aren’t alone
00:44:24 when they have a seizure.
00:44:26 Not guaranteed to be preventable.
00:44:27 There are some exceptions,
00:44:29 but we think most cases probably are.
00:44:31 So you had this Embrace now in the version two wristband,
00:44:35 right, for epilepsy management.
00:44:39 That’s the one that’s FDA approved?
00:44:41 Yes.
00:44:42 Which is kind of a clear.
00:44:43 FDA cleared, they say.
00:44:45 Sorry.
00:44:46 No, it’s okay.
00:44:46 It essentially means it’s approved for marketing.
00:44:49 Got it.
00:44:50 Just a side note, how difficult is that to do?
00:44:52 It’s essentially getting FDA approval
00:44:54 for computer science technology.
00:44:57 It’s so agonizing.
00:44:58 It’s much harder than publishing multiple papers
00:45:01 in top medical journals.
00:45:04 Yeah, we’ve published peer reviewed
00:45:05 top medical journal neurology, best results,
00:45:08 and that’s not good enough for the FDA.
00:45:10 Is that system,
00:45:12 so if we look at the peer review of medical journals,
00:45:14 there’s flaws, there’s strengths,
00:45:16 is the FDA approval process,
00:45:19 how does it compare to the peer review process?
00:45:21 Does it have the strength?
00:45:23 I’ll take peer review over FDA any day.
00:45:25 But is that a good thing?
00:45:26 Is that a good thing for FDA?
00:45:28 You’re saying, does it stop some amazing technology
00:45:31 from getting through?
00:45:32 Yeah, it does.
00:45:33 The FDA performs a very important good role
00:45:36 in keeping people safe.
00:45:37 They keep things,
00:45:39 they put you through tons of safety testing
00:45:41 and that’s wonderful and that’s great.
00:45:44 I’m all in favor of the safety testing.
00:45:46 But sometimes they put you through additional testing
00:45:51 that they don’t have to explain why they put you through it
00:45:54 and you don’t understand why you’re going through it
00:45:56 and it doesn’t make sense.
00:45:58 And that’s very frustrating.
00:46:00 And maybe they have really good reasons
00:46:04 and they just would,
00:46:05 it would do people a service to articulate those reasons.
00:46:09 Be more transparent.
00:46:10 Be more transparent.
00:46:12 So as part of Empatica, you have sensors.
00:46:15 So what kind of problems can we crack?
00:46:17 What kind of things from seizures to autism
00:46:24 to I think I’ve heard you mentioned depression.
00:46:28 What kind of things can we alleviate?
00:46:29 Can we detect?
00:46:30 What’s your hope of what,
00:46:32 how we can make the world a better place
00:46:33 with this wearable tech?
00:46:35 I would really like to see my fellow brilliant researchers
00:46:40 step back and say, what are the really hard problems
00:46:46 that we don’t know how to solve
00:46:47 that come from people maybe we don’t even see
00:46:50 in our normal life because they’re living
00:46:52 in the poor places.
00:46:54 They’re stuck on the bus.
00:46:56 They can’t even afford the Uber or the Lyft
00:46:58 or the data plan or all these other wonderful things
00:47:02 we have that we keep improving on.
00:47:04 Meanwhile, there’s all these folks left behind in the world
00:47:07 and they’re struggling with horrible diseases
00:47:09 with depression, with epilepsy, with diabetes,
00:47:12 with just awful stuff that maybe a little more time
00:47:19 and attention hanging out with them
00:47:20 and learning what are their challenges in life?
00:47:22 What are their needs?
00:47:24 How do we help them have job skills?
00:47:25 How do we help them have a hope and a future
00:47:28 and a chance to have the great life
00:47:31 that so many of us building technology have?
00:47:34 And then how would that reshape the kinds of AI
00:47:37 that we build? How would that reshape the new apps
00:47:41 that we build or the maybe we need to focus
00:47:44 on how to make things more low cost and green
00:47:46 instead of thousand dollar phones?
00:47:49 I mean, come on, why can’t we be thinking more
00:47:52 about things that do more with less for these folks?
00:47:56 Quality of life is not related to the cost of your phone.
00:48:00 It’s not something that, it’s been shown that above about
00:48:03 $75,000 of income, happiness is the same, okay?
00:48:08 However, I can tell you, you get a lot of happiness
00:48:10 from helping other people.
00:48:12 You get a lot more than $75,000 buys.
00:48:15 So how do we connect up the people who have real needs
00:48:19 with the people who have the ability to build the future
00:48:21 and build the kind of future that truly improves the lives
00:48:25 of all the people that are currently being left behind?
00:48:28 So let me return just briefly on a point,
00:48:32 maybe in the movie, Her.
00:48:35 So do you think if we look farther into the future,
00:48:37 you said so much of the benefit from making our technology
00:48:41 more empathetic to us human beings would make them
00:48:46 better tools, empower us, make our lives better.
00:48:50 Well, if we look farther into the future,
00:48:51 do you think we’ll ever create an AI system
00:48:54 that we can fall in love with?
00:48:56 That we can fall in love with and loves us back
00:49:00 on a level that is similar to human to human interaction,
00:49:04 like in the movie Her or beyond?
00:49:07 I think we can simulate it in ways that could,
00:49:13 you know, sustain engagement for a while.
00:49:17 Would it be as good as another person?
00:49:20 I don’t think so, if you’re used to like good people.
00:49:24 Now, if you’ve just grown up with nothing but abuse
00:49:27 and you can’t stand human beings,
00:49:29 can we do something that helps you there
00:49:32 that gives you something through a machine?
00:49:34 Yeah, but that’s pretty low bar, right?
00:49:36 If you’ve only encountered pretty awful people.
00:49:39 If you’ve encountered wonderful, amazing people,
00:49:41 we’re nowhere near building anything like that.
00:49:44 And I would not bet on building it.
00:49:49 I would bet instead on building the kinds of AI
00:49:53 that helps kind of raise all boats,
00:49:56 that helps all people be better people,
00:49:59 helps all people figure out if they’re getting sick tomorrow
00:50:02 and helps give them what they need to stay well tomorrow.
00:50:05 That’s the kind of AI I wanna build
00:50:07 that improves human lives,
00:50:09 not the kind of AI that just walks on The Tonight Show
00:50:11 and people go, wow, look how smart that is.
00:50:14 Really?
00:50:15 And then it goes back in a box, you know?
00:50:18 So on that point,
00:50:19 if we continue looking a little bit into the future,
00:50:23 do you think an AI that’s empathetic
00:50:25 and does improve our lives
00:50:28 need to have a physical presence, a body?
00:50:31 And even let me cautiously say the C word consciousness
00:50:38 and even fear of mortality.
00:50:40 So some of those human characteristics,
00:50:42 do you think it needs to have those aspects
00:50:45 or can it remain simply a machine learning tool
00:50:50 that learns from data of behavior
00:50:53 that learns to make us,
00:50:56 based on previous patterns, feel better?
00:51:00 Or does it need those elements of consciousness?
00:51:02 It depends on your goals.
00:51:03 If you’re making a movie, it needs a body.
00:51:06 It needs a gorgeous body.
00:51:08 It needs to act like it has consciousness.
00:51:10 It needs to act like it has emotion, right?
00:51:11 Because that’s what sells.
00:51:13 That’s what’s gonna get me to show up and enjoy the movie.
00:51:16 Okay.
00:51:17 In real life, does it need all that?
00:51:19 Well, if you’ve read Orson Scott Card,
00:51:21 Ender’s Game, Speaker for the Dead,
00:51:23 it could just be like a little voice in your earring, right?
00:51:26 And you could have an intimate relationship
00:51:28 and it could get to know you.
00:51:29 And it doesn’t need to be a robot.
00:51:34 But that doesn’t make this compelling of a movie, right?
00:51:37 I mean, we already think it’s kind of weird
00:51:38 when a guy looks like he’s talking to himself on the train,
00:51:41 even though it’s earbuds.
00:51:43 So we have these, embodied is more powerful.
00:51:49 Embodied, when you compare interactions
00:51:51 with an embodied robot versus a video of a robot
00:51:55 versus no robot, the robot is more engaging.
00:52:00 The robot gets our attention more.
00:52:01 The robot, when you walk in your house,
00:52:03 is more likely to get you to remember to do the things
00:52:05 that you asked it to do,
00:52:06 because it’s kind of got a physical presence.
00:52:09 You can avoid it if you don’t like it.
00:52:10 It could see you’re avoiding it.
00:52:12 There’s a lot of power to being embodied.
00:52:14 There will be embodied AIs.
00:52:17 They have great power and opportunity and potential.
00:52:22 There will also be AIs that aren’t embodied,
00:52:24 that just are little software assistants
00:52:28 that help us with different things
00:52:30 that may get to know things about us.
00:52:33 Will they be conscious?
00:52:34 There will be attempts to program them
00:52:36 to make them appear to be conscious.
00:52:39 We can already write programs that make it look like,
00:52:41 oh, what do you mean?
00:52:42 Of course I’m aware that you’re there, right?
00:52:43 I mean, it’s trivial to say stuff like that.
00:52:45 It’s easy to fool people,
00:52:48 but does it actually have conscious experience like we do?
00:52:53 Nobody has a clue how to do that yet.
00:52:55 That seems to be something that is beyond
00:52:58 what any of us knows how to build now.
00:53:01 Will it have to have that?
00:53:03 I think you can get pretty far
00:53:05 with a lot of stuff without it.
00:53:07 But will we accord it rights?
00:53:10 Well, that’s more a political game
00:53:13 than it is a question of real consciousness.
00:53:16 Yeah, whether you can go to jail for turning off Alexa
00:53:18 is the question for an election maybe a few decades from now.
00:53:24 Well, Sophia Robot’s already been given rights
00:53:27 as a citizen in Saudi Arabia, right?
00:53:30 Even before women have full rights.
00:53:33 Then the robot was still put back in the box
00:53:36 to be shipped to the next place
00:53:39 where it would get a paid appearance, right?
00:53:42 Yeah, it’s dark and almost comedic, if not absurd.
00:53:50 So I’ve heard you speak about your journey in finding faith.
00:53:54 Sure.
00:53:55 And how you discovered some wisdoms about life
00:54:00 and beyond from reading the Bible.
00:54:03 And I’ve also heard you say
00:54:05 that scientists too often assume
00:54:07 that nothing exists beyond what can be currently measured.
00:54:11 Yeah, materialism.
00:54:12 Materialism.
00:54:13 And scientism, yeah.
00:54:14 So in some sense, this assumption enables
00:54:17 the near term scientific method,
00:54:20 assuming that we can uncover the mysteries of this world
00:54:25 by the mechanisms of measurement that we currently have.
00:54:28 But we easily forget that we’ve made this assumption.
00:54:33 So what do you think we miss out on
00:54:35 by making that assumption?
00:54:38 It’s fine to limit the scientific method
00:54:42 to things we can measure and reason about and reproduce.
00:54:47 That’s fine.
00:54:49 I think we have to recognize
00:54:51 that sometimes we scientists also believe
00:54:53 in things that happened historically.
00:54:55 Like I believe the Holocaust happened.
00:54:57 I can’t prove events from past history scientifically.
00:55:03 You prove them with historical evidence, right?
00:55:06 With the impact they had on people,
00:55:08 with eyewitness testimony and things like that.
00:55:11 So a good thinker recognizes that science
00:55:15 is one of many ways to get knowledge.
00:55:19 It’s not the only way.
00:55:21 And there’s been some really bad philosophy
00:55:24 and bad thinking recently, you can call it scientism,
00:55:27 where people say science is the only way to get to truth.
00:55:31 And it’s not, it just isn’t.
00:55:33 There are other ways that work also.
00:55:35 Like knowledge of love with someone.
00:55:38 You don’t prove your love through science, right?
00:55:43 So history, philosophy, love,
00:55:48 a lot of other things in life show us
00:55:50 that there are more ways than science to gain knowledge and truth,
00:55:55 if you’re willing to believe there is such a thing,
00:55:57 and I believe there is.
00:56:01 I do, I am a scientist, however.
00:56:03 And I do limit my science
00:56:05 to the things that the scientific method can do.
00:56:09 But I recognize that it’s myopic
00:56:11 to say that that’s all there is.
00:56:13 Right, just like you listed,
00:56:15 there’s all the why questions.
00:56:17 And really, if we’re being honest with ourselves,
00:56:20 the percent of what we really know is basically zero
00:56:25 relative to the full mystery of the…
00:56:28 In measure theory, that’s a set of measure zero,
00:56:30 if I have a finite amount of knowledge, which I do.
00:56:34 So you said that you believe in truth.
00:56:37 So let me ask that old question.
00:56:40 What do you think this thing is all about?
00:56:42 What’s the life on earth?
00:56:44 Life, the universe, and everything?
00:56:46 And everything, what’s the meaning?
00:56:47 I can’t, so I’ll quote Douglas Adams: 42.
00:56:49 It’s my favorite number.
00:56:51 By the way, that’s my street address.
00:56:52 My husband and I guessed the exact same number
00:56:54 for our house, we got to pick it.
00:56:57 And there’s a reason we picked 42, yeah.
00:57:00 So is it just 42 or is there,
00:57:02 do you have other words that you can put around it?
00:57:05 Well, I think there’s a grand adventure
00:57:07 and I think this life is a part of it.
00:57:09 I think there’s a lot more to it than meets the eye
00:57:12 and the heart and the mind and the soul here.
00:57:14 I think we see but through a glass dimly in this life.
00:57:18 We see only a part of all there is to know.
00:57:22 If people haven’t read the Bible, they should,
00:57:25 if they consider themselves educated.
00:57:27 And you could read Proverbs
00:57:30 and find tremendous wisdom in there
00:57:33 that cannot be scientifically proven.
00:57:35 But when you read it, there’s something in you,
00:57:38 like a musician knows when the instrument’s played right
00:57:41 and it’s beautiful.
00:57:42 There’s something in you that comes alive
00:57:45 and knows that there’s a truth there
00:57:47 that it’s like your strings are being plucked by the master
00:57:50 instead of by me, right, when I’m the one plucking it.
00:57:54 But probably when you play, it sounds spectacular, right?
00:57:57 And when you encounter those truths,
00:58:01 there’s something in you that sings
00:58:03 and knows that there is more
00:58:06 than what I can prove mathematically
00:58:09 or program a computer to do.
00:58:11 Don’t get me wrong, the math is gorgeous.
00:58:13 The computer programming can be brilliant.
00:58:16 It’s inspiring, right?
00:58:17 We wanna do more.
00:58:19 None of this squashes my desire to do science
00:58:21 or to get knowledge through science.
00:58:23 I’m not dissing the science at all.
00:58:26 I grow even more in awe of what the science can do
00:58:29 because I’m more in awe of all there is we don’t know.
00:58:33 And really at the heart of science,
00:58:36 you have to have a belief that there’s truth,
00:58:38 that there’s something greater to be discovered.
00:58:41 And some scientists may not wanna use the faith word,
00:58:44 but it’s faith that drives us to do science.
00:58:47 It’s faith that there is truth,
00:58:49 that there’s something to know that we don’t know,
00:58:52 that it’s worth knowing, that it’s worth working hard,
00:58:56 and that there is meaning,
00:58:58 that there is such a thing as meaning,
00:58:59 which by the way, science can’t prove either.
00:59:02 We have to kind of start with some assumptions
00:59:04 that there are things like truth and meaning.
00:59:06 And these are really questions philosophers own, right?
00:59:10 This is the space
00:59:11 of philosophers and theologians at some level.
00:59:14 So these are things science doesn’t own.
00:59:19 When people claim that science will tell you all truth,
00:59:23 there’s a name for that.
00:59:23 It’s its own kind of faith.
00:59:25 It’s scientism and it’s very myopic.
00:59:29 Yeah, there’s a much bigger world out there to be explored
00:59:32 in ways that science may not,
00:59:34 at least for now, allow us to explore.
00:59:37 Yeah, and there’s meaning and purpose and hope
00:59:40 and joy and love and all these awesome things
00:59:43 that make it all worthwhile too.
00:59:45 I don’t think there’s a better way to end it, Roz.
00:59:47 Thank you so much for talking today.
00:59:49 Thanks Lex, what a pleasure.
00:59:50 Great questions.