Max Tegmark: Life 3.0 #1

Transcript

00:00:00 As part of MIT course 6S099, Artificial General Intelligence,

00:00:04 I’ve gotten the chance to sit down with Max Tegmark.

00:00:06 He is a professor here at MIT.

00:00:08 He’s a physicist, spent a large part of his career

00:00:11 studying the mysteries of our cosmological universe.

00:00:16 But he’s also studied and delved into the beneficial

00:00:20 possibilities and the existential risks

00:00:24 of artificial intelligence.

00:00:25 Amongst many other things, he is the cofounder

00:00:29 of the Future of Life Institute, author of two books,

00:00:33 both of which I highly recommend.

00:00:35 First, Our Mathematical Universe.

00:00:37 Second is Life 3.0.

00:00:40 He’s truly an out of the box thinker and a fun personality,

00:00:44 so I really enjoy talking to him.

00:00:45 If you’d like to see more of these videos in the future,

00:00:47 please subscribe and also click the little bell icon

00:00:50 to make sure you don’t miss any videos.

00:00:52 Also, Twitter, LinkedIn, agi.mit.edu

00:00:56 if you wanna watch other lectures

00:00:59 or conversations like this one.

00:01:01 Better yet, go read Max’s book, Life 3.0.

00:01:04 Chapter seven on goals is my favorite.

00:01:07 It’s really where philosophy and engineering come together

00:01:10 and it opens with a quote by Dostoevsky.

00:01:14 The mystery of human existence lies not in just staying alive

00:01:17 but in finding something to live for.

00:01:20 Lastly, I believe that every failure rewards us

00:01:23 with an opportunity to learn

00:01:26 and in that sense, I’ve been very fortunate

00:01:28 to fail in so many new and exciting ways

00:01:31 and this conversation was no different.

00:01:34 I’ve learned about something called

00:01:36 radio frequency interference, RFI, look it up.

00:01:40 Apparently, music and conversations

00:01:42 from local radio stations can bleed into the audio

00:01:45 that you’re recording in such a way

00:01:47 that it almost completely ruins that audio.

00:01:49 It’s an exceptionally difficult sound source to remove.

00:01:53 So, I’ve gotten the opportunity to learn

00:01:55 how to avoid RFI in the future during recording sessions.

00:02:00 I’ve also gotten the opportunity to learn

00:02:02 how to use Adobe Audition and iZotope RX 6

00:02:06 to do some noise, some audio repair.

00:02:11 Of course, this is an exceptionally difficult noise

00:02:14 to remove.

00:02:15 I am an engineer.

00:02:16 I’m not an audio engineer.

00:02:18 Neither is anybody else in our group

00:02:20 but we did our best.

00:02:21 Nevertheless, I thank you for your patience

00:02:25 and I hope you’re still able to enjoy this conversation.

00:02:27 Do you think there’s intelligent life

00:02:29 out there in the universe?

00:02:31 Let’s open up with an easy question.

00:02:33 I have a minority view here actually.

00:02:36 When I give public lectures, I often ask for a show of hands

00:02:39 who thinks there’s intelligent life out there somewhere else

00:02:42 and almost everyone puts their hands up

00:02:45 and when I ask why, they’ll be like,

00:02:47 oh, there’s so many galaxies out there, there’s gotta be.

00:02:51 But I’m a numbers nerd, right?

00:02:54 So when you look more carefully at it,

00:02:56 it’s not so clear at all.

00:02:59 When we talk about our universe, first of all,

00:03:00 we don’t mean all of space.

00:03:03 We actually mean, I don’t know,

00:03:04 you can throw me the universe if you want,

00:03:05 it’s behind you there.

00:03:07 It’s, we simply mean the spherical region of space

00:03:11 from which light has had time to reach us so far

00:03:15 during the 13.8 billion years

00:03:17 since our Big Bang.

00:03:19 There’s more space here but this is what we call a universe

00:03:22 because that’s all we have access to.

00:03:24 So is there intelligent life here

00:03:25 that’s gotten to the point of building telescopes

00:03:28 and computers?

00:03:31 My guess is no, actually.

00:03:34 The probability of it happening on any given planet

00:03:39 is some number we don’t know what it is.

00:03:42 And what we do know is that the number can’t be super high

00:03:48 because there’s over a billion Earth like planets

00:03:50 in the Milky Way galaxy alone,

00:03:52 many of which are billions of years older than Earth.

00:03:56 And aside from some UFO believers,

00:04:00 there isn’t much evidence

00:04:01 that any alien civilization has come here at all.

00:04:05 And so that’s the famous Fermi paradox, right?

00:04:08 And then if you work the numbers,

00:04:10 what you find is that if you have no clue

00:04:13 what the probability is of getting life on a given planet,

00:04:16 so it could be 10 to the minus 10, 10 to the minus 20,

00:04:19 or 10 to the minus two, or any power of 10

00:04:22 is sort of equally likely

00:04:23 if you wanna be really open minded,

00:04:25 that translates into it being equally likely

00:04:27 that our nearest neighbor is 10 to the 16 meters away,

00:04:31 10 to the 17 meters away, 10 to the 18.

00:04:35 By the time you get much less than 10 to the 16 already,

00:04:41 we pretty much know there is nothing else that close.

00:04:45 And when you get beyond 10.

00:04:47 Because they would have discovered us.

00:04:48 Yeah, they would have discovered us long ago,

00:04:50 or if they’re really close,

00:04:51 we would have probably noted some engineering projects

00:04:53 that they’re doing.

00:04:54 And if it’s beyond 10 to the 26 meters,

00:04:57 that’s already outside of here.

00:05:00 So my guess is actually that we are the only life in here

00:05:05 that’s gotten to the point of building advanced tech,

00:05:09 which I think

00:05:12 puts a lot of responsibility on our shoulders not to screw up.

00:05:15 I think people who take for granted

00:05:17 that it’s okay for us to screw up,

00:05:20 have an accidental nuclear war or go extinct somehow

00:05:22 because there’s a sort of Star Trek like situation out there

00:05:25 where some other life forms are gonna come and bail us out

00:05:28 and it doesn’t matter as much.

00:05:30 I think they’re lulling us into a false sense of security.

00:05:33 I think it’s much more prudent to say,

00:05:35 let’s be really grateful

00:05:36 for this amazing opportunity we’ve had

00:05:38 and make the best of it just in case it is down to us.
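
A minimal sketch of the order-of-magnitude argument above, assuming, as a simplification of what Tegmark describes, that every power of ten is roughly equally likely for the distance to our nearest technological neighbor. The 10^16 meter cutoff reflects his point that anything closer would already have been noticed; the 10^40 meter upper bound is an arbitrary stand-in for "being really open-minded," not a figure from the conversation.

```python
# A toy version of the argument, not Tegmark's actual calculation.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bounds of ignorance (both are assumptions for illustration):
# 10^16 m: anything closer would already have been noticed.
# 10^40 m: an arbitrary "really open-minded" upper bound.
log10_distance_m = rng.uniform(16, 40, size=1_000_000)

# Roughly the radius of the region light has had time to cross, ~10^26 m.
LOG10_OBSERVABLE_UNIVERSE_M = 26

# Fraction of the prior in which the nearest technological neighbor lies
# beyond the edge of our observable universe.
alone = np.mean(log10_distance_m > LOG10_OBSERVABLE_UNIVERSE_M)
print(f"P(nearest tech civilization is outside our universe) ~ {alone:.2f}")
```

Under these made-up bounds the answer is simply (40 - 26) / (40 - 16), roughly 0.6, which gives the flavor of why "we are probably alone in our universe" is not a crazy guess.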

00:05:44 So from a physics perspective,

00:05:45 do you think intelligent life,

00:05:48 so it’s unique from a sort of statistical view

00:05:51 of the size of the universe,

00:05:52 but from the basic matter of the universe,

00:05:55 how difficult is it for intelligent life to come about?

00:05:59 The kind of advanced tech building life,

00:06:03 is it implied in your statement that it’s really difficult

00:06:05 to create something like a human species?

00:06:07 Well, I think what we know is that going from no life

00:06:11 to having life that can do our level of tech,

00:06:15 and then going beyond that

00:06:18 to actually settling our whole universe with life,

00:06:22 there’s some major roadblock there,

00:06:26 which is some great filter, as it’s sometimes called,

00:06:30 which is tough to get through.

00:06:33 That roadblock is either behind us

00:06:37 or in front of us.

00:06:38 I’m hoping very much that it’s behind us.

00:06:41 I’m super excited every time we get a new report from NASA

00:06:45 saying they failed to find any life on Mars.

00:06:48 I’m like, yes, awesome.

00:06:50 Because that suggests that the hard part,

00:06:51 maybe it was getting the first ribosome

00:06:54 or some very low level kind of stepping stone

00:06:59 so that we’re home free.

00:07:00 Because if that’s true,

00:07:01 then the future is really only limited

00:07:03 by our own imagination.

00:07:05 It would be much suckier if it turns out

00:07:07 that this level of life is kind of a dime a dozen,

00:07:11 but maybe there’s some other problem.

00:07:12 Like as soon as a civilization gets advanced technology,

00:07:16 within a hundred years,

00:07:17 they get into some stupid fight with themselves and poof.

00:07:20 That would be a bummer.

00:07:21 Yeah, so you’ve explored the mysteries of the universe,

00:07:26 the cosmological universe, the one that’s sitting

00:07:29 between us today.

00:07:31 I think you’ve also begun to explore the other universe,

00:07:35 which is sort of the mystery,

00:07:38 the mysterious universe of the mind of intelligence,

00:07:40 of intelligent life.

00:07:42 So is there a common thread between your interest

00:07:45 or the way you think about space and intelligence?

00:07:48 Oh yeah, when I was a teenager,

00:07:53 I was already very fascinated by the biggest questions.

00:07:57 And I felt that the two biggest mysteries of all in science

00:08:00 were our universe out there and our universe in here.

00:08:05 So it’s quite natural after having spent

00:08:08 a quarter of a century of my career,

00:08:11 thinking a lot about this one,

00:08:12 that I’m now indulging in the luxury

00:08:14 of doing research on this one.

00:08:15 It’s just so cool.

00:08:17 I feel the time is ripe now

00:08:20 for greatly deepening our understanding of this.

00:08:25 Just start exploring this one.

00:08:26 Yeah, because I think a lot of people view intelligence

00:08:29 as something mysterious that can only exist

00:08:33 in biological organisms like us,

00:08:36 and therefore dismiss all talk

00:08:37 about artificial general intelligence as science fiction.

00:08:41 But from my perspective as a physicist,

00:08:43 I am a blob of quarks and electrons

00:08:46 moving around in a certain pattern

00:08:48 and processing information in certain ways.

00:08:50 And this is also a blob of quarks and electrons.

00:08:53 It’s not that I’m smarter than the water bottle

00:08:55 because I’m made of different kinds of quarks.

00:08:57 I’m made of up quarks and down quarks,

00:08:59 exact same kind as this.

00:09:01 There’s no secret sauce, I think, in me.

00:09:05 It’s all about the pattern of the information processing.

00:09:08 And this means that there’s no law of physics

00:09:12 saying that we can’t create technology,

00:09:15 which can help us by being incredibly intelligent

00:09:19 and help us crack mysteries that we couldn’t.

00:09:21 In other words, I think we’ve really only seen

00:09:23 the tip of the intelligence iceberg so far.

00:09:26 Yeah, so the perceptronium.

00:09:29 Yeah.

00:09:31 So you coined this amazing term.

00:09:33 It’s a hypothetical state of matter,

00:09:35 sort of thinking from a physics perspective,

00:09:38 what is the kind of matter that can help,

00:09:40 as you’re saying, subjective experience emerge,

00:09:42 consciousness emerge.

00:09:44 So how do you think about consciousness

00:09:46 from this physics perspective?

00:09:49 Very good question.

00:09:50 So again, I think many people have underestimated

00:09:55 our ability to make progress on this

00:09:59 by convincing themselves it’s hopeless

00:10:01 because somehow we’re missing some ingredient that we need.

00:10:05 There’s some new consciousness particle or whatever.

00:10:09 I happen to think that we’re not missing anything

00:10:12 and that the interesting thing

00:10:16 about consciousness, what gives us

00:10:18 this amazing subjective experience of colors

00:10:21 and sounds and emotions,

00:10:23 is rather something at the higher level

00:10:26 about the patterns of information processing.

00:10:28 And that’s why I like to think about this idea

00:10:33 of perceptronium.

00:10:34 What does it mean for an arbitrary physical system

00:10:36 to be conscious in terms of what its particles are doing

00:10:41 or its information is doing?

00:10:43 I don’t think, I hate carbon chauvinism,

00:10:46 this attitude you have to be made of carbon atoms

00:10:47 to be smart or conscious.

00:10:50 There’s something about the information processing

00:10:53 that this kind of matter performs.

00:10:55 Yeah, and you can see I have my favorite equations here

00:10:57 describing various fundamental aspects of the world.

00:11:00 I feel that I think one day,

00:11:02 maybe someone who’s watching this will come up

00:11:04 with the equations that information processing

00:11:07 has to satisfy to be conscious.

00:11:08 I’m quite convinced there is a big discovery

00:11:11 to be made there because let’s face it,

00:11:15 we know that so many things are made up of information.

00:11:18 We know that some information processing is conscious

00:11:21 because we are conscious.

00:11:25 But we also know that a lot of information processing

00:11:27 is not conscious.

00:11:28 Like most of the information processing happening

00:11:30 in your brain right now is not conscious.

00:11:32 There are like 10 megabytes per second coming in

00:11:36 even just through your visual system.

00:11:38 You’re not conscious about your heartbeat regulation

00:11:40 or most things.

00:11:42 Even if I just ask you to like read what it says here,

00:11:45 you look at it and then, oh, now you know what it said.

00:11:48 But you’re not aware of how the computation actually happened.

00:11:51 Your consciousness is like the CEO

00:11:53 that got an email at the end with the final answer.

00:11:56 So what is it that makes a difference?

00:12:01 I think that’s both a great science mystery.

00:12:05 We’re actually studying it a little bit in my lab here

00:12:07 at MIT, but I also think it’s just a really urgent question

00:12:10 to answer.

00:12:12 For starters, I mean, if you’re an emergency room doctor

00:12:14 and you have an unresponsive patient coming in,

00:12:17 wouldn’t it be great if in addition to having

00:12:22 a CT scanner, you had a consciousness scanner

00:12:25 that could figure out whether this person

00:12:27 is actually having locked in syndrome

00:12:30 or is actually comatose.

00:12:33 And in the future, imagine if we build robots

00:12:37 or machines that we can have really good conversations

00:12:41 with, which I think is very likely to happen.

00:12:44 Wouldn’t you want to know if your home helper robot

00:12:47 is actually experiencing anything or just like a zombie,

00:12:51 I mean, would you prefer it?

00:12:53 What would you prefer?

00:12:54 Would you prefer that it’s actually unconscious

00:12:56 so that you don’t have to feel guilty about switching it off

00:12:58 or giving it boring chores, or what would you prefer?

00:13:02 Well, certainly we would prefer,

00:13:06 I would prefer the appearance of consciousness.

00:13:08 But the question is whether the appearance of consciousness

00:13:11 is different than consciousness itself.

00:13:15 And sort of to ask that as a question,

00:13:18 do you think we need to understand what consciousness is,

00:13:21 solve the hard problem of consciousness

00:13:23 in order to build something like an AGI system?

00:13:28 No, I don’t think that.

00:13:30 And I think we will probably be able to build things

00:13:34 even if we don’t answer that question.

00:13:36 But if we want to make sure that what happens

00:13:37 is a good thing, we better solve it first.

00:13:40 So it’s a wonderful controversy you’re raising there

00:13:44 where you have basically three points of view

00:13:47 about the hard problem.

00:13:48 Two of those points of view

00:13:52 both conclude that the hard problem of consciousness

00:13:55 is BS.

00:13:56 On one hand, you have some people like Daniel Dennett

00:13:59 who say that consciousness is just BS

00:14:01 because consciousness is the same thing as intelligence.

00:14:05 There’s no difference.

00:14:06 So anything which acts conscious is conscious,

00:14:11 just like we are.

00:14:13 And then there are also a lot of people,

00:14:15 including many top AI researchers I know,

00:14:18 who say, oh, consciousness is just bullshit

00:14:19 because, of course, machines can never be conscious.

00:14:22 They’re always going to be zombies.

00:14:24 You never have to feel guilty about how you treat them.

00:14:27 And then there’s a third group of people,

00:14:30 including Giulio Tononi, for example,

00:14:34 and Christof Koch and a number of others.

00:14:37 I would put myself also in this middle camp

00:14:39 who say that actually some information processing

00:14:41 is conscious and some is not.

00:14:44 So let’s find the equation which can be used

00:14:46 to determine which it is.

00:14:49 And I think we’ve just been a little bit lazy,

00:14:52 kind of running away from this problem for a long time.

00:14:54 It’s been almost taboo to even mention the C word

00:14:57 in a lot of circles.

00:15:00 But we should stop making excuses.

00:15:03 This is a science question and there are ways

00:15:07 we can even test any theory that makes predictions for this.

00:15:11 And coming back to this helper robot,

00:15:13 I mean, so you said you’d want your helper robot

00:15:16 to certainly act conscious and treat you,

00:15:18 like have conversations with you and stuff.

00:15:20 I think so.

00:15:21 But wouldn’t you, would you feel,

00:15:22 would you feel a little bit creeped out

00:15:23 if you realized that it was just a glossed up tape recorder,

00:15:27 you know, that was just a zombie and was faking emotion?

00:15:31 Would you prefer that it actually had an experience

00:15:34 or would you prefer that it’s actually

00:15:37 not experiencing anything so you feel,

00:15:39 you don’t have to feel guilty about what you do to it?

00:15:42 It’s such a difficult question because, you know,

00:15:45 it’s like when you’re in a relationship and you say,

00:15:47 well, I love you.

00:15:48 And the other person said, I love you back.

00:15:49 It’s like asking, well, do they really love you back

00:15:52 or are they just saying they love you back?

00:15:55 Don’t you really want them to actually love you?

00:15:58 It’s hard to, it’s hard to really know the difference

00:16:03 between everything seeming like there’s consciousness

00:16:09 present, there’s intelligence present,

00:16:10 there’s affection, passion, love,

00:16:13 and it actually being there.

00:16:16 I’m not sure, do you have?

00:16:17 But like, can I ask you a question about this?

00:16:19 Like to make it a bit more pointed.

00:16:20 So Mass General Hospital is right across the river, right?

00:16:22 Yes.

00:16:23 Suppose you’re going in for a medical procedure

00:16:26 and they’re like, you know, for anesthesia,

00:16:29 what we’re going to do is we’re going to give you

00:16:31 muscle relaxants so you won’t be able to move

00:16:33 and you’re going to feel excruciating pain

00:16:35 during the whole surgery,

00:16:35 but you won’t be able to do anything about it.

00:16:37 But then we’re going to give you this drug

00:16:39 that erases your memory of it.

00:16:41 Would you be cool about that?

00:16:44 What’s the difference that you’re conscious about it

00:16:48 or not if there’s no behavioral change, right?

00:16:51 Right, that’s a really, that’s a really clear way to put it.

00:16:54 That’s, yeah, it feels like in that sense,

00:16:57 experiencing it is a valuable quality.

00:17:01 So actually being able to have subjective experiences,

00:17:05 at least in that case, is valuable.

00:17:09 And I think we humans have a little bit

00:17:11 of a bad track record also of making

00:17:13 these self serving arguments

00:17:15 that other entities aren’t conscious.

00:17:18 You know, people often say,

00:17:19 oh, these animals can’t feel pain.

00:17:21 It’s okay to boil lobsters because we ask them

00:17:24 if it hurt and they didn’t say anything.

00:17:25 And now there was just a paper out saying,

00:17:27 lobsters do feel pain when you boil them

00:17:29 and they’re banning it in Switzerland.

00:17:31 And we did this with slaves too often and said,

00:17:33 oh, they don’t mind.

00:17:36 Maybe they aren’t conscious

00:17:39 or women don’t have souls or whatever.

00:17:41 So I’m a little bit nervous when I hear people

00:17:43 just take as an axiom that machines

00:17:46 can’t have experience ever.

00:17:48 I think this is just a really fascinating science question

00:17:51 is what it is.

00:17:52 Let’s research it and try to figure out

00:17:54 what it is that makes the difference

00:17:56 between unconscious intelligent behavior

00:17:58 and conscious intelligent behavior.

00:18:01 So in terms of, so if you think of a Boston Dynamics

00:18:04 humanoid robot being sort of pushed around

00:18:07 with a broom, it starts pushing

00:18:11 on the consciousness question.

00:18:13 So let me ask, do you think an AGI system

00:18:17 like a few neuroscientists believe

00:18:19 needs to have a physical embodiment?

00:18:22 Needs to have a body or something like a body?

00:18:25 No, I don’t think so.

00:18:28 You mean to have a conscious experience?

00:18:30 To have consciousness.

00:18:33 I do think it helps a lot to have a physical embodiment

00:18:36 to learn the kind of things about the world

00:18:38 that are important to us humans, for sure.

00:18:42 But I don’t think the physical embodiment

00:18:45 is necessary after you’ve learned it

00:18:47 to just have the experience.

00:18:48 Think about when you’re dreaming, right?

00:18:51 Your eyes are closed.

00:18:52 You’re not getting any sensory input.

00:18:54 You’re not behaving or moving in any way

00:18:55 but there’s still an experience there, right?

00:18:59 And so clearly the experience that you have

00:19:01 when you see something cool in your dreams

00:19:03 isn’t coming from your eyes.

00:19:04 It’s just the information processing itself in your brain

00:19:08 which is that experience, right?

00:19:10 But let me put it another way, I’ll say,

00:19:13 because it comes from neuroscience,

00:19:15 the reason you want to have a body and a physical,

00:19:18 something like a physical, you know, a physical system,

00:19:23 is because you want to be able to preserve something.

00:19:27 In order to have a self, you could argue,

00:19:30 would you need to have some kind of embodiment of self

00:19:36 to want to preserve?

00:19:38 Well, now we’re getting a little bit anthropomorphic

00:19:42 into anthropomorphizing things.

00:19:45 Maybe talking about self preservation instincts.

00:19:47 I mean, we are evolved organisms, right?

00:19:50 So Darwinian evolution endowed us

00:19:53 and other evolved organism with a self preservation instinct

00:19:57 because those that didn’t have those self preservation genes

00:20:00 got cleaned out of the gene pool, right?

00:20:02 But if you build an artificial general intelligence

00:20:06 the mind space that you can design is much, much larger

00:20:10 than just a specific subset of minds that can evolve.

00:20:14 So an AGI mind doesn’t necessarily have

00:20:17 to have any self preservation instinct.

00:20:19 It also doesn’t necessarily have to be

00:20:21 so individualistic as us.

00:20:24 Like, imagine if you could just, first of all,

00:20:26 or we are also very afraid of death.

00:20:27 You know, I suppose you could back yourself up

00:20:29 every five minutes and then your airplane

00:20:32 is about to crash.

00:20:32 You’re like, shucks, I’m gonna lose the last five minutes

00:20:36 of experiences since my last cloud backup, dang.

00:20:39 You know, it’s not as big a deal.

00:20:41 Or if we could just copy experiences between our minds

00:20:45 easily, which we could easily do

00:20:47 if we were silicon based, right?

00:20:50 Then maybe we would feel a little bit more

00:20:54 like a hive mind actually.

00:20:56 So I don’t think we should take for granted at all

00:20:59 that AGI will have to have any of those sort of

00:21:04 competitive alpha male instincts.

00:21:07 On the other hand, you know, this is really interesting

00:21:10 because I think some people go too far and say,

00:21:13 of course we don’t have to have any concerns either

00:21:16 that advanced AI will have those instincts

00:21:20 because we can build anything we want.

00:21:22 There’s a very nice set of arguments going back

00:21:26 to Steve Omohundro and Nick Bostrom and others

00:21:28 just pointing out that when we build machines,

00:21:32 we normally build them with some kind of goal, you know,

00:21:34 win this chess game, drive this car safely or whatever.

00:21:38 And as soon as you put a goal into a machine,

00:21:40 especially if it’s a kind of open ended goal

00:21:42 and the machine is very intelligent,

00:21:44 it’ll break that down into a bunch of sub goals.

00:21:48 And one of those goals will almost always

00:21:51 be self preservation because if it breaks or dies

00:21:54 in the process, it’s not gonna accomplish the goal, right?

00:21:56 Like suppose you just build a little,

00:21:58 you have a little robot and you tell it to go down

00:22:01 to the store here and get you some food

00:22:04 and cook you an Italian dinner, you know,

00:22:06 and then someone mugs it and tries to break it

00:22:08 on the way.

00:22:09 That robot has an incentive to not get destroyed

00:22:12 and defend itself or run away,

00:22:14 because otherwise it’s gonna fail in cooking your dinner.

00:22:17 It’s not afraid of death,

00:22:19 but it really wants to complete the dinner cooking goal.

00:22:22 So it will have a self preservation instinct.

00:22:25 Continue being a functional agent somehow.

00:22:27 And similarly, if you give any kind of more ambitious goal

00:22:33 to an AGI, it’s very likely it will wanna acquire

00:22:37 more resources so it can do that better.

00:22:39 And it’s exactly from those sort of sub goals

00:22:42 that we might not have intended

00:22:43 that some of the concerns about AGI safety come.

00:22:47 You give it some goal that seems completely harmless.

00:22:50 And then before you realize it,

00:22:53 it’s also trying to do these other things

00:22:55 which you didn’t want it to do.

00:22:56 And it’s maybe smarter than us.

00:22:59 So it’s fascinating.

00:23:01 And let me pause just because I, in a very kind

00:23:05 of human centric way, see fear of death

00:23:08 as a valuable motivator.

00:23:11 So you don’t think, you think that’s an artifact

00:23:16 of evolution, so that’s the kind of mind space

00:23:19 evolution created that we’re sort of almost obsessed

00:23:22 about self preservation, some kind of genetic flow.

00:23:24 You don’t think that’s necessary to be afraid of death.

00:23:29 So not just a kind of sub goal of self preservation

00:23:32 just so you can keep doing the thing,

00:23:34 but more fundamentally sort of have the finite thing

00:23:38 like this ends for you at some point.

00:23:43 Interesting.

00:23:44 Do I think it’s necessary for what precisely?

00:23:47 For intelligence, but also for consciousness.

00:23:50 So for those, for both, do you think really

00:23:55 like a finite death and the fear of it is important?

00:23:59 So before I can answer, before we can agree

00:24:05 on whether it’s necessary for intelligence

00:24:06 or for consciousness, we should be clear

00:24:08 on how we define those two words.

00:24:09 Cause a lot of really smart people define them

00:24:11 in very different ways.

00:24:13 I was on this panel with AI experts

00:24:17 and they couldn’t agree on how to define intelligence even.

00:24:20 So I define intelligence simply

00:24:22 as the ability to accomplish complex goals.

00:24:25 I like your broad definition, because again

00:24:27 I don’t want to be a carbon chauvinist.

00:24:29 Right.

00:24:30 And in that case, no, certainly

00:24:34 it doesn’t require fear of death.

00:24:36 I would say AlphaGo, AlphaZero is quite intelligent.

00:24:40 I don’t think AlphaZero has any fear of being turned off

00:24:43 because it doesn’t understand the concept of it even.

00:24:46 And similarly consciousness.

00:24:48 I mean, you could certainly imagine a very simple

00:24:52 kind of experience.

00:24:53 If certain plants have any kind of experience

00:24:57 I don’t think they’re very afraid of dying

00:24:58 or there’s nothing they can do about it anyway much.

00:25:00 So there wasn’t that much value in it, but more seriously

00:25:04 I think if you ask, not just about being conscious

00:25:09 but maybe having what we might call

00:25:14 an exciting life where you feel passion

00:25:16 and really appreciate the things.

00:25:21 Maybe somehow, perhaps it does help

00:25:24 having a backdrop of, hey, it’s finite.

00:25:27 You know, let’s make the most of this, let’s live to the fullest.

00:25:31 So if you knew you were going to live forever

00:25:34 do you think you would change your?

00:25:37 Yeah, I mean, in some perspective

00:25:39 it would be an incredibly boring life living forever.

00:25:43 So in the sort of loose subjective terms that you said

00:25:47 of something exciting and something in this

00:25:50 that other humans would understand, I think is, yeah

00:25:53 it seems that the finiteness of it is important.

00:25:57 Well, the good news I have for you then is

00:25:59 based on what we understand about cosmology

00:26:02 everything in our universe is probably

00:26:05 ultimately finite, although.

00:26:07 Big crunch or big, what’s the, the infinite expansion.

00:26:11 Yeah, we could have a Big Chill or a Big Crunch

00:26:13 or a Big Rip or the Big Snap or death bubbles.

00:26:18 All of them are more than a billion years away.

00:26:20 So we should, we certainly have vastly more time

00:26:24 than our ancestors thought, but

00:26:29 it’s still pretty hard to squeeze in an infinite number

00:26:32 of compute cycles, even though there are some loopholes

00:26:36 that just might be possible.

00:26:37 But I think, you know, some people like to say

00:26:41 that you should live as if

00:26:44 you’re going to die in five years or so.

00:26:46 And that’s sort of optimal.

00:26:47 Maybe it’s a good assumption.

00:26:50 We should build our civilization as if it’s all finite

00:26:54 to be on the safe side.

00:26:55 Right, exactly.

00:26:56 So you mentioned defining intelligence

00:26:59 as the ability to accomplish complex goals.

00:27:02 Where would you draw a line or how would you try

00:27:05 to define human level intelligence

00:27:08 and superhuman level intelligence?

00:27:10 Where is consciousness part of that definition?

00:27:13 No, consciousness does not come into this definition.

00:27:16 So, so I think of intelligence as it’s a spectrum

00:27:20 but there are very many different kinds of goals

00:27:21 you can have.

00:27:22 You can have a goal to be a good chess player,

00:27:24 a good Go player, a good car driver, a good investor,

00:27:28 a good poet, et cetera.

00:27:31 So intelligence, by its very nature,

00:27:34 isn’t something you can measure by this one number

00:27:36 or some overall goodness.

00:27:37 No, no.

00:27:38 There are some people who are better at this.

00:27:40 Some people are better at that.

00:27:42 Right now we have machines that are much better than us

00:27:45 at some very narrow tasks like multiplying large numbers

00:27:49 fast, memorizing large databases, playing chess

00:27:53 playing Go and soon driving cars.

00:27:57 But there’s still no machine that can match

00:28:00 a human child in general intelligence

00:28:02 but artificial general intelligence, AGI

00:28:05 the name of your course, of course

00:28:07 that is by its very definition, the quest

00:28:13 to build a machine that can do everything

00:28:16 as well as we can.

00:28:17 So the old Holy Grail of AI, going back to its inception

00:28:21 in the sixties, if that ever happens, of course

00:28:25 I think it’s going to be the biggest transition

00:28:27 in the history of life on earth

00:28:29 but the big impact doesn’t necessarily have to wait

00:28:33 until machines are better than us at knitting.

00:28:35 The really big change doesn’t come exactly

00:28:39 at the moment they’re better than us at everything.

00:28:41 The really big change comes, first,

00:28:44 there are big changes when they start becoming better

00:28:45 than us at doing most of the jobs that we do,

00:28:48 because that takes away much of the demand

00:28:51 for human labor.

00:28:53 And then the really whopping change comes

00:28:55 when they become better than us at AI research, right?

00:29:01 Because right now the timescale of AI research

00:29:03 is limited by the human research and development cycle

00:29:08 of years typically, you know

00:29:10 how long does it take from one release of some software

00:29:13 or iPhone or whatever to the next?

00:29:15 But once Google can replace 40,000 engineers

00:29:20 by 40,000 equivalent pieces of software or whatever

00:29:26 then there’s no reason that has to be years,

00:29:29 it can be in principle much faster

00:29:31 and the timescale of future progress in AI

00:29:36 and all of science and technology will be driven

00:29:39 by machines, not humans.

00:29:40 So it’s this simple point which gives rise to

00:29:46 this incredibly fun controversy

00:29:48 about whether there can be an intelligence explosion,

00:29:51 so called singularity, as Vernor Vinge called it.

00:29:54 Now the idea, as articulated by I.J. Good,

00:29:57 is obviously way back, fifties,

00:29:59 but you can see Alan Turing

00:30:01 and others thought about it even earlier.

00:30:06 So you asked me how exactly I would define

00:30:10 human level intelligence, yeah.

00:30:12 So the glib answer is to say something

00:30:15 which is better than us at all cognitive tasks,

00:30:18 better than any human at all cognitive tasks,

00:30:21 but the really interesting bar

00:30:23 I think goes a little bit lower than that actually.

00:30:25 It’s when they can, when they’re better than us

00:30:27 at AI programming and general learning

00:30:31 so that they can, if they want to, get better

00:30:35 than us at anything by just studying.

00:30:37 So better is a key word, and better is towards

00:30:40 this kind of spectrum of the complexity of goals

00:30:44 it’s able to accomplish.

00:30:45 So another way to, and that’s certainly

00:30:50 a very clear definition of human level intelligence.

00:30:53 So it’s almost like a sea that’s rising,

00:30:55 you can do more and more and more things.

00:30:56 It’s a graphic that you show,

00:30:58 it’s a really nice way to put it.

00:30:59 So there are some peaks,

00:31:01 and there’s an ocean level elevating

00:31:03 and you solve more and more problems

00:31:04 but just kind of to take a pause

00:31:07 and we took a bunch of questions

00:31:09 on a lot of social networks,

00:31:10 and a bunch of people asked

00:31:11 in a sort of slightly different direction

00:31:14 on creativity and things that perhaps aren’t a peak.

00:31:23 Human beings are flawed

00:31:24 and perhaps better means having contradictions,

00:31:28 being flawed in some way.

00:31:30 So let me sort of start easy, first of all.

00:31:34 So you have a lot of cool equations.

00:31:36 Let me ask, what’s your favorite equation, first of all?

00:31:39 I know they’re all like your children, but like

00:31:42 which one is that?

00:31:43 This is the Schrödinger equation.

00:31:45 It’s the master key of quantum mechanics

00:31:48 of the micro world.

00:31:49 So this equation can predict everything

00:31:52 to do with atoms, molecules and all the way up.

00:31:55 Right?
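
For reference, the equation being pointed to here, the time-dependent Schrödinger equation, in its standard form:

```latex
\[
  i\hbar \frac{\partial}{\partial t} \Psi(\mathbf{r}, t) = \hat{H}\, \Psi(\mathbf{r}, t)
\]
```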

00:31:58 Yeah, so, okay.

00:31:59 So quantum mechanics is certainly a beautiful

00:32:02 mysterious formulation of our world.

00:32:05 So I’d like to sort of ask you, just as an example

00:32:08 it perhaps doesn’t have the same beauty as physics does

00:32:12 but in abstract mathematics, Andrew Wiles,

00:32:16 who proved Fermat’s Last Theorem.

00:32:19 So I just saw this recently

00:32:22 and it kind of caught my eye a little bit.

00:32:24 This is 358 years after it was conjectured.

00:32:27 So this is very simple formulation.

00:32:29 Everybody tried to prove it, everybody failed.

00:32:32 And so here this guy comes along

00:32:34 and eventually proves it and then fails to prove it

00:32:38 and then proves it again in 94.

00:32:41 And about the moment when everything connected

00:32:43 into place, in an interview he said,

00:32:46 it was so indescribably beautiful.

00:32:47 That moment when you finally realize the connecting piece

00:32:51 of two conjectures.

00:32:52 He said, it was so indescribably beautiful.

00:32:55 It was so simple and so elegant.

00:32:57 I couldn’t understand how I’d missed it.

00:32:58 And I just stared at it in disbelief for 20 minutes.

00:33:02 Then during the day, I walked around the department

00:33:05 and I kept coming back to my desk

00:33:07 looking to see if it was still there.

00:33:09 It was still there.

00:33:10 I couldn’t contain myself.

00:33:11 I was so excited.

00:33:12 It was the most important moment of my working life.

00:33:15 Nothing I ever do again will mean as much.

00:33:18 So that particular moment.

00:33:20 And it kind of made me think of what would it take?

00:33:24 And I think we have all been there at small levels.

00:33:29 Maybe let me ask, have you had a moment like that

00:33:32 in your life where you just had an idea?

00:33:34 It’s like, wow, yes.

00:33:40 I wouldn’t mention myself in the same breath

00:33:42 as Andrew Wiles, but I’ve certainly had a number

00:33:44 of aha moments when I realized something very cool

00:33:52 about physics, which has completely made my head explode.

00:33:56 In fact, some of my favorite discoveries I made,

00:33:58 I later realized that they had been discovered earlier

00:34:01 by someone who sometimes got quite famous for it.

00:34:03 So it’s too late for me to even publish it,

00:34:05 but that doesn’t diminish in any way

00:34:07 the emotional experience you have when you realize it,

00:34:09 like, wow.

00:34:11 Yeah, so what would it take in that moment, that wow,

00:34:15 that was yours in that moment?

00:34:17 So what do you think it takes for an intelligence system,

00:34:21 an AGI system, an AI system to have a moment like that?

00:34:25 That’s a tricky question

00:34:26 because there are actually two parts to it, right?

00:34:29 One of them is, can it accomplish that proof?

00:34:33 Can it prove that you can never write A to the N

00:34:37 plus B to the N equals C to the N

00:34:42 for positive integers, et cetera, et cetera,

00:34:45 when N is bigger than two?
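
For reference, the statement Wiles proved, Fermat’s Last Theorem:

```latex
\[
  a^n + b^n = c^n \quad \text{has no solutions in positive integers } a, b, c \text{ for any integer } n > 2.
\]
```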

00:34:48 That’s simply a question about intelligence.

00:34:51 Can you build machines that are that intelligent?

00:34:54 And I think by the time we get a machine

00:34:57 that can independently come up with that level of proofs,

00:35:00 probably quite close to AGI.

00:35:03 The second question is a question about consciousness.

00:35:07 When will we, how likely is it that such a machine

00:35:11 will actually have any experience at all,

00:35:14 as opposed to just being like a zombie?

00:35:16 And would we expect it to have some sort of emotional response

00:35:20 to this or anything at all akin to human emotion

00:35:24 where when it accomplishes its machine goal,

00:35:28 it views it as somehow something very positive

00:35:31 and sublime and deeply meaningful?

00:35:39 I would certainly hope that if in the future

00:35:41 we do create machines that are our peers

00:35:45 or even our descendants, that I would certainly

00:35:50 hope that they do have this sublime appreciation of life.

00:35:55 In a way, my absolutely worst nightmare

00:35:58 would be that at some point in the future,

00:36:05 the distant future, maybe our cosmos

00:36:07 is teeming with all this post biological life doing

00:36:10 all the seemingly cool stuff.

00:36:12 And maybe the last humans, by the time

00:36:16 our species eventually fizzles out,

00:36:20 will be like, well, that’s OK because we’re

00:36:21 so proud of our descendants here.

00:36:23 And look what all the, my worst nightmare

00:36:26 is that we haven’t solved the consciousness problem.

00:36:30 And we haven’t realized that these are all the zombies.

00:36:32 They’re not aware of anything any more than a tape recorder

00:36:36 has any kind of experience.

00:36:37 So the whole thing has just become

00:36:40 a play for empty benches.

00:36:41 That would be the ultimate zombie apocalypse.

00:36:44 So I would much rather, in that case,

00:36:47 that we have these beings which can really

00:36:52 appreciate how amazing it is.

00:36:57 And in that picture, what would be the role of creativity?

00:37:01 A few people ask about creativity.

00:37:04 When you think about intelligence,

00:37:07 certainly the story you told at the beginning of your book

00:37:09 involved creating movies and so on, making money.

00:37:15 You can make a lot of money in our modern world

00:37:17 with music and movies.

00:37:18 So if you are an intelligent system,

00:37:20 you may want to get good at that.

00:37:22 But that’s not necessarily what I mean by creativity.

00:37:26 Is it important on that complex goals

00:37:29 where the sea is rising for there

00:37:31 to be something creative?

00:37:33 Or am I being very human centric and thinking creativity

00:37:37 somehow special relative to intelligence?

00:37:41 My hunch is that we should think of creativity simply

00:37:47 as an aspect of intelligence.

00:37:50 And we have to be very careful with human vanity.

00:37:57 We have this tendency to very often want

00:37:59 to say, as soon as machines can do something,

00:38:01 we try to diminish it and say, oh, but that’s

00:38:03 not real intelligence.

00:38:05 Isn’t it creative or this or that?

00:38:08 The other thing, if we ask ourselves

00:38:12 to write down a definition of what we actually mean

00:38:14 by being creative, what we mean by Andrew Wiles, what he did

00:38:18 there, for example, don’t we often mean that someone takes

00:38:21 a very unexpected leap?

00:38:26 It’s not like taking 573 and multiplying it

00:38:29 by 224 by just a set of straightforward cookbook

00:38:33 like rules, right?

00:38:36 You can maybe make a connection between two things

00:38:39 that people had never thought was connected or something

00:38:42 like that.

00:38:44 I think this is an aspect of intelligence.

00:38:47 And this is actually one of the most important aspects of it.

00:38:53 Maybe the reason we humans tend to be better at it

00:38:55 than traditional computers is because it’s

00:38:57 something that comes more naturally if you’re

00:38:59 a neural network than if you’re a traditional logic gate

00:39:04 based computer machine.

00:39:05 We physically have all these connections.

00:39:08 And you activate here, activate here, activate here.

00:39:13 Bing.

00:39:16 My hunch is that if we ever build a machine where you could

00:39:21 just give it the task, hey, you say, hey, I just realized

00:39:29 I want to travel around the world instead this month.

00:39:32 Can you teach my AGI course for me?

00:39:34 And it’s like, OK, I’ll do it.

00:39:35 And it does everything that you would have done

00:39:37 and improvises and stuff.

00:39:39 That would, in my mind, involve a lot of creativity.

00:39:43 Yeah, so it’s actually a beautiful way to put it.

00:39:45 I think we do try to grasp at the definition of intelligence

00:39:52 as everything we don’t understand how to build.

00:39:56 So we as humans try to find things

00:39:59 that we have and machines don’t have.

00:40:01 And maybe creativity is just one of the things, one

00:40:03 of the words we use to describe that.

00:40:05 That’s a really interesting way to put it.

00:40:07 I don’t think we need to be that defensive.

00:40:09 I don’t think anything good comes out of saying,

00:40:11 well, we’re somehow special, you know?

00:40:18 Contrariwise, there are many examples in history

00:40:21 of where trying to pretend that we’re somehow superior

00:40:27 to all other intelligent beings has led to pretty bad results,

00:40:33 right?

00:40:35 Nazi Germany, they said that they were somehow superior

00:40:38 to other people.

00:40:40 Today, we still do a lot of cruelty to animals

00:40:42 by saying that we’re so superior somehow,

00:40:44 and they can’t feel pain.

00:40:46 Slavery was justified by the same kind

00:40:48 of just really weak arguments.

00:40:52 And I don’t think if we actually go ahead and build

00:40:57 artificial general intelligence, it

00:40:59 can do things better than us, I don’t

00:41:01 think we should try to found our self worth on some sort

00:41:04 of bogus claims of superiority in terms

00:41:09 of our intelligence.

00:41:12 I think we should instead find our calling

00:41:18 and the meaning of life from the experiences that we have.

00:41:23 I can have very meaningful experiences

00:41:28 even if there are other people who are smarter than me.

00:41:32 When I go to a faculty meeting here,

00:41:34 and we talk about something, and then I certainly realize,

00:41:36 oh, boy, he has a Nobel Prize, he has a Nobel Prize,

00:41:39 he has a Nobel Prize, I don’t have one.

00:41:40 Does that make me enjoy life any less

00:41:43 or enjoy talking to those people less?

00:41:47 Of course not.

00:41:49 And the contrary, I feel very honored and privileged

00:41:54 to get to interact with other very intelligent beings that

00:41:58 are better than me at a lot of stuff.

00:42:00 So I don’t think there’s any reason why

00:42:02 we can’t have the same approach with intelligent machines.

00:42:06 That’s a really interesting.

00:42:07 So people don’t often think about that.

00:42:08 They think about when there’s going,

00:42:10 if there’s machines that are more intelligent,

00:42:13 you naturally think that that’s not

00:42:15 going to be a beneficial type of intelligence.

00:42:19 You don’t realize it could be like peers with Nobel prizes

00:42:23 that would be just fun to talk with,

00:42:25 and they might be clever about certain topics,

00:42:27 and you can have fun having a few drinks with them.

00:42:32 Well, also, another example we can all

00:42:35 relate to of why it doesn’t have to be a terrible thing

00:42:39 to be in the presence of people who are even smarter than us

00:42:42 all around is when you and I were both two years old,

00:42:45 I mean, our parents were much more intelligent than us,

00:42:48 right?

00:42:49 Worked out OK, because their goals

00:42:51 were aligned with our goals.

00:42:53 And that, I think, is really the number one key issue

00:42:58 we have to solve, the value alignment

00:43:02 problem, exactly.

00:43:03 Because people who see too many Hollywood movies

00:43:06 with lousy science fiction plot lines,

00:43:10 they worry about the wrong thing, right?

00:43:12 They worry about some machine suddenly turning evil.

00:43:16 It’s not malice that is the concern.

00:43:21 It’s competence.

00:43:22 By definition, intelligence makes you very competent.

00:43:27 If you have a more intelligent Go playing

00:43:31 computer playing against a less intelligent one,

00:43:33 when we define intelligence as the ability

00:43:36 to accomplish Go winning, it’s going

00:43:38 to be the more intelligent one that wins.

00:43:40 And if you have a human and then you

00:43:43 have an AGI that’s more intelligent in all ways

00:43:47 and they have different goals, guess who’s

00:43:49 going to get their way, right?

00:43:50 So I was just reading about this particular rhinoceros species

00:43:57 that was driven extinct just a few years ago.

00:43:59 And the bummer is, I was looking at this cute picture of a mommy

00:44:02 rhinoceros with its child.

00:44:05 And why did we humans drive it to extinction?

00:44:09 It wasn’t because we were evil rhino haters as a whole.

00:44:12 It was just because our goals weren’t aligned

00:44:14 with those of the rhinoceros.

00:44:16 And it didn’t work out so well for the rhinoceros

00:44:17 because we were more intelligent, right?

00:44:19 So I think it’s just so important

00:44:21 that if we ever do build AGI, before we unleash anything,

00:44:27 we have to make sure that it learns

00:44:31 to understand our goals, that it adopts our goals,

00:44:36 and that it retains those goals.

00:44:37 So the cool, interesting problem there

00:44:40 is us as human beings trying to formulate our values.

00:44:47 So you could think of the United States Constitution as a way

00:44:51 that people sat down, at the time a bunch of white men,

00:44:56 which is a good example, I should say.

00:44:59 They formulated the goals for this country.

00:45:01 And a lot of people agree that those goals actually

00:45:03 held up pretty well.

00:45:05 That’s an interesting formulation of values,

00:45:07 and it failed miserably in other ways.

00:45:09 So for the value alignment problem and the solution to it,

00:45:13 we have to be able to put on paper or in a program

00:45:19 human values.

00:45:20 How difficult do you think that is?

00:45:22 Very.

00:45:24 But it’s so important.

00:45:25 We really have to give it our best.

00:45:28 And it’s difficult for two separate reasons.

00:45:30 There’s the technical value alignment problem

00:45:33 of figuring out just how to make machines understand our goals,

00:45:39 adopt them, and retain them.

00:45:40 And then there’s the separate part of it,

00:45:43 the philosophical part.

00:45:44 Whose values anyway?

00:45:45 And since it’s not like we have any great consensus

00:45:48 on this planet on values, what mechanism should we

00:45:52 create then to aggregate and decide, OK,

00:45:54 what’s a good compromise?

00:45:56 That second discussion can’t just

00:45:58 be left to tech nerds like myself.

00:46:01 And if we refuse to talk about it and then AGI gets built,

00:46:05 who’s going to be actually making

00:46:07 the decision about whose values?

00:46:08 It’s going to be a bunch of dudes in some tech company.

00:46:12 And are they necessarily so representative of all

00:46:17 of humankind that we want to just entrust it to them?

00:46:19 Are they even uniquely qualified to speak

00:46:23 to future human happiness just because they’re

00:46:25 good at programming AI?

00:46:26 I’d much rather have this be a really inclusive conversation.

00:46:30 But do you think it’s possible?

00:46:32 So you create a beautiful vision that includes the diversity,

00:46:37 cultural diversity, and various perspectives on discussing

00:46:40 rights, freedoms, human dignity.

00:46:43 But how hard is it to come to that consensus?

00:46:46 Do you think it’s certainly a really important thing

00:46:50 that we should all try to do?

00:46:51 But do you think it’s feasible?

00:46:54 I think there’s no better way to guarantee failure than to

00:47:00 refuse to talk about it or refuse to try.

00:47:02 And I also think it’s a really bad strategy

00:47:05 to say, OK, let’s first have a discussion for a long time.

00:47:08 And then once we reach complete consensus,

00:47:11 then we’ll try to load it into some machine.

00:47:13 No, we shouldn’t let perfect be the enemy of good.

00:47:16 Instead, we should start with the kindergarten ethics

00:47:20 that pretty much everybody agrees on

00:47:22 and put that into machines now.

00:47:24 We’re not doing that even.

00:47:25 Look, anyone who builds a passenger aircraft

00:47:31 wants it to never under any circumstances

00:47:33 fly into a building or a mountain.

00:47:35 Yet the September 11 hijackers were able to do that.

00:47:38 And even more embarrassingly, Andreas Lubitz,

00:47:41 this depressed Germanwings pilot,

00:47:43 when he flew his passenger jet into the Alps killing over 100

00:47:47 people, he just told the autopilot to do it.

00:47:50 He told the freaking computer to change the altitude

00:47:53 to 100 meters.

00:47:55 And even though it had the GPS maps, everything,

00:47:58 the computer was like, OK.

00:48:00 So we should take those very basic values,

00:48:05 where the problem is not that we don’t agree.

00:48:08 The problem is just we’ve been too lazy

00:48:10 to try to put it into our machines

00:48:15 and make sure that from now on, airplanes,

00:48:16 which all have computers in them,

00:48:19 will just refuse to do something like that.

00:48:19 Go into safe mode, maybe lock the cockpit door,

00:48:22 go over to the nearest airport.
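
A minimal sketch of the kind of hardwired check being described. The clearance margin, terrain table, and function name below are invented for illustration, not any real avionics interface:

```python
# Illustrative only: refuse altitude commands that would put the aircraft
# below the terrain ahead, rather than the computer just saying "OK."

MIN_TERRAIN_CLEARANCE_M = 300  # assumed safety margin for this sketch

# Hypothetical onboard terrain map: (lat, lon) rounded to whole degrees,
# mapped to ground elevation in meters; the values are made up.
TERRAIN_ELEVATION_M = {
    (44, 7): 3000,   # mountainous terrain
    (48, 2): 100,    # flat terrain
}

def accept_altitude_command(target_altitude_m, lat, lon):
    """Accept the command only if it keeps safe clearance above terrain.
    Otherwise the autopilot should refuse and fall back to a safe mode:
    hold altitude, alert the crew, head for the nearest airport."""
    ground = TERRAIN_ELEVATION_M.get((round(lat), round(lon)), 0)
    return target_altitude_m >= ground + MIN_TERRAIN_CLEARANCE_M

print(accept_altitude_command(100, 44.3, 6.8))    # False: refuse the command
print(accept_altitude_command(11000, 44.3, 6.8))  # True: normal cruise altitude
```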

00:48:24 And there’s so much other technology in our world

00:48:28 as well now, where it’s really becoming quite timely

00:48:31 to put in some sort of very basic values like this.

00:48:34 Even in cars, we’ve had enough vehicle terrorism attacks

00:48:39 by now, where people have driven trucks and vans

00:48:42 into pedestrians, that it’s not at all a crazy idea

00:48:45 to just have that hardwired into the car.

00:48:48 Because yeah, there are a lot of,

00:48:50 there’s always going to be people who for some reason

00:48:52 want to harm others, but most of those people

00:48:54 don’t have the technical expertise to figure out

00:48:56 how to work around something like that.

00:48:58 So if the car just won’t do it, it helps.

00:49:01 So let’s start there.

00:49:02 So there’s a lot of, that’s a great point.

00:49:04 So not chasing perfect.

00:49:06 There’s a lot of things that most of the world agrees on.

00:49:10 Yeah, let’s start there.

00:49:11 Let’s start there.

00:49:12 And then once we start there,

00:49:14 we’ll also get into the habit of having

00:49:17 these kind of conversations about, okay,

00:49:18 what else should we put in here and have these discussions?

00:49:21 This should be a gradual process then.

00:49:23 Great, so, but that also means describing these things

00:49:28 and describing it to a machine.

00:49:31 So one thing, we had a few conversations

00:49:34 with Stephen Wolfram.

00:49:35 I’m not sure if you’re familiar with Stephen.

00:49:37 Oh yeah, I know him quite well.

00:49:38 So he is, he works with a bunch of things,

00:49:42 but cellular automata, these simple computable things,

00:49:46 these computation systems.

00:49:47 And he kind of mentioned that,

00:49:49 we probably already have within these systems

00:49:52 something that’s AGI,

00:49:56 meaning like we just don’t know it

00:49:58 because we can’t talk to it.

00:50:00 So if you give me this chance to try to at least

00:50:04 form a question out of this is,

00:50:07 I think it’s an interesting idea to think

00:50:10 that we can have intelligent systems,

00:50:12 but we don’t know how to describe something to them

00:50:15 and they can’t communicate with us.

00:50:17 I know you’re doing a little bit of work in explainable AI,

00:50:19 trying to get AI to explain itself.

00:50:22 So what are your thoughts of natural language processing

00:50:25 or some kind of other communication?

00:50:27 How does the AI explain something to us?

00:50:30 How do we explain something to it, to machines?

00:50:33 Or you think of it differently?

00:50:35 So there are two separate parts to your question there.

00:50:39 One of them has to do with communication,

00:50:42 which is super interesting, I’ll get to that in a sec.

00:50:44 The other is whether we already have AGI

00:50:47 but we just haven’t noticed it there.

00:50:49 Right.

00:50:51 There I beg to differ.

00:50:54 I don’t think there’s anything in any cellular automaton

00:50:56 or anything or the internet itself or whatever

00:50:59 that has artificial general intelligence

00:51:03 such that it can really do exactly everything

00:51:05 we humans can do, better.

00:51:07 I think the day that happens, when that happens,

00:51:11 we will very soon notice, we’ll probably notice even before

00:51:15 because in a very, very big way.

00:51:17 But for the second part, though.

00:51:18 Wait, can I ask, sorry.

00:51:20 So, because you have this beautiful way

00:51:24 of formulating consciousness as information processing,

00:51:30 and you can think of intelligence

00:51:31 as information processing,

00:51:32 and you can think of the entire universe

00:51:34 as these particles and these systems roaming around

00:51:38 that have this information processing power.

00:51:41 You don’t think there is something with the power

00:51:44 to process information in the way that we human beings do

00:51:49 that’s out there that needs to be sort of connected to.

00:51:55 It seems a little bit philosophical, perhaps,

00:51:57 but there’s something compelling to the idea

00:52:00 that the power is already there,

00:52:01 and that the focus should be more on being able

00:52:05 to communicate with it.

00:52:07 Well, I agree that in a certain sense,

00:52:11 the hardware processing power is already out there

00:52:15 because you can think of our universe itself

00:52:19 as being a computer already, right?

00:52:21 It's constantly computing

00:52:23 how to evolve the water waves in the River Charles

00:52:26 and how to move the air molecules around.

00:52:28 Seth Lloyd has pointed out, my colleague here,

00:52:30 that you can even in a very rigorous way

00:52:32 think of our entire universe as being a quantum computer.

00:52:35 It’s pretty clear that our universe

00:52:37 supports this amazing processing power

00:52:40 because even

00:52:42 within this physics computer that we live in,

00:52:44 we can build actual laptops and stuff,

00:52:47 so clearly the power is there.

00:52:49 It’s just that most of the compute power that nature has,

00:52:52 it’s, in my opinion, kind of wasting on boring stuff

00:52:54 like simulating yet another ocean wave somewhere

00:52:56 where no one is even looking, right?

00:52:58 So in a sense, what life does, what we are doing

00:53:00 when we build computers is we’re rechanneling

00:53:03 all this compute that nature is doing anyway

00:53:07 into doing things that are more interesting

00:53:09 than just yet another ocean wave,

00:53:11 and let’s do something cool here.

00:53:14 So the raw hardware power is there, for sure,

00:53:17 but then even just computing what’s going to happen

00:53:21 for the next five seconds in this water bottle,

00:53:23 takes a ridiculous amount of compute

00:53:26 if you do it on a human-built computer.

00:53:27 This water bottle just did it.

00:53:29 But that does not mean that this water bottle has AGI

00:53:34 because AGI means it should also be able to do things

00:53:37 like write my book or do this interview.

00:53:40 And I don’t think it’s just communication problems.

00:53:42 I don’t really think it can do it.

00:53:46 Although Buddhists say, when they watch the water,

00:53:49 that there is some beauty,

00:53:51 that there's some depth and beauty in nature

00:53:53 that they can communicate with.

00:53:54 Communication is also very important though

00:53:56 because I mean, look, part of my job is being a teacher.

00:54:01 And I know some very intelligent professors even

00:54:06 who just have a bit of a hard time communicating.

00:54:09 They come up with all these brilliant ideas,

00:54:12 but to communicate with somebody else,

00:54:14 you have to also be able to simulate their own mind.

00:54:16 Yes, empathy.

00:54:18 Build a good enough model of their mind

00:54:20 that you can say things that they will understand.

00:54:24 And that’s quite difficult.

00:54:26 And that’s why today it’s so frustrating

00:54:28 if you have a computer that makes some cancer diagnosis

00:54:32 and you ask it, well, why are you saying

00:54:34 I should have this surgery?

00:54:36 And if it can only reply,

00:54:37 I was trained on five terabytes of data

00:54:40 and this is my diagnosis, boop, boop, beep, beep.

00:54:45 It doesn’t really instill a lot of confidence, right?

00:54:49 So I think we have a lot of work to do

00:54:51 on communication there.

00:54:54 So, I think you're doing a little bit of work

00:54:58 in explainable AI.

00:54:59 What do you think are the most promising avenues?

00:55:01 Is it mostly about sort of the Alexa problem

00:55:05 of natural language processing of being able

00:55:07 to actually use human interpretable methods

00:55:11 of communication?

00:55:13 So being able to talk to a system and it talk back to you,

00:55:16 or is there some more fundamental problems to be solved?

00:55:18 I think it’s all of the above.

00:55:21 The natural language processing is obviously important,

00:55:23 but there are also more nerdy fundamental problems.

00:55:27 Like if you take, you play chess?

00:55:31 Of course, I’m Russian.

00:55:33 I have to.

00:55:33 You speak Russian?

00:55:34 Yes, I speak Russian.

00:55:35 Excellent, I didn’t know.

00:55:38 When did you learn Russian?

00:55:39 I speak very bad Russian, I’m only an autodidact,

00:55:41 but I bought a book, Teach Yourself Russian,

00:55:44 read a lot, but it was very difficult.

00:55:47 Wow.

00:55:48 That’s why I speak so bad.

00:55:49 How many languages do you know?

00:55:51 Wow, that’s really impressive.

00:55:53 I don’t know, my wife has some calculation,

00:55:56 but my point was, if you play chess,

00:55:58 have you looked at the AlphaZero games?

00:56:01 The actual games, no.

00:56:02 Check it out, some of them are just mind blowing,

00:56:06 really beautiful.

00:56:07 And if you ask, how did it do that?

00:56:13 You go talk to Demis Hassabis,

00:56:16 or others from DeepMind,

00:56:19 all they’ll ultimately be able to give you

00:56:20 is big tables of numbers, matrices,

00:56:23 that define the neural network.

00:56:25 And you can stare at these tables of numbers

00:56:28 till your face turns blue,

00:56:29 and you’re not gonna understand much

00:56:32 about why it made that move.

00:56:34 And even if you have natural language processing

00:56:37 that can tell you in human language about,

00:56:40 oh, five, seven, point two, eight,

00:56:42 it's still not gonna really help.
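
To make the "big tables of numbers" point concrete, here is a toy sketch. The weights are random stand-ins (nothing here comes from AlphaZero, and the shapes are invented); the point is that even a working policy is just matrices, and printing them explains nothing about a particular move.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 128))    # stand-ins for a trained network's weights
W2 = rng.normal(size=(128, 4096))  # (random here, since the point is the opacity)

def policy(board_features: np.ndarray) -> int:
    hidden = np.maximum(0.0, board_features @ W1)  # one simple neuron type: ReLU
    move_scores = hidden @ W2
    return int(np.argmax(move_scores))             # "play move 2731"... but why?

print(policy(rng.normal(size=64)))  # a move index pops out
print(W1[:2, :5])                   # staring at rows of numbers explains nothing
```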

00:56:43 So I think there’s a whole spectrum of fun challenges

00:56:47 that are involved in taking a computation

00:56:50 that does intelligent things

00:56:52 and transforming it into something equally good,

00:56:57 equally intelligent, but that’s more understandable.

00:57:01 And I think that’s really valuable

00:57:03 because I think as we put machines in charge

00:57:07 of ever more infrastructure in our world,

00:57:09 the power grid, the trading on the stock market,

00:57:12 weapon systems and so on,

00:57:14 it’s absolutely crucial that we can trust

00:57:17 these AIs to do all we want.

00:57:19 And trust really comes from understanding

00:57:22 in a very fundamental way.

00:57:24 And that’s why I’m working on this,

00:57:27 because I think,

00:57:29 if we're gonna have some hope of ensuring

00:57:31 that machines have adopted our goals

00:57:33 and that they’re gonna retain them,

00:57:35 that kind of trust, I think,

00:57:38 needs to be based on things you can actually understand,

00:57:41 preferably even prove theorems about.

00:57:44 Even with a self driving car, right?

00:57:47 If someone just tells you it’s been trained

00:57:48 on tons of data and it never crashed,

00:57:50 it’s less reassuring than if someone actually has a proof.

00:57:54 Maybe it’s a computer verified proof,

00:57:55 but still it says that under no circumstances

00:57:58 is this car just gonna swerve into oncoming traffic.
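
A minimal sketch of the difference between "it never crashed in training" and a checked property, assuming a deliberately tiny, made-up control law. Real verification would use model checking or theorem proving over the actual controller; exhaustive enumeration only works here because the toy state space is one-dimensional and discretized.

```python
def steering_controller(lane_offset_m: float) -> float:
    """Steer back toward the lane center, clamped to a small correction."""
    correction = -0.5 * lane_offset_m
    return max(-0.2, min(0.2, correction))  # hard clamp, in radians

def never_steers_into_oncoming(max_offset_m: float = 3.0, step: float = 0.01) -> bool:
    # Check the (discretized) property: when the car is already left of center
    # (negative offset, with oncoming traffic further left), the controller
    # never steers further left.
    offset = -max_offset_m
    while offset <= max_offset_m:
        if offset < 0 and steering_controller(offset) < 0:
            return False
        offset += step
    return True

assert never_steers_into_oncoming()  # holds for this toy control law
```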

00:58:02 And that kind of information helps to build trust

00:58:04 and helps build the alignment of goals,

00:58:09 at least awareness that your goals, your values are aligned.

00:58:12 And I think even in the very short term,

00:58:13 if you look at how, you know, today, right?

00:58:16 This absolutely pathetic state of cybersecurity

00:58:19 that we have. What is it,

00:58:21 three billion Yahoo accounts that were hacked,

00:58:27 almost every American's credit card, and so on?

00:58:32 Why is this happening?

00:58:34 It’s ultimately happening because we have software

00:58:37 that nobody fully understood how it worked.

00:58:41 That’s why the bugs hadn’t been found, right?

00:58:44 And I think AI can be used very effectively

00:58:47 for offense, for hacking,

00:58:49 but it can also be used for defense.

00:58:52 Hopefully automating verifiability

00:58:55 and creating systems that are built in different ways

00:59:00 so you can actually prove things about them.

00:59:02 And it’s important.

00:59:05 So speaking of software that nobody understands

00:59:07 how it works, of course, a bunch of people ask

00:59:10 about your paper, about your thoughts

00:59:12 of why does deep and cheap learning work so well?

00:59:14 That’s the paper.

00:59:15 But what are your thoughts on deep learning?

00:59:18 These kind of simplified models of our own brains

00:59:21 have been able to do some successful perception work,

00:59:26 pattern recognition work, and now with AlphaZero and so on,

00:59:29 do some clever things.

00:59:30 What are your thoughts about the promise and limitations

00:59:33 of this approach?

00:59:35 Great, I think there are a number of very important insights,

00:59:43 very important lessons we can already draw

00:59:44 from these kinds of successes.

00:59:47 One of them is when you look at the human brain,

00:59:48 you see it’s very complicated, 10th of 11 neurons,

00:59:51 and there are all these different kinds of neurons

00:59:53 and yada, yada, and there’s been this long debate

00:59:55 about whether the fact that we have dozens

00:59:57 of different kinds is actually necessary for intelligence.

01:00:01 We can now, I think, quite convincingly answer

01:00:03 that question with a no: it's enough to have just one kind.

01:00:07 If you look under the hood of AlphaZero,

01:00:09 there’s only one kind of neuron

01:00:11 and it’s ridiculously simple mathematical thing.

01:00:15 So it’s just like in physics,

01:00:17 it’s not, if you have a gas with waves in it,

01:00:20 it’s not the detailed nature of the molecule that matter,

01:00:24 it’s the collective behavior somehow.

01:00:26 Similarly, it’s this higher level structure

01:00:30 of the network that matters,

01:00:31 not that you have 20 kinds of neurons.

01:00:34 I think our brain is such a complicated mess

01:00:37 because it wasn’t evolved just to be intelligent,

01:00:41 it was evolved to also be self assembling

01:00:47 and self repairing, right?

01:00:48 And evolutionarily attainable.

01:00:51 And so on and so on.

01:00:53 So I think it’s pretty,

01:00:54 my hunch is that we’re going to understand

01:00:57 how to build AGI before we fully understand

01:00:59 how our brains work, just like we understood

01:01:02 how to build flying machines long before

01:01:05 we were able to build a mechanical bird.

01:01:07 Yeah, that’s right.

01:01:08 You’ve given the example exactly of mechanical birds

01:01:13 and airplanes and airplanes do a pretty good job

01:01:15 of flying without really mimicking bird flight.

01:01:18 And even now after 100 years later,

01:01:20 did you see the Ted talk with this German mechanical bird?

01:01:23 I heard you mention it.

01:01:25 Check it out, it’s amazing.

01:01:26 But even after that, right,

01:01:27 we still don’t fly in mechanical birds

01:01:29 because it turned out the way we came up with was simpler

01:01:32 and it’s better for our purposes.

01:01:33 And I think it might be the same there.

01:01:35 That’s one lesson.

01:01:37 And another lesson, it’s more what our paper was about.

01:01:42 First, as a physicist, I thought it was fascinating

01:01:45 how there’s a very close mathematical relationship

01:01:48 actually between our artificial neural networks

01:01:50 and a lot of things that we've studied in physics

01:01:54 that go by nerdy names like the renormalization group equation

01:01:57 and Hamiltonians and yada, yada, yada.

01:01:59 And when you look a little more closely at this,

01:02:10 at first I was like, well, there's something crazy here

01:02:12 that doesn’t make sense.

01:02:13 Because we know that even if you want to build

01:02:19 a super simple neural network to tell apart cat pictures

01:02:22 and dog pictures,

01:02:23 you can do that very, very well now.

01:02:25 But if you think about it a little bit,

01:02:27 you convince yourself it must be impossible

01:02:29 because if I have one megapixel,

01:02:31 even if each pixel is just black or white,

01:02:34 there’s two to the power of 1 million possible images,

01:02:36 which is way more than there are atoms in our universe,

01:02:38 right?

01:02:42 And then for each one of those,

01:02:43 I have to assign a number,

01:02:44 which is the probability that it's a dog.

01:02:47 So an arbitrary function of images

01:02:49 is a list of more numbers than there are atoms in our universe.

01:02:54 So clearly I can’t store that under the hood of my GPU

01:02:57 or my computer, yet somehow it works.
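
A quick back-of-the-envelope version of that counting argument, assuming one-megapixel black-and-white images and the commonly quoted ~10^80 estimate for atoms in the observable universe:

```python
from math import log10

n_pixels = 1_000_000
log10_images = n_pixels * log10(2)  # log10 of 2 ** 1_000_000
print(f"possible binary images ~ 10^{log10_images:.0f}")  # ~ 10^301030
print("atoms in observable universe ~ 10^80")
# A lookup table assigning a dog-probability to every possible image is
# therefore unimaginably larger than anything that could ever be stored.
```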

01:03:00 So what does that mean?

01:03:01 Well, it means that out of all of the problems

01:03:04 that you could try to solve with a neural network,

01:03:10 almost all of them are impossible to solve

01:03:12 with a reasonably sized one.

01:03:15 But then what we showed in our paper

01:03:17 was that the fraction of all the problems

01:03:23 that you could possibly pose

01:03:26 that we actually care about, given the laws of physics,

01:03:29 is also an infinitesimally tiny little part.

01:03:32 And amazingly, they’re basically the same part.

01:03:35 Yeah, it’s almost like our world was created for,

01:03:37 I mean, they kind of come together.

01:03:39 Yeah, well, you could say maybe that the world was created

01:03:42 for us, but I have a more modest interpretation,

01:03:48 which is that instead evolution endowed us

01:03:50 with neural networks precisely for that reason.

01:03:53 Because this particular architecture,

01:03:54 as opposed to the one in your laptop,

01:03:56 is very, very well adapted to solving the kind of problems

01:04:02 that nature kept presenting our ancestors with.

01:04:05 So it makes sense. Why do we have a brain

01:04:08 in the first place?

01:04:09 It’s to be able to make predictions about the future

01:04:11 and so on.

01:04:12 So if we had a sucky system, which could never solve it,

01:04:16 we wouldn’t have a world.

01:04:18 So this is, I think, a very beautiful fact.

01:04:23 Yeah.

01:04:24 We also realized that there's been earlier work

01:04:29 on why deeper networks are good,

01:04:32 but we were able to show an additional cool fact there,

01:04:34 which is that even incredibly simple problems,

01:04:38 like suppose I give you a thousand numbers

01:04:41 and ask you to multiply them together,

01:04:42 and you can write a few lines of code, boom, done, trivial.

01:04:46 If you just try to do that with a neural network

01:04:49 that has only one single hidden layer in it,

01:04:52 you can do it,

01:04:54 but you’re going to need two to the power of a thousand

01:04:57 neurons to multiply a thousand numbers,

01:05:00 which is, again, more neurons than there are atoms

01:05:02 in our universe.

01:05:04 That’s fascinating.

01:05:05 But if you allow yourself to make it a deep network

01:05:09 with many layers, you only need 4,000 neurons.

01:05:13 It’s perfectly feasible.

01:05:16 That’s really interesting.

01:05:17 Yeah.

01:05:18 So on another architecture type,

01:05:21 I mean, you mentioned Schrödinger's equation,

01:05:22 what are your thoughts about quantum computing

01:05:27 and the role of this kind of computational unit

01:05:32 in creating an intelligent system?

01:05:34 In some Hollywood movies that I will not mention by name

01:05:39 because I don’t want to spoil them.

01:05:41 The way they get AGI is building a quantum computer.

01:05:45 Because the word quantum sounds cool and so on.

01:05:47 That’s right.

01:05:50 First of all, I think we don’t need quantum computers

01:05:52 to build AGI.

01:05:54 I suspect your brain is not a quantum computer

01:05:59 in any profound sense.

01:06:01 So you don’t even wrote a paper about that

01:06:03 a lot many years ago.

01:06:04 I calculated the so called decoherence time,

01:06:08 how long it takes until the quantum computerness

01:06:10 of what your neurons are doing gets erased

01:06:15 by just random noise from the environment.

01:06:17 And it’s about 10 to the minus 21 seconds.

01:06:21 So as cool as it would be to have a quantum computer

01:06:24 in my head, I don’t think that fast.
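
Putting the two timescales side by side; the millisecond figure for neural dynamics is an assumed order of magnitude added for comparison, not a number from the conversation:

```python
decoherence_time_s = 1e-21  # the estimate discussed for neuronal degrees of freedom
neuron_timescale_s = 1e-3   # assumed: typical millisecond-scale neural dynamics

ratio = neuron_timescale_s / decoherence_time_s
print(f"coherence is lost ~{ratio:.0e} times faster than neurons compute")  # ~1e+18
```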

01:06:27 On the other hand,

01:06:28 there are very cool things you could do

01:06:33 with quantum computers.

01:06:35 Or I think we’ll be able to do soon

01:06:37 when we get bigger ones.

01:06:39 That might actually help machine learning

01:06:40 do even better than the brain.

01:06:43 So for example,

01:06:47 one, this is just a moonshot,

01:06:50 but learning is very much the same thing as search.

01:07:01 If you’re trying to train a neural network

01:07:03 to learn to do something really well,

01:07:06 you have some loss function,

01:07:07 you have a bunch of knobs you can turn,

01:07:10 represented by a bunch of numbers,

01:07:12 and you’re trying to tweak them

01:07:12 so that it becomes as good as possible at this thing.

01:07:15 So if you think of a landscape with some valley,

01:07:20 where each dimension of the landscape

01:07:22 corresponds to some number you can change,

01:07:24 you’re trying to find the minimum.

01:07:25 And it’s well known that

01:07:26 if you have a very high dimensional landscape,

01:07:29 complicated things, it’s super hard to find the minimum.

01:07:31 Quantum mechanics is amazingly good at this.

01:07:35 Like if I want to know what’s the lowest energy state

01:07:38 this water can possibly have,

01:07:41 incredibly hard to compute,

01:07:42 but nature will happily figure this out for you

01:07:45 if you just cool it down, make it very, very cold.

01:07:49 If you put a ball somewhere,

01:07:50 it’ll roll down to its minimum.

01:07:52 And this happens metaphorically

01:07:54 at the energy landscape too.

01:07:56 And quantum mechanics even uses some clever tricks,

01:07:59 which today’s machine learning systems don’t.

01:08:02 Like if you’re trying to find the minimum

01:08:04 and you get stuck in the little local minimum here,

01:08:06 in quantum mechanics you can actually tunnel

01:08:08 through the barrier and get unstuck again.
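
A toy illustration of the getting-stuck problem on a one-dimensional landscape. The function, starting point, and step size are all made up; plain gradient descent settles in the shallow valley and, unlike the tunneling trick just mentioned, has no way to cross the barrier to the deeper one.

```python
def loss(x):   # two valleys: a shallow one near x = -0.93, a deeper one near x = 1.06
    return x**4 - 2 * x**2 - 0.5 * x

def grad(x):
    return 4 * x**3 - 4 * x - 0.5

x = -1.0                        # start in the basin of the shallow valley
for _ in range(2000):
    x -= 0.01 * grad(x)         # small gradient steps
print(round(x, 3), round(loss(x), 3))  # stuck near the local, not global, minimum
```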

01:08:13 That’s really interesting.

01:08:14 Yeah, so it may be, for example,

01:08:16 that we’ll one day use quantum computers

01:08:19 that help train neural networks better.

01:08:22 That’s really interesting.

01:08:23 Okay, so as a component of kind of the learning process,

01:08:27 for example.

01:08:27 Yeah.

01:08:29 Let me ask sort of wrapping up here a little bit,

01:08:33 let me return to the questions of our human nature

01:08:36 and love, as I mentioned.

01:08:40 So do you think,

01:08:44 you mentioned sort of a helper robot,

01:08:46 but you could think of also personal robots.

01:08:48 Do you think the way we human beings fall in love

01:08:52 and get connected to each other

01:08:54 is possible to achieve in an AI system

01:08:58 in a human level AI intelligence system?

01:09:00 Do you think we would ever see that kind of connection?

01:09:03 Or, you know, in all this discussion

01:09:06 about solving complex goals,

01:09:08 is this kind of human social connection,

01:09:10 do you think that’s one of the goals

01:09:12 on the peaks and valleys with the rising sea levels

01:09:16 that we’ll be able to achieve?

01:09:17 Or do you think that’s something that’s ultimately,

01:09:20 or at least in the short term,

01:09:21 relative to the other goals is not achievable?

01:09:23 I think it’s all possible.

01:09:25 And I mean,

01:09:27 there’s a very wide range of guesses, as you know,

01:09:30 among AI researchers, when we’re going to get AGI.

01:09:35 Some people, you know, like our friend Rodney Brooks

01:09:37 says it’s going to be hundreds of years at least.

01:09:41 And then there are many others

01:09:42 who think it’s going to happen much sooner.

01:09:44 And in recent polls,

01:09:46 maybe half or so of AI researchers

01:09:48 think we’re going to get AGI within decades.

01:09:50 So if that happens, of course,

01:09:52 then I think these things are all possible.

01:09:55 But in terms of whether it will happen,

01:09:56 I think we shouldn’t spend so much time asking

01:10:00 what do we think will happen in the future?

01:10:03 As if we are just some sort of pathetic,

01:10:05 passive bystanders, you know,

01:10:07 waiting for the future to happen to us.

01:10:09 Hey, we’re the ones creating this future, right?

01:10:11 So we should be proactive about it

01:10:15 and ask ourselves what sort of future

01:10:16 we would like to have happen,

01:10:18 and then make it like that.

01:10:19 Would I prefer just some sort of incredibly boring,

01:10:22 zombie like future where there are all these

01:10:24 mechanical things happening and there's no passion,

01:10:26 no emotion, no experience, maybe even?

01:10:29 No, I would of course much rather prefer it

01:10:32 if all the things that we value the most

01:10:36 about humanity, our subjective experience,

01:10:40 passion, inspiration, love, you know,

01:10:43 if we can create a future where those things do happen,

01:10:48 where those things do exist, you know,

01:10:50 I think ultimately it’s not our universe

01:10:54 giving meaning to us, it’s us giving meaning to our universe.

01:10:57 And if we build more advanced intelligence,

01:11:01 let’s make sure we build it in such a way

01:11:03 that meaning is part of it.

01:11:09 A lot of people that seriously study this problem

01:11:11 and think of it from different angles

01:11:13 find that the majority of cases,

01:11:16 if they think through how things could unfold,

01:11:19 are the ones that are not beneficial to humanity.

01:11:22 And so, yeah, so what are your thoughts?

01:11:25 What’s should people, you know,

01:11:29 I really don’t like people to be terrified.

01:11:33 What’s a way for people to think about it

01:11:35 in a way we can solve it and we can make it better?

01:11:39 No, I don’t think panicking is going to help in any way.

01:11:42 It’s not going to increase chances

01:11:44 of things going well either.

01:11:45 Even if you are in a situation where there is a real threat,

01:11:48 does it help if everybody just freaks out?

01:11:51 No, of course, of course not.

01:11:53 I think, yeah, there are of course ways

01:11:56 in which things can go horribly wrong.

01:11:59 First of all, it’s important when we think about this thing,

01:12:03 about the problems and risks,

01:12:05 to also remember how huge the upsides can be

01:12:07 if we get it right, right?

01:12:08 Everything we love about society and civilization

01:12:12 is a product of intelligence.

01:12:13 So if we can amplify our intelligence

01:12:15 with machine intelligence and no longer lose our loved ones

01:12:18 to what we’re told is an incurable disease

01:12:21 and things like this, of course, we should aspire to that.

01:12:24 So that can be a motivator, I think,

01:12:26 reminding ourselves that the reason we try to solve problems

01:12:29 is not just because we’re trying to avoid gloom,

01:12:33 but because we’re trying to do something great.

01:12:35 But then in terms of the risks,

01:12:37 I think the really important question is to ask,

01:12:42 what can we do today that will actually help

01:12:45 make the outcome good, right?

01:12:47 And dismissing the risk is not one of them.

01:12:51 I find it quite funny often when I’m in discussion panels

01:12:54 about these things,

01:12:55 how the people who work for companies

01:13:01 will always be like, oh, nothing to worry about,

01:13:03 nothing to worry about, nothing to worry about.

01:13:04 And it’s only academics sometimes express concerns.

01:13:09 That’s not surprising at all if you think about it.

01:13:11 Right.

01:13:12 Upton Sinclair quipped, right,

01:13:15 that it’s hard to make a man believe in something

01:13:18 when his income depends on not believing in it.

01:13:20 And frankly, we know that a lot of these people in companies

01:13:24 are just as concerned as anyone else.

01:13:26 But if you’re the CEO of a company,

01:13:28 that’s not something you want to go on record saying

01:13:30 when you have silly journalists who are gonna put a picture

01:13:33 of a Terminator robot when they quote you.

01:13:35 So the issues are real.

01:13:39 And the way I think about what the issue is,

01:13:41 is basically the real choice we have is,

01:13:48 first of all, are we gonna just dismiss the risks

01:13:50 and say, well, let’s just go ahead and build machines

01:13:54 that can do everything we can do better and cheaper.

01:13:57 Let’s just make ourselves obsolete as fast as possible.

01:14:00 What could possibly go wrong?

01:14:01 That’s one attitude.

01:14:03 The opposite attitude, I think, is to say,

01:14:06 here’s this incredible potential,

01:14:08 let’s think about what kind of future

01:14:11 we’re really, really excited about.

01:14:14 What are the shared goals that we can really aspire towards?

01:14:18 And then let’s think really hard

01:14:19 about how we can actually get there.

01:14:22 So start with, don’t start thinking about the risks,

01:14:24 start thinking about the goals.

01:14:26 And then when you do that,

01:14:28 then you can think about the obstacles you want to avoid.

01:14:30 I often get students coming in right here into my office

01:14:32 for career advice.

01:14:34 I always ask them this very question,

01:14:35 where do you want to be in the future?

01:14:37 If all she can say is, oh, maybe I’ll have cancer,

01:14:40 maybe I’ll get run over by a truck.

01:14:42 If she just focuses on the obstacles instead of the goals,

01:14:44 she's just going to end up a paranoid hypochondriac.

01:14:47 Whereas if she comes in with fire in her eyes

01:14:49 and is like, I want to be there.

01:14:51 And then we can talk about the obstacles

01:14:53 and see how we can circumvent them.

01:14:55 That’s, I think, a much, much healthier attitude.

01:14:58 And I feel it’s very challenging to come up with a vision

01:15:03 for the future, which we are unequivocally excited about.

01:15:08 I’m not just talking now in the vague terms,

01:15:10 like, yeah, let’s cure cancer, fine.

01:15:12 I’m talking about what kind of society

01:15:14 do we want to create?

01:15:15 What do we want it to mean to be human in the age of AI,

01:15:20 in the age of AGI?

01:15:22 So if we can have this conversation,

01:15:25 broad, inclusive conversation,

01:15:28 and gradually start converging towards

01:15:31 some future, with some direction at least,

01:15:34 that we want to steer towards, right,

01:15:35 then we’ll be much more motivated

01:15:38 to constructively take on the obstacles.

01:15:39 And I think, if I had to

01:15:43 wrap this up in a more succinct way,

01:15:46 I think we can all agree already now

01:15:51 that we should aspire to build AGI

01:15:56 that doesn’t overpower us, but that empowers us.

01:16:05 And think of the many various ways that it can do that,

01:16:08 whether that’s from my side of the world

01:16:11 of autonomous vehicles.

01:16:12 I’m personally actually from the camp

01:16:14 that believes that human level intelligence

01:16:16 is required to achieve something like vehicles

01:16:20 that would actually be something we would enjoy using

01:16:23 and being part of.

01:16:25 So that’s one example, and certainly there’s a lot

01:16:27 of other types of robots and medicine and so on.

01:16:30 So focusing on those and then coming up with the obstacles,

01:16:33 coming up with the ways that that can go wrong

01:16:35 and solving those one at a time.

01:16:38 And just because you can build an autonomous vehicle,

01:16:41 even if you could build one

01:16:42 that would drive just fine without you,

01:16:45 maybe there are some things in life

01:16:46 that we would actually want to do ourselves.

01:16:48 That’s right.

01:16:49 Right, like, for example,

01:16:51 if you think of our society as a whole,

01:16:53 there are some things that we find very meaningful to do.

01:16:57 And that doesn’t mean we have to stop doing them

01:16:59 just because machines can do them better.

01:17:02 I’m not gonna stop playing tennis

01:17:04 just the day someone builds a tennis robot that can beat me.

01:17:07 People are still playing chess and even go.

01:17:09 Yeah, and in the very near term even,

01:17:14 some people are advocating basic income to replace jobs.

01:17:18 But if the government is gonna be willing

01:17:20 to just hand out cash to people for doing nothing,

01:17:24 then one should also seriously consider

01:17:25 whether the government should also hire

01:17:27 a lot more teachers and nurses

01:17:29 and the kind of jobs which people often

01:17:32 find great fulfillment in doing, right?

01:17:34 We get very tired of hearing politicians saying,

01:17:36 oh, we can’t afford hiring more teachers,

01:17:39 but we’re gonna maybe have basic income.

01:17:41 If we can have more serious research and thought

01:17:44 into what gives meaning to our lives,

01:17:46 the jobs give so much more than income, right?

01:17:48 Mm hmm.

01:17:50 And then think about, in the future,

01:17:53 what are the roles that we wanna have people in,

01:18:00 continually feeling empowered by machines?

01:18:03 And I think sort of, I come from Russia,

01:18:06 from the Soviet Union.

01:18:07 And I think for a lot of people in the 20th century,

01:18:10 going to the moon, going to space was an inspiring thing.

01:18:14 I feel like the universe of the mind,

01:18:18 so AI, understanding, creating intelligence

01:18:20 is that for the 21st century.

01:18:23 So it’s really surprising.

01:18:24 And I’ve heard you mention this.

01:18:25 It’s really surprising to me,

01:18:27 both on the research funding side,

01:18:29 that it’s not funded as greatly as it could be,

01:18:31 but most importantly, on the politician side,

01:18:34 that it’s not part of the public discourse

01:18:36 except in the killer bots terminator kind of view,

01:18:40 that people are not yet, I think, perhaps excited

01:18:44 by the possible positive future

01:18:46 that we can build together.

01:18:48 So we should be, because politicians usually just focus

01:18:51 on the next election cycle, right?

01:18:54 The single most important thing I feel we humans have learned

01:18:57 in the entire history of science

01:18:59 is that we are the masters of underestimation.

01:19:02 We underestimated the size of our cosmos again and again,

01:19:08 realizing that everything we thought existed

01:19:10 was just a small part of something grander, right?

01:19:12 Planet, solar system, the galaxy, clusters of galaxies.

01:19:16 The universe.

01:19:18 And we now know that the future has just

01:19:23 so much more potential

01:19:25 than our ancestors could ever have dreamt of.

01:19:27 This cosmos, imagine if all of Earth

01:19:33 was completely devoid of life,

01:19:36 except for Cambridge, Massachusetts.

01:19:39 Wouldn’t it be kind of lame if all we ever aspired to

01:19:42 was to stay in Cambridge, Massachusetts forever

01:19:45 and then go extinct in one week,

01:19:47 even though Earth was gonna continue on for longer?

01:19:49 That sort of attitude I think we have now

01:19:54 on the cosmic scale, life can flourish on Earth,

01:19:57 not just for the next four years, but for billions of years.

01:20:00 I can even tell you about how to move it out of harm’s way

01:20:02 when the sun gets too hot.

01:20:04 And then we have so many more resources out here,

01:20:09 which today, maybe there are a lot of other planets

01:20:12 with bacteria or cow like life on them,

01:20:14 but most of this, all this opportunity seems,

01:20:19 as far as we can tell, to be largely dead,

01:20:22 like the Sahara Desert.

01:20:23 And yet we have the opportunity to help life flourish

01:20:28 around this for billions of years.

01:20:30 So let’s quit squabbling about

01:20:34 whether some little border should be drawn

01:20:36 one mile to the left or right,

01:20:38 and look up into the skies and realize,

01:20:41 hey, we can do such incredible things.

01:20:44 Yeah, and that’s, I think, why it’s really exciting

01:20:46 that you and others are connected

01:20:49 with some of the work Elon Musk is doing,

01:20:51 because he’s literally going out into that space,

01:20:54 really exploring our universe, and it’s wonderful.

01:20:57 That is exactly why Elon Musk is so misunderstood, right?

01:21:02 People misconstrue him as some kind of pessimistic doomsayer.

01:21:05 The reason he cares so much about AI safety

01:21:07 is because he more than almost anyone else appreciates

01:21:12 these amazing opportunities that we’ll squander

01:21:14 if we wipe ourselves out here on Earth.

01:21:16 We’re not just going to wipe out the next generation,

01:21:19 but all future generations, and this incredible opportunity

01:21:23 that’s out there, and that would really be a waste.

01:21:25 And AI, for people who think that it would be better

01:21:30 to do without technology, let me just mention that

01:21:34 if we don’t improve our technology,

01:21:36 the question isn’t whether humanity is going to go extinct.

01:21:39 The question is just whether we’re going to get taken out

01:21:41 by the next big asteroid or the next super volcano

01:21:44 or something else dumb that we could easily prevent

01:21:48 with more tech, right?

01:21:49 And if we want life to flourish throughout the cosmos,

01:21:53 AI is the key to it.

01:21:56 As I mentioned in a lot of detail in my book right there,

01:21:59 even many of the most inspired sci fi writers,

01:22:04 I feel have totally underestimated the opportunities

01:22:08 for space travel, especially to other galaxies,

01:22:11 because they weren’t thinking about the possibility of AGI,

01:22:15 which just makes it so much easier.

01:22:17 Right, yeah.

01:22:18 So that goes to your view of AGI that enables our progress,

01:22:24 that enables a better life.

01:22:25 So that’s a beautiful way to put it

01:22:28 and then something to strive for.

01:22:29 So Max, thank you so much.

01:22:31 Thank you for your time today.

01:22:32 It’s been awesome.

01:22:33 Thank you so much.

01:22:34 Thanks.

01:22:35 Have a great day.