Transcript
00:00:00 The following is a conversation with Max Tegmark,
00:00:02 his second time on the podcast.
00:00:04 In fact, the previous conversation
00:00:07 was episode number one of this very podcast.
00:00:10 He is a physicist and artificial intelligence researcher
00:00:14 at MIT, cofounder of the Future of Life Institute,
00:00:18 and author of Life 3.0,
00:00:21 Being Human in the Age of Artificial Intelligence.
00:00:24 He’s also the head of a bunch of other huge,
00:00:27 fascinating projects and has written
00:00:29 a lot of different things
00:00:30 that you should definitely check out.
00:00:32 He has been one of the key humans
00:00:34 who has been outspoken about long-term existential risks
00:00:37 of AI and also its exciting possibilities
00:00:40 and solutions to real world problems.
00:00:42 Most recently at the intersection of AI and physics,
00:00:46 and also in reengineering the algorithms
00:00:50 that divide us by controlling the information we see
00:00:53 and thereby creating bubbles and all other kinds
00:00:56 of complex social phenomena that we see today.
00:00:59 In general, he’s one of the most passionate
00:01:01 and brilliant people I have the fortune of knowing.
00:01:04 I hope to talk to him many more times
00:01:06 on this podcast in the future.
00:01:08 Quick mention of our sponsors,
00:01:10 The Jordan Harbinger Show,
00:01:12 Four Sigmatic Mushroom Coffee,
00:01:14 BetterHelp Online Therapy, and ExpressVPN.
00:01:18 So the choice is wisdom, caffeine, sanity, or privacy.
00:01:23 Choose wisely, my friends, and if you wish,
00:01:25 click the sponsor links below to get a discount
00:01:28 and to support this podcast.
00:01:30 As a side note, let me say that many of the researchers
00:01:33 in the machine learning
00:01:35 and artificial intelligence communities
00:01:37 do not spend much time thinking deeply
00:01:40 about existential risks of AI.
00:01:42 Because our current algorithms are seen as useful but dumb,
00:01:46 it’s difficult to imagine how they may become destructive
00:01:49 to the fabric of human civilization
00:01:51 in the foreseeable future.
00:01:53 I understand this mindset, but it’s very troublesome.
00:01:56 To me, this is both a dangerous and uninspiring perspective,
00:02:00 reminiscent of a lobster sitting in a pot of lukewarm water
00:02:03 that a minute ago was cold.
00:02:06 I feel a kinship with this lobster.
00:02:08 I believe that already the algorithms
00:02:10 that drive our interaction on social media
00:02:12 have an intelligence and power
00:02:14 that far outstrip the intelligence and power
00:02:17 of any one human being.
00:02:19 Now really is the time to think about this,
00:02:21 to define the trajectory of the interplay
00:02:24 of technology and human beings in our society.
00:02:26 I think that the future of human civilization
00:02:29 very well may be at stake over this very question
00:02:32 of the role of artificial intelligence in our society.
00:02:36 If you enjoy this thing, subscribe on YouTube,
00:02:38 review it on Apple Podcasts, follow on Spotify,
00:02:40 support on Patreon, or connect with me on Twitter
00:02:43 at Lex Fridman.
00:02:45 And now, here’s my conversation with Max Tegmark.
00:02:49 So people might not know this,
00:02:51 but you were actually episode number one of this podcast
00:02:55 just a couple of years ago, and now we’re back.
00:02:59 And it so happens that a lot of exciting things happened
00:03:02 in both physics and artificial intelligence,
00:03:05 both fields that you’re super passionate about.
00:03:08 Can we try to catch up to some of the exciting things
00:03:11 happening in artificial intelligence,
00:03:14 especially in the context of the way it’s cracking
00:03:17 open the different problems of the sciences?
00:03:20 Yeah, I’d love to, especially now as we start 2021 here,
00:03:24 it’s a really fun time to think about
00:03:26 what were the biggest breakthroughs in AI,
00:03:29 not the ones necessarily that media wrote about,
00:03:31 but that really matter, and what does that mean
00:03:35 for our ability to do better science?
00:03:37 What does it mean for our ability
00:03:39 to help people around the world?
00:03:43 And what does it mean for new problems
00:03:46 that they could cause if we’re not smart enough
00:03:48 to avoid them, so what do we learn basically from this?
00:03:51 Yes, absolutely.
00:03:52 So one of the amazing things you’re a part of
00:03:54 is the AI Institute for Artificial Intelligence
00:03:57 and Fundamental Interactions.
00:04:00 What’s up with this institute?
00:04:02 What are you working on?
00:04:03 What are you thinking about?
00:04:05 The idea is something I’m very on fire with,
00:04:09 which is basically AI meets physics.
00:04:11 And it’s been almost five years now
00:04:15 since I shifted my own MIT research
00:04:18 from physics to machine learning.
00:04:20 And in the beginning, I noticed that a lot of my colleagues,
00:04:22 even though they were polite about it,
00:04:24 were like kind of, what is Max doing?
00:04:27 What is this weird stuff?
00:04:29 He’s lost his mind.
00:04:30 But then gradually, I, together with some colleagues,
00:04:35 were able to persuade more and more of the other professors
00:04:40 in our physics department to get interested in this.
00:04:42 And now we’ve got this amazing NSF Center,
00:04:46 so 20 million bucks for the next five years, MIT,
00:04:50 and a bunch of neighboring universities here also.
00:04:53 And I noticed now those colleagues
00:04:55 who were looking at me funny have stopped
00:04:57 asking what the point is of this,
00:05:00 because it’s becoming more clear.
00:05:02 And I really believe that, of course,
00:05:05 AI can help physics a lot to do better physics.
00:05:09 But physics can also help AI a lot,
00:05:13 both by building better hardware.
00:05:16 My colleague, Marin Soljacic, for example,
00:05:18 is working on an optical chip for much faster machine
00:05:23 learning, where the computation is done
00:05:25 not by moving electrons around, but by moving photons around,
00:05:30 dramatically less energy use, faster, better.
00:05:34 We can also help AI a lot, I think,
00:05:37 by having a different set of tools
00:05:42 and a different, maybe more audacious attitude.
00:05:46 AI has, to a significant extent, been an engineering discipline
00:05:51 where you’re just trying to make things that work
00:05:54 and being more interested in maybe selling them
00:05:56 than in figuring out exactly how they work
00:06:00 and proving theorems that they will always work.
00:06:03 Contrast that with physics.
00:06:05 When Elon Musk sends a rocket to the International Space
00:06:08 Station, they didn’t just train with machine learning.
00:06:12 Oh, let’s fire it a little bit more to the left,
00:06:14 a bit more to the right.
00:06:14 Oh, that also missed.
00:06:15 Let’s try here.
00:06:16 No, we figured out Newton’s laws of gravitation and other things
00:06:23 and got a really deep fundamental understanding.
00:06:26 And that’s what gives us such confidence in rockets.
00:06:30 And my vision is that in the future,
00:06:36 all machine learning systems that actually have impact
00:06:38 on people’s lives will be understood
00:06:40 at a really, really deep level.
00:06:43 So we trust them, not because some sales rep told us to,
00:06:46 but because they’ve earned our trust.
00:06:50 And for really safety-critical things,
00:06:51 even prove that they will always do what we expect them to do.
00:06:55 That’s very much the physics mindset.
00:06:57 So it’s interesting, if you look at big breakthroughs
00:07:00 that have happened in machine learning this year,
00:07:03 from dancing robots, it’s pretty fantastic.
00:07:08 Not just because it’s cool, but if you just
00:07:10 think about not that many years ago,
00:07:12 this YouTube video at this DARPA challenge with the MIT robot
00:07:16 comes out of the car and face plants.
00:07:20 How far we’ve come in just a few years.
00:07:23 Similarly, AlphaFold 2, crushing the protein folding
00:07:30 problem.
00:07:31 We can talk more about implications
00:07:33 for medical research and stuff.
00:07:34 But hey, that’s huge progress.
00:07:39 You can look at GPT-3 that can spout off
00:07:44 English text, which sometimes really, really blows you away.
00:07:48 You can look at DeepMind’s MuZero,
00:07:52 which doesn’t just kick our butt in Go and Chess and Shogi,
00:07:57 but also in all these Atari games.
00:07:59 And you don’t even have to teach it the rules now.
00:08:02 What all of those have in common is, besides being powerful,
00:08:06 is we don’t fully understand how they work.
00:08:10 And that’s fine if it’s just some dancing robots.
00:08:13 And the worst thing that can happen is they face plant.
00:08:16 Or if they’re playing Go, and the worst thing that can happen
00:08:19 is that they make a bad move and lose the game.
00:08:22 It’s less fine if that’s what’s controlling
00:08:25 your self driving car or your nuclear power plant.
00:08:29 And we’ve seen already that even though Hollywood
00:08:33 had all these movies where they try
00:08:35 to make us worry about the wrong things,
00:08:37 like machines turning evil, the actual bad things that
00:08:41 have happened with automation have not
00:08:43 been machines turning evil.
00:08:45 They’ve been caused by overtrust in things
00:08:48 we didn’t understand as well as we thought we did.
00:08:51 Even very simple automated systems
00:08:54 like what Boeing put into the 737 MAX killed a lot of people.
00:09:00 Was it that that little simple system was evil?
00:09:02 Of course not.
00:09:03 But we didn’t understand it as well as we should have.
00:09:07 And we trusted without understanding.
00:09:10 Exactly.
00:09:11 That’s the overtrust.
00:09:12 We didn’t even understand that we didn’t understand.
00:09:15 The humility is really at the core of being a scientist.
00:09:19 I think step one, if you want to be a scientist,
00:09:21 is don’t ever fool yourself into thinking you understand things
00:09:25 when you actually don’t.
00:09:27 That’s probably good advice for humans in general.
00:09:29 I think humility in general can do us good.
00:09:31 But in science, it’s so spectacular.
00:09:33 Why did we have the wrong theory of gravity
00:09:35 ever from Aristotle onward until Galileo’s time?
00:09:40 Why would we believe something so dumb as that if I throw
00:09:43 this water bottle, it’s going to go up with constant speed
00:09:47 until it realizes that its natural motion is down?
00:09:49 It changes its mind.
00:09:51 Because people just kind of assumed Aristotle was right.
00:09:55 He’s an authority.
00:09:56 We understand that.
00:09:57 Why did we believe things like that the sun is
00:09:59 going around the Earth?
00:10:01 Why did we believe that time flows
00:10:04 at the same rate for everyone until Einstein?
00:10:06 Same exact mistake over and over again.
00:10:08 We just weren’t humble enough to acknowledge that we actually
00:10:12 didn’t know for sure.
00:10:13 We assumed we knew.
00:10:15 So we didn’t discover the truth because we
00:10:17 assumed there was nothing there to be discovered, right?
00:10:20 There was something to be discovered about the 737 Max.
00:10:24 And if you had been a bit more suspicious
00:10:26 and tested it better, we would have found it.
00:10:28 And it’s the same thing with most harm
00:10:30 that’s been done by automation so far, I would say.
00:10:33 So I don’t know if you heard here of a company called
00:10:35 Knight Capital?
00:10:38 So, good.
00:10:38 That means you didn’t invest in them earlier.
00:10:42 They deployed this automated trading system,
00:10:45 all nice and shiny.
00:10:47 They didn’t understand it as well as they thought.
00:10:49 And it went about losing $10 million
00:10:51 per minute for 44 minutes straight
00:10:55 until someone presumably was like, oh, no, shut this up.
00:10:59 Was it evil?
00:11:00 No.
00:11:01 It was, again, misplaced trust, something they didn’t fully
00:11:04 understand, right?
00:11:05 And there have been so many, even when people
00:11:09 have been killed by robots, which is quite rare still,
00:11:12 but in factory accidents, it’s in every single case
00:11:15 been not malice, just that the robot didn’t understand
00:11:19 that a human is different from an auto part or whatever.
00:11:24 So this is why I think there’s so much opportunity
00:11:28 for a physics approach, where you just aim for a higher
00:11:32 level of understanding.
00:11:33 And if you look at all these systems
00:11:36 that we talked about from reinforcement learning
00:11:40 systems and dancing robots to all these neural networks
00:11:44 that power GPT-3 and Go-playing software and stuff,
00:11:49 they’re all basically black boxes,
00:11:53 not so different from if you teach a human something,
00:11:55 you have no idea how their brain works, right?
00:11:58 Except the human brain, at least,
00:11:59 has been error corrected during many, many centuries
00:12:03 of evolution in a way that some of these systems have not,
00:12:06 right?
00:12:07 And my MIT research is entirely focused
00:12:10 on demystifying this black box, intelligible intelligence
00:12:14 is my slogan.
00:12:15 That’s a good line, intelligible intelligence.
00:12:18 Yeah, that we shouldn’t settle for something
00:12:20 that seems intelligent, but it should
00:12:22 be intelligible so that we actually trust it
00:12:24 because we understand it, right?
00:12:26 Like, again, Elon trusts his rockets
00:12:28 because he understands Newton’s laws and thrust
00:12:31 and how everything works.
00:12:33 And can I tell you why I’m optimistic about this?
00:12:36 Yes.
00:12:37 I think we’ve made a bit of a mistake
00:12:41 where some people still think that somehow we’re never going
00:12:44 to understand neural networks.
00:12:47 We’re just going to have to learn to live with this.
00:12:49 It’s this very powerful black box.
00:12:52 Basically, for those who haven’t spent time
00:12:55 building their own, it’s super simple what happens inside.
00:12:59 You send in a long list of numbers,
00:13:01 and then you do a bunch of operations on them,
00:13:04 multiply by matrices, et cetera, et cetera,
00:13:06 and some other numbers come out that’s output of it.
00:13:09 And then there are a bunch of knobs you can tune.
00:13:13 And when you change them, it affects the computation,
00:13:16 the input output relation.
00:13:18 And then you just give the computer
00:13:19 some definition of good, and it keeps optimizing these knobs
00:13:22 until it performs as good as possible.
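To make that description concrete, here is a minimal sketch in Python of the kind of computation being described; the layer sizes, the random "knobs," and the squared-error definition of good are arbitrary illustrative choices, not anyone's actual system.

import numpy as np

# A long list of numbers goes in...
x = np.random.rand(6)

# ...the "knobs" are just weight matrices and bias vectors (arbitrary sizes here)...
W1, b1 = np.random.randn(32, 6), np.zeros(32)
W2, b2 = np.random.randn(1, 32), np.zeros(1)

def network(x):
    hidden = np.tanh(W1 @ x + b1)  # multiply by a matrix, squash
    return W2 @ hidden + b2        # ...and some other numbers come out

# ...and a "definition of good" is just a number to drive down,
# for example squared error against whatever answer we wanted.
target = np.array([1.0])
loss = np.sum((network(x) - target) ** 2)
print(loss)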
00:13:24 And often, you go like, wow, that’s really good.
00:13:27 This robot can dance, or this machine
00:13:29 is beating me at chess now.
00:13:31 And in the end, you have something
00:13:33 which, even though you can look inside it,
00:13:35 you have very little idea of how it works.
00:13:38 You can print out tables of all the millions of parameters
00:13:42 in there.
00:13:43 Is it crystal clear now how it’s working?
00:13:45 No, of course not.
00:13:46 Many of my colleagues seem willing to settle for that.
00:13:49 And I’m like, no, that’s like the halfway point.
00:13:54 Some have even gone as far as sort of guessing
00:14:00 that the inscrutability of this is
00:14:00 where some of the power comes from,
00:14:02 and some sort of mysticism.
00:14:05 I think that’s total nonsense.
00:14:06 I think the real power of neural networks
00:14:10 comes not from inscrutability, but from differentiability.
00:14:15 And what I mean by that is simply
00:14:17 that the output changes only smoothly if you tweak your knobs.
00:14:23 And then you can use all these powerful methods
00:14:26 we have for optimization in science.
00:14:28 We can just tweak them a little bit and see,
00:14:30 did that get better or worse?
00:14:31 That’s the fundamental idea of machine learning,
00:14:33 that the machine itself can keep optimizing
00:14:36 until it gets better.
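A small sketch of that tweak-and-check loop, with a toy one-knob objective standing in for a real network; the step size and the finite-difference probe are made-up choices, and the point is only that a tiny tweak gives a smooth better-or-worse signal.

import numpy as np

# One "knob" (parameter) and a smooth loss: nudging the knob
# changes the output only a little, so we can ask "better or worse?"
def loss(w):
    return (np.tanh(2.0 * w) - 0.5) ** 2  # toy differentiable objective

w = 0.0
for _ in range(100):
    eps = 1e-4
    slope = (loss(w + eps) - loss(w - eps)) / (2 * eps)  # did a tiny tweak help?
    w -= 0.1 * slope  # move the knob the way that made things better

print(w, loss(w))  # the loss shrinks smoothly; every knob setting is still a valid program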
00:14:37 Suppose you wrote this algorithm instead in Python
00:14:41 or some other programming language,
00:14:43 and then what the knobs did was they just changed
00:14:46 random letters in your code.
00:14:49 Now it would just epically fail.
00:14:51 You change one thing, and instead of saying print,
00:14:53 it says some gibberish, syntax error.
00:14:56 You don’t even know, was that for the better
00:14:58 or for the worse, right?
00:14:59 This, to me, is what I believe is
00:15:02 the fundamental power of neural networks.
00:15:05 And just to clarify, the changing
00:15:06 of the different letters in a program
00:15:08 would not be a differentiable process.
00:15:10 It would make it an invalid program, typically.
00:15:13 And then you wouldn’t even know whether changing
00:15:16 more letters would make it work again, right?
00:15:18 So that’s the magic of neural networks, the inscrutability.
00:15:23 The differentiability, that every setting of the parameters
00:15:26 is a program, and you can tell is it better or worse, right?
00:15:29 And so.
00:15:31 So you don’t like the poetry of the mystery of neural networks
00:15:33 as the source of its power?
00:15:35 I generally like poetry, but.
00:15:37 Not in this case.
00:15:39 It’s so misleading.
00:15:40 And above all, it shortchanges us.
00:15:42 It makes us underestimate the good things
00:15:46 we can accomplish.
00:15:47 So what we’ve been doing in my group
00:15:49 is basically step one, train the mysterious neural network
00:15:53 to do something well.
00:15:54 And then step two, do some additional AI techniques
00:15:59 to see if we can now transform this black box into something
00:16:03 equally intelligent that you can actually understand.
00:16:07 So for example, I’ll give you one example, this AI Feynman
00:16:09 project that we just published, right?
00:16:11 So we took the 100 most famous or complicated equations
00:16:18 from one of my favorite physics textbooks,
00:16:20 in fact, the one that got me into physics
00:16:22 in the first place, the Feynman lectures on physics.
00:16:25 And so you have a formula.
00:16:28 Maybe what goes into the formula
00:16:31 is six different variables, and then what comes out is one.
00:16:35 So then you can make a giant Excel spreadsheet
00:16:38 with seven columns.
00:16:39 You put in just random numbers for the six columns
00:16:41 for those six input variables, and then you
00:16:43 calculate with a formula the seventh column, the output.
00:16:46 So maybe it’s like the force, in the last column,
00:16:50 equals some function of the others.
00:16:51 And now the task is, OK, if I don’t tell you
00:16:53 what the formula was, can you figure that out
00:16:57 from looking at my spreadsheet I gave you?
00:17:00 This problem is called symbolic regression.
00:17:04 If I tell you that the formula is
00:17:05 what we call a linear formula, so it’s just
00:17:08 that the output is the sum of the inputs times
00:17:14 some constants, that’s the famous easy problem
00:17:17 we can solve.
00:17:18 We do it all the time in science and engineering.
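A minimal sketch of that easy linear case; the six made-up coefficients and the use of NumPy's least-squares solver are just illustrative.

import numpy as np

rng = np.random.default_rng(0)

# The "spreadsheet": six columns of random inputs...
X = rng.random((1000, 6))

# ...and a hidden seventh column produced by an (unknown to us) linear formula.
true_coeffs = np.array([3.0, -1.0, 0.5, 2.0, 0.0, 4.0])
y = X @ true_coeffs

# Recovering a linear formula from the table is the famous easy problem:
recovered, *_ = np.linalg.lstsq(X, y, rcond=None)
print(recovered)  # ~ [ 3.  -1.   0.5  2.   0.   4. ]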
00:17:21 But the general one, if it’s more complicated functions
00:17:24 with logarithms or cosines or other math,
00:17:27 it’s a very, very hard one and probably impossible
00:17:30 to do fast in general, just because the number of formulas
00:17:34 with n symbols just grows exponentially,
00:17:37 just like the number of passwords
00:17:38 you can make grows dramatically with length.
00:17:43 But we had this idea that if you first
00:17:46 have a neural network that can actually approximate
00:17:48 the formula, you just trained it,
00:17:49 even if you don’t understand how it works,
00:17:51 that can be the first step towards actually understanding
00:17:56 how it works.
00:17:58 So that’s what we do first.
00:18:00 And then we study that neural network now
00:18:03 and put in all sorts of other data
00:18:04 that wasn’t in the original training data
00:18:06 and use that to discover simplifying
00:18:09 properties of the formula.
00:18:11 And that lets us break it apart, often
00:18:13 into many simpler pieces in a kind of divide
00:18:15 and conquer approach.
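Here is a rough sketch of that probe-the-surrogate idea, under assumptions of my own choosing: a small scikit-learn network stands in for the trained black box, the hidden formula is made up, and the simplifying property tested is whether the output depends on two inputs only through their difference.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hidden "mystery formula" that generated the spreadsheet; note it depends
# on x0 and x1 only through their difference.
X = rng.uniform(-1, 1, size=(5000, 3))
y = np.sin(X[:, 0] - X[:, 1]) * X[:, 2]

# Step 1: train a black-box surrogate that merely fits the data.
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
net.fit(X, y)

# Step 2: probe the surrogate with inputs that were never in the training set
# to test a simplifying property: is the output unchanged if we shift x0 and x1
# by the same amount, i.e. does it depend only on x0 - x1?
probe = rng.uniform(-0.5, 0.5, size=(200, 3))
shifted = probe.copy()
shifted[:, 0] += 0.3
shifted[:, 1] += 0.3

print(np.max(np.abs(net.predict(probe) - net.predict(shifted))))
# A small discrepancy suggests the formula can be rewritten in terms of
# x0 - x1, splitting the problem into simpler pieces.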
00:18:17 So we were able to solve all of those 100 formulas,
00:18:20 discover them automatically, plus a whole bunch
00:18:22 of other ones.
00:18:22 And it’s actually kind of humbling
00:18:26 to see that this code, which anyone who wants,
00:18:29 who is listening to this, can type pip install AI Feynman
00:18:33 on their computer and run it.
00:18:34 It can actually do what Johannes Kepler spent four years doing
00:18:38 when he stared at Mars data until he was like,
00:18:40 finally, Eureka, this is an ellipse.
00:18:44 This will do it automatically for you in one hour.
00:18:46 Or Max Planck, he was looking at how much radiation comes out
00:18:51 from different wavelengths from a hot object
00:18:54 and discovered the famous blackbody formula.
00:18:57 This discovers it automatically.
00:19:00 I’m actually excited about seeing
00:19:05 if we can discover not just old formulas again,
00:19:08 but new formulas that no one has seen before.
00:19:12 I do like this process of using kind of a neural network
00:19:14 to find some basic insights and then dissecting
00:19:18 the neural network to then gain the final.
00:19:21 So in that way, you’re forcing the explainability issue,
00:19:30 really trying to analyze the neural network for the things
00:19:34 it knows in order to come up with the final beautiful,
00:19:38 simple theory underlying the initial system
00:19:42 that you were looking at.
00:19:43 I love that.
00:19:44 And the reason I’m so optimistic that it
00:19:47 can be generalized to so much more
00:19:49 is because that’s exactly what we do as human scientists.
00:19:53 Think of Galileo, whom we mentioned, right?
00:19:55 I bet when he was a little kid, if his dad threw him an apple,
00:19:58 he would catch it.
00:20:01 Why?
00:20:01 Because he had a neural network in his brain
00:20:04 that he had trained to predict the parabolic orbit of apples
00:20:07 that are thrown under gravity.
00:20:09 If you throw a tennis ball to a dog,
00:20:12 it also has this same ability of deep learning
00:20:15 to figure out how the ball is going to move and catch it.
00:20:18 But Galileo went one step further when he got older.
00:20:21 He went back and was like, wait a minute.
00:20:26 I can write down a formula for this.
00:20:27 Y equals x squared, a parabola.
00:20:31 And he helped revolutionize physics as we know it, right?
00:20:36 So there was a basic neural network
00:20:38 in there from childhood that captured the experiences
00:20:43 of observing different kinds of trajectories.
00:20:46 And then he was able to go back in
00:20:48 with another extra little neural network
00:20:51 and analyze all those experiences and be like,
00:20:53 wait a minute.
00:20:54 There’s a deeper rule here.
00:20:56 Exactly.
00:20:56 He was able to distill out in symbolic form
00:21:00 what that complicated black box neural network was doing.
00:21:03 Not only did the formula he got ultimately
00:21:07 become more accurate, and similarly, this
00:21:09 is how Newton got Newton’s laws, which
00:21:12 is why Elon can send rockets to the space station now, right?
00:21:15 So it’s not only more accurate, but it’s also simpler,
00:21:19 much simpler.
00:21:20 And it’s so simple that we can actually describe it
00:21:22 to our friends and each other, right?
00:21:26 We’ve talked about it just in the context of physics now.
00:21:28 But hey, isn’t this what we’re doing when we’re
00:21:31 talking to each other also?
00:21:33 We go around with our neural networks,
00:21:35 just like dogs and cats and chipmunks and Blue Jays.
00:21:38 And we experience things in the world.
00:21:41 But then we humans do this additional step
00:21:43 on top of that, where we then distill out
00:21:46 certain high level knowledge that we’ve extracted from this
00:21:50 in a way that we can communicate it
00:21:52 to each other in a symbolic form in English in this case, right?
00:21:56 So if we can do it and we believe
00:21:59 that we are information processing entities,
00:22:02 then we should be able to make machine learning that
00:22:04 does it also.
00:22:07 Well, do you think the entire thing could be learning?
00:22:10 Because this dissection process, like for AI Feynman,
00:22:14 the secondary stage feels like something like reasoning.
00:22:19 And the initial step feels more like the more basic kind
00:22:23 of differentiable learning.
00:22:25 Do you think the whole thing could be differentiable
00:22:27 learning?
00:22:28 Do you think the whole thing could be basically neural
00:22:31 networks on top of each other?
00:22:32 It’s like turtles all the way down.
00:22:33 Could it be neural networks all the way down?
00:22:35 I mean, that’s a really interesting question.
00:22:37 We know that in your case, it is neural networks all the way
00:22:41 down because that’s all you have in your skull
00:22:42 is a bunch of neurons doing their thing, right?
00:22:45 But if you ask the question more generally,
00:22:50 what algorithms are being used in your brain,
00:22:54 I think it’s super interesting to compare.
00:22:56 I think we’ve gone a little bit backwards historically
00:22:58 because we humans first discovered good old fashioned
00:23:02 AI, the logic-based AI that we often call GOFAI,
00:23:06 for good old-fashioned AI.
00:23:09 And then more recently, we did machine learning
00:23:12 because it required bigger computers.
00:23:14 So we had to discover it later.
00:23:15 So we think of machine learning with neural networks
00:23:19 as the modern thing and the logic based AI
00:23:21 as the old fashioned thing.
00:23:24 But if you look at evolution on Earth,
00:23:27 it’s actually been the other way around.
00:23:29 I would say that, for example, an eagle
00:23:34 has a better vision system than I have.
00:23:38 And dogs are just as good at catching tennis balls as I am.
00:23:42 All this stuff which is done by training a neural network
00:23:45 and not interpreting it in words is
00:23:49 something so many of our animal friends can do,
00:23:51 at least as well as us, right?
00:23:53 What is it that we humans can do that the chipmunks
00:23:56 and the eagles cannot?
00:23:58 It’s more to do with this logic based stuff, right,
00:24:01 where we can extract out information
00:24:04 in symbols, in language, and now even with equations
00:24:10 if you’re a scientist, right?
00:24:12 So basically what happened was first we
00:24:13 built these computers that could multiply numbers real fast
00:24:16 and manipulate symbols.
00:24:18 And we felt they were pretty dumb.
00:24:20 And then we made neural networks that
00:24:22 can see as well as a cat can and do
00:24:25 a lot of this inscrutable black box stuff with neural networks.
00:24:30 What we humans can do also is put the two together
00:24:33 in a useful way.
00:24:34 Yes, in our own brain.
00:24:36 Yes, in our own brain.
00:24:37 So if we ever want to get artificial general intelligence
00:24:40 that can do all jobs as well as humans can, right,
00:24:45 then that’s what’s going to be required
00:24:47 to be able to combine the neural networks with symbolic,
00:24:53 combine the old AI with the new AI in a good way.
00:24:55 We do it in our brains.
00:24:57 And there seems to be basically two strategies
00:24:59 I see in industry now.
00:25:01 One scares the heebie jeebies out of me,
00:25:03 and the other one I find much more encouraging.
00:25:05 OK, which one?
00:25:07 Can we break them apart?
00:25:08 Which of the two?
00:25:09 The one that scares the heebie jeebies out of me
00:25:11 is this attitude that we’re just going
00:25:12 to make ever bigger systems that we still
00:25:14 don’t understand until they can be as smart as humans.
00:25:19 What could possibly go wrong?
00:25:22 I think it’s just such a reckless thing to do.
00:25:24 And unfortunately, if we actually
00:25:27 succeed as a species to build artificial general intelligence,
00:25:30 then we still have no clue how it works.
00:25:31 I think at least 50% chance we’re
00:25:35 going to be extinct before too long.
00:25:37 It’s just going to be an utter epic own goal.
00:25:40 So it’s that 44 minute losing money problem or the paper clip
00:25:46 problem where we don’t understand how it works,
00:25:49 and it just in a matter of seconds
00:25:51 runs away in some kind of direction
00:25:52 that’s going to be very problematic.
00:25:54 Even long before you have to worry about the machines
00:25:57 themselves somehow deciding to do things
00:26:01 that are bad for us, we have to worry about people using machines
00:26:06 that are short of AGI but powerful, to do bad things.
00:26:09 I mean, just take a moment.
00:26:13 And if anyone is not worried particularly about advanced AI,
00:26:18 just take 10 seconds and just think
00:26:20 about your least favorite leader on the planet right now.
00:26:23 Don’t tell me who it is.
00:26:25 I want to keep this apolitical.
00:26:26 But just see the face in front of you,
00:26:28 that person, for 10 seconds.
00:26:30 Now imagine that that person has this incredibly powerful AI
00:26:35 under their control and can use it
00:26:37 to impose their will on the whole planet.
00:26:38 How does that make you feel?
00:26:42 Yeah.
00:26:44 So can we break that apart just briefly?
00:26:49 For the 50% chance that we’ll run
00:26:51 into trouble with this approach, do you
00:26:53 see the bigger worry in that leader or humans
00:26:58 using the system to do damage?
00:27:00 Or are you more worried, and I think I’m in this camp,
00:27:05 more worried about accidental, unintentional destruction
00:27:09 of everything?
00:27:10 So humans trying to do good, and in a way
00:27:14 where everyone agrees it’s kind of good,
00:27:17 it’s just they’re trying to do good without understanding.
00:27:20 Because I think every evil leader in history,
00:27:22 to some degree, thought
00:27:24 they were trying to do good.
00:27:25 Oh, yeah.
00:27:25 I’m sure Hitler thought he was doing good.
00:27:28 Yeah.
00:27:29 I’ve been reading a lot about Stalin.
00:27:31 I’m sure Stalin legitimately
00:27:34 thought that communism was good for the world,
00:27:36 and that he was doing good.
00:27:37 I think Mao Zedong thought what he was doing with the Great
00:27:39 Leap Forward was good too.
00:27:41 Yeah.
00:27:42 I’m actually concerned about both of those.
00:27:45 Before, I promised to answer this in detail,
00:27:48 but before we do that, let me finish
00:27:50 answering the first question.
00:27:51 Because I told you that there were two different routes we
00:27:53 could get to artificial general intelligence,
00:27:55 and one scares the hell out of me,
00:27:57 which is this one where we build something,
00:27:59 we just say bigger neural networks, ever more hardware,
00:28:02 and just train the heck out of more data,
00:28:03 and poof, now it’s very powerful.
00:28:07 That, I think, is the most unsafe and reckless approach.
00:28:11 The alternative to that is the intelligible intelligence
00:28:16 approach instead, where we say neural networks is just
00:28:22 a tool for the first step to get the intuition,
00:28:27 but then we’re going to spend also
00:28:29 serious resources on other AI techniques
00:28:33 for demystifying this black box and figuring out
00:28:35 what it’s actually doing so we can convert it
00:28:38 into something that’s equally intelligent,
00:28:41 but that we actually understand what it’s doing.
00:28:44 Maybe we can even prove theorems about it,
00:28:45 that this car here will never be hacked when it’s driving,
00:28:50 because here is the proof.
00:28:53 There is a whole science of this.
00:28:55 It doesn’t work for neural networks
00:28:57 that are big black boxes, but it works well
00:28:58 with certain other kinds of code, right?
00:29:02 That approach, I think, is much more promising.
00:29:05 That’s exactly why I’m working on it, frankly,
00:29:07 not just because I think it’s cool for science,
00:29:09 but because I think the more we understand these systems,
00:29:14 the better the chances that we can
00:29:16 make them do the things that are good for us
00:29:18 that are actually intended, not unintended.
00:29:21 So you think it’s possible to prove things
00:29:24 about something as complicated as a neural network?
00:29:27 That’s the hope?
00:29:28 Well, ideally, there’s no reason it
00:29:30 has to be a neural network in the end either, right?
00:29:34 We discovered Newton’s laws of gravity
00:29:36 with the neural network in Newton’s head.
00:29:40 But that’s not the way it’s programmed into the navigation
00:29:44 system of Elon Musk’s rocket anymore.
00:29:46 It’s written in C++, or I don’t know
00:29:49 what language he uses exactly.
00:29:51 And then there are software tools called symbolic
00:29:53 verification.
00:29:54 DARPA and the US military has done a lot of really great
00:29:59 research on this, because they really
00:30:01 want to understand that when they build weapon systems,
00:30:03 they don’t just go fire at random or malfunction, right?
00:30:07 And there is even a whole operating system kernel
00:30:10 called seL4 that’s been developed with DARPA funding,
00:30:12 where you can actually mathematically prove
00:30:16 that this thing can never be hacked.
00:30:18 Wow.
00:30:20 One day, I hope that will be something
00:30:22 you can say about the OS that’s running on our laptops too.
00:30:25 As you know, we’re not there.
00:30:27 But I think we should be ambitious, frankly.
00:30:30 And if we can use machine learning
00:30:34 to help do the proofs and so on as well,
00:30:36 then it’s much easier to verify that a proof is correct
00:30:40 than to come up with a proof in the first place.
00:30:42 That’s really the core idea here.
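A tiny, purely illustrative example of that asymmetry in the Lean proof assistant: writing the proof term is the creative step, while checking it is mechanical; the theorem and names here are just a toy example.

-- Discovering the right proof term is the hard, creative part...
theorem my_add_comm (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- ...but checking it is routine: if this file compiles, the proof is correct.
example : 2 + 3 = 3 + 2 := my_add_comm 2 3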
00:30:45 If someone comes on your podcast and says
00:30:47 they proved the Riemann hypothesis
00:30:49 or some sensational new theorem, it’s
00:30:55 much easier for someone else, take some smart grad,
00:30:58 math grad students to check, oh, there’s an error here
00:31:01 on equation five, or this really checks out,
00:31:04 than it was to discover the proof.
00:31:07 Yeah, although some of those proofs are pretty complicated.
00:31:09 But yes, it’s still nevertheless much easier
00:31:11 to verify the proof.
00:31:12 I love the optimism.
00:31:14 We kind of, even with the security of systems,
00:31:17 there’s a kind of cynicism that pervades people
00:31:21 who think about this, which is like, oh, it’s hopeless.
00:31:24 I mean, in the same sense, exactly like you’re saying
00:31:27 with neural networks, oh, it’s hopeless to understand
00:31:29 what’s happening.
00:31:30 With security, people are just like, well,
00:31:32 it’s always going, there’s always going to be
00:31:36 attack vectors, like ways to attack the system.
00:31:40 But you’re right, we’re just very new
00:31:42 with these computational systems.
00:31:44 We’re new with these intelligent systems.
00:31:46 And it’s not out of the realm of possibility,
00:31:49 just like people that understand the movement
00:31:51 of the stars and the planets and so on.
00:31:54 It’s entirely possible that within, hopefully soon,
00:31:58 but it could be within 100 years,
00:32:00 we start to have something like the laws of gravity
00:32:03 about intelligence and God forbid about consciousness too.
00:32:09 That one is…
00:32:10 Agreed.
00:32:12 I think, of course, if you’re selling computers
00:32:15 that get hacked a lot, that’s in your interest
00:32:16 as a company, that people think it’s impossible
00:32:18 to make them safe, so nobody gets the idea
00:32:20 of suing you.
00:32:21 I want to really inject optimism here.
00:32:24 It’s absolutely possible to do much better
00:32:29 than we’re doing now.
00:32:30 And your laptop does so much stuff.
00:32:34 You don’t need the music player to be super safe
00:32:37 in your future self driving car, right?
00:32:42 If someone hacks it and starts playing music
00:32:43 you don’t like, the world won’t end.
00:32:47 But what you can do is break it out
00:32:49 and say that the drive computer that controls your safety
00:32:53 must be completely physically decoupled
00:32:55 from the entertainment system.
00:32:57 And it must physically be such that it can’t take
00:33:01 over-the-air updates while you’re driving.
00:33:03 And it can have ultimately some operating system on it
00:33:09 which is symbolically verified and proven
00:33:13 that it’s always going to do what it’s supposed to do, right?
00:33:17 We can basically have, and companies should take
00:33:19 that attitude too.
00:33:20 They should look at everything they do and say
00:33:22 what are the few systems in our company
00:33:25 that threaten the whole life of the company
00:33:27 if they get hacked and have the highest standards for them.
00:33:31 And then they can save money by going for the el cheapo
00:33:34 poorly understood stuff for the rest.
00:33:36 This is very feasible, I think.
00:33:38 And coming back to the bigger question
00:33:41 that you worried about that there’ll be unintentional
00:33:45 failures, I think there are two quite separate risks here.
00:33:47 Right?
00:33:48 We talked a lot about one of them
00:33:49 which is that the goals are noble of the human.
00:33:52 The human says, I want this airplane to not crash
00:33:56 because this is not Mohamed Atta
00:33:58 now flying the airplane, right?
00:34:00 And now there’s this technical challenge
00:34:03 of making sure that the autopilot is actually
00:34:05 gonna behave as the pilot wants.
00:34:11 If you set that aside, there’s also the separate question.
00:34:13 How do you make sure that the goals of the pilot
00:34:17 are actually aligned with the goals of the passenger?
00:34:19 How do you make sure very much more broadly
00:34:22 that if we can all agree as a species
00:34:24 that we would like things to kind of go well
00:34:26 for humanity as a whole, that the goals are aligned here.
00:34:30 The alignment problem.
00:34:31 And yeah, there’s been a lot of progress
00:34:36 in the sense that there’s suddenly huge amounts
00:34:39 of research going on about it.
00:34:42 I’m very grateful to Elon Musk
00:34:43 for giving us that money five years ago
00:34:44 so we could launch the first research program
00:34:46 on technical AI safety and alignment.
00:34:49 There’s a lot of stuff happening.
00:34:51 But I think we need to do more than just make sure
00:34:54 little machines always do what their owners want.
00:34:58 That wouldn’t have prevented September 11th
00:35:00 if Mohamed Atta said, okay, autopilot,
00:35:03 please fly into World Trade Center.
00:35:06 And it’s like, okay.
00:35:08 That even happened in a different situation.
00:35:11 There was this depressed pilot named Andreas Lubitz, right?
00:35:15 who told his Germanwings passenger jet
00:35:17 to fly into the Alps.
00:35:19 He just told the computer to change the altitude
00:35:21 to a hundred meters or something like that.
00:35:23 And you know what the computer said?
00:35:25 Okay.
00:35:26 And it had the freaking topographical map of the Alps
00:35:29 in there, it had GPS, everything.
00:35:31 No one had bothered teaching it
00:35:33 even the basic kindergarten ethics of like,
00:35:35 no, we never want airplanes to fly into mountains
00:35:39 under any circumstances.
00:35:41 And so we have to think beyond just the technical issues
00:35:48 and think about how do we align in general incentives
00:35:51 on this planet for the greater good?
00:35:53 So starting with simple stuff like that,
00:35:55 every airplane that has a computer in it
00:35:58 should be taught whatever kindergarten ethics
00:36:00 that’s smart enough to understand.
00:36:02 Like, no, don’t fly into fixed objects
00:36:05 if the pilot tells you to do so.
00:36:07 Then go on autopilot mode.
00:36:10 Send an email to the cops and land
00:36:13 at the nearest airport, you know.
00:36:14 Any car with a forward facing camera
00:36:18 should just be programmed by the manufacturer
00:36:20 so that it will never accelerate into a human ever.
00:36:24 That would avoid things like the Nice attack
00:36:28 and many horrible terrorist vehicle attacks
00:36:31 where they deliberately did that, right?
00:36:33 This was not some sort of thing,
00:36:35 oh, you know, US and China, different views on,
00:36:38 no, there was not a single car manufacturer
00:36:41 in the world, right, who wanted the cars to do this.
00:36:44 They just hadn’t thought to do the alignment.
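As a toy sketch only, with made-up thresholds and placeholder functions rather than any real avionics interface, the kind of hard-coded rule being described might look like this:

from dataclasses import dataclass

@dataclass
class Command:
    target_altitude_m: float

def terrain_height_m(lat: float, lon: float) -> float:
    # Placeholder for the topographic map the autopilot already carries.
    return 2500.0

def alert_authorities() -> None:
    print("Unsafe command refused; authorities notified.")  # stand-in action

def vet_command(cmd: Command, lat: float, lon: float) -> Command:
    MIN_CLEARANCE_M = 300.0  # made-up safety margin
    floor = terrain_height_m(lat, lon) + MIN_CLEARANCE_M
    if cmd.target_altitude_m < floor:
        # Refuse to fly into a fixed object, no matter who asked;
        # hold a safe altitude and alert the ground instead.
        alert_authorities()
        return Command(target_altitude_m=floor)
    return cmd

# Example: a commanded descent to 100 m over mountainous terrain gets overridden.
print(vet_command(Command(100.0), 45.9, 7.7))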
00:36:45 And if you look more broadly at problems
00:36:48 that happen on this planet,
00:36:51 the vast majority have to do with poor alignment.
00:36:53 I mean, think about, let’s go back really big
00:36:57 because I know you’re so good at that.
00:36:59 Let’s go big, yeah.
00:36:59 Yeah, so long ago in evolution, we had these genes.
00:37:03 And they wanted to make copies of themselves.
00:37:06 That’s really all they cared about.
00:37:07 So some genes said, hey, I’m gonna build a brain
00:37:13 on this body I’m in so that I can get better
00:37:15 at making copies of myself.
00:37:17 And then they decided for their benefit
00:37:20 to get copied more, to align your brain’s incentives
00:37:23 with their incentives.
00:37:24 So it didn’t want you to starve to death.
00:37:29 So it gave you an incentive to eat
00:37:31 and it wanted you to make copies of the genes.
00:37:35 So it gave you incentive to fall in love
00:37:37 and do all sorts of naughty things
00:37:40 to make copies of itself, right?
00:37:44 So that was successful value alignment done by the genes.
00:37:47 They created something more intelligent than themselves,
00:37:50 but they made sure to try to align the values.
00:37:52 But then something went a little bit wrong
00:37:55 against the idea of what the genes wanted
00:37:58 because a lot of humans discovered,
00:38:00 hey, you know, yeah, we really like this business
00:38:03 about sex that the genes have made us enjoy,
00:38:06 but we don’t wanna have babies right now.
00:38:09 So we’re gonna hack the genes and use birth control.
00:38:13 And I really feel like drinking a Coca Cola right now,
00:38:18 but I don’t wanna get a potbelly,
00:38:20 so I’m gonna drink Diet Coke.
00:38:21 We have all these things we’ve figured out
00:38:24 because we’re smarter than the genes,
00:38:26 how we can actually subvert their intentions.
00:38:29 So it’s not surprising that we humans now,
00:38:33 when we are in the role of these genes,
00:38:34 creating other nonhuman entities with a lot of power,
00:38:37 have to face the same exact challenge.
00:38:39 How do we make other powerful entities
00:38:41 have incentives that are aligned with ours?
00:38:45 And so they won’t hack them.
00:38:47 Corporations, for example, right?
00:38:48 We humans decided to create corporations
00:38:51 because it can benefit us greatly.
00:38:53 Now all of a sudden there’s a supermarket.
00:38:55 I can go buy food there.
00:38:56 I don’t have to hunt.
00:38:57 Awesome, and then to make sure that this corporation
00:39:02 would do things that were good for us and not bad for us,
00:39:05 we created institutions to keep them in check.
00:39:08 Like if the local supermarket sells poisonous food,
00:39:12 then the owners of the supermarket
00:39:17 have to spend some years reflecting behind bars, right?
00:39:22 So we created incentives to align them.
00:39:25 But of course, just like we were able to see
00:39:27 through this thing and you develop birth control,
00:39:30 if you’re a powerful corporation,
00:39:31 you also have an incentive to try to hack the institutions
00:39:35 that are supposed to govern you.
00:39:36 Because you ultimately, as a corporation,
00:39:38 have an incentive to maximize your profit.
00:39:40 Just like you have an incentive
00:39:42 to maximize the enjoyment your brain has,
00:39:44 not for your genes.
00:39:46 So if they can figure out a way of bribing regulators,
00:39:50 then they’re gonna do that.
00:39:52 In the US, we kind of caught onto that
00:39:54 and made laws against corruption and bribery.
00:39:58 Then in the late 1800s, Teddy Roosevelt realized that,
00:40:03 no, we were still being kind of hacked
00:40:05 because the Massachusetts Railroad companies
00:40:07 had like a bigger budget than the state of Massachusetts
00:40:10 and they were doing a lot of very corrupt stuff.
00:40:13 So he did the whole trust busting thing
00:40:15 to try to align these other nonhuman entities,
00:40:18 the companies, again,
00:40:19 more with the incentives of Americans as a whole.
00:40:23 It’s not surprising, though,
00:40:24 that this is a battle you have to keep fighting.
00:40:26 Now we have even larger companies than we ever had before.
00:40:30 And of course, they’re gonna try to, again,
00:40:34 subvert the institutions.
00:40:37 Not because, I think people make a mistake
00:40:41 of getting all too caught up in
00:40:44 thinking about things in terms of good and evil.
00:40:46 Like arguing about whether corporations are good or evil,
00:40:50 or whether robots are good or evil.
00:40:53 A robot isn’t good or evil, it’s a tool.
00:40:57 And you can use it for great things
00:40:58 like robotic surgery or for bad things.
00:41:01 And a corporation also is a tool, of course.
00:41:04 And if you have good incentives to the corporation,
00:41:06 it’ll do great things,
00:41:07 like start a hospital or a grocery store.
00:41:10 If you have any bad incentives,
00:41:12 then it’s gonna start maybe marketing addictive drugs
00:41:15 to people and you’ll have an opioid epidemic, right?
00:41:18 It’s all about,
00:41:21 we should not make the mistake of getting into
00:41:23 some sort of fairytale, good, evil thing
00:41:25 about corporations or robots.
00:41:27 We should focus on putting the right incentives in place.
00:41:30 My optimistic vision is that if we can do that,
00:41:34 then we can really get good things.
00:41:35 We’re not doing so great with that right now,
00:41:38 either on AI, I think,
00:41:39 or on other intelligent nonhuman entities,
00:41:42 like big companies, right?
00:41:43 We just got a new secretary of defense
00:41:47 who’s gonna start now
00:41:51 in the Biden administration,
00:41:53 who was an active member of the board of Raytheon,
00:41:58 for example.
00:41:59 So, I have nothing against Raytheon.
00:42:04 I’m not a pacifist,
00:42:05 but there’s an obvious conflict of interest
00:42:08 if someone is in the job where they decide
00:42:12 who they’re gonna contract with.
00:42:14 And I think somehow we have,
00:42:16 maybe we need another Teddy Roosevelt to come along again
00:42:19 and say, hey, you know,
00:42:20 we want what’s good for all Americans,
00:42:23 and we need to go do some serious realigning again
00:42:26 of the incentives that we’re giving to these big companies.
00:42:30 And then we’re gonna be better off.
00:42:33 It seems that naturally with human beings,
00:42:35 just like you beautifully described the history
00:42:37 of this whole thing,
00:42:38 of it all started with the genes
00:42:40 and they’re probably pretty upset
00:42:42 by all the unintended consequences that happened since.
00:42:45 But it seems that it kind of works out,
00:42:48 like it’s in this collective intelligence
00:42:51 that emerges at the different levels.
00:42:53 It seems to find sometimes last minute
00:42:56 a way to realign the values or keep the values aligned.
00:43:00 It’s almost, it finds a way,
00:43:03 like different leaders, different humans pop up
00:43:07 all over the place that reset the system.
00:43:10 Do you want, I mean, do you have an explanation why that is?
00:43:15 Or is that just survivor bias?
00:43:17 And also is that different,
00:43:19 somehow fundamentally different than with AI systems
00:43:23 where you’re no longer dealing with something
00:43:26 that was a direct, maybe companies are the same,
00:43:30 a direct byproduct of the evolutionary process?
00:43:33 I think there is one thing which has changed.
00:43:36 That’s why I’m not all that optimistic.
00:43:40 That’s why I think there’s about a 50% chance
00:43:42 if we take the dumb route with artificial intelligence
00:43:46 that humanity will be extinct in this century.
00:43:51 First, just the big picture.
00:43:53 Yeah, companies need to have the right incentives.
00:43:57 Even governments, right?
00:43:59 We used to have governments,
00:44:02 usually there were just some king,
00:44:04 who was the king because his dad was the king.
00:44:07 And then there were some benefits
00:44:10 of having this powerful kingdom or empire of any sort
00:44:15 because then it could prevent a lot of local squabbles.
00:44:17 So at least everybody in that region
00:44:19 would stop warring against each other.
00:44:20 And the incentives of different cities in the kingdom
00:44:24 became more aligned, right?
00:44:25 That was the whole selling point.
00:44:27 Harari, Yuval Noah Harari, has a beautiful piece
00:44:31 on how empires were collaboration enablers.
00:44:35 And then we also, Harari says,
00:44:36 invented money for that reason
00:44:38 so we could have better alignment
00:44:40 and we could do trade even with people we didn’t know.
00:44:44 So this sort of stuff has been playing out
00:44:45 since time immemorial, right?
00:44:47 What’s changed is that it happens on ever larger scales,
00:44:51 right?
00:44:52 The technology keeps getting better
00:44:53 because science gets better.
00:44:54 So now we can communicate over larger distances,
00:44:57 transport things fast over larger distances.
00:44:59 And so the entities get ever bigger,
00:45:02 but our planet is not getting bigger anymore.
00:45:05 So in the past, you could have one experiment
00:45:08 that just totally screwed up like Easter Island,
00:45:11 where they actually managed to have such poor alignment
00:45:15 that when the people there went extinct,
00:45:17 there was no one else to come back and replace them, right?
00:45:21 If Elon Musk doesn’t get us to Mars
00:45:24 and then we go extinct on a global scale,
00:45:27 then we’re not coming back.
00:45:28 That’s the fundamental difference.
00:45:31 And that’s a mistake we don’t want to make, for that reason.
00:45:35 In the past, of course, history is full of fiascos, right?
00:45:39 But it was never the whole planet.
00:45:42 And then, okay, now there’s this nice uninhabited land here.
00:45:45 Some other people could move in and organize things better.
00:45:49 This is different.
00:45:50 The second thing, which is also different
00:45:52 is that technology gives us so much more empowerment, right?
00:45:58 Both to do good things and also to screw up.
00:46:00 In the stone age, even if you had someone
00:46:02 whose goals were really poorly aligned,
00:46:04 like maybe he was really pissed off
00:46:06 because his stone age girlfriend dumped him
00:46:08 and he just wanted to,
00:46:09 if he wanted to kill as many people as he could,
00:46:12 how many could he really take out with a rock and a stick
00:46:15 before he was overpowered, right?
00:46:17 Just handful, right?
00:46:18 Now, with today’s technology,
00:46:23 if we have an accidental nuclear war
00:46:25 between Russia and the US,
00:46:27 which we’ve almost had about a dozen times,
00:46:31 and then we have a nuclear winter,
00:46:32 it could take out seven billion people
00:46:34 or six billion people, we don’t know.
00:46:37 So the scale of the damage that we can do is bigger.
00:46:40 And there’s obviously no law of physics
00:46:45 that says that technology will never get powerful enough
00:46:48 that we could wipe out our species entirely.
00:46:51 That would just be fantasy to think
00:46:53 that science is somehow doomed
00:46:55 to not get more powerful than that, right?
00:46:57 And it’s not at all infeasible in our lifetime
00:47:00 that someone could design a designer pandemic
00:47:03 which spreads as easily as COVID,
00:47:04 but just basically kills everybody.
00:47:06 We already had smallpox.
00:47:08 It killed one third of everybody who got it.
00:47:13 What do you think of the, here’s an intuition,
00:47:15 maybe it’s completely naive
00:47:16 and this optimistic intuition I have,
00:47:18 which it seems, and maybe it’s a biased experience
00:47:22 that I have, but it seems like the most brilliant people
00:47:25 I’ve met in my life all are really like
00:47:31 fundamentally good human beings.
00:47:33 And not like naive good, like they really wanna do good
00:47:37 for the world in a way that, well, maybe is aligned
00:47:39 to my sense of what good means.
00:47:41 And so I have a sense that the people
00:47:47 that will be defining the very cutting edge of technology,
00:47:51 there’ll be much more of the ones that are doing good
00:47:53 versus the ones that are doing evil.
00:47:55 So the race, I’m optimistic on the,
00:48:00 us always like last minute coming up with a solution.
00:48:03 So if there’s an engineered pandemic
00:48:06 that has the capability to destroy
00:48:09 most of the human civilization,
00:48:11 it feels like to me either leading up to that before
00:48:15 or as it’s going on, there will be,
00:48:19 we’re able to rally the collective genius
00:48:22 of the human species.
00:48:23 I can tell by your smile that you’re
00:48:26 at least some percentage doubtful,
00:48:30 but could that be a fundamental law of human nature?
00:48:35 That evolution only creates, like karma is beneficial,
00:48:40 good is beneficial, and therefore we’ll be all right.
00:48:44 I hope you’re right.
00:48:46 I would really love it if you’re right,
00:48:48 if there’s some sort of law of nature that says
00:48:51 that we always get lucky in the last second
00:48:53 with karma, but I prefer not playing it so close
00:49:01 and gambling on that.
00:49:03 And I think, in fact, I think it can be dangerous
00:49:06 to have too strong faith in that
00:49:08 because it makes us complacent.
00:49:10 Like if someone tells you, you never have to worry
00:49:12 about your house burning down,
00:49:13 then you’re not gonna put in a smoke detector
00:49:15 because why would you need to?
00:49:17 Even if it’s sometimes very simple precautions,
00:49:19 we don’t take them.
00:49:20 If you’re like, oh, the government is gonna take care
00:49:22 of everything for us, I can always trust my politicians.
00:49:24 We abdicate our own responsibility.
00:49:27 I think it’s a healthier attitude to say,
00:49:29 yeah, maybe things will work out.
00:49:30 Maybe I’m actually gonna have to myself step up
00:49:33 and take responsibility.
00:49:37 And the stakes are so huge.
00:49:38 I mean, if we do this right, we can develop
00:49:41 all this ever more powerful technology
00:49:43 and cure all diseases and create a future
00:49:46 where humanity is healthy and wealthy
00:49:48 for not just the next election cycle,
00:49:50 but like billions of years throughout our universe.
00:49:52 That’s really worth working hard for
00:49:54 and not just sitting and hoping
00:49:58 for some sort of fairytale karma.
00:49:59 Well, I just mean, so you’re absolutely right.
00:50:01 From the perspective of the individual,
00:50:03 like for me, the primary thing should be
00:50:05 to take responsibility and to build the solutions
00:50:09 that your skillset allows.
00:50:11 Yeah, which is a lot.
00:50:12 I think we underestimate often very much
00:50:14 how much good we can do.
00:50:16 If you or anyone listening to this
00:50:19 is completely confident that our government
00:50:23 would do a perfect job on handling any future crisis
00:50:25 with engineered pandemics or future AI,
00:50:29 then I’d encourage you to reflect a bit on what actually happened in 2020.
00:50:36 Do you feel that the government by and large
00:50:39 around the world has handled this flawlessly?
00:50:42 That’s a really sad and disappointing reality
00:50:45 that hopefully is a wake up call for everybody.
00:50:48 For the scientists, for the engineers,
00:50:52 for the researchers in AI especially,
00:50:54 it was disappointing to see how inefficient we were
00:51:01 at collecting the right amount of data
00:51:04 in a privacy preserving way and spreading that data
00:51:07 and utilizing that data to make decisions,
00:51:09 all that kind of stuff.
00:51:10 Yeah, I think when something bad happens to me,
00:51:13 I made myself a promise many years ago
00:51:17 that I would not be a whiner.
00:51:21 So when something bad happens to me,
00:51:23 of course it’s a process of disappointment,
00:51:27 but then I try to focus on what did I learn from this
00:51:30 that can make me a better person in the future.
00:51:32 And there’s usually something to be learned when I fail.
00:51:35 And I think we should all ask ourselves,
00:51:38 what can we learn from the pandemic
00:51:41 about how we can do better in the future?
00:51:43 And you mentioned there a really good lesson.
00:51:46 We were not as resilient as we thought we were
00:51:50 and we were not as prepared maybe as we wish we were.
00:51:53 You can even see very stark contrast around the planet.
00:51:57 South Korea, they have over 50 million people.
00:52:01 Do you know how many deaths they have from COVID
00:52:03 last time I checked?
00:52:05 No.
00:52:06 It’s about 500.
00:52:08 Why is that?
00:52:10 Well, the short answer is that they had prepared.
00:52:16 They were incredibly quick,
00:52:19 incredibly quick to get on it
00:52:21 with very rapid testing and contact tracing and so on,
00:52:25 which is why they never had more cases
00:52:28 than they could contact trace effectively, right?
00:52:30 They never even had to have the kind of big lockdowns
00:52:32 we had in the West.
00:52:33 But the deeper answer is not
00:52:36 that the Koreans are just somehow better people.
00:52:39 The reason I think they were better prepared
00:52:40 was because they had already taken a pretty bad hit
00:52:45 from SARS,
00:52:47 which never became a full pandemic,
00:52:49 something like 17 years ago, I think.
00:52:52 So it was kind of fresh memory
00:52:53 that we need to be prepared for pandemics.
00:52:56 So they were, right?
00:52:59 So maybe this is a lesson here
00:53:01 for all of us to draw from COVID
00:53:03 that rather than just wait for the next pandemic
00:53:06 or the next problem with AI getting out of control
00:53:09 or anything else,
00:53:11 maybe we should just actually set aside
00:53:14 a tiny fraction of our GDP
00:53:17 to have people very systematically
00:53:19 do some horizon scanning and say,
00:53:20 okay, what are the things that could go wrong?
00:53:23 And let’s duke it out and see
00:53:24 which are the more likely ones
00:53:25 and which are the ones that are actually actionable
00:53:28 and then be prepared.
00:53:29 So one of my observations, as the one little ant slash human
00:53:36 that I am, one of my disappointments,
00:53:38 is the political division over information
00:53:44 that I observed this year:
00:53:47 it seemed the discussion was less about
00:53:54 what happened, and understanding
00:53:57 what happened deeply, and more about
00:54:00 there being different truths out there.
00:54:04 And it’s like an argument,
00:54:05 my truth is better than your truth.
00:54:07 And it’s like red versus blue or different.
00:54:10 It was like this ridiculous discourse
00:54:13 that doesn’t seem to get at any kind of notion of the truth.
00:54:16 It’s not like some kind of scientific process.
00:54:19 Even science got politicized in ways
00:54:21 that’s very heartbreaking to me.
00:54:24 You have an exciting project on the AI front
00:54:28 of trying to rethink one of these systems.
00:54:32 You mentioned corporations.
00:54:34 One of the other collective intelligence systems
00:54:37 that has emerged through all of this is social networks,
00:54:40 and just the spread
00:54:43 of information on the internet,
00:54:46 our ability to share that information.
00:54:48 There’s all different kinds of news sources and so on.
00:54:50 And so you said, from first principles,
00:54:53 let’s rethink how we think about the news,
00:54:57 how we think about information.
00:54:59 Can you talk about this amazing effort
00:55:02 that you’re undertaking?
00:55:03 Oh, I’d love to.
00:55:04 This has been my big COVID project
00:55:06 and nights and weekends on ever since the lockdown.
00:55:11 To segue into this actually,
00:55:13 let me come back to what you said earlier
00:55:14 that you had this hope that in your experience,
00:55:17 people who you felt were very talented
00:55:18 were often idealistic and wanted to do good.
00:55:21 Frankly, I feel the same about all people by and large,
00:55:25 there are always exceptions,
00:55:26 but I think the vast majority of everybody,
00:55:28 regardless of education and whatnot,
00:55:30 really are fundamentally good, right?
00:55:33 So how can it be that people still do so much nasty stuff?
00:55:37 I think it has everything to do with this,
00:55:40 with the information that we’re given.
00:55:41 Yes.
00:55:42 If you go to Sweden 500 years ago
00:55:46 and you start telling all the farmers
00:55:47 that those Danes in Denmark
00:55:49 are such terrible people, and we have to invade them
00:55:52 because they’ve done all these terrible things
00:55:55 that you can’t fact check yourself,
00:55:56 a lot of Swedes did just that, right?
00:55:59 And we’re seeing so much of this today in the world,
00:56:06 both geopolitically, where we are told that China is bad
00:56:11 and Russia is bad and Venezuela is bad,
00:56:13 and people in those countries are often told
00:56:16 that we are bad.
00:56:17 And we also see it at a micro level where people are told
00:56:21 that, oh, those who voted for the other party are bad people.
00:56:24 It’s not just an intellectual disagreement,
00:56:26 but they’re bad people and we’re getting ever more divided.
00:56:32 So how do you reconcile this with this intrinsic goodness
00:56:39 in people?
00:56:39 I think it’s pretty obvious that it has, again,
00:56:41 to do with the information that we’re fed and given, right?
00:56:46 We evolved to live in small groups
00:56:49 where you might know 30 people in total, right?
00:56:52 So you then had a system that was quite good
00:56:55 for assessing who you could trust and who you could not.
00:56:57 And if someone told you that Joe there is a jerk,
00:57:02 but you had interacted with him yourself
00:57:05 and seen him in action,
00:57:06 and you would quickly realize maybe
00:57:08 that that’s actually not quite accurate, right?
00:57:11 But now that most people on the planet
00:57:13 are people we’ve never met,
00:57:15 it’s very important that we have a way
00:57:17 of trusting the information we’re given.
00:57:19 And so, okay, so where does the news project come in?
00:57:23 Well, throughout history, you can go read Machiavelli,
00:57:26 from the 1400s, and you’ll see how already then
00:57:28 they were busy manipulating people
00:57:30 with propaganda and stuff.
00:57:31 Propaganda is not new at all.
00:57:35 And the incentives to manipulate people
00:57:37 are just not new at all.
00:57:40 What is it that’s new?
00:57:41 What’s new is machine learning meets propaganda.
00:57:44 That’s what’s new.
00:57:45 That’s why this has gotten so much worse.
00:57:47 Some people like to blame certain individuals,
00:57:50 like in my liberal university bubble,
00:57:53 many people blame Donald Trump and say it was his fault.
00:57:56 I see it differently.
00:57:59 I think Donald Trump just had this extreme skill
00:58:03 at playing this game in the machine learning algorithm age.
00:58:07 A game he couldn’t have played 10 years ago.
00:58:09 So what’s changed?
00:58:10 What’s changed is, well, Facebook and Google
00:58:13 and other companies, and I’m not badmouthing them,
00:58:16 I have a lot of friends who work for these companies,
00:58:18 good people, they deployed machine learning algorithms
00:58:22 just to increase their profit a little bit,
00:58:24 to just maximize the time people spent watching ads.
00:58:28 And they had totally underestimated
00:58:30 how effective they were gonna be.
00:58:32 This was, again, the black box, non intelligible intelligence.
00:58:37 They just noticed, oh, we’re getting more ad revenue.
00:58:39 Great.
00:58:40 It took a long time until they even realized why and how
00:58:42 and how damaging this was for society.
00:58:45 Because of course, what the machine learning figured out
00:58:47 was that the by far most effective way of gluing you
00:58:52 to your little rectangle was to show you things
00:58:55 that triggered strong emotions, anger, resentment, et cetera,
00:58:59 and whether it was true or not didn’t really matter.
00:59:04 It was also easier to find stories that weren’t true.
00:59:07 If you aren’t limited to the truth,
00:59:09 you have a lot more to show people.
00:59:10 Truth is a very limiting factor.
00:59:12 And before long, we got these amazing filter bubbles
00:59:16 on a scale we had never seen before.
00:59:18 Couple that to the fact that the online news media
00:59:24 were so effective
00:59:27 that they killed a lot of print
00:59:30 journalism.
00:59:30 There’s less than half as many journalists
00:59:34 now in America, I believe, as there was a generation ago.
00:59:39 You just couldn’t compete with the online advertising.
00:59:42 So all of a sudden, most people are not
00:59:47 even reading newspapers anymore.
00:59:48 They get their news from social media.
00:59:51 And most people only get news in their little bubble.
00:59:55 So along come people like Donald Trump,
00:59:58 among the first successful politicians
01:00:01 to figure out how to really play this new game
01:00:04 and become very, very influential.
01:00:05 But with Donald Trump it was simple:
01:00:09 he took advantage of it.
01:00:11 He didn’t create the fundamental conditions;
01:00:14 those were created by machine learning taking over the news media.
01:00:19 So this is what motivated my little COVID project here.
01:00:22 So I said before, machine learning and tech in general
01:00:27 is not evil, but it’s also not good.
01:00:29 It’s just a tool that you can use
01:00:31 for good things or bad things.
01:00:32 And as it happens, machine learning and news
01:00:36 was mainly used by the big players, big tech,
01:00:39 to manipulate people into watching as many ads as possible,
01:00:43 which had this unintended consequence of really screwing
01:00:45 up our democracy and fragmenting it into filter bubbles.
01:00:50 So I thought, well, machine learning algorithms
01:00:53 are basically free.
01:00:54 They can run on your smartphone for free also
01:00:56 if someone gives them away to you, right?
01:00:57 There’s no reason why they only have to help the big guy
01:01:01 to manipulate the little guy.
01:01:02 They can just as well help the little guy
01:01:05 to see through all the manipulation attempts
01:01:07 from the big guy.
01:01:08 So this project is called,
01:01:10 you can go to improvethenews.org.
01:01:12 The first thing we’ve built is this little news aggregator.
01:01:16 Looks a bit like Google News,
01:01:17 except it has these sliders on it to help you break out
01:01:20 of your filter bubble.
01:01:21 So you can click
01:01:24 and go to your favorite topic,
01:01:27 and then just slide the left-right slider
01:01:31 all the way over to the left.
01:01:32 There’s two sliders, right?
01:01:33 Yeah, there’s the one, the most obvious one
01:01:36 is the one that has left, right labeled on it.
01:01:38 You go to the left, you get one set of articles,
01:01:40 you go to the right, you see a very different truth
01:01:43 appearing.
01:01:44 Oh, that’s literally left and right on the political spectrum.
01:01:47 On the political spectrum.
01:01:48 So if you’re reading about immigration, for example,
01:01:52 it’s very, very noticeable.
01:01:55 And I think step one always,
01:01:57 if you wanna not get manipulated is just to be able
01:02:00 to recognize the techniques people use.
01:02:02 So it’s very helpful to just see how they spin things
01:02:05 on the two sides.
01:02:08 I think many people are under the misconception
01:02:11 that the main problem is fake news.
01:02:14 It’s not.
01:02:14 I had an amazing team of MIT students
01:02:17 where we did an academic project to use machine learning
01:02:20 to detect the main kinds of bias over the summer.
01:02:23 And yes, of course, sometimes there’s fake news
01:02:25 where someone just claims something that’s false, right?
01:02:30 Like, oh, Hillary Clinton just got divorced or something.
01:02:32 But what we see much more of is actually just omissions.
01:02:37 There are some stories which just won’t be
01:02:41 mentioned by the left or the right, because it doesn’t suit
01:02:45 their agenda.
01:02:46 And then they’ll cover other ones very, very heavily.
01:02:49 So for example, we’ve had a number of stories
01:02:54 about the Trump family’s financial dealings.
01:02:59 And then there’s been a bunch of stories
01:03:01 about the Biden family’s, Hunter Biden’s financial dealings.
01:03:05 Surprise, surprise, they don’t get equal coverage
01:03:07 on the left and the right.
01:03:08 One side loves to cover the Biden, Hunter Biden’s stuff,
01:03:13 and one side loves to cover the Trump.
01:03:15 You can never guess which is which, right?
01:03:17 But the great news is if you’re a normal American citizen
01:03:21 and you dislike corruption in all its forms,
01:03:24 then slide, slide, you can just look at both sides
01:03:28 and you’ll see all those political corruption stories.
01:03:32 It’s really liberating to just take in the both sides,
01:03:37 the spin on both sides.
01:03:39 It somehow unlocks your mind to think on your own,
01:03:42 to realize, I don’t know, it’s the same thing
01:03:47 that was useful, right, in Soviet Union times,
01:03:49 when everybody was much more aware
01:03:54 that they were surrounded by propaganda, right?
01:03:57 That is so interesting what you’re saying, actually.
01:04:00 So Noam Chomsky, who used to be our MIT colleague,
01:04:04 once said that propaganda is to democracy
01:04:07 what violence is to totalitarianism.
01:04:11 And what he means by that is if you have
01:04:15 a really totalitarian government,
01:04:16 you don’t need propaganda.
01:04:19 People will do what you want them to do anyway,
01:04:22 but out of fear, right?
01:04:24 But otherwise, you need propaganda.
01:04:28 So I would say actually that the propaganda
01:04:29 is much higher quality in democracies,
01:04:32 much more believable.
01:04:34 And it’s really, it’s really striking.
01:04:36 When I talk to colleagues, science colleagues
01:04:39 like from Russia and China and so on,
01:04:42 I notice they are actually much more aware
01:04:45 of the propaganda in their own media
01:04:47 than many of my American colleagues are
01:04:48 about the propaganda in Western media.
01:04:51 That’s brilliant.
01:04:51 That means the propaganda in the Western media
01:04:53 is just better.
01:04:54 Yes.
01:04:55 That’s so brilliant.
01:04:56 Everything’s better in the West, even the propaganda.
01:04:58 But once you realize that,
01:05:07 you realize there’s also something very optimistic there
01:05:09 that you can do about it, right?
01:05:10 Because first of all, omissions,
01:05:14 as long as there’s no outright censorship,
01:05:16 you can just look at both sides
01:05:19 and pretty quickly piece together
01:05:22 a much more accurate idea of what’s actually going on, right?
01:05:26 And develop a natural skepticism too.
01:05:28 Yeah.
01:05:28 Just an analytical scientific mind
01:05:31 about the way you’re taking the information.
01:05:32 Yeah.
01:05:33 And I think, I have to say,
01:05:35 sometimes I feel that some of us in the academic bubble
01:05:38 are too arrogant about this and somehow think,
01:05:41 oh, it’s just the people who aren’t as educated
01:05:44 as us who get fooled,
01:05:45 when we are often just as gullible;
01:05:48 we read only our own media and don’t see through things.
01:05:52 Anyone who looks at both sides like this
01:05:53 and compares a little will immediately start noticing
01:05:56 the shenanigans being pulled.
01:05:58 And part of what I’m trying to do with this app
01:06:01 relates to how big tech has to some extent
01:06:05 tried to blame the individual for being manipulated,
01:06:08 much like big tobacco tried to blame individuals
01:06:12 entirely for smoking.
01:06:13 And then later on, our government stepped up and said,
01:06:16 actually, you can’t just blame little kids
01:06:19 for starting to smoke.
01:06:20 We have to have more responsible advertising
01:06:22 and this and that.
01:06:23 I think it’s a bit the same here.
01:06:24 It’s very convenient for big tech to say
01:06:27 it’s just people who are so dumb they get fooled.
01:06:32 The blame usually comes in saying,
01:06:34 oh, it’s just human psychology.
01:06:36 People just wanna hear what they already believe.
01:06:38 But professor David Rand at MIT actually partly debunked that
01:06:43 with a really nice study showing that people
01:06:45 tend to be interested in hearing things
01:06:47 that go against what they believe,
01:06:49 if it’s presented in a respectful way.
01:06:52 Suppose, for example, that you have a company
01:06:57 and you’re just about to launch this project
01:06:59 and you’re convinced it’s gonna work.
01:07:00 And someone says, you know, Lex,
01:07:03 I hate to tell you this, but this is gonna fail.
01:07:05 And here’s why.
01:07:06 Would you be like, shut up, I don’t wanna hear it.
01:07:08 La, la, la, la, la, la, la, la, la.
01:07:10 Would you?
01:07:11 You would be interested, right?
01:07:13 And also if you’re on an airplane,
01:07:16 back in the pre COVID times,
01:07:19 and the guy next to you
01:07:20 is clearly from the opposite side of the political spectrum,
01:07:24 but is very respectful and polite to you.
01:07:26 Wouldn’t you be kind of interested to hear a bit about
01:07:28 how he or she thinks about things?
01:07:31 Of course.
01:07:32 But it’s not so easy to find
01:07:35 respectful disagreement now,
01:07:36 because, for example, if you are a Democrat
01:07:40 and you’re like, oh, I wanna see something
01:07:41 on the other side,
01:07:42 so you just go to Breitbart.com,
01:07:45 then after the first 10 seconds,
01:07:46 you feel deeply insulted by something,
01:07:49 and it’s not gonna work.
01:07:52 Or if you take someone who votes Republican
01:07:55 and they go to something on the left,
01:07:57 then they just get very offended very quickly
01:08:00 by them having put a deliberately ugly picture
01:08:02 of Donald Trump on the front page or something.
01:08:04 It doesn’t really work.
01:08:05 So this news aggregator also has this nuance slider,
01:08:09 which you can pull to the right
01:08:11 to make it easier to get exposed
01:08:13 to more academic-style
01:08:16 or more respectful
01:08:17 portrayals of different views.
01:08:19 And finally, the one kind of bias
01:08:22 I think people are mostly aware of is the left, right,
01:08:25 because it’s so obvious,
01:08:26 because both left and right are very powerful here, right?
01:08:30 Both of them have well funded TV stations and newspapers,
01:08:33 and it’s kind of hard to miss.
01:08:35 But there’s another one, the establishment slider,
01:08:39 which is also really fun.
01:08:41 I love to play with it.
01:08:42 And that’s more about corruption.
01:08:44 Yeah, yeah.
01:08:45 I love that one. Yes.
01:08:47 Because if you have a society
01:08:53 where almost all the powerful entities
01:08:57 want you to believe a certain thing,
01:08:59 that’s what you’re gonna read in both the big media,
01:09:01 mainstream media on the left and on the right, of course.
01:09:04 And the powerful companies can push back very hard,
01:09:08 like tobacco companies push back very hard
01:09:10 back in the day when some newspapers
01:09:12 started writing articles about tobacco being dangerous,
01:09:15 so that it was hard to get a lot of coverage
01:09:17 about it initially.
01:09:18 And also if you look geopolitically, right,
01:09:20 of course, in any country, when you read their media,
01:09:23 you’re mainly gonna be reading a lot of articles
01:09:24 about how our country is the good guy
01:09:27 and the other countries are the bad guys, right?
01:09:30 So if you wanna have a really more nuanced understanding,
01:09:33 like the British
01:09:37 used to be told that the French were the bad guys
01:09:38 and the French used to be told
01:09:39 that the British were the bad guys.
01:09:41 Now they visit each other’s countries a lot
01:09:45 and have a much more nuanced understanding.
01:09:47 I don’t think there’s gonna be any more wars
01:09:48 between France and Germany.
01:09:50 But on the geopolitical scale,
01:09:53 it’s just as much as ever, you know,
01:09:54 big Cold War, now US, China, and so on.
01:09:57 And if you wanna get a more nuanced understanding
01:10:01 of what’s happening geopolitically,
01:10:03 then it’s really fun to look at this establishment slider
01:10:05 because it turns out there are tons of little newspapers,
01:10:09 both on the left and on the right,
01:10:11 who sometimes challenge establishment and say,
01:10:14 you know, maybe we shouldn’t actually invade Iraq right now.
01:10:17 Maybe this weapons of mass destruction thing is BS.
01:10:20 If you look at the journalism research afterwards,
01:10:23 you can actually see that quite clearly.
01:10:25 Both CNN and Fox were very pro.
01:10:29 Let’s get rid of Saddam.
01:10:30 There are weapons of mass destruction.
01:10:32 Then there were a lot of smaller newspapers.
01:10:34 They were like, wait a minute,
01:10:36 this evidence seems a bit sketchy and maybe we…
01:10:40 But of course they were so hard to find.
01:10:42 Most people didn’t even know they existed, right?
01:10:44 Yet it would have been better for American national security
01:10:47 if those voices had also come up.
01:10:50 I think it harmed America’s national security actually
01:10:52 that we invaded Iraq.
01:10:53 And arguably there’s a lot more interest
01:10:55 in that kind of thinking too, from those small sources.
01:11:00 So like when you say big,
01:11:02 it’s more about kind of the reach of the broadcast,
01:11:07 but it’s not big in terms of the interest.
01:11:12 I think there’s a lot of interest
01:11:14 in that kind of anti establishment
01:11:16 or like skepticism towards, you know,
01:11:18 out of the box thinking.
01:11:20 There’s a lot of interest in that kind of thing.
01:11:22 Do you see this news project or something like it
01:11:26 basically taking over the world
01:11:30 as the main way we consume information?
01:11:32 Like how do we get there?
01:11:35 Like how do we, you know?
01:11:37 So, okay, the idea is brilliant.
01:11:39 It’s a, you’re calling it your little project in 2020,
01:11:44 but how does that become the new way we consume information?
01:11:48 I hope, first of all, just to plant a little seed there
01:11:51 because normally the big barrier of doing anything in media
01:11:55 is you need a ton of money, but this costs no money at all.
01:11:59 I’ve just been paying myself.
01:12:00 You pay a tiny amount of money each month to Amazon
01:12:03 to run the thing in their cloud.
01:12:04 There will never be any ads.
01:12:06 The point is not to make any money off of it.
01:12:09 And we just train machine learning algorithms
01:12:11 to classify the articles and stuff.
01:12:13 So it just kind of runs by itself.
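(A minimal sketch of what training machine learning algorithms to classify the articles could look like, using off-the-shelf scikit-learn components; the example headlines, the left/right labels, and the model choice are illustrative assumptions, not the actual improvethenews.org pipeline.)

```python
# Minimal sketch of a news-bias classifier under an assumed left/right labeling scheme.
# The training examples and labels below are placeholders for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

articles = [
    "Senator proposes sweeping new immigration restrictions amid border concerns",
    "Activists rally for expanded asylum protections and humane border policy",
]
labels = ["right", "left"]  # hypothetical outlet-level bias labels

# TF-IDF features feeding a simple logistic-regression classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(articles, labels)

# The predicted label or probability is the kind of score a left-right slider could filter on.
print(model.predict(["New border security bill sparks heated debate"]))
print(model.predict_proba(["New border security bill sparks heated debate"]))
```

In a real aggregator, such a score could sit alongside other learned dimensions, like the establishment and nuance sliders described later in the conversation.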
01:12:14 So if it actually gets good enough at some point
01:12:17 that it starts catching on, it could scale.
01:12:20 And if other people carbon copy it
01:12:23 and make other versions that are better,
01:12:24 that’s the more the merrier.
01:12:28 I think there’s a real opportunity for machine learning
01:12:32 to empower the individual against the powerful players.
01:12:39 As I said in the beginning here, it’s
01:12:41 been mostly the other way around so far,
01:12:43 that the big players have the AI and then they tell people,
01:12:46 this is the truth, this is how it is.
01:12:49 But it can just as well go the other way around.
01:12:52 And when the internet was born, actually, a lot of people
01:12:54 had this hope that maybe this will be
01:12:56 a great thing for democracy, make it easier
01:12:58 to find out about things.
01:12:59 And maybe machine learning and things like this
01:13:02 can actually help again.
01:13:03 And I have to say, I think it’s more important than ever now
01:13:07 because this is very linked also to the whole future of life
01:13:12 as we discussed earlier.
01:13:13 We’re getting this ever more powerful tech.
01:13:17 Frankly, it’s pretty clear if you look
01:13:19 on the one, two, or three generation timescale
01:13:21 that there are only two ways this can end geopolitically.
01:13:24 Either it ends great for all humanity
01:13:27 or it ends terribly for all of us.
01:13:31 There’s really no in between.
01:13:33 And we’re all stuck in this together, because technology
01:13:37 knows no borders.
01:13:39 And you can’t have people fighting
01:13:42 when the weapons just keep getting ever more
01:13:44 powerful indefinitely.
01:13:47 Eventually, the luck runs out.
01:13:50 And right now we have, I love America,
01:13:55 but the fact of the matter is that what’s good for America
01:13:59 is not, in the long term, opposed to what’s
01:14:02 good for other countries.
01:14:04 It would be if this was some sort of zero sum game
01:14:07 like it was thousands of years ago when the only way one
01:14:10 country could get more resources was
01:14:13 to take land from other countries
01:14:14 because that was basically the resource.
01:14:17 Look at the map of Europe.
01:14:18 Some countries kept getting bigger and smaller,
01:14:21 endless wars.
01:14:23 But then since 1945, there hasn’t been any war
01:14:26 in Western Europe.
01:14:27 And they all got way richer because of tech.
01:14:29 So the optimistic outcome is that the big winner
01:14:34 in this century is going to be America and China and Russia
01:14:38 and everybody else because technology just makes
01:14:40 us all healthier and wealthier.
01:14:41 And we just find some way of keeping the peace
01:14:44 on this planet.
01:14:46 But I think, unfortunately, there
01:14:48 are some pretty powerful forces right now
01:14:50 that are pushing in exactly the opposite direction
01:14:52 and trying to demonize other countries, which just makes
01:14:55 it more likely that this ever more powerful tech we’re
01:14:58 building is going to be used in disastrous ways.
01:15:02 Yeah, for aggression versus cooperation,
01:15:04 that kind of thing.
01:15:05 Yeah, even look at just military AI now.
01:15:09 It was so awesome to see these dancing robots.
01:15:12 I loved it.
01:15:14 But one of the biggest growth areas in robotics
01:15:17 now is, of course, autonomous weapons.
01:15:19 And 2020 was like the best marketing year
01:15:23 ever for autonomous weapons.
01:15:24 Because in both Libya, in its civil war,
01:15:27 and in Nagorno Karabakh, they made the decisive difference.
01:15:34 And everybody else is watching this.
01:15:36 Oh, yeah, we want to build autonomous weapons, too.
01:15:38 In Libya, you had, on one hand, our ally,
01:15:45 the United Arab Emirates that were flying
01:15:47 their autonomous weapons that they bought from China,
01:15:50 bombing Libyans.
01:15:51 And on the other side, you had our other ally, Turkey,
01:15:54 flying their drones.
01:15:57 And none of these other countries
01:16:00 had any skin in the game.
01:16:01 And of course, it was the Libyans who really got screwed.
01:16:04 In Nagorno Karabakh, you had actually, again,
01:16:09 Turkey sending drones built by this company that
01:16:12 was actually founded by a guy who went to MIT AeroAstro.
01:16:17 Do you know that?
01:16:17 No.
01:16:18 Bayraktar.
01:16:18 Yeah.
01:16:19 So MIT ultimately has
01:16:21 a direct responsibility for this.
01:16:22 And a lot of civilians were killed there.
01:16:25 So because it was militarily so effective,
01:16:29 now suddenly there’s a huge push.
01:16:31 Oh, yeah, yeah, let’s go build ever more autonomy
01:16:35 into these weapons, and it’s going to be great.
01:16:39 And I think, actually, people who
01:16:44 are obsessed about some sort of future Terminator scenario
01:16:47 right now should start focusing on the fact
01:16:51 that we have two much more urgent threats happening
01:16:54 from machine learning.
01:16:54 One of them is the whole destruction of democracy
01:16:57 that we’ve talked about now, where
01:17:01 our flow of information is being manipulated
01:17:03 by machine learning.
01:17:04 And the other one is that right now,
01:17:06 this is the year when the big, out-of-control
01:17:10 arms race in lethal autonomous weapons is either going to start
01:17:12 or it’s going to stop.
01:17:14 So you have a sense that there is like 2020
01:17:18 was an instrumental catalyst for the autonomous weapons race.
01:17:24 Yeah, because it was the first year when they proved
01:17:26 decisive in the battlefield.
01:17:28 And these ones are still not fully autonomous, mostly.
01:17:31 They’re remote controlled, right?
01:17:32 But we could very quickly make things
01:17:38 about the size and cost of a smartphone, into which you just put
01:17:43 the GPS coordinates or the face of the one
01:17:45 you want to kill, a skin color or whatever,
01:17:47 and it flies away and does it.
01:17:48 And the real good reason why the US and all
01:17:53 the other superpowers should put the kibosh on this
01:17:57 is the same reason we decided to put the kibosh on bioweapons.
01:18:01 So we gave the Future of Life Award
01:18:05 that we can talk more about later, to Matthew Meselson
01:18:07 from Harvard, for convincing
01:18:08 Nixon to ban bioweapons.
01:18:10 And I asked him, how did you do it?
01:18:13 And he was like, well, I just said, look,
01:18:16 we don’t want there to be a $500 weapon of mass destruction
01:18:20 that all our enemies can afford, even nonstate actors.
01:18:26 And Nixon was like, good point.
01:18:32 It’s in America’s interest that the powerful weapons are all
01:18:34 really expensive, so only we can afford them,
01:18:37 or maybe some more stable adversaries, right?
01:18:41 Nuclear weapons are like that.
01:18:42 But bioweapons were not like that.
01:18:44 That’s why we banned them.
01:18:46 And that’s why you never hear about them now.
01:18:48 That’s why we love biology.
01:18:50 So you have a sense that it’s possible for the big power
01:18:55 houses in terms of the big nations in the world
01:18:58 to agree that autonomous weapons is not a race we want to be on,
01:19:02 that it doesn’t end well.
01:19:03 Yeah, because we know it’s just going
01:19:05 to end in mass proliferation.
01:19:06 And every terrorist everywhere is
01:19:08 going to have these super cheap weapons
01:19:10 that they will use against us.
01:19:13 And our politicians have to constantly worry
01:19:15 about being assassinated every time they go outdoors
01:19:18 by some anonymous little mini drone.
01:19:21 We don’t want that.
01:19:21 And even if the US and China and everyone else
01:19:25 could just agree that you can only
01:19:27 build these weapons if they cost at least $10 million,
01:19:31 that would be a huge win for the superpowers
01:19:34 and, frankly, for everybody.
01:19:38 And people often push back and say, well, it’s
01:19:41 so hard to prevent cheating.
01:19:43 But hey, you could say the same about bioweapons.
01:19:45 Take any of your MIT colleagues in biology.
01:19:49 Of course, they could build some nasty bioweapon
01:19:52 if they really wanted to.
01:19:53 But first of all, they don’t want to
01:19:55 because they think it’s disgusting because of the stigma.
01:19:57 And second, even if there’s some sort of nutcase and want to,
01:20:02 it’s very likely that some of their grad students
01:20:04 or someone would rat them out because everyone else thinks
01:20:06 it’s so disgusting.
01:20:08 And in fact, we now know there was even a fair bit of cheating
01:20:11 on the bioweapons ban.
01:20:13 But no countries used them because it was so stigmatized
01:20:17 that it just wasn’t worth revealing that they had cheated.
01:20:22 You talk about drones, but you kind of
01:20:24 think of drones as a remote operation.
01:20:28 Which they are, mostly, still.
01:20:30 But you’re not taking the next intellectual step
01:20:34 of where this goes.
01:20:36 You’re kind of saying the problem with drones
01:20:38 is that you’re removing yourself from direct violence.
01:20:42 Therefore, you’re not able to sort of maintain
01:20:44 the common humanity required to make
01:20:46 the proper decisions strategically.
01:20:48 But that’s the criticism as opposed to like,
01:20:51 if this is automated, and just exactly as you said,
01:20:55 if you automate it and there’s a race,
01:20:58 then the technology’s gonna get better and better and better
01:21:01 which means getting cheaper and cheaper and cheaper.
01:21:03 And unlike, perhaps, nuclear weapons
01:21:06 which is connected to resources in a way,
01:21:10 like it’s hard to engineer, yeah.
01:21:13 It feels like there’s too much overlap
01:21:17 between the tech industry and autonomous weapons
01:21:20 to where you could have smartphone type of cheapness.
01:21:24 If you look at drones, for $1,000,
01:21:29 you can have an incredible system
01:21:30 that’s able to maintain flight autonomously for you
01:21:34 and take pictures and stuff.
01:21:36 You could see that going into the autonomous weapons space
01:21:39 But why is that not thought about
01:21:43 or discussed enough in public, do you think?
01:21:45 You see those dancing Boston Dynamics robots
01:21:48 and everybody has this kind of,
01:21:52 as if this is like a far future.
01:21:55 They have this fear like, oh, this’ll be Terminator
01:21:58 in like some, I don’t know, unspecified 20, 30, 40 years.
01:22:03 And they don’t think about, well, this is like
01:22:05 some much less dramatic version of that
01:22:09 is actually happening now.
01:22:11 It’s not gonna be legged, it’s not gonna be dancing,
01:22:14 but it already has the capability
01:22:17 to use artificial intelligence to kill humans.
01:22:20 Yeah, the Boston Dynamics legged robots,
01:22:22 I think the reason we imagine them holding guns
01:22:24 is just because you’ve all seen Arnold Schwarzenegger, right?
01:22:28 That’s our reference point.
01:22:30 That’s pretty useless.
01:22:32 That’s not gonna be the main military use of them.
01:22:35 They might be useful in law enforcement in the future
01:22:38 and then there’s a whole debate about,
01:22:40 do you want robots showing up at your house with guns
01:22:42 telling you who’ll be perfectly obedient
01:22:45 to whatever dictator controls them?
01:22:47 But let’s leave that aside for a moment
01:22:49 and look at what’s actually relevant now.
01:22:51 So there’s a spectrum of things you can do
01:22:54 with AI in the military.
01:22:55 And again, to put my cards on the table,
01:22:57 I’m not a pacifist, I think we should have good defense.
01:23:03 So for example, a predator drone is basically
01:23:08 a fancy little remote controlled airplane, right?
01:23:11 There’s a human piloting it and the decision ultimately
01:23:16 about whether to kill somebody with it
01:23:17 is made by a human still.
01:23:19 And this is a line I think we should never cross.
01:23:23 There’s a current DOD policy.
01:23:25 Again, you have to have a human in the loop.
01:23:27 I think algorithms should never make life
01:23:30 or death decisions, they should be left to humans.
01:23:34 Now, why might we cross that line?
01:23:37 Well, first of all, these are expensive, right?
01:23:40 So for example, when Azerbaijan had all these drones
01:23:46 and Armenia didn’t have any, they started trying
01:23:48 to jerry-rig cheap little things that fly around.
01:23:51 But then, of course, the other side
01:23:54 would jam them.
01:23:55 And remote controlled things can be jammed,
01:23:58 which makes them inferior.
01:24:00 Also, if we’re piloting something from far away,
01:24:02 there’s a bit of a time delay,
01:24:05 the speed of light, plus the human’s reaction time as well,
01:24:08 so it would be nice to eliminate the jamming possibility
01:24:11 and the time delay by having it fully autonomous.
01:24:14 But if you do that,
01:24:17 you might be crossing that exact line.
01:24:19 You might program it to just, oh yeah, drone,
01:24:22 go hover over this country for a while
01:24:25 and whenever you find someone who is a bad guy,
01:24:28 kill them.
01:24:30 Now the machine is making these sort of decisions
01:24:33 and some people who defend this still say,
01:24:36 well, that’s morally fine because we are the good guys
01:24:39 and we will tell it the definition of bad guy
01:24:43 that we think is moral.
01:24:45 But now it would be very naive to think
01:24:48 that if ISIS buys that same drone,
01:24:51 that they’re gonna use our definition of bad guy.
01:24:54 Maybe for them, a bad guy is someone wearing
01:24:55 a US Army uniform, or maybe there will be some
01:25:00 ethnic group that decides that members
01:25:04 of another ethnic group are the bad guys, right?
01:25:07 The thing is human soldiers with all our faults,
01:25:11 we still have some basic wiring in us.
01:25:14 Like, no, it’s not okay to kill kids and civilians.
01:25:20 An autonomous weapon has none of that.
01:25:21 It’s just gonna do whatever is programmed.
01:25:23 It’s like the perfect Adolf Eichmann on steroids.
01:25:27 Like they told Adolf Eichmann, you know,
01:25:30 that they wanted him to do this and this and this
01:25:32 to make the Holocaust more efficient.
01:25:33 And he was like, yeah, and off he went and did it, right?
01:25:37 Do we really wanna make machines that are like that,
01:25:41 like completely amoral and we’ll take the user’s definition
01:25:44 of who is the bad guy?
01:25:45 And do we then wanna make them so cheap
01:25:47 that all our adversaries can have them?
01:25:49 Like what could possibly go wrong?
01:25:52 That’s, I think, the heart of the whole thing,
01:25:56 the big argument for why we wanna
01:26:00 really put the kibosh on this this year.
01:26:03 And I think you can tell there’s a lot
01:26:06 of very active debate even going on within the US military
01:26:10 and undoubtedly in other militaries around the world also
01:26:13 about whether we should have some sort
01:26:14 of international agreement to at least require
01:26:16 that these weapons have to be above a certain size
01:26:20 and cost, you know, so that things just don’t totally spiral
01:26:27 out of control.
01:26:29 And finally, to your question:
01:26:31 is it possible to stop it?
01:26:33 Because some people tell me, oh, just give up, you know.
01:26:37 But again, Matthew Meselson from Harvard, right,
01:26:41 the bioweapons hero, he faced exactly this criticism
01:26:46 with bioweapons too.
01:26:47 People were like, how can you check for sure
01:26:49 that the Russians aren’t cheating?
01:26:52 And he told me this, I think really ingenious insight.
01:26:58 He said, you know, Max, some people
01:27:01 think you have to have inspections and things
01:27:03 and you have to make sure that you can catch the cheaters
01:27:06 with 100% chance.
01:27:08 You don’t need 100%, he said.
01:27:10 1% is usually enough.
01:27:14 Because if it’s another big state,
01:27:19 suppose China and the US have signed the treaty drawing
01:27:23 a certain line and saying, yeah, these kind of drones are OK,
01:27:26 but these fully autonomous ones are not.
01:27:28 Now suppose you are China and you have cheated and secretly
01:27:34 developed some clandestine little thing
01:27:36 or you’re thinking about doing it.
01:27:37 What’s your calculation that you do?
01:27:39 Well, you’re like, OK, what’s the probability
01:27:41 that we’re going to get caught?
01:27:44 If the probability is 100%, of course, we’re not going to do it.
01:27:49 But if the probability is 5% that we’re going to get caught,
01:27:52 then it’s going to be like a huge embarrassment for us.
01:27:55 And we still have our nuclear weapons anyway,
01:28:00 so it doesn’t really make an enormous difference in terms
01:28:05 of deterring the US.
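(Written out as a rough sketch, the calculation Meselson describes is an expected-value comparison; the symbols below are illustrative, not anything he specifies.)

$$\mathbb{E}[\text{cheat}] \approx p_{\text{caught}} \cdot \big(-C_{\text{stigma}}\big) + \big(1 - p_{\text{caught}}\big) \cdot G_{\text{marginal}}$$

With nuclear deterrence already in place, the marginal gain $G_{\text{marginal}}$ is small, while the cost of getting caught $C_{\text{stigma}}$ (the embarrassment plus the rival going full tilt with its own program) is large, so even a modest detection probability, on the order of a few percent, can make cheating a losing bet.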
01:28:07 And that feeds the stigma that you kind of established,
01:28:11 like this fabric, this universal stigma over the thing.
01:28:14 Exactly.
01:28:15 It’s very reasonable for them to say, well, we could probably
01:28:18 get away with it.
01:28:18 But if we don’t, then the US will know we cheated,
01:28:21 and then they’re going to go full tilt with their program
01:28:23 and say, look, the Chinese are cheaters,
01:28:25 and now we have all these weapons against us,
01:28:27 and that’s bad.
01:28:27 So the stigma alone is very, very powerful.
01:28:32 And again, look what happened with bioweapons.
01:28:34 It’s been 50 years now.
01:28:36 When was the last time you read about a bioterrorism attack?
01:28:40 The only deaths I really know about with bioweapons
01:28:42 happened when we Americans managed
01:28:45 to kill some of our own with anthrax,
01:28:47 like the idiot who sent them to Tom Daschle and others
01:28:49 in letters, right?
01:28:50 And similarly in Sverdlovsk in the Soviet Union,
01:28:55 they had some anthrax in some lab there.
01:28:57 Maybe they were cheating or who knows,
01:29:00 and it leaked out and killed a bunch of Russians.
01:29:02 I’d say that’s a pretty good success, right?
01:29:04 50 years, just two own goals by the superpowers,
01:29:08 and then nothing.
01:29:09 And that’s why whenever I ask anyone
01:29:12 what they think about biology, they think it’s great.
01:29:15 They associate it with new cures for diseases,
01:29:18 maybe a good vaccine.
01:29:19 This is how I want to think about AI in the future.
01:29:22 And I want others to think about AI too,
01:29:24 as a source of all these great solutions to our problems,
01:29:27 not as, oh, AI, oh yeah, that’s the reason
01:29:31 I feel scared going outside these days.
01:29:34 Yeah, it’s kind of brilliant that with bioweapons
01:29:37 and nuclear weapons, we’ve figured out,
01:29:40 I mean, of course they’re still a huge source of danger,
01:29:43 but we figured out some way of creating rules
01:29:47 and social stigma over these weapons
01:29:51 that then creates a kind of
01:29:54 game theoretic stability.
01:29:57 And we don’t have that with AI,
01:29:59 and you’re kind of screaming from the top of the mountain
01:30:03 about this, that we need to find that,
01:30:05 because, as the Future of Life
01:30:10 Institute Awards point out, it’s very possible
01:30:15 that with nuclear weapons,
01:30:17 we could have destroyed ourselves quite a few times.
01:30:21 And it’s a learning experience that is very costly.
01:30:28 We gave this Future of Life Award,
01:30:30 we gave it the first time to this guy, Vasily Arkhipov.
01:30:34 Most people haven’t even heard of him.
01:30:37 Yeah, can you say who he is?
01:30:38 Vasily Arkhipov, he has, in my opinion,
01:30:44 made the greatest positive contribution to humanity
01:30:47 of any human in modern history.
01:30:50 And maybe it sounds like hyperbole here,
01:30:51 like I’m just over the top,
01:30:53 but let me tell you the story and I think maybe you’ll agree.
01:30:56 So during the Cuban Missile Crisis,
01:31:00 we Americans first didn’t know
01:31:01 that the Russians had sent four submarines,
01:31:05 but we caught two of them.
01:31:06 And we didn’t know that,
01:31:09 so we dropped practice depth charges
01:31:11 on the one that he was on,
01:31:12 trying to force it to the surface.
01:31:15 But we didn’t know that this submarine
01:31:17 actually was a nuclear submarine with a nuclear torpedo.
01:31:20 We also didn’t know that they had authorization
01:31:22 to launch it without clearance from Moscow.
01:31:25 And we also didn’t know
01:31:26 that they were running out of electricity.
01:31:28 Their batteries were almost dead.
01:31:29 They were running out of oxygen.
01:31:31 Sailors were fainting left and right.
01:31:34 The temperature was about 110, 120 Fahrenheit on board.
01:31:39 It was really hellish conditions,
01:31:40 really just a kind of doomsday.
01:31:43 And at that point,
01:31:44 these giant explosions start happening
01:31:46 from the Americans dropping these.
01:31:48 The captain thought World War III had begun.
01:31:50 They decided they were gonna launch the nuclear torpedo.
01:31:53 And one of them shouted,
01:31:55 we’re all gonna die,
01:31:56 but we’re not gonna disgrace our Navy.
01:31:58 We don’t know what would have happened
01:32:00 if there had been a giant mushroom cloud all of a sudden
01:32:03 against the Americans.
01:32:04 But since everybody had their hands on the triggers,
01:32:09 you don’t have to be too creative to think
01:32:10 that it could have led to an all out nuclear war,
01:32:13 in which case we wouldn’t be having this conversation now.
01:32:15 What actually took place was
01:32:17 they needed three people to approve this.
01:32:21 The captain had said yes.
01:32:22 There was the Communist Party political officer.
01:32:24 He also said, yes, let’s do it.
01:32:26 And the third man was this guy, Vasily Arkhipov,
01:32:29 who said, no.
01:32:29 For some reason, he was just more chill than the others
01:32:32 and he was the right man at the right time.
01:32:34 I don’t want us as a species to rely on the right person
01:32:38 being there at the right time, you know.
01:32:40 We tracked down his family
01:32:42 living in relative poverty outside Moscow.
01:32:47 He had passed away by then,
01:32:48 so we flew his daughter and her family to London.
01:32:52 They had never been to the West even.
01:32:54 It was incredibly moving to get to honor them for this.
01:32:57 We gave them a medal, and the next year
01:32:59 we gave this Future of Life Award
01:33:01 to Stanislav Petrov.
01:33:04 Have you heard of him?
01:33:05 Yes.
01:33:05 So he was in charge of the Soviet early warning station,
01:33:10 which was built with Soviet technology
01:33:12 and honestly not that reliable.
01:33:14 It said that there were five US missiles coming in.
01:33:18 Again, if they had launched at that point,
01:33:21 we probably wouldn’t be having this conversation.
01:33:23 He decided based on just mainly gut instinct
01:33:29 to just not escalate this.
01:33:32 And I’m very glad he wasn’t replaced by an AI
01:33:35 that was just automatically following orders.
01:33:37 And then we gave the third one to Matthew Meselson.
01:33:39 Last year, we gave this award to these guys
01:33:44 who actually used technology for good,
01:33:46 not for avoiding something bad, but for something good:
01:33:50 the guys who eliminated a disease
01:33:52 that was way worse than COVID, one that had killed
01:33:55 half a billion people in its final century.
01:33:58 Smallpox, right?
01:33:59 So you mentioned it earlier.
01:34:01 COVID on average kills less than 1% of people who get it.
01:34:05 Smallpox, about 30%.
01:34:08 And they just ultimately, Viktor Zhdanov and Bill Foege,
01:34:14 most of my colleagues have never heard of either of them,
01:34:17 one American, one Russian, they did this amazing effort.
01:34:22 Not only was Zhdanov able to get the US and the Soviet Union
01:34:25 to team up against smallpox during the Cold War,
01:34:27 but Bill Foege came up with this ingenious strategy
01:34:30 for making it actually go all the way
01:34:32 to defeating the disease without the funding
01:34:36 to vaccinate everyone.
01:34:37 And as a result,
01:34:40 we went from 15 million smallpox deaths the year
01:34:42 I was born.
01:34:44 So what do we have in COVID now?
01:34:45 A little bit short of 2 million, right?
01:34:47 Yes.
01:34:48 To zero deaths, of course, this year and forever.
01:34:52 There have been 200 million people,
01:34:53 we estimate, who would have died since then from smallpox
01:34:57 had it not been for this.
01:34:58 So isn’t science awesome when you use it for good?
01:35:02 The reason we wanna celebrate these sort of people
01:35:04 is to remind them of this.
01:35:05 Science is so awesome when you use it for good.
01:35:10 And those awards actually, the variety there,
01:35:13 it’s a very interesting picture.
01:35:14 So with the first two,
01:35:19 it’s kind of exciting to think that these average humans,
01:35:22 in some sense products of billions
01:35:26 of other humans that came before them, of evolution,
01:35:30 and some little thing, you said gut,
01:35:33 but there’s something in there
01:35:35 that stopped the annihilation of the human race.
01:35:41 And that’s a magical thing,
01:35:43 but that’s like this deeply human thing.
01:35:45 And then there’s the other aspect
01:35:47 where that’s also very human,
01:35:49 which is to build solution
01:35:51 to the existential crises that we’re facing,
01:35:55 like to build it, to take the responsibility
01:35:57 and to come up with different technologies and so on.
01:36:00 And both of those are deeply human,
01:36:04 the gut and the mind, whatever that is that creates.
01:36:07 The best is when they work together.
01:36:08 Arkhipov, I wish I could have met him, of course,
01:36:11 but he had passed away.
01:36:13 He was really a fantastic military officer,
01:36:16 combining all the best traits
01:36:18 that we in America admire in our military.
01:36:21 Because first of all, he was very loyal, of course.
01:36:23 He never even told anyone about this during his whole life,
01:36:26 even though you’d think he had some bragging rights, right?
01:36:28 But he just was like, this is just business,
01:36:30 just doing my job.
01:36:31 It only came out later after his death.
01:36:34 And second, the reason he did the right thing
01:36:37 was not because he was some sort of liberal,
01:36:39 not because he was just,
01:36:43 oh, peace and love.
01:36:47 It was partly because he had been the captain
01:36:49 on another submarine that had a nuclear reactor meltdown.
01:36:53 And it was his heroism that helped contain this.
01:36:58 That’s why he died of cancer later also.
01:36:59 But he had seen many of his crew members die.
01:37:01 And I think for him, that gave him this gut feeling
01:37:04 that if there’s a nuclear war
01:37:06 between the US and the Soviet Union,
01:37:08 the whole world is gonna go through
01:37:11 what I saw my dear crew members suffer through.
01:37:13 It wasn’t just an abstract thing for him.
01:37:15 I think it was real.
01:37:17 And second though, not just the gut, the mind, right?
01:37:20 He was, for some reason, very levelheaded personality
01:37:23 and very smart guy,
01:37:25 which is exactly what we want our best fighter pilots
01:37:29 to be also, right?
01:37:30 I’ll never forget Neil Armstrong landing on the moon,
01:37:32 almost running out of gas.
01:37:34 And when they say 30 seconds,
01:37:37 he doesn’t even change the tone of his voice, just keeps going.
01:37:39 Arkhipov, I think was just like that.
01:37:41 So when the explosions start going off
01:37:43 and his captain is screaming that we should nuke them
01:37:45 and all that, he’s like,
01:37:50 I don’t think the Americans are trying to sink us.
01:37:54 I think they’re trying to send us a message.
01:37:58 That’s pretty bad ass.
01:37:59 Yes.
01:38:00 Coolness. Because he said, if they wanted to sink us, they would have.
01:38:03 And he said, listen, listen, it’s alternating:
01:38:06 one loud explosion on the left, one on the right,
01:38:10 one on the left, one on the right.
01:38:12 He was the only one who noticed this pattern.
01:38:15 And he’s like, I think this is
01:38:17 them trying to send us a signal
01:38:20 that they want us to surface,
01:38:22 and they’re not gonna sink us.
01:38:25 And somehow,
01:38:29 this is how he then, ultimately,
01:38:32 with his combination of gut
01:38:34 and also just cool analytical thinking,
01:38:37 was able to deescalate the whole thing.
01:38:40 And yeah, so this is some of the best in humanity.
01:38:44 I guess coming back to what we talked about earlier,
01:38:45 it’s the combination of the neural network,
01:38:47 the instinctive, with, I’m getting teary up here,
01:38:50 getting emotional, but he was just,
01:38:53 he is one of my superheroes,
01:38:56 having both the heart and the mind combined.
01:39:00 And especially in that time, there’s something about this.
01:39:03 I mean, in America,
01:39:05 people are used to this kind of idea
01:39:06 of being an individual, of thinking on your own.
01:39:12 In the Soviet Union under communism,
01:39:15 it was actually much harder to do that.
01:39:17 Oh yeah,
01:39:19 he didn’t get any accolades either
01:39:21 when he came back for this, right?
01:39:24 They just wanted to hush the whole thing up.
01:39:25 Yeah, there’s echoes of that with Chernobyl,
01:39:28 there’s all kinds of echoes of that.
01:39:30 That’s a really hopeful thing,
01:39:34 that amidst big centralized powers,
01:39:37 whether it’s companies or states,
01:39:39 there’s still the power of the individual
01:39:42 to think on their own, to act.
01:39:43 But I think we need to think of people like this,
01:39:46 not as a panacea we can always count on,
01:39:50 but rather as a wake up call.
01:39:55 So because of them, because of Arkhipov,
01:39:58 we are alive to learn from this lesson,
01:40:01 to learn from the fact that we shouldn’t keep playing
01:40:03 Russian roulette and almost have a nuclear war
01:40:04 by mistake now and then,
01:40:06 because relying on luck is not a good longterm strategy.
01:40:09 If you keep playing Russian roulette over and over again,
01:40:11 the probability of surviving just drops exponentially
01:40:13 with time.
01:40:14 Yeah.
01:40:15 And if you have some probability
01:40:16 of having an accidental nuke war every year,
01:40:18 the probability of not having one also drops exponentially.
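(To make the exponential point concrete; the 1% annual figure below is purely illustrative, not an estimate given in the conversation.)

$$P(\text{no accidental war in } N \text{ years}) = (1 - p)^N, \qquad \text{e.g. } (1 - 0.01)^{100} \approx 0.37$$

So even a seemingly small 1% annual risk leaves only about a one-in-three chance of getting through a century without one.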
01:40:21 I think we can do better than that.
01:40:22 So I think the message is very clear,
01:40:26 once in a while shit happens,
01:40:27 and there’s a lot of very concrete things we can do
01:40:31 to reduce the risk of things like that happening
01:40:34 in the first place.
01:40:36 On the AI front, if we could just linger on that for a second.
01:40:39 Yeah.
01:40:40 So you’re friends with, you often talk with Elon Musk.
01:40:44 Over the years, you’ve done a lot
01:40:46 of interesting things together.
01:40:48 He has a set of fears about the future
01:40:52 of artificial intelligence, AGI.
01:40:55 Do you have a sense, we’ve already talked about
01:40:59 the things we should be worried about with AI,
01:41:01 do you have a sense of the shape of his fears
01:41:04 in particular about AI,
01:41:06 of which subset of what we’ve talked about,
01:41:10 whether it’s that direction
01:41:14 of creating sort of these giant computational systems
01:41:17 that are not explainable,
01:41:19 that are not intelligible intelligence,
01:41:21 or is it the…
01:41:26 And then like as a branch of that,
01:41:28 is it the manipulation by big corporations of that
01:41:31 or individual evil people to use that for destruction
01:41:35 or the unintentional consequences?
01:41:37 Do you have a sense of where his thinking is on this?
01:41:40 From my many conversations with Elon,
01:41:42 yeah, I certainly have a model of how he thinks.
01:41:47 It’s actually very much like the way I think also,
01:41:49 I’ll elaborate on it a bit.
01:41:51 I just wanna push back on when you said evil people,
01:41:54 I don’t think it’s a very helpful concept.
01:41:58 Evil people, sometimes people do very, very bad things,
01:42:02 but they usually do it because they think it’s a good thing
01:42:05 because somehow other people had told them
01:42:07 that that was a good thing
01:42:08 or given them incorrect information or whatever, right?
01:42:15 I believe in the fundamental goodness of humanity
01:42:18 that if we educate people well
01:42:21 and they find out how things really are,
01:42:24 people generally wanna do good and be good.
01:42:27 Hence the value alignment.
01:42:30 It’s about information, about knowledge,
01:42:33 and then once we have that,
01:42:35 we’ll likely be able to do good
01:42:39 in a way that’s aligned with everybody else
01:42:41 who thinks differently.
01:42:42 Yeah, and it’s not just the individual people
01:42:44 we have to align.
01:42:44 So we don’t just want people to be educated
01:42:49 to know the way things actually are
01:42:51 and to treat each other well,
01:42:53 but we also need to align other nonhuman entities.
01:42:56 We talked about corporations, there has to be institutions
01:42:58 so that what they do is actually good
01:42:59 for the country they’re in
01:43:00 and we should align, make sure that what countries do
01:43:03 is actually good for the species as a whole, et cetera.
01:43:07 Coming back to Elon,
01:43:08 yeah, my understanding of how Elon sees this
01:43:13 is really quite similar to my own,
01:43:15 which is one of the reasons I like him so much
01:43:18 and enjoy talking with him so much.
01:43:19 I feel he’s quite different from most people
01:43:22 in that he thinks much more than most people
01:43:27 about the really big picture,
01:43:29 not just what’s gonna happen in the next election cycle,
01:43:32 but in millennia, millions and billions of years from now.
01:43:36 And when you look in this more cosmic perspective,
01:43:39 it’s so obvious that we are gazing out into this universe
01:43:43 that as far as we can tell is mostly dead
01:43:46 with life being an almost imperceptibly tiny perturbation,
01:43:49 and he sees this enormous opportunity
01:43:52 for our universe to come alive,
01:43:54 first to become an interplanetary species.
01:43:56 Mars is obviously just the first stop on this cosmic journey.
01:44:02 And precisely because he thinks more long term,
01:44:06 it’s much more clear to him than to most people
01:44:09 that what we do with this Russian roulette thing
01:44:11 we keep playing with our nukes is a really poor strategy,
01:44:15 really reckless strategy.
01:44:16 And also that the way we're just building
01:44:18 these ever more powerful AI systems that we don't understand
01:44:21 is just a really reckless strategy.
01:44:23 I feel Elon is very much a humanist
01:44:26 in the sense that he wants an awesome future for humanity.
01:44:30 He wants it to be us that control the machines
01:44:35 rather than the machines that control us.
01:44:39 And why shouldn’t we insist on that?
01:44:42 We’re building them after all, right?
01:44:44 Why should we build things that just make us
01:44:46 into some little cog in the machinery
01:44:48 that has no further say in the matter, right?
01:44:50 That’s not my idea of an inspiring future either.
01:44:54 Yeah, if you think on the cosmic scale
01:44:57 in terms of both time and space,
01:45:00 so much is put into perspective.
01:45:02 Yeah.
01:45:04 Whenever I have a bad day, that’s what I think about.
01:45:06 It immediately makes me feel better.
01:45:09 It makes me sad that for us individual humans,
01:45:13 at least for now, the ride ends too quickly.
01:45:16 That we don’t get to experience the cosmic scale.
01:45:20 Yeah, I mean, I think of our universe sometimes
01:45:22 as an organism that has only begun to wake up a tiny bit,
01:45:26 just like the very first little glimmers of consciousness
01:45:30 you have in the morning when you start coming around.
01:45:32 Before the coffee.
01:45:33 Before the coffee, even before you get out of bed,
01:45:35 before you even open your eyes.
01:45:37 You start to wake up a little bit.
01:45:40 There’s something here.
01:45:43 That’s very much how I think of where we are.
01:45:47 All those galaxies out there,
01:45:48 I think they’re really beautiful,
01:45:51 but why are they beautiful?
01:45:52 They’re beautiful because conscious entities
01:45:55 are actually observing them,
01:45:57 experiencing them through our telescopes.
01:46:01 I define consciousness as subjective experience,
01:46:05 whether it be colors or emotions or sounds.
01:46:09 So beauty is an experience.
01:46:12 Meaning is an experience.
01:46:13 Purpose is an experience.
01:46:15 If there was no conscious experience,
01:46:18 observing these galaxies, they wouldn’t be beautiful.
01:46:20 If we do something dumb with advanced AI in the future here
01:46:24 and Earth-originating life goes extinct,
01:46:29 and that was it for this,
01:46:30 if there is nothing else with telescopes in our universe,
01:46:33 then it’s kind of game over for beauty
01:46:36 and meaning and purpose in our whole universe.
01:46:38 And I think that would be just such
01:46:39 an opportunity lost, frankly.
01:46:41 And I think when Elon points this out,
01:46:46 he gets very unfairly maligned in the media
01:46:49 for all the dumb media bias reasons we talked about.
01:46:52 They want to print precisely the things about Elon
01:46:55 out of context that are really click baity.
01:46:58 He has gotten so much flack
01:47:00 for this summoning the demon statement.
01:47:04 I happen to know exactly the context
01:47:07 because I was in the front row when he gave that talk.
01:47:09 It was at MIT, you’ll be pleased to know,
01:47:11 it was the AeroAstro anniversary.
01:47:13 They had Buzz Aldrin there from the moon landing,
01:47:16 a full house, Kresge Auditorium
01:47:19 packed with MIT students.
01:47:20 And he had this amazing Q&A, it might’ve gone for an hour.
01:47:23 And they talked about rockets and Mars and everything.
01:47:27 At the very end, this one student
01:47:29 who had actually taken my class asked him, what about AI?
01:47:33 Elon makes this one comment
01:47:35 and they take this out of context, print it, goes viral.
01:47:39 It was like, with AI
01:47:40 we're summoning the demon, something like that.
01:47:42 And they try to cast him as some sort of doom and gloom dude.
01:47:47 You know Elon, he’s not the doom and gloom dude.
01:47:51 He is such a positive visionary.
01:47:54 And the whole reason he warns about this
01:47:55 is because he realizes more than most
01:47:57 what the opportunity cost is of screwing up.
01:47:59 That there is so much awesomeness in the future
01:48:02 that we can and our descendants can enjoy
01:48:05 if we don’t screw up, right?
01:48:07 I get so pissed off when people try to cast him
01:48:10 as some sort of technophobic Luddite.
01:48:15 And at this point, it’s kind of ludicrous
01:48:18 when I hear people say that people who worry about
01:48:21 artificial general intelligence are Luddites
01:48:24 because of course, if you look more closely,
01:48:27 some of the most outspoken people making warnings
01:48:32 are people like Professor Stuart Russell from Berkeley
01:48:35 who’s written the bestselling AI textbook, you know.
01:48:38 So if you claim that he's a Luddite who doesn't understand AI,
01:48:43 the joke is really on the people who said it.
01:48:46 But I think more broadly,
01:48:48 this message has really not sunk in at all.
01:48:50 What it is that people worry about,
01:48:52 they think that Elon and Stuart Russell and others
01:48:56 are worried about the dancing robots picking up an AR-15
01:49:02 and going on a rampage, right?
01:49:04 They think they’re worried about robots turning evil.
01:49:08 They’re not, I’m not.
01:49:10 The risk is not malice, it’s competence.
01:49:15 The risk is just that we build some systems
01:49:17 that are incredibly competent,
01:49:18 which means they’re always gonna get
01:49:20 their goals accomplished,
01:49:22 even if they clash with our goals.
01:49:24 That’s the risk.
01:49:25 Why did we humans drive the West African black rhino extinct?
01:49:30 Is it because we’re malicious, evil rhinoceros haters?
01:49:34 No, it’s just because our goals didn’t align
01:49:38 with the goals of those rhinos
01:49:39 and tough luck for the rhinos, you know.
01:49:42 So the point is just we don’t wanna put ourselves
01:49:46 in the position of those rhinos
01:49:48 creating something more powerful than us
01:49:51 if we haven’t first figured out how to align the goals.
01:49:53 And I am optimistic.
01:49:54 I think we could do it if we worked really hard on it,
01:49:56 because I spent a lot of time
01:49:59 around intelligent entities that were more intelligent
01:50:01 than me, my mom and my dad.
01:50:05 And I was little and that was fine
01:50:07 because their goals were actually aligned
01:50:09 with mine quite well.
01:50:11 But we’ve seen today many examples of where the goals
01:50:15 of our powerful systems are not so aligned.
01:50:17 So those click-through optimization algorithms
01:50:22 that have polarized social media, right?
01:50:24 They were actually pretty poorly aligned
01:50:26 with what was good for democracy, it turned out.
01:50:28 And again, almost all problems we've had
01:50:31 with machine learning so far came
01:50:33 not from malice, but from poor alignment.
01:50:35 And that's exactly why we should be concerned
01:50:38 about it in the future.
01:50:39 Do you think it’s possible that with systems
01:50:43 like Neuralink and brain computer interfaces,
01:50:47 you know, again, thinking of the cosmic scale,
01:50:49 Elon’s talked about this, but others have as well
01:50:52 throughout history, of figuring out the exact mechanism
01:50:57 of how to achieve that kind of alignment.
01:51:00 So one of them is having a symbiosis with AI,
01:51:03 which is like coming up with clever ways
01:51:05 where we’re like stuck together in this weird relationship,
01:51:10 whether it’s biological or in some kind of other way.
01:51:14 Do you think that’s a possibility
01:51:17 of having that kind of symbiosis?
01:51:19 Or do we wanna instead kind of focus
01:51:20 on these distinct entities of us humans talking
01:51:28 to these intelligible, self-doubting AIs,
01:51:31 maybe like Stuart Russell thinks about it,
01:51:33 like we’re self doubting and full of uncertainty
01:51:37 and our AI systems are full of uncertainty.
01:51:39 We communicate back and forth
01:51:41 and in that way achieve symbiosis.
01:51:44 I honestly don’t know.
01:51:46 I would say that because we don't know for sure
01:51:48 which, if any, of our ideas will work.
01:51:52 But I'm pretty convinced
01:51:55 that if we don't get any
01:51:56 of these things to work and just barge ahead,
01:51:59 then our species is, you know,
01:52:01 probably gonna go extinct this century.
01:52:03 I think it’s…
01:52:04 This century, you think,
01:52:06 you think the crisis we're facing
01:52:09 is a 21st century crisis.
01:52:11 Like this century will be remembered,
01:52:13 either on a hard drive somewhere,
01:52:18 or maybe by future generations, like,
01:52:22 like there'll be future Future of Life Institute awards
01:52:26 for people that have done something about AI.
01:52:30 It could also end even worse,
01:52:31 where we're not superseded,
01:52:33 we don't leave any AI behind either,
01:52:35 we just totally wipe ourselves out, you know,
01:52:37 like on Easter Island.
01:52:38 Our century is long.
01:52:39 You know, there are still 79 years left of it, right?
01:52:44 Think about how far we’ve come just in the last 30 years.
01:52:47 So we can talk more about what might go wrong,
01:52:53 but you asked me this really good question
01:52:54 about what’s the best strategy.
01:52:55 Is it Neuralink or Russell’s approach or whatever?
01:52:59 I think, you know, when we did the Manhattan project,
01:53:05 we didn’t know if any of our four ideas
01:53:08 for enriching uranium and getting out the uranium 235
01:53:11 were gonna work.
01:53:12 But we felt this was really important
01:53:14 to get it before Hitler did.
01:53:16 So, you know what we did?
01:53:17 We tried all four of them.
01:53:19 Here, I think it’s analogous
01:53:21 where there’s the greatest threat
01:53:24 that’s ever faced our species.
01:53:25 And of course, US national security by implication.
01:53:29 We don’t know if we don’t have any method
01:53:31 that’s guaranteed to work, but we have a lot of ideas.
01:53:34 So we should invest pretty heavily
01:53:35 in pursuing all of them with an open mind
01:53:38 and hope that one of them at least works.
01:53:40 The good news is the century is long,
01:53:45 and it might take decades
01:53:47 until we have artificial general intelligence.
01:53:50 So we have some time hopefully,
01:53:52 but it takes a long time to solve
01:53:55 these very, very difficult problems.
01:53:57 It’s gonna actually be the,
01:53:58 it’s the most difficult problem
01:53:59 we were ever trying to solve as a species.
01:54:01 So we have to start now.
01:54:03 So we don’t have, rather than begin thinking about it
01:54:05 the night before some people who’ve had too much Red Bull
01:54:08 switch it on.
01:54:09 And we have to, coming back to your question,
01:54:11 we have to pursue all of these different avenues and see.
01:54:14 If you were my investment advisor
01:54:16 and I was trying to invest in the future,
01:54:19 how do you think the human species
01:54:23 is most likely to destroy itself in the century?
01:54:29 Yeah, so if many of the crises
01:54:32 we're facing are really before us
01:54:34 within the next hundred years,
01:54:37 how do we make explicit,
01:54:42 make known the unknowns and solve those problems
01:54:46 to avoid the biggest,
01:54:49 starting with the biggest existential crisis?
01:54:51 So as your investment advisor,
01:54:53 how are you planning to make money on us
01:54:55 destroying ourselves?
01:54:56 I have to ask.
01:54:57 I don’t know.
01:54:58 It might be the Russian origins.
01:55:01 Somehow it’s involved.
01:55:02 At the micro level of detailed strategies,
01:55:04 of course, these are unsolved problems.
01:55:08 For AI alignment,
01:55:09 we can break it into three sub problems
01:55:12 that are all unsolved.
01:55:13 I think you want first to make machines
01:55:16 understand our goals,
01:55:18 then adopt our goals and then retain our goals.
01:55:23 So to hit on all three real quickly.
01:55:27 The problem when Andreas Lubitz told his autopilot
01:55:31 to fly into the Alps was that the computer
01:55:34 didn’t even understand anything about his goals.
01:55:39 It was too dumb.
01:55:40 It could have understood actually,
01:55:42 but you would have had to put some effort in
01:55:45 as a systems designer to program in 'don't fly into mountains.'
01:55:48 So that’s the first challenge.
01:55:49 How do you program into computers human values,
01:55:54 human goals?
01:55:56 Rather than saying,
01:55:58 oh, it's so hard,
01:55:59 we can start with the simple stuff, as I said,
01:56:02 self-driving cars, airplanes,
01:56:04 just put in all the goals that we all agree on already,
01:56:07 and then make a habit of, whenever machines get smarter
01:56:10 so they can understand one level higher goals,
01:56:15 putting those in too.
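As a minimal sketch of what "putting in the goals we already agree on" could look like in software, here is a hypothetical hard constraint layered on top of an autopilot command; the function name, margin, and numbers are invented for illustration and are not from any real avionics system:

```python
# Hypothetical sketch of a hard safety constraint on an autopilot command:
# never accept a target altitude below the terrain ahead plus a margin.
# All names and numbers are illustrative, not a real avionics API.
SAFETY_MARGIN_M = 300.0  # assumed clearance margin in meters

def safe_target_altitude(commanded_altitude_m: float,
                         terrain_ahead_m: float) -> float:
    """Enforce the 'don't fly into mountains' goal on any command."""
    minimum_safe = terrain_ahead_m + SAFETY_MARGIN_M
    return max(commanded_altitude_m, minimum_safe)

# Commanding 100 m while the terrain ahead rises to 2,500 m:
print(safe_target_altitude(100.0, 2500.0))  # prints 2800.0
```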
01:56:16 The second challenge is getting them to adopt the goals.
01:56:20 It’s easy for situations like that
01:56:22 where you just program it in,
01:56:23 but when you have self learning systems like children,
01:56:26 you know, any parent knows
01:56:29 that there was a difference between getting our kids
01:56:33 to understand what we want them to do
01:56:34 and to actually adopt our goals, right?
01:56:37 With humans, with children, fortunately,
01:56:40 they go through this phase.
01:56:44 First, they're too dumb to understand
01:56:45 what our goals are.
01:56:46 And then they have this period of some years
01:56:50 when they’re both smart enough to understand them
01:56:52 and malleable enough that we have a chance
01:56:53 to raise them well.
01:56:55 And then they become teenagers, and it's kind of too late.
01:56:59 But we have this window with machines;
01:57:01 the challenge is the intelligence might grow so fast
01:57:04 that that window is pretty short.
01:57:06 So that’s a research problem.
01:57:08 The third one is how do you make sure they keep the goals
01:57:11 if they keep learning more and getting smarter?
01:57:14 Many sci-fi movies are about how you have something
01:57:17 which initially was aligned,
01:57:18 but then things kind of go off keel.
01:57:20 And, you know, my kids were very, very excited
01:57:24 about their Legos when they were little.
01:57:27 Now they’re just gathering dust in the basement.
01:57:29 If we create machines that are really on board
01:57:32 with the goal of taking care of humanity,
01:57:34 we don’t want them to get as bored with us
01:57:36 as my kids got with Legos.
01:57:39 So this is another research challenge.
01:57:41 How can you make some sort of recursively
01:57:43 self improving system retain certain basic goals?
01:57:47 That said, a lot of adult people still play with Legos.
01:57:50 So maybe we succeeded with the Legos.
01:57:52 Maybe, I like your optimism.
01:57:55 But above all.
01:57:56 So not all AI systems have to maintain the goals, right?
01:57:59 Just some fraction.
01:58:00 Yeah, so there’s a lot of talented AI researchers now
01:58:04 who have heard of this and want to work on it.
01:58:07 Not so much funding for it yet.
01:58:10 Of the billions that go into building AI more powerful,
01:58:14 it’s only a minuscule fraction
01:58:16 so far going into this safety research.
01:58:18 My attitude is generally we should not try to slow down
01:58:20 the technology, but we should greatly accelerate
01:58:22 the investment in this sort of safety research.
01:58:25 And also, this was very embarrassing last year,
01:58:29 but the NSF decided to give out
01:58:31 six of these big institutes.
01:58:33 We got one of them for AI and science, which you asked me about.
01:58:37 Another one was supposed to be for AI safety research.
01:58:40 And they gave it to people studying oceans
01:58:43 and climate and stuff.
01:58:46 So I’m all for studying oceans and climates,
01:58:49 but we need to actually have some money
01:58:51 that actually goes into AI safety research also
01:58:53 and doesn’t just get grabbed by whatever.
01:58:56 That’s a fantastic investment.
01:58:57 And then at the higher level, you asked this question,
01:59:00 okay, what can we do?
01:59:02 What are the biggest risks?
01:59:05 I think we cannot just consider this
01:59:08 to be only a technical problem.
01:59:11 Again, because if you solve only the technical problem,
01:59:13 can I play with your robot?
01:59:14 Yes, please.
01:59:15 If we can get our machines to just blindly obey
01:59:20 the orders we give them,
01:59:22 so we can always trust that it will do what we want.
01:59:26 That might be great for the owner of the robot.
01:59:28 That might not be so great for the rest of humanity
01:59:31 if that person is your least favorite world leader
01:59:34 or whatever you imagine, right?
01:59:36 So we also have to
01:59:39 apply alignment, not just to machines,
01:59:41 but to all the other powerful structures.
01:59:44 That’s why it’s so important
01:59:45 to strengthen our democracy again,
01:59:47 as I said, to have institutions,
01:59:48 make sure that the playing field is not rigged
01:59:51 so that corporations are given the right incentives
01:59:54 to do the things that both make profit
01:59:57 and are good for people,
01:59:58 to make sure that countries have incentives
02:00:00 to do things that are both good for their people
02:00:03 and don’t screw up the rest of the world.
02:00:06 And this is not just something for AI nerds to geek out on.
02:00:10 This is an interesting challenge for political scientists,
02:00:13 economists, and so many other thinkers.
02:00:16 So one of the magical things
02:00:18 that perhaps makes this earth quite unique
02:00:25 is that it’s home to conscious beings.
02:00:28 So you mentioned consciousness.
02:00:31 Perhaps as a small aside,
02:00:35 because we didn’t really get specific
02:00:36 to how we might do the alignment.
02:00:39 Like you said,
02:00:40 it's just a really important research problem,
02:00:41 but do you think engineering consciousness
02:00:44 into AI systems is a possibility,
02:00:49 is something that we might one day do,
02:00:53 or is there something fundamental to consciousness
02:00:56 that is, is there something about consciousness
02:00:59 that is fundamental to humans and humans only?
02:01:03 I think it’s possible.
02:01:04 I think both consciousness and intelligence
02:01:08 are information processing.
02:01:10 Certain types of information processing.
02:01:13 And that fundamentally,
02:01:15 it doesn’t matter whether the information is processed
02:01:17 by carbon atoms in the neurons and brains
02:01:21 or by silicon atoms and so on in our technology.
02:01:27 Some people disagree.
02:01:28 This is what I think as a physicist.
02:01:32 That consciousness is the same kind of,
02:01:34 you said consciousness is information processing.
02:01:37 So meaning, I think you had a quote of something like
02:01:43 it’s information knowing itself, that kind of thing.
02:01:47 I think consciousness is, yeah,
02:01:49 is the way information feels when it's being processed
02:01:51 in certain complex ways.
02:01:53 We don’t know exactly what those complex ways are.
02:01:56 It’s clear that most of the information processing
02:01:59 in our brains does not create an experience.
02:02:01 We’re not even aware of it, right?
02:02:03 Like for example,
02:02:05 you’re not aware of your heartbeat regulation right now,
02:02:07 even though it’s clearly being done by your body, right?
02:02:10 It’s just kind of doing its own thing.
02:02:12 When you go jogging,
02:02:13 there’s a lot of complicated stuff
02:02:15 about how you put your foot down and we know it’s hard.
02:02:18 That’s why robots used to fall over so much,
02:02:20 but you’re mostly unaware about it.
02:02:22 Your brain, your CEO consciousness module
02:02:25 just sends an email,
02:02:26 hey, I’m gonna keep jogging along this path.
02:02:29 The rest is on autopilot, right?
02:02:31 So most of it is not conscious,
02:02:33 but somehow there is some of the information processing,
02:02:36 which is we don’t know what exactly.
02:02:41 I think this is a science problem
02:02:44 that I hope one day we’ll have some equation for
02:02:47 or something so we can be able to build
02:02:49 a consciousness detector and say, yeah,
02:02:51 here there is some consciousness, here there’s not.
02:02:53 Oh, don’t boil that lobster because it’s feeling pain
02:02:56 or it’s okay because it’s not feeling pain.
02:02:59 Right now we treat this as sort of just metaphysics,
02:03:03 but it would be very useful in emergency rooms
02:03:06 to know if a patient has locked in syndrome
02:03:09 and is conscious or if they are actually just out.
02:03:14 And in the future, if you build a very, very intelligent
02:03:17 helper robot to take care of you,
02:03:20 I think you’d like to know
02:03:21 if you should feel guilty about shutting it down
02:03:24 or if it’s just like a zombie going through the motions
02:03:27 like a fancy tape recorder, right?
02:03:29 And once we can make progress
02:03:32 on the science of consciousness
02:03:34 and figure out what is conscious and what isn’t,
02:03:38 then assuming we want to create positive experiences
02:03:45 and not suffering, we’ll probably choose to build
02:03:48 some machines that are deliberately unconscious
02:03:51 that do incredibly boring, repetitive jobs
02:03:56 in an iron mine somewhere or whatever.
02:03:59 And maybe we’ll choose to create helper robots
02:04:03 for the elderly that are conscious
02:04:05 so that people don’t just feel creeped out
02:04:07 that the robot is just faking it
02:04:10 when it acts like it’s sad or happy.
02:04:12 Like you said, elderly,
02:04:13 I think everybody gets pretty deeply lonely in this world.
02:04:16 And so there’s a place I think for everybody
02:04:19 to have a connection with conscious beings,
02:04:21 whether they’re human or otherwise.
02:04:24 But I know for sure that I would,
02:04:26 if I had a robot, if I was gonna develop any kind
02:04:29 of personal emotional connection with it,
02:04:32 I would be very creeped out
02:04:33 if I knew it in an intellectual level
02:04:35 that the whole thing was just a fraud.
02:04:36 Now today you can buy a little talking doll for a kid
02:04:43 which will say things and the little child will often think
02:04:46 that this is actually conscious
02:04:47 and even tell it real secrets that then go on the internet,
02:04:50 with lots of creepy repercussions.
02:04:52 I would not wanna be just hacked and tricked like this.
02:04:58 If I was gonna be developing real emotional connections
02:05:01 with the robot, I would wanna know
02:05:04 that this is actually real.
02:05:05 It’s acting conscious, acting happy
02:05:08 because it actually feels it.
02:05:09 And I think this is not sci fi.
02:05:11 I think it’s possible to measure, to come up with tools.
02:05:15 After we understand the science of consciousness,
02:05:17 you’re saying we’ll be able to come up with tools
02:05:19 that can measure consciousness
02:05:21 and definitively say like this thing is experiencing
02:05:25 the things it says it’s experiencing.
02:05:27 Kind of by definition.
02:05:28 If it is a physical phenomenon, information processing
02:05:31 and we know that some information processing is conscious
02:05:34 and some isn’t, well, then there is something there
02:05:36 to be discovered with the methods of science.
02:05:38 Giulio Tononi has stuck his neck out the farthest
02:05:41 and written down some equations for a theory.
02:05:43 Maybe that’s right, maybe it’s wrong.
02:05:45 We certainly don’t know.
02:05:46 But I applaud that kind of effort to sort of take this,
02:05:50 say this is not just something that philosophers
02:05:53 can have beer and muse about,
02:05:56 but something we can measure and study.
02:05:58 And bringing that back to us,
02:06:00 I think what we would probably choose to do, as I said,
02:06:03 if we cannot figure this out,
02:06:05 is to be quite mindful
02:06:09 about what sort of consciousness, if any,
02:06:11 we put in different machines that we have.
02:06:16 And certainly,
02:06:19 we should not be making machines that suffer
02:06:21 without us even knowing it, right?
02:06:23 And if at any point someone decides to upload themselves
02:06:28 like Ray Kurzweil wants to do,
02:06:30 I don’t know if you’ve had him on your show.
02:06:31 We agreed, but then COVID happened,
02:06:33 so we're waiting it out a little bit.
02:06:34 Suppose he uploads himself into this robo Ray
02:06:38 and it talks like him and acts like him and laughs like him.
02:06:42 And before he powers off his biological body,
02:06:46 he would probably be pretty disturbed
02:06:47 if he realized that there’s no one home.
02:06:49 This robot is not having any subjective experience, right?
02:06:53 If humanity gets replaced by machine descendants,
02:06:59 which do all these cool things and build spaceships
02:07:02 and go to intergalactic rock concerts,
02:07:05 and it turns out that they are all unconscious,
02:07:10 just going through the motions,
02:07:11 wouldn’t that be like the ultimate zombie apocalypse, right?
02:07:16 Just a play for empty benches?
02:07:18 Yeah, I have a sense that there’s some kind of,
02:07:21 once we understand consciousness better,
02:07:22 we’ll understand that there’s some kind of continuum
02:07:25 and it would be a greater appreciation.
02:07:28 And we’ll probably understand, just like you said,
02:07:30 it’d be unfortunate if it’s a trick.
02:07:32 We’ll probably definitely understand
02:07:33 that love is indeed a trick that we’ll play on each other,
02:07:37 that we humans are, we convince ourselves we’re conscious,
02:07:40 but we’re really, us and trees and dolphins
02:07:45 are all the same kind of consciousness.
02:07:46 Can I try to cheer you up a little bit
02:07:48 with a philosophical thought here about the love part?
02:07:50 Yes, let’s do it.
02:07:51 You know, you might say,
02:07:53 okay, yeah, love is just a collaboration enabler.
02:07:58 And then maybe you can go and get depressed about that.
02:08:01 But I think that would be the wrong conclusion, actually.
02:08:04 You know, I know that the only reason I enjoy food
02:08:08 is because my genes hacked me
02:08:11 and they don’t want me to starve to death.
02:08:13 Not because they care about me consciously
02:08:17 enjoying succulent delights of pistachio ice cream,
02:08:21 but they just want me to make copies of them.
02:08:23 The whole thing, so in a sense,
02:08:24 the whole enjoyment of food is also a scam like this.
02:08:28 But does that mean I shouldn’t take pleasure
02:08:31 in this pistachio ice cream?
02:08:32 I love pistachio ice cream.
02:08:34 And I can tell you, I know this is an experimental fact.
02:08:38 I enjoy pistachio ice cream every bit as much,
02:08:41 even though I scientifically know exactly why,
02:08:45 what kind of scam this was.
02:08:46 Your genes really appreciate
02:08:48 that you like the pistachio ice cream.
02:08:50 Well, but I, my mind appreciates it too, you know?
02:08:53 And I have a conscious experience right now.
02:08:55 Ultimately, all of my brain is also just something
02:08:58 the genes built to copy themselves.
02:09:00 But so what?
02:09:01 You know, I’m grateful that,
02:09:03 yeah, thanks genes for doing this,
02:09:04 but you know, now it’s my brain that’s in charge here
02:09:07 and I’m gonna enjoy my conscious experience,
02:09:09 thank you very much.
02:09:10 And not just the pistachio ice cream,
02:09:12 but also the love I feel for my amazing wife
02:09:15 and all the other delights of being conscious.
02:09:19 I don’t, actually Richard Feynman,
02:09:22 I think said this so well.
02:09:25 He is also the guy, you know, who really got me into physics.
02:09:29 Some artist friend said that,
02:09:31 oh, science is kind of just the party pooper.
02:09:34 It kind of ruins the fun, right?
02:09:36 Like when you have a beautiful flower, as the artist sees it,
02:09:39 and then the scientist is gonna deconstruct that
02:09:41 into just a blob of quarks and electrons.
02:09:44 And Feynman pushed back on that in such a beautiful way,
02:09:47 which I think also can be used to push back
02:09:49 and make you not feel guilty about falling in love.
02:09:53 So here’s what Feynman basically said.
02:09:55 He said to his friend, you know,
02:09:56 yeah, I can also as a scientist see
02:09:59 that this is a beautiful flower, thank you very much.
02:10:00 Maybe I can’t draw as good a painting as you
02:10:03 because I’m not as talented an artist,
02:10:04 but yeah, I can really see the beauty in it.
02:10:06 And it just, it also looks beautiful to me.
02:10:09 But in addition to that, Feynman said, as a scientist,
02:10:12 I see even more beauty that the artist did not see, right?
02:10:16 Suppose this is a flower on a blossoming apple tree.
02:10:21 You could say this tree has more beauty in it
02:10:23 than just the colors and the fragrance.
02:10:26 This tree is made of air, Feynman wrote.
02:10:29 This is one of my favorite Feynman quotes ever.
02:10:31 And it took the carbon out of the air
02:10:33 and bound it in using the flaming heat of the sun,
02:10:36 you know, to turn the air into a tree.
02:10:38 And when you burn logs in your fireplace,
02:10:42 it’s really beautiful to think that this is being reversed.
02:10:45 Now the tree is going, the wood is going back into air.
02:10:48 And in this flaming, beautiful dance of the fire
02:10:52 that the artist can see is the flaming light of the sun
02:10:56 that was bound in to turn the air into tree.
02:10:59 And then the ashes is the little residue
02:11:01 that didn’t come from the air
02:11:02 that the tree sucked out of the ground, you know.
02:11:04 Feynman said, these are beautiful things.
02:11:06 And science just adds, it doesn’t subtract.
02:11:10 And I feel exactly that way about love
02:11:12 and about pistachio ice cream also.
02:11:16 I can understand that there is even more nuance
02:11:18 to the whole thing, right?
02:11:20 At this very visceral level,
02:11:22 you can fall in love just as much as someone
02:11:24 who knows nothing about neuroscience.
02:11:27 But you can also appreciate this even greater beauty in it.
02:11:31 Just like, isn’t it remarkable that it came about
02:11:35 from this completely lifeless universe,
02:11:38 just a bunch of hot blob of plasma expanding.
02:11:43 And then over the eons, you know, gradually,
02:11:46 first the strong nuclear force decided
02:11:48 to combine quarks together into nuclei.
02:11:50 And then the electric force bound in electrons
02:11:53 and made atoms.
02:11:53 And then they clustered from gravity
02:11:55 and you got planets and stars and this and that.
02:11:57 And then natural selection came along
02:12:00 and the genes had their little thing.
02:12:01 And you started getting what went from seeming
02:12:04 like a completely pointless universe
02:12:06 that we’re just trying to increase entropy
02:12:08 and approach heat death into something
02:12:10 that looked more goal oriented.
02:12:11 Isn’t that kind of beautiful?
02:12:13 And then this goal orientedness through evolution
02:12:15 got ever more sophisticated.
02:12:18 And then you started getting this thing,
02:12:20 which is kind of like DeepMind's MuZero on steroids;
02:12:25 the ultimate self-play is not what DeepMind's AI
02:12:29 does against itself to get better at Go.
02:12:32 It’s what all these little quark blobs did
02:12:34 against each other in the game of survival of the fittest.
02:12:38 Now, when you had really dumb bacteria
02:12:42 living in a simple environment,
02:12:44 there wasn’t much incentive to get intelligent,
02:12:46 but then life made the environment more complex.
02:12:50 And then there was more incentive to get even smarter.
02:12:53 And that gave the other organisms more of an incentive
02:12:56 to also get smarter.
02:12:57 And then here we are now,
02:12:59 just like MuZero learned to become world master at Go
02:13:05 and chess
02:13:07 by just playing against itself,
02:13:08 all the quarks here on our planet,
02:13:10 the electrons, have created giraffes and elephants
02:13:15 and humans and love.
02:13:17 I just find that really beautiful.
02:13:20 And to me, that just adds to the enjoyment of love.
02:13:24 It doesn’t subtract anything.
02:13:25 Do you feel a little more cheerful now?
02:13:27 I feel way better, that was incredible.
02:13:30 So this self-play of quarks,
02:13:33 taking back to the beginning of our conversation
02:13:36 a little bit, there’s so many exciting possibilities
02:13:39 about artificial intelligence understanding
02:13:42 the basic laws of physics.
02:13:44 Do you think AI will help us unlock?
02:13:47 There’s been quite a bit of excitement
02:13:49 throughout the history of physics
02:13:50 of coming up with more and more general simple laws
02:13:55 that explain the nature of our reality.
02:13:58 And then the ultimate of that would be a theory
02:14:01 of everything that combines everything together.
02:14:03 Do you think it’s possible that one, we humans,
02:14:07 but perhaps AI systems will figure out a theory of physics
02:14:13 that unifies all the laws of physics?
02:14:17 Yeah, I think it’s absolutely possible.
02:14:19 I think it’s very clear
02:14:21 that we’re gonna see a great boost to science.
02:14:24 We’re already seeing a boost actually
02:14:26 from machine learning helping science.
02:14:28 AlphaFold was an example,
02:14:30 solving the decades-old protein folding problem.
02:14:34 So, and gradually, yeah, unless we go extinct
02:14:38 by doing something dumb like we discussed,
02:14:39 I think it’s very likely
02:14:44 that our understanding of physics will become so good
02:14:48 that our technology will no longer be limited
02:14:53 by human intelligence,
02:14:55 but instead be limited by the laws of physics.
02:14:58 So our tech today is limited
02:15:00 by what we’ve been able to invent, right?
02:15:02 I think as AI progresses,
02:15:04 it’ll just be limited by the speed of light
02:15:07 and other physical limits,
02:15:09 which would mean it’s gonna be just dramatically beyond
02:15:13 where we are now.
02:15:15 Do you think it’s a fundamentally mathematical pursuit
02:15:18 of trying to understand like the laws
02:15:22 of our universe from a mathematical perspective?
02:15:25 So almost like if it’s AI,
02:15:28 it’s exploring the space of like theorems
02:15:31 and those kinds of things,
02:15:33 or is there some other more computational ideas,
02:15:39 more sort of empirical ideas?
02:15:41 They’re both, I would say.
02:15:43 It’s really interesting to look out at the landscape
02:15:45 of everything we call science today.
02:15:48 So here you come now with this big new hammer.
02:15:50 It says machine learning on it
02:15:51 and that’s, you know, where are there some nails
02:15:53 that you can help with here that you can hammer?
02:15:56 Ultimately, if machine learning gets the point
02:16:00 that it can do everything better than us,
02:16:02 it will be able to help across the whole space of science.
02:16:06 But maybe we can anchor it by starting a little bit
02:16:08 right now near term and see how we kind of move forward.
02:16:11 So like right now, first of all,
02:16:14 you have a lot of big data science, right?
02:16:17 Where, for example, with telescopes,
02:16:19 we are able to collect way more data every hour
02:16:24 than a grad student can just pore over
02:16:26 like in the old times, right?
02:16:28 And machine learning is already being used very effectively,
02:16:31 even at MIT, to find planets around other stars,
02:16:34 to detect exciting new signatures
02:16:36 of new particle physics in the sky,
02:16:38 to detect the ripples in the fabric of space time
02:16:42 that we call gravitational waves
02:16:44 caused by enormous black holes
02:16:46 crashing into each other halfway
02:16:48 across the observable universe.
02:16:49 Machine learning is running and ticking right now,
02:16:52 doing all these things,
02:16:53 and it’s really helping all these experimental fields.
02:16:58 There is a separate front of physics,
02:17:01 computational physics,
02:17:03 which is getting an enormous boost also.
02:17:05 So we had to do all our computations by hand, right?
02:17:09 People would have these giant books
02:17:11 with tables of logarithms,
02:17:12 and oh my God, it pains me to even think
02:17:16 how long it would have taken to do simple stuff.
02:17:19 Then we started to get little calculators and computers
02:17:23 that could do some basic math for us.
02:17:26 Now, what we’re starting to see is
02:17:31 kind of a shift from GOFAI computational physics
02:17:35 to neural network computational physics.
02:17:40 What I mean by that is most computational physics
02:17:44 would be done by humans programming in
02:17:48 the intelligence of how to do the computation
02:17:50 into the computer.
02:17:52 Just as when Garry Kasparov got his posterior kicked
02:17:55 by IBM’s Deep Blue in chess,
02:17:56 humans had programmed in exactly how to play chess.
02:17:59 Intelligence came from the humans.
02:18:01 It wasn’t learned, right?
02:18:03 MuZero can beat not only Kasparov in chess,
02:18:08 but also Stockfish,
02:18:09 which is the best sort of GOFAI chess program.
02:18:12 By learning, and we’re seeing more of that now,
02:18:16 that shift beginning to happen in physics.
02:18:18 So let me give you an example.
02:18:20 So lattice QCD is an area of physics
02:18:24 whose goal is basically to take the periodic table
02:18:27 and just compute the whole thing from first principles.
02:18:31 This is not the search for theory of everything.
02:18:33 We already know the theory
02:18:36 that’s supposed to produce as output the periodic table,
02:18:39 which atoms are stable, how heavy they are,
02:18:42 all that good stuff, their spectral lines.
02:18:45 It’s a theory, lattice QCD,
02:18:48 you can put it on your t-shirt.
02:18:50 Our colleague Frank Wilczek
02:18:51 got the Nobel Prize for working on it.
02:18:54 But the math is just too hard for us to solve.
02:18:56 We have not been able to start with these equations
02:18:58 and solve them to the extent that we can predict, oh yeah.
02:19:01 And then there is carbon,
02:19:03 and this is what the spectrum of the carbon atom looks like.
02:19:07 But awesome people are building
02:19:09 these supercomputer simulations
02:19:12 where you just put in these equations
02:19:14 and you make a big cubic lattice of space,
02:19:20 or actually it’s a very small lattice
02:19:22 because you’re going down to the subatomic scale,
02:19:25 and you try to solve it.
02:19:26 But it’s just so computationally expensive
02:19:28 that we still haven’t been able to calculate things
02:19:31 as accurately as we measure them in many cases.
02:19:34 And now machine learning is really revolutionizing this.
02:19:37 So my colleague Phiala Shanahan at MIT, for example,
02:19:40 she’s been using this really cool
02:19:43 machine learning technique called normalizing flows,
02:19:47 where she’s realized she can actually speed up
02:19:49 the calculation dramatically
02:19:52 by having the AI learn how to do things faster.
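For readers wondering what a normalizing flow is: the core ingredient is an invertible transformation with a cheap Jacobian log-determinant, so samples from simple noise can be mapped into a complicated distribution while their probability stays tractable. Below is a bare-bones affine coupling layer in the spirit of RealNVP, meant only to show that mechanism; it is not the lattice QCD setup described here:

```python
import numpy as np

# Minimal affine coupling layer (RealNVP-style), just to show the core
# mechanism behind normalizing flows: an invertible map whose Jacobian
# log-determinant is cheap to compute. Not the lattice QCD application.
rng = np.random.default_rng(1)
W_s = 0.1 * rng.standard_normal((2, 2))   # toy "networks": one linear
W_t = 0.1 * rng.standard_normal((2, 2))   # layer each for scale and shift

def coupling_forward(x):
    x1, x2 = x[:, :2], x[:, 2:]           # split each 4-d sample in half
    s = np.tanh(x1 @ W_s)                 # scale depends only on x1
    t = x1 @ W_t                          # shift depends only on x1
    y2 = x2 * np.exp(s) + t               # transform the other half
    log_det = s.sum(axis=1)               # log|det J| = sum of the scales
    return np.concatenate([x1, y2], axis=1), log_det

def coupling_inverse(y):
    y1, y2 = y[:, :2], y[:, 2:]
    s = np.tanh(y1 @ W_s)
    t = y1 @ W_t
    x2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, x2], axis=1)

z = rng.standard_normal((5, 4))           # easy-to-sample Gaussian noise
x, log_det = coupling_forward(z)
print(np.allclose(coupling_inverse(x), z))  # True: the map is invertible
```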
02:19:55 Another area like this
02:19:57 where we suck up an enormous amount of supercomputer time
02:20:02 to do physics is black hole collisions.
02:20:05 So now that we’ve done the sexy stuff
02:20:06 of detecting a bunch of this with LIGO and other experiments,
02:20:09 we want to be able to know what we’re seeing.
02:20:13 And so it’s a very simple conceptual problem.
02:20:16 It’s the two body problem.
02:20:19 Newton solved it for classical gravity hundreds of years ago,
02:20:23 but the two body problem is still not fully solved.
02:20:26 For black holes.
02:20:26 Black holes, yes, in Einstein's gravity,
02:20:29 because they won't just orbit in space,
02:20:31 they won't just orbit each other forever anymore.
02:20:33 They do two things: they give off gravitational waves,
02:20:36 and that makes sure they crash into each other.
02:20:37 And the game, what you want to do is you want to figure out,
02:20:40 okay, what kind of wave comes out
02:20:43 as a function of the masses of the two black holes,
02:20:46 as a function of how they’re spinning,
02:20:48 relative to each other, et cetera.
02:20:50 And that is so hard.
02:20:52 It can take months of supercomputer time
02:20:54 and massive numbers of cores to do it.
02:20:56 Now, wouldn’t it be great if you can use machine learning
02:21:01 to greatly speed that up, right?
02:21:04 Now you can use the expensive old GOFAI calculation
02:21:09 as the truth, and then see if machine learning
02:21:11 can figure out a smarter, faster way
02:21:13 of getting the right answer.
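The pattern being described, using the slow trusted solver to generate training data and then fitting a fast approximation to it, is often called surrogate modeling or emulation. A toy version, with a made-up one-dimensional "expensive" function standing in for a numerical relativity run:

```python
import numpy as np

# Toy surrogate-modeling sketch: treat `expensive_simulation` as a slow,
# trusted solver (here just a stand-in formula), generate a small training
# set from it, fit a cheap polynomial emulator, and check the error.
def expensive_simulation(mass_ratio):
    # stand-in for a costly numerical-relativity run (made up)
    return np.sin(3.0 * mass_ratio) * np.exp(-mass_ratio)

train_q = np.linspace(0.1, 1.0, 30)             # a few "expensive" runs
train_y = expensive_simulation(train_q)

coeffs = np.polyfit(train_q, train_y, deg=6)    # the cheap emulator

test_q = np.linspace(0.1, 1.0, 200)
emulated = np.polyval(coeffs, test_q)
truth = expensive_simulation(test_q)
print("max emulator error:", float(np.abs(emulated - truth).max()))
```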
02:21:16 Yet another area, like computational physics.
02:21:20 These are probably the big three
02:21:22 that suck up the most computer time.
02:21:24 Lattice QCD, black hole collisions,
02:21:27 and cosmological simulations,
02:21:29 where you take not a subatomic thing
02:21:32 and try to figure out the mass of the proton,
02:21:34 but you take something enormous
02:21:37 and try to look at how all the galaxies get formed in there.
02:21:41 There again, there are a lot of very cool ideas right now
02:21:44 about how you can use machine learning
02:21:46 to do this sort of stuff better.
02:21:49 The difference between this and the big data
02:21:51 is you kind of make the data yourself, right?
02:21:54 So, and then finally,
02:21:58 we’re looking over the physics landscape
02:22:00 and seeing what can we hammer with machine learning, right?
02:22:02 So we talked about experimental data, big data,
02:22:05 discovering cool stuff that we humans
02:22:07 then look more closely at.
02:22:09 Then we talked about taking the expensive computations
02:22:13 we’re doing now and figuring out
02:22:15 how to do them much faster and better with AI.
02:22:18 And finally, let’s go really theoretical.
02:22:21 So things like discovering equations,
02:22:25 having deep fundamental insights,
02:22:30 this is something closest to what I’ve been doing
02:22:33 in my group.
02:22:33 We talked earlier about the whole AI Feynman project,
02:22:35 where if you just have some data,
02:22:37 how do you automatically discover equations
02:22:39 that seem to describe this well,
02:22:42 that you can then go back as a human
02:22:44 and then work with and test and explore.
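In the same spirit as that project, though drastically simplified, symbolic regression can be thought of as searching a list of candidate formulas for the one that fits the data best. The candidate set and the synthetic data below are invented for illustration and are not the actual AI Feynman algorithm:

```python
import numpy as np

# Drastically simplified symbolic-regression sketch: fit each candidate
# formula's scale factor by least squares and keep the best fit.
# The hidden law and the candidate list are made up for illustration.
rng = np.random.default_rng(2)
x = rng.uniform(0.5, 5.0, 200)
y = 4.9 * x**2 + 0.01 * rng.standard_normal(200)   # hidden law: ~ (g/2) x^2

candidates = {
    "x":      lambda x: x,
    "x**2":   lambda x: x**2,
    "log(x)": lambda x: np.log(x),
    "exp(x)": lambda x: np.exp(x),
}

best_name, best_a, best_err = None, 0.0, np.inf
for name, f in candidates.items():
    basis = f(x)
    a = basis @ y / (basis @ basis)        # least-squares scale factor
    err = np.mean((a * basis - y) ** 2)
    if err < best_err:
        best_name, best_a, best_err = name, a, err

print(f"best candidate: y ~ {best_a:.2f} * {best_name}")
```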
02:22:46 And you asked a really good question also
02:22:50 about if this is sort of a search problem in some sense.
02:22:54 That’s very deep actually what you said, because it is.
02:22:56 Suppose I ask you to prove some mathematical theorem.
02:23:01 What is a proof in math?
02:23:02 It’s just a long string of steps, logical steps
02:23:05 that you can write out with symbols.
02:23:07 And once you find it, it’s very easy to write a program
02:23:10 to check whether it’s a valid proof or not.
02:23:14 So why is it so hard to prove it?
02:23:16 Well, because there are ridiculously many possible
02:23:19 candidate proofs you could write down, right?
02:23:21 If the proof contains 10,000 symbols,
02:23:25 even if there were only 10 options
02:23:27 for what each symbol could be,
02:23:29 that’s 10 to the power of 1,000 possible proofs,
02:23:33 which is way more than there are atoms in our universe.
02:23:36 So you could say it's trivial to prove these things.
02:23:38 You just write a computer program to generate all strings,
02:23:41 and then check, is this a valid proof?
02:23:43 No.
02:23:44 Is this a valid proof?
02:23:45 Is this a valid proof?
02:23:46 No.
02:23:47 And then you just keep doing this forever.
02:23:51 But it is fundamentally
02:23:53 a search problem.
02:23:55 You just want to search the space of all those
02:23:57 strings of symbols to find one that is the proof, right?
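To see why brute force is hopeless here, a naive enumerator is easy to write; the tiny alphabet and the placeholder validity check below are purely illustrative:

```python
import itertools

# Naive proof search: enumerate every string over a small alphabet and
# test each with a validity checker. With 10 symbols and length L there
# are 10**L candidates, so this only works for toy lengths.
ALPHABET = "0123456789"              # stand-in for proof-step symbols

def is_valid_proof(candidate: str) -> bool:
    # placeholder checker; a real one would verify each logical step
    return candidate == "2718"

def brute_force_search(length: int):
    for symbols in itertools.product(ALPHABET, repeat=length):
        candidate = "".join(symbols)
        if is_valid_proof(candidate):
            return candidate
    return None

print(brute_force_search(4))         # finds "2718" after up to 10**4 tries
# At length 10,000 there would be 10**10000 candidates, hence guided search.
```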
02:24:03 And there’s a whole area of machine learning called search.
02:24:08 How do you search through some giant space
02:24:10 to find the needle in the haystack?
02:24:12 And it’s easier in cases
02:24:14 where there’s a clear measure of good,
02:24:17 like you’re not just right or wrong,
02:24:18 but this is better and this is worse,
02:24:20 so you can maybe get some hints
02:24:21 as to which direction to go in.
02:24:23 That’s why we talked about neural networks work so well.
02:24:28 I mean, that’s such a human thing
02:24:30 of that moment of genius
02:24:32 of figuring out the intuition of good, essentially.
02:24:37 I mean, we thought that that was…
02:24:38 Or is it?
02:24:40 Maybe it’s not, right?
02:24:41 We thought that about chess, right?
02:24:42 That the ability to see like 10, 15,
02:24:46 sometimes 20 steps ahead was not a calculation
02:24:50 that humans were performing.
02:24:51 It was some kind of weird intuition
02:24:53 about different patterns, about board positions,
02:24:57 about the relative positions,
02:24:59 somehow stitching stuff together.
02:25:01 And a lot of it is just like intuition,
02:25:03 but then you have like AlphaZero,
02:25:05 I guess, being the first one that did the self-play.
02:25:10 It just came up with this.
02:25:12 It was able to learn through self play mechanism,
02:25:14 this kind of intuition.
02:25:16 Exactly.
02:25:16 But just like you said, it’s so fascinating to think,
02:25:19 well, they’re in the space of totally new ideas.
02:25:24 Can that be done in developing theorems?
02:25:28 We know it can be done by neural networks
02:25:30 because we did it with the neural networks
02:25:32 in the craniums of the great mathematicians of humanity.
02:25:36 And I’m so glad you brought up alpha zero
02:25:38 because that’s the counter example.
02:25:39 It turned out we were flattering ourselves
02:25:41 when we said intuition is something different.
02:25:45 Only humans can do it.
02:25:46 It’s not information processing.
02:25:50 It used to be that way.
02:25:53 Again, it’s really instructive, I think,
02:25:56 to compare the chess computer Deep Blue
02:25:58 that beat Kasparov with alpha zero
02:26:02 that beat Lee Sedol at Go.
02:26:04 Because for Deep Blue, there was no intuition.
02:26:08 There was some, humans had programmed in some intuition.
02:26:12 After humans had played a lot of games,
02:26:13 they told the computer, count the pawn as one point,
02:26:16 the bishop is three points, rook is five points,
02:26:19 and so on, you add it all up,
02:26:21 and then you add some extra points for passed pawns
02:26:23 and subtract if the opponent has it and blah, blah, blah.
02:26:28 And then what Deep Blue did was just search.
02:26:32 Just very brute force, it tried many, many moves ahead,
02:26:34 all these combinations, in a pruned tree search.
02:26:37 And it could think much faster than Kasparov, and it won.
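A caricature of the kind of handcrafted evaluation being described, with the classic material values typed in by humans; real engines add many more hand-tuned terms, so this is only illustrative:

```python
# Caricature of a Deep Blue-era, human-coded evaluation: hard-wired piece
# values that the program simply sums up during its brute-force search.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(white_pieces: str, black_pieces: str) -> int:
    """Positive means White is ahead in material, negative means Black."""
    white = sum(PIECE_VALUES[p] for p in white_pieces)
    black = sum(PIECE_VALUES[p] for p in black_pieces)
    return white - black

# Example: White is up a rook and a pawn for a bishop, so this prints 3.
print(material_score("KQRRBPPP", "KQRBBPP"))
```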
02:26:42 And that, I think, inflated our egos
02:26:45 in a way it shouldn’t have,
02:26:46 because people started to say, yeah, yeah,
02:26:48 it’s just brute force search, but it has no intuition.
02:26:52 Alpha zero really popped our bubble there,
02:26:57 because what alpha zero does,
02:27:00 yes, it does also do some of that tree search,
02:27:03 but it also has this intuition module,
02:27:06 which in geek speak is called a value function,
02:27:09 where it just looks at the board
02:27:11 and comes up with a number for how good is that position.
02:27:14 The difference was no human told it
02:27:17 how good the position is, it just learned it.
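In contrast, the "intuition module" he describes is a learned value function: a network mapping a board position to a score, trained from game outcomes rather than hand-coded rules. The toy network below is a schematic stand-in, not AlphaZero's actual architecture:

```python
import numpy as np

# Schematic learned value function: a tiny network maps a board encoding
# to a score in (-1, 1) and is nudged toward the eventual game outcome.
# A toy stand-in for the idea, not AlphaZero's actual architecture.
rng = np.random.default_rng(3)
BOARD_FEATURES = 64                       # assumed flat board encoding

W1 = 0.1 * rng.standard_normal((BOARD_FEATURES, 32))
w2 = 0.1 * rng.standard_normal(32)

def value(board_vector):
    """Score a position: near +1 looks winning, near -1 looks losing."""
    hidden = np.tanh(board_vector @ W1)
    return float(np.tanh(hidden @ w2))

def train_step(board_vector, game_outcome, lr=0.05):
    """Crude gradient step on the output weights toward the game result."""
    global w2
    hidden = np.tanh(board_vector @ W1)
    pred = np.tanh(hidden @ w2)
    w2 -= lr * (pred - game_outcome) * (1 - pred**2) * hidden

board = rng.standard_normal(BOARD_FEATURES)
for _ in range(500):
    train_step(board, game_outcome=1.0)   # pretend games from here were won
print(round(value(board), 2))             # score drifts toward +1
```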
02:27:22 And mu zero is the coolest or scariest of all,
02:27:26 depending on your mood,
02:27:28 because the same basic AI system
02:27:33 will learn what the good board position is,
02:27:35 regardless of whether it’s chess or Go or Shogi
02:27:38 or Pacman or Lady Pacman or Breakout or Space Invaders
02:27:42 or any number of other games.
02:27:45 You don’t tell it anything,
02:27:45 and it gets this intuition after a while for what’s good.
02:27:49 So this is very hopeful for science, I think,
02:27:52 because if it can get intuition
02:27:55 for what’s a good position there,
02:27:57 maybe it can also get intuition
02:27:58 for what are some good directions to go
02:28:00 if you’re trying to prove something.
02:28:03 One of the most fun things in my science career
02:28:06 is when I’ve been able to prove some theorem about something
02:28:08 and it’s very heavily intuition guided, of course.
02:28:12 I don’t sit and try all random strings.
02:28:14 I have a hunch that, you know,
02:28:16 this reminds me a little bit about this other proof
02:28:18 I’ve seen for this thing.
02:28:19 So maybe I first, what if I try this?
02:28:22 Nah, that didn’t work out.
02:28:24 But this reminds me actually,
02:28:25 the way this failed reminds me of that.
02:28:28 So combining the intuition with all these brute force
02:28:33 capabilities, I think it’s gonna be able to help physics too.
02:28:38 Do you think there’ll be a day when an AI system
02:28:42 being the primary contributor, let’s say 90% plus,
02:28:46 wins the Nobel Prize in physics?
02:28:50 Obviously they’ll give it to the humans
02:28:51 because we humans don’t like to give prizes to machines.
02:28:54 It’ll give it to the humans behind the system.
02:28:57 You could argue that AI has already been involved
02:28:59 in some Nobel Prizes, probably,
02:29:01 maybe something with black holes and stuff like that.
02:29:03 Yeah, we don’t like giving prizes to other life forms.
02:29:07 If someone wins a horse racing contest,
02:29:09 they don’t give the prize to the horse either.
02:29:11 That’s true.
02:29:13 But do you think that we might be able to see
02:29:16 something like that in our lifetimes when AI,
02:29:19 so like the first system I would say
02:29:21 that makes us think about a Nobel Prize seriously
02:29:25 is like Alpha Fold is making us think about
02:29:28 in medicine, physiology, a Nobel Prize,
02:29:31 perhaps discoveries that are direct result
02:29:34 of something that’s discovered by Alpha Fold.
02:29:36 Do you think in physics we might be able
02:29:39 to see that in our lifetimes?
02:29:41 I think what’s probably gonna happen
02:29:43 is more of a blurring of the distinctions.
02:29:46 So today if somebody uses a computer
02:29:53 to do a computation that gives them the Nobel Prize,
02:29:54 nobody’s gonna dream of giving the prize to the computer.
02:29:57 They’re gonna be like, that was just a tool.
02:29:59 I think for these things also,
02:30:02 people are just gonna for a long time
02:30:04 view the computer as a tool.
02:30:06 But what’s gonna change is the ubiquity of machine learning.
02:30:11 I think at some point in my lifetime,
02:30:17 finding a human physicist who knows nothing
02:30:21 about machine learning is gonna be almost as hard
02:30:23 as it is today finding a human physicist
02:30:25 who doesn’t says, oh, I don’t know anything about computers
02:30:29 or I don’t use math.
02:30:30 That would just be a ridiculous concept.
02:30:34 You see, but the thing is there is a magic moment though,
02:30:38 like with Alpha Zero, when the system surprises us
02:30:42 in a way where the best people in the world
02:30:46 truly learn something from the system
02:30:48 in a way where you feel like it’s another entity.
02:30:52 Like the way people, the way Magnus Carlsen,
02:30:54 the way certain people are looking at the work of Alpha Zero,
02:30:58 it’s like, it truly is no longer a tool
02:31:02 in the sense that it doesn’t feel like a tool.
02:31:06 It feels like some other entity.
02:31:08 So there’s a magic difference like where you’re like,
02:31:13 if an AI system is able to come up with an insight
02:31:17 that surprises everybody in some like major way
02:31:23 that’s a phase shift in our understanding
02:31:25 of some particular science
02:31:27 or some particular aspect of physics,
02:31:30 I feel like that is no longer a tool.
02:31:32 And then you can start to say
02:31:35 that like it perhaps deserves the prize.
02:31:38 So for sure, the more important
02:31:40 and the more fundamental transformation
02:31:43 of 21st century science is exactly what you're saying,
02:31:46 which is probably that everybody will be doing machine learning
02:31:50 to some degree.
02:31:51 Like if you want to be successful
02:31:54 at unlocking the mysteries of science,
02:31:57 you should be doing machine learning.
02:31:58 But it’s just exciting to think about like,
02:32:01 whether there’ll be one that comes along
02:32:03 that’s super surprising and they’ll make us question
02:32:08 like who the real inventors are in this world.
02:32:10 Yeah.
02:32:11 Yeah, I think the question
02:32:14 isn't if it's gonna happen, but when.
02:32:15 But it's important.
02:32:17 Honestly, in my mind, the time when that happens
02:32:20 is also more or less the same time
02:32:23 when we get artificial general intelligence.
02:32:25 And then we have a lot bigger things to worry about
02:32:28 than whether we should get the Nobel prize or not, right?
02:32:31 Yeah.
02:32:31 Because when you have machines
02:32:35 that can outperform our best scientists at science,
02:32:39 they can probably outperform us
02:32:41 at a lot of other stuff as well,
02:32:44 which can at a minimum make them
02:32:46 incredibly powerful agents in the world.
02:32:49 And I think it’s a mistake to think
02:32:53 we only have to start worrying about loss of control
02:32:57 when machines get to AGI across the board,
02:32:59 where they can do everything, all our jobs.
02:33:02 Long before that, they’ll be hugely influential.
02:33:07 We talked at length about how the hacking of our minds
02:33:12 with algorithms trying to get us glued to our screens,
02:33:18 right, has already had a big impact on society.
02:33:22 That was an incredibly dumb algorithm
02:33:24 in the grand scheme of things, right?
02:33:25 Just supervised machine learning,
02:33:27 yet it had huge impact.
02:33:29 So I just don’t want us to be lulled
02:33:32 into false sense of security
02:33:33 and think there won’t be any societal impact
02:33:35 until things reach human level,
02:33:37 because it’s happening already.
02:33:38 And I was just thinking the other week,
02:33:40 when I see some scaremonger going,
02:33:44 oh, the robots are coming,
02:33:47 the implication is always that they’re coming to kill us.
02:33:50 Yeah.
02:33:51 And maybe you should have worried about that
02:33:52 if you were in Nagorno Karabakh
02:33:54 during the recent war there.
02:33:55 But more seriously, the robots are coming right now,
02:34:01 but they’re mainly not coming to kill us.
02:34:03 They’re coming to hack us.
02:34:06 They’re coming to hack our minds,
02:34:08 into buying things that maybe we didn’t need,
02:34:11 to vote for people who may not have
02:34:13 our best interest in mind.
02:34:15 And it’s kind of humbling, I think,
02:34:17 actually, as a human being to admit
02:34:20 that it turns out that our minds are actually
02:34:22 much more hackable than we thought.
02:34:24 And the ultimate insult is that we are actually
02:34:27 getting hacked by the machine learning algorithms
02:34:30 that are, in some objective sense,
02:34:31 much dumber than us, you know?
02:34:33 But maybe we shouldn’t be so surprised
02:34:35 because, you know, how do you feel about cute puppies?
02:34:40 Love them.
02:34:41 So, you know, you would probably argue
02:34:43 that in some across the board measure,
02:34:46 you’re more intelligent than they are,
02:34:47 but boy, are cute puppies good at hacking us, right?
02:34:50 Yeah.
02:34:51 They move into our house, persuade us to feed them
02:34:53 and do all these things.
02:34:54 And what do they ever do for us?
02:34:56 Yeah.
02:34:57 Other than being cute and making us feel good, right?
02:35:00 So if puppies can hack us,
02:35:03 maybe we shouldn’t be so surprised
02:35:04 if pretty dumb machine learning algorithms can hack us too.
02:35:09 Not to speak of cats, which is another level.
02:35:11 And I think, to counter your previous point,
02:35:13 let us not forget
02:35:15 that there are evil creatures in this world.
02:35:18 We can all agree that cats are as close
02:35:20 to objective evil as we can get.
02:35:22 But that’s just me saying that.
02:35:24 Okay, so you have.
02:35:25 Have you seen the cartoon?
02:35:27 I think it’s maybe The Onion,
02:35:31 with this incredibly cute kitten.
02:35:33 And underneath it just says something like,
02:35:36 thinks about murder all day.
02:35:38 Exactly.
02:35:41 That’s accurate.
02:35:43 You’ve mentioned offline that there might be a link
02:35:45 between post biological AGI and SETI.
02:35:47 So last time we talked,
02:35:52 you’ve talked about this intuition
02:35:54 that we humans might be quite unique
02:35:59 in our galactic neighborhood.
02:36:02 Perhaps in our galaxy,
02:36:03 perhaps in the entirety of the observable universe,
02:36:06 we might be the only intelligent civilization here,
02:36:10 and you argue pretty well for that thought.
02:36:17 So I have a few little questions around this.
02:36:21 One, the scientific question:
02:36:24 if you were wrong in that intuition,
02:36:33 in which way do you think you would be surprised?
02:36:36 Like why would you be wrong,
02:36:38 if we find out that you ended up being wrong?
02:36:41 Like in which dimension?
02:36:43 So like, is it because we can’t see them?
02:36:48 Is it because the nature of their intelligence
02:36:51 or the nature of their life is totally different
02:36:54 than we can possibly imagine?
02:36:56 Is it because of
02:37:00 something about the great filters
02:37:02 and surviving them,
02:37:04 or maybe because we’re being protected from the signals,
02:37:08 all those explanations for why we haven’t heard
02:37:15 a big, loud signal, like a red light that says, we’re here?
02:37:21 So there are actually two separate things there
02:37:23 that I could be wrong about,
02:37:24 two separate claims that I made, right?
02:37:28 One of them is, I made the claim,
02:37:32 I think most civilizations,
02:37:36 when you’re going from simple bacteria like things
02:37:41 to space colonizing civilizations,
02:37:47 they spend only a very, very tiny fraction
02:37:50 of their life being where we are.
02:37:55 That I could be wrong about.
02:37:57 The other one I could be wrong about
02:37:58 is the quite different statement that, actually,
02:38:01 I’m guessing that we are the only civilization
02:38:04 in our observable universe
02:38:06 from which light has reached us so far
02:38:08 that’s actually gotten far enough to invent telescopes.
02:38:12 So let’s talk about maybe both of them in turn
02:38:13 because they really are different.
02:38:15 The first one, if you look at the N equals one,
02:38:19 the data point we have on this planet, right?
02:38:22 So we spent four and a half billion years
02:38:25 futzing around on this planet with life, right?
02:38:28 And most of it was pretty lame stuff
02:38:32 from an intelligence perspective,
02:38:33 you know, it was bacteria,
02:38:39 and then things gradually accelerated, right?
02:38:41 The dinosaurs spent over a hundred million years
02:38:43 stomping around here without even inventing smartphones.
02:38:46 And then very recently, you know,
02:38:50 we’ve only spent 400 years
02:38:52 going from Newton to us, right?
02:38:55 In terms of technology.
02:38:56 And look what we’ve done even, you know,
02:39:00 when I was a little kid, there was no internet even.
02:39:02 So I think it’s pretty likely,
02:39:05 in the case of this planet, right,
02:39:08 that we’re either gonna really get our act together
02:39:12 and start spreading life into space this century,
02:39:15 and doing all sorts of great things,
02:39:16 or we’re gonna wipe ourselves out.
02:39:18 It’s a little hard.
02:39:20 I could be wrong in the sense that maybe
02:39:23 what happened on this earth is very atypical.
02:39:25 And for some reason, what’s more common on other planets
02:39:28 is that they spend an enormously long time
02:39:31 futzing around with the ham radio and things,
02:39:33 but they just never really take it to the next level
02:39:36 for reasons I haven’t understood.
02:39:38 I’m humble and open to that.
02:39:40 But I would bet at least 10 to one
02:39:42 that our situation is more typical
02:39:45 because the whole thing with Moore’s law
02:39:46 and accelerating technology,
02:39:48 it’s pretty obvious why it’s happening.
02:39:51 Everything that grows exponentially,
02:39:52 we call it an explosion,
02:39:54 whether it’s a population explosion or a nuclear explosion,
02:39:56 it’s always caused by the same thing.
02:39:58 It’s that the next step triggers a step after that.
02:40:01 So today’s technology
02:40:04 enables tomorrow’s technology,
02:40:06 and that enables the next level.
02:40:09 And because the technology is always better,
02:40:13 of course, the steps can come faster and faster.
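A toy sketch of that accelerating pattern (the numbers here are my own illustration, not anything stated in the conversation): if each technology generation shortens the wait for the next one by a constant factor, the gaps between steps shrink rapidly.

```python
# Toy model of "each step triggers the next, faster and faster":
# every generation of technology cuts the wait for the following
# generation by a constant factor (both numbers are illustrative).

def generation_times(first_gap_years=100.0, speedup=1.5, generations=10):
    """Return the cumulative year at which each generation arrives."""
    t, gap, times = 0.0, first_gap_years, []
    for _ in range(generations):
        t += gap
        times.append(round(t, 1))
        gap /= speedup  # better tech shortens the wait for the next step
    return times

if __name__ == "__main__":
    print(generation_times())
```

With these made-up numbers the gap falls from 100 years to under three years within ten generations, which is the explosive, self-accelerating behavior being described.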
02:40:17 On the other question that I might be wrong about,
02:40:19 that’s the much more controversial one, I think.
02:40:22 But before we close out on this first one,
02:40:24 if it’s true
02:40:27 that most civilizations spend only a very short amount
02:40:30 of their total time in the stage, say,
02:40:32 between inventing
02:40:37 telescopes or mastering electricity
02:40:40 and leaving there and doing space travel,
02:40:43 if that’s actually generally true,
02:40:46 then that should apply also elsewhere out there.
02:40:49 So we should be very, very surprised
02:40:52 if we find some random civilization
02:40:55 and we happen to catch them exactly
02:40:56 in that very, very short stage.
02:40:58 It’s much more likely
02:40:59 that we find a planet full of bacteria.
02:41:02 Or that we find some civilization
02:41:05 that’s already post biological
02:41:07 and has done some really cool galactic construction projects
02:41:11 in their galaxy.
02:41:13 Would we be able to recognize them, do you think?
02:41:15 Is it possible that we just can’t,
02:41:17 I mean, this post biological world,
02:41:21 could it be just existing in some other dimension?
02:41:23 It could be just all a virtual reality game
02:41:26 for them or something, I don’t know,
02:41:28 that changes things completely,
02:41:30 so that we won’t be able to detect them.
02:41:32 We have to be honestly very humble about this.
02:41:35 I think I said earlier the number one principle
02:41:39 of being a scientist is you have to be humble
02:41:40 and willing to acknowledge that everything we think
02:41:42 or guess might be totally wrong.
02:41:45 Of course, you could imagine some civilization
02:41:46 where they all decide to become Buddhists
02:41:48 and very inward looking
02:41:49 and just move into their little virtual reality
02:41:52 and not disturb the flora and fauna around them
02:41:55 and we might not notice them.
02:41:58 But this is a numbers game, right?
02:41:59 If you have millions of civilizations out there
02:42:02 or billions of them,
02:42:03 all it takes is one with a more ambitious mentality
02:42:08 that decides, hey, we are gonna go out
02:42:10 and settle a bunch of other solar systems
02:42:15 and maybe galaxies.
02:42:17 And then it doesn’t matter
02:42:18 if they’re a bunch of quiet Buddhists,
02:42:19 we’re still gonna notice that expansionist one, right?
02:42:23 And it seems like quite the stretch to assume that,
02:42:26 now we know even in our own galaxy
02:42:28 that there are probably a billion or more planets
02:42:33 that are pretty Earth like.
02:42:35 And many of them were formed over a billion years
02:42:37 before ours, so they had a big head start.
02:42:40 So if you actually assume also
02:42:43 that life happens kind of automatically
02:42:46 on an Earth like planet,
02:42:48 I think it’s quite the stretch to then go and say,
02:42:52 okay, so there are another billion civilizations out there
02:42:55 that also have our level of tech
02:42:56 and they all decided to become Buddhists
02:42:59 and not a single one decided to go Hitler on the galaxy
02:43:02 and say, we need to go out and colonize
02:43:05 or not a single one decided for more benevolent reasons
02:43:08 to go out and get more resources.
02:43:11 That seems like a bit of a stretch, frankly.
02:43:13 And this leads into the second thing
02:43:16 you challenged me that I might be wrong about,
02:43:18 how rare or common is life, you know?
02:43:22 So Francis Drake, when he wrote down the Drake equation,
02:43:25 multiplied together a huge number of factors
02:43:27 and then we don’t know any of them.
02:43:29 So we know even less about what you get
02:43:31 when you multiply together the whole product.
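For reference, the standard textbook form of the Drake equation (the individual factor names are not spelled out in the conversation) is the product of exactly those unknowns:

```latex
% Drake equation, standard form:
%   N   = expected number of detectable civilizations in our galaxy
%   R_* = rate of star formation
%   f_p = fraction of stars with planets
%   n_e = habitable planets per star with planets
%   f_l = fraction of those on which life appears
%   f_i = fraction of those that develop intelligence
%   f_c = fraction of those that become detectable
%   L   = lifetime of the detectable phase
N = R_{*} \cdot f_{p} \cdot n_{e} \cdot f_{l} \cdot f_{i} \cdot f_{c} \cdot L
```

The final factor L, the lifetime of the detectable phase, is the one that comes back later in the conversation when civilizations hitting a wall is discussed.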
02:43:35 Since then, a lot of those factors
02:43:37 have become much better known.
02:43:38 One of his big uncertainties was
02:43:40 how common is it that a solar system even has a planet?
02:43:44 Well, now we know it’s very common.
02:43:46 Earth like planets, we now know,
02:43:48 are a dime a dozen, there are many, many of them,
02:43:50 even in our galaxy.
02:43:52 At the same time, you know, thanks to the SETI project
02:43:55 and its cousins, which I’m a big supporter of
02:43:58 and think we should keep doing,
02:44:00 we’ve learned a lot.
02:44:02 We’ve learned that so far,
02:44:03 all we have is still unconvincing hints, nothing more, right?
02:44:08 And there are certainly many scenarios
02:44:10 where it would be dead obvious.
02:44:13 If there were a hundred million
02:44:15 other human like civilizations in our galaxy,
02:44:19 it would not be that hard to notice some of them
02:44:21 with today’s technology and we haven’t, right?
02:44:23 So what we can say is, well, okay,
02:44:27 we can rule out that there is a human level civilization
02:44:30 on the moon, and in fact in many nearby solar systems,
02:44:34 whereas we cannot rule out, of course,
02:44:37 that there is something like Earth sitting in a galaxy
02:44:41 five billion light years away.
02:44:45 But we’ve ruled out a lot
02:44:46 and that’s already kind of shocking
02:44:48 given that there are all these planets there, you know?
02:44:50 So like, where are they?
02:44:51 Where are they all?
02:44:52 That’s the classic Fermi paradox.
02:44:54 And so my argument, which might very well be wrong,
02:44:59 it’s very simple really, it just goes like this.
02:45:01 Okay, we have no clue about this.
02:45:05 It could be the probability of getting life
02:45:07 on a random planet, it could be 10 to the minus one
02:45:11 a priori, or 10 to the minus five, 10 to the minus 10, 10 to the minus 20,
02:45:14 10 to the minus 30, 10 to the minus 40.
02:45:17 Basically every order of magnitude is about equally likely.
02:45:21 When you then do the math and ask the question,
02:45:24 how close is our nearest neighbor?
02:45:27 It’s again, equally likely that it’s 10 to the 10 meters away,
02:45:30 10 to the 20 meters away, 10 to the 30 meters away.
02:45:33 We have some nerdy ways of talking about this
02:45:35 with Bayesian statistics and a uniform log prior,
02:45:38 but that’s irrelevant.
02:45:39 This is the simple basic argument.
02:45:42 And now comes the data.
02:45:43 So we can say, okay, there are all these orders
02:45:46 of magnitude, 10 to the 26 meters away,
02:45:49 there’s the edge of our observable universe.
02:45:51 If it’s farther than that, light hasn’t even reached us yet.
02:45:54 If it’s less than 10 to the 16 meters away,
02:45:58 well, it’s within Earth’s,
02:46:02 it’s no farther away than the sun.
02:46:03 We can definitely rule that out.
02:46:07 So I think about it like this,
02:46:08 a priori before we looked at the telescopes,
02:46:11 it could be 10 to the 10 meters, 10 to the 20,
02:46:14 10 to the 30, 10 to the 40, 10 to the 50, 10 to blah, blah, blah.
02:46:16 Equally likely anywhere here.
02:46:18 And now we’ve ruled out like this chunk.
02:46:21 And here is the edge of our observable universe already.
02:46:27 So I’m certainly not saying I don’t think
02:46:30 there’s any life elsewhere in space.
02:46:32 If space is infinite,
02:46:33 then you’re basically a hundred percent guaranteed
02:46:35 that there is, but the probability
02:46:41 that the nearest neighbor
02:46:42 happens to be in this little region
02:46:43 between where we would have seen it already
02:46:47 and where we will never see it,
02:46:48 that’s actually significantly less than one, I think.
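To put rough numbers on that argument, here is a minimal sketch under assumptions of my own (the exact range of the prior is not pinned down in the conversation): with a log-uniform prior on the distance to the nearest civilization, the chance of it landing in the awkward window between the scales already probed and the edge of the observable universe is just the width of that window in orders of magnitude divided by the width of the whole prior range.

```python
# Minimal sketch of the log-uniform-prior argument. The prior bounds
# (10^10 m to 10^100 m) are illustrative assumptions, not values from
# the conversation; the window endpoints follow the numbers mentioned
# (roughly 10^16 m already probed, 10^26 m = observable-universe edge).

import math

def prob_in_window(prior_min_m=1e10, prior_max_m=1e100,
                   ruled_out_below_m=1e16, horizon_m=1e26):
    """Prior mass a log-uniform prior puts on [ruled_out_below_m, horizon_m]."""
    total = math.log10(prior_max_m) - math.log10(prior_min_m)
    window = math.log10(horizon_m) - math.log10(ruled_out_below_m)
    return window / total

if __name__ == "__main__":
    print(f"{prob_in_window():.2f}")  # ~0.11 with these assumed bounds
```

With the prior spread over 90 orders of magnitude, only about 11 percent of the prior mass falls in that window; stretch or shrink the assumed range and the number moves, but it stays well below one, which is the point being made.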
02:46:51 And I think there’s a moral lesson from this,
02:46:54 which is really important,
02:46:55 which is to be good stewards of this planet
02:47:00 and this shot we’ve had.
02:47:01 It can be very dangerous to say,
02:47:03 oh, it’s fine if we nuke our planet or ruin the climate
02:47:07 or mess it up with unaligned AI,
02:47:10 because I know there is this nice Star Trek fleet out there.
02:47:15 They’re gonna swoop in and take over where we failed.
02:47:18 Just like it wasn’t a big deal
02:47:19 that the Easter Island losers wiped themselves out.
02:47:23 That’s a dangerous way of lulling yourself
02:47:25 into a false sense of security.
02:47:27 If it’s actually the case that it might be up to us
02:47:32 and only us, the whole future of intelligent life
02:47:35 in our observable universe,
02:47:37 then I think it really puts a lot of responsibility
02:47:42 on our shoulders.
02:47:43 It’s inspiring, it’s a little bit terrifying,
02:47:45 but it’s also inspiring.
02:47:46 But it’s empowering, I think, most of all,
02:47:48 because the biggest problem today is,
02:47:50 I see this even when I teach,
02:47:53 so many people feel that it doesn’t matter what they do
02:47:56 or we do, we feel disempowered.
02:47:58 Oh, it makes no difference.
02:48:02 This is about as far from that as you can come.
02:48:05 But we realize that what we do
02:48:07 on our little spinning ball here in our lifetime
02:48:12 could make the difference for the entire future of life
02:48:15 in our universe.
02:48:17 How empowering is that?
02:48:18 Yeah, survival of consciousness.
02:48:20 I mean, a very similar kind of empowering aspect
02:48:25 of the Drake equation is,
02:48:27 say there is a huge number of intelligent civilizations
02:48:31 that spring up everywhere,
02:48:32 but because of the L factor in the Drake equation,
02:48:34 the lifetime of a civilization,
02:48:38 maybe many of them hit a wall.
02:48:39 And just like you said, it’s clear that,
02:48:43 for us,
02:48:45 the one possible great filter seems to be coming
02:48:49 in the next 100 years.
02:48:51 So it’s also empowering to say,
02:48:53 okay, well, we have a chance to not get filtered out.
02:48:58 I mean, the way great filters work,
02:49:00 they just get most of them.
02:49:02 Exactly.
02:49:02 Nick Bostrom has articulated this really beautifully too.
02:49:06 Every time yet another search for life on Mars
02:49:09 comes back negative or something,
02:49:11 I’m like, yes, yes.
02:49:14 Our odds for us surviving are the best.
02:49:17 You already made the argument in broad brush there, right?
02:49:20 But just to unpack it, right?
02:49:22 The point is we already know
02:49:26 there is a crap ton of planets out there
02:49:28 that are Earth like,
02:49:29 and we also know that most of them do not seem
02:49:33 to have anything like our kind of life on them.
02:49:35 So what went wrong?
02:49:37 There’s clearly, along the evolutionary path,
02:49:39 at least one filter or roadblock
02:49:42 in going from no life to spacefaring life.
02:49:45 And where is it?
02:49:48 Is it in front of us or is it behind us, right?
02:49:51 If there’s no filter behind us,
02:49:54 and we keep finding all sorts of little mice on Mars
02:50:00 or whatever, right?
02:50:01 That’s actually very depressing
02:50:03 because that makes it much more likely
02:50:04 that the filter is in front of us.
02:50:06 And that what actually is going on
02:50:08 is like the ultimate dark joke
02:50:11 that whenever a civilization
02:50:13 invents sufficiently powerful tech,
02:50:15 it’s just, you just set your clock.
02:50:17 And then after a little while it goes poof
02:50:19 for one reason or other and wipes itself out.
02:50:21 Now wouldn’t that be like utterly depressing
02:50:24 if we’re actually doomed?
02:50:26 Whereas if it turns out that there really is
02:50:29 a great filter early on,
02:50:31 where for whatever reason it seems to be really hard
02:50:33 to get to the stage of sexually reproducing organisms
02:50:39 or even the first ribosome or whatever, right?
02:50:43 Or maybe you have lots of planets with dinosaurs and cows,
02:50:47 but for some reason they tend to get stuck there
02:50:48 and never invent smartphones.
02:50:50 All of those are huge boosts for our own odds
02:50:55 because been there done that, you know?
02:50:58 It doesn’t matter how hard or unlikely it was
02:51:01 that we got past that roadblock
02:51:03 because we already did.
02:51:05 And then that makes it likely
02:51:07 that the future is in our own hands, we’re not doomed.
02:51:11 So that’s why I think the fact
02:51:14 that life is rare in the universe,
02:51:18 it’s not just something that there is some evidence for,
02:51:21 but also something we should actually hope for.
02:51:26 So that’s the end, the mortality,
02:51:29 the death of human civilization
02:51:31 that we’ve been discussing, and life
02:51:33 maybe prospering beyond any kind of great filter.
02:51:36 Do you think about your own death?
02:51:39 Does it make you sad that you may not witness some of the,
02:51:45 you know, you lead a research group
02:51:47 working on some of the biggest questions
02:51:49 in the universe actually,
02:51:51 both on the physics and the AI side?
02:51:53 Does it make you sad that you may not be able
02:51:55 to see some of these exciting things come to fruition
02:51:58 that we’ve been talking about?
02:52:00 Of course, of course it sucks, the fact that I’m gonna die.
02:52:04 I remember once when I was much younger,
02:52:07 my dad made this remark that life is fundamentally tragic.
02:52:10 And I’m like, what are you talking about, daddy?
02:52:13 And then many years later, I felt,
02:52:15 now I feel I totally understand what he means.
02:52:17 You know, we grow up, we’re little kids
02:52:19 and everything is infinite and it’s so cool.
02:52:21 And then suddenly we find out that actually, you know,
02:52:25 you’ve only got so much,
02:52:26 you’re gonna get game over at some point.
02:52:30 So of course it’s something that’s sad.
02:52:36 Are you afraid?
02:52:42 No, not in the sense that I think anything terrible
02:52:46 is gonna happen after I die or anything like that.
02:52:48 No, I think it’s really gonna be a game over,
02:52:50 but it’s more that it makes me very acutely aware
02:52:56 of what a wonderful gift this is
02:52:57 that I get to be alive right now.
02:53:00 And it’s a steady reminder to just live life to the fullest
02:53:04 and really enjoy it because it is finite, you know.
02:53:08 And I think actually, we all get
02:53:11 regular reminders, when someone near and dear to us dies,
02:53:14 that one day it’s gonna be our turn.
02:53:19 It adds this kind of focus.
02:53:21 I wonder what it would feel like actually
02:53:23 to be an immortal being, whether they might even enjoy
02:53:26 some of the wonderful things of life a little bit less
02:53:29 just because there isn’t that.
02:53:33 Finiteness?
02:53:34 Yeah.
02:53:35 Do you think that could be a feature, not a bug,
02:53:38 the fact that we beings are finite?
02:53:42 Maybe there’s lessons for engineering
02:53:44 in artificial intelligence systems as well
02:53:46 that are conscious.
02:53:48 Like do you think, is it possible
02:53:53 that the reason the pistachio ice cream is delicious
02:53:56 is the fact that you’re going to die one day
02:53:59 and you will not have all the pistachio ice cream
02:54:03 that you could eat because of that fact?
02:54:06 Well, let me say two things.
02:54:07 First of all, it’s actually quite profound
02:54:09 what you’re saying.
02:54:10 I do think I appreciate the pistachio ice cream
02:54:12 a lot more knowing that
02:54:14 there’s only a finite number of times I get to enjoy it.
02:54:17 And I can only remember a finite number of times
02:54:19 in the past.
02:54:21 And moreover, my life is not so long
02:54:25 that it just starts to feel like things are repeating
02:54:26 themselves in general.
02:54:28 It’s so new and fresh.
02:54:30 I also think though that death is a little bit overrated
02:54:36 in the sense that it comes from a sort of outdated view
02:54:42 of physics and what life actually is.
02:54:45 Because if you ask, okay, what is it that’s gonna die
02:54:49 exactly, what am I really?
02:54:52 When I say I feel sad about the idea of myself dying,
02:54:56 am I really sad that this skin cell here is gonna die?
02:54:59 Of course not, because it’s gonna die next week anyway
02:55:01 and I’ll grow a new one, right?
02:55:04 And it’s not any of my cells that I’m associating really
02:55:08 with who I really am.
02:55:11 Nor is it any of my atoms or quarks or electrons.
02:55:15 In fact, basically all of my atoms get replaced
02:55:19 on a regular basis, right?
02:55:20 So what is it that’s really me
02:55:22 from a more modern physics perspective?
02:55:24 It’s the information processing in me.
02:55:28 That’s my memories,
02:55:31 that’s my values, my dreams, my passion, my love.
02:55:40 That’s what’s really fundamentally me.
02:55:43 And frankly, not all of that will die when my body dies.
02:55:48 Like Richard Feynman, for example, his body died of cancer,
02:55:55 but many of his ideas that he felt made him very him
02:55:59 actually live on.
02:56:01 This is my own little personal tribute to Richard Feynman.
02:56:04 I try to keep a little bit of him alive in myself.
02:56:07 I’ve even quoted him today, right?
02:56:09 Yeah, he almost came alive for a brief moment
02:56:11 in this conversation, yeah.
02:56:13 Yeah, and this honestly gives me some solace.
02:56:17 When I work as a teacher, I feel that
02:56:20 if I can actually share a bit of myself
02:56:25 that my students feel is worthy enough to copy and adopt
02:56:30 as some part of things that they know
02:56:33 or they believe or aspire to,
02:56:36 now I live on also a little bit in them, right?
02:56:39 And so being a teacher
02:56:44 is something that also contributes
02:56:49 to making me a little teeny bit less mortal, right?
02:56:53 Because I’m not, at least not all gonna die all at once,
02:56:56 right?
02:56:57 And I find that a beautiful tribute to people
02:56:59 we do respect.
02:57:01 If we can remember them and carry in us
02:57:05 the things that we felt were the most awesome about them,
02:57:10 right, then they live on.
02:57:11 And I’m getting a bit emotional here,
02:57:13 but it’s a very beautiful idea you bring up there.
02:57:16 I think we should stop this old fashioned materialism
02:57:19 of just equating who we are with our quarks and electrons.
02:57:25 There’s no scientific basis for that really.
02:57:27 And it’s also very uninspiring.
02:57:33 Now, if you look a little bit towards the future, right?
02:57:36 One thing which really sucks about humans dying is that even
02:57:40 though some of their teachings and memories and stories
02:57:43 and ethics and so on will be copied by those around them,
02:57:47 hopefully, a lot of it can’t be copied
02:57:50 and just dies with them, with their brain.
02:57:51 And that really sucks.
02:57:53 That’s the fundamental reason why we find it so tragic
02:57:56 when someone goes from having all this information there
02:57:59 to it more or less just being gone, ruined, right?
02:58:03 With more post biological intelligence,
02:58:07 that’s going to shift a lot, right?
02:58:10 The only reason it’s so hard to make a backup of your brain
02:58:13 in its entirety is exactly
02:58:15 because it wasn’t built for that, right?
02:58:17 If you have a future machine intelligence,
02:58:21 there’s no reason for why it has to die at all.
02:58:24 If you want to copy it, whatever it is,
02:58:28 into some other machine intelligence,
02:58:30 into some other quark blob, right,
02:58:36 you can copy not just some of it, but all of it, right?
02:58:39 And so in that sense,
02:58:45 you can get immortality because all the information
02:58:48 can be copied out of any individual entity.
02:58:51 And it’s not just mortality that will change
02:58:54 if we get to more post biological life.
02:58:56 It’s also with that, very much the whole individualism
02:59:03 we have now, right?
02:59:04 The reason that we make such a big distinction
02:59:05 between me and you is exactly because
02:59:09 we’re a little bit limited in how much we can copy.
02:59:10 Like I would just love to go like this
02:59:13 and copy your Russian skills, Russian speaking skills.
02:59:17 Wouldn’t it be awesome?
02:59:18 But I can’t, I have to actually work for years
02:59:21 if I want to get better at it.
02:59:23 But if we were robots.
02:59:27 Just copy and paste freely, then that completely loses its meaning.
02:59:31 It washes away the sense of what immortality is.
02:59:35 And also individuality a little bit, right?
02:59:37 We would start feeling much more,
02:59:40 maybe we would feel much more collaborative with each other
02:59:43 if we can just, hey, you know, I’ll give you my Russian,
02:59:45 you can give me your Russian
02:59:46 and I’ll give you whatever,
02:59:47 and suddenly you can speak Swedish.
02:59:50 Maybe that’s a bit of a bad trade for you,
02:59:52 but whatever else you want from my brain, right?
02:59:54 And there’ve been a lot of sci fi stories
02:59:58 about hive minds and so on,
02:59:59 where experiences
03:00:02 can be more broadly shared.
03:00:05 And I think,
03:00:08 I don’t pretend to know what it would feel like
03:00:12 to be a super intelligent machine,
03:00:16 but I’m quite confident that however it feels
03:00:20 about mortality and individuality
03:00:22 will be very, very different from how it is for us.
03:00:26 Well, for us, mortality and finiteness
03:00:30 seems to be pretty important at this particular moment.
03:00:34 And so all good things must come to an end.
03:00:37 Just like this conversation, Max.
03:00:39 I saw that coming.
03:00:40 Sorry, this is the world’s worst transition.
03:00:44 I could talk to you forever.
03:00:45 It’s such a huge honor that you’ve spent time with me.
03:00:49 The honor is mine.
03:00:50 Thank you so much for getting me essentially
03:00:53 to start this podcast by doing the first conversation,
03:00:55 making me realize I was falling in love
03:00:58 with conversation itself.
03:01:01 And thank you so much for inspiring
03:01:03 so many people in the world with your books,
03:01:05 with your research, with your talking,
03:01:07 and with this ripple effect of friends,
03:01:12 including Elon and everybody else that you inspire.
03:01:15 So thank you so much for talking today.
03:01:18 Thank you, I feel so fortunate
03:01:21 that you’re doing this podcast
03:01:23 and getting so many interesting voices out there
03:01:27 into the ether and not just the five second sound bites,
03:01:30 but so many of the interviews I’ve watched you do.
03:01:33 You really let people go into depth
03:01:36 in a way which we sorely need in this day and age.
03:01:38 That I got to be number one, I feel super honored.
03:01:41 Yeah, you started it.
03:01:43 Thank you so much, Max.
03:01:45 Thanks for listening to this conversation
03:01:47 with Max Tegmark, and thank you to our sponsors,
03:01:50 the Jordan Harbinger Show, Four Sigmatic Mushroom Coffee,
03:01:54 BetterHelp Online Therapy, and ExpressVPN.
03:01:58 So the choice is wisdom, caffeine, sanity, or privacy.
03:02:04 Choose wisely, my friends.
03:02:05 And if you wish, click the sponsor links below
03:02:08 to get a discount and to support this podcast.
03:02:11 And now let me leave you with some words from Max Tegmark.
03:02:15 If consciousness is the way that information feels
03:02:18 when it’s processed in certain ways,
03:02:21 then it must be substrate independent.
03:02:24 It’s only the structure of information processing
03:02:26 that matters, not the structure of the matter
03:02:29 doing the information processing.
03:02:31 Thank you for listening, and hope to see you next time.