Nick Bostrom: Simulation and Superintelligence #83

Transcript

00:00:00 The following is a conversation with Nick Bostrom, a philosopher at University of Oxford

00:00:05 and the director of the Future of Humanity Institute.

00:00:08 He has worked on fascinating and important ideas in existential risk, simulation hypothesis,

00:00:15 human enhancement ethics, and the risks of superintelligent AI systems, including in

00:00:20 his book, Superintelligence.

00:00:23 I can see talking to Nick multiple times in this podcast, many hours each time, because

00:00:27 he has done some incredible work in artificial intelligence, in technology, space, science,

00:00:34 and really philosophy in general, but we have to start somewhere.

00:00:38 This conversation was recorded before the outbreak of the coronavirus pandemic that

00:00:43 both Nick and I, I’m sure, will have a lot to say about next time we speak, and perhaps

00:00:49 that is for the best, because the deepest lessons can be learned only in retrospect

00:00:54 when the storm has passed.

00:00:56 I do recommend you read many of his papers on the topic of existential risk, including

00:01:01 the technical report titled Global Catastrophic Risks Survey that he coauthored with Anders

00:01:07 Sandberg.

00:01:08 For everyone feeling the medical, psychological, and financial burden of this crisis, I’m

00:01:14 sending love your way.

00:01:15 Stay strong.

00:01:16 We’re in this together.

00:01:17 We’ll beat this thing.

00:01:20 This is the Artificial Intelligence Podcast.

00:01:22 If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support

00:01:28 it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M

00:01:33 A N.

00:01:34 As usual, I’ll do one or two minutes of ads now and never any ads in the middle that

00:01:39 can break the flow of the conversation.

00:01:41 I hope that works for you and doesn’t hurt the listening experience.

00:01:46 This show is presented by Cash App, the number one finance app in the App Store.

00:01:50 When you get it, use code LEXPODCAST.

00:01:53 Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with

00:01:57 as little as one dollar.

00:01:59 Since Cash App does fractional share trading, let me mention that the order execution algorithm

00:02:05 that works behind the scenes to create the abstraction of fractional orders is an algorithmic

00:02:10 marvel.

00:02:11 So big props to the Cash App engineers for solving a hard problem that in the end provides

00:02:16 an easy interface that takes a step up to the next layer of abstraction over the stock

00:02:20 market, making trading more accessible for new investors and diversification much easier.

00:02:26 So again, if you get Cash App from the App Store, Google Play, and use the code LEXPODCAST,

00:02:33 you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping

00:02:38 to advance robotics and STEM education for young people around the world.

00:02:43 And now, here’s my conversation with Nick Bostrom.

00:02:49 At the risk of asking the Beatles to play Yesterday or the Rolling Stones to play Satisfaction,

00:02:54 let me ask you the basics.

00:02:56 What is the simulation hypothesis?

00:02:59 That we are living in a computer simulation.

00:03:02 What is a computer simulation?

00:03:04 How are we supposed to even think about that?

00:03:06 Well, so the hypothesis is meant to be understood in a literal sense, not that we can kind of

00:03:15 metaphorically view the universe as an information processing physical system, but that there

00:03:21 is some advanced civilization who built a lot of computers and that what we experience

00:03:28 is an effect of what’s going on inside one of those computers so that the world around

00:03:34 us, our own brains, everything we see and perceive and think and feel would exist because

00:03:43 this computer is running certain programs.

00:03:48 So do you think of this computer as something similar to the computers of today, these deterministic

00:03:55 sort of Turing machine type things?

00:03:58 Is that what we’re supposed to imagine or we’re supposed to think of something more

00:04:01 like a quantum mechanical system?

00:04:07 Something much bigger, something much more complicated, something much more mysterious

00:04:11 from our current perspective?

00:04:12 The ones we have today would do fine, I mean, bigger, certainly.

00:04:15 You’d need more memory and more processing power.

00:04:18 I don’t think anything else would be required.

00:04:21 Now, it might well be that they do have additional, maybe they have quantum computers and other

00:04:26 things that would give them even more of, it seems kind of plausible, but I don’t think

00:04:31 it’s a necessary assumption in order to get to the conclusion that a technologically

00:04:38 mature civilization would be able to create these kinds of computer simulations with conscious

00:04:44 beings inside them.

00:04:46 So do you think the simulation hypothesis is an idea that’s most useful in philosophy,

00:04:52 computer science, physics, sort of where do you see it having a valuable kind of starting

00:05:02 point in terms of a thought experiment?

00:05:05 Is it useful?

00:05:06 I guess it’s more informative and interesting and maybe important, but it’s not designed

00:05:14 to be useful for something else.

00:05:16 Okay, interesting, sure.

00:05:18 But is it philosophically interesting or is there some kind of implications of computer

00:05:23 science and physics?

00:05:24 I think not so much for computer science or physics per se.

00:05:29 Certainly it would be of interest in philosophy, I think also to say cosmology or physics in

00:05:37 as much as you’re interested in the fundamental building blocks of the world and the rules

00:05:43 that govern it.

00:05:46 If we are in a simulation, there is then the possibility that, say, physics at the level

00:05:50 of the computer running the simulation could be different from the physics governing

00:05:57 phenomena in the simulation.

00:05:59 So I think it might be interesting from the point of view of religion or just for kind of trying

00:06:06 to figure out what the heck is going on.

00:06:09 So we mentioned the simulation hypothesis so far.

00:06:14 There is also the simulation argument, and I tend to make a distinction between the two.

00:06:19 So simulation hypothesis, we are living in a computer simulation.

00:06:23 Simulation argument, this argument that tries to show that one of three propositions is

00:06:27 true, one of which is the simulation hypothesis, but there are two alternatives in the original

00:06:34 simulation argument, which we can get to.

00:06:36 Yeah, let’s go there.

00:06:37 By the way, confusing terms because people will, I think, probably naturally think simulation

00:06:43 argument equals simulation hypothesis, just terminology wise.

00:06:47 But let’s go there.

00:06:48 So simulation hypothesis means that we are living in a simulation, the hypothesis that

00:06:52 we’re living in a simulation; the simulation argument has these three complete possibilities that

00:06:58 cover all possibilities.

00:07:00 So what are they?

00:07:01 Yeah.

00:07:02 So it’s like a disjunction.

00:07:03 It says at least one of these three is true, although it doesn’t on its own tell us which

00:07:08 one.

00:07:10 So the first one is that almost all civilizations that are at our current stage of technological development

00:07:17 go extinct before they reach technological maturity.

00:07:23 So there is some great filter that makes it so that basically none of the civilizations

00:07:34 throughout maybe a vast cosmos will ever get to realize the full potential of technological

00:07:41 development.

00:07:42 And this could be, theoretically speaking, this could be because most civilizations kill

00:07:47 themselves too eagerly or destroy themselves too eagerly, or it might be super difficult

00:07:52 to build a simulation.

00:07:55 So the span of time.

00:07:57 Theoretically it could be both.

00:07:58 Now I think it looks like we would technologically be able to get there in a time span that

00:08:04 is short compared to, say, the lifetime of planets and other sort of astronomical processes.

00:08:13 So your intuition is to build a simulation is not…

00:08:16 Well, so this is an interesting concept of technological maturity.

00:08:21 It’s kind of an interesting concept to have for other purposes as well.

00:08:25 We can see even based on our current limited understanding what some lower bound would

00:08:31 be on the capabilities that you could realize by just developing technologies that we already

00:08:37 see are possible.

00:08:38 So for example, one of my research fellows here, Eric Drexler, back in the 80s, studied

00:08:46 molecular manufacturing.

00:08:48 That is you could analyze using theoretical tools and computer modeling the performance

00:08:55 of various molecularly precise structures that we didn’t then and still don’t today

00:09:01 have the ability to actually fabricate.

00:09:04 But you could say that, well, if we could put these atoms together in this way, then

00:09:07 the system would be stable and it would rotate at this speed and have all these computational

00:09:13 characteristics.

00:09:16 And he also outlined some pathways that would enable us to get to this kind of molecular

00:09:22 manufacturing in the fullness of time.

00:09:25 And you could do other studies, as we’ve done.

00:09:28 You could look at the speed at which, say, it would be possible to colonize the galaxy

00:09:33 if you had mature technology.

00:09:36 We have an upper limit, which is the speed of light.

00:09:38 We have sort of a lower current limit, which is how fast current rockets go.

00:09:42 We know we can go faster than that by just making them bigger and have more fuel and

00:09:47 stuff.

00:09:48 We can then start to describe the technological affordances that would exist once a civilization

00:09:56 has had enough time to develop, at least those technologies we already know are possible.

00:10:01 Then maybe they would discover other new physical phenomena as well that we haven’t realized

00:10:05 that would enable them to do even more.

00:10:08 But at least there is this kind of basic set of capabilities.

00:10:11 Can you just linger on that, how do we jump from molecular manufacturing to deep space

00:10:18 exploration to mature technology?

00:10:23 What’s the connection there?

00:10:24 Well, so these would be two examples of technological capability sets that we can have a high degree

00:10:31 of confidence are physically possible in our universe and that a civilization that was

00:10:38 allowed to continue to develop its science and technology would eventually attain.

00:10:42 You can intuit like, we can kind of see the set of breakthroughs that are likely to happen.

00:10:48 So you can see like, what did you call it, the technological set?

00:10:53 With computers, maybe it’s easiest.

00:10:58 One is we could just imagine bigger computers using exactly the same parts that we have.

00:11:01 So you can kind of scale things that way, right?

00:11:04 But you could also make processors a bit faster.

00:11:07 If you had this molecular nanotechnology that Eric Drexler described, he characterized a

00:11:13 kind of crude computer built with these parts that would perform at a million times the

00:11:19 human brain while being significantly smaller, the size of a sugar cube.

00:11:25 And he made no claim that that’s the optimum computing structure, like for all you know,

00:11:30 we could build faster computers that would be more efficient, but at least you could

00:11:33 do that if you had the ability to do things that were atomically precise.

00:11:37 I mean, so you can then combine these two.

00:11:39 You could have this kind of nanomolecular ability to build things atom by atom and then

00:11:45 apply that, say, at the spatial scale that would be attainable through space colonizing technology.

00:11:53 You could then start, for example, to characterize a lower bound on the amount of computing power

00:11:58 that a technologically mature civilization would have.

00:12:01 If it could grab resources, you know, planets and so forth, and then use this molecular

00:12:07 nanotechnology to optimize them for computing, you’d get a very, very high lower bound on

00:12:15 the amount of compute.

00:12:17 So sorry, just to define some terms, so technologically mature civilization is one that took that

00:12:22 piece of technology to its lower bound.

00:12:26 What is a technologically mature civilization?

00:12:27 So that means it’s a stronger concept than we really need for the simulation hypothesis.

00:12:31 I just think it’s interesting in its own right.

00:12:34 So it would be the idea that there is some stage of technological development where you’ve

00:12:38 basically maxed out, that you developed all those general purpose, widely useful technologies

00:12:45 that could be developed, or at least kind of come very close to the, you know, 99.9%

00:12:51 there or something.

00:12:53 So that’s an independent question.

00:12:55 You can think either that there is such a ceiling, or you might think it just goes,

00:12:59 the technology tree just goes on forever.

00:13:03 Where does your sense fall?

00:13:04 I would guess that there is a maximum that you would start to asymptote towards.

00:13:10 So new things won’t keep springing up, new ceilings.

00:13:13 In terms of basic technological capabilities, I think that, yeah, there is like a finite

00:13:18 set of laws that can exist in this universe.

00:13:23 Moreover, I mean, I wouldn’t be that surprised if we actually reached close to that level

00:13:30 fairly shortly after we have, say, machine superintelligence.

00:13:33 So I don’t think it would take millions of years for a human originating civilization

00:13:39 to begin to do this.

00:13:42 It’s more likely to happen on historical timescales.

00:13:46 But that’s an independent speculation from the simulation argument.

00:13:51 I mean, for the purpose of the simulation argument, it doesn’t really matter whether

00:13:55 it goes indefinitely far up or whether there is a ceiling, as long as we know we can at

00:13:59 least get to a certain level.

00:14:01 And it also doesn’t matter whether that’s going to happen in 100 years or 5,000 years

00:14:06 or 50 million years.

00:14:08 Like the timescales really don’t make any difference for this.

00:14:11 Can you linger on that a little bit?

00:14:13 Like there’s a big difference between 100 years and 10 million years.

00:14:19 So does it really not matter? Because you just said it doesn’t matter if we jump scales

00:14:25 to beyond historical scales.

00:14:28 So we described that.

00:14:30 So for the simulation argument, sort of doesn’t it matter that, if it takes 10 million years,

00:14:40 it gives us a lot more opportunity to destroy civilization in the meantime?

00:14:44 Yeah, well, so it would shift around the probabilities between these three alternatives.

00:14:49 That is, if we are very, very far away from being able to create these simulations, if

00:14:54 it’s like, say, billions of years into the future, then it’s more likely that we will

00:14:58 fail ever to get there.

00:14:59 There’s more time for us to kind of go extinct along the way.

00:15:04 And so this is similarly for other civilizations.

00:15:06 So it is important to think about how hard it is to build a simulation.

00:15:11 In terms of figuring out which of the disjuncts.

00:15:14 But for the simulation argument itself, which is agnostic as to which of these three alternatives

00:15:19 is true.

00:15:20 Yeah.

00:15:21 Okay.

00:15:22 It’s like you don’t have to like the simulation argument would be true whether or not we thought

00:15:26 this could be done in 500 years or it would take 500 million years.

00:15:29 No, for sure.

00:15:30 The simulation argument stands.

00:15:31 I mean, I’m sure there might be some people who oppose it, but it doesn’t matter.

00:15:36 I mean, it’s very nice those three cases cover it.

00:15:39 But the fun part is at least not saying what the probabilities are, but kind of thinking

00:15:44 about, kind of intuiting, reasoning about what’s more likely, what are the kind of things that

00:15:50 would make some of the alternatives more or less likely.

00:15:54 But let’s actually, I don’t think we went through them.

00:15:56 So number one is we destroy ourselves before we ever create the simulation.

00:16:00 Right.

00:16:01 So that’s kind of sad, but we have to think not just what might destroy us.

00:16:07 I mean, so there could be some whatever disaster, some meteor slamming the earth a few years

00:16:14 from now that could destroy us.

00:16:16 Right.

00:16:17 But you’d have to postulate in order for this first disjunct to be true that almost all

00:16:24 civilizations throughout the cosmos also failed to reach technological maturity.

00:16:32 And the underlying assumption there is that there is likely a very large number of other

00:16:37 intelligent civilizations.

00:16:39 Well, if there are, yeah, then they would virtually all have to succumb in the same

00:16:45 way.

00:16:46 I mean, then that leads off another, I guess there are a lot of little digressions that

00:16:50 are interesting.

00:16:51 Definitely, let’s go there.

00:16:52 Let’s go there.

00:16:53 Keep dragging us back.

00:16:54 Well, there are these, there is a set of basic questions that always come up in conversations

00:16:58 with interesting people, like the Fermi paradox; like, you could almost define

00:17:05 whether a person is interesting by whether at some point the question of the Fermi paradox

00:17:09 comes up, like, well, so for what it’s worth, it looks to me that the universe is very big.

00:17:16 I mean, in fact, according to the most popular current cosmological theories, infinitely

00:17:23 big.

00:17:25 And so then it would follow pretty trivially that it would contain a lot of other civilizations,

00:17:31 in fact, infinitely many.

00:17:34 If you have some local stochasticity and infinitely many, it’s like, you know, infinitely many

00:17:39 lumps of matter, one next to another, there’s kind of random stuff in each one, then you’re

00:17:43 going to get all possible outcomes with probability one infinitely repeated.

00:17:51 So then certainly there would be a lot of extraterrestrials out there.

00:17:54 Even short of that, if the universe is very big, that might be a finite but large number.

00:18:02 If we were literally the only one, yeah, then of course, if we went extinct, then all of

00:18:09 civilizations at our current stage would have gone extinct before becoming technologically

00:18:14 mature.

00:18:15 So then it kind of becomes trivially true that a very high fraction of those went extinct.

00:18:22 But if we think there are many, I mean, it’s interesting, because there are certain things

00:18:25 that possibly could kill us, like if you look at existential risks, and it might be that

00:18:35 the best answer to what would be most likely to kill us might be a different answer

00:18:40 than the best answer to the question, if there is something that kills almost everyone, what

00:18:46 would that be?

00:18:47 Because that would have to be some risk factor that was kind of uniform over all possible

00:18:53 civilizations.

00:18:54 So in this, for the sake of this argument, you have to think about not just us, but like

00:18:59 every civilization dies out before they create the simulation or something very close to

00:19:05 everybody.

00:19:06 Okay.

00:19:07 So what’s number two? The number two is the convergence hypothesis, that is that maybe

00:19:14 like a lot of some of these civilizations do make it through to technological maturity,

00:19:18 but out of those who do get there, they all lose interest in creating these simulations.

00:19:26 So they just have the capability of doing it, but they choose not to.

00:19:32 Not just a few of them decide not to, but out of a million, maybe not even a single

00:19:40 one of them would do it.

00:19:41 And I think when you say lose interest, that sounds like unlikely because it’s like they

00:19:48 get bored or whatever, but it could be so many possibilities within that.

00:19:53 I mean, losing interest could be, it could be anything from it being exceptionally difficult

00:20:02 to do, to fundamentally changing the sort of the fabric of reality

00:20:09 if you do it, to ethical concerns; all those kinds of things could be exceptionally strong

00:20:14 pressures.

00:20:15 Well, certainly, I mean, yeah, ethical concerns.

00:20:18 I mean, it wouldn’t really be too difficult to do.

00:20:21 I mean, in a sense, that’s the first assumption that you get to technological maturity where

00:20:26 you would have the ability using only a tiny fraction of your resources to create many,

00:20:32 many simulations.

00:20:34 So it wouldn’t be the case that they would need to spend half of their GDP forever in

00:20:39 order to create one simulation and they had this like difficult debate about whether they

00:20:43 should invest half of their GDP for this.

00:20:46 It would more be like, well, if any little fraction of the civilization feels like doing

00:20:50 this at any point during maybe their millions of years of existence, then that would be

00:20:57 millions of simulations.

00:21:00 But certainly, there could be many conceivable reasons for why there would be this convergence,

00:21:07 many possible reasons for not running ancestor simulations or other computer simulations,

00:21:13 even if you could do so cheaply.

00:21:15 By the way, what’s an ancestor simulation?

00:21:17 Well, that would be the type of computer simulation that would contain people like those we think

00:21:24 have lived on our planet in the past and like ourselves in terms of the types of experiences

00:21:30 they have and where those simulated people are conscious.

00:21:33 So like not just simulated in the same sense that a non player character would be simulated

00:21:41 in a current computer game, where it kind of has like an avatar body and then a very

00:21:45 simple mechanism that moves it forward or backwards.

00:21:49 But something where the simulated being has a brain, let’s say that’s simulated at a sufficient

00:21:56 level of granularity that it would have the same subjective experiences as we have.

00:22:03 So where does consciousness fit into this?

00:22:06 Do you think simulation, I guess there are different ways to think about how this can

00:22:10 be simulated, just like you’re talking about now.

00:22:14 Do we have to simulate each brain within the larger simulation?

00:22:21 Is it enough to simulate just the brain, just the minds and not the simulation, not the

00:22:26 universe itself?

00:22:27 Like, is there a different ways to think about this?

00:22:29 Yeah, I guess there is a kind of premise in the simulation argument rolled in from philosophy

00:22:38 of mind that is that it would be possible to create a conscious mind in a computer.

00:22:45 And that what determines whether some system is conscious or not is not like whether it’s

00:22:51 built from organic biological neurons, but maybe something like what the structure of

00:22:56 the computation is that it implements.

00:22:59 So we can discuss that if we want, but I think I would put it forward as my view

00:23:05 that it would be sufficient, say, if you had a computation that was identical to the computation

00:23:15 in the human brain down to the level of neurons.

00:23:17 So if you had a simulation with 100 billion neurons connected in the same way as the human

00:23:21 brain, and you then roll that forward with the same kind of synaptic weights and so forth,

00:23:27 so you actually had the same behavior coming out of this as a human with that brain would

00:23:33 have done, then I think that would be conscious.

00:23:36 Now it’s possible you could also generate consciousness without having that detailed

00:23:43 a simulation; there I’m getting more uncertain exactly how much you could simplify or abstract

00:23:50 away.
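A toy sketch of what "roll it forward with the same synaptic weights" means computationally, purely as an illustration and nothing like brain scale: a handful of rate-based units with fixed weights, stepped forward in discrete time. The network size, weights, and update rule here are arbitrary illustrative choices, not anything Bostrom specifies.

```python
import numpy as np

# Toy illustration only: a tiny rate-based recurrent network with fixed
# "synaptic" weights, stepped forward in discrete time. A brain-level
# simulation would differ enormously in scale and biophysical detail.
rng = np.random.default_rng(0)

n_neurons = 8                                         # stand-in for ~10^11 in a real brain
weights = rng.normal(0, 0.5, (n_neurons, n_neurons))  # fixed synaptic weights
state = rng.normal(0, 1, n_neurons)                   # initial activations

def step(state, weights):
    """Advance the network one time step: weighted input through a nonlinearity."""
    return np.tanh(weights @ state)

for t in range(5):                                    # "roll it forward"
    state = step(state, weights)
    print(f"t={t + 1}: {np.round(state, 3)}")
```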

00:23:51 Can you linger on that?

00:23:52 What do you mean?

00:23:53 I missed where you’re placing consciousness in the second.

00:23:56 Well, so if you are a computationalist, do you think that what creates consciousness

00:24:01 is the implementation of a computation?

00:24:04 Some property, emergent property of the computation itself.

00:24:07 Yeah.

00:24:08 That’s the idea.

00:24:09 Yeah, you could say that.

00:24:10 But then the question is, what’s the class of computations such that when they are run,

00:24:16 consciousness emerges?

00:24:18 So if you just have something that adds one plus one plus one plus one, like a simple

00:24:24 computation, you think maybe that’s not going to have any consciousness.

00:24:28 If on the other hand, the computation is one like our human brains are performing, where

00:24:36 as part of the computation, there is a global workspace, a sophisticated attention mechanism,

00:24:43 there are self-representations of other cognitive processes and a whole lot of other things

00:24:50 that possibly would be conscious.

00:24:52 And in fact, if it’s exactly like ours, I think definitely it would.

00:24:56 But exactly how much less than the full computation that the human brain is performing would be

00:25:02 required is a little bit, I think, of an open question.

00:25:09 You asked another interesting question as well, which is, would it be sufficient to just have

00:25:17 say the brain or would you need the environment in order to generate the same kind of experiences

00:25:24 that we have?

00:25:26 And there is a bunch of stuff we don’t know.

00:25:29 I mean, if you look at, say, current virtual reality environments, one thing that’s clear

00:25:35 is that we don’t have to simulate all details of them all the time in order for, say, the

00:25:40 human player to have the perception that there is a full reality, and that you can have, say,

00:25:47 procedurally generated content where you might only render a scene when it’s actually within the

00:25:51 view of the player character.

00:25:55 And so similarly, if this environment that we perceive is simulated, it might be that

00:26:06 all of the parts that come into our view are rendered at any given time.

00:26:10 And a lot of aspects that never come into view, say the details of this microphone I’m

00:26:16 talking into, exactly what each atom is doing at any given point in time, might not be part

00:26:23 of the simulation, only a more coarse grained representation.
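The idea of rendering only what is currently observed is familiar from game engines (view culling, level of detail, procedural generation). Below is a minimal sketch of that bookkeeping, assuming a made-up 2D grid world and a single observer with a fixed viewing radius; the names and numbers are hypothetical.

```python
from dataclasses import dataclass
import math

@dataclass
class Region:
    x: float
    y: float
    detailed: bool = False   # whether fine-grained state is currently computed

def update_detail(regions, observer_x, observer_y, view_radius):
    """Compute fine detail only for regions near the observer;
    everything else keeps a cheap, coarse-grained representation."""
    for r in regions:
        r.detailed = math.hypot(r.x - observer_x, r.y - observer_y) <= view_radius
    return regions

# Hypothetical usage: a 10x10 grid of regions, observer at (2, 3), radius 2.5.
world = [Region(x, y) for x in range(10) for y in range(10)]
world = update_detail(world, observer_x=2, observer_y=3, view_radius=2.5)
print(sum(r.detailed for r in world), "of", len(world), "regions rendered in detail")
```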

00:26:27 So that to me is actually, from an engineering perspective, why the simulation hypothesis

00:26:31 is really interesting to think about: how difficult is it to fake, sort of in a virtual

00:26:39 reality context, I don’t know if fake is the right word, but to construct a reality that

00:26:45 is sufficiently real to us to be immersive in the way that the physical world is.

00:26:52 I think that’s actually probably an answerable question of psychology, of computer science,

00:26:59 of how, where’s the line where it becomes so immersive that you don’t want to leave

00:27:06 that world?

00:27:07 Yeah, or that you don’t realize while you’re in it that it is a virtual world.

00:27:13 Yeah, those are actually two questions; yours is more sort of the good question about

00:27:17 the realism, but mine, from my perspective, what’s interesting is it doesn’t have to be

00:27:23 real, but how can we construct a world that we wouldn’t want to leave?

00:27:29 Yeah, I mean, I think that might be too low a bar, I mean, if you think, say when people

00:27:34 first had pong or something like that, I’m sure there were people who wanted to keep

00:27:38 playing it for a long time because it was fun and they wanted to be in this little world.

00:27:44 I’m not sure we would say it’s immersive, I mean, I guess in some sense it is, but like

00:27:48 an absorbing activity doesn’t even have to be.

00:27:51 But they left that world though, that’s the thing.

00:27:54 So like, I think that bar is deceptively high.

00:27:59 So they eventually left, so you can play pong or Starcraft or whatever more sophisticated

00:28:05 games for hours, for months, you know; it can be a big addiction, but

00:28:12 eventually they escaped that.

00:28:13 So you mean when it’s absorbing enough that you would spend your entire, you would choose

00:28:19 to spend your entire life in there.

00:28:21 And then thereby changing the concept of what reality is, because your reality becomes the

00:28:28 game.

00:28:29 Not because you’re fooled, but because you’ve made that choice.

00:28:33 Yeah, and I mean, different people might have different preferences regarding that.

00:28:38 Some might, even if you had any perfect virtual reality, might still prefer not to spend the

00:28:47 rest of their lives there.

00:28:49 I mean, in philosophy, there’s this experience machine, thought experiment.

00:28:53 Have you come across this?

00:28:55 So Robert Nozick had this thought experiment where you imagine some crazy super duper neuroscientist

00:29:03 of the future has created a machine that could give you any experience you want if

00:29:08 you step in there.

00:29:10 And for the rest of your life, you can kind of pre-program it in different ways.

00:29:15 So your fun dreams could come true, you could, whatever you dream, you want to be a great

00:29:24 artist, a great lover, like have a wonderful life, all of these things,

00:29:29 if you step into the experience machine, will be your experiences, constantly happy.

00:29:36 But you would kind of disconnect from the rest of reality and you would float there

00:29:39 in a tank.

00:29:41 And so Nozick thought that most people would choose not to enter the experience machine.

00:29:48 I mean, many might want to go there for a holiday, but they wouldn’t want to have to

00:29:51 check out of existence permanently.

00:29:54 And so he thought that was an argument against certain views of value, according to which what we

00:30:01 value is a function of what we experience.

00:30:04 Because in the experience machine, you could have any experience you want, and yet many

00:30:08 people would think that would not be much value.

00:30:12 So therefore, what we value depends on other things than what we experience.

00:30:18 So okay, can you take that argument further?

00:30:21 What about the fact that maybe what we value is the up and down of life?

00:30:25 So you could have up and downs in the experience machine, right?

00:30:29 But what can’t you have in the experience machine?

00:30:31 Well, I mean, that then becomes an interesting question to explore.

00:30:35 But for example, real connection with other people, if the experience machine is a solo

00:30:40 machine where it’s only you, like that’s something you wouldn’t have there.

00:30:44 You would have this subjective experience that would be like fake people.

00:30:49 But if you gave somebody flowers, there wouldn’t be anybody there who actually got

00:30:53 happy.

00:30:54 It would just be a little simulation of somebody smiling.

00:30:58 But the simulation would not be the kind of simulation I’m talking about in the simulation

00:31:01 argument where the simulated creature is conscious, it would just be a kind of smiley face that

00:31:06 would look perfectly real to you.

00:31:08 So we’re now drawing a distinction between appearing to be perfectly real and actually being

00:31:14 real.

00:31:15 Yeah.

00:31:16 Um, so that could be one thing, I mean, like a big impact on history, maybe is also something

00:31:22 you won’t have if you check into this experience machine.

00:31:25 So some people might actually feel the life I want to have for me is one where I have

00:31:29 a big positive impact on how history unfolds.

00:31:35 So you could kind of explore these different possible explanations for why it is you wouldn’t

00:31:43 want to go into the experience machine if that’s, if that’s what you feel.

00:31:48 And one interesting observation regarding this Nozick thought experiment and the conclusions

00:31:53 he wanted to draw from it is how much is a kind of a status quo effect.

00:31:58 So a lot of people might not want to give up on current reality to plug into this dream

00:32:04 machine.

00:32:06 But if they instead were told, well, what you’ve experienced up to this point was a

00:32:13 dream now, do you want to disconnect from this and enter the real world when you have

00:32:20 no idea maybe what the real world is, or maybe you could say, well, you’re actually a farmer

00:32:24 in Peru, growing, you know, peanuts, and you could live for the rest of your life in this

00:32:32 way, or would you want to continue your dream life as Lex Fridman going around the world

00:32:40 making podcasts and doing research?

00:32:44 So if the status quo was that they were actually in the experience machine, I think a lot of

00:32:51 people might then prefer to live the life that they are familiar with rather than sort

00:32:55 of bail out into.

00:32:57 So that’s interesting, the change itself, the leap, yeah, so it might not be so much

00:33:02 the reality itself that we’re after.

00:33:04 But it’s more that we are maybe involved in certain projects and relationships.

00:33:09 And we have, you know, a self identity and these things that our values are kind of connected

00:33:14 with carrying that forward.

00:33:15 And then whether it’s inside a tank or outside a tank in Peru, or whether inside a computer

00:33:22 outside a computer, that’s kind of less important to what we ultimately care about.

00:33:29 Yeah, but still, so just to linger on it, it is interesting.

00:33:34 I find maybe people are different, but I find myself quite willing to take the leap to the

00:33:39 farmer in Peru, especially as virtual reality systems become more realistic.

00:33:46 I find that possibility and I think more people would take that leap.

00:33:50 But so in this thought experiment, just to make sure we are understanding, so in this

00:33:53 case, the farmer in Peru would not be a virtual reality, that would be the real, your life,

00:34:01 like before this whole experience machine started.

00:34:04 Well, I kind of assumed from that description, you’re being very specific, but that kind

00:34:09 of idea just like washes away the concept of what’s real.

00:34:15 I’m still a little hesitant about your kind of distinction between real and illusion.

00:34:23 Because when you can have an illusion that feels, I mean, that looks real, I don’t know

00:34:31 how you can definitively say something is real or not, like what’s a good way to prove

00:34:35 that something is real in that context?

00:34:37 Well, so I guess in this case, it’s more a stipulation.

00:34:41 In one case, you’re floating in a tank with these wires by the super duper neuroscientists

00:34:47 plugging into your head, giving you, like, Fridman experiences.

00:34:52 In the other, you’re actually tilling the soil in Peru, growing peanuts, and then those

00:34:57 peanuts are being eaten by other people all around the world who buy the exports.

00:35:01 That’s two different possible situations in the one and the same real world that you could

00:35:08 choose to occupy.

00:35:09 But just to be clear, when you’re in a vat with wires and the neuroscientists, you can

00:35:15 still go farming in Peru, right?

00:35:19 No, well, if you wanted to, you could have the experience of farming in Peru, but there

00:35:25 wouldn’t actually be any peanuts grown.

00:35:28 But what makes a peanut, so a peanut could be grown and you could feed things with that

00:35:36 peanut and why can’t all of that be done in a simulation?

00:35:41 I hope, first of all, that they actually have peanut farms in Peru, I guess we’ll get a

00:35:45 lot of comments otherwise from angry…

00:35:50 I was with you up to the point when you started talking about Peru and peanuts, that’s when I

00:35:54 realized you were making this up.

00:35:56 In that climate.

00:35:57 No, I mean, I think, I mean, in the simulation, I think there is a sense, the important sense

00:36:05 in which it would all be real.

00:36:07 Nevertheless, there is a distinction between inside the simulation and outside the simulation.

00:36:13 Or in the case of Nozick’s thought experiment, whether you’re in the vat or outside the vat,

00:36:19 and some of those differences may or may not be important.

00:36:22 I mean, that comes down to your values and preferences.

00:36:25 So if the experience machine only gives you the experience of growing peanuts,

00:36:32 but you’re the only one in the experience machine.

00:36:35 No, but there’s other, you can, within the experience machine, others can plug in.

00:36:40 Well, there are versions of the experience machine.

00:36:43 So in fact, you might want to have, distinguish different thought experiments, different versions

00:36:47 of it.

00:36:48 I see.

00:36:49 So in, like in the original thought experiment, maybe it’s only you, right?

00:36:51 And you think, I wouldn’t want to go in there.

00:36:54 Well, that tells you something interesting about what you value and what you care about.

00:36:58 Then you could say, well, what if you add the fact that there would be other people

00:37:02 in there and you would interact with them?

00:37:03 Well, it starts to make it more attractive, right?

00:37:06 Then you could add in, well, what if you could also have important longterm effects on human

00:37:10 history and the world, and you could actually do something useful, even though you were

00:37:14 in there.

00:37:15 That makes it maybe even more attractive.

00:37:17 Like you could actually have a life that had a purpose and consequences.

00:37:22 And so as you sort of add more into it, it becomes more similar to the baseline reality

00:37:30 that you were comparing it to.

00:37:32 Yeah, but I just think inside the experience machine and without taking those steps you

00:37:37 just mentioned, you still have an impact on longterm history of the creatures that live

00:37:45 inside that, of the quote unquote fake creatures that live inside that experience machine.

00:37:53 And that, like at a certain point, you know, if there’s a person waiting for you inside

00:37:59 that experience machine, maybe your newly found wife and she dies, she has fear, she

00:38:06 has hopes, and she exists in that machine when you plug out, when you unplug yourself

00:38:12 and plug back in, she’s still there going on about her life.

00:38:16 Well, in that case, yeah, she starts to have more of an independent existence.

00:38:20 Independent existence.

00:38:21 But it depends, I think, on how she’s implemented in the experience machine.

00:38:26 Take one limit case where all she is is a static picture on the wall, a photograph.

00:38:32 So you think, well, I can look at her, right?

00:38:36 But that’s it.

00:38:37 There’s no…

00:38:38 Then you think, well, it doesn’t really matter much what happens to that, any more than a

00:38:41 normal photograph if you tear it up, right?

00:38:45 It means you can’t see it anymore, but you haven’t harmed the person whose picture you

00:38:49 tore up.

00:38:52 But if she’s actually implemented, say, at a neural level of detail so that she’s a fully

00:38:58 realized digital mind with the same behavioral repertoire as you have, then very plausibly

00:39:06 she would be a conscious person like you are.

00:39:09 And then what you do in this experience machine would have real consequences for how this

00:39:14 other mind felt.

00:39:17 So you have to specify which of these experience machines you’re talking about.

00:39:21 I think it’s not entirely obvious that it would be possible to have an experience machine

00:39:27 that gave you a normal set of human experiences, which include experiences of interacting with

00:39:34 other people, without that also generating consciousnesses corresponding to those other

00:39:40 people.

00:39:41 That is, if you create another entity that you perceive and interact with, that to you

00:39:47 looks entirely realistic.

00:39:49 Not just when you say hello, they say hello back, but you have a rich interaction, many

00:39:53 days, deep conversations.

00:39:54 It might be that the only possible way of implementing that would be one that also has

00:40:00 as a side effect instantiating this other person in enough detail that you would have a second

00:40:06 consciousness there.

00:40:07 I think that’s to some extent an open question.

00:40:11 So you don’t think it’s possible to fake consciousness and fake intelligence?

00:40:15 Well, it might be.

00:40:16 I mean, I think you can certainly fake, if you have a very limited interaction with somebody,

00:40:21 you could certainly fake that.

00:40:24 If all you have to go on is somebody said hello to you, that’s not enough for you to

00:40:28 tell whether that was a real person there, or a prerecorded message, or a very superficial

00:40:34 simulation that has no consciousness, because that’s something easy to fake.

00:40:39 We could already fake it, now you can record a voice recording.

00:40:43 But if you have a richer set of interactions where you’re allowed to ask open ended questions

00:40:49 and probe from different angles, you couldn’t give canned answers to all of the possible

00:40:54 ways that you could probe it, then it starts to become more plausible that the only way

00:41:00 to realize this thing in such a way that you would get the right answer from any which

00:41:05 angle you probed it, would be a way of instantiating it, where you also instantiated a conscious

00:41:10 mind.

00:41:11 Yeah, I’m with you on the intelligence part, but is there something about me that says

00:41:13 consciousness is easier to fake?

00:41:15 Like I’ve recently gotten my hands on a lot of Roombas, don’t ask me why or how.

00:41:23 And I’ve made them… they’re just a nice robotic mobile platform for experiments.

00:41:28 And I made them scream and/or moan in pain, and so on, just to see, when they’re responding

00:41:34 to me.

00:41:35 And it’s just a sort of psychological experiment on myself.

00:41:39 And I think they appear conscious to me pretty quickly.

00:41:43 To me, at least my brain can be tricked quite easily.

00:41:46 I’d say if I introspect, it’s harder for me to be tricked that something is intelligent.

00:41:53 So I just have this feeling that inside this experience machine, just saying that you’re

00:41:58 conscious and having certain qualities of the interaction, like being able to suffer,

00:42:05 like being able to hurt, like being able to wonder about the essence of your own existence,

00:42:12 not actually, I mean, creating the illusion that you’re wondering about it is enough to

00:42:18 create the illusion of consciousness.

00:42:23 And because of that, create a really immersive experience to where you feel like that is

00:42:27 the real world.

00:42:28 So you think there’s a big gap between appearing conscious and being conscious?

00:42:33 Or is it that you think it’s very easy to be conscious?

00:42:36 I’m not actually sure what it means to be conscious.

00:42:38 All I’m saying is the illusion of consciousness is enough to create a social interaction that’s

00:42:48 as good as if the thing was conscious, meaning I’m making it about myself.

00:42:52 Right.

00:42:53 Yeah.

00:42:54 I mean, I guess there are a few different things.

00:42:55 One is how good the interaction is, which might, I mean, if you don’t really care about

00:42:59 like probing hard for whether the thing is conscious, maybe it would be a satisfactory

00:43:05 interaction, whether or not you really thought it was conscious.

00:43:10 Now, if you really care about it being conscious in like inside this experience machine, how

00:43:20 easy would it be to fake it?

00:43:22 And you say, it sounds fairly easy, but then the question is, would that also mean it’s

00:43:28 very easy to instantiate consciousness?

00:43:30 Like it’s much more widely spread in the world than we have thought; it doesn’t require a big

00:43:35 human brain with a hundred billion neurons, all you need is some system that exhibits

00:43:39 basic intentionality and can respond and you already have consciousness.

00:43:43 Like in that case, I guess you still have a close coupling.

00:43:49 I guess that case would be where they can come apart, where you could create the appearance

00:43:54 of there being a conscious mind without there actually being another conscious mind.

00:43:59 I’m somewhat agnostic exactly where these lines go.

00:44:03 I think one observation that makes it plausible that you could have very realistic appearances

00:44:12 relatively simply, which also is relevant for the simulation argument and in terms of

00:44:18 thinking about how realistic would a virtual reality model have to be in order for the

00:44:24 simulated creature not to notice that anything was awry.

00:44:27 Well, just think of our own humble brains during the wee hours of the night when we

00:44:33 are dreaming.

00:44:35 Many times, well, dreams are very immersive, but often you also don’t realize that you’re

00:44:40 in a dream.

00:44:43 And that’s produced by simple primitive three pound lumps of neural matter effortlessly.

00:44:51 So if a simple brain like this can create the virtual reality that seems pretty real

00:44:57 to us, then how much easier would it be for a super intelligent civilization with planetary

00:45:03 sized computers optimized over the eons to create a realistic environment for you to

00:45:09 interact with?

00:45:10 Yeah.

00:45:11 By the way, behind that intuition is that our brain is not that impressive relative

00:45:17 to the possibilities of what technology could bring.

00:45:21 It’s also possible that the brain is the epitome, is the ceiling.

00:45:26 How is that possible?

00:45:30 Meaning like this is the smartest possible thing that the universe could create.

00:45:36 So that seems unlikely to me.

00:45:39 Yeah.

00:45:40 I mean, for some of these reasons we alluded to earlier in terms of designs we already

00:45:47 have for computers that would be faster by many orders of magnitude than the human brain.

00:45:54 Yeah.

00:45:55 We can see that the constraints, the cognitive constraints in themselves, are what enable

00:46:01 the intelligence.

00:46:02 So the more powerful you make the computer, the less likely it is to become super intelligent.

00:46:09 This is where I say dumb things to push back on that statement.

00:46:12 Yeah.

00:46:13 I’m not sure I thought that we might.

00:46:14 No.

00:46:15 I mean, so there are different dimensions of intelligence.

00:46:18 A simple one is just speed.

00:46:20 Like if you can solve the same challenge faster in some sense, you’re like smarter.

00:46:25 So there I think we have very strong evidence for thinking that you could have a computer

00:46:31 in this universe that would be much faster than the human brain and therefore have speed

00:46:37 super intelligence, like be completely superior, maybe a million times faster.

00:46:42 Then maybe there are other ways in which you could be smarter as well, maybe more qualitative

00:46:46 ways, right?

00:46:48 And the concepts are a little bit less clear cut.

00:46:51 So it’s harder to make a very crisp, neat, firmly logical argument for why there could

00:46:59 be qualitative super intelligence as opposed to just things that were faster.

00:47:03 Although I still think it’s very plausible and for various reasons that are less than

00:47:08 watertight arguments.

00:47:09 But when you can sort of, for example, if you look at animals and even within humans,

00:47:14 like there seems to be like Einstein versus random person, like it’s not just that Einstein

00:47:19 was a little bit faster, but like how long would it take a normal person to invent general

00:47:25 relativity is like, it’s not 20% longer than it took Einstein or something like that.

00:47:30 It’s like, I don’t know whether they would do it at all or it would take millions of

00:47:32 years or something totally bizarre.

00:47:37 But your intuition is that the compute size will get you there; increasing the size of the

00:47:42 computer and the speed of the computer might create some much more powerful levels of intelligence

00:47:49 that would enable some of the things we’ve been talking about with like the simulation,

00:47:53 being able to simulate an ultra realistic environment, ultra realistic perception of

00:48:00 reality.

00:48:01 Yeah.

00:48:02 I mean, strictly speaking, it would not be necessary to have super intelligence in order

00:48:05 to have say the technology to make these simulations, ancestor simulations or other kinds of simulations.

00:48:14 As a matter of fact, I think if we are in a simulation, it would most likely be one

00:48:20 built by a civilization that had super intelligence.

00:48:26 It certainly would help a lot.

00:48:27 I mean, you could build more efficient larger scale structures if you had super intelligence.

00:48:31 I also think that if you had the technology to build these simulations, that’s like a

00:48:34 very advanced technology.

00:48:35 It seems kind of easier to get the technology to super intelligence.

00:48:40 I’d expect by the time they could make these fully realistic simulations of human history

00:48:45 with human brains in there, like before that they got to that stage, they would have figured

00:48:49 out how to create machine super intelligence or maybe biological enhancements of their

00:48:55 own brains if there were biological creatures to start with.

00:48:59 So we talked about the three parts of the simulation argument.

00:49:04 One, we destroy ourselves before we ever create the simulation.

00:49:08 Two, we somehow, everybody somehow loses interest in creating the simulation.

00:49:13 Three, we’re living in a simulation.

00:49:16 So you’ve kind of, I don’t know if your thinking has evolved on this point, but you kind of

00:49:21 said that we know so little that these three cases might as well be equally probable.

00:49:28 So probabilistically speaking, where do you stand on this?

00:49:31 Yeah, I mean, I don’t think equal necessarily would be the most supported probability assignment.

00:49:41 So how would you, without assigning actual numbers, what’s more or less likely in your

00:49:47 view?

00:49:48 Well, I mean, I’ve historically tended to punt on the question of like between these

00:49:54 three.

00:49:55 So maybe ask me another way: which kind of things would make each of these more or

00:50:01 less likely?

00:50:03 What kind of intuition?

00:50:05 Certainly in general terms, if you think anything that say increases or reduces the probability

00:50:10 of one of these, we tend to slosh probability around onto the others.

00:50:17 So if one becomes less probable, like the others would have to become more probable, cause it’s got to add

00:50:20 up to one.
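Since the three alternatives are exhaustive, their probabilities have to sum to one, so lowering the credence in one of them redistributes probability onto the others. A minimal sketch of that bookkeeping with purely illustrative numbers (the uniform priors and the scaling factor are not Bostrom's):

```python
def renormalize(probs: dict, key: str, scale: float) -> dict:
    """Scale the credence in one alternative and redistribute the slack
    proportionally over the others so the total stays 1."""
    scaled = dict(probs)
    scaled[key] = probs[key] * scale
    remaining = 1.0 - scaled[key]
    other_total = sum(v for k, v in probs.items() if k != key)
    for k in probs:
        if k != key:
            scaled[k] = probs[k] / other_total * remaining
    return scaled

# Illustrative priors over the three disjuncts (not Bostrom's numbers).
priors = {"extinction_filter": 1 / 3, "loss_of_interest": 1 / 3, "simulation": 1 / 3}

# Suppose evidence of nearing technological maturity halves the first alternative.
print(renormalize(priors, "extinction_filter", scale=0.5))
```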

00:50:22 So if we consider the first hypothesis, the first alternative that there’s this filter

00:50:28 that makes it so that virtually no civilization reaches technological maturity, in particular

00:50:39 our own civilization, if that’s true, then it’s like very unlikely that we would reach

00:50:42 technological maturity because if almost no civilization at our stage does it, then it’s

00:50:47 unlikely that we do it.

00:50:49 So hence…

00:50:50 Sorry, can you linger on that for a second?

00:50:51 Well, so if it’s the case that almost all civilizations at our current stage of technological

00:50:59 development failed to reach maturity, that would give us very strong reason for thinking

00:51:05 we will fail to reach technological maturity.

00:51:07 Oh, and also sort of the flip side of that is the fact that we’ve reached it means that

00:51:12 many other civilizations have reached this point.

00:51:13 Yeah.

00:51:14 So that means if we get closer and closer to actually reaching technological maturity,

00:51:20 there’s less and less distance left where we could go extinct before we are there, and

00:51:26 therefore the probability that we will reach increases as we get closer, and that would

00:51:31 make it less likely to be true that almost all civilizations at our current stage failed

00:51:36 to get there.

00:51:37 Like we would have this…

00:51:38 The one case we have, ourselves, would be very close to getting there, and that would

00:51:42 be strong evidence that it’s not so hard to get to technological maturity.

00:51:46 So to the extent that we feel we are moving nearer to technological maturity, that would

00:51:52 tend to reduce the probability of the first alternative and increase the probability of

00:51:58 the other two.

00:51:59 It doesn’t need to be a monotonic change.

00:52:01 Like if every once in a while some new threat comes into view, some bad new thing you could

00:52:07 do with some novel technology, for example, that could change our probabilities in the

00:52:13 other direction.

00:52:15 But that technology, again, you have to think about it as a technology that has to be able to

00:52:20 affect every civilization out there equally, in an even way.

00:52:26 Yeah, pretty much.

00:52:28 I mean, strictly speaking, it’s not true.

00:52:30 I mean, there could be two different existential risks and every civilization, you know, succumbs to one

00:52:36 or the other, like, but none of them on its own kills more than 50%.

00:52:42 But incidentally, so in some of my work, I mean, on machine superintelligence, I’ve pointed

00:52:50 to some existential risks related to sort of super intelligent AI and how we must make

00:52:54 sure, you know, to handle that wisely and carefully.

00:52:59 It’s not the right kind of existential catastrophe to make the first alternative true though.

00:53:09 Like it might be bad for us if the future lost a lot of value as a result of it being

00:53:15 shaped by some process that optimized for some completely nonhuman value.

00:53:21 But even if we got killed by machine superintelligence, that machine superintelligence might still

00:53:27 attain technological maturity.

00:53:29 Oh, I see, so you’re not human exclusive.

00:53:33 This could be any intelligent species that achieves it, like it’s all about the technological

00:53:38 maturity.

00:53:39 But the humans don’t have to attain it.

00:53:43 Right.

00:53:44 So like superintelligence could replace us and that’s just as well for the simulation

00:53:47 argument.

00:53:48 Yeah, yeah.

00:53:49 I mean, it could interact with the second hypothesis, the second alternative.

00:53:51 Like if the thing that replaced us was either more likely or less likely than we would be

00:53:57 to have an interest in creating ancestor simulations, you know, that could affect probabilities.

00:54:02 But yeah, to a first order, like if we all just die, then yeah, we won’t produce any

00:54:09 simulations because we are dead.

00:54:11 But if we all die and get replaced by some other intelligent thing that then gets to

00:54:17 technological maturity, the question remains, of course, whether that thing would then use some

00:54:21 of its resources to do this stuff.

00:54:25 So can you reason about this stuff, given how little we know about the universe?

00:54:30 Is it reasonable to reason about these probabilities?

00:54:36 So like how little, well, maybe you can disagree, but to me, it’s not trivial to figure out

00:54:45 how difficult it is to build a simulation.

00:54:47 We kind of talked about it a little bit.

00:54:49 We also don’t know, like as we try to start building it, like start creating virtual worlds

00:54:56 and so on, how that changes the fabric of society.

00:54:59 Like there’s all these things along the way that can fundamentally change just so many

00:55:04 aspects of our society about our existence that we don’t know anything about, like the

00:55:09 kind of things we might discover when we understand to a greater degree the fundamental, the physics,

00:55:19 like the theory, if we have a breakthrough and have a theory of everything, how that changes

00:55:23 stuff, how that changes deep space exploration and so on.

00:55:27 Like, is it still possible to reason about probabilities given how little we know?

00:55:33 Yes, I think there will be a large residual of uncertainty that we’ll just have to acknowledge.

00:55:41 And I think that’s true for most of these big picture questions that we might wonder

00:55:47 about.

00:55:49 It’s just we are small, short lived, small brained, cognitively very limited humans with

00:55:57 little evidence.

00:55:59 And it’s amazing we can figure out as much as we can really about the cosmos.

00:56:04 But okay, so there’s this cognitive trick that seems to happen when I look at the simulation

00:56:10 argument, which for me, it seems like case one and two feel unlikely.

00:56:16 I want to say feel unlikely as opposed to sort of like, it’s not like I have too much

00:56:22 scientific evidence to say that either one or two are not true.

00:56:26 It just seems unlikely that every single civilization destroys itself.

00:56:32 And it also feels unlikely that civilizations lose interest.

00:56:37 So naturally, without necessarily explicitly doing it, but the simulation argument basically

00:56:44 says it’s very likely we’re living in a simulation.

00:56:49 To me, my mind naturally goes there.

00:56:51 I think the mind goes there for a lot of people.

00:56:54 Is that the incorrect place for it to go?

00:56:57 Well, not necessarily.

00:56:59 I think the second alternative, which has to do with the motivations and interests of

00:57:09 technologically mature civilizations, I think there is much we don't understand about

00:57:15 that.

00:57:16 Can you talk about that a little bit?

00:57:18 What do you think?

00:57:19 I mean, this is a question that pops up when you build an AGI system or build

00:57:22 a general intelligence.

00:57:26 How does that change our motivations?

00:57:27 Do you think it’ll fundamentally transform our motivations?

00:57:30 Well, it doesn’t seem that implausible that once you take this leap to to technological

00:57:39 maturity, I mean, I think like it involves creating machine super intelligence, possibly

00:57:44 that would be sort of on the path for basically all civilizations, maybe before they are able

00:57:50 to create large numbers of ancestor simulations, and that possibly could be one

00:57:55 of these things that quite radically changes the orientation of what a civilization is,

00:58:03 in fact, optimizing for.

00:58:06 There are other things as well.

00:58:08 So at the moment, we have imperfect control over our own being; our own mental states

00:58:20 and our own experiences are not under our direct control.

00:58:25 So for example, if you want to experience pleasure and happiness, you might have to

00:58:33 do a whole host of things in the external world to try to get into the

00:58:39 mental state where you experience pleasure, like some people get some pleasure from eating

00:58:44 great food.

00:58:45 Well, they can't just turn that on, they have to kind of actually go to a nice restaurant

00:58:49 and then they have to make money.

00:58:51 So there’s like all this kind of activity that maybe arises from the fact that we are

00:58:58 trying to ultimately produce mental states.

00:59:02 But the only way to do that is by a whole host of complicated activities in the external

00:59:06 world.

00:59:07 Now, at some level of technological development, I think we'll become autopotent in the sense

00:59:11 of gaining direct ability to choose our own internal configuration, and enough knowledge

00:59:18 and insight to be able to actually do that in a meaningful way.

00:59:22 So then it could turn out that there are a lot of instrumental goals that would drop

00:59:28 out of the picture and be replaced by other instrumental goals, because we could now serve

00:59:33 some of these final goals in more direct ways.

00:59:37 And who knows how all of that shakes out after civilizations reflect on that and converge

00:59:45 on different attractors and so on and so forth.

00:59:49 And that could be new instrumental considerations that come into view as well, that we are just

00:59:57 oblivious to, that would maybe have a strong shaping effect on actions, like very strong

01:00:04 reasons to do something or not to do something, then we just don’t realize they are there

01:00:08 because we are so dumb, bumbling through the universe.

01:00:11 But if almost inevitably en route to attaining the ability to create many ancestor simulations,

01:00:17 you do have this cognitive enhancement, or advice from super intelligences or yourself,

01:00:23 then maybe there’s like this additional set of considerations coming into view and it’s

01:00:27 obvious that the thing that makes sense is to do X, whereas right now it seems you could do

01:00:32 X, Y or Z and different people will do different things and we are kind of random in that sense.

01:00:39 Because at this time, with our limited technology, the impact of our decisions is minor.

01:00:45 I mean, that’s starting to change in some ways.

01:00:48 But…

01:00:49 Well, I’m not sure how it follows that the impact of our decisions is minor.

01:00:53 Well, it’s starting to change.

01:00:55 I mean, I suppose 100 years ago it was minor.

01:00:58 It’s starting to…

01:01:00 Well, it depends on how you view it.

01:01:03 What people did 100 years ago still have effects on the world today.

01:01:08 Oh, I see.

01:01:11 As a civilization in the togetherness.

01:01:14 Yeah.

01:01:15 So it might be that the greatest impact of individuals is not at technological maturity

01:01:21 or very far down.

01:01:22 It might be earlier on, when there are different tracks civilization could go down.

01:01:28 Maybe the population is smaller, things still haven’t settled out.

01:01:33 If you count indirect effects, those could be bigger than the direct effects that people

01:01:41 have later on.

01:01:43 So part three of the argument says that…

01:01:46 So that leads us to a place where eventually somebody creates a simulation.

01:01:53 I think you had a conversation with Joe Rogan.

01:01:55 I think there’s some aspect here where you got stuck a little bit.

01:02:01 How does that lead to we’re likely living in a simulation?

01:02:06 So this kind of probability argument, if somebody eventually creates a simulation, why does

01:02:12 that mean that we’re now in a simulation?

01:02:15 What you get to if you accept alternative three first is there would be more simulated

01:02:22 people with our kinds of experiences than non simulated ones.

01:02:26 Like if you look at the world as a whole, by the end of time as it were, you just count

01:02:34 it up.

01:02:36 That would be more simulated ones than non simulated ones.

01:02:39 Then there is an extra step to get from that.

01:02:43 If you assume that, suppose for the sake of the argument, that that’s true.

01:02:48 How do you get from that to the statement we are probably in a simulation?

01:02:57 So here you’re introducing an indexical statement like it’s that this person right now is in

01:03:05 a simulation.

01:03:06 There are all these other people that are in simulations and some that are not in the

01:03:10 simulation.

01:03:13 But what probability should you have that you yourself are one of the simulated ones

01:03:19 in that setup?

01:03:21 So I call it the bland principle of indifference, which is that in cases like this, when you

01:03:28 have two sets of observers, one of which is much larger than the other and you can’t from

01:03:37 any internal evidence you have, tell which set you belong to, you should assign a probability

01:03:46 that’s proportional to the size of these sets.

01:03:50 So that if there are 10 times more simulated people with your kinds of experiences, you

01:03:55 would be 10 times more likely to be one of those.
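
A minimal sketch of the indifference calculation described above, in Python; the observer counts and the function name are invented purely for illustration.

```python
# Bland principle of indifference, as described above: if you cannot tell from the
# inside which group you belong to, assign credence in proportion to the sizes of
# the groups of observers who share your kind of experiences.

def credence_simulated(n_simulated: int, n_non_simulated: int) -> float:
    """Probability of being simulated, proportional to group sizes."""
    return n_simulated / (n_simulated + n_non_simulated)

# Example: ten times more simulated observers than non-simulated ones.
print(credence_simulated(n_simulated=10, n_non_simulated=1))  # ~0.909, i.e. ten-to-one odds
```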

01:03:58 Is that as intuitive as it sounds?

01:04:00 I mean, that seems kind of, if you don’t have enough information, you should rationally

01:04:06 just assign the same probability as the size of the set.

01:04:10 It seems pretty plausible to me.

01:04:15 Where are the holes in this?

01:04:17 Is it at the very beginning, the assumption that everything stretches, you have infinite

01:04:23 time essentially?

01:04:24 You don’t need infinite time.

01:04:26 You just need, how long does the time take?

01:04:29 However long it takes, I guess, for a universe to produce an intelligent civilization that

01:04:36 attains the technology to run some ancestor simulations.

01:04:40 Once the first simulation is created, just a little longer than that stretch of time,

01:04:45 and they'll all start creating simulations?

01:04:48 Well, I mean, there might be a difference.

01:04:52 If you think of there being a lot of different planets and some subset of them have life

01:04:57 and then some subset of those get to intelligent life and some of those maybe eventually start

01:05:03 creating simulations, they might get started at quite different times.

01:05:07 Maybe on some planet, it takes a billion years longer before you get monkeys or before you

01:05:13 get even bacteria than on another planet.

01:05:19 This might happen at different cosmological epochs.

01:05:25 Is there a connection here to the doomsday argument and that sampling there?

01:05:28 Yeah, there is a connection in that they both involve an application of anthropic reasoning

01:05:36 that is reasoning about these kind of indexical propositions.

01:05:41 But the assumption you need in the case of the simulation argument is much weaker than

01:05:49 the assumption you need to make the doomsday argument go through.

01:05:53 What is the doomsday argument and maybe you can speak to the anthropic reasoning in more

01:05:58 general.

01:05:59 Yeah, that’s a big and interesting topic in its own right, anthropics, but the doomsday

01:06:03 argument is this really first discovered by Brandon Carter, who was a theoretical physicist

01:06:11 and then developed by philosopher John Leslie.

01:06:15 I think it might have been discovered initially in the 70s or 80s and Leslie wrote this book,

01:06:21 I think in 96.

01:06:23 And there are some other versions as well by Richard Gott, who’s a physicist, but let’s

01:06:27 focus on the Carter Leslie version where it’s an argument that we have systematically underestimated

01:06:38 the probability that humanity will go extinct soon.

01:06:44 Now I should say most people probably think at the end of the day there is something wrong

01:06:49 with this doomsday argument that it doesn’t really hold.

01:06:52 It’s like there’s something wrong with it, but it’s proved hard to say exactly what is

01:06:56 wrong with it and different people have different accounts.

01:07:00 My own view is it seems inconclusive, but I can say what the argument is.

01:07:06 Yeah, that would be good.

01:07:08 So maybe it’s easiest to explain via an analogy to sampling from urns.

01:07:17 So imagine you have two urns in front of you and they have balls in them that have numbers.

01:07:27 The two urns look the same, but inside one there are 10 balls.

01:07:30 Ball number one, two, three, up to ball number 10.

01:07:33 And then in the other urn you have a million balls numbered one to a million and somebody

01:07:41 puts one of these urns in front of you and asks you to guess what’s the chance it’s the

01:07:48 10 ball urn and you say, well, 50, 50, I can’t tell which urn it is.

01:07:53 But then you’re allowed to reach in and pick a ball at random from the urn and that’s suppose

01:07:58 you find that it’s ball number seven.

01:08:02 So that’s strong evidence for the 10 ball hypothesis.

01:08:05 It’s a lot more likely that you would get such a low numbered ball if there are only

01:08:11 10 balls in the urn, like it's in fact 10%, right?

01:08:14 Then if there are a million balls, it would be very unlikely you would get number seven.

01:08:19 So you perform a Bayesian update and if your prior was 50, 50 that it was the 10 ball urn,

01:08:27 you become virtually certain after finding the random sample was seven that it only

01:08:31 has 10 balls in it.

01:08:33 So in the case of the urns, this is uncontroversial, just elementary probability theory.
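
For readers who want to see the arithmetic, here is a small sketch of the urn update just described, using the numbers from the example; the code itself is only an illustration.

```python
# Bayesian update for the two-urn example: a 10-ball urn vs. a 1,000,000-ball urn,
# a 50/50 prior, and a randomly drawn ball that turns out to be number 7.

prior_10 = 0.5
prior_million = 0.5

likelihood_10 = 1 / 10              # chance of drawing ball 7 from the 10-ball urn
likelihood_million = 1 / 1_000_000  # chance of drawing ball 7 from the million-ball urn

evidence = prior_10 * likelihood_10 + prior_million * likelihood_million
posterior_10 = prior_10 * likelihood_10 / evidence
print(f"P(10-ball urn | drew ball 7) = {posterior_10:.6f}")  # about 0.99999
```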

01:08:37 The Doomsday Argument says that you should reason in a similar way with respect to different

01:08:43 hypotheses about how many balls there will be in the urn of humanity as it were, how

01:08:49 many humans there will ever have been by the time we go extinct.

01:08:54 So to simplify, let’s suppose we only consider two hypotheses, either maybe 200 billion humans

01:09:00 in total or 200 trillion humans in total.

01:09:05 You could fill in more hypotheses, but it doesn’t change the principle here.

01:09:09 So it’s easiest to see if we just consider these two.

01:09:12 So you start with some prior based on ordinary empirical ideas about threats to civilization

01:09:18 and so forth.

01:09:19 And maybe you say it’s a 5% chance that we will go extinct by the time there will have

01:09:23 been 200 billion only, you’re kind of optimistic, let’s say, you think probably we’ll make it

01:09:28 through, colonize the universe.

01:09:31 But then, according to this Doomsday Argument, you should think of your own birth rank as

01:09:39 a random sample.

01:09:40 So your birth rank is your sequence in the position of all humans that have ever existed.

01:09:47 It turns out you're about human number 100 billion, you know, give or take.

01:09:52 That’s like, roughly how many people have been born before you.

01:09:55 That’s fascinating, because I probably, we each have a number.

01:09:59 We would each have a number in this, I mean, obviously, the exact number would depend on

01:10:04 where you started counting, like which ancestors were human enough to count as human.

01:10:09 But those are not really important, there are relatively few of them.

01:10:13 So yeah, so you’re roughly 100 billion.

01:10:16 Now, if they’re only going to be 200 billion in total, that’s a perfectly unremarkable

01:10:20 number.

01:10:21 You’re somewhere in the middle, right?

01:10:22 It’s a run of the mill human, completely unsurprising.

01:10:26 Now, if they’re going to be 200 trillion, you would be remarkably early, like what are

01:10:32 the chances out of these 200 trillion humans that you should be human number 100 billion?

01:10:40 That seems it would have a much lower conditional probability.

01:10:45 And so analogously to how in the urn case, you thought after finding this low numbered

01:10:50 random sample, you update it in favor of the urn having few balls.

01:10:54 Similarly, in this case, you should update in favor of the human species having a lower

01:11:00 total number of members, that is, doomed soon.

01:11:04 You said doomed soon?

01:11:05 Well, that would be the hypothesis in this case, that it will end at 200 billion.

01:11:11 I just like that term for that hypothesis.
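
Here is a sketch of the corresponding birth-rank update, using the 5%/95% prior and the 200 billion versus 200 trillion hypotheses discussed above; it illustrates the self-sampling step only and is not an endorsement of the argument.

```python
# Doomsday-style update: treat your birth rank (roughly the 100 billionth human)
# as if it were a random sample from all humans who will ever live, under the two
# hypotheses about the total discussed above.

prior = {"200 billion humans total": 0.05, "200 trillion humans total": 0.95}
total = {"200 billion humans total": 200e9, "200 trillion humans total": 200e12}
birth_rank = 100e9  # roughly how many humans were born before you

# Under self-sampling, any particular rank up to the total is equally likely: 1 / total.
likelihood = {h: (1 / total[h]) if birth_rank <= total[h] else 0.0 for h in prior}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}
print(posterior)  # the "doom soon" hypothesis jumps from 5% to roughly 98%
```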

01:11:14 So what it kind of crucially relies on, the Doomsday Argument, is the idea that you should

01:11:20 reason as if you were a random sample from the set of all humans that will have existed.

01:11:27 If you have that assumption, then I think the rest kind of follows.

01:11:31 The question then is, why should you make that assumption?

01:11:34 In fact, you know you’re 100 billion, so where do you get this prior?

01:11:38 And then there is like a literature on that with different ways of supporting that assumption.

01:11:45 That’s just one example of anthropic reasoning, right?

01:11:48 That seems to be kind of convenient when you think about humanity, when you think about

01:11:53 sort of even like existential threats and so on, as it seems that quite naturally that

01:12:00 you should assume that you’re just an average case.

01:12:03 Yeah, that you’re kind of a typical randomly sample.

01:12:07 Now, in the case of the Doomsday Argument, it seems to lead to what intuitively we think

01:12:12 is the wrong conclusion, or at least many people have this reaction that there’s got

01:12:16 to be something fishy about this argument.

01:12:19 Because from very, very weak premises, it gets this very striking implication that we

01:12:25 have almost no chance of reaching size 200 trillion humans in the future.

01:12:30 And how could we possibly get there just by reflecting on when we were born?

01:12:35 It seems you would need sophisticated arguments about the impossibility of space colonization,

01:12:39 blah, blah.

01:12:40 So one might be tempted to reject this key assumption, I call it the self sampling assumption,

01:12:45 the idea that you should reason as if you're a random sample from all observers, or observers in

01:12:50 some reference class.

01:12:52 However, it turns out that in other domains, it looks like we need something like this

01:12:58 self sampling assumption to make sense of bona fide scientific inferences.

01:13:04 In contemporary cosmology, for example, you have these multiverse theories.

01:13:09 And according to a lot of those, all possible human observations are made.

01:13:14 So if you have a sufficiently large universe, you will have a lot of people observing all

01:13:18 kinds of different things.

01:13:22 So if you have two competing theories, say about the value of some constant, it could

01:13:29 be true according to both of these theories that there will be some observers observing

01:13:34 the value that corresponds to the other theory, because there will be some observers that

01:13:42 have hallucinations, so there’s a local fluctuation or a statistically anomalous measurement,

01:13:47 these things will happen.

01:13:49 And if enough observers make enough different observations, there will be some that sort

01:13:53 of by chance make these different ones.

01:13:55 And so what we would want to say is, well, many more observers, a larger proportion of

01:14:04 the observers will observe as it were the true value.

01:14:08 And a few will observe the wrong value.

01:14:10 If we think of ourselves as a random sample, we should expect with a probability to observe

01:14:15 the true value and that will then allow us to conclude that the evidence we actually

01:14:20 have is evidence for the theories we think are supported.

01:14:24 It kind of then is a way of making sense of these inferences that clearly seem correct,

01:14:32 that we can make various observations and infer what the temperature of the cosmic background

01:14:38 is and the fine structure constant and all of this.

01:14:44 But it seems that without rolling in some assumption similar to the self sampling assumption,

01:14:49 this inference just doesn’t go through.
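
To make this kind of observation-selection inference concrete, here is a toy sketch; the 0.1% anomaly rate and the theory labels are invented purely for illustration.

```python
# Toy observer-selection inference: two rival theories about a physical constant.
# Under either theory, a small fraction of observers get anomalous measurements
# (flukes, hallucinations). Treating yourself as a randomly sampled observer lets
# your own measurement count as evidence for the theory under which most observers
# see what you see.

anomaly_rate = 0.001  # assumed fraction of observers with fluke measurements

prior = {"theory A (constant looks like a)": 0.5, "theory B (constant looks like b)": 0.5}

# Probability that a randomly sampled observer measures the value "a" under each theory.
likelihood_of_measuring_a = {
    "theory A (constant looks like a)": 1 - anomaly_rate,
    "theory B (constant looks like b)": anomaly_rate,
}

evidence = sum(prior[t] * likelihood_of_measuring_a[t] for t in prior)
posterior = {t: prior[t] * likelihood_of_measuring_a[t] / evidence for t in prior}
print(posterior)  # measuring "a" pushes theory A from 50% to about 99.9%
```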

01:14:51 And there are other examples.

01:14:53 So there are these scientific contexts where it looks like this kind of anthropic reasoning

01:14:56 is needed and makes perfect sense.

01:14:59 And yet, in the case of the Doomsday Argument, it has this weird consequence and people might

01:15:02 think there’s something wrong with it there.

01:15:05 So there’s then this project that would consist in trying to figure out what are the legitimate

01:15:14 ways of reasoning about these indexical facts when observer selection effects are in play.

01:15:20 In other words, developing a theory of anthropics.

01:15:23 And there are different views of looking at that and it’s a difficult methodological area.

01:15:29 But to tie it back to the simulation argument, the key assumption there, this bland principle

01:15:37 of indifference, is much weaker than the self sampling assumption.

01:15:43 So if you think about, in the case of the Doomsday Argument, it says you should reason

01:15:48 as if you are a random sample from all humans that will have lived, even though in fact

01:15:52 you know that you are about the 100 billionth human and you're alive in the year 2020.

01:15:59 Whereas in the case of the simulation argument, it says that, well, if you actually have no

01:16:04 way of telling which one you are, then you should assign this kind of uniform probability.

01:16:10 Yeah, yeah, your role as the observer in the simulation argument is different, it seems

01:16:14 like.

01:16:15 Like who’s the observer?

01:16:16 I mean, I keep assigning the individual consciousness.

01:16:19 But a lot of observers in the context of the simulation argument, the relevant observers

01:16:26 would be A, the people in original histories, and B, the people in simulations.

01:16:33 So this would be the class of observers that we need, I mean, they’re also maybe the simulators,

01:16:37 but we can set those aside for this.

01:16:40 So the question is, given that class of observers, a small set of original history observers

01:16:46 and a large class of simulated observers, which one should you think is you?

01:16:51 Where are you amongst this set of observers?

01:16:54 I’m maybe having a little bit of trouble wrapping my head around the intricacies of what it

01:17:00 means to be an observer in this, in the different instantiations of the anthropic reasoning

01:17:08 cases that we mentioned.

01:17:09 Yeah.

01:17:10 I mean, does it have to be…

01:17:11 It’s not the observer.

01:17:12 Yeah, I mean, maybe an easier way of putting it is just like, are you simulated, are you

01:17:16 not simulated, given this assumption that these two groups of people exist?

01:17:21 Yeah.

01:17:22 In the simulation case, it seems pretty straightforward.

01:17:24 Yeah.

01:17:25 So the key point is the methodological assumption you need to make to get the simulation argument

01:17:32 to where it wants to go is much weaker and less problematic than the methodological assumption

01:17:38 you need to make to get the doomsday argument to its conclusion.

01:17:43 Maybe the doomsday argument is sound or unsound, but you need to make a much stronger and more

01:17:48 controversial assumption to make it go through.

01:17:52 In the case of the simulation argument, I guess one way, maybe, to intuition pump support for

01:17:58 this bland principle of indifference is to consider a sequence of different cases where

01:18:05 the fraction of people who are simulated, as opposed to non-simulated, approaches one.

01:18:12 So in the limiting case where everybody is simulated, obviously you can deduce with certainty

01:18:22 that you are simulated.

01:18:24 If everybody with your experiences is simulated and you know you’ve got to be one of those,

01:18:30 you don’t need a probability at all, you just kind of logically conclude it, right?

01:18:36 So then as we move from a case where say 90% of everybody is simulated, 99%, 99.9%, it

01:18:48 should seem plausible that the probability you assign should sort of approach one certainty

01:18:54 as the fraction approaches the case where everybody is in a simulation.

01:19:02 You wouldn’t expect that to be a discrete, well, if there’s one non simulated person,

01:19:06 then it’s 50, 50, but if we move that, then it’s 100%, like it should kind of, there are

01:19:12 other arguments as well one can use to support this bland principle of indifference, but

01:19:18 that might be enough.
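
A small sketch of this intuition pump: holding a hypothetical basement-reality population fixed and letting the simulated population grow, the credence assigned by the indifference principle climbs smoothly toward certainty; every count below is made up.

```python
# One fixed population of non-simulated ("basement") observers and a growing
# population of simulated observers with the same kinds of experiences: under
# the indifference principle, your credence that you are simulated rises
# smoothly toward 1 as the simulated share grows; there is no discrete jump.

n_non_simulated = 1_000_000  # hypothetical basement-reality observers

for n_simulated in (1_000_000, 10_000_000, 100_000_000, 10**12):
    credence = n_simulated / (n_simulated + n_non_simulated)
    print(f"{n_simulated:>15,} simulated -> P(you are simulated) = {credence:.6f}")
```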

01:19:19 But in general, when you start from time equals zero and go into the future, the fraction

01:19:25 of simulated, if it’s possible to create simulated worlds, the fraction of simulated worlds will

01:19:30 go to one.

01:19:31 Well, I mean, it won’t go all the way to one.

01:19:37 In reality, there would be some ratio, although maybe a technologically mature civilization

01:19:43 could run a lot of simulations using a small portion of its resources, it probably wouldn’t

01:19:52 be able to run infinitely many.

01:19:53 I mean, if we take, say, the physics in the observed universe, if we assume that

01:19:59 that's also the physics at the level of the simulators, there would be limits to the amount

01:20:05 of information processing that any one civilization could perform in its future trajectory.

01:20:16 First of all, there's a limited amount of matter you can get your hands on because with a

01:20:20 positive cosmological constant, the universe is accelerating, there’s like a finite sphere

01:20:25 of stuff, even if you traveled with the speed of light that you could ever reach, you have

01:20:28 a finite amount of stuff.

01:20:31 And then if you think there is like a lower limit to the amount of loss you get when you

01:20:37 perform an erasure of a computation, or if you think, for example, that matter gradually

01:20:42 decays over cosmological timescales, maybe protons decay, other things, and you radiate

01:20:49 out gravitational waves, like there’s all kinds of seemingly unavoidable losses that

01:20:55 occur.

01:20:56 Eventually, we’ll have something like a heat death of the universe or a cold death or whatever,

01:21:04 but yeah.
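
One standard way to put a number on the erasure loss mentioned here is the Landauer bound, E = k_B T ln 2 per erased bit; the sketch below uses that textbook formula with illustrative temperatures and is not a figure from the conversation.

```python
import math

# Landauer bound: the minimum energy dissipated when one bit is erased is
# E = k_B * T * ln(2). Illustrative only; real computers sit far above this floor.

k_B = 1.380649e-23  # Boltzmann constant in joules per kelvin

def min_energy_per_erased_bit(temperature_kelvin: float) -> float:
    return k_B * temperature_kelvin * math.log(2)

for T in (300.0, 2.7):  # roughly room temperature vs. the cosmic background today
    e = min_energy_per_erased_bit(T)
    print(f"T = {T:>5} K: about {e:.2e} J per bit, so about {1 / e:.2e} bit erasures per joule")
```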

01:21:05 So it’s finite, but of course, we don’t know which, if there’s many ancestral simulations,

01:21:11 we don’t know which level we are.

01:21:13 So there could be, couldn't there be like an arbitrary number of simulations that spawned

01:21:18 ours, and those had more resources, in terms of the physical universe, to work with?

01:21:26 Sorry, what do you mean that that could be?

01:21:29 Sort of, okay, so if simulations spawn other simulations, it seems like each new spawn

01:21:40 has fewer resources to work with.

01:21:44 But we don’t know at which step along the way we are at.

01:21:50 Any one observer doesn’t know whether we’re in level 42, or 100, or one, or is that not

01:21:58 matter for the resources?

01:22:01 I mean, it’s true that there would be uncertainty as to, you could have stacked simulations,

01:22:08 and there could then be uncertainty as to which level we are at.

01:22:16 As you remarked also, all the computations performed in a simulation within the simulation

01:22:24 also have to be expended at the level of the simulation.

01:22:28 So the computer in basement reality where all these simulations within the simulations

01:22:32 within the simulations are taking place, like that computer, ultimately, its CPU or whatever

01:22:37 it is, like that has to power this whole tower, right?

01:22:40 So if there is a finite compute power in basement reality, that would impose a limit to how

01:22:46 tall this tower can be.

01:22:48 And if each level kind of imposes a large extra overhead, you might think maybe the

01:22:53 tower would not be very tall, that most people would be low down in the tower.
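
A toy sketch of this point: with a fixed basement compute budget and an assumed overhead factor per level of nesting, the affordable depth of the tower is capped and grows only logarithmically with the budget; every number below is invented for illustration.

```python
# Toy model of stacked simulations: every operation at nesting level n must
# ultimately be executed (with overhead) by the machine in basement reality,
# so a finite basement budget caps how deep the tower of simulations can go.

basement_budget_ops = 10**40     # hypothetical total operations available in basement reality
cost_of_one_simulation = 10**30  # hypothetical operations needed to run one simulated world
overhead_per_level = 10          # assumed multiplicative cost of each extra level of nesting

depth = 0
cost = cost_of_one_simulation
while cost <= basement_budget_ops:
    depth += 1                   # one more level of nesting is still affordable
    cost *= overhead_per_level   # the next level down is this factor more expensive

print(f"Deepest affordable nesting level: {depth}")  # depth grows only logarithmically with the budget
```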

01:23:00 I love the term basement reality.

01:23:03 Let me ask one of the popularizers, you said there’s many through this, when you look at

01:23:09 sort of the last few years of the simulation hypothesis, just like you said, it comes up

01:23:14 every once in a while, some new community discovers it and so on.

01:23:17 But I would say one of the biggest popularizers of this idea is Elon Musk.

01:23:22 Do you have any kind of intuition about what Elon thinks about when he thinks about simulation?

01:23:27 Why is this of such interest?

01:23:30 Is it all the things we’ve talked about, or is there some special kind of intuition about

01:23:34 simulation that he has?

01:23:36 I mean, you might have a better, I think, I mean, why it’s of interest, I think it’s

01:23:40 like seems pretty obvious why, to the extent that one thinks the argument is credible,

01:23:45 why it would be of interest, it would, if it’s correct, tell us something very important

01:23:48 about the world in one way or the other, whichever of the three alternatives for a simulation

01:23:53 that seems like arguably one of the most fundamental discoveries, right?

01:23:58 Now, interestingly, in the case of someone like Elon, so there’s like the standard arguments

01:24:02 for why you might want to take the simulation hypothesis seriously, the simulation argument,

01:24:06 right?

01:24:07 In the case that you are actually Elon Musk, let us say, there's kind of an additional

01:24:12 reason in that what are the chances you would be Elon Musk?

01:24:17 It seems like maybe there would be more interest in simulating the lives of very unusual and

01:24:24 remarkable people.

01:24:26 So if you consider not just simulations where all of human history or the whole of human

01:24:32 civilization are simulated, but also other kinds of simulations, which only include some

01:24:37 subset of people, like in those simulations that only include a subset, it might be more

01:24:44 likely that they would include subsets of people with unusually interesting or consequential

01:24:49 lives.

01:24:50 So if you’re Elon Musk, it’s more likely that you’re an inspiration.

01:24:54 Like if you’re Donald Trump, or if you’re Bill Gates, or you’re like, some particularly

01:25:00 like distinctive character, you might think that, I mean, if you just put yourself

01:25:06 into those shoes, right, there's got to be like an extra reason to think that.

01:25:11 So interesting.

01:25:12 So on a scale of like farmer in Peru to Elon Musk, the more you get towards the Elon Musk,

01:25:19 the higher the probability.

01:25:20 You’d imagine that would be some extra boost from that.

01:25:25 There’s an extra boost.

01:25:26 So he also asked the question of what he would ask an AGI saying, the question being, what’s

01:25:32 outside the simulation?

01:25:34 Do you think about the answer to this question?

01:25:37 If we are living in a simulation, what is outside the simulation?

01:25:41 So the programmer of the simulation?

01:25:44 Yeah, I mean, I think it connects to the question of what’s inside the simulation in that.

01:25:49 So if you had views about the creators of the simulation, it might help you make predictions

01:25:56 about what kind of simulation it is, what might happen, what happens after the simulation,

01:26:03 if there is some after, but also like the kind of setup.

01:26:06 So these two questions would be quite closely intertwined.

01:26:12 But do you think it would be very surprising to like, is the stuff inside the simulation,

01:26:17 is it possible for it to be fundamentally different than the stuff outside?

01:26:21 Yeah.

01:26:22 Like, another way to put it, can the creatures inside the simulation be smart enough to even

01:26:29 understand or have the cognitive capabilities or any kind of information processing capabilities

01:26:34 enough to understand the mechanism that created them?

01:26:40 They might understand some aspects of it.

01:26:43 I mean, it’s a level of, it’s kind of, there are levels of explanation, like degrees to

01:26:50 which you can understand.

01:26:51 So does your dog understand what it is to be human?

01:26:53 Well, it’s got some idea, like humans are these physical objects that move around and

01:26:58 do things.

01:26:59 And a normal human would have a deeper understanding of what it is to be a human.

01:27:05 And maybe some very experienced psychologist or great novelist might understand a little

01:27:12 bit more about what it is to be human.

01:27:14 And maybe superintelligence could see right through your soul.

01:27:18 So similarly, I do think that we are quite limited in our ability to understand all of

01:27:27 the relevant aspects of the larger context that we exist in.

01:27:31 But there might be hope for some.

01:27:33 I think we understand some aspects of it.

01:27:36 But you know, how much good is that?

01:27:38 If there’s like one key aspect that changes the significance of all the other aspects.

01:27:44 So we understand maybe seven out of 10 key insights that you need.

01:27:51 But the answer actually, like varies completely depending on what like number eight, nine

01:27:57 and 10 insight is.

01:28:00 It’s like whether you want to suppose that the big task were to guess whether a certain

01:28:07 number was odd or even, like a 10 digit number.

01:28:12 And if it’s even, the best thing for you to do in life is to go north.

01:28:16 And if it’s odd, the best thing for you is to go south.

01:28:21 Now we are in a situation where maybe through our science and philosophy, we figured out

01:28:25 what the first seven digits are.

01:28:26 So we have a lot of information, right?

01:28:28 Most of it we figured out.

01:28:31 But we are clueless about what the last three digits are.

01:28:34 So we are still completely clueless about whether the number is odd or even and therefore

01:28:38 whether we should go north or go south.

01:28:41 I feel that’s an analogy, but I feel we’re somewhat in that predicament.

01:28:45 We know a lot about the universe.

01:28:48 We’ve come maybe more than half of the way there to kind of fully understanding it.

01:28:52 But the parts we’re missing are plausibly ones that could completely change the overall

01:28:58 upshot of the thing and including change our overall view about what the scheme of priorities

01:29:04 should be or which strategic direction would make sense to pursue.

01:29:07 Yeah.

01:29:08 I think your analogy of us being the dog trying to understand human beings is an entertaining

01:29:15 one, and probably correct.

01:29:17 As the understanding moves from the dog's viewpoint toward the human psychologist's viewpoint,

01:29:24 the steps along the way will involve completely transformative ideas of what it means to be

01:29:29 human.

01:29:30 So the dog has a very shallow understanding.

01:29:33 It’s interesting to think that, to analogize that a dog’s understanding of a human being

01:29:39 is the same as our current understanding of the fundamental laws of physics in the universe.

01:29:45 Oh man.

01:29:47 Okay.

01:29:48 We spent an hour and 40 minutes talking about the simulation.

01:29:51 I like it.

01:29:53 Let’s talk about super intelligence.

01:29:54 At least for a little bit.

01:29:57 And let’s start at the basics.

01:29:58 What to you is intelligence?

01:30:00 Yeah.

01:30:01 I tend not to get too stuck with the definitional question.

01:30:05 I mean, the common sense understanding, like the ability to solve complex problems, to

01:30:11 learn from experience, to plan, to reason, some combination of things like that.

01:30:18 Is consciousness mixed up into that or no?

01:30:21 Is consciousness mixed up into that?

01:30:23 Well, I think it could be fairly intelligent at least without being conscious probably.

01:30:31 So then what is super intelligence?

01:30:33 That would be like something that was much more, had much more general cognitive capacity

01:30:40 than we humans have.

01:30:41 So if we talk about general super intelligence, it would be a much faster learner, be able to

01:30:48 reason much better, make plans that are more effective at achieving its goals, say in a

01:30:53 wide range of complex challenging environments.

01:30:57 In terms of as we turn our eye to the idea of sort of existential threats from super

01:31:03 intelligence, do you think super intelligence has to exist in the physical world or can

01:31:08 it be digital only?

01:31:10 Sort of we think of our general intelligence as us humans, as an intelligence that’s associated

01:31:17 with the body, that’s able to interact with the world, that’s able to affect the world

01:31:22 directly, physically.

01:31:23 I mean, digital only is perfectly fine, I think.

01:31:26 I mean, you could, it’s physical in the sense that obviously the computers and the memories

01:31:31 are physical.

01:31:32 But it’s capability to affect the world sort of.

01:31:34 Could be very strong, even if it has a limited set of actuators, if it can type text on the

01:31:42 screen or something like that, that would be, I think, ample.

01:31:45 So in terms of the concerns of existential threat of AI, how can an AI system that’s

01:31:52 in the digital world have existential risk, sort of, and what are the attack vectors for

01:32:00 a digital system?

01:32:01 Well, I mean, I guess maybe to take one step back, so I should emphasize that I also think

01:32:07 there’s this huge positive potential from machine intelligence, including super intelligence.

01:32:13 And I want to stress that because some of my writing has focused on what can go wrong.

01:32:20 And when I wrote the book Superintelligence, at that point, I felt that there was a kind

01:32:27 of neglect of what would happen if AI succeeds, and in particular, a need to get a more granular

01:32:34 understanding of where the pitfalls are so we can avoid them.

01:32:38 I think that since the book came out in 2014, there has been a much wider recognition of

01:32:45 that.

01:32:46 And a number of research groups are now actually working on developing, say, AI alignment techniques

01:32:51 and so on and so forth.

01:32:52 So yeah, I think now it’s important to make sure we bring back onto the table the upside

01:33:01 as well.

01:33:02 And there’s a little bit of a neglect now on the upside, which is, I mean, if you look

01:33:07 at, I was talking to a friend, if you look at the amount of information that is available,

01:33:11 or people talking and people being excited about the positive possibilities of general

01:33:16 intelligence, that’s not, it’s far outnumbered by the negative possibilities in terms of

01:33:23 our public discourse.

01:33:25 Possibly, yeah.

01:33:26 It’s hard to measure.

01:33:28 But what are, can you linger on that for a little bit, what are some, to you, possible

01:33:33 big positive impacts of general intelligence?

01:33:37 Super intelligence?

01:33:38 Well, I mean, super intelligence, because I tend to also want to distinguish these two

01:33:43 different contexts of thinking about AI and AI impacts, the kind of near term and long

01:33:48 term, if you want, both of which I think are legitimate things to think about, and people

01:33:54 should discuss both of them, but they are different and they often get mixed up.

01:34:02 And then you get confusion, like, I think you get simultaneously like maybe

01:34:06 an overhyping of the near term and an underhyping of the long term.

01:34:10 And so I think as long as we keep them apart, we can have like, two good conversations,

01:34:15 or we can mix them together and have one bad conversation.

01:34:18 Can you clarify just the two things we were talking about, the near term and the long

01:34:22 term?

01:34:23 Yeah.

01:34:24 And what are the distinctions?

01:34:25 Well, it’s a, it’s a blurry distinction.

01:34:28 But say the things I wrote about in this book, Superintelligence, are long term; things people

01:34:34 are worrying about today with, I don't know, algorithmic discrimination, or even things like

01:34:41 self driving cars and drones and stuff, are more near term.

01:34:47 And then of course, you could imagine some medium term where they kind of overlap and

01:34:51 they one evolves into the other.

01:34:55 But at any rate, I think both, yeah, the issues look kind of somewhat different depending

01:35:00 on which of these contexts.

01:35:01 So I think it'd be nice if we can talk about the long term and think about a

01:35:10 positive impact or a better world because of the existence of the long term super intelligence.

01:35:17 Do you have views of such a world?

01:35:19 Yeah.

01:35:20 I mean, I guess it’s a little hard to articulate because it seems obvious that the world has

01:35:24 a lot of problems as it currently stands.

01:35:29 And it’s hard to think of any one of those, which it wouldn’t be useful to have like a

01:35:36 friendly aligned super intelligence working on.

01:35:40 So from health to the economic system to be able to sort of improve the investment and

01:35:48 trade and foreign policy decisions, all that kind of stuff.

01:35:52 All that kind of stuff and a lot more.

01:35:56 I mean, what’s the killer app?

01:35:57 Well, I don’t think there is one.

01:35:59 I think AI, especially artificial general intelligence is really the ultimate general

01:36:05 purpose technology.

01:36:07 So it’s not that there is this one problem, this one area where it will have a big impact.

01:36:12 But if and when it succeeds, it will really apply across the board in all fields where

01:36:18 human creativity and intelligence and problem solving is useful, which is pretty much all

01:36:23 fields.

01:36:24 Right.

01:36:25 The thing that it would do is give us a lot more control over nature.

01:36:30 It wouldn’t automatically solve the problems that arise from conflict between humans, fundamentally

01:36:37 political problems.

01:36:38 Some subset of those might go away if you just had more resources and cooler tech.

01:36:42 But some subset would require coordination that is not automatically achieved just by

01:36:50 having more technological capability.

01:36:53 But anything that’s not of that sort, I think you just get an enormous boost with this kind

01:36:59 of cognitive technology once it goes all the way.

01:37:02 Now, again, that doesn’t mean I’m thinking, oh, people don’t recognize what’s possible

01:37:10 with current technology and like sometimes things get overhyped.

01:37:14 But I mean, those are perfectly consistent views to hold.

01:37:16 The ultimate potential being enormous.

01:37:19 And then it’s a very different question of how far are we from that or what can we do

01:37:23 with near term technology?

01:37:25 Yeah.

01:37:26 So what’s your intuition about the idea of intelligence explosion?

01:37:29 So there’s this, you know, when you start to think about that leap from the near term

01:37:34 to the long term, the natural inclination, like for me, sort of building machine learning

01:37:40 systems today, it seems like it’s a lot of work to get the general intelligence, but

01:37:45 there’s some intuition of exponential growth of exponential improvement of intelligence

01:37:49 explosion.

01:37:50 Can you maybe try to elucidate, try to talk about what’s your intuition about the possibility

01:38:00 of an intelligence explosion, that it won’t be this gradual slow process, there might

01:38:05 be a phase shift?

01:38:07 Yeah, I think it’s, we don’t know how explosive it will be.

01:38:13 I think for what it’s worth, it seems fairly likely to me that at some point, there will

01:38:19 be some intelligence explosion, like some period of time, where progress in AI becomes

01:38:24 extremely rapid, roughly in the area where you might say it's kind of humanish

01:38:32 equivalent in core cognitive faculties, though the concept of human equivalence starts to

01:38:40 break down when you look too closely at it.

01:38:43 And just how explosive does something have to be for it to be called an intelligence

01:38:48 explosion?

01:38:49 Like, does it have to be like overnight, literally, or a few years?

01:38:54 But overall, I guess, if you plotted the opinions of different people in the world, I would

01:39:00 put somewhat more probability towards the intelligence explosion scenario than probably

01:39:06 the average, you know, AI researcher would, I guess.

01:39:09 So and then the other part of the intelligence explosion, or just forget explosion, just

01:39:14 progress is once you achieve that gray area of human level intelligence, is it obvious

01:39:21 to you that we should be able to proceed beyond it to get to super intelligence?

01:39:26 Yeah, that seems, I mean, as much as any of these things can be obvious, given we’ve never

01:39:33 had one, people have different views, smart people have different views, there's like some

01:39:39 degree of uncertainty that always remains for any big, futuristic, philosophical grand

01:39:44 question, just because we realize humans are fallible, especially about these things.

01:39:49 But it does seem, as far as I’m judging things based on my own impressions, that it seems

01:39:55 very unlikely that that would be a ceiling at or near human cognitive capacity.

01:40:04 And that’s such a, I don’t know, that’s such a special moment, it’s both terrifying and

01:40:10 exciting to create a system that’s beyond our intelligence.

01:40:15 So maybe you can step back and say, like, how does that possibility make you feel that

01:40:22 we can create something, it feels like there’s a line beyond which it steps, it’ll be able

01:40:28 to outsmart you.

01:40:31 And therefore, it feels like a step where we lose control.

01:40:35 Well, I don’t think the latter follows that is you could imagine.

01:40:42 And in fact, this is what a number of people are working towards making sure that we could

01:40:46 ultimately project higher levels of problem solving ability while still making sure that

01:40:53 they are aligned, like they are in the service of human values.

01:40:58 I mean, so losing control, I think, is not a given that that would happen.

01:41:06 Now you asked how it makes me feel, I mean, to some extent, I’ve lived with this for so

01:41:10 long, since as long as I can remember, being an adult or even a teenager, it seemed to

01:41:16 me obvious that at some point, AI will succeed.

01:41:19 And so I actually misspoke, I didn’t mean control, I meant, because the control problem

01:41:27 is an interesting thing.

01:41:28 And I think the hope is, at least we should be able to maintain control over systems that

01:41:33 are smarter than us.

01:41:35 But we do lose our specialness, we sort of will lose our place as the smartest, coolest

01:41:46 thing on earth.

01:41:48 And there’s an ego involved with that, that humans aren’t very good at dealing with.

01:41:55 I mean, I value my intelligence as a human being.

01:41:59 It seems like a big transformative step to realize there’s something out there that’s

01:42:04 more intelligent.

01:42:05 I mean, you don’t see that as such a fundamentally…

01:42:09 I think, yeah, well, I think it would be small, because I mean, I think there are already

01:42:14 a lot of things out there that are, I mean, certainly, if you think the universe is big,

01:42:18 there’s going to be other civilizations that already have super intelligences, or that

01:42:23 just naturally have brains the size of beach balls and are like, completely leaving us

01:42:29 in the dust.

01:42:30 And we haven’t come face to face with them.

01:42:33 We haven’t come face to face.

01:42:34 But I mean, that’s an open question, what would happen in a kind of post human world?

01:42:41 Like how much day to day would these super intelligences be involved in the lives of

01:42:49 ordinary?

01:42:50 I mean, you could imagine some scenario where it would be more like a background thing that

01:42:54 would help protect against some things, but you wouldn’t like that, they wouldn’t be this

01:42:58 intrusive kind of, like making you feel bad by, like, making clever jokes at your expense,

01:43:04 like there’s like all sorts of things that maybe in the human context would feel awkward

01:43:09 about that.

01:43:10 You don’t want to be the dumbest kid in your class, everybody picks it, like, a lot of

01:43:14 those things, maybe you need to abstract away from, if you’re thinking about this context

01:43:19 where we have infrastructure that is in some sense, beyond any or all humans.

01:43:26 I mean, it’s a little bit like, say, the scientific community as a whole, if you think of that

01:43:30 as a mind, it’s a little bit of a metaphor.

01:43:33 But I mean, obviously, it’s got to be like, way more capacious than any individual.

01:43:39 So in some sense, there is this mind like thing already out there that’s just vastly

01:43:44 more intelligent than any individual is.

01:43:49 And we think, okay, that’s, you just accept that as a fact.

01:43:55 That’s the basic fabric of our existence is there’s super intelligent.

01:43:59 You get used to a lot of, I mean, there’s already Google and Twitter and Facebook, these

01:44:06 recommender systems that are the basic fabric of our, I could see them becoming, I mean,

01:44:13 do you think of the collective intelligence of these systems as already perhaps reaching

01:44:17 super intelligence level?

01:44:19 Well, I mean, so here it comes to the concept of intelligence and the scale and what human

01:44:26 level means.

01:44:29 The kind of vagueness and indeterminacy of those concepts starts to dominate how you

01:44:37 would answer that question.

01:44:38 So like, say the Google search engine has a very high capacity of a certain kind, like

01:44:45 retrieving, remembering and retrieving information, particularly like text or images that are,

01:44:54 you have a kind of string, a word string key, obviously superhuman at that, but a vast set

01:45:02 of other things it can’t even do at all.

01:45:06 Not just not do well, but so you have these current AI systems that are superhuman in

01:45:12 some limited domain and then like radically subhuman in all other domains.

01:45:19 Same with, say, a chess engine, or just a simple computer that can multiply really large numbers,

01:45:23 right?

01:45:24 So it’s going to have this like one spike of super intelligence and then a kind of a

01:45:28 zero level of capability across all other cognitive fields.

01:45:32 Yeah, I don’t necessarily think the generalness, I mean, I’m not so attached with it, but I

01:45:37 think it’s sort of, it’s a gray area and it’s a feeling, but to me sort of alpha zero is

01:45:44 somehow much more intelligent, much, much more intelligent than Deep Blue.

01:45:51 And to say which domain, you could say, well, these are both just board games, they’re both

01:45:55 just able to play board games, who cares if they’re going to do better or not, but there’s

01:46:06 something about the learning, the self play, that makes it cross over into that land

01:46:06 of intelligence that doesn’t necessarily need to be general.

01:46:09 In the same way, Google is much closer to Deep Blue currently in terms of its search

01:46:14 engine than it is to sort of the alpha zero.

01:46:17 And the moment it becomes, the moment these recommender systems really become more like

01:46:22 alpha zero, being able to learn a lot without being heavily constrained

01:46:29 by human interaction, that seems like a special moment in time.

01:46:34 I mean, certainly learning ability seems to be an important facet of general intelligence,

01:46:43 that you can take some new domain that you haven’t seen before and you weren’t specifically

01:46:48 pre programmed for, and then figure out what’s going on there and eventually become really

01:46:52 good at it.

01:46:53 So that’s something alpha zero has much more of than Deep Blue had.

01:47:00 And in fact, I mean, systems like alpha zero can learn not just Go but other games, and, in fact,

01:47:06 probably beat Deep Blue in chess and so forth.

01:47:09 So you do see this as general and it matches the intuition.

01:47:13 We feel it’s more intelligent and it also has more of this general purpose learning

01:47:17 ability.

01:47:20 And if we get systems that have even more general purpose learning ability, it might

01:47:23 also trigger an even stronger intuition that they are actually starting to get smart.

01:47:28 So if you were to pick a future, what do you think a utopia looks like with AGI systems?

01:47:33 Sort of, is it the neural link brain computer interface world where we’re kind of really

01:47:40 closely interlinked with AI systems?

01:47:43 Is it possibly where AGI systems replace us completely while maintaining the values and

01:47:50 the consciousness?

01:47:53 Is it something like it’s a completely invisible fabric, like you mentioned, a society where

01:47:57 just aids and a lot of stuff that we do like curing diseases and so on.

01:48:02 What is utopia if you get to pick?

01:48:03 Yeah, I mean, it is a good question and a deep and difficult one.

01:48:09 I’m quite interested in it.

01:48:10 I don’t have all the answers yet, but I might never have.

01:48:15 But I think there are some different observations one can make.

01:48:19 One is if this scenario actually did come to pass, it would open up this vast space

01:48:26 of possible modes of being.

01:48:30 On one hand, material and resource constraints would just be like expanded dramatically.

01:48:36 So there would be a lot, a big pie, let's say.

01:48:41 Also it would enable us to do things, including to ourselves, it would just open up this much

01:48:51 larger design space and option space than we have ever had access to in human history.

01:48:59 I think two things follow from that.

01:49:01 One is that we probably would need to make a fairly fundamental rethink of what ultimately

01:49:08 we value, like think things through more from first principles.

01:49:11 The context would be so different from the familiar that we couldn't just take what

01:49:15 we've always been doing and then add, like, oh, well, now we have this cleaning robot that cleans

01:49:21 the dishes in the sink and a few other small things.

01:49:24 I think we would have to go back to first principles.

01:49:27 So even from the individual level, go back to the first principles of what is the meaning

01:49:31 of life, what is happiness, what is fulfillment.

01:49:35 And then also connected to this large space of resources is that it would be possible.

01:49:43 And I think something we should aim for is to do well by the lights of more than one

01:49:52 value system.

01:49:55 That is, we wouldn’t have to choose only one value criterion and say we’re going to do

01:50:06 something that scores really high on the metric of, say, hedonism, and then is like a zero

01:50:15 by other criteria, like kind of wireheaded brains in a vat, and it's like a lot of pleasure,

01:50:22 that’s good, but then like no beauty, no achievement like that.

01:50:26 I think, rather, to some significant, not unlimited, but significant extent,

01:50:32 it would be possible to do very well by many criteria, like maybe you could get like 98%

01:50:40 of the best according to several criteria at the same time, given this great expansion

01:50:47 of the option space.

01:50:50 So have competing value systems, competing criteria, as a sort of forever, just like

01:50:57 our Democrat versus Republican, there always seem to be these multiple parties that are

01:51:02 useful for our progress in society, even though it might seem dysfunctional inside the moment,

01:51:08 but having multiple value systems seems to be beneficial for, I guess, a balance of

01:51:14 power.

01:51:15 So that’s, yeah, not exactly what I have in mind that it, well, although maybe in an indirect

01:51:21 way it is, but that if you had the chance to do something that scored well on several

01:51:30 different metrics, our first instinct should be to do that rather than immediately leap

01:51:36 to asking which ones of these value systems we are going to screw over.

01:51:40 Like, let's first try to do very well by all of them.

01:51:44 Then it might be that you can’t get 100% of all and you would have to then like have the

01:51:49 hard conversation about which one will only get 97%.

01:51:51 There you go.

01:51:52 There’s my cynicism that all of existence is always a trade off, but you say, maybe

01:51:57 it’s not such a bad trade off.

01:51:58 Let’s first at least try it.

01:52:00 Well, this would be a distinctive context in which at least some of the constraints

01:52:06 would be removed.

01:52:07 I’ll leave it at that.

01:52:08 So there’s probably still be trade offs in the end.

01:52:10 It’s just that we should first make sure we at least take advantage of this abundance.

01:52:16 So in terms of thinking about this, like, yeah, one should think, I think in this kind

01:52:21 of frame of mind of generosity and inclusiveness to different value systems and see how far

01:52:31 one can get there at first.

01:52:34 And I think one could do something that would be very good according to many different criteria.

01:52:41 We kind of talked about AGI fundamentally transforming the value system of our existence,

01:52:50 the meaning of life.

01:52:52 But today, what do you think is the meaning of life?

01:52:56 The silliest or perhaps the biggest question, what’s the meaning of life?

01:52:59 What’s the meaning of existence?

01:53:03 What gives your life fulfillment, purpose, happiness, meaning?

01:53:07 Yeah, I think these are, I guess, a bunch of different but related questions in there

01:53:14 that one can ask.

01:53:17 Happiness meaning.

01:53:18 Yeah.

01:53:19 I mean, like you could imagine somebody getting a lot of happiness from something that they

01:53:22 didn’t think was meaningful.

01:53:27 Like mindless things, like watching reruns of some television series, eating junk food, like

01:53:31 maybe for some people that gives pleasure, but they wouldn't think it had a lot of meaning.

01:53:35 Whereas, conversely, something that might be quite loaded with meaning might not be

01:53:39 very fun always, like some difficult achievement that really helps a lot of people, maybe requires

01:53:45 self sacrifice and hard work.

01:53:49 So these things can, I think, come apart, which is something to bear in mind also

01:53:57 if you're thinking about these utopia questions; to actually start to do some

01:54:06 constructive thinking about that, you might have to isolate and distinguish these different

01:54:12 kinds of things that might be valuable in different ways.

01:54:16 Make sure you can sort of clearly perceive each one of them and then you can think about

01:54:20 how you can combine them.

01:54:22 And just as you said, hopefully come up with a way to maximize all of them together.

01:54:27 Yeah, or at least get, I mean, maximize or get like a very high score on a wide range

01:54:33 of them, even if not literally all.

01:54:35 You can always come up with values that are exactly opposed to one another, right?

01:54:39 But I think for many values, they're only kind of opposed if you place them within

01:54:45 a certain dimensionality of your space, like there are shapes that you can't

01:54:51 untangle like in a given dimensionality, but if you start adding dimensions, then it might

01:54:57 in many cases just be that they are easy to pull apart and you could.

01:55:02 So we’ll see how much space there is for that, but I think that there could be a lot in this

01:55:07 context of radical abundance, if ever we get to that.

01:55:12 I don’t think there’s a better way to end it, Nick.

01:55:15 You’ve influenced a huge number of people to work on what could very well be the most

01:55:20 important problems of our time.

01:55:22 So it’s a huge honor.

01:55:23 Thank you so much for talking.

01:55:24 Well, thank you for coming by, Lex.

01:55:25 That was fun.

01:55:26 Thank you.

01:55:27 Thanks for listening to this conversation with Nick Bostrom, and thank you to our presenting

01:55:31 sponsor, Cash App.

01:55:33 Please consider supporting the podcast by downloading Cash App and using code LEXPodcast.

01:55:40 If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast,

01:55:45 subscribe on Patreon, or simply connect with me on Twitter at Lex Friedman.

01:55:50 And now, let me leave you with some words from Nick Bostrom.

01:55:55 Our approach to existential risks cannot be one of trial and error.

01:56:00 There’s no opportunity to learn from errors.

01:56:02 The reactive approach, see what happens, limit damages, and learn from experience is unworkable.

01:56:09 Rather, we must take a proactive approach.

01:56:13 This requires foresight to anticipate new types of threats and a willingness to take

01:56:17 decisive, preventative action and to bear the costs, moral and economic, of such actions.

01:56:26 Thank you for listening, and hope to see you next time.