Transcript
00:00:00 we can actually figure out where are the aliens out there in space time by being clever about the
00:00:04 few things we can see, one of which is our current date. And so now that you have this living
00:00:09 cosmology, we can tell the story that the universe starts out empty. And then at some point, things
00:00:14 like us appear very primitive, and then some of those stop being quiet and expand. And then for
00:00:20 a few billion years, they expand, and then they meet each other. And then for the next hundred
00:00:25 billion years, they commune with each other. That is, the usual models of cosmology say that in
00:00:30 roughly 150 billion years, the expansion of the universe will happen so much that all you’ll have
00:00:37 left is some galaxy clusters that are sort of disconnected from each other. But before then,
00:00:43 they will interact. There will be this community of all the grabby alien civilizations, and each
00:00:48 one of them will hear about and even meet thousands of others. And we might hope to join
00:00:54 them someday and become part of that community. The following is a conversation with Robin Hanson,
00:01:01 an economist at George Mason University, and one of the most fascinating, wild, fearless,
00:01:06 and fun minds I’ve ever gotten the chance to accompany for a time in exploring questions
00:01:11 of human nature, human civilization, and alien life out there in our impossibly big universe.
00:01:19 He is the coauthor of a book titled The Elephant in the Brain: Hidden Motives in Everyday Life,
00:01:25 The Age of Em: Work, Love, and Life when Robots Rule the Earth, and a fascinating recent paper
00:01:31 I recommend on quote, Grabby Aliens, titled If Loud Aliens Explain Human Earliness,
00:01:39 Quiet Aliens Are Also Rare. This is the Lex Fridman podcast. To support it, please check
00:01:45 out our sponsors in the description. And now, dear friends, here’s Robin Hanson.
00:01:52 You are working on a book about quote, grabby aliens. This is a technical term, like the Big
00:01:58 Bang. So what are grabby aliens? Grabby aliens expand fast into the universe and they change
00:02:07 stuff. That’s the key concept. So if they were out there, we would notice. That’s the key idea. So
00:02:16 the question is, where are the grabby aliens? So Fermi’s question is, where are the aliens? And we
00:02:22 could split that into two questions, right? Where are the quiet, hard to see aliens? And where are the
00:02:27 big, loud, grabby aliens? So it’s actually hard to say where all the quiet ones are, right?
00:02:33 There could be a lot of them out there because they’re not doing much. They’re not making a big
00:02:38 difference in the world. But the grabby aliens, by definition, are the ones you would see.
00:02:43 We don’t know exactly what they do where they go, but the idea is they’re in some sort
00:02:48 of competitive world where each part of them is trying to grab more stuff and do something with
00:02:55 it. And almost surely, whatever is the most competitive thing to do with all the stuff they
00:03:02 grab isn’t to leave it alone the way it started, right? So we humans, when we go around the Earth
00:03:08 and use stuff, we change it. We would turn a forest into a farmland, turn a harbor into a city.
00:03:14 So the idea is aliens would do something with it. And so we’re not exactly sure what it would look
00:03:20 like, but it would look different. So somewhere in the sky, we would see big spheres of different
00:03:25 activity where things had been changed because they had been there. Expanding spheres. Right.
00:03:30 So as you expand, you aggressively interact and change the environment.
00:03:34 So the word grabby versus loud, you’re using them sometimes synonymously, sometimes not.
00:03:40 Grabby to me is a little bit more aggressive. What does it mean to be loud? What does it mean
00:03:48 to be grabby? What’s the difference? And loud in what way? Is it visual? Is it sound? Is it some
00:03:53 other physical phenomena like gravitational waves? Are you using this kind of in a broad
00:03:59 philosophical sense or there’s a specific thing that it means to be loud in this universe of ours?
00:04:07 My coauthors and I put together a paper with a particular mathematical model. And so we use the
00:04:14 term grabby aliens to describe that more particular model. And the idea is it’s a
00:04:18 more particular model of the general concept of loud. So loud would just be the general idea that
00:04:23 they would be really obvious. So grabby is the technical term,
00:04:27 is it in the title of the paper? It’s in the body. The title is actually about loud and quiet.
00:04:33 Right. So the idea is you want to distinguish your particular model of things from the general
00:04:38 category of things everybody else might talk about. So that’s how we distinguish.
00:04:41 The paper title is If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare.
00:04:48 If life on earth, God, this is such a good abstract. If life on earth had to achieve
00:04:52 N hard steps to reach humanity’s level, then the chance of this event rose as time to the nth
00:05:00 power. So we’ll talk about power, we’ll talk about linear increase. So what is the technical definition
00:05:06 of grabby? How do you envision grabbiness? And why, in contrast, aren’t humans
00:05:17 grabby? So like, where’s that line? Is it well definable? What is grabby, what is not grabby?
00:05:23 We have a mathematical model of the distribution of advanced civilizations, i.e. aliens in space
00:05:29 and time. That model has three parameters. And we can set each one of those parameters from data.
00:05:37 And therefore, we claim this is actually what we know about where they are in space time.
00:05:42 So the key idea is they appear at some point in space time. And then after some short delay,
00:05:48 they start expanding. And they expand at some speed. And the speed is one of those parameters.
00:05:54 That’s one of the three. And the other two parameters are about how they appear in time.
00:05:59 That is they appear at random places. And they appear in time according to a power law.
00:06:04 And that power law has two parameters. And we can fit each of those parameters to data. And so then
00:06:09 we can say, now we know, we know the distribution of advanced civilizations in space and time. So
00:06:16 we are right now a new civilization, and we have not yet started to expand. But plausibly,
00:06:21 we would start to do that within say, 10 million years of the current moment. That’s plenty of time.
00:06:27 And 10 million years is a really short duration in the history of the universe. So we are at the
00:06:33 moment, a sort of random sample of the kind of times at which an advanced civilization might
00:06:37 appear. Because we may or may not become grabby. But if we do, we’ll do it soon. And so our current
00:06:42 date is a sample. And that gives us one of the other parameters. The second parameter is the
00:06:47 constant in front of the power law. And that’s derived from our current date.
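To make the structure concrete, here is a minimal sketch of the model as described so far; the names are my own illustration, not the paper’s notation:

```python
# Minimal sketch of the grabby-aliens model as described above.
# Parameter names are illustrative, not the paper's notation.
from dataclasses import dataclass

@dataclass
class GrabbyAliensModel:
    expansion_speed: float  # fraction of light speed; inferred from our not seeing them
    n_hard_steps: int       # the power in the appearance power law; fit from Earth's history
    rate_constant: float    # constant in front of the power law; set by our current date

    def appearance_rate(self, t: float) -> float:
        # Rate at which new grabby civilizations appear at cosmic time t,
        # up to normalization: a power law in t.
        return self.rate_constant * t ** self.n_hard_steps
```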
00:06:51 So power law, what is the N in the power law?
00:06:58 That’s the more complicated thing to explain.
00:07:00 Right. Advanced life appeared by going through a sequence of hard steps. So starting with very
00:07:08 simple life, and here we are at the end of this process at pretty advanced life. And so we had
00:07:12 to go through some intermediate steps such as sexual selection, photosynthesis, multicellular
00:07:19 animals. And the idea is that each of those steps was hard. Evolution just took a long time searching
00:07:26 in a big space of possibilities to find each of those steps. And the challenge was to achieve
00:07:32 all of those steps by a deadline of when the planets would no longer host simple life. And so
00:07:40 Earth has been really lucky compared to all the other billions of planets out there,
00:07:44 in that we managed to achieve all these steps in the short time of the five billion years that
00:07:51 Earth can support simple life. So not all steps, but a lot of them, because we don’t know how many
00:07:56 steps there are before you start the expansion. So these are all the steps from the birth of life
00:08:02 to the initiation of major expansion. Right. So we’re pretty sure that it would happen really
00:08:07 soon, so it couldn’t be the same sort of hard step as the earlier ones, in terms of taking
00:08:12 a long time. So when we look at the history of Earth, we look at the durations of the major
00:08:18 things that have happened. That suggests that there’s roughly say six hard steps that happened,
00:08:25 say between three and 12, and that we have just achieved the last one that would take a long time.
00:08:32 Which is?
00:08:34 We don’t know. But whatever it is, we’ve just achieved the last one.
00:08:38 We’re talking about humans or aliens here. So let’s talk about some of these steps. So
00:08:42 Earth is really special in some way. We don’t exactly know the level of specialness. We don’t
00:08:47 really know which steps were the hardest or not because we just have a sample of one. But you’re
00:08:53 saying that there’s three to 12 steps that we have to go through to get to where we are that are hard
00:08:58 steps, hard to find, something that took a long time and is unlikely. There’s a lot of ways to fail.
00:09:07 There’s a lot more ways to fail than to succeed. The first step would be sort of the very simplest
00:09:13 form of life of any sort. And then we don’t know whether that first sort is the first sort that we
00:09:20 see in the historical record or not. But then some other steps are, say, the development of
00:09:24 photosynthesis, the development of sexual reproduction. There’s the development of
00:09:30 eukaryote cells, which are a certain kind of complicated cell that seems to have only
00:09:35 appeared once. And then there’s multicellularity, that is multiple cells coming together to large
00:09:40 organisms like us. And in this statistical model of trying to fit all these steps into a finite
00:09:48 window, the model actually predicts that these steps could be of varying difficulties. That is,
00:09:53 they could each take different amounts of time on average. But if you’re lucky enough that they all
00:09:58 appear in a very short time, then the durations between them will be roughly equal. And the time
00:10:04 remaining leftover in the rest of the window will also be the same length. So we at the moment have
00:10:10 roughly a billion years left on Earth until complex life like us would no longer be possible.
00:10:16 Life appeared roughly 400 million years after the very first time when life was possible at the very
00:10:21 beginning. So those two numbers right there give you the rough estimate of six hard steps.
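As a rough illustration of that claim (my own toy simulation, not from the paper): if each hard step takes exponentially long on average, but we condition on all steps finishing inside a short window, the realized durations come out roughly equal regardless of how hard each step was, and so does the leftover time.

```python
import numpy as np

rng = np.random.default_rng(0)
window = 1.0                          # habitable window, arbitrary units
means = np.array([5.0, 10.0, 20.0])   # three steps of very different difficulty

# Draw many candidate histories; keep only the rare ones that beat the deadline.
samples = rng.exponential(means, size=(2_000_000, 3))
lucky = samples[samples.sum(axis=1) < window]

print("fraction making the deadline:", len(lucky) / len(samples))   # very small
print("mean step durations:", lucky.mean(axis=0))                   # each roughly window/4
print("mean leftover time:", window - lucky.sum(axis=1).mean())     # also roughly window/4
```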
00:10:26 Just to build up an intuition here. So we’re trying to create a simple mathematical model
00:10:31 of how life emerges and expands in the universe. And there’s a section in this paper, how many
00:10:39 hard steps? Question mark. Right. The two most plausibly diagnostic Earth durations seem to be
00:10:45 the one remaining after now before Earth becomes uninhabitable for complex life. So you estimate
00:10:50 how long Earth lasts, how many hard steps. There’s windows for doing different hard steps,
00:10:59 and you can, sort of like queueing theory, mathematically estimate the timing
00:11:09 of the passing of the hard steps, the taking of the hard steps. Sort of a coldly mathematical
00:11:15 look. If life, pre-expansionary life, requires n steps, what is the probability of taking
00:11:25 those steps on an Earth that lasts a billion years or two billion years or five billion years
00:11:30 or 10 billion years? And you say, solving for n using the observed durations of 1.1 and 0.4
00:11:38 then gives n values of 3.9 and 12.5, range 5.7 to 26, suggesting a middle estimate of at least six.
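The arithmetic behind those numbers can be checked directly. Under the hard-steps model, each diagnostic duration d has expected value T/(n+1), where T is the total habitable window, so n = T/d - 1. Taking T as roughly 5.4 billion years (consistent with the "five billion years" figure above; the exact value is my assumption):

```python
# Rough check of the abstract's numbers: n = T/d - 1.
T = 5.4  # assumed total habitable window, billions of years

for d in (1.1, 0.4):  # remaining duration, and delay before life first appeared
    print(f"duration {d} Gyr -> n = {T / d - 1:.1f}")
# duration 1.1 Gyr -> n = 3.9
# duration 0.4 Gyr -> n = 12.5
```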
00:11:46 That’s where you said six hard steps. Right. Just to get to where we are. Right. We started at the
00:11:54 bottom. Now we’re here. That took six steps on average. The key point is on average, these things
00:12:00 on any one random planet would take trillions of trillions of years, just a really long time.
00:12:07 And so we’re really lucky that they all happened really fast in a short time before our window
00:12:12 closed. And the chance of that happening in that short window goes as that time period to the power
00:12:19 of the number of steps. And so that was where the power we talked about before came from. And so
00:12:25 that means in the history of the universe, we should overall roughly expect advanced life to
00:12:30 appear as a power law in time. So that very early on, there was very little chance of anything
00:12:36 appearing. And then later on as things appear, other things are appearing somewhat closer to
00:12:40 them in time because they’re all going as this power law. What is the power law? Can we, for
00:12:46 people who are not math inclined, can you describe what a power law is? So say the function X is
00:12:52 linear and X squared is quadratic. So it’s the power of two. If we make X to the three, that’s
00:12:59 cubic or the power of three. And so X to the sixth is the power of six. And so we’d say
00:13:06 life appears in the universe on a planet like Earth in proportion to the sixth power of the time that it’s
00:13:12 been ready for life to appear. And over the universe in general, it’ll appear according to roughly a
00:13:22 power law like that. What is the X, what is N? Is it the number of hard steps?
00:13:27 Yes, the number of hard steps. So that’s the idea.
00:13:30 It’s like if you’re gambling and you’re doubling up every time, this is the probability you just
00:13:35 keep winning. So it gets very unlikely very quickly. And so we’re the result of this unlikely
00:13:45 chain of successes. It’s actually a lot like cancer. So the dominant model of cancer in an
00:13:50 organism like each of us is that we have all these cells and in order to become cancerous,
00:13:55 a single cell has to go through a number of mutations and these very unlikely mutations.
00:14:00 And so any one cell is very unlikely to have any, have all these mutations happen by the time
00:14:05 your lifespan’s over. But we have enough cells in our body that the chance of any one cell
00:14:10 producing cancer by the end of your life is actually pretty high, more like 40%.
00:14:15 And so the chance of cancer appearing in your lifetime also goes as a power law,
00:14:19 this power of the number of mutations that’s required for any one cell in your body to become
00:14:24 cancerous.
00:14:24 The longer you live, the more likely you are to have cancer cells.
00:14:28 And the power is also roughly six. That is the chance of you getting cancer is
00:14:34 roughly the sixth power of the time since you were born.
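A quick illustration of that power law (a toy calculation in the spirit of the classic multistage model of cancer, not a medical estimate): if cumulative risk grows as the sixth power of age, halving the age cuts the risk by a factor of 64.

```python
# Toy multistage-model arithmetic: cumulative risk ~ t^6.
def relative_risk(age: float, reference_age: float = 80, power: int = 6) -> float:
    # Risk at `age` relative to risk at `reference_age`.
    return (age / reference_age) ** power

for age in (20, 40, 60, 80):
    print(age, f"{relative_risk(age):.4f}")
# 20 0.0002, 40 0.0156, 60 0.1780, 80 1.0000
```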
00:14:37 It is perhaps not lost on people that you’re comparing power laws of the survival or the
00:14:45 arrival of the human species to cancerous cells.
00:14:50 The same mathematical model, but of course, we might have a different value assumption
00:14:55 about the two outcomes. But of course, from the point of view of cancer, it’s more similar.
00:15:00 From the point of view of cancer, it’s a win-win. We both get to thrive, I suppose.
00:15:09 It is interesting to take the point of view of all kinds of life forms on earth,
00:15:13 of viruses, of bacteria. They have a very different view.
00:15:18 It’s like the Instagram channel, Nature is Metal.
00:15:22 The ethic under which nature operates doesn’t often coincide or correlate with human
00:15:29 morals. It seems cold and machine like in the selection process that it performs.
00:15:38 I am an analyst, I’m a scholar, an intellectual, and I feel I should carefully distinguish
00:15:44 predicting what’s likely to happen and then evaluating or judging what I think would be
00:15:50 better to happen. And it’s a little dangerous to mix those up too closely because then we can
00:15:56 have wishful thinking. And so I try typically to just analyze what seems likely to
00:16:01 happen regardless of whether I like it or whether we do anything about it. And then once you see
00:16:07 a rough picture of what’s likely to happen if we do nothing, then we can ask, well, what might we
00:16:12 prefer? And ask, where could the levers be to move it at least a little toward what we might prefer?
00:16:19 But often just doing that analysis of what’s likely to happen if we do nothing offends many
00:16:24 people. They find that dehumanizing or cold or metal, as you say, to just say, well, this is
00:16:31 what’s likely to happen and it’s not your favorite, sorry, but maybe we can do something, but maybe
00:16:39 we can’t do that much. This is very interesting that the cold analysis, whether it’s geopolitics,
00:16:48 whether it’s medicine, whether it’s economics, sometimes misses some very specific aspect of
00:16:59 the human condition. Like for example, when you look at a doctor and the act of a doctor helping a
00:17:07 single patient, if you do the analysis of that doctor’s time and cost of the medicine or the
00:17:14 surgery or the transportation of the patient, this is the Paul Farmer question, you know, is it worth
00:17:20 spending ten, twenty, thirty thousand dollars on this one patient? When you look at all the people
00:17:26 that are suffering in the world, that money could be spent so much better. And yet there’s something
00:17:31 about human nature that wants to help the person in front of you, and that is actually the right
00:17:39 thing to do, despite the analysis. And sometimes when you do the analysis, there’s something
00:17:46 about the human mind that allows you to not take that leap, that irrational leap to act in this way,
00:17:54 that the analysis explains it away. Well it’s like, for example, the U.S. government, you know, the
00:18:02 DOT, Department of Transportation, puts a value of I think like nine million dollars on a human life.
00:18:09 And the moment you put that number on a human life, you can start thinking, well okay, I can start
00:18:13 making decisions about this or that, and with a sort of cold economic perspective, and then you
00:18:20 might lose, you might deviate from a deeper truth of what it means to be human somehow. You have to
00:18:28 dance, because if you put too much weight on the anecdotal evidence, on these kinds of human
00:18:35 emotions, then you could also, probably more likely, deviate from truth.
00:18:42 But there’s something about that cold analysis. Like I’ve been listening to a lot of people
00:18:47 coldly analyze wars. War in Yemen, war in Syria, Israel, Palestine, war in Ukraine, and there’s
00:18:56 something lost when you do a cold analysis of why something happened. When you talk about energy,
00:19:03 talking about sort of conflict, competition over resources, when you talk about geopolitics,
00:19:11 sort of models of geopolitics, and why a certain war happened, you lose something about the suffering
00:19:16 that happens. I don’t know. It’s an interesting thing, because you’re both, you’re exceptionally good
00:19:22 at models in all domains, literally, but also there’s a humanity to you. So it’s an interesting
00:19:31 dance. I don’t know if you can comment on that dance. Sure. It’s definitely true, as you say,
00:19:37 that for many people, if you are accurate in your judgment of, say, for a medical patient,
00:19:43 what’s the chance that this treatment might help? And what’s the cost? And compare those
00:19:50 to each other, and you might say, this looks like a lot of cost for a small medical gain.
00:19:58 And at that point, knowing that fact, that might take the wind out of your sails. You might
00:20:06 not be willing to do the thing that maybe you feel is right anyway, which is still to pay for it.
00:20:13 And then somebody knowing that might want to keep that news from you and not tell you about
00:20:18 the low chance of success or the high cost in order to save you this
00:20:22 tension, this awkward moment where you might fail to do what they and you think is right.
00:20:30 But I think the higher calling, the higher standard to hold you to, which many people
00:20:36 can be held to, is to say, I will look at things accurately, I will know the truth,
00:20:41 and then I will also do the right thing with it. I will be at peace with my judgment about what
00:20:47 the right thing is in terms of the truth. I don’t need to be lied to in order to figure out what the
00:20:52 right thing to do is. And I think if you do think you need to be lied to in order to figure out
00:20:57 what the right thing to do is, you’re at a great disadvantage because then people will be lying
00:21:03 to you, you will be lying to yourself, and you won’t be as effective at achieving whatever good you
00:21:10 were trying to achieve. But getting the data, getting the facts is step one, not the final
00:21:15 step. So I would say having a good model, getting the good data is step one, and it’s a burden.
00:21:24 Because you can’t just use that data to arrive at sort of the easy convenient thing. You have
00:21:33 to really deeply think about what is the right thing. So the dark aspect of data, of models,
00:21:42 is you can use it to excuse away actions that aren’t ethical. You can use data to basically
00:21:50 excuse away anything. But not looking at data lets you excuse yourself to pretend and think
00:21:57 that you’re doing good when you’re not. Exactly. But it is a burden. It doesn’t excuse you from
00:22:03 still being human and deeply thinking about what is right. That very kind of gray area,
00:22:09 that very subjective area, that’s part of the human condition. But let us return for a time
00:22:16 to aliens. So you started to define sort of the model, the parameters of grabbiness.
00:22:26 As we approach grabbiness, what happens? So again, there were three parameters. There’s the
00:22:32 speed at which they expand, there’s the rate at which they appear in time, and that rate has a
00:22:38 constant and a power. So we’ve talked about the history of life on Earth suggests that power is
00:22:42 around 6, but maybe 3 to 12. We can say that constant comes from our current date, sort of
00:22:48 sets the overall rate. And the speed, which is the last parameter, comes from the fact that when we
00:22:54 look in the sky, we don’t see them. So the model predicts very strongly that if they were expanding
00:22:59 slowly, say 1% of the speed of light, our sky would be full of vast spheres that were full
00:23:05 of activity. That is, at a random time when a civilization is first appearing, if it looks out
00:23:11 into its sky, it would see many other grabby alien civilizations in the sky, and they would be much
00:23:15 bigger than the full moon. There’d be huge spheres in the sky, and they would be visibly different.
00:23:20 We don’t see them. Can we pause for a second? Okay. There’s a bunch of hard steps that Earth had to
00:23:26 pass to arrive at this place we are currently, where we’re starting to launch rockets into space.
00:23:33 We’re kind of starting to expand a bit, very slowly. Okay. But this is like the birth. If you
00:23:39 look at the entirety of the history of Earth, we’re now at this precipice of like expansion.
00:23:46 We could, we might not choose to, but if we do, we will do it in the next 10 million years.
00:23:51 10 million. Wow. Time flies when you’re having fun.
00:23:55 10 million is a short time on the cosmological scale. So that is, it might be only a thousand,
00:23:59 but the point is if it’s, even if it’s up to 10 million, that hardly makes any difference to the
00:24:03 model. So I might as well give you 10 million. This, this makes me feel, I was, I was so stressed
00:24:08 about planning what I’m going to do today. And now you got plenty of time, plenty of time.
00:24:13 Uh, just need to be generating some offspring quickly here. Okay. Um, so, and there’s this moment
00:24:23 this 10 million, uh, year gap, uh, or window when we start expanding and you’re saying, okay,
00:24:29 so this is an interesting moment where there’s a bunch of other alien civilizations that might at
00:24:34 some point in the history of the universe arrive at this moment where we are: they passed all the hard steps.
00:24:39 There’s a, there’s a model for how likely it is that that happens. And then they start expanding
00:24:45 and you think of an expansion as almost like a sphere. Right. That’s when you say speed,
00:24:50 we’re talking about the speed of the radius growth. Exactly. Like the surface, how fast the
00:24:55 surface. Okay. And so you’re saying that there is some speed for that expansion, average speed,
00:25:01 and then we can play with that parameter. And if that speed is super slow, then maybe that
00:25:08 explains why we haven’t seen anything. If it’s super fast... The slow would create the puzzle.
00:25:14 That is, slow predicts we would see them, but we don’t see them. So a way to explain that is that
00:25:18 they’re fast. So the idea is if they’re moving really fast, then we don’t see them until they’re
00:25:23 almost here. Okay, this is counterintuitive. All right, hold on a second. So I think this
00:25:28 works best when I say a bunch of dumb things. Okay. And then you elucidate the full complexity
00:25:37 and the beauty of the dumbness. Okay. So there’s these spheres out there in the universe that are
00:25:44 made visible because they’re sort of using a lot of energy. So they’re generating a lot of light
00:25:49 stuff. They’re changing things. They’re changing things. And change would be visible a long way
00:25:55 off. Yes. They would take apart stars, rearrange them, restructure galaxies. They would do all
00:26:00 kinds of big, huge stuff. Okay. If they’re expanding slowly, we would see a lot of them
00:26:08 because the universe is old enough that we would see them. That is, we’re assuming
00:26:13 we’re just typical, you know, maybe at the 50th percentile of them. So like half of them have
00:26:17 appeared so far. The other half will still appear later. And the math of our best estimate is that
00:26:26 they appear roughly once per million galaxies. And we would meet them in roughly a billion years
00:26:33 if we expanded out to meet them. So we’re looking at a grabby aliens model
00:26:37 3D sim. That’s the actual name of the video. By the time we get to 13.8 billion years, the fun
00:26:48 begins. Okay. So this is, we’re watching a three dimensional sphere rotating. I presume that’s the
00:26:56 universe. And then grabby aliens are expanding and filling that universe with all kinds of fun.
00:27:04 Pretty soon it’s all full. It’s full. So that’s how the grabby aliens come in contact. First of all,
00:27:11 with other aliens and then with us humans. The following is a simulation of the grabby aliens
00:27:18 model of alien civilizations. Civilizations are born that expand outwards at constant speed.
00:27:24 A spherical region of space is shown. By the time we get to 13.8 billion years,
00:27:29 this sphere will be about 3000 times as wide as the distance from the Milky Way to Andromeda.
00:27:36 Okay. This is fun.
00:27:38 It’s huge.
00:27:38 Okay. It’s huge. All right. So why don’t we see, we’re one little tiny, tiny, tiny, tiny dot in
00:27:48 that giant, giant sphere. Why don’t we see any of the grabby aliens?
00:27:53 It depends on how fast they expand. So you could see that if they expanded at the speed of light,
00:27:58 you wouldn’t see them until they were here. So like out there, if somebody is destroying the
00:28:03 universe with a vacuum decay, there’s this, there’s this doomsday scenario where somebody
00:28:09 somewhere could change the vacuum of the universe and that would expand at the speed of light and
00:28:14 basically destroy everything it hit. But you’d never see that until it got here because it’s
00:28:18 expanding at the speed of light. If you’re expanding really slow, then you see it from
00:28:21 a long way off. So the fact we don’t see anything in the sky tells us they’re expanding fast,
00:28:26 say over a third the speed of light. And that’s really, really fast. But that’s what you have to
00:28:32 believe if we look out and you don’t see anything. Now you might say, well, how, maybe I just don’t
00:28:37 want to believe this whole model. Why should I believe this whole model at all? And our best
00:28:42 evidence why you should believe this model is our early date. We are right now almost 14 billion
00:28:49 years into the universe on a planet around a star that’s roughly 5 billion years old.
00:28:56 But the average star out there will last roughly 5 trillion years. That is a thousand times longer.
00:29:05 And remember that power law, it says that the chance of advanced life appearing on a planet
00:29:09 goes as the sixth power of the time. So if a planet lasts a thousand times longer,
00:29:14 then the chance of it appearing on that planet, if everything would stay empty at least, is a
00:29:19 thousand to the sixth power, or 10 to the 18, times larger. So there’s an enormous, overwhelming chance that if the universe
00:29:27 would just stay sit and empty and waiting for advanced life to appear, when it would appear
00:29:31 would be way at the end of all these planet lifetimes. That is, on the long-lived planets, near the end
00:29:39 of their lifetimes, trillions of years into the future. But we’re really early compared to that. And
00:29:44 our explanation is at the moment, as you saw in the video, the universe is filling up in roughly a
00:29:49 billion years, it’ll all be full. And at that point, it’s too late for advanced life to show up.
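The earliness arithmetic here is easy to verify with the round numbers used above (a quick check, not from the paper):

```python
# If a typical star lasts ~1000x longer than the ~5 billion years our star
# has existed, and the chance of advanced life goes as time to the ~6th power:
longevity_ratio = 5e12 / 5e9   # ~5 trillion vs ~5 billion years = 1000
n_hard_steps = 6               # middle estimate from earlier in the conversation
print(f"{longevity_ratio ** n_hard_steps:.0e}")  # 1e+18, the "10 to the 18"
```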
00:29:53 So you had to show up now before that deadline. Okay. Can we break that apart a little bit? Okay.
00:29:59 Or linger on some of the things you said. So with the power law, the things we’ve done on earth,
00:30:03 the model you have says that it’s very unlikely, like we’re lucky SOBs. Is that mathematically
00:30:11 correct to say? We’re crazy early. That is when early means like in the history of the universe.
00:30:18 In the history. Okay. So given this model, how do we make sense of that? If we’re super,
00:30:25 can we just be the lucky ones? Well, 10 to the 18 lucky, you know,
00:30:30 how lucky do you feel? So, you know, that’s pretty lucky, right? 10 to the 18 is a billion,
00:30:37 billion. So then if you were just being honest and humble, what does that mean?
00:30:45 It means one of the assumptions that calculated this crazy early must be wrong. That’s what it
00:30:49 means. So the key assumption we suggest is that the universe would stay empty. So most life would
00:30:56 appear like a thousand times later than now if everything
00:31:04 would stay empty, waiting for it to appear. So what is not empty?
00:31:08 So the grabby aliens are filling the universe right now. Roughly at the moment, they filled
00:31:11 half of the universe and they’ve changed it. And when they fill everything, it’s too late for stuff
00:31:16 like us to appear. But wait, hold on a second. Did anyone help us get lucky? If it’s so difficult,
00:31:24 what, how do like, so it’s like cancer, right? There’s all these cells, each of which randomly
00:31:30 does or doesn’t get cancer. And eventually some cell gets cancer and you know, we were one of
00:31:35 those, but hold on a second. Okay. But we got it early. We got it early compared to the prediction
00:31:44 with an assumption that’s wrong. That’s how we do a lot of, you know, theoretical
00:31:48 analysis. You have a model that makes a prediction that’s wrong. Then that helps you reject that
00:31:52 model. Okay. Let’s try to understand exactly where the wrong is. So the assumption is that the
00:31:57 universe is empty, stays empty, stays empty and waits until this advanced life appears in trillions
00:32:04 of years. That is if the universe would just stay empty, if there was just, you know, nobody else
00:32:09 out there, then when you should expect advanced life to appear, if you’re the only one in the
00:32:14 universe, when should you expect to appear? You should expect to appear trillions of years in the
00:32:18 future. I see. Right, right. So this is a very sort of nuanced mathematical assumption. I don’t
00:32:25 think we can intuit it cleanly with words. But if you assume that you just wait, the universe
00:32:33 stays empty and you’re waiting for one life civilization to pop up, then it
00:32:41 should happen very late, much later than now. And if you look at Earth, the way things happen on
00:32:48 Earth, it happened much, much, much, much, much earlier than it was supposed to according to this
00:32:53 model. If you take the initial assumption, therefore you can say, well, the initial assumption of the
00:32:58 universe staying empty is very unlikely. Right. And the other alternative theory is the universe
00:33:04 is filling up and will fill up soon. And so we are typical for the origin data of things that
00:33:10 can appear before the deadline. Before the deadline. Okay, it’s filling up. So why don’t we see anything
00:33:15 if it’s filling up? Because they’re expanding really fast. Close to the speed of light. Exactly.
00:33:20 So we will only see it when it’s here. Almost here. Okay. What are the ways in which we might see
00:33:28 a quickly expanding sphere? This is both exciting and terrifying. It is terrifying. It’s like watching
00:33:34 a truck, like driving at you at 100 miles an hour. And so we would see spheres in the sky,
00:33:41 at least one sphere in the sky, growing very rapidly. And like very rapidly, right? Yes,
00:33:49 very rapidly. So there’s, you know, different definitions of rapid here, because we were just
00:33:54 talking about 10 million years. This would be, you might see it 10 million years in advance, coming.
00:34:00 I mean, you still might have a long warning. Again, the universe is 14 billion years old.
00:34:05 The typical origin times of these things are spread over several billion years. So the chance
00:34:10 of one originating at a, you know, very close to you in time is very low. So they still might take
00:34:16 millions of years from the time you see it, from the time it gets here. You’ll have a million years
00:34:22 of your years to be terrified of this mass sphere coming at you. But coming at you very fast. So if
00:34:27 they’re traveling close to the speed of light, but they’re coming from a long way away. So remember,
00:34:32 the rate at which they appear is one per million galaxies, right? So they’re roughly a hundred
00:34:38 galaxies away. I see. So the Delta between the speed of light and their actual travel speed is
00:34:45 very important, right? So even if they’re going at say half the speed of light, we’ll have a long
00:34:50 time then. Yeah. But what if they’re traveling exactly at a speed of light? Then we see them,
00:34:55 like then we wouldn’t have much warning, but that’s less likely. Well, we can’t exclude it.
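The warning-time logic can be made concrete with a small back-of-the-envelope calculation (my own illustration; the billion-light-year distance is just an assumed round number for the "hundred galaxies away" scale):

```python
# If a grabby civilization originates D light-years away and expands toward us
# at speed v (as a fraction of c), its light arrives after D years but its
# frontier arrives after D/v years; the difference is our warning time.
def warning_time_years(distance_ly: float, v_fraction_of_c: float) -> float:
    return distance_ly / v_fraction_of_c - distance_ly

D = 1e9  # assumed illustrative origin distance in light-years
for v in (0.5, 0.9, 0.99):
    print(f"v = {v}c -> warning of {warning_time_years(D, v):.1e} years")
# v = 0.5c -> 1.0e+09 years; v = 0.9c -> 1.1e+08; v = 0.99c -> 1.0e+07
```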
00:35:00 And they could also be somehow traveling faster than the speed of light.
00:35:04 But I think we can exclude that, because if they could go faster than the speed of light, then
00:35:08 they would just already be everywhere. So in a universe where you can travel faster than the
00:35:13 speed of light, you can go backwards in space time. So any time you appeared anywhere in space
00:35:17 time, you could just fill up everything. Yeah. And so anybody in the future, whoever appeared,
00:35:22 they would have been here by now. Can you exclude the possibility that those kinds of aliens aren’t
00:35:27 already here? Well, we should have a different discussion of that. Okay. So let’s
00:35:33 leave that discussion aside just to linger and understand the grabby
00:35:38 alien expansion, which is beautiful and fascinating. Okay. So there’s these giant expanding
00:35:45 spheres of alien civilizations. Now, when those spheres collide, mathematically,
00:35:59 it’s very likely that we’re not the first collision of grabby alien civilizations,
00:36:07 I suppose is one way to say it. So there’s like the first time the spheres touch each other,
00:36:12 recognize each other. They meet. They recognize each other first before they meet.
00:36:19 They see each other coming. They see each other coming. And then, so there’s a bunch of them.
00:36:23 There’s a combinatorial thing where they start seeing each other coming. And then there’s a
00:36:27 third neighbor. It’s like, what the hell? And then there’s a fourth one. Okay. So what does that,
00:36:31 you think, look like? What lessons from human nature, that’s the only data we have,
00:36:38 well, can you draw? The story of the history of the universe here is what I would call a living
00:36:44 cosmology. So what I’m excited about in part by this model is that it lets us tell a story of
00:36:51 cosmology where there are actors who have agendas. So most ancient peoples, they had cosmologies,
00:36:57 stories they told about where the universe came from and where it’s going and what’s happening
00:37:01 out there. And their stories, they like to have agents and actors, gods or something out there
00:37:04 doing things. And lately our favorite cosmology is dead, kind of boring. We’re the only activity
00:37:12 we know about or see and everything else just looks dead and empty. But this is now telling us,
00:37:17 no, that’s not quite right. At the moment, the universe is filling up and in a few billion years,
00:37:22 it’ll be all full. And from then on, the history of the universe will be the universe full of aliens.
00:37:29 Yeah. So that’s a really good reminder, a really good way to think about cosmology is we’re
00:37:35 surrounded by a vast darkness and we don’t know what’s going on in that darkness until the light
00:37:42 from whatever generates light arrives here. So we kind of, yeah, we look up at the sky,
00:37:48 okay, there’s stars, oh, they’re pretty, but you don’t think about the giant expanding spheres of
00:37:55 aliens because you don’t see them. But now our date, looking at the clock, if you’re clever,
00:38:01 the clock tells you. So I like the analogy with the ancient Greeks. So you might think that an
00:38:06 ancient Greek staring at the universe couldn’t possibly tell how far away the sun was or how
00:38:11 far away the moon is or how big the earth is. All you can see is just big things in the sky,
00:38:16 you can’t tell. But they were clever enough actually to be able to figure out the size of
00:38:19 the earth and the distance to the moon and the sun and the size of the moon and sun. That is,
00:38:24 they could figure those things out actually by being clever enough. And so similarly,
00:38:28 we can actually figure out where are the aliens out there in space time by being clever about the
00:38:32 few things we can see, one of which is our current date. And so now that you have this living
00:38:37 cosmology, we can tell the story that the universe starts out empty and then at some point, things
00:38:43 like us appear very primitive and then some of those stop being quiet and expand. And then for
00:38:49 a few billion years, they expand and then they meet each other. And then for the next hundred
00:38:53 billion years, they commune with each other. That is, the usual models of cosmology say that in
00:38:59 roughly 150 billion years, the expansion of the universe will happen so much that all you’ll have
00:39:06 left is some galaxy clusters that are sort of disconnected from each other. But before then,
00:39:11 for the next hundred billion years, they will interact. There will be this community of all the
00:39:19 grabby alien civilizations and each one of them will hear about and even meet thousands of others.
00:39:24 And we might hope to join them someday and become part of that community. That’s an interesting
00:39:30 thing to aspire to. Yes, interesting is an interesting word. Is the universe of alien
00:39:38 civilizations defined by war as much as or more than war has defined human history?
00:39:47 I would say it’s defined by competition and then the question is how much competition implies war.
00:39:57 So up until recently, competition defined life on Earth. Competition between species and organisms
00:40:07 and among humans, competitions among individuals and communities and that competition often took
00:40:12 the form of war in the last 10,000 years. Many people now are hoping or even expecting to sort
00:40:20 of suppress and end competition in human affairs. They regulate business competition, they prevent
00:40:28 military competition and that’s a future I think a lot of people will like to continue and
00:40:34 strengthen. People will like to have something close to world government or world governance or
00:40:39 at least a world community and they will like to suppress war and many forms of business and
00:40:44 personal competition over the coming centuries. And they may like that so much that they prevent
00:40:51 interstellar colonization which would become the end of that era. That is interstellar colonization
00:40:56 would just return severe competition to human or our descendant affairs and many civilizations may
00:41:03 prefer that and ours may prefer that. But if they choose to allow interstellar colonization,
00:41:08 they will have chosen to allow competition to return with great force. That is, there’s really
00:41:13 not much of a way to centrally govern a rapidly expanding sphere of civilization. And so I think
00:41:20 one of the most solid things we can predict about grabby aliens is they have accepted competition
00:41:26 and they have internal competition and therefore they have the potential for competition when they
00:41:32 meet each other at the borders. But whether that’s military competition is more of an open question.
00:41:37 So military meaning physically destructive, right.
00:41:46 So there’s a lot to say there. So one idea that you kind of proposed is progress might be maximized
00:41:55 through competition, through some kind of healthy competition, some definition of healthy. So like
00:42:03 constructive, not destructive, competition. So like the grabby alien civilizations would
00:42:11 likely be defined by competition because they can expand faster, because competition allows
00:42:17 innovation and sort of the battle of ideas.
00:42:19 The way I would take the logic is to say competition just happens if you can’t coordinate
00:42:26 to stop it and you probably can’t coordinate to stop it in an expanding interstellar wave.
00:42:31 So competition is a fundamental force in the universe.
00:42:37 It has been so far and it would be within an expanding grabby alien civilization. But we today
00:42:44 have the chance, many people think and hope, of greatly controlling and limiting competition
00:42:50 within our civilization for a while. And that’s an interesting choice whether to allow competition
00:42:57 to sort of regain its full force or whether to suppress and manage it.
00:43:02 Well, one of the open questions that has been raised in the past hundred years or less
00:43:13 is whether our desire to lessen the destructive nature of competition or the destructive kind
00:43:20 of competition will be outpaced by the destructive power of our weapons. Sort of if nuclear weapons
00:43:32 and weapons of that kind become more destructive than our desire for peace then all it takes is
00:43:41 one asshole at the party to ruin the party.
00:43:45 It takes one asshole to make a delay, but not that much of a delay on the cosmological
00:43:51 scales we’re talking about. So even a vast nuclear war, if it happened here right now on Earth,
00:43:59 it would not kill all humans and it certainly wouldn’t kill all life.
00:44:05 And so human civilization would return within 100,000 years.
00:44:09 So all the history of atrocities, and if you look at the Black Plague,
00:44:23 which is not a human-caused atrocity, or whatever.
00:44:26 There are a lot of military atrocities in history, absolutely.
00:44:29 In the 20th century. Those challenge us to think about human nature,
00:44:36 but on the cosmic scale of time and space, they do not stop the human spirit, essentially.
00:44:44 Humanity goes on; through all the atrocities, it goes on.
00:44:48 Most likely.
00:44:50 So even a nuclear war isn’t enough to destroy us or to stop our potential from expanding,
00:44:57 but we could institute a regime of global governance that limited competition,
00:45:03 including military and business competition of sorts, and that could prevent our expansion.
00:45:08 Of course, to play devil’s advocate, global governance is centralized power,
00:45:20 power corrupts, and absolute power corrupts absolutely. One of the aspects of competition
00:45:27 that’s been very productive is not letting any one person, any one country, any one center of power
00:45:36 become absolutely powerful, because that’s another lesson: power seems to corrupt.
00:45:43 There’s something about ego in the human mind that seems to be corrupted by power,
00:45:47 so when you say global governance, that terrifies me more than the possibility of war,
00:45:55 because it’s…
00:45:57 I think people will be less terrified than you are right now,
00:46:01 and let me try to paint the picture from their point of view. This isn’t my point of view,
00:46:05 but I think it’s going to be a widely shared point of view.
00:46:07 Yes. This is two devil’s advocates arguing.
00:46:10 Two devils.
00:46:10 Okay. So for the last half century and into the continuing future, we actually have had
00:46:18 a strong elite global community that shares a lot of values and beliefs and has created a lot
00:46:26 of convergence in global policy. So if you look at electromagnetic spectrum or medical experiments
00:46:33 or pandemic policy or nuclear power energy or regulating airplanes or just in a wide range
00:46:40 of area, in fact, the world has very similar regulations and rules everywhere, and it’s not
00:46:46 a coincidence because they are part of a world community where people get together at places
00:46:51 like Davos, et cetera, where world elites want to be respected by other world elites, and they
00:46:59 have a convergence of opinion, and that produces something like global governance,
00:47:05 but without a global center. This is what human mobs or communities have done for a long time,
00:47:11 that is, humans can coordinate together on shared behavior without a center by having
00:47:16 gossip and reputation within a community of elites. And that is what we have been doing and
00:47:22 are likely to do a lot more of. So for example, one of the things that’s happening, say, with the
00:47:27 war in Ukraine is that this world community of elites has decided that they disapprove of the
00:47:33 Russian invasion and they are coordinating to pull resources together from all around the world in
00:47:38 order to oppose it, and they are proud of sharing that opinion, and they feel that
00:47:45 they are morally justified in their stance there. And that’s the kind of event that actually brings
00:47:53 world elite communities together, where they come together and they push a particular policy and
00:47:59 position that they share and that they achieve successes. And the same sort of passion animates
00:48:04 global elites with respect to, say, global warming or global poverty and other sorts of things. And
00:48:09 they are, in fact, making progress on those sorts of things through a shared global community of
00:48:16 elites. And in some sense, they are slowly walking toward global governance, slowly strengthening
00:48:23 various world institutions of governance, but cautiously, carefully watching out for the
00:48:28 possibility of a single power that might corrupt it. I think a lot of people over the coming
00:48:34 centuries will look at that history and like it. It’s an interesting thought. And thank you for
00:48:41 playing that devil’s advocate there. But I think the elites too easily lose touch with the morals,
00:48:52 with the best of human nature, and power corrupts. Sure, but their view is the one that determines
00:48:59 what happens. Their view may still end up there, even if you or I might criticize it from that
00:49:06 point of view. So from a perspective of minimizing human suffering, elites can use topics of the war
00:49:14 in Ukraine and climate change and all of those things to sell an idea to the world. And with
00:49:25 disregard to the amount of suffering their actual actions cause. So like you can tell all
00:49:33 kinds of narratives. That’s the way propaganda works. Hitler really sold the idea that everything
00:49:39 Germany was doing was either that it was the victim defending itself against the cruelty of the world,
00:49:45 or that it was actually trying to bring about a better world. So every power center thinks they’re
00:49:52 doing good. And so this is the positive of competition, of having multiple power centers.
00:50:01 This kind of gathering of elites makes me very, very, very nervous. The dinners, the meetings
00:50:11 and the closed rooms. I don’t know. But remember we talked about separating our cold analysis of
00:50:19 what’s likely or possible from what we prefer. And so this is exactly a time for that.
00:50:24 We might say, I would recommend we don’t go this route of a strong world governance, because
00:50:32 I would say it’ll preclude this possibility of becoming grabby aliens, of filling the next
00:50:37 nearest million galaxies for the next billion years with vast amounts of activity and interest
00:50:43 and value of life out there. That’s the thing we would lose by deciding that we wouldn’t expand,
00:50:50 that we would stay here and keep our comfortable shared governance.
00:50:55 So wait, you think that global governance makes it more likely or less likely that
00:51:06 we expand out into the universe?
00:51:08 Less.
00:51:09 Okay.
00:51:10 This is the key, this is the key point.
00:51:11 Right. Right. So screw the elites.
00:51:16 We want to, wait, do we want to expand?
00:51:19 So again, I want to separate my neutral analysis from my evaluation and say,
00:51:25 first of all, I have an analysis that tells us this is a key choice that we will face and that
00:51:30 it’s a key choice other aliens have faced out there. And it could be that only one in 10 or one in 100
00:51:35 civilizations chooses to expand and the rest of them stay quiet. And that’s how it goes out there.
00:51:40 And we face that choice too. And it’ll happen sometime in the next 10 million years,
00:51:46 maybe the next thousand. But the key thing to notice from our point of view is that
00:51:52 even though you might like our global governance, you might like the fact that we’ve come together,
00:51:56 we no longer have massive wars and we no longer have destructive competition.
00:52:01 And that we could continue that, the cost of continuing that would be to prevent
00:52:06 interstellar colonization. That is once you allow interstellar colonization, then you’ve lost
00:52:11 control of those colonies and whatever they change into, they could come back here and compete with
00:52:16 you back here as a result of having lost control. And I think if people value that global governance
00:52:23 and global community and regulation and all the things it can do enough, they would then
00:52:29 want to prevent interstellar colonization.
00:52:31 I want to have a conversation with those people. I believe that both for humanity,
00:52:37 for the good of humanity, for what I believe is good in humanity and for expansion, exploration,
00:52:44 innovation, distributing the centers of power is very beneficial. So this whole meeting of elites
00:52:51 and I’ve been very fortunate to meet quite a large number of elites. They make me nervous
00:52:59 because it’s easy to lose touch with reality. I’m nervous about that in myself, to make sure that
00:53:10 you never lose touch as you get sort of older, wiser, you know, how you generally get like
00:53:19 disrespectful of kids, kids these days. No, the kids are okay. But I think you should hear
00:53:24 a stronger case for their position. So I’m going to play for the elites. Yes. Well, for the limiting
00:53:32 of expansion and for the regulation of behavior. Okay. Can I linger on that? So you’re saying those
00:53:39 two are connected. So the human civilization and alien civilizations come to a crossroads.
00:53:47 They have to decide, do we want to expand or not? And connected to that, do we want to give a lot
00:53:54 of power to a central elite? Or do we want to distribute the power centers, which is naturally
00:54:03 connected to the expansion? When you expand, you distribute the power. If say over the next thousand
00:54:10 years, we fill up the solar system, right? We go out from earth and we colonize Mars and we change
00:54:15 a lot of things. Within a solar system, still everything is within reach. That is, if there’s
00:54:20 a rebellious colony around Neptune, you can throw rocks at it and smash it and then teach them
00:54:25 discipline. Okay. A central control over the solar system is feasible. But once you let it escape the
00:54:34 solar system, it’s no longer feasible. But if you have a solar system that doesn’t have a central
00:54:38 control, maybe broken into a thousand different political units in the solar system, then if any one
00:54:44 part of that allows interstellar colonization, it happens. That is, interstellar colonization
00:54:50 happens when even one party chooses to do it and is able to do it. And then it’s out there.
00:54:55 So we can just say in a world of competition, if interstellar colonization is possible, it will
00:55:00 happen and then competition will continue. And that will sort of ensure the continuation of
00:55:04 competition into the indefinite future. And competition, we don’t know, but competition
00:55:10 can take violent forms and many forms. And the case I was going to make is that I think one of
00:55:15 the things that most scares people about competition is not just that it creates holocausts and death
00:55:21 on massive scales, it’s that it’s likely to change who we are and what we value.
00:55:28 Yes. So this is the other thing with power. As we grow, as human civilization grows,
00:55:37 becomes multi-planetary, potentially multi-solar-system, how does that change us, do you think?
00:55:43 I think the more you think about it, the more you realize it can change us a lot.
00:55:48 So first of all, this is pretty dark, by the way. Well, it’s just honest.
00:55:53 Right. Well, I’m trying to get there. But I think the first thing you should say,
00:55:55 if you look at history, just human history over the last 10,000 years,
00:55:59 if you really understood what people were like a long time ago, you’d realize they were really
00:56:04 quite different. Ancient cultures created people who were really quite different. Most historical
00:56:09 fiction lies to you about that. It often offers you modern characters in an ancient world.
00:56:14 But if you actually study history, you will see just how different they were and how differently
00:56:19 they thought. And they’ve changed a lot many times, and they’ve changed a lot across time.
00:56:25 So I think the most obvious prediction about the future is, even if you only have the mechanisms
00:56:29 of change we’ve seen in the past, you should still expect a lot of change in the future.
00:56:33 But we have a lot bigger mechanisms for change in the future than we had in the past.
00:56:37 So I have this book called The Age of Em: Work, Love, and Life when Robots Rule the Earth. And
00:56:44 it’s about what happens if brain emulations become possible. So a brain emulation is where you take
00:56:49 an actual human brain, and you scan it in fine spatial and chemical detail to create
00:56:55 a computer simulation of that brain. And then those computer simulations of brains
00:57:00 are basically citizens in a new world. They work, and they vote, and they fall in love,
00:57:04 and they get mad, and they lie to each other. And this is a whole new world. And my book is
00:57:08 about analyzing how that world is different than our world, basically using competition as my key
00:57:14 lever of analysis. That is, if that world remains competitive, then I can figure out how they change
00:57:19 in that world, what they do differently than we do. And it’s very different. And it’s different in
00:57:26 ways that are shocking sometimes to many people and ways some people don’t like. I think it’s an
00:57:32 okay world, but I have to admit, it’s quite different. And that’s just one technology.
00:57:37 If we add dozens more technologies, changes into the future, we should just expect it’s possible
00:57:45 to become very different than who we are. I mean, in the space of all possible minds,
00:57:49 our minds are a particular architecture, a particular structure, a particular set of habits,
00:57:54 and they are only one piece in a vast space of possibilities. The space of possible minds is
00:58:00 really huge. So yeah, let’s linger on the space of possible minds for a moment, just to sort of
00:58:07 humble ourselves. How peculiar our peculiarities are, like the fact that we like a particular kind
00:58:19 of sex, and the fact that we eat food through one hole and poop through another hole. And that seems
00:58:27 to be a fundamental aspect of life, is very important to us. And that life is finite in a
00:58:35 certain kind of way, we have a meat vehicle. So death is very important to us. I wonder which
00:58:41 aspects are fundamental, or would be common throughout human history and also throughout,
00:58:47 sorry, throughout history of life on Earth, and throughout other kinds of lives. Like what is
00:58:53 really useful? You mentioned competition seems to be a one fundamental thing.
00:58:57 I’ve tried to do analysis of where our distant descendants might go in terms of what are robust
00:59:03 features we could predict about our descendants. So again, I have this analysis of sort of the
00:59:08 next generation, so the next era after ours. If you think of human history as having three eras
00:59:13 so far, right? There was the forager era, the farmer era, and the industry era. Then my attempt
00:59:18 in age of M is to analyze the next era after that. And it’s very different, but of course,
00:59:22 there could be more and more eras after that. So analyzing a particular scenario and thinking
00:59:28 it through is one way to try to see how different the future could be, but that doesn’t give you
00:59:32 some sort of sense of what’s typical. But I have tried to analyze what’s typical.
00:59:38 And so I have two predictions I think I can make pretty solidly. One thing is that we know at the
00:59:45 moment that humans discount the future rapidly. So we discount the future in terms of caring
00:59:52 about consequences, roughly a factor of two per generation. And there’s a solid evolutionary
00:59:56 analysis why sexual creatures would do that. Because basically each generation of your descendants
01:00:01 only shares half of your genes, and each generation is one step further away. So we care about our
01:00:06 grandchildren a factor of four less, because they’re two generations later. So this actually
01:00:14 explains typical interest rates in the economy. That is interest rates are greatly influenced by
01:00:19 our discount rates. And we basically discount the future by a factor of two per generation.
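[Note: a back-of-the-envelope sketch of this arithmetic, added for illustration. The 25- and 30-year generation lengths are assumptions, not figures from the conversation.]

```python
# Convert a discount factor of 1/2 per generation into an implied
# annual discount rate, for two assumed generation lengths.
for gen_years in (25, 30):
    annual_factor = 0.5 ** (1 / gen_years)  # per-year discount factor
    annual_rate = 1 - annual_factor         # per-year discount rate
    print(f"{gen_years}-year generations -> ~{annual_rate:.1%} per year")
# Prints roughly 2.7% (25y) and 2.3% (30y), in the same ballpark as
# long-run real interest rates, which is the connection being drawn here.
```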
01:00:25 But that’s a side effect of the way our preferences evolved as sexually selected
01:00:32 creatures. We should expect that in the longer run creatures will evolve who don’t discount the
01:00:37 future. They will care about the long run and they will therefore not neglect the long run.
01:00:43 So for example, for things like global warming or things like that, at the moment, many commenters
01:00:48 are sad that basically ordinary people don’t seem to care much, and market prices don’t seem to
01:00:52 reflect it much, because humans don’t care much
01:00:57 about the long-term future. And futurists find it hard to motivate people and to engage people about
01:01:04 the long-term future because they just don’t care that much. But that’s a side effect of this
01:01:08 particular way that our preferences evolved about the future. And so in the future, they will neglect
01:01:14 the future less. And that’s an interesting thing that we can predict robustly. Eventually,
01:01:19 you know, maybe a few centuries, maybe longer, eventually our descendants will
01:01:24 care about the future. Can you speak to the intuition behind that? Is it
01:01:29 useful to think more about the future? Right. If evolution rewards creatures for having many
01:01:35 descendants, then if you have decisions that influence how many descendants you have,
01:01:40 then that would be good if you made those decisions. But in order to do that, you’ll have to
01:01:44 care about them. You have to care about that future. So to push back, that’s if you’re trying
01:01:49 to maximize the number of descendants. But the nice thing about not caring too much about the
01:01:54 long-term future is you’re more likely to take big risks, or you’re less risk-averse. And it’s possible
01:02:01 that both evolution and just life in the universe rewards the risk takers. Well, we actually have
01:02:11 analysis of the ideal risk preferences too. So there’s a literature on ideal preferences that
01:02:19 evolution should promote. And for example, there’s literature on competing investment funds and what
01:02:24 the managers of those funds should care about in terms of risk, various kinds of risks, and in terms
01:02:29 of discounting. And so managers of investment funds should basically have logarithmic risk preferences in
01:02:38 shared risk, in correlated risk, but be very risk-neutral with respect to uncorrelated risk. So
01:02:47 that’s a feature that’s predicted to happen about individual personal choices in biology and also
01:02:54 for investment funds. So that’s other things. That’s also something we can say about the long
01:02:57 run. What’s correlated and uncorrelated risk? If there’s something that would affect all of your
01:03:03 descendants, then if you take that risk, you might have more descendants, but you might have zero.
01:03:11 And that’s just really bad to have zero descendants. But an uncorrelated risk would be a
01:03:16 risk that some of your descendants would suffer, but others wouldn’t. And then you have a portfolio
01:03:20 of descendants. And so that portfolio ensures you against problems with any one of them.
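[Note: a toy simulation, my own illustration rather than Hanson’s model, of why a lineage should be averse to correlated risk but nearly risk-neutral toward uncorrelated risk. All numbers here are arbitrary assumptions.]

```python
import random

# Each descendant faces a gamble: double with probability 0.6, die otherwise.
# The only difference between the two cases is whether all descendants share
# one coin flip (correlated risk) or flip independently (uncorrelated risk).
random.seed(0)
GENERATIONS, TRIALS, CAP = 50, 300, 1000

def survival_rate(correlated: bool) -> float:
    survived = 0
    for _ in range(TRIALS):
        pop = 100
        for _ in range(GENERATIONS):
            if correlated:
                # One shared flip: everyone doubles or everyone dies.
                pop = pop * 2 if random.random() < 0.6 else 0
            else:
                # Independent flips: losses average out across the portfolio.
                pop = sum(2 for _ in range(pop) if random.random() < 0.6)
            pop = min(pop, CAP)  # cap growth; we only track extinction here
            if pop == 0:
                break
        survived += pop > 0
    return survived / TRIALS

print("correlated risk:  ", survival_rate(True))   # ~0.0: one bad shared flip ends the lineage
print("uncorrelated risk:", survival_rate(False))  # ~1.0: the portfolio insures against any one loss
```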
01:03:26 I like the idea of portfolio descendants. And we’ll talk about portfolios with your idea of
01:03:31 you briefly mentioned, we’ll return there with M, EM, the age of EM, work, love, and life when
01:03:37 robots rule the earth. EM, by the way, is emulated minds. So this is one of the…
01:03:44 M is short for emulations.
01:03:46 M is short for emulations. And it’s kind of an idea of how we might create artificial minds,
01:03:51 artificial copies of minds, or human like intelligences.
01:03:56 I have another dramatic prediction I can make about long term preferences.
01:04:00 Yes.
01:04:01 Which is, at the moment, we reproduce as the result of a hodgepodge of preferences that
01:04:07 aren’t very well integrated, but sort of in our ancestral environment induced us to reproduce.
01:04:12 So we have preferences over being sleepy and hungry and thirsty and wanting to have sex and
01:04:17 wanting excitement, et cetera, right? And so in our ancestral environment, the packages
01:04:23 of preferences that we evolved to have did induce us to have more descendants. That’s why we’re here.
01:04:31 But those packages of preferences are not a robust way to promote having more descendants.
01:04:36 They were tied to our ancestral environment, which is no longer true. So that’s one of the
01:04:40 reasons we are now having a big fertility decline because in our current environment,
01:04:45 our ancestral preferences are not inducing us to have a lot of kids,
01:04:48 which is, from evolution’s point of view, a big mistake.
01:04:52 We can predict that in the longer run, there will arise creatures who
01:04:56 just abstractly know that what they want is more descendants.
01:05:00 That’s a very robust way to have more descendants is to have that as your direct preference.
01:05:05 First of all, your thinking is so clear. I love it. So mathematical. And thank you
01:05:11 for thinking so clear with me and bearing with my interruptions and going on the tangents when we go
01:05:19 there. So you’re just clearly saying that successful long term civilizations will prefer to have
01:05:27 descendants, more descendants.
01:05:30 Not just prefer, consciously and abstractly prefer. That is, it won’t be the indirect
01:05:35 consequence of other preferences. It will just be the thing they know they want.
01:05:39 There’ll be a president in the future that says, we must have more sex.
01:05:44 We must have more descendants and do whatever it takes to do that.
01:05:47 Whatever. We must go to the moon and do the other things. Not because they’re easy,
01:05:52 but because they’re hard. But instead of the moon, let’s have lots of sex. Okay.
01:05:56 But there’s a lot of ways to have descendants, right?
01:05:58 Right. So that’s the whole point. When the world gets more complicated and there are many possible
01:06:03 strategies, it’s having that as your abstract preference that will force you to think through
01:06:07 those possibilities and pick the one that’s most effective.
01:06:09 So just to clarify, descendants doesn’t necessarily mean the narrow definition of
01:06:15 descendants, meaning humans having sex and then having babies.
01:06:18 Exactly.
01:06:18 You can have artificial intelligence systems in whom you instill some capability of cognition
01:06:27 and perhaps even consciousness. You can also create through genetics and biology clones of yourself
01:06:32 or slightly modified clones, thousands of them. So all kinds of descendants. It could be descendants
01:06:41 in the space of ideas too, if somehow we no longer exist in this meat vehicle. It’s now just
01:06:47 like whatever the definition of a life form is, you have descendants of those life forms.
01:06:54 Yes. And they will be thoughtful about that. They will have thought about what counts as a
01:06:58 descendant and that’ll be important to them to have the right concept.
01:07:02 So the they there is very interesting, who the they are.
01:07:05 But the key thing is we’re making predictions that I think are somewhat robust about what
01:07:10 our distant descendants will be like. Another thing I think you would automatically accept is
01:07:14 they will almost entirely be artificial. And I think that would be the obvious prediction
01:07:17 about any aliens we would meet. That is they would long since have given up reproducing
01:07:22 biologically.
01:07:24 Well, what if it’s organic or something? Is it still artificial?
01:07:28 It might be squishy and made out of hydrocarbons, but it would be artificial in the sense of made
01:07:33 in factories with designs on CAD systems, right? Factories with scale economies. So the factories
01:07:37 we have made on earth today have much larger scale economies than the factories in our cells.
01:07:42 The factories in our cells are marvels, but they don’t achieve very many scale
01:07:46 economies. They’re tiny little factories.
01:07:47 But they’re all factories.
01:07:49 Yes.
01:07:49 Factories on top of factories. So everything, the factories that are designed are different
01:07:54 than sort of the factories that have evolved.
01:07:56 Yeah. I think the nature of the word design is very interesting to uncover there. But
01:08:02 let me, in terms of aliens, let me go, let me analyze your Twitter like it’s Shakespeare.
01:08:09 Okay.
01:08:10 There’s a tweet that says, define “hello,” in quotes, alien civilizations as ones that might within the
01:08:16 next million years identify humans as intelligent and civilized, travel to earth and say hello
01:08:24 by making their presence and advanced abilities known to us. The next 15 polls, this is a
01:08:29 Twitter thread, the next 15 polls ask about such hello aliens. And what these polls ask
01:08:35 your Twitter followers is what they think those aliens will be like, certain particular
01:08:42 qualities. So poll number one is what percent of hello aliens evolved from biological species
01:08:49 with two main genders? And you know, the popular vote is above 80%. So most of them have two
01:08:58 genders. What do you think about that? I’ll ask you about some of these because they’re
01:09:00 so interesting. It’s such an interesting question.
01:09:02 It is a fun set of questions.
01:09:03 Yes, it’s a fun set of questions. So the genders as we look through evolutionary history, what’s
01:09:08 the usefulness of that as opposed to having just one or like millions?
01:09:13 So there’s a question in evolution of life on earth, there are very few species that
01:09:18 have more than two genders. There are some, but they aren’t very many. But there’s an
01:09:22 enormous number of species that do have two genders, much more than one. And so there’s
01:09:27 a literature on why did multiple genders evolve, and that’s sort of what’s the point of having
01:09:34 males and females versus hermaphrodites. So most plants are hermaphrodites, that is they
01:09:40 would mate male female, but each plant can be either role. And then most animals have
01:09:47 chosen to split into males and females. And then they’re differentiating the two genders.
01:09:52 And there’s an interesting set of questions about why that happens.
01:09:56 Because you can do selection, you basically have one gender compete for the affection
01:10:03 of the other, and there’s a sexual partnership that creates the offspring. So there’s sexual
01:10:08 selection. Like at a party, it’s nice to have dance partners. And
01:10:14 then each one gets to choose based on certain characteristics. And that’s an efficient
01:10:18 mechanism for adapting to the environment, being successfully adapted to the environment.
01:10:24 It does look like there’s an advantage. If you have males, then the males can take higher
01:10:29 variance. And so there can be stronger selection among the males in terms of weeding out genetic
01:10:34 mutations because the males have a higher variance in their mating success.
01:10:38 Yes. Sure. Okay. Question number two, what percent of hello aliens evolved from land
01:10:44 animals as opposed to plants or ocean slash air organisms? By the way, I did recently
01:10:53 see that only 10% of species on earth are in the ocean. So there’s a lot more variety
01:11:03 on land. There is. It’s interesting. So why is that? I can’t even intuit exactly why that would
01:11:10 be. Maybe survival on land is harder and so you get a lot more. The story that I understand is
01:11:16 it’s about small niches. So many small niches promote speciation into multiple different species.
01:11:23 So in the ocean, species are larger. That is there are more creatures in each species because the
01:11:29 ocean environments don’t vary as much. So if you’re good in one place, you’re good in many
01:11:33 other places. But on land, and especially in rivers, rivers contain an enormous percentage of
01:11:38 the kinds of species on land, you see, because they vary so much from place to place. And so
01:11:46 a species can be good in one place and then other species can’t really compete because they came
01:11:51 from a different place where things are different. So it’s a remarkable fact actually that speciation
01:11:58 promotes evolution in the long run. That is more evolution has happened on land because there have
01:12:03 been more species on land because each species has been smaller. And that’s actually a warning
01:12:08 about something called rot that I’ve thought a lot about, which is one of the problems with
01:12:13 even a world government, which is large systems of software today just consistently rot and decay
01:12:19 with time and have to be replaced. And that plausibly also is a problem for other large
01:12:23 systems, including biological systems, legal systems, regulatory systems. And it seems like
01:12:29 large species actually don’t evolve as effectively as small ones do. And that’s an important thing
01:12:36 to notice about that. And that’s different from ordinary sort of evolution in economies on Earth
01:12:44 in the last few centuries, say. On Earth, most technical evolution and economic growth happens in
01:12:51 larger integrated cities and nations. But in biology, it’s the other way around. More evolution
01:12:56 happened in the fragmented species. Yeah, it’s such a nuanced discussion because you can also
01:13:02 push back in terms of nations and at least companies. It’s like large companies seem to evolve
01:13:08 less effectively. Even though they have more resources, they don’t have better
01:13:17 resilience. And when you look at the scale of decades and centuries, it seems like a lot of
01:13:23 large companies die. But still large economies do better, like large cities grow better than small
01:13:29 cities. Large integrated economies like the United States or the European Union do better than small
01:13:34 fragmented ones. So, yeah, sure. That’s a very interesting, long discussion. But so most of the
01:13:41 people, and obviously votes on Twitter represent the absolute objective truth of things.
01:13:48 But an interesting question about oceans is that, okay, remember I told you about how most
01:13:52 planets would last for trillions of years and come later, right? So people have tried to explain why
01:13:58 life appeared on Earth by saying, oh, all those planets are going to be unqualified for life
01:14:02 because of various problems. That is, they’re around smaller stars, which last longer, and
01:14:06 smaller stars have some things like more solar flares, maybe more tidal locking. But almost
01:14:11 all of these problems with longer lived planets aren’t problems for ocean worlds. And a large
01:14:17 fraction of planets out there are ocean worlds. So if life can appear on an ocean world, then
01:14:23 that pretty much ensures that these planets that last a very long time could have advanced life
01:14:30 because there’s a huge fraction of ocean worlds. So that’s actually an open question.
01:14:34 So when you say, sorry, when you say life appears, you’re kind of saying life and intelligent life.
01:14:41 So that’s an open question. Is land, and that’s I suppose the question behind
01:14:50 the Twitter poll, which is a grabby alien civilization that comes to say hello,
01:14:57 what’s the chance that they first began their early steps, the difficult steps they took on
01:15:04 land? What do you think? 80%, most people on Twitter think it’s very likely on land.
01:15:14 I think people are discounting ocean worlds too much. That is, I think people tend to assume that
01:15:20 whatever we did must be the only way it’s possible. And I think people aren’t giving
01:15:23 enough credit for other possible paths. Dolphins, Waterworld, by the way,
01:15:28 people criticize that movie. I love that movie. Kevin Costner can do me no wrong.
01:15:32 Okay, next question. What percent of hello aliens once had a nuclear war with greater
01:15:39 than 10 nukes fired in anger? So not out of incompetence or as an accident, but
01:15:47 intentional firing of nukes. And less than 20% was the most popular vote.
01:15:54 And that just seems wrong to me.
01:15:56 So like, I wonder what, so most people think once you get nukes, we’re not going to fire them.
01:16:02 They believe in the power.
01:16:04 I think they’re assuming that if you had a nuclear war, then that would just end
01:16:08 civilization for good. I think that’s the thinking.
01:16:10 That’s the main thing.
01:16:11 And I think that’s just wrong. I think you could rise again after a nuclear war.
01:16:15 It might take 10,000 years or 100,000 years, but it could rise again.
01:16:18 So what do you think about mutual assured destruction
01:16:21 as a force to prevent people from firing nuclear weapons? That’s a question that, I mean,
01:16:28 to a terrifying degree, has been raised now with what’s going on.
01:16:31 Well, I mean, clearly it has had an effect. The question is just how strong an effect for how
01:16:36 long. I mean, clearly we have not gone wild with nuclear war and clearly the devastation that you
01:16:43 would get if you initiated a nuclear war is part of the reasons people have been reluctant to start
01:16:47 a war. The question is just how reliably will that ensure the absence of a war?
01:16:52 Yeah. The night is still young.
01:16:54 Exactly.
01:16:54 This has been 70 years or whatever it’s been.
01:16:57 I mean, but what do you think? Do you think we’ll see nuclear war in the century?
01:17:06 I don’t know if in the century, but it’s the sort of thing that’s likely to happen eventually.
01:17:12 That’s a very loose statement. Okay. I understand. Now this is where I pull you out of your
01:17:17 mathematical model and ask a human question. Do you think this particular human question…
01:17:22 I think we’ve been lucky that it hasn’t happened so far.
01:17:24 But what is the nature of nuclear war? Let’s think about this. There’s dictators, there’s democracies,
01:17:36 miscommunication. How do wars start? World War I, World War II.
01:17:40 So the biggest datum here is that we’ve had an enormous decline in major war over the last
01:17:46 century. So that has to be taken into account now. So the problem is war is a process that has a very
01:17:52 long tail. That is, there are rare, very large wars. So the average war is much worse than the
01:18:00 median war because of this long tail. And that makes it hard to identify trends over time. So
01:18:08 the median war has clearly gone way down in the last century, the median rate of war. But it could
01:18:12 be that’s because the tail has gotten thicker. And in fact, the average war is just as bad,
01:18:17 but most of the damage is gonna be in the big wars. So that’s the thing we’re not so sure about.
01:18:21 There’s no strong data on wars which, because of the destructive nature of the weapons,
01:18:31 kill hundreds of millions of people. There’s no data on this.
01:18:35 So, but we can start intuiting.
01:18:37 But we can see that the power law, we can do a power law fit to the rate of wars and it’s a
01:18:42 power law with a thick tail. So it’s one of those things that you should expect most of the damage
01:18:46 to be in the few biggest ones. So that’s also true for pandemics and a few other things. For
01:18:51 pandemics, most of the damage is in the few biggest ones. So the median pandemic is far less than
01:19:02 the average pandemic you should expect in the future. But that fitting of data is very questionable
01:19:02 because everything you said is correct. The question is like, what can we infer about the
01:19:09 future of civilization-threatening pandemics or nuclear war from studying the history of the
01:19:19 20th century? So like, you can’t just fit the data on the rate of wars and their destructive
01:19:25 nature. That’s not how nuclear war will happen. Nuclear war happens with two
01:19:31 assholes or idiots that have access to a button.
01:19:35 Small wars happen that way too.
01:19:36 No, I understand that, but that’s, it’s very important. Small wars aside, it’s very important
01:19:41 to understand the dynamics, the human dynamics and the geopolitics of the way nuclear war happens
01:19:46 in order to predict how we can minimize the chance of a…
01:19:51 But it is a common and useful intellectual strategy to take something that could be really
01:19:56 big but is often very small, fit the distribution to the data on the small things, of which
01:20:01 you have a lot, and then ask, do I believe the big things are really that different? Right?
01:20:05 I see.
01:20:05 So sometimes it’s reasonable to say like, say with tornadoes or even pandemics or something,
01:20:10 the underlying process might not be that different for the big and small ones.
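[Note: a small numerical illustration of the thick-tail point, not from the conversation. The tail exponent ALPHA is an assumed value; the point is that for a power law the mean sits well above the median and a tiny fraction of events carries much of the total damage.]

```python
import random

# Draw "event sizes" from a Pareto (power-law) distribution with a thick tail.
random.seed(0)
ALPHA = 1.5  # assumed tail exponent, purely for illustration

draws = sorted(random.paretovariate(ALPHA) for _ in range(100_000))
median = draws[len(draws) // 2]
mean = sum(draws) / len(draws)
top_share = sum(draws[-100:]) / sum(draws)  # share held by top 0.1% of events

print(f"median event size: {median:.2f}")  # ~1.6
print(f"mean event size:   {mean:.2f}")    # well above the median (~3 here)
print(f"top 0.1% of events carry {top_share:.0%} of all damage")  # on the order of 10%
```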
01:20:14 It might not be. The fact that mutual assured destruction seems to work to some degree
01:20:23 shows you that to some degree it’s different than the small wars.
01:20:31 So a really important question to understand is, are humans capable, one human, like how many
01:20:40 humans on earth, if I give them a button now, say you pressing this button will kill everyone on
01:20:46 earth, everyone, right? How many humans will press that button? I want to know those numbers,
01:20:53 like day to day, minute to minute, how many people have that much irresponsibility, evil,
01:21:01 incompetence, ignorance, whatever word you want to assign, there’s a lot of dynamics of the
01:21:06 psychology that leads you to press that button, but how many? My intuition is the number, the more
01:21:12 destructive that press of a button, the fewer humans you find. And that number gets very close
01:21:17 to zero very quickly, especially among people who have access to such a button, but that’s perhaps
01:21:24 more a hope than a reality. And unfortunately we don’t have good data on this,
01:21:28 which is like how destructive are humans willing to be?
01:21:34 So I think part of this is you just have to ask what time scales you’re looking at,
01:21:39 right? So if you look at the history of war, you know, we’ve had a lot of wars pretty
01:21:44 consistently over many centuries. So if you ask, will we have a nuclear war in the
01:21:50 next 50 years? I might say, well, probably not. If I say 500 or 5,000 years, like if the same sort
01:21:56 of risks are underlying and they just continue, then you have to add that up over time and think
01:22:00 the risk is getting a lot larger the longer a timescale we’re looking at.
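[Note: the “add that up over time” point as arithmetic, added for illustration. The 0.5%-per-year probability is an assumption, not a figure from the conversation.]

```python
# Compound a small constant per-year chance of nuclear war over longer horizons.
P_PER_YEAR = 0.005  # assumed 0.5% per year, purely for illustration

for years in (50, 500, 5000):
    p_at_least_one = 1 - (1 - P_PER_YEAR) ** years
    print(f"{years:>5} years -> {p_at_least_one:.0%} chance of at least one war")
# 50 years -> ~22%, 500 years -> ~92%, 5000 years -> essentially 100%
```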
01:22:04 But okay, let’s generalize nuclear war because what I was more referring to is something that
01:22:09 kills more than 20% of humans on earth and injures or makes the other 80%
01:22:25 suffer horribly, survive, but suffer. That’s what I was referring to. So when you look at 500 years
01:22:32 from now, that might not be nuclear war. That might be something else. That’s that kind of,
01:22:36 has that destructive effect. And I don’t know, these feel like novel questions in the history
01:22:45 of humanity. I just don’t know. I think since nuclear weapons this has been the case, you know,
01:22:52 engineered pandemics, for example, robotics, nanobots. It just seems like a real new
01:23:02 possibility that we have to contend with. We don’t have good models for it, from my perspective.
01:23:08 So if you look on say the last thousand years or 10,000 years, we could say we’ve seen a certain
01:23:13 rate at which people are willing to make big destruction in terms of war. Okay. And if you’re
01:23:19 willing to project that data forward, then I think, if you want to ask over periods of
01:23:23 thousands or tens of thousands of years, you would have a reasonable data set. So the key
01:23:28 question is what’s changed lately? Okay. And so a big question of which I’ve given a lot of thought
01:23:34 to what are the major changes that seem to have happened in culture and human attitudes over the
01:23:39 last few centuries and what’s our best explanation for those so that we can project them forward into
01:23:44 the future. And I have a story about that, which is the story that we have been drifting back toward
01:23:51 forager attitudes in the last few centuries as we get rich. So the idea is we spent a million years
01:23:57 being foragers and that was a very sort of standard lifestyle that we know a lot about.
01:24:04 Foragers sort of live in small bands. They make decisions cooperatively. They share food. They,
01:24:10 you know, they don’t have much property, et cetera. And humans liked that. And then 10,000 years ago,
01:24:16 farming became possible, but it was only possible because we were plastic enough to really change
01:24:21 our culture. Farming styles and cultures are very different. They have slavery, they have war,
01:24:25 they have property, they have inequality, they have kings. They stay in one place instead of
01:24:30 wandering. They don’t have as much diversity of experience or food. They have more disease.
01:24:35 This farming life is just very different. But humans were able to sort of introduce conformity
01:24:41 and religion and all sorts of things to become just a very different kind of creature as farmers.
01:24:45 Farmers are just really different than foragers in terms of their values and their lives.
01:24:49 But the pressures that made foragers into farmers were in part mediated by poverty.
01:24:55 Farmers are poor. And if they deviated from the farming norms that people around them supported,
01:25:00 they were quite at risk of starving to death. And then in the last few centuries,
01:25:05 we’ve gotten rich. And as we’ve gotten rich, the social pressures that turned foragers into farmers
01:25:11 have become less persuasive to us. So, for example, a farming young woman who was told,
01:25:18 if you have a child out of wedlock, you and your child may starve, that was a credible threat.
01:25:22 She would see actual examples around her to make that a believable threat. Today,
01:25:28 if you say to a young woman, you shouldn’t have a child out of wedlock, she will see other young
01:25:31 women around her doing okay that way. We’re all rich enough to be able to afford that sort of a
01:25:36 thing. And therefore, she’s more inclined often to go with her inclinations, her sort of more
01:25:42 natural inclinations about such things rather than to be pressured to follow the official
01:25:47 farming norms that say you shouldn’t do that sort of thing. And all through our lives, we have been
01:25:51 drifting back toward forager attitudes because we’ve been getting rich. And so, aside from at
01:25:57 work, which is an exception, but elsewhere, I think this explains trends toward less slavery,
01:26:04 more democracy, less religion, less fertility, more promiscuity, more travel, more art, more leisure,
01:26:12 fewer work hours. All of these trends are basically explained by becoming more forager like.
01:26:18 And much science fiction celebrates this, Star Trek or the Culture novels, people
01:26:23 like this image that we are moving toward this world. We’re basically like foragers, we’re peaceful,
01:26:27 we share, we make decisions collectively, we have a lot of free time, we are into art.
01:26:34 So forager, you know, forager is a word and it has, it’s a loaded word because it’s connected to
01:26:42 the actual, what life was actually like at that time. As you mentioned, we sometimes don’t do a
01:26:49 good job of telling accurately what life was like back then. But you’re saying if it’s not exactly
01:26:55 like foragers, it rhymes in some fundamental way. You also said peaceful. Is it obvious that a
01:27:01 forager with a nuclear weapon would be peaceful? I don’t know if that’s 100% obvious. So we know,
01:27:10 again, we know a fair bit about what foragers lives were like. The main sort of violence they
01:27:14 had would be sexual jealousy. They were relatively promiscuous and so there’d be a lot of jealousy.
01:27:19 But they did not have organized wars with each other. That is, they were at peace with their
01:27:24 neighboring forager bands. They didn’t have property in land or even in people. They didn’t
01:27:28 really have marriage. And so they were, in fact, peaceful.
01:27:35 When you think about large scale wars, they don’t start large scale wars.
01:27:38 They didn’t have coordinated large scale wars the way chimpanzees do. Chimpanzees do
01:27:42 have wars between one tribe of chimpanzees and others, but human foragers do not. Farmers returned
01:27:47 to that, of course, to the more chimpanzee-like styles. Well, that’s a hopeful message. If we
01:27:52 could return real quick to the Hello Aliens Twitter thread. One of them is really interesting
01:28:00 about language. What percent of Hello Aliens would be able to talk to us in our language?
01:28:05 This is the question of communication. It actually gets to the nature of language.
01:28:10 It also gets to the nature of how advanced you expect them to be.
01:28:16 So I think some people see that we have advanced over the last thousands of years,
01:28:22 and we aren’t reaching any sort of limit. And so they tend to assume it could go on forever.
01:28:28 And I actually tend to think that within, say, 10 million years, we will sort of max out on
01:28:34 technology. We will sort of learn everything that’s feasible to know for the most part. And then
01:28:40 obstacles to understanding would more be about sort of cultural differences, like ways in which
01:28:45 different places had just chosen to do things differently. And so then the question is, is it
01:28:52 even possible to communicate across some cultural distances? And I could imagine some maybe advanced
01:28:59 aliens who just become so weird and different from each other, they can’t communicate with each other.
01:29:03 But we’re probably pretty simple compared to them. So I would think, sure, if they wanted to,
01:29:10 they could communicate with us. So it’s the simplicity of the recipient. I tend to,
01:29:17 just to push back, let’s explore the possibility where that’s not the case. Can we communicate
01:29:23 with ants? I find that this idea that… We’re not very good at communicating in general.
01:29:33 Oh, you’re saying… All right, I see. You’re saying once you get orders of magnitude better
01:29:38 at communicating… Once they had maxed out on all communication technology in general,
01:29:43 and they just understood in general how to communicate with lots of things, and had done
01:29:47 that for millions of years. But you have to be able to… This is so interesting. As somebody
01:29:51 who cares a lot about empathy and imagining how other people feel, communication requires empathy,
01:30:00 meaning you have to truly understand how the other person, the other organism sees the world.
01:30:08 It’s like a four dimensional species talking to a two dimensional species. It’s not as trivial as,
01:30:15 to me at least, as it might at first seem. So let me reverse my position a little,
01:30:20 because I’ll say, well, the hello aliens question really combines two different scenarios
01:30:28 that we’re slipping over. So one scenario would be that the hello aliens would be like grabby
01:30:34 aliens. They would be just fully advanced. They would have been expanding for millions of years.
01:30:38 They would have a very advanced civilization, and then they would finally be arriving here
01:30:43 after a billion years perhaps of expanding, in which case they’re going to be crazy advanced
01:30:47 at some maximal level. But the hello aliens question is about aliens we might meet soon, which might be sort of
01:30:55 UFO aliens, and UFO aliens probably are not grabby aliens. How do you get here if you’re
01:31:02 not a grabby alien? Well, they would have to be able to travel. Oh. But they would not be expansive.
01:31:11 So the road trip doesn’t count as grabby. So we’re talking about expanding the colony,
01:31:17 the comfortable colony. So the question is, if UFOs, some of them are aliens,
01:31:24 what kind of aliens would they be? This is sort of the key question you have to ask in order to
01:31:28 try to interpret that scenario. The key fact we would know is that they are here right now,
01:31:36 but the universe around us is not full of an alien civilization. So that says right off the bat
01:31:43 that they chose not to allow massive expansion of a grabby civilization.
01:31:50 Is it possible that they chose it, but we just don’t see them yet? These are the stragglers,
01:31:56 the journeymen. So the timing coincidence is, it’s almost surely if they are here now,
01:32:02 they are much older than us. They are many millions of years older than us. And so they
01:32:08 could have filled the galaxy in that last millions of years if they had wanted to.
01:32:13 That is, they couldn’t just be right at the edge. Very unlikely. Most likely they would have been
01:32:18 around waiting for us for a long time. They could have come here any time in the last millions of
01:32:22 years. They’ve either been waiting around for this, or they just chose to come
01:32:25 recently. But the timing coincidence, it would be crazy unlikely that they just happen to be able to
01:32:31 get here, say in the last hundred years. They would no doubt have been able to get here far
01:32:36 earlier than that. Again, we don’t know. So this is, like, UFO sightings on earth. We don’t
01:32:41 know if this kind of increase in sightings has anything to do with actual visitations.
01:32:46 I’m just talking about the timing. They arose at some point in space time.
01:32:52 And it’s very unlikely that that was just to the point that they could just barely get here
01:32:56 recently. Almost surely they could have gotten here much earlier. And throughout the stretch
01:33:03 of several billion years that earth existed, they could have been here often. Exactly. So
01:33:07 they could have therefore filled the galaxy a long time ago if they had wanted to. Let’s push back
01:33:12 on that. The question to me is, isn’t it possible that the expansion of a civilization is much
01:33:20 harder than the travel? The sphere of the reachable is different than the sphere of the colonized.
01:33:31 So isn’t it possible that the sphere of places where like the stragglers go, the different
01:33:38 people that journey out, the explorers, is much, much larger and grows much faster than the
01:33:44 civilization? So in which case, like they would visit us. There’s a lot of visitors, the grad
01:33:51 students of the civilization. They’re like exploring, they’re collecting the data, but
01:33:56 we’re not yet going to see them. And by yet, I mean across millions of years.
01:34:01 The time delay between when the first thing might arrive and then when colonists could arrive en
01:34:10 masse and do a massive amount of work is cosmologically short. In human history, of course, sure, there
01:34:16 might be a century between that, but a century is just a tiny amount of time on the scales we’re
01:34:22 talking about. So this is, in computer science, ant colony optimization. It’s true for ants.
01:34:28 So it’s like when the first ant shows up, if there’s anything of value,
01:34:33 it’s likely the other ants will follow quickly. Yeah.
01:34:36 Relatively short. It’s also true that traveling over very long distances, probably one of the
01:34:42 main ways to make that feasible is that you land somewhere, you colonize a bit, you create new
01:34:48 resources that can then allow you to go farther. Many short hops as opposed to a giant long journey.
01:34:53 Exactly. Those hops require that you are able to start a colonization of sorts along those hops.
01:34:59 You have to be able to stop somewhere, make it into a way station such that you can then support
01:35:04 you moving farther. So what do you think of, there have been a lot of UFO sightings. What do
01:35:10 you think about those UFO sightings? And if any of them are of extraterrestrial
01:35:19 origin and we don’t see giant civilizations out in the sky, how do you make sense of that?
01:35:27 I want to do some clearing of throats, which people like to do on this topic, right? They want
01:35:33 to make sure you understand they’re saying this and not that, right? So I would say the analysis
01:35:39 needs both a prior and a likelihood. So the prior is what are the scenarios that are at all plausible
01:35:47 in terms of what we know about the universe. And then the likelihood is the particular actual
01:35:52 sightings, like how hard are those to explain through various means. I will establish myself
01:35:58 as somewhat of an expert on the prior. I would say my studies and the things I’ve studied make me an
01:36:04 expert and I should stand up and have an opinion on that and be able to explain it. The likelihood,
01:36:09 however, is not my area of expertise. That is, I’m not a pilot. I don’t do atmospheric studies.
01:36:15 I haven’t studied in detail the various kinds of atmospheric phenomena or
01:36:20 whatever that might be used to explain the particular sightings. I can just say from
01:36:24 my amateur stance, the sightings look damn puzzling. They do not look easy to dismiss.
01:36:30 The attempts I’ve seen to easily dismiss them seem to me to fail. It seems like these are
01:36:35 pretty puzzling, weird stuff that deserve an expert’s attention in terms of considering,
01:36:42 asking what the likelihood is. So an analogy I would make is a murder trial. On average, if we say,
01:36:48 what’s the chance any one person murdered another person as a prior probability, maybe one in a
01:36:52 thousand people get murdered. Maybe each person has a thousand people around them who could
01:36:56 plausibly have done it. So the prior probability of a murder is one in a million. But we allow
01:37:01 murder trials because often evidence is sufficient to overcome a one in a million prior because the
01:37:07 evidence is often strong enough, right? My guess, rough guess for the UFOs as aliens
01:37:13 scenario, at least some of them, is that the prior is roughly one in a thousand,
01:37:17 much higher than the usual murder trial, plenty high enough that strong physical evidence could
01:37:23 put you over the top to think it’s more likely than not. But I’m not an expert on that physical
01:37:28 evidence. I’m going to leave that part to someone else. I’m going to say the prior is pretty high.
01:37:33 This isn’t a crazy scenario. So then I can elaborate on where my prior comes from.
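[Note: a sketch of the odds arithmetic implied by the murder-trial analogy. The priors are Hanson’s rough figures; the likelihood ratios and the odds framing are my illustrative assumptions.]

```python
# Bayes in odds form: posterior odds = prior odds * likelihood ratio.
def posterior(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Murder trial: a one-in-a-million prior can still be overcome, because
# forensic evidence can carry an enormous likelihood ratio.
print(posterior(1e-6, 1e7))  # ~0.91

# UFOs as aliens: with a prior near one in a thousand, evidence only about
# 1000x more likely under the alien hypothesis already makes it more
# likely than not -- a far lower bar.
print(posterior(1e-3, 1e3))  # ~0.50
```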
01:37:38 What scenario could make most sense of this data? My scenario to make sense of it has two main parts.
01:37:46 First is panspermia siblings. So panspermia is the process by which life might have arrived on
01:37:55 earth from elsewhere. And a plausible time for that, I mean, it would have to happen very early
01:38:00 in earth’s history because we see life early in history. And a plausible time could have been
01:38:05 during the stellar nursery where the sun was born with many other stars in the same close proximity
01:38:12 with lots of rocks flying around, able to move things from one place to another.
01:38:18 If a rock with life on it, from some planet with life, came into that stellar nursery,
01:38:24 it plausibly could have seeded many planets in that stellar nursery all at the same time. They’re
01:38:30 all born at the same time in the same place, pretty close to each other, lots of rocks flying
01:38:33 around. So a panspermia scenario would then create siblings, i.e. there would be, say, a few thousand
01:38:42 other planets out there. So after the nursery forms, it drifts, it separates, they drift apart.
01:38:48 And so out there in the galaxy, there would now be a bunch of other stars all formed at the same
01:38:52 time. And we can actually spot them in terms of their spectrum. And they would have then started
01:38:58 on the same path of life as we did with that life being seeded, but they would move at different
01:39:03 rates. And most likely, most of them would never reach an advanced level before the deadline. But
01:39:11 maybe one other did, and maybe it did before us. So if they did, they could know all this,
01:39:18 and they could go searching for their siblings. That is, they could look in the sky for the other
01:39:22 stars whose spectra match the spectrum that came from this nursery.
01:39:26 They could identify their sibling stars in the galaxy, the thousand of them. And those would be
01:39:32 of special interest to them because they would think, well, life might be on those. And they
01:39:38 could go looking for them. Can we just linger on that? Such a brilliant mathematical, philosophical, physical,
01:39:47 biological idea of panspermia siblings, because we all kind of started at a similar time
01:39:53 in this local pocket of the universe. And so that changes a lot of the math.
01:40:02 So that would create this correlation between when advanced life might appear,
01:40:06 no longer just random independent points in space time. There’d be this cluster, perhaps.
01:40:10 And that allows interaction between non grabby alien civilizations, like kind of
01:40:19 primitive alien civilizations, like us with others. And they might be a little bit ahead.
01:40:25 That’s so fascinating.
01:40:26 They would probably be a lot ahead. So the puzzle is, if they happened before us,
01:40:33 they probably happened hundreds of millions of years before us.
01:40:37 But less than a billion.
01:40:38 Less than a billion, but still plenty of time that they could have become grabby and filled
01:40:43 the galaxy and gone beyond. So the fact is, they chose not to become grabby. That would
01:40:49 have to be the interpretation. If we have panspermia siblings…
01:40:52 Plenty of time to become grabby, you said. So they should be gone.
01:40:54 Yes, they had plenty of time and they chose not to.
01:40:58 Are we sure about this? A hundred million years is enough?
01:41:02 So I told you before that I said, within 10 million years, our descendants will become
01:41:07 grabby or not.
01:41:08 And they’ll have that choice. Okay.
01:41:10 Right? And they were clearly more than 10 million years earlier than us, so they chose not to.
01:41:16 But still go on vacation, look around, just not grabby.
01:41:20 If they chose not to expand, that’s going to have to be a rule they set to not allow
01:41:25 any part of themselves to do it. If they let any little ship fly away with the ability
01:41:31 to create a colony, the game’s over. Then the universe becomes grabby from their origin
01:41:38 with this one colony, right? So in order to prevent their civilization being grabby,
01:41:42 they have to have a rule they enforce pretty strongly that no part of them can ever try
01:41:46 to do that.
01:41:46 Through a global authoritarian regime or through something that’s internal to them,
01:41:52 meaning it’s part of the nature of life that it doesn’t want…
01:41:56 As like a political officer in the brain or whatever.
01:41:59 Yes. There’s something in human nature, or like alien nature,
01:42:08 that as you get more advanced, you become lazier and lazier in terms of exploration
01:42:13 and expansion.
01:42:14 So I would say they would have to have enforced a rule against expanding and that rule would
01:42:20 probably make them reluctant to let people leave very far. You know, any one vacation
01:42:25 trip far away could risk an expansion from this vacation trip. So they would probably
01:42:29 have a pretty tight lid on just allowing any travel out from their origin in order to
01:42:34 enforce this rule. But then we also know, well, they would have chosen to come here.
01:42:40 So clearly they made an exception from their general rule to say, okay, but an expedition
01:42:45 to Earth, that should be allowed.
01:42:48 It could be intentional exception or incompetent exception.
01:42:52 But if incompetent, then they couldn’t maintain this over 100 million years, this policy of
01:42:57 not allowing any expansion. So we have to see that they not just
01:43:01 had a policy to try, they succeeded over 100 million years in preventing the expansion.
01:43:07 That’s a substantial competence.
01:43:09 Let me think about this. So you don’t think, in 100 million years,
01:43:14 there could be a barrier, like a technological barrier, to becoming expansionary?
01:43:25 Imagine the Europeans had tried to prevent anybody from leaving Europe to go to the new
01:43:30 world. And imagine what it would have taken to make that happen over 100 million years.
01:43:36 Yeah, it’s impossible.
01:43:37 They would have had to have very strict, you know, guards at the borders saying, no, you
01:43:43 can’t go.
01:43:44 But just to clarify, you’re not suggesting that’s actually possible.
01:43:48 I am suggesting it’s possible.
01:43:51 I don’t know how you keep, in my silly human brain, maybe it’s the brain that values freedom,
01:43:57 but I don’t know how you can keep, no matter how much force, no matter how much censorship
01:44:03 or control or so on, I just don’t know how you can keep people from exploring into the
01:44:10 mysterious, into the unknown.
01:44:11 You’re thinking of people, we’re talking aliens. So remember, there’s a vast space
01:44:14 of different possible social creatures they could have evolved from, different cultures
01:44:18 they could be in, different kinds of threats. I mean, there are many things, as you talked
01:44:22 about, that most of us would feel very reluctant to do.
01:44:25 This isn’t one of those.
01:44:26 Okay, so how, if the UFO sightings represent alien visitors, how the heck are they getting
01:44:33 here under the panspermia siblings scenario?
01:44:36 So panspermia siblings is one part of the scenario, which is that’s where they came
01:44:40 from. And from that, we can conclude they had this rule against expansion and they’ve
01:44:44 successfully enforced that. That also creates a plausible agenda for why they would be here,
01:44:50 that is to enforce that rule on us. That is, if we go out expanding, then we have defeated
01:44:56 the purpose of this rule they set up.
01:44:58 Interesting.
01:44:58 Right? So they would be here to convince us to not expand.
01:45:03 Convince in quotes.
01:45:05 Right? Through various mechanisms. So obviously, one thing we conclude is they didn’t just
01:45:09 destroy us. That would have been completely possible, right? So the fact that they’re
01:45:13 here and we are not destroyed means that they chose not to destroy us. They have some degree
01:45:18 of empathy or whatever their morals are that would make them reluctant to just destroy
01:45:24 us. They would rather persuade us.
01:45:26 Destroy their brethren. And so they may have been, there’s a difference in arrival and
01:45:31 observation. They may have been observing for a very long time.
01:45:34 Exactly.
01:45:35 And they arrive to ensure, not to try, I don’t think try is the right word, that we don’t become
01:45:45 grabby.
01:45:46 Which is, because we can see that they did not expand, they must have enforced a rule against
01:45:50 that, and they are therefore here. That’s a plausible interpretation of why they would
01:45:55 risk this expedition when they clearly don’t risk very many expeditions over this long
01:45:59 period, to allow this one exception: because otherwise, if they don’t, we may become grabby.
01:46:04 And they could have just destroyed us, but they didn’t.
01:46:06 And they’re closely monitoring the technological advancement of our civilization. Like,
01:46:11 nuclear weapons is one thing that, all right, cool. That might have less to do with nuclear
01:46:15 weapons and more with nuclear energy. Maybe they’re monitoring fusion closely. Like how
01:46:21 clever are these apes getting?
01:46:23 So no doubt they have a button that if we get too uppity or risky, they can push the
01:46:28 button and ensure that we don’t expand. But they’d rather do it some other way. So now
01:46:32 that’s, that explains why they’re here and why they aren’t out there. But there’s another
01:46:36 thing that we need to explain. There’s another key datum we need to explain about UFOs if
01:46:40 we’re going to have a hypothesis that explains them. And this is something many people have
01:46:43 noticed, which is they had two extreme options they could have chosen and didn’t choose.
01:46:50 They could have either just remained completely invisible. Clearly an advanced civilization
01:46:54 could have been completely invisible. There’s no reason they need to fly around and be
01:46:58 noticed. They could just be in orbit and in dark satellites that are completely invisible
01:47:02 to us watching whatever they want to watch. That would be well within their abilities.
01:47:06 That’s one thing they could have done. The other thing they could do is just show up
01:47:09 and land on the White House lawn, as they say, and shake hands, like make themselves
01:47:13 really obvious. They could have done either of those and they didn’t do either of those.
01:47:17 That’s the next thing you need to explain about UFOs as aliens. Why would they take
01:47:21 this intermediate approach, hanging out near the edge of visibility with somewhat impressive
01:47:26 mechanisms, but not walking up and introducing themselves nor just being completely invisible?
01:47:30 So, okay, a lot of questions there. So one, do you think it’s obvious where the White
01:47:37 House is or the White House lawn?
01:47:39 Obvious where there are concentrations of humans that you could go up to and introduce yourself.
01:47:42 But is humans the most interesting thing about Earth?
01:47:46 Yeah.
01:47:46 Are you sure about this? Because…
01:47:48 If they’re worried about an expansion, then they would be worried about a civilization
01:47:52 that could be capable of expansion. Obviously humans are the civilization on Earth that’s
01:47:57 by far the closest to being able to expand.
01:47:59 I just don’t know if aliens obviously see…obviously see humans, like the individual
01:48:10 humans, like the meat vehicles, as the center of focus for observing life on a planet.
01:48:19 They’re supposed to be really smart and advanced. Like, this shouldn’t be that hard for them.
01:48:23 But I think we’re actually the dumb ones, because we think humans are the important
01:48:27 things. But it could be our ideas. It could be something about our technologies.
01:48:32 But that’s mediated by us. It’s correlated with us.
01:48:34 No, we make it seem like it’s mediated by us humans. But the focus for alien civilizations
01:48:43 might be the AI systems or the technologies themselves. That might be the organism. Like,
01:48:49 what humans are like…humans are like the food, the source, of the organism that’s under observation,
01:48:57 versus like…
01:48:59 So if what they wanted to have close contact with was something that was closely near humans,
01:49:03 then they would be contacting those. And we would just incidentally see, but we would still see.
01:49:08 But don’t you think that…isn’t it possible, taking their perspective,
01:49:12 isn’t it possible that they would want to interact with some fundamental aspect that
01:49:16 they’re interested in without interfering with it? And that’s actually a very…no
01:49:23 matter how advanced you are, it’s very difficult to do.
01:49:25 But that’s puzzling. So, I mean, the prototypical UFO observation is a shiny,
01:49:33 big object in the sky that has very rapid acceleration and no apparent surfaces for
01:49:41 using air to maneuver at speed. And the question is, why that? Again, if they just…
01:49:50 For example, if they just wanted to talk to our computer systems, they could move some sort of
01:49:55 like a little probe that connects to a wire and reads and sends bits there. They don’t need a
01:50:00 shiny thing flying in the sky.
01:50:02 But don’t you think they would be looking for the right way to communicate, the right
01:50:08 language to communicate? Everything you just said, looking at the computer systems,
01:50:13 I mean, that’s not a trivial thing. Coming up with a signal that us humans would not freak out
01:50:20 too much about, but also understand, might not be that trivial.
01:50:24 Well, so the not freak out part is another interesting constraint. So again, I said,
01:50:28 like the two obvious strategies are just to remain completely invisible and watch,
01:50:31 which would be quite feasible, or to just directly interact, come out and be really
01:50:36 very direct, right? I mean, there’s big things that you can see around. There’s big cities,
01:50:41 there’s aircraft carriers, there’s lots of… If you want to just find a big thing and come
01:50:45 right up to it and like tap it on the shoulder or whatever, that would be quite feasible,
01:50:49 then they’re not doing that. So my hypothesis is that one of the other questions there was,
01:50:57 do they have a status hierarchy? And I think most animals on earth who are social animals
01:51:02 have a status hierarchy, and they would reasonably presume that we have
01:51:07 a status hierarchy. And…
01:51:09 Take me to your leader.
01:51:11 Well, I would say their strategy is to be impressive and sort of get us to see them
01:51:17 at the top of our status hierarchy. That’s how, for example, we domesticate dogs, right?
01:51:25 We convince dogs we’re the leader of their pack, right? And we domesticate many animals that way:
01:51:30 we just swap ourselves into the top of their status hierarchy and we say,
01:51:34 we’re your top status animal, so you should do what we say, you should follow our lead.
01:51:39 So the idea would be, they are going to get us to do what they want by being top status.
01:51:48 You know, all through history, kings and emperors, et cetera, have tried to impress their citizens
01:51:52 and other people by having the bigger palace, the bigger parade, the bigger crown and
01:51:56 diamonds, right? Whatever, maybe building a bigger pyramid, et cetera. It’s a very well
01:52:00 established trend to just be high status by being more impressive than the rest.
01:52:05 To push back: when there’s a power differential of several orders of magnitude,
01:52:11 an asymmetry of power, I feel like that status hierarchy no longer applies. It’s like mimetic
01:52:16 theory. It’s like…
01:52:18 Most emperors are several orders of magnitude more powerful than any one member of their empire.
01:52:22 Let’s increase that by even more. So like if I’m interacting with ants,
01:52:29 I no longer feel like I need to establish my power with ants. I actually want to lower myself
01:52:38 to the ants. I want to become the lowest possible ant so that they would welcome me.
01:52:44 So I’m less concerned about them worshiping me. I’m more concerned about them welcoming me.
01:52:49 It is important that you be nonthreatening and that you be local. So I think
01:52:52 for example, if the aliens had done something really big in the sky, 100 light years away,
01:52:57 that would be there, not here. And that could seem threatening. So I think their strategy to
01:53:02 be the high status would have to be to be visible, but to be here and nonthreatening.
01:53:06 I just don’t know if it’s obvious how to do that. Take your own perspective. You see a planet
01:53:14 with relatively intelligent complex structures being formed, life forms. You could see this
01:53:20 on Titan or something like that, or Europa. You start to see not just primitive bacterial
01:53:29 life, but multicellular life. And it seems to form some very complicated cellular colonies,
01:53:36 structures that are dynamic. There’s a lot of stuff going on. Some gigantic cellular automata
01:53:43 type of construct. How do you make yourself known to them in an impressive fashion
01:53:52 without destroying it? We know how to destroy potentially.
01:53:56 Right. So if you go touch stuff, you’re likely to hurt it, right? There’s a good risk of hurting
01:54:02 something by getting too close and touching it and interacting, right?
01:54:04 Yeah, like landing on a White House lawn.
01:54:06 Right. So the claim is that their current strategy of hanging out at the periphery of
01:54:12 our vision and just being very clearly physically impressive, with clearly impressive physical
01:54:17 abilities, is at least a plausible strategy they might use to impress us and convince us sort of
01:54:25 that they’re at the top of our status hierarchy. And I would say if they came closer, not only would
01:54:30 they risk hurting us in ways that they couldn’t really understand, but more plausibly, they would
01:54:35 reveal things about themselves we would hate. So if you look at how we treat other civilizations
01:54:40 on Earth and other people, we are generally interested in foreigners and people from other
01:54:46 lands. And we are generally interested in their varying customs, et cetera,
01:54:51 until we find out that they do something that violates our moral norms, and then we hate them.
01:54:56 And these are aliens for God’s sakes, right? There’s just going to be something about them
01:55:01 that we hate. They eat babies. Who knows what it is? Something they don’t think is offensive,
01:55:05 but that we might find offensive. And so they would be risking a lot by revealing a lot about
01:55:11 themselves. We would find something we hated. Interesting. But do you resonate at all with
01:55:16 mimetic theory, where we only feel this way about things that are very close to us?
01:55:21 So aliens are sufficiently different to where we’ll be, like, fascinated, terrified or fascinated,
01:55:26 but not… Right, but if they want to be at the top of our status hierarchy to get us to
01:55:30 follow them, they can’t be too distant. They have to be close enough that we would see them that
01:55:35 way. But pretend to be close enough. Right. And not reveal much, that mystery, like the old Clint Eastwood
01:55:41 cowboy. I mean, we’re clever enough that we can figure out their agenda, that is, just from the
01:55:47 fact that they’re here. If we see that they’re here, we can figure out, oh, they want us not to expand,
01:55:51 and look, they are this huge power and they’re very impressive. So, and a lot of us don’t want
01:55:55 to expand. So that could easily tip us over the edge toward we already wanted to not expand. We
01:56:02 already wanted to be able to regulate and have a central community. And here are these very advanced
01:56:07 smart aliens who have survived for a hundred million years and they’re telling us not to expand
01:56:12 either. This is brilliant. I love this so much. So, returning to panspermia siblings,
01:56:21 just to clarify one thing in that framework: who originated it, who planted it?
01:56:31 Would it be a grabby alien civilization that planted the siblings or no? The simple scenario
01:56:36 is that life started on some other planet billions of years ago and it went through part of the
01:56:44 stages of evolution toward advanced life, but not all the way to advanced life. And then some rock hit
01:56:49 it, a piece of it got grabbed onto the rock, and that rock drifted for maybe a million years until
01:56:54 it happened to land in a stellar nursery, where it then seeded many stars. And something about that
01:57:00 life, without being super advanced, was nevertheless resilient to the harsh conditions
01:57:05 of space. There’s some graphs that I’ve been impressed by that show sort of the level of
01:57:10 genetic information in various kinds of life on the history of earth. And basically we are now
01:57:16 more complex than the earlier life, but the earlier life was still pretty complex. And so if
01:57:22 you actually project this log graph in history, it looks like it was many billions of years ago
01:57:27 when you get down to zero. So, plausibly, you could say there was just a lot of evolution that
01:57:31 had to happen before you get to the simplest life we’ve ever seen; the earliest life in the history of life on Earth
01:57:35 was still pretty damn complicated. Okay. And so that raises what’s always been this puzzle: how
01:57:40 could life get to this enormously complicated level in the short period it seems to have at the
01:57:46 beginning of Earth’s history? You know, it’s only 300 million years at most when it
01:57:52 appeared. And then it was really complicated at that point. So panspermia allows you to
01:57:57 explain that complexity by saying, well, it’s been another 5 billion years on another planet,
01:58:03 going through lots of earlier stages where it was working its way up to the level of
01:58:06 complexity you see at the beginning of Earth.
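A rough sketch of the extrapolation described here: fit a line to the log of genome complexity over time and solve for where it crosses zero. The complexity values below are illustrative stand-ins in the spirit of published Sharov-style estimates, not numbers taken from this conversation.

```python
# Sketch of the log-extrapolation argument; complexity values are illustrative
# stand-ins, not real measurements.
import numpy as np

# (billions of years before present, rough "functional genome complexity" in bits)
ages_gya = np.array([3.5, 2.5, 1.5, 0.5, 0.0])
complexity = np.array([5e5, 3e6, 2e7, 1e8, 5e8])

t = -ages_gya              # time axis: 0 = today, negative = past
y = np.log10(complexity)   # growth looks roughly exponential, so work in log space

slope, intercept = np.polyfit(t, y, 1)  # least-squares line through the log points
origin_t = -intercept / slope           # where log-complexity extrapolates to zero
print(f"extrapolated origin of life: ~{-origin_t:.1f} billion years ago")
```

With these stand-in values the intercept lands around ten billion years ago, which is the shape of the puzzle being described: the line hits zero long before Earth existed.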
01:58:12 We’ll try to talk about other ideas of the origin of life, but let me return to UFO sightings. Are there other explanations possible,
01:58:18 outside of panspermia siblings, that can explain no grabby aliens in the sky and yet alien arrival
01:58:26 on Earth? Well, the other categories of explanation that most people would use are, well,
01:58:33 first of all, just mistakes, like, you know, you’re confusing something
01:58:37 ordinary for something mysterious, right? Or some sort of secret organization, like our
01:58:43 government is secretly messing with us and trying to do, you know, a false flag op
01:58:48 or whatever, right? You know, they’re trying to convince the Russians or the Chinese that
01:58:52 there might be aliens and scare them into not attacking or something, right? Because,
01:58:56 if you know the history of World War II, say, the US government did all these big
01:59:00 fake operations where they were faking a lot of big things in order to mess with people.
01:59:05 So that’s a possibility. The government has been lying and, you know, faking things and
01:59:09 paying people to lie about what they saw, et cetera. That’s a plausible set of explanations
01:59:16 for the range of sightings seen. And another explanation people offer is some other hidden
01:59:21 organization on earth or some, you know, secret organization somewhere that has much more
01:59:26 advanced capabilities than anybody’s given it credit for, which for some reason it’s been keeping
01:59:30 secret. I mean, they all sound somewhat implausible, but again, we’re looking for maybe,
01:59:35 you know, one in a thousand sort of priors. The question is, could they be
01:59:40 at that level of plausibility? Can we just linger on this? So, first of all, you’ve written,
01:59:47 talked about, thought about so many different topics. You’re an incredible mind. And I just
01:59:54 thank you for sitting down today. I’m almost at a loss for which place to explore,
01:59:59 but let me, on this topic, ask about conspiracy theories, because you’ve written about institutions
02:00:06 and authorities. This is a bit of a therapy session, but what do we make of conspiracy
02:00:18 theories? The phrase itself is pushing you in a direction, right? So clearly in history,
02:00:25 we’ve had many large coordinated keepings of secrets, right? Say the Manhattan project,
02:00:30 right? And there were hundreds of thousands of people working on that over many years,
02:00:34 but they kept it a secret, right? Clearly many large military operations have kept things secret
02:00:39 over, you know, even decades, with many thousands of people involved. So clearly it’s possible to
02:00:47 keep some things secret over time periods. But the more people you involve, the
02:00:53 more time you are assuming, the less centralized the organization, or the less
02:00:59 discipline they have, the harder it gets to believe. But we’re just trying to calibrate,
02:01:02 basically in our minds, which kind of secrets can be kept by which groups over what time periods
02:01:07 for what purposes, right? But I don’t have enough data. So, I’m somebody who, you know,
02:01:14 hangs out with people and loves people. I love all things, really. And I think that most
02:01:22 people, even the assholes, have the capacity to be good, and they’re beautiful, and I enjoy them.
02:01:28 So the kind of data, my brain, whatever the chemistry of my brain is that sees the beautiful
02:01:33 in things is maybe collecting a subset of data that doesn’t allow me to intuit the competence
02:01:42 that humans are able to achieve in constructing a conspiracy theory. So for example, one thing
02:01:50 that people often talk about is like intelligence agencies, this like broad thing. They say the CIA,
02:01:55 the FSB, the British intelligence agencies. I’ve been fortunate or unfortunate enough to never have gotten
02:02:02 the chance, that I know of, to talk to any member of those intelligence agencies, nor, like, take a
02:02:11 peek behind the curtain, or the first curtain. I don’t know how many levels of curtains there are.
02:02:16 And so I can’t intuit it. In my interactions with government, I was funded by DOD and DARPA,
02:02:22 and I’ve interacted, been to the Pentagon. Like, with all due respect to my friends, lovely friends
02:02:31 in government. And there are a lot of incredible people, but there is a very giant bureaucracy
02:02:36 that sometimes suffocates the ingenuity of the human spirit, is one way I can put it. Meaning,
02:02:43 it’s difficult for me to imagine extreme competence at a scale of hundreds or
02:02:50 thousands of human beings. Now, that’s my very anecdotal data of the situation.
02:02:56 And so I try to build up my intuition about centralized systems of government: how much
02:03:05 conspiracy is possible, how much the intelligence agencies or some other source can generate
02:03:14 sufficiently robust propaganda that controls the populace. If you look at World War II, as you
02:03:20 mentioned, there were extremely powerful propaganda machines on the side of
02:03:26 Nazi Germany, on the side of the Soviet Union, on the side of the United States, and all these different
02:03:33 mechanisms. Sometimes they control the free press through social pressures. Sometimes they control
02:03:40 the press through the threat of violence, as you do in authoritarian regimes. Sometimes it’s, like,
02:03:47 the dictator deliberately writing the news, the headlines, and literally announcing them. And
02:03:53 something about human psychology forces you to embrace the narrative and believe the narrative.
02:04:02 And at scale that becomes reality when the initial spark was just the propaganda thought in a single
02:04:09 individual’s mind. So I can’t necessarily intuit of what’s possible, but I’m skeptical of the power
02:04:19 of human institutions to construct conspiracy theories that cause suffering at scale, especially
02:04:26 in this modern age, when information is becoming more and more accessible to the populace. Anyway,
02:04:32 I don’t know if you can elucidate this for us.
02:04:35 You called it suffering at scale, but of course, say during wartime, the people who were managing
02:04:39 the various conspiracies, like D-Day or the Manhattan Project, thought that their conspiracy was
02:04:45 avoiding harm rather than causing harm. So if you can get a lot of people to think that supporting
02:04:49 the conspiracy is helpful, then a lot more might do that. And there’s just a lot of things that
02:04:57 people just don’t want to see. So if you can make your conspiracy the sort of thing that people
02:05:01 wouldn’t want to talk about anyway, even if they knew about it, you’re most of the way there.
02:05:07 So I have learned, over the years, many things that most ordinary people would never want to
02:05:12 hear, many things that most ordinary people should be interested in, but somehow don’t know,
02:05:17 even though the data has been very widespread. So I have this book, The Elephant in the Brain,
02:05:21 and one of the chapters is there on medicine. And basically, most people seem ignorant of the very
02:05:27 basic fact that when we do randomized trials where we give some people more medicine than others,
02:05:32 the people who get more medicine are not healthier. Just overall, in general: you
02:05:38 induce somebody to get more medicine by just giving them more budget to buy medicine, say.
02:05:42 And not a specific medicine, just the whole category. And you would think that would be
02:05:46 something most people should know about medicine. You might even think that would be a conspiracy
02:05:50 theory to think that would be hidden, but in fact, most people never learn that fact.
02:05:55 So just to clarify, just a general high level statement, the more medicine you take,
02:06:02 the less healthy you are.
02:06:04 Randomized experiments don’t find that fact. Do not find that more medicine makes you more healthy.
02:06:10 There’s just no connection. In randomized experiments, there’s no relationship between
02:06:15 more medicine and being healthier.
02:06:16 So it’s not a negative relationship, but it’s just no relationship.
02:06:19 Right.
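A minimal sketch of the kind of randomized comparison being described: randomly assign people more or less subsidized medicine, then compare a health outcome across the two groups. All numbers below are simulated purely to illustrate the method, not real trial data; scipy is assumed available.

```python
# Toy sketch of a randomized-medicine comparison; all data here is simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000
more_medicine = rng.integers(0, 2, size=n).astype(bool)  # random assignment

# Under the "no aggregate effect" claim, health doesn't depend on assignment:
health = rng.normal(loc=70.0, scale=10.0, size=n)

diff = health[more_medicine].mean() - health[~more_medicine].mean()
t_stat, p_value = stats.ttest_ind(health[more_medicine], health[~more_medicine])
print(f"difference in mean health: {diff:+.2f} (p = {p_value:.2f})")
```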
02:06:20 And so the conspiracy theory would say that the businesses that sell you medicine don’t want you
02:06:27 to know that fact. And then you’re saying that there’s also part of this is that people just
02:06:32 don’t want to know.
02:06:33 They just don’t want to know. And so they don’t learn this. So I’ve lived in the Washington area
02:06:38 for several decades now, reading the Washington Post regularly. Every week there was a special
02:06:44 section on health and medicine. This fact was never mentioned in that section of the paper
02:06:48 in all the 20 years I read it.
02:06:50 So do you think there is some truth to this caricatured blue pill, red pill,
02:06:55 where most people don’t want to know the truth?
02:06:58 There are many things about which people don’t want to know certain kinds of truths.
02:07:02 Yeah. That is, bad-looking truths, truths that are discouraging, truths that sort of take away the
02:07:07 justification for things they feel passionate about.
02:07:10 Do you think that’s a bad aspect of human nature? That’s something we should try to overcome?
02:07:16 Well, as we discussed, my first priority is to just tell people about it, to do the analysis
02:07:22 and lay out the cold facts of what’s actually happening, and then to try to be careful about how we can
02:07:26 improve. So our book, The Elephant in the Brain, coauthored with Kevin Simler, is about how we
02:07:30 hide motives in everyday life. And our first priority there is just to explain to you what are
02:07:35 the things that you are not looking at, that you are reluctant to look at. And many people try
02:07:40 to take that book as a self help book where they’re trying to improve themselves and make
02:07:44 sure they look at more things. And that often goes badly because it’s harder to actually do
02:07:49 that than you think. But we at least want you to know that this truth is available if you want
02:07:55 to learn about it.
02:07:56 It’s Nietzsche: if you gaze long into the abyss, the abyss gazes into you. Let’s talk about
02:08:01 this elephant in the brain. Amazing book. The elephant in the room is, quote, an important
02:08:08 issue that people are reluctant to acknowledge or address; a social taboo. The elephant in the brain
02:08:14 is an important but unacknowledged feature of how our mind works; an introspective taboo.
02:08:20 You describe selfishness and self deception as some of the core elephants,
02:08:28 the elephant offspring, in the brain. Selfishness and self deception.
02:08:35 All right.
02:08:36 Can you explain why these are the taboos in our brain that we
02:08:45 don’t want to acknowledge to ourselves?
02:08:46 Your conscious mind, the one that’s listening to me that I’m talking to at the moment, you like
02:08:53 to think of yourself as the president or king of your mind, ruling over all that you see,
02:08:58 issuing commands that are immediately obeyed. You are instead better understood as the press secretary
02:09:06 of your brain. You don’t make decisions. You justify them to an audience. That’s what your
02:09:12 conscious mind is for. You watch what you’re doing and you try to come up with stories that explain
02:09:20 what you’re doing so that you can avoid accusations of violating norms. So humans compared to most
02:09:26 other animals have norms, and this allows us to manage larger groups with our morals and norms
02:09:32 about what we should or shouldn’t be doing. This is so important to us that we needed to be
02:09:38 constantly watching what we were doing in order to make sure we had a good story to avoid norm
02:09:43 violations. So many norms are about motives. So if I hit you on purpose, that’s a big violation.
02:09:48 If I hit you accidentally, that’s okay. I need to be able to explain why it was an accident
02:09:52 and not on purpose.
02:09:54 So where does that need come from for your own self preservation?
02:09:58 Right. So humans have norms and we have the norm that if we see anybody violating a norm,
02:10:03 we need to tell other people and then coordinate to make them stop and punish them for violating.
02:10:09 Such punishments are strong enough and severe enough that we each want to avoid being successfully
02:10:15 accused of violating norms. So for example, hitting someone on purpose is a big clear norm
02:10:21 violation. If we do it consistently, we may be thrown out of the group and that would mean we
02:10:25 would die. Okay. So we need to be able to convince people we are not going around hitting people on
02:10:30 purpose. If somebody happens to be at the other end of our fist and their face connects, that was
02:10:37 an accident and we need to be able to explain that. And similarly for many other norms humans
02:10:43 have, we are serious about these norms and we don’t want people to violate them. If we find them
02:10:48 violating, we’re going to accuse them. But many norms have a motive component. And so we are
02:10:53 trying to explain ourselves and make sure we have a good motive story about everything we do,
02:10:58 which is why we’re constantly trying to explain what we’re doing. And that’s what your conscious
02:11:02 mind is doing. It is trying to make sure you’ve got a good motive story for everything you’re
02:11:07 doing. And that’s why you don’t know why you really do things. What you know is what the good
02:11:12 story is about why you’ve been doing things. And that’s the self deception. And you’re saying that
02:11:17 there is a machine, the actual dictator, that is selfish. And then you’re just the press secretary who
02:11:24 desperately doesn’t want to get fired and is justifying all of the decisions of the dictator.
02:11:29 And that’s the self deception.
02:11:31 Right. Now, most people actually are willing to believe that this is true in the abstract. So
02:11:36 our book has been classified as psychology and it was reviewed by psychologists. And the basic
02:11:41 way that psychology referees and reviewers responded is to say, this is well known. Most
02:11:46 people accept that there’s a fair bit of self deception.
02:11:49 But they don’t want to accept it about themselves.
02:11:51 Well, they don’t want to accept it about the particular topics that we talk about. So people
02:11:55 accept the idea in the abstract that they might be self deceived or that they might not be honest
02:12:00 about various things. But that hasn’t penetrated into the literatures where people are explaining
02:12:05 particular things like why we go to school, why we go to the doctor, why we vote, et cetera. So
02:12:10 our book is mainly about 10 areas of life, explaining in each area what our actual
02:12:16 motives are. And people who study those things have not admitted that hidden motives are
02:12:23 explaining those particular areas.
02:12:25 So they haven’t taken the leap from theoretical psychology to actual public policy.
02:12:30 Exactly.
02:12:30 And economics and all that kind of stuff. Well, let me just linger on this and bring up my old
02:12:38 friends Sigmund Freud and Carl Jung. So how vast is this landscape of the unconscious mind,
02:12:47 the power and the scope of the dictator? Is it only dark there? Is there some light? Is there some
02:12:56 love?
02:12:56 The vast majority of what’s happening in your head, you’re unaware of. So in a literal sense,
02:13:02 the unconscious, the aspects of your mind that you’re not conscious of is the overwhelming
02:13:07 majority. But that’s just true in a literal engineering sense. Your mind is doing lots of
02:13:12 low level things, and you just can’t be consciously aware of all that low level stuff. But there’s
02:13:17 plenty of room there for lots of things you’re not aware of.
02:13:21 But can we try to shine a light at the things we’re unaware of specifically? Now, again,
02:13:26 staying with the philosophical psychology side for a moment, can you shine a light in the Jungian
02:13:32 shadow? What’s going on there? What is this machine like? What level of thoughts are happening
02:13:40 there? Is it something that we can even interpret? If we somehow could visualize it, is it something
02:13:46 that’s human interpretable? Or is it just a kind of chaos of monitoring different systems in the
02:13:51 body, making sure you’re happy, making sure you’re fed all those kind of basic forces that form
02:13:58 abstractions on top of each other, and they’re not introspective at all?
02:14:01 We humans are social creatures. Plausibly being social is the main reason we have these unusually
02:14:06 large brains. Therefore, most of our brain is devoted to being social. And so the things we are
02:14:13 very obsessed with and constantly paying attention to are, how do I look to others? What would others
02:14:19 think of me if they knew these various things they might learn about me?
02:14:23 So that’s close to being fundamental to what it means to be human, is caring what others think.
02:14:28 Right. To be trying to present a story that would be okay for what others think. But we’re
02:14:34 constantly thinking, what do other people think?
02:14:36 So let me ask you this question then about you, Robin Hanson, who in many places, sometimes for
02:14:45 fun, sometimes as a basic statement of principle, likes to disagree with what the majority of people
02:14:52 think. So how do you explain it, how are you deceiving yourself in this task?
02:15:02 Like, why is the dictator manipulating you inside your head to be so critical? Like,
02:15:08 there’s norms. Why do you want to stand out in this way? Why do you want to challenge the
02:15:14 norms in this way?
02:15:15 Almost by definition, I can’t tell you what I’m deceiving myself about. But the more practical
02:15:20 strategy that’s quite feasible is to ask about what are typical things that most people deceive
02:15:25 themselves about, and then to own up to those particular things.
02:15:29 Sure. What’s a good one?
02:15:32 So for example, I can very much acknowledge that I would like to be well thought of,
02:15:38 that I would be seeking attention and glory and praise from my intellectual work, and that that
02:15:47 would be a major agenda driving my intellectual attempts. So if there were topics that other
02:15:55 people would find less interesting, I might be less interested in those for that reason,
02:15:59 for example. I might want to find topics where other people are interested, and I might want to
02:16:05 go for the glory of finding a big insight rather than a small one, and maybe one that was
02:16:13 especially surprising. That’s also, of course, consistent with some more ideal concept of what
02:16:19 an intellectual should be. But most intellectuals are relatively risk averse. They are in some
02:16:27 local intellectual tradition, and they are adding to that, and they are conforming to the
02:16:32 sort of usual assumptions and usual accepted beliefs and practices of a particular area,
02:16:37 so that they can be accepted in that area and treated as part of the community. But you might
02:16:45 think for the purpose of the larger intellectual project of understanding the world better,
02:16:50 people should be less eager to just add a little bit to some tradition, and they should be looking
02:16:55 for what’s neglected between the major traditions and major questions. They should be looking for
02:16:59 assumptions maybe we’re making that are wrong. They should be looking at things that are very
02:17:04 surprising, things that you would have thought a priori unlikely that once you are convinced of it,
02:17:10 you find that to be very important and a big update. So you could say that one motivation
02:17:21 I might have is less motivated to be sort of comfortably accepted into some particular
02:17:26 intellectual community and more willing to just go for these more fundamental long shots that should
02:17:33 be very important if you could find them.
02:17:35 Which, if you can find them, would get you appreciated across a larger number of people,
02:17:45 across the longer time span of history. So, like, maybe the small local community will say,
02:17:52 you suck, you must conform. But the larger community will see the brilliance of you
02:18:00 breaking out of the cage of the small conformity into a larger cage. There’s always a bigger cage
02:18:06 and then you’ll be remembered by more. Yeah. Also, that explains your choice of colorful shirt that
02:18:13 looks great against a black background. So you definitely stand out.
02:18:17 Right. Now, of course, you could say, well, you could get all this attention by making false
02:18:22 claims of dramatic improvement. And then wouldn’t that be much easier than actually working through
02:18:28 all the details to make true claims?
02:18:30 Why not? Let me ask the press secretary. Why not? So of course you spoke several times about how
02:18:37 much you value truth and the pursuit of truth. That’s a very nice narrative. Hitler and Stalin
02:18:43 also talked about the value of truth. Do you worry when you introspect as broadly as all humans
02:18:51 might that it becomes a drug, this being a martyr, being the person who points out that the emperor
02:19:03 wears no clothes, even when the emperor is obviously dressed, just to be the person who points
02:19:11 out that the emperor is wearing no clothes. Do you think about that?
02:19:14 So I think the standards you hold yourself to are dependent on the audience you have in mind.
02:19:23 So if you think of your audience as relatively easily fooled or relatively gullible, then you
02:19:29 won’t bother to generate the more complicated, deep, you know, arguments and structures of evidence
02:19:36 that would persuade somebody who has higher standards, because why bother? You don’t have to worry
02:19:42 about it. You can get away with something much easier. And of course, if you are,
02:19:47 say, a salesperson, you know, you make money on sales, then you don’t need to convince the top few
02:19:53 percent of the most sharp customers. You can just go for the bottom 60 percent of the most gullible
02:19:58 customers and make plenty of sales, right? So I think intellectuals vary. One of the main
02:20:06 ways intellectuals vary is in who the audience in their mind is. Who are they trying to
02:20:09 impress? Is it the people down the hall? Is it the people who are reading their Twitter feed? Is it
02:20:15 their parents? Is it their high school teacher? Or is it Einstein and Freud and Socrates, right?
02:20:24 So I think those of us who are especially arrogant, who especially think that we’re really
02:20:31 big shots or have a chance at being really big shots, we’re naturally going to pick the
02:20:34 biggest-shot audience that we can. We’re going to be trying to impress Socrates and Einstein.
02:20:39 Is that why you hang out with Tyler Cowen a lot and try to convince him yourself?
02:20:44 And you might think, you know, from the point of view of just making money or having sex or
02:20:48 other sorts of things, this is misdirected energy, right? Trying to impress the very
02:20:54 most highest quality minds. That’s such a small sample and they can’t do that much for you anyway.
02:20:59 Yeah. So I might well have had more, you know, ordinary success in life,
02:21:04 be more popular, invited to more parties, make more money if I had targeted a lower tier
02:21:11 set of intellectuals with the standards they have. But for some reason I decided early on
02:21:17 that Einstein was my audience or people like him and I was going to impress them.
02:21:23 Yeah. I mean, you pick your set of motivations, you know. Convincing,
02:21:27 impressing Tyler Cowen is not going to help you get laid. Trust me, I tried. All right.
02:21:34 What are some notable sort of effects of the elephant in the brain in everyday life? So you
02:21:43 mentioned when we try to apply that to economics, to public policy. So when we think about medicine,
02:21:50 education, all those kinds of things, what are some things that we’re…
02:21:53 The key thing is medicine is much less useful health wise than you think. So, you know,
02:21:59 if you were focused on your health, you would care a lot less about it. And if you were focused
02:22:04 on other people’s health, you would also care a lot less about it. But if medicine is, as we
02:22:08 suggest, more about showing that you care and letting other people show that they care about you,
02:22:13 then a lot of priority on medicine can make sense. So that was our very earliest discussion
02:22:18 in the podcast. You were talking about what, you know, should you give people a lot of medicine
02:22:22 when it’s not very effective? And then the answer then is, well, if that’s the way that you show
02:22:27 that you care about them and you really want them to know you care, then maybe that’s what
02:22:32 you need to do if you can’t find a cheaper, more effective substitute. So if we actually just pause
02:22:37 on that for a little bit, how do we start to untangle the full set of self deception happening
02:22:44 in the space of medicine? So we have a method that we use in our book that is what I recommend
02:22:49 for people to use in all these sorts of topics. The straightforward method is first, don’t look
02:22:54 at yourself. Look at other people, look at broad patterns of behavior in other people, and then
02:23:00 ask, what are the various theories we could have to explain these patterns of behavior? And then
02:23:05 just do the simple matching, which theory better matches the behavior they have. And the last step
02:23:11 is to assume that’s true of you too. Don’t assume you’re an exception. If you happen to be an
02:23:17 exception, that won’t go so well, but nevertheless, on average, you aren’t very well positioned to
02:23:22 judge if you’re an exception. So look at what other people do, explain what other people do,
02:23:27 and assume that’s you too. But also in the case of medicine, there’s several parties to consider.
02:23:34 So there’s the individual person that’s receiving the medicine. There’s the doctors that are
02:23:38 prescribing the medicine. There’s drug companies that are selling drugs. There are governments that
02:23:45 have regulations, and there are lobbyists. So you can build up a network of categories of humans in this,
02:23:51 and they each play their role. So how do you introspect, sort of analyze, the system at a
02:24:00 system scale versus at the individual scale? So it turns out that in general, it’s usually much
02:24:07 easier to explain producer behavior than consumer behavior. That is, the drug companies or the
02:24:13 doctors have relatively clear incentives to give the customers whatever they want. And similarly, say,
02:24:20 governments in democratic countries have the incentive to give the voters what they want.
02:24:24 So that focuses your attention on the patient and the voter in this equation and saying,
02:24:31 what do they want? They would be driving the rest of the system.
02:24:35 Whatever they want, the other parties are willing to give them in order to get paid. So now we’re
02:24:42 looking for puzzles in patient and voter behavior. What are they choosing? And why do they choose
02:24:48 that? And how much exactly? And then we can explain that potentially again, returning to
02:24:55 the producer, but the producer being incentivized to manipulate the decision making processes of
02:25:00 the voter and the consumer. Now, in almost every industry, producers are in general happy to lie
02:25:07 and exaggerate in order to get more customers. This is true of auto repair as much as human
02:25:11 body repair and medicine. So the differences between these industries can’t be explained
02:25:16 by the willingness of the producers to give customers what they want or to do various things;
02:25:20 we have to, again, go to the customers. Why are customers treating body repair differently
02:25:26 than auto repair? Yeah, and that potentially requires a lot of thinking, a lot of data
02:25:35 collection and potentially looking at historical data too, because things don’t just happen
02:25:39 overnight. Over time, there’s trends. In principle it does, but actually it’s a lot
02:25:43 easier than you might think. I think the biggest limitation is just the willingness
02:25:47 to consider alternative hypotheses. So many of the patterns that you need to rely on are actually
02:25:53 pretty obvious, simple patterns. You just have to notice them and ask yourself, how can I explain
02:25:58 those? Often you don’t need to look at the most subtle, most difficult statistical evidence that
02:26:04 might be out there. The simplest patterns are often enough. All right. So there’s a fundamental
02:26:10 statement about self deception in the book. There’s the application of that, like we just did
02:26:14 in medicine. Can you steel man the argument that many of the foundational ideas in the book are
02:26:22 wrong? Meaning, there’s two claims you just made, one of which is it can be a lot simpler than it looks.
02:26:31 Can you steel man the case that, case by case, it’s always super complicated? Like, it’s
02:26:38 a complex system. It’s very difficult to have a simple model about. It’s very difficult to
02:26:42 introspect. And the other one is that the human brain isn’t just about self deception. That
02:26:50 there’s a lot of motivations at play and we are able to really introspect our own
02:26:57 mind. And what’s on the surface of the conscious mind is actually quite a good representation
02:27:03 of what’s going on in the brain. And you’re not deceiving yourself. You’re able to actually
02:27:07 arrive at, deeply think about, where your mind stands and what you think about the world. And
02:27:13 it’s less about impressing people and more about being a free thinking individual.
02:27:18 So when a child tries to explain why they don’t have their homework assignment, they are sometimes
02:27:26 inclined to say, the dog ate my homework. They almost never say the dragon ate my homework.
02:27:32 The reason is the dragon is a completely implausible explanation. Almost always when we
02:27:38 make excuses for things, we choose things that are at least in some degree plausible. It could
02:27:44 perhaps have happened. That’s an obstacle for any explanation of a hidden motive or a hidden
02:27:51 feature of human behavior. If people are pretending one thing while really doing another,
02:27:57 they’re usually going to pick as a pretense something that’s somewhat plausible. That’s
02:28:02 going to be an obstacle to proving that hypothesis if you are focused on sort of the local data that
02:28:09 a person would typically have if they were challenged. So if you’re just looking at one
02:28:12 kid and his lack of homework, maybe you can’t tell whether his dog ate his homework or not.
02:28:18 If you happen to know he doesn’t have a dog, you might have more confidence. You will need to have
02:28:24 a wider range of evidence than a typical person would when they’re encountering that actual excuse
02:28:29 in order to see past the excuse. That will just be a general feature of it. So if I say,
02:28:36 there’s this usual story about why we go to the doctor, and then there’s this other explanation,
02:28:41 it’ll be true that you’ll have to look at wider data in order to see that because people don’t
02:28:47 usually offer excuses unless in the local context of their excuse, they can get away with it. That
02:28:53 is, it’s hard to tell, right? So in the case of medicine, I have to point you to sort of larger
02:28:58 sets of data. But in many areas of academia, including health economics, the researchers there
02:29:07 also want to support the usual points of view. And so they will have selection effects in their
02:29:13 publications and their analysis whereby they, if they’re getting a result too much contrary to the
02:29:18 usual point of view everybody wants to have, they will file drawer that paper or redo the analysis
02:29:24 until they get an answer that’s more to people’s liking. So that means in the health economics
02:29:29 literature, there are plenty of people who will claim that in fact, we have evidence that medicine
02:29:34 is effective. And when I respond, I will have to point you to our most reliable evidence.
02:29:41 And ask you to consider the possibility that the literature is biased in that when the evidence
02:29:46 isn’t as reliable, when they have more degrees of freedom in order to get the answer they want,
02:29:50 they do tend to get the answer they want. But when we get to the kind of evidence that’s much
02:29:55 harder to mess with, that’s where we will see the truth be more revealed. So with respect to
02:30:01 medicine, we have millions of papers published in medicine over the years, most of which give the
02:30:07 impression that medicine is useful. There’s a small literature on randomized experiments of the
02:30:14 aggregate effects of medicine, where there’s maybe a half dozen or so papers, where it would be
02:30:21 the hardest to hide it, because it’s such a straightforward experiment done in a straightforward
02:30:28 way that it’s hard to manipulate. And that’s where I will point you, to show you that there’s relatively
02:30:39 little correlation between health and medicine. But even then, people could try to save the
02:30:43 phenomenon and say, well, it’s not hidden motives. It’s just ignorance. They could say,
02:30:47 for example, you know, medicine’s complicated. Most people don’t know the literature.
02:30:53 Therefore, they can be excused for ignorance. They are just ignorantly assuming that medicine
02:30:59 is effective. It’s not that they have some other motive that they’re trying to achieve.
02:31:02 And then I will have to do, you know, a conspiracy-theory-style analysis, saying, well,
02:31:07 like, how long has this misperception been going on? How consistently has it happened
02:31:12 around the world and across time? And I would have to say, look, you know, if we’re talking about,
02:31:18 say, a recent new product, like Segway scooters or something, I could say not so many people have
02:31:24 seen them or used them. Maybe they could be confused about their value. If we’re talking
02:31:28 about a product that’s been around for thousands of years, used in roughly the same way all across
02:31:32 the world, and we see the same pattern over and over again, this sort of ignorance mistake just
02:31:38 doesn’t work so well. It also is a question of how much of the self deception is prevalent versus
02:31:47 foundational. Because there’s a kind of implied thing where it’s foundational to human nature
02:31:52 versus just a common pitfall. This is a question I have. So, like, maybe human progress is made by
02:32:01 people who don’t fall into the self deception. It’s a baser aspect of human nature, but then
02:32:08 you escape it easily if you’re motivated.
02:32:12 The motivational hypotheses about the self deceptions are in terms of how it makes you
02:32:17 look to the people around you. Again, the press secretary. So, the story would be, most people
02:32:23 want to look good to the people around them. Therefore, most people present themselves in ways
02:32:28 that help them look good to the people around them. That’s sufficient to say there would be a
02:32:35 lot of it. It doesn’t need to be 100%, right? There’s enough variety in people and in
02:32:40 circumstances that sometimes taking a contrarian strategy can be in the interest of some minority
02:32:44 of the people. So, I might, for example, say that that’s a strategy I’ve taken. I’ve decided that
02:32:52 being contrarian on these things could be winning for me in that there’s a room for a small number
02:32:58 of people like me who have these sort of messages who can then get more attention, even if there’s
02:33:04 not room for most people to do that. And that can be explaining sort of the variety, right?
02:33:11 Similarly, you might say, look, just look at the most obvious things. Most people would like to
02:33:15 look good, right? In the sense of physically, just you look good right now. You’re wearing a nice
02:33:18 suit, you have a haircut, you shaved, right? So… And I cut my own hair, by the way. Okay.
02:33:23 Well, that’s all the more impressive. That’s a counterargument for your claim.
02:33:29 So, clearly, if we look at most people and their physical appearance, clearly, most people are
02:33:33 trying to look somewhat nice, right? They shower, they shave, they comb their hair,
02:33:42 but we certainly see some people around who are not trying to look so nice, right? Is that a
02:33:48 big challenge to the hypothesis that people want to look nice? Not that much, right? We can see,
02:33:48 in those particular people’s context, more particular reasons why they’ve chosen to be
02:33:53 an exception to the more general rule.
02:33:55 So, the general rule does reveal something foundational generally.
02:34:00 Right.
02:34:01 That’s the way things work. Let me ask you, you wrote a blog post
02:34:05 about the accuracy of authorities. Since we’re talking
02:34:10 about this, especially in medicine: just looking around us, especially during this time of the
02:34:17 pandemic, there’s been a growing distrust of authorities, of institutions, even the institution
02:34:24 of science itself. What are the pros and cons of authorities, would you say? So, what’s nice
02:34:33 about authorities? What’s nice about institutions? And what are their pitfalls?
02:34:40 One standard function of authority is as something you can defer to, respectably,
02:34:45 without needing to seem too submissive or ignorant or, you know, gullible. That is,
02:34:56 you know, when you’re asking what should I act on or what beliefs should I act on,
02:35:02 you might be worried if I chose something too contrarian, too weird, too speculative,
02:35:07 that that would make me look bad. So, I would just choose something very conservative.
02:35:13 So, maybe an authority lets you choose something a little less conservative because the authority
02:35:19 is your authorization. The authority will let you do it. And somebody says,
02:35:23 why did you do that thing? And you can say, the authority authorized it. The authority tells me
02:35:28 I should do this. Why aren’t you doing it, right?
02:35:30 So, the authority is often pushing for the conservative?
02:35:34 Well, no, the authority can do more. I mean, so for example, we just think about,
02:35:38 I don’t know, in a pandemic even, right? You could just think, I’ll just stay home and close
02:35:43 all the doors or I’ll just ignore it, right? You could just think of just some very simple
02:35:46 strategy that might be defensible if there were no authorities, right? But authorities might be
02:35:51 able to know more than that. They might be able to like look at some evidence, draw a more context
02:35:57 dependent conclusion, declare it as the authority’s opinion. And then other people might follow that
02:36:01 and that could be better than doing nothing. So, you mentioned WHO, the world’s most
02:36:06 beloved organization. So, this is me speaking in general: WHO and CDC have been kind of,
02:36:16 depending on degrees and details, just not behaving as I would have imagined, in the best
02:36:29 possible evolution of human civilization, authorities should act. They seem to have failed
02:36:35 in some fundamental way in terms of leadership in a difficult time for our society. Can you say what
02:36:42 are the pros and cons of this particular authority? So, again, if there were no authorities whatsoever,
02:36:49 no accepted authorities, then people would sort of have to randomly pick different local
02:36:55 authorities who would conflict with each other. And then they’d be fighting each other about that,
02:36:59 or just not believe anybody and just do some initial default action that you would always do
02:37:03 without responding to context. So, the potential gain of an authority is that they could know more
02:37:09 than just basic ignorance. And if people followed them, they could both be more informed than
02:37:15 ignorance and all doing the same thing. So, they’re each protected from being accused or
02:37:20 complained about. That’s the idea of an authority. That would be the good. What’s the con of that?
02:37:26 Okay. How does that go wrong? So, the con is that if you think of yourself as the authority and
02:37:32 asking what’s my best strategy as an authority, it’s unfortunately not to be maximally informative.
02:37:40 So, you might think the ideal authority would not just tell you more than ignorance, it would tell
02:37:45 you as much as possible. Okay. It would give you as much detail as you could possibly listen to and
02:37:51 manage to assimilate. And it would update that as frequently as possible or as frequently as you
02:37:57 were able to listen and assimilate. And that would be the maximally informative authority. The problem
02:38:03 is there’s a conflict between being an authority or being seen as an authority and being maximally
02:38:10 informative. That was the point of my blog post that you’re pointing out to here. That is, if you
02:38:16 look at it from their point of view, they won’t long remain the perceived authority if they are
02:38:23 incautious about how they use that authority. And one of the ways to be incautious
02:38:31 would be to be too informative. Okay. That’s still in the pro column for me because you’re talking
02:38:37 about the tensions that are very data driven and very honest. And I would hope that authorities
02:38:44 struggle with that. How much information to provide to people to maximize outcomes.
02:38:52 Now I’m generally somebody that believes more information is better because I trust the
02:38:57 intelligence of people. But I’d like to mention a bigger con on authorities, which is the human
02:39:03 question. This comes back to a global government and so on. It’s that, you know, there’s humans that
02:39:11 sit in chairs during meetings in those authorities, and they have different titles.
02:39:16 Humans form hierarchies. And sometimes those titles get to your head a little bit,
02:39:20 and you start to want to think, how do I preserve my control over this authority? As opposed to
02:39:26 thinking through like, what is the mission of the authority? What is the mission of WHO and
02:39:32 the other such organization? And how do I maximize the implementation of that mission? You start to
02:39:37 think, well, I kind of like sitting in this big chair at the head of the table. I’d like to sit
02:39:43 there for another few years or better yet, I want to be remembered as the person who in a time of
02:39:48 crisis was at the head of this authority and did a lot of good things. So you stop trying to do good,
02:39:58 in the sense of what good means given the mission of the authority. And you start to try to carve a
02:40:03 narrative, to manipulate the narrative. First in the meeting room, everybody around you, just a
02:40:09 small little story you tell yourself, the new interns, the managers throughout the whole
02:40:15 hierarchy of the company. Okay, once everybody in the company or in the organization believes this
02:40:20 narrative, now you start to control the release of information, not because you’re trying to
02:40:28 maximize outcomes, but because you’re trying to maximize the effectiveness of the narrative that
02:40:33 you are truly a great representative of this authority in human history. And I just feel like
02:40:40 those human forces whenever you have an authority, it starts getting to people’s heads. One of the
02:40:47 most, me as a scientist, one of the most disappointing things to see during the pandemic
02:40:53 is the use of authority from colleagues of mine to roll their eyes, to dismiss other human beings
02:41:04 just because they got a PhD, just because they’re an assistant, associate, full faculty, just because
02:41:12 they are deputy head of X organization, NIH, whatever the heck the organization is,
02:41:20 just because they got an award of some kind and at a conference they won a best paper award seven
02:41:27 years ago and then somebody shook their hand and gave them a medal, maybe it was a president
02:41:32 and it’s been 20, 30 years that people have been patting them on the back saying how special
02:41:37 they are, especially when they’re controlling money and getting sucked up to by other scientists
02:41:43 who really want the money, in a self deception kind of way; they don’t actually really care
02:41:47 about your performance. And all of that gets to your head, and no longer are you the authority
02:41:52 that’s trying to do good and lessen the suffering in the world; you become an authority that just
02:41:57 wants to self preserve, sitting on a throne of power. So this is core to
02:42:06 sort of what it is to be an economist. I’m a professor of economics. There you go with the
02:42:12 authority again. No, it’s about saying, we often have a situation where we see a world of behavior
02:42:20 and then we see ways in which particular behaviors are not sort of maximally socially useful.
02:42:26 Yes.
02:42:28 And we have a variety of reactions to that. So one kind of reaction is to sort of morally
02:42:34 blame each individual for not doing the maximally socially useful thing under perhaps the idea that
02:42:42 people could be identified and shamed for that and maybe induced into doing the better thing if
02:42:46 only enough people were calling them out on it, right? But another way to think about it is to
02:42:52 think that people sit in institutions with certain stable institutional structures and that
02:42:58 institutions create particular incentives for individuals and that individuals are typically
02:43:04 doing whatever is in their local interest in the context of that institution.
02:43:10 And then perhaps to less blame individuals for winning their local institutional game
02:43:15 and more blaming the world for having the wrong institutions. So economists are often like
02:43:20 wondering what other institutions we could have instead of the ones we have and which of them
02:43:24 might promote better behavior. And this is a common thing we do all across human behavior is
02:43:29 to think of what are the institutions we’re in and what are the alternative variations we could
02:43:33 imagine and then to say which institutions would be most productive. I would agree with you that
02:43:40 our information institutions, that is the institutions by which we collect information
02:43:44 and aggregate it and share it with people are especially broken in the sense of far from the
02:43:51 ideal of what would be the most cost effective way to collect and share information. But then
02:43:56 the challenge is to try to produce better institutions. And as an academic, I’m aware that
02:44:03 academia is particularly broken in the sense that we give people incentives to do research that’s
02:44:09 not very interesting or important because basically they’re being impressive. And we actually care
02:44:15 more about whether academics are impressive than whether they’re interesting or useful.
02:44:20 And I’m happy to go into detail with lots of different known institutions and their known
02:44:25 institutional failings, ways in which those institutions produce incentives that are
02:44:31 mistaken. And that was the point of the post we started with talking about the authorities. If
02:44:34 I need to be seen as an authority, that’s at odds with my being informative and I might choose to be
02:44:42 the authority instead of being informative because that’s my institutional incentives.
02:44:46 And if I may, I’d like to, given that beautiful picture of incentives and individuals that you
02:44:54 just painted, let me just apologize for a couple of things. One, I often put too much blame on
02:45:03 leaders of institutions versus the incentives that govern those institutions. And as a result of that,
02:45:11 I believe I’ve been too critical of Anthony Fauci, too emotional about my criticism of
02:45:20 Anthony Fauci. And I’d like to apologize for that because I think there’s a deep, there’s deeper
02:45:26 truths to think about. There’s deeper incentives to think about. That said, I do sort of, I’m a
02:45:32 romantic creature by nature. I romanticize Winston Churchill. When I think about Nazi Germany,
02:45:42 I think about Hitler more than I do about the individual people of Nazi Germany. You think
02:45:47 about leaders, you think about individuals, not necessarily the parameters, the incentives that
02:45:51 govern the system, because it’s harder. It’s harder to think deeply about the models
02:45:58 from which those individuals arise, but that’s the right thing to do. But also, I don’t apologize
02:46:05 for being emotional sometimes and being…
02:46:07 I’m happy to blame the individual leaders in the sense that, you know, I might say, well,
02:46:12 you should be trying to reform these institutions if you’re just there to like get promoted and look
02:46:17 good at being at the top. But maybe I can blame you for your motives and your priorities in there,
02:46:22 but I can understand why the people at the top would be the people who are selected for having
02:46:26 the priority of primarily trying to get to the top. I get that.
02:46:29 Can I maybe ask you about universities in particular? Like science, they’ve received an
02:46:36 increase in distrust overall as institutions, which breaks my heart, because I think science is
02:46:43 beautiful, maybe not as an institution, but as one of the journeys that
02:46:51 humans have taken on. The other one is university. I think university is actually a place for me,
02:46:58 at least in the way I see it, is a place of freedom of exploring ideas, scientific ideas,
02:47:06 engineering ideas, more than corporate, more than a company, more than a lot of domains in life.
02:47:15 Not just in its ideal but in its implementation, it’s a place where you can
02:47:22 be a kid for your whole life and play with ideas. And with all the criticism that universities
02:47:28 currently receive, I don’t think that criticism is representative
02:47:35 of universities. It focuses on very anecdotal evidence about particular departments, particular
02:47:39 people. But I still feel like there’s a lot of room for freedom of thought, at least at MIT,
02:47:50 at least in the fields I care about, in a particular kind of science, a particular kind
02:47:56 of technical fields, mathematics, computer science, physics, engineering, so robotics,
02:48:02 artificial intelligence. This is a place where you get to be a kid. Yet there is bureaucracy
02:48:12 rising up. There are more rules, more meetings, and more administration
02:48:18 with PowerPoint presentations. To me, you should be more of a renegade
02:48:28 explorer of ideas, and meetings suffocate that radical thought that happens
02:48:34 when you’re an undergraduate student and you can do all kinds of wild things when you’re
02:48:38 a graduate student. Anyway, all that to say, you’ve thought about this aspect too. Is there
02:48:42 something positive, insightful you could say about how we can make for better universities
02:48:50 in the decades to come? This particular institution, how can we improve them?
02:48:54 I hear that centuries ago, many scientists and intellectuals were aristocrats. They had time
02:49:03 and could, if they chose, be intellectuals. That’s a feature of the combination
02:49:12 that they had some source of resources that allowed them leisure and that the kind of
02:49:17 competition they faced among aristocrats allowed that sort of self-indulgence or
02:49:24 self-pursuit, at least at some point in their lives. So the analogous observation is that
02:49:32 university professors often have sort of the freedom and space to do a wide range of things.
02:49:39 And I am certainly enjoying that as a tenured professor.
02:49:42 You’re a really, sorry to interrupt, a really good representative of that.
02:49:46 Just the exploration you’re doing, the depth of thought, like most people are afraid to do the
02:49:52 kind of broad thinking that you’re doing, which is great.
02:49:55 The fact that that can happen is a combination of these two things analogously. One is that
02:50:01 we have fierce competition to become a tenured professor, but then once you become tenured,
02:50:05 we give you the freedom to do what you like. And that’s a happenstance. It didn’t have to
02:50:11 be that way. And in many other walks of life, even though people have a lot of resources,
02:50:16 et cetera, they don’t have that kind of freedom set up. So I think we’re kind of,
02:50:20 I’m kind of lucky that tenure exists and that I’m enjoying it. But I can’t be too enthusiastic
02:50:28 about this unless I can approve of sort of the source of the resources that’s paying for all
02:50:31 this. So for the aristocrat, if you thought they stole it in war or something, you wouldn’t be so
02:50:37 pleased. Whereas if you thought they had earned it or their ancestors had earned this money that
02:50:41 they were spending as an aristocrat, then you could be more okay with that. So for universities,
02:50:47 I have to ask, where are the main sources of resources that are going to the universities and
02:50:52 are they getting their money’s worth? Are they getting a good value for that payment?
02:50:58 So first of all, there are the students. And the question is, are students getting good value
02:51:03 for their education? And each person is getting value in the sense that they are identified and
02:51:10 shown to be a more capable person, which is then worth more salary as an employee later.
02:51:15 But there is a case for saying there’s a big waste to the system because we aren’t actually
02:51:21 changing the students or educating them. We’re more sorting them or labeling them. And that’s
02:51:27 a very expensive process to produce that outcome. And part of the expense is the freedom of tenure,
02:51:33 I guess. So I feel like I can’t be too proud of that because it’s basically a tax on all these
02:51:38 young students to pay this enormous amount of money in order to be labeled as better. Whereas I
02:51:43 feel like we should be able to find cheaper ways of doing that. The other main customer is
02:51:49 research patrons like the government or other foundations. And then the question is,
02:51:54 are they getting their money’s worth out of the money they’re paying for research to happen?
02:51:59 And my analysis is they don’t actually care about the research progress. They are mainly
02:52:05 buying an affiliation with credentialed impressiveness on the part of the researchers.
02:52:09 They mainly pay money to researchers who are impressive and have high, you know,
02:52:13 impressive affiliations. And they don’t really much care what research project happens as a result.
02:52:18 Is that cynical? So there’s a deep truth to that cynical perspective. Is there
02:52:26 a less cynical perspective that they do care about the long term investment into the progress
02:52:32 of science and humanity? Well, they might personally care, but they’re stuck in an equilibrium.
02:52:37 Sure.
02:52:38 Wherein, basically, at most foundations, like governments or, you know,
02:52:43 the Ford Foundation, the individuals there are rated based on the prestige they bring
02:52:50 to that organization. And even if they might personally want to produce more intellectual
02:52:54 progress, they are in a competitive game where they don’t have tenure and they need to produce
02:53:00 this prestige. And so once they give grant money to prestigious people, that is the thing that
02:53:04 shows that they have achieved prestige for the organization. And that’s what they need to do in
02:53:08 order to retain their position. And you do hope that there’s a correlation between prestige and
02:53:14 actual competence. Of course, there is a correlation. The question is just, could we do
02:53:19 this better some other way? I think it’s almost, I think it’s pretty clear we could. What is harder
02:53:25 to do is move the world to a new equilibrium where we do that instead. What are the components
02:53:31 of the better ways to do it? Is it money? So, the sources of money and how the money is
02:53:39 allocated to give individual researchers freedom? Years ago I started studying this topic
02:53:46 exactly because this was my issue and this was many decades ago now. And I spent a long time
02:53:51 and my best guess still is prediction markets, betting markets. So if you as a research
02:53:58 patron want to know the answer to a particular question, like what’s the mass of
02:54:02 the electron neutrino, then what you can do is just subsidize a betting market in that question.
02:54:09 And that will induce more research into answering that question because the people who then
02:54:13 answer that question can then make money in that betting market with the new information they gain.
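A minimal sketch of how such a subsidized betting market could work in code. It uses the logarithmic market scoring rule (LMSR), a market-maker design Hanson himself proposed elsewhere; the class, the outcome bins, and all numbers below are illustrative assumptions, not details from the conversation.

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market maker.

    The patron's subsidy is set by the liquidity parameter b: the market
    maker's worst-case loss is b * ln(number of outcomes), which is the
    price the patron pays to elicit information on the question.
    """
    def __init__(self, outcomes, b):
        self.b = b
        self.q = {o: 0.0 for o in outcomes}  # shares sold per outcome

    def _cost(self):
        return self.b * math.log(sum(math.exp(v / self.b) for v in self.q.values()))

    def price(self, outcome):
        """Current market probability of an outcome."""
        z = sum(math.exp(v / self.b) for v in self.q.values())
        return math.exp(self.q[outcome] / self.b) / z

    def buy(self, outcome, shares):
        """Buy shares of an outcome; returns the trader's cost."""
        before = self._cost()
        self.q[outcome] += shares
        return self._cost() - before

# Hypothetical market on a binned physics quantity: an informed trader
# moves the price, and profits later if the bin they bought pays off.
market = LMSRMarket(["below 0.1 eV", "at or above 0.1 eV"], b=100.0)
print(market.price("below 0.1 eV"))       # 0.5 before any trades
cost = market.buy("below 0.1 eV", 50.0)   # an informed trader buys in
print(round(cost, 2), round(market.price("below 0.1 eV"), 3))
```

A larger subsidy b makes the market deeper, so an informed trader can earn more before the price catches up to their information, which is exactly the research incentive being described.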
02:54:17 So that’s a robust way to induce more information on a topic. If you want to induce an
02:54:22 accomplishment, you can create prizes. And there’s of course a long history of prizes to induce
02:54:28 accomplishments. And we moved away from prizes, even though we once used them far more often than
02:54:35 we do today. And there’s a history to that. And for the customers who want to be affiliated with
02:54:43 impressive academics, which is what most of the customers want, students, journalists, and patrons,
02:54:48 I think there’s a better way of doing that, which I just wrote about in my second most recent blog
02:54:53 post. Can you explain? Sure. What we do today is we take sort of acceptance by other academics
02:54:59 recently as our best indication of their deserved prestige. That is recent publications, recent
02:55:07 job affiliation, institutional affiliations, recent invitations to speak, recent grants.
02:55:13 We are today taking other impressive academics’ recent choices to affiliate with them as our best
02:55:21 guesstimate of their prestige. I would say we could do better by creating betting markets in what the
02:55:28 distant future will judge to have been their deserved prestige looking back on them. I think
02:55:34 most intellectuals, for example, think that if we looked back two centuries, say to intellectuals
02:55:39 from two centuries ago, and tried to look in detail at their research and how it influenced
02:55:45 future research and which path it was on, we could much more accurately judge their actual
02:55:52 deserved prestige. That is who was actually on the right track, who actually helped, which will be
02:55:58 different than what people at the time judged using the immediate indications at the time of
02:56:02 which position they had or which publications they had or things like that. So in this way,
02:56:07 if you think from the perspective of multiple centuries, you would prioritize true
02:56:15 novelty more highly, you would disregard temporal proximity, like how recent the thing is,
02:56:21 and you would ask, what is the brave, the bold, the big, novel idea here,
02:56:27 and you would actually, you would be able to rate that because you could see the path
02:56:31 with which ideas took, which things had dead ends, which led to what other followings. You could,
02:56:36 looking back centuries later, have a much better estimate of who actually had what long term
02:56:41 effects on intellectual progress. So my proposal is we actually pay people in several centuries to
02:56:47 do this historical analysis. And we have prediction markets today where we buy and sell
02:56:52 assets, which will later pay off in terms of those final evaluations. So now we’ll be inducing
02:56:58 people today to make their best estimate of those things by actually looking at the details of
02:57:03 people and setting the prices accordingly. So my proposal would be we rate people today on those
02:57:08 prices today. So instead of looking at their list of publications or affiliations, you look at the
02:57:12 actual price of assets that represent people’s best guess of what the future will say about them.
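As a toy illustration of this second proposal, here is what such a long-horizon prestige asset might look like; the escrow-style settlement, the scoring scale, and every name below are hypothetical assumptions layered on the idea described above, not a worked-out market design.

```python
from dataclasses import dataclass, field

@dataclass
class PrestigeFuture:
    """Toy asset that pays out on a far-future historians' judgment."""
    scholar: str
    holdings: dict = field(default_factory=dict)  # trader -> shares held
    last_price: float = 0.5  # market's current estimate, scaled to [0, 1]

    def trade(self, trader, shares, price):
        # The clearing price doubles as today's prestige rating.
        self.holdings[trader] = self.holdings.get(trader, 0.0) + shares
        self.last_price = price

    def settle(self, historians_score):
        # Centuries later: each share pays the judged score in [0, 1].
        return {t: s * historians_score for t, s in self.holdings.items()}

# Rate researchers by price today; settle when the future judgment arrives.
asset = PrestigeFuture("Dr. Example")
asset.trade("fund_a", shares=100, price=0.72)  # today's rating: 0.72
print(asset.last_price)                        # 0.72
print(asset.settle(historians_score=0.9))      # {'fund_a': 90.0}
```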
02:57:18 That’s brilliant. So this concept of idea futures, can you elaborate what this would entail?
02:57:26 I’ve been elaborating two versions of it here. So one is if there’s a particular question,
02:57:32 say the mass of the electron neutrino, and what you as a patron want to do is get an answer to
02:57:37 that question, then what you would do is subsidize the betting market in that question under the
02:57:42 assumption that eventually we’ll just know the answer and we can pay off the bets that way.
02:57:47 And that is a plausible assumption for many kinds of concrete intellectual questions like what’s the
02:57:51 mass of the electron neutrino. In this hypothetical world that you’re constructing that may be a real
02:57:56 world, do you mean literally financial? Yes. Literal. Very literal. Very cash. Very direct
02:58:05 and literal. Yes. Or crypto. Well, crypto is money. Yes, sure. So the idea would be research labs
02:58:12 would be for profit. They would have as their expense paying researchers to study things and
02:58:17 then their profit would come from using the insights the researchers gain to trade in these
02:58:22 financial markets. Just like hedge funds today make money by paying researchers to study firms
02:58:28 and then making their profits by trading on that insight in the ordinary financial market.
02:58:33 And the market, if it’s efficient, would become better and better at predicting
02:58:40 the powerful ideas that the individual is able to generate. The variance around the mass of the
02:58:44 electron neutrino would decrease with time as we learned that value of that parameter better and
02:58:49 any other parameters that we wanted to estimate. You don’t think those markets would also respond
02:58:53 to recency of prestige and all those kinds of things? They would respond, but the question is
02:59:00 whether they might respond incorrectly. But if you think they’re doing it incorrectly, you have a
02:59:03 profit opportunity where you can go fix it. So we’d be inviting everybody to ask whether they can
02:59:10 find any biases or errors in the current ways in which people are estimating these things from
02:59:14 whatever clues they have. Right. There’s a big incentive for the correction mechanism. In academia
02:59:18 currently, there’s not; it’s the safe choice to go with the prestige. Exactly. And there’s no…
02:59:26 Even if you privately think that the prestige is overrated. Even if you think strongly that
02:59:33 it’s overrated. Still you don’t have an incentive to defy that publicly. You’re going to lose a lot
02:59:38 unless you’re a contrarian that writes brilliant blogs and then you could talk about it in the
02:59:44 podcast. Right. I mean, this was my initial concept of having these betting markets
02:59:49 on these key parameters. And what I then realized over time was that that’s more what people
02:59:53 pretend to care about. What they really mostly care about is just who’s how good. And that’s
02:59:58 what most of the system is built on is trying to rate people and rank them. And so I designed this
03:00:03 other alternative based on historical evaluation centuries later, just about who’s how good,
03:00:08 because that’s what I think most of the customers really care about.
03:00:10 Customers. I like the word customers here. Humans. Right. Well, every major area of life,
03:00:16 which, you know, has specialists who get paid to do that thing must have some customers from
03:00:20 elsewhere who are paying for it. Well, who are the customers for the mass of the neutrino?
03:00:25 Yes. I understand, in a sense: people who are willing to pay. Right. For a thing.
03:00:33 That’s an important thing to understand about anything. Who are the customers? So when I think
03:00:36 and what’s the product, like medicine, education, academia, military, et cetera, that’s part of the
03:00:42 hidden motives analysis. Often people have a thing they say about what the product is and who the
03:00:46 customer is. And maybe you need to dig a little deeper to find out what’s really going on.
03:00:50 Or a lot deeper. You’ve written that you seek out, quote, view quakes. You’re able, as
03:00:59 an intelligent black box word generating machine, to generate a lot of sexy
03:01:03 words. I like it. I love it. View quakes, which are insights that dramatically changed my
03:01:10 worldview, your worldview. You write: I loved science fiction as a child, studied physics and
03:01:17 artificial intelligence for a long time each, and now study economics and political science,
03:01:23 all fields full of such insights. So let me ask, what are some view quakes or a beautiful,
03:01:30 surprising idea to you from each of those fields, physics, AI, economics, political science?
03:01:36 I know it’s a tough question. Something that springs to mind about physics, for example, that’s just beautiful?
03:01:40 that just as beautiful. I mean, right from the beginning, say special relativity was a big
03:01:45 surprise. Uh, you know, most of us have a simple concept of time and it seems perfectly adequate
03:01:51 for everything we’ve ever seen. And to have it explained to you that you need to sort of have a
03:01:55 mixture concept of time and space where you put it into the space time construct, how it looks
03:02:00 different from different perspectives. That was quite a shock. And that was, you know, such a
03:02:06 shock that it makes you think, what else do I know that, you know, isn’t the way it seems.
03:02:11 Quantum mechanics is certainly another enormous shock. From your viewpoint, you know,
03:02:16 you have this idea that there’s a space and then there’s, you know, point particles at points and
03:02:21 maybe fields in between. And, um, quantum mechanics is just a whole different representation. It looks
03:02:28 nothing like what you would have thought as sort of the basic representation of the physical world.
03:02:32 And that was quite a surprise. What would you say is the catalyst for the, for the view quake in
03:02:39 theoretical physics in the 20th century? Where does that come from? So the interesting thing
03:02:43 about Einstein, it seems like a lot of that came from like almost thought experiments. It wasn’t
03:02:47 almost experimentally driven. Um, and with, actually, I don’t know the full story of quantum
03:02:55 mechanics, how much of it is experiment, like where, if you, if you look at the full trace of
03:03:01 idea generation there, uh, of all the weird stuff that falls out of quantum mechanics, how much of
03:03:07 that was the experimentalist? How much was it the theoreticians? But usually in theoretical
03:03:11 physics, the theories lead the way. So maybe can you elucidate what is the
03:03:18 catalyst for these? The remarkable thing about physics and about many other areas of academic
03:03:24 intellectual life is that it just seems way overdetermined. That is, if it hadn’t been for
03:03:31 Einstein or if it hadn’t been for Heisenberg, certainly within a half a century, somebody else
03:03:36 would have come up with essentially the same things. Is that something you believe or is that
03:03:41 something? Yes. So I think when you look at sort of just the history of physics and the history of
03:03:46 other areas, you know, some areas like that, there’s just this enormous convergence:
03:03:51 the different kinds of evidence being collected were so redundant, in the sense that so
03:03:56 many different things revealed the same things that eventually you just kind of have to accept it
03:04:02 because it just gets obvious. So if you look at the details, of course, you know, Einstein did it
03:04:08 before somebody else, and it’s well worth celebrating Einstein for that. And, you know, by
03:04:13 celebrating the particular people who did something first or came across something first,
03:04:17 we are encouraging all the rest to move a little faster, to push us all a little faster,
03:04:25 which is great. But I still think we would have gotten roughly to the same place within a half
03:04:32 century. So sometimes people are special because of how much longer it would have taken. So some
03:04:37 people say general relativity would have taken longer without Einstein than other things. I mean,
03:04:42 Heisenberg quantum mechanics, I mean, there were several different formulations of quantum mechanics
03:04:46 all around the same few years, which means no one of them made that much of a difference. We would have
03:04:51 had pretty much the same thing regardless of which of them did it exactly when. Nevertheless,
03:04:56 I’m happy to celebrate them all. But this is a choice I make in my research. That is, when there’s
03:05:00 an area where there’s lots of people working together, you know, who are sort of scooping each
03:05:05 other and getting a result just before somebody else does, you ask, well, how much of a difference
03:05:10 would I make there? At most, I could make something happen a few months before somebody else. And so
03:05:16 I’m less worried about them missing things. So when I’m trying to help the world, like doing research,
03:05:21 I’m looking for neglected things. I’m looking for things that nobody’s doing. If I didn’t do it,
03:05:25 nobody would do it. Nobody would do it. Or at least for a long time. In the next 10, 20 years,
03:05:28 kind of thing. Right, exactly. Same with general relativity, just, you know, who would do it?
03:05:33 It might take another 10, 20, 30, 50 years. So that’s the place where you can have the
03:05:36 biggest impact is finding the things that nobody would do unless you did them.
03:05:40 And then that’s when you get the big view quake, the insight. So what about artificial
03:05:45 intelligence? Would it be the EMs, the emulated minds? What idea, whether that struck you in the
03:05:56 shower one day or that you just…
03:06:00 Clearly, the biggest view quake in artificial intelligence is the realization of just how
03:06:05 complicated our human minds are. So most people who come to artificial intelligence from other
03:06:11 fields or from relative ignorance, a very common phenomenon, which you must be familiar with,
03:06:17 is that they come up with some concept and then they think that must be it. Once we implement this
03:06:22 new concept, we will have it. We will have full human level or higher artificial intelligence,
03:06:27 right? And they’re just not appreciating just how big the problem is, how long the road is,
03:06:32 just how much is involved, because that’s actually hard to appreciate. When we just think,
03:06:36 it seems really simple. And studying artificial intelligence, going through many particular
03:06:41 problems, looking at each problem, all the different things you need to be able to do
03:06:45 to solve a problem like that, makes you realize all the things your minds are doing that you
03:06:50 are not aware of. That’s that vast subconscious that you’re not aware of. That’s the biggest
03:06:55 view quake from artificial intelligence by far for most people who study artificial intelligence,
03:06:59 is to see just how hard it is. I think that’s a good point. But I think it’s a very early
03:07:07 view quake. It’s when the Dunning-Kruger crashes hard. It’s the first realization that humans are
03:07:16 actually quite incredible. The human mind, the human body is quite incredible. There’s a lot
03:07:20 of different parts to it. But then, see, it’s already been so long
03:07:27 since I experienced that view quake that, for me,
03:07:32 I now experience the view quakes of, holy shit, this little thing is actually quite powerful,
03:07:37 like neural networks. I’m amazed. Because you’ve become almost cynical after that first view quake
03:07:45 of, like, this is so hard. Like, evolution did some incredible work to create the human mind.
03:07:52 But then you realize, just like you have, you’ve talked about a bunch of simple models
03:07:57 that simple things can actually be extremely powerful, that maybe emulating the human mind
03:08:04 is extremely difficult. But you can go a long way with a large neural network. You can go a long way
03:08:09 with a dumb solution. It’s that Stuart Russell thing with the reinforcement learning. Holy crap,
03:08:15 you can go quite a long way with a simple thing. But we still have a very long road to go,
03:08:18 but… I can’t, I refuse to claim to know. The road is full of surprises. So “long” is
03:08:29 an interesting, like you said, with the six hard steps that humans have to take to arrive at where
03:08:34 we are from the origin of life on Earth. So it’s long, maybe, in the statistical improbability of
03:08:42 the steps that have to be taken. But in terms of how quickly those steps could be taken,
03:08:47 I don’t know if my intuition says it’s, if it’s hundreds of years away or if it’s a couple of
03:08:55 years away, I prefer to measure… Pretty confident, at least a decade. And
03:09:00 mildly confident, at least three decades. I can steelman either direction. I prefer to
03:09:05 measure that journey in Elon Musks. That’s a new unit… Well, we don’t get an Elon Musk very often,
03:09:10 so that’s a long timescale. For now, I don’t know, maybe you can clone or multiply or
03:09:16 even understand what an Elon Musk is. What is that? What is… That’s a good question.
03:09:21 Exactly. Well, that’s an excellent question. How does that fit into the model of the three
03:09:26 parameters that are required for becoming a grabby alien civilization? That’s the question of how
03:09:33 much any individual makes in the long path of civilization over time. Yes. And it’s a favorite
03:09:39 topic of historians and people to try to focus on individuals and how much of a difference they
03:09:44 make. And certainly, some individuals make a substantial difference in the modest term,
03:09:49 right? Like, you know, without Hitler being Hitler in the role he took, European history would
03:09:55 have taken a different path for a while there. But if we’re looking over like many centuries
03:10:00 longer term things, most individuals do fade in their individual influence.
03:10:04 So, I mean… Even Einstein. Even Einstein, no matter how sexy your hair is, you will also be
03:10:13 forgotten in the long arc of history. So you said at least 10 years. So let’s talk a little bit about
03:10:20 this AI point of how we achieve it. How hard is the problem of solving intelligence
03:10:28 by engineering artificial intelligence
03:10:35 that achieves human-level, human-like qualities that we associate with intelligence? How hard
03:10:41 is this? What are the different trajectories that take us there? One way to think about it
03:10:46 is in terms of the scope of the technology space you’re talking about. So let’s take the biggest
03:10:52 possible scope, all of human technology, right? The entire human economy. So the entire economy
03:11:00 is composed of many industries, each of which have many products with many different technologies
03:11:04 supporting each one. At that scale, I think we can accept that most innovations are a small
03:11:13 fraction of the total. That is, you usually have relatively gradual overall progress, and
03:11:20 individual innovations that have a substantial effect on that total are rare, and their total effect
03:11:25 is still a small percentage of the total economy. There are very few individual innovations that
03:11:31 made a substantial difference to the whole economy. What are we talking? Steam engine,
03:11:35 shipping containers, a few things. Shipping containers deserves to be up there with steam
03:11:42 engines, honestly. Can you say exactly why shipping containers… Shipping containers
03:11:48 revolutionized shipping. Shipping is very important. But pinning that on shipping containers?
03:11:55 So you’re saying you wouldn’t have some of the magic of the supply chain, all that,
03:11:59 without shipping containers. That made a big difference, absolutely. Interesting. That’s
03:12:02 something to look into. We shouldn’t take that tangent, although I’m tempted to. But anyway,
03:12:08 so there’s a few, just a few innovations. Right. So at the scale of the whole economy, right?
03:12:13 Right. Now, as you move down to a much smaller scale, you will see individual innovations
03:12:19 having a bigger effect, right? So if you look at, I don’t know, lawnmowers or something,
03:12:24 I don’t know about the innovations lawnmower, but there were probably like steps where you
03:12:28 just had a new kind of lawnmower and that made a big difference to mowing lawns because you’re
03:12:34 focusing on a smaller part of the whole technology space, right? And sometimes like military
03:12:41 technology, there’s a lot of military technologies, a lot of small ones, but every once in a while,
03:12:45 a particular military weapon like makes a big difference. But still, even so, mostly overall,
03:12:51 they’re making modest differences to something that’s improving relatively steadily. Like, the US military
03:12:56 is the strongest in the world consistently for a while. No one weapon in the last 70 years has
03:13:02 made a big difference in terms of the overall prominence of the US military, right? Because
03:13:07 that’s just saying, even though every once in a while, even the recent Soviet hypersonic missiles or
03:13:12 whatever they are, they aren’t changing the overall balance dramatically, right?
03:13:18 So when we get to AI, now I can frame the question, how big is AI? Basically, so one way of
03:13:25 thinking about AI is it’s just all mental tasks. And then you ask what fraction of tasks are mental
03:13:30 tasks? And then I go, a lot. And then if I think of AI as like half of everything, then I think,
03:13:38 well, it’s got to be composed of lots of parts where any one innovation is only a small impact,
03:13:44 right? Now, if you think, no, no, no, AI is like AGI. And then you think AGI is a small thing,
03:13:52 right? There’s only a small number of key innovations that will enable it. Now you’re
03:13:57 thinking there could be a bigger chunk that you might find that would have a bigger impact. So
03:14:03 the way I would ask you to frame these things is in terms of the chunkiness of different areas of
03:14:08 technology, in part, in terms of how big they are. If you take 10 chunky areas and you add them
03:14:13 together, the total is less chunky. Yeah. But are you able, until you solve
03:14:19 the fundamental core parts of the problem, to estimate the chunkiness of that problem?
03:14:24 Well, if you have a history of prior chunkiness, that could be your best estimate for future
03:14:29 chunkiness. So for example, I mean, even at the level of the world economy, right? We’ve had this,
03:14:34 what, 10,000 years of civilization. Well, that’s only a short time. You might say, oh, that doesn’t
03:14:40 predict future chunkiness. But it looks relatively steady and consistent. We can say even in computer
03:14:47 science, we’ve had 70 years of computer science. We have enough data to look at chunkiness of
03:14:52 computer science. Like when were there algorithms or approaches that made a big chunky difference
03:15:00 and how large a fraction of that was that? And I’d say mostly in computer science,
03:15:05 most innovation has been relatively small chunks. The bigger chunks have been rare.
03:15:09 Well, this is the interesting thing about AI and algorithms in general:
03:15:14 PageRank, Google’s algorithm, right? Sometimes it’s a simple algorithm that by itself is not that useful,
03:15:27 but at scale, in a context that’s scalable,
03:15:34 all of a sudden the power is revealed. And I guess that’s the nature of chunkiness:
03:15:38 things that can reach a lot of people simply can be quite chunky.
03:15:45 So one standard story about algorithms is to say algorithms have a fixed cost plus a marginal cost.
03:15:53 And so in history, when you had computers that were very small, you tried all the algorithms
03:15:58 that had low fixed costs and you look for the best of those. But over time, as computers got bigger,
03:16:04 you could afford to do larger fixed costs and try those. And some of those had more effective
03:16:09 algorithms in terms of their marginal cost. And that, in fact, roughly explains the
03:16:15 long-term history, where the rate of algorithmic improvement is about the same as
03:16:19 the rate of hardware improvement, which is a remarkable coincidence. But it would be explained
03:16:25 by saying, well, there’s all these better algorithms you can’t try until you have a big enough computer
03:16:30 to pay the fixed cost of doing some trials to find out if that algorithm actually saves you
03:16:35 on the marginal cost. And so that’s an explanation for this relatively continuous history. So we have
03:16:41 a good story about why hardware is so continuous. And you might think, why would software be so
03:16:45 continuous with the hardware? But if there’s a distribution of algorithms in terms of their fixed
03:16:50 costs, and it’s, say, spread out at a wide log normal distribution, then we could be sort of
03:16:55 marching through that log normal distribution, trying out algorithms with larger fixed costs and
03:17:00 finding the ones that have lower marginal costs.
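A small simulation makes this story concrete; the log-normal spread of fixed costs, the assumed noisy negative relation between fixed and marginal cost, and all constants are illustrative assumptions, not estimates from data.

```python
import math
import random

random.seed(0)

# Each algorithm has a fixed (trial) cost and a marginal (running) cost.
# Assumption: fixed costs are log-normally spread, and larger fixed
# costs tend, noisily, to buy lower marginal costs.
algorithms = []
for _ in range(10_000):
    fixed = random.lognormvariate(0.0, 3.0)
    marginal = math.exp(-0.5 * math.log1p(fixed) + random.gauss(0.0, 0.5))
    algorithms.append((fixed, marginal))

# As the compute budget grows, more algorithms become affordable to try,
# and the best achievable marginal cost declines fairly smoothly.
budget = 1.0
while budget <= 1e6:
    affordable = [m for f, m in algorithms if f <= budget]
    print(f"budget {budget:>9.0f}: best marginal cost {min(affordable):.4f}")
    budget *= 10
```

On this toy model, the best affordable marginal cost falls roughly in step with the growing budget, echoing the claim that algorithmic progress tracks hardware progress.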
03:17:02 So would you say AGI, human-level AI, even EM, emulated minds, is chunky? Like a few
03:17:18 breakthroughs can take us there?
03:17:19 So an M is by its nature chunky in the sense that if you have an emulated brain and you’re
03:17:25 25% effective at emulating it, that’s crap. That’s nothing. Okay. You pretty much need to
03:17:32 emulate a full human brain.
03:17:34 Is that obvious? Is that obvious?
03:17:36 It’s pretty obvious. I’m talking about like, you know, so the key thing is you’re emulating
03:17:41 various brain cells. And so you have to emulate the input output pattern of those cells. So if
03:17:46 you get that pattern somewhat close, but not close enough, then the whole system just doesn’t have
03:17:51 the overall behavior you’re looking for, right?
03:17:53 But it could have functionally some of the power of the overall system.
03:17:57 So there’ll be some threshold. The point is when you get close enough, then it goes over the
03:18:00 threshold, right? It’s like taking a computer chip and deleting every 1% of the gates, right?
03:18:05 No, that’s very chunky. But the hope is that emulating the human brain, I mean, the human
03:18:12 brain itself is not…
03:18:13 Right. So it has a certain level of redundancy and a certain level of robustness. And so there’s
03:18:17 some threshold when you get close to that level of redundancy or robustness, then it starts to
03:18:20 work. But until you get to that level, it’s just going to be crap, right? It’s going to be just a
03:18:25 big thing that isn’t working for us. So we can be pretty sure that emulations is a big chunk in an
03:18:32 economic sense, right? At some point, you’ll be able to make one that’s actually effective in
03:18:37 substituting for humans. And then that will be this huge economic product that people will
03:18:42 try to buy like crazy.
03:18:43 It’ll bring a lot of value to people’s lives, so they’ll be willing to pay for it.
03:18:47 Right. But it could be that the first emulation costs a billion dollars each, right? And then we
03:18:53 have them, but we can’t really use them. They’re too expensive. And then the cost slowly comes
03:18:56 down. And now we have less of a chunky adoption, right? That as the cost comes down, then we use
03:19:03 more and more of them in more and more contexts. And that’s a more continuous curve. So it’s only
03:19:10 if the first emulations are relatively cheap that you get a more sudden disruption to society.
03:19:15 And that could happen if sort of the algorithm is the last thing you figure out how to do or
03:19:19 something.
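A tiny toy model of the distinction just drawn, with feasibility as a chunky threshold and adoption as a continuous function of cost; the threshold value, the wage normalization, and the uniform spread of task values are all made-up assumptions for illustration.

```python
# Feasibility is chunky: an emulation either clears a fidelity threshold
# or it is useless. Adoption is continuous: usage grows as unit cost falls.

FIDELITY_THRESHOLD = 0.99  # assumed: below this, an em "just doesn't work"
HUMAN_WAGE = 1.0           # value an em substitutes for, per unit time

def em_value(fidelity):
    # Chunky step: no partial credit for a 25%-faithful emulation.
    return HUMAN_WAGE if fidelity >= FIDELITY_THRESHOLD else 0.0

def adoption(unit_cost):
    # Continuous part: share of tasks where an em is worth buying,
    # assuming task values are spread uniformly over (0, HUMAN_WAGE].
    return max(0.0, min(1.0, 1.0 - unit_cost / HUMAN_WAGE))

for cost in [2.0, 1.0, 0.5, 0.1, 0.01]:
    print(f"unit cost {cost:>5}: adoption {adoption(cost):.0%}")
```

The step function captures the "crap until it works" point about emulations, while the declining-cost loop captures why a billion-dollar first emulation would still mean a gradual, not sudden, disruption.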
03:19:19 What about robots that capture some magic in terms of social connection? The robots, like we have a
03:19:28 robot dog on the carpet right there. Robots that are able to capture some magic of human connection
03:19:36 as they interact with humans, but are not emulating the brain. What about those? How far away?
03:19:42 So we’re thinking about chunkiness or distance now. So if you ask how chunky is the task of making
03:19:48 a, you know, emulatable robot or something. Well, chunkiness and time are correlated.
03:19:55 Right. But it’s about how far away it is or how suddenly it would happen. Chunkiness is how
03:20:01 suddenly and difficulty is just how far away it is. But it could be a continuous difficulty. It
03:20:07 would just be far away, but we’ll slowly steadily get there. Or there could be these thresholds where
03:20:14 we reach a threshold and suddenly we can do a lot better.
03:20:17 Yeah. That’s a good question for both. I tend to believe that all of it, not just the M, but AGI
03:20:24 too is chunky and human level intelligence embodied in robots is also chunky.
03:20:31 The history of computer science and chunkiness so far seems to be my rough best guess for the
03:20:36 chunkiness of AGI. That is, it is chunky.
03:20:39 It’s modestly chunky, not that chunky. Right.
03:20:43 Our ability to use computers to do many things in the economy has been moving relatively steadily.
03:20:48 Overall, in terms of our use of computers in society,
03:20:52 they have been relatively steadily improving for 70 years.
03:20:55 No, but I would say that’s hard. Yeah. Okay. Okay. I would have to really think about that
03:21:00 because neural networks are quite surprising.
03:21:03 Sure. But every once in a while we have a new thing that’s surprising. But if you stand back,
03:21:07 you know, we see something like that every 10 years or so, some new innovation that has a big effect.
03:21:12 So, moderately chunky. Yeah.
03:21:19 But the history of the level of disruption we’ve seen in the past would be a rough
03:21:22 estimate of the level of disruption in the future. Unless, in the future,
03:21:25 we’re going to hit chunky territory, much chunkier than we’ve seen in the past.
03:21:28 Well, I do think it’s like Kuhnian revolution type.
03:21:36 It seems like the data, especially on AI, is difficult to reason with because it’s so recent,
03:21:46 it’s such a recent field. Wow, AI’s been around for 50 years.
03:21:50 I mean, 50, 60, 70, 80 years being recent. Okay.
03:21:53 It’s enough time to see a lot of trends.
03:21:58 A few trends, a few trends. I think the internet, computing, there’s really a lot of interesting
03:22:06 stuff that’s happened over the past 30 years that I think the possibility of revolutions
03:22:13 is likelier than it was in the… I think for the last 70 years,
03:22:17 there have always been a lot of things that look like they had a potential for revolution.
03:22:21 So we can’t reason well about this. I mean, we can reason well by looking
03:22:25 at the past trends. I would say the past trend is roughly your best guess for the future.
03:22:30 No, but if I look back at the things that might’ve looked like revolutions in the 70s and 80s and 90s,
03:22:37 they are less like the revolutions that appear to be happening now, or the capacity for revolution
03:22:43 that appears to be there now. First of all, there’s a lot more money to be made. So there’s a lot more
03:22:49 incentive for markets to do a lot of innovation, it seems, in the AI space.
03:22:54 But then again, there’s a history of winters and summers and so on.
03:22:58 So maybe we’re just like riding a nice wave right now.
03:23:00 One of the biggest issues is the difference between impressive demos and commercial value.
03:23:05 Yes.
03:23:06 So we often through the history of AI, we saw very impressive demos
03:23:10 that never really translated much into commercial value.
03:23:12 Somebody who works on and cares about autonomous and semi autonomous vehicles,
03:23:17 tell me about it. And there again, we return to the number of Elon Musks per Earth per year
03:23:24 generated. That’s the EM. Coincidentally, same initials as the em.
03:23:31 Very suspicious, very suspicious. We’re going to have to look into that. All right. Two more fields
03:23:37 that I would like to force and twist your arm to look for view quakes and for beautiful ideas,
03:23:43 economics. What is a beautiful idea to you about economics? You mentioned a lot of them.
03:23:53 Sure. So as you said before, there’s going to be the first view quake most people encounter that
03:23:58 makes the biggest difference on average in the world, because that’s the only thing most people
03:24:02 ever see is the first one. And so with AI, the first one is just how big the problem is. But
03:24:10 once you get past that, you’ll find others. Certainly for economics, the first one is just
03:24:16 the power of markets. You might have thought it was just really hard to figure out how to optimize
03:24:22 in a big, complicated space. And markets just do a good first pass for an awful lot of stuff.
03:24:29 And they are really quite robust and powerful. And that’s just quite the view quake, where you just
03:24:35 say, if you want to get in the ballpark, just let a market handle it and step back. And that’s true
03:24:43 for a wide range of things. It’s not true for everything, but it’s a very good first approximation.
03:24:48 Most people’s intuitions for how they should limit markets are actually messing them up.
03:24:53 They’re that good, in a sense. Most people go, I don’t know if we want to trust that.
03:24:57 Well, you should be trusting that. What are markets? Just a couple of words. So the idea
03:25:07 is if people want something, then let other companies form to try to supply that thing.
03:25:12 Let those people pay for their cost of whatever they’re making and try to offer that product
03:25:16 to those people. Let many such firms enter that industry and let the customers decide
03:25:22 which ones they want. And if the firm goes out of business, let it go bankrupt and let other
03:25:26 people invest in whichever ventures they want to try to attract customers to their version
03:25:30 of the product. And that just works for a wide range of products and services.
03:25:34 And through all of this, there’s a free exchange of information too.
03:25:37 There’s a hope that there’s no manipulation of information and so on.
03:25:43 Even when those things happen, still just the simple market solution is usually better
03:25:48 than the things you’ll try to do to fix it.
03:25:49 Than the alternative.
03:25:50 That’s a view quake. It’s surprising. It’s not what you would have initially thought.
03:25:55 That’s one of the great, I guess, inventions of human civilization that trust the markets.
03:26:02 Now, another view quake that I learned in my research, that’s not all of economics
03:26:05 but something more specialized, is the rationality of disagreement. That is,
03:26:11 basically people who are trying to believe what’s true in a complicated situation would not actually
03:26:16 disagree. And of course, humans disagree all the time. So it was quite the striking fact for me to
03:26:22 learn in grad school that actually rational agents would not knowingly disagree. And so that makes
03:26:28 disagreement more puzzling and it makes you less willing to disagree.
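The formal result behind that claim is Aumann’s 1976 agreement theorem, which Hanson and others later extended; a standard statement from the literature, not a quote from the conversation, is:

```latex
% Aumann (1976), "Agreeing to Disagree": a standard statement.
\textbf{Theorem.} Suppose two agents share a common prior $P$ and
receive private information described by partitions $\Pi_1$ and
$\Pi_2$ of the state space. If, for some event $E$, the posterior
probabilities $q_1 = P(E \mid \Pi_1)$ and $q_2 = P(E \mid \Pi_2)$
are common knowledge between the agents, then $q_1 = q_2$: they
cannot agree to disagree.
```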
03:26:35 Humans are, to some degree, rational and are able to…
03:26:40 Their priorities are different than just figuring out the truth.
03:26:43 Are different than just figuring out the truth.
03:26:48 Which might not be the same as being irrational.
03:26:52 That’s another tangent that could take an hour.
03:26:56 In the space of human affairs, political science, what is a beautiful, foundational,
03:27:04 interesting idea to you, a view quake, in the space of political science?
03:27:08 The main thing that goes wrong in politics is people not agreeing on what the best thing to do is.
03:27:19 That’s a wrong thing.
03:27:20 So that’s what goes wrong. That is where you say, what’s fundamentally behind most
03:27:24 political failures? It’s that people are ignorant of what the consequences of policy is.
03:27:30 And that’s surprising because it’s actually feasible to solve that problem,
03:27:34 which we aren’t solving.
03:27:35 So it’s a bug, not a feature that there’s an inability to arrive at a consensus.
03:27:43 So most political systems, if everybody looked to some authority, say, on a question and that
03:27:47 authority told them the answer, then most political systems are capable of just doing that thing.
03:27:55 That is. And so it’s the failure to have trustworthy authorities
03:28:00 that is sort of the underlying failure behind most political failure.
03:28:04 We invade Iraq, say, when we don’t have an authority to tell us that’s a really stupid
03:28:09 thing to do. And it is possible to create more informative trustworthy authorities.
03:28:17 That’s a remarkable fact about the world of institutions that we could do that, but we aren’t.
03:28:24 Yeah, that’s surprising. We could and we aren’t.
03:28:28 Right. Another big view quake about politics, from The Elephant in the Brain, is that most people,
03:28:31 when they’re interacting with politics, they say they want to make the world better,
03:28:35 make their city better, their country better, and that’s not their priority.
03:28:39 What is it?
03:28:40 They want to show loyalty to their allies. They want to show their people they’re on their side,
03:28:44 yes. Or their various tribes they’re in, that’s their primary priority and they do accomplish that.
03:28:51 Yeah. And the tribes are usually color coded, conveniently enough.
03:28:55 What would you say, you know, it’s the Churchill question. Democracy is the crappiest form of
03:29:01 government, but it’s the best one we’ve got. What’s the best form of government for this, our 7 billion
03:29:08 human civilization, and maybe, as we get farther and farther out? You mentioned a lot of stuff
03:29:14 that’s fascinating about human history as we become more forager like and looking out beyond
03:29:21 what’s the best form of government in the next 50, 100 years as we become a multi planetary species.
03:29:26 So, the key failing is that we have existing political institutions and related institutions
03:29:33 like media institutions and other authority institutions, and these institutions sit in
03:29:39 a vast space of possible institutions. And the key failing is we’re just not exploring that space.
03:29:44 So, I have made my proposals in that space,
03:29:50 and I think I can identify many promising solutions. And many other people have made many
03:29:54 other promising proposals in that space. But the key thing is we’re just not pursuing those
03:29:59 proposals. We’re not trying them out on small scales, we’re not doing tests, we’re not exploring
03:30:04 the space of these options. That is the key thing we’re failing to do. And if we did that, I am
03:30:10 confident we would find much better institutions than the ones we’re using now, but we would have to
03:30:14 actually try. So, a lot of those topics, I do hope we get a chance to talk again. You’re a fascinating
03:30:23 human being. So, I’m skipping a lot of tangents on purpose that I would love to take. You’re such a
03:30:28 brilliant person on so many different topics. Let me take a stroll into the deep human psyche of
03:30:40 Robin Hansen himself. So, first… May not be that deep.
03:30:48 I might just be all on the surface. What you see is what you get. There might not be much hiding
03:30:51 behind it. Some of the fun is on the surface. I actually think this is true of many of the most
03:30:58 successful, most interesting people you see in the world. That is, they have put so much effort
03:31:04 into the surface that they’ve constructed. And that’s where they put all their energy. Somebody
03:31:10 might be a statesman or an actor or something else, and people want to interview them and they
03:31:14 want to say, what are you behind the scenes? What do you do in your free time? Those people don’t
03:31:18 have free time. They don’t have another life behind the scenes. They put all their energy into
03:31:24 that surface, the one we admire, the one we’re fascinated by. And they kind of have to make up
03:31:28 the stuff behind the scenes to supply it for you, but it’s not really there. Well, there’s several
03:31:33 ways of phrasing that. So, one of them is authenticity, which is if you become the thing you are on the
03:31:41 surface, if the depths mirror the surface, then that’s what authenticity is. You’re not hiding
03:31:48 something. You’re not concealing something. To push back on the idea of actors, they actually have
03:31:52 often a manufactured surface that they put on and they try on different masks and the depths are
03:32:00 very different from the surface. And that’s actually what makes them very not interesting
03:32:03 to interview. If you are an actor who actually lives the role that you play, so like, I don’t
03:32:13 know, Clint Eastwood type character who clearly represents the cowboy, like at least rhymes or
03:32:20 echoes the person you play on the surface, that’s authenticity. Some people are typecasts and they
03:32:26 have basically one persona they play in all of their movies and TV shows. And so those people,
03:32:30 it probably is the actual persona that they are, or it has become that over time. Clint Eastwood
03:32:37 would be one. I think of Tom Hanks as another. I think they just always play the same person.
03:32:40 And you and I are just both surface players. You’re the fun, brilliant thinker and I am the
03:32:49 suit wearing idiot full of silly questions. All right. That said, let’s put on your wise
03:33:01 sage hat and ask you, what advice would you give to young people today in high school and college
03:33:07 about life, about how to live a successful life in career or just in general that they can be proud
03:33:15 of? Most young people, when they actually ask you that question, what they usually mean is how can
03:33:22 I be successful by usual standards? I’m not very good at giving advice about that because that’s
03:33:28 not how I tried to live my life. So I would more flip it around and say, you live in a rich society
03:33:36 and you will have a long life. You have many resources available to you. Whatever career you
03:33:44 take, you’ll have plenty of time to make progress on something else. Yes, it might be better if you
03:33:50 find a way to combine your career and your interests in a way that gives you more time
03:33:54 and energy, but there are often big compromises there as well. So if you have a passion about some
03:34:00 topic or some thing that you think just is worth pursuing, you can just do it. You don’t need other
03:34:05 people’s approval. And you can just start doing whatever it is you think it’s worth doing. It
03:34:12 might take you decades, but decades are enough to make enormous progress on most all interesting
03:34:17 things. And don’t worry about the commitment of it. I mean, that’s a lot of what people worry
03:34:21 about is, well, there’s so many options. And if I choose a thing and I stick with it, I sacrifice
03:34:27 all the other paths I could have taken. So I switched my career at the age of 34 with two
03:34:32 kids, age zero and two, went back to grad school in social science after being a research software
03:34:39 engineer. So it’s quite possible to change your mind later in life.
03:34:45 How can you have an age of zero?
03:34:48 Less than one.
03:34:50 Okay. Oh, you indexed from zero. I got it. Okay.
03:34:55 Right. People also ask what to read and I say, textbooks. Until you’ve read lots of textbooks
03:35:02 or maybe review articles, I’m not so sure you should be reading blog posts and Twitter feeds
03:35:08 and even podcasts. I would say at the beginning, this is our best, humanity’s best summary of how
03:35:16 to learn things is crammed into textbooks. Especially the ones on like introduction to
03:35:21 biology, introduction to everything. Just read them all, read as many textbooks as
03:35:26 you can stomach. And then maybe if you want to know more about a subject, find review articles.
03:35:30 Right. You don’t need to read the latest stuff for most topics.
03:35:33 Yeah. And actually textbooks often have the prettiest pictures.
03:35:37 There you go.
03:35:37 And depending on the field, if it’s technical, then doing the homework problems at the end,
03:35:42 it’s actually extremely, extremely useful. Extremely powerful way to understand something
03:35:47 if you allow it. I actually think of like high school and college, which you kind of remind me
03:35:54 of, people don’t often think of it that way, but you will almost never again get an opportunity
03:36:02 to spend the time with a fundamental subject and like, and everybody’s forcing you, like
03:36:08 everybody wants you to do it. And like, you’ll never get that chance again to sit there,
03:36:14 even though it’s outside of your interest, biology. Like in high school, I took AP biology,
03:36:19 AP chemistry. I’m thinking of subjects I never again really visited seriously. And it was so
03:36:28 nice to be forced into anatomy and physiology, to be forced into that world, to stay with it,
03:36:35 to look at the pretty pictures, and in certain moments to actually enjoy the beauty of
03:36:40 how a cell works and all those kinds of things. And somehow that stays with you,
03:36:46 the ripples of that fascination stay with you,
03:36:51 even if you never utilize those learnings in your actual work.
03:36:56 A common problem, at least for many young people I meet, is that they’re feeling
03:37:01 idealistic and altruistic, but in a rush. So, you know, the usual human tradition that goes back,
03:37:09 you know, hundreds of thousands of years is that people’s productivity rises with time and maybe
03:37:13 peaks around the age of 40 or 50. The age of 40, 50 is when you will be having the highest income,
03:37:19 you’ll have the most contacts, you will sort of be wise about how the world works.
03:37:25 Expect to have your biggest impact then. Before then, you can have impacts, but you’re also mainly
03:37:31 building up your resources and abilities. That’s the usual human trajectory. Expect that to be
03:37:38 true of you too. Don’t be in such a rush to like accomplish enormous things at the age of 18 or
03:37:43 whatever. I mean, you might as well practice trying to do things, but that’s mostly about
03:37:47 learning how to do things by practicing. There’s a lot of things you can’t do unless you just
03:37:50 keep trying them. And when all else fails, try to maximize the number of offspring,
03:37:56 however you can. That’s certainly something I’ve neglected. I would tell my younger version
03:38:01 of myself, try to have more descendants. Yes, absolutely. It matters more than
03:38:09 I realized at the time. Both in terms of making copies of yourself in mutated form
03:38:18 and just the joy of raising them. Sure. I mean, the meaning even, you know, so in the literature on
03:38:28 the value people get out of life, there’s a key distinction between happiness and meaning.
03:38:32 So happiness is how do you feel right now about right now and meaning is how do you feel about
03:38:37 your whole life? And, you know, many things that produce happiness don’t produce meaning as
03:38:44 reliably. And if you have to choose between them, you’d rather have meaning. And meaning is more
03:38:51 goes along with sacrificing happiness sometimes. And children are an example of that. You get a lot
03:38:57 more meaning out of children, even if they’re a lot more work. Why do you think kids, children,
03:39:05 are so magical, like raising kids? I would love to have kids. And whenever I work with robots,
03:39:15 there’s some of the same magic when there’s an entity that comes to life. And in that case,
03:39:21 I’m not trying to draw too many parallels, but there is some echo to it, which is when you
03:39:27 program a robot, there’s some aspect of your intellect that is now instilled in this other
03:39:33 moving being that’s kind of magical. Well, why do you think that’s magical? And you said happiness
03:39:40 and meaning, as opposed to something short-term. Why is it meaningful? It’s overdetermined. Like I can give
03:39:49 you several different reasons, all of which is sufficient. And so the question is, we don’t know
03:39:53 which ones are the correct reasons. It’s overdetermined. Look it up. So, you know, I meet a
03:39:59 lot of people interested in the future, interested in thinking about the future. They’re thinking
03:40:03 about how can I influence the future? But overwhelmingly in history so far, the main way
03:40:08 people have influenced the future is by having children, overwhelmingly. And that’s just not an
03:40:15 incidental fact. You are built for that. That is, you're the product of thousands of generations,
03:40:22 each of which successfully had a descendant. And that affected who you are. You just have to expect,
03:40:28 and it's true, that who you are is built to expect to have a child, to want to have a
03:40:36 child, to have that be a natural and meaningful interaction for you. And it's just true. It's just
03:40:41 one of those things you just should have expected and it’s not a surprise. Well, to push back and
03:40:48 sort of in terms of influencing the future, as we get more and more technology, more and more of us
03:40:54 are able to influence the future in all kinds of other ways, right? Being a teacher, educator. Even
03:41:00 so, though, still most of our influence on the future has probably happened through having kids, even
03:41:05 though we've accumulated other ways to do it. You mean at scale. I guess the depth of
03:41:11 influence, like really how much effort, how much of yourself you really put into another human
03:41:15 being. Do you mean both the raising of a kid or you mean raw genetic information? Well, both, but
03:41:24 raw genetics is probably more than half of it. More than half. More than half. Even in this modern
03:41:30 world? Yeah. Genetics. Let me ask some dark, difficult questions, if I might. Let’s take a
03:41:40 stroll into that place that may or may not exist, according to you. What’s the darkest place you’ve
03:41:48 ever gone to in your mind, in your life, a dark time, a challenging time in your life that you had to overcome?
03:41:58 You know, probably just feeling strongly rejected. And so I’ve been, I’m apparently somewhat
03:42:06 emotionally scarred by just being very rejection averse, which must have happened because
03:42:11 some rejections were just very scarring. At what scale, in what kinds of communities? On the
03:42:18 individual scale? I mean, lots of different scales, yeah. Many different scales. Still
03:42:24 that rejection stings. Hold on a second, but you are a contrarian thinker. You challenge the
03:42:33 norms. Why, if you were scarred by rejection, why welcome it in so many ways at a much
03:42:43 larger scale, constantly with your ideas? It could be that I’m just stupid, or that I’ve just categorized
03:42:50 them differently than I should or something. You know, the most rejection that I’ve faced hasn’t been
03:42:58 because of my intellectual ideas. So the intellectual ideas haven't been the thing
03:43:06 that risked the rejection. The things that took your mind to a dark
03:43:14 place were the more psychological rejections. So. Well, you just asked me, you know, what took me to a
03:43:21 dark place. You didn’t specify it as sort of an intellectual dark place, I guess. Yeah, I just
03:43:25 meant like what? So intellectual is disjoint or at least at a more surface level than something
03:43:33 emotional? Yeah, I would just think, you know, there are times in your life when, you know,
03:43:38 you’re just in a dark place and that can have many different causes. And most, you know, most
03:43:43 intellectuals are still just people and most of the things that will affect them are the kinds of
03:43:47 things that affect people. They aren’t that different necessarily. I mean, that’s going to be true for,
03:43:52 like, I presume most basketball players are still just people. If you ask them what was the worst
03:43:55 part of their life, it’s going to be this kind of thing that was the worst part of life for most
03:43:59 people. So rejection early in life? Yeah, I think, I mean, not in grade school probably, but, you know,
03:44:06 yeah, sort of, you know, being a young nerdy guy and feeling, you know, not in much demand or interest
03:44:13 or, you know, later on, lots of different kinds of rejection. But yeah, but I think that’s, you know,
03:44:22 most of us like to pretend we don’t that much need other people. We don’t care what they think.
03:44:26 I know it’s a common sort of stance if somebody rejects you or something, I didn’t care about them
03:44:30 anyway, I didn't, you know. But I think, to be honest, people really do care. Yeah, we do seek
03:44:35 that connection, that love. What do you think is the role of love in the human condition?
03:44:40 Um, opacity, in part. That is, love is one of those things where we know at some level it’s
03:44:53 important to us, but it’s not very clearly shown to us exactly how or why or in what ways.
03:45:00 There are some kinds of things we want where we can just clearly see that we want and why that we
03:45:03 want it, right? We know when we're thirsty, and we know why we're thirsty, and we know what to
03:45:07 do about being thirsty, and we know when it’s over that we’re no longer thirsty. Love isn’t like that.
03:45:14 It’s like, what do we seek from this? We’re drawn to it, but we do not understand why
03:45:19 we’re drawn exactly. Because it’s not just affection, because if it was just affection,
03:45:25 we don’t seem to be drawn to pure affection. We don’t seem to be drawn to somebody who’s like a
03:45:32 servant. We don't seem to be necessarily drawn to somebody that satisfies all our needs or something
03:45:37 like that. So it’s clearly something we want or need, but we’re not exactly very clear about it,
03:45:43 and that is kind of important to it. So I’ve also noticed there are some kinds of things
03:45:48 you can’t imagine very well. So if you imagine a situation, there’s some aspects of the situation
03:45:53 that you can clearly, you can imagine it being bright or dim, you can imagine it being windy,
03:45:56 or you can imagine it being hot or cold. But there’s some aspects about your emotional stance
03:46:02 in a situation that’s actually just hard to imagine or even remember. You can often remember
03:46:08 an emotion only when you're in a similar sort of emotional situation, and otherwise, you just can't
03:46:12 bring the emotion to your mind, and you can't even imagine it, right? So there are certain kinds of
03:46:19 emotions you can have, and when you're in that emotion, you can know that you have it, and you
03:46:22 can have a name associated with it. But later on, if I tell you, remember joy, it doesn't come to
03:46:28 mind. I'm not able to replay it. Right. And that's sort of the reason, one of the reasons
03:46:33 that pushes us to re-consume it and reproduce it: we can't reimagine it. Well, it's interesting
03:46:41 because there’s a Daniel Kahneman type of thing of reliving memories, because I’m able to summon
03:46:47 some aspect of that emotion, again, by thinking of that situation from which that emotion came.
03:46:53 Right. So like a certain song, you can listen to it, and you can feel the same way you felt the
03:46:59 first time you remember that song associated with it. Right. So you need to remember that situation
03:47:03 in some sort of complete package. Yes. You can't just take one part of it; but if you get
03:47:08 the whole package again, you remember the whole feeling. Yes. Or some fundamental aspect of that
03:47:13 whole experience from which the feeling arose. And actually, the feeling is probably
03:47:18 different in some way. It could be more pleasant or less pleasant than the feeling you felt
03:47:23 originally, and that morphs over time every time you replay that memory. It is interesting. You’re
03:47:28 not able to replay the feeling perfectly. You don’t remember the feeling. You remember the facts of the
03:47:33 events. So there's a sense in which, over time, we expand our vocabulary as a community of language,
03:47:39 and that allows us to sort of have more feelings and know that we are feeling them. Because you can
03:47:43 have a feeling but not have a word for it, and then you don’t know how to categorize it or even
03:47:47 what it is and whether it’s the same as something else. But once you have a word for it, you can
03:47:52 sort of pull it together more easily. And so I think over time we are having a richer palette of
03:47:58 feelings because we have more words for them. What has been a painful loss in your life?
03:48:05 Maybe somebody or something that’s no longer in your life, but played an important part of your life.
03:48:12 Youth?
03:48:14 That’s a concept. No, it has to be…
03:48:16 I mean, but I was once younger. I had health and I had vitality. I was
03:48:20 handsomer. I mean, you know, I've lost that over time.
03:48:22 Do you see that as a different person? Maybe you’ve lost that person.
03:48:26 Yes, absolutely. I’m a different person than I was when I was younger, and I don’t even remember
03:48:32 exactly what he was. So I don’t remember as many things from the past as many people do. So in
03:48:36 some sense, I’ve just lost a lot of my history by not remembering it. And I’m not that person
03:48:42 anymore. That person is gone and I don’t have any of their abilities.
03:48:45 Is it a painful loss, though?
03:48:46 Yeah.
03:48:47 Or is it a… Why is it painful? Because you’re wiser.
03:48:54 There’s so many things that are beneficial to getting older.
03:48:57 Right. But I just was this person and I felt assured that I could continue to be that person.
03:49:06 And you’re no longer that person.
03:49:07 And he’s gone. And I’m not him anymore. And he died without fanfare or a funeral.
03:49:14 And that the person you are today talking to me, that person will be changed, too.
03:49:20 Yes. And maybe in 20 years, he won’t be there anymore.
03:49:24 And the future person will look back. The future version of you will…
03:49:30 For Ems, this will be less of a problem. For Ems, they would be able to save an archived
03:49:34 copy of themselves at each different age. And they could turn it on periodically and go back
03:49:39 and talk to it.
03:49:40 To replay. You think some of that will be… So with emulated minds, with Ems,
03:49:46 there’s a digital cloning that happens. And do you think that makes you less special if you’re
03:50:00 clonable? Does that change the experience of life? The experience of a moment, the scarcity
03:50:10 of that moment, the scarcity of that experience, isn't that a fundamental part of what makes
03:50:14 that experience so delicious, so rich in feeling?
03:50:18 I think if you think of a song that lots of people listen to that are copies all over the
03:50:22 world, we’re going to call that a more special song.
03:50:26 Yeah. Yeah.
03:50:32 So there’s a perspective on copying and cloning where you’re just scaling happiness versus
03:50:39 degrading it.
03:50:40 I mean, each copy of a song is less special if there are many copies, but the song itself is
03:50:46 more special if there are many copies.
03:50:48 En masse, right, you're actually spreading the happiness, even if each copy diminishes it, over a
03:50:55 large number of people at scale, and that increases the overall happiness in the world.
03:50:59 And then you’re able to do that with multiple songs.
03:51:02 Is a person who has an identical twin more or less special?
03:51:06 Well, the problem with identical twins is, you know, it's just two, compared with Ems.
03:51:16 Right, but two is different than one.
03:51:18 So I think an identical twin’s life is richer for having this other identical twin, somebody
03:51:24 who understands them better than anybody else can.
03:51:27 From the point of view of an identical twin, I think they have a richer life for being
03:51:32 part of this couple, each of which is very similar.
03:51:34 Now, if you said, will the world, you know, if we lose one of the identical twins, will
03:51:38 the world miss it as much because you’ve got the other one and they’re pretty similar?
03:51:42 Maybe from the rest of the world’s point of view, they suffer less of a loss when they
03:51:46 lose one of the identical twins.
03:51:48 But from the point of view of the identical twin themselves, their life is enriched by
03:51:52 having a twin.
03:51:53 See, but the identical twin copying happens at the place of birth.
03:51:58 It’s different than copying after you’ve done some of the environment, like the nurture
03:52:05 at the teenage or in the 20s after going to college.
03:52:08 Yes, that’ll be an interesting thing for M’s to find out all the different ways that
03:52:11 they can have different relationships to different people who have different degrees of similarity
03:52:16 to them in time.
03:52:17 Yeah, yeah, man.
03:52:23 But it seems like a rich space to explore and I don’t feel sorry for them.
03:52:26 This sounds like an interesting world to live in.
03:52:29 And there could be some ethical conundrums there.
03:52:31 There will be many new choices to make that they don’t make now.
03:52:35 So, and I discussed that in the book The Age of Em.
03:52:38 Like, say you have a lover and you make a copy of yourself, but the lover doesn’t make
03:52:43 a copy.
03:52:43 Well now, which one of you, or are both, still related to the lover?
03:52:48 Socially entitled to show up.
03:52:52 Yes, so you’ll have to make choices then when you split yourself, which of you inherit
03:52:58 which unique things.
03:53:01 Yeah, and of course there’ll be an equivalent increase in lawyers.
03:53:08 Well, I guess you can clone the lawyers to help manage some of these negotiations of
03:53:14 how to split property.
03:53:16 The nature of owning, I mean, property is connected to individuals, right?
03:53:22 You only really need lawyers for this with an inefficient, awkward law that is not very
03:53:26 transparent or easy to use.
03:53:28 So, you know, for example, an operating system of a computer is a law for that computer.
03:53:33 When the operating system is simple and clean, you don’t need to hire a lawyer to make a
03:53:37 key choice with the operating system.
03:53:38 You don’t need a human in the loop.
03:53:40 You just make a choice, right?
03:53:42 So ideally we want a legal system that makes the common choices easy and doesn't require much
03:53:48 overhead.
03:53:49 And that’s the digitization of things further enables that.
03:53:56 So the loss of a younger self, what about the loss of your life overall?
03:54:01 Do you ponder your death, your mortality?
03:54:03 Are you afraid of it?
03:54:05 I am a cryonics customer.
03:54:06 That’s what this little tag around my deck says.
03:54:09 It says that if you find me in a medical situation, you should call these people to enable the
03:54:15 cryonics transfer.
03:54:16 So I am taking a long shot chance at living a much longer life.
03:54:22 Can you explain what cryonics is?
03:54:25 So when medical science gives up on me in this world, instead of burning me or letting
03:54:32 worms eat me, they will freeze me or at least freeze my head.
03:54:36 And there is damage that happens in the process of freezing the head.
03:54:40 But once it’s frozen, it won’t change for a very long time.
03:54:44 Chemically, it’ll just be completely exactly the same.
03:54:47 So future technology might be able to revive me.
03:54:50 And in fact, I would be mainly counting on the brain emulation scenario, which doesn’t
03:54:55 require reviving my entire biological body.
03:54:58 It means I would be in a computer simulation.
03:55:02 And so I think I’ve got at least a 5% shot at that.
03:55:06 And that’s immortality.
03:55:10 But most likely it won’t happen.
03:55:12 And therefore, I’m sad that it won’t happen.
03:55:14 Do you think immortality is something that you would like to have?
03:55:20 Well, I mean, just like infinity, I mean, you can’t know until forever, which means
03:55:26 never, right?
03:55:26 So all you can really ask, you know, the better question is, at each moment, do you want to keep
03:55:30 going?
03:55:31 So I would like at every moment to have the option to keep going.
03:55:34 The interesting thing about human experience is that the way you phrase it is exactly right.
03:55:45 At every moment, I would like to keep going.
03:55:48 But the thing that happens, you know, "leave them wanting more," or whatever that phrase
03:55:58 is, the thing that happens is, over time, it's possible for certain experiences to become
03:56:04 bland and you become tired of them.
03:56:07 And that actually makes life really unpleasant.
03:56:13 Sorry, makes that experience really unpleasant.
03:56:15 And perhaps you can generalize that to life itself if you have a long enough horizon.
03:56:21 And so…
03:56:22 Might happen, but might as well wait and find out.
03:56:24 But then you’re ending on suffering, you know?
03:56:28 So in the world of brain emulations, I have more options.
03:56:32 You can revert yourself.
03:56:34 That is, I can make copies of myself, archive copies at various ages.
03:56:39 And at a later age, I could decide that I’d rather replace myself with a new copy from
03:56:43 a younger age.
03:56:44 So does a brain emulation still operate in physical space?
03:56:48 So can we, what do you think about, like, the metaverse and operating in virtual reality,
03:56:53 so we can conjure up, not just emulate your own brain and body, but the entirety
03:57:00 of the environment?
03:57:00 Well, most brain emulations will, in fact, spend most of their time in virtual reality.
03:57:06 But they wouldn’t think of it as virtual reality.
03:57:08 They would just think of it as their usual reality.
03:57:11 I mean, the thing to notice, I think, in our world, most of us spend most time indoors.
03:57:16 And indoors, we are surrounded by walls covered with paint and floors covered with
03:57:21 tile or rugs.
03:57:23 Most of our environment is artificial.
03:57:26 It’s constructed to be convenient for us.
03:57:28 It’s not the natural world that was there before.
03:57:31 A virtual reality is basically just like that.
03:57:33 It is the environment that’s comfortable and convenient for you.
03:57:37 But when it’s the right, that environment for you, it’s real for you.
03:57:41 Just like the room you’re in right now most likely is very real for you.
03:57:45 You’re not focused on the fact that the paint is hiding the actual studs behind the
03:57:49 wall and the actual wires and pipes and everything else.
03:57:52 The fact that we’re hiding that from you doesn’t make it fake or unreal.
03:57:58 What are the chances that we’re actually in the very kind of system that you’re describing
03:58:04 where the environment and the brain is being emulated and you’re just replaying an experience
03:58:08 from when you first did a podcast with Lex?
03:58:14 And now, the person that originally launched this already did hundreds of podcasts with
03:58:19 Lex.
03:58:19 This is just the first time and you like this time because there’s so much uncertainty.
03:58:24 There’s nerves.
03:58:25 It could have gone any direction.
03:58:28 At the moment, we don’t have the technical ability to create that emulation.
03:58:32 So we’d have to be postulating that in the future we have that ability and then they
03:58:37 choose to emulate this moment now, to simulate it.
03:58:40 Don’t you think we could be in the simulation of that exact experience right now and we
03:58:45 wouldn’t be able to know?
03:58:46 So one scenario would be this never really happened.
03:58:51 This only happens as a reconstruction later on.
03:58:55 That’s different than the scenario that this did happen the first time and now it’s happening
03:58:58 again as a reconstruction.
03:59:00 That second scenario is harder to put together because it requires this coincidence where
03:59:06 between the two times we produce the ability to do it.
03:59:08 But don’t you think replay of memories, poor replay of memories is something that might
03:59:18 be a possible thing in the future?
03:59:19 You’re saying it’s harder than conjure up things from scratch.
03:59:23 It’s certainly possible.
03:59:25 So the main way I would think about it is in terms of the demand for simulation versus
03:59:29 other kinds of things.
03:59:31 So I’ve given this a lot of thought because I first wrote about this long ago when Bostrom
03:59:36 first wrote his papers about the simulation argument, and I wrote about how to live in a simulation.
03:59:42 And so the key issue is the fraction of creatures in the universe that are really experiencing
03:59:50 what you appear to be really experiencing relative to the fraction that are experiencing
03:59:54 it in a simulation way, i.e., simulated.
03:59:57 So then the key parameter is at any one moment in time, creatures at that time, many of them,
04:00:06 most of them are presumably really experiencing what they’re experiencing, but some fraction
04:00:10 of them are experiencing some past time where that past time is being remembered via their
04:00:17 simulation.
04:00:19 So to figure out this ratio, what we need to think about is basically two functions.
04:00:26 One is how fast in time does the number of creatures grow?
04:00:30 And then how fast in time does the interest in the past decline?
04:00:34 Because at any one time, people will be simulating different periods in the past with different
04:00:40 emphasis.
04:00:40 I love the way you think so much.
04:00:42 That’s exactly right, yeah.
04:00:44 So if the first function grows slower than the second one declines, then in fact, your
04:00:51 chances of being simulated are low.
04:00:54 Yes.
04:00:54 So the key question is how fast does interest in the past decline relative to the rate
04:00:58 at which the population grows with time?
04:01:00 Does this correlate to, you earlier suggested that the interest in the future increases
04:01:05 over time. Are those correlated, interest in the future versus interest in the past?
04:01:09 Like, why are we interested in the past?
04:01:11 So, but the simple way to do it is, as you know, like Google Ngrams has a way to type
04:01:15 in a word and see how interest in it declines or rises over time, right?
04:01:20 Yeah.
04:01:20 You can just type in a year and get the answer for that.
04:01:24 If you type in a particular year, like 1900 or 1950, you can see with Google Ngram, how
04:01:30 interest in that year increased up until that date and decreased after it.
04:01:34 Yep.
04:01:35 And you can see that interest in a date declines faster than does the population grow with
04:01:41 time.
04:01:42 That is brilliant.
04:01:44 That is so interesting.
04:01:45 And so you have the answer.
04:01:48 Wow.
04:01:49 Wow.
04:01:50 And that was your argument against, well, not against, but about this particular aspect of the simulation,
04:01:56 how much past simulation there will be, replay of past memories.
04:02:01 First of all, if we assume that like simulation of the past is a small fraction of all the
04:02:06 creatures at that moment.
04:02:07 Yes.
04:02:07 Right.
04:02:08 And then it’s about how fast.
04:02:10 Now, some people have argued plausibly that maybe most interest in the past falls with
04:02:15 this fast function, but some unusual category of interest in the past won't fall that
04:02:19 quickly.
04:02:20 And then that eventually would dominate.
04:02:22 So that’s a other hypothesis you want.
04:02:24 Some category.
04:02:25 So that very outlier specific kind of, yeah, okay.
04:02:28 Yeah, yeah, yeah.
04:02:29 Like really popular kinds of memories, like probably sexual.
04:02:35 In a trillion years, there’s some small research institute that tries to randomly select from
04:02:40 all possible people in history or something to simulate.
04:02:42 Yeah, yeah, yeah.
04:02:46 So the question is how big is this research institute and how big is the future in a trillion
04:02:50 years, right?
04:02:51 And that would be hard to say.
04:02:52 But if we just look at the ordinary process by which people simulate recent eras.
04:02:57 So if you look at, it’s also true for movies and plays and video games,
04:03:02 overwhelmingly they’re interested in the recent past.
04:03:04 There’s very few video games where you play someone in the Roman Empire.
04:03:07 Right.
04:03:08 But even fewer where you play someone in the ancient Egyptian Empire.
04:03:14 Yeah, just different.
04:03:15 It’s just declined very quickly.
04:03:16 But every once in a while that’s brought back.
04:03:20 But yeah, you’re right.
04:03:21 I mean, just if you look at the mass of entertainment, movies and games, it’s focusing on the present
04:03:28 recent past.
04:03:29 And maybe some, I mean, where does science fiction fit into this?
04:03:32 Because it’s sort of, what is science fiction?
04:03:39 I mean, it’s a mix of the past and the present and some kind of manipulation of that to make
04:03:44 it more efficient for us to ask deep philosophical questions about humanity.
04:03:48 The closest genre to science fiction is clearly fantasy.
04:03:51 Fantasy and science fiction in many bookstores and even Netflix or whatever categories, they’re
04:03:55 just lumped together.
04:03:56 So clearly they have a similar function.
04:03:58 So the function of fantasy is more transparent than the function of science fiction.
04:04:02 So use that as your guide.
04:04:04 What’s fantasy for is just to take away the constraints of the ordinary world and imagine
04:04:08 stories with much fewer constraints.
04:04:10 That’s what fantasy is.
04:04:11 You are much less constrained.
04:04:13 What’s the purpose to remove constraints?
04:04:14 Is it to escape from the harshness of the constraints of the real world?
04:04:19 Or is it to just remove constraints in order to explore some, get a deeper understanding
04:04:24 of our world?
04:04:26 What is it?
04:04:26 I mean, why do people read fantasy?
04:04:28 I’m not a cheap fantasy reading kind of person.
04:04:34 So I need to…
04:04:36 One story that sounds plausible to me is that there are sort of these deep story structures
04:04:40 that we love and we want to realize.
04:04:43 And then many details of the world get in their way.
04:04:46 Fantasy takes all those obstacles out of the way and lets you tell the essential hero story
04:04:51 or the essential love story, whatever essential story you want to tell.
04:04:53 When the reality and constraints are not in the way.
04:04:59 And so science fiction can be thought of as like fantasy, except you’re not willing to
04:05:02 admit that it can’t be true.
04:05:04 So the future gives the excuse of saying, well, it could happen.
04:05:09 And you accept some more reality constraints for the illusion, at least, that maybe it
04:05:13 could really happen.
04:05:16 Maybe it could happen.
04:05:18 And that, it stimulates the imagination.
04:05:20 Imagination is something really interesting about human beings.
04:05:24 And it seems also to be an important part of creating really special things is to be
04:05:28 able to first imagine them.
04:05:30 With you and Nick Bostrom, where do you land on the simulation and all the mathematical
04:05:37 ways of thinking about it and just the thought experiment of it?
04:05:41 Are we living in a simulation?
04:05:44 That was just the discussion we just had.
04:05:46 That is, you should grant the possibility of being a simulation.
04:05:50 You shouldn’t be 100% confident that you’re not.
04:05:52 You should certainly grant a small probability.
04:05:54 The question is, how large is that probability?
04:05:56 Are you saying we would be, I misunderstood because I thought our discussion was about
04:06:01 replaying things that already happened.
04:06:03 Right.
04:06:03 But the whole question is, right now, is that what I am?
04:06:08 Am I actually a replay from some distant future?
04:06:11 But it doesn’t necessarily need to be a replay.
04:06:13 It could be a totally new.
04:06:15 You could be, you don’t have to be an NPC.
04:06:17 Clearly, I’m in a certain era with a certain kind of world around me.
04:06:20 So either this is a complete fantasy or it’s a past of somebody else in the future.
04:06:26 No, it could be a complete fantasy though.
04:06:28 It could be.
04:06:28 But then you have to talk about what’s the fraction of complete fantasies.
04:06:33 I would say it’s easier to generate a fantasy than to replay a memory.
04:06:36 Right?
04:06:37 Oh, but the fraction is important.
04:06:39 We just look at the entire history of everything.
04:06:41 We just say, sure, but most things are real.
04:06:43 Most things aren’t fantasies.
04:06:45 Therefore, the chance that my thing is real.
04:06:47 Right?
04:06:47 So the simulation argument is stronger about sort of the past.
04:06:50 We say, ah, but there’s more future people than there are today.
04:06:53 So you being in the past of the future makes you special relative to them,
04:06:57 which makes you more likely to be in a simulation.
04:06:59 Right?
04:07:00 If we’re just taking the full count and saying, in all creatures ever,
04:07:03 what percentage are in simulations?
04:07:05 Probably no more than 10%.
04:07:08 So what’s the good argument for that?
04:07:10 That most things are real?
04:07:11 Yeah.
04:07:12 Because Bostrom says the other way, right?
04:07:14 In a competitive world, in a world where people have to work and have to get things done,
04:07:20 then they have a limited budget for leisure.
04:07:24 And so, you know, leisure things are less common than work things, like real things.
04:07:29 Right?
04:07:29 But if you look at the stretch of history in the universe, doesn’t the ratio of leisure increase?
04:07:41 Isn’t that where we, isn’t that the forger?
04:07:45 Right, but now we’re looking at the fraction of leisure,
04:07:47 which takes the form of something where the person doing the leisure doesn’t realize it.
04:07:51 Now there could be some fraction of that, but that’s much smaller, right?
04:07:55 Yeah.
04:07:57 Clueless foragers.
04:07:58 Or somebody is clueless in the process of supporting this leisure, right?
04:08:02 It might not be the person doing the leisure, somebody,
04:08:04 they’re a supporting character or something,
04:08:05 but still that’s got to be a pretty small fraction of leisure.
04:08:07 Well, you mentioned that children are one of the things that are a source of meaning.
04:08:13 Broadly speaking, then let me ask the big question.
04:08:16 What’s the meaning of this whole thing?
04:08:19 Robin, meaning of life.
04:08:21 What is the meaning of life?
04:08:23 We talked about alien civilizations, but this is the one we got.
04:08:27 Where are the aliens?
04:08:28 Here are the humans, who
04:08:30 seem to be conscious, to be able to introspect.
04:08:35 What's, why are we here?
04:08:37 This is the thing I told you before about how we can predict that
04:08:40 future creatures will be different from us.
04:08:43 We, our preferences, are this amalgam of various randomly patched-together
04:08:51 preferences about thirst and sex and sleep and attention and all these sorts of things.
04:08:57 So we don’t understand that very well.
04:08:59 It’s not very transparent and it’s a mess, right?
04:09:03 That is the source of our motivation.
04:09:05 That is how we were made and how we are induced to do things.
04:09:09 But we can’t summarize it very well and we don’t even understand it very well.
04:09:13 That’s who we are.
04:09:15 And often we find ourselves in a situation where we don’t feel very motivated.
04:09:18 We don’t know why.
04:09:18 In other situations, we find ourselves very motivated and we don’t know why either.
04:09:24 And so that’s the nature of being a human of the sort that we are because
04:09:29 even though we can think abstractly and reason abstractly, this package
04:09:32 of motivations is just opaque and a mess.
04:09:34 And that’s what it means to be a human today and the motivation.
04:09:39 We can’t very well tell the meaning of our life.
04:09:42 It is this mess that our descendants will be different.
04:09:44 They will actually know exactly what they want.
04:09:48 And it will be to have more descendants.
04:09:50 That will be the meaning for them.
04:09:52 Well, it’s funny that you have the certainty.
04:09:54 You have more certainty.
04:09:56 You have more transparency about our descendants than you do about your own self.
04:10:01 Right.
04:10:02 So it’s really interesting to think, because you mentioned this about love,
04:10:07 that something that’s fundamental about love is this opaqueness that we’re not able
04:10:13 to really introspect what the heck it is or all the feelings, the complex feelings.
04:10:19 And that’s true about many of our motivations.
04:10:21 And that’s what it means to be human of the 20th and the 21st century variety.
04:10:28 Why is that not a feature that we will choose to preserve in civilization then?
04:10:35 This opaqueness, put another way, mystery, maintaining a sense of mystery
04:10:40 about ourselves and about those around us.
04:10:43 Maybe that’s a really nice thing to have.
04:10:45 Maybe.
04:10:46 But, so, I mean, this is the fundamental issue in analyzing the future.
04:10:50 What will set the future?
04:10:52 One theory about what will set the future is, what do we want the future to be?
04:10:56 What do we want the future to be?
04:10:58 So under that theory, we should sit and talk about what we want the future to be,
04:11:01 have some conferences, have some conventions, discuss things, vote on it maybe,
04:11:05 and then hand it off to the implementation people to make the future the way we've
04:11:09 decided it should be.
04:11:12 That’s not the actual process that’s changed the world over history up to this point.
04:11:16 It has not been the result of us deciding what we want and making it happen.
04:11:21 In our individual lives, we can do that.
04:11:23 We might decide what career we want or where we want to live, who we want to live with.
04:11:26 In our individual lives, we often do slowly make our lives better according to our plan
04:11:31 and our things, but that’s not the whole world.
04:11:34 The whole world so far has mostly been a competitive world where things happen if
04:11:38 anybody anywhere chooses to adopt them and they have an advantage.
04:11:42 And then it spreads and other people are forced to adopt it by competitive pressures.
04:11:46 So that’s the kind of analysis I can use to predict the future.
04:11:49 And I do use that to predict the future.
04:11:50 It doesn’t tell us it’ll be a future we like.
04:11:52 It just tells us what it’ll be.
04:11:54 And it’ll be one where we’re trying to maximize the number of our descendants.
04:11:57 And we know that abstractly and directly.
04:12:00 And it’s not opaque.
04:12:01 With some probability that's nonzero, that will lead us to become grabby, expanding
04:12:09 aggressively out into the cosmos until we meet other aliens.
04:12:13 The timing isn’t clear.
04:12:14 We might become grabby and then this happens.
04:12:17 The grabbiness and this are both the result of competition, but it's less
04:12:21 clear which happens first.
04:12:24 Does this future excite you or scare you?
04:12:26 How do you feel about this whole thing?
04:12:28 Again, as I told you, compared to sort of a dead cosmology, at least it's energizing to have
04:12:33 a living story with real actors and characters and agendas, right?
04:12:36 Yeah.
04:12:37 And that’s one hell of a fun universe to live in.
04:12:40 Robin, you’re one of the most fascinating, fun people to talk to.
04:12:44 Brilliant, humble, systematic in your analysis.
04:12:48 Hold on to my wallet here.
04:12:49 What’s he looking for?
04:12:50 I already stole your wallet long ago.
04:12:52 I really, really appreciate you spending your valuable time with me.
04:12:55 I hope we get a chance to talk many more times in the future.
04:12:59 Thank you so much for sitting down.
04:13:01 Thank you.
04:13:03 Thanks for listening to this conversation with Robin Hansen.
04:13:05 To support this podcast, please check out our sponsors in the description.
04:13:09 And now let me leave you with some words from Ray Bradbury.
04:13:13 We are an impossibility in an impossible universe.
04:13:17 Thank you for listening and hope to see you next time.