Daniel Schmachtenberger: Steering Civilization Away from Self-Destruction #191

Transcript

00:00:00 The following is a conversation with Daniel Schmachtenberger, a founding member of the

00:00:04 Consilience Project that is aimed at improving public sensemaking and dialogue.

00:00:09 He is interested in understanding how we humans can be the best version of ourselves as individuals

00:00:15 and as collectives at all scales.

00:00:19 Quick mention of our sponsors, Ground News, NetSuite, Four Sigmatic, Magic Spoon, and

00:00:24 BetterHelp.

00:00:25 Check them out in the description to support this podcast.

00:00:29 As a side note, let me say that I got a chance to talk to Daniel on and off the mic for a

00:00:33 couple of days.

00:00:34 We took a long walk the day before our conversation.

00:00:37 I really enjoyed meeting him, just on a basic human level.

00:00:40 We talked about the world around us with words that carried hope for us individual ants actually

00:00:46 contributing something of value to the colony.

00:00:50 These conversations are the reasons I love human beings, our insatiable striving to lessen

00:00:55 the suffering in the world.

00:00:56 But more than that, there’s a simple magic to two strangers meeting for the first time

00:01:01 and sharing ideas, becoming fast friends, and creating something that is far greater

00:01:06 than the sum of our parts.

00:01:07 I’ve gotten to experience some of that same magic here in Austin with a few new friends

00:01:12 and in random bars in my travels across this country.

00:01:16 Where a conversation leaves me with a big stupid smile on my face and a new appreciation

00:01:21 of this too short, too beautiful life.

00:01:24 This is the Lex Fridman Podcast, and here is my conversation with Daniel Schmachtenberger.

00:01:31 If aliens were observing Earth through the entire history, just watching us, and were

00:01:37 tasked with summarizing what happened until now, what do you think they would say?

00:01:41 What do you think they would write up in that summary?

00:01:43 Like it has to be pretty short, less than a page.

00:01:47 Like in Hitchhiker’s Guide, there’s I think like a paragraph or a couple sentences.

00:01:54 How would you summarize, sorry, how would the aliens summarize, do you think, all of

00:01:58 human civilization?

00:02:00 My first thoughts take more than a page.

00:02:04 They’d probably distill it.

00:02:06 Because if they watched, well, I mean, first, I have no idea if their senses are even attuned

00:02:12 to similar stuff to what our senses are attuned to, or what the nature of their consciousness

00:02:16 is like relative to ours.

00:02:19 So let’s assume that they’re kind of like us, just technologically more advanced to

00:02:22 get here from wherever they are.

00:02:24 That’s the first kind of constraint on the thought experiment.

00:02:27 And then if they’ve watched throughout all of history, they saw the burning of Alexandria.

00:02:32 They saw that 2,000 years ago in Greece, we were producing things like clocks, the Antikythera

00:02:38 mechanism, and then that technology got lost.

00:02:40 They saw that there wasn’t just a steady dialectic of progress.

00:02:45 So every once in a while, there’s a giant fire that destroys a lot of things.

00:02:49 There’s a giant commotion that destroys a lot of things.

00:02:54 Yeah, and it’s usually self induced.

00:02:58 They would have seen that.

00:03:00 And so as they’re looking at us now, as we move past the nuclear weapons age into the

00:03:07 full globalization, Anthropocene, exponential tech age, still making our decisions relatively

00:03:15 similarly to how we did in the stone age as far as rivalry game theory type stuff, I think

00:03:21 they would think that this is probably most likely one of the planets that is not going

00:03:24 to make it to being intergalactic because we blow ourselves up in the technological

00:03:28 adolescence.

00:03:29 And if we are going to, we’re going to need some major progress rapidly in the social

00:03:37 technologies that can guide and bind and direct the physical technologies so that we are safe

00:03:43 vessels for the amount of power we’re getting.

00:03:46 Actually, Hitchhiker’s Guide has an estimation about how much of a risk this particular thing

00:03:55 poses to the rest of the galaxy.

00:03:57 And I think, I forget what it was, I think it was medium or low.

00:04:03 So their estimation was, would be that this species of ant like creatures is not going

00:04:08 to survive long.

00:04:10 There’s ups and downs in terms of technological innovation.

00:04:13 The fundamental nature of their behavior from a game theory perspective hasn’t really changed.

00:04:18 They have not learned in any fundamental way how to control and properly incentivize or

00:04:26 properly do the mechanism design of games to ensure long term survival.

00:04:32 And then they move on to another planet.

00:04:35 Do you think there is, in a slightly more serious question, do you think there’s some

00:04:43 number or perhaps a very, very large number of intelligent alien civilizations out there?

00:04:50 Yes, would be hard to think otherwise.

00:04:54 I know, I think Bostrom had a new article not that long ago on why that might not be

00:04:58 the case, that the Drake equation might not be the kind of end story on it.

00:05:04 But when I look at the total number of Kepler planets just that we’re aware of just galactically

00:05:09 and also like when those life forms were discovered in Mono Lake that didn’t have the same six

00:05:16 primary atoms, I think it had arsenic replacing phosphorus as one of the primary aspects of

00:05:21 its energy metabolism, we get to think that the building blocks might be more different.

00:05:26 So the physical constraints even that the planets have to have might be more different.

00:05:30 It seems really unlikely not to mention interesting things that we’ve observed that are still

00:05:37 unexplained.

00:05:38 As you had guests on your show discussing Tic Tac and all the ones that have visited.

00:05:44 Yeah.

00:05:45 Well, let’s dive right into that.

00:05:46 How do you make sense of the rich human psychology of there being hundreds of thousands, probably

00:05:56 millions of witnesses of UFOs of different kinds on Earth, most of which I presume are

00:06:02 conjured up by the human mind through the perception system.

00:06:07 Some number might be true, some number might be reflective of actual physical objects,

00:06:12 whether it’s you know, drones or testing military technology that secret or otherworldly technology.

00:06:21 How do you make sense of all of that? Because it's gained quite a bit of popularity recently.

00:06:26 There’s some sense in which that’s us humans being hopeful and dreaming of otherworldly

00:06:37 creatures as a way to escape the dreariness of the human condition.

00:06:44 But in another sense, it really could be something truly exciting that science

00:06:49 should turn its eye towards.

00:06:53 So where do you place it?

00:06:56 Speaking of turning an eye towards it, this is one of those super fascinating, possibly super

00:07:00 consequential topics that I wish I had more time to study and just haven't

00:07:05 allocated, so I don't have firm beliefs on this because I haven't gotten to study it as

00:07:09 much as I want.

00:07:10 So what I’m going to say comes from a superficial assessment.

00:07:15 While we know there are plenty of things that people thought of as UFO sightings that we

00:07:20 can fully write off, we have other better explanations for them.

00:07:24 What we’re interested in is the ones that we don’t have better explanations for and

00:07:27 then not just immediately jumping to a theory of what it is, but holding it as unidentified

00:07:33 and being curious and earnest.

00:07:36 I think the Tic Tac one is quite interesting and made it into major media recently.

00:07:42 But I don’t know if you ever saw the Disclosure Project, a guy named Steven Greer organized

00:07:49 a bunch of mostly US military and some commercial flight people who had direct observation and

00:07:57 classified information disclosing it at a CNN briefing.

00:08:02 And so you saw high ranking generals, admirals, fighter pilots all describing things that

00:08:07 they saw on radar with their own eyes or cameras, and also describing some phenomena that had

00:08:17 some consistency across different people.

00:08:20 And I find this interesting enough that I think it would be silly to just dismiss it.

00:08:27 And specifically, we can ask the question, how much of it is natural phenomena, ball

00:08:32 lightning or something like that?

00:08:34 And this is why I’m more interested in what fighter pilots and astronauts and people who

00:08:39 are trained in being able to identify flying objects and atmospheric phenomena have to

00:08:48 say about it.

00:08:51 I think the thing then you could say, well, are they more advanced military craft?

00:08:57 Is it some kind of, you know, human craft?

00:09:00 The interesting thing that a number of them describe is something that’s kind of like

00:09:03 right angles at speed, or not right angles, acute angles at speed, but something that

00:09:09 looks like a different relationship to inertia than physics makes sense for us.

00:09:14 I don’t think that there are any human technologies that are doing that even in really deep underground

00:09:20 black projects.

00:09:22 Now one could say, okay, well, could it be a hologram?

00:09:25 Or would it show up on radar if radar is also seeing it?

00:09:28 And so I don’t know.

00:09:30 I think there’s enough, I mean, and for that to be a massive coordinated psyop, is it as

00:09:37 interesting and ridiculous in a way as the idea that it’s UFOs from some extra planetary

00:09:45 source?

00:09:46 So it’s up there on the interesting topics.

00:09:49 To me there’s, if it is at all alien technology, it is the dumbest version of alien technology.

00:09:57 It’s so far away, it’s like the old, old crappy VHS tapes of alien technology.

00:10:03 These are like crappy drones that just float around, or even something at the level of, like, space

00:10:08 junk, because it is so close to our human technology.

00:10:14 We talk about how it moves in ways that are unlike what we understand about physics, but it still

00:10:19 has very similar kind of geometric notions and something that we humans can perceive

00:10:26 with our eyes, all those kinds of things.

00:10:28 I feel like alien technology most likely would be something that we would not be able to

00:10:34 perceive.

00:10:35 Not because they’re hiding, but because it’s so far advanced that it would be beyond the

00:10:43 cognitive capabilities of us humans.

00:10:45 Just as you were saying, as per your answer for alien summarizing Earth, the starting

00:10:53 assumption is they have similar perception systems, they have similar cognitive capabilities,

00:10:59 and that very well may not be the case.

00:11:01 Let me ask you, staying on aliens for just a little longer, because I think it's

00:11:08 a good transition into talking about governments and human societies.

00:11:14 Do you think if a US government or any government was in possession of an alien spacecraft or

00:11:24 of information related to alien spacecraft, they would have the capacity, structurally

00:11:34 would they have the processes, would they be able to communicate that to the public

00:11:45 effectively or would they keep it secret in a room and do nothing with it, both to try

00:11:50 to preserve military secrets, but also because of the incompetence that’s inherent to bureaucracies

00:11:58 or either?

00:12:00 Well, we can certainly see when certain things become declassified 25 or 50 years later that

00:12:08 there were things that the public might have wanted to know that were kept secret for a

00:12:13 very long time for reasons of at least supposedly national security, which is also a nice source

00:12:20 of plausible deniability for people covering their ass for doing things that would be problematic

00:12:27 and other purposes.

00:12:34 There are, there’s a scientist at Stanford who supposedly got some material that was

00:12:42 recovered from Area 51 type area, did analysis on it using, I believe, electron microscopy

00:12:48 and a couple other methods and came to the idea that it was a nanotech alloy that was

00:12:56 something we didn’t currently have the ability to do, was not naturally occurring.

00:13:00 So there, I’ve heard some things and again, like I said, I’m not going to stand behind

00:13:05 any of these because I haven’t done the level of study to have high confidence.

00:13:13 I think what you said also about would it be super low tech alien craft, like would

00:13:19 they necessarily move their atoms around in space or might they do something more interesting

00:13:24 than that, might they be able to have a different relationship to the concept of space or information

00:13:30 or consciousness or one of the things that the craft supposedly do is not only accelerate

00:13:36 and turn in a way that looks non inertial, but also disappear.

00:13:40 So there’s a question as to like the two are not necessarily mutually exclusive and it

00:13:45 could be possible to, some people run a hypothesis that they create intentional amounts of exposure

00:13:51 as an invitation of a particular kind, who knows, interesting field.

00:13:58 We tend to assume, like SETI, that's listening out for aliens out there. I've just been

00:14:05 recently reading more and more about gravitational waves, and you have orbiting black holes that

00:14:13 orbit each other, and they generate ripples in space time. For fun at night when I

00:14:20 lay in bed, I think about what it would be like to ride those waves, not at the

00:14:25 low magnitude they are when they reach Earth, but closer to the black holes, because

00:14:30 it will basically be shrinking and expanding us in all dimensions, including time.

00:14:38 So it’s actually ripples through space time that they generate.

00:14:43 Why is it that you couldn't use that? It travels at the speed of light, travels at a speed, which

00:14:51 is a very weird thing to say when you're morphing space time; you could argue it's faster than

00:15:00 the speed of light.

00:15:02 So if you’re able to communicate by, to summon enough energy to generate black holes and

00:15:08 to orbit them, to force them to orbit each other, why not travel as the ripples in space

00:15:16 time, whatever the hell that means, somehow combined with wormholes.

00:15:21 So if you’re able to communicate through, like we don’t think of gravitational waves

00:15:26 as something you can communicate with because the radio will have to be a very large size

00:15:33 and very dense, but perhaps that’s it, perhaps that’s one way to communicate, it’s a very

00:15:39 effective way.

00:15:41 And that would explain, like we wouldn’t even be able to make sense of that, of the physics

00:15:47 that results in an alien species that’s able to control gravity at that scale.

00:15:53 I think you just jumped up the Kardashev scale so far that you’re not just harnessing the

00:15:57 power of a star, but harnessing the power of mutually rotating black holes.

00:16:05 That’s way above my physics pay grade to think about including even non rotating black hole

00:16:13 versions of transwarp travel.

00:16:17 I think, you know, you can talk with Eric more about that, I think he has better ideas

00:16:22 on it than I do.

00:16:23 My hope for the future of humanity mostly does not rest in the near term on our ability

00:16:30 to get to other habitable planets in time.

00:16:33 And even more than that, in the list of possible solutions of how to improve human civilization,

00:16:39 orbiting black holes is not on the first page for you.

00:16:43 Not on the first page.

00:16:44 Okay.

00:16:45 I bet you did not expect us to start this conversation here, but I’m glad the places

00:16:49 it went.

00:16:52 I am excited, on a much smaller scale, about Mars, Europa, Titan, Venus potentially having

00:17:02 bacteria-like life forms. Just on a small human level, it's a little bit scary, but

00:17:11 mostly really exciting that there might be life elsewhere in the volcanoes and the oceans

00:17:19 all around us, teeming, having little societies, and whether there are properties about that

00:17:25 kind of life that’s somehow different than ours.

00:17:28 I don’t know what would be more exciting if those colonies of single cell type organisms,

00:17:35 what would be more exciting if they’re different or they’re the same?

00:17:39 If they’re the same, that means through the rest of the universe, there’s life forms like

00:17:47 us, something like us everywhere.

00:17:51 If they’re different, that’s also really exciting because there’s life forms everywhere that

00:17:57 are not like us.

00:18:00 That’s a little bit scary.

00:18:01 I don’t know what’s scarier actually.

00:18:04 I think both scary and exciting no matter what, right?

00:18:08 The idea that they could be very different is philosophically very interesting for us

00:18:11 to open our aperture on what life and consciousness and self replicating possibilities could look

00:18:17 like.

00:18:19 The question on are they different or the same, obviously there’s lots of life here

00:18:22 that is the same in some ways and different in other ways.

00:18:26 The thing that we call an invasive species is something that's still pretty much the

00:18:30 same hydrocarbon based thing, but it co evolved with co selective pressures in a certain environment;

00:18:36 we move it to another environment, and it might be devastating to that whole ecosystem because

00:18:39 it's just different enough that it messes up the self stabilizing dynamics of that ecosystem.

00:18:44 So the question of whether they would be different in ways where we could still figure

00:18:52 out a way to inhabit a biosphere together, or whether fundamentally the nature

00:18:59 of how they operate and the nature of how we operate would be incommensurable, is a deep

00:19:04 question.

00:19:05 Well, we offline talked about mimetic theory, right?

00:19:10 It seems like if they were sufficiently different, where we could coexist on

00:19:15 different planes, that seems like a good thing.

00:19:19 If we're close enough together to where we'd be competing, then you're getting into

00:19:24 the world of viruses and pathogens and all those kinds of things, to where one

00:19:30 of us would die off quickly through basically mass murder, maybe even accidentally.

00:19:40 If we just had a self replicating single celled kind of creature that happened to not work

00:19:48 well for the hydrocarbon life that was here, that got introduced because it either output

00:19:53 something that was toxic or used up the same resource too quickly, and it just replicated

00:19:57 faster and mutated faster, then it wouldn't be a mimetic theory, conflict theory kind

00:20:04 of harm.

00:20:05 It would just be a Von Neumann machine, a self replicating machine that was fundamentally

00:20:11 incompatible with these kinds of self replicating systems with faster OODA loops.

00:20:16 For one final time, putting your alien God hat on and you look at human civilization,

00:20:24 do you think about the 7.8 billion people on earth as individual little creatures, individual

00:20:30 little organisms, or do you think of us as one organism with a collective intelligence?

00:20:41 What’s the proper framework through which to analyze it again as an alien?

00:20:46 So that I know where you’re coming from, would you have asked the question the same way before

00:20:50 the industrial revolution, before the agricultural revolution when there were half a billion

00:20:54 people and no telecommunications connecting them?

00:20:59 I would indeed ask the question the same way, but I would be less confident about your conclusions.

00:21:09 It would actually be a more interesting way to ask the question at that time, but I would

00:21:12 nevertheless ask it the same way.

00:21:14 Yes.

00:21:15 Well, let’s go back further and smaller than rather than just a single human or the entire

00:21:20 human species, let’s look at a relatively isolated tribe.

00:21:27 In the relatively isolated, probably sub Dunbar number, sub 150 people tribe, do I look at

00:21:34 that as one entity that evolution is selecting for, based on group selection, or do I think

00:21:40 of it as 150 individuals that are interacting in some way?

00:21:45 Well, could those individuals exist without the group?

00:21:49 No.

00:21:52 The evolutionary adaptiveness of humans critically involved group selection, and individual

00:22:00 humans alone trying to figure out stone tools and protection and whatever aren’t what was

00:22:06 selected for.

00:22:09 And so I think the "or" is the wrong frame.

00:22:13 I think it’s individuals are affecting the group that they’re a part of.

00:22:20 They’re also dependent upon and being affected by the group that they’re part of.

00:22:25 And so this now starts to get deep into political theories also, which is theories that orient

00:22:31 towards the collective at different scales, whether a tribal scale or an empire or a nation

00:22:35 state or something, and ones that orient towards the individual liberalism and stuff like that.

00:22:40 And I think there’s very obvious failure modes on both sides.

00:22:43 And so the relationship between them is more interesting to me than either of them.

00:22:47 The relationship between the individual and the collective and the question around how

00:22:49 to have a virtuous process between those.

00:22:52 So a good social system would be one where the organism of the individual and the organism

00:22:57 of the group of individuals are both synergistic with each other.

00:23:02 So what is best for the individuals and what’s best for the whole is aligned.

00:23:05 But there is nevertheless an individual.

00:23:08 They’re not, it’s a matter of degrees, I suppose, but what defines a human more, the social

00:23:21 network within which they’ve been brought up, through which they’ve developed their

00:23:26 intelligence or is it their own sovereign individual self?

00:23:33 What’s your intuition of how much, not just for evolutionary survival, but as intellectual

00:23:40 beings, how much do we need others for our development?

00:23:44 Yeah.

00:23:45 I think we have a weird sense of this today relative to most previous periods of sapient

00:23:51 history.

00:23:53 I think the vast majority of sapient history is tribal, like depending upon your early

00:23:59 human model, 200,000 or 300,000 years of Homo sapiens and little tribes, where they depended

00:24:06 upon that tribe for survival and excommunication from the tribe was fatal.

00:24:12 I think our whole evolutionary genetic history is in that environment, and the amount

00:24:17 of time we've been out of it is relatively tiny.

00:24:20 And then we still depended upon extended families and local communities more and the big kind

00:24:27 of giant market complex where I can provide something to the market to get money, to be

00:24:33 able to get other things from the market where it seems like I don’t need anyone.

00:24:35 It’s almost like disintermediating our sense of need, even though you’re in my ability

00:24:42 to talk to each other using these mics and the phones that we coordinated on took millions

00:24:46 of people over six continents to be able to run the supply chains that made all the stuff

00:24:50 that we depend on, but we don’t notice that we depend upon them.

00:24:52 They all seem fungible.

00:24:56 If you take a baby, well, obviously you didn't even get to a baby without a mom.

00:25:00 Is it dependent?

00:25:01 Are we dependent upon each other? Right, without two parents at minimum, and they depended upon

00:25:05 other people.

00:25:06 But if we take that baby and we put it out in the wild, it obviously dies.

00:25:11 So if we let it grow up for a little while, the minimum amount of time where it starts

00:25:14 to have some autonomy and then we put it out in the wild, and this has happened a few times,

00:25:19 it doesn’t learn language and it doesn’t learn the small motor articulation that we learn.

00:25:27 It doesn’t learn the type of consciousness that we end up having that is socialized.

00:25:34 So I think we take for granted how much conditioning affects us.

00:25:41 Is it possible that it affects basically 99.9 percent of it, or maybe the whole thing?

00:25:49 The whole thing is the connection between us humans and that we’re no better than apes

00:25:56 without our human connections.

00:25:59 Because thinking of it that way forces us to think very differently about human society

00:26:05 and how to progress forward if the connections are fundamental.

00:26:09 I just have to object to the no better than apes, because better here I think you mean

00:26:14 a specific thing, which means having capacities that are fundamentally different than theirs.

00:26:17 I think apes also depend upon troops.

00:26:21 And I think the idea of humans as better than nature in some kind of ethical sense ends

00:26:29 up having heaps of problems.

00:26:30 We’ll table that.

00:26:31 We can come back to it.

00:26:32 But when we say what is unique about Homo sapien capacity relative to the other animals

00:26:36 we currently inhabit the biosphere with, and I’m saying it that way because there were

00:26:41 other early hominids that had some of these capacities, we believe.

00:26:47 Our tool creation and our language creation and our coordination are all kind of the results

00:26:52 of a certain type of capacity for abstraction.

00:26:56 And other animals will use tools, but they don’t evolve the tools they use.

00:26:59 They keep using the same types of tools that they basically can find.

00:27:03 So a chimp will notice that a rock can cut a vine that it wants to, and it’ll even notice

00:27:08 that a sharper rock will cut it better.

00:27:10 And experientially it’ll use the sharper rock.

00:27:12 And if you even give it a knife, it’ll probably use the knife because it’s experiencing the

00:27:15 effectiveness.

00:27:16 But it doesn’t make stone tools because that requires understanding why one is sharper

00:27:22 than the other.

00:27:23 What is the abstract principle called sharpness to then be able to invent a sharper thing?

00:27:28 That same abstraction makes language and the ability for abstract representation, which

00:27:34 makes the ability to coordinate in a more advanced set of ways.

00:27:38 So I do think our ability to coordinate with each other is pretty fundamental to the selection

00:27:43 of what we are as a species.

00:27:46 I wonder if that coordination, that connection is actually the thing that gives birth to

00:27:51 consciousness, that gives birth to, well, let’s start with self awareness.

00:27:56 More like theory of mind.

00:27:57 Theory of mind.

00:27:58 Yeah.

00:27:59 You know, I suppose there’s experiments that show that there’s other mammals that have

00:28:03 a very crude theory of mind.

00:28:05 Not sure.

00:28:06 Maybe dogs, something like that.

00:28:08 But actually, with dogs, it probably has to do with the fact that they co evolved with humans.

00:28:12 See it’d be interesting if that theory of mind is what leads to consciousness in the

00:28:18 way we think about it,

00:28:21 as the richness of the subjective experience that is consciousness.

00:28:24 I have an inkling sense that that only exists because we’re social creatures.

00:28:31 That doesn’t come with the hardware and the software in the beginning.

00:28:36 That’s learned as an effective tool for communication almost.

00:28:45 I think we think that consciousness is fundamental.

00:28:49 And maybe it’s not, there’s a bunch of folks kind of criticize the idea that the illusion

00:28:58 of consciousness is consciousness.

00:29:00 That it is just a facade we use to help us construct theories of mind.

00:29:08 You almost put yourself in the world as a subjective being.

00:29:12 And that experience, you want to richly experience it as an individual person so that I could

00:29:18 empathize with your experience.

00:29:20 I find that notion compelling.

00:29:22 Mostly because it allows you to then create robots that become conscious not by being

00:29:29 quote unquote conscious but by just learning to fake it till they make it.

00:29:37 Present a facade of consciousness with the task of making that facade very convincing

00:29:44 to us humans and thereby it will become conscious.

00:29:48 I have a sense that in some way that will make them conscious if they're sufficiently convincing

00:29:55 to humans.

00:29:58 Is there some element of that that you find convincing?

00:30:05 This is a much harder set of questions, and a deeper end of the pool, than starting with the

00:30:11 aliens was.

00:30:15 We went from aliens to consciousness.

00:30:18 This is not the trajectory I was expecting nor you, but let us walk a while.

00:30:24 We can walk a while and I don’t think we will do it justice.

00:30:27 So what do we mean by consciousness versus conscious self reflective awareness?

00:30:34 What do we mean by awareness, qualia, theory of mind?

00:30:38 There’s a lot of terms that we think of as slightly different things and subjectivity,

00:30:45 first person.

00:30:46 I don’t remember exactly the quote, but I remember when reading when Sam Harris wrote

00:30:53 the book Free Will and then Dennett critiqued it and then there was some writing back and

00:30:57 forth between the two, because normally they're on the same side, kind of arguing for critical

00:31:05 thinking and against logical fallacies, for philosophy of science, against supernatural ideas.

00:31:11 And here Dennett believed there is something like free will.

00:31:15 He is a determinist, a compatibilist, but says there's no consciousness, and he's quite radical on this.

00:31:21 And Sam was saying, no, there is consciousness, but there’s no free will.

00:31:24 And that’s like the most fundamental kinds of axiomatic senses they disagreed on, but

00:31:29 neither of them could say it was because the other one didn’t understand the philosophy

00:31:31 of science or logical fallacies.

00:31:34 And they kind of spoke past each other and at the end, if I remember correctly, Sam said

00:31:37 something that I thought was quite insightful, which was to the effect of it seems, because

00:31:42 they weren’t making any progress in shared understanding, it seems that we simply have

00:31:46 different intuitions about this.

00:31:49 And what you could see was that what the words meant, right at the level of symbol grounding,

00:31:56 might be quite different.

00:31:59 One of them might have had deeply different enough life experiences that what is being

00:32:03 referenced differs, and then also different associations of what the words mean.

00:32:06 This is why when trying to address these things, Charles Sanders Peirce said the first philosophy

00:32:11 has to be semiotics, because if you don’t get semiotics right, we end up importing different

00:32:16 ideas and bad ideas right into the nature of the language that we’re using.

00:32:20 And then it’s very hard to do epistemology or ontology together.

00:32:22 So, I’m saying this to say why I don’t think we’re going to get very far is I think we

00:32:28 would have to go very slowly in terms of defining what we mean by words and fundamental concepts.

00:32:33 Well, and also allowing our minds to drift together for a time so that our definitions

00:32:40 of these terms align.

00:32:42 I think there’s some, there’s a beauty that some people enjoy with Sam that he is quite

00:32:51 stubborn on his definitions of terms without often clearly revealing that definition.

00:32:59 So in his mind, he can sense that he can deeply understand what he means exactly by a term

00:33:06 like free will and consciousness.

00:33:08 And you’re right, he’s very specific in fascinating ways that not only does he think that free

00:33:15 will is an illusion, he thinks he’s able, not thinks, he says he’s able to just remove

00:33:23 himself from the experience of free will and just be like for minutes at a time, hours

00:33:30 at a time, like really experience as if he has no free will, like he’s a leaf flowing

00:33:38 down the river.

00:33:41 And given that, he’s very sure that consciousness is fundamental.

00:33:45 So here’s this conscious leaf that’s subjectively experiencing the floating and yet has no ability

00:33:53 to control and make any decisions for itself.

00:33:56 It’s only a, the decisions have all been made.

00:34:02 There’s some aspect to which the terminology there perhaps is the problem.

00:34:06 So that’s a particular kind of meditative experience and the people in the Vedantic

00:34:11 tradition and some of the Buddhist traditions thousands of years ago described similar experiences

00:34:15 and somewhat similar conclusions, some slightly different.

00:34:19 There are other types of phenomenal experience that are the phenomenal experience of pure

00:34:27 agency, and, you know, like the Catholic theologian but also evolutionary theorist Teilhard de Chardin

00:34:33 describes this, that rather than a creator agent God in the beginning, there's a creative

00:34:39 impulse or a creative process and he would go into a type of meditation that identified

00:34:44 as the pure essence of that kind of creative process.

00:34:49 And I think, one, the types of experience

00:34:55 we've had make a big difference to the nature of how we do symbol grounding.

00:34:58 The other thing is the types of experiences we have can’t not be interpreted through

00:35:03 our existing interpretive frames and most of the time our interpretive frames are unknown

00:35:07 even to us, some of them.

00:35:09 And so this is a tricky, this is a tricky topic.

00:35:15 So I guess there’s a bunch of directions we could go with it but I want to come back to

00:35:19 what the impulse was that was interesting around what is consciousness and how does

00:35:24 it relate to us as social beings and how does it relate to the possibility of consciousness

00:35:29 with AIs.

00:35:30 Right, you’re keeping us on track which is, which is wonderful, you’re a wonderful hiking

00:35:35 partner.

00:35:36 Okay, yes.

00:35:37 Let’s go back to the initial impulse of what is consciousness and how does the social impulse

00:35:43 connect to consciousness?

00:35:45 Is consciousness a consequence of that social connection?

00:35:50 I’m going to state a position and not argue it because it’s honestly like it’s a long

00:35:55 hard thing to argue and we can totally do it another time if you want.

00:36:00 I don’t subscribe to consciousness as an emergent property of biology or neural networks.

00:36:11 Obviously a lot of people do, obviously the philosophy of science orients towards that

00:36:17 not absolutely, but largely.

00:36:24 I think of the nature of first person, the universe of first person, of qualia as experience,

00:36:33 sensation, desire, emotion, phenomenology, but the felt sense, not what we think of when we

00:36:41 say emotion, a neurochemical pattern or an endocrine pattern.

00:36:45 But all of the physical stuff, the third person stuff has position and momentum and charge

00:36:50 and stuff like that that is measurable, repeatable.

00:36:55 I think of the nature of first person and third person as ontologically orthogonal to

00:37:00 each other, not reducible to each other.

00:37:03 They’re different kinds of stuff.

00:37:06 So I think about the evolution of third person that we’re quite used to thinking about from

00:37:11 subatomic particles to atoms to molecules to on and on.

00:37:14 I think about a similar kind of and corresponding evolution in the domain of first person from

00:37:19 the way Whitehead talked about kind of prehension or proto qualia in earlier phases of self

00:37:24 organization into higher orders of it, and that there's correspondence, but that neither

00:37:29 do we reduce third person to first person, which is what idealists do,

00:37:35 nor do we reduce first person to third person, like the physicalists do.

00:37:40 Obviously Bohm talked about an implicate order that was deeper than and gave rise to the

00:37:46 explicate order of both.

00:37:48 Nagel talks about something like that.

00:37:49 I have a slightly different sense of that, but again, I’ll just kind of not argue how

00:37:54 that occurs for a moment and say, so rather than say, does consciousness emerge from,

00:37:59 I’ll talk about do higher capacities of consciousness emerge in relationship with.

00:38:07 So it’s not first person as a category emerging from third person, but increased complexity

00:38:12 within the nature of first person and third person co evolving.

00:38:17 Do I think that it seems relatively likely that more advanced neural networks have deeper

00:38:22 phenomenology, more complex, where it goes from just basic sensation to emotion to social

00:38:29 awareness to abstract cognition to self reflexive abstract cognition?

00:38:35 Yeah.

00:38:36 But I wouldn’t say that’s the emergence of consciousness.

00:38:37 I would say it’s increased complexity within the domain of first person corresponding to

00:38:41 increased complexity and the correspondence should not automatically be seen as causal.

00:38:46 We can get into the arguments for why that often is the case.

00:38:50 So would I say that obviously the sapient brain is pretty unique and a single sapient

00:38:57 now has that, right?

00:38:58 Even if it took sapiens evolving in tribes based on group selection to make that brain.

00:39:03 So the group made it now that brain is there.

00:39:06 Now if I take that single person with that brain out of the group and try to raise them

00:39:09 in a box, they’ll still not be very interesting even with the brain.

00:39:14 But the brain does give hardware capacities that if conditioned in relationship can have

00:39:20 interesting things emerge.

00:39:21 So do I think that the human biology, types of human consciousness and types of social

00:39:29 interaction all co emerged and co evolved?

00:39:32 Yes.

00:39:33 As a small aside, as you’re talking about the biology, let me comment that I spent,

00:39:38 this is what I do, this is what I do with my life.

00:39:41 This is why I will never accomplish anything: I spent much of the morning trying to do

00:39:47 research on how many computations the brain performs and how much energy it uses versus

00:39:53 the state of the art CPUs and GPUs arriving at about 20 quadrillion.

00:40:00 So that’s two to the 10 to the 16 computations.

00:40:03 So synaptic firings per second that the brain does.

00:40:08 And that’s about a million times faster than the let’s say the 20 thread state of the

00:40:15 arts Intel CPU, the 10th generation.

00:40:21 And then there’s similar calculation for the GPU and all ended up also trying to compute

00:40:28 that it takes 10 watts to run the brain about.

00:40:32 And then what does that mean in terms of calories per day, kilocalories?

00:40:36 That’s about for an average human brain, that’s 250 to 300 calories a day.

00:40:44 And so it ended up being a calculation where you’re doing about 20 quadrillion calculations

00:40:54 that are fueled by something like depending on your diet, three bananas.

00:40:59 So three bananas results in a computation that’s about a million times more powerful

00:41:05 than the current state of the art computers.
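
The back-of-the-envelope arithmetic above can be sketched in a few lines of Python; the CPU throughput figure below is an assumed order of magnitude for illustration, not a measured benchmark, and the banana count is approximate.

```python
# Rough arithmetic for the figures quoted above. The CPU throughput number is an
# assumed order of magnitude for illustration, not a measured benchmark.

brain_ops_per_sec = 2e16          # ~20 quadrillion synaptic firings per second
cpu_ops_per_sec = 2e10            # assumed ballpark for a modern multi-core consumer CPU
print(f"brain / CPU ratio: ~{brain_ops_per_sec / cpu_ops_per_sec:.0e}")  # ~1e+06, about a million times

brain_watts = 10                  # approximate power draw of the brain quoted above
kcal_per_day = brain_watts * 86_400 / 4_184   # watts x seconds per day, 4184 joules per kcal
print(f"brain energy use: ~{kcal_per_day:.0f} kcal/day")  # ~207 kcal/day at 10 W

kcal_per_banana = 100             # rough figure; the 250-300 kcal quoted above is roughly 3 bananas
print(f"~{kcal_per_day / kcal_per_banana:.1f} bananas' worth of energy per day at 10 W")
```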

00:41:08 Now, let’s take that one step further.

00:41:10 There’s some assumptions built in there.

00:41:12 The assumption is that one, what the brain is doing is just computation.

00:41:17 Two, the relevant computations are synaptic firings and that there’s nothing other than

00:41:21 synaptic firings that we have to factor.

00:41:25 So I’m forgetting his name right now.

00:41:28 There’s a very famous neuroscientist at Stanford just passed away recently who did a lot of

00:41:35 the pioneering work on glial cells and showed that his assessment glial cells did a huge

00:41:40 amount of the thinking, not just neurons.

00:41:42 And it opened up this entirely different field of like what the brain is and what consciousness

00:41:46 is.

00:41:47 You look at Damasio’s work on embodied cognition and how much of what we would consider consciousness

00:41:51 or feeling is happening outside of the nervous system completely, happening in endocrine

00:41:56 processes involving lots of other cells and signal communication.

00:42:00 You talk to somebody like Penrose who you’ve had on the show and even though the Penrose

00:42:04 Hameroff conjecture is probably not right, is there something like that that might be

00:42:08 the case where we’re actually having to look at stuff happening at the level of quantum

00:42:11 computation of microtubules?

00:42:14 I’m not arguing for any of those.

00:42:16 I’m arguing that we don’t know how big the unknown unknown set is.

00:42:20 Well, at the very least, this has become like an infomercial for the human brain.

00:42:26 At the very, but wait, there’s more.

00:42:29 At the very least, the three bananas buys you a million times.

00:42:33 At the very least.

00:42:34 At the very least.

00:42:35 That’s impressive.

00:42:36 And then the synaptic firings we're referring to, that's strictly the

00:42:41 electrical signals.

00:42:42 There could be mechanical transmission of information, there's chemical transmission

00:42:45 of information, there’s all kinds of other stuff going on.

00:42:49 And then there’s memory that’s built in, that’s also all tied in.

00:42:52 Not to mention, which I’m learning more and more about, it’s not just about the neurons.

00:42:58 It’s also about the immune system that’s somehow helping with the computation.

00:43:02 So the entirety of the body is helping with the computation.

00:43:06 So the three bananas.

00:43:07 It could buy you a lot.

00:43:10 It could buy you a lot.

00:43:12 But on the topic of sort of the greater degrees of complexity emerging in consciousness, I

00:43:22 think few things are as beautiful and inspiring as taking a step outside of the human brain,

00:43:29 just looking at systems where simple rules create incredible complexity.

00:43:36 Not create.

00:43:38 Incredible complexity emerges.

00:43:40 So one of the simplest things to do that with is cellular automata.

00:43:46 And there’s, I don’t know what it is, and maybe you can speak to it, we will certainly

00:43:53 talk about the implications of this, but there’s so few things that are as awe inspiring to

00:44:02 me as knowing the rules of a system and not being able to predict what the heck it looks

00:44:07 like.

00:44:08 And it creates incredibly beautiful complexity that when zoomed out on, looks like there’s

00:44:15 actual organisms doing things that operate on a scale much higher than the underlying

00:44:26 mechanism.

00:44:27 So with cellular automata, that’s cells that are born and die.

00:44:31 Born and die, and they only know about their neighbors.

00:44:34 And there’s simple rules that govern that interaction of birth and death.

00:44:38 And then they create, at scale, organisms that look like they take up hundreds or thousands

00:44:44 of cells and they’re moving, they’re moving around, they’re communicating, they’re sending

00:44:49 signals to each other.

00:44:51 And you forget for moments at a time, before you remember that the simple rules on cells

00:44:59 are all it took to create that.
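
One minimal example of such a system is Conway's Game of Life, used here purely as an illustration of the kind of cellular automaton being described: each cell lives or dies based only on how many of its eight neighbors are alive, yet a pattern like the glider behaves like a little moving organism.

```python
# Minimal Conway's Game of Life. Each cell's fate depends only on its eight neighbors:
# a live cell survives with 2 or 3 live neighbors; a dead cell comes alive with exactly 3.
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": five live cells whose pattern translates diagonally across the grid,
# a higher-level "organism" emerging from purely local rules.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally after 4 generations
```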

00:45:04 It’s sad in that we can’t come up with a simple description of that system that generalizes

00:45:15 the behavior of the large organisms.

00:45:19 We can only hope to come up with the fundamental physics

00:45:23 or the fundamental rules of that system, I suppose.

00:45:25 It’s sad that we can’t predict everything we know about the mathematics of those systems.

00:45:29 It seems like we can’t really in a nice way, like economics tries to do, to predict how

00:45:34 this whole thing will unroll.

00:45:37 But it’s beautiful because of how simple it is underneath it all.

00:45:42 So what do you make of the emergence of complexity from simple rules?

00:45:49 What the hell is that about?

00:45:50 Yeah.

00:45:51 Well, we can see that something like flocking behavior, the murmuration, can be computer

00:45:56 coded.

00:45:57 It’s a very hard set of rules to be able to see some of those really amazing types of

00:46:01 complexity.

00:46:03 And the whole field of complexity science and some of the subdisciplines like stigmergy

00:46:08 are studying how, following fairly simple responses to a pheromone signal, ant colonies

00:46:14 do this amazing thing where what you might describe as the organizational or computational

00:46:19 capacity of the colony is so profound relative to what each individual ant is doing.
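
A minimal sketch of the classic rule set usually used to code murmuration-like flocking, the "boids" model (separation, alignment, cohesion), is below; the weights and radii are illustrative assumptions, not tuned values.

```python
# Minimal "boids" flocking sketch: murmuration-like motion from three local rules.
# Weights and radii are illustrative, not tuned.
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, radius=10.0):
    for b in boids:
        near = [o for o in boids if o is not b
                and abs(o.x - b.x) < radius and abs(o.y - b.y) < radius]
        if not near:
            continue
        n = len(near)
        # Cohesion: steer toward the local center of mass.
        cx, cy = sum(o.x for o in near) / n, sum(o.y for o in near) / n
        # Alignment: steer toward the local average velocity.
        avx, avy = sum(o.vx for o in near) / n, sum(o.vy for o in near) / n
        # Separation: steer away from neighbors that are very close.
        sx = sum(b.x - o.x for o in near if abs(o.x - b.x) + abs(o.y - b.y) < 3)
        sy = sum(b.y - o.y for o in near if abs(o.x - b.x) + abs(o.y - b.y) < 3)
        b.vx += 0.01 * (cx - b.x) + 0.05 * (avx - b.vx) + 0.1 * sx
        b.vy += 0.01 * (cy - b.y) + 0.05 * (avy - b.vy) + 0.1 * sy
    for b in boids:
        b.x, b.y = b.x + b.vx, b.y + b.vy

flock = [Boid() for _ in range(50)]
for _ in range(200):
    step(flock)
print(flock[0].x, flock[0].y)  # positions evolve under purely local rules
```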

00:46:26 I am not anywhere near as well versed in the cutting edge of cellular automata as I would

00:46:31 like.

00:46:32 Unfortunately, in terms of topics that I would like to get to and haven't, like, say,

00:46:36 Wolfram’s A New Kind of Science, I have only skimmed and read reviews of and not read the

00:46:43 whole thing or his newer work since.

00:46:47 But his idea of the four basic kind of categories of emergent phenomena that can come from cellular

00:46:53 automata and that one of them is kind of interesting and looks a lot like complexity rather than

00:46:59 just chaos or homogeneity or self termination or whatever.

00:47:08 I think this is very interesting.

00:47:11 It does not instantly make me think that biology is operating on a similarly small set of rules

00:47:17 and/or that human consciousness is.

00:47:19 I’m not that reductionist oriented.

00:47:26 So if you look at, say, Santa Fe Institute, one of the cofounders, Stuart Kauffman, his

00:47:31 work, you should really get him on your show.

00:47:33 So a lot of the questions that you like, one of Kauffman's more recent books after Investigations

00:47:39 and some of the real fundamental stuff was called Reinventing the Sacred, and it had to

00:47:42 do with some of these exact questions in a kind of non reductionist approach, but one that is

00:47:47 not just silly hippieism.

00:47:50 And he was very interested in highly non ergodic systems where you couldn’t take a lot of behavior

00:47:55 over a small period of time and predict what the behavior of subsets over a longer period

00:47:59 of time would do.

00:48:01 And then going further, someone who spent some time at Santa Fe Institute and then kind

00:48:05 of made a whole new field that you should have on, Dave Snowden, who some people call

00:48:10 the father of anthro complexity or what is the complexity unique to humans.

00:48:16 And he says something to the effect that modeling humans as termites really doesn't

00:48:19 cut it.

00:48:20 Like we don’t respond exactly identically to the same pheromone stimulus using Stigma

00:48:26 G like it works for flows of traffic and some very simple human behaviors, but it really

00:48:30 doesn’t work for trying to make sense of the Sistine Chapel and Picasso and general relativity

00:48:35 creation and stuff like that.

00:48:37 And it’s because the termites are not doing abstraction, forecasting deep into the future

00:48:43 and making choices now based on forecasts of the future, not just adaptive signals in

00:48:47 the moment and evolutionary code from history.

00:48:49 That’s really different, right?

00:48:51 Like making choices now that can factor deep modeling of the future.

00:48:56 And with humans, our uniqueness one to the next in terms of response to similar stimuli

00:49:02 is much higher than it is with a termite.

00:49:06 One of the interesting things there is that their uniqueness is extremely low.

00:49:08 They’re basically fungible within a class, right?

00:49:11 There’s different classes, but within a class they’re basically fungible and their system

00:49:14 uses that very high numbers and lots of loss, right?

00:49:19 Lots of death and loss.

00:49:20 But do you think the termite feels that way?

00:49:21 Don’t, don’t you think we humans are deceiving ourselves about our uniqueness?

00:49:25 Perhaps it doesn’t, it just, isn’t there some sense in which this emergence just creates

00:49:29 different higher and higher levels of abstraction where every, at every layer, each organism

00:49:34 feels unique?

00:49:35 Is that possible?

00:49:36 That we’re all equally dumb but at different scales?

00:49:40 No, I think uniqueness is evolving.

00:49:44 I think that hydrogen atoms are more similar to each other than cells of the same type

00:49:51 are.

00:49:52 And I think that cells are more similar to each other than humans are.

00:49:54 And I think that highly K selected species are more unique than R selected species.

00:50:00 So they’re different evolutionary processes.

00:50:03 The R selected species, where you have a whole lot of death and very high birth rates,

00:50:09 are not looking for as much individuality, or individual possible expression, to

00:50:16 cover the evolutionary search space within an individual.

00:50:18 You’re looking at it more in terms of a numbers game.

00:50:22 So yeah, I would say there’s probably more difference between one orca and the next than

00:50:26 there is between one Cape buffalo and the next.

00:50:29 Given that, it would be interesting to get your thoughts about mimetic theory, where we're

00:50:35 imitating each other in the context of this idea of uniqueness.

00:50:43 How much truth is there to that?

00:50:46 How compelling is this worldview to you of Girardian mimetic theory of desire, where maybe

00:50:56 you can explain it from your perspective, but it seems like imitating each other is

00:51:00 the fundamental property of the behavior of human civilization.

00:51:05 Well, imitation is not unique to humans, right?

00:51:09 Monkeys imitate.

00:51:11 So a certain amount of learning through observing is not unique to humans.

00:51:18 Humans do more of it.

00:51:19 It’s actually kind of worth speaking to this for a moment.

00:51:24 Monkeys can learn new behaviors, new…

00:51:27 We’ve even seen teaching an ape sign language and then the ape teaching other apes sign

00:51:31 language.

00:51:33 So that’s a kind of mimesis, right?

00:51:34 Kind of learning through imitation.

00:51:38 And that needs to happen if they need to learn or develop capacities that are not just coded

00:51:42 by their genetics, right?

00:51:44 So within the same genome, they’re learning new things based on the environment.

00:51:49 And so, based on someone else learning something first, they pick it up.

00:51:54 How much a creature is the result of just its genetic programming and how much it’s

00:51:59 learning is a very interesting question.

00:52:02 And I think this is a place where humans really show up radically different than everything

00:52:06 else.

00:52:07 And you can see it in the neoteny, how long we’re basically fetal.

00:52:13 That the closest ancestors to us, if we look at a chimp, a chimp can hold on to its mother’s

00:52:19 fur while she moves around day one.

00:52:22 And obviously we see horses up and walking within 20 minutes.

00:52:26 The fact that it takes a human a year to be walking and it takes a horse 20 minutes and

00:52:30 you say how many multiples of 20 minutes go into a year, like that’s a long period of

00:52:34 helplessness that wouldn’t work for a horse, right?

00:52:37 Like they or anything else.

00:52:40 And not only could we not hold on to mom in the first day, it’s three months before we

00:52:46 can move our head volitionally.

00:52:48 So it’s like why are we embryonic for so long?

00:52:52 Obviously it’s like it’s still fetal on the outside, had to be because couldn’t keep growing

00:52:58 inside and actually ever get out with big heads and narrower hips from going upright.

00:53:05 So here’s a place where there’s a coevolution of the pattern of humans, specifically here

00:53:11 our neoteny and what that portends for learning, with our being tool making and environment

00:53:18 modifying creatures, which is because we have the abstraction to make tools, we change our

00:53:24 environments more than other creatures change their environments.

00:53:26 The next most environment modifying creature to us is like a beaver.

00:53:31 And then we’re in LA, you fly into LAX and you look at the just orthogonal grid going

00:53:36 on forever in all directions.

00:53:39 And we’ve recently come into the Anthropocene where the surface of the earth is changing

00:53:43 more from human activity than geological activity and then beavers and you’re like, okay, wow,

00:53:47 we’re really in a class of our own in terms of environment modifying.

00:53:53 So as soon as we started tool making, we were able to change our environments much more

00:54:01 radically.

00:54:02 We could put on clothes and go to a cold place.

00:54:05 And this is really important because we actually went and became apex predators in every environment.

00:54:10 We functioned like apex predators; a polar bear can't leave the Arctic and the lion can't

00:54:15 leave the savanna and an orca can't leave the ocean.

00:54:18 And we went and became apex predators in all those environments because of our tool creation

00:54:21 capacity.

00:54:22 We could become better predators than them adapted to the environment or at least with

00:54:25 our tools adapted to the environment.

00:54:27 So in every aspect towards any organism in any environment, we’re incredibly good at

00:54:34 becoming apex predators.

00:54:36 Yes.

00:54:37 And nothing else can do that kind of thing.

00:54:40 There is no other apex predator like that; you see, the other apex predators are only getting

00:54:44 better at being predators through an evolutionary process that's super slow, and that super slow

00:54:48 process creates a co selective process with their environment.

00:54:52 So as the predator becomes a tiny bit faster, it eats more of the slow prey, the genes of

00:54:56 the fast prey get passed on, and the prey becomes faster.

00:54:58 And so there’s this kind of balancing and we in because of our tool making, we increased

00:55:03 our predatory capacity faster than anything else could increase its resilience to it.

00:55:08 As a result, we start outstripping the environment and extincting species following stone tools

00:55:13 and going and becoming apex predator everywhere.

00:55:15 This is why we can’t keep applying apex predator theories because we’re not an apex predator.

00:55:18 We’re an apex predator, but we’re something much more than that.

00:55:22 Like just for an example, the top apex predator in the world, an orca.

00:55:27 An orca can eat one big fish at a time, like one tuna, and it’ll miss most of the time

00:55:31 or one seal.

00:55:33 And we can put a mile long drift net out on a single boat and pull up an entire school

00:55:39 of them.

00:55:40 Right?

00:55:41 We can deplete the entire oceans of them.

00:55:42 That’s not an orca.

00:55:43 That’s not an apex predator.

00:55:45 And that’s not even including that we can then genetically engineer different creatures.

00:55:49 We can extinct species.

00:55:50 We can devastate whole ecosystems.

00:55:52 We can make built worlds that have no natural things that are just human built worlds.

00:55:56 We can build new types of natural creatures, synthetic life.

00:55:59 So we are much more like little gods than we are like apex predators now, but we’re

00:56:02 still behaving as apex predators, and little gods that behave as apex predators cause

00:56:06 a problem, which is kind of core to my assessment of the world.

00:56:10 So what does it mean to be a predator?

00:56:13 So a predator is somebody that effectively can mine the resources from a place.

00:56:19 So for their survival, or is it also just purely like higher level objectives of violence?

00:56:28 And can predators be predators towards each other, towards the same species?

00:56:34 Like are we using the word predator sort of generally, which then connects to conflict

00:56:39 and military conflict, violent conflict, in the case of the human species?

00:56:46 Obviously we can say that plants are mining the resources of their environment in a particular

00:56:50 way, using photosynthesis to be able to pull minerals out of the soil and nitrogen and

00:56:54 carbon out of the air and like that.

00:56:57 And we can say herbivores are able to mine and concentrate that.

00:57:01 So I wouldn’t say mining the environment is unique to predators.

00:57:04 A predator is generally defined as mining other animals, right?

00:57:16 We don’t consider herbivores predators. But preying on animals requires some type of violence

00:57:23 capacity, because animals move and plants don’t move.

00:57:27 So it requires some capacity to overtake something that can move and try to get away.

00:57:34 We’ll go back to the Girard thing and then we’ll come back here.

00:57:37 Why are we neotenous?

00:57:38 Why are we embryonic for so long?

00:57:42 Because are we, did we just move from the Savannah to the Arctic and we need to learn

00:57:47 new stuff?

00:57:48 If we came genetically programmed, we would not be able to do that.

00:57:51 Are we throwing spears or are we fishing or are we running an industrial supply chain

00:57:56 or are we texting?

00:57:57 What is the adaptive behavior?

00:57:59 Horses today in the wild and horses 10,000 years ago are doing pretty much the same stuff.

00:58:03 And so since we make tools and we evolve our tools and then change our environment so quickly

00:58:10 and other animals are largely the result of their environment, but we’re environment modifying

00:58:14 so rapidly, we need to come without too much programming so we can learn the environment

00:58:19 we’re in, learn the language, right?

00:58:21 Which is going to be very important to learn the tool making.

00:58:27 And so we have a very long period of relative helplessness because we aren’t coded how to

00:58:32 behave yet because we’re imprinting a lot of software on how to behave that is useful

00:58:36 to that particular time.

00:58:38 So our mimesis is not unique to humans, but the total amount of it is really unique.

00:58:44 And this is also where the uniqueness can go up, right?

00:58:46 It’s because we are less just the result of the genetics, and the kind of learning

00:58:51 through history that got coded in genetics, and more the result of, it’s almost like our

00:58:56 hardware selected for software, right?

00:59:00 Like if evolution is kind of doing, think of it as, a hardware selection, I have problems

00:59:04 with computer metaphors for biology, but I’ll use this one here: we have not had hardware

00:59:12 changes since the beginning of sapiens, but our world is really, really different.

00:59:18 And that’s all changes in software, right?

00:59:20 Changes on the same fundamental genetic substrate, what we’re doing with these brains and minds

00:59:27 and bodies and social groups and like that.

00:59:30 And so, now, Girard specifically was looking at: when we watch other people talking,

00:59:40 we learn language. You and I would have a hard time learning Mandarin today, or it would

00:59:44 take a lot of work, we’d be learning how to conjugate verbs and stuff, but a baby learns

00:59:47 it instantly without anyone even really trying to teach it, just through mimesis.

00:59:50 So it’s a powerful thing.

00:59:52 They’re obviously more neuroplastic than we are when they’re doing that and all their

00:59:55 attention is allocated to that.

00:59:57 But they’re also learning how to move their bodies and they’re learning all kinds of stuff

01:00:01 through mimesis.

01:00:02 One of the things that Girard says is they’re also learning what to want.

01:00:06 And they learn what to want.

01:00:07 They learn desire by watching what other people want.

01:00:10 And so, intrinsic to this, people end up wanting what other people want and if we can’t have

01:00:16 what other people have without taking it away from them, then that becomes a source of conflict.

01:00:21 So the mimesis of desire is the fundamental generator of conflict, and then the conflict

01:00:29 energy within a group of people will build over time.

01:00:32 This is a very, very crude interpretation of the theory.

01:00:35 Can we just pause on that?

01:00:37 For people who are not familiar and for me who hasn’t, I’m loosely familiar but haven’t

01:00:42 internalized it, but every time I think about it, it’s a very compelling view of the world.

01:00:46 Whether it’s true or not, it’s quite, it’s like when you take everything Freud says as

01:00:53 truth, it’s a very interesting way to think about the world and in the same way, thinking

01:00:59 about the mimetic theory of desire that everything we want is imitation of other people’s wants.

01:01:11 We don’t have any original wants.

01:01:13 We’re constantly imitating others.

01:01:15 And so, and not just others, but others we’re exposed to.

01:01:21 So there’s these little local pockets, however defined local, of people imitating each other.

01:01:27 And that’s super empowering, because then you can pick which group you join.

01:01:33 What do you want to imitate?

01:01:37 It’s the old like, whoever your friends are, that’s what your life is going to be like.

01:01:42 That’s really powerful.

01:01:43 I mean, it’s depressing that we’re so unoriginal, but it’s also liberating in that if this holds

01:01:50 true, that we can choose our life by choosing the people we hang out with.

01:01:55 So okay.

01:01:57 Thoughts that are very compelling that seem like they’re more absolute than they actually

01:02:01 are end up also being dangerous.

01:02:02 We want to, I’m going to discuss here where I think we need to amend this particular theory.

01:02:10 But specifically, you just said something that everyone who’s paid attention knows is

01:02:14 true experientially, which is who you’re around affects who you become.

01:02:19 And as libertarian and self determining and sovereign as we’d like to be, everybody I

01:02:26 think knows that if you got put in the maximum security prison, aspects of your personality

01:02:31 would have to adapt or you wouldn’t survive there, right?

01:02:34 You would become different.

01:02:37 If you grew up in Darfur versus Finland, you would be different with your same genetics,

01:02:40 like just there’s no real question about that.

01:02:44 And that even today, if you hang out in a place with ultra marathoners as your roommates

01:02:50 or all people who are obese as your roommates, the statistical likelihood of what happens

01:02:55 to your fitness is pretty clear, right?

01:02:57 Like the behavioral science of this is pretty clear.

01:02:59 So the whole saying we are the average of the five people we spend the most time around.

01:03:04 I think the more self reflective someone is and the more time they spend by themselves

01:03:07 in self reflection, the less this is true, but it’s still true.

01:03:10 So one of the best things someone can do to become more self determined is be self determined

01:03:16 about the environments they want to put themselves in, because to the degree that there is some

01:03:20 self determination and some determination by the environment, don’t be fighting an environment

01:03:24 that is predisposing you in bad directions.

01:03:27 Try to put yourself in an environment that is predisposing the things that you want.

01:03:30 In turn, try to affect the environment in ways that predispose positive things for those

01:03:34 around you.

01:03:36 Or perhaps also there’s probably interesting ways to play with this.

01:03:39 You could probably put yourself like form connections that have this perfect tension

01:03:45 in all directions to where you’re actually free to decide whatever the heck you want,

01:03:50 because the set of wants within your circle of interactions is so conflicting that you’re

01:03:56 free to choose whichever one.

01:03:59 If there’s enough tension, as opposed to everybody aligned like a flock of birds.

01:04:03 Yeah, I mean, you definitely want all of the dialectics to be balanced.

01:04:09 So if you have someone who is extremely oriented to self empowerment and someone who’s extremely

01:04:17 oriented to kind of empathy and compassion, both the dialectic of those is better than

01:04:21 either of them on their own.

01:04:24 If you have both of them inhabited by the same person, better than you inhabit them,

01:04:28 then spending time around that person will probably do well for you.

01:04:32 I think the thing you just mentioned is super important when it comes to cognitive skills,

01:04:36 which is, I think one of the fastest things people can do to improve their learning,

01:04:43 and not just their cognitive learning, but their meaningful problem solving, communication, and

01:04:50 civic capacity, their capacity to participate as a citizen with other people in making the

01:04:54 world better, is to be seeking dialectical synthesis all the time.

01:05:01 And so in the Hegelian sense, if you have a thesis, you have an antithesis.

01:05:06 So maybe we have libertarianism on one side and Marxist kind of communism on the other

01:05:10 side.

01:05:11 And one is arguing that the individual is the unit of choice.

01:05:16 And so we want to increase the freedom and support of individual choice because as they

01:05:21 make more agentic choices, it’ll produce a better whole for everybody.

01:05:25 The other side is saying, well, the individuals are conditioned by their environment. Who would

01:05:28 choose to be born into Darfur rather than Finland?

01:05:31 So we actually need to collectively make environments that are good because the environment conditions

01:05:39 the individuals.

01:05:40 So you have a thesis and an antithesis.

01:05:42 And then in Hegel’s idea, you have a synthesis, which is a kind of higher order truth that

01:05:46 understands how those relate in a way that neither of them does.

01:05:50 And so it is actually at a higher order of complexity.

01:05:52 So the first part would be, can I steel man each of these?

01:05:55 Can I argue each one well enough that the proponents of it are like, totally, you got

01:05:58 that?

01:06:00 And not just argue it rhetorically, but can I inhabit it where I can try to see and feel

01:06:04 the world the way someone seeing and feeling the world that way would?

01:06:08 Because once I do, then I don’t want to screw those people because there’s truth in it,

01:06:12 right?

01:06:13 And I’m not going to go back to war with them.

01:06:14 I’m going to go to finding solutions that could actually work at a higher order.

01:06:18 If I don’t go to a higher order, then there’s war.

01:06:21 And but then the higher order thing would be, well, it seems like the individual does

01:06:25 affect the commons and the collective and other people.

01:06:28 It also seems like the collective conditions individuals at least statistically.

01:06:33 And I can cherry pick out the one guy who got out of the ghetto and pulled himself up

01:06:37 by his bootstraps.

01:06:38 But I can also say statistically that most people born into the ghetto show up differently

01:06:42 than most people born into the Hamptons.

01:06:44 And so unless you want to argue that and would take your child from the Hamptons and

01:06:49 put them in the ghetto, then like, come on, be realistic about this thing.

01:06:52 So how do we make, we don’t want social systems that make weak dependent individuals, right?

01:07:00 The welfare argument.

01:07:01 But we also don’t want to have no social system that supports individuals to do better.

01:07:08 We don’t want individuals where their self expression and agency fucks the environment

01:07:12 and everybody else and employs slave labor and whatever.

01:07:15 So can we make it to where individuals are creating wholes that are better for conditioning

01:07:21 other individuals?

01:07:22 Can we make it to where we have wholes that are conditioning increased agency and sovereignty,

01:07:26 right?

01:07:27 That would be the synthesis.

01:07:28 So the thing that I’m coming to here is if people have that as a frame, and sometimes

01:07:33 it’s not just thesis and antithesis, it’s like eight different views, right?

01:07:37 Can I steel man each view?

01:07:39 This is not just, can I take the perspective, but am I seeking them?

01:07:42 Am I actively trying to inhabit other people’s perspective?

01:07:46 Then can I really try to essentialize it and argue the best points of it, both the sense

01:07:52 making about reality and the values, why these values actually matter?

01:07:57 Then just like I want to seek those perspectives, then I want to seek, is there a higher order

01:08:04 set of understandings that could fulfill the values of and synthesize the sense making

01:08:08 of all of them simultaneously?

01:08:10 Maybe I won’t get it, but I want to be seeking it and I want to be seeking progressively

01:08:13 better ones.

01:08:14 So this is perspective seeking, driving perspective taking, and then seeking synthesis.

01:08:21 I think that that one cognitive disposition might be the most helpful thing.

01:08:31 Would you put a title of dialectic synthesis on that process because that seems to be such

01:08:36 a part, so like this rigorous empathy, like it’s not just empathy.

01:08:42 It’s empathy with rigor, like you really want to understand and embody different worldviews

01:08:48 and then try to find a higher order synthesis.

01:08:50 Okay, so I remember last night you told me when we first met, you said that you looked

01:08:58 in somebody’s eyes and you felt that you had suffered in some ways that they had suffered

01:09:01 and so you could trust them.

01:09:03 Empathy pathos, right, creates a certain sense of kind of shared bonding and shared intimacy.

01:09:08 So empathy is actually feeling the suffering of somebody else and feeling the depth of

01:09:14 their sentience.

01:09:15 I don’t want to fuck them anymore.

01:09:16 I don’t want to hurt them.

01:09:17 I don’t want to behave, I don’t want my proposition to go through when I go and inhabit the perspective

01:09:22 of the other people if they feel that’s really going to mess them up, right?

01:09:25 And so the rigorous empathy, it’s different than just compassion, which is I generally

01:09:30 care.

01:09:31 I have a generalized care, but I don’t know what it’s like to be them.

01:09:34 I can never know what it’s like to be them perfectly and that there’s a humility you

01:09:37 have to have, which is my most rigorous attempt is still not it.

01:09:42 My most rigorous attempt, mine, to know what it’s like to be a woman is still not it.

01:09:46 I have no question that if I was actually a woman, it would be different than my best

01:09:49 guesses.

01:09:50 I have no question if I was actually black, it would be different than my best guesses.

01:09:54 So there’s a humility in that which keeps me listening because I don’t think that I

01:09:57 know fully, but I want to, and I’m going to keep trying better to.

01:10:02 And then, across them, I want to say, is there a way we can move forward

01:10:05 together and not have to be in war?

01:10:07 It has to be something that could meet the values that everyone holds, that could reconcile

01:10:12 the partial sensemaking that everyone holds, and that could offer a way forward that is

01:10:17 more agreeable than the partial perspectives at war with each other.

01:10:21 But so the more you succeed at this empathy with humility, the more you’re carrying the

01:10:26 burden of other people’s pain, essentially.

01:10:30 Now, this goes back to the question of do I see us as one being or 7.8 billion.

01:10:38 I think if I’m overwhelmed with my own pain, I can’t empathize that much because I don’t

01:10:47 have the bandwidth.

01:10:48 I don’t have the capacity.

01:10:49 If I don’t feel like I can do something about a particular problem in the world, it’s hard

01:10:53 to feel it because it’s just too devastating.

01:10:56 And so a lot of people go numb and even go nihilistic because they just don’t feel the

01:11:00 agency.

01:11:01 So as I actually become more empowered as an individual and have more sense of agency,

01:11:05 I also become more empowered to be more empathetic for others and be more connected to that shared

01:11:10 burden and want to be able to make choices on behalf of and in benefit of.

01:11:15 So this way of living seems like a way of living that would solve a lot of problems

01:11:23 in society from a cellular automata perspective.

01:11:28 So if you have a bunch of little agents behaving in this way, my intuition, there’ll be interesting

01:11:34 complexities that emerge, but my intuition is it will create a society that’s very different

01:11:39 and recognizably better than the one we have today.

01:11:44 How much like…

01:11:45 Oh, wait, hold that question because I want to come back to it, but this brings us back

01:11:49 to Girard, which we didn’t answer.

01:11:51 The conflict theory.

01:11:52 Yes.

01:11:53 Because about how to get past the conflict theory.

01:11:54 Yes.

01:11:55 You know the Robert Frost poem about the two paths and you never have enough time to return

01:11:59 back to the other?

01:12:00 We’re going to have to do that quite a lot.

01:12:01 We’re going to be living that poem over and over again, but yes, how to…

01:12:08 Let’s return back.

01:12:09 Okay.

01:12:10 So the rest of the argument goes: you learn to want what other people want, therefore there’s a

01:12:14 fundamental conflict based in our desire, because we want the thing that somebody else has.

01:12:19 And then people are in conflict over trying to get the same stuff, power, status, attention,

01:12:24 physical stuff, a mate, whatever it is.

01:12:27 And then we learn the conflict by watching.

01:12:30 And so then the conflict becomes mimetic.

01:12:33 And we become on the Palestinian side or the Israeli side or the communist or capitalist

01:12:37 side or the left or right politically or whatever it is.

01:12:41 And until eventually the conflict energy in the system builds up so much that some type

01:12:46 of violence is needed to get the bad guy, whoever it is that we’re going to blame.

01:12:50 And you know, Girard talks about why scapegoating was kind of a mechanism to minimize the amount

01:12:55 of violence.

01:12:56 Let’s blame a scapegoat, treating them as more responsible than they really were.

01:13:00 But if we all believe it, then we can all kind of calm down with the conflict energy.

01:13:03 It’s a really interesting concept, by the way.

01:13:06 I mean, you beautifully summarized it, but the idea that there’s a scapegoat, that there’s

01:13:11 this kind of thing naturally leads to a conflict and then they find the other, some group that’s

01:13:15 the other that’s either real or artificial as the cause of the conflict.

01:13:20 Well, it’s always artificial, because the cause of the conflict in Girard is the mimesis of

01:13:25 desire itself.

01:13:26 And how do we attack that?

01:13:27 How do we attack that it’s our own desire?

01:13:30 So this now gets to something more like Buddha said, right, which was desire is the cause

01:13:33 of suffering.

01:13:34 Girard and Buddha would kind of agree in this way.

01:13:40 So, but that explains, I mean, again, it’s a compelling description of human history,

01:13:46 that we do tend to come up with the other.

01:13:50 And

01:13:51 Okay, kind of. I just had such a funny experience with someone critiquing Girard

01:13:55 the other day in such an elegant and beautiful and simple way.

01:13:59 It’s a friend who grew up Aboriginal Australian and is a scholar of Aboriginal social technologies.

01:14:12 He’s like, nah man, Girard just made shit up about how tribes work.

01:14:16 Like we come from a tribe, we’ve got tens of thousands of years, and we didn’t have

01:14:21 increasing conflict and then scapegoat and kill someone.

01:14:23 We’d have a little bit of conflict and then we would dance and then everybody’d be fine.

01:14:28 We’d dance around the campfire, everyone would like kind of physically get the energy out,

01:14:31 we’d look in each other’s eyes, we’d have positive bonding, and then we’re fine.

01:14:34 And nobody, no scapegoats.

01:14:36 And

01:14:37 I think that’s called the Joe Rogan theory of desire, which is, he’s like, all of

01:14:42 human problems have to do with the fact that you don’t do enough hard shit in your day.

01:14:47 So maybe just dance it out, because he says like doing exercise and running on the treadmill

01:14:51 gets all the demons out, and maybe just dancing gets all the demons out.

01:14:55 So this is why I say we have to be careful with taking an idea that seems too explanatory

01:15:00 and then taking it as a given and then saying, well, now that we’re stuck with the fact that

01:15:06 conflict is inexorable because of mimetic desire, therefore how do we deal

01:15:09 with the inexorability of the conflict and how do we sublimate violence?

01:15:12 Well, no, the whole thing might actually be gibberish, meaning it’s only true in certain

01:15:17 conditions and in other conditions it’s not true.

01:15:19 So the deeper question is under which conditions is that true?

01:15:22 Under which conditions is it not true?

01:15:23 What do those other conditions make possible and look like?

01:15:26 And in general, we should stay away from really compelling models of reality, because there’s

01:15:31 something about our brains where these models become sticky and we can’t even think

01:15:36 outside of them.

01:15:37 So.

01:15:38 It’s not that we stay away from them.

01:15:39 It’s that we know that the model of reality is never reality.

01:15:42 That’s the key thing.

01:15:43 Humility again, it goes back to just having the humility that you don’t have a perfect

01:15:47 model of reality.

01:15:48 The model of reality could never be reality.

01:15:52 The process of modeling is inherently information reduction and I can never show that the unknown

01:16:00 unknown set has been factored.

01:16:02 It’s back to the cellular automata.

01:16:05 You can’t put the genie back in the bottle.

01:16:10 Like when you realize it’s unfortunately, sadly impossible to create a model of

01:16:19 cellular automata, even if you know the basic rules, that predicts to any degree of accuracy

01:16:26 how that system will evolve, which is fascinating mathematically.

01:16:32 Sorry.

01:16:33 I think about it quite a lot.

01:16:34 It’s very annoying.

01:16:36 Wolfram has this Rule 30, like you should be able to predict it.

01:16:41 It’s so simple, but you can’t predict what it’s going to be like. There’s a problem

01:16:48 he defines, like try to predict some aspect of the middle column of the system,

01:16:53 just anything about it.

01:16:54 What’s going to happen in the future.
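For concreteness, here is a minimal Python sketch of Wolfram's Rule 30 (not something discussed in this form in the conversation): it evolves a single live cell and prints the center column that the prediction problem is about. The grid width and number of steps are arbitrary choices.

```python
# Minimal sketch of Wolfram's elementary cellular automaton Rule 30.
# The binary expansion of 30 gives the next center-cell value for each of the
# eight possible (left, center, right) neighborhoods.

WIDTH = 101   # odd, so there is a single center column
STEPS = 50    # number of generations to evolve

RULE = 30
rule_table = [(RULE >> n) & 1 for n in range(8)]  # index = neighborhood as a 3-bit number

def step(cells):
    """One synchronous Rule 30 update with fixed zero boundaries."""
    nxt = [0] * len(cells)
    for i in range(1, len(cells) - 1):
        neighborhood = (cells[i - 1] << 2) | (cells[i] << 1) | cells[i + 1]
        nxt[i] = rule_table[neighborhood]
    return nxt

cells = [0] * WIDTH
cells[WIDTH // 2] = 1  # start from a single live cell

center_column = []
for _ in range(STEPS):
    center_column.append(cells[WIDTH // 2])
    cells = step(cells)

# The center column looks statistically random even though the rule and the
# initial condition are trivially simple; there is no known shortcut that
# predicts it without actually running the system forward.
print("".join(str(bit) for bit in center_column))
```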

01:16:55 And you can’t. It sucks, because then we can’t make sense of this world, of

01:17:03 reality, in a definitive way.

01:17:07 It’s always in the striving, like, we’re always striving.

01:17:11 Yeah.

01:17:12 I don’t think this sucks.

01:17:15 So that’s a feature, not a bug.

01:17:17 Well, that’s assuming a designer.

01:17:21 I would say I don’t think it sucks.

01:17:23 I think it’s not only beautiful, but maybe necessary for beauty.

01:17:27 The mess.

01:17:30 So you disagree with Jordan Peterson that you should clean up your room.

01:17:35 You like the room messy.

01:17:36 It’s essential for beauty.

01:17:39 It’s not, it’s not that it’s okay.

01:17:42 I have no idea if it was intended this way.

01:17:46 And so I’m just interpreting it in a way I like: the commandment about having no false idols.

01:17:52 To me, the way I interpret it that is meaningful is that reality is sacred to me.

01:17:58 I have a reverence for reality, but I know my best understanding of it is never complete.

01:18:04 I know my best model of it is a model where I tried to make some kind of predictive capacity

01:18:11 by reducing the complexity of it to a set of stuff that I could observe and then a subset

01:18:16 of that stuff that I thought was the causal dynamics and then some set of, you know, mechanisms

01:18:20 that are involved.

01:18:22 And what we find is that it can be super useful, like Newtonian gravity can help us do ballistic

01:18:27 curves and all kinds of super useful stuff.

01:18:30 And then we get to the place where it doesn’t explain what’s happening at the cosmological

01:18:34 scale or at a quantum scale.

01:18:36 And at each time, what we’re finding is we excluded stuff.

01:18:42 And it also doesn’t explain the reconciliation of gravity with quantum mechanics and the

01:18:46 other kind of fundamental laws.

01:18:48 So models can be useful, but they’re never true with a capital T, meaning they’re never

01:18:53 an actual real full, they’re never a complete description of what’s happening in real systems.

01:18:59 They can be a complete description of what’s happening in an artificial system that was

01:19:02 the result of applying a model.

01:19:04 So the model of a circuit board and the circuit board are the same thing, but I would argue

01:19:07 that the model of a cell and the cell are not the same thing.

01:19:11 And I would say this is key to what we call complexity versus the complicated, which is

01:19:16 a distinction Dave Snowden made well in defining the difference between simple, complicated,

01:19:24 complex and chaotic systems.

01:19:26 But one of the definers in complex systems is that no matter how you model the complex

01:19:30 system, it will still have some emergent behavior not predicted by the model.

01:19:34 Can you elaborate on the complex versus the complicated?

01:19:37 Complicated means we can fully explicate the phase space of all the things that it can

01:19:41 do.

01:19:42 We can program it.

01:19:44 All human, not all, for the most part, human built things are complicated.

01:19:49 They don’t self organize.

01:19:51 They don’t self repair.

01:19:52 They’re not self evolving and we can make a blueprint for them where, sorry, for human

01:19:57 systems, for human technologies, human technologies, that are basically the application of models

01:20:04 right.

01:20:06 And engineering is kind of applied science, science as the modeling process.

01:20:12 And but with humans are complex, complex stuff with biological type stuff and sociological

01:20:19 type stuff, it more has generator functions and even those can’t be fully explicated than

01:20:25 it has or our explanation can’t prove that it has closure of what would be in the unknown

01:20:30 unknown set where we keep finding like, oh, it’s just the genome.

01:20:33 Oh, well now it’s the genome and the epigenome and then a recursive change on the epigenome

01:20:37 because of the proteome.

01:20:38 And then there’s mitochondrial DNA and then viruses affected and fuck, right?

01:20:41 So it’s like we get overexcited when we think we found the thing.

01:20:46 So on Facebook, you know how you can list your relationship as complicated?

01:20:49 It should actually say it’s, it’s complex.

01:20:52 That’s the more accurate description.

01:20:54 So self terminating is a really interesting idea that you talk about quite a bit.

01:21:01 First of all, what is a self terminating system?

01:21:04 And I think you have a sense, correct me if I’m wrong, that human civilization currently

01:21:11 is a self terminating system.

01:21:16 Why do you have that intuition, combined with the definition of what self terminating

01:21:20 means?

01:21:21 Okay, so if we look at human societies historically, human civilizations, it’s not that hard to

01:21:33 realize that most of the major civilizations and empires of the past don’t exist anymore.

01:21:37 So they had a life cycle, they died for some reason.

01:21:40 So we don’t still have the early Egyptian empire or Inca or Maya or Aztec or any of

01:21:46 those, right?

01:21:47 So they, they terminated, sometimes it seems like they were terminated from the outside

01:21:52 in war.

01:21:53 Sometimes it seems like they self terminated.

01:21:54 When we look at Easter Island, it was a self termination.

01:21:57 So let’s go ahead and take an island situation.

01:22:00 If I have an island and we are consuming the resources on that island faster than the resources

01:22:05 can replicate themselves and there’s a finite space there, that system is going to self

01:22:09 terminate.

01:22:10 It’s not going to be able to keep doing that thing, because you’ll get to a place where there’s

01:22:13 no resources left and then you get a collapse. So now, if I’m utilizing the resources faster than

01:22:20 they can replicate or faster than they can replenish, and I’m actually growing our population

01:22:25 in the process, I’m even increasing the rate of the utilization of resources, I might get

01:22:30 an exponential curve and then hit a wall and then just collapse the exponential curve, rather

01:22:35 than do an S curve or some other kind of thing.

01:22:39 So a self terminating system is any system that depends upon a substrate and is debasing

01:22:47 its own substrate, that is debasing what it depends upon.
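Here is a rough Python sketch of that island dynamic (a made-up toy, not a model from the conversation): a resource regenerates logistically while a growing population draws on it, so consumption overshoots regeneration and the exponential curve hits a wall and collapses instead of bending into an S curve. All parameters are illustrative.

```python
# Toy overshoot-and-collapse model: a renewable resource with logistic
# regeneration, harvested by a population that grows while its demand is met.
# Every number here is invented for illustration.

def simulate(years=300, regen_rate=0.25, capacity=1000.0,
             pop0=10.0, pop_growth=0.03, per_capita_use=1.0):
    resource, population = capacity, pop0
    history = []
    for year in range(years):
        # Logistic regrowth: zero when the stock is empty or at carrying capacity.
        regrowth = regen_rate * resource * (1.0 - resource / capacity)
        demand = population * per_capita_use
        harvest = min(demand, resource + regrowth)
        resource = max(0.0, resource + regrowth - harvest)
        if harvest >= demand:
            population *= 1.0 + pop_growth             # growth while demand is met
        else:
            population *= harvest / max(demand, 1e-9)  # collapse once it is not
        history.append((year, resource, population))
    return history

for year, resource, population in simulate()[::25]:
    print(f"year {year:3d}  resource {resource:8.1f}  population {population:8.1f}")
```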

01:22:50 So you’re right that if you look at empires, they rise and fall throughout human history,

01:22:58 but not this time, bro.

01:23:02 This one’s going to last forever.

01:23:04 I like that idea.

01:23:06 I think that if we don’t understand why all the previous ones failed, we can’t ensure

01:23:10 that.

01:23:11 And so I think it’s very important to understand it well so that we can have that be a designed

01:23:15 outcome with somewhat decent probability.

01:23:18 So in terms of consuming the resources on the island, we’re a clever

01:23:24 bunch, and especially when there is a termination point on the horizon,

01:23:33 we keep coming up with clever ways of avoiding disaster, of avoiding collapse, of constructing.

01:23:40 This is where technological innovation, this is where growth comes in, coming up with different

01:23:44 ways to improve productivity and the way society functions such that we consume less resources

01:23:50 or get a lot more from the resources we have.

01:23:53 So there’s some sense in which human ingenuity is a source for optimism about the

01:24:02 future of this particular system, that it may not be self terminating.

01:24:07 If there’s more innovation than there is consumption.

01:24:13 So overconsumption of resources is just one way I think it can self terminate.

01:24:17 We’re just kind of starting here.

01:24:18 But there are reasons for optimism and pessimism, and they’re both worth understanding, and

01:24:27 there are failure modes in understanding either without the other.

01:24:31 As we mentioned previously, there’s what I would call naive techno optimism, naive techno

01:24:38 capital optimism that says stuff just has been getting better and better and we wouldn’t

01:24:43 want to live in the dark ages and tech has done all this awesome stuff and we know the

01:24:48 proponents of those models and this stuff is going to kind of keep getting better.

01:24:52 Of course there are problems, but human ingenuity rises to it, supply and demand will solve

01:24:55 the problems, whatever.

01:24:56 Would you put Ray Kurzweil in that bucket? Are there some specific people

01:25:03 you have in mind, or is naive optimism truly naive, to where you essentially just have

01:25:08 an optimism that’s blind to any kind of realities of the way technology progresses?

01:25:13 I don’t think that anyone who thinks about it and writes about it is perfectly naive.

01:25:22 Gotcha.

01:25:23 But there might be.

01:25:24 It’s a platonic ideal.

01:25:25 There might be a bias in the nature of the assessment.

01:25:29 I would also say there’s kind of naive techno pessimism and there are critics of technology.

01:25:40 I mean, you read the Unabomber’s Manifesto on why technology can’t not result in our

01:25:46 self termination, so we have to take it out before it gets any further.

01:25:51 But also if you read a lot of the X risk community, you know, Bostrom and friends, it’s like our

01:25:58 total number of existential risks and the total probability of them is going up.

01:26:04 And so I think that there are, we have to hold together where our positive possibilities

01:26:11 and our risk possibilities are both increasing and then say for the positive possibilities

01:26:16 to be realized long term, all of the catastrophic risks have to not happen.

01:26:22 Any of the catastrophic risks happening is enough to keep that positive outcome from

01:26:26 occurring.

01:26:27 So how do we ensure that none of them happen?

01:26:30 If we want to say, let’s have a civilization that doesn’t collapse.

01:26:33 So again, Collapse Theory, it’s worth looking at books like The Collapse of Complex Societies

01:26:38 by Joseph Tainter.

01:26:39 It does an analysis showing that many of the societies fell for internal institutional decay, civilizational

01:26:49 decay reasons.

01:26:51 Baudrillard in Simulacra and Simulation looks at a very different way of looking at how

01:26:55 institutional decay happens and the collective intelligence of a system becomes kind of

01:26:59 more internally parasitic on itself.

01:27:02 Obviously Jared Diamond made a more popular book called Collapse.

01:27:06 And as we were mentioning, the Antikythera mechanism has been getting attention in the

01:27:10 news lately.

01:27:11 It was like a 2000 year old clock, right?

01:27:14 Like metal gears.

01:27:15 And does that mean we lost like 1500 years of technological progress?

01:27:23 And from a society that was relatively technologically advanced.

01:27:29 So what I’m interested in here is being able to say, okay, well, why did previous societies

01:27:35 fail?

01:27:38 Can we understand that abstractly enough that we can make a civilizational model that isn’t

01:27:44 just trying to solve one type of failure, but solve the underlying things that generate

01:27:49 the failures as a whole?

01:27:51 Are there some underlying generator functions or patterns that would make a system self

01:27:56 terminating?

01:27:57 And can we solve those and have that be the kernel of a new civilizational model that

01:28:00 is not self terminating?

01:28:02 And can we then be able to actually look at the categories of X risks we’re aware of and

01:28:06 see that we actually have resilience in the presence of those?

01:28:10 Not just resilience, but antifragility.

01:28:13 And I would say for the optimism to be grounded, it has to actually be able to understand the

01:28:18 risk space well and have adequate solutions for it.

01:28:22 So can we try to dig into some basic intuitions about the underlying sources of catastrophic

01:28:31 failures of the system and overconsumption that’s built into self terminating systems?

01:28:37 So both the overconsumption, which is like the slow death, and then there’s the fast

01:28:42 death of nuclear war and all those kinds of things.

01:28:45 AGI, biotech, bioengineering, nanotechnology, nano, my favorite nanobots.

01:28:53 Nanobots are my favorite because it sounds so cool to me that I could just know that

01:28:59 I would be one of the scientists that would be full steam ahead in building them without

01:29:06 sufficiently thinking about the negative consequences.

01:29:08 I would definitely be, I would be podcasting all about the negative consequences, but when

01:29:14 I go back home, I’d just in my heart know the amount of excitement I have, as a dumb descendant

01:29:20 of apes, no offense to apes.

01:29:24 I want to backtrack on my previous comments about, negative comments about apes.

01:29:34 That I have that sense of excitement that would result in problems.

01:29:39 So sorry, a lot of things said, but can we start to pull at a thread, because you’ve

01:29:43 also provided a kind of a beautiful general approach to this, which is this dialectic

01:29:50 synthesis or just rigorous empathy, whatever word we want to put to it, that seems

01:29:57 to be, from the individual perspective, one way to sort of live in the world as we try

01:30:01 to figure out how to construct non self terminating systems.

01:30:06 So what, what are some underlying sources?

01:30:08 Yeah.

01:30:09 First I have to say, I actually really respect Drexler for emphasizing grey goo in Engines

01:30:17 of Creation back in the day, to make sure the world was paying adequate attention to the

01:30:23 risks of nanotech, as someone who was right at the cutting edge of what could be.

01:30:32 There’s definitely game theoretic advantage to those who focus on the opportunities and

01:30:37 don’t focus on the risks or pretend there aren’t risks because they get to market first.

01:30:46 And then they externalize all of the costs through limited liability or whatever it is

01:30:51 to the commons or wherever they happen to have it.

01:30:53 Other people are going to have to solve those, but now they have the power and capital associated.

01:30:56 The person who looked at the risks and tried to do better design and go slower is probably

01:31:01 not going to move into positions of as much power and influence as quickly.

01:31:04 So this is one of the issues we have to deal with is some of the bad game theoretic dispositions

01:31:08 in the system relative to its own stability.

01:31:12 And the key aspect to that, sorry to interrupt, is the externalities generated.

01:31:17 Yes.

01:31:18 What flavors of catastrophic risk are we talking about here?

01:31:21 What’s your favorite flavor in terms of ice cream?

01:31:24 So mine is coconut.

01:31:25 Nobody seems to like coconut ice cream.

01:31:28 So ice cream aside, what are you most worried about in terms of catastrophic risk that will

01:31:36 help us kind of make concrete the discussion we’re having about how to fix this whole thing?

01:31:44 Yeah.

01:31:45 I think it’s worth taking a historical perspective briefly to just kind of orient everyone to

01:31:49 it.

01:31:50 We don’t have to go all the way back to the aliens who’ve seen all of civilization.

01:31:55 But to just recognize that for all of human history, as far as we’re aware, there were

01:32:03 existential risks to civilizations and they happened, right?

01:32:07 Like there were civilizations that were killed in war, tribes that were killed in tribal

01:32:12 warfare or whatever.

01:32:13 So people faced existential risk to the group that they identified with.

01:32:18 It’s just those were local phenomena, right?

01:32:22 It wasn’t a fully global phenomenon.

01:32:22 So an empire could fall and surrounding empires didn’t fall.

01:32:25 Maybe they came in and filled the space.

01:32:30 The first time that we were able to think about catastrophic risk, not from like a solar

01:32:35 flare or something that we couldn’t control, but from something that humans would actually

01:32:39 create at a global level was World War II and the bomb.

01:32:43 Because it was the first time that we had tech big enough that could actually mess up

01:32:47 everything at a global level that could mess up habitability.

01:32:50 We just weren’t powerful enough to do that before.

01:32:53 It’s not that we didn’t behave in ways that would have done it.

01:32:55 We just only behaved in those ways at the scale we could affect.

01:32:59 And so it’s important to get that there’s the entire world before World War II where

01:33:04 we don’t have the ability to make a nonhabitable biosphere, nonhabitable for us.

01:33:09 And then there’s World War II and the beginning of a completely new phase where global human

01:33:14 induced catastrophic risk is now a real thing.

01:33:18 And that was such a big deal that it changed the entire world in a really fundamental way,

01:33:23 which is, you know, when you study history, it’s amazing how big a percentage of history

01:33:28 is studying war, right, and the history of war, as you said, European history and whatever.

01:33:32 It’s generals and wars and empire expansions.

01:33:35 And so the major empires near each other never had really long periods of time where they

01:33:40 weren’t engaged in war or preparation for war or something like that. Humans

01:33:45 don’t have a good precedent, in the post tribal phase, the civilization phase, of being able

01:33:51 to solve conflicts without war for very long.

01:33:54 World War II was the first time where we could have a war that no one could win.

01:34:00 And so the superpowers couldn’t fight again.

01:34:02 They couldn’t do a real kinetic war.

01:34:04 They could do diplomatic wars and Cold War type stuff and they could fight proxy wars

01:34:08 through other countries that didn’t have the big weapons.

01:34:11 And so mutually assured destruction and like coming out of World War II, we actually realized

01:34:15 that nation states couldn’t prevent world war.

01:34:19 And so we needed a new type of supervening government in addition to nation states, which

01:34:23 was the whole Bretton Woods world, the United Nations, the World Bank, the IMF, the globalization

01:34:31 trade type agreements, mutually assured destruction. That was, how do we have some coordination

01:34:36 beyond just nation states, between them, since we have to stop war between at least the superpowers.

01:34:42 And it was pretty successful given that we’ve had like 75 years of no superpower on superpower

01:34:48 war.

01:34:50 We’ve had lots of proxy wars during that time.

01:34:53 We’ve had Cold War.

01:34:56 And I would say we’re in a new phase now where the Bretton Woods solution is basically over

01:35:01 or almost over.

01:35:02 Can you describe the Bretton Woods solution?

01:35:05 Yeah.

01:35:06 So the Bretton Woods, the series of agreements for how the nations would be able to engage

01:35:15 with each other in a solution other than war, was these IGOs, these intergovernmental organizations,

01:35:21 and the idea of globalization.

01:35:24 Since we could have global effects, we needed to be able to think about things globally

01:35:28 where we had trade relationships with each other where it would not be profitable to

01:35:32 war with each other.

01:35:33 It’d be more profitable to actually be able to trade with each other.

01:35:35 So our own self interest was gonna drive our non war interest.

01:35:42 And so this started to look like, and obviously this couldn’t have happened that much earlier

01:35:47 either because industrialization hadn’t gotten far enough to be able to do massive global

01:35:51 industrial supply chains and ship stuff around quickly.

01:35:54 But like we were mentioning earlier, almost all the electronics that we use today, just

01:35:59 basic cheap stuff for us is made on six continents, made in many countries.

01:36:02 There’s no single country in the world that could actually make many of the things that

01:36:06 we have, from the raw material extraction to the plastics and polymers, et cetera.

01:36:12 And so the idea that we made a world that could do that kind of trade and create massive

01:36:18 GDP growth, we could all work together to be able to mine natural resources and grow

01:36:22 stuff.

01:36:23 With the rapid GDP growth, there was the idea that everybody could keep having more without

01:36:28 having to take each other’s stuff.

01:36:30 And so that was part of kind of the Bretton Woods post World War II model.

01:36:35 The other was that we’d be so economically interdependent that blowing each other up

01:36:38 would never make sense.

01:36:41 That worked for a while.

01:36:43 Now it also brought us up into planetary boundaries faster, the unrenewable use of resources and

01:36:51 turning those resources into pollution on the other side of the supply chain.

01:36:56 So obviously that faster GDP growth meant the overfishing of the oceans and the cutting

01:37:02 down of the trees and the climate change and the mining, toxic mining tailings going into

01:37:08 the water and the mountaintop removal mining and all those types of things.

01:37:11 That’s the overconsumption side of the risk that we’re talking about.

01:37:15 And so the answer of let’s do positive GDP growth rapidly and exponentially obviously

01:37:23 accelerated the planetary boundary side.

01:37:27 And that started to be, that was thought about for a long time, but it started to be modeled

01:37:31 with the Club of Rome and Limits to Growth.

01:37:38 But it’s just very obvious to say if you have a linear materials economy where you take

01:37:41 stuff out of the earth faster, whether it’s fish or trees or oil, you take it out of the

01:37:47 earth faster than it can replenish itself and you turn it into trash after using it

01:37:52 for a short period of time, you put the trash in the environment faster than it can process

01:37:56 itself and there’s toxicity associated with both sides of this.

01:37:59 You can’t run an exponentially growing linear materials economy on a finite planet forever.

01:38:05 That’s not a hard thing to figure out.

01:38:07 And it has to be exponential if there’s an exponentiation in the monetary supply because

01:38:11 of interest and then fractional reserve banking, and then, to be able to keep up with the growing

01:38:16 monetary supply, you have to have growth of goods and services.

01:38:19 So that’s the kind of thing that has happened.
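A quick back-of-the-envelope on that point (my arithmetic, with illustrative rates): at a steady growth rate g, throughput doubles roughly every ln(2)/ln(1+g) years, so "exponential forever" on a finite materials base stops being a subtle question fairly quickly.

```python
# Doubling times and century multiples for a steadily growing quantity.
# The growth rates are illustrative, not a claim about actual GDP or throughput.
import math

for g in (0.02, 0.03, 0.05):
    doubling_time = math.log(2) / math.log(1 + g)
    century_multiple = (1 + g) ** 100
    print(f"{g:.0%} growth: doubles every {doubling_time:4.1f} years, "
          f"x{century_multiple:6.1f} after 100 years")
```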

01:38:24 But you also see that when you get these supply chains that are so interconnected across the

01:38:28 world, you get increased fragility because a collapse or a problem in one area then affects

01:38:33 the whole world in a much bigger area as opposed to the issues being local, right?

01:38:37 So we got to see with COVID and an issue that started in one part of China affecting the

01:38:43 whole world so much more rapidly than would have happened before Bretton Woods, right?

01:38:48 Before international travel, supply chains, you know, that whole kind of thing and with

01:38:52 a bunch of second and third order effects that people wouldn’t have predicted, okay,

01:38:55 we have to stop certain kinds of travel because of viral contaminants, but the countries doing

01:39:01 agriculture depend upon fertilizer they don’t produce that is shipped into them and depend

01:39:06 upon pesticides they don’t produce.

01:39:07 So we got both crop failures and crops being eaten by locusts in scale in Northern Africa

01:39:12 and Iran and things like that because they couldn’t get the supplies of stuff in.

01:39:15 So then you get massive starvation or future kind of hunger issues because of supply chain

01:39:21 shutdowns.

01:39:22 So you get this increased fragility and cascade dynamics where a small problem can end up

01:39:26 leading to cascade effects.

01:39:29 And also we went from two superpowers with one catastrophe weapon to now, where that same catastrophe

01:39:40 weapon, there’s more countries that have it, eight or nine countries that have it,

01:39:46 and there’s a lot more types of catastrophe weapons.

01:39:50 We now have catastrophe weapons with weaponized drones that can hit infrastructure targets

01:39:54 with bio, with in fact every new type of tech has created an arms race.

01:39:59 So with the UN or the other kind of intergovernmental organizations, we haven’t

01:40:04 been able to really do nuclear deproliferation.

01:40:07 We’ve actually had more countries get nukes and keep getting faster nukes, the race to

01:40:12 hypersonics and things like that.

01:40:15 And every new type of technology that has emerged has created an arms race.

01:40:20 And so you can’t do mutually assured destruction with multiple agents the way you can with

01:40:25 two agents.

01:40:26 Two agents, it’s much easier to create a stable Nash equilibrium that’s forced.

01:40:31 But the ability to monitor and say if these guys shoot, who do I shoot?

01:40:33 Do I shoot them?

01:40:34 Do I shoot everybody?

01:40:35 Do I?

01:40:36 And so you get a three body problem.

01:40:37 You get a very complex type of thing when you have multiple agents and multiple different

01:40:41 types of catastrophe weapons, including ones that can be much more easily produced than

01:40:45 nukes.
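One crude way to see why the two-agent logic stops scaling (simple counting, not a game-theoretic model, with hypothetical numbers): every pair of armed actors, for every class of catastrophe weapon, needs its own credible and attributable retaliation posture, and that count grows quadratically with the number of actors.

```python
# Count the attacker-defender pairs that each need a stable deterrence posture.
# Actor and weapon-type counts below are hypothetical.

def deterrence_relationships(actors: int, weapon_types: int) -> int:
    pairs = actors * (actors - 1) // 2   # unordered pairs of actors
    return pairs * weapon_types          # one posture per pair, per weapon class

for actors, weapons in [(2, 1), (9, 1), (9, 5), (30, 8)]:
    print(f"{actors:2d} actors, {weapons} weapon type(s) -> "
          f"{deterrence_relationships(actors, weapons):4d} deterrence relationships")
```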

01:40:46 Nukes are really hard to produce.

01:40:47 There’s only uranium in a few areas.

01:40:48 Uranium enrichment is hard, ICBMs are hard, but weaponized drones hitting smart targets

01:40:54 is not so hard.

01:40:55 There’s a lot of other things where basically the scale required to be able to manufacture them

01:40:58 is going way, way down to where even non state actors can have them.

01:41:02 And so when we talk about exponential tech and the decentralization of exponential tech,

01:41:09 what that means is decentralized catastrophe weapon capacity.

01:41:14 And especially in a world of increasing numbers of people feeling disenfranchised, frantic,

01:41:19 whatever for different reasons.

01:41:21 So I would say the Bretton Woods world doesn’t prepare us to be able to deal with

01:41:27 lots of different agents, having lots of different types of catastrophe weapons you can’t put

01:41:31 mutually assured destruction on, where you can’t keep doing growth of materials economy

01:41:37 in the same way because of hitting planetary boundaries and where the fragility dynamics

01:41:43 are actually now their own source of catastrophic risk.

01:41:46 So now we’re, so like there was all the world until World War II, and World War II,

01:41:50 from a civilization timescale point of view, is just a second ago.

01:41:54 It seems like a long time, but it is really not.

01:41:56 We get a short period of relative peace at the level of superpowers while building up

01:42:00 the military capacity for much, much, much worse war the entire time.

01:42:04 And then now we’re at this new phase where the things that allowed us to make it through

01:42:09 the nuclear age are not the same systems that will let us make it through the next

01:42:13 stage.

01:42:14 So what is this next post Bretton Woods?

01:42:18 How do we become safe vessels, safe stewards of many different types of exponential technology

01:42:26 is a key question when we’re thinking about X risk.

01:42:30 Okay.

01:42:31 And I’d like to try to answer the how a few ways, but first on the mutually assured destruction.

01:42:41 Do you give credit for the idea of two superpowers not blowing each other up with nuclear weapons

01:42:49 to the simple game theoretic model of mutually assured destruction or something you’ve said

01:42:54 previously this idea of inverse correlation, which I tend to believe between the, now you

01:43:04 were talking about tech, but I think it’s maybe broadly true.

01:43:09 The inverse correlation between competence and propensity for destruction.

01:43:14 So the better, the bigger your weapons, not because you’re afraid of mutually assured

01:43:22 self destruction, but because we’re human beings and there’s a deep moral fortitude

01:43:27 there that’s somehow aligned with competence and being good at your job, that like, it’s

01:43:32 very hard to be a psychopath and be good at killing at scale.

01:43:42 Do you share any of that intuition?

01:43:46 Kind of.

01:43:48 I think most people would say that Alexander the Great and Genghis Khan and Napoleon were

01:43:53 effective people that were good at their job that were actually maybe asymmetrically good

01:44:01 at being able to organize people and do certain kinds of things that were pretty oriented

01:44:08 towards certain types of destruction or pretty willing to, maybe they would say they were

01:44:13 oriented towards empire expansion, but pretty willing to commit certain acts of destruction

01:44:18 in the name of it.

01:44:19 What are you worried about?

01:44:20 The Genghis Khan, or you could argue he’s not a psychopath.

01:44:27 Are you worried about Genghis Khan, are you worried about Hitler, or are you worried

01:44:31 about a terrorist who has a very different ethic, which is not even, it’s not trying

01:44:42 to preserve and build and expand my community.

01:44:46 It’s more about just the destruction in itself is the goal.

01:44:50 I think the thing that you’re looking at that I do agree with is that there’s a psychological

01:44:56 disposition towards construction and a psychological disposition more towards destruction.

01:45:03 Obviously everybody has both and can toggle between both and oftentimes one is willing

01:45:07 to destroy certain things.

01:45:09 We have this idea of creative destruction, right?

01:45:11 Willing to destroy certain things to create other things and utilitarianism and trolley

01:45:15 problems are all about exploring that space and the idea of war is all about that.

01:45:20 I am trying to create something for our people and it requires destroying some other people.

01:45:29 Sociopathy is a funny topic because it’s possible to have very high fealty to your in group

01:45:32 and work on perfecting the methods of torture to the out group at the same time because

01:45:38 you can dehumanize and then remove empathy.

01:45:43 And I would also say that there are types.

01:45:48 So the thing that gives hope about the orientation towards construction and destruction

01:45:55 being a little different in psychology is what it takes to build really catastrophic

01:46:00 tech. Even today, where it doesn’t take what it took to make a nuke and a small group of people

01:46:04 could do it, it still takes some real technical knowledge that required having studied for

01:46:10 a while and then some building capacity. And there’s a question of, is that psychologically

01:46:16 inversely correlated with the desire to damage civilization meaningfully?

01:46:24 A little bit.

01:46:25 A little bit, I think.

01:46:27 I think a lot.

01:46:29 I think it’s actually, I mean, this is the conversation I had like with, I think offline

01:46:34 with Dan Carlin, which is like, it’s pretty easy to come up with ways that any competent,

01:46:41 I can come up with a lot of ways to hurt a lot of people and it’s pretty easy, like I

01:46:46 alone could do it and there’s a lot of people as smart or smarter than me, at least in their

01:46:55 creation of explosives.

01:46:58 Why are we not seeing more insane mass murder?

01:47:03 I think there’s something fascinating and beautiful about this and it does have to do

01:47:10 with some deeply pro social types of characteristics in humans but when you’re dealing with very

01:47:19 large numbers, you don’t need a whole lot of a phenomena and so then you start to say,

01:47:24 well, what’s the probability that X won’t happen this year, then won’t happen in the

01:47:27 next two years, three years, four years and then how many people are doing destructive

01:47:32 things with lower tech and then how many of them can get access to higher tech that they

01:47:36 didn’t have to figure out how to build.

01:47:39 So when I can get commercial tech and maybe I don’t understand tech very well but I understand

01:47:47 it well enough to utilize it, not to create it and I can repurpose it.

01:47:51 When we saw that commercial drone with a homemade thermite bomb hit the Ukrainian munitions

01:47:57 factory and do the equivalent of an incendiary bomb level of damage, that was just home tech,

01:48:03 that’s just simple kind of thing.

01:48:06 And so the question is not does it stay being a small percentage of the population?

01:48:14 The question is can you bind that phenomena nearly completely and especially now as you

01:48:24 start to get into bigger things, CRISPR gene drive technologies and various things like

01:48:29 that, can you bind it completely long term over what period of time?

01:48:36 Not perfectly though, that’s the thing.

01:48:38 I’m trying to say that there is some, let’s call it, that’s a random word, love, that’s

01:48:46 inherent and that’s core to human nature that’s preventing destruction at scale.

01:48:54 And you’re saying yeah but there’s a lot of humans, there’s going to be eight plus billion

01:48:59 and then there’s a lot of seconds in the day to come up with stuff, there’s a lot of pain

01:49:03 in the world that can lead to a distorted view of the world such that you want to channel

01:49:08 that pain into the destruction, all those kinds of things and it’s only a matter of

01:49:12 time that any one individual can do large damage, especially as we create more and more

01:49:19 democratized decentralized ways to deliver that damage even if you don’t know how to

01:49:23 build the initial weapon.

01:49:25 But the thing is it seems like it’s a race between the cheapening of destructive weapons

01:49:37 and the capacity of humans to express their love towards each other and it’s a race that

01:49:44 so far, I know on Twitter it’s not popular to say but love is winning, okay?

01:49:52 So what is the argument that love is going to lose here against nuclear weapons and biotech

01:49:58 and AI and drones?

01:50:02 Okay, I’m going to commit the end of this to how love wins, so I just want you to know

01:50:07 that that’s where I’m oriented.

01:50:09 That’s the end, okay.

01:50:10 But I’m going to argue against why that is a given because it’s not a given, I don’t

01:50:19 believe and I think that it’s…

01:50:20 This is like a good romantic comedy so you’re going to create drama right now but it will

01:50:25 end in a happy ending.

01:50:27 Well it’s because it’s only a happy ending if we actually understand the issues well

01:50:30 enough and take responsibility to shift it.

01:50:32 Do I believe like there’s a reason why there’s so much more dystopic sci fi than protopic

01:50:37 sci fi, and why the protopic sci fi that exists usually requires magic – or at least

01:50:45 magical tech, right, dilithium crystals and warp drives and stuff because it’s very hard

01:50:51 to imagine people like the people we have been in the history books with exponential

01:50:59 type technology and power that don’t eventually blow themselves up, that make good enough

01:51:04 choices as stewards of their environment and their commons and each other and etc.

01:51:09 So like it’s easier to think of scenarios where we blow ourselves up than it is to think

01:51:13 of scenarios where we avoid every single scenario where we blow ourselves up.

01:51:16 And when I say blow ourselves up I mean the environmental versions, the terrorist versions,

01:51:21 the war versions, the cumulative externalities versions.

01:51:25 And I’m sorry if I’m interrupting your flow of thought but why is it easier?

01:51:33 Could it be a weird psychological thing where we either are just more capable to visualize

01:51:39 explosions and destruction and then the sicker thought which is like we kind of enjoy for

01:51:44 some weird reason thinking about that kind of stuff even though we wouldn’t actually

01:51:48 act on it.

01:51:49 It’s almost like some weird, like I love playing shooter games, you know, first person shooters

01:51:56 and like especially if it’s like murdering zombies and doom, you’re shooting demons.

01:52:01 I play one of my favorite games Diablo is like slashing through different monsters and

01:52:05 the screaming and pain and the hellfire and then I go out into the real world to eat my

01:52:11 coconut ice cream and I’m all about love.

01:52:13 So like can we trust our ability to visualize how it all goes to shit as an actual rational

01:52:20 way of thinking?

01:52:22 I think it’s a fair question to say to what degree is there just kind of perverse fantasy

01:52:28 and morbid exploration and whatever else that happens in our imagination but I don’t think

01:52:37 that’s the whole of it.

01:52:38 I think there is also a reality to the combinatorial possibility space and the difference in the

01:52:44 probabilities that there’s a lot of ways I could try to put the 70 trillion cells of

01:52:50 your body together that don’t make you.

01:52:53 There’s not that many ways I can put them together that make you.

01:52:55 There’s a lot of ways I could try to connect the organs together that make some weird kind

01:52:58 of group of organs on a desk but that doesn’t actually make a functioning human and you

01:53:06 can kill an adult human in a second but you can’t get one in a second.

01:53:09 It takes 20 years to grow one and a lot of things happen right.

01:53:12 I could destroy this building in a couple of minutes with demolition but it took a year

01:53:18 or a couple of years to build it.

01:53:20 There is –

01:53:21 Calm down, Cole.

01:53:23 This is just an example.

01:53:25 He doesn’t mean it.

01:53:27 There’s a gradient where entropy is easier and there’s a lot more ways to put a set

01:53:35 of things together that don’t work than the few that really do produce higher order

01:53:38 synergies.

01:53:45 When we look at a history of war and then we look at exponentially more powerful warfare,

01:53:51 an arms race that drives that in all these directions, and when we look at a history

01:53:54 of environmental destruction and exponentially more powerful tech that makes exponential

01:53:58 externalities multiplied by the total number of agents that are doing it and the cumulative

01:54:02 effects, there’s a lot of ways the whole thing can break, like a lot of different ways.

01:54:07 And for it to get ahead, it has to have none of those happen.

01:54:12 And so there’s just a probability space where it’s easier to imagine that thing.

01:54:18 So to say how do we have a protopic future, we have to say, well, one criteria must be

01:54:23 that it avoids all of the catastrophic risks.

01:54:25 So can we understand – can we inventory all the catastrophic risks?

01:54:28 Can we inventory the patterns of human behavior that give rise to them?

01:54:32 And could we try to solve for that?

01:54:35 And could we have that be the essence of the social technology that we’re thinking about

01:54:39 to be able to guide, bind, and direct a new physical technology?

01:54:42 Because so far, our physical technology – like we were talking about the Genghis Khan’s

01:54:47 like that, that obviously use certain kinds of physical technology and armaments and also

01:54:52 social technology and unconventional warfare for a particular set of purposes.

01:54:57 But we have things that don’t look like warfare, like Rockefeller and Standard Oil.

01:55:04 And it looked like a constructive mindset to be able to bring this new energy resource

01:55:11 to the world, and it did.

01:55:14 And the second order effects of that are climate change and all of the oil spills that have

01:55:21 happened and will happen and all of the wars in the Middle East over the oil that have

01:55:26 been there and the massive political clusterfuck and human life issues that are associated

01:55:32 with it and on and on, right?

01:55:36 And so it’s also not just the orientation to construct a thing can have a narrow focus

01:55:44 on what I’m trying to construct but be affecting a lot of other things through second and third

01:55:47 order effects I’m not taking responsibility for.

01:55:51 You often on another tangent mentioned second, third, and fourth order effects.

01:55:57 Nth order.

01:55:58 Nth order.

01:55:59 Cascading.

01:56:00 Which is really fascinating.

01:56:02 Like starting with the third order plus it gets really interesting because we don’t

01:56:09 even acknowledge like the second order effects.

01:56:11 Right.

01:56:12 But like thinking because those it could get bigger and bigger and bigger in ways we were

01:56:17 not anticipating.

01:56:18 So how do we make those?

01:56:20 So it sounds like part of the thing that you are thinking through in terms of a solution

01:56:27 how to create an anti fragile, a resilient society is to make explicit, acknowledge, understand

01:56:38 the externalities, the second order, third order, fourth order, and nth order effects.

01:56:44 How do we start to think about those effects?

01:56:47 Yeah, the war application is harm we’re trying to cause or that we’re aware we’re causing.

01:56:52 Right.

01:56:53 The externality is harm that at least supposedly we’re not aware we’re causing or at minimum

01:56:58 it’s not our intention.

01:56:59 Right.

01:57:00 Maybe we’re either totally unaware of it or we’re aware of it but it is a side effect

01:57:03 of what our intention is.

01:57:04 It’s not the intention itself.

01:57:06 There are catastrophic risks from both types.

01:57:09 The direct application of increased technological power to a rivalrous intent which is going

01:57:16 to cause harm for some out group, for some in group to win.

01:57:19 But the out group is also working on growing the tech and if they don’t lose completely

01:57:23 they reverse engineer the tech, up regulate it, come back with more capacity.

01:57:27 So there’s the exponential tech arms race side of in group, out group rivalry using

01:57:33 exponential tech that is one set of risks.

01:57:36 And the other set of risks is the application of exponentially more powerful tech not intentionally

01:57:44 to try and beat an out group but to try to achieve some goal that we have, but that produces

01:57:49 second and third order effects that do have harm to the commons, to other people, to environment,

01:57:56 to other groups that might actually be bigger problems than the problem we were originally

01:58:02 trying to solve with the thing we were building.

01:58:05 When Facebook was building a dating app and then building a social app where people could

01:58:10 tag pictures, they weren’t trying to build a democracy destroying app that would maximize

01:58:20 time on site as part of its ad model through AI optimization of a newsfeed to the thing

01:58:27 that made people spend most time on site which is usually them being limbically hijacked

01:58:31 more than something else which ends up appealing to people’s cognitive biases and group identities

01:58:37 and creates no sense of shared reality.

01:58:39 They weren’t trying to do that but it was a second order effect and it’s a pretty fucking

01:58:45 powerful second order effect and a pretty fast one because the rate of tech is obviously

01:58:51 able to get distributed to much larger scale much faster and with a bigger jump in terms

01:58:56 of total vertical capacity, because that’s what it means to get to the verticalizing part

01:59:00 of an exponential curve.

01:59:02 So just like we can see that oil had the second order environmental effects and also social

01:59:09 and political effects.

01:59:11 War and so much of the whole... like the total amount of oil used has a proportionality to

01:59:19 total global GDP, and this is why we have the petrodollar, and so the oil thing also

01:59:27 had the externalities of being a major aspect of what happened with the military industrial complex

01:59:32 and things like that.

01:59:34 But we can see the same thing with more current technologies with Facebook and Google and

01:59:40 other things.

01:59:41 So I don’t think we can run and the more powerful the tech is, we build it for reason X, whatever

01:59:49 reason X is.

01:59:51 Maybe X is three things, maybe it’s one thing, right?

01:59:55 We’re doing the oil thing because we wanna make cars because it’s a better method of

01:59:58 individual transportation, we’re building the Facebook thing because we’re gonna connect

02:00:01 people socially in a personal sphere.

02:00:04 But it interacts with complex systems, with ecologies, economies, psychologies, cultures,

02:00:13 and so it has effects on other than the thing we’re intending.

02:00:16 Some of those effects can end up being negative effects, but because this technology, if we

02:00:22 make it to solve a problem, it has to overcome the problem.

02:00:25 The problem has been around for a while; it’s gonna overcome it in a short period of time.

02:00:28 So it usually has greater scale, greater rate of magnitude in some way.

02:00:32 That also means that the externalities that it creates might be bigger problems.

02:00:37 And you can say, well, but then that’s the new problem and humanity will innovate its

02:00:40 way out of that.

02:00:41 Well, I don’t think that’s paying attention to the fact that we can’t keep up with exponential

02:00:45 curves like that, nor do finite spaces allow exponential externalities forever.

02:00:52 And this is why a lot of the smartest people thinking about this are thinking, well, no,

02:00:57 I think we’re totally screwed unless we can make a benevolent AI singleton that rules all

02:01:02 of us.

02:01:03 Guys like Bostrom and others thinking in those directions, because they’re like, how do humans

02:01:10 try to do multipolarity and make it work?

02:01:14 And I have a different answer of what I think it looks like that does have more to do with

02:01:19 love, but some applied social tech aligned with love.

02:01:22 That’s good, because I have a bunch of really dumb ideas I’d prefer to hear.

02:01:28 I’d like to hear some of them first.

02:01:30 I think the idea I would have is to be a bit more rigorous in trying to measure the amount

02:01:37 of love you add or subtract from the world in second, third, fourth, fifth order effects.

02:01:46 It’s actually, I think, especially in the world of tech, quite doable.

02:01:52 You just might not like, the shareholders may not like that kind of metric, but it’s

02:01:58 pretty easy to measure.

02:02:01 That’s not even, I’m perhaps half joking about love, but we could talk about just happiness

02:02:07 and well being, long term well being.

02:02:11 That’s pretty easy for Facebook, for YouTube, for all these companies to measure that.

02:02:16 They do a lot of kinds of surveys.

02:02:19 There’s very simple solutions here where you could just survey how... I mean, surveys are

02:02:25 in some sense useless because they’re a subset of the population.

02:02:31 You’re just trying to get a sense, it’s very loose kind of understanding, but integrated

02:02:35 deeply as part of the technology.

02:02:37 Most of our tech is recommender systems.

02:02:39 Sorry, not tech – most of our online interactions are driven by recommender systems that learn very

02:02:46 little data about you and use that data, mostly based on traces of your previous

02:02:52 behavior, to suggest future things.

02:02:54 This is how Twitter, this is how Facebook works.

02:02:56 This is how AdSense or Google AdSense works, this is how Netflix, YouTube work and so on.

02:03:02 And for them to not just track engagement – how much time you spend on a particular video,

02:03:08 a particular site – but to also give you the technology to self report what makes

02:03:16 you feel good, what makes you grow as a person, what makes you, you know, the best version

02:03:23 of yourself, the Rogan idea of being the hero of your own movie.

02:03:31 And just add that little bit of information.

02:03:34 If you have people, you have this like happiness surveys of how you feel about the last five

02:03:39 days, how would you report your experience.

02:03:42 You can lay out the set of videos.
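
(One way to read this idea as a sketch: re-rank recommendations by blending the usual engagement signal with the user's own after-the-fact ratings. Everything here – the names, the weights, the data – is illustrative and assumed; it is not how YouTube or Facebook actually score content.)

# Hypothetical re-ranking that mixes predicted engagement with self-reported well being.
from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    predicted_watch_minutes: float   # engagement-style signal the platform already has
    self_reported_growth: float      # user's own rating of similar items, from -1 to +1

def score(c: Candidate, wellbeing_weight: float = 0.7) -> float:
    # Roughly normalize watch time into 0..1, then mix it with the self-report signal.
    engagement = min(c.predicted_watch_minutes / 60.0, 1.0)
    return (1 - wellbeing_weight) * engagement + wellbeing_weight * c.self_reported_growth

candidates = [
    Candidate("lecture on complex systems", 25, +0.8),
    Candidate("outrage compilation", 55, -0.6),
    Candidate("chefs cooking steaks", 15, +0.1),
]

for c in sorted(candidates, key=score, reverse=True):
    print(f"{score(c):+.2f}  {c.title}")

With the well-being weight turned up, the outrage compilation drops to the bottom even though it wins on raw watch time; turn the weight to zero and you get the pure time-on-site ranking back.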

02:03:45 It’s kind of fascinating, I don’t know if you ever look at YouTube, the history of videos

02:03:48 you’ve looked at.

02:03:49 It’s fascinating.

02:03:50 It’s very embarrassing for me.

02:03:52 Like it’ll be like a lecture and then like a set of videos that I don’t want anyone to

02:03:57 know about, which is, which is, which will be like, I don’t know, maybe like five videos

02:04:03 in a row where it looks like I watched the whole thing, which I probably did about like

02:04:07 how to cook a steak, even though, or just like with the best chefs in the world cooking

02:04:11 steaks and I’m just like sitting there watching it for no purpose whatsoever, wasting away

02:04:17 my life or like funny cat videos or like legit, that’s always a good one.

02:04:23 And I could look back and rate which videos made me a better person and not.

02:04:29 And I mean, on a more serious note, there’s a bunch of conversations, podcasts or lectures

02:04:34 I’ve watched, which made me a better person and some of them made me a worse person.

02:04:40 And honestly, not for stupid reasons, like I feel dumber, but because I do have a sense

02:04:45 that that started me on a path of, of not being kind to other people.

02:04:54 For example, I’ll give you a, for my own, and I’m sorry for ranting, but maybe there’s

02:04:58 some usefulness to this kind of exploration of self.

02:05:02 When I focus on creating, on programming, on science, I become a much deeper thinker

02:05:11 and a kinder person to others.

02:05:14 When I listen to too many, a little bit is good, but too many podcasts or videos about

02:05:20 how, how our world is melting down or criticizing ridiculous people, the worst of the quote

02:05:28 unquote woke, for example.

02:05:30 All there’s all these groups that are misbehaving in fascinating ways because they’ve been corrupted

02:05:35 by power.

02:05:37 The more I watch, the more I watch criticism of them, the worse I become.

02:05:44 And I’m aware of this, but I’m also aware that for some reason it’s pleasant to watch

02:05:49 those sometimes.

02:05:51 And so for, for me to be able to self report that to the YouTube algorithm, to the systems

02:05:56 around me, and they ultimately try to optimize to make me the best person, the best version

02:06:02 of myself, which I personally believe would make YouTube a lot more money because I’d

02:06:06 be much more willing to spend time on YouTube and give YouTube a lot more, a lot more of

02:06:11 my money.

02:06:12 That’s a, that’s great for business and great for humanity because it’ll make me a kinder

02:06:17 person.

02:06:18 It’ll increase the love quotient, the love metric, and it’ll make them a lot of money.

02:06:25 I feel like everything’s aligned.

02:06:27 And so you, you should do that not just for YouTube algorithm, but also for military strategy

02:06:31 and whether you go to war or not, because one externality you can think of about going

02:06:36 to war, which I think we talked about offline is we often go to war with kind of governments

02:06:42 with a, with, not with the people.

02:06:46 You have to think about the kids of countries that see a soldier and because of what they

02:06:57 experienced the interaction with the soldier, hate is born.

02:07:01 When you’re like eight years old, six years old, you lose your dad, you lose your mom,

02:07:07 you lose a friend, somebody close to you. That’s one really powerful externality that could

02:07:12 be reduced to love, positive and negative: the hate that’s born when you make decisions.

02:07:19 And that’s going to come to fruition – that little seed is going to become a tree that

02:07:25 then leads to the kind of destruction that we talk about.

02:07:30 So but in my sense, it’s possible to reduce everything to a measure of how much love does

02:07:35 this add to the world.

02:07:38 All that to say, do you have ideas of how we practically build systems that create a

02:07:48 resilient society?

02:07:49 There were a lot of good things that you shared where there’s like 15 different ways that

02:07:55 we could enter this that are all interesting.

02:07:57 So I’m trying to see which one will probably be most useful.

02:08:00 Pick the one or two things that are least ridiculous.

02:08:03 When you were mentioning if we could see some of the second order effects or externalities

02:08:11 that we aren’t used to seeing, specifically the one of a kid being radicalized somewhere

02:08:15 else, which engenders enmity in them towards us, which decreases our own future security.

02:08:20 Even if you don’t care about the kid, if you care about the kid, it’s a whole other thing.

02:08:24 Yeah, I mean, I think when we saw this, when Jane Fonda and others went to Vietnam and

02:08:30 took photos and videos of what was happening, and you got to see the pictures of the kids

02:08:34 with napalm on them, that like the antiwar effort was bolstered by that in a way it couldn’t

02:08:42 have been without that.

02:08:45 Until we can see the images, you can’t have a mirror neuron effect in the same way.

02:08:50 And when you can, that starts to have a powerful effect.

02:08:53 I think there’s a deep principle that you’re sharing there, which is that if we can have

02:09:01 a rivalrous intent where our in group, whatever it is, maybe it’s our political party wanting

02:09:07 to win within the US, maybe it’s our nation state wanting to win in a war or an economic

02:09:13 war over resource or whatever it is, that if we don’t obliterate the other people completely,

02:09:19 they don’t go away, they’re not engendered to like us more, they didn’t become less smart.

02:09:27 So they have more enmity towards us and whatever technologies we employed to be successful,

02:09:31 they will now reverse engineer, make iterations on and come back.

02:09:35 And so you drive an arms race, which is why you can see that the wars were over history

02:09:42 employing more lethal weaponry.

02:09:46 And not just the kinetic war, the information war and the narrative war and the economic

02:09:53 war, like it just increased capacity in all of those fronts.

02:09:58 And so what seems like a win to us on the short term might actually really produce losses

02:10:04 in the long term.

02:10:05 And what’s even in our own best interest in the long term is probably more aligned with

02:10:08 everyone else because we inter affect each other.

02:10:11 And I think the thing about globalism, globalization and exponential tech and the rate at which

02:10:16 we affect each other and the rate at which we affect the biosphere that we’re all affected

02:10:19 by is that this kind of proverbial spiritual idea that we’re all interconnected and need

02:10:28 to think about that in some way, that was easy for tribes to get because everyone in

02:10:33 the tribes so clearly saw their interconnection and dependence on each other.

02:10:37 But in terms of a global level, the speed at which we are actually interconnected, the

02:10:43 speed at which the harm happening to something in Wuhan affects the rest of the world or

02:10:48 a new technology developed somewhere affects the entire world or an environmental issue

02:10:52 or whatever is making it to where we either actually all get, not as a spiritual idea,

02:10:58 just even as physics, right?

02:10:59 We all get the interconnectedness of everything and that we either all consider that and see

02:11:04 how to make it through more effectively together or failures anywhere end up becoming decreased

02:11:10 quality of life and failures and increased risk everywhere.

02:11:12 Don’t you think people are beginning to experience that at the individual level?

02:11:16 So governments are resisting it.

02:11:18 They’re trying to make us not empathize with each other, feel connected.

02:11:21 But don’t you think people are beginning to feel more and more connected?

02:11:25 Like isn’t that exactly what the technology is enabling?

02:11:27 Like social networks, we tend to criticize them, but isn’t there a sense which we’re

02:11:34 experiencing, you know?

02:11:37 When you watch those videos that are criticizing, whether it’s the woke Antifa side or the QAnon

02:11:43 Trump supporter side, does it seem like they have increased empathy for people that are

02:11:50 outside of their ideological camp?

02:11:51 Not at all.

02:11:52 I may be conflating my own experience of the world and that of the populace.

02:12:04 I tend to see those videos as feeding something that’s a relic of the past.

02:12:12 They figured out that drama fuels clicks, but whether I’m right or wrong, I don’t know.

02:12:19 But I tend to sense that that is not, that hunger for drama is not fundamental to human

02:12:26 beings that we want to actually, that we want to understand Antifa and we want to empathize.

02:12:34 We want to take radical ideas and be able to empathize with them and synthesize it all.

02:12:41 Okay, let’s look at cultural outliers in terms of violence versus compassion.

02:12:51 We can see that a lot of cultures have relatively lower in group violence, bigger out group

02:12:58 violence, and there’s some variance in them and variance at different times based on the

02:13:01 scarcity or abundance of resource and other things.

02:13:04 But you can look at, say, Jains, whose whole religion is around nonviolence so much so

02:13:12 that they don’t even hurt plants, they only take fruits that fall off them and stuff.

02:13:16 Or to go to a larger population, you could take Buddhists, where for the most part, with

02:13:21 a few exceptions, for the most part across three millennia and across lots of different

02:13:25 countries and geographies and whatever, you have 10 million people plus or minus who don’t

02:13:30 hurt bugs.

02:13:33 The whole spectrum of genetic variance that is happening within a culture of that many

02:13:36 people and head traumas and whatever, and nobody hurts bugs.

02:13:41 And then you look at a group where the kids grew up as child soldiers in Liberia or Darfur,

02:13:47 where, to make it to adulthood, pretty much everybody’s killed people hand to hand and

02:13:51 killed people who were civilians or innocent types of people.

02:13:54 And you say, okay, so we were very neotenous, we can be conditioned by our environment and

02:14:00 humans can be conditioned where almost all the humans show up in these two different

02:14:05 bell curves.

02:14:06 It doesn’t mean that the Buddhists had no violence, it doesn’t mean that these people

02:14:08 had no compassion, but they’re very different Gaussian distributions.

02:14:14 And so I think one of the important things that I like to do is look at the examples

02:14:20 of the populations, what Buddhism shows regarding compassion or what Judaism shows around education,

02:14:28 the average level of education that everybody gets because of a culture that is really working

02:14:32 on conditioning it or various cultures.

02:14:35 What are the positive deviance outside of the statistical deviance to see what is actually

02:14:41 possible and then say, what are the conditioning factors and can we condition those across

02:14:47 a few of them simultaneously and could we build a civilization like that becomes a very

02:14:52 interesting question.

02:14:53 So there’s this kind of real politic idea that humans are violent, large groups of humans

02:14:59 become violent, they become irrational, specifically those two things, rivalrous and violent and

02:15:03 irrational.

02:15:05 And so in order to minimize the total amount of violence and have some good decisions,

02:15:08 they need to be ruled somehow.

02:15:10 And that not getting that is some kind of naive utopianism that doesn’t understand human

02:15:15 nature yet.

02:15:16 This gets back to like mimetic desire as an inexorable thing.

02:15:20 I think the idea of the masses is actually a kind of propaganda that is useful for the

02:15:26 classes that control to popularize the idea that most people are too violent, lazy, undisciplined

02:15:37 and irrational to make good choices and therefore their choices should be sublimated in some

02:15:42 kind of way.

02:15:43 I think that if we look back at these conditioning environments, we can say, okay, so the kids

02:15:50 that go to a really fancy school and have a good developmental environment like Exeter

02:15:58 Academy, there’s still a Gaussian distribution of how well they do on any particular metric,

02:16:03 but on average, they become senators and the worst ones become high end lawyers or whatever.

02:16:09 And then I look at the inner city school with a totally different set of things and I see

02:16:12 a very, very differently displaced Gaussian distribution, but a very different set of

02:16:15 conditioning factors.

02:16:16 And then I say the masses, well, if all those kids who were one of the parts of the masses

02:16:20 got to go to Exeter and have that family and whatever, would they still be the masses?

02:16:25 Could we actually condition more social virtue, more civic virtue, more orientation towards

02:16:32 dialectical synthesis, more empathy, more rationality widely?

02:16:37 Yes.

02:16:39 Would that lead to better capacity for something like participatory governance, democracy or

02:16:45 republic or some kind of participatory governance?

02:16:47 Yes.

02:16:48 Yes.

02:16:49 Is it necessary for it actually?

02:16:52 Yes.

02:16:54 And is it good for class interests?

02:16:57 Not really.

02:16:58 By the way, when you say class interests, this is the powerful leading over the less

02:17:03 powerful, that kind of idea.

02:17:06 Anyone that benefits from asymmetries of power doesn’t necessarily benefit from decreasing

02:17:12 those asymmetries of power and kind of increasing the capacity of people more widely.

02:17:20 And so, when we talk about power, we’re talking about asymmetries in agency, influence and

02:17:28 control.

02:17:29 Do you think that hunger for power is fundamental to human nature?

02:17:33 I think we should get that straight before we talk about other stuff.

02:17:36 So like this pick up line that I use at a bar often, which is power corrupts and absolute

02:17:43 power corrupts absolutely.

02:17:45 Is that true or is that just a fancy thing to say?

02:17:48 In modern society, there’s something to be said, have we changed as societies over time

02:17:55 in terms of how much we crave power?

02:17:58 That there is an impulse towards power that is innate in people and can be conditioned

02:18:03 one way or the other, yes, but you can see that Buddhist society does a very different

02:18:06 thing with it at scale, that you don’t end up seeing the emergence of the same types

02:18:13 of sociopathic behavior and particularly then creating sociopathic institutions.

02:18:21 And so, it’s like, is eating the foods that were rare in our evolutionary environment

02:18:28 that give us more dopamine hit because they were rare and they’re not anymore, salt,

02:18:31 sugar?

02:18:33 Is there something pleasurable about those where humans have an orientation to overeat

02:18:37 if they can?

02:18:38 Well, the fact that there is that possibility doesn’t mean everyone will obligately be obese

02:18:42 and die of obesity, right?

02:18:44 Like it’s possible to have a particular impulse and to be able to understand it, have other

02:18:49 ones and be able to balance them.

02:18:52 And so, to say that power dynamics are obligate in humans and we can’t do anything about it

02:19:00 is very similar to me to saying like everyone is going to be obligately obese.

02:19:05 Yeah.

02:19:06 So, there’s some degree to which the control of those impulses has to do with the conditioning

02:19:10 early in life.

02:19:11 Yes.

02:19:12 And the culture that creates the environment to be able to do that and then the recursion

02:19:16 on that.

02:19:17 Okay.

02:19:18 So, if we were to, bear with me, just asking for a friend, if we’re to kill all humans

02:19:24 on Earth and then start over, is there ideas about how to build up, okay, we don’t have

02:19:32 to kill, let’s leave the humans on Earth, they’re fine and go to Mars and start a new

02:19:38 society.

02:19:39 Is there ways to construct systems of conditioning, education of how we live with each other that

02:19:47 would incentivize us properly to not seek power, to not construct systems that are of

02:19:57 asymmetry of power and to create systems that are resilient to all kinds of terrorist attacks,

02:20:03 to all kinds of destructions?

02:20:06 I believe so.

02:20:08 Is there some inclination?

02:20:10 Of course, you probably don’t have all the answers, but you have insights about what

02:20:14 that looks like.

02:20:15 Yeah.

02:20:16 It’s just rigorous practice of dialectic synthesis as essentially conversations with assholes

02:20:23 of various flavors until they’re not assholes anymore because you become deeply empathetic

02:20:28 with their experience.

02:20:29 Okay.

02:20:30 So, there’s a lot of things that we would need to construct to come back to this, like

02:20:37 what is the basis of rivalry?

02:20:39 How do you bind it?

02:20:41 How does it relate to tech?

02:20:43 If you have a culture that is doing less rivalry, does it always lose in war to those who do

02:20:48 war better?

02:20:49 And how do you make something on the enactment of how to get there from here?

02:20:52 Great, great.

02:20:53 So what’s rivalry?

02:20:54 Well, is rivalry bad or good?

02:20:58 So is another word for rivalry competition?

02:21:01 Yes, I think roughly, yes.

02:21:05 I think bad and good are kind of silly concepts here.

02:21:10 Good for some things, bad for other things.

02:21:12 Bad for some contexts and others.

02:21:15 Even that.

02:21:16 Okay.

02:21:17 Let me give you an example that relates back to the Facebook measuring thing you were mentioning

02:21:21 a moment ago.

02:21:23 First, I think what you’re saying is actually aligned with the right direction and what

02:21:27 I want to get to in a moment, but it’s not, the devil is in the details here.

02:21:32 So I enjoy praise, it feeds my ego, I grow stronger.

02:21:36 So I appreciate that.

02:21:37 I will make sure to include one piece every 15 minutes as we go.

02:21:42 So it’s easier to measure, there are problems with this argument, but there’s also utility

02:21:53 to it.

02:21:54 So let’s take it for the utility it has first.

02:21:59 It’s harder to measure happiness than it is to measure comfort.

02:22:04 We can measure with technology that the shocks in a car are making the car bounce less, that

02:22:10 the bed is softer and, you know, material science and those types of things.

02:22:16 And happiness is actually hard for philosophers to define because some people find that there’s

02:22:23 certain kinds of overcoming suffering that are necessary for happiness.

02:22:26 There’s happiness that feels more like contentment and happiness that feels more like passion.

02:22:30 Is passion the source of all suffering or the source of all creativity?

02:22:32 Like there’s deep stuff and it’s mostly first person, not measurable third person stuff,

02:22:37 even if maybe it corresponds to third person stuff to some degree.

02:22:40 But we also see examples – some of our favorite examples are people who are in the worst environments

02:22:45 who end up finding happiness, right, where the third person stuff looks to be less conducive

02:22:49 and there’s some Viktor Frankl, Nelson Mandela, whatever.

02:22:54 But it’s pretty easy to measure comfort and it’s pretty universal.

02:22:57 And I think we can see that the Industrial Revolution started to replace happiness with

02:23:01 comfort quite heavily as the thing it was optimizing for.

02:23:05 And we can see that when increased comfort is given, maybe because of the evolutionary

02:23:09 disposition that expending extra calories when for the majority of our history we didn’t

02:23:14 have extra calories was not a safe thing to do.

02:23:17 Who knows why?

02:23:19 When extra comfort is given, it’s very easy to take that path, even if it’s not the path

02:23:25 that supports overall well being long term.

02:23:29 And so, we can see that, you know, when you look at the techno optimist idea that we have

02:23:37 better lives than Egyptian pharaohs and kings and whatever, what they’re largely looking

02:23:41 at is how comfortable our beds are and how comfortable the transportation systems are

02:23:47 and things like that, in which case there’s massive improvement.

02:23:50 But we also see that in some of the nations where people have access to the most comfort,

02:23:54 suicide and mental illness are the highest.

02:23:57 And we also see that some of the happiest cultures are actually some of the ones that

02:24:01 are in materially lame environments.

02:24:04 And so, there’s a very interesting question here, and if I understand correctly, you do

02:24:08 cold showers, and Joe Rogan was talking about how he needs to do some fairly intensive kind

02:24:13 of struggle that is a non comfort to actually induce being better as a person, this concept

02:24:21 of hormesis, that it’s actually stressing an adaptive system that increases its adaptive

02:24:27 capacity, and that there’s something that the happiness of a system has something to

02:24:33 do with its adaptive capacity, its overall resilience, health, well being, which requires

02:24:37 a decent bit of discomfort.

02:24:40 And yet, in the presence of the comfort solution, it’s very hard to not choose it, and then

02:24:46 as you’re choosing it regularly, to actually down regulate your overall adaptive capacity.

02:24:51 And so, when we start saying, can we make tech where we’re measuring for the things

02:25:00 that it produces beyond just the measure of GDP or whatever particular measures look like

02:25:06 the revenue generation or profit generation of my business, are all the meaningful things

02:25:13 measurable, and what are the right measures, and what are the externalities of optimizing

02:25:20 for that measurement set, what meaningful things aren’t included in that measurement

02:25:23 set, that might have their own externalities, these are some of the questions we actually

02:25:27 have to take seriously.

02:25:28 Yeah, and I think they’re answerable questions, right?

02:25:31 Progressively better, not perfect.

02:25:33 Right, so first of all, let me throw happiness and comfort out of the discussion; that distinction seems

02:25:37 kind of useless – I said they’re useful, well being is useful, but

02:25:43 I think I take it back.

02:25:47 I propose new metrics in this brainstorm session, which is, so one is like personal growth,

02:25:59 which is intellectual growth, I think we’re able to make that concrete for ourselves,

02:26:05 like you’re a better person than you were a week ago, or a worse person than you were

02:26:11 a week ago.

02:26:12 I think we can ourselves report that, and understand what that means, it’s this grey

02:26:18 area, and we try to define it, but I think we humans are pretty good at that, because

02:26:22 we have a sense, an idealistic sense of the person we might be able to become.

02:26:27 We all dream of becoming a certain kind of person, and I think we have a sense of getting

02:26:31 closer or not towards that person.

02:26:34 Maybe this is not a great metric, fine.

02:26:36 The other one is love, actually.

02:26:39 Like if you’re happy or not, or you’re comfortable or not, how much love do you have towards

02:26:45 your fellow human beings?

02:26:47 I feel like if you try to optimize that, and increasing that, that’s going to have, that’s

02:26:51 a good metric.

02:26:55 How many times a day, sorry, if I can quantify, how many times a day have you thought positively

02:27:00 of another human being?

02:27:02 Put that down as a number, and increase that number.

02:27:06 I think the process of saying, okay, so let’s not take GDP or GDP per capita as the metric

02:27:13 we want to optimize for, because GDP goes up during war, and it goes up with more healthcare

02:27:18 spending from sicker people, and various things that we wouldn’t say correlate to quality

02:27:21 of life.

02:27:23 Addiction drives GDP awesomely.

02:27:24 By the way, when I said growth, I wasn’t referring to GDP.

02:27:28 I know.

02:27:29 I’m giving an example now of the primary metric we use, and why it’s not an adequate metric,

02:27:33 because we’re exploring other ones.

02:27:35 So the idea of saying, what would the metrics for a good civilization be?

02:27:41 If I had to pick a set of metrics, what would the best ones be if I was going to optimize

02:27:44 for those?

02:27:46 And then really try to run the thought experiment more deeply, and say, okay, so what happens

02:27:51 if we optimize for that?

02:27:54 Try to think through the first, and second, and third order effects of what happens that’s

02:27:58 positive, and then also say, what negative things can happen from optimizing that?

02:28:03 What actually matters that is not included in that or in that way of defining it?

02:28:07 Because love versus number of positive thoughts per day, I could just make a long list of

02:28:11 names and just say positive thing about each one.

02:28:13 It’s all very superficial.

02:28:15 Not include animals or the rest of life, have a very shallow total amount of it, but I’m

02:28:20 optimizing the number, and I get some credit for the number.

02:28:24 And this is when I said the model of reality isn’t reality.

02:28:29 When you make a set of metrics that we’re going to optimize for this, whatever reality

02:28:33 is that is not included in those metrics can be the areas where harm occurs, which is why

02:28:38 I would say that wisdom is something like the discernment that leads to right choices

02:28:48 beyond what metrics based optimization would offer.

02:28:53 Yeah, but another way to say that is wisdom is a constantly expanding and evolving set

02:29:03 of metrics.

02:29:06 Which means that there is something in you that is recognizing a new metric that’s important

02:29:10 that isn’t part of that metric set.

02:29:11 So there’s a certain kind of connection, discernment, awareness, and this is an iterative game theory.

02:29:19 There’s Gödel’s incompleteness theorem, right?

02:29:20 Which is if the system, if the set of things is consistent, it won’t be complete.

02:29:24 So we’re going to keep adding to it, which is why we were saying earlier, I don’t think

02:29:27 it’s not beautiful.

02:29:30 And especially if you were just saying one of the metrics you want to optimize for at

02:29:32 the individual level is becoming, right?

02:29:34 That we’re becoming more.

02:29:35 Well, that then becomes true for the civilization and our metric sets as well.

02:29:39 And our definition of how to think about a meaningful life and a meaningful civilization.

02:29:44 I can tell you what some of my favorite metrics are.

02:29:46 What’s that?

02:29:50 Well love is obviously not a metric.

02:29:52 It’s like you can bench.

02:29:53 Yeah.

02:29:54 It’s a good metric.

02:29:55 Yeah.

02:29:56 I want to optimize that across the entire population, starting with infants.

02:30:01 So in the same way that love isn’t a metric, but you could make metrics that look at certain

02:30:06 parts of it.

02:30:07 The thing I’m about to say isn’t a metric, but it’s a, it’s a consideration because I

02:30:11 thought about this a lot.

02:30:12 I don’t think there is a metric, a right one.

02:30:16 I think that every metric by itself without this thing we talked about of the continuous

02:30:20 improvement becomes a paperclip maximizer.

02:30:22 I think that’s what the idea of a false idol means in terms of the model of reality

02:30:28 not being reality.

02:30:29 Then my sacred relationship is to reality itself, which also binds me to the unknown

02:30:34 forever.

02:30:35 To the known, but also to the unknown.

02:30:36 And there’s a sense of sacredness connected to the unknown that creates an epistemic humility

02:30:41 that is always seeking not just to optimize the thing I know, but to learn new stuff.

02:30:45 And to be open to perceive reality directly.

02:30:47 So my model never becomes sacred.

02:30:49 My model is useful.

02:30:50 My

02:30:51 So the model can’t be the false idol.

02:30:53 Correct.

02:30:54 Yeah.

02:30:55 And this is why the first verse of the Tao Te Ching is the Tao that is nameable is not

02:30:59 the eternal Tao.

02:31:00 The naming then can become the source of the 10,000 things that if you get too carried

02:31:04 away with it can actually obscure you from paying attention to reality beyond in the

02:31:08 models.

02:31:09 It sounds a lot, a lot like Stephen Wolfram, but in a different language, much more poetic.

02:31:14 I can imagine that.

02:31:15 No, I’m referring, I’m joking, but there’s echoes of cellular automata, which you can’t

02:31:20 name.

02:31:21 You can’t construct a good model of cellular automata.

02:31:24 You can only watch in awe.

02:31:26 I apologize.

02:31:27 I’m distracting your train of thought horribly and miserably making it different.

02:31:32 By the way, something robots aren’t good at: dealing with the uncertainty of uneven

02:31:36 ground.

02:31:37 You’ve been okay so far.

02:31:38 You’ve been doing wonderfully.

02:31:40 So what’s your favorite metrics?

02:31:41 Okay.

02:31:42 So I know you’re not a robot.

02:31:43 So I have a

02:31:44 So one metric, and there are problems with this, but one metric that I like to just as

02:31:50 a thought experiment to consider is because you’re actually asking, I mean, I know you

02:31:56 ask your guests about the meaning of life because ultimately when you’re saying what

02:32:01 is a desirable civilization, you can’t answer that without answering what is a meaningful

02:32:06 human life and to say what is a good civilization because it’s going to be in relationship to

02:32:11 that, right?

02:32:17 And then you have whatever your answer is, how do you know what is the epistemic basis

02:32:22 for postulating that?

02:32:25 There’s also a whole nother reason for asking that question.

02:32:27 I don’t, I mean, that doesn’t even apply to you whatsoever, which is, it’s interesting

02:32:34 how few people have been asked questions like it.

02:32:41 The joke is that these questions are silly, right?

02:32:45 It’s funny to watch a person and if I was more of an asshole, I would really stick on

02:32:50 that question.

02:32:51 Right.

02:32:52 It’s a silly question in some sense, but like we haven’t really considered what it means.

02:32:58 Just a more concrete version of that question is what is a better world?

02:33:03 What is the kind of world we’re trying to create really?

02:33:06 Have you really thought,

02:33:07 I’ll give you some kind of simple answers to that that are meaningful to me, but let

02:33:13 me do the societal indices first because they’re fun.

02:33:17 We should take a note of this meaningful thing because it’s important to come back to.

02:33:20 Are you reminding me to ask you about the meaning of life?

02:33:23 Noted.

02:33:24 Let me jot that down.

02:33:28 So, because I think I stopped tracking at like 25 open threads.

02:33:33 Okay.

02:33:34 Let it all burn.

02:33:36 One index that I find very interesting is the inverse correlation of addiction within

02:33:42 the society.

02:33:45 The more a society produces addiction within the people in it, the less healthy I think

02:33:50 the society is as a pretty fundamental metric.

02:33:54 And so the more the individuals feel that there are fewer compulsive things compelling

02:34:01 them to behave in ways that are destructive to their own values.

02:34:06 And insofar as a civilization is conditioning and influencing the individuals within it,

02:34:12 the inverse of addiction.

02:34:14 Lovely defined.

02:34:15 Correct.

02:34:16 Addiction.

02:34:17 What’s it?

02:34:18 Yeah.

02:34:19 Compulsive behavior that is destructive towards things that we value.

02:34:25 Yeah.

02:34:28 I think that’s a very interesting one to think about.

02:34:29 That’s a really interesting one.

02:34:30 And this is then also where comfort and addiction start to get very close.

02:34:35 And the ability to go in the other direction from addiction is the ability to be exposed

02:34:40 to hypernormal stimuli and not go down the path of desensitizing to other stimuli and

02:34:46 needing that hypernormal stimuli, which does involve a kind of hormesis.

02:34:51 So I do think the civilization of the future has to create something like ritualized discomfort.

02:35:00 And I think that’s what the sweat lodge and the vision quest and the solo journey and

02:35:11 the ayahuasca journey and the Sundance were.

02:35:13 I think it’s even a big part of what yoga asana was, is to make beings that are resilient

02:35:20 and strong, they have to overcome some things.

02:35:23 To make beings that can control their own mind and fear, they have to face some fears.

02:35:27 But we don’t want to put everybody in war or real trauma.

02:35:31 And yet we can see that the most fucked up people we know had childhoods of a lot of

02:35:35 trauma.

02:35:36 But some of the most incredible people we know had childhoods of a lot of trauma, whether

02:35:40 or not they happened to make it through and overcome that or not.

02:35:43 So how do we get the benefits of the steeling of character and the resilience and the whatever

02:35:49 that happened from the difficulty without traumatizing people?

02:35:52 A certain kind of ritualized discomfort that not only has us overcome something by ourselves,

02:36:01 but overcome it together with each other where nobody bails when it gets hard because the

02:36:05 other people are there.

02:36:06 So it’s both a resilience of the individuals and a resilience of the bonding.

02:36:11 So I think we’ll keep getting more and more comfortable stuff, but we have to also develop

02:36:15 resilience in the presence of that for the anti addiction direction and the fullness

02:36:21 of character and the trustworthiness to others.

02:36:24 So you have to be consistently injecting discomfort into the system, ritualized.

02:36:30 I mean, this sounds like you have to imagine Sisyphus happy.

02:36:34 You have to imagine Sisyphus with his rock, optimally resilient from a metrics perspective

02:36:45 in society.

02:36:47 So we want to constantly be throwing rocks at ourselves.

02:36:52 Not constantly.

02:36:54 You didn’t have to frequently, periodically, and there’s different levels of intensity,

02:37:00 different periodicities.

02:37:01 Now, I do not think this should be imposed by states.

02:37:05 I think it should emerge from cultures.

02:37:09 And I think the cultures are developing people that understand the value of it.

02:37:12 So there is both a cultural cohesion to it, but there’s also a voluntaryism because the

02:37:19 people value the thing that is being developed and understand it.

02:37:22 And that’s what conditioning, it’s conditioning some of these values.

02:37:28 Conditioning is a bad word because we like our idea of sovereignty, but when we recognize

02:37:32 the language that we speak and the words that we think in and the patterns of thought built

02:37:38 into that language and the aesthetics that we like and so much is conditioned in us just

02:37:42 based on where we’re born, you can’t not condition people.

02:37:45 So all you can do is take more responsibility for what the conditioning factors are.

02:37:48 And then you have to think about this question of what is a meaningful human life?

02:37:51 Because we’re, unlike the other animals born into environment that they’re genetically

02:37:55 adapted for, we’re building new environments that we were not adapted for, and then we’re

02:37:59 becoming affected by those.

02:38:02 So then we have to say, well, what kinds of environments, digital environments, physical

02:38:06 environments, social environments would we want to create that would develop the healthiest,

02:38:13 happiest, most moral, noble, meaningful people?

02:38:16 What are even those sets of things that matter?

02:38:18 So you end up getting deep existential consideration at the heart of civilization design when you

02:38:23 start to realize how powerful we’re becoming and how much what we’re building it in service

02:38:27 towards matters.

02:38:28 Before I pull it, I think three threads you just laid down, is there another metric index

02:38:34 that you’re interested in?

02:38:35 There’s one more that I really like.

02:38:39 There are a number, but for the next one that comes to mind I have to make a very quick model.

02:38:51 Healthy human bonding, say we were in a tribal type setting, my positive emotional states

02:38:58 and your positive emotional states would most of the time be correlated, your negative emotional

02:39:03 states and mine.

02:39:04 And so you start laughing, I start laughing, you start crying, my eyes might tear up.

02:39:10 And we would call that the compassion compersion axis.

02:39:15 I would, this is a model I find useful.

02:39:18 So compassion is when you’re feeling something negative, I feel some pain, I feel some empathy,

02:39:23 something in relationship.

02:39:24 Compersion is when you do well, I’m stoked for you, right?

02:39:27 Like I actually feel happiness at your happiness.

02:39:29 I like compersion.

02:39:30 Yeah, the fact that it’s such an uncommon word in English is actually a problem culturally.

02:39:35 Because I feel that often, and I think that’s a really good feeling to feel and maximize

02:39:40 for actually.

02:39:41 That’s actually the metric I’m going to say is the compassion compersion axis is the thing

02:39:46 I would optimize for.

02:39:47 Now, there is a state where my emotional states and your emotional states are just totally

02:39:53 decoupled.

02:39:55 And that is like sociopathy.

02:39:57 I don’t want to hurt you, but I don’t care if I do or for you to do well or whatever.

02:40:01 But there’s a worse state and it’s extremely common, which is where they’re inversely coupled.

02:40:06 Where my positive emotions correspond to your negative ones and vice versa.

02:40:11 And that is the, I would call it the jealousy sadism axis.

02:40:17 The jealousy axis is when you’re doing really well, I feel something bad.

02:40:20 I feel taken away from, less than, upset, envious, whatever.

02:40:26 And that’s so common, but I think of it as kind of a low grade psychopathology that we’ve

02:40:34 just normalized.

02:40:36 The idea that I’m actually upset at the happiness or fulfillment or success of another is like

02:40:41 a profoundly fucked up thing.

02:40:42 No, we shouldn’t shame it and repress it so it gets worse.

02:40:45 We should study it.

02:40:46 Where does it come from?

02:40:47 And it comes from our own insecurities and stuff.

02:40:50 But then the next part that everybody knows is really fucked up is just on the same axis.

02:40:55 It’s the same inverted, which is to the jealousy or the envy is the, I feel badly when you’re

02:41:01 doing well.

02:41:02 The sadism side is I actually feel good when you lose or when you’re in pain, I feel some

02:41:06 happiness that’s associated.

02:41:07 And you can see when someone feels jealous, sometimes they feel jealous with a partner

02:41:12 and then they feel they want that partner to get it, revenge comes up or something.

02:41:17 So sadism is really like jealousy is one step on the path to sadism from the healthy compassion

02:41:23 compersion axis.

02:41:24 So, I would like to see a society that is inversely, that is conditioning sadism and

02:41:30 jealousy inversely, right?

02:41:32 The lower that amount and the more the compassion compersion.

02:41:36 And if I had to summarize that very simply, I’d say it would optimize for compersion.

02:41:42 Which is because notice that’s not just saying love for you where I might be self sacrificing

02:41:47 and miserable and I love people, but I kill myself, which I don’t think anybody thinks

02:41:52 is a great idea.

02:41:53 Or happiness, where I might be sociopathically happy, where I’m causing problems all over

02:41:56 the place or even sadistically happy, but it’s a coupling, right?

02:42:00 That I’m actually feeling happiness in relationship to yours and even in causal relationship where

02:42:04 I, my own agentic desire to get happier wants to support you too.
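
(A toy sketch of the axis being described here: treat two people's emotional states over time as series and look at their correlation. The thresholds and the use of plain correlation are assumptions for illustration, not a validated psychological measure.)

# Classify the coupling between two people's emotional-state series.
from statistics import correlation  # available in Python 3.10+

def coupling(my_states, your_states, threshold=0.3):
    r = correlation(my_states, your_states)
    if r > threshold:
        return f"compassion/compersion coupling (r={r:+.2f})"
    if r < -threshold:
        return f"jealousy/sadism coupling (r={r:+.2f})"
    return f"decoupled, sociopathy-like (r={r:+.2f})"

you = [+2, -1, +3, 0, -2, +1]          # your emotional states over time
me_empathic = [+1, -1, +2, 0, -1, +1]  # mine roughly tracking yours
me_envious = [-2, +1, -3, 0, +2, -1]   # mine inverted relative to yours

print(coupling(me_empathic, you))  # positive coupling
print(coupling(me_envious, you))   # negative coupling

By this framing, a society optimizing for compersion is trying to push that correlation positive across its population rather than letting it sit at zero or go negative.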

02:42:09 That’s actually speaking of another pickup line.

02:42:13 That’s quite honestly what I, as a guy who is single, this is going to come out very

02:42:19 ridiculous because it’s like, oh yeah, where’s your girlfriend, bro?

02:42:22 But that’s what I look for in a relationship, because it’s

02:42:32 such an amazing life when you actually get joy from another person’s success and

02:42:38 they get joy from your success.

02:42:40 And then you don’t actually need to succeed much for that to become a

02:42:45 loop, a cycle of happiness that just keeps increasing, almost exponentially.

02:42:52 It’s weird.

02:42:53 So like just be, just enjoying the happiness of others, the success of others.

02:42:58 So this is, let’s call it, because the first person that drilled this into

02:43:02 my head is Rogan, Joe Rogan.

02:43:05 He was the embodiment of that, because I saw somebody who is successful, rich, and it was nonstop and

02:43:12 true.

02:43:13 I mean, you could tell when somebody is full of shit and when somebody is really, genuinely

02:43:19 enjoying the success of his friends.

02:43:22 That was weird to me.

02:43:23 That was interesting.

02:43:24 And I mean, the way you’re kind of speaking to it, the reason Joe stood out to me is I

02:43:30 guess I haven’t witnessed genuine expression of that often in this culture of just real

02:43:36 joy for others.

02:43:38 I mean, part of that has to do with the fact that there haven’t been many channels where you can watch or

02:43:43 listen to people being their authentic selves.

02:43:46 So I’m sure there’s a bunch of people who live life with compersion.

02:43:49 They probably don’t seek public attention either, but yeah, if there was any

02:43:56 word that could express what I’ve learned from Joe, why he’s been a really inspiring

02:44:00 figure, it’s that compersion.

02:44:03 And I wish our world had a lot more of that. I mean, sorry

02:44:12 to go on a small tangent, but you’re speaking about how society should function.

02:44:19 But I feel like if you optimize for that metric in your own personal life, you’re going to

02:44:25 live a truly fulfilling life.

02:44:27 I don’t know what the right word to use, but that’s a really good way to live life.

02:44:32 You will also learn what gets in the way of it and how to work with it, so that if you wanted

02:44:37 to help build systems at scale, or apply Facebook or exponential technologies to do

02:44:42 that, you would have more actual depth of real knowledge of what that takes.

02:44:48 And this is, you know, as you mentioned that there’s this virtuous cycle between when you

02:44:52 get stoked on other people doing well and then they have a similar relationship to you

02:44:55 and everyone is in the process of building each other up.

02:44:59 And this is what I would say the healthy version of competition is versus the unhealthy version.

02:45:05 The healthy version, right, the root, I believe it’s a Latin word that means to strive together.

02:45:12 And it’s that impulse of becoming where I want to become more, but I recognize that

02:45:16 there’s actually a hormesis.

02:45:17 There’s a challenge that is needed for me to be able to do that.

02:45:21 But that means that, yes, there’s an impulse where I’m trying to get ahead.

02:45:24 Maybe I’m even trying to win, but I actually want a good opponent and I want them to get

02:45:28 ahead too because that is where my ongoing becoming happens and the win itself will get

02:45:32 boring very quickly.

02:45:34 The ongoing becoming is where there’s aliveness and for the ongoing becoming, they need to

02:45:39 have it too.

02:45:40 And that’s the strive together.

02:45:41 So, in the healthy competition, I’m stoked when they’re doing really well because my

02:45:44 becoming is supported by it.

02:45:47 Now this is actually a very nice segue into a model I like about what a meaningful human

02:45:55 life is, if you want to go there.

02:46:00 Let’s go there.

02:46:01 I have three things I want to go elsewhere with, but first, let us take this short

02:46:08 stroll through the park of the meaning of life.

02:46:12 Daniel, what is a meaningful life?

02:46:16 I think the semantics end up mattering because a lot of people will take the word meaning

02:46:24 and the word purpose almost interchangeably and they’ll think kind of, what is the meaning

02:46:30 of my life?

02:46:31 What is the meaning of human life?

02:46:32 What is the meaning of life?

02:46:33 What’s the meaning of the universe?

02:46:35 And what is the meaning of existence rather than nonexistence?

02:46:38 So, there’s a lot of kind of existential considerations there and I think there’s some

02:46:43 cognitive mistakes that are very easy, like taking the idea of purpose.

02:46:48 Which is like a goal?

02:46:49 Which is a utilitarian concept.

02:46:51 The purpose of one thing is defined in relationship to other things that have assumed value.

02:46:59 And to say, what is the purpose of everything?

02:47:00 Well, purpose is too small of a question.

02:47:03 It’s fundamentally a relative question within everything.

02:47:05 What is the purpose of one thing relative to another?

02:47:07 What is the purpose of everything?

02:47:08 And there’s nothing outside of it with which to say it.

02:47:11 We actually just got to the limits of the utility of the concept of purpose.

02:47:16 It doesn’t mean it’s purposeless in the sense of something inside of it being purposeless.

02:47:19 It means the concept is too small.

02:47:21 Which is why you end up getting to, you know, like in Taoism, talking about the nature of

02:47:27 it.

02:47:28 Rather than a why, there’s a fundamental what, where the why can’t go any deeper; that is the nature of it.

02:47:35 But I’m going to try to speak to a much simpler part, which is when people think about what

02:47:40 is a meaningful human life.

02:47:42 And kind of, if we were to optimize for something at the level of individual life, but also,

02:47:48 how does optimizing for this at the level of the individual life lead to the best society,

02:47:54 insofar as people living that way affect others and, long term, the world as a whole?

02:47:59 And how would we then make a civilization that was trying to think about these things?

02:48:05 Because you can see that there are a lot of dialectics where there’s value on two sides,

02:48:13 individualism and collectivism or the ability to accept things and the ability to push harder

02:48:20 and whatever.

02:48:22 And there’s failure modes on both sides.

02:48:25 And so, when you were starting to say, okay, individual happiness, you’re like, wait, fuck,

02:48:29 sadists can be happy while hurting people.

02:48:31 It’s not individual happiness, it’s love.

02:48:32 But wait, some people can self sacrifice out of love in a way that actually ends up just

02:48:36 creating codependency for everybody.

02:48:39 Or okay, so how do we think about all those things together?

02:48:48 This kind of came to me as a simple way that I kind of relate to it is that a meaningful

02:48:54 life involves the mode of being, the mode of doing and the mode of becoming.

02:49:00 And it involves a virtuous relationship between those three and that any of those modes on

02:49:07 their own also have failure modes that are not a meaningful life.

02:49:12 The mode of being, the way I would describe it, if we’re talking about the essence of

02:49:20 it is about taking in and appreciating the beauty of life that is now.

02:49:25 It’s a mode that is in the moment and that is largely about being with what is.

02:49:33 It’s fundamentally grounded in the nature of experience and the meaningfulness of experience.

02:49:37 The prima facie meaningfulness of when I’m having this experience, I’m not actually asking

02:49:42 what the meaning of life is, I’m actually full of it.

02:49:45 I’m full of experiencing it.

02:49:46 The momentary experience, the moment.

02:49:49 Yes.

02:49:50 So taking in the beauty of life.

02:49:54 Doing is adding to the beauty of life.

02:49:56 I’m going to produce some art, I’m going to produce some technology that will make life

02:49:59 easier and more beautiful for somebody else.

02:50:01 I’m going to do some science that will end up leading to better insights or other people’s

02:50:08 ability to appreciate the beauty of life more because they understand more about it or whatever

02:50:11 it is or protect it, right?

02:50:13 I’m going to protect it in some way.

02:50:14 But that’s adding to or being in service of the beauty of life through our doing.

02:50:19 And becoming is getting better at both of those.

02:50:23 Being able to deepen our being, which is to be able to take in the beauty of life more

02:50:26 profoundly, be more moved by it, touched by it, and increasing our capacity with doing

02:50:32 to add to the beauty of life more.

02:50:37 So I hold that a meaningful life has to be all three of those.

02:50:42 And where they’re not in conflict with each other, ultimately it grounds in being, it

02:50:48 grounds in the intrinsic meaningfulness of experience.

02:50:52 And then my doing is ultimately something that will be able to increase the possibility

02:50:57 of the quality of experience for others.

02:51:00 And my becoming is a deepening on those.

02:51:03 So it grounds in experience and also in the evolutionary possibility of experience.

02:51:09 And the point is to oscillate between these, never getting stuck on any one or I suppose

02:51:18 in parallel, well you can’t really, attention is a thing, you can only allocate attention.

02:51:26 I want moments where I am absorbed in the sunset and I’m not thinking about what to

02:51:31 do next.

02:51:32 Yeah.

02:51:33 And then the fullness of that can make it to where my doing doesn’t come from what’s

02:51:39 in it for me because I actually feel overwhelmingly full already.

02:51:45 And then it’s like, how can I make life better for other people that didn’t have as many opportunities as

02:51:51 I had?

02:51:52 How can I add something wonderful?

02:51:53 How can I just be in the creative process?

02:51:56 And so I think where the doing comes from matters. If the doing comes from a fullness

02:52:01 of being, it’s inherently going to be paying attention to externalities, or it’s more oriented

02:52:08 to do that, than if it comes from some emptiness that is trying to get full in some way, that

02:52:12 is willing to cause sacrifices in other places, and where a chunk of its attention is internally

02:52:15 focused.

02:52:18 And so Buddha said desire is the cause of all suffering, yet later the vow of the

02:52:23 Bodhisattva, which was to show up for all sentient beings in the universe forever, is a pretty intense

02:52:29 thing, like a desire.

02:52:32 I would say there is a kind of desire, if we think of desire as a basis for movement

02:52:36 like a flow or a gradient, there’s a kind of desire that comes from something missing

02:52:39 inside seeking fulfillment of that in the world.

02:52:43 That ends up being the cause of actions that perpetuate suffering.

02:52:46 But there’s also not just non desire, there’s a kind of desire that comes from feeling full

02:52:51 at the beauty of life and wanting to add to it that is a flow this direction.

02:52:57 And I don’t think that is the cause of suffering.

02:52:59 I think that is, you know, as opposed to the Western traditions, the Eastern traditions

02:53:04 focused on that, a kind of unconditional happiness, in the moment, outside

02:53:08 of time.

02:53:09 The Western tradition said, no, actually, desire is the source of creativity and we’re

02:53:12 here to be made in the image and likeness of the creator.

02:53:15 We’re here to be fundamentally creative.

02:53:17 But creating from where and in service of what?

02:53:21 Creating from a sense of connection to everything and wholeness in service of the well being

02:53:24 of all of it is very different.

02:53:28 Which is back to that compassion, compersion axis.

02:53:31 Being, doing, becoming.

02:53:34 It’s pretty powerful.

02:53:38 That could potentially be algorithmized into a robot, just saying. Where does death come

02:53:50 into that?

02:53:54 Being is forgetting, I mean, forgetting the concept of time completely.

02:53:59 There’s a sense to doing and becoming that has a deadline built in, the urgency built

02:54:07 in.

02:54:08 Do you think death is fundamental to this, to a meaningful life?

02:54:16 Acknowledging or feeling the terror of death, like Ernest Becker, or just acknowledging

02:54:25 the uncertainty, the mystery, the melancholy nature of the fact that the ride ends.

02:54:31 Is that part of this equation or it’s not necessary?

02:54:34 Okay, let’s look at how it could be related.

02:54:37 I’ve experienced fear of death.

02:54:40 I’ve also experienced times where I thought I was going to die that felt extremely peaceful

02:54:47 and beautiful.

02:54:50 And it’s funny because we can be afraid of death because we’re afraid of hell or bad

02:54:59 reincarnation or the bardo or some kind of idea of the afterlife we have or we’re projecting

02:55:03 some kind of sentient suffering.

02:55:05 But if we’re afraid of just non experience, I noticed that every time I stay up late enough

02:55:12 that I’m really tired, I’m longing for deep sleep and non experience, right?

02:55:18 Like I’m actually longing for experience to stop.

02:55:21 And it’s not morbid, it’s not a bummer.

02:55:26 And I don’t mind falling asleep and sometimes when I wake up, I want to go back into it

02:55:30 and then when it’s done, I’m happy to come out of it.

02:55:34 So when we think about death and having finite time here, and we could talk about if we live

02:55:44 for a thousand years instead of a hundred or something like that, it would still be

02:55:47 finite time.

02:55:49 The one bummer with the age we die is that I generally find that people mostly start

02:55:53 to emotionally mature just shortly before they die.

02:55:58 But if I get to live forever, I can just stay focused on what’s in it for me forever.

02:56:15 And if life continues and consciousness and sentience and people appreciating beauty and

02:56:20 adding to it and becoming continues, my life doesn’t, but my life can have effects that

02:56:25 continue well beyond it, then life with a capital L starts mattering more to me than

02:56:31 my life.

02:56:32 My life gets to be a part of and in service to.

02:56:35 And the whole thing about when old men plant trees, the shade of which they’ll never get

02:56:40 to be in.

02:56:41 I remember the first time I read this poem by Hafez, the Sufi poet, written in like 13th

02:56:49 century or something like that, and he talked about that if you’re lonely, to think about

02:56:56 him and he was kind of leaning his spirit into yours across the distance of a millennium

02:57:01 and would comfort you with these poems and just thinking about people a millennium from

02:57:06 now and caring about their experience and what they’d be suffering if they’d be lonely

02:57:10 and could he offer something that could touch them.

02:57:13 And it’s just fucking beautiful.

02:57:15 And so like the most beautiful parts of humans have to do with something that transcends

02:57:20 what’s in it for me.

02:57:23 And death forces you to that.

02:57:25 So not only does death create the urgency of doing, you’re very right, it does have

02:57:34 a sense in which it incentivizes the compersion and the compassion.

02:57:42 And the widening, you remember Einstein had that quote, something to the effect of it’s

02:57:46 an optical delusion of consciousness to believe there are separate things.

02:57:50 There’s this one thing we call universe and something about us being inside of a prison

02:57:56 of perception that can only see a very narrow little bit of it.

02:58:02 But this might be just some weird disposition of mine, but when I think about the future

02:58:10 after I’m dead and I think about consciousness, I think about young people falling in love

02:58:18 for the first time and their experience, and I think about people being awed by sunsets

02:58:22 and I think about all of it, right?

02:58:27 I can’t not feel connected to that.

02:58:30 Do you feel some sadness to the very high likelihood that you will be forgotten completely

02:58:37 by all of human history, you, Daniel, the name, that which cannot be named?

02:58:46 Systems like to self perpetuate, egos do that.

02:58:52 The idea that I might do something meaningful that future people will appreciate, of course

02:58:56 there’s like a certain sweetness to that idea.

02:59:00 But I know how many people did something, did things that I wouldn’t be here without

02:59:05 and that my life would be less without, whose names I will never know.

02:59:09 And I feel a gratitude to them, I feel a closeness, I feel touched by that, and I think to the

02:59:15 degree that the future people are conscious enough, there is, you know, a lot of traditions

02:59:22 have this kind of "are we being good ancestors" idea, and a respect for the ancestors beyond the names.

02:59:26 I think that’s a very healthy idea.

02:59:30 But let me return to a much less beautiful and a much less pleasant conversation.

02:59:36 You mentioned prison.

02:59:37 Back to X risk, okay.

02:59:41 And conditioning.

02:59:43 You mentioned something about the state.

02:59:48 So what role, let’s talk about companies, governments, parents, all the mechanisms that

02:59:56 can be a source of conditioning.

02:59:58 Which flavor of ice cream do you like?

03:00:01 Do you think the state is the right thing for the future?

03:00:05 So governments that are elected, democratic systems, representative

03:00:10 democracy.

03:00:11 Is there some kind of political system of governance that you find appealing?

03:00:17 Is it parents, meaning very close knit tribes, that are the most essential source of conditioning?

03:00:26 Or would you and Michael Malice happily agree that it’s anarchy, that the state should

03:00:34 be dissolved or destroyed, or burned to the ground if you’re Michael Malice, giggling,

03:00:42 holding the torch as the fire burns?

03:00:46 So which is it? Can the state be good?

03:00:50 Or is the state bad for the conditioning of a beautiful world, A or B?

03:00:57 This is like an SAT test.

03:00:58 You like to give these simplified good or bad things.

03:01:03 Would I like the state that we live in currently, the United States federal government to stop

03:01:08 existing today?

03:01:09 No, I would really not like that.

03:01:11 I think that would be quite bad for the world in a lot of ways.

03:01:16 Do I think that it’s an optimal social system, maximally just and humane and all those

03:01:23 things,

03:01:24 and I want it to continue as is?

03:01:25 No, also not that.

03:01:26 But I am much more interested in it being able to evolve into a better thing without going

03:01:32 through the catastrophe phase that I think its sudden nonexistence would bring.

03:01:38 So what size of state is good? In a sense, should we, as a human society, as this

03:01:45 world becomes more globalized,

03:01:47 be constantly striving to reduce it? We can put on a map, right

03:01:53 now, literally, the centers of power in the world, some of them are tech companies,

03:02:02 some of them are governments. Should we be trying, as much as possible, to decentralize

03:02:06 the power to where it’s very difficult to point to the centers of power on the map?

03:02:12 And that means making the state, well, there’s a bunch of different ways to make the government

03:02:18 much smaller. In the United States, that could be reducing the funding for the

03:02:28 government, all those kinds of things, its set of responsibilities, its set of powers.

03:02:33 It could be, I mean, this is far out, but making more nations, or maybe nations not

03:02:40 defined by geographic location, but rather in the space of ideas,

03:02:45 which is what anarchy is about.

03:02:47 So anarchy is about forming collectives based on their set of ideas, and doing so dynamically

03:02:52 not based on where you were born, and so on.

03:02:56 I think we can say that the natural state of humans, if we want to describe such a thing,

03:03:03 is to live in tribes that were below the Dunbar number, meaning that for a few hundred thousand

03:03:11 years of human history, all of the groups of humans mostly stayed under that size.

03:03:16 And whenever it would get up to that size, it would end up cleaving.

03:03:19 And so it seems like there’s a pretty strong pattern there. But there weren’t individual humans out in

03:03:23 the wild doing really well, right?

03:03:25 So we were a group animal, but with groups that had a specific size.

03:03:28 So we could say, in a way, humans were being domesticated by those groups.

03:03:32 They were learning how to have certain rules to participate with the group, without which

03:03:36 you’d get kicked out.

03:03:37 But that’s still the wild state of people.

03:03:40 And maybe it’s useful to make a side statement. I’ve recently looked at a bunch of

03:03:45 papers around Dunbar’s number, where the mean is supposedly 150.

03:03:49 If you actually look at the original papers, it’s a range.

03:03:51 It’s really a range.

03:03:53 So it’s actually somewhere under a thousand.

03:03:56 So it’s a range of like two to 500 or whatever it is.

03:03:59 But you could argue that, I think it actually is exactly two, the range is two

03:04:05 to 520, something like that.

03:04:08 And the 150 is the mean that’s taken crudely.

03:04:12 It’s not a very good paper, numerically speaking.

03:04:18 But it’d be interesting if there’s a bunch of Dunbar numbers that could be computed for

03:04:24 particular environments, particular conditions, so on.

03:04:26 It is very likely that they’d all be something small, you know, under a million.

03:04:32 But it’d be interesting if we can expand that number in interesting ways that will change

03:04:36 the fabric of this conversation.

03:04:37 I just want to kind of throw that in there.

03:04:39 I don’t know if the 150 is baked in somehow into the hardware.
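
A quantitative aside on the number being discussed: Dunbar's figure is the midpoint of a statistical extrapolation, not a directly measured human constant, which is part of why it reads as a range. A minimal sketch of the form of the 1992 estimate; the fitted coefficients a and b, the point estimate, and the interval below are as commonly cited and should be checked against the original paper:

    \log_{10}(\text{mean group size}) = a + b \, \log_{10}(\text{neocortex ratio})
    \text{extrapolated to the human neocortex ratio: } N \approx 148, \quad 95\%\ \text{CI roughly } 100 \text{ to } 230

So the commonly quoted 150 is the crude center of a wide interval rather than a hard number.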

03:04:43 We can talk about some of the things that it probably has to do with.

03:04:47 Up to a certain number of people.

03:04:50 And this is going to be variable based on the social technologies that mediate it to

03:04:53 some degree.

03:04:54 We’ll talk about that in a minute.

03:04:59 Up to a certain number of people, everybody can know everybody else pretty intimately.

03:05:04 So let’s go ahead and just take 150 as an average number.

03:05:12 Everybody can know everyone intimately enough that if your actions made anyone else do poorly,

03:05:18 it’s your extended family and you’re stuck living with them and you know who they are

03:05:22 and there’s no anonymous people.

03:05:24 There’s no just them and over there.

03:05:27 And that’s one part of what leads to a kind of tribal process where what’s good for the

03:05:32 individual and what’s good for the whole have a coupling.
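
A bit of arithmetic behind why this coupling stops scaling: the number of pairwise relationships in a group of N people grows roughly with the square of N, so mutual knowledge gets expensive fast. The group sizes below are just illustrative:

    \binom{N}{2} = \frac{N(N-1)}{2}
    N = 150 \Rightarrow 11{,}175 \text{ pairs} \qquad N = 1500 \Rightarrow 1{,}124{,}250 \text{ pairs}

A tenfold increase in people means roughly a hundredfold increase in relationships to track, which is one way to see why "everybody knows everybody" stops being possible well before groups get very large.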

03:05:35 Also below that scale, everyone is somewhat aware of what everybody else is doing.

03:05:41 There’s not groups that are very siloed.

03:05:44 And as a result, it’s actually very hard to get away with bad behavior.

03:05:47 There’s a kind of forced transparency.

03:05:50 And so you don’t need kind of like the state in that way.

03:05:55 But lying to people doesn’t actually get you ahead.

03:05:58 Sociopathic behavior doesn’t get you ahead because it gets seen.

03:06:01 And so there’s a conditioning environment where the individual behaving in a way

03:06:06 that is aligned with the interest of the tribe is what gets conditioned.

03:06:11 When it gets to be a much larger system, it becomes easier to hide certain things from

03:06:16 the group as a whole as well as to be less emotionally bound to a bunch of anonymous people.

03:06:22 I would say there’s also a communication protocol where up to about that number of people, we

03:06:29 could all sit around a tribal council and be part of a conversation around a really

03:06:33 big decision.

03:06:34 Do we migrate?

03:06:35 Do we not migrate?

03:06:36 Do we, you know, something like that?

03:06:37 Do we get rid of this person?

03:06:39 And why would I want to agree to be a part of a larger group where everyone can’t be

03:06:47 part of that council?

03:06:49 And so I am going to now be subject to law that I have no say in if I could be part of

03:06:54 a smaller group that could still survive and I get a say in the law that I’m subject to.

03:06:58 So I think the cleaving, and a way we can look at it beyond the Dunbar number too, is we can

03:07:03 look at it as a civilization having binding energy that is holding it together and cleaving

03:07:08 energy.

03:07:09 And if the binding energy exceeds the cleaving energy, that civilization will last.

03:07:12 And so there are things that we can do to decrease the cleaving energy within the society,

03:07:16 things we can do to increase the binding energy.

03:07:18 I think naturally we saw that it had certain characteristics up to a certain size, a kind

03:07:22 of tribalism.

03:07:24 That ended with a few things.

03:07:25 It ended with people having migrated enough that when you started to get resource wars,

03:07:31 you couldn’t just migrate away easily.

03:07:33 And so tribal warfare became more obligate.

03:07:35 It involved the plow and the beginning of real economic surplus.

03:07:39 So there were a few different kind of forcing functions.

03:07:45 But we’re talking about what size should it be, right?

03:07:48 What size should a society be?

03:07:50 And I think the idea, like if we think about your body for a moment as a self organizing

03:07:55 complex system that is multi scaled, we think about…

03:07:58 Our body is a wonderland.

03:08:00 Our body is a wonderland, yeah.

03:08:04 That’s a John Mayer song.

03:08:06 I apologize.

03:08:07 But yes, so if we think about our body and the billions of cells that are in it.

03:08:12 Well, you don’t have…

03:08:14 Think about how ridiculous it would be to try to have all the tens of trillions of cells

03:08:18 in it with no internal organization structure, right?

03:08:21 Just like a sea of protoplasm.

03:08:24 It wouldn’t work.

03:08:25 Pure democracy.

03:08:26 And so you have cells and tissues, and then you have tissues and organs and organs and

03:08:31 organ systems, and so you have these layers of organization, and then obviously the individual

03:08:36 in a tribe in an ecosystem.

03:08:39 And each of the higher layers are both based on the lower layers, but also influencing

03:08:44 them.

03:08:45 I think the future of civilization will be similar, which is there’s a level of governance

03:08:49 that happens at the level of the individual.

03:08:51 My own governance of my own choice.

03:08:54 I think there’s a level that happens at the level of a family.

03:08:57 We’re making decisions together, we’re inter influencing each other and affecting each

03:09:01 other, taking responsibility for the idea of an extended family.

03:09:05 And you can see that like for a lot of human history, we had an extended family, we had

03:09:08 a local community, a local church or whatever it was, we had these intermediate structures.

03:09:13 Whereas right now, there’s kind of like the individual producer, consumer, taxpayer, voter,

03:09:20 and the massive nation state global complex, and not that much in the way of intermediate

03:09:24 structures that we relate with, and not that much in the way of real personal dynamics,

03:09:28 all impersonalized, made fungible.

03:09:31 And so, I think that we have to have global governance, meaning I think we have to have

03:09:39 governance at the scale we affect stuff, and if anybody is messing up the oceans, that

03:09:43 matters for everybody.

03:09:44 So, that can’t only be national or only local.

03:09:48 Everyone is scared of the idea of global governance because we think about some top down system

03:09:51 of imposition that now has no checks and balances on power.

03:09:54 I’m scared of that same version, so I’m not talking about that kind of global governance.

03:10:00 It’s why I’m even using the word governance as a process rather than government as an

03:10:03 imposed phenomenon.

03:10:07 And so, I think we have to have global governance, but I think we also have to have local governance,

03:10:11 and there have to be relationships between them where there are both checks

03:10:16 and balances on power and flows of information.

03:10:18 So, I think governance at the level of cities will be a bigger deal in the future than governance

03:10:24 at the level of nation states because I think nation states are largely fictitious things

03:10:30 that are defined by wars and agreements to stop wars and like that.

03:10:34 I think cities are based on real things that will keep being real, where the proximity of

03:10:38 certain things together, the physical proximity of things, gives increased value to

03:10:43 those things.

03:10:44 So, you look at Geoffrey West’s work on scale, finding that companies and nation

03:10:50 states and things that have a kind of complicated agreement structure get diminishing returns

03:10:54 of production per capita as the total number of people increases beyond about the

03:10:58 tribal scale.

03:10:59 But the city actually gets increasing productivity per capita, but it’s not designed, it’s kind

03:11:04 of this organic thing, right?
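
The scaling result being referenced here can be written compactly. In Geoffrey West's work, a socioeconomic output Y (wages, patents, GDP and the like) grows with population N as a power law; the roughly 1.15 exponent for cities and the roughly linear-to-sublinear scaling for firms are the values commonly reported from that body of work, not numbers given in this conversation:

    Y(N) = Y_0 \, N^{\beta}
    \text{cities (socioeconomic outputs): } \beta \approx 1.15 \;\Rightarrow\; Y/N \propto N^{0.15}, \text{ per-capita output rises with size}
    \text{firms: } \beta \lesssim 1 \;\Rightarrow\; Y/N \text{ flat or falling as headcount grows}

That difference in exponent is the "increasing productivity per capita" for cities versus the "diminishing returns" for companies described above.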

03:11:06 So, there should be governance at the level of cities because people can sense and actually

03:11:10 have some agency there, probably neighborhoods and smaller scales within it and also verticals

03:11:15 and some of it won’t be geographic, it’ll be network based, right?

03:11:17 Networks of affinities.

03:11:18 So, I don’t think the future is one type of governance.

03:11:21 Now, what we can say more broadly is, when we’re talking about groups of people

03:11:26 that inter-affect each other, the idea of a civilization is that we can figure out how

03:11:30 to coordinate our choice making to not be at war with each other and hopefully increase

03:11:35 total productive capacity in a way that’s good for everybody, division of labor and

03:11:40 specialization so we all get more and better stuff and whatever.

03:11:44 But it’s a, it’s a coordination of our choice making.

03:11:49 I think we can look at civilizations failing on the side of not having enough coordination

03:11:55 of choice making, so they fail on the side of chaos and then they cleave and an internal

03:11:58 war comes about or whatever, or they can’t make smart decisions and they overuse their

03:12:04 resources or whatever.

03:12:07 Or it can fail on the side of trying to get order via imposition, via force, and so it

03:12:14 fails on the side of oppression, which ends up being for a while functionalish for the

03:12:20 thing as a whole, but miserable for most people in it until it fails either because of revolt

03:12:25 or because it can’t innovate enough or something like that.

03:12:28 And so, there’s this like toggling between order via oppression and chaos.

03:12:34 And I think the idea of democracy, not the way we’ve implemented it, but the idea of

03:12:39 it, whether we’re talking about a representative democracy or a direct digital democracy, liquid

03:12:43 democracy, a republic or whatever, the idea of an open society, participatory governance

03:12:50 is can we have order that is emergent rather than imposed so that we aren’t stuck with

03:12:56 chaos and infighting and inability to coordinate, and we’re also not stuck with oppression?

03:13:04 And what would it take to have emergent order?

03:13:08 This is the most kind of central question for me these days because if we look at what

03:13:16 different nation states are doing around the world and we see nation states that are more

03:13:20 authoritarian that in some ways are actually coordinating much more effectively.

03:13:26 So for instance, we can see that China has built high speed rail not just through its

03:13:32 country but around the world and the US hasn’t built any high speed rail yet.

03:13:36 You can see that it brought 300 million people out of poverty in a time where we’ve had increasing

03:13:41 economic inequality happening.

03:13:43 You can see like that if there was a single country that could make all of its own stuff

03:13:49 if the global supply chains failed, China would be the closest one to being able to

03:13:53 start to go closed loop on fundamental things.

03:13:57 Belt and Road Initiative, supply chain on rare earth metals, transistor manufacturing

03:14:03 that is like, oh, they’re actually coordinating more effectively in some important ways.

03:14:08 In the last, call it, 30 years.

03:14:12 And that’s imposed order.

03:14:14 Imposed order.

03:14:16 And we can see that. In the US, let’s look at why real quick.

03:14:24 We know why we created term limits so that we wouldn’t have forever monarchs.

03:14:29 That’s the thing we were trying to get away from and that there would be checks and balances

03:14:32 on power and that kind of thing.

03:14:34 But that also has created a negative second order effect, which is nobody does long term

03:14:38 planning, because somebody comes in who’s got four years and they want to get reelected.

03:14:43 They don’t do anything that doesn’t create a return within four years that will end up

03:14:46 getting them reelected.

03:14:48 And so the 30 year industrial development to build high speed trains or the new kind

03:14:53 of fusion energy or whatever it is just doesn’t get invested in.

03:14:57 And then if you have left versus right, where whatever someone does for four years, then

03:15:02 the other guy gets in and undoes it for four years.

03:15:05 And most of the energy goes into campaigning against each other.

03:15:08 This system is just dissipating as heat, right?

03:15:11 Like it’s just burning up as heat.

03:15:12 And the system that has no term limits and no internal friction, no infighting, because they

03:15:16 got rid of those people, can actually coordinate better.

03:15:20 But I would argue it has its own fail states eventually and dystopic properties that are

03:15:27 not the thing we want.

03:15:28 So the goal is to create a system that does long term planning without

03:15:34 the negative effects of a monarch or dictator that stays there for the long term, and to accomplish

03:15:45 that not through the imposition of a single leader, but through emergence.

03:15:54 So that perhaps, first of all, the technology in itself seems to maybe open up a lot of

03:16:03 different possibilities here, which is to make primary the system, not the humans.

03:16:08 So the basic medium on which the democracy happens, like a platform where people can

03:16:21 make decisions, do the choice making, the coordination of the choice making, where

03:16:29 some kind of order emerges, something that applies at the scale of the

03:16:34 family, the city, the country, the continent, the whole world, and then does that dynamically,

03:16:43 constantly changing based on the needs of the people, sort of always evolving.

03:16:48 And it would all be owned by Google.

03:16:54 Is there a way to, so first of all, are you optimistic that you could basically create

03:17:00 this, that technology can save us by creating platforms? By technology, I mean software

03:17:06 network platforms that allow humans to deliberate, to make government together dynamically

03:17:14 without the need for a leader that’s on a podium screaming stuff.

03:17:19 That’s one and two.

03:17:21 If you’re optimistic about that, are you also optimistic about the CEOs of such platforms?

03:17:27 The idea that technology is values neutral, values agnostic, that people can use it for

03:17:36 constructive or destructive purposes but it doesn’t predispose anything,

03:17:40 is just silly and naive.

03:17:44 Technology elicits patterns of human behavior, because those who utilize it and get ahead

03:17:49 end up behaving differently because of their utilization of it, and then

03:17:53 they end up shaping the world, or other people race to also get the power of the technology.

03:17:57 And so there are whole schools of anthropology that look at the effect on social systems

03:18:02 and the minds of people of the change in our tooling.

03:18:06 Marvin Harris’s work called cultural materialism looked at this deeply, obviously Marshall

03:18:09 McLuhan looked specifically at the way that information technologies change the nature

03:18:13 of our beliefs, minds, values, social systems.

03:18:19 I will not try to do this rigorously, because there are academics who will disagree on the subtle

03:18:23 details, but I’ll do it kind of illustratively.

03:18:27 Think about the emergence of the plow, the ox drawn plow, and the beginning of agriculture

03:18:31 that came with it, where before that you had hunter gatherers, and then you had horticulture,

03:18:36 kind of a digging stick, but not the plow.

03:18:40 Well the world changed a lot with that, right?

03:18:43 And a few of the changes that at least some theorists believe in: when the ox drawn

03:18:54 plow started to proliferate, any culture that utilized it was able to start to actually

03:18:57 cultivate grain, because just with a digging stick you couldn’t get enough grain for it

03:19:01 to matter. Grain was a storable caloric surplus; they could make it through the famines, they

03:19:04 could grow their population. So the ones that used it got so far ahead that it became obligate

03:19:08 and everybody used it. And corresponding with the use of the plow, animism went away

03:19:14 everywhere that it existed, because you can’t talk about the spirit of the buffalo while

03:19:18 beating the cow all day long to pull the plow. So the moment that we do animal husbandry

03:19:23 of that kind, where you have to beat the cow all day, you have to say it’s just a dumb

03:19:27 animal, man has dominion over earth, and the nature of even our religious and spiritual

03:19:30 ideas changes.

03:19:31 You went from women primarily using the digging stick to do the horticulture, or gathering

03:19:37 before that, and men doing the hunting stuff, to now men having to use the plow because upper

03:19:40 body strength actually really mattered, and women would have miscarriages when they would do

03:19:44 it while pregnant. So all the caloric supply started to come from men, where it had

03:19:48 been from both before, and the ratio of male to female gods changed to being mostly male gods

03:19:52 following that.

03:19:54 That particular line of thought then also says that feminism

03:20:01 followed the tractor, and that the rise of feminism in the West started to follow women

03:20:09 being able to say, we can do what men can, because male upper body strength wasn’t a differentiator

03:20:15 once the internal combustion engine was much stronger and we can all drive a tractor.

03:20:20 So I don’t think trying to trace complex things to one cause is a good idea; this

03:20:26 is a reductionist view, but it has truth in it. And so the idea that technology is values

03:20:33 agnostic is silly.

03:20:35 Technology codes patterns of behavior that code rationalizing those patterns of behavior

03:20:39 and believing in them.

03:20:40 The plow also is the beginning of the Anthropocene, right? It was the beginning of us changing

03:20:44 the environment radically, clear cutting areas to just make them useful for people, which

03:20:49 also meant changing the view where we were just a part of the web of life, etc.

03:20:54 So all those types of things.

03:20:57 That’s brilliantly put, by the way, that was just brilliant.

03:21:02 But the question is, so it’s not agnostic, but…

03:21:05 So we have to look at what the psychological effects of specific tech applied certain ways

03:21:10 are, and be able to say it’s not just doing the first order thing you intended. It’s doing

03:21:17 things like the effect on patriarchy and animism, and the end of tribal culture, the beginning

03:21:23 of empire, and the class systems that came with that.

03:21:26 We can go on and on about what the plow did.

03:21:28 The beginning of surplus led to inheritance, which then became the capital model, and

03:21:32 lots of things like that.

03:21:34 So we have to say when we’re looking at the tech, what are the values built into the way

03:21:39 the tech is being built that are not obvious?

03:21:42 Right, so you always have to consider externalities.

03:21:44 Yes.

03:21:45 And the externalities are not just physical to the environment, they’re also to how the

03:21:48 people are being conditioned and how the relationality between them is being conditioned.

03:21:51 So the question I’m asking you, so I personally would rather be led by a plow and a tractor

03:21:56 than Stalin, okay?

03:21:58 That’s the question I’m asking you.

03:22:02 In creating an emergent government, where there’s a democracy that’s dynamic,

03:22:09 that makes choices, that does governance in a very kind of liquid way, there’s a bunch

03:22:19 of fine resolution layers of abstraction of governance happening at all scales, right?

03:22:26 And doing so dynamically, where no one person has power at any one time that can dominate

03:22:32 and impose rule, okay?

03:22:34 That’s the Stalin version.

03:22:35 I’m saying, isn’t the alternative an emergent one, empowered or made possible by the plow and

03:22:48 the tractor, whose modern version is the internet, the digital space

03:22:54 where we have the monetary system, the currency and so on, but,

03:23:00 much more importantly to me at least, just basic social interaction, the mechanisms

03:23:03 of humans transacting with each other in the space of ideas, isn’t it?

03:23:08 So yes, it’s not agnostic, definitely not agnostic.

03:23:12 You’ve had a brilliant rant there.

03:23:14 The tractor has effects, but isn’t that the way we achieve an emergent system of governance?

03:23:20 Yes, but I wouldn’t say we’re on track.

03:23:26 You haven’t seen anything promising.

03:23:28 It’s not that I haven’t seen anything promising, it’s that to be on track requires understanding

03:23:32 and guiding some of the things differently than is currently happening and it’s possible.

03:23:36 That’s actually what I really care about.

03:23:38 So you couldn’t have had a Stalin without having certain technologies emerge.

03:23:46 He couldn’t have ruled such a big area without transportation technologies, without the train,

03:23:51 without the communication tech that made it possible.

03:23:55 So when you say you’d rather have a tractor or a plow than a Stalin, there’s a relationship

03:24:00 between them that is more recursive, which is new physical technologies allow rulers

03:24:08 to rule with more power over larger distances historically.

03:24:14 And some things are more responsible for that than others.

03:24:19 Like Stalin also ate stuff for breakfast, but the thing he ate for breakfast is less

03:24:23 responsible for the starvation of millions than the train.

03:24:28 The train is more responsible for that and then the weapons of war are more responsible.

03:24:32 So some technology, let’s not throw it all in the same bucket. You’re saying technology has

03:24:38 a responsibility here, but some is better than others.

03:24:42 I’m saying that people’s use of technology will change their behavior.

03:24:46 So it has behavioral dispositions built in.

03:24:48 The change of the behavior will also change the values in the society.

03:24:52 It’s very complicated, right?

03:24:53 It will also, as a result, both make people who have different kinds of predispositions

03:24:58 with regard to rulership and different kinds of new capacities.

03:25:03 And so we have to think about these things.

03:25:06 It’s kind of well understood that the printing press and then in early industrialism ended

03:25:12 feudalism and created kind of nation states.

03:25:15 So one thing I would say as a long trend that we can look at is that whenever there is a

03:25:22 step function, a major leap in technology, physical technology, the underlying techno

03:25:27 industrial base with which we do stuff, it ends up coding for, it ends up predisposing

03:25:33 a whole bunch of human behavioral patterns that the previous social system had not emerged

03:25:38 to try to solve.

03:25:40 And so it usually ends up breaking the previous social systems, the way the plow broke the

03:25:44 tribal system, the way that the industrial revolution broke the feudal system, and then

03:25:48 new social systems have to emerge so they can deal with the new powers, the new dispositions,

03:25:54 whatever with that tech.

03:25:55 Obviously, the nuke broke nation state governance being adequate and said, we can’t ever have

03:26:00 that again.

03:26:01 So then it created this international governance apparatus world.

03:26:06 So I guess what I’m saying is that the solution is not exponential tech following the current

03:26:21 path of what the market incentivizes exponential tech to do, market being a previous social

03:26:26 tech.

03:26:28 I would say that exponential tech, if we look at different types of social tech, so let’s

03:26:38 just briefly look at that: democracy tried to do the emergent order thing, right?

03:26:46 At least that’s the story, and this is why, if you look, this is an important part

03:26:56 to build first.

03:26:57 It’s kind of doing it.

03:26:58 It’s just doing it poorly.

03:26:59 You’re saying, I mean, that’s, it is emergent order in some sense.

03:27:03 I mean, that’s the hope of democracy versus other forms of government.

03:27:06 Correct.

03:27:07 I mean, I said at least the story because obviously it didn’t do it for women and slaves

03:27:11 early on.

03:27:12 It doesn’t do it for all classes equally, et cetera.

03:27:14 But the idea of democracy is participatory governance.

03:27:20 And so you notice that the modern democracies emerged out of the European enlightenment

03:27:26 and specifically because of the idea that a lot of people, some huge number, not a tribal

03:27:31 number, a huge number of anonymous people who don’t know each other, are not bonded

03:27:34 to each other, who believe different things, who grew up in different ways, can all work

03:27:38 together to make collective decisions that affect everybody, and where some of them

03:27:42 will make compromises on the thing that matters to them for what matters to other strangers.

03:27:46 That’s actually wild.

03:27:47 Like it’s a wild idea that that would even be possible.

03:27:50 And it was kind of the result of this high enlightenment idea that we could all do the

03:27:57 philosophy of science and we could all do the Hegelian dialectic.

03:28:03 Those ideas had emerged, right?

03:28:04 And it was that we could all, so our choice making, because we said a society is trying

03:28:11 to coordinate choice making, the emergent order is the order of the choices that we’re

03:28:15 making, not just at the level of the individuals, but what groups of individuals, corporations,

03:28:18 nations, states, whatever do.

03:28:21 Our choices are based on, our choice making is based on our sense making and our meaning

03:28:25 making.

03:28:26 Our sense making is what do we believe is happening in the world, and what do we believe

03:28:30 the effects of a particular thing would be.

03:28:31 Our meaning making is what do we care about, right, our values generation, what do we care

03:28:34 about that we’re trying to move the world in the direction of.

03:28:37 If you ultimately are trying to move the world in a direction that is really, really different

03:28:41 than the direction I’m trying to, we have very different values, we’re gonna have a

03:28:44 hard time.

03:28:46 And if you think the world is a very different world, right, if you think that systemic racism

03:28:51 is rampant everywhere and one of the worst problems, and I think it’s not even a thing,

03:28:56 if you think climate change is almost existential, and I think it’s not even a thing, we’re gonna

03:29:00 have a really hard time coordinating.

03:29:02 And so, we have to be able to have shared sense making of can we come to understand

03:29:07 just what is happening together, and then can we do shared values generation, okay?

03:29:12 Maybe I’m emphasizing a particular value more than you, but I can take your perspective

03:29:17 and I can see how the thing that you value is worth valuing, and I can see how it’s affected

03:29:21 by this thing.

03:29:22 So, can we take all the values and try to come up with a proposition that benefits all

03:29:25 of them better than the proposition I created just to benefit these ones that harms the

03:29:30 ones that you care about, which is why you’re opposing my proposition?

03:29:34 We don’t even try to do that in the process of crafting a proposition currently. And this is

03:29:39 the reason that the proposition we vote on gets half the votes almost all the time

03:29:43 and almost never gets 90% of the votes: because it benefits some things and harms

03:29:47 other things.

03:29:48 We can say that’s all just trade offs, but we didn’t even try to say, could we see what

03:29:52 everybody cares about and see if there is a better solution?

03:29:55 So…

03:29:56 How do we fix that "try"?

03:29:57 I wonder, is it as simple as the social technology of education?

03:30:01 Yes.

03:30:02 Well, no.

03:30:03 I mean, the proposition crafting and refinement process has to be key to a democracy or participatory

03:30:10 governance, and it’s not currently.

03:30:11 But isn’t that the humans creating that situation?

03:30:16 So one way, there’s two ways to fix that.

03:30:20 One is to fix the individual humans, which is the education early in life, and the second

03:30:24 is to create somehow systems that…

03:30:26 Yeah, it’s both.

03:30:28 So I understand the education part, but creating systems, that’s why I mentioned the technologies,

03:30:33 creating social networks, essentially.

03:30:36 Yes, that’s actually necessary.

03:30:37 Okay, so let’s go to the first part and then we’ll come to the second part.

03:30:42 So democracy emerged as an enlightenment era idea that we could all do a dialectic and

03:30:49 come to understand what other people valued, and so that we could actually come up with

03:30:55 a cooperative solution rather than just, fuck you, we’re gonna get our thing in war, right?

03:31:00 And that we could sense make together.

03:31:01 We could all apply the philosophy of science and you weren’t gonna stick to your guns on

03:31:05 what the speed of sound is if we measured it and we found out what it was, and there’s

03:31:08 a unifying element to the objectivity in that way.

03:31:12 And so this is why I believe Jefferson said, and I’m paraphrasing, if you could give me a perfect newspaper with

03:31:17 a broken government, or a perfect government with a broken newspaper, I

03:31:21 wouldn’t hesitate to take the perfect newspaper.

03:31:22 Because if the people understand what’s going on, they can build a new government.

03:31:26 If they don’t understand what’s going on, they can’t possibly make good choices.

03:31:30 And Washington, I’m paraphrasing again, first president said the number one aim of the federal

03:31:36 government should be the comprehensive education of every citizen in the science of government.

03:31:41 Science of government was the term of art.

03:31:42 Think about what that means, right?

03:31:44 Science of government would be game theory, coordination theory, history, wouldn’t call

03:31:49 game theory yet, history, sociology, economics, right?

03:31:53 All the things that lead to how we understand human coordination.

03:31:57 I think it’s so profound that he didn’t say the number one aim of the federal government

03:32:02 is rule of law.

03:32:04 And he didn’t say it’s protecting the border from enemies.

03:32:07 Because if the number one aim was to protect the border from enemies, it could do that

03:32:11 as a military dictatorship quite effectively.

03:32:14 And if the goal was rule of law, it could do it as a dictatorship, as a police state.

03:32:21 And so if the number one goal is anything other than the comprehensive education of

03:32:24 all the citizens in the science of government, it won’t stay a democracy for long.

03:32:28 You can see, so both education and the fourth estate, the fourth estate being the…

03:32:33 So education, can I make sense of the world?

03:32:34 Am I trained to make sense of the world?

03:32:36 The fourth estate is what’s actually going on currently, the news.

03:32:38 Do I have good, unbiased information about it?

03:32:41 Those are both considered prerequisite institutions for democracy to even be a possibility.

03:32:46 And then at the scale it was initially suggested here, the town hall was the key phenomenon.

03:32:51 It wasn’t that a special interest group crafted a proposition, and the first thing

03:32:55 I ever saw was the proposition, knowing nothing about it, and I got to vote yes or

03:32:59 no.

03:33:00 It was in the town hall, we all got to talk about it, and the proposition could get crafted

03:33:03 in real time through the conversation, which is why there was that founding fathers statement

03:33:08 that voting is the death of democracy.

03:33:11 Voting fundamentally is polarizing the population into some kind of sublimated war.

03:33:16 And we’ll do that as the last step, but what we wanna do first is to say, how does the

03:33:19 thing that you care about that seems damaged by this proposition, how could that turn into

03:33:23 a solution to make this proposition better?

03:33:26 Where this proposition still tends to the thing it’s trying to tend to and tends to

03:33:29 that better.

03:33:30 Can we work on this together?

03:33:31 And in a town hall, we could have that.

03:33:33 As the scale increased, we lost the ability to do that.

03:33:35 Now, as you mentioned, the internet could change that.

03:33:38 The fact that we had representatives that had to ride a horse from one town hall to

03:33:41 the other one to see what the colony would do meant that we stopped having this kind of

03:33:47 propositional development process when the town hall ended.

03:33:52 The fact that we have not used the internet to recreate this is somewhere between insane

03:33:58 and aligned with class interests.

03:34:03 I would push back to say that the internet has those things, it just has a lot of other

03:34:08 things.

03:34:09 I feel like the internet has places that encourage synthesis of competing ideas

03:34:16 and sense making, which is what we’re talking about.

03:34:19 It’s just that it’s also flooded with a bunch of other systems that perhaps are out competing

03:34:24 it under current incentives, perhaps has to do with capitalism in the market.

03:34:29 Sure.

03:34:30 Linux is awesome, right?

03:34:32 And Wikipedia and places where you have, and they have problems, but places where you have

03:34:36 open source sharing of information, vetting of information towards collective building.

03:34:41 Is that building something like, how much has that affected our court systems or our

03:34:48 policing systems or our military systems or our?

03:34:50 First of all, I think a lot, but not enough.

03:34:53 I think this is something I told you offline yesterday, perhaps it’s a whole other

03:34:59 discussion, but I don’t think we’re quite quantifying the impact on the world, the positive

03:35:05 impact of Wikipedia.

03:35:08 You said the policing, I mean, I just think that knowledge

03:35:16 can’t help but lead to empathy, just knowing, okay.

03:35:25 Just knowing.

03:35:26 Okay.

03:35:27 I’ll give you some pieces of information: knowing how many people died in various wars,

03:35:30 that alone, that delta, when you have millions of people with that knowledge, it’s like

03:35:35 a little slap in the face, like, oh, my boyfriend or girlfriend breaking

03:35:41 up with me is not such a big deal when millions of people were tortured, you know, just

03:35:47 a little bit.

03:35:48 And when a lot of people know that because of Wikipedia, or the second

03:35:54 order effects of Wikipedia, which is that it's not necessarily that people read Wikipedia.

03:35:58 It’s like YouTubers who don’t really know stuff that well will thoroughly read a Wikipedia

03:36:07 article and create a compelling video describing that Wikipedia article that then millions

03:36:11 of people watch and they understand that.

03:36:14 Holy shit.

03:36:15 First of all, there was such a thing as World War II and World

03:36:18 War I.

03:36:19 Okay.

03:36:20 Like they can at least like learn about it.

03:36:22 They can learn about this was like recent.

03:36:25 They can learn about slavery.

03:36:26 They can learn about all kinds of injustices in the world.

03:36:30 And that I think has a lot of effects on the way, whether you're a police officer,

03:36:36 a lawyer, a judge, the jury, or just a regular civilian citizen, you approach

03:36:46 every other communication you engage in, even if the system of that communication is

03:36:52 very much flawed.

03:36:53 So I think there's a huge positive effect of Wikipedia.

03:36:56 That’s my case for Wikipedia.

03:36:57 So you should donate to Wikipedia.

03:36:58 I mean, I’m a huge fan, but there’s very few systems like it, which is sad to me.

03:37:04 So I think it would be a useful exercise for any listener of the show to really

03:37:14 try to run the dialectical synthesis process with regard to a topic like this and take

03:37:21 the techno-concerned perspective with regard to information tech that folks

03:37:29 like Tristan Harris take and say, what are all of the things that are getting worse,

03:37:35 and are any of them following an exponential curve, and how much worse, how quickly, could

03:37:39 that be?

03:37:42 And then, and do that fully without mitigating it, then take the techno optimist perspective

03:37:48 and see what things are getting better in a way that Kurzweil or Diamandis or someone

03:37:53 might do and try to take that perspective fully and say, are some of those things exponential?

03:37:59 What could that portend?

03:38:00 And then try to hold all that at the same time.

03:38:03 And I think there are ways in which, depending upon the metrics we’re looking at, things

03:38:10 are getting worse on exponential curves and better on exponential curves for different

03:38:15 metrics at the same time, which I hold as the destabilization of the previous system, where

03:38:20 either an emergence to a better system or a collapse to a lower order is possible.

03:38:27 And so I want my optimism not to be about my assessment.

03:38:32 I want my assessment to be just as fucking clear as it can be.

03:38:35 I want my optimism to be what inspires the solution process on that clear assessment.

03:38:41 So I never want to apply optimism in the sense making.

03:38:45 I want to just try to be clear.

03:38:47 If anything, I want to make sure that the challenges are really well understood.

03:38:52 But that’s in service of an optimism that there are good potentials, even if I don’t

03:38:57 know what they are, that are worth seeking.

03:39:02 There is some sense of optimism that’s required to even try to innovate really hard problems.

03:39:07 But then I want to take my pessimism and red team my own optimism to see, is that solution

03:39:12 not going to work?

03:39:13 Does it have second order effects?

03:39:14 And then not get upset by that because I then come back to how to make it better.

03:39:19 So that's the relationship between optimism and pessimism and the dialectic of how they can

03:39:24 work.

03:39:25 So of course, we can say that Wikipedia is a pretty awesome example of a thing.

03:39:32 We can look at the places where it has limits or has failed, where on a celebrity topic

03:39:40 or corporate interest topic, you can pay Wikipedia editors to edit more frequently and various

03:39:45 things like that.

03:39:46 But you can also see where there’s a lot of information that was kind of decentrally created

03:39:51 that is good information that is more easily accessible to people than everybody buying

03:39:54 their own encyclopedia Britannica or walking down to the library and that can be updated

03:39:58 in real time faster.

03:40:01 And I think you’re very right that the business model is a big difference because Wikipedia

03:40:09 is not a for profit corporation.

03:40:11 It is tending to the information commons and it doesn't have an agenda other

03:40:17 than tending to the information commons.

03:40:19 And I think the two masters issue is a tricky one when I’m trying to optimize for very different

03:40:25 kinds of things where I have to sacrifice one for the other and I can’t find synergistic

03:40:32 satisfiers.

03:40:33 Which one?

03:40:34 And if I have a fiduciary responsibility to shareholder profit maximization and, you know,

03:40:40 what does that end up creating?

03:40:43 I think the ad model that Silicon Valley took, I think Jaron Lanier, I don't know if you've

03:40:50 had him on the show, but he has an interesting assessment of the nature of the ad model.

03:40:56 Silicon Valley wanting to support capitalism and entrepreneurs to make things but also

03:41:03 the belief that information should be free and also the network dynamics where the more

03:41:07 people you got on, you got increased value per user, per capita as more people got on

03:41:12 so you didn’t want to do anything to slow the rate of adoption.

03:41:15 In some places actually, you know, PayPal paid people money to join the network, because the

03:41:20 value of the network would follow a Metcalfe-like dynamic, proportional to the

03:41:24 square of the total number of users.
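
To make that Metcalfe-like dynamic concrete, here is a minimal sketch in Python with purely illustrative numbers (the value-per-link constant and the user counts are hypothetical, not PayPal's actual economics): if network value scales roughly with the number of possible links between users, then ten times the users gives roughly a hundred times the value, which is why subsidizing early sign-ups can pay off.

```python
# Minimal sketch of a Metcalfe-like dynamic: value grows roughly with the
# number of possible user-to-user links, n * (n - 1) / 2, i.e. ~ n^2 / 2.
# value_per_link is a hypothetical constant chosen only for illustration.

def metcalfe_value(users: int, value_per_link: float = 0.01) -> float:
    """Approximate network value as proportional to the number of user pairs."""
    return value_per_link * users * (users - 1) / 2

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} users -> value ~ {metcalfe_value(n):,.0f}")
# 10x the users yields ~100x the value, so a fixed sign-up bonus per new user
# can be rational early on, when each new user adds more value than the bonus costs.
```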

03:41:26 So the ad model made sense of how do we make it free but also be a business, get everybody

03:41:33 on but not really thinking about what it would mean to – and this is now the whole idea

03:41:38 that if you aren’t paying for the product, you are the product.

03:41:44 If they have a fiduciary responsibility to their shareholders to maximize profit, their

03:41:47 customer is the advertiser, and the user it's being built for is the target of behavioral mod for

03:41:54 the advertisers, that's a whole different thing than what that same type of tech could have

03:42:00 been if applied with a different business model or different purpose.

03:42:05 I think because Facebook and Google and other information and communication platforms end

03:42:14 up harvesting data about user behavior that allows them to model who the people are in

03:42:19 a way that sometimes gives them more specific information and behavioral information than

03:42:27 even a therapist or a doctor or a lawyer or a priest might have in a different setting,

03:42:31 they basically are accessing privileged information.

03:42:35 There should be a fiduciary responsibility.

03:42:38 And in normal fiduciary law, if there’s this principal agent thing, if you are a principal

03:42:45 and I’m an agent on your behalf, I don’t have a game theoretic relationship with you.

03:42:49 If you’re sharing something with me and I’m the priest or I’m the therapist, I’m never

03:42:53 going to use that information to try to sell you a used car or whatever the thing is.

03:42:58 But Facebook is gathering massive amounts of privileged information and using it to

03:43:03 modify people's behavior toward a behavior that they didn't sign up for, not the behavior they wanted,

03:43:07 but what the corporation wanted.

03:43:08 So I think this is an example of the physical tech evolving in the context of the previous

03:43:14 social tech where it’s being shaped in particular ways.

03:43:18 And here, unlike Wikipedia that evolved for the information commons, this evolved for

03:43:25 fulfilling particular agentic purpose.

03:43:26 Most people when they’re on Facebook think it’s just a tool that they’re using.

03:43:29 They don’t realize it’s an agent, right?

03:43:31 It is a corporation with a profit motive and as I’m interacting with it, it has a goal

03:43:37 for me different than my goal for myself.

03:43:40 And I might want to be on for a short period of time.

03:43:41 Its goal is maximize time on site.

03:43:43 And so there is a rivalry where there should be a fiduciary contract.

03:43:49 I think that’s actually a huge deal.

03:43:52 And I think if we said, could we apply Facebook like technology to develop people’s citizenry

03:44:05 capacity, right?

03:44:06 To develop their personal health and wellbeing and habits as well as their cognitive understanding,

03:44:13 the complexity with which they can process the health of their relationships, that would

03:44:20 be amazing to start to explore.

03:44:22 And this is the thesis that we started to discuss before: every time there is a

03:44:28 major step function in the physical tech, it obsoletes the previous social tech and

03:44:33 the new social tech has to emerge.

03:44:36 What I would say is that when we look at the nation state level of the world today, the

03:44:41 more top down authoritarian nation states, as the exponential tech started to emerge,

03:44:47 as the digital technology started to emerge, were in a position for better long term

03:44:52 planning and better coordination.

03:44:55 And so the authoritarian states started applying the exponential tech intentionally to make

03:44:59 more effective authoritarian states.

03:45:01 And that’s everything from like an internet of things surveillance system going into machine

03:45:07 learning systems to the Sesame credit system to all those types of things.

03:45:11 And so they’re upgrading their social tech using the exponential tech.

03:45:16 Otherwise within a nation state like the US, but democratic open societies, the countries,

03:45:23 the states are not directing the technology in a way that makes a better open society,

03:45:28 meaning better emergent order.

03:45:30 They’re saying, well, the corporations are doing that and the state is doing the relatively

03:45:34 little thing it would do aligned with the previous corporate law that no longer is relevant

03:45:37 because there wasn’t fiduciary responsibility for things like that.

03:45:40 There wasn’t antitrust because this creates functional monopolies because of network dynamics,

03:45:45 right?

03:45:46 Where YouTube has more users than Vimeo and every other video player together.

03:45:50 Amazon has a bigger percentage of market share than all of the other marketplaces together.

03:45:54 You get one big dog per vertical because of network effects, which is a kind of organic

03:46:00 monopoly that the previous antitrust law didn't even have a place for; that wasn't a thing.

03:46:05 Antimonopoly was only something that emerged in the space of government contracts.

03:46:11 So what we see is that the new exponential technology is being directed by authoritarian

03:46:15 nation states to make better authoritarian nation states and by corporations to make

03:46:19 more powerful corporations.

03:46:21 Powerful corporations, when we think about the Scottish enlightenment, when the idea

03:46:25 of markets was being advanced, the modern kind of ideas of markets, the biggest corporation

03:46:31 was tiny compared to what the biggest corporation today is.

03:46:35 So the asymmetry of it relative to people was tiny.

03:46:39 And the asymmetry now in terms of the total technology it employs, total amount of money,

03:46:43 total amount of information processing is so many orders of magnitude.

03:46:48 And rather than there being demand for an authentic thing that creates a basis for supply,

03:46:55 supply started to get way more coordinated and powerful while the demand wasn't coordinated,

03:46:59 because you don't have a labor union of all the customers working together, but you do

03:47:03 have coordination on the supply side.

03:47:05 Supply started to recognize that it could manufacture demand.

03:47:08 It could make people want shit that they didn’t want before that maybe wouldn’t increase their

03:47:10 happiness in a meaningful way, might increase addiction.

03:47:14 Addiction is a very good way to manufacture demand.

03:47:17 And so as soon as manufactured demand started, through "this is the cool thing and you have

03:47:23 to have it for status" or whatever it is, the intelligence of the market was breaking.

03:47:28 Now it’s no longer a collective intelligence system that is up regulating real desire for

03:47:32 things that are really meaningful.

03:47:34 We were able to hijack the lower angels of our nature rather than the higher ones.

03:47:38 The addictive patterns drive those and have people want shit that doesn’t actually make

03:47:42 them happy or make the world better.

03:47:44 And so we really also have to update our theory of markets because behavioral econ showed

03:47:51 that homo economicus, the rational actor is not really a thing, but particularly at greater

03:47:55 and greater scale can’t really be a thing.

03:47:58 Voluntaryism isn't really a thing: if my company doesn't want to advertise on Facebook, I will just

03:48:02 lose to the companies that do, because that's where all the fucking attention is.

03:48:06 And so then I can say it’s voluntary, but it’s not really if there’s a functional monopoly.

03:48:12 Same if I’m going to sell on Amazon or things like that.

03:48:14 So what I would say is these corporations are becoming more powerful than nation states

03:48:21 in some ways.

03:48:24 And they are also debasing the integrity of the nation states, the open societies.

03:48:34 So the democracies are getting weaker as a result of exponential tech and the kind of

03:48:38 new tech companies that are kind of a new feudalism, tech feudalism, because it’s not

03:48:43 a democracy inside of a tech company or the supply and demand relationship when you have

03:48:48 manufactured demand and kind of monopoly type functions.

03:48:53 And so we have basically a new feudalism controlling exponential tech and authoritarian nation

03:48:57 states controlling it.

03:48:58 And those attractors are both shitty.

03:49:01 And so I’m interested in the application of exponential tech to making better social tech

03:49:07 that makes emergent order possible and where then that emergent order can bind and direct

03:49:13 the exponential tech in fundamentally healthy, not X risk oriented directions.

03:49:19 I think the relationship of social tech and physical tech can make it.

03:49:22 I think we can actually use the physical tech to make better social tech, but it’s not given

03:49:26 that we do.

03:49:27 If we don’t make better social tech, then I think the physical tech empowers really

03:49:32 shitty social tech that is not a world that we want.

03:49:35 I don’t know if it’s a road we want to go down, but I tend to believe that the market

03:49:39 will create exactly the thing you’re talking about, which I feel like there’s a lot of

03:49:43 money to be made in creating a social tech that creates a better citizen, that creates

03:49:55 a better human being.

03:50:00 Your description of Facebook and so on, which is a system that creates addiction, which

03:50:05 manufactures demand, is not obviously inherently the consequence of the markets.

03:50:14 I feel like that’s the first stage of us, like baby deer trying to figure out how to

03:50:19 use the internet.

03:50:20 I feel like there’s much more money to be made with something that creates compersion

03:50:28 and love.

03:50:29 Honestly.

03:50:30 I mean, really, we could have this discussion, I can make the business case for it.

03:50:35 I don't think we want to really have that discussion, but do you have

03:50:39 some hope that that’s the case?

03:50:41 I guess if not, then how do we fix the system of markets that worked so well for the United

03:50:47 States for so long?

03:50:49 Like I said, every social tech worked for a while.

03:50:51 Like tribalism worked well for two or three hundred thousand years.

03:50:55 I think social tech has to keep evolving.

03:50:58 The social technologies with which we organize and coordinate our behavior have to keep evolving

03:51:03 as our physical tech does.

03:51:05 So I think the thing that we call markets, of course we can try to say, oh, even biology

03:51:12 runs on markets.

03:51:15 But the thing that we call markets, the underlying theory, homo economicus, demand driving supply,

03:51:22 that thing broke.

03:51:23 It broke with scale in particular and a few other things.

03:51:28 So it needs to be updated in a really fundamental way.

03:51:32 I think there’s something even deeper than making money happening that in some ways will

03:51:37 obsolete money making.

03:51:41 I think capitalism is not about business.

03:51:46 So if you think about business, I’m going to produce a good or a service that people

03:51:50 want and bring it to the market so that people get access to that good or service.

03:51:55 That’s the world of business, but that’s not capitalism.

03:51:58 Capitalism is the management and allocation of capital, and financial services, which was a

03:52:05 tiny percentage of the total market, has become a huge percentage of the total market.

03:52:09 It’s a different creature.

03:52:10 So if I was in business and I was producing a good or service and I was saving up enough

03:52:14 money that I started to be able to invest that money and gain interest or do things

03:52:19 like that, I start realizing I’m making more money on my money than I’m making on producing

03:52:24 the goods and services.

03:52:26 So I stop even paying attention to goods and services and start paying attention to making

03:52:29 money on money and how do I utilize capital to create more capital.
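
As a rough numerical illustration of that point, with entirely made-up figures (a fixed yearly business profit versus a pool of capital compounding at some assumed return rate), the return on capital grows geometrically while business income grows linearly, so after enough years the capital side dominates:

```python
# Rough sketch with made-up numbers: a fixed business profit accumulates
# linearly, while reinvested capital compounds geometrically, so returns
# on capital eventually dwarf the income from producing goods and services.

def business_income(years: int, profit_per_year: float) -> float:
    return profit_per_year * years                                # linear accumulation

def capital_gain(years: int, principal: float, annual_return: float) -> float:
    return principal * (1 + annual_return) ** years - principal   # compound growth

for years in (5, 10, 20, 40):
    biz = business_income(years, 50_000)        # hypothetical $50k/year profit
    cap = capital_gain(years, 500_000, 0.07)    # hypothetical $500k at 7%/year
    print(f"{years:>2} years: business ${biz:>9,.0f}  vs  capital ${cap:>9,.0f}")
```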

03:52:34 And capital gives me more optionality, because I can buy anything with it, rather than a particular

03:52:37 good or service that only some people want.

03:52:42 With capitalism, more capital ended up meaning more control.

03:52:49 I could put more people under my employment.

03:52:51 I could buy larger pieces of land, novel access to resources and mines, and put more technology

03:52:57 under my employment.

03:52:58 So it meant increased agency and also increased control.

03:53:02 I think attentionalism is even more powerful.

03:53:07 So rather than enslave people where the people kind of always want to get away and put in

03:53:14 the least work they can, there’s a way in which economic servitude was just more profitable

03:53:19 than slavery, right?

03:53:21 Have the people work even harder voluntarily because they want to get ahead and nobody

03:53:26 has to be there to whip them or control them or whatever.

03:53:30 This is a cynical take but a meaningful take.

03:53:35 So capital ends up being a way to influence human behavior, right?

03:53:43 And yet where people still feel free in some meaningful way.

03:53:48 They’re not feeling like they’re going to be punished by the state if they don’t do

03:53:53 something.

03:53:54 It’s like punished by the market via homelessness or something.

03:53:56 But the market is this invisible thing I can't pin on an agent, so it feels free.

03:54:01 And so if you want to affect people’s behavior and still have them feel free, capital ends

03:54:10 up being a way to do that.

03:54:12 But I think affecting their attention is even deeper because if I can affect their attention,

03:54:18 I can both affect what they want and what they believe and what they feel.

03:54:22 And we statistically know this very clearly.

03:54:24 Facebook has done studies showing that, based on changing the feed, it can change beliefs, emotional

03:54:29 dispositions, et cetera.

03:54:31 And so I think there's a way that the harvesting and directing of attention is an even more

03:54:38 powerful system than capitalism.

03:54:39 It is effective in capitalism to generate capital, but I think it also generates influence

03:54:44 beyond what capital can do.

03:54:46 And so do we want to have some groups utilizing that type of tech to direct other people’s

03:54:56 attention?

03:54:57 If so, towards what?

03:55:03 Towards what metrics of what a good civilization and good human life would be?

03:55:07 What’s the oversight process?

03:55:08 What is the…

03:55:09 Transparency.

03:55:10 I can answer all the things you’re mentioning.

03:55:14 I can build it, I guarantee you, if I'm not such a lazy ass, I'll be part of the many people

03:55:20 doing this: transparency and control, giving control to individual people.

03:55:26 Okay.

03:55:27 So maybe the corporation has coordination on its goals that all of its customers or

03:55:36 users together don’t have.

03:55:37 So there’s some asymmetry of its goals, but maybe I could actually help all of

03:55:44 the customers to coordinate almost like a labor union or whatever by informing and educating

03:55:50 them adequately about the effects, the externalities on them.

03:55:54 This is not toxic waste going into the ocean or the atmosphere.

03:55:58 It's their minds, their beings, their families, their relationships, such that they will as a

03:56:03 group change their behavior.

03:56:10 One way of saying what you’re saying, I think, is that you think that you can rescue homo

03:56:16 economicus, the rational actor that will pursue all the goods and services and choose

03:56:23 the best one at the best price, the kind of Rand, von Mises, Hayek idea, that you can rescue

03:56:27 that from Dan Ariely and behavioral econ, which says that's actually not how people make choices.

03:56:31 They make them based on status hacking, largely regardless of whether it's good for them in the long

03:56:35 term.

03:56:36 And the large asymmetric corporation can run propaganda and narrative warfare that hits

03:56:41 people’s status buttons and their limbic hijacks and their lots of other things in ways that

03:56:46 they can’t even perceive that are happening.

03:56:50 They’re not paying attention to that.

03:56:51 The site is employing psychologists and split testing and whatever else.

03:56:55 So you’re saying, I think we can recover homo economicus.

03:57:00 And not just through a single mechanism of technology.

03:57:03 There's, not to keep mentioning the guy, platforms like Joe Rogan and so on, that

03:57:09 help make viral the education about negative externalities

03:57:20 in this world.

03:57:21 So interestingly, I actually agree with you that

03:57:26 I got them that we four and a half hours in that we can take can do some good.

03:57:33 All right.

03:57:34 Well, see, what you’re talking about is the application of tech here, broadcast tech where

03:57:38 you can speak to a lot of people.

03:57:40 And that's not going to be strong enough, because different people need to be spoken to differently,

03:57:44 which means it has to be different voices that get amplified to those audiences more

03:57:47 like Facebook’s tech.

03:57:48 But nonetheless, we'll start with broadcast tech, which plants the first seed, and then word

03:57:52 of mouth is a powerful thing.

03:57:53 You need to do the first broadcast shotgun and then it like lands a catapult or whatever.

03:57:59 I don’t know what the right weapon is, but then it just spreads the word of mouth through

03:58:03 all kinds of tech, including Facebook.

03:58:06 So let’s come back to the fundamental thing.

03:58:08 The fundamental thing is we want some kind of order at various scales, from the conflicting

03:58:14 parts of ourselves actually having more harmony than they might, to a family, extended

03:58:22 family, local, all the way up to global.

03:58:25 We want emergent order where our choices have more alignment, right?

03:58:33 We want that to be emergent rather than imposed, rather than us wanting fundamentally different

03:58:38 things or making totally different sense of the world, where warfare of some kind becomes

03:58:42 the only solution.

03:58:45 Emergent order requires us, in our choice making, to be able to have related sense

03:58:50 making and related meaning making processes.

03:58:55 Can we apply digital technologies and exponential tech in general to try to increase the capacity

03:59:02 to do that where the technology called a town hall, the social tech that we’d all get together

03:59:06 and talk obviously is very scale limited and it’s also oriented to geography rather than

03:59:11 networks of aligned interest.

03:59:13 Can we build new better versions of those types of things?

03:59:16 And going back to the idea that a democracy or participatory governance depends upon comprehensive

03:59:23 education and the science of government, which includes being able to understand things like

03:59:27 asymmetric information warfare on the side of governments and how the people can organize

03:59:31 adequately.

03:59:33 Can you utilize some of the technologies now to be able to support increased comprehensive

03:59:38 education of the people and maybe comprehensive informativeness, so fixing the decay

03:59:44 in both education and the fourth estate that has happened, so that people can start self

03:59:48 organizing to then influence the corporations, the nation states to do different things and

03:59:55 or build new ones themselves?

03:59:57 Yeah, fundamentally that’s the thing that has to happen.

04:00:00 The exponential tech gives us a novel problem landscape that the world never had.

04:00:05 The nuke gave us a novel problem landscape and so that required this whole Bretton Woods

04:00:09 world.

04:00:10 The exponential tech gives us a novel problem landscape, our existing problem solving processes

04:00:15 aren’t doing a good job.

04:00:16 We have had more countries get nukes, we haven't had nuclear deproliferation, we haven't achieved

04:00:21 any of the UN sustainable development goals, we haven’t kept any of the new categories

04:00:26 of tech from making arms races, so our global coordination is not adequate to the problem

04:00:30 landscape.

04:00:32 So we need fundamentally better problem solving processes, a market or a state is a problem

04:00:36 solving process.

04:00:37 We need better ones that can do the speed and scale of the current issues.

04:00:41 Right now, speed is one of the other big things: by the time we regulated DDT out of

04:00:46 existence, or regulated cigarettes as not for people under 18, they had already killed so many people,

04:00:50 and we had let the market do its thing.

04:00:52 But as Elon has made the point, that won't work for AI: by the time we recognize afterwards

04:00:59 that we have an autopoietic AI that's a problem, you won't be able to reverse it. There are

04:01:02 a number of things where, when you're dealing with tech that is either self replicating

04:01:07 and disintermediates humans to keep going, doesn't need humans to keep going, or you

04:01:11 have tech that just has exponentially fast effects, your regulation has to come early.

04:01:17 It can’t come after the effects have happened, the negative effects have happened because

04:01:23 the negative effects could be too big too quickly.

04:01:25 So we basically need new problem solving processes that do better at being able to internalize

04:01:31 these externalities, and solve the problems on the right time scale and the right geographic

04:01:36 scale.

04:01:37 And those new processes to not be imposed have to emerge from people wanting them and

04:01:44 being able to participate in their development, which is what I would call kind of a new cultural

04:01:48 enlightenment or renaissance that has to happen, where people start understanding the new power

04:01:54 that exponential tech offers, the way that it is actually damaging current governance

04:02:00 structures that we care about, and creating an X-risk landscape, but could also be redirected

04:02:07 towards more protopic purposes, and then saying, how do we rebuild new social institutions?

04:02:13 What are adequate social institutions where we can do participatory governance at scale

04:02:17 and time?

04:02:19 And how can the people actually participate to build those things?

04:02:24 The solution that I see working requires a process like that.

04:02:29 And the result maximizes love.

04:02:32 So again, Elon would be right that love is the answer.

04:02:36 Let me take you back from the scale of societies to the scale that’s far, far more important,

04:02:42 which is the scale of family.

04:02:47 You’ve written a blog post about your dad.

04:02:50 We have various flavors of relationships with our fathers.

04:02:56 What have you learned about life from your dad?

04:03:01 Well, people can read the blog post and see a lot of individual things that I learned

04:03:06 that I really appreciated.

04:03:07 If I was to kind of summarize at a high level, I had a really incredible dad, very, very

04:03:18 unusually positive set of experiences.

04:03:23 We were homeschooled, and he was committed to work from home to be available and prioritize

04:03:28 fathering in a really deep way.

04:03:35 And as a super gifted, super loving, very unique man, he also had his unique issues

04:03:41 that were part of what crafted the unique brilliance, and those things often go together.

04:03:46 And I say that because I think I had some unusual gifts and also some unusual difficulties.

04:03:52 And I think it’s useful for everybody to know their path probably has both of those.

04:03:59 But if I was to say kind of the essence of one of the things my dad taught me across

04:04:05 a lot of lessons was like the intersection of self empowerment, ideas and practices that

04:04:13 self empower, towards collective good, towards some virtuous purpose beyond the self.

04:04:21 And he both said that a million different ways, taught it in a million different ways.

04:04:25 When we were doing construction and he was teaching me how to build a house, we were

04:04:31 putting the wires to the walls before the drywall went on, he made sure that the way

04:04:35 that we put the wires through was beautiful.

04:04:37 Like that the height of the holes was similar, that we twisted the wires in a particular

04:04:44 way.

04:04:45 And it’s like no one’s ever going to see it.

04:04:47 And he’s like, if a job’s worth doing, it’s worth doing well, and excellence is its own

04:04:50 reward.

04:04:51 And those types of ideas.

04:04:52 And if there was a really shitty job to do, he’d say, see the job, do the job, stay out

04:04:55 of the misery.

04:04:56 Just don’t indulge any negativity, do the things that need done.

04:04:59 And so there’s like, there’s an empowerment and a nobility together.

04:05:06 And yeah, extraordinarily fortunate.

04:05:10 Are there ways you think you could have been a better son?

04:05:13 Are there things you regret?

04:05:16 Interesting question.

04:05:18 Let me first say, just as a bit of criticism, what kind of man do you think you are,

04:05:28 not wearing a suit and tie, if a real man should?

04:05:34 Exactly, I agree with your dad on that point.

04:05:36 You mentioned offline that he suggested a real man should wear a suit and tie.

04:05:44 But outside of that, is there ways you could have been a better son?

04:05:48 Maybe next time on your show, I’ll wear a suit and tie.

04:05:52 My dad would be happy about that.

04:06:12 I can answer the question later in life, not early.

04:06:17 I had just a huge amount of respect and reverence for my dad when I was young.

04:06:20 So I was asking myself that question a lot.

04:06:23 So there weren’t a lot of things I knew that I wasn’t seeking to apply.

04:06:32 There was a phase when I went through my kind of individuation, differentiation, where I

04:06:39 had to make him excessively wrong about too many things.

04:06:43 I don’t think I had to, but I did.

04:06:46 And he had a lot of kind of nonstandard model beliefs about things, whether early kind of

04:06:55 ancient civilizations or ideas on evolutionary theory or alternate models of physics.

04:07:03 And they weren’t irrational, but they didn’t all have the standard of epistemic proof that

04:07:10 I would need.

04:07:12 And I went through, and some of them were kind of spiritual ideas as well, I went through

04:07:19 a phase in my early 20s where I kind of had the attitude that Dawkins or a Christopher

04:07:31 Hitchens has that can kind of be like excessively certain and sanctimonious, applying their

04:07:41 reductionist philosophy of science to everything and kind of brutally dismissive.

04:07:47 I’m embarrassed by that phase.

04:07:52 Not to say anything about those men and their path, but for myself.

04:07:57 And so during that time, I was more dismissive of my dad’s epistemology than I would have

04:08:05 liked to have been.

04:08:06 I got to correct that later and apologize for it.

04:08:09 But that’s the first thought that came to mind.

04:08:12 You’ve written the following.

04:08:14 I’ve had the experience countless times, making love, watching a sunset, listening to music,

04:08:22 feeling the breeze, that I would sign up for this whole life and all of its pains just

04:08:29 to experience this exact moment.

04:08:33 This is a kind of wordless knowing.

04:08:37 It’s the most important and real truth I know, that experience itself is infinitely

04:08:42 meaningful and pain is temporary.

04:08:46 And seen clearly, even the suffering is filled with beauty.

04:08:50 I’ve experienced countless lives worth of moments worthy of life, such an unreasonable

04:08:57 fortune.

04:08:58 A few words of gratitude from you, beautifully written.

04:09:03 Are there some beautiful moments?

04:09:05 You have experienced countless lives' worth of those moments, but are there some things

04:09:11 that, in your darker moments, you can go to and relive, to remind yourself

04:09:20 that the whole ride is worthwhile?

04:09:22 Maybe skip the making love part.

04:09:24 We don’t want to know about that.

04:09:27 I mean, I feel unreasonably fortunate that it is such a humongous list because, I mean,

04:09:42 I feel fortunate to have had exposure to practices and philosophies and a way of

04:09:48 seeing things that makes me see things that way.

04:09:50 So I can take responsibility for seeing things in that way and not taking for granted really

04:09:55 wonderful things, but I can’t take credit for being exposed to the philosophies that

04:09:58 even gave me that possibility.

04:10:03 You know, it’s not just with my wife, it’s with every person who I really love when we’re

04:10:10 talking and I look at their face, I, in the context of a conversation, feel overwhelmed

04:10:14 by how lucky I am to get to know them.

04:10:17 And like there’s never been someone like them in all of history and there never will be

04:10:21 again and they might be gone tomorrow, I might be gone tomorrow and like I get this moment

04:10:24 with them.

04:10:25 And when you take in the uniqueness of that fully and the beauty of it, it’s overwhelmingly

04:10:30 beautiful.

04:10:33 And I remember the first time I did a big dose of mushrooms and I was looking at a tree

04:10:42 for a long time and I was just crying with overwhelming how beautiful the tree was.

04:10:47 And it was a tree outside the front of my house that I’d walked by a million times and

04:10:50 never looked at like this.

04:10:52 And it wasn’t the dose of mushrooms where I was hallucinating like where the tree was

04:10:58 purple.

04:10:59 Like the tree still looked like, if I had to describe it, it’s green and it has leaves,

04:11:02 looks like this, but it was way fucking more beautiful, like captivating, than it normally

04:11:08 was.

04:11:09 And I’m like, why is it so beautiful if I would describe it the same way?

04:11:12 And I realized I had no thoughts taking me anywhere else.

04:11:15 Like what it seemed like the mushrooms were doing was just actually shutting the narrative

04:11:19 off that would have me be distracted so I could really see the tree.

04:11:22 And then I’m like, fuck, when I get off these mushrooms, I’m going to practice seeing the

04:11:25 tree because it’s always that beautiful and I just miss it.

04:11:29 And so I practice being with it and quieting the rest of the mind and then being like,

04:11:33 wow.

04:11:34 And if it’s not mushrooms, like people have peak experiences where they’ll see life and

04:11:39 how incredible it is.

04:11:40 It’s always there.

04:11:41 It’s funny that I had this exact same experience on quite a lot of mushrooms just sitting alone

04:11:49 and looking at a tree and exactly as you described it, appreciating the undistorted beauty of

04:11:55 it.

04:11:56 And it's funny to me that here's two humans, very different, with very different journeys,

04:12:02 who at some moment in time were both looking at a tree like idiots for hours, just in awe

04:12:09 and happy to be alive.

04:12:10 And yeah, even just that moment alone is worth living for. But you did say humans, and we

04:12:17 have a moment together as two humans, and you mentioned shots, so I have to ask, what are

04:12:25 we looking at?

04:12:27 When I went to go get a smoothie before coming here, I got you a keto smoothie that you didn’t

04:12:32 want because you’re not just keto, but fasting.

04:12:35 But I saw the thing with you and your dad where you did shots together and this place

04:12:39 happened to have shots of ginger, turmeric, cayenne juice of some kind.

04:12:45 So I didn’t necessarily plan it for being on the show, I just brought it, but we can

04:12:52 do it that way.

04:12:53 I think we shall toast like heroes, Daniel.

04:12:59 It’s a huge honor.

04:13:00 What do we toast to?

04:13:01 We toast to this moment, this unique moment that we get to share together.

04:13:06 I’m very grateful to be here in this moment with you and yeah, I’m grateful that you invited

04:13:11 me here.

04:13:12 We met for the first time, and for the good and the bad, I will never be the same.

04:13:23 That is really interesting.

04:13:24 That feels way healthier than the vodka my dad and I were drinking.

04:13:29 So I feel like a better man already, Daniel, this is one of the best conversations I’ve

04:13:33 ever had.

04:13:34 I can’t wait to have many more.

04:13:36 Likewise.

04:13:37 This has been an amazing experience.

04:13:39 Thank you for wasting all your time today.

04:13:40 I want to say, in terms of what you're mentioning, that you work in machine learning

04:13:48 and have the optimism that wants to look at the issues, but wants to look at how this increased

04:13:55 technological power could be applied to solving them, and even thinking about the broadcast

04:14:00 of, can I help people understand the issues better and help organize them?

04:14:05 Like fundamentally you're oriented, like Wikipedia, from what I see, to really try to tend to the information

04:14:13 commons without another agentic interest distorting it.

04:14:18 And for you to be able to get guys like Lee Smolin and Roger Penrose, like the greatest

04:14:24 thinkers that are alive, and have them on the show, when most people would never be

04:14:29 exposed to them and talk about it in a way that people can understand, I think it’s an

04:14:34 incredible service.

04:14:35 I think you’re doing great work.

04:14:37 So I was really happy to hear from you.

04:14:39 Thank you, Daniel.

04:14:41 Thanks for listening to this conversation with Daniel Schmachtenberger and thank you

04:14:44 to Ground News, NetSuite, Four Sigmatic, Magic Spoon, and BetterHelp.

04:14:51 Check them out in the description to support this podcast.

04:14:55 And now let me leave you with some words from Albert Einstein.

04:14:59 I know not with what weapons World War III will be fought, but World War IV will be fought

04:15:05 with sticks and stones.

04:15:08 Thank you for listening and hope to see you next time.