Joscha Bach: Artificial Consciousness and the Nature of Reality #101

Transcript

00:00:00 The following is a conversation with Joscha Bach, VP of Research at the AI Foundation,

00:00:05 with a history of research positions at MIT and Harvard. Joscha is one of the most unique

00:00:12 and brilliant people in the artificial intelligence community, exploring the workings

00:00:16 of the human mind, intelligence, consciousness, life on Earth, and the possibly simulated

00:00:23 fabric of our universe. I could see myself talking to Joscha many times in the future.

00:00:28 Quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting the

00:00:35 podcast by signing up at expressvpn.com slash LexPod and downloading Cash App and using code

00:00:42 LEXPodcast. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,

00:00:50 review it with five stars on Apple Podcast, support it on Patreon, or simply connect with

00:00:54 me on Twitter at LexFridman. Since this comes up more often than I ever would have imagined,

00:01:02 I challenge you to try to figure out how to spell my last name without using the letter E.

00:01:08 And it’ll probably be the correct way. As usual, I’ll do a few minutes of ads now and never

00:01:14 any ads in the middle that can break the flow of the conversation. This show is sponsored by

00:01:19 ExpressVPN. Get it at expressvpn.com slash LexPod to support this podcast and to get an extra three

00:01:27 months free on a one year package. I’ve been using ExpressVPN for many years. I love it.

00:01:34 I think ExpressVPN is the best VPN out there. They told me to say it, but I think it actually

00:01:40 happens to be true. It doesn’t log your data, it’s crazy fast, and it’s easy to use. Literally,

00:01:46 just one big power on button. Again, for obvious reasons, it’s really important that they don’t

00:01:52 log your data. It works on Linux and everywhere else too. Shout out to my favorite flavor of Linux,

00:01:59 Ubuntu MATE 20.04. Once again, get it at expressvpn.com slash LexPod to support this podcast and to get

00:02:08 an extra three months free on a one year package. This show is presented by Cash App, the number one

00:02:17 finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money to

00:02:23 friends, buy Bitcoin, and invest in the stock market with as little as one dollar. Since Cash

00:02:29 App does fractional share trading, let me mention that the order execution algorithm that works

00:02:34 behind the scenes to create the abstraction of the fractional orders is an algorithmic marvel.

00:02:40 So big props to the Cash App engineers for taking a step up to the next layer of abstraction over

00:02:45 the stock market, making trading more accessible for new investors and diversification much easier.

00:02:51 So again, if you get Cash App from the App Store or Google Play and use the code LexPodcast,

00:02:57 you get $10, and Cash App will also donate $10 to FIRST, an organization that is helping

00:03:03 advance robotics and STEM education for young people around the world.

00:03:09 And now here’s my conversation with Joscha Bach. As you’ve said, you grew up in a forest in East

00:03:16 Germany, just as we’re talking about off mic, to parents who are artists. And now I think,

00:03:23 at least to me, you’ve become one of the most unique thinkers in the AI world.

00:03:27 So can we try to reverse engineer your mind a little bit?

00:03:30 What were the key philosopher, scientist ideas, maybe even movies or just realizations that

00:03:38 had an impact on you when you were growing up that kind of led to the trajectory,

00:03:43 or were the key sort of crossroads in the trajectory of your intellectual development?

00:03:49 My father came from a long tradition of architects, a distant branch of the Bach family.

00:03:56 And so basically, he was technically a nerd. And nerds need to interface in society with

00:04:03 nonstandard ways. Sometimes I define a nerd as somebody who thinks that the purpose of

00:04:09 communication is to submit your ideas to peer review. And normal people understand that the

00:04:15 primary purpose of communication is to negotiate alignment. And these purposes tend to conflict,

00:04:21 which means that nerds have to learn how to interact with society at large.

00:04:26 Who is the reviewer in the nerd’s view of communication?

00:04:31 Everybody who you consider to be a peer. So whatever hapless individual is around,

00:04:36 well, you would try to make him or her the gift of information.

00:04:42 Okay. So you’re now, by the way, my research misinformed me. So your father is an architect or an artist?

00:04:50 So he did study architecture. But basically, my grandfather made the wrong decision. He married

00:04:58 an aristocrat and was drawn into the war. And he came back after 15 years. So basically, my father

00:05:07 was not parented by a nerd, but by somebody who tried to tell him what to do, and expected him

00:05:14 to do what he was told. And he was unable to. He’s unable to do things if he’s not intrinsically

00:05:21 motivated. So in some sense, my grandmother broke her son. And her son responded when he became an

00:05:27 architect by becoming an artist. So he built Hundertwasser architecture. He built houses without

00:05:33 right angles. He built lots of things that didn’t work in the more brutalist traditions of eastern

00:05:38 Germany. And so he bought an old watermill, moved out to the countryside, and did only what he wanted

00:05:44 to do, which was art. Eastern Germany was perfect for bohème, because you had complete material

00:05:50 safety. Food was heavily subsidized, healthcare was free. You didn’t have to worry about rent or

00:05:55 pensions or anything. So that’s the socialist, communist side. Yes. And the other thing is,

00:06:00 it was almost impossible not to be in political disagreement with your government, which is very

00:06:04 productive for artists. So everything that you do is intrinsically meaningful, because it will

00:06:08 always touch on the deeper currents of society of culture and be in conflict with it and tension

00:06:14 with it. And you will always have to define yourself with respect to this. So what impacted

00:06:19 your father, this outside-of-the-box thinker against the government, against the world, artist?

00:06:26 He was actually not a thinker. He was somebody who only got self aware to the degree that he

00:06:31 needed to make himself functional. So in some sense, this was also the late 1960s. And he was

00:06:39 in some sense a hippie. So he became a one person cult. He lived out there in his kingdom. He built

00:06:44 big sculpture gardens and started many avenues of art and so on and convinced a woman to live with

00:06:53 him. She was also an architect and she adored him and decided to share her life with him.

00:06:58 And I basically grew up in a big cave full of books. I was almost feral. And I was bored out

00:07:05 there. It was very, very beautiful, very quiet, and quite lonely. So I started to read. And by

00:07:11 the time I came to school, I had read everything until fourth grade and then some. And there was

00:07:16 not a real way for me to relate to the outside world. And I couldn’t quite put my finger on why.

00:07:21 And today I know it was because I was a nerd, obviously, and I was the only nerd around. So

00:07:26 there were no other kids like me. And there was nobody interested in physics or computing or

00:07:32 mathematics and so on. And this village school that I went to was basically a nice school.

00:07:38 Kids were nice to me. I was not beaten up, but I also didn’t make many friends or

00:07:42 build deep relationships. That only happened starting from ninth grade when I went into a

00:07:47 school for mathematics and physics. Do you remember any key books from this moment?

00:07:51 I basically read everything. So I went to the library and I worked my way through the

00:07:56 children’s and young adult sections. And then I read a lot of science fiction,

00:08:01 for instance, Stanisław Lem, basically the great author of cybernetics, has influenced me. Back

00:08:06 then, I didn’t see him as a big influence because everything that he wrote seemed to be so natural

00:08:10 to me. And it’s only later that I contrasted it with what other people wrote. Another thing that

00:08:16 was very influential on me were the classical philosophers and also the literature of romanticism.

00:08:22 So German poetry and art, Droste-Hülshoff and Heine and up to Hesse and so on.

00:08:29 Hesse. I love Hesse. So at which point do the classical philosophers end? At this point,

00:08:35 we’re in the 21st century. What’s the latest classical philosopher? Does this stretch through

00:08:41 even as far as Nietzsche or is this, are we talking about Plato and Aristotle?

00:08:45 I think that Nietzsche is the classical equivalent of a shit poster.

00:08:52 He’s very smart and easy to read, but he’s not so much trolling others. He’s trolling himself

00:08:57 because he was at odds with the world. Largely his romantic relationships didn’t work out.

00:09:02 He got angry and he basically became a nihilist.

00:09:06 Isn’t that a beautiful way to be as an intellectual is to constantly be trolling yourself,

00:09:11 to be in that conflict, in that tension?

00:09:14 I think it’s a lack of self awareness. At some point, you have to understand the

00:09:18 comedy of your own situation. If you take yourself seriously and you are not functional,

00:09:23 it ends in tragedy as it did for Nietzsche.

00:09:25 I think you think he took himself too seriously in that tension.

00:09:29 And as you find the same thing in Hesse and so on, this Steppenwolf syndrome is classic

00:09:34 adolescence where you basically feel misunderstood by the world and you don’t understand that all the

00:09:38 misunderstandings are the result of your own lack of self awareness because you think that you are

00:09:44 a prototypical human and the others around you should behave the same way as you expect them

00:09:48 based on your innate instincts and it doesn’t work out and you become a transcendentalist

00:09:53 to deal with that. So it’s very, very understandable, and I have great sympathy for this

00:09:58 to the degree that I can have sympathy for my own intellectual history.

00:10:02 But you have to grow out of it.

00:10:04 So as an intellectual, a life well lived, a journey well traveled is one where you don’t

00:10:09 take yourself seriously from that perspective?

00:10:11 No, I think that you are neither serious nor not serious yourself because you need to become

00:10:17 unimportant as a subject. That is, if you are a philosopher, belief is not a verb.

00:10:24 You don’t do this for the audience and you don’t do it for yourself.

00:10:27 You have to submit to the things that are possibly true and you have to follow wherever

00:10:32 your inquiry leads. But it’s not about you. It has nothing to do with you.

00:10:36 So do you think then people like Ayn Rand believed sort of an idea of there’s objective

00:10:42 truth. So what’s your sense in the philosophical, if you remove yourself as objective from the

00:10:48 picture, you think it’s possible to actually discover ideas that are true or are we just

00:10:52 in a mesh of relative concepts that are neither true nor false? It’s just a giant mess.

00:10:57 You cannot define objective truth without understanding the nature of truth in the first

00:11:02 place. So what does the brain mean by saying that it discovers something as truth? So for instance,

00:11:08 a model can be predictive or not predictive. Then there can be a sense in which a mathematical

00:11:14 statement can be true because it’s defined as true under certain conditions. So it’s basically

00:11:19 a particular state that a variable can have in a simple game. And then you can have a

00:11:26 correspondence between systems and talk about truth, which is again, a type of model correspondence.

00:11:31 And there also seems to be a particular kind of ground truth. So for instance,

00:11:34 you’re confronted with the enormity of something existing at all. It’s stunning when you realize

00:11:41 something exists rather than nothing. And this seems to be true. There’s an absolute truth in

00:11:47 the fact that something seems to be happening. Yeah, that to me is a showstopper. I could just

00:11:52 think about that idea and be amazed by that idea for the rest of my life and not going any farther

00:11:57 because I don’t even know the answer to that. Why does anything exist at all?

00:12:01 Well, the easiest answer is existence is the default, right? So this is the lowest number of

00:12:04 bits that you would need to encode this. Whose answer?

00:12:07 The simplest answer to this is that existence is the default.

00:12:11 What about nonexistence? I mean, that seems…

00:12:14 Nonexistence might not be a meaningful notion in this sense. So in some sense,

00:12:18 if everything that can exist exists, for something to exist, it probably needs to be implementable.

00:12:23 The only thing that can be implemented is finite automata. So maybe the whole of existence is the

00:12:28 superposition of all finite automata and we are in some region of the fractal that has the properties

00:12:33 that it can contain us. What does it mean to be a superposition of finite automata?

00:12:40 Superposition of all possible rules? Imagine that every automaton is basically an operator

00:12:45 that acts on some substrate and as a result, you get emergent patterns.

00:12:50 What’s the substrate?

00:12:52 I have no way to know. But some substrate.

00:12:55 It’s something that can store information.

00:12:58 Something that can store information, there’s an automaton.

00:13:00 Something that can hold state.

00:13:01 Still, it doesn’t make sense to me the why that exists at all. I could just sit there

00:13:06 with a beer or a vodka and just enjoy the fact, pondering the why.

00:13:11 It may not have a why. This might be the wrong direction to ask into this. So there could be no

00:13:16 relation in the why direction without asking for a purpose or for a cause. It doesn’t mean

00:13:22 that everything has to have a purpose or cause. So we mentioned some philosophers in that early,

00:13:28 just taking a brief step back into that. So we asked ourselves when did classical philosophy end?

00:13:34 I think for Germany, it largely ended with the first revolution.

00:13:38 That’s basically when we ended the monarchy and started a democracy. And at this point,

00:13:45 we basically came up with a new form of government that didn’t have a good sense of

00:13:50 this new organism that society wanted to be. And in a way, it decapitated the universities.

00:13:56 So the universities went on through modernism like a headless chicken.

00:13:59 At the same time, democracy failed in Germany and we got fascism as a result.

00:14:04 And it burned down things in a similar way as Stalinism burned down intellectual traditions

00:14:08 in Russia. And Germany, both Germanys have not recovered from this. Eastern Germany had this

00:14:14 vulgar dialectic materialism and Western Germany didn’t get much more edgy than Habermas. So in

00:14:20 some sense, both countries lost their intellectual traditions and killing off and driving out the

00:14:24 Jews didn’t help. Yeah. So that was the end of really rigorous what you would say is classical

00:14:33 philosophy. There’s also this thing that in some sense, the low hanging fruits in philosophy

00:14:39 were mostly reaped. And the last big thing that we discovered was the constructivist turn

00:14:46 in mathematics. So to understand that the parts of mathematics that work are computation,

00:14:51 was a very significant discovery in the first half of the 20th century. And it hasn’t

00:14:57 fully permeated philosophy and even physics yet. Physicists checked out the code libraries

00:15:02 for mathematics before constructivism became universal. What’s constructivism? What are you

00:15:08 referring to, Gödel’s incompleteness theorem, those kinds of ideas? So basically, Gödel himself,

00:15:13 I think, didn’t get it yet. Hilbert could get it. Hilbert saw that, for instance, Cantor’s

00:15:18 set theoretic experiments in mathematics led into contradictions. And he noticed that with the

00:15:25 current semantics, we cannot build a computer in mathematics that runs mathematics without crashing.

00:15:30 And Gödel could prove this. And so what Gödel could show is using classical mathematical

00:15:35 semantics, you run into contradictions. And because Gödel strongly believed in these semantics and

00:15:40 more than what he could observe and so on, he was shocked. It basically shook his world to the core

00:15:46 because in some sense, he felt that the world has to be implemented in classical mathematics.

00:15:50 And for Turing, it wasn’t quite so bad. I think that Turing could see that the solution is to

00:15:56 understand that mathematics was computation all along, which means you, for instance, pi

00:16:01 in classical mathematics is a value. It’s also a function, but it’s the same thing. And in

00:16:08 computation, a function is only a value when you can compute it. And if you cannot compute the last

00:16:13 digit of pi, you only have a function. You can plug this function into your local sun, let it run

00:16:18 until the sun burns out. This is it. This is the last digit of pi you will know.
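
A small illustration of this point, pi as a process rather than a finished value: the following Python sketch, using Gibbons’ unbounded spigot algorithm (an assumed choice for this example, not something named in the conversation), streams decimal digits of pi for exactly as long as you keep computing.

    # Pi as a process rather than a finished value: an unbounded spigot
    # (Gibbons' algorithm) that yields decimal digits of pi one at a time.
    # You get exactly as many digits as you are willing to spend compute on.
    def pi_digits():
        q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
        while True:
            if 4 * q + r - t < n * t:
                yield n  # this digit is settled and can no longer change
                q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
            else:
                q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                    (q * (7 * k + 2) + r * l) // (t * l), l + 2)

    gen = pi_digits()
    print("".join(str(next(gen)) for _ in range(15)))  # 314159265358979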

00:16:22 But it also means there can be no process in the physical universe or in any physically realized computer that depends

00:16:28 on having known the last digit of pi. Which means there are parts of physics that are defined in

00:16:34 such a way that cannot strictly be true, because assuming that this could be true leads into

00:16:38 contradictions. So I think putting computation at the center of the world view is actually the

00:16:44 right way to think about it. Yes. And Wittgenstein could see it. And Wittgenstein basically preempted

00:16:49 the logicist program of AI that Minsky started later, like 30 years later. Turing was actually

00:16:55 a pupil of Wittgenstein. I didn’t know there’s any connection between Turing and Wittgenstein.

00:17:00 Wittgenstein even cancelled some classes when Turing was not present because he thought it

00:17:03 was not worth spending the time with the others. If you read the Tractatus, it’s a very beautiful

00:17:09 book, like basically one thought on 75 pages. It’s very non typical for philosophy because it doesn’t

00:17:15 have arguments in it and it doesn’t have references in it. It’s just one thought that is not intending

00:17:21 to convince anybody. He says, it’s mostly for people that had the same insight as me,

00:17:26 just spell it out. And this insight is there is a way in which mathematics and philosophy

00:17:32 ought to meet. Mathematics tries to understand the domain of all languages by starting with those

00:17:37 that are so formalizable that you can prove all the properties of the statements that you make.

00:17:42 But the price that you pay is that your language is very, very simple. So it’s very hard to say

00:17:46 something meaningful in mathematics. And it looks complicated to people, but it’s far less complicated

00:17:52 than what our brain is casually doing all the time when it makes sense of reality. And philosophy is

00:17:58 coming from the top. So it’s mostly starting from natural languages with vaguely defined concepts.

00:18:03 And the hope is that mathematics and philosophy can meet at some point. And Wittgenstein was trying

00:18:08 to make them meet. And he already understood that, for instance, you could express everything with

00:18:12 the NAND calculus, that you could reduce the entire logic to NAND gates as we do in our modern

00:18:17 computers. So in some sense, he already understood Turing universality before Turing spelled it out.
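
A quick sketch of that reduction in Python (the helper names here are just illustrative): the usual logical connectives rebuilt from NAND alone, checked against Python’s own operators.

    # Functional completeness of NAND: rebuild NOT, AND, OR from NAND only.
    def nand(a, b): return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))

    # Verify the rebuilt gates on every combination of truth values.
    for a in (False, True):
        for b in (False, True):
            assert not_(a) == (not a)
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)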

00:18:22 I think when he wrote the Tractatus, he didn’t understand yet that the idea was so important

00:18:26 and significant. And I suspect then when Turing wrote it out, nobody cared that much. Turing was

00:18:32 not that famous when he lived. It was mostly his work in decrypting the German codes that made him

00:18:39 famous or gave him some notoriety. But this saint status that he has in computer science right now

00:18:44 and in AI is something that I think he acquired only later. That’s kind of interesting. Do

00:18:48 you think of computation and computer science? And you kind of represent that to me is maybe

00:18:53 that’s the modern day. You in a sense are the new philosopher by sort of the computer scientist

00:19:00 who dares to ask the bigger questions that philosophy originally started is the new

00:19:06 philosopher. Certainly not me. I think I’m mostly still this child that grows up in a

00:19:12 very beautiful valley and looks at the world from the outside and tries to understand what’s going

00:19:16 on. And my teachers tell me things and they largely don’t make sense. So I have to make my

00:19:20 own models. I have to discover the foundations of what the others are saying. I have to try to fix

00:19:25 them to be charitable. I try to understand what they must have thought originally or what their

00:19:30 teachers or their teacher’s teachers must have thought until everything got lost in translation

00:19:34 and how to make sense of the reality that we are in. And whenever I have an original idea,

00:19:38 I’m usually late to the party by say 400 years. And the only thing that’s good is that the parties

00:19:43 get smaller and smaller the older I get and the more I explore it. The parties get smaller and

00:19:49 more exclusive and more exclusive. So it seems like one of the key qualities of your upbringing

00:19:55 was that you were not tethered, whether it’s because of your parents or in general,

00:20:01 maybe something within your mind, some genetic material, you were not tethered to the ideas

00:20:07 of the general populace, which is actually a unique property. We’re kind of the education

00:20:14 system and whatever, not education system, just existing in this world forces certain sets of

00:20:20 ideas onto you. Can you disentangle that? Why are you not so tethered? Even in your work today,

00:20:28 you seem to not care about, perhaps, a best paper at NeurIPS, right? Being tethered to particular

00:20:38 things that current today in this year, people seem to value as a thing you put on your CV and

00:20:44 resume. You’re a little bit more outside of that world, outside of the world of ideas that people

00:20:50 are especially focused in the benchmarks of today, the things. Can you disentangle that?

00:20:56 Because I think that’s inspiring. And if there were more people like that,

00:20:59 we might be able to solve some of the bigger problems that AI dreams to solve.

00:21:05 And there’s a big danger in this because in a way you are expected to marry into an

00:21:10 intellectual tradition and enter this tradition through a particular school. If everybody comes up

00:21:16 with their own paradigms, the whole thing is not cumulative as an enterprise. So in some sense,

00:21:22 you need a healthy balance. You need paradigmatic thinkers and you need people that work within

00:21:26 given paradigms. Basically, scientists today define themselves largely by methods. And it’s almost a

00:21:32 disease that we think of a scientist as somebody who was convinced by their guidance counselor,

00:21:38 that they should join a particular discipline and then they find a good mentor to learn the

00:21:42 right methods. And then they are lucky enough and privileged enough to join the right team. And then

00:21:47 their name will show up on influential papers. But we also see that there are diminishing returns

00:21:52 with this approach. And when our field, computer science and AI started, most of the people that

00:21:59 joined this field had interesting opinions. And today’s thinkers in AI either don’t have interesting

00:22:05 opinions at all, or these opinions are inconsequential for what they’re actually

00:22:08 doing. Because what they’re doing is they apply the state of the art methods with a small epsilon.

00:22:13 And this is often a good idea if you think that this is the best way to make progress. And for me,

00:22:21 it’s first of all, very boring. If somebody else can do it, why should I do it? If the current

00:22:27 methods of machine learning lead to strong AI, why should I be doing it? I will just wait until

00:22:32 they’re done and wait until they do this on the beach or read interesting books or write some

00:22:38 and have fun. But if you don’t think that we are currently doing the right thing, if we are missing

00:22:45 some perspectives, then it’s required to think outside of the box. It’s also required to understand

00:22:51 the boxes. But it’s necessary to understand what worked and what didn’t work and for what reasons.

00:22:59 So you have to be willing to ask new questions and design new methods whenever you want to

00:23:03 answer them. And you have to be willing to dismiss the existing methods if you think that they’re

00:23:09 not going to yield the right answers. It’s very bad career advice to do that. So maybe to briefly

00:23:16 stay for one more time in the early days, when would you say for you was the dream

00:23:24 before we dive into the discussions that we just almost started, when was the dream to understand

00:23:30 or maybe to create human level intelligence born for you?

00:23:35 I think that you can see AI largely today as advanced information processing. If you would

00:23:44 change the acronym of AI into that, most people in the field would be happy. It would not change

00:23:48 anything what they’re doing. We’re automating statistics and many of the statistical models

00:23:54 are more advanced than what statisticians had in the past. And it’s pretty good work. It’s very

00:23:59 productive. And the other aspect of AI is philosophical project. And this philosophical

00:24:04 project is very risky and very few people work on it and it’s not clear if it succeeds.

00:24:10 So first of all, you keep throwing sort of a lot of really interesting ideas and I have to

00:24:16 pick which ones we go with. But first of all, you use the term information processing,

00:24:22 just information processing, as if it’s the mere muck of existence, as if it’s the epitome

00:24:31 of existence, that the entirety of the universe might be information processing, that consciousness

00:24:36 and intelligence might be information processing. So that maybe you can comment on if the advanced

00:24:42 information processing is a limiting kind of realm of ideas. And then the other one is,

00:24:48 what do you mean by the philosophical project? So I suspect that general intelligence is the

00:24:54 result of trying to solve general problems. So intelligence, I think, is the ability to model.

00:25:00 It’s not necessarily goal directed rationality or something. Many intelligent people are bad at this,

00:25:06 but it’s the ability to be presented with a number of patterns and see a structure in those patterns

00:25:12 and be able to predict the next set of patterns, to make sense of things. And

00:25:17 some problems are very general. Usually intelligence serves control, so you make these

00:25:22 models for a particular purpose of interacting as an agent with the world and getting certain results.

00:25:26 But the intelligence itself is in a sense instrumental to something, but by itself it’s

00:25:31 just the ability to make models. And some of the problems are so general that the system that makes

00:25:36 them needs to understand what itself is and how it relates to the environment. So as a child,

00:25:42 for instance, you notice you do certain things despite you perceiving yourself as wanting

00:25:47 different things. So you become aware of your own psychology. You become aware of the fact that you

00:25:53 have complex structure in yourself and you need to model yourself, to reverse engineer yourself,

00:25:57 to be able to predict how you will react to certain situations and how you deal with yourself

00:26:02 in relationship to your environment. And this process, this project, if you reverse engineer

00:26:08 yourself and your relationship to reality and the nature of a universe that can continue, if you go

00:26:12 all the way, this is basically the project of AI, or you could say the project of AI is a very

00:26:17 important component in it. The Turing test, in a way, is you ask a system, what is intelligence?

00:26:24 If that system is able to explain what it is, how it works, then you should assign it the property

00:26:32 of being intelligent in this general sense. So the test that Turing was administering

00:26:36 in a way, I don’t think that he couldn’t see it, but he didn’t express it yet in the original 1950

00:26:41 paper, is that he was trying to find out whether he was generally intelligent. Because in order to

00:26:47 take this test, the rub is, of course, you need to be able to understand what that system is saying.

00:26:51 And we don’t yet know if we can build an AI. We don’t yet know if we are generally intelligent.

00:26:56 Basically, you win the Turing test by building an AI. Yes. So in a sense, hidden within the Turing

00:27:03 test is a kind of recursive test. Yes, it’s a test on us. The Turing test is basically

00:27:08 a test of the conjecture, whether people are intelligent enough to understand themselves.

00:27:14 Okay. But you also mentioned a little bit of a self awareness and then the project of AI.

00:27:18 Do you think this kind of emergent self awareness is one of the fundamental aspects of intelligence?

00:27:25 So as opposed to goal oriented, as you said, kind of puzzle solving, is

00:27:31 coming to grips with the idea that you’re an agent in the world.

00:27:37 I find that many highly intelligent people are not very self aware, right? So self awareness

00:27:42 and intelligence are not the same thing. And you can also be self aware if you have good priors,

00:27:47 especially, without being especially intelligent. So you don’t need to be very good at solving

00:27:52 puzzles if the system that you are already implements the solution.

00:27:56 But I do find intelligence, you kind of mentioned children, right? Is that the fundamental project

00:28:03 of AI is to create the learning system that’s able to exist in the world. So you kind of drew

00:28:11 a difference between self awareness and intelligence. And yet you said that the self

00:28:18 awareness seems to be important for children. So I call this ability to make sense of the

00:28:23 world and your own place in it, so that you are able to understand what you’re doing in this world,

00:28:28 sentience. And I would distinguish sentience from intelligence because sentience is

00:28:34 possessing certain classes of models. And intelligence is a way to get to these models

00:28:39 if you don’t already have them. I see. So can you maybe pause a bit and try to

00:28:47 answer the question that we just said we may not be able to answer? And it might be a recursive

00:28:53 meta question of what is intelligence? I think that intelligence is the ability to make models.

00:28:59 So, models. I think it’s useful to take an example, very popular now: neural networks form representations

00:29:08 of a large scale data set. They form models of those data sets. When you say models and look

00:29:17 at today’s neural networks, what is the difference in how you’re thinking about what is intelligent

00:29:22 in saying that intelligence is the process of making models? Two aspects to this question. One

00:29:29 is the representation. Is the representation adequate for the domain that we want to represent?

00:29:39 The other one is: is the type of model that you arrive at adequate? So basically, are you

00:29:45 modeling the correct domain? I think in both of these cases, modern AI is lacking still. I think

00:29:53 that I’m not saying anything new here. I’m not criticizing the field. Most of the people that

00:29:58 design our paradigms are aware of that. One aspect that we’re missing is unified learning.

00:30:05 When we learn, we at some point discover that everything that we sense is part of the same

00:30:10 object, which means we learn it all into one model and we call this model the universe.

00:30:14 So the experience of the world that we are embedded in is not a secret direct view onto physical

00:30:19 reality. Physical reality is a weird quantum graph that we can never experience or get access to.

00:30:24 But it has these properties that it can create certain patterns that are our systemic interface to

00:30:29 the world. And we make sense of these patterns and the relationship between the patterns that

00:30:33 we discover is what we call the physical universe. So at some point in our development as a nervous

00:30:40 system, we discover that everything that we relate to in the world can be mapped to a region in the

00:30:47 same three dimensional space, by and large. We now know in physics that this is not quite true.

00:30:52 The world is not actually three dimensional, but the world that we are entangled with at the level

00:30:56 which we are entangled with is largely a flat three dimensional space. And so this is the

00:31:02 model that our brain is intuitively making. And this is, I think, what gave rise to this intuition

00:31:07 of res extensa of this material world, this material domain. It’s one of the mental domains,

00:31:12 but it’s just the class of all models that relate to this environment, this three dimensional

00:31:17 physics engine in which we are embedded. Physics engine in which we’re embedded. I love that. Just

00:31:22 slowly pause. So the quantum graph, I think you called it, which is the real world, which you

00:31:32 can never get access to, there’s a bunch of questions I want to sort of disentangle that.

00:31:37 But maybe one useful one, one of your recent talks I looked at, can you just describe the basics?

00:31:43 Can you talk about what is dualism? What is idealism? What is materialism? What is functionalism?

00:31:49 And what connects with you most in terms of, because you just mentioned there’s a reality

00:31:53 we don’t have access to. Okay. What does that even mean? And why don’t we get access to it?

00:32:00 Aren’t we part of that reality? Why can’t we access it? So the particular trajectory that

00:32:05 mostly exists in the West is the result of our indoctrination by a cult for 2000 years.

00:32:11 A cult? Which one? Oh, 2000 years. The Catholic cult mostly. And for better or worse,

00:32:15 it has created or defined many of the modes of interaction that we have that has created

00:32:20 the society. But it has also in some sense scarred our rationality. And the intuition that exists,

00:32:29 if you would translate the mythology of the Catholic church into the modern world is that

00:32:35 the world in which you and me interact is something like a multiplayer role playing adventure. And the

00:32:41 money and the objects that we have in this world, this is all not real. Or as Eastern philosophers

00:32:48 would say, it’s Maya. It’s just stuff that appears to be meaningful. And this embedding in this

00:32:54 meaning, if you believe in it, is samsara. It’s basically the identification with the needs of

00:33:00 the mundane, secular, everyday existence. And the Catholics also introduced the notion of

00:33:06 higher meaning, the sacred. And this existed before, but eventually the natural shape of God

00:33:12 is the Platonic form of the civilization that you’re part of. It’s basically the superorganism

00:33:17 that is formed by the individuals as an intentional agent. And basically, the Catholics

00:33:22 used a relatively crude mythology to implement software on the minds of people and get the

00:33:28 software synchronized to make them walk in lockstep, to basically get this God online

00:33:34 and to make it efficient and effective. And I think God technically is just a self that

00:33:40 spans multiple brains as opposed to your and my self, which mostly exists just on one brain.

00:33:45 Right? And so in some sense, you can construct a self functionally, as a function that is implemented

00:33:50 by brains that exists across brains. And this is a God with a small g.

00:33:54 That’s one of the things that, you know, Yuval Harari kind of talks about.

00:33:59 This is one of the nice features of our brains. It seems that we can

00:34:03 all download the same piece of software like God in this case and kind of share it.

00:34:07 Yeah. So basically you give everybody a spec and the mathematical constraints

00:34:12 that are intrinsic to information processing,

00:34:16 make sure that given the same spec, you come up with a compatible structure.

00:34:20 Okay. So that’s, there’s the space of ideas that we all share. And we think that’s kind

00:34:24 of the mind. But that’s separate from the idea from Christianity,

00:34:32 from religion, which is that there’s a separate thing beyond the mind.

00:34:35 There is a real world. And this real world is the world in which God exists.

00:34:39 God is the coder of the multiplayer adventure, so to speak. And we are all players in this game.

00:34:45 And that’s dualism.

00:34:47 Yes. But the aspect is that the mental realm exists in a different implementation

00:34:53 than the physical realm. And the mental realm is real. And a lot of people have this intuition

00:34:59 that there is this real room in which you and me talk and speak right now, then comes a layer of

00:35:04 physics and abstract rules and so on. And then comes another real room where our souls are

00:35:10 and where our true form is, the thing that gives us phenomenal experience. And this is, of course,

00:35:14 a very confused notion that you would get. And it’s basically, it’s the result of connecting

00:35:20 materialism and idealism in the wrong way.

00:35:24 So, okay. I apologize, but I think it’s really helpful if we just try to define,

00:35:30 try to define terms. Like what is dualism? What is idealism? What is materialism? For

00:35:34 people that don’t know.

00:35:34 So the idea of dualism in our cultural tradition is that there are two substances, a mental

00:35:40 substance and a physical substance. And they interact by different rules. And the physical

00:35:46 world is basically causally closed and is built on a low level causal structure. So

00:35:51 they’re basically a bottom level that is causally closed. That’s entirely mechanical

00:35:56 and mechanical in the widest sense. So it’s computational. There’s basically a physical

00:36:00 world in which information flows around and physics describes the laws of how information

00:36:05 flows around in this world.

00:36:06 Would you compare it to like a computer where you have hardware and software?

00:36:10 The computer is a generalization of information flowing around. Basically,

00:36:14 but if you want to discover that there is a universal principle, you can define this

00:36:17 universal machine that is able to perform all the computations. So all these machines

00:36:23 have the same power. This means that you can always define a translation between them,

00:36:27 as long as they have unlimited memory to be able to perform each other’s computations.

00:36:33 So would you then say that materialism is this whole world is just the hardware and

00:36:38 idealism is this whole world is just the software?

00:36:40 Not quite. I think that most idealists don’t have a notion of software yet because software

00:36:46 also comes down to information processing. So what you notice is the only thing that

00:36:51 is real to you and me is this experiential world in which things matter, in which things

00:36:56 have taste, in which things have color, phenomenal content, and so on.

00:37:00 You are bringing up consciousness. Okay.

00:37:02 This is distinct from the physical world in which things have values only in an abstract

00:37:07 sense. And you only look at cold patterns moving around. So how does anything feel like

00:37:15 something? And this connection between the two things is very puzzling to a lot of people,

00:37:19 of course, too many philosophers. So idealism starts out with the notion that mind is primary,

00:37:23 materialism, that matter is primary. And so for the idealist, the material patterns that

00:37:30 we see playing out are part of the dream that the mind is dreaming. And we exist in a mind

00:37:37 on a higher plane of existence, if you want. And for the materialist, there is only this

00:37:43 material thing, and that generates some models, and we are the result of these models. And in

00:37:49 some sense, I don’t think that we should understand, if we understand it properly,

00:37:53 materialism and idealism as a dichotomy, but as two different aspects of the same thing.

00:37:59 So the weird thing is we don’t exist in the physical world. We do exist inside of a story

00:38:04 that the brain tells itself. Okay. Let my information processing take that in.

00:38:15 We don’t exist in the physical world. We exist in the narrative.

00:38:18 Basically, a brain cannot feel anything. A neuron cannot feel anything. They’re physical

00:38:22 things. Physical systems are unable to experience anything. But it would be very useful for the

00:38:26 brain or for the organism to know what it would be like to be a person and to feel something.

00:38:30 Yeah. So the brain creates a simulacrum of such a person that it uses to model the interactions

00:38:36 of the person. It’s the best model of what that brain, this organism thinks it is in relationship

00:38:41 to its environment. So it creates that model. It’s a story, a multimedia novel that the brain

00:38:46 is continuously writing and updating. But you also kind of said that

00:38:50 you said that we kind of exist in that story. Yes.

00:38:53 In that story. What is real in any of this? So again, these terms are… You kind of said

00:39:04 there’s a quantum graph. I mean, what is this whole thing running on then? Is the story…

00:39:11 And is it completely fundamentally impossible to get access to it? Because isn’t the story

00:39:16 supposed to… Isn’t the brain in something existing in some kind of context?

00:39:24 So what we can identify as computer scientists, we can engineer systems and test our theories this

00:39:30 way that might have the necessary insufficient properties to produce the phenomena that we are

00:39:36 observing, which is the self in a virtual world that is generated in somebody’s neocortex that is

00:39:44 contained in the skull of this primate here. And when I point at this, this indexicality is of

00:39:48 course wrong. But I do create something that is likely to give rise to patterns on your retina

00:39:55 that allow you to interpret what I’m saying. But we both know that the world that you and me are

00:40:00 seeing is not the real physical world. What we are seeing is a virtual reality generated in your

00:40:05 brain to explain the patterns on your retina. How close is it to the real world? That’s kind

00:40:10 of the question. Is it when you have people like Donald Hoffman that say that you’re really far

00:40:18 away. The thing we’re seeing, you and I now, that interface we have is very far away from anything.

00:40:24 We don’t even have anything close to the sense of what the real world is. Or is it a very surface

00:40:29 piece of architecture? I imagine you look at the Mandelbrot fractal, this famous thing that

00:40:35 Benoit Mandelbrot discovered. You see an overall shape in there. But if you truly understand it,

00:40:43 you know it’s two lines of code. It’s basically a series that is being tested for divergence

00:40:50 for every point in the complex number plane. And for those where the series is diverging,

00:40:56 you paint this black. And where it’s converging, you don’t. And you get the intermediate colors

00:41:04 by taking how far it diverges. This gives you this shape of the fractal.
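
A minimal Python sketch of the escape-time procedure just described; the grid bounds, iteration cap, and ASCII shading are illustrative choices, not details from the conversation.

    # For every point c in a grid over the complex plane, iterate z -> z^2 + c
    # and test whether the series diverges; shade by how quickly it does.
    def escape_time(c, max_iter=40):
        z = 0j
        for n in range(max_iter):
            if abs(z) > 2.0:      # once |z| exceeds 2, divergence is guaranteed
                return n          # "how far it diverges" -> intermediate shades
            z = z * z + c
        return max_iter           # never escaped within the budget: inside the set

    shades = " .:-=+*#%@"
    for im in range(20, -21, -2):             # imaginary part from 1.0 down to -1.0
        row = ""
        for re in range(-40, 21):             # real part from -2.0 to 1.0
            t = escape_time(complex(re / 20.0, im / 20.0))
            row += "@" if t == 40 else shades[t * (len(shades) - 1) // 40]
        print(row)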

00:41:13 But imagine you live inside of this fractal and you don’t have access to where you are in it. Or you have not

00:41:18 discovered the generator function even. So what you see is, all I can see right now is a spiral.

00:41:23 And this spiral moves a little bit to the right. Is this an accurate model of reality? Yes, it is.

00:41:28 It is an adequate description. You know that there is actually no spiral in the Mandelbrot fractal.

00:41:33 It only appears like this to an observer that is interpreting things as a two dimensional space and

00:41:39 then defines certain regularities in there at a certain scale that it currently observes. Because

00:41:44 if you zoom in, the spiral might disappear and turn out to be something different at a different

00:41:47 resolution. So at this level, you have the spiral. And then you discover the spiral moves to the

00:41:52 right and at some point it disappears. So you have a singularity. At this point, your model is no

00:41:56 longer valid. You cannot predict what happens beyond the singularity. But you can observe again

00:42:01 and you will see it hit another spiral and at this point it disappeared. So we now have a second

00:42:06 order law. And if you make 30 layers of these laws, then you have a description of the world

00:42:11 that is similar to the one that we come up with when we describe the reality around us.

00:42:14 It’s reasonably predictive. It does not cut to the core of it. It does not explain how it’s

00:42:19 being generated, how it actually works. But it’s relatively good to explain the universe that we

00:42:24 are entangled with. But you don’t think the tools of computer science, the tools of physics could

00:42:28 get, could step outside, see the whole drawing and get at the basic mechanism of how the pattern,

00:42:35 the spirals are generated. Imagine you would find yourself embedded into a Mandelbrot fractal and

00:42:40 you try to figure out what works and you somehow have a Turing machine with enough memory to think.

00:42:46 And as a result, you come to this idea, it must be some kind of automaton. And maybe you just

00:42:51 enumerate all the possible automata until you get to the one that produces your reality.

00:42:56 So you can identify necessary and sufficient condition. For instance,

00:42:59 we discover that mathematics itself is the domain of all languages. And then we see that most of

00:43:05 the domains of mathematics that we have discovered are in some sense describing the same fractals.

00:43:10 This is what category theory is obsessed about, that you can map these different domains to each

00:43:14 other. So there are not that many fractals. And some of these have interesting structure and

00:43:19 symmetry breaks. And so you can discover what region of this global fractal you might be embedded

00:43:26 in from first principles. But the only way you can get there is from first principles. So basically

00:43:31 your understanding of the universe has to start with automata and then number theory and then

00:43:35 spaces and so on. Yeah. I think like Stephen Wolfram still dreams that he’ll be able to arrive

00:43:41 at the fundamental rules of the cellular automata or the generalization of which

00:43:46 is behind our universe. Yeah. You’ve said on this topic, you said in a recent conversation

00:43:54 that quote, some people think that a simulation can’t be conscious and only a physical system can,

00:44:00 but they got it completely backward. A physical system cannot be conscious. Only a simulation can

00:44:05 be conscious. Consciousness is a simulated property that simulates itself. Just like you said,

00:44:11 the mind is kind of the, we’ll call it story narrative. There’s a simulation. So our mind

00:44:17 is essentially a simulation. Usually I try to use the terminology so that the mind is basically

00:48:24 the principles that produce the simulation. It’s the software that is implemented by your brain.

00:44:29 And the mind is creating both the universe that we are in and the self, the idea of a person that

00:44:36 is on the other side of attention and is embedded in this world. Why is that important that

00:44:41 idea of a self, why is that an important feature in the simulation? It’s basically a result of

00:44:48 the purpose that the mind has. It’s a tool for modeling, right? We are not actually monkeys. We

00:44:53 are side effects of the regulation needs of monkeys. And what the monkey has to regulate is

00:45:00 the relationship of an organism to an outside world that is in large part also consisting of

00:45:06 other organisms. And as a result, it basically has regulation targets that it tries to get to.

00:45:12 These regulation targets start with priors. They’re basically like unconditional reflexes

00:45:16 that we are more or less born with. And then we can reverse engineer them to make them more

00:45:20 consistent. And then we get more detailed models about how the world works and how to interact with

00:45:24 it. And so these priors that you commit to are largely target values that our needs should

00:45:31 approach, set points. And this deviation from the set point creates some urge, some tension. And we find

00:45:38 ourselves living inside of feedback loops, right? Consciousness emerges over dimensions of

00:45:42 disagreements with the universe, things that you care about, things that are not the way they should be,

00:45:48 but you need to regulate. And so in some sense, the sense of self is the result of all the

00:45:52 identifications that you’re having. And that identification is a regulation target that

00:45:56 you’re committing to. It’s a dimension that you care about, you think is important. And this is

00:46:01 also what locks you in. If you let go of these commitments, of these identifications, you get

00:46:07 free. There’s nothing that you have to do anymore. And if you let go of all of them, you’re completely

00:46:12 free and you can enter nirvana because you’re done. And actually, this is a good time to pause and say

00:46:17 thank you to a friend of mine, Gustav Soderström, who introduced me to your work. I wanted to give

00:46:23 him a shout out. He’s a brilliant guy. And I think the AI community is actually quite amazing. And

00:46:29 Gustav is a good representative of that. You are as well. So I’m glad, first of all, I’m glad the

00:46:34 internet exists, YouTube exists, where I can watch your talks and then get to your book and study

00:46:40 your writing and think about, you know, that’s amazing. Okay. But you’ve kind of described

00:46:46 sort of this emergent phenomenon of consciousness from the simulation. So what about the hard

00:46:52 problem of consciousness? Can you just linger on it? Why does it still feel like, I understand

00:47:02 you’re kind of, the self is an important part of the simulation, but why does the simulation

00:47:08 feel like something? So if you look at a book by, say, George R. R. Martin, where the characters

00:47:14 have plausible psychology and they stand on a hill because they want to conquer the city below

00:47:19 the hill and they’re done in it. And they look at the color of the sky and they are apprehensive

00:47:24 and feel empowered and all these things. Why do they have these emotions? It’s because it’s

00:47:27 written into the story, right? And it’s written into the story because there’s an adequate model of the

00:47:32 person that predicts what they’re going to do next. And the same thing is true for us. So it’s

00:47:37 basically a story that our brain is writing. It’s not written in words. It’s written in perceptual

00:47:43 content, basically multimedia content. And it’s a model of what the person would feel if it existed.

00:47:50 So it’s a virtual person. And you and me happen to be this virtual person. So this virtual person

00:47:56 gets access to the language center and talks about the sky being blue. And this is us.

00:48:01 But hold on a second. Do I exist in your simulation?

00:48:05 You do exist in an almost similar way as me. So there are internal states that are less

00:48:13 accessible for me that you have and so on. And my model might not be completely adequate.

00:48:20 There are also things that I might perceive about you that you don’t perceive. But in some sense,

00:48:25 both you and me are some puppets, two puppets that enact a play in my mind. And I identify

00:48:31 with one of them because I can control one of the puppets directly. And with the other one,

00:48:36 I can create things in between. So for instance, we can go on an interaction that even leads to

00:48:41 a coupling to a feedback loop. So we can think things together in a certain way or feel things

00:48:46 together. But this coupling is itself not a physical phenomenon. It’s entirely a software

00:48:50 phenomenon. It’s the result of two different implementations interacting with each other.

00:48:54 So that’s interesting. So are you suggesting, like the way you think about it, is the entirety

00:49:02 of existence a simulation and where kind of each mind is a little subsimulation,

00:49:08 that like, why don’t you, why doesn’t your mind have access to my mind’s full state?

00:49:18 Like, for the same reason that my mind doesn’t have access to its own full state.

00:49:22 So what, I mean,

00:49:25 There is no trick involved. So basically, when I know something about myself,

00:49:29 it’s because I made a model. So one part of your brain is tasked with modeling what other parts of

00:49:33 your brain are doing.

00:49:35 Yes. But there seems to be an incredible consistency about this world in the physical

00:49:40 sense that there’s repeatable experiments and so on. How does that fit into our silly,

00:49:46 descendant-of-apes simulation of the world? So why is it so repeatable? Why is everything so

00:49:50 repeatable? And not everything. There’s a lot of fundamental physics experiments that are repeatable

00:49:59 for a long time, all over the place and so on. Laws of physics. How does that fit in?

00:50:05 It seems that the parts of the world that are not deterministic are not long lived.

00:50:10 So if you build a system, any kind of automaton, so if you build simulations of something,

00:50:17 you’ll notice that the phenomena that endure are those that give rise to stable dynamics.

00:50:23 So basically, if you see anything that is complex in the world, it’s the result of usually of some

00:50:28 control of some feedback that keeps it stable around certain attractors. And the things that

00:50:32 are not stable that don’t give rise to certain harmonic patterns and so on, they tend to get

00:50:37 weeded out over time. So if we are in a region of the universe that sustains complexity, which is

00:50:44 required to implement minds like ours, this is going to be a region of the universe that is very

00:50:50 tightly controlled and controllable. So it’s going to have lots of interesting symmetries and also

00:50:55 symmetry breaks that allow the creation of structure. But they exist where? So there’s

00:51:02 such an interesting idea that our mind is simulation that’s constructing the narrative.

00:51:06 But my question is, just to try to understand how that fits with this, with the entirety of the

00:51:14 universe, you’re saying that there’s a region of this universe that allows enough complexity to

00:51:19 create creatures like us. But what’s the connection between the brain, the mind, and the broader

00:51:27 universe? Which comes first? Which is more fundamental? Is the mind the starting point,

00:51:32 the universe is emergent? Is the universe the starting point, the minds are emergent?

00:51:37 I think quite clearly the latter. That’s at least a much easier explanation because it allows us to

00:51:42 make causal models. And I don’t see any way to construct an inverse causality.

00:51:47 So what happens when you die to your mind simulation?

00:51:51 My implementation ceases. So basically the thing that implements myself will no longer be present,

00:51:57 which means if I am not implemented in the minds of other people, the thing that I identify with,

00:52:01 the weird thing is I don’t actually have an identity beyond the identity that I construct.

00:52:07 Take the Dalai Lama: he identifies as a form of government. So basically the Dalai Lama gets

00:52:14 reborn, not because he’s confused, but because he is not identifying as a human being. He runs on

00:52:21 a human being. He’s basically a governmental software that is instantiated in every new

00:52:27 generation anew. So his advice is to pick someone who does this in the next generation.

00:52:31 So if you identify with this, you are no longer a human and you don’t die in the sense that what

00:52:37 dies is only the body of the human that you run on. To kill the Dalai Lama, you would have to kill

00:52:42 his tradition. And if we look at ourselves, we realize that we are to a small part like this,

00:52:48 most of us. So for instance, if you have children, you realize something lives on in them. Or if you

00:52:53 spark an idea in the world, something lives on, or if you identify with the society around you,

00:52:58 because you are in part that you’re not just this human being.

00:53:01 Yeah. So in a sense, you are kind of like a Dalai Lama in the sense that you,

00:53:07 Joscha Bach, is just a collection of ideas. So like you have this operating system on which

00:53:12 a bunch of ideas live and interact. And then once you die, they kind of part, some of them

00:53:17 jump off the ship.

00:53:18 You could put it the other way. Identity is a software state. It’s a construction.

00:53:23 It’s not physically real. Identity is not a physical concept.

00:53:27 It’s basically a representation of different objects on the same world line.

00:53:32 But identity lives and dies. Are you attached? What’s the fundamental thing? Is it the ideas

00:53:41 that come together to form identity? Or is each individual identity actually a fundamental thing?

00:53:46 It’s a representation that you can get agency over if you care. So basically,

00:53:49 you can choose what you identify with if you want to.

00:53:53 No, but it just seems if the mind is not real, that the birth and death is not a crucial part

00:54:04 of it. Well, maybe I’m silly. Maybe I’m attached to this whole biological organism. But it seems

00:54:16 that being a physical object in this world is an important aspect of birth and death.

00:54:23 Like it feels like it has to be physical to die. It feels like simulations don’t have to die.

00:54:30 The physics that we experience is not the real physics. There is no color and sound in the real

00:54:34 world. Color and sound are types of representations that you get if you want to model reality with

00:54:40 oscillators. So colors and sound in some sense have octaves, and it’s because they are represented

00:54:45 probably with oscillators. So that’s why colors form a circle of hue. And colors have harmonics,

00:54:52 sounds have harmonics as a result of synchronizing oscillators in the brain. So the world that we

00:54:58 subjectively interact with is fundamentally the result of the representation mechanisms in our

00:55:03 brain. They are mathematically to some degree universal. There are certain regularities that

00:55:08 you can discover in the patterns and not others. But the patterns that we get, this is not the real

00:55:12 world. The world that we interact with is always made of too many parts to count. So when you look

00:55:18 at this table and so on, it’s consisting of so many molecules and atoms that you cannot count

00:55:23 them. So you only look at the aggregate dynamics, at limit dynamics. If you had almost infinitely

00:55:29 many particles, what would be the dynamics of the table? And this is roughly what you get. So

00:55:34 geometry that we are interacting with is the result of discovering those operators that

00:55:39 work in the limit that you get by building an infinite series that converges. For those parts

00:55:44 where it converges, it’s geometry. For those parts where it doesn’t converge, it’s chaos.

00:55:48 Right. And then so all of that is filtered through the consciousness that’s emergent in our

00:55:55 narrative. The consciousness gives it color, gives it feeling, gives it flavor.

00:56:00 So I think the feeling, flavor and so on is given by the relationship that a feature has to all the

00:56:06 other features. It’s basically a giant relational graph that is our subjective universe. The color

00:56:12 is given by those aspects of the representation or this experiential color where you care about,

00:56:18 where you have identifications, where something means something, where you are the inside of a

00:56:22 feedback loop. And the dimensions of caring are basically dimensions of this motivational system

00:56:28 that we emerge over. The meaning of the relations, the graph. Can you elaborate on that a little bit?

00:56:34 Like where does the, maybe we can even step back and ask the question of what is consciousness to

00:56:41 be sort of more systematic. Like what do you, how do you think about consciousness?

00:56:47 I think that consciousness is largely a model of the contents of your attention. It’s a mechanism

00:56:52 that has evolved for a certain type of learning. At the moment, our machine learning systems

00:56:58 largely work by building chains of weighted sums of real numbers with some nonlinearity.

00:57:05 And you learn by piping an error signal through these different chained layers and adjusting the

00:57:13 weights in these weighted sums. And you can approximate most polynomials with this

00:57:19 if you have enough training data. But the price is you need to change a lot of these weights.

00:57:24 Basically, the error is piped backwards into the system until it accumulates at certain junctures

00:57:30 in the network. And everything else evens out statistically. And only at these junctures,

00:57:34 this is where you had the actual error in the network, you make the change there. This is a

00:57:38 very slow process. And our brains don’t have enough time for that because we don’t get old

00:57:42 enough to play Go the way that our machines learn to play Go. So instead, what we do is

00:57:47 an attention based learning. We pinpoint the probable region in the network where we can

00:57:52 make an improvement. And then we store this binding state together with the expected outcome

00:57:58 in a protocol. And this ability to make index memories for the purpose of learning to revisit

00:58:03 these commitments later, this requires a memory of the contents of our attention.
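
A minimal numpy sketch of the picture described here, assuming a toy two-layer network: chained weighted sums with a nonlinearity, the error piped backwards through the chain, and many weights nudged slightly. The task and all names are illustrative, not from the conversation.

```python
# Minimal sketch (illustrative): "chains of weighted sums with a nonlinearity",
# trained by piping the error signal backwards and adjusting the weights.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: approximate y = sin(x) on a small interval.
X = rng.uniform(-2, 2, size=(256, 1))
y = np.sin(X)

# Weights of the two chained weighted sums.
W1, b1 = rng.normal(0, 0.5, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0, 0.5, (16, 1)), np.zeros(1)
lr = 0.05

for step in range(2000):
    # Forward pass: weighted sum -> nonlinearity -> weighted sum.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2

    # Error at the output (mean squared error gradient).
    d_out = (y_hat - y) / len(X)

    # Pipe the error backwards through the chain (backpropagation).
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * (1 - h ** 2)   # through the nonlinearity
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0)

    # Adjust a lot of weights by a little bit -- the slow process contrasted
    # above with attention-based learning.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
```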

00:58:10 Another aspect is when I construct my reality, I make mistakes. So I see things that turn out to

00:58:15 be reflections or shadows and so on, which means I have to be able to point out which features of

00:58:20 my perception gave rise to a present construction of reality. So the system needs to pay attention to

00:58:28 the features that are currently in its focus. And it also needs to pay attention to whether

00:58:33 it pays attention itself, in part because the attentional system gets trained with the same

00:58:37 mechanism, so it’s reflexive, but also in part because your attention lapses if you don’t pay

00:58:41 attention to the attention itself. So is the thing that I’m currently seeing just a dream

00:58:46 that my brain has spun off into some kind of daydream, or am I still paying attention to my

00:58:52 percept? So you have to periodically go back and see whether you’re still paying attention. And

00:58:56 if you have this loop and you make it tight enough between the system becoming aware of the contents

00:59:01 of its attention and the fact that it’s paying attention itself and makes attention the object

00:59:05 of its attention, I think this is the loop over which we wake up. So there’s this attentional

00:59:12 mechanism that’s somehow self referential that’s fundamental to what consciousness is. So just

00:59:19 ask you a question, I don’t know how much you’re familiar with the recent breakthroughs in natural

00:59:23 language processing, they use attentional mechanism, they use something called transformers to

00:59:31 learn patterns in sentences by allowing the network to focus its attention on particular

00:59:37 parts of the sentence individually. So like, parameterize and make learnable

00:59:42 the dynamics of a sentence by having like a little window into the sentence. Do you think

00:59:51 that’s like a little step towards that eventually will take us to the attentional mechanisms from

00:59:58 which consciousness can emerge? Not quite. I think it models only one aspect of attention.

01:00:03 In the early days of automated language translation, there was an example that I

01:00:08 found particularly funny where somebody tried to translate a text from English into German

01:00:13 and it was, a bat broke the window. And the translation in German was,

01:00:25 translated back into English, a bat, this flying mammal, broke the window with a baseball bat.

01:00:31 Yes. And it seemed to be the

01:00:34 most similar to this program because it somehow maximized the probability of translating the

01:00:39 concept bat into German in the same sentence. And this is a mistake that the transformer model is

01:00:45 not doing because it’s tracking identity. And the attentional mechanism in the transformer

01:00:49 model is basically putting its finger on individual concepts and make sure that these concepts pop up

01:00:56 later in the text and tracks basically the individuals through the text. And it’s why

01:01:02 the system can learn things that other systems couldn’t before it, which makes it, for instance,

01:01:07 possible to write a text where it talks about the scientist, then the scientist has a name

01:01:10 and has a pronoun and it gets a consistent story about that thing. What it does not do,

01:01:16 it doesn’t fully integrate this. So this meaning falls apart at some point, it loses track of this

01:01:20 context. It does not yet understand that everything that it says has to refer to the same universe.
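
A minimal sketch, under toy assumptions, of the scaled dot-product attention used in transformers, showing how a query for a later token (say, a pronoun) puts most of its weight on the earlier token it refers to. The vectors and names are invented for illustration, not the actual model discussed.

```python
# Minimal sketch (illustrative): scaled dot-product attention, the mechanism
# that lets a transformer "put its finger" on a concept and track it in a text.
import numpy as np

def attention(Q, K, V):
    """Each query attends over all keys; the weights say where it points."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax
    return weights @ V, weights

# Toy token vectors for: "the scientist ... she"
rng = np.random.default_rng(1)
tokens = {"the": rng.normal(size=8), "scientist": rng.normal(size=8)}
# Make the pronoun's query similar to "scientist" so attention tracks identity.
tokens["she"] = tokens["scientist"] + 0.1 * rng.normal(size=8)

K = V = np.stack([tokens["the"], tokens["scientist"]])
Q = tokens["she"][None, :]
_, w = attention(Q, K, V)
print(w)   # most of the weight should land on "scientist"
```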

01:01:25 And this is where this thing falls apart. But the attention in the transformer model does not go

01:01:31 beyond tracking identity. And tracking identity is an important part of attention, but it’s a

01:01:36 different, very specific attentional mechanism. And it’s not the one that gives rise to the type

01:01:41 of consciousness that we have. Just to linger on, what do you mean by

01:01:44 identity in the context of language? So when you talk about language,

01:01:49 you have different words that can refer to the same concept.

01:01:52 Got it. And in the sense that…

01:01:53 The space of concepts. So… Yes. And it can also be in a nominal sense

01:01:59 or in an indexical sense that you say this word does not only refer to this class of objects,

01:02:05 but it refers to a definite object, to some kind of agent that waves their way through the story.

01:02:11 And it’s only referred by different ways in the language. So the language is basically a

01:02:16 projection from a conceptual representation from a scene that is evolving into a discrete string

01:02:23 of symbols. And what the transformer is able to do, it learns aspects of this projection

01:02:29 mechanism that other models couldn’t learn. So have you ever seen an artificial intelligence

01:02:34 or any kind of construction idea that allows for, unlike neural networks or perhaps within

01:02:40 neural networks, that’s able to form something where the space of concepts continues to be

01:02:47 integrated? So what you’re describing, building a knowledge base, building this consistent,

01:02:54 larger and larger sets of ideas that would then allow for deeper understanding.

01:02:59 Wittgenstein thought that we can build everything from language,

01:03:02 from basically a logical grammatical construct. And I think to some degree,

01:03:07 this was also what Minsky believed. So that’s why he focused so much on common sense reasoning and

01:03:12 so on. And a project that was inspired by him was Cyc. That’s basically still going on.

01:03:19 Yes. Of course, ideas don’t die. Only people die.

01:03:25 That’s true.

01:03:27 And Cyc is a productive project. It’s just probably not one that is going to

01:03:31 converge to general intelligence. The thing that Wittgenstein couldn’t solve,

01:03:35 and he looked at this in his book at the end of his life, Philosophical Investigations,

01:03:40 was the notion of images. So images play an important role in Tractatus. The Tractatus is

01:03:44 an attempt to basically turn philosophy into logical probing language, to design a logical

01:03:49 language in which you can do actual philosophy that’s rich enough for doing this. And the

01:03:54 difficulty was to deal with perceptual content. And eventually, I think he decided that he was

01:04:00 not able to solve it. And I think this preempted the failure of the logicist program in AI.

01:04:06 And the solution, as we see it today, is we need more general function approximation. There are

01:04:11 geometric functions that we learn to approximate that cannot be efficiently expressed and computed

01:04:16 in a grammatical language. We can, of course, build automata that go via number theory and so on

01:04:22 to learn an algebra and then compute an approximation of this geometry.

01:04:26 But to equate language and geometry is not an efficient way to think about it.

01:04:32 So function, well, you kind of just said that neural networks are sort of, the approach that

01:04:37 neural networks takes is actually more general than what can be expressed through language.

01:04:44 Yes. So what can be efficiently expressed through language at the data rates at which

01:04:49 we process grammatical language?

01:04:51 Okay. So you don’t think languages, so you disagree with Wittgenstein,

01:04:56 that language is not fundamental to…

01:04:58 I agree with Wittgenstein. I just agree with the late Wittgenstein.

01:05:03 And I also agree with the beauty of the early Wittgenstein. I think that the Tractatus itself

01:05:09 is probably the most beautiful philosophical text that was written in the 20th century.

01:05:13 But language is not fundamental to cognition and intelligence and consciousness.

01:05:18 So I think that language is a particular way or the natural language that we’re using is a

01:05:23 particular level of abstraction that we use to communicate with each other. But the languages

01:05:28 in which we express geometry are not grammatical languages in the same sense. So they work slightly

01:05:35 differently, more general expressions of functions. And I think the general nature of a model is you

01:05:41 have a bunch of parameters. These have a range, these are the variances of the world, and you have

01:05:47 relationships between them, which are constraints, which say if certain parameters have these values,

01:05:52 then other parameters have to have the following values. And this is a very early insight in

01:05:58 computer science. And I think some of the earliest formulations is the Boltzmann machine.

01:06:03 And the problem with the Boltzmann machine is that it has a measure of whether it’s good. This

01:06:07 is basically the energy on the system, the amount of tension that you have left in the constraints

01:06:11 where the constraints don’t quite match. It’s very difficult to, despite having this global

01:06:16 measure, to train it. Because as soon as you add more than trivially few elements,

01:06:22 parameters into the system, it’s very difficult to get it settled in the right architecture.

01:06:27 And so the solution that Hinton and Sejnowski found was to use a restricted Boltzmann machine,

01:06:34 which loses the hidden links, the internal links within the layers of the Boltzmann machine, and only has

01:06:39 basically input and output layer. But this limits the expressivity of the Boltzmann machine. So now

01:06:44 he builds a network of these primitive Boltzmann machines. And in some sense, you can see almost

01:06:50 continuous development from this to the deep learning models that we’re using today,

01:06:54 even though we don’t use Boltzmann machines at this point. But the idea of the Boltzmann

01:06:58 machine is you take this model, you clamp some of the values to perception, and this forces

01:07:02 the entire machine to go into a state that is compatible with the states that you currently

01:07:06 perceive. And this state is your model of the world. I think it’s a very general way of thinking

01:07:12 about models, but we have to use a different approach to make it work. We have to find

01:07:19 different mechanisms to train the Boltzmann machine. So the mechanism that trains the Boltzmann

01:07:23 machine and the mechanism that makes the Boltzmann machine settle into its state

01:07:28 are distinct from the constrained architecture of the Boltzmann machine itself.
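
A minimal sketch, assuming a tiny restricted Boltzmann machine with random weights, of the clamping idea described here: hold the visible units fixed to a "percept" and let the hidden state settle; the energy measures the tension left in the constraints. This is illustrative only, not a trained model.

```python
# Minimal sketch (illustrative): clamp some values to "perception" and let the
# machine settle into a compatible state; energy is the leftover constraint tension.
import numpy as np

rng = np.random.default_rng(2)
n_visible, n_hidden = 6, 4
W = rng.normal(0, 0.1, (n_visible, n_hidden))   # constraint weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def energy(v, h):
    # Tension left in the constraints between visible and hidden units.
    return -v @ W @ h

def settle(v_clamped, steps=50):
    """Hold the visible units fixed ('clamped to perception') and let the
    hidden state settle by repeated stochastic updates."""
    h = (rng.random(n_hidden) < 0.5).astype(float)
    for _ in range(steps):
        h = (rng.random(n_hidden) < sigmoid(v_clamped @ W)).astype(float)
    return h

percept = np.array([1, 0, 1, 1, 0, 0], dtype=float)
hidden_state = settle(percept)
print(hidden_state, energy(percept, hidden_state))
```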

01:07:33 The kind of mechanisms that we want to develop, you’re saying?

01:07:36 Yes. So the direction in which I think our research is going to go

01:07:40 is going to, for instance, what you notice in perception is our perceptual models of the world

01:07:45 are not probabilistic, but possibilistic, which means you should be able to perceive things that

01:07:50 are improbable, but possible. A perceptual state is valid, not if it’s probable, but if it’s

01:07:57 possible, if it’s coherent. So if you see a tiger coming after you, you should be able to see this

01:08:02 even if it’s unlikely. And the probability is necessary for convergence of the model. So

01:08:09 given the state of possibilities that is very, very large and a set of perceptual features,

01:08:15 how should you change the states of the model to get it to converge with your perception?
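
A toy contrast, with invented hypotheses and features, of the probabilistic versus possibilistic reading just described: ranking by prior probability picks the likeliest interpretation, while a possibilistic filter keeps whatever is coherent with the observed features, so the improbable tiger still gets perceived.

```python
# Toy contrast (illustration only, not Bach's formalism): probabilistic
# perception picks the most probable hypothesis; possibilistic perception
# keeps any hypothesis that is coherent with the observed features.
hypotheses = {
    # name: (prior probability, set of features the hypothesis can explain)
    "house cat": (0.90, {"fur", "four legs"}),
    "tiger":     (0.01, {"fur", "four legs", "stripes", "very large"}),
    "statue":    (0.09, {"four legs"}),
}

observed = {"fur", "four legs", "stripes", "very large"}

def possible(name):
    # A hypothesis is admissible if everything observed is coherent with it.
    _, explained = hypotheses[name]
    return observed <= explained

probabilistic_choice = max(hypotheses, key=lambda n: hypotheses[n][0])
possibilistic_choices = [n for n in hypotheses if possible(n)]

print(probabilistic_choice)    # "house cat" -- the prior wins
print(possibilistic_choices)   # ["tiger"]   -- the coherent percept survives
```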

01:08:21 But the space of ideas that are coherent with the context that you’re sensing

01:08:29 is perhaps not as large. I mean, that’s perhaps pretty small.

01:08:34 The degree of coherence that you need to achieve depends, of course, how deep your models go.

01:08:39 That is, for instance, politics is very simple when you know very little about game theory and

01:08:44 human nature. So the younger you are, the more obvious it is how politics would work, right?

01:08:49 And because you get a coherent aesthetics from relatively few inputs. And the more layers you

01:08:55 model, the more layers you model reality, the harder it gets to satisfy all the constraints.

01:09:00 So, you know, the current neural networks are fundamentally supervised learning systems:

01:09:07 a feed forward neural network using back propagation to learn. What’s your intuition about what kind

01:09:12 of mechanisms might we move towards to improve the learning procedure?

01:09:18 I think one big aspect is going to be meta learning and architecture search starts in

01:09:22 this direction. In some sense, the first wave of classical AI worked by identifying a problem

01:09:28 and a possible solution and implementing the solution, right? A program that plays chess.

01:09:32 And right now we are in the second wave of AI. So instead of writing the algorithm that

01:09:37 implements the solution, we write an algorithm that automatically searches

01:09:41 for an algorithm that implements the solution. So the learning system in some sense is an

01:09:46 algorithm that itself discovers the algorithm that solves the problem, like Go. Go is too hard

01:09:51 to implement the solution by hand, but we can implement an algorithm that finds the solution.

01:09:56 Yeah. So now let’s move to the third stage, right? The third stage would be meta learning.

01:10:00 Find an algorithm that discovers a learning algorithm for the given domain.

01:10:05 Our brain is probably not a learning system, but a meta learning system.
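
A small runnable sketch of the three stages, under toy assumptions: a hand-coded solution, a learning algorithm that finds a solution by gradient descent, and a meta-learner that searches over the learning algorithm itself (here just its learning rate) across a family of tasks. Everything here is invented for illustration.

```python
# Illustrative sketch of the three stages described above.
import numpy as np

rng = np.random.default_rng(3)

# Stage 1: classical AI -- the solution is written by hand (fit y = 2x exactly).
def solve_by_hand(x):
    return 2.0 * x

# Stage 2: machine learning -- an algorithm that searches for the solution,
# here learning a linear model w*x by gradient descent on one task.
def learn(task_slope, lr, steps=100):
    x = rng.uniform(-1, 1, 64)
    y = task_slope * x
    w = 0.0
    for _ in range(steps):
        grad = np.mean(2 * (w * x - y) * x)
        w -= lr * grad
    return np.mean((w * x - y) ** 2)          # final loss on the task

# Stage 3: meta-learning -- find the learning algorithm (its learning rate)
# that works best across a whole distribution of tasks.
def meta_learn(candidate_lrs, n_tasks=20):
    def avg_loss(lr):
        return np.mean([learn(rng.uniform(-3, 3), lr) for _ in range(n_tasks)])
    return min(candidate_lrs, key=avg_loss)

best_lr = meta_learn([0.001, 0.01, 0.1, 0.5])
print("hand-coded:", solve_by_hand(1.5))
print("meta-learned learning rate:", best_lr)
```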

01:10:08 This is one way of looking at what we are doing. There is another way. If you look at the way our

01:10:13 brain is, for instance, implemented, there is no central control that tells all the neurons how

01:10:18 to wire up. Instead, every neuron is an individual reinforcement learning agent. Every neuron is a

01:10:24 single celled organism that is quite complicated and in some sense quite motivated to get fed.

01:10:28 And it gets fed if it fires on average at the right time. And the right time depends on the

01:10:36 context that the neuron exists in, which is the electrical and chemical environment that it has.

01:10:42 So it basically has to learn a function over its environment that tells it when to fire to get fed.

01:10:48 Or if you see it as a reinforcement learning agent, every neuron is in some sense making a

01:10:52 hypothesis when it sends a signal and tries to pipe a signal through the universe and tries to

01:10:57 get positive feedback for it. And the entire thing is set up in such a way that it’s robustly

01:11:02 self organizing into a brain, which means you start out with different neuron types

01:11:07 that have different priors on which hypothesis to test and how to get its reward.

01:11:12 And you put them into different concentrations in a certain spatial alignment,

01:11:16 and then you entrain it in a particular order. And as a result, you get a well organized brain.
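
A toy sketch of that intuition, not a biological model: a "neuron" that fires stochastically on its local context and adjusts its weights with a reward-modulated update, so it gets "fed" when it fires at the right time. Everything here is invented for illustration.

```python
# Toy sketch (illustrative): a neuron as a tiny reinforcement learning agent.
import numpy as np

rng = np.random.default_rng(4)

class NeuronAgent:
    def __init__(self, n_inputs):
        self.w = rng.normal(0, 0.1, n_inputs)

    def fire_probability(self, context):
        return 1.0 / (1.0 + np.exp(-context @ self.w))

    def step(self, context, reward_if_fired, lr=0.1):
        p = self.fire_probability(context)
        fired = rng.random() < p
        # The neuron only "gets fed" if it actually fired.
        reward = reward_if_fired if fired else 0.0
        # REINFORCE-style update: make rewarded firing more likely.
        self.w += lr * reward * (float(fired) - p) * context
        return fired, reward

# The "right time" in this toy world: fire when the first input channel is on.
neuron = NeuronAgent(3)
for _ in range(2000):
    ctx = rng.choice([0.0, 1.0], size=3)
    neuron.step(ctx, reward_if_fired=1.0 if ctx[0] == 1.0 else -1.0)
print(neuron.w)   # the weight on the first channel should grow positive
```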

01:11:22 Yeah, so okay, so the brain is a meta learning system with a bunch of reinforcement learning

01:11:29 agents. And what I think you said, but just to clarify, there’s no centralized government that

01:11:40 tells you, here’s a loss function. Who says what’s

01:11:48 the objective? There are also governments which impose loss functions on different parts of the

01:11:52 brain. So we have differential attention. Some areas in your brain get specially rewarded when

01:11:56 you look at faces. If you don’t have that, you will get prosopagnosia, which is basically

01:12:01 the inability to tell people apart by their faces. And the reason that happens is because

01:12:07 it had an evolutionary advantage. So like evolution comes into play here. But it’s basically an

01:12:12 extraordinary attention that we have for faces. I don’t think that people with prosopagnosia have a

01:12:17 defective brain per se, the brain just has an average attention for faces. So people with

01:12:22 prosopagnosia don’t look at faces more than they look at cups. So the level at which they resolve

01:12:27 the geometry of faces is not higher than for cups. And people that don’t have prosopagnosia look

01:12:34 obsessively at faces, right? For you and me, it’s impossible to move through a crowd

01:12:38 without scanning the faces. And as a result, we make insanely detailed models of faces that allow

01:12:43 us to discern mental states of people. So obviously, we don’t know 99% of the details

01:12:49 of this meta learning system. That’s our mind. Okay. But still, we took a leap from something

01:12:55 much dumber to that from through the evolutionary process. Can you first of all, maybe say how hard

01:13:04 the, how big of a leap is that from our brain, from our ancestors to multi cell organisms? And

01:13:14 is there something we can think about? As we start to think about how to engineer intelligence,

01:13:21 is there something we can learn from evolution? In some sense, life exists because of the market

01:13:28 opportunity of controlled chemical reactions. We compete with dumb chemical reactions and we win

01:13:34 in some areas against this dumb combustion because we can harness those entropy gradients where you

01:13:40 need to add a little bit of energy in a specific way to harvest more energy. So we out competed

01:13:44 combustion. Yes, in many regions we do and we try very hard because when we are in direct

01:13:49 competition, we lose, right? So because the combustion is going to close the entropy

01:13:55 gradients much faster than we can run. So basically we do this because every cell has

01:14:02 a Turing machine built into it. It’s like literally a read write head on the tape.

01:14:09 So everything that’s more complicated than a molecule that just is a vortex around attractors

01:14:16 that needs a Turing machine for its regulation. And then you bind cells together and you get next

01:14:21 level organizational organism where the cells together implement some kind of software.

01:14:28 For me, a very interesting discovery in the last year was the word spirit because I realized that

01:14:33 what spirit actually means is an operating system for an autonomous robot. And when the word was

01:14:38 invented, people needed this word. But they didn’t have robots that they built themselves yet. The

01:14:43 only autonomous robots that were known were people, animals, plants, ecosystems, cities,

01:14:48 and so on. And they all had spirits. And it makes sense to say that the plant has an operating

01:14:53 system, right? If you pinch the plant in one area, then it’s going to have repercussions

01:14:57 throughout the plant. Everything in the plant is in some sense connected into some global aesthetics

01:15:02 like in other organisms. An organism is not a collection of cells, it’s a function that

01:15:07 tells cells how to behave. And this function is not implemented as some kind of supernatural thing,

01:15:13 like some morphogenetic field. It is an emergent result of the interactions of each cell with each

01:15:19 other cell. Oh my God. So what you’re saying is the organism is a function that tells what to do

01:15:31 and the function emerges from the interaction of the cells. Yes. So it’s basically a description

01:15:39 of what the plant is doing in terms of microstates. And the microstates, the physical implementation

01:15:46 are too many of them to describe them. So the software that we use to describe what the plant is

01:15:51 doing, the spirit of the plant is the software, the operating system of the plant, right? This is

01:15:57 a way in which we, the observers, make sense of the plant. And the same is true for people. So

01:16:03 people have spirits, which is their operating system in a way, right? And there’s aspects of

01:16:07 that operating system that relate to how your body functions and others, how you socially interact,

01:16:12 how you interact with yourself and so on. And we make models of that spirit. And we think it’s a

01:16:18 loaded term because it’s from a pre scientific age. But it took the scientific age a long time

01:16:24 to rediscover a term that is pretty much the same thing. And I suspect that the differences that we

01:16:29 still see between the old word and the new word are translation errors that have happened over

01:16:33 the centuries. Can you actually linger on that? Why do you say that spirit, just to clarify,

01:16:39 because I’m a little bit confused. So the word spirit is a powerful thing. But why did you say

01:16:45 in the last year or so that you discovered this? Do you mean the same old traditional idea of a

01:16:50 spirit? I try to find out what people mean by spirit. When people say spirituality in the US,

01:16:56 it usually refers to the phantom limb that they develop in the absence of culture.

01:17:00 And a culture is in some sense, you could say the spirit of a society that is long game. This thing

01:17:06 that is become self aware at a level above the individuals where you say, if you don’t do the

01:17:12 following things, then the grand, grand, grand grandchildren of our children will have nothing

01:17:16 to eat. So if you take this long scope, where you try to maximize the length of the game that you

01:17:22 are playing as a species, you realize that you’re part of a larger thing that you cannot fully

01:17:27 control. You probably need to submit to the ecosphere instead of trying to completely control

01:17:32 it. There needs to be a certain level at which we can exist as a species if you want to endure.

01:17:39 And our culture is not sustaining this anymore. We basically made this bet with the industrial

01:17:44 revolution that we can control everything. And the modernist societies with basically unfettered

01:17:48 growth led to a situation in which we depend on the ability to control the entire planet.

01:17:54 And since we are not able to do that, as it seems, this culture will die. And we realize that it

01:18:00 doesn’t have a future, right? We called our children generation Z. That’s a very optimistic

01:18:06 thing to do. Yeah. So you can have this kind of intuition that our civilization, you said culture,

01:18:13 but you really mean the spirit of the civilization, the entirety of the civilization may not exist

01:18:22 for long. Yeah. Can you untangle that? What’s your intuition behind that? So you kind of offline

01:18:29 mentioned to me that the industrial revolution was kind of the moment we agreed to accept

01:18:36 the offer sign on the paper on the dotted line with the industrial revolution, we doomed ourselves.

01:18:42 Can you elaborate on that? This is a suspicion. I, of course, don’t know how it plays out. But

01:18:47 it seems to me that in a society in which you leverage yourself very far over an entropic abyss

01:18:55 without land on the other side, it’s relatively clear that your cantilever is at some point

01:19:00 going to break down into this entropic abyss. And you have to pay the bill. Okay. Russian is

01:19:06 my first language. And I’m also an idiot. Me too. This is just two apes.

01:19:13 Instead of playing with a banana, trying to have fun by talking. Okay. Anthropic what? And what’s

01:19:21 entropic? Entropic. So entropic in the sense of entropy. Oh, entropic. Got it. And entropic,

01:19:29 what was the other word you used? Abyss. What’s that? It’s a big gorge. Oh, abyss. Abyss, yes.

01:19:35 Entropic abyss. So many of the things you say are poetic. It’s hurting my ears. And this one

01:19:39 is amazing, right? It’s mispronounced, which makes you more poetic. Wittgenstein would be proud. So

01:19:50 entropic abyss. Okay. Let’s rewind then. The industrial revolution. So how does that get us

01:19:58 into the entropic abyss? So in some sense, we burned a hundred million years worth of trees

01:20:04 to get everybody plumbing. Yes. And the society that we had before that had a very limited number

01:20:10 of people. So basically since zero BC, we hovered between 300 and 400 million people. Yes. And this

01:20:18 only changed with the enlightenment and the subsequent industrial revolution. And in some

01:20:24 sense, the enlightenment freed our rationality and also freed our norms from the preexisting order

01:20:30 gradually. It was a process that basically happened in feedback loops. So it was not that

01:20:35 just one caused the other. It was a dynamic that started. And the dynamic worked by basically

01:20:41 increasing productivity to such a degree that we could feed all our children. And I think the

01:20:48 definition of poverty is that you have as many children as you can feed before they die, which is

01:20:55 in some sense, the state that all animals on earth are in. So the definition of poverty is having

01:21:01 just enough, so you can have only as many children as you can feed, and if you have more, they die. Yes.

01:21:06 And in our societies, you can basically have as many children as you want, they don’t die. Right.

01:21:12 So the reason why we don’t have as many children as we want is because we also have to pay a price

01:21:17 in terms of we have to insert ourselves in a lower social stratum if we have too many children.

01:21:22 So basically everybody in the under middle and lower upper class has only a limited number of

01:21:28 children because having more of them would mean a big economic hit to the individual families.

01:21:33 Yes. Because children, especially in the US, are super expensive to have. And you are only taken out of

01:21:39 this if you are basically super rich or if you are super poor. If you’re super poor, it doesn’t

01:21:43 matter how many kids you have because your status is not going to change. And these children allow

01:21:48 you not to die of hunger. So how does this lead to self destruction? So there’s a lot of

01:21:54 unpleasant properties about this process. So basically what we try to do is we try to

01:21:58 let our children survive, even if they have diseases. Like I would have died before my

01:22:06 mid twenties without modern medicine. And most of my friends would have as well. And so many of us

01:22:12 wouldn’t live without the advantages of modern medicine and modern industrialized society. We

01:22:18 get our protein largely by subduing the entirety of nature. Imagine there would be some very clever

01:22:25 microbe that would live in our organisms and would completely harvest them and change them into a

01:22:32 thing that is necessary to sustain itself. And it would discover that for instance,

01:22:38 brain cells are kind of edible, but they’re not quite nice. So you need to have more fat in them

01:22:43 and you turn them into more fat cells. And basically this big organism would become a

01:22:47 vegetable that is barely alive and it’s going to be very brittle and not resilient when the

01:22:52 environment changes. Yeah, but some part of that organism, the one that’s actually doing all the

01:22:57 using of the, there’ll still be somebody thriving. So it relates back to this original question

01:23:04 I suspect that we are not the smartest thing on this planet. I suspect that basically every complex

01:23:10 system has to have some complex regulation if it depends on feedback loops. And so for instance,

01:23:17 it’s likely that we should ascribe a certain degree of intelligence to plants. The problem is

01:23:24 that plants don’t have a nervous system. So they don’t have a way to telegraph messages over large

01:23:28 distances almost instantly in the plant. And instead, they will rely on chemicals between

01:23:34 adjacent cells, which means the signal propagation happens at a

01:23:40 rate of a few millimeters per second. And as a result, if the plant is intelligent,

01:23:46 it’s not going to be intelligent at similar timescales as us.

01:23:49 Yeah, the time scale is different. So you suspect we might not be the most intelligent

01:23:55 but we’re the most intelligent in this spatial scale in our timescale.

01:24:00 So basically, if you would zoom out very far, we might discover that there have been intelligent

01:24:05 ecosystems on the planet that existed for thousands of years in an almost undisturbed state. And it

01:24:12 could be that these ecosystems actively regulated their environment. So basically changed the course

01:24:16 of the evolution within this ecosystem to make it more efficient in the future.

01:24:20 So it’s possible something like plants is actually a set of living organisms,

01:24:25 an ecosystem of living organisms that are just operating a different timescale and are far

01:24:30 superior in intelligence than human beings. And then human beings will die out and plants will

01:24:34 still be there and they’ll be there.

01:24:36 Yeah, there’s an evolutionary adaptation playing a role at all of these levels. For instance,

01:24:41 if mice don’t get enough food and get stressed, the next generation is going to

01:24:45 get more sparse and more scrawny. And the reason for this is because in a natural

01:24:51 environment, the mice have probably hit a drought or something else. And if they’re overgrazed,

01:24:56 then all the things that sustain them might go extinct. And there will be no mice a few

01:25:01 generations from now. So to make sure that there will be mice in five generations from now,

01:25:05 basically the mice scale back. And a similar thing happens with the predators of mice.

01:25:10 They should make sure that the mice don’t completely go extinct. So in some sense, if the predators are

01:25:15 smart enough, they will be tasked with shepherding their food supply. Maybe the reason why lions have

01:25:22 much larger brains than antelopes is not so much because it’s so hard to catch an antelope as

01:25:27 opposed to run away from the lion. But the lions need to make complex models of their environment,

01:25:33 more complex than the antelopes.

01:25:35 So first of all, just describing that there’s a bunch of complex systems and human beings may not

01:25:40 even be the most special or intelligent of those complex systems, even on Earth, makes me feel a

01:25:45 little better about the extinction of human species that we’re talking about.

01:25:48 Yes, maybe we’re just Gaia’s exploit to put the carbon back into the atmosphere.

01:25:52 Yeah, this is just a nice, we tried it out.

01:25:54 The big stain on evolution is not us, it was trees. Earth evolved trees before they could be

01:26:00 digested again. There were no insects that could break all of them apart. Cellulose is so robust

01:26:05 that you cannot get all of it with microorganisms. So many of these trees fell into swamps and all

01:26:11 this carbon became inert and could no longer be recycled into organisms. And we are the species

01:26:16 that is destined to take care of that.

01:26:17 So this is kind of…

01:26:20 To get it out of the ground, put it back into the atmosphere, and the Earth is already greening.

01:26:24 So we have to be careful about that.

01:26:30 So within a million years or so when the ecosystems have recovered from the rapid changes,

01:26:35 that they’re not compatible with right now, the Earth is going to be awesome again.

01:26:39 And there won’t be even a memory of us, of us little apes.

01:26:42 I think there will be memories of us. I suspect we are the first generally intelligent species

01:26:46 in that sense. We are the first species with an industrial society because we will leave more

01:26:51 phones than bones in the stratosphere.

01:26:53 Phones than bones. I like it. But then let me push back. You’ve kind of suggested that

01:27:01 we have a very narrow definition of… I mean, why aren’t trees a higher level of general

01:27:08 intelligence?

01:27:09 If trees were intelligent, then they would be at different timescales, which means within

01:27:13 a hundred years, the tree is probably not going to make models that are as complex as

01:27:17 the ones that we make in 10 years.

01:27:18 But maybe the trees are the ones that made the phones, right?

01:27:25 You could say the entirety of life did it. The first cell never died. The first cell

01:27:31 only split, right? And every cell in our body is still an instance of the first cell that

01:27:36 split off from that very first cell. There was only one cell on this planet as far as

01:27:40 we know. And so the cell is not just a building block of life. It’s a hyperorganism. And we

01:27:46 are part of this hyperorganism.

01:27:49 So nevertheless, this hyperorganism, no, this little particular branch of it, which is us

01:27:56 humans, because of the industrial revolution and maybe the exponential growth of technology

01:28:02 might somehow destroy ourselves. So what do you think is the most likely way we might

01:28:07 destroy ourselves? So some people worry about genetic manipulation. Some people, as we’ve

01:28:13 talked about, worry about either dumb artificial intelligence or super intelligent artificial

01:28:18 intelligence destroying us. Some people worry about nuclear weapons and weapons of war in

01:28:25 general. What do you think? If you were a betting man, what would you bet on in terms

01:28:29 of self destruction? And then would it be higher than 50%?

01:28:34 So it’s very likely that nothing that we bet on matters after we win our bets. So I

01:28:40 don’t think that bets are literally the right way to go about this.

01:28:44 I mean, once you’re dead, you won’t be there to collect the winnings.

01:28:47 So it’s also not clear if we as a species go extinct. But I think that our present

01:28:53 civilization is not sustainable. So the thing that will change is there will be probably

01:28:57 fewer people on the planet than there are today. And even if not, then still most of

01:29:01 people that are alive today will not have offspring in 100 years from now because of

01:29:05 the geographic changes and so on and the changes in the food supply. It’s quite likely

01:29:10 that many areas of the planet will only be livable with a close cooling chain in 100

01:29:15 years from now. So many of the areas around the equator and in subtropical climates that

01:29:22 are now quite pleasant to live in, will cease to be habitable without air conditioning.

01:29:27 So you honestly, wow, cooling chain, close knit cooling chain communities. So you think

01:29:33 you have a strong worry about the effects of global warming?

01:29:38 By itself, it’s not a big issue. If you live in Arizona right now, you have basically three

01:29:42 months in the summer in which you cannot be outside. And so you have a close cooling chain.

01:29:47 You have air conditioning in your car and in your home and you’re fine. And if the air

01:29:50 conditioning would stop for a few days, then in many areas you would not be able to survive.

01:29:56 Can we just pause for a second? You say so many brilliant, poetic things. Do people use

01:30:03 that term closed cooling chain? I imagine that people use it when they describe how they get

01:30:08 meat into a supermarket, right? If you break the cooling chain and this thing starts to thaw,

01:30:13 you’re in trouble and you have to throw it away. That’s such a beautiful way to put it. It’s like

01:30:19 calling a city a closed social chain or something like that. I mean, that’s right. I mean, the

01:30:25 locality of it is really important. It basically means you wake up in a climatized room, you go

01:30:28 to work in a climatized car, you work in a climatized office, you shop in a climatized

01:30:32 supermarket and in between you have very short distance in which you run from your car to the

01:30:37 supermarket, but you have to make sure that your temperature does not approach the temperature of

01:30:42 the environment. The crucial thing is the wet bulb temperature. The wet bulb temperature. It’s

01:30:46 what you get when you take a wet cloth and you put it around your thermometer and then you move

01:30:54 it very quickly through the air so you get the evaporation heat. And as soon as you can no longer

01:31:01 cool your body temperature via evaporation to a temperature below something like I think 35

01:31:08 degrees, you die. Which means if the outside world is dry, you can still cool yourself down

01:31:15 by sweating. But if it has a certain degree of humidity or if it goes over a certain temperature,

01:31:20 then sweating will not save you. And this means even if you’re a healthy, fit individual within

01:31:26 a few hours, even if you try to be in the shade and so on, you’ll die unless you have

01:31:31 some climatizing equipment. And this itself, as long as you maintain civilization and you have

01:31:37 energy supply and you have food trucks coming to your home that are climatized, everything is fine.

01:31:41 But what if you lose large scale open agriculture at the same time? So basically you run into food

01:31:47 insecurity because climate becomes very irregular or weather becomes very irregular and you have a

01:31:52 lot of extreme weather events. So you need to grow most of your food maybe indoors, or you need to

01:31:59 import your food from certain regions. And maybe you’re not able to maintain the civilization

01:32:04 throughout the planet to get the infrastructure to get the food to your home.

01:32:09 Right. But there could be significant impacts in the sense that people begin to suffer.

01:32:13 There could be wars over resources and so on. But ultimately, do you not have a, not a faith, but

01:32:20 what do you make of the capacity of technological innovation to help us prevent some of the worst

01:32:30 damages that this condition can create? So as an example, as an almost out there example,

01:32:38 is the work that SpaceX and Elon Musk is doing of trying to also consider our propagation

01:32:45 throughout the universe in deep space to colonize other planets. That’s one technological step.

01:32:51 But of course, what Elon Musk is trying on Mars is not to save us from global warming,

01:32:56 because Mars looks much worse than Earth will look like after the worst outcomes of global warming

01:33:01 imaginable, right? Mars is essentially not habitable.

01:33:06 It’s exceptionally harsh environment, yes. But what he is doing, what a lot of people throughout

01:33:10 history since the Industrial Revolution are doing, are just doing a lot of different technological

01:33:15 innovation with some kind of target. And when it ends up happening, it’s totally unexpected new

01:33:20 things come up. So trying to terraform or trying to colonize Mars, extremely harsh environment,

01:33:27 might give us totally new ideas of how to expand or increase the power of this closed cooling

01:33:36 circuit that empowers the community. So it seems like there’s a little bit of a race between our

01:33:44 open ended technological innovation of this communal operating system that we have and our

01:33:55 general tendency to want to overuse resources and thereby destroy ourselves. You don’t think

01:34:02 technology can win that race? I think the probability is relatively low, given that our

01:34:08 technology, for instance in the US, has been stagnating since the 1970s roughly.

01:34:15 Most of the things that we do are the result of incremental processes. What about Intel?

01:34:19 What about Moore’s Law? It’s basically, it’s very incremental. The things that we’re doing is,

01:34:24 so the invention of the microprocessor was a major thing, right? The miniaturization

01:34:31 of transistors was really major. But the things that we did afterwards largely were not that

01:34:38 innovative. We had gradual changes of scaling things from CPUs into GPUs and things like that.

01:34:48 But I don’t think that there are, basically there are not many things. If you take a person that

01:34:54 died in the 70s and was at the top of their game, they would not need to read that many books

01:34:59 to be current again. But it’s all about books. Who cares about books? There might be things that are

01:35:05 beyond books. Or say papers. No, papers. Forget papers. There might be things that are, so papers

01:35:11 and books and knowledge, that’s a concept of a time when you were sitting there by candlelight

01:35:16 and individual consumers of knowledge. What about the impact that we’re not in the middle of,

01:35:21 might not be understanding of Twitter, of YouTube? The reason you and I are sitting here today

01:35:27 is because of Twitter and YouTube. So the ripple effect, and there’s two minds, sort of two dumb

01:35:35 apes coming up with a new, perhaps a new clean insights, and there’s 200 other apes listening

01:35:42 right now, 200,000 other apes listening right now. And that effect, it’s very difficult to understand

01:35:48 what that effect will have. That might be bigger than any of the advancements of the microprocessor

01:35:53 or any of the industrial revolution, the ability to spread knowledge. And that knowledge,

01:36:02 like it allows good ideas to reach millions much faster. And the effect of that, that might be the

01:36:09 new, that might be the 21st century, is the multiplying of ideas, of good ideas. Because if

01:36:16 you say one good thing today, that will multiply across huge amounts of people, and then they will

01:36:24 say something, and then they will have another podcast, and they’ll say something, and then they’ll

01:36:27 write a paper. That could be a huge, you don’t think that? Yeah, we should have billions of von

01:36:33 Neumanns right now, and Turings, and we don’t for some reason. I suspect the reason is that

01:36:38 we destroy our attention span. Also the incentives, of course, are different. Yeah, we have extreme

01:36:43 Kardashians, yeah. So the reason why we’re sitting here and doing this as a YouTube video is because

01:36:48 you and me don’t have the attention span to write a book together right now. And you guys probably

01:36:52 don’t have the attention span to read it. So let me tell you, it’s very short. But we’re an hour

01:37:01 and 40 minutes in, and I guarantee you that 80% of the people are still listening. So there is an

01:37:06 attention span. It’s just the form. Who said that the book is the optimal way to transfer information?

01:37:13 This is still an open question. That’s what we’re… It’s something that social media could be doing

01:37:17 that other forms could not be doing. I think the end game of social media is a global brain.

01:37:22 And Twitter is in some sense a global brain that is completely hooked on dopamine, doesn’t have any

01:37:26 kind of inhibition, and as a result is caught in a permanent seizure. It’s also in some sense a

01:37:32 multiplayer role playing game. And people use it to play an avatar that is not like them,

01:37:38 as they were in this sane world, and they look through the world through the lens of their phones

01:37:41 and think it’s the real world. But it’s the Twitter world that is distorted by the popularity

01:37:45 incentives of Twitter. Yeah, the incentives and just our natural biological, the dopamine rush

01:37:52 of a like, no matter how… I try to be very kind of Zen like and minimalist and not be influenced

01:38:01 by likes and so on, but it’s probably very difficult to avoid that to some degree.

01:38:07 Speaking at a small tangent of Twitter, how can Twitter be done better?

01:38:15 I think it’s an incredible mechanism that has a huge impact on society

01:38:19 by doing exactly what you’re doing. Sorry, doing exactly what you described, which is having this…

01:38:25 We’re like, is this some kind of game, and we’re kind of our individual RL agents in this game,

01:38:33 and it’s uncontrollable because there’s not really a centralized control. Neither Jack Dorsey nor

01:38:37 the engineers at Twitter seem to be able to control this game. Or can they? That’s sort

01:38:44 of a question. Is there any advice you would give on how to control this game?

01:38:49 I wouldn’t give advice because I am certainly not an expert, but I can give my thoughts on this.

01:38:53 And our brain has solved this problem to some degree. Our brain has lots of individual agents

01:39:01 that manage to play together in a way. And we have also many contexts in which other organisms

01:39:06 have found ways to solve the problems of cooperation that we don’t solve on Twitter.

01:39:12 And maybe the solution is to go for an evolutionary approach. So imagine that you

01:39:18 have something like Reddit or something like Facebook and something like Twitter,

01:39:23 and you think about what they have in common. What they have in common, they are companies

01:39:27 that in some sense own a protocol. And this protocol is imposed on a community, and the

01:39:32 protocol has different components for monetization, for user management, for user display, for rating,

01:39:39 for anonymity, for import of other content, and so on. And now imagine that you take these

01:39:44 components of the protocol apart, and you do it in some sense like communities within this

01:39:50 social network. And these communities are allowed to mix and match their protocols and design new

01:39:55 ones. So for instance, the UI and the UX can be defined by the community. The rules for sharing

01:40:02 content across communities can be defined. The monetization can be redefined. The way you reward

01:40:07 individual users for what can be redefined. The way users can represent themselves and to each

01:40:13 other can be redefined. Who could be the redefiner? So can individual human beings build enough

01:40:18 intuition to redefine those things? This itself can become part of the protocol. So for instance,

01:40:22 it could be in some communities, it will be a single person that comes up with these things.

01:40:27 In others, it’s a group of friends. Some might implement a voting scheme that has some interesting

01:40:32 weighted voting. Who knows? Who knows what will be the best self organizing principle for this.
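
A hypothetical data-structure sketch of what taking those protocol components apart could look like; every name and component here is invented for illustration, not a description of any existing platform.

```python
# Hypothetical sketch (all names invented): a social network whose protocol
# components can be mixed and matched per community, as described above.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Protocol:
    # Each component mentioned in the conversation becomes a swappable policy.
    monetization: Callable[[dict], float]          # post -> payout
    moderation: Callable[[dict], bool]             # post -> allowed?
    rating: Callable[[dict], float]                # post -> score
    identity: Callable[[dict], str]                # user -> display name
    governance: Callable[[List[str], Dict[str, List[float]]], str]  # proposals, votes -> decision

@dataclass
class Community:
    name: str
    protocol: Protocol

# One community might govern by a single maintainer, another by weighted vote.
def maintainer_decides(proposals, votes):
    return proposals[0]

def weighted_vote(proposals, votes):
    return max(proposals, key=lambda p: sum(votes.get(p, [])))

research_hub = Community("research-hub", Protocol(
    monetization=lambda post: 0.0,                      # no ads in this one
    moderation=lambda post: not post.get("spam", False),
    rating=lambda post: post.get("citations", 0),
    identity=lambda user: user["real_name"],
    governance=weighted_vote,
))
```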

01:40:36 But the process can’t be automated. I mean, it seems like the brain.

01:40:39 It can be automated so people can write software for this. And eventually the idea is,

01:40:45 let’s not make an assumption about this thing if you don’t know what the right solution is. In

01:40:50 those areas that we have no idea whether the right solution will be people designing this ad hoc,

01:40:55 or machines doing this. Whether you want to enforce compliance by social norms like Wikipedia,

01:41:01 or with software solutions, or with AI that goes through the posts of people, or with a

01:41:06 legal principle, and so on. This is something maybe you need to find out. And so the idea would

01:41:12 be if you let the communities evolve, and you just control it in such a way that you are

01:41:17 incentivizing the most sentient communities. The ones that produce the most interesting

01:41:24 behaviors that allow you to interact in the most helpful ways to the individuals.

01:41:29 You have a network that gives you information that is relevant to you.

01:41:32 It helps you to maintain relationships to others in healthy ways. It allows you to build teams. It

01:41:37 allows you to basically bring the best of you into this thing and goes into a coupling into

01:41:42 a relationship with others in which you produce things that you would be unable to produce alone.

01:41:47 Yes, beautifully put. But the key process of that with incentives and evolution

01:41:53 is that things that don’t adapt themselves to effectively follow the incentives have to die.

01:42:02 And the thing about social media is communities that are unhealthy, or whatever you want that

01:42:07 defines the incentives really don’t like dying. One of the things that people really get aggressive,

01:42:13 protest aggressively is when they’re censored. Especially in America. I don’t know much about

01:42:19 the rest of the world, but the idea of freedom of speech, the idea of censorship is really painful

01:42:24 in America. And so what do you think about that? Having grown up in East Germany, do you think

01:42:38 censorship is an important tool in our brain and the intelligence and in social networks?

01:42:45 So basically, if you’re not a good member of the entirety of the system, you should be blocked

01:42:53 away. Well, locked away, blocked. An important thing is who decides that you are a good member.

01:42:59 Who? Is it distributed? And what is the outcome of the process that decides it,

01:43:04 both for the individual and for society at large. For instance, if you have a high trust society,

01:43:09 you don’t need a lot of surveillance. And the surveillance is even in some sense undermining

01:43:14 trust. Because it’s basically punishing people that look suspicious when surveilled,

01:43:21 but do the right thing anyway. And the opposite, if you have a low trust society,

01:43:26 then surveillance can be a better trade off. And the US is currently making a transition from a

01:43:30 relatively high trust or mixed trust society to a low trust society. So surveillance will increase.

01:43:36 Another thing is that beliefs are not just inert representations. There are implementations that

01:43:40 run code on your brain and change your reality and change the way you interact with each other

01:43:45 at some level. And some of the beliefs are just public opinions that we use to display our

01:43:52 alignment. So for instance, people might say, all cultures are the same and equally good,

01:43:58 but still they prefer to live in some cultures over others, very, very strongly so. And it turns

01:44:03 out that the cultures are defined by certain rules of interaction. And these rules of interaction

01:44:08 lead to different results when you implement them. So if you adhere to certain rules,

01:44:12 you get different outcomes in different societies. And this all leads to very tricky

01:44:18 situations when people do not have a commitment to a shared purpose.

01:44:22 And our societies probably need to rediscover what it means to have a shared purpose and how

01:44:27 to make this compatible with a non totalitarian view. So in some sense, the US is caught in a

01:44:34 conundrum between totalitarianism and diversity, and doesn’t yet know how to resolve this.

01:44:42 And the solutions that the US has found so far are very crude because it’s a very young society

01:44:47 that is also under a lot of tension. It seems to me that the US will have to reinvent itself.

01:44:52 What do you think, just philosophizing, what kind of mechanisms of government do you think

01:45:01 we as a species should be involved with, US or broadly? What do you think will work well

01:45:07 as a system? Of course, we don’t know. It all seems to work pretty crappily,

01:45:11 some things worse than others. Some people argue that communism is the best. Others say,

01:45:16 yeah, look at the Soviet Union. Some people argue that anarchy is the best, completely

01:45:22 discarding the positive effects of government. There’s a lot of arguments. The US seems to be doing

01:45:29 pretty damn well in the span of history. There’s a respect for human rights, which seems to be a

01:45:36 nice feature, not a bug. And economically, a lot of growth, a lot of technological development.

01:45:42 People seem to be relatively kind on the grand scheme of things.

01:45:47 What lessons do you draw from that? What kind of government system do you think is good?

01:45:52 Ideally, a government should not be perceivable. It should be frictionless. The more you notice the

01:45:58 influence of the government, the more friction you experience, the less effective and efficient

01:46:04 the government probably is. A government, game theoretically, is an agent that imposes

01:46:10 an offset on your payout metrics to make your Nash equilibrium compatible with the common good.

01:46:17 You have these situations where people act on local incentives and these local incentives,

01:46:23 everybody does the thing that’s locally the best for them, but the global outcome is not good.
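
A minimal sketch of that game-theoretic framing, as a hypothetical two-player commons game in Python; the payoff numbers and the size of the tax offset are invented for illustration, not anything stated in the conversation:

```python
# Hypothetical two-player "commons" game: each player chooses to POLLUTE or ABSTAIN.
# The numbers are invented so that polluting is individually tempting but collectively
# worse, and a government-imposed offset (a tax) shifts the Nash equilibrium.

from itertools import product

POLLUTE, ABSTAIN = "pollute", "abstain"
ACTIONS = [POLLUTE, ABSTAIN]

def payoff(a_self, a_other, tax=0.0):
    """Payoff to one player: private gain from polluting, minus a shared environmental
    cost per polluter, minus a government tax charged only for polluting."""
    private = 3.0 if a_self == POLLUTE else 0.0
    shared_cost = 2.0 * [a_self, a_other].count(POLLUTE)
    return private - shared_cost - (tax if a_self == POLLUTE else 0.0)

def nash_equilibria(tax=0.0):
    """Enumerate pure-strategy Nash equilibria: no player gains by deviating alone."""
    eqs = []
    for a1, a2 in product(ACTIONS, repeat=2):
        best1 = all(payoff(a1, a2, tax) >= payoff(d, a2, tax) for d in ACTIONS)
        best2 = all(payoff(a2, a1, tax) >= payoff(d, a1, tax) for d in ACTIONS)
        if best1 and best2:
            eqs.append((a1, a2))
    return eqs

print("no offset:     ", nash_equilibria(tax=0.0))  # [('pollute', 'pollute')]
print("with a 2.5 tax:", nash_equilibria(tax=2.5))  # [('abstain', 'abstain')]
```

Without the offset, polluting is each player’s best reply whatever the other does, so the only equilibrium is the collectively worse one; with the offset, abstaining becomes the best reply, which is the sense in which the government shifts the Nash equilibrium toward the common good.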

01:46:27 And this is even the case when people care about the global outcome, because no regulation mechanism

01:46:31 exists that creates a causal relationship between what I want to have for the global good and what

01:46:36 I do. For instance, if I think that we should fly less and I stay at home, there’s not a single plane

01:46:41 that is going to not start because of me, right? It’s not going to have an influence, but I don’t

01:46:49 get from A to B. So the way to implement this would be to have a government that is sharing

01:46:55 this idea that we should fly less and is then imposing a regulation that, for instance,

01:46:59 makes flying more expensive and gives incentives for inventing other forms of transportation that

01:47:06 are less putting that strain on the environment, for instance. So there’s so much optimism and

01:47:14 so many things you describe, and yet there’s the pessimism of you think our civilization is going

01:47:18 to come to an end. So that’s not a hundred percent probability. Nothing in this world is.

01:47:23 So what’s the trajectory out of self destruction, do you think? I suspect that in some sense,

01:47:30 we are both too smart and not smart enough, which means we are very good at solving near

01:47:35 term problems. And at the same time, we are unwilling to submit to the imperatives that

01:47:43 we would have to follow if we want to stick around. So that makes it difficult. If you were

01:47:48 unable to solve everything technologically, you can probably understand how high the child mortality

01:47:53 needs to be to absorb the mutation rate and how high the mutation rate needs to be to adapt to a

01:47:59 slowly changing ecosystemic environment. So you could in principle compute all these things game

01:48:04 theoretically and adapt to it. But if you cannot do this, because you are like me and you have

01:48:10 children, you don’t want them to die, you will use any kind of medical information to keep

01:48:16 mortality low. Even if it means that within a few generations, we have enormous genetic drift,

01:48:22 and most of us have allergies as a result of not being adapted to the changes that we

01:48:26 made to our food supply. That’s for now, I’d say. Technologically speaking, we’re just very young,

01:48:31 300 years since the industrial revolution; we’re very new to this idea. So you’re attached to your kids being

01:48:36 alive and not being murdered for the good of society. But that might be a very temporary

01:48:41 moment of time that we might evolve in our thinking. So like you said, we’re both smart

01:48:48 and not smart enough. We are probably not the first human civilization that has discovered

01:48:54 technology that allows us to efficiently overgraze our resources. And this overgrazing,

01:48:59 this thing, at some point, we think we can compensate this because if we have eaten all

01:49:04 the grass, we will find a way to grow mushrooms. But it could also be that the ecosystems tip.

01:49:10 And so what really concerns me is not so much the end of the civilization, because we will

01:49:14 invent a new one. But what concerns me is the fact that, for instance, the oceans might tip.

01:49:21 So for instance, maybe the plankton dies because of ocean acidification and cyanobacteria take over,

01:49:27 and as a result, we can no longer breathe the atmosphere. This would be really concerning.

01:49:32 So basically a major reboot of most complex organisms on Earth. And I think this is a

01:49:37 possibility. I don’t know what the percentage for this possibility is, but it doesn’t seem to be

01:49:42 outlandish to me if you look at the scale of the changes that we’ve already triggered on this

01:49:46 planet. And so Danny Hillis suggests that, for instance, we may be able to put chalk into the

01:49:51 stratosphere to limit solar radiation. Maybe it works. Maybe this is sufficient to counter

01:49:57 the effects of what we’ve done. Maybe it won’t be. Maybe we won’t be able to implement it by

01:50:01 the time it’s relevant. I have no idea how the future is going to play out in this regard. It’s

01:50:07 just, I think it’s quite likely that we cannot continue like this. All our cousin species,

01:50:12 the other hominids are gone. So the right step would be what? To rewind

01:50:19 back to before the industrial revolution and slow down, try to contain, the technological

01:50:28 process that leads to the overconsumption of resources? Imagine you get to choose,

01:50:33 you have one lifetime. You get born into a sustainable agricultural civilization,

01:50:38 300, maybe 400 million people on the planet tops. Or before this, some kind of nomadic

01:50:45 species was like a million or 2 million. And so you don’t meet new people unless you give birth

01:50:51 to them. You cannot travel to other places in the world. There is no internet. There is no

01:50:55 interesting intellectual tradition that reaches considerably deep. So you would not discover

01:51:00 Turing completeness probably and so on. We wouldn’t exist. And the alternative is you get born into an

01:51:06 insane world. One that is doomed to die because it has just burned a hundred million years worth

01:51:11 of trees in a single century. Which one do you like? I think I like this one. It’s a very weird

01:51:16 thing that when you find yourself on a Titanic and you see this iceberg and it looks like we

01:51:21 are not going to miss it. And a lot of people are in denial. And most of the counter arguments

01:51:25 sound like denial to me. They don’t seem to be rational arguments. And the other thing is we

01:51:30 are born on this Titanic. Without this Titanic, we wouldn’t have been born. We wouldn’t be here. We

01:51:34 wouldn’t be talking. We wouldn’t be on the internet. We wouldn’t do all the things that we enjoy.

01:51:38 And we are not responsible for this happening. If we had the choice, we would probably try to

01:51:45 prevent it. But when we were born, we were never asked when we want to be born, in which society

01:51:51 we want to be born, what incentive structures we want to be exposed to. We have relatively

01:51:55 little agency in the entire thing. Humanity has relatively little agency in the whole thing. It’s

01:52:00 basically a giant machine that’s tumbling down a hill and everybody is frantically trying to push

01:52:04 some buttons. Nobody knows what these buttons are meaning, what they connect to. And most of them

01:52:09 are not stopping this tumbling down the hill. Is it possible that artificial intelligence will give

01:52:15 us an escape hatch somehow? So there’s a lot of worry about existential threats of artificial

01:52:25 intelligence. But what AI also allows, in general forms of automation, allows the potential of

01:52:33 extreme productivity growth that will also perhaps in a positive way transform society,

01:52:40 that may allow us, perhaps inadvertently, to return to the same kind of ideals of being closer to

01:52:52 nature that are represented in hunter gatherer societies. That’s not destroying the planet,

01:52:59 that’s not doing overconsumption and so on. I mean, generally speaking,

01:53:03 do you have hope that AI can help somehow? I think it’s not fun to be very close to nature

01:53:09 until you completely subdue nature. So our idea of being close to nature means being close to

01:53:16 agriculture, basically forests that don’t have anything in them that eats us.

01:53:21 See, I mean, I want to disagree with that. I think the niceness of being close to nature

01:53:30 is being fully present, like when survival becomes your primary,

01:53:37 not just your goal, but your whole existence. I’m not just romanticizing; I can only speak for

01:53:47 myself. I am self aware enough to know that that is a fulfilling existence.

01:53:54 I personally prefer to be in nature and not fight for my survival. I think fighting for your survival

01:54:00 while being in the cold and in the rain and being hunted by animals and having open wounds

01:54:06 is very unpleasant.

01:54:07 There’s a contradiction in there. Yes, I and you, just as you said, would not choose it.

01:54:14 But if I was forced into it, it would be a fulfilling existence.

01:54:17 Yes, if you are adapted to it, basically, if your brain is wired up in such a way that you

01:54:23 get rewards optimally in such an environment. And there’s some evidence for this that for

01:54:29 a certain degree of complexity, basically, people are more happy in such an environment because

01:54:33 it’s what you largely have evolved for. In between, we had a few thousand years in which

01:54:38 I think we have evolved for a slightly more comfortable environment. So

01:54:41 there is probably something like an intermediate stage in which people would be more happy than

01:54:47 they would be if they would have to fend for themselves in small groups in the forest and

01:54:51 often die. Versus something like this, where we now have basically a big machine, a big

01:54:57 Mordor in which we run through concrete boxes and press buttons and machines, and largely

01:55:05 don’t feel well cared for as the monkeys that we are. So returning briefly to, not briefly,

01:55:12 but returning to AI, what, let me ask a romanticized question, what is the most beautiful

01:55:19 to you, silly ape, the most beautiful or surprising idea in the development of artificial

01:55:25 intelligence, whether in your own life or in the history of artificial intelligence that you’ve

01:55:30 come across? If you built an AI, it probably can make models at an arbitrary degree of detail,

01:55:37 right, of the world. And then it would try to understand its own nature. It’s tempting to think

01:55:42 that at some point when we have general intelligence, we have competitions where we

01:55:46 will let the AIs wake up in different kinds of physical universes, and we measure how many

01:55:51 movements of the Rubik’s cube it takes until it’s figured out what’s going on in its universe and

01:55:55 what it is in its own nature and its own physics and so on, right? So what if we exist in the

01:56:00 memory of an AI that is trying to understand its own nature and remembers its own genesis and

01:56:05 remembers Lex and Joscha sitting in a hotel room, sparking some of the ideas off that led to the

01:56:11 development of general intelligence. So we’re a kind of simulation that’s running in an AI

01:56:15 system that’s trying to understand itself. It’s not that I believe that, but I think it’s a

01:56:21 beautiful idea. I mean, you kind of returned to this idea with the Turing test of intelligence

01:56:31 being the process of asking and answering what is intelligence. I mean, do you think there is an

01:56:47 answer? Why is there such a search for an answer? So does there have to be like an answer? You just

01:56:57 said an AI system that’s trying to understand the why of what, you know, understand itself.

01:57:04 Is that a fundamental process of greater and greater complexity, greater and greater

01:57:09 intelligence is the continuous trying of understanding itself?

01:57:13 No, I think you will find that most people don’t care about that because they’re well adjusted

01:57:18 enough to not care. And the reason why people like you and me care about it probably has to

01:57:23 do with the need to understand ourselves. It’s because we are in fundamental disagreement with

01:57:28 the universe that we wake up in. I look down on myself and I see, oh my God, I’m caught in a

01:57:32 monkey. What’s that? Some people are unhappy with the government and I’m unhappy with the entire

01:57:38 universe that I find myself in. Oh, so you don’t think that’s a fundamental aspect of human nature

01:57:45 that some people are just suppressing? That they wake up shocked they’re in the body of a monkey?

01:57:51 No, there is a clear adaptive value to not be confused by that and by…

01:57:56 Well, no, that’s not what I asked. So if there’s this clear adaptive value, then there’s

01:58:04 clear adaptive value to, while fundamentally your brain is confused by that,

01:58:09 creating an illusion, another layer of the narrative that, you know, tries to

01:58:16 suppress that and instead say that, you know, what’s going on with the government right now

01:58:21 is the most important thing. What’s going on with my football team is the most important thing.

01:58:25 But it seems to me, like for me, it was a really interesting moment reading Ernest

01:58:32 Becker’s Denial of Death. That, you know, this kind of idea that we’re all, you know,

01:58:40 the fundamental thing from which most of our human mind springs is this fear of mortality

01:58:49 and being cognizant of your mortality and the fear of that mortality. And then you construct

01:58:54 illusions on top of that. I guess, just to push on it, you really don’t think it’s possible

01:59:03 that this worry about the big existential questions is actually fundamental, as the existentialists

01:59:11 thought, to our existence? I think that the fear of death only plays a role as long as you don’t

01:59:17 see the big picture. The thing is that minds are software states, right? Software doesn’t have

01:59:22 identity. Software in some sense is a physical law. But it feels like there’s an identity. I

01:59:30 thought that, for this particular piece of software and the narrative it tells, that’s

01:59:35 a fundamental property of it. The maintenance of the identity is not terminal. It’s instrumental

01:59:41 to something else. You maintain your identity so you can serve your meaning. So you can do the

01:59:46 things that you’re supposed to do before you die. And I suspect that for most people the fear of

01:59:51 death is the fear of dying before they are done with the things that they feel they have to do,

01:59:54 even though they cannot quite put their finger on it, what that is.

01:59:59 Right. But in the software world, to return to the question, then what happens after we die?

02:00:10 Why would you care? You will not be longer there. The point of dying is that you are gone.

02:00:14 Well, maybe I’m not. This is what, you know, it seems like there’s so much,

02:00:23 in the idea that this is just, the mind is just a simulation that’s constructing a narrative around

02:00:28 some particular aspects of the quantum mechanical wave function world that we can’t quite get direct

02:00:37 access to. Then like the idea of mortality seems to be a little fuzzy as well. It doesn’t, maybe

02:00:44 there’s not a clear answer. The fuzzy idea is the one of continuous existence. We don’t have

02:00:49 continuous existence. How do you know that? Because it’s not computable. Because you’re

02:00:55 saying it’s going to be directly infinite. There is no continuous process. The only thing that

02:00:58 binds you together with the Lex Friedman from yesterday is the illusion that you have memories

02:01:02 about him. So if you want to upload, it’s very easy. You make a machine that thinks it’s you.

02:01:07 Because this is the same thing that you are. You are a machine that thinks it’s you.

02:01:10 But that’s immortality. Yeah, but it’s just a belief. You can create this belief very easily

02:01:15 once you realize that the question whether you are immortal or not depends entirely on your beliefs

02:01:21 and your own continuity. But then you can be immortal by the continuity of the belief.

02:01:28 You cannot be immortal, but you can stop being afraid of your mortality because you realize you

02:01:33 were never continuously existing in the first place. Well, I don’t know if I’d be more terrified

02:01:39 or less terrified by that. It seems like the fact that I existed.

02:01:44 You don’t know this state in which you don’t have a self. You can turn off your self.

02:01:49 I can’t turn off myself.

02:01:50 You can turn it off. You can turn it off.

02:01:52 I can?

02:01:52 Yes. And you can basically meditate yourself into a state where you are still conscious,

02:01:57 where still things are happening, where you know everything that you knew before,

02:02:00 but you’re no longer identified with changing anything.

02:02:03 And this means that yourself, in a way, dissolves. There is no longer this person. You know that this

02:02:09 person construct exists in other states and it runs on this brain of Lex Friedman, but it’s not

02:02:15 a real thing. It’s a construct. It’s an idea. And you can change that idea. And if you let go of

02:02:21 this idea, if you don’t think that you are special, you realize it’s just one of many people and it’s

02:02:26 not your favorite person even. It’s just one of many. And it’s the one that you are doomed to

02:02:31 control for the most part. And that is basically informing the actions of this organism as a

02:02:37 control model. And this is all there is. And you are somehow afraid that this control model gets

02:02:42 interrupted or loses the identity of continuity.

02:02:47 Yeah. So I’m attached. I mean, yeah, it’s a very popular, it’s a somehow compelling notion that

02:02:52 being attached, like there’s no need to be attached to this idea of an identity.

02:02:58 But that in itself could be an illusion that you construct. So the process of meditation,

02:03:03 while popular, is thought of as getting under the concept of identity. It could be just putting a

02:03:08 cloak over it, just telling it to be quiet for the moment. I think that meditation is eventually just

02:03:18 a bunch of techniques that let you control attention. And when you can control attention,

02:03:22 you can get access to your own source code, hopefully not before you understand what you’re

02:03:31 doing. And then you can change the way it works temporarily or permanently.

02:03:36 So yeah, meditation is to get a glimpse at the source code, get under, so basically control or

02:03:41 turn off the attention.

02:03:42 The entire thing is that you learn to control attention. So everything else is downstream

02:03:46 from controlling attention.

02:03:47 And control the attention that’s looking at the attention.

02:03:50 Normally we only get attention in the parts of our mind that create heat, where you have a

02:03:54 mismatch between model and the results that are happening. And so most people are not self aware

02:04:00 because their control is too good. If everything works out roughly the way you want, and the only

02:04:05 things that don’t work out is whether your football team wins, then you will mostly have

02:04:09 models about these domains. And it’s only when, for instance, your fundamental relationships to

02:04:15 the world around you don’t work, because the ideology of your country is insane, and you don’t

02:04:20 understand why it’s insane, and the other kids are not nerds, and don’t understand why you

02:04:24 want to understand physics, and you don’t understand

02:04:29 why somebody would not want to understand physics.

02:04:32 So we kind of brought up neurons in the brain as reinforcement learning agents.

02:04:40 And there’s been some successes as you brought up with Go, with AlphaGo, AlphaZero, with ideas

02:04:46 which I think are incredibly interesting ideas of systems playing each other in an automated way

02:04:52 to improve by playing other systems in a particular construct of a game that are a little

02:05:00 bit better than itself, and then thereby improving continuously. All the competitors in the game

02:05:05 are improving gradually. So being just challenging enough and from learning from the process of the

02:05:11 competition. Do you have hope for that reinforcement learning process to achieve

02:05:16 greater and greater level of intelligence? So we talked about different ideas in AI that need to

02:05:21 be solved. Is RL a part of that process of trying to create an AGI system? What do you think?

02:05:28 Definitely forms of unsupervised learning, but there are many algorithms that can achieve that.

02:05:32 And I suspect that ultimately the algorithms that work, there will be a class of them or many of

02:05:38 them. And they might have small differences of like a magnitude and efficiency, but eventually

02:05:45 what matters is the type of model that you form and the types of models that we form right now

02:05:49 are not sparse enough. What does it mean to be sparse? It means that ideally every potential

02:05:59 model state should correspond to a potential world state. So basically if you vary states

02:06:06 in your model, you always end up with valid world states and our mind is not quite there.
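
A toy illustration of sparseness in this sense, in Python; the eight-position world and both parameterizations are invented for illustration:

```python
# Toy illustration of "sparse" in this sense: every state of the model should decode
# to a valid world state. The world and both parameterizations are invented.

import random
random.seed(0)

# A tiny "world": an object sits at exactly one of 8 positions (one-hot vectors).
VALID_WORLD_STATES = [tuple(1 if i == k else 0 for i in range(8)) for k in range(8)]

def dense_model_state():
    """Unconstrained parameterization: 8 independent bits. Most random settings
    (no object anywhere, or objects in several places at once) are not valid worlds."""
    return tuple(random.randint(0, 1) for _ in range(8))

def sparse_model_state():
    """Constrained parameterization: a single index that always decodes to a valid world."""
    k = random.randrange(8)
    return tuple(1 if i == k else 0 for i in range(8))

def validity_rate(sampler, n=10_000):
    return sum(sampler() in VALID_WORLD_STATES for _ in range(n)) / n

print("dense parameterization, fraction of valid worlds: ", validity_rate(dense_model_state))   # ~0.03
print("sparse parameterization, fraction of valid worlds:", validity_rate(sparse_model_state))  # 1.0
```

In the unconstrained parameterization most random model states describe impossible worlds, while the constrained one decodes every model state to a valid world state, which is roughly the property being asked for.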

02:06:10 So an indication is basically what we see in dreams. The older we get, the more boring our

02:06:14 dreams become because we incorporate more and more constraints that we learned about how the

02:06:19 world works. So many of the things that we imagine to be possible as children turn out to be

02:06:25 constrained by physical and social dynamics. And as a result, fewer and fewer things remain

02:06:31 possible. It’s not because our imagination scales back, but the constraints under which it operates

02:06:36 become tighter and tighter. And so the constraints under which our neural networks operate are

02:06:42 almost nonexistent, which means it’s very difficult to get a neural network to imagine things that

02:06:47 look real. So I suspect part of what we need to do is we probably need to build dreaming systems.

02:06:55 I suspect that part of the purpose of dreams is similar to a generative adversarial network,

02:07:01 we learn certain constraints and then it produces alternative perspectives on the same set of

02:07:07 constraints. So you can recognize it under different circumstances. Maybe we have flying

02:07:11 dreams as children because we recreate the objects that we know and the maps that we know from

02:07:16 different perspectives, which also means from a bird’s eye perspective. So I mean, aren’t we

02:07:21 doing that anyway? I mean, not with our eyes closed and when we’re sleeping, aren’t we just

02:07:27 constantly running dreams and simulations in our mind as we try to interpret the environment?

02:07:32 I mean, sort of considering all the different possibilities, the way we interact with the

02:07:37 environment seems like, essentially, like you said, sort of creating a bunch of simulations

02:07:46 that are consistent with our expectations, with our previous experiences, with the things we just

02:07:52 saw recently. And through that hallucination process, we are able to then somehow stitch

02:08:02 together what actually we see in the world with the simulations that match it well and thereby

02:08:07 interpret it. I suspect that your brain and my brain are slightly unusual in this regard,

02:08:13 which is probably what got you into MIT. So this obsession of constantly pondering possibilities

02:08:19 and solutions to problems. Oh, stop it. I mean, I’m not talking about intellectual stuff. I’m

02:08:27 talking about just doing the kind of stuff it takes to walk and not fall. Yes, this is

02:08:35 largely automatic. Yes, but the process is, I mean… It’s not complicated. It’s relatively

02:08:43 easy to build a neural network that, in some sense, learns the dynamics. The fact that we

02:08:48 haven’t done it right so far doesn’t mean it’s hard, because you can see that a biological

02:08:52 organism does it with relatively few neurons. So basically, you build a bunch of neural

02:08:57 oscillators that entrain themselves with the dynamics of your body in such a way that the

02:09:01 regulator becomes isomorphic in its model to the dynamics that it regulates, and then it’s

02:09:06 automatic. And it’s only interesting in the sense that it captures attention when the system is off.
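
A minimal sketch of that kind of entrainment in Python: a single internal phase oscillator locking onto an external rhythm standing in for the body. The coupling rule and the constants are illustrative assumptions, not a model of real neural circuitry:

```python
# One internal oscillator entraining to the rhythm of an external "body" signal.
# All constants are illustrative; the scheme is a simple phase-locking loop with
# frequency adaptation.

import math

dt = 0.01
body_freq = 1.3      # the body's actual gait frequency (Hz), unknown to the model
model_freq = 1.0     # the internal oscillator's initial guess
coupling = 2.0       # how strongly phase mismatches pull the model along
adapt_rate = 0.5     # how quickly the model's own frequency is adjusted

body_phase = 0.0
model_phase = 0.0

for step in range(5000):
    body_phase += 2 * math.pi * body_freq * dt
    error = math.sin(body_phase - model_phase)          # phase mismatch signal
    model_phase += 2 * math.pi * model_freq * dt + coupling * error * dt
    model_freq += adapt_rate * error * dt               # slowly absorb the rhythm itself

print(f"learned frequency: {model_freq:.3f} Hz (body runs at {body_freq} Hz)")  # converges to ~1.3
```

After enough steps the internal oscillator has absorbed both the phase and the frequency of the external rhythm, which is the sense in which the regulator becomes isomorphic to the dynamics it regulates and only captures attention when something is off.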

02:09:12 See, but thinking of the kind of mechanism that’s required to do walking as a controller,

02:09:18 as a neural network, I think it’s a compelling notion, but it discards quietly,

02:09:27 or at least makes implicit, the fact that you need to have something like common sense reasoning

02:09:33 to walk. It’s an open question whether you do or not. But my intuition is to act in this world,

02:09:40 there’s a huge knowledge base that’s underlying it somehow. There’s so much information

02:09:46 of the kind we have never been able to construct in neural networks in an artificial intelligence

02:09:53 systems period, which is like, it’s humbling, at least in my imagination, the amount of information

02:10:00 required to act in this world humbles me. And I think saying that neural networks can accomplish

02:10:07 it is missing the fact that we don’t have yet a mechanism for constructing something like

02:10:16 common sense reasoning. I mean, what’s your sense about to linger on the idea of what kind of

02:10:28 mechanism would be effective at walking? You said just a neural network, not maybe the kind we have,

02:10:33 but something a little bit better, would be able to walk easily. Don’t you think it also needs to know

02:10:42 like a huge amount of knowledge that’s represented under the flag of common sense reasoning?

02:10:47 How much common sense knowledge do we actually have? Imagine that you are really hardworking

02:10:51 for all your life and you form two new concepts every half hour or so. You end up with something

02:10:56 like a million concepts because you don’t get that old. So a million concepts, that’s not a lot.
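
A back-of-the-envelope version of that estimate; the specific numbers for waking hours and years of sustained learning are illustrative assumptions:

```python
# Two new concepts every half hour, over a hardworking life; the exact figures
# for waking hours and years of learning are just illustrative assumptions.

concepts_per_hour = 4          # two new concepts every half hour
waking_hours_per_day = 16
years_of_learning = 43

total = concepts_per_hour * waking_hours_per_day * 365 * years_of_learning
print(f"{total:,} concepts")   # about 1,004,480 -- on the order of a million
```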

02:11:02 So it’s not just a million concepts. I think it would be a lot. I personally think it might be

02:11:06 much more than a million. But if you think just about the numbers, you don’t live that long.

02:11:12 If you think about how many cycles do your neurons have in your life, it’s quite limited.

02:11:16 You don’t get that old. Yeah, but the powerful thing is the number of concepts, and they’re

02:11:23 probably deeply hierarchical in nature. The relations, as you described between them,

02:11:29 is the key thing. So it’s like, even if it’s a million concepts, the graph of relations that’s

02:11:35 formed and some kind of, perhaps, some kind of probabilistic relationships, that’s what’s common

02:11:42 sense reasoning is the relationship between things. Yeah, so in some sense, I think of the concepts as

02:11:48 the address space for our behavior programs. And the behavior programs allow us to recognize objects

02:11:53 and interact with them, also mental objects. And a large part of that is the physical world that we

02:11:59 interact with, which is this res extensa thing, which is basically navigation of information in

02:12:04 space. And basically, it’s similar to a game engine. It’s a physics engine that you can use to

02:12:12 describe and predict how things that look in a particular way, that feel when you touch them in

02:12:18 a particular way, proprioception, for example, also a

02:12:22 lot of auditory perception and so on, how they work out. So basically, the geometry of all these

02:12:27 things. And this is probably 80% of what our brain is doing is dealing with that, with this real time

02:12:33 simulation. And by itself, a game engine is fascinating, but it’s not that hard to understand

02:12:39 what it’s doing. And our game engines are already, in some sense, approximating the fidelity of what

02:12:47 we can perceive. So if we put on an Oculus Quest, we get something that is still relatively crude

02:12:54 with respect to what we can perceive, but it’s also in the same ballpark already. It’s just a

02:12:58 couple of orders of magnitude away from saturating our perception in terms of the complexity that

02:13:04 it can produce. So in some sense, it’s reasonable to say that the computer that you can buy and put

02:13:10 into your home is able to give a perceptual reality that has a detail that is already in

02:13:15 the same ballpark as what your brain can process. And everything else are ideas about the world.

02:13:22 And I suspect that they are relatively sparse and also the intuitive models that we form about

02:13:27 social interaction. Social interaction is not so hard. It’s just hard for us nerds because we all

02:13:32 have our wires crossed, so we need to deduce them. But the wires are present in most social animals.

02:13:37 So it’s interesting thing to notice that many domestic social animals, like cats and dogs,

02:13:44 have better social cognition than children. Right. I hope so. I hope it doesn’t take that many concepts,

02:13:51 fundamentally, to exist in this world. For me, it’s more like, I’m afraid so, because this

02:13:57 thing that we only appear to be so complex to each other because we are so stupid is a little

02:14:02 bit depressing. Yeah, to me that’s inspiring, if we’re indeed as stupid as it seems. The thing is,

02:14:11 our brains don’t scale, and the information processing systems that we build tend to scale very well.

02:14:16 Yeah, but I mean, one of the things that worries me is that the fact that the brain doesn’t scale

02:14:23 means that that’s actually a fundamental feature of the brain. All the flaws of the brain,

02:14:30 everything that we see as limitations, perhaps they’re fundamental; the constraints

02:14:34 on the system could be a requirement of its power, which is different than our current

02:14:43 understanding of intelligent systems where scale, especially with deep learning, especially with

02:14:48 reinforcement learning, the hope behind OpenAI and DeepMind, all the major results really have

02:14:55 to do with huge compute. It could also be that our brains are so small, not just because they

02:15:01 take up so much glucose in our body, like 20% of the glucose, so they don’t arbitrarily scale.

02:15:07 There’s some animals like elephants which have larger brains than us and they don’t seem to be

02:15:11 smarter. Elephants seem to be autistic. They have very, very good motor control and they’re really

02:15:16 good with details, but they really struggle to see the big picture. So you can make them

02:15:21 recreate drawings stroke by stroke, they can do that, but they cannot reproduce a still life. So

02:15:27 they cannot make a drawing of a scene that they see. They will always be only able to reproduce

02:15:31 the line drawing, at least as far as I could see from the experiments. So why is that?

02:15:37 Maybe smarter elephants would meditate themselves out of existence because their brains are too

02:15:41 large. So basically the elephants that were not autistic, they didn’t reproduce.

02:15:46 Yeah. So we have to remember that the brain is fundamentally interlinked with the body in our

02:15:50 human and biological system. Do you think that AGI systems that we try to create or greater

02:15:55 intelligent systems would need to have a body? I think they should be able to make use of a body

02:16:00 if you give it to them. But I don’t think that they fundamentally need a body. So I suspect if

02:16:06 you can interact with the world by moving your eyes and your head, you can make controlled

02:16:11 experiments. And this allows you to have many magnitudes fewer observations in order to reduce

02:16:19 the uncertainty in your models. So you can pinpoint the areas in your models where you’re

02:16:24 not quite sure and you just move your head and see what’s going on over there and you get additional

02:16:28 information. If you just have to use YouTube as an input and you cannot do anything beyond this,

02:16:33 you probably need just much more data. But we have much more data. So if you can build a system that

02:16:39 has enough time and attention to browse all of YouTube and extract all the information that there

02:16:44 is to be found, I don’t think there’s an obvious limit to what it can do. Yeah, but it seems that

02:16:50 the interactivity is a fundamental thing that the physical body allows you to do. But let me ask on

02:16:55 that topic: that’s what a body is, allowing the brain to, like, touch things and move

02:17:00 things and interact with, whether the physical world exists or not, whatever, but interact with

02:17:06 some interface to the physical world. What about a virtual world? Do you think we can do the same kind

02:17:14 of reasoning, consciousness, intelligence if we put on a VR headset and move over to that world?

02:17:23 Do you think there’s any fundamental difference between the interface to the physical world that

02:17:28 it’s here in this hotel and if we were sitting in the same hotel in a virtual world? The question

02:17:32 is, does this nonphysical world or this other environment entice you to solve problems that

02:17:39 require general intelligence? If it doesn’t, then you probably will not develop general intelligence

02:17:44 and arguably most people are not generally intelligent because they don’t have to solve

02:17:48 problems that make them generally intelligent. And even for us, it’s not yet clear if we are smart

02:17:52 enough to build AI and understand our own nature to this degree. So it could be a matter of capacity

02:17:58 and for most people, it’s in the first place a matter of interest. They don’t see the point

02:18:02 because the benefit of attempting this project are marginal because you’re probably not going

02:18:06 to succeed in it and the cost of trying to do it requires complete dedication of your entire life.

02:18:11 Right? But it seems like the possibilities of what you can do in the virtual world,

02:18:15 as we imagine it, are much greater than what you can do in the real world. So imagine a situation,

02:18:21 maybe an interesting option for me. If somebody came to me and offered, what we’ll do is,

02:18:27 so from now on, you can only exist in the virtual world. And so you put on this headset and when you

02:18:34 eat, we’ll make sure to connect your body up in a way that when you eat in the virtual world,

02:18:41 your body will be nourished in the same way in the real world. So it’s aligning incentives

02:18:45 between our common sort of real world and the virtual world, but then the possibilities become

02:18:50 much bigger. Like I could be other kinds of creatures. I could do, I can break the laws

02:18:57 of physics as we know them. I could do a lot. I mean, the possibilities are endless, right? As far

02:19:02 as we think it’s an interesting thought, whether like what existence would be like, what kind of

02:19:08 intelligence would emerge there? What kind of consciousness? What kind of maybe greater

02:19:13 intelligence, even in me, Lex, even at this stage in my life, if I spend the next 20 years in that

02:19:19 world to see how that intelligence emerges. And if that happened at the very beginning,

02:19:26 before I was even cognizant of my existence in this physical world, it’s interesting to think

02:19:31 how that child would develop. And the way virtual reality and digitization of everything is moving,

02:19:37 it’s not completely out of the realm of possibility that we’re all, that some part of our lives will,

02:19:44 if not the entirety of it, will be lived in a virtual world to a greater degree than we currently do,

02:19:51 living on Twitter and social media and so on. Do you have, I mean, does something draw you

02:19:56 intellectually or naturally in terms of thinking about AI to this virtual world where more

02:20:03 possibilities are? I think that currently it’s a waste of time to deal with the physical world

02:20:09 before we have mechanisms that can automatically learn how to deal with it.

02:20:13 The body gives you second order agency, but what constitutes the body is the things that

02:20:18 you can indirectly control. The third order are tools, and the second order is the things that

02:20:24 are basically always present, but you operate on them with first order things, which are mental

02:20:29 operators. And the zero order is in some sense, the direct sense of what you’re deciding. Right.

02:20:36 So you observe yourself initiating an action, there are features that you interpret as the

02:20:42 initiation of an action. Then you perform the operations that you perform to make that happen.

02:20:47 And then you see the movement of your limbs and you learn to associate those and thereby model

02:20:52 your own agency over this feedback, right? But the first feedback that you get is from this first

02:20:56 order thing already. Basically, you decide to think a thought and the thought is being thought.

02:21:01 You decide to change the thought and you observe how the thought is being changed.

02:21:05 And in some sense, this is, you could say, an embodiment already, right? And I suspect it’s

02:21:10 sufficient as an embodiment for intelligence. And so it’s not that important at least at

02:21:14 this time to consider variations in the second order. Yes. But the thing that you also put

02:21:20 mentioned just now is physics that you could change in any way you want.

02:21:24 So you need an environment that puts up resistance against you. If there’s nothing to control,

02:21:29 you cannot make models, right? There needs to be a particular way that resists you.

02:21:34 And by the way, your motivation is usually outside of your mind. It resists you. Motivation

02:21:38 is what gets you up in the morning even though it would be much less work to stay in bed.

02:21:43 So it’s basically forcing you to resist the environment and it forces your mind to serve it,

02:21:51 to serve this resistance to the environment. So in some sense, it is also putting up resistance

02:21:56 against the natural tendency of the mind to not do anything. Yeah. So some of that resistance,

02:22:01 just like you described with motivation is like in the first order, it’s in the mind.

02:22:05 Some resistance is in the second order, like actual physical objects pushing against you and so on.

02:22:11 It seems that the second order stuff in virtual reality could be recreated.

02:22:14 Of course. But it might be sufficient that you just do mathematics and mathematics is already

02:22:19 putting up enough resistance against you. So basically just with an aesthetic motive,

02:21:24 this could maybe be sufficient to form a type of intelligence. It would probably not be a very

02:22:29 human intelligence, but it might be one that is already general. So to mess with this zero order,

02:22:37 maybe first order, what do you think about ideas of brain computer interfaces? So again, returning

02:22:43 to our friend Elon Musk and Neuralink, a company that’s trying to, of course, there’s a lot of

02:22:48 a trying to cure diseases and so on with a near term, but the longterm vision is to add an extra

02:22:54 layer to basically expand the capacity of the brain connected to the computational world.

02:23:03 Do you think one that’s possible too, how does that change the fundamentals of the zeroth order

02:23:07 in the first order? It’s technically possible, but I don’t see that the FDA would ever allow me to

02:23:11 drill holes in my skull to interface my neocortex the way Elon Musk envisions. So at the moment,

02:23:16 I can do horrible things to mice, but I’m not able to do useful things to people,

02:23:21 except maybe at some point down the line in medical applications. So this thing that we

02:23:26 are envisioning, which means recreational and creational brain computer interfaces

02:23:33 are probably not going to happen in the present legal system.

02:23:36 I love how I’m asking you out-there philosophical and sort of engineering

02:23:43 questions, and for the first time ever, you jumped to the legal stuff, the FDA.

02:23:48 There would be enough people that would be crazy enough to have holes drilled in their skull to

02:23:51 try a new type of brain computer interface. But also, if it works, FDA will approve it.

02:23:57 I mean, yes, it’s like, you know, I work a lot with autonomous vehicles. Yes,

02:24:02 you can say that it’s going to be a very difficult regulatory process of approving

02:24:05 autonomous, but it doesn’t mean autonomous vehicles are never going to happen.

02:24:08 No, they will totally happen as soon as we create jobs for at least two lawyers

02:24:14 and one regulator per car.

02:24:17 Yes, lawyers. It’s like lawyers are the fundamental substrate of reality.

02:24:24 In the US, it’s a very weird system. It’s not universal in the world. The law is a very

02:24:30 interesting software once you realize it, right? These circuits are in some sense streams of

02:24:34 software, and this largely works by exception handling. So you make decisions on the ground

02:24:39 and they get synchronized with the next level structure as soon as an exception is being

02:24:43 thrown. So it escalates the exception handling. The process is very expensive,

02:24:49 especially since it incentivizes the lawyers to produce work for lawyers.
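
A loose sketch of that exception-handling analogy in Python; the court levels, case fields, and rules are purely illustrative:

```python
# Decisions are made locally, and only when a party "throws an exception" (disputes
# the outcome) does the case escalate to the next level of structure.

class Dispute(Exception):
    pass

def decide_on_the_ground(case):
    if case.get("contested"):
        raise Dispute("parties disagree")
    return f"settled locally: {case['issue']}"

def trial_court(case):
    if case.get("novel_question"):
        raise Dispute("no precedent at this level")
    return f"trial court rules on: {case['issue']}"

def appellate_court(case):
    return f"appellate court sets precedent for: {case['issue']}"

def resolve(case):
    # Each level handles what it can; anything unresolved escalates upward.
    try:
        return decide_on_the_ground(case)
    except Dispute:
        try:
            return trial_court(case)
        except Dispute:
            return appellate_court(case)

print(resolve({"issue": "fence height", "contested": False}))
print(resolve({"issue": "drone overflight", "contested": True, "novel_question": True}))
```

Each level only handles what it can decide locally; anything it cannot resolve gets escalated and synchronized with the next level of structure, which is also why every escalation produces more work for the lawyers involved.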

02:24:54 Yes, so the exceptions are actually incentivized to fire often. But to return, outside of

02:25:02 lawyers, is there anything interesting, insightful about the possibility of this extra layer

02:25:13 of intelligence added to the brain?

02:25:15 I do think so, but I don’t think that you need technically invasive procedures to do

02:25:20 so. We can already interface with other people by observing them very, very closely and getting

02:25:25 in some kind of empathetic resonance. And I’m not very good at this, but I noticed that

02:25:31 people are able to do this to some degree. And it basically means that we model an interface

02:25:37 layer of the other person in real time. And it works despite our neurons being slow because

02:25:42 most of the things that we do are built on periodic processes. So you just need to entrain

02:25:46 yourself with the oscillation that happens. And if the oscillation itself changes slowly

02:25:51 enough, you can basically follow along.

02:25:54 Right. But the bandwidth of the interaction, it seems like you can do a lot more computation

02:26:03 when there’s…

02:26:04 Of course. But the other thing is that the bandwidth that our brain, our own mind is

02:26:08 running on is actually quite slow. So the number of thoughts that I can productively

02:26:12 think in any given day is quite limited. If I had the discipline to write it down

02:26:18 and the speed to write it down, maybe it would be a book every day or so. But if you think

02:26:22 about the computers that we can build, the magnitudes at which they operate, this would

02:26:28 be nothing. It’s something that they can put out in a second.

02:26:30 Well, I don’t know. So it’s possible the number of thoughts you have in your brain is… It

02:26:37 could be several orders of magnitude higher than what you’re possibly able to express

02:26:41 through your fingers or through your voice.

02:26:45 Most of them are going to be repetitive because they…

02:26:47 How do you know that?

02:26:48 Because they have to control the same problems every day. When I walk, there are going

02:26:53 be processes in my brain that model my walking pattern and regulate them and so on. But it’s

02:26:58 going to be pretty much the same every day.

02:26:59 But that could be…

02:27:00 Every step.

02:27:01 But I’m talking about intellectual reasoning, thinking. So the question, what is the best

02:27:04 system of government? So you sit down and start thinking about that. One of the constraints

02:27:09 is that you don’t have access to a lot of facts, a lot of studies. You always have to

02:27:16 interface with something else to learn more, to aid in your reasoning process. If you can

02:27:23 directly access all of Wikipedia in trying to understand what is the best form of government,

02:27:28 then every thought won’t be stuck in a loop. Every thought that requires some extra piece

02:27:33 of information will be able to grab it really quickly. That’s the possibility: if the

02:27:38 bottleneck is literally the information, if the bottleneck of breakthrough ideas is just

02:27:47 being able to quickly access huge amounts of information, then the possibility of connecting

02:27:51 your brain to the computer could lead to totally new breakthroughs. You can think of mathematicians

02:27:59 being able to just up the orders of magnitude of power in their reasoning about

02:28:08 mathematical proofs. What if humanity has already discovered the optimal form of

02:28:12 government through an evolutionary process? There is an evolution going on. So what we

02:28:17 discover is that maybe the problem of government doesn’t have stable solutions for us as a species,

02:28:23 because we are not designed in such a way that we can make everybody conform to them.

02:28:28 But there could be solutions that work under given circumstances or that are the best for

02:28:33 certain environment and depends on, for instance, the primary forms of ownership and the means

02:28:38 of production. So if the main means of production is land, then the forms of government will be

02:28:45 regulated by the landowners and you get a monarchy. If you also want to have a form of

02:28:50 government in which you depend on some form of slavery, for instance, where the peasants have

02:28:56 to work very long hours for very little gain, so very few people can have plumbing, then maybe

02:29:01 you need to promise them that they will get paid in the afterlife for the overtime. So you need a theocracy.

02:29:08 And so for much of human history in the West, we had a combination of monarchy and theocracy

02:29:14 that was our form of governance. At the same time, the Catholic Church implemented game theoretic

02:29:21 principles. I recently reread Thomas Aquinas. It’s very interesting to see this because he was not

02:29:27 a dualist. He was translating Aristotle in a particular way for designing an operating

02:29:32 system for the Catholic society. And he says that basically people are animals in very much the same

02:29:39 way as Aristotle envisions, which is basically organisms with cybernetic control. And then he

02:29:44 says that there are additional rational principles that humans can discover and everybody can

02:29:48 discover them so they are universal. If you are sane, you should understand, you should submit to

02:29:53 them because you can rationally deduce them. And these principles are roughly you should be willing

02:30:00 to self regulate correctly. You should be willing to do correct social regulation. It’s

02:30:06 interorganismic. You should be willing to act on your models so you have skin in the game.

02:30:17 And you should have goal rationality. You should be choosing the right

02:30:20 goals to work on. So basically these three rational principles, goal rationality he calls

02:30:26 prudence or wisdom, social regulation is justice, the correct social one, and the internal regulation

02:30:33 is temperance. And this willingness to act on your models is courage. And then he says that

02:30:40 there are additionally to these four cardinal virtues, three divine virtues. And these three

02:30:45 divine virtues cannot be rationally deduced, but they reveal themselves by the harmony, which means

02:30:49 if you assume them and you extrapolate what’s going to happen, you will see that they make sense.

02:30:55 And it’s often been misunderstood, as if God has to tell you that these are the things, as if

02:31:00 there’s something nefarious going on, as if the Christian conspiracy forces you to believe that

02:31:05 some guy with a long beard discovered this. So these principles are relatively simple.

02:31:11 Again, it’s for high level organization for the resulting civilization that you form.

02:31:16 A commitment to unity. So basically you serve this higher, larger thing,

02:31:21 this structural principle on the next level. And he calls that faith. Then there needs to be a

02:31:28 commitment to shared purpose. This is basically this global reward that you try to figure out

02:31:32 what that should be and how you can facilitate this. And this is love. The commitment to shared

02:31:36 purpose is the core of love, right? You see the sacred thing that is more important than your own

02:31:40 organismic interests in the other, and you serve this together. And this is how you see the sacred

02:31:45 in the other. And the last one is hope, which means you need to be willing to act on that

02:31:51 principle without getting rewards in the here and now because it doesn’t exist yet.

02:31:55 Then you start out building the civilization, right? So you need to be able to do this in the

02:31:59 absence of its actual existence yet. So it can come into being. So the way it comes into being

02:32:06 is by you accepting those notions and then you see these three divine concepts and you see them

02:32:12 realized. Divine is a loaded concept in our world because we are outside of this cult and we are

02:32:18 still scarred from breaking free of it. But the idea is basically we need to have a civilization

02:32:23 that acts as an intentional agent, like an insect state. And we are not actually a tribal species,

02:32:28 we are a state building species. And what enables state building is basically the formation of

02:32:35 religious states and other forms of rule based administration in which the individual doesn’t

02:32:40 matter as much as the rule or the higher goal. We got there by the question, what’s the optimal

02:32:45 form of governance? So I don’t think that Catholicism is the optimal form of governance

02:32:50 because it’s obviously on the way out, right? So it is for the present type of society that we are

02:32:54 in. Religious institutions don’t seem to be optimal to organize that. So what we have discovered, what

02:33:01 we live in right now in the West, is democracy. And democracy is the rule of oligarchs, that is, the

02:33:06 people that currently own the means of production, administered not by the oligarchs

02:33:11 themselves because there’s too much disruption. We have so much innovation that we have in every

02:33:17 generation new means of production that we invent. And corporations die usually after 30 years or so

02:33:23 and something other takes a leading role in our societies. So it’s administered by institutions

02:33:29 and these institutions themselves are not elected but they provide continuity and they are led by

02:33:35 electable politicians. And this makes it possible that you can adapt to change without having to

02:33:40 kill people, right? So you can, for instance, have a change in governments if people think that the

02:33:44 current government is too corrupt or is not up to date, you can just elect new people. Or if a

02:33:50 journalist finds out something inconvenient about the institution and the institution has no plan B

02:33:55 like in Russia, the journalist has to die. This is when you run society by the deep state. So ideally

02:34:02 you have an administration layer that you can change if something bad happens, right? So you

02:34:09 will have a continuity in the whole thing. And this is the system that we came up in the West.

02:34:13 And the way it’s set up in the US is largely a result of low level models. So it’s mostly just

02:34:17 second, third order consequences that people are modeling in the design of these institutions. So

02:34:22 it’s a relatively young society that doesn’t really take care of the downstream effects of

02:34:27 many of the decisions that are being made. And I suspect that AI can help us with this in a way, if you

02:34:33 can fix the incentives. The society of the US is a society of cheaters. It’s basically cheating is

02:34:39 so indistinguishable from innovation and we want to encourage innovation. Can you elaborate on what

02:34:44 you mean by cheating? It’s basically people do things that they know are wrong. It’s acceptable

02:34:48 to do things that you know are wrong in this society to a certain degree. You can, for instance,

02:34:52 suggest some non sustainable business models and implement them. Right. But you’re always pushing

02:34:57 the boundaries. I mean, yes, this is seen as a good thing largely. Yes. And this is different

02:35:05 from other societies. So for instance, social mobility is an aspect of this. Social mobility

02:35:09 is the result of individual innovation that would not be sustainable at scale for everybody else.

02:35:14 Right. Normally you should not go up, you should go deep, right? We need bakers, and we need very,

02:35:18 very good bakers, but in a society that innovates, maybe you can replace all the bakers with a really

02:35:23 good machine. Right. And that’s not a bad thing. And it’s a thing that made the US so successful,

02:35:29 right? But it also means that the US is not optimizing for sustainability, but for innovation.

02:35:34 And so it’s not obvious as the evolutionary process is unrolling, it’s not obvious that that

02:35:39 long term would be better. It has side effects. So you basically, if you cheat, you will have a

02:35:45 certain layer of toxic sludge that covers everything that is a result of cheating.

02:35:50 And we have to unroll this evolutionary process to figure out if these side effects are so damaging

02:35:55 that the system is horrible, or if the benefits actually outweigh the negative effects.

02:36:03 How did we get to which system of government is best? That was from,

02:36:07 I’m trying to trace back the last like five minutes.

02:36:10 I suspect that we can find a way back to AI by thinking about the way in which our brain has to

02:36:16 organize itself. In some sense, our brain is a society of neurons. And our mind is a society

02:36:24 of behaviors. And they need to be organizing themselves into a structure that implements

02:36:30 regulation and government is social regulation. We often see government as the manifestation of

02:36:36 power or local interests, but it’s actually a platform for negotiating the conditions of human

02:36:40 survival. And this platform emerges over the current needs and possibilities and the trajectory

02:36:46 that we have. So given the present state, there are only so many options on how we can move into

02:36:52 the next state without completely disrupting everything. And we mostly agree that it’s a

02:36:56 bad idea to disrupt everything because it will endanger our food supply for a while and the entire

02:37:01 infrastructure and fabric of society. So we do try to find natural transitions,

02:37:06 and there are not that many natural transitions available at any given point.

02:37:10 What do you mean by natural transitions?

02:37:12 So we try not to have revolutions if we can help it.

02:37:14 Right. So speaking of revolutions and the connection between government systems and the mind,

02:37:21 you’ve also said that in some sense, becoming an adult means you take charge of your emotions.

02:37:29 Maybe you never said that. Maybe I just made that up. But in the context of the mind,

02:37:35 what’s the role of emotion? And what is it? First of all, what is emotion? What’s its role?

02:37:42 It’s several things. So psychologists often distinguish between emotion and feeling,

02:37:46 and in everyday parlance, we don’t. I think that emotion is a configuration of the cognitive

02:37:52 system. And that’s especially true at the lowest level, for the affective state. So when you have

02:37:57 an affect, it’s the configuration of certain modulation parameters like arousal, valence,

02:38:03 your attentional focus, whether it’s wide or narrow, interoception or exteroception,

02:38:08 and so on. And all these parameters together determine the way you relate to the

02:38:13 environment and to yourself, and this is in some sense an emotional configuration.
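
A minimal sketch of what such a parameter configuration might look like, purely as an illustration. The class name, parameter names, and value ranges below are assumptions chosen to mirror the description above; they are not taken from any existing model or codebase.

```python
# Illustrative sketch: an affective state as a bundle of modulation parameters.
# All names and ranges are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class AffectiveState:
    arousal: float          # 0.0 (calm) .. 1.0 (highly activated)
    valence: float          # -1.0 (unpleasant) .. +1.0 (pleasant)
    attention_width: float  # 0.0 (narrow focus) .. 1.0 (wide focus)
    exteroception: float    # 0.0 (attention on the body) .. 1.0 (attention on the world)

# Example configuration: anxiety as high arousal, negative valence,
# narrow attention, pulled toward the body (interoception).
anxiety = AffectiveState(arousal=0.9, valence=-0.7,
                         attention_width=0.2, exteroception=0.1)
```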

02:38:17 In the more narrow sense, an emotion is an affective state. It has an object,

02:38:22 and the relevance of that object is given by motivation. And motivation is a bunch of needs

02:38:26 that are associated with rewards, things that give you pleasure and pain. And you don’t actually act

02:38:31 on your needs, you act on models of your needs. Because when the pleasure and pain manifest,

02:38:35 it’s too late, you’ve done everything. So you act on expectations that will give you pleasure and

02:38:40 pain. And these are your purposes. The needs don’t form a hierarchy, they just coexist and compete.

02:38:45 And your brain has to find a dynamic homeostasis between them. But the purposes need to be

02:38:51 consistent. So you basically can create a story for your life and make plans. And so we organize

02:38:57 them all into hierarchies. And there is not a unique solution for this. Some people eat to make

02:39:02 art and other people make art to eat. They might end up doing the same things, but they cooperate

02:39:07 in very different ways. Because their ultimate goals are different. And we cooperate based on

02:39:12 shared purpose. Everything else that is not cooperation on shared purpose is transactional.

02:39:16 I don’t think I understood that last piece of achieving the homeostasis.

02:39:26 Are you distinguishing between the experience of emotion and the expression of emotion?

02:39:30 Of course. So the experience of emotion is a feeling. And in this sense, what you feel is

02:39:37 an appraisal that your perceptual system has made of the situation at hand. And it makes this based

02:39:42 on your motivation and on your estimates, not your conscious estimates but those of the subconscious, geometric parts of your

02:39:50 mind that assess the situation in the world with something like a neural network. And this neural

02:39:56 network is making itself known to the symbolic parts of your mind, to your conscious attention

02:40:02 by mapping them as features into a space. So what you will feel about your emotion is a projection

02:40:08 usually into your body map. So you might feel anxiety in your solar plexus, and you might feel

02:40:12 it as a contraction, which is all geometry. Your body map is the space that is always instantiated

02:40:18 and always available. So it’s a very obvious cheat, if the non-symbolic parts of your brain

02:40:26 try to talk to the symbolic parts of your brain, to map the feelings into the body map.

02:40:31 And then you perceive them as pleasant and unpleasant, depending on whether the appraisal

02:40:35 has a negative or positive valence. And then you have different features of them that give you

02:40:40 more knowledge about the nature of what you’re feeling. So for instance, when you feel

02:40:44 connected to other people, you typically feel this in your chest region around the heart.

02:40:48 And you feel this is an expansive feeling in which you’re reaching out, right? And it’s

02:40:53 very intuitive to encode it like this. That’s why it’s encoded like this. It’s a code in which the

02:40:59 non symbolic parts of your mind talk to the symbolic ones. And then the expression of emotion

02:41:04 is then the final step that could be sort of gestural or visual and so on. That’s part of

02:41:09 the communication. This probably evolved as part of an adversarial communication. So as soon as

02:41:14 you started to observe the facial expression and posture of others to understand what emotional

02:41:19 state they’re in, others started to use this as signaling and also to subvert your model of their

02:41:23 emotional state. So we now look at the inflections, at the difference from the standard face that

02:41:29 they’re expected to make in this situation. When you are at a funeral, everybody expects you to make a

02:41:33 solemn face, but the solemn face doesn’t express whether you’re sad or not. It just expresses that

02:41:38 you understand what face you have to make at a funeral. Nobody should know that you are triumphant.

02:41:44 So when you try to read the emotion of another person, you try to look at the delta

02:41:48 between a truly sad expression and the things that are animating this face behind the curtain.

02:41:56 So the interesting thing is, so having done this podcast and the video component, one of the things

02:42:03 I’ve learned is that now I’m Russian and I just don’t know how to express emotion on my face.

02:42:10 One, I see that as a weakness, but whatever. People look to me after you say something,

02:42:16 they look to my face to help them see how they should feel about what you said,

02:42:22 which is fascinating because then they’ll often comment on why did you look bored or why did you

02:42:28 particularly enjoy that part, or why did you whatever. It’s kind of interesting. It makes

02:42:32 me cognizant that, like, you’re basically saying a bunch of brilliant things, but I’m

02:42:38 part of the play that you’re the key actor in, by making my facial expressions and

02:42:45 thereby telling the narrative of what the big point is, which is fascinating.

02:42:51 Makes me cognizant that I’m supposed to be making facial expressions. Even this conversation is hard

02:42:56 because my preference would be to wear a mask with sunglasses to where I could just listen.

02:43:01 Yes, I understand this because it’s intrusive to interact with others this way. And basically

02:43:07 Eastern European societies have a taboo against that, and especially Russia,

02:43:11 the further you go to the East. And in the US it’s the opposite. You’re expected to be

02:43:17 hyperanimated in your face and you’re also expected to show positive affect.

02:43:22 Yes.

02:43:22 And if you show positive affect without a good reason in Russia,

02:43:27 people will think you are a stupid, unsophisticated person.

02:43:33 Exactly. And here, positive affect without reason is either appreciated or goes unnoticed.

02:43:40 No, it’s the default. It’s expected. Everything is amazing. Have you seen these?

02:43:45 Lego movie?

02:43:47 No, there was a diagram where somebody gave the appraisals that exist in the US and Russia,

02:43:52 so you have your bell curve. And the lower 10% in the US, it’s a good start. Everything

02:44:02 above the lowest 10%, it’s amazing.

02:44:04 It’s amazing.

02:44:06 And for Russians, everything below the top 10%, it’s terrible. And then everything except the

02:44:14 top percent is, I don’t like it. And the top percent is, eh, so-so.

02:44:23 It’s funny, but it’s kind of true.

02:44:27 There’s a deeper aspect to this. It’s also how we construct meaning in the US. Usually you focus on

02:44:33 the positive aspects and you just suppress the negative aspects. And in our Eastern European

02:44:40 traditions, we emphasize the fact that if you hold something above the waterline, you also need to

02:44:46 put something below the waterline because existence by itself is at best neutral.

02:44:51 Right. That’s the basic intuition, at best neutral. Or it could be just suffering,

02:44:56 the default is suffering.

02:44:56 There are moments of beauty, but these moments of beauty are inextricably linked to the reality

02:45:02 of suffering. And to not acknowledge the reality of suffering means that you are really stupid and

02:45:07 unaware of the fact that basically every conscious being spends most of the time suffering.

02:45:12 Yeah. You just summarized the ethos of Eastern Europe. Yeah. Most of life is suffering

02:45:19 with occasional moments of beauty. And if your facial expressions don’t acknowledge

02:45:24 the abundance of suffering in the world and in existence itself, then you must be an idiot.

02:45:30 It’s an interesting thing when you raise children in the US and you, in some sense,

02:45:36 preserve the identity of the intellectual and cultural traditions that are embedded in your

02:45:40 own families. And your daughter asks you about Ariel the mermaid and asks you,

02:45:46 why is Ariel not allowed to play with the humans? And you tell her the truth. She’s a siren. Sirens

02:45:53 eat people. You don’t play with your food. It does not end well. And then you tell her the original

02:45:58 story, which is not the one by Andersen, which is the romantic one. And there’s a much darker one,

02:46:02 which is Undine’s story. What happened? So Undine is a mermaid or a water woman. She lives at the

02:46:11 bottom of a river and she meets this prince and they fall in love. And the prince really,

02:46:15 really wants to be with her. And she says, okay, but the deal is you cannot have any other woman.

02:46:20 Even though you cannot be with me, because obviously you cannot breathe

02:46:24 underwater and you have other things to do, like managing your kingdom up here, if you marry somebody else, you will

02:46:29 die. And eventually, after a few years, he falls in love with some princess and marries her. And

02:46:35 she shows up and quietly goes into his chamber and nobody is able to stop her or willing to do

02:46:41 so because she is fierce. And she comes quietly and sad out of his chamber. And they ask her,

02:46:47 what has happened? What did you do? And she said, I kissed him to death.

02:46:52 All done.

02:46:53 And you know the Andersen story, right? In the Andersen story, the mermaid is playing with

02:46:59 this prince that she saves, and she falls in love with him, and she cannot live out there. So she is

02:47:04 giving up her voice and her tail for a human-like appearance so she can walk among the humans. But

02:47:11 this guy does not recognize that she is the one that he should marry. Instead, he marries somebody

02:47:16 who has a kingdom and economic and political relationships to his own kingdom and so on,

02:47:22 as he should. And she dies.

02:47:25 Yeah. Instead, Disney’s The Little Mermaid story has a little bit of a happy ending. That’s the

02:47:35 Western, that’s the American way.

02:47:37 My own problem is this, of course, that I read Oscar Wilde before I read the other things. So

02:47:41 I’m indoctrinated, inoculated with this romanticism. And I think that the mermaid is right. You

02:47:46 sacrifice your life for romantic love. That’s what you do. Because if you are confronted with

02:47:51 the choice of serving the machine and doing the obviously right thing under the economic and social and

02:47:57 other human incentives, that’s wrong. You should follow your heart.

02:48:04 So do you think suffering is fundamental to happiness along these lines?

02:48:09 Suffering is the result of caring about things that you cannot change. And if you are able to

02:48:14 change what you care about to those things that you can change, you will not suffer.

02:48:17 But would you then be able to experience happiness?

02:48:22 Yes. But happiness itself is not important. Happiness is like a cookie. When you are a child,

02:48:27 you think cookies are very important and you want to have all the cookies in the world,

02:48:30 you look forward to being an adult because then you have as many cookies as you want.

02:48:35 But as an adult, you realize a cookie is a tool. It’s a tool to make you eat vegetables.

02:48:40 And once you eat your vegetables anyway, you stop eating cookies for the most part,

02:48:43 because otherwise you will get diabetes and will not be around for your kids.

02:48:46 Yes, but then the cookie, the scarcity of a cookie, if scarcity is enforced,

02:48:51 nevertheless, the pleasure comes from the scarcity.

02:48:54 Yes. But the happiness is a cookie that your brain bakes for itself. It’s not made by the

02:48:59 environment. The environment cannot make you happy. It’s your appraisal of the environment

02:49:03 that makes you happy. And if you can change the appraisal of the environment, which you can learn

02:49:07 to, then you can create arbitrary states of happiness. And some meditators fall into this

02:49:11 trap. So they discover the room, this basement room in their brain where the cookies are made,

02:49:16 and they indulge and stuff themselves. And after a few months, it gets really old and

02:49:20 the big crisis of meaning comes. Because they thought before that their unhappiness was the

02:49:25 result of not being happy enough. So they fixed this, right? They can release the

02:49:29 neurotransmitters at will if they train. And then the crisis of meaning pops up in a deeper layer.

02:49:36 And the question is, why do I live? How can I make a sustainable civilization that is meaningful to

02:49:40 me? How can I insert myself into this? And this was the problem that you couldn’t solve in the

02:49:44 first place. But at the end of all this, let me then ask that same question. What is the answer

02:49:53 to that? What could a possible answer to the meaning of life be? What could an answer be? What is

02:49:59 it to you? I think that if you look at the meaning of life, you look at what the cell is. Life is the

02:50:06 cell. Or this principle, the cell. It’s this self organizing thing that can participate in evolution.

02:50:14 In order to make it work, it’s a molecular machine. It needs a self replicator and an

02:50:18 entropy extractor and a Turing machine. If any of these parts is missing, you don’t have a cell

02:50:22 and it is not living. And life is basically the emergent complexity over that principle.

02:50:27 Once you have this intelligent super molecule, the cell, there is very little that you cannot

02:50:32 make it do. It’s probably the optimal computronium, especially in terms of resilience. It’s very

02:50:37 hard to sterilize the planet once it’s infected with life. So the active function of these three

02:50:45 components, of the super cell, the cell, is present in the cell, it’s present in us, and it’s just…

02:50:51 We are just an expression of the cell. It’s a certain layer of complexity in the organization

02:50:55 of cells. So in a way, it’s tempting to think of the cell as a von Neumann probe. If you want to

02:51:02 build intelligence on other planets, the best way to do this is to infect them with cells

02:51:07 and wait for long enough and there’s a reasonable chance the stuff is going to evolve into an

02:51:11 information processing principle that is general enough to become sentient.

02:51:16 That idea is very akin to the same dream and beautiful ideas that are expressed in

02:51:21 cellular automata in their most simple mathematical form. If you just inject the system with some

02:51:26 basic mechanisms of replication and so on, basic rules, amazing things would emerge.
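
A minimal sketch of the kind of simple-rule system being referred to here, purely as an illustration. The choice of an elementary cellular automaton with Rule 110, the grid width, and the ASCII rendering are assumptions, not anything discussed in the conversation.

```python
# Minimal sketch of an elementary (one-dimensional) cellular automaton.
# Each cell looks only at itself and its two neighbors; the global patterns
# that emerge from this purely local rule can be surprisingly complex.
RULE = 110  # Rule 110 is a classic example; the choice here is illustrative.

def step(cells, rule=RULE):
    """Apply one update step to a circular row of 0/1 cells."""
    n = len(cells)
    new_cells = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # value 0..7
        new_cells.append((rule >> neighborhood) & 1)        # look up the rule bit
    return new_cells

if __name__ == "__main__":
    width = 79
    cells = [0] * width
    cells[width // 2] = 1          # a single "live" cell as the seed
    for _ in range(40):            # print 40 generations as ASCII art
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)
```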

02:51:32 The cell is able to do something that James Trardy calls existential design. He points out

02:51:38 that in technical design, we go from the outside in. We work in a highly controlled environment in

02:51:42 which everything is deterministic, like our computers, our labs, or our engineering workshops.

02:51:48 And then we use this determinism to implement a particular kind of function that we dream up and

02:51:53 that seamlessly interfaces with all the other deterministic functions that we already have in

02:51:57 our world. So it’s basically from the outside in. Biological systems are designed from the inside out:

02:52:04 a seed will become a seedling by taking some of the relatively unorganized matter around it and

02:52:11 turning it into its own structure, thereby subduing the environment. Cells can cooperate if

02:52:16 they can rely on other cells having a similar organization that is already compatible. But

02:52:21 unless that’s there, the cell needs to divide to create that structure by itself. So it’s a

02:52:27 self organizing principle that works on a somewhat chaotic environment. And the purpose of life in

02:52:32 this sense is to produce complexity. And the complexity allows you to harvest entropy gradients

02:52:38 that you couldn’t harvest without the complexity. And in this sense, intelligence and life are very

02:52:43 strongly connected because the purpose of intelligence is to allow control under

02:52:48 the conditions of complexity. So basically, you shift the boundary of the ordered systems

02:52:53 into the realm of chaos. You build bridgeheads into chaos with complexity. And this is what we

02:53:00 are doing. This is not necessarily a deeper meaning. I think the meaning is the one that we have priors

02:53:05 for; outside of the priors, there is no meaning. Meaning only exists if the

02:53:09 mind projects it. That is probably civilization. I think that what feels most meaningful to me is

02:53:16 to try to build and maintain a sustainable civilization. And taking a slight step outside

02:53:22 of that, we talked about a man with a beard and God, but something, some mechanism, perhaps must

02:53:34 have planted the seed, the initial seed of the cell. Do you think there is a God? What is a God?

02:53:42 And what would that look like? What if there was no spontaneous biogenesis, in the sense that the

02:53:48 first cell formed by some happy random accident where the molecules just happened to be in the

02:53:54 right constellation to each other? But there could also be the mechanism that allows for the random.

02:53:59 I mean, there’s like turtles all the way down. There seems to be, there has to be a head turtle

02:54:04 at the bottom. Let’s consider something really wild. Imagine, is it possible that a gas giant

02:54:10 could become intelligent? What would that involve? So imagine you have vortices that spontaneously

02:54:16 emerge on the gas giants, like big storm systems that endure for thousands of years.

02:54:21 And some of these storm systems produce electromagnetic fields because some of the

02:54:24 clouds are ferromagnetic or something. And as a result, they can change how certain clouds react

02:54:30 rather than other clouds, and thereby produce some self-stabilizing patterns that eventually

02:54:34 lead to regulation, feedback loops, nested feedback loops, and control. So imagine you have such a

02:54:40 thing that basically has emergent self-sustaining, self-organizing complexity. And at some point,

02:54:44 this wakes up and realizes, basically like Lem’s Solaris, I am a thinking planet, but I will not

02:54:50 replicate because I cannot recreate the conditions of my own existence somewhere else. I’m just

02:54:55 basically an intelligence that has spontaneously formed because it could. And now it builds a

02:55:01 von Neumann probe and the best von Neumann probe for such a thing might be the cell.

02:55:05 So maybe it, because it’s very, very clever and very enduring, creates cells and sends them out.

02:55:10 And one of them has infected our planet. And I’m not suggesting that this is the case,

02:55:14 but it would be compatible with the panspermia hypothesis. And it was my intuition

02:55:19 that our biogenesis is very unlikely. It’s possible, but you probably need to roll the

02:55:24 cosmic dice very often, maybe more often than there are planetary surfaces. I don’t know.

02:55:37 So God is just a system that’s large enough to allow randomness.

02:55:37 No, I don’t think that God has anything to do with creation. I think it’s a mistranslation

02:55:41 of the Talmud into the Catholic mythology. I think that Genesis is actually the childhood

02:55:46 memories of a God. So the, when. Sorry, Genesis is the.

02:55:51 The childhood memories of a God. It’s basically a mind that is remembering how it came into being.

02:55:57 Wow.

02:55:57 And we typically interpret Genesis as the creation of a physical universe by a supernatural being.

02:56:03 Yes.

02:56:04 And I think when you read it, there is light and darkness that is being created. And then you

02:56:12 discover sky and ground and create them. You construct the plants and the animals and you

02:56:18 give everything its name and so on. That’s basically cognitive development. It’s a sequence

02:56:24 of steps that every mind has to go through when it makes sense of the world. And when you have

02:56:28 children, you can see how initially they distinguish light and darkness and then they

02:56:33 make out directions in it and they discover sky and ground and they discover the plants and the

02:56:37 animals and they give everything its name. And it’s a creative process that happens in every mind

02:56:41 because it’s not given. Your mind has to invent these structures to make sense of the patterns

02:56:46 on your retina. Also, if there was some big nerd who set up a server and runs this world on it,

02:56:52 this would not create a special relationship between us and the nerd. This nerd would not

02:56:57 have the magical power to give meaning to our existence. So this equation of a creator god

02:57:03 with the god of meaning is a sleight of hand. You shouldn’t do it.

02:57:07 The other one that is done in Catholicism is the equation of the first mover,

02:57:12 the prime mover of Aristotle, which is basically the automaton that runs the universe. Aristotle

02:57:17 says if things are moving and things seem to be moving here, something must move them. If something

02:57:23 moves them, something must move the thing that is moving it. So there must be a prime mover.

02:57:28 This idea to say that this prime mover is a supernatural being is complete nonsense.

02:57:33 It’s an automaton in the simplest case. So we have to explain the enormity that this automaton

02:57:39 exists at all. But again, we don’t have any possibility to infer anything about its properties

02:57:45 except that it’s able to produce change in information. So there needs to be some kind

02:57:51 of computational principle. This is all there is. But to say this automaton is identical again with

02:57:56 the creator, or the first cause, or with the thing that gives meaning to our life is a confusion.

02:58:02 No, I think that what we perceive is the higher being that we are part of. The higher being that

02:58:08 we are part of is the civilization. It’s the thing in which we have a similar relationship as the cell

02:58:13 has to our body. And we have this prior because we have evolved to organize in these structures.

02:58:20 So basically, the Christian God in its natural form without the mythology,

02:58:24 if you undress it, is basically the platonic form of the civilization.

02:58:30 Is the ideal?

02:58:32 Yes, it’s this ideal that you try to approximate when you interact with others,

02:58:36 not based on your incentives, but on what you think is right.

02:58:38 Wow, we covered a lot of ground. And we’re left with one of my favorite lines, and there’s many,

02:58:45 which is happiness is a cookie that the brain bakes itself. It’s been a huge honor and a

02:58:54 pleasure to talk to you. I’m sure our paths will cross many times again.

02:58:59 Joscha, thank you so much for talking today. I really appreciate it.

02:59:02 Thank you, Lex. It was so much fun. I enjoyed it.

02:59:05 Awesome. Thanks for listening to this conversation with Joscha Bach. And thank you to our sponsors,

02:59:12 ExpressVPN and Cash App. Please consider supporting this podcast by getting ExpressVPN at

02:59:18 expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. If you enjoy this

02:59:27 thing, subscribe on YouTube, review it with five stars in Apple Podcast, support it on Patreon,

02:59:33 or simply connect with me on Twitter at lexfridman. And yes, try to figure out how to

02:59:39 spell it without the E. And now let me leave you with some words of wisdom from Joscha Bach.

02:59:46 If you take this as a computer game metaphor, this is the best level for humanity to play.

02:59:52 And this best level happens to be the last level, as it happens against the backdrop of a dying

02:59:59 world. But it’s still the best level. Thank you for listening and hope to see you next time.