Jeff Hawkins: The Thousand Brains Theory of Intelligence #208

Transcript

00:00:00 The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand

00:00:05 the structure, function, and origin of intelligence in the human brain.

00:00:10 He previously wrote a seminal book on the subject titled On Intelligence, and recently a new book

00:00:16 called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins,

00:00:22 for example, has been raving about, calling the book, quote, "brilliant and exhilarating."

00:00:28 I can’t read those two words and not think of him saying it in his British accent.

00:00:34 Quick mention of our sponsors, Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.

00:00:41 Check them out in the description to support this podcast.

00:00:44 As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions

00:00:49 in his new book is that if human civilization were to destroy itself, all of our knowledge,

00:00:54 all our creations would go with us. He proposes that we should think about how to save that

00:01:00 knowledge in a way that long outlives us, whether that’s on Earth, in orbit around Earth,

00:01:07 or in deep space, and then to send messages that advertise this backup of human knowledge

00:01:13 to other intelligent alien civilizations. The main message of this advertisement is not that

00:01:19 we are here, but that we were once here. This little difference somehow was deeply humbling

00:01:28 to me, that we may, with some nonzero likelihood, destroy ourselves, and that an alien civilization

00:01:34 thousands or millions of years from now may come across this knowledge store, and they

00:01:40 would only with some low probability even notice it, not to mention be able to interpret it.

00:01:45 And the deeper question here for me is what information in all of human knowledge is even

00:01:49 essential? Does Wikipedia capture it or not at all? This thought experiment forces me

00:01:55 to wonder what are the things we’ve accomplished and are hoping to still accomplish that will

00:02:00 outlive us? Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science,

00:02:08 physics, and mathematics? Is it music and art? Is it computers, computational systems,

00:02:15 or even artificial intelligence systems? I personally can’t imagine that aliens wouldn’t

00:02:20 already have all of these things, in fact much more and much better. To me, the only

00:02:27 unique thing we may have is consciousness itself, and the actual subjective experience

00:02:32 of suffering, of happiness, of hatred, of love. If we can

00:02:39 record these experiences in the highest resolution directly from the human brain, such that aliens

00:02:44 will be able to replay them, that is what we should store and send as a message. Not

00:02:49 Wikipedia, but the extremes of conscious experiences, the most important of which, of course, is

00:02:56 love. This is the Lex Fridman podcast, and here is my conversation with Jeff Hawkins.

00:03:04 We previously talked over two years ago. Do you think there’s still neurons in your brain

00:03:09 that remember that conversation, that remember me and got excited? Like there’s a Lex neuron

00:03:15 in your brain that just like finally has a purpose? I do remember our conversation. I

00:03:19 have some memories of it, and I formed additional memories of you in the meantime. I wouldn’t

00:03:26 say there’s a neuron or neurons in my brain that know you. There are synapses in my brain

00:03:31 that have formed that reflect my knowledge of you and the model I have of you in the

00:03:36 world. Whether the exact same synapses were formed two years ago, it’s hard to say because

00:03:41 these things come and go all the time. One of the things to know about brains is that

00:03:46 when you think of things, you often erase the memory and rewrite it again. Yes, but I have

00:03:50 a memory of you, and that’s instantiated in synapses. There’s a simpler way to think about

00:03:55 it. You have a model of the world in your head, and that model is continually being updated.

00:04:02 I updated mine this morning. You offered me this water. You said it was from the refrigerator.

00:04:07 I remember these things. The model includes where we live, the places we know, the words,

00:04:12 the objects in the world. It’s a monstrous model, and it’s constantly being updated.

00:04:17 People are just part of that model, as are animals, other physical objects, events we've

00:04:23 experienced. In my mind, there's no special place for the memories of humans. Obviously, I know a lot about

00:04:33 my wife and friends and so on, but it’s not like a special place for humans or over here.

00:04:41 We model everything, and we model other people’s behaviors too. If I said there’s a copy of your

00:04:46 mind in my mind, it’s just because I’ve learned how humans behave, and I’ve learned some things

00:04:53 about you, and that’s part of my world model. Well, I just also mean the collective intelligence

00:05:00 of the human species. I wonder if there’s something fundamental to the brain that enables that,

00:05:08 so modeling other humans with their ideas. You’re actually jumping into a lot of big

00:05:13 topics. Collective intelligence is a separate topic that a lot of people like to talk about.

00:05:17 We could talk about that. That’s interesting. We’re not just individuals. We live in society

00:05:24 and so on. But from our research point of view, we study the neocortex.

00:05:30 It’s a sheet of neural tissue. It’s about 75% of your brain. It runs on this very repetitive

00:05:37 algorithm. It’s a very repetitive circuit. You can apply that algorithm to lots of different

00:05:44 problems, but underneath, it’s the same thing. We’re just building this model. From our point

00:05:48 of view, we wouldn’t look for these special circuits someplace buried in your brain that

00:05:52 might be related to understanding other humans. It’s more like, how do we build a model of

00:05:58 anything? How do we understand anything in the world? Humans are just another part of

00:06:02 the things we understand. But is there anything in the brain that knows about the

00:06:08 emergent phenomenon of collective intelligence? Well, I certainly know about that. I've heard

00:06:13 the term, I've read about it. No, but that's as an idea.

00:06:16 Well, I think we have language, which is built into our brains. That’s a key part of collective

00:06:21 intelligence. There are some prior assumptions about the world we’re going to live in. When

00:06:27 we’re born, we’re not just a blank slate. Did we evolve to take advantage of those situations?

00:06:35 Yes. Again, we study only part of the brain, the neocortex. There’s other parts of the

00:06:39 brain that are very much involved in societal interactions and human emotions and how we

00:06:45 interact and even societal issues about how we interact with other people, when we support

00:06:53 them, when we’re greedy and things like that. Certainly, the brain is a great place

00:07:00 where to study intelligence. I wonder if it’s the fundamental atom of intelligence.

00:07:06 Well, I would say it's absolutely a central component, even if you believe in collective

00:07:12 intelligence as, hey, that’s where it’s all happening. That’s what we need to study,

00:07:16 which I don’t believe that, by the way. I think it’s really important, but I don’t think that

00:07:19 is the thing. Even if you do believe that, then you have to understand how the brain works in

00:07:26 doing that. It's more like we are intelligent individuals and together, our intelligence is

00:07:32 magnified even more. We can do things that we couldn't do individually, but even as

00:07:37 individuals, we’re pretty damn smart and we can model things and understand the world and interact

00:07:42 with it. To me, if you’re going to start someplace, you need to start with the brain. Then you could

00:07:48 say, well, how do brains interact with each other? What is the nature of language? How do we share

00:07:53 models? If I've learned something about the world, how do I share it with you? Which is really

00:07:56 what sort of communal intelligence is. I know something, you know something. We’ve had different

00:08:02 experiences in the world. I’ve learned something about brains. Maybe I can impart that to you. You’ve

00:08:06 learned something about physics and you can impart that to me. Even just the epistemological

00:08:15 question of, well, what is knowledge and how do you represent it in the brain? That’s where it’s

00:08:20 going to reside before it's in our writings. It's obvious that human collaboration, human interaction

00:08:27 is how we build societies. But some of the things you talk about and work on,

00:08:34 some of those elements of what makes up an intelligent entity are there within a single person.

00:08:40 Absolutely. I mean, we can’t deny that the brain is the core element here. At least I think it’s

00:08:47 obvious. The brain is the core element in all theories of intelligence. It’s where knowledge

00:08:51 is represented. It’s where knowledge is created. We interact, we share, we build upon each other’s

00:08:58 work. But without a brain, you’d have nothing. There would be no intelligence without brains.

00:09:03 And so that’s where we start. I got into this field because I just was curious as to who I am.

00:09:11 How do I think? What’s going on in my head when I’m thinking? What does it mean to know something?

00:09:16 I can ask what it means for me to know something independent of how I learned it from you or from

00:09:21 someone else or from society. What does it mean for me to know that I have a model of you in my

00:09:25 head? What does it mean to know I know what this microphone does and how it works physically,

00:09:28 even when I can't see it right now? How do I know that? What does it mean? How do the neurons do that

00:09:34 at the fundamental level of neurons and synapses and so on? Those are really fascinating questions.

00:09:40 And I’m happy to be just happy to understand those if I could.

00:09:44 So in your new book, you talk about our brain, our mind as being made up of many brains.

00:09:55 So the book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?

00:10:02 The book has three sections and it has sort of maybe three big ideas. So the first section is

00:10:09 all about what we’ve learned about the neocortex and that’s the thousand brains theory. Just to

00:10:13 complete the picture, the second section is all about AI and the third section is about the future

00:10:16 of humanity. So the thousand brains theory, the big idea there, if I had to summarize into one

00:10:27 big idea, is that we think of the brain, the neocortex as learning this model of the world.

00:10:33 But what we learned is actually there’s tens of thousands of independent modeling systems going

00:10:38 on. And each of what we call columns in the cortex, there are about 150,000 of them, is a complete modeling

00:10:44 system. So it’s a collective intelligence in your head in some sense. So the thousand brains theory

00:10:50 says, well, where do I have knowledge about this coffee cup or where’s the model of this cell phone?

00:10:55 It’s not in one place. It’s in thousands of separate models that are complimentary and

00:10:59 they communicate with each other through voting. So this idea that we feel like we’re one person,

00:11:04 that’s our experience. We can explain that. But reality, there’s lots of these, it’s almost like

00:11:09 little brains, but they’re sophisticated modeling systems, about 150,000 of them in each human

00:11:16 brain. And that’s a total different way of thinking about how the neocortex is structured

00:11:21 than we or anyone else thought of even just five years ago. So you mentioned you started

00:11:27 this journey just looking in the mirror and trying to understand who you are.

00:11:31 So if you have many brains, who are you then? So it’s interesting. We have a singular perception,

00:11:38 right? We think, oh, I’m just here. I’m looking at you. But it’s composed of all these things,

00:11:42 like there’s sounds and there’s vision and there’s touch and all kinds of inputs. Yeah,

00:11:48 we have the singular perception. And what the thousand brain theory says, we have these models

00:11:51 that are visual models. We have a lot of models that are auditory models, models that are tactile

00:11:55 models and so on, but they vote. And so these things in the cortex, you can think about these

00:12:01 columns as like little grains of rice, 150,000 stacked next to each other. And each one is its

00:12:07 own little modeling system, but they have these long range connections that go between them.

00:12:12 And we call those voting connections or voting neurons. And so the different columns try to

00:12:20 reach a consensus. Like, what am I looking at? Okay. Each one has some ambiguity, but they come

00:12:24 to a consensus. Oh, there’s a water bottle I’m looking at. We are only consciously able to

00:12:30 perceive the voting. We’re not able to perceive anything that goes on under the hood. So the

00:12:35 voting is what we’re aware of. The results of the vote.

00:12:39 Yeah. Well, you can imagine it this way. We were just talking about eye movements a moment ago. So

00:12:44 as I’m looking at something, my eyes are moving about three times a second. And with each movement,

00:12:49 a completely new input is coming into the brain. It’s not repetitive. It’s not shifting it around.

00:12:54 I’m totally unaware of it. I can’t perceive it. But yet if I looked at the neurons in your brain,

00:12:58 they’re going on and off, on and off, on and off, on and off. But the voting neurons are not.

00:13:03 The voting neurons are saying, we all agree, even though I’m looking at different parts of this,

00:13:06 this is a water bottle right now. And that’s not changing. And it’s in some position and

00:13:11 pose relative to me. So I have this perception of the water bottle about two feet away from me

00:13:15 at a certain pose to me. That is not changing. That’s the only part I’m aware of. I can’t be

00:13:20 aware of the fact that the inputs from the eyes are moving and changing and all this other stuff that's happening.

00:13:25 So these long range connections are the part we can be conscious of. The individual activity in

00:13:31 each column doesn’t go anywhere else. It doesn’t get shared anywhere else. There’s no way to extract

00:13:37 it and talk about it or extract it and even remember it to say, oh, yes, I can recall that.

00:13:45 But these long range connections are the things that are accessible to language and to our,

00:13:50 like the hippocampus, our memories, our short term memory systems and so on. So we’re not aware of

00:13:56 95% or maybe it’s even 98% of what’s going on in your brain. We’re only aware of this sort of

00:14:02 stable, somewhat stable voting outcome of all these things that are going on underneath the hood.
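
(For readers who want the voting idea in concrete form, here is a minimal illustrative sketch, not Numenta's actual algorithm. The object names, probabilities, and the multiply-and-normalize rule are assumptions made only for illustration; the point is just that many individually ambiguous column models can settle on one stable answer.)

```python
# Illustrative sketch only (not Numenta's implementation): many columns,
# each holding an ambiguous belief about what is being sensed, "vote" by
# combining their probability distributions into one stable consensus.
import math
from collections import defaultdict

def consensus(column_beliefs):
    """Combine per-column distributions over objects into one voted distribution."""
    log_scores = defaultdict(float)
    for belief in column_beliefs:
        for obj, p in belief.items():
            log_scores[obj] += math.log(p + 1e-9)  # sum log-probabilities, avoid log(0)
    max_log = max(log_scores.values())
    unnorm = {o: math.exp(s - max_log) for o, s in log_scores.items()}
    total = sum(unnorm.values())
    return {o: v / total for o, v in unnorm.items()}

# Three columns, each individually uncertain about the object...
beliefs = [
    {"water bottle": 0.5, "coffee cup": 0.3, "soda can": 0.2},
    {"water bottle": 0.6, "coffee cup": 0.2, "soda can": 0.2},
    {"water bottle": 0.4, "coffee cup": 0.4, "soda can": 0.2},
]
print(consensus(beliefs))  # ...but the vote settles clearly on "water bottle"
```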

00:14:09 So what would you say is the basic element in the thousand brains theory of intelligence?

00:14:15 Like, what's the atom of intelligence when you think about it? Is it

00:14:21 the individual brains and then what is a brain? Well, can we just talk about what

00:14:25 intelligence is first, and then we can talk about what the elements are? So in my book,

00:14:31 intelligence is the ability to learn a model of the world, to build internal to your head,

00:14:38 a model that represents the structure of everything, you know, to know that this is a

00:14:42 table and that's a coffee cup and this is a gooseneck lamp. To know these things,

00:14:47 I have to have a model of them in my head. I don't just look at them and go, what is that?

00:14:50 I already have internal representations of these things in my head and I had to learn them. I wasn’t

00:14:55 born with any of that knowledge. You know, we have some lights in the room here, and, you know,

00:15:00 that's not part of my evolutionary heritage, right? It's not in my genes. So we have this

00:15:05 incredible model and the model includes not only what things look like and feel like, but where

00:15:09 they are relative to each other and how they behave. I’ve never picked up this water bottle

00:15:12 before, but I know that if I put my hand on that blue thing and I turn it, it'll probably make a

00:15:16 funny little sound as the little plastic things detach and then it’ll rotate and it’ll rotate a

00:15:20 certain way and it’ll come off. How do I know that? Because I have this model in my head.

00:15:24 So the essence of intelligence is our ability to learn a model, and the more sophisticated our

00:15:29 model is, the smarter we are. Uh, not that there is a single intelligence, because you can know

00:15:34 about, you know, a lot about things that I don’t know. And I know about things you don’t know.

00:15:37 And we can both be very smart, but we both learned a model of the world through interacting with it.

00:15:42 So that is the essence of intelligence. Then we can ask ourselves, what are the mechanisms in the

00:15:46 brain that allow us to do that? And what are the mechanisms of learning, not just the neural

00:15:50 mechanisms, but what is the general process by which we learn a model? So that was a big insight for us.

00:15:54 It’s like, what are the, what is the actual things that, how do you learn this stuff? It turns out

00:15:59 you have to learn it through movement. Um, you can’t learn it just by that’s how we learn. We

00:16:04 learn through movement. We learn. Um, so you build up this model by observing things and

00:16:07 touching them and moving them and walking around the world and so on. So either you move or the

00:16:11 thing moves somehow. Yeah. You obviously can learn things just by reading a book, something like that.

00:16:16 But think about if I were to say, oh, here’s a new house. I want you to learn, you know,

00:16:21 what do you do? You have to walk from room to room. You have to open the doors,

00:16:25 look around, see what’s on the left, what’s on the right. As you do this, you’re building a model in

00:16:29 your head. It’s just, that’s what you’re doing. You can’t just sit there and say, I’m going to grok

00:16:34 the house. No. And you wouldn't even want to just sit down and read some

00:16:37 description of it, right? Yeah. You literally physically interact. The same with like a smartphone.

00:16:41 If I’m going to learn a new app, I touch it and I move things around. I see what happens when I,

00:16:45 when I do things with it. So that’s the basic way we learn in the world. And by the way,

00:16:49 when you say model, you mean something that can be used for prediction in the future.

00:16:54 It’s used for prediction and for behavior and planning. Right. And does a pretty good job

00:17:02 doing so. Yeah. Here’s the way to think about the model. A lot of people get hung up on this. So

00:17:08 you can imagine an architect making a model of a house, right? So there’s a physical model that’s

00:17:13 small. And why do they do that? Well, we do that because you can imagine what it would look like

00:17:17 from different angles. Okay. Look from here, look from there. And you can also say, well,

00:17:21 how far is it from the garage to the swimming pool or something like that. Right. You

00:17:25 can imagine looking at this and you can say, what would be the view from this location? So we build

00:17:29 these physical models to let you imagine the future and imagine behaviors. Now we can take

00:17:34 that same model and put it in a computer. So we now, today they’ll build models of houses in a

00:17:39 computer and they, and they do that using a set of, we’ll come back to this term in a moment,

00:17:45 reference frames, but basically you assign a reference frame for the house and you assign

00:17:49 different things for the house in different locations. And then the computer can generate

00:17:53 an image and say, okay, this is what it looks like in this direction. The brain is doing something

00:17:56 remarkably similar to this, surprisingly. It's using reference frames. It's building these,

00:18:02 it’s similar to a model on a computer, which has the same benefits of building a physical model.

00:18:06 It allows me to say, what would this thing look like if it was in this orientation? What would

00:18:10 likely happen if I push this button? I’ve never pushed this button before, or how would I accomplish

00:18:15 something? I want to convey a new idea I've learned. How would I do that? I can imagine

00:18:21 in my head, well, I could talk about it. I could write a book. I could do some podcasts. I could,

00:18:28 you know, maybe tell my neighbor, you know, and I can imagine the outcomes of all these things

00:18:32 before I do any of them. That’s what the model lets you do. It lets us plan the future and

00:18:36 imagine the consequences of our actions. Prediction, you asked about prediction. Prediction

00:18:42 is not the goal of the model. Prediction is an inherent property of it, and it’s how the model

00:18:48 corrects itself. So prediction is fundamental to intelligence. It’s fundamental to building a model,

00:18:55 and the model’s intelligent. And let me go back and be very precise about this. Prediction,

00:19:00 you can think of prediction two ways. One is like, hey, what would happen if I did this? That’s a

00:19:03 type of prediction. That's a key part of intelligence. But another kind of prediction is like, oh,

00:19:07 what’s this water bottle going to feel like when I pick it up, you know? And that doesn’t seem very

00:19:13 intelligent. But one way to think about prediction is it’s a way for us to learn where our model is

00:19:20 wrong. So if I picked up this water bottle and it felt hot, I’d be very surprised. Or if I picked

00:19:26 it up and it was very light, I’d be surprised. Or if I turned this top and I had to turn it the other

00:19:32 way, I’d be surprised. And so all those might have a prediction like, okay, I’m going to do it. I’ll

00:19:38 drink some water. I’m okay. Okay, I do this. There it is. I feel opening, right? What if I had to turn

00:19:42 it the other way? Or what if it split in two? Then I say, oh my gosh, I misunderstood this. I

00:19:47 didn’t have the right model of this thing. My attention would be drawn to it. I’d be looking at

00:19:50 it going, well, how the hell did that happen? Why did it open up that way? And I would update my

00:19:55 model by doing it. Just by looking at it and playing around with it, I update it and say, this is

00:19:58 a new type of water bottle. So you’re talking about sort of complicated things like a water bottle,

00:20:05 but this also applies for just basic vision, just like seeing things. It’s almost like a

00:20:10 precondition of just perceiving the world is predicting it. So just everything that you see

00:20:18 is first passed through your prediction. Everything you see and feel. In fact,

00:20:23 this was the insight I had back in the early 80s. And I know that other people have reached the same idea.

00:20:31 It's that for every sensory input you get, not just vision, but touch and hearing, you have an

00:20:37 expectation about it and a prediction. Sometimes you can predict very accurately. Sometimes you

00:20:43 can’t. I can’t predict what next word is going to come out of your mouth. But as you start talking,

00:20:47 I’ll get better and better predictions. And if you talk about some topics, I’d be very surprised.

00:20:51 So I have this sort of background prediction that’s going on all the time for all of my senses.

00:20:58 Again, the way I think about that is this is how we learn. It’s more about how we learn.

00:21:04 It’s a test of our understanding. Our predictions are a test. Is this really a water bottle? If it

00:21:10 is, I shouldn’t see a little finger sticking out the side. And if I saw a little finger sticking

00:21:14 out, I was like, oh, what the hell’s going on? That’s not normal. I mean, that’s fascinating

00:21:20 that… Let me linger on this for a second. It really honestly feels that prediction is

00:21:27 fundamental to everything, to the way our mind operates, to intelligence. So it’s just a different

00:21:35 way to see intelligence, which is like everything starts with a prediction. And prediction requires a

00:21:41 model. You can’t predict something unless you have a model of it. Right. But the action is

00:21:46 prediction. So the thing the model does is prediction. But it also… Yeah. But you can

00:21:53 then extend it to things like, oh, what would happen if I took this today? I went and did this.

00:21:59 What would be likely? Or how… You can extend prediction to like, oh, I want to get a promotion

00:22:04 at work. What action should I take? And you can say, if I did this, I predict what might happen.

00:22:09 If I spoke to someone, I predict what might happen. So it’s not just low level predictions.

00:22:13 Yeah. It’s all predictions. It’s all predictions. It’s like this black box so you can ask basically

00:22:17 any question, low level or high level. So we started off with that observation. It’s

00:22:21 this nonstop prediction. And I write about this in the book. And then we asked, how do neurons

00:22:27 actually make predictions physically? Like what does the neuron do when it makes a prediction?

00:22:32 Or what does the neural tissue do when it makes a prediction? And then we asked, what are the

00:22:35 mechanisms by how we build a model that allows you to make predictions? So we started with prediction

00:22:40 as sort of the fundamental research agenda, in some sense. And say, well, if we understand how

00:22:47 the brain makes predictions. We’ll understand how it builds these models and how it learns.

00:22:51 And that’s the core of intelligence. So it was the key that got us in the door

00:22:55 to say, that is our research agenda. Understand predictions.
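
(To make the role of prediction concrete, here is a small illustrative sketch, not Hawkins's or Numenta's code. The attribute names and the learned model below are hypothetical; the idea it shows is only that a prediction is a test of the internal model, and a mismatch, a surprise, is what flags where the model needs updating.)

```python
# Illustrative sketch only: prediction as a test of an internal model.
# The model predicts what each sensation should be; a mismatch (surprise)
# is what flags that the model is wrong and should be updated.

# Hypothetical learned model of a water bottle: expected sensations.
model = {"temperature": "cool", "weight": "full", "cap_turn": "counterclockwise"}

def check_predictions(model, observation):
    """Return the surprising attributes, i.e. the prediction errors."""
    surprises = []
    for attribute, predicted in model.items():
        actual = observation.get(attribute)
        if actual is not None and actual != predicted:
            surprises.append((attribute, predicted, actual))
    return surprises

# Pick the bottle up and it turns out to be hot: attention is drawn there
# and the model gets corrected by the experience.
observation = {"temperature": "hot", "weight": "full", "cap_turn": "counterclockwise"}
for attribute, predicted, actual in check_predictions(model, observation):
    print(f"surprise on {attribute}: predicted {predicted}, got {actual}")
    model[attribute] = actual  # update the model where it was wrong
```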

00:22:59 So in this whole process, where does intelligence originate, would you say?

00:23:05 So if we look at things that are much less intelligent than humans and you start to build

00:23:12 up a human through the process of evolution, where’s this magic thing that has a prediction

00:23:19 model or a model that’s able to predict that starts to look a lot more like intelligence?

00:23:24 Is there a place where that happens? Richard Dawkins wrote an introduction to your book, an excellent

00:23:30 introduction. I mean, it puts a lot of things into context and it's funny just looking

00:23:36 at parallels between your book and Darwin's Origin of Species. So Darwin wrote about the origin

00:23:42 of species. So what is the origin of intelligence?

00:23:47 Well, we have a theory about it and it’s just that, it’s a theory. The theory goes as follows.

00:23:53 As soon as living things started to move, they're not just floating in the sea, they're not just a

00:23:58 plant, you know, grounded someplace. As soon as they started to move, there was an advantage to

00:24:03 moving intelligently, to moving in certain ways. And there’s some very simple things you can do,

00:24:08 you know, bacteria or single cell organisms can move towards the source of a gradient of

00:24:14 food or something like that. But an animal that might know where it is and know where it’s been

00:24:19 and how to get back to that place, or an animal that might say, oh, there was a source of food

00:24:23 someplace, how do I get to it? Or there was a danger, how do I avoid it? There was a mate, how

00:24:29 do I get to them? There was a big evolutionary advantage to that. So early on, there was a

00:24:34 pressure to start understanding your environment, like where am I and where have I been? And what

00:24:40 happened in those different places? So we still have this neural mechanism in our brains. In the

00:24:49 mammals, it’s in the hippocampus and entorhinal cortex, these are older parts of the brain.

00:24:55 And these are very well studied. We build a map of our environment. So these neurons in

00:25:02 these parts of the brain know where I am in this room, and where the door was and things like that.

00:25:07 So a lot of other mammals have this?

00:25:09 All mammals have this, right? And almost any animal that knows where it is and can get around

00:25:15 must have some mapping system, must have some way of saying, I’ve learned a map of my environment,

00:25:21 I have hummingbirds in my backyard. And they go to the same places all the time. They must know

00:25:26 where they are. They're not just randomly flying around. They

00:25:30 know particular flowers they come back to. So we all have this. And it turns out it's

00:25:36 very tricky to get neurons to do this, to build a map of an environment. And so we now know,

00:25:42 there’s these famous studies that are still very active about place cells and grid cells and these

00:25:47 other types of cells in the older parts of the brain, and how they build these maps of the world.

00:25:51 It’s really clever. It’s obviously been under a lot of evolutionary pressure over a long period

00:25:55 of time to get good at this. So animals now know where they are. What we think has happened,

00:26:01 and there’s a lot of evidence to suggest this, is that that mechanism we learned to map,

00:26:06 like a space, was repackaged. The same type of neurons was repackaged into a more compact form.

00:26:17 And that became the cortical column. And it was in some sense, genericized, if that’s a word. It

00:26:23 was turned from a very specific thing, learning maps of environments, into learning maps

00:26:28 of anything, learning a model of anything, not just your space, but coffee cups and so on. And

00:26:34 it got sort of repackaged into a more compact version, a more universal version,

00:26:41 and then replicated. So the reason we’re so flexible is we have a very generic version of

00:26:46 this mapping algorithm, and we have 150,000 copies of it. Sounds a lot like the progress

00:26:52 of deep learning. How so? So take neural networks that seem to work well for a specific task,

00:27:00 compress them, and multiply them by a lot. And then you just stack them on top of each other. It's like the

00:27:07 story of transformers in natural language processing. Yeah. But in deep learning networks,

00:27:12 they end up, you’re replicating an element, but you still need the entire network to do anything.

00:27:18 Right. Here, what’s going on, each individual element is a complete learning system. This is

00:27:24 why I can take a human brain, cut it in half, and it still works. It’s the same thing.

00:27:29 It’s pretty amazing. It’s fundamentally distributed. It’s fundamentally distributed,

00:27:34 complete modeling systems. But that's the story we like to tell. I would guess it's likely largely

00:27:42 right. But there’s a lot of evidence supporting that story, this evolutionary story. The thing

00:27:50 which brought me to this idea is that the human brain got big very quickly. So that led to the

00:27:58 proposal a long time ago that, well, there's this common element; instead of creating

00:28:02 new things, it just replicated something. We also are extremely flexible. We can learn things that

00:28:07 we had no history about. And that tells us that the learning algorithm is very generic. It's very

00:28:15 kind of universal because it doesn’t assume any prior knowledge about what it’s learning.

00:28:20 And so you combine those things together and you say, okay, well, how did that come about? Where

00:28:26 did that universal algorithm come from? It had to come from something that wasn’t universal. It

00:28:29 came from something that was more specific. So anyway, this led to our hypothesis that

00:28:34 you would find grid cells and place cell equivalents in the neocortex. And when we

00:28:38 first published our first papers on this theory, we didn’t know of evidence for that. It turns out

00:28:43 there was some, but we didn’t know about it. So then we became aware of evidence for grid

00:28:48 cells in parts of the neocortex. And then now there’s been new evidence coming out. There’s some

00:28:53 interesting papers that came out just January of this year. So one of our predictions was if this

00:28:59 evolutionary hypothesis is correct, we would see grid cell place cell equivalents, cells that work

00:29:04 like them through every column in the neocortex. And that’s starting to be seen. What does it mean

00:29:08 that they're present, and why is it important? Because it tells us, well, we're asking about the

00:29:13 evolutionary origin of intelligence, right? So our theory is that these columns in the cortex

00:29:19 are working on the same principles, they’re modeling systems. And it’s hard to imagine how

00:29:25 neurons do this. And so we said, hey, it’s really hard to imagine how neurons could learn these

00:29:30 models of things. We can talk about the details of that if you want. But there’s this other part

00:29:36 of the brain that we know learns models of environments. So could that mechanism used to learn

00:29:41 to model this room be used to learn to model the water bottle? Is it the same mechanism? So we said

00:29:47 it’s much more likely the brain’s using the same mechanism, which case it would have these equivalent

00:29:52 cell types. So it’s basically the whole theory is built on the idea that these columns have

00:29:57 reference frames and they’re learning these models and these grid cells create these reference frames.

00:30:02 So it’s basically the major, in some sense, the major predictive part of this theory is that we

00:30:09 will find these equivalent mechanisms in each column in the neocortex, which tells us that

00:30:14 that’s what they’re doing. They’re learning these sensory motor models of the world. So we’re pretty

00:30:21 confident that would happen, but now we’re seeing the evidence. So the evolutionary process, nature

00:30:26 does a lot of copy and paste and see what happens. Yeah. Yeah. There’s no direction to it. But it

00:30:31 just found out like, hey, if I took these elements and made more of them, what happens? And let’s hook

00:30:37 them up to the eyes and let’s hook them to ears. And that seems to work pretty well for us. Again,

00:30:43 just to take a quick step back to our conversation of collective intelligence.

00:30:48 Do you sometimes see that as just another copy and paste aspect, copying and pasting

00:30:56 these brains in humans and making a lot of them and then creating social structures that then

00:31:04 almost operate as a single brain? I wouldn't have said it that way, but the way you said it sounded pretty good.

00:31:08 So to you, the brain is its own thing.

00:31:15 I mean, our goal is to understand how the neocortex works. We can argue how essential

00:31:20 that is to understand the human brain because it’s not the entire human brain. You can argue

00:31:25 how essential that is to understanding human intelligence. You can argue how essential this

00:31:29 is to sort of communal intelligence. Our goal was to understand the neocortex.

00:31:38 Yeah. So what is the neocortex and where does it fit

00:31:41 in the various aspects of what the brain does? Like how important is it to you?

00:31:46 Well, as I mentioned in the beginning, it's about 70 to 75% of the volume of

00:31:53 the human brain. So it dominates our brain in terms of size. Not in terms of number of neurons,

00:31:58 but in terms of size.

00:32:00 Size isn’t everything, Jeff.

00:32:02 I know, but it’s not that. We know that all high level vision,

00:32:09 hearing, and touch happens in the neocortex. We know that all language occurs and is understood

00:32:13 in the neocortex, whether that’s spoken language, written language, sign language,

00:32:17 whether it’s language of mathematics, language of physics, music. We know that all high level

00:32:23 planning and thinking occurs in the neocortex. If I were to say, what part of your brain designed

00:32:27 a computer and understands programming and creates music? It’s all the neocortex.

00:32:33 So then that’s an undeniable fact. But then there’s other parts of our brain are important too,

00:32:39 right? Our emotional states, regulating our body. So the way I like to look at it is,

00:32:48 can you understand the neocortex without the rest of the brain? And some people say you can’t,

00:32:53 and I think absolutely you can. It’s not that they’re not interacting, but you can understand.

00:32:58 Can you understand the neocortex without understanding the emotions of fear? Yes,

00:33:01 you can. You can understand how the system works. It’s just a modeling system. I make the analogy

00:33:06 in the book that it’s like a map of the world, and how that map is used depends on who’s using it.

00:33:12 So how our map of our world in our neocortex, how we manifest as a human depends on the rest of our

00:33:19 brain. What are our motivations? What are my desires? Am I a nice guy or not a nice guy?

00:33:23 Am I a cheater or not a cheater? How important different things are in my life?

00:33:33 But the neocortex can be understood on its own. And I say that as a neuroscientist,

00:33:39 I know there’s all these interactions, and I don’t want to say I don’t know them and we

00:33:43 don’t think about them. But from a layperson’s point of view, you can say it’s a modeling system.

00:33:47 I don’t generally think too much about the communal aspect of intelligence, which you brought up a

00:33:51 number of times already. So that’s not really been my concern.

00:33:55 I just wonder if there’s a continuum from the origin of the universe, like

00:34:00 these pockets of complexity that form living organisms. I wonder if we're just,

00:34:08 if you look at humans, we feel like we're at the top. And I wonder if

00:34:13 every living pocket of complexity

00:34:20 probably thinks they're, pardon the French, the shit. They're at the top of the

00:34:26 pyramid. Well, if they’re thinking. Well, then what is thinking? In this sense,

00:34:32 the whole point is, in their sense of the world, they're at the top of it.

00:34:40 Who knows what a turtle thinks? But you're bringing up, you know,

00:34:44 the problems of complexity and complexity theory, and, you know, it's a huge,

00:34:48 interesting problem in science. And, you know, I think we've made surprisingly little progress

00:34:55 in understanding complex systems in general. And so, you know, the Santa Fe Institute was

00:35:01 founded to study this and even the scientists there will say, it’s really hard. We haven’t

00:35:05 really been able to figure it out exactly. You know, that science hasn't really congealed yet. We're

00:35:10 still trying to figure out the basic elements of that science. You know, where does

00:35:15 complexity come from and what is it and how do you define it, whether it's DNA creating bodies or

00:35:20 phenotypes or it’s individuals creating societies or ants and, you know, markets and so on. It’s,

00:35:26 it’s a very complex thing. I’m not a complexity theorist person, right? Um, and I, I think you

00:35:32 should ask, well, the brain itself is a complex system. So can we understand that? I think

00:35:38 we've made a lot of progress understanding how the brain works. But I haven't

00:35:42 brought it out to like, oh, well, where are we on the complexity spectrum? You know, it’s like,

00:35:47 um, it’s a great question. I’d prefer for that answer to be we’re not special. It seems like

00:35:55 if we’re honest, most likely we’re not special. So if there is a spectrum or probably not in some

00:36:01 kind of significant place, there’s one thing we could say that we are special. And again,

00:36:06 only here on earth, I’m not saying is that if we think about knowledge, what we know,

00:36:14 um, we clearly human brains have, um, the only brains that have a certain types of knowledge.

00:36:21 We’re the only brains on this earth to understand, uh, what the earth is, how old it is,

00:36:25 that the universe is a picture as a whole with the only organisms understand DNA and

00:36:30 the origins of, you know, of species. Uh, no other species on, on this planet has that knowledge.

00:36:37 So I like to think that, you know, one of the endeavors of humanity is to

00:36:43 understand the universe as much as we can. I think our species is further along in that,

00:36:49 undeniably. Whether our theories are right or wrong, we can debate, but at least we have

00:36:54 theories. You know, we know what the sun is and how its fusion works and what black holes

00:36:59 are, and, you know, we know the general theory of relativity, and no other animal has any of this

00:37:04 knowledge. So in that sense, we're special. Are we special in terms of the hierarchy of

00:37:10 complexity in the universe? Probably not. Can we look at a neuron? Yeah. You say that prediction

00:37:20 happens in the neuron. What does that mean? So the neuron traditionally is seen as the

00:37:24 basic element of the brain. So, as I mentioned earlier, prediction was our research agenda.

00:37:31 Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water

00:37:37 bottle and my brain is predicting what I'm going to feel on all the parts of my fingers. If I

00:37:42 felt something really odd on any part here, I’d notice it. So my brain is predicting what it’s

00:37:46 going to feel as I grab this thing. So how does that manifest itself in neural

00:37:51 tissue? Right. We've got brains made of neurons and there's chemicals and there's

00:37:57 spikes and there's connections, you know, so where is the prediction going on? And one argument could be

00:38:03 that, well, when I’m predicting something, um, a neuron must be firing in advance. It’s like, okay,

00:38:09 this neuron represents what you’re going to feel and it’s firing. It’s sending a spike.

00:38:13 And certainly that happens to some extent, but our predictions are so ubiquitous

00:38:17 that we're making so many of them, and we're totally unaware of the vast majority of them. You

00:38:21 have no idea that you're doing this. So we were trying to figure out,

00:38:27 how could this be? Where are these predictions happening? Right. And I won't walk you

00:38:31 through the whole story unless you insist upon it. But we came to the realization that most of your

00:38:38 predictions are occurring inside individual neurons, especially in the most common neurons,

00:38:43 the pyramidal cells. There's a property of neurons. Everyone knows,

00:38:49 or most people know that a neuron is a cell and it has this spike called an action potential,

00:38:53 and it sends information. But we now know that there’s these spikes internal to the neuron,

00:38:58 they’re called dendritic spikes. They travel along the branches of the neuron and they don’t leave

00:39:03 the neuron. They’re just internal only. There’s far more dendritic spikes than there are action

00:39:08 potentials, far more. They're happening all the time. And what we came to understand is that those

00:39:14 dendritic spikes, the ones that are occurring, are actually a form of prediction. They're telling the

00:39:18 neuron, the neuron is saying, I expect that I might become active shortly. And that internal

00:39:25 spike is a way of saying, you might be generating external spikes

00:39:30 soon. I predict you're going to become active. And we wrote a paper in 2016

00:39:36 which explained how this manifests itself in neural tissue and how it is that this all works

00:39:42 together. There's a lot of evidence supporting it. So

00:39:48 that's where we think most of these predictions are occurring, internally. And because

00:39:51 they're internal to the neuron, you can't perceive them.

00:39:54 Well, from understanding the prediction mechanism of a single neuron, do you think there’s deep

00:40:00 insights to be gained about the prediction capabilities of the mini brains,

00:40:05 and then the bigger brain, and the brain as a whole?

00:40:08 Oh yeah. Yeah. So having a prediction occur inside an individual neuron is not that useful by itself.

00:40:12 So what? The way it manifests itself in neural tissue is that when a neuron emits these

00:40:22 spikes, it's a very singular type of event. If a neuron is predicting that it's going to be active, it

00:40:27 emits its spike a little bit sooner, just a few milliseconds sooner than it would have

00:40:31 otherwise. The analogy I give in the book is a sprinter on the starting blocks

00:40:36 in a race. And if someone says, ready, set, you get up and you're ready to go. And then when

00:40:42 the race starts, you get a little bit of an earlier start. So that ready, set is like

00:40:46 the prediction, and the neuron is ready to go quicker. And what happens is when you have a whole

00:40:50 bunch of neurons together and they're all getting these inputs, the ones that are in the predictive

00:40:55 state, the ones that are anticipating becoming active, if they do become active, they fire

00:40:59 sooner and they disable everything else. And it leads to different representations in the brain. So

00:41:04 it's not isolated just to the neuron; the prediction occurs within the neuron,

00:41:09 but the network behavior changes. So under different predictions, different inputs

00:41:14 have different representations. So what I predict is going to be different under different

00:41:20 contexts, you know, what my input will be is different under different contexts. This is

00:41:24 a key to how the whole theory works. So the theory of the thousand brains,

00:41:30 if you were to count the number of brains, how would you do it? The thousand brain theory says

00:41:35 that basically every cortical column in your cortex is a complete modeling system.

00:41:42 And that when I ask, where do I have a model of something like a coffee cup? It’s not in one of

00:41:46 those models. It’s in thousands of those models. There’s thousands of models of coffee cups. That’s

00:41:51 what the thousand brains means. Then there's a voting mechanism, which is

00:41:56 the thing you're conscious of, and which leads to your singular perception. That's why

00:42:01 you perceive something. So that's the thousand brains theory. The details of how we got to that

00:42:07 theory are complicated. It wasn't that we just thought of it one day. One of those details was that we

00:42:13 had to ask, how does a model make predictions? And we've talked about just these predictive neurons.

00:42:18 That's part of this theory. It's a detail, but it was like a crack in the

00:42:22 door. It's like, how are we going to figure out what these neurons are building? You know,

00:42:24 what is going on here? So we just looked at prediction as like, well, we know that’s ubiquitous.

00:42:30 We know that every part of the cortex is making predictions. Therefore, whatever the predictive

00:42:34 system is, it’s going to be everywhere. We know there’s a gazillion predictions happening at once.

00:42:39 So this is where we can start teasing apart, you know, asking questions about how

00:42:44 neurons could be making these predictions. And that sort of built up to what we now have, this thousand

00:42:48 brains theory, which is complex. I can state it simply, but we didn't just

00:42:53 think of it. We had to get there step by step; it took years to get there.

00:42:59 And where do reference frames fit in? So, yeah.

00:43:04 Okay. So again, a reference frame, I mentioned earlier about the model of a house. And I said,

00:43:11 if you’re going to build a model of a house in a computer, they have a reference frame. And you

00:43:14 can think of a reference frame like Cartesian coordinates, like X, Y, and Z axes. So I could

00:43:19 say, oh, I’m going to design a house. I can say, well, the front door is at this location, X, Y,

00:43:24 Z, and the roof is at this location, X, Y, Z, and so on. That’s a type of reference frame.

00:43:29 So it turns out for you to make a prediction, and I walk you through the thought experiment in the

00:43:33 book where I was predicting what my finger was going to feel when I touched a coffee cup.

00:43:37 It was a ceramic coffee cup, but this one will do. And what I realized is that to make a prediction

00:43:45 of what my finger's going to feel, like it's going to feel different than this; it'll feel

00:43:48 different if I touch the hole or this thing on the bottom. To make that prediction, the cortex needs to

00:43:53 know where the finger is, the tip of the finger, relative to the coffee cup. And exactly relative

00:43:59 to the coffee cup. And to do that, I have to have a reference frame for the coffee cup. It has to

00:44:03 have a way of representing the location of my finger to the coffee cup. And then we realized,

00:44:08 of course, every part of your skin has to have a reference frame relative to the things it touches.

00:44:11 And then we did the same thing with vision. So the idea that a reference frame is necessary

00:44:16 to make a prediction when you’re touching something or when you’re seeing something

00:44:20 and you’re moving your eyes or you’re moving your fingers, it’s just a requirement

00:44:24 to predict. If I have a structure I'm going to make a prediction about, I have to know where on it I'm

00:44:29 looking or touching. So then we said, well, how do neurons make reference frames? It's not obvious.

00:44:36 X, Y, Z coordinates don’t exist in the brain. It’s just not the way it works. So that’s when we

00:44:40 looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew

00:44:45 that in that part of the brain, there’s a reference frame for a room or a reference frame for an

00:44:49 environment. Remember, I talked earlier about how you could make a map of this room. So we said,

00:44:55 oh, they are implementing reference frames there. So we knew that reference frames needed to exist

00:45:01 in every cortical column. And so that was a deductive thing. We just deduced it. It has to

00:45:07 exist. So you take the old mammalian ability to know where you are in a particular space

00:45:15 and you start applying that to higher and higher levels.

00:45:18 Yeah. First you apply it to like where your finger is. So here's how I think about it.

00:45:22 The old part of the brain says, where’s my body in this room? The new part of the brain says,

00:45:26 where’s my finger relative to this object? Where is a section of my retina relative to

00:45:34 this object? I’m looking at one little corner. Where is that relative to this patch of my retina?

00:45:40 And then we take the same thing and apply it to concepts, mathematics, physics, humanity,

00:45:47 whatever you want to think about. And eventually you’re pondering your own mortality.

00:45:50 Well, whatever. But the point is when we think about the world, when we have knowledge about

00:45:55 the world, how is that knowledge organized, Lex? Where is it in your head? The answer is it’s in

00:46:00 reference frames. So the way I learned the structure of this water bottle, where the

00:46:05 features are relative to each other, is the same way I think about history or democracy or mathematics.

00:46:11 The same basic underlying structure is happening. There are reference frames to which the knowledge

00:46:15 you have is assigned. So in the book, I go through examples like mathematics

00:46:19 and language and politics. But the evidence is very clear in the neuroscience. The same mechanism

00:46:25 that we use to model this coffee cup, we’re going to use to model high level thoughts.

00:46:30 The demise of humanity, whatever you want to think about.
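
(Here is a minimal illustrative sketch of what "knowledge stored in a reference frame" could look like in code. The coordinates and features of the hypothetical coffee-cup model are made up; the point is only that the model maps locations in the object's own reference frame to features, so a sensed location yields a prediction, and a miss is the cue to update the model.)

```python
# Minimal illustrative sketch: an object model as a mapping from locations
# in the object's own reference frame to the features found there. All
# coordinates and feature names below are invented for the example.
cup_model = {
    (0.00, 0.00, 0.00): "flat bottom",
    (0.00, 0.05, 0.10): "smooth curved side",
    (0.06, 0.00, 0.07): "handle",
    (0.00, 0.00, 0.12): "rim",
}

def predict_feature(model, finger_location, tolerance=0.02):
    """Predict what the finger should feel at a location given in object coordinates."""
    for location, feature in model.items():
        distance = sum((a - b) ** 2 for a, b in zip(location, finger_location)) ** 0.5
        if distance <= tolerance:
            return feature
    return "no prediction: surprise, update the model here"

print(predict_feature(cup_model, (0.061, 0.0, 0.071)))  # -> "handle"
```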

00:46:34 It’s interesting to think about how different are the representations of those higher dimensional

00:46:38 concepts, higher level concepts, how different the representation there is in terms of reference

00:46:45 frames versus spatial ones. But the interesting thing is, it's a different application, but it's the exact

00:46:52 same mechanism. But isn’t there some aspect to higher level concepts that they seem to be

00:47:05 hierarchical? Like they just seem to integrate a lot of information into them. So are physical

00:47:12 objects. Take this water bottle. I'm not partial to this brand, but this is a Fiji

00:47:12 water bottle and it has a logo on it. I use this example in my book, our company’s coffee cup has

00:47:18 a logo on it. But this object is hierarchical. It’s got like a cylinder and a cap, but then it

00:47:25 has this logo on it and the logo has a word, the word has letters, the letters have different

00:47:29 features. And so I don’t have to remember, I don’t have to think about this. So I say,

00:47:33 oh, there’s a Fiji logo on this water bottle. I don’t have to go through and say, oh, what is the

00:47:37 Fiji logo? It's the F and the I and the J and the I, and there's a hibiscus flower. And, oh, it has the

00:47:43 statement on it. I don’t have to do that. I just incorporate all of that in some sort of hierarchical

00:47:47 representation. I say, put this logo on this water bottle. And then the logo has a word

00:47:55 and the word has letters, all hierarchical. All that stuff, it's amazing that the

00:47:59 brain instantly just does all that. The idea that there’s water, it’s liquid and the idea that you

00:48:04 can drink it when you’re thirsty, the idea that there’s brands and then there’s like all of that

00:48:11 information is instantly like built into the whole thing once you perceive it. So I wanted to

00:48:17 get back to your point about hierarchical representation. The world itself is hierarchical,

00:48:21 right? And I can take this microphone in front of me. I know inside there’s going to be some

00:48:25 electronics. I know there’s going to be some wires and I know there’s going to be a little

00:48:28 diaphragm that moves back and forth. I don’t see that, but I know it. So everything in the world

00:48:33 is hierarchical. You just go into a room. It’s composed of other components. The kitchen has a

00:48:37 refrigerator. The refrigerator has a door. The door has a hinge. The hinge has screws and a pin.
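
(An illustrative sketch of the hierarchical composition being described, with invented names and coordinates: an object model places whole previously learned sub-models, like the logo, at locations in its own reference frame instead of re-listing every low-level feature.)

```python
# Illustrative sketch only: hierarchical composition, where an object model
# places whole previously learned sub-models at locations in its own
# reference frame. Names and coordinates are invented for the example.
letters = {"F": "letter F", "I": "letter I", "J": "letter J"}

fiji_logo = {           # the logo places letters and a flower at its own locations
    (0, 0): letters["F"],
    (1, 0): letters["I"],
    (2, 0): letters["J"],
    (3, 0): letters["I"],
    (4, 0): "hibiscus flower",
}

water_bottle = {        # the bottle places components, including the whole logo
    (0.0, 0.0, 0.00): "flat bottom",
    (0.0, 0.0, 0.10): "clear cylindrical body",
    (0.0, 0.0, 0.20): "blue cap",
    (0.0, 0.03, 0.12): fiji_logo,   # reused as a single nested component
}

def describe(component, depth=0):
    """Recursively unpack nested component models."""
    if isinstance(component, dict):
        for location, part in component.items():
            print("  " * depth + str(location))
            describe(part, depth + 1)
    else:
        print("  " * depth + component)

describe(water_bottle)
```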

00:48:43 So anyway, the modeling system that exists in every cortical column learns the hierarchical

00:48:49 structure of objects. So it’s a very sophisticated modeling system in this grain of rice. It’s hard

00:48:54 to imagine, but this grain of rice can do really sophisticated things. It’s got 100,000 neurons in

00:48:58 it. It’s very sophisticated. So that same mechanism that can model a water bottle or a coffee cup

00:49:07 can model conceptual objects as well. That’s the beauty of this discovery that this guy,

00:49:13 Vernon Mountcastle made many, many years ago, which is that there's a single cortical algorithm

00:49:18 underlying everything we’re doing. So common sense concepts and higher

00:49:23 level concepts are all represented in the same way?

00:49:26 They’re set in the same mechanisms, yeah. It’s a little bit like computers. All computers are

00:49:31 universal Turing machines. Even the little teeny one that’s in my toaster and the big one that’s

00:49:37 running some cloud server someplace. They’re all running on the same principle. They can

00:49:41 be applied to different things. So the brain is all built on the same principle. It's all about

00:49:46 learning these structured models using movement and reference frames. And it can be applied to

00:49:53 something as simple as a water bottle and a coffee cup. And it can be applied to thinking

00:49:56 what’s the future of humanity and why do you have a hedgehog on your desk? I don’t know.

00:50:02 Nobody knows. Well, I think it’s a hedgehog. That’s right. It’s a hedgehog in the fog.

00:50:09 It’s a Russian reference. Does it give you any inclination or hope about how difficult

00:50:16 it is to engineer common sense reasoning? So how complicated is this whole process?

00:50:21 So looking at the brain, is this a marvel of engineering or is it pretty dumb stuff

00:50:28 stacked on top of each other over and over? Can it be both? Can it be both, right?

00:50:35 I don’t know if it can be both, because if it’s an incredible engineering job, that means

00:50:43 evolution did a lot of work. Yeah, but then it just copied that.

00:50:48 Yeah. Right. So as I said earlier, figuring out how to model something like a space is really hard

00:50:55 and evolution had to go through a lot of tricks. And these cells I was talking about,

00:50:59 these grid cells and place cells, they’re really complicated. This is not simple stuff.

00:51:03 This neural tissue works on these really unexpected, weird mechanisms.

00:51:08 But it did it. It figured it out. But now you could just make lots of copies of it.

00:51:13 But then finding, yeah, so it’s a very interesting idea that there’s a lot of copies

00:51:18 of a basic mini brain. But the question is how difficult is it to find that mini brain

00:51:25 that you can copy and paste effectively. Today, we know enough to build this.

00:51:33 I’m sitting here and I know the steps we have to go through. There’s still some engineering problems

00:51:37 to solve, but we know enough. And this is not like, oh, this is an interesting idea. We have

00:51:43 to go think about it for another few decades. No, we actually understand it pretty well in details.

00:51:48 So not all the details, but most of them. So it’s complicated, but it is an engineering problem.

00:51:55 So in my company, we are working on that. We basically have a roadmap of how to do this.

00:52:01 It’s not going to take decades. It’s a matter of a few years optimistically,

00:52:06 but I think that’s possible. It’s, you know, complex things. If you understand them,

00:52:11 you can build them. So in which domain do you think it’s best to build them?

00:52:17 Are we talking about robotics, like entities that operate in the physical world that are

00:52:23 able to interact with that world? Are we talking about entities that operate in the digital world?

00:52:27 Are we talking about something more like more specific, like it’s done in the machine learning

00:52:33 community where you look at natural language or computer vision? Where do you think is easiest?

00:52:41 It’s the first, it’s the first two more than the third one, I would say.

00:52:46 Again, let’s just use computers as an analogy. The pioneers in computing, people like John

00:52:52 von Neumann and Alan Turing, they created this thing, you know, we now call the universal

00:52:56 Turing machine, which is a computer, right? Did they know how it was going to be applied?

00:53:00 Where it was going to be used? Could they envision any of the future? No. They just said,

00:53:04 this is like a really interesting computational idea about algorithms and how you can implement

00:53:11 them in a machine. And we’re doing something similar to that today. Like we are building this

00:53:18 sort of universal learning principle that can be applied to many, many different things.

00:53:24 But the robotics piece of that, the interactive…

00:53:27 Okay. All right. Let’s be just specific. You can think of this cortical column as

00:53:31 what we call a sensory motor learning system. It has the idea that there’s a sensor

00:53:35 and then it’s moving. That sensor can be physical. It could be like my finger

00:53:39 and it’s moving in the world. It could be like my eye and it’s physically moving.

00:53:43 It can also be virtual. So, it could be, an example would be, I could have a system that

00:53:50 lives in the internet that actually samples information on the internet and moves by

00:53:55 following links. That’s a sensory motor system. Something that echoes the process of a finger

00:54:02 moving along a cortical… But in a very, very loose sense. It’s like,

00:54:06 again, learning is inherently about discovering the structure of the world, and to discover the

00:54:10 structure of the world, you have to move through the world. Even if it’s a virtual world, even if

00:54:14 it’s a conceptual world, you have to move through it. It doesn’t exist in one… It has some structure

00:54:20 to it. So, here’s a couple of predictions getting at what you’re talking about.

00:54:27 In humans, the same algorithm does robotics. It moves my arms, my eyes, my body.

00:54:34 And so, in the future, to me, robotics and AI will merge. They’re not going to be separate fields

00:54:40 because the algorithms for really controlling robots are going to be the same algorithms we

00:54:45 have in our brain, these sensory motor algorithms. Today, we’re not there, but I think that’s going

00:54:50 to happen. But not all AI systems will have to be robotics. You can have systems that have very

00:54:58 different types of embodiments. Some will have physical movements, some will not have physical

00:55:02 movements. It’s a very generic learning system. Again, it’s like computers. The Turing machine,

00:55:08 it doesn’t say how it’s supposed to be implemented, it doesn’t tell you how big it is,

00:55:11 it doesn’t tell you what you can apply it to, but it’s a computational principle.
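A minimal sketch of the virtual sensing-and-moving idea described above: an agent that lives in a linked structure, senses features at each location, and “moves” by following links rather than by moving a physical sensor. Everything here (the toy world, the explore function, the names) is hypothetical and purely illustrative; it is not Numenta’s implementation or the cortical algorithm itself, just a picture of how movement can be a step through linked structure.

```python
# Illustrative sketch only: a "virtual sensorimotor" loop over a tiny in-memory
# web. The agent senses features at its current location and moves by following
# links, accumulating a simple model of location -> features and reachability.
import random

WORLD = {
    "home":  {"features": {"title", "logo"},      "links": ["about", "blog"]},
    "about": {"features": {"title", "team"},      "links": ["home"]},
    "blog":  {"features": {"title", "post_list"}, "links": ["home", "post1"]},
    "post1": {"features": {"title", "article"},   "links": ["blog"]},
}

def explore(start: str, steps: int, seed: int = 0) -> dict:
    """Sense at the current location, then 'move' by following a link."""
    rng = random.Random(seed)
    model: dict = {}
    location = start
    for _ in range(steps):
        page = WORLD[location]
        entry = model.setdefault(location, {"features": set(), "reaches": set()})
        entry["features"] |= page["features"]       # sensing
        nxt = rng.choice(page["links"])             # movement = following a link
        entry["reaches"].add(nxt)
        location = nxt
    return model

if __name__ == "__main__":
    for loc, info in explore("home", steps=30).items():
        print(loc, "->", sorted(info["features"]), "| reaches", sorted(info["reaches"]))
```

The only point of the sketch is that movement need not be physical: following a link plays the same role here that moving a finger over a coffee cup plays in the physical examples.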

00:55:15 The cortical column equivalent is a computational principle about learning. It’s about how you

00:55:20 learn and it can be applied to a gazillion things. I think this impact of AI is going to be as large,

00:55:27 if not larger, than computing has been in the last century, by far, because it’s getting at

00:55:33 a fundamental thing. It’s not a vision system or a learning system. It’s not a vision system or

00:55:37 a hearing system. It is a learning system. It’s a fundamental principle, how you learn the structure

00:55:41 in the world, how you can gain knowledge and be intelligent. That’s what the thousand brains says

00:55:46 was going on. We have a particular implementation in our head, but it doesn’t have to be like that

00:55:49 at all. Do you think there’s going to be some kind of impact? Okay, let me ask it another way.

00:55:56 What do increasingly intelligent AI systems do with us humans in the following way? How hard is

00:56:05 the human in the loop problem? How hard is it to interact? The finger on the coffee cup equivalent

00:56:13 of having a conversation with a human being. How hard is it to fit into our little human world?

00:56:20 I think it’s a lot of engineering problems. I don’t think it’s a fundamental problem.

00:56:25 I could ask you the same question. How hard is it for computers to fit into a human world?

00:56:28 Right. That’s essentially what I’m asking. How elitist are we as humans? We try to keep out

00:56:40 systems. I don’t know. I’m not sure that’s the right question. Let’s look at computers as an

00:56:48 analogy. Computers are a million times faster than us. They do things we can’t understand.

00:56:52 Most people have no idea what’s going on when they use computers. How do we integrate them

00:56:57 in our society? Well, we don’t think of them as their own entity. They’re not living things.

00:57:04 We don’t afford them rights. We rely on them. Our survival as seven billion people or something

00:57:12 like that is relying on computers now. Don’t you think that’s a fundamental problem

00:57:18 that we see them as something we don’t give rights to?

00:57:22 Computers? Yeah, computers. Robots,

00:57:25 computers, intelligence systems. It feels like for them to operate successfully,

00:57:29 they would need to have a lot of the elements that we would start having to think about.

00:57:37 Should this entity have rights? I don’t think so. I think

00:57:42 it’s tempting to think that way. First of all, hardly anyone thinks that for computers today.

00:57:47 No one says, oh, this thing needs a right. I shouldn’t be able to turn it off. If I throw it

00:57:52 in the trash can and hit it with a sledgehammer, I’ve performed a criminal act. No one thinks that.

00:57:59 Now we think about intelligent machines, which is where you’re going.

00:58:05 All of a sudden, you’re like, well, now we can’t do that. I think the basic problem we have here

00:58:10 is that people think intelligent machines will be like us. They’re going to have the same emotions

00:58:14 as we do, the same feelings as we do. What if I can build an intelligent machine that absolutely

00:58:19 couldn’t care less about whether it was on or off or destroyed or not? It just doesn’t care. It’s

00:58:23 just like a map. It’s just a modeling system. There’s no desires to live. Nothing.

00:58:28 Is it possible to create a system that can model the world deeply and not care

00:58:35 about whether it lives or dies? Absolutely. No question about it.

00:58:38 To me, that’s not 100% obvious. It’s obvious to me. We can debate it if we want.

00:58:43 Where does your desire to live come from? It’s an old evolutionary design. We could argue,

00:58:52 does it really matter if we live or not? Objectively, no. We’re all going to die eventually.

00:59:00 Evolution makes us want to live. Evolution makes us want to fight to live. Evolution makes us want

00:59:05 to care and love one another and to care for our children and our relatives and our family and so

00:59:11 on. Those are all good things. They come about not because we’re smart, because we’re animals

00:59:18 that grew up. The hummingbird in my backyard cares about its offspring. Every living thing

00:59:25 in some sense cares about surviving. When we talk about creating intelligent machines,

00:59:30 we’re not creating life. We’re not creating evolving creatures. We’re not creating living

00:59:35 things. We’re just creating a machine that can learn really sophisticated stuff. That machine,

00:59:40 it may even be able to talk to us. It’s not going to have a desire to live unless somehow we put it

00:59:47 into that system. Well, there’s learning, right? The thing is… But you don’t learn to want to

00:59:52 live. It’s built into you. It’s part of your DNA. People like Ernest Becker argue,

00:59:59 there’s the fact of finiteness of life. The way we think about it is something we learned,

01:00:06 perhaps. Okay. Yeah. Some people decide they don’t want to live. Some people decide the desire to

01:00:13 live is built in DNA, right? But I think what I’m trying to get to is in order to accomplish goals,

01:00:18 it’s useful to have the urgency of mortality. It’s what the Stoics talked about,

01:00:23 is meditating on your mortality. It might be a very useful thing to do, to have the urgency

01:00:31 of death and to realize that to conceive yourself as an entity that operates in this world that

01:00:38 eventually will no longer be a part of this world and actually conceive of yourself as a conscious

01:00:43 entity might be very useful for you to be a system that makes sense of the world. Otherwise,

01:00:49 you might get lazy. Well, okay. We’re going to build these machines, right? So we’re talking

01:00:55 about building AIs. But we’re building the equivalent of the cortical columns.

01:01:03 The neocortex. The neocortex. And the question is, where do they arrive at? Because we’re not

01:01:11 hard coding everything in. Well, in terms of if you build the neocortex equivalent,

01:01:17 it will not have any of these desires or emotional states. Now, you can argue that

01:01:22 that neocortex won’t be useful unless I give it some agency, unless I give it some desire,

01:01:28 unless I give it some motivation. Otherwise, it’ll just be lazy and do nothing, right?

01:01:31 You could argue that. But on its own, it’s not going to do those things. It’s just not going

01:01:37 to sit there and say, I understand the world. Therefore, I care to live. No, it’s not going

01:01:41 to do that. It’s just going to say, I understand the world. Why is that obvious to you? Do you think

01:01:46 it’s possible? Okay, let me ask it this way. Do you think it’s possible it will at least assign to

01:01:52 itself agency and perceive itself in this world as being a conscious entity as a useful way to

01:02:04 operate in the world and to make sense of the world? I think an intelligent machine can be

01:02:08 conscious, but that does not, again, imply any of these desires and goals that you’re worried about.

01:02:18 We can talk about what it means for a machine to be conscious.

01:02:20 By the way, not worry about, but get excited about. It’s not necessary that we should worry

01:02:24 about it. I think there’s a legitimate problem or not problem, a question asked,

01:02:29 if you build this modeling system, what’s it going to model? What’s its desire? What’s its

01:02:35 goal? What are we applying it to? That’s an interesting question. One thing, and it depends

01:02:42 on the application, it’s not something that inherent to the modeling system. It’s something

01:02:46 we apply to the modeling system in a particular way. If I wanted to make a really smart car,

01:02:52 it would have to know about driving and cars and what’s important in driving and cars.

01:02:58 It’s not going to figure that on its own. It’s not going to sit there and say, I’ve understood

01:03:01 the world and I’ve decided, no, no, no, no, we’re going to have to tell it. We’re going to have to

01:03:06 say, so I imagine I make this car really smart. It learns about your driving habits. It learns

01:03:10 about the world. Is it one day going to wake up and say, you know what? I’m tired of driving

01:03:17 and doing what you want. I think I have better ideas about how to spend my time.

01:03:22 Okay. No, it’s not going to do that. Well, part of me is playing a little bit of devil’s advocate,

01:03:26 but part of me is also trying to think through this because I’ve studied cars quite a bit and

01:03:32 I studied pedestrians and cyclists quite a bit. And there’s part of me that thinks

01:03:38 that there needs to be more intelligence than we realize in order to drive successfully.

01:03:46 That game theory of human interaction seems to require some deep understanding of human nature

01:03:54 that, okay. When a pedestrian crosses the street, there’s some sense. They look at a car usually,

01:04:04 and then they look away. There’s some sense in which they say, I believe that you’re not going

01:04:10 to murder me. You don’t have the guts to murder me. This is the little dance of pedestrian car

01:04:16 interaction is saying, I’m going to look away and I’m going to put my life in your hands because

01:04:22 I think you’re human. You’re not going to kill me. And then the car in order to successfully

01:04:28 operate in like Manhattan streets has to say, no, no, no, no. I am going to kill you like a little

01:04:34 bit. There’s a little bit of this weird inkling of mutual murder. And that’s a dance and somehow

01:04:40 successfully operate through that. Do you think you were born with that? Did you learn that social

01:04:44 interaction? I think it might have a lot of the same elements that you’re talking about,

01:04:50 which is we’re leveraging things we were born with and applying them in that context.

01:04:57 All right. I would have said that that kind of interaction is learned because people in different

01:05:03 cultures have different interactions like that. If you cross the street in different cities and

01:05:06 different parts of the world, they have different ways of interacting. I would say that’s learned.

01:05:10 And I would say an intelligent system can learn that too, but that does not lead. And the intelligent

01:05:15 system can understand humans. It could understand that just like I can study an animal and learn

01:05:24 something about that animal. I could study apes and learn something about their culture and so on.

01:05:28 I don’t have to be an ape to know that. I may not understand it completely, but I can understand something.

01:05:34 So intelligent machine can model that. That’s just part of the world. It’s just part of the

01:05:37 interactions. The question we’re trying to get at, will the intelligent machine have its own personal

01:05:42 agency that’s beyond what we assign to it or its own personal goals or will it evolve and create

01:05:49 these things? My confidence comes from understanding the mechanisms I’m talking about creating.

01:05:55 This is not hand wavy stuff. It’s down in the details. I’m going to build it. And I know what

01:06:00 it’s going to look like. And I know how it’s going to behave. I know the kind of things

01:06:03 it could do and the kind of things it can’t do. Just like when I build a computer, I know it’s

01:06:08 not going to, on its own, decide to put another register inside of it. It can’t do that. No way.

01:06:13 No matter what your software does, it can’t add a register to the computer.

01:06:17 So in this way, when we build AI systems, we have to make choices about how we embed them.

01:06:26 So I talk about this in the book. I said intelligent system is not just the neocortex

01:06:30 equivalent. You have to have that. But it has to have some kind of embodiment, physical or virtual.

01:06:36 It has to have some sort of goals. It has to have some sort of ideas about dangers,

01:06:41 about things it shouldn’t do. We build in safeguards into systems. We have them in our

01:06:47 bodies. We put them into cars. My car follows my directions until the day it sees I’m about to hit

01:06:53 something and it ignores my directions and puts the brakes on. So we can build those things in.
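A toy sketch of the kind of built-in safeguard just described: the controller passes the driver’s commands through unchanged until a hard-coded danger rule fires, and only then overrides them. The Command type, the threshold, and the controller function are all hypothetical names invented for illustration, not any real vehicle system or API.

```python
# Hypothetical illustration of a programmed safeguard: obey the driver's
# commands unless a fixed danger rule fires, then override and brake.
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # -1.0 (full left) .. 1.0 (full right)
    throttle: float  # 0.0 .. 1.0
    brake: float     # 0.0 .. 1.0

def controller(driver_cmd: Command, distance_to_obstacle_m: float,
               speed_mps: float, min_gap_s: float = 2.0) -> Command:
    """Follow the driver's command unless time-to-obstacle drops below a
    fixed threshold; then ignore the throttle and apply full braking."""
    time_to_obstacle = distance_to_obstacle_m / speed_mps if speed_mps > 0 else float("inf")
    if time_to_obstacle < min_gap_s:
        # The override is not something the system "decided" on its own;
        # it is a rule the designers put in, as in the car example above.
        return Command(steering=driver_cmd.steering, throttle=0.0, brake=1.0)
    return driver_cmd

if __name__ == "__main__":
    cmd = Command(steering=0.1, throttle=0.8, brake=0.0)
    print(controller(cmd, distance_to_obstacle_m=80.0, speed_mps=20.0))  # passes through
    print(controller(cmd, distance_to_obstacle_m=30.0, speed_mps=20.0))  # overrides: brakes
```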

01:06:58 So that’s a very interesting problem, how to build those in. I think my differing opinion about the

01:07:06 risks of AI for most people is that people assume that somehow those things will disappear

01:07:11 automatically and evolve. And intelligence itself begets that stuff or requires it.

01:07:17 But it doesn’t. Intelligence of the neocortex equivalent doesn’t require this. The neocortex

01:07:21 equivalent just says, I’m a learning system. Tell me what you want me to learn and ask me questions

01:07:26 and I’ll tell you the answers. And that, again, it’s again like a map. A map has no intent about

01:07:33 things, but you can use it to solve problems. Okay. So the building, engineering the neocortex

01:07:41 in itself is just creating an intelligent prediction system.

01:07:45 Modeling system. Sorry, modeling system. You can use it to then make predictions.

01:07:52 But you can also put it inside a thing that’s actually acting in this world.

01:07:56 You have to put it inside something. Again, think of the map analogy, right? A map on its own doesn’t

01:08:02 do anything. It’s just inert. It can learn, but it’s just inert. So we have to embed it somehow

01:08:07 in something to do something. So what’s your intuition here? You had a conversation with

01:08:13 Sam Harris recently that was sort of, you’ve had a bit of a disagreement and you’re sticking on

01:08:20 this point. Elon Musk, Stuart Russell kind of have us worried about existential threats of AI.

01:08:29 What’s your intuition? Why, if we engineer increasingly intelligent neocortex type of system

01:08:36 in the computer, why that shouldn’t be a thing that we…

01:08:40 It was interesting, you used the word intuition, and Sam Harris used the word intuition too.

01:08:44 And when he used that word, intuition, I immediately stopped and said,

01:08:47 oh, that’s the crux of the problem. He’s using intuition. I’m not speaking about my intuition.

01:08:52 I’m speaking about something I understand, something I’m going to build, something I am

01:08:56 building, something I understand completely, or at least well enough to know what… I’m not guessing,

01:09:01 I know what this thing’s going to do. And I think most people who are worried, they have trouble

01:09:08 separating out… They don’t have the knowledge or the understanding about what is intelligence,

01:09:13 how’s it manifest in the brain, how’s it separate from these other functions in the brain.

01:09:17 And so they imagine it’s going to be human like or animal like. It’s going to have the same sort of

01:09:21 drives and emotions we have, but there’s no reason for that. That’s just because there’s an unknown.

01:09:27 If the unknown is like, oh my God, I don’t know what this is going to do. We have to be careful.

01:09:31 It could be like us, but really smarter. I’m saying, no, it won’t be like us. It’ll be really

01:09:35 smarter, but it won’t be like us at all. But I’m coming from that, not because I’m just guessing,

01:09:42 I’m not using intuition. I’m basing it on like, okay, I understand this thing works. This is what

01:09:46 it does. It makes sense to you. Okay. But to push back, so I also disagree with the intuitions that

01:09:54 Sam has, but I also disagree with what you just said, which, you know, what’s a good analogy. So

01:10:02 if you look at the Twitter algorithm in the early days, just recommender systems, you can understand

01:10:08 how recommender systems work. What you can’t understand in the early days is when you apply

01:10:14 that recommender system at scale to thousands and millions of people, how that can change societies.

01:10:20 Yeah. So the question is, yes, you’re just saying this is how an engineered neocortex works,

01:10:27 but the, like when you have a very useful, uh, TikTok type of service that goes viral when your

01:10:35 neocortex goes viral and then millions of people start using it, can that destroy the world?

01:10:40 No. Uh, well, first of all, let’s step back. One thing I want to say is that, um, AI is a dangerous

01:10:44 technology. I don’t, I’m not denying that. All technology is dangerous. Well, and AI,

01:10:48 maybe particularly so. Okay. So, um, am I worried about it? Yeah, I’m totally worried about it.

01:10:54 The thing where the narrow component we’re talking about now is the existential risk of AI, right?

01:11:00 Yeah. So I want to make that distinction because I think AI can be applied poorly. It can be applied

01:11:05 in ways that, you know, people aren’t going to understand the consequences of. Um, these are

01:11:11 all potentially very bad things, but they’re not the AI system creating this existential risk on

01:11:18 its own. And that’s the only place that I disagree with other people. Right. So I, I think the

01:11:23 existential risk thing is, um, humans are really damn good at surviving. So to kill off the human

01:11:29 race, it’d be very, very difficult. Yes, but I’ll go even further. I don’t think AI systems

01:11:36 are ever going to try to, I don’t think AI systems are ever going to like say, I’m going to ignore

01:11:40 you. I’m going to do what I think is best. Um, I don’t think that’s going to happen, at least not

01:11:46 in the way I’m talking about it. So you, the Twitter recommendation algorithm is an interesting

01:11:52 example. Let’s, let’s use computers as an analogy again, right? I build a computer. It’s a universal

01:11:59 computing machine. I can’t predict what people are going to use it for. They can build all kinds of

01:12:03 things. They can, they can even create computer viruses. It’s, you know, all kinds of stuff. So

01:12:09 there’s some unknown about its utility and about where it’s going to go. But on the other hand,

01:12:13 I pointed out that once I build a computer, it’s not going to fundamentally change how it computes.

01:12:18 It’s like, I use the example of a register, which is a part, internal part of a computer. Um, you

01:12:23 know, I say it can’t just sit there because computers don’t evolve. They don’t replicate,

01:12:27 they don’t evolve. They don’t, you know, the physical manifestation of the computer itself

01:12:31 is not going to, there’s certain things it can’t do, right? So we can break it into things

01:12:36 that are possible to happen that we can’t predict, and things that are just impossible to happen.

01:12:40 Unless we go out of our way to make them happen, they’re not going to happen unless somebody makes

01:12:44 them happen. Yeah. So there’s, there’s a bunch of things to say. One is the physical aspect,

01:12:49 which you’re absolutely right. We have to build a thing for it to operate in the physical world

01:12:54 and you can just stop building them. Uh, you know, the moment they’re not doing the thing you want

01:13:01 them to do, or just change the design. The question is, I mean, there’s,

01:13:05 uh, it’s possible in the physical world, this is probably longer term, to automate the building.

01:13:10 It makes, it makes a lot of sense to automate the building. There’s a lot of factories that

01:13:14 are doing more and more and more automation to go from raw resources to the final product.

01:13:19 It’s possible to imagine that it’s obviously much more efficient to create a factory that’s

01:13:25 creating robots that do something, uh, you know, that do something extremely useful for society.

01:13:30 It could be a personal assistant. It could be, uh, it could, it could be your toaster, but a

01:13:35 toaster with a much deeper knowledge of your culinary preferences. Yeah. And that could,

01:13:41 uh, I think now you’ve hit on the right thing. The real thing we need to be worried about is

01:13:46 self replication. Right. That is the thing, whether in the physical world or even the virtual

01:13:51 world, self replication, because self replication is dangerous. You’re probably more likely to be

01:13:56 killed by a virus, you know, or a human engineered virus. Anybody can create a, you know,

01:14:01 the technology is getting so that almost anybody, well, not anybody, but a lot of people

01:14:05 could create a human engineered virus that could wipe out humanity. That is really dangerous. No

01:14:11 intelligence required, just self replication. So, um, so we need to be careful about that.

01:14:18 So when I think about, you know, AI, I’m not thinking about robots, building robots. Don’t

01:14:24 do that. Don’t build a, you know, just... Well, that’s because you’re interested in creating

01:14:28 intelligence. It seems like self replication is a good way to make a lot of money. Well,

01:14:35 fine. But so is, you know, maybe editing viruses is a good way too. I don’t know. The point is,

01:14:41 if as a society, when we want to look at existential risks, the existential risks we face

01:14:46 that we can control almost all revolve around self replication. Yes. The question is, I don’t see a

01:14:54 good, uh, way to make a lot of money by engineering viruses and deploying them on the world. There

01:15:00 could be, there could be applications that are useful, but let’s separate out, let’s separate out.

01:15:04 I mean, you don’t need to, you only need some, you know, terrorist who wants to do it. Cause

01:15:08 it doesn’t take a lot of money to make viruses. Um, let’s just separate out what’s risky and what’s

01:15:13 not risky. I’m arguing that the intelligence side of this equation is not risky. It’s not risky at

01:15:18 all. It’s the self replication side of the equation that’s risky.

01:15:23 And I’m not dismissing that. I’m scared as hell. It’s like the paperclip

01:15:28 maximizer thing. Yeah. Those are often like talked about in the same conversation.

01:15:35 Um, I think you’re right. Like creating ultra intelligent, super intelligent systems

01:15:42 is not necessarily coupled with arbitrarily self replicating systems. Yeah. And

01:15:47 you don’t get evolution unless you’re self replicating. Yeah. And so I think that’s the gist

01:15:52 of this argument that people have trouble separating those two out. They just think,

01:15:56 Oh yeah, intelligence looks like us. And look how, look at the damage we’ve done to this planet,

01:16:00 like how we’ve, you know, destroyed all these other species. Yeah. Well we replicate,

01:16:04 which is why there are 8 billion of us, or 7 billion of us now. So, um, I think the idea is that the,

01:16:10 the more intelligent we’re able to build systems, the more tempting it becomes from a capitalist

01:16:17 perspective of creating products, the more tempting it becomes to create self, uh, reproducing

01:16:21 systems. All right. So let’s say that’s true. So does that mean we don’t build intelligent systems?

01:16:26 No, that means we regulate, we, we understand the risks. Uh, we regulate them. Uh, you know,

01:16:33 look, there’s a lot of things we could do as society, which have some sort of financial

01:16:37 benefit to someone, which could do a lot of harm. And we have to learn how to regulate those things.

01:16:42 We have to learn how to deal with those things. I will argue this. I would say the opposite. Like I

01:16:46 would say having intelligent machines at our disposal will actually help us in the end more,

01:16:52 because it’ll help us understand these risks better. It’ll help us mitigate these risks

01:16:55 better. It might be ways of saying, oh, well, how do we solve climate change problems? You know,

01:16:59 how do we do this? Or how do we do that? Um, that just like computers are dangerous in the hands of

01:17:05 the wrong people, but they’ve been so great for so many other things. We live with those dangers.

01:17:09 And I think we have to do the same with intelligent machines. We just, but we have to be

01:17:13 constantly vigilant about this idea of bad actors doing bad things with them and be,

01:17:19 um, don’t ever, ever create a self replicating system. Um, uh, and, and by the way, I don’t even

01:17:25 know if you could create a self replicating system that uses a factory. That’s really dangerous.

01:17:30 You know, nature’s way of self replicating is so amazing. Um, you know, it doesn’t require

01:17:36 anything. It just, you know, the thing and resources and it goes right. Um, if I said to

01:17:41 you, you know what we have to build, uh, our goal is to build a factory that builds

01:17:46 new factories, and it has to have an end to end supply chain. It has to find the resources, get the

01:17:54 energy. I mean, that’s really hard. It’s, you know, no one’s doing that in the next, you know,

01:18:00 a hundred years. I’ve been extremely impressed by the efforts of Elon Musk and Tesla to try to do

01:18:06 exactly that. Not, not from raw resource. Well, he actually, I think states the goal is to go from

01:18:12 raw resource to the, uh, the final car in one factory. Yeah. That’s the main goal. Of course,

01:18:19 it’s not currently possible, but they’re taking huge leaps. Well, he’s not the only one to do

01:18:23 that. This has been a goal for many industries for a long, long time. Um, it’s difficult to do.

01:18:28 Well, a lot of people, what they do instead is they have like a million suppliers, and then they

01:18:34 all co locate them and they, and they tie the systems together.

01:18:40 It’s a fundamental, I think that’s, that also is not getting at the issue I was just talking about,

01:18:45 um, which is self replication. It’s, um, I mean, self replication means there’s no

01:18:53 entity involved other than the entity that’s replicating. Um, right. And so if there are

01:18:58 humans in this, in the loop, that’s not really self replicating, right? It’s unless somehow we’re

01:19:04 duped into doing it. But it’s also, I don’t necessarily

01:19:11 agree with you because you’ve kind of mentioned that AI will not say no to us.

01:19:16 I just think they will. Yeah. Yeah. So like, uh, I think it’s a useful feature to build in. I’m

01:19:23 just trying to like, uh, put myself in the mind of engineers to sometimes say no, you know, if you,

01:19:32 I gave the example earlier, right? I gave the example of my car, right? My car turns the wheel

01:19:38 and, and applies the accelerator and the brake as I say, until it decides there’s something dangerous.

01:19:43 Yes. And then it doesn’t do that. Now that was something it didn’t decide to do. It’s something

01:19:50 we programmed into the car. And so good. It was a good idea, right? The question again, isn’t like

01:19:57 if we create an intelligent system, will it ever ignore our commands? Of course it will. And

01:20:02 sometimes is it going to do it because it came up, came up with its own goals that serve its purposes

01:20:08 and it doesn’t care about our purposes? No, I don’t think that’s going to happen.

01:20:12 Okay. So let me ask you about these, uh, super intelligent cortical systems that we engineer

01:20:16 and us humans, do you think, uh, with these entities operating out there in the world,

01:20:24 what does the most promising future look like? Is it us merging with them or is it us?

01:20:33 Like, how do we keep us humans around when you have increasingly intelligent beings? Is it, uh,

01:20:38 one of the dreams is to upload our minds in the digital space. So can we just

01:20:42 give our minds to these, uh, systems so they can operate on them? Is there some kind of more

01:20:48 interesting merger or is there more, more communication? I talked about all these

01:20:52 scenarios and let me just walk through them. Sure. Um, the uploading the mind one. Yes. Extremely,

01:21:00 really difficult to do. Like, like, we have no idea how to do this even remotely right now. Um,

01:21:06 so it would be a very long way away, but I make the argument you wouldn’t like the result.

01:21:11 Um, and you wouldn’t be pleased with the result. It’s really not what you think it’s going to be.

01:21:16 Um, imagine I could upload your brain into a, into a computer right now. And now the computer

01:21:20 sitting there going, Hey, I’m over here. Great. Get rid of that old bio person. I don’t need them.

01:21:24 You’re still sitting here. Yeah. What are you going to do? No, no, that’s not me. I’m here.

01:21:28 Right. Are you going to feel satisfied then? Then you, but people imagine, look, I’m on my deathbed

01:21:33 and I’m about to, you know, expire and I pushed the button and now I’m uploaded. But think about

01:21:38 it a little differently. And, and so I don’t think it’s going to be a thing because people,

01:21:42 by the time we’re able to do this, if ever, because you have to replicate the entire body,

01:21:47 not just the brain. It’s, it’s really, it’s, I walked through the issues. It’s really substantial.

01:21:52 Um, do you have a sense of what makes us us? Is there, is there a shortcut where you can only save

01:21:59 a certain part, the part that makes us truly us? No, but I think that machine would feel like it’s you too.

01:22:04 Right. Right. You have two people, just like I have a child, I have a child, right? I have two

01:22:08 daughters. They’re independent people. I created them. Well, partly. Yeah. And, um, uh, I don’t,

01:22:16 just because they’re somewhat like me, I don’t feel like I’m them and they don’t feel like they’re me. So

01:22:20 if you split apart, you have two people. So we can tell them, come back to what, what makes,

01:22:24 what consciousness do you want? We can talk about that, but we don’t have like remote consciousness.

01:22:28 I’m not sitting there going, Oh, I’m conscious of that. You know, I mean, that system of,

01:22:32 so let’s say, let’s, let’s stay on our topic. One was uploading a brain. Yep. It ain’t gonna happen

01:22:38 in a hundred years, maybe a thousand, but I don’t think people are going to want to do it. The

01:22:44 merging your mind with, uh, you know, the Neuralink thing, right? Like again, really, really

01:22:50 difficult. It’s, it’s one thing to make progress, to control a prosthetic arm. It’s another to have

01:22:54 like a billion or several billion, you know, things and understanding what those signals

01:22:58 mean. Like it’s the one thing that like, okay, I can learn to think some patterns to make something

01:23:03 happen. It’s quite another thing to have a system, a computer, which actually knows exactly what

01:23:08 cells it’s talking to and how it’s talking to them and interacting in a way like that. Very,

01:23:12 very difficult. We’re not getting anywhere closer to that. Um, interesting. Can I, can I, uh, can

01:23:18 I ask a question here? What, so for me, what makes that merger very difficult practically in the next

01:23:24 10, 20, 50 years is like literally the biology side of it, which is like, it’s just hard to do

01:23:32 that kind of surgery in a safe way. But your intuition is even the machine learning part of it,

01:23:38 where the machine has to learn what the heck it’s talking to. That’s even hard. I think it’s even

01:23:43 harder. And it’s not, it’s, it’s easy to do when you’re talking about hundreds of signals. It’s,

01:23:49 it’s a totally different thing when you’re talking about billions. It’s, it’s a totally

01:23:53 different thing when you’re talking about billions of signals. So you don’t think it’s the raw,

01:23:57 it’s a machine learning problem? You don’t think it could be learned? Well, I’m just saying,

01:24:01 no, I think you’d have to have detailed knowledge. You’d have to know exactly what the types of

01:24:05 neurons you’re connecting to. I mean, in the brain, there’s these, there are all different

01:24:09 types of things. It’s not like a neural network. It’s a very complex organism system up here. We

01:24:13 talked about the grid cells or the place cells, you know, you have to know what kind of cells

01:24:16 you’re talking to and what they’re doing and how their timing works and all, all this stuff,

01:24:20 which you can’t today. There’s no way of doing that. Right. But I think it’s, I think it’s a,

01:24:24 I think you’re right that the biological aspect of, like, who wants to have

01:24:28 a surgery and have this stuff inserted in your brain, that’s a problem. But even when we

01:24:32 solve that problem, I think the, the information coding aspect is much worse. I think that’s much

01:24:38 worse. It’s not like what they’re doing today. Today it’s simple machine learning stuff

01:24:42 because you’re doing simple things. But if you want to merge your brain, like I’m thinking on

01:24:46 the internet, I’ve merged my brain with the machine and we’re both doing, that’s a totally different

01:24:51 issue. That’s interesting. I tend to think if the, okay. If you have a super clean signal

01:24:57 from a bunch of neurons, and at the start you don’t know what those neurons are, I think that’s much

01:25:04 easier than getting the clean signal. I think if you think about today’s machine learning,

01:25:10 that’s what you would conclude. Right. I’m thinking about what’s going on in the brain

01:25:14 and I don’t reach that conclusion. So we’ll have to see. Sure. But I don’t think even, even then,

01:25:20 I think this is kind of a sad future. Like, you know, do I, do I have to like plug my brain

01:25:26 into a computer? I’m still a biological organism. I assume I’m still going to die.

01:25:30 So what have I achieved? Right. You know, what have I achieved? Oh, I disagree that we don’t

01:25:36 know what those are, but it seems like there could be a lot of different applications. It’s

01:25:40 like virtual reality is to expand your brain’s capability to, to like, to read Wikipedia.

01:25:47 Yeah. But, but fine. But, but you’re still a biological organism.

01:25:50 Yes. Yes. You know, you’re still, you’re still mortal. All right. So,

01:25:53 so what are you accomplishing? You’re making your life in this short period of time better. Right.

01:25:58 Just like having the internet made our life better. Yeah. Yeah. Okay. So I think that’s of,

01:26:03 of, if I think about all the possible gains we can have here, that’s a marginal one.

01:26:08 It’s an individual, Hey, I’m better, you know, I’m smarter. But you know, fine. I’m not against it.

01:26:15 I just don’t think it’s earth changing. I, but, but wasn’t this true of the internet?

01:26:20 When each of us individually is smarter, we get a chance to then share our smartness.

01:26:24 We get smarter and smarter together as like, as a collective, this is kind of like this

01:26:28 ant colony. Why don’t I just create an intelligent machine that doesn’t have any of this biological

01:26:32 nonsense but has all the same... It’s everything except don’t burden it with my brain. Yeah.

01:26:39 Right. It has a brain. It is smart. It’s like my child, but it’s much, much smarter than me.

01:26:43 So I have a choice between doing some implant, doing some hybrid, weird, you know, biological

01:26:48 thing that’s bleeding and has all these problems and is limited by my brain, or creating a system,

01:26:53 which is super smart that I can talk to. Um, that helps me understand the world that can

01:26:58 read the internet, you know, read Wikipedia and talk to me. I guess my, the open questions there

01:27:03 are what does the manifestation of super intelligence look like? So like, what are we

01:27:10 going to, you, you talked about why do I want to merge with AI? Like what, what’s the actual

01:27:14 marginal benefit here? If I, if we have a super intelligent system, how will it make our life

01:27:23 better? So let’s, let’s, that’s a great question, but let’s break it down to little pieces. All

01:27:28 right. On the one hand, it can make our life better in lots of simple ways. You mentioned

01:27:32 like a care robot or something that helps me do things. It cooks. I don’t know what it does. Right.

01:27:36 Little things like that. We have super better, smarter cars. We can have, you know, better agents

01:27:42 and aides helping us in our work environment and things like that. To me, that’s like the easy stuff, the

01:27:47 simple stuff in the beginning. Um, um, and so in the same way that computers made our lives better

01:27:53 in many, many ways, AI will have those kinds of things. To me, the really exciting thing about AI

01:28:00 is its sort of transcendent quality in terms of humanity. We’re still

01:28:05 biological organisms. We’re still stuck here on earth. It’s going to be hard for us to live

01:28:09 anywhere else. Uh, I don’t think you and I are going to want to live on Mars anytime soon. Um,

01:28:14 um, and, um, and we’re flawed, you know, we may end up destroying ourselves. It’s totally possible.

01:28:23 Uh, we, if not completely, we could destroy our civilizations. You know, let’s face the fact,

01:28:28 we have issues here, but we can create intelligent machines that can help us in various ways. For

01:28:33 example, one example I gave, and that sounds a little sci fi, but I believe this. If we really

01:28:38 wanted to live on Mars, we’d have to have intelligent systems that go there and build

01:28:42 the habitat for us, not humans. Humans are never going to do this. It’s just too hard. Um, but could

01:28:48 we have a thousand or 10,000, you know, engineer workers up there doing this stuff, building things,

01:28:53 terraforming Mars? Sure. Maybe then we can move to Mars. But then if we want to, if we want to go around

01:28:57 the universe, should I send my children around the universe or should I send some intelligent machine,

01:29:02 which is like a child that represents me and understands our needs here on earth that could

01:29:07 travel through space. Um, so it’s sort of, it, in some sense, intelligence allows us to transcend

01:29:13 the limitations of our biology. And, and don’t think of it as a negative thing.

01:29:19 In some sense, my children transcend my biology too, cause they, they live beyond me.

01:29:26 Yeah. Um, and in part, they represent me and they also have their own knowledge and I can

01:29:30 impart knowledge to them. So intelligent machines will be like that too, but not limited like us.

01:29:34 I mean, but the question is, um, there’s so many ways that transcendence can happen

01:29:40 and the merger with AI and humans is one of those ways. So you said intelligent,

01:29:46 basically beings or systems propagating throughout the universe, representing us humans.

01:29:53 They represent us humans in the sense they represent our knowledge and our history,

01:29:56 not us individually. Right. Right. But I mean, the question is, is it just a database

01:30:04 with, uh, with the really damn good, uh, model of the world?

01:30:09 It’s conscious, it’s conscious just like us. Okay. But just different?

01:30:12 They’re different. Uh, just like my children are different. They’re like me, but they’re

01:30:16 different. Um, these are more different. I guess maybe I’ve already, I kind of,

01:30:22 I take a very broad view of our life here on earth. I say, you know, why are we living here?

01:30:28 Are we just living because we live? Is it, are we surviving because we can survive? Are we fighting

01:30:32 just because we want to just keep going? What’s the point of it? Right. So to me, the point,

01:30:38 if I ask myself, what’s the point of life, it’s what transcends that ephemeral sort of biological

01:30:46 experience. To me, this is my answer: it’s the acquisition of knowledge, to understand more about

01:30:53 the universe, uh, and to explore. And that’s partly to learn more. Right. Um, I don’t view it as

01:31:01 a terrible thing. If the ultimate outcome of humanity is we create systems that are intelligent

01:31:09 that are our offspring, but they’re not like us at all. And we stay, we stay here and live on earth

01:31:13 as long as we can, which won’t be forever, but as long as we can and, but that would be a great

01:31:20 thing to do. It’s not a, it’s not like a negative thing. Well, would, uh, you be okay then if, uh,

01:31:29 the human species vanishes, but our knowledge is preserved and keeps being expanded by intelligence

01:31:37 systems. I want our knowledge to be preserved and expanded. Yeah. Am I okay with humans dying? No,

01:31:44 I don’t want that to happen. But if it, if it does happen, what if we were sitting here and we’re

01:31:50 really the last two people on earth and we’re saying, Lex, we blew it. It’s all over.

01:31:53 Right. Wouldn’t I feel better if I knew that our knowledge was preserved and that we had agents

01:32:00 that knew about that, that were trans, you know, there were that left earth. I wouldn’t want that.

01:32:04 Mm. It’s better than not having that, you know, I make the analogy of like, you know,

01:32:08 the dinosaurs, the poor dinosaurs, they live for, you know, tens of millions of years.

01:32:11 They raised their kids. They, you know, they, they fought to survive. They were hungry. They,

01:32:15 they did everything we do. And then they’re all gone. Yeah. Like, you know, and, and if we didn’t

01:32:20 discover their bones, nobody would ever know that they ever existed. Right. Do we want to be like

01:32:27 that? I don’t want to be like that. There’s a sad aspect to it. And it’s kind of, it’s jarring to

01:32:32 think about that. It’s possible that a human like intelligence civilization has previously existed

01:32:39 on earth. The reason I say this is like, it is jarring to think that we would not, if they went

01:32:46 extinct, we wouldn’t be able to find evidence of them after a sufficient amount of time. Of course,

01:32:53 there’s like, like basically, like if we humans destroyed ourselves now, if the human civilization

01:32:58 destroyed itself now, then after a sufficient amount of time, you would find evidence of

01:33:03 the dinosaurs but would not find evidence of humans. Yeah. That’s kind of an odd thing to think about.

01:33:08 Although I’m not sure if we have enough knowledge about species going back for billions of years,

01:33:14 but we could, we could, we might be able to eliminate that possibility, but it’s an interesting

01:33:18 question. Of course, this is a similar question to, you know, there were lots of intelligent

01:33:23 species throughout our galaxy that have all disappeared. That’s super sad that they’re,

01:33:30 exactly that there may have been much more intelligent alien civilizations in our galaxy

01:33:36 that are no longer there. Yeah. You actually talked about this, that humans might destroy

01:33:42 ourselves and how we might preserve our knowledge and advertise that knowledge to other. Advertise

01:33:53 is a funny word to use. From a PR perspective. There’s no financial gain in this.

01:34:00 You know, like make it like from a tourism perspective, make it interesting. Can you

01:34:04 describe how you think about this problem? Well, there’s a couple things. I broke it down

01:34:07 into two parts, actually three parts. One is, you know, there’s a lot of things we know that,

01:34:14 what if, what if we were, what if we ended, what if our civilization collapsed? Yeah. I’m not

01:34:19 talking tomorrow. Yeah. We could be a thousand years from now, like, so, you know, we don’t

01:34:22 really know, but, but historically it would be likely at some point. Time flies when you’re

01:34:26 having fun. Yeah. That’s a good way to put it. You know, could we, and then intelligent life

01:34:33 evolved again on this planet. Wouldn’t they want to know a lot about us and what we knew? But

01:34:37 they wouldn’t be able to ask us questions. So one very simple thing I said, how would we archive

01:34:42 what we know? That was a very simple idea. I said, you know what, that wouldn’t be that hard to put

01:34:46 a few satellites, you know, going around the sun and we’d upload Wikipedia every day and that kind

01:34:51 of thing. So, you know, if we end up killing ourselves, well, it’s up there and the next intelligent

01:34:55 species will find it and learn something. They would like that. They would appreciate that.

01:35:05 Um, uh, so that’s one thing. The next thing I said, well, what about, you know, outside,

01:35:05 outside of our solar system, we have the SETI program. We’re looking for these intelligent

01:35:09 signals from everybody. And if you do a little bit of math, which I did in the book, uh, and

01:35:14 you say, well, what if intelligent species only live for 10,000 years, you know,

01:35:18 technologically intelligent species, like ones that are really able to do the stuff we’re just starting

01:35:22 to be able to do. Um, well, the chances are we wouldn’t be able to see any of them because they

01:35:26 would have all disappeared by now. Um, they would, they’ve lived for 10,000 years and now

01:35:31 they’re gone. And so we’re not going to find these signals being sent from these people because, um,

01:35:36 but I said, what kind of signal could you create that would last a million years or a billion years

01:35:41 that someone would say, dammit, someone smart lived there. We know that that would be a

01:35:46 life changing event for us, to figure that out. Well, what we’re looking for today in the SETI

01:35:49 program isn’t that. We’re looking for very coded signals in some sense. Um, and so I asked myself,

01:35:54 what would be a different type of signal one could create? Um, I’ve always thought about

01:35:58 this throughout my life. And in the book, I gave one, one possible suggestion, which was, um, uh,

01:36:04 we now detect planets going around other, other suns, uh, other stars, uh, excuse me. And we do

01:36:11 that by seeing this, the, the slight dimming of the light as the planets move in front of them.

01:36:14 That’s how, uh, we detect, uh, planets elsewhere in our galaxy. Um, what if we created something

01:36:21 like that, that just rotated around our, our, our, around the sun and it blocked out a little

01:36:26 bit of light in a particular pattern that someone said, Hey, that’s not a planet. That is a sign

01:36:31 that someone was once there. You can say, what if it’s beating out pi, you know, three point,

01:36:36 whatever. Um, so you can see it from a distance, it’s broadly broadcast, it takes no continued activation on our

01:36:44 part. This is the key, right? No one has to be sitting there running a computer and supplying it with

01:36:48 power. It just goes on. So, it’s continuous. And, and I argued that part of the SETI program

01:36:55 should be looking for signals like that. And to look for signals like that, you ought to figure

01:36:58 out what the, how would we create a signal? Like what would we create that would be like that,

01:37:03 that would persist for millions of years that would be broadcast broadly. You could see from

01:37:07 a distance that was unequivocal, came from an intelligent species. And so I gave that one

01:37:13 example. Um, cause they don’t know what I know of actually. And then, and then finally, right.

01:37:19 If, if our, ultimately our solar system will die at some point in time, you know, how do we go

01:37:26 beyond that? And I think it’s possible, if at all possible, we’ll have to create intelligent machines

01:37:31 that travel throughout the, throughout the solar system or the galaxy. And I don’t think that’s

01:37:36 going to be humans. I don’t think it’s going to be biological organisms. So these are just things to

01:37:41 think about, you know, like, what’s the old, you know, I don’t want to be like the dinosaur. I

01:37:44 don’t want to just live in, okay, that was it. We’re done. You know, well, there is a kind of

01:37:48 presumption that we’re going to live forever, which, uh, I think it is a bit sad to imagine

01:37:55 that the message we send as, as you talk about is that we were once here instead of we are here.

01:38:03 Well, it could be, we are still here. Uh, but it’s more of a, it’s more of an insurance policy

01:38:09 in case we’re not here, you know? Well, I don’t know, but there is something I think about,

01:38:16 we as humans don’t often think about this, but it’s like, like whenever I, um,

01:38:23 record a video, I’ve done this a couple of times in my life. I’ve recorded a video for my future

01:38:28 self, just for personal, just for fun. And it’s always just fascinating to think about

01:38:34 that preserving yourself for future civilizations. For me, it was preserving myself for a future me,

01:38:41 but that’s a little, that’s a little fun example of archival.

01:38:46 Well, these podcasts are, are, are preserving you and I in a way. Yeah. For future,

01:38:51 hopefully well after we’re gone. But you don’t often, we’re sitting here talking about this.

01:38:56 You are not thinking about the fact that you and I are going to die and there’ll be like 10 years

01:39:02 after somebody watching this and we’re still alive. You know, in some sense I do. I’m here

01:39:09 cause I want to talk about ideas and these ideas transcend me and they transcend this time and, and

01:39:16 on our planet. Um, we’re talking here about ideas that could be around a thousand years from now.

01:39:23 Or a million years from now. I, when I wrote my book, I had an audience in mind and one of the

01:39:29 clearest audiences was... Aliens? No. It was people reading this a hundred years from now. Yes.

01:39:35 I said to myself, how do I make this book relevant to someone reading this a hundred years from now?

01:39:39 What would they want to know that we were thinking back then? What would make it like,

01:39:44 what would make it still an interesting book? I’m not sure I can achieve that, but that was

01:39:49 how I thought about it because these ideas, like especially in the third part of the book, the ones

01:39:53 we were just talking about, you know, these crazy, sounds like crazy ideas about, you know,

01:39:56 storing our knowledge and, and, you know, merging our brains with computers and, and sending, you

01:40:01 know, our machines out into space. It’s not going to happen in my lifetime. Um, and they may not

01:40:07 happen in the next hundred years. They may not happen for a thousand years. Who knows?

01:40:10 Uh, but we have the unique opportunity right now. We, you, me, and other people in the world,

01:40:17 right now, we, you, me, and other people like this, um, to sort of at least propose the agenda,

01:40:24 um, that might impact the future like that. That’s a fascinating way to think about both

01:40:29 writing and creating: trying to create ideas, trying to create things that hold up

01:40:38 in time. Yeah. You know, with understanding how the brain works, we’re going to figure that out

01:40:42 once. That’s it. It’s going to be figured out once. And after that, that’s the answer. And

01:40:46 people will study that thousands of years from now. We still, you know,

01:40:51 venerate Newton and Einstein, because ideas are exciting,

01:40:59 even well into the future. Well, the interesting thing is like big ideas, even if they’re wrong,

01:41:05 are still useful. Like, yeah, especially if they’re not completely wrong, right? Right.

01:41:12 Newton’s laws are not wrong; it’s just that Einstein’s are better. Um, so yeah, I mean,

01:41:19 but with Newton and Einstein, we’re talking about physics. I wonder if we’ll ever

01:41:23 achieve that kind of clarity in understanding, um, complex systems and this particular

01:41:30 manifestation of complex systems, which is the human brain. I’m totally optimistic. We can do

01:41:36 that. I mean, we’re making progress at it. I don’t see any reasons why we can’t completely. I mean,

01:41:41 completely understand in the sense, um, you know, we don’t really completely understand what all

01:41:46 the molecules in this water bottle are doing, but, you know, we have laws that sort of capture it

01:41:50 pretty good. Um, and, uh, so we’ll have that kind of understanding. I mean, it’s not like you’re

01:41:54 gonna have to know what every neuron in your brain is doing. Um, but enough to, um, first of all,

01:42:00 to build it. And second of all, to do, you know, do what physics does, which is like have, uh,

01:42:06 concrete experiments where we can validate this is happening right now. Like it’s not,

01:42:12 this is not some future thing. Um, you know, I’m very optimistic about it because I know about our,

01:42:17 our work and what we’re doing. We’ll have to prove it to people. Um, but, um,

01:42:24 I, I consider myself a rational person and, um, you know, until fairly recently,

01:42:30 I wouldn’t have said that, but right now I’m, where I’m sitting right now, I’m saying, you know,

01:42:33 we, we could, this is going to happen. There’s no big obstacles to it. Um, we finally have a

01:42:39 framework for understanding what’s going on in the cortex and, um, and that’s liberating. It’s,

01:42:44 it’s like, Oh, it’s happening. So I can’t see why we wouldn’t be able to understand it. I just can’t.

01:42:50 Okay. So, I mean, on that topic, let me ask you to play devil’s advocate.

01:42:54 Is it possible for you to imagine, looking a hundred years from now at your book,

01:43:02 uh, in which ways might your ideas be wrong? Oh, I worry about this all the time. Um,

01:43:11 yeah, it’s still useful. Yeah. Yeah.

01:43:15 Yeah. I think there’s, you know, um, well I can, I can best relate it to like things I’m worried

01:43:24 about right now. So we talked about this voting idea, right? It’s happening. There’s no question.

01:43:29 It’s happening, but there are enough things I don’t know about

01:43:36 it that it might be working in ways different than I’m thinking, in terms of what’s voting,

01:43:41 who’s voting, you know, where the representations are. I talked about, like, you have a thousand models

01:43:45 of a coffee cup like that. That could turn out to be wrong. Um, because it may be, maybe there are a

01:43:52 thousand models that are sub models, but not really a single model of the coffee cup. Um,

01:43:57 I mean, there’s things, these are all sort of on the edges, things that I present as like,

01:44:02 Oh, it’s so simple and clean. Well, it’s not; it’s always going to be more complex.

01:44:05 And there are parts of the theory whose complexity I don’t understand well. So I think,

01:44:14 I think the idea that this brain is a distributed modeling system is not controversial at all. Right.

01:44:19 It’s not, that’s well understood by many people. The question then is,

01:44:22 is each cortical column an independent modeling system? Um, I could be wrong about that.

01:44:29 Um, I don’t think so, but I worry about it. My intuition, not even thinking about why you could

01:44:35 be wrong, is the same intuition I have about physics, like string theory:

01:44:42 that we as humans have a desire for a clean explanation. And, uh, a hundred years from now,

01:44:50 intelligent systems might look back at us and laugh at how we try to get rid of the whole mess

01:44:56 by having a simple explanation, when the reality is it’s way messier. And in fact, it’s impossible

01:45:03 to understand; you can only build it. It’s like this idea from complex systems and cellular automata:

01:45:08 you can only launch the thing. You cannot understand it. Yeah. I think that, you know,

01:45:13 the history of science suggests that’s not likely to occur. Um, the history of science suggests that

01:45:20 as a theorist and we’re theorists, you look for simple explanations, right? Fully knowing

01:45:25 that whatever simple explanation you’re going to come up with is not going to be completely correct.

01:45:30 I mean, it can’t be; it’s just more complexity. But that’s the role theorists

01:45:35 play. They give you a framework on which you can now talk about a problem and

01:45:41 figure out, okay, now we can start digging more details. The best frameworks stick around while

01:45:46 the details change. You know, again, you know, the classic example is Newton and Einstein, right? You

01:45:53 know, um, Newton’s theories are still used. They’re still valuable. They’re still practical. They’re

01:46:00 not like wrong. It’s just, they’ve been refined. Yeah. But that’s in physics. It’s not obvious,

01:46:05 by the way, it’s not obvious for physics either that the universe should be such that it’s amenable

01:46:10 to these simple theories. But so far, it appears to be, as far as we can tell. Um, yeah. I mean,

01:46:17 as far as we can tell. But it’s also an open question whether the brain is amenable to

01:46:23 such clean theories. That’s, uh, not the brain, but intelligence. Well, I don’t know. I would

01:46:28 take intelligence out of it. Just say, you know, well, okay. The evidence we have suggests

01:46:37 that the human brain is at one and the same time extremely messy and complex, but there are some

01:46:42 parts that are very regular and structured. That’s why we started with the neocortex. It’s extremely

01:46:48 regular in its structure. Yeah. And unbelievably so. And then, as I mentioned earlier, the other thing is

01:46:53 its universal abilities. It is so flexible to learn so many things. We haven’t

01:47:00 figured out what it can’t learn yet. We don’t know yet, but it

01:47:03 can learn things that it was never evolved to learn. So those give us hope. Um, that’s why I

01:47:09 went into this field because I said, you know, this regular structure, it’s doing this amazing

01:47:14 number of things. There’s gotta be some underlying principles that are, that are common and other,

01:47:19 other scientists have come up with the same conclusions. Um, and so it’s promising and,

01:47:25 um, and that’s, and whether the theories play out exactly this way or not, that is the role that

01:47:32 theorists play. And so far it’s worked out well, even though, you know, maybe, you know, we don’t

01:47:38 understand all the laws of physics, but so far it’s been pretty damn useful. The theories we have

01:47:42 are pretty useful. You mentioned that we should not necessarily be as worried,

01:47:49 at least to the degree that we are, about the existential risks of artificial intelligence

01:47:55 relative to the existential risks that come from human nature itself.

01:48:02 What aspect of human nature worries you the most in terms of the survival of the human species?

01:48:07 I mean, I’m disappointed in humanity, humans. I mean, all of us, I’m one. So I’m disappointed

01:48:15 in myself too. It’s kind of a sad state. There are two things that disappoint me. One is

01:48:24 how it’s difficult for us to separate our rational component of ourselves from our evolutionary

01:48:30 heritage, which is, you know, not always pretty, you know, um, uh, rape is a, is an evolutionary

01:48:38 good strategy for reproduction. Murder can be at times too, you know, making other people miserable

01:48:45 at times is a good strategy for reproduction. And so now that

01:48:50 we know that, and yet we have this sort of, you know, we, you and I can have this very rational

01:48:54 discussion talking about, you know, intelligence and brains and life and so on. But it seems

01:48:59 like it’s so hard. It’s just a big, big transition to get all humans to make the

01:49:05 transition to, like, let’s pay no attention to all that ugly stuff over here; let’s just focus

01:49:11 on what’s interesting. What’s unique about humanity is our knowledge and our intellect. But the fact

01:49:16 that we’re striving is in itself amazing, right? The fact that we’re able to overcome that part.

01:49:22 And it seems like we are more and more becoming successful at overcoming that part. That is the

01:49:28 optimistic view. And I agree with you, but I worry about it. I’m not saying I’m not worrying about it. I

01:49:33 think that was your question. I still worry about it. Yes. You know, we could be gone tomorrow because

01:49:38 some terrorist could get nuclear bombs and, you know, blow us all up. Who knows? Right. The other

01:49:43 thing I’m disappointed about, and I understand it, I guess you can’t really

01:49:47 be disappointed, it’s just a fact, is that we’re so prone to false beliefs. You know, we have

01:49:53 a model in our head of the things we can interact with directly, physical objects, people, and that

01:50:00 model is pretty good. And we can test it all the time, right? I touch something, I look at it,

01:50:04 talk to you, see if my model is correct. But so much of what we know is stuff I can’t directly

01:50:09 interact with. I only know because someone told me about it. And so we’re prone, inherently prone

01:50:16 to having false beliefs because if I’m told something, how am I going to know it’s right

01:50:20 or wrong? Right. And so then we have the scientific process, which says we are inherently flawed.

01:50:26 So the only way we can get closer to the truth is by looking for contrary evidence.

01:50:34 Yeah. Like this conspiracy theory, this theory that scientists keep telling me about that the

01:50:41 earth is round. As far as I can tell, when I look out, it looks pretty flat.

01:50:46 Yeah. So, yeah, there is a tension, but it’s also, I tend to believe that we haven’t figured

01:50:55 out most of this thing, right? Most of nature around us is a mystery. And so it…

01:51:02 But that doesn’t, does that worry you? I mean, it’s like, oh, that’s like a pleasure,

01:51:06 more to figure out, right? Yeah. That’s exciting. But I’m saying like

01:51:09 there’s going to be a lot of quote unquote, wrong ideas. I mean, I’ve been thinking a lot about

01:51:16 engineering systems like social networks and so on. And I’ve been worried about censorship

01:51:21 and thinking through all that kind of stuff, because there’s a lot of wrong ideas. There’s a

01:51:25 lot of dangerous ideas, but then I also read a history, read history and see when you censor

01:51:33 ideas that are wrong. Now this could be small-scale censorship, like a young grad student who

01:51:39 comes up, who raises their hand and says some crazy idea. A form of censorship could be,

01:51:46 I shouldn’t use the word censorship, but, like, disincentivizing them: no, no, no,

01:51:52 this is the way it’s been done. Yeah. Yeah. You’re a foolish kid. Don’t

01:51:54 think that. Yeah. You’re foolish. So in some sense,

01:51:59 those wrong ideas, most of the time end up being wrong, but sometimes end up being

01:52:05 I agree with you. So I don’t like the word censorship. Um, at the very end of the book, I

01:52:11 ended up with a sort of a plea, or a recommended course of action. The best way I

01:52:20 know how to deal with this issue that you bring up is if everybody understood, as part of

01:52:26 your upbringing in life, something about how your brain works, that it builds a model of the world,

01:52:31 uh, how it works, you know, how basically it builds that model of the world and that the model

01:52:34 is not the real world. It’s just a model and it’s never going to reflect the entire world. And it

01:52:39 can be wrong and it’s easy to be wrong. And here’s all the ways you can get a wrong model in your

01:52:44 head. Right? It doesn’t prescribe what’s right or wrong; just understand that process. If we all

01:52:50 understood the process, and you and I got together and you said, I disagree with you, Jeff, and I said,

01:52:54 Lex, I disagree with you, then at least we understand that we’re both trying to model

01:52:59 something. We both have different information, which leads to our different models. And therefore

01:53:03 I shouldn’t hold it against you and you shouldn’t hold it against me. And we can at least agree on,

01:53:07 well, what can we look for that’s common ground to test our beliefs, as opposed to so much of how

01:53:13 we raise our kids on dogma, which is: this is a fact, this is a fact, and these people are

01:53:20 bad. And, you know, if everyone knew just to be skeptical of every

01:53:31 belief and why, and how their brains do that, I think we might have a better world.

01:53:36 Do you think the human mind is able to comprehend reality? So you talk about this, creating models.

01:53:45 How close do you think we get to reality? So the wildest of the ideas is, like, Donald

01:53:51 Hoffman saying we’re very far away from reality. Do you think we’re getting close to reality?

01:53:56 Well, it depends on how you define reality. We have a model of the world

01:54:02 that’s very useful, right? For, for basic goals. Well, for our survival and our pleasure right

01:54:10 now. Right. Um, so that’s useful. Um, I mean, it’s really useful. Oh, we can build planes. We can build computers. We can do these things. Right.

01:54:17 Uh, I don’t think, I don’t know the answer to that question. Um, I think that’s part of the

01:54:24 question we’re trying to figure out, right? Like, you know, obviously if you end up with a theory of

01:54:27 everything that really is a theory of everything and all of a sudden everything comes into play

01:54:32 and there’s no room for something else, then you might feel like we have a good model of the world.

01:54:37 Yeah. But if we have a theory of everything and somehow, first of all, you’ll never be able to

01:54:41 really conclusively say it’s a theory of everything, but say somehow we are very damn sure it’s a theory

01:54:46 of everything. We understand what happened at the big bang and how just the entirety of the

01:54:51 physical process. I’m still not sure that gives us an understanding of, uh, the next

01:54:58 many layers of the hierarchy of abstractions that form. Well, also what if string theory

01:55:03 turns out to be true? And then you say, well, we have no way of modeling what’s going on in

01:55:09 those other dimensions that are wrapped in on each other. Right. Or the multiverse,

01:55:14 you know, I honestly don’t know how for us, for human interaction, for ideas of intelligence,

01:55:21 how it helps us to understand that we’re made up of vibrating strings that are

01:55:26 like 10 to the whatever times smaller than us. I don’t, you know, you could probably build better

01:55:33 weapons, better rockets, but you’re not going to be able to understand intelligence. I guess,

01:55:37 I guess maybe better computers. No, you won’t. I think it’s just more pure knowledge.

01:55:41 You might lead to a better understanding of the, of the beginning of the universe,

01:55:46 right? It might lead to a better understanding of, uh, I don’t know. I guess I think the acquisition

01:55:52 of knowledge has always been one where you, you pursue it for its own pleasure. Um, and you don’t

01:56:01 always know what is going to make a difference. Yeah. Uh, you’re pleasantly surprised by the,

01:56:06 the weird things you find. Do you think, uh, for the, for the neocortex in general, do you,

01:56:11 do you think there’s a lot of innovation to be done on the machine side? You know,

01:56:16 you use the computer as a metaphor quite a bit. Are there different types of computers that would

01:56:21 help us build intelligent manifestations, intelligent machines? Yeah. Or is it, oh no,

01:56:26 it’s going to be totally crazy; we have no idea how this is going to turn out yet?

01:56:32 You can already see this. Um, today we’ve, of course, we model these things on traditional

01:56:37 computers and now, now GPUs are really popular with, with, uh, you know, neural networks and so

01:56:43 on. Um, but there are companies coming up with fundamentally new physical substrates, um, that

01:56:50 are just really cool. I don’t know if they’re going to work or not. Um, but I think there’ll

01:56:55 be decades of innovation here. Yeah. Totally. Do you think the final thing will be messy,

01:57:01 like our biology is messy? Or do you think, uh, it’s, it’s the, it’s the old bird versus

01:57:07 airplane question, or do you think we could just, um, build airplanes that, that fly way better

01:57:16 than birds, in the same way we could build, uh, an electrical neocortex? Yeah. You know,

01:57:23 can I, can I, can I riff on the bird thing a bit? Because I think that’s interesting.

01:57:27 People really misunderstand this. The Wright brothers, um, the problem they were trying to

01:57:33 solve was controlled flight, how to turn an airplane, not how to propel an airplane.

01:57:38 They weren’t worried about that. Interesting. Yeah. At that time,

01:57:41 there were already wing shapes, which they had from studying birds. There were already gliders

01:57:45 that carried people. The problem was, if you put a rudder on the back of a glider and you turn it,

01:57:49 the plane falls out of the sky. So the problem was how do you control flight? And they studied

01:57:55 birds and they actually had birds in captivity. They watched birds in wind tunnels. They observed

01:58:00 them in the wild and they discovered the secret was the birds twist their wings when they turn.

01:58:05 And so that’s what they did on the Wright brothers flyer. They had these sticks that

01:58:07 you would use to twist the wings. And that was their innovation, not the propeller.

01:58:12 And today airplanes still twist their wings. We don’t twist the entire wing. We just twist

01:58:16 the tail end of it, the flaps, which is the same thing. So today’s airplanes fly on the

01:58:22 same principles as the birds they observed. So everyone gets that analogy wrong. But let’s

01:58:26 step back from that. Once you understand the principles of flight, you can choose

01:58:32 how to implement them. No one’s going to use bones and feathers and muscles, but they do have wings

01:58:39 and we don’t flap them; we have propellers. So when we have the principles of the computation that

01:58:45 goes into modeling the world in a brain, and we understand those principles very clearly,

01:58:50 we’ll have choices on how to implement them. And some of them will be biological-like and some won’t.

01:58:54 And, but I do think there’s going to be a huge amount of innovation here.

01:58:59 Just think about the innovation in the computer: they had to invent the transistor,

01:59:03 the silicon chip, then, you know, the software. I mean,

01:59:09 it’s millions of things they had to do, memory systems. What we’re going to do is going to be

01:59:13 similar. Well, it’s interesting that the deep learning, the effectiveness of deep learning for

01:59:19 specific tasks is driving a lot of innovation in the hardware, which may have the effect of actually

01:59:27 allowing us to discover intelligent systems that operate very differently, or at least at a much

01:59:31 bigger scale, than deep learning. Yeah. Interesting. So ultimately it’s good to have an application

01:59:37 that’s making our life better now because the capitalist process, if you can make money.

01:59:42 Yeah. That works. I mean, Neil deGrasse Tyson writes about this:

01:59:48 the other way we fund science, of course, is through the military. So, like, yeah. Conquests.

01:59:53 So here’s an interesting thing we’re doing in this regard. We have

01:59:57 a series of these biological principles, and we can see how to build these intelligent machines,

02:00:01 but we’ve decided to apply some of these principles to today’s machine learning techniques.

02:00:07 We didn’t talk about this principle. One is sparsity: in the brain,

02:00:11 only a small fraction of the neurons are active at any point in time. It’s sparse, and the connectivity is sparse,

02:00:15 and that’s different than deep learning networks. Um, so we’ve already shown that we can speed up

02:00:20 existing deep learning networks anywhere from a factor of ten to a factor of a hundred. I mean,

02:00:26 literally a hundred, and make them more robust at the same time. So this is commercially very,

02:00:31 very valuable. Um, and so, you know, if we can prove this actually in the largest systems that

02:00:38 are commercially applied today, there’s a big commercial desire to do this. Well,

02:00:44 sparsity is something that doesn’t run really well on existing hardware. It doesn’t run

02:00:50 really well on GPUs or on CPUs. And so that would be a way of sort of bringing

02:00:59 more brain principles into the existing system on a, on a commercially valuable basis.
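
To make the sparsity idea above concrete, here is a minimal, illustrative sketch of a k-winners-take-all style layer that keeps only a small fraction of a dense layer’s units active at once. This is not Numenta’s code; the layer sizes, the 5% activity level, and the class name are invented for illustration.

```python
import torch
import torch.nn as nn


class KWinnersTakeAll(nn.Module):
    """Keep only the k largest activations per sample and zero out the rest.

    A toy version of the sparse-activation idea: only a small fraction of
    units are 'on' at any time. The 5% default below is illustrative.
    """

    def __init__(self, percent_on: float = 0.05):
        super().__init__()
        self.percent_on = percent_on

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        k = max(1, int(self.percent_on * x.shape[-1]))
        # The k-th largest value in each row becomes the keep-threshold.
        topk_vals, _ = torch.topk(x, k, dim=-1)
        threshold = topk_vals[..., -1:]
        return x * (x >= threshold).float()


# A small dense network with sparse activations (sizes are arbitrary).
model = nn.Sequential(
    nn.Linear(784, 1024),
    KWinnersTakeAll(percent_on=0.05),  # roughly 5% of the 1024 units stay active
    nn.Linear(1024, 10),
)

x = torch.randn(32, 784)
print(model(x).shape)  # torch.Size([32, 10])
```

The speedups Jeff describes depend on actually skipping the work for the zeroed units and weights, which takes kernels or hardware built to exploit sparsity; on stock GPUs and CPUs the dense math often wins, which is the point made above about sparsity not running well on existing hardware.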

02:01:03 Another thing we think we can do is use these dendrite

02:01:06 models. I talked earlier about the prediction occurring inside a neuron;

02:01:13 that basic property can be applied to existing neural networks and allow them to

02:01:18 learn continuously, which is something they don’t do today. And so the dendritic spikes that you

02:01:22 were talking about? Yeah. Well, we wouldn’t model the spikes, but the idea is that you have

02:01:26 that in the neuron. Today’s neural networks use what’s called a point neuron, a very simple

02:01:30 model of a neuron. And by adding dendrites to them, just one more level of complexity

02:01:36 that’s in biological systems, you can solve problems in continuous learning

02:01:41 and rapid learning. So we’re trying, and we’ll see if we can do it,

02:01:47 we’re trying to bring the existing field of machine learning

02:01:51 commercially along with us (you brought up this idea of, you know,

02:02:00 paying for it) as we move towards the ultimate goal of a true AI system.
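
As a rough illustration of the dendrites idea, the sketch below gives each point neuron a few dendritic segments that match against a context vector (for example, a task embedding) and gate the unit’s feedforward output, so different contexts can switch on different subsets of units. This is a toy reading of the general idea, not Numenta’s actual architecture; the class name, the sizes, and the sigmoid gating are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class ActiveDendriteLayer(nn.Module):
    """Point neurons augmented with simple 'dendritic segments' (a toy sketch).

    Each output unit owns a few segments: weight vectors over a context
    signal. The best-matching segment produces a gate that modulates the
    unit's feedforward activation, so different contexts (e.g. tasks) can
    switch different subsets of units on. Purely illustrative.
    """

    def __init__(self, in_dim: int, out_dim: int, context_dim: int, num_segments: int = 4):
        super().__init__()
        self.ff = nn.Linear(in_dim, out_dim)  # the classic point-neuron weights
        # One set of segment weights per output unit: (out_dim, num_segments, context_dim)
        self.segments = nn.Parameter(torch.randn(out_dim, num_segments, context_dim) * 0.01)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        a = self.ff(x)  # (batch, out_dim)
        # Response of every segment to the context: (batch, out_dim, num_segments)
        seg = torch.einsum("bc,osc->bos", context, self.segments)
        best = seg.max(dim=-1).values  # strongest segment per unit
        gate = torch.sigmoid(best)     # dendritic modulation in (0, 1)
        return torch.relu(a * gate)


# Usage sketch: the same input gets gated differently under different contexts.
layer = ActiveDendriteLayer(in_dim=16, out_dim=32, context_dim=8)
x = torch.randn(4, 16)
context_task_a = torch.randn(4, 8)
context_task_b = torch.randn(4, 8)
print(layer(x, context_task_a).shape)  # torch.Size([4, 32])
print(layer(x, context_task_b).shape)  # torch.Size([4, 32])
```

The hope described here is that context gating of this kind reduces interference between tasks, so a network can keep learning without catastrophically forgetting what it learned before; whether a toy version like this captures that effect is an empirical question.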

02:02:00 Even small innovations on neural networks are really, really exciting.

02:02:04 Yeah.

02:02:04 It seems like such a trivial model of the brain, and applying different insights,

02:02:11 just even, like you said, continuous learning, or making it more asynchronous,

02:02:19 or maybe making it more dynamic, or, like, incentivizing sparsity somehow, making it robust and

02:02:28 somehow much better. Yeah. Well, if you can make things a

02:02:35 hundred times faster, then there’s plenty of incentive. That’s true. People, people are

02:02:40 spending millions of dollars, you know, just training some of these networks. Now these, uh,

02:02:44 these transformer networks. Let me ask you the big question, for young people listening to this

02:02:51 today in high school and college, what advice would you give them in terms of, uh, which career

02:02:57 path to take and, um, maybe just about life in general? Well, in my case, um, I didn’t start

02:03:06 life with any kind of goals. I was, when I was going to college, it’s like, Oh, what do I study?

02:03:11 Well, maybe I’ll do this electrical engineering stuff, you know? Um, it wasn’t like, you know,

02:03:15 today you see some of these young kids are so motivated, like I’m changing the world. I was

02:03:18 like, you know, whatever. And, um, but then I did fall in love with something besides my wife,

02:03:25 but I fell in love with this, like, Oh my God, it would be so cool to understand how the brain works.

02:03:30 And then I, I said to myself, that’s the most important thing I could work on. I can’t imagine

02:03:34 anything more important, because if we understand how the brain works, you could build intelligent machines

02:03:38 and they could figure out all the other big questions of the world. Right. So, and then I

02:03:42 said, but I want to understand how I work. So I fell in love with this idea and I became passionate

02:03:46 about it. And this is a trope, people say this, but it’s true: because I was passionate

02:03:54 about it, I was able to put up with so much crap. You know,

02:04:01 people said, you can’t do this. I was a graduate student at Berkeley

02:04:05 when they said, you can’t study this problem, no one can solve this, you can’t get

02:04:09 funded for it. You know, then I went into mobile computing and it was like, people said,

02:04:13 you can’t do that. You can’t build a cell phone, you know? So, but all along I kept being motivated

02:04:18 because I wanted to work on this problem. I said, I want to understand how the brain works. And I told

02:04:22 myself, you know, I’ve got one lifetime; I’m going to figure it out, do the best I can. So by having

02:04:28 that, cause you know, it’s really, as you pointed out, Lex, it’s really hard to do these things.

02:04:33 People, it just, there’s so many downers along the way. So many ways, obstacles to get in your

02:04:38 way. Yeah. I’m sitting here happy all the time, but trust me, it’s not always like that.

02:04:42 Well, that’s, I guess the happiness, the passion is a prerequisite for surviving the whole thing.

02:04:47 Yeah, I think so. I think that’s right. And so I don’t want to say to someone, you know,

02:04:53 you need to find a passion and do it. No, maybe you don’t. But if you do find something you’re

02:04:57 passionate about, then you can follow it as far as your passion will let you put up with it.

02:05:04 Do you remember how you found it? How the spark happened?

02:05:09 Why specifically for me?

02:05:10 Yeah. ’Cause you said it’s such an interesting thing, like, almost later in life. By later,

02:05:15 I mean, like, not when you were five; you didn’t really know. And then all of a sudden you fell

02:05:21 in love with that idea. Yeah, yeah. There were two separate events that compounded one another.

02:05:25 One, when I was probably a teenager, it might’ve been 17 or 18, I made a list of the most

02:05:31 interesting problems I could think of. First was why does the universe exist? It seems like

02:05:36 not existing is more likely. The second one was, well, given it exists, why does it behave the way

02:05:41 it does? The laws of physics: why is it E equals mc squared, not mc cubed? That’s an interesting

02:05:45 question. The third one was like, what’s the origin of life? And the fourth one was, what’s

02:05:51 intelligence? And I stopped there. I said, well, that’s probably the most interesting one. And I

02:05:56 put that aside as a teenager. But then, when I was 22, it was

02:06:05 1979, I was reading the September

02:06:13 issue of Scientific American, which was all about the brain. And the final essay was by Francis

02:06:19 Crick, of DNA fame, who had now taken his interest to studying the brain. And he said,

02:06:25 you know, there’s something wrong here. He says, we got all this data, all this fact, this is 1979,

02:06:33 all these facts about the brain, tons and tons of facts about the brain. Do we need more facts? Or do

02:06:39 we just need to think about a way of rearranging the facts we have? Maybe we’re just not thinking

02:06:42 about the problem correctly. Cause he says, this shouldn’t be like this. So I read that and I said,

02:06:51 wow. I said, I don’t have to become like an experimental neuroscientist. I could just

02:06:57 look at all those facts and try to become a theoretician and try to figure it out. And I felt

02:07:04 like it was something I would be good at. I said, I wouldn’t be a good experimentalist.

02:07:08 I don’t have the patience for it, but I’m a good thinker and I love puzzles. And this is like the

02:07:14 biggest puzzle in the world. It’s the biggest puzzle of all time. And I got all the puzzle

02:07:18 pieces in front of me. Damn, that was exciting. And there’s something obviously you can’t

02:07:23 convert into words that just kind of sparked this passion. And I have that a few times in my life,

02:07:29 just something just like you, it grabs you. Yeah. I felt it was something that was both

02:07:37 important and that I could make a contribution to. And so all of a sudden it felt like,

02:07:41 oh, it gave me purpose in life. I honestly don’t think it has to be as big as one of those four

02:07:46 questions. I think you can find those things in the smallest of things. Oh, absolutely. David Foster Wallace

02:07:54 said, like, the key to life is to be unborable. I think it’s very possible to find that intensity

02:08:01 of joy in the smallest thing. Absolutely. I’m just, you asked me my story. Yeah. No, but I’m

02:08:06 actually speaking to the audience. It doesn’t have to be those four. You happen to get excited by one

02:08:10 of the bigger questions in the universe, but even the smallest things: watching the Olympics

02:08:18 now, just giving your life over to the study and the mastery of a particular

02:08:25 sport is fascinating. And if it sparks joy and passion, you’re able to, in the case of the

02:08:32 Olympics, basically suffer for like a couple of decades to achieve it. I mean, you can find joy and

02:08:37 passion just being a parent. I mean, yeah, the parenting one is funny. So I have, not always,

02:08:43 but for a long time, wanted kids and to get married and stuff. And especially that has to do with the

02:08:48 fact that I’ve seen a lot of people that I respect get a whole nother level of joy from kids. And

02:08:58 at first your thinking is, well, like, I don’t have enough time in the day, right? If I

02:09:05 have this passion to solve, like, if I want to solve intelligence, how’s this kid situation

02:09:13 going to help me? But then you realize, you know, like you said, the things that spark joy:

02:09:22 it’s very possible that kids can provide an even greater or deeper, more meaningful joy than

02:09:28 those bigger questions, and that they enrich each other. And that seemed like, obviously when I

02:09:34 was younger, it’s probably a counterintuitive notion because there’s only so many hours in the

02:09:37 day, but then life is finite and you have to pick the things that give you joy.

02:09:44 Yeah. But you also understand you can be patient too. I mean, it’s finite, but we do have, you know,

02:09:50 whatever, 50 years or something. So in my case, I had to give up on my dream of neuroscience

02:09:58 because I was a graduate student at Berkeley and they told me I couldn’t do this and I couldn’t

02:10:02 get funded. And so I went back into the computing industry for a number of years. I thought it

02:10:09 would be four, but it turned out to be more. But I said, I’ll come back. I’m definitely going to

02:10:14 come back. I know I’m going to do this computer stuff for a while, but I’m definitely coming back.

02:10:17 Everyone knows that. And it’s like raising kids. Well, yeah, you have to spend a lot of time with

02:10:22 your kids. It’s fun, enjoyable. But that doesn’t mean you have to give up on other dreams. It just

02:10:28 means that you may have to wait a week or two to work on that next idea. Well, you talk about the

02:10:36 darker side, the disappointing sides of human nature that we’re hoping to overcome so that we

02:10:42 don’t destroy ourselves. I tend to put a lot of value in the broad general concept of love,

02:10:48 of the human capacity of compassion towards each other, of just kindness, whatever that longing of

02:10:58 like just the human to human connection. It connects back to our initial discussion. I tend to

02:11:05 see a lot of value in this collective intelligence aspect. I think some of the magic of human

02:11:09 civilization happens collectively; a party is not as fun when you’re alone. I totally agree with

02:11:16 you on these issues. Do you think from a neocortex perspective, what role does love play in the human

02:11:24 condition? Well, those are two separate things from a neocortex point of view. It doesn’t impact

02:11:29 our thinking about the neocortex. From a human condition point of view, I think it’s core.

02:11:34 I mean, we get so much pleasure out of loving people and helping people. I’ll chalk it up to

02:11:44 old brain stuff and maybe we can throw it under the bus of evolution if you want. That’s fine.

02:11:52 It doesn’t impact how I think about how we model the world, but from a humanity point of view,

02:11:57 I think it’s essential. Well, I tend to give it to the new brain and also I tend to give it to

02:12:03 the old brain. Also, I tend to think that some aspects of that need to be engineered into AI

02:12:09 systems, both in their ability to have compassion for other humans and their ability to maximize

02:12:21 love in the world between humans. I’m thinking more about social networks. Wherever there’s a deep

02:12:27 integration of AI systems and humans, specific applications where it’s AI and humans, I think that’s something that’s

02:12:35 often not talked about in terms of the metrics you try to maximize,

02:12:44 like which metric to maximize in a system. It seems like one of the most

02:12:48 powerful things in societies is the capacity to love.

02:12:55 It’s fascinating. I think it’s a great way of thinking about it. I have been thinking more of

02:13:01 these fundamental mechanisms in the brain as opposed to the social interaction between humans

02:13:06 and AI systems in the future. If you think about that, you’re absolutely right. That’s a complex

02:13:13 system. I can have intelligent systems that don’t have that component, but they’re not interacting

02:13:17 with people. They’re just running something or building some place or something. I don’t know.

02:13:21 But if you think about interacting with humans, yeah, but it has to be engineered in there. I

02:13:26 don’t think it’s going to appear on its own. That’s a good question.

02:13:30 Yeah. Well, we could, we’ll leave that open. In terms of, from a reinforcement learning

02:13:38 perspective, whether the darker sides of human nature or the better angels of our nature win out,

02:13:46 statistically speaking, I don’t know. I tend to be optimistic and hope that love wins out in the end.

02:13:52 You’ve done a lot of incredible stuff and your book is driving towards this fourth question that

02:14:01 you started with on the nature of intelligence. What do you hope your legacy is for people reading

02:14:08 a hundred years from now? How do you hope they remember your work? How do you hope they remember

02:14:14 this book? Well, I think as an entrepreneur or a scientist or any human who’s trying to accomplish

02:14:21 some things, I have a view that really all you can do is accelerate the inevitable. Yeah. It’s like,

02:14:30 you know, if we didn’t figure out, if we didn’t study the brain, someone else will study the

02:14:33 brain. If, you know, if Elon didn’t make electric cars, someone else would do it eventually.

02:14:38 And if, you know, if Thomas Edison didn’t invent a light bulb, we wouldn’t be using candles today.

02:14:42 So, what you can do as an individual is you can accelerate something that’s beneficial

02:14:48 and make it happen sooner than it would have. That’s really it. That’s all you can do.

02:14:53 You can’t create a new reality that wasn’t going to happen anyway. So, from that perspective,

02:15:01 I would hope that our work, not just me, but our work in general, people would look back and said,

02:15:07 hey, they really helped make this better future happen sooner. They, you know, they helped us

02:15:14 understand the nature of false beliefs sooner than they might have. Now we’re so happy that

02:15:18 we have these intelligent machines doing these things, helping us that maybe that solved the

02:15:22 climate change problem and they made it happen sooner. So, I think that’s the best I would hope

02:15:28 for. Some would say those guys just moved the needle forward a little bit in time.

02:15:33 Well, I do. It feels like, with the progress of human civilization, there are a lot

02:15:40 of trajectories, and if you have individuals that accelerate towards one direction, that helps steer

02:15:48 human civilization. So, I think over a long stretch of time, all trajectories will be traveled.

02:15:55 But I think it’s nice for this particular civilization on earth to travel down one that’s

02:15:59 not. Well, I think you’re right. We have to take the whole period of, you know, World War II,

02:16:03 Nazism or something like that. Well, that was a bad sidestep, right? We’ve been over there for a

02:16:07 while. But, you know, there is the optimistic view about life that ultimately it does converge

02:16:13 in a positive way. It progresses ultimately, even if we have years of darkness. So, yeah. So,

02:16:21 I think, perhaps, accelerating the positive could also mean eliminating some bad

02:16:27 missteps along the way, too. But I’m optimistic in that way. Even though we talked about the end of

02:16:34 civilization, you know, I think we’re going to live for a long time. I hope we are. I think our

02:16:40 society in the future is going to be better. We’re going to have less discord. We’re going to have

02:16:42 fewer people killing each other. You know, we’ll manage to live in some sort of way that’s compatible

02:16:47 with the carrying capacity of the earth. I’m optimistic these things will happen. And all we

02:16:53 can do is try to get there sooner. And at the very least, if we do destroy ourselves,

02:16:57 we’ll have a few satellites orbiting that will tell alien civilization that we were once here.

02:17:05 Or maybe our future, you know, future inhabitants of Earth. You know, imagine,

02:17:10 you know, the Planet of the Apes scenario here: we kill ourselves, you know,

02:17:13 a million years from now or a billion years from now, and there’s another species on the planet.

02:17:16 These curious creatures were once here. Jeff, thank you so much for your work. And thank you so much for

02:17:23 talking to me once again. Well, actually, it’s great. I love what you do. I love your podcast.

02:17:27 You have the most interesting people, me aside. So it’s a real service, I think you do for,

02:17:35 in a broader sense, for humanity, I think. Thanks, Jeff. All right. It’s a pleasure.

02:17:40 Thanks for listening to this conversation with Jeff Hawkins. And thank you to

02:17:43 Codecademy, BioOptimizers, ExpressVPN, Asleep, and Blinkist. Check them out in the description

02:17:50 to support this podcast. And now, let me leave you with some words from Albert Camus.

02:17:57 An intellectual is someone whose mind watches itself. I like this, because I’m happy to be

02:18:04 both halves, the watcher and the watched. Can they be brought together? This is the

02:18:10 practical question we must try to answer. Thank you for listening. I hope to see you next time.