David Eagleman: Neuroplasticity and the Livewired Brain #119

Transcript

00:00:00 The following is a conversation with David Eagleman,

00:00:02 a neuroscientist and one of the great science communicators

00:00:06 of our time, exploring the beauty and mystery

00:00:09 of the human brain.

00:00:10 He’s an author of a lot of amazing books

00:00:13 about the human mind, and his new one is called Livewired.

00:00:18 Livewired is a work of 10 years on a topic

00:00:21 that is fascinating to me, which is neuroplasticity

00:00:24 or the malleability of the human brain.

00:00:27 Quick summary of the sponsors.

00:00:29 Athletic Greens, BetterHelp, and Cash App.

00:00:32 Click the sponsor links in the description

00:00:34 to get a discount and to support this podcast.

00:00:37 As a side note, let me say that the adaptability

00:00:41 of the human mind at the biological, chemical,

00:00:44 cognitive, psychological, and even sociological levels

00:00:48 is the very thing that captivated me many years ago

00:00:51 when I first began to wonder how I might engineer

00:00:54 something like it in a machine.

00:00:57 The open question today in the 21st century

00:01:00 is what are the limits of this adaptability?

00:01:03 As new, smarter and smarter devices and AI systems

00:01:07 come to life, or as better and better brain computer

00:01:10 interfaces are engineered, will our brain be able to adapt,

00:01:13 to catch up, to excel?

00:01:16 I personally believe yes, that we’re far from reaching

00:01:19 the limitations of the human mind and the human brain,

00:01:23 just as we are far from reaching the limitations

00:01:26 of our computational systems.

00:01:28 If you enjoy this thing, subscribe on YouTube,

00:01:31 review it with five stars on Apple Podcasts,

00:01:33 follow on Spotify, support on Patreon,

00:01:36 or connect with me on Twitter @lexfridman.

00:01:40 As usual, I’ll do a few minutes of ads now

00:01:41 and no ads in the middle.

00:01:43 I try to make these interesting,

00:01:45 but I give you timestamps so you can skip.

00:01:47 But please do check out the sponsors

00:01:49 by clicking the links in the description.

00:01:51 It’s the best way to support this podcast.

00:01:55 This show is brought to you by Athletic Greens,

00:01:58 the all in one daily drink to support better health

00:02:01 and peak performance.

00:02:02 Even with a balanced diet, it’s difficult to cover

00:02:04 all of your nutritional bases.

00:02:07 That’s where Athletic Greens will help.

00:02:09 Their daily drink is like nutritional insurance

00:02:11 for your body that's delivered straight to your door.

00:02:15 As you may know, I fast often, sometimes intermittent fasting

00:02:18 for 16 hours, sometimes 24 hours,

00:02:21 dinner to dinner, sometimes more.

00:02:24 I break the fast with Athletic Greens.

00:02:26 It’s delicious, refreshing, just makes me feel good.

00:02:30 I think it’s like 50 calories, less than a gram of sugar,

00:02:34 but has a ton of nutrients to make sure my body

00:02:36 has what it needs despite what I’m eating.

00:02:40 Go to athleticgreens.com slash lex

00:02:43 to claim a special offer of free vitamin D3 and K2 for a year.

00:02:49 If you listen to the Joe Rogan Experience,

00:02:51 you might’ve listened to him rant about

00:02:53 how awesome vitamin D is for your immune system.

00:02:56 So there you have it.

00:02:57 So click the athleticgreens.com slash lex

00:03:00 in the description to get the free stuff

00:03:03 and to support this podcast.

00:03:06 This show is sponsored by BetterHelp, spelled H E L P, help.

00:03:11 Check it out at betterhelp.com slash lex.

00:03:14 They figure out what you need and match you

00:03:15 with a licensed professional therapist in under 48 hours.

00:03:19 It’s not a crisis line, it’s not self help,

00:03:21 it’s professional counseling done securely online.

00:03:25 I’m a bit from the David Goggins line of creatures

00:03:28 and so have some demons to contend with,

00:03:30 usually on long runs or all-nighters full of self-doubt.

00:03:34 I think suffering is essential for creation,

00:03:37 but you can suffer beautifully

00:03:39 in a way that doesn’t destroy you.

00:03:41 For most people, I think a good therapist can help with this.

00:03:45 So it’s at least worth a try.

00:03:47 Check out their reviews, they’re good.

00:03:49 It’s easy, private, affordable, available worldwide.

00:03:52 You can communicate by text anytime

00:03:54 and schedule a weekly audio and video session.

00:03:58 Check it out at betterhelp.com slash lex.

00:04:02 This show is presented by Cash App,

00:04:04 the number one finance app in the App Store.

00:04:06 When you get it, use code lexpodcast.

00:04:09 Cash App lets you send money to friends, buy Bitcoin,

00:04:11 invest in the stock market with as little as $1.

00:04:14 Since Cash App allows you to buy Bitcoin,

00:04:16 let me mention that cryptocurrency

00:04:18 in the context of the history of money is fascinating.

00:04:21 I recommend The Ascent of Money as a great book on this history.

00:04:25 Debits and credits on ledgers started around 30,000 years ago

00:04:29 and the first decentralized cryptocurrency

00:04:32 released just over 10 years ago.

00:04:34 So given that history, cryptocurrency is still very much

00:04:37 in its early days of development,

00:04:39 but it’s still aiming to

00:04:40 and just might redefine the nature of money.

00:04:44 So again, if you get Cash App from the App Store or Google Play

00:04:47 and use code lexpodcast, you get $10

00:04:51 and Cash App will also donate $10 to FIRST,

00:04:53 an organization that is helping to advance robotics

00:04:56 and STEM education for young people around the world.

00:05:00 And now here’s my conversation with David Eagleman.

00:05:05 You have a new book coming out on the changing brain.

00:05:10 Can you give a high level overview of the book?

00:05:13 It’s called Livewired by the way.

00:05:14 Yeah, the thing is we typically think about the brain

00:05:17 in terms of the metaphors we already have,

00:05:19 like hardware and software, that’s how we build

00:05:21 all our stuff, but what’s happening in the brain

00:05:24 is fundamentally so different.

00:05:26 So I coined this new term liveware,

00:05:29 which is a system that’s constantly reconfiguring itself

00:05:32 physically as it learns and adapts to the world around it.

00:05:37 It’s physically changing.

00:05:38 So it’s liveware meaning like hardware but changing.

00:05:43 Yeah, exactly.

00:05:44 Well, the hardware and the software layers are blended

00:05:47 and so typically engineers are praised for their efficiency

00:05:53 in making something really clean and clear,

00:05:55 like, okay, here’s the hardware layer,

00:05:56 then I’m gonna run software on top of it.

00:05:57 And there’s all sorts of universality that you get out

00:06:00 of a piece of hardware like that that’s useful.

00:06:02 But what the brain is doing is completely different.

00:06:05 And I am so excited about where this is all going

00:06:08 because I feel like this is where our engineering will go.

00:06:13 So currently we build all our devices a particular way,

00:06:17 but I can’t tear half the circuitry out of your cell phone

00:06:20 and expect it to still function.

00:06:22 But you can do that with the brain.

00:06:26 So just as an example, kids who are under

00:06:29 about seven years old can get one half of their brain

00:06:32 removed, it’s called a hemispherectomy, and they’re fine.

00:06:35 They have a slight limp on the other side of their body,

00:06:37 but they can function just fine that way.

00:06:40 And this is generally true.

00:06:42 You know, sometimes children are born without a hemisphere

00:06:45 and their visual system rewires so that everything is

00:06:48 on the single remaining hemisphere.

00:06:52 What thousands of cases like this teach us

00:06:55 is that it’s a very malleable system that is simply trying

00:06:59 to accomplish the tasks in front of it by rewiring itself

00:07:04 with the available real estate.

00:07:06 How much of that is a quirk or a feature of evolution?

00:07:09 Like, how hard is it to engineer?

00:07:11 Because evolution took a lot of work.

00:07:14 Trillions of organisms had to die for it to create

00:07:18 this thing we have in our skull.

00:07:21 Like, because you said you kind of look forward to the idea

00:07:24 that we might be engineering our systems like this

00:07:27 in the future, like creating liveware systems.

00:07:30 How hard do you think is it to create systems like that?

00:07:33 Great question.

00:07:34 It has proven itself to be a difficult challenge.

00:07:37 What I mean by that is even though it’s taken evolution

00:07:40 a really long time to get where it is now,

00:07:44 all we have to do now is peek at the blueprints.

00:07:47 It’s just three pounds, this organ,

00:07:49 and we just figure out how to do it.

00:07:50 But that’s the part that I mean is a difficult challenge

00:07:52 because there are tens of thousands of neuroscientists,

00:07:57 we’re all poking and prodding and trying to figure this out,

00:07:59 but it’s an extremely complicated system.

00:08:00 But it’s only gonna be complicated until we figure out

00:08:03 the general principles.

00:08:05 Exactly like if you had a magic camera

00:08:09 you could look inside the nucleus of a cell

00:08:10 and you’d see hundreds of thousands of things

00:08:13 moving around or whatever,

00:08:14 and then it takes Crick and Watson to say,

00:08:16 oh, you know what, you’re just trying to maintain

00:08:17 the order of the base pairs and all the rest is details.

00:08:20 Then it simplifies it and we come to understand something.

00:08:23 That was my goal in Livewired,

00:08:25 which I wrote over 10 years, by the way,

00:08:26 is to try to distill things down to the principles

00:08:29 of what plastic systems are trying to accomplish.

00:08:34 But to even just linger, you said,

00:08:36 it’s possible to be born with just one hemisphere

00:08:38 and you still are able to function.

00:08:41 First of all, just to pause on that,

00:08:43 I mean, that’s kind of, that’s amazing.

00:08:47 I don’t know if people quite,

00:08:50 I mean, you kind of hear things here and there.

00:08:51 This is why I’m kind of, I’m really excited about your book

00:08:54 is I don’t know if there’s definitive sort of popular sources

00:09:00 to think about this stuff.

00:09:01 I mean, there’s a lot of, I think from my perspective,

00:09:05 what I heard is there have been, like, debates over decades

00:09:07 about how much neuroplasticity there is in the brain

00:09:12 and so on, and people have learned a lot of things

00:09:14 and now it’s converging towards people

00:09:16 understanding that there's much more plasticity

00:09:20 than people realize.

00:09:21 But just like linger on that topic,

00:09:23 like how malleable is the hardware of the human brain?

00:09:28 Maybe you said children at each stage of life.

00:09:32 Yeah, so here’s the whole thing.

00:09:33 I think part of the confusion about plasticity

00:09:36 has been that there are studies

00:09:38 at all sorts of different ages,

00:09:40 and then people might read that from a distance

00:09:42 and they think, oh, well, Fred didn’t recover

00:09:45 when half his brain was taken out

00:09:47 and so clearly you’re not plastic,

00:09:49 but then you do it with a child and they are plastic.

00:09:52 And so part of my goal here was to pull together

00:09:56 the tens of thousands of papers on this,

00:09:59 both from clinical work and from all the way down

00:10:02 to the molecular and understand

00:10:04 what are the principles here?

00:10:04 The principles are that plasticity diminishes,

00:10:08 that’s no surprise.

00:10:09 By the way, maybe I should just define plasticity.

00:10:11 It’s the ability of a system to mold into a new shape

00:10:14 and then hold that shape.

00:10:16 That’s why we make things that we call plastic

00:10:20 because they are moldable and they can hold that new shape,

00:10:23 like a plastic toy or something.

00:10:25 And so maybe we’ll use a lot of terms that are synonymous.

00:10:29 So something is plastic, something is malleable,

00:10:34 changing, livewired, the name of the book, are like synonyms.

00:10:38 So I’ll tell you, exactly right,

00:10:39 but I’ll tell you why I chose livewired

00:10:41 instead of plasticity.

00:10:42 So I use the term plasticity in the book, but sparingly,

00:10:47 because that was a term coined by William James

00:10:51 over a hundred years ago and he was, of course,

00:10:53 very impressed with plastic manufacturing

00:10:55 that you could mold something into shape

00:10:57 and then it holds that.

00:10:58 But that’s not what’s actually happening in the brain.

00:11:01 It’s constantly rewiring your entire life.

00:11:03 You never hit an end point.

00:11:06 The whole point is for it to keep changing.

00:11:08 So even in the few minutes of conversation

00:11:11 that we’ve been having, your brain is changing,

00:11:12 my brain is changing.

00:11:15 Next time I see your face, I will remember,

00:11:18 oh yeah, like that time Lex and I sat together

00:11:19 and we did these things.

00:11:21 I wonder if your brain will have like a Lex thing

00:11:24 going on for the next few months.

00:11:25 Like it’ll stay there until you get rid of it

00:11:27 because it was useful for now.

00:11:29 Yeah, no, I’ll probably never get rid of it.

00:11:30 Let’s say for some circumstance,

00:11:32 you and I don’t see each other for the next 35 years.

00:11:34 When I run into you, I’ll be like, oh yeah.

00:11:36 That looks familiar.

00:11:37 Yeah, yeah, we sat down for a podcast

00:11:40 back when there were podcasts.

00:11:42 Exactly.

00:11:43 Back when we lived outside virtual reality.

00:11:46 Exactly.

00:11:47 So you chose livewired over plastic.

00:11:50 Exactly, because plastic implies,

00:11:52 I mean, it’s the term that’s used in the field

00:11:54 and so that’s why we need to use it still for a while.

00:11:57 But yeah, it implies something gets molded into shape

00:11:59 and then holds that shape forever.

00:12:00 But in fact, the whole system is completely changing.

00:12:03 Then back to how malleable is the human brain

00:12:07 at each stage of life.

00:12:08 So what, just at a high level, is it malleable?

00:12:13 So yes, and plasticity diminishes.

00:12:17 But one of the things that I felt like

00:12:19 I was able to put together for myself

00:12:21 after reading thousands of papers on this issue

00:12:23 is that different parts of the brain

00:12:26 have different plasticity windows.

00:12:30 So for example, with the visual cortex,

00:12:33 that cements itself into place pretty quickly

00:12:35 over the course of a few years.

00:12:37 And I argue that’s because of the stability of the data.

00:12:41 In other words, what you’re getting in from the world,

00:12:43 you’ve got a certain number of angles, colors, shapes.

00:12:47 Essentially, the world is visually stable.

00:12:49 So that hardens around that data.

00:12:52 As opposed to, let’s say, the somatosensory cortex,

00:12:55 which is the part that’s taking information

00:12:56 from your body, or the motor cortex right next to it,

00:12:58 which is what drives your body.

00:13:00 The fact is, bodies are always changing.

00:13:01 You get taller over time, you get fatter, thinner,

00:13:05 over time, you might break a leg

00:13:06 and have to limp for a while, stuff like that.

00:13:08 So because the data there is always changing,

00:13:11 by the way, you might get on a bicycle,

00:13:12 you might get on a surfboard, things like that.

00:13:15 Because the data is always changing,

00:13:16 that stays more malleable.

00:13:19 And when you look through the brain,

00:13:20 you find that it appears to be this:

00:13:23 how stable the data is determines how fast

00:13:26 something hardens into place.

00:13:28 But the point is, different parts of the brain

00:13:30 harden into place at different times.
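
To make that principle concrete, here is a minimal toy sketch in Python. Everything in it, the running estimate, the surprise threshold, the decay constants, is an illustrative assumption rather than a model from the book; the point is only that a learning rate which decays while inputs stay stable hardens fast on stable data (vision) and stays plastic on changing data (the body):

```python
# Toy model: a region's plasticity "hardens" faster when its input data
# are stable. The function, thresholds, and constants are illustrative
# assumptions, not anything specified in the conversation.
import random

def simulate_region(data_stream, decay_when_stable=0.995):
    """Track a running estimate of the input; shrink the learning rate
    whenever new samples look like what has already been seen."""
    estimate, learning_rate = 0.0, 0.5
    for x in data_stream:
        surprise = abs(x - estimate)            # how novel is this input?
        estimate += learning_rate * (x - estimate)
        if surprise < 0.1:                      # stable data -> harden
            learning_rate *= decay_when_stable
        else:                                   # changing data -> stay plastic
            learning_rate = min(0.5, learning_rate * 1.05)
    return learning_rate

random.seed(0)
visual = [1.0 + random.gauss(0, 0.02) for _ in range(5000)]   # stable world
body = [random.gauss(0, 1.0) for _ in range(5000)]            # changing body
print("visual-like plasticity left:", simulate_region(visual))  # near zero
print("body-like plasticity left:  ", simulate_region(body))    # stays high
```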

00:13:31 Do you think it’s possible that,

00:13:35 depending on how much data you get on different sensors,

00:13:38 that it stays more malleable longer?

00:13:41 So like, if you look at different cultures

00:13:44 that experience, like if you keep your eyes closed,

00:13:47 or maybe you’re blind, I don’t know,

00:13:48 but let’s say you keep your eyes closed

00:13:51 for your entire life, then the visual cortex

00:13:55 might be much less malleable.

00:13:58 The reason I bring that up is like,

00:14:01 well maybe we’ll talk about brain computer interfaces

00:14:03 a little bit down the line, but is this,

00:14:08 is the malleability a genetic thing,

00:14:11 or is it more about the data, like you said, that comes in?

00:14:14 Ah, so the malleability itself is a genetic thing.

00:14:17 The big trick that Mother Nature discovered with humans

00:14:20 is make a system that’s really flexible,

00:14:24 as opposed to most other creatures to different degrees.

00:14:28 So if you take an alligator, it’s born,

00:14:31 its brain does the same thing every generation.

00:14:34 If you compare an alligator 100,000 years ago

00:14:36 to an alligator now, they’re essentially the same.

00:14:39 We, on the other hand, as humans,

00:14:41 drop into a world with a half-baked brain,

00:14:44 and what we require is to absorb the culture around us,

00:14:48 and the language, and the beliefs, and the customs,

00:14:50 and so on, that’s what Mother Nature has done with us,

00:14:55 and it’s been a tremendously successful trick;

00:14:57 we’ve taken over the whole planet as a result of this.

00:15:00 So that’s an interesting point,

00:15:01 I mean, just to linger on it, that,

00:15:03 I mean, this is a nice feature,

00:15:05 like if you were to design a thing

00:15:07 to survive in this world, do you put it at age zero

00:15:11 already equipped to deal with the world

00:15:14 in a hard coded way, or do you put it,

00:15:17 do you make it malleable and just throw it in,

00:15:19 take the risk that you’re maybe going to die,

00:15:23 but you’re going to learn a lot in the process,

00:15:25 and if you don’t die, you’ll learn a hell of a lot

00:15:27 to be able to survive in the environment.

00:15:29 So this is the experiment that Mother Nature ran,

00:15:31 and it turns out that, for better or worse, we’ve won.

00:15:34 I mean, yeah, we put other animals in the zoos,

00:15:37 and we, yeah, that’s right.

00:15:38 AI might do better.

00:15:39 Okay, fair enough, that’s true.

00:15:41 And maybe what the trick Mother Nature did

00:15:43 is just the stepping stone to AI, but.

00:15:46 So that’s a beautiful feature of the human brain,

00:15:50 that it’s malleable, but let’s,

00:15:52 on the topic of Mother Nature, what do we start with?

00:15:56 Like, how blank is the slate?

00:15:58 Ah, so it’s not actually a blank slate.

00:16:01 What it is, is terrific engineering that’s set up in there,

00:16:05 but much of that engineering has to do with,

00:16:07 okay, just make sure that things get to the right place.

00:16:10 For example, like the fibers from the eyes

00:16:12 getting to the visual cortex,

00:16:13 or all this very complicated machinery in the ear

00:16:15 getting to the auditory cortex, and so on.

00:16:17 So things, first of all, there’s that.

00:16:19 And then what we also come equipped with

00:16:21 is the ability to absorb language

00:16:24 and culture and beliefs, and so on.

00:16:27 So you’re already set up for that.

00:16:28 So no matter what you’re exposed to,

00:16:30 you will absorb some sort of language.

00:16:32 That’s the trick, is how do you engineer something

00:16:34 just enough that it’s then a sponge

00:16:37 that’s ready to take in and fill in the blanks?

00:16:40 How much of the malleability is hardware?

00:16:42 How much is software?

00:16:43 Is that useful at all in the brain?

00:16:45 So what are we talking about?

00:16:46 So there’s neurons, there’s synapses,

00:16:52 and all kinds of different synapses,

00:16:54 and there’s chemical communication,

00:16:55 like electrical signals,

00:16:57 and there’s chemical communication from the synapses.

00:17:04 I would say the software would be the timing

00:17:09 and the nature of the electrical signals, I guess,

00:17:11 and the hardware would be the actual synapses.

00:17:14 So here’s the thing, this is why I really, if we can,

00:17:16 I wanna get away from the hardware and software metaphor

00:17:19 because what happens is,

00:17:21 as activity passes through the system, it changes things.

00:17:25 Now, the thing that computer engineers

00:17:27 are really used to thinking about is synapses,

00:17:30 where two neurons connect.

00:17:31 Of course, each neuron connects with 10,000 of its neighbors,

00:17:33 but at a point where they connect,

00:17:36 what we’re all used to thinking about

00:17:37 is the changing of the strength of that connection,

00:17:40 the synaptic weight.

00:17:42 But in fact, everything is changing.

00:17:44 The receptor distribution inside that neuron

00:17:47 so that you’re more or less sensitive

00:17:49 to the neurotransmitter,

00:17:50 then the structure of the neuron itself

00:17:53 and what’s happening there,

00:17:54 all the way down to biochemical cascades inside the cell,

00:17:57 all the way down to the nucleus,

00:17:59 and for example, the epigenome,

00:18:00 which is these little proteins that are attached to the DNA

00:18:05 that cause conformational changes,

00:18:07 that cause more genes to be expressed or repressed.

00:18:11 All of these things are plastic.

00:18:13 The reason that most people only talk

00:18:15 about the synaptic weights

00:18:17 is because that’s really all we can measure well.

00:18:20 And all this other stuff is really, really hard to see

00:18:22 with our current technology.

00:18:23 So essentially, that just gets ignored.

00:18:25 But in fact, the system is plastic

00:18:27 at all these different levels.
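
For the one level we can measure well, the synaptic weight, the textbook toy model is a Hebbian update: co-active cells strengthen their connection. A minimal sketch, with the signals and learning rate as illustrative assumptions:

```python
# Minimal Hebbian update for the one plastic level we can measure well:
# "cells that fire together wire together." Everything here (signals,
# learning rate, 20 inputs) is an illustrative assumption; the other
# plastic levels (receptors, structure, epigenome) have no analog here.
import numpy as np

rng = np.random.default_rng(0)
pre = rng.normal(size=(1000, 20))   # 20 presynaptic signals over 1000 steps
drive = rng.random(20)              # hidden "true" influence of each input
post = pre @ drive                  # postsynaptic activity (toy)

w = np.zeros(20)                    # synaptic weights
eta = 1e-3                          # learning rate (assumed)
for x, y in zip(pre, post):
    w += eta * y * x                # Hebbian: co-active pairs strengthen

# Co-activity statistics alone recover which inputs matter most
print("correlation with true drive:", round(np.corrcoef(w, drive)[0, 1], 3))
```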

00:18:28 And my way of thinking about this

00:18:33 is an analogy to pace layers.

00:18:37 So pace layers is a concept that Stewart Brand

00:18:40 suggested about how to think about cities.

00:18:43 So you have fashion, which changes rapidly in cities.

00:18:46 You have governance, which changes more slowly.

00:18:52 You have the structure, the buildings of a city,

00:18:54 which changes more slowly, all the way down to nature.

00:18:57 You’ve got all these different layers of things

00:18:59 that are changing at different paces, at different speeds.

00:19:02 I’ve taken that idea and mapped it onto the brain,

00:19:05 which is to say you have some biochemical cascades

00:19:08 that are just changing really rapidly

00:19:10 when something happens, all the way down to things

00:19:11 that are more and more cemented in there.

00:19:14 And this actually allows us to understand a lot

00:19:19 about particular kinds of things that happen.

00:19:20 For example, one of the oldest,

00:19:22 probably the oldest rule in neurology

00:19:24 is called Ribot’s Law, which is that older memories

00:19:27 are more stable than newer memories.

00:19:29 So when you get old and demented,

00:19:32 you’ll be able to remember things from your young life.

00:19:36 Maybe you’ll remember this podcast,

00:19:37 but you won’t remember what you did

00:19:39 a month ago or a year ago.

00:19:41 And this is a very weird structure, right?

00:19:43 No other system works this way,

00:19:44 where older memories are more stable than newer memories.

00:19:48 But it’s because through time,

00:19:52 things get more and more cemented

00:19:53 into deeper layers of the system.

00:19:56 And so this is, I think, the way we have to think

00:19:59 about the brain, not as, okay, you’ve got neurons,

00:20:03 you’ve got synaptic weights, and that’s it.
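
One toy way to see how pace layers produce Ribot’s Law: encode each memory in a fast, fragile layer and slowly consolidate it into a slow, damage-resistant layer. A sketch under those assumptions (the two-layer split and the loss rates are made up for illustration):

```python
# Toy pace layers: a fast layer that encodes quickly but is fragile, and
# a slow layer that consolidates gradually and resists damage. Ribot's
# Law falls out: older, consolidated memories survive injury.
fast, slow = {}, {}

def experience(event):
    fast[event] = 1.0                               # rapid encoding

def consolidate(rate=0.1):
    for event, s in fast.items():                   # slow copy into deep layer
        slow[event] = slow.get(event, 0.0) + rate * (s - slow.get(event, 0.0))

def injury(fast_loss=0.9, slow_loss=0.1):
    for layer, loss in ((fast, fast_loss), (slow, slow_loss)):
        for event in layer:
            layer[event] *= 1 - loss

def recall(event):
    return fast.get(event, 0.0) + slow.get(event, 0.0)

experience("childhood home")
for _ in range(100):                                # decades of consolidation
    consolidate()
experience("last month")                            # recent, barely consolidated
consolidate()
injury()
print("old memory:", round(recall("childhood home"), 2))  # ~1.0, mostly intact
print("new memory:", round(recall("last month"), 2))      # ~0.19, mostly gone
```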

00:20:04 So, yeah, so the idea of liveware and livewired

00:20:08 is that it’s, yeah,

00:20:14 it’s a gradual spectrum between software and hardware.

00:20:18 And so the metaphor completely doesn’t make sense.

00:20:22 Cause like when you talk about software and hardware,

00:20:24 it’s really hard lines.

00:20:26 I mean, of course, software is unlike hardware,

00:20:31 so there’s two groups,

00:20:36 but in the software world,

00:20:37 there’s levels of abstractions, right?

00:20:39 There’s the operating system, there’s machine code,

00:20:42 and then it gets higher and higher levels.

00:20:44 But somehow that’s actually fundamentally different

00:20:46 than the layers of abstractions in the hardware.

00:20:50 But in the brain, it’s all like the same.

00:20:53 And I love the city, the city metaphor.

00:20:55 I mean, yeah, it’s kind of mind blowing

00:20:57 cause it’s hard to know what to think about that.

00:21:01 Like if I were to ask the question,

00:21:04 this is an important question for machine learning is,

00:21:07 how does the brain learn?

00:21:09 So essentially you’re saying that,

00:21:13 I mean, it just learns on all of these different levels

00:21:17 at all different paces.

00:21:19 Exactly right.

00:21:19 And as a result, what happens is

00:21:21 as you practice something, you get good at something,

00:21:24 you’re physically changing the circuitry,

00:21:26 you’re adapting your brain around the thing

00:21:30 that is relevant to you.

00:21:31 So let’s say you take up, do you know how to surf?

00:21:34 Nope.

00:21:35 Okay, great.

00:21:36 So let’s say you take up surfing now at this age.

00:21:39 What happens is you’ll be terrible at first,

00:21:41 you don’t know how to operate your body,

00:21:42 you don’t know how to read the waves, things like that.

00:21:43 And through time you get better and better.

00:21:45 What you’re doing is you’re burning that

00:21:46 into the actual circuitry of your brain.

00:21:48 You’re of course conscious when you’re first doing it,

00:21:50 you’re thinking about, okay, what am I doing?

00:21:52 What’s my body weight?

00:21:53 But eventually when you become a pro at it,

00:21:55 you are not conscious of it at all.

00:21:57 In fact, you can’t even unpack what it is that you did.

00:22:00 Think about riding a bicycle.

00:22:01 You can’t describe how you’re doing it,

00:22:04 you’re just doing it, you’re changing your balance

00:22:05 when you, you know, do this to come to a stop.

00:22:08 So this is what we’re constantly doing

00:22:10 is actually shaping our own circuitry

00:22:14 based on what is relevant for us.

00:22:16 Survival, of course, being the top thing that’s relevant.

00:22:18 But interestingly, especially with humans,

00:22:21 we have these particular goals in our lives,

00:22:23 computer science, neuroscience, whatever.

00:22:25 And so we actually shape our circuitry around that.

00:22:28 I mean, you mentioned this gets slower and slower with age,

00:22:31 but is there, like I think I’ve read and spoken offline,

00:22:36 even on this podcast with a developmental neurobiologist,

00:22:41 I guess would be the right terminology,

00:22:43 is like looking at the very early,

00:22:45 like from embryonic stem cells to the creation of the brain.

00:22:50 And like, that’s mind blowing how much stuff happens there.

00:22:55 So it’s very malleable at that stage.

00:22:59 And then, but after that,

00:23:01 at which point does it stop being malleable?

00:23:04 So that’s the interesting thing

00:23:06 is that it remains malleable your whole life.

00:23:08 So even when you’re an old person,

00:23:10 you’ll be able to remember new faces and names,

00:23:13 you’ll be able to learn new sorts of tasks.

00:23:15 And thank goodness,

00:23:16 cause the world is changing rapidly

00:23:17 in terms of technology and so on.

00:23:19 I just sent my mother an Alexa

00:23:21 and she figured out how to go on the settings

00:23:23 and do the thing.

00:23:24 And I was really impressed that she was able to do it.

00:23:26 So there are parts of the brain

00:23:28 that remain malleable their whole life.

00:23:30 The interesting part is that really your goal

00:23:34 is to make an internal model of the world.

00:23:36 Your goal is to say, okay,

00:23:39 the brain is trapped in silence and darkness,

00:23:42 and it’s trying to understand

00:23:43 how the world works out there, right?

00:23:46 I love that image.

00:23:47 Yeah, I guess it is.

00:23:48 Yeah.

00:23:49 You forget, it’s like this lonely thing

00:23:53 is sitting in its own container

00:23:54 and trying to actually, through a few sensors,

00:23:57 figure out what the hell’s going on.

00:23:58 You know what I sometimes think about

00:23:59 is that movie, The Martian with Matt Damon,

00:24:03 the, I mean, it was written in a book, of course,

00:24:05 but the movie poster shows Matt Damon

00:24:08 all alone on the red planet.

00:24:09 And I think, God, that’s actually what it’s like

00:24:12 to be inside your head and my head and anybody’s head

00:24:16 is that you’re essentially on your own planet in there.

00:24:20 And I’m essentially on my own planet.

00:24:21 And everyone’s got their own world

00:24:24 where you’ve absorbed all of your experiences

00:24:26 up to this moment in your life

00:24:28 that have made you exactly who you are

00:24:29 and same for me and everyone.

00:24:31 And we’ve got this very thin bandwidth of communication.

00:24:36 And I’ll say something like,

00:24:38 oh yeah, that tastes just like peaches.

00:24:40 And you’ll say, oh, I know what you mean.

00:24:42 But the experience, of course,

00:24:44 might be vastly different for us.

00:24:47 But anyway, yes.

00:24:48 So the brain is trapped in silence and darkness,

00:24:50 each one of us, and what it’s trying to do,

00:24:53 this is the important part,

00:24:54 it’s trying to make an internal model

00:24:55 of what’s going on out there,

00:24:56 as in how do I function in the world?

00:24:58 How do I interact with other people?

00:25:00 Do I say something nice and polite?

00:25:02 Do I say something aggressive and mean?

00:25:04 Do I, you know, all these things

00:25:05 that it’s putting together about the world.

00:25:08 And I think what happens when people get older and older,

00:25:12 it may not be that plasticity is diminishing.

00:25:15 It may be that their internal model essentially

00:25:18 has set itself up in a way where it says,

00:25:20 okay, I’ve pretty much got

00:25:21 a really good understanding of the world now,

00:25:22 and I don’t really need to change, right?

00:25:25 So when much older people find themselves

00:25:28 in a situation where they need to change,

00:25:30 they actually are able to do it.

00:25:33 It’s just that I think this notion

00:25:34 that we all have that plasticity diminishes

00:25:36 as we grow older is in part

00:25:38 because the motivation isn’t there.

00:25:41 But if you were 80 and you get fired from your job

00:25:44 and suddenly had to figure out

00:25:45 how to program a WordPress site or something,

00:25:47 you’d figure it out.

00:25:48 Got it.

00:25:49 So the capability, the possibility of change is there.

00:25:53 But then that’s the highest challenge,

00:25:57 the interesting challenge to this plasticity,

00:26:00 to this liveware system.

00:26:03 If we could talk about brain computer interfaces

00:26:06 and Neuralink, what are your thoughts

00:26:09 about the efforts of Elon Musk, Neuralink, BCI in general

00:26:13 in this regard, which is adding a machine,

00:26:18 a computer, the capability of a computer

00:26:21 to communicate with the brain

00:26:22 and the brain to communicate with a computer

00:26:24 at the very basic applications

00:26:26 and then like the futuristic kind of thoughts.

00:26:28 Yeah, first of all, it’s terrific

00:26:30 that people are jumping in and doing that

00:26:32 because it’s clearly the future.

00:26:34 The interesting part is our brains have pretty good methods

00:26:37 of interacting with technology.

00:26:38 So maybe it’s your fat thumbs on a cell phone or something,

00:26:41 but, or maybe it’s watching a YouTube video

00:26:44 and getting into your eye that way.

00:26:45 But we have pretty rapid ways of communicating

00:26:48 with technology and getting data.

00:26:49 So if you actually crack open the skull

00:26:52 and go into the inner sanctum of the brain,

00:26:56 you might be able to get a little bit faster,

00:26:58 but I’ll tell you, I’m not so sanguine

00:27:03 on the future of that as a business.

00:27:06 And I’ll tell you why.

00:27:07 It’s because there are various ways

00:27:10 of getting data in and out

00:27:11 and an open head surgery is a big deal.

00:27:14 Neurosurgeons don’t wanna do it

00:27:16 because there’s always risk of death

00:27:18 and infection on the table.

00:27:19 And also it’s not clear how many people would say,

00:27:23 I’m gonna volunteer to get something in my head

00:27:26 so that I can text faster, 20% faster.

00:27:29 So I think it’s, Mother Nature surrounds the brain

00:27:33 with this armored bunker of the skull

00:27:37 because it’s a very delicate material.

00:27:39 And there’s an expression in neurosurgery

00:27:44 about the brain is,

00:27:46 the person is never the same after you open up their skull.

00:27:49 Now, whether or not that’s true or whatever, who cares?

00:27:51 But it’s a big deal to do an open head surgery.

00:27:54 So what I’m interested in is how can we get information

00:27:57 in and out of the brain

00:27:58 without having to crack the skull open?

00:28:00 Without messing with the biological part,

00:28:03 directly connecting or messing

00:28:06 with the intricate biological thing that we got going on

00:28:10 and it seems to be working.

00:28:11 Yeah, exactly.

00:28:12 And by the way, where Neuralink is going,

00:28:15 which is wonderful, is going to be in patient cases.

00:28:18 It really matters for all kinds of surgeries

00:28:20 that a person needs,

00:28:21 whether for Parkinson’s or epilepsy or whatever.

00:28:24 It’s a terrific new technology

00:28:25 for essentially sewing electrodes in there

00:28:27 and getting more higher density of electrodes.

00:28:30 So that’s great.

00:28:30 I just don’t think as far as the future of BCI goes,

00:28:34 I don’t suspect that people will go in and say,

00:28:38 yeah, drill a hole in my head and do this.

00:28:40 Well, it’s interesting

00:28:41 because I think there’s a similar intuition

00:28:44 but say in the world of autonomous vehicles

00:28:46 that folks know how hard it is

00:28:49 and it seems damn impossible.

00:28:51 The similar intuition about,

00:28:52 I’m sticking on the Elon Musk thing

00:28:54 is just a good, easy example.

00:28:57 Similar intuition about colonizing Mars,

00:28:59 it like, if you really think about it,

00:29:01 it seems extremely difficult.

00:29:04 And almost, I mean, just technically difficult

00:29:08 to a degree where you wanna ask,

00:29:12 is it really worth doing, worth trying?

00:29:14 And then the same is applied with BCI.

00:29:17 But the thing about the future

00:29:21 is it’s hard to predict.

00:29:23 So the exciting thing to me with,

00:29:26 so once it does, once it’s successful,

00:29:29 it’s able to help patients,

00:29:31 it may be able to discover something very surprising

00:29:36 of our ability to directly communicate with the brain.

00:29:39 So exactly what you’re interested in is figuring out

00:29:42 how to play with this malleable brain,

00:29:46 but like help assist it somehow.

00:29:49 I mean, it’s such a compelling notion to me

00:29:52 that we’re now working on

00:29:53 all these exciting machine learning systems

00:29:55 that are able to learn from data.

00:29:59 And then if we can have this other brain

00:30:04 that’s a learning system,

00:30:05 that’s live wired on the human side

00:30:09 and for them to be able to communicate,

00:30:11 it’s like the self-play mechanism that

00:30:14 was able to beat the world champion at Go.

00:30:17 So they can play with each other,

00:30:19 the computer and the brain, like when you sleep.

00:30:22 I mean, there’s a lot of futuristic kind of things

00:30:24 that it’s just exciting possibilities,

00:30:27 but I hear you, we understand so little

00:30:30 about the actual intricacies of the communication

00:30:34 of the brain that it’s hard to find the common language.

00:30:38 Well, interestingly, the technologies that have been built

00:30:45 don’t actually require the perfect common language.

00:30:48 So for example, hundreds of thousands of people

00:30:51 are walking around with artificial ears

00:30:52 and artificial eyes,

00:30:53 meaning cochlear implants or retinal implants.

00:30:56 So this is, you take, essentially, a digital microphone,

00:31:00 you slip an electrode strip into the inner ear

00:31:03 and people can learn how to hear that way,

00:31:05 or you take an electrode grid

00:31:06 and you plug it into the retina at the back of the eye

00:31:09 and people can learn how to see that way.

00:31:11 The interesting part is those devices

00:31:13 don’t speak exactly the natural biological language,

00:31:17 they speak the dialect of Silicon Valley.

00:31:19 And it turns out that as recently as about 25 years ago,

00:31:24 a lot of people thought this was never gonna work.

00:31:26 They thought it wasn’t gonna work for that reason,

00:31:28 but the brain figures it out.

00:31:30 It’s really good at saying, okay, look,

00:31:32 there’s some correlation between what I can touch

00:31:34 and feel and hear and so on,

00:31:35 and the data that’s coming in,

00:31:37 or between I clap my hands and I have signals coming in there

00:31:41 and it figures out how to speak any language.
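
The rough shape of that cochlear-implant dialect: split the sound into frequency bands and drive one electrode per band with that band’s energy. A simplified sketch; real processors use more channels and far more sophisticated stimulation strategies:

```python
# Sketch of the cochlear-implant "dialect": band-pass the sound and map
# each band's energy to one electrode. Channel count, band edges, and
# the FFT shortcut are simplifying assumptions for illustration.
import numpy as np

def encode(audio, sample_rate=16000, n_electrodes=8):
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / sample_rate)
    # Log-spaced bands, roughly mimicking the cochlea's frequency map
    edges = np.logspace(np.log10(100), np.log10(8000), n_electrodes + 1)
    levels = np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                       for lo, hi in zip(edges[:-1], edges[1:])])
    return levels / max(levels.sum(), 1e-9)   # normalized stimulation levels

t = np.arange(800) / 16000                     # one 50 ms frame
tone = np.sin(2 * np.pi * 440 * t)             # a 440 Hz tone
print("electrode levels:", np.round(encode(tone), 2))  # energy in ~440 Hz band
```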

00:31:44 Oh, that’s fascinating.

00:31:45 So like no matter if it’s Neuralink,

00:31:50 so directly communicating with the brain,

00:31:51 or it’s a smartphone or Google Glass,

00:31:54 the brain figures out the efficient way of communication.

00:31:58 Well, exactly, exactly.

00:32:00 And what I propose is the potato head theory of evolution,

00:32:03 which is that all our eyes and nose and mouth and ears

00:32:08 and fingertips, all this stuff is just plug and play.

00:32:10 And the brain can figure out

00:32:12 what to do with the data that comes in.

00:32:14 And part of the reason that I think this is right,

00:32:17 and I care so deeply about this,

00:32:18 is when you look across the animal kingdom,

00:32:20 you find all kinds of weird peripheral devices plugged in,

00:32:23 and the brain figures out what to do with the data.

00:32:25 And I don’t believe that Mother Nature

00:32:27 has to reinvent the principles of brain operation each time

00:32:32 to say, oh, now I’m gonna have heat pits

00:32:33 to detect infrared.

00:32:34 Now I’m gonna have something

00:32:36 to detect electroreceptors on the body.

00:32:39 Now I’m gonna detect something

00:32:40 to pick up the magnetic field of the earth

00:32:42 with cryptochromes in the eye.

00:32:43 And so instead the brain says, oh, I got it.

00:32:45 There’s data coming in.

00:32:47 Is that useful?

00:32:48 Can I do something with it?

00:32:48 Oh, great, I’m gonna mold myself

00:32:50 around the data that’s coming in.

00:32:52 It’s kind of fascinating to think that,

00:32:55 we think of smartphones

00:32:56 and all this new technology as novel.

00:32:58 It’s totally novel, it’s outside of what evolution

00:33:02 ever intended, or what nature ever intended.

00:33:05 It’s fascinating to think that like the entirety

00:33:08 of the process of evolution is perfectly fine

00:33:10 and ready for the smartphone and the internet.

00:33:14 Like it’s ready.

00:33:15 It’s ready to be valuable to that.

00:33:17 And whatever comes to cyborgs, to virtual reality,

00:33:21 we kind of think like, this is, you know,

00:33:23 there’s all these like books written about what’s natural

00:33:27 and we’re like destroying our natural selves

00:33:29 by like embracing all this technology.

00:33:32 It’s kind of, you know,

00:33:34 probably not giving the brain enough credit.

00:33:36 Like this thing is just fine with new tech.

00:33:40 Oh, exactly, it wraps itself around.

00:33:41 And by the way, wait till you have kids.

00:33:43 You’ll see the ease with which they pick up on stuff.

00:33:46 And as Kevin Kelly said,

00:33:50 technology is what gets invented after you’re born.

00:33:54 But the stuff that already exists when you’re born,

00:33:56 that’s not even tech, that’s just background furniture.

00:33:58 Like the fact that the iPad exists for my son and daughter,

00:34:00 like that’s just background furniture.

00:34:02 So, yeah, it’s because we have

00:34:06 this incredibly malleable system,

00:34:08 that just absorbs whatever is going on in the world

00:34:10 and learns what to do with it.

00:34:11 So do you think, just to linger for a little bit more,

00:34:15 do you think it’s possible to co-adjust?

00:34:22 Like we’re kind of, you know,

00:34:25 for the machine to adjust to the brain,

00:34:27 for the brain to adjust to the machine.

00:34:29 I guess that’s what’s already happening.

00:34:31 Sure, that is what’s happening.

00:34:32 So for example, when you put electrodes

00:34:34 in the motor cortex to control a robotic arm

00:34:37 for somebody who’s paralyzed,

00:34:39 the engineers do a lot of work to figure out,

00:34:41 okay, what can we do with the algorithm here

00:34:42 so that we can detect what’s going on from these cells

00:34:45 and figure out how to best program the robotic arm to move

00:34:49 given the data that we’re measuring from these cells.

00:34:51 But also the brain is learning too.

00:34:54 So, you know, the paralyzed woman says,

00:34:57 wait, I’m trying to grab this thing.

00:34:58 And by the way, it’s all about relevance.

00:35:00 So if there’s a piece of food there and she’s hungry,

00:35:04 she’ll figure out how to get this food into her mouth

00:35:08 with the robotic arm because that is what matters.
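
On the engineering side of that loop, a common starting point is a linear decoder fit from recorded firing rates to intended hand velocity. A minimal sketch with synthetic data standing in for real recordings; actual pipelines typically use Kalman filters and recalibrate online while the brain adapts in parallel:

```python
# Minimal motor-decoder sketch: fit a linear map from firing rates to
# intended hand velocity by least squares. All sizes and noise levels
# are illustrative assumptions; synthetic data stands in for recordings.
import numpy as np

rng = np.random.default_rng(1)
n_cells, n_bins = 96, 2000                       # e.g., a 96-channel array
tuning = rng.normal(size=(n_cells, 2))           # each cell's velocity tuning
velocity = rng.normal(size=(n_bins, 2))          # intended (vx, vy) per bin
rates = velocity @ tuning.T + rng.normal(scale=0.5, size=(n_bins, n_cells))

# Decoder W maps measured rates back to velocity: rates @ W ~= velocity
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)
err = np.mean(np.linalg.norm(rates @ W - velocity, axis=1))
print(f"mean decoding error: {err:.3f}")         # small: intent is recoverable
```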

00:35:13 Well, that’s, okay, first of all,

00:35:15 that paints a really promising and beautiful,

00:35:17 for some reason, really optimistic picture

00:35:20 that, you know, our brain is able to adjust to so much.

00:35:26 You know, so many things happened this year, 2020,

00:35:29 that you think like, how are we ever going to deal with it?

00:35:32 And it’s somehow encouraging

00:35:35 and inspiring that like we’re going to be okay.

00:35:41 Well, that’s right.

00:35:41 I actually think, so 2020 has been an awful year

00:35:45 for almost everybody in many ways,

00:35:46 but the one silver lining has to do with brain plasticity,

00:35:50 which is to say we’ve all been on our, you know,

00:35:54 on our gerbil wheels, we’ve all been in our routines.

00:35:56 And, you know, as I mentioned,

00:35:58 our internal models are all about

00:36:00 how do you maximally succeed?

00:36:02 How do you optimize your operation

00:36:04 in this circumstance where you are, right?

00:36:07 And then all of a sudden, bang, 2020 comes,

00:36:09 we’re completely off our wheels.

00:36:10 We’re having to create new things all the time

00:36:14 and figure out how to do it.

00:36:15 And that is terrific for brain plasticity because,

00:36:18 and we know this because there are very large studies

00:36:23 on older people who stay cognitively active

00:36:26 their whole lives.

00:36:28 Some fraction of them have Alzheimer’s disease

00:36:31 physically, but nobody knows that when they’re alive.

00:36:34 Even though their brain is getting chewed up

00:36:36 with the ravages of Alzheimer’s,

00:36:38 cognitively they’re doing just fine.

00:36:40 Why?

00:36:40 It’s because they’re challenged all the time.

00:36:44 They’ve got all these new things going on,

00:36:45 all this novelty, all these responsibilities,

00:36:47 chores, social life, all these things happening.

00:36:49 And as a result, they’re constantly building new roadways,

00:36:52 even as parts degrade.

00:36:54 And that’s the only good news is that

00:36:57 we are in a situation where suddenly

00:36:59 we can’t just operate like automata anymore.

00:37:02 We have to think of completely new ways to do things.

00:37:05 And that’s wonderful.

00:37:07 I don’t know why this question popped into my head.

00:37:11 It’s quite absurd, but are we gonna be okay?

00:37:16 Yeah.

00:37:17 You said this is the promising silver lining

00:37:19 just from your own,

00:37:20 cause you’ve written about this and thought about this

00:37:22 outside of maybe even the plasticity of the brain,

00:37:25 but just this whole pandemic kind of changed the way

00:37:29 it knocked us out of this hamster wheel of habit.

00:37:35 A lot of people had to reinvent themselves.

00:37:39 Unfortunately, and I have a lot of friends

00:37:42 who either already have lost or are going to lose their business,

00:37:48 it’s basically taking the dreams that people have had

00:37:52 and saying this dream, this particular dream you’ve had

00:37:58 will no longer be possible.

00:38:00 So you have to find something new.

00:38:02 What are your, are we gonna be okay?

00:38:06 Yeah, we’ll be okay in the sense that,

00:38:08 I mean, it’s gonna be a rough time

00:38:09 for many or most people,

00:38:11 but in the sense that it is sometimes useful

00:38:18 to find that what you thought was your dream

00:38:20 was not the thing that you’re going to do.

00:38:24 This is obviously the plot in lots of Hollywood movies

00:38:26 that someone says, I’m gonna do this,

00:38:27 and then that gets foiled

00:38:29 and they end up doing something better.

00:38:31 But this is true in life.

00:38:32 I mean, in general, even though we plan our lives

00:38:38 as best we can, it’s predicated on our notion of,

00:38:42 okay, given everything that’s around me,

00:38:43 this is what’s possible for me next.

00:38:47 But it takes 2020 to knock you off that

00:38:49 where you think, oh, well, actually,

00:38:50 maybe there’s something I can be doing

00:38:52 that’s bigger, that’s better.

00:38:54 Yeah, you know, for me, one exciting thing,

00:38:56 and I just talked to Grant Sanderson.

00:38:59 I don’t know if you know who he is.

00:39:00 He’s 3Blue1Brown, it’s a YouTube channel.

00:39:03 If you see it, you would recognize it.

00:39:08 He’s like a really famous math guy,

00:39:11 and he’s a math educator,

00:39:12 and he does these incredible, beautiful videos.

00:39:15 And now I see sort of at MIT,

00:39:17 folks are struggling to try to figure out,

00:39:19 you know, if we do teach remotely,

00:39:21 how do we do it effectively?

00:39:23 So you have these world class researchers

00:39:27 and professors trying to figure out

00:39:30 how to put content online that teaches people.

00:39:33 And to me, a possible future of that is,

00:39:37 you know, Nobel Prize winning faculty become YouTubers.

00:39:42 Like that to me is so exciting, like what Grant said,

00:39:47 which is like the possibility of creating canonical videos

00:39:52 on the thing you’re a world expert in.

00:39:55 You know, there’s so many topics.

00:39:56 The world just doesn’t have them covered, you know. There’s faculty.

00:40:00 I mentioned Russ Tedrake.

00:40:02 There’s all these people in robotics

00:40:04 that are experts in a particular beautiful field

00:40:07 on which there’s only just papers.

00:40:09 There’s no popular book.

00:40:12 There’s no clean canonical video

00:40:16 showing the beauty of a subject.

00:40:18 And one possibility is they try to create that

00:40:22 and share it with the world.

00:40:25 This is the beautiful thing.

00:40:26 This of course has been happening for a while already.

00:40:28 I mean, for example, when I go and I give book talks,

00:40:31 often what’ll happen is some 13 year old

00:40:33 will come up to me afterwards and say something,

00:40:35 and I’ll say, my God, that was so smart.

00:40:37 Like, how did you know that?

00:40:38 And they’ll say, oh, I saw it on a TED Talk.

00:40:40 Well, what an amazing opportunity.

00:40:42 Here you got the best person in the world on subject X

00:40:46 giving a 15 minute talk as beautifully as he or she can.

00:40:51 And the 13 year old just grows up with that.

00:40:53 That’s just the mother’s milk, right?

00:40:55 As opposed to when we grew up,

00:40:57 you know, I had whatever homeroom teacher I had

00:41:00 and, you know, whatever classmates I had.

00:41:03 And hopefully that person knew what he or she was teaching

00:41:06 and often didn’t and, you know, just made things up.

00:41:08 So the opportunity has become extraordinary

00:41:12 to get the best of the world.

00:41:14 And the reason this matters, of course,

00:41:15 is because obviously, back to plasticity,

00:41:18 the way that we, the way our brain gets molded

00:41:22 is by absorbing everything from the world,

00:41:24 all of the knowledge and the data and so on that it can get,

00:41:28 and then springboarding off of that.

00:41:33 And we’re in a very lucky time now

00:41:34 because we grew up with a lot of just in case learning.

00:41:40 So, you know, just in case you ever need to know

00:41:42 these dates in Mongolian history, here they are.

00:41:44 But what kids are growing up with now, like my kids,

00:41:47 is tons of just in time learning.

00:41:48 So as soon as they’re curious about something,

00:41:50 they ask Alexa, they ask Google Home,

00:41:51 they get the answer right there

00:41:53 in the context of their curiosity.

00:41:54 The reason this matters is because for plasticity to happen,

00:41:59 you need to care, you need to be curious about something.

00:42:02 And this is something, by the way,

00:42:03 that the ancient Romans had noted.

00:42:06 They had outlined seven different levels of learning

00:42:08 and the highest level is when you’re curious about a topic.

00:42:10 But anyway, so kids now are getting tons

00:42:13 of just in time learning, and as a result,

00:42:15 they’re gonna be so much smarter than we are.

00:42:18 They’re just, and we can already see that.

00:42:19 I mean, my boy is eight years old, my girl is five.

00:42:22 But I mean, the things that he knows are amazing

00:42:25 because it’s not just him having to do

00:42:27 the rote memorization stuff that we did.

00:42:29 Yeah, it’s just fascinating what the brain,

00:42:32 what young brains look like now

00:42:33 because of all those TED Talks just loaded in there.

00:42:36 And there’s also, I mean, a lot of people, right,

00:42:39 kind of, there’s a sense that our attention span

00:42:42 is growing shorter, but it’s complicated

00:42:46 because for example, most people, majority of people,

00:42:50 it’s the 80 plus percent of people listen

00:42:53 to the entirety of these things,

00:42:54 two, three hours for the podcast,

00:42:56 long form podcasts are becoming more and more popular.

00:43:00 So like that’s, it’s all really giant complicated mess.

00:43:04 And the point is that the brain is able to adjust to it

00:43:07 and somehow like form a worldview

00:43:11 within this new medium of like information that we have.

00:43:17 You have like these short tweets

00:43:19 and you have these three, four hour podcasts

00:43:22 and you have Netflix movie.

00:43:24 I mean, it’s just, it’s adjusting to the entirety

00:43:27 and just absorbing it and taking it all in

00:43:30 and then pops up COVID that forces us all to be home

00:43:34 and it all just adjusts and figures it out.

00:43:39 Yeah, yeah, exactly.

00:43:40 It’s fascinating.

00:43:41 We’ve been talking about the brain

00:43:43 as if it’s something separate from the human

00:43:48 that carries it a little bit.

00:43:50 Like whenever you talk about the brain,

00:43:52 it’s easy to forget that that’s like, that’s us.

00:43:55 Like how much do you,

00:43:59 how much is the whole thing like predetermined?

00:44:04 Like how much is it already encoded in there?

00:44:08 And how much is it the, what’s the word,

00:44:17 the actions, the decisions, the judgments, the…

00:44:22 You mean like who you are?

00:44:23 Who you are.

00:44:24 Oh, yeah, yeah, okay, great question.

00:44:25 Right, so there used to be a big debate

00:44:27 about nature versus nurture.

00:44:28 And we now know that it’s always both.

00:44:31 You can’t even separate them

00:44:32 because you come to the table with a certain amount of nature,

00:44:35 for example, your whole genome and so on.

00:44:37 The experiences you have in the womb,

00:44:39 like whether your mother is smoking or drinking,

00:44:41 things like that, whether she’s stressed, so on.

00:44:43 Those all influence how you’re gonna pop out of the womb.

00:44:47 From there, everything is an interaction

00:44:50 between all of your experiences and the nature.

00:44:55 What I mean is, I think of it like a space time cone

00:44:59 where you have, you drop into the world

00:45:01 and depending on the experience that you have,

00:45:03 you might go off in this direction

00:45:04 or that direction or in that direction

00:45:05 because there’s interaction on the way.

00:45:08 Your experiences determine what happens

00:45:11 with the expression of your genes.

00:45:12 So some genes get repressed, some get expressed and so on.

00:45:15 And you actually become a different person

00:45:17 based on your experiences.

00:45:18 There’s a whole field called epigenomics,

00:45:21 which is, or epigenetics I should say,

00:45:24 which is about the epigenome.

00:45:26 And that is the layer that sits on top of the DNA

00:45:30 and causes the genes to express differently.

00:45:32 That is directly related to the experiences that you have.

00:45:35 So if, just as an example, they take rat pups

00:45:38 and one group is placed away from their parents

00:45:41 and the other group is groomed and licked

00:45:43 and taken good care of,

00:45:44 that changes their gene expression

00:45:46 for the rest of their life.

00:45:46 They go off in different directions

00:45:48 in this space time cone.

00:45:50 So yeah, this is of course why it matters

00:45:55 that we take care of children and pour money

00:45:58 into things like education and good childcare

00:46:00 and so on for children broadly,

00:46:04 because these formative years matter so much.

00:46:08 So is there a free will?

00:46:11 This is a great question.

00:46:13 I apologize for the absurd high level

00:46:16 philosophical questions.

00:46:17 No, no, these are my favorite kind of questions.

00:46:19 Here’s the thing, here’s the thing.

00:46:20 We don’t know.

00:46:21 If you ask most neuroscientists,

00:46:23 they’ll say that we can’t really think

00:46:26 of how you would get free will in there

00:46:28 because as far as we can tell, it’s a machine.

00:46:30 It’s a very complicated machine.

00:46:33 Enormously sophisticated, 86 billion neurons,

00:46:36 about the same number of glial cells.

00:46:38 Each of these things is as complicated

00:46:40 as the city of San Francisco.

00:46:41 Each neuron in your head has the entire human genome in it.

00:46:43 It’s expressing millions of gene products.

00:46:47 These are incredibly complicated biochemical cascades.

00:46:50 Each one is connected to 10,000 of its neighbors,

00:46:51 which means you have like half a quadrillion connections

00:46:54 in the brain.
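
(A quick order-of-magnitude check: 86 billion neurons × 10,000 connections each ≈ 8.6 × 10^14 synapses, so roughly a quadrillion; with more conservative counts of synapses per neuron you land near half a quadrillion.)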

00:46:55 So it’s an incredibly complicated thing,

00:46:58 but it fundamentally appears to just be a machine.

00:47:02 And therefore, if there’s nothing in it

00:47:04 that’s not being driven by something else,

00:47:07 then it seems it’s hard to understand

00:47:10 where free will would come from.

00:47:12 So that’s the camp that pretty much all of us fall into,

00:47:14 but I will say, our science is still quite young.

00:47:18 And I’m a fan of the history of science,

00:47:20 and the thing that always strikes me as interesting

00:47:22 is when you look back at any moment in science,

00:47:26 everybody believes something is true,

00:47:28 and they simply didn’t know about

00:47:31 what Einstein revealed or whatever.

00:47:33 And so who knows?

00:47:35 And at any moment in history,

00:47:38 they all feel like we’ve converged to the final answer.

00:47:40 Exactly, exactly.

00:47:41 Like all the pieces of the puzzle are there.

00:47:43 And I think that’s a funny illusion

00:47:45 that’s worth getting rid of.

00:47:47 And in fact, this is what drives good science

00:47:49 is recognizing that we don’t have most of the puzzle pieces.

00:47:52 So as far as the free will question goes, I don’t know.

00:47:55 At the moment, it seems, wow,

00:47:57 it’d be really impossible to figure out

00:47:58 how something else could fit in there,

00:48:00 but 100 years from now,

00:48:02 our textbooks might be very different than they are now.

00:48:05 I mean, could I ask you to speculate

00:48:07 where do you think free will could be squeezed into there?

00:48:11 Like, what’s that even,

00:48:14 is it possible that our brain just creates

00:48:16 kinds of illusions that are useful for us?

00:48:19 Or like what, where could it possibly be squeezed in?

00:48:24 Well, let me give a speculation answer

00:48:27 to your very nice question,

00:48:28 but don’t, and the listeners of this podcast,

00:48:32 don’t quote me on this.

00:48:33 Yeah, exactly.

00:48:33 I’m not saying this is what I believe to be true,

00:48:35 but let me just give an example.

00:48:36 I give this at the end of my book, Incognito.

00:48:38 So the whole book of Incognito is about

00:48:41 all of what's happening in the brain.

00:48:42 And essentially I’m saying, look,

00:48:44 here’s all the reasons to think

00:48:45 that free will probably does not exist.

00:48:47 But at the very end, I say, look,

00:48:50 imagine that you are,

00:48:53 imagine that you’re a Kalahari Bushman

00:48:56 and you find a radio in the sand

00:48:58 and you’ve never seen anything like this.

00:49:01 And you look at this radio and you realize

00:49:04 that when you turn this knob,

00:49:07 you hear voices coming from it.

00:49:08 So being a radio materialist,

00:49:11 you try to figure out like, how does this thing operate?

00:49:14 So you take off the back cover

00:49:15 and you realize there’s all these wires.

00:49:16 And when you take out some wires,

00:49:19 the voices get garbled or stop or whatever.

00:49:22 And so what you end up developing is a whole theory

00:49:24 about how this connection, this pattern of wires

00:49:26 gives rise to voices.

00:49:29 But it would never strike you that in distant cities,

00:49:31 there’s a radio tower and there’s invisible stuff beaming.

00:49:34 And that’s actually the origin of the voices.

00:49:36 And the wiring is just necessary for receiving them.

00:49:38 So I mentioned this just as a speculation,

00:49:42 to say, look, how would we know?

00:49:44 What we know about the brain for absolutely certain

00:49:46 is that when you damage pieces and parts of it,

00:49:48 things get jumbled up.

00:49:50 But how would you know if there’s something else going on

00:49:52 that we can’t see like electromagnetic radiation

00:49:55 that is what’s actually generating this?

00:49:58 Yeah, you paint a beautiful example

00:50:01 of how,

00:50:06 because we don't know most of how our universe works,

00:50:10 totally off base we might be with our science.

00:50:14 I mean, yeah, that's inspiring, that's beautiful.

00:50:19 It’s kind of terrifying, it’s humbling.

00:50:21 It’s all of the above.

00:50:23 And the important part just to recognize

00:50:26 is that of course we’re in the position

00:50:28 of having massive unknowns.

00:50:31 And we have of course the known unknowns

00:50:36 and that’s all the things we’re pursuing in our labs

00:50:38 and trying to figure out,

00:50:39 but there’s this whole space of unknown unknowns.

00:50:41 Things we haven’t even realized we haven’t asked yet.

00:50:44 Let me kind of ask a weird, maybe a difficult question,

00:50:47 one that has to do with this:

00:50:50 I’ve been recently reading a lot about World War II.

00:50:54 I’m currently reading a book I recommend for people,

00:50:56 which as a Jew has been difficult to read,

00:51:00 but the rise and fall of the Third Reich.

00:51:04 So let me just ask about like the nature of genius,

00:51:08 the nature of evil.

00:51:10 If we look at somebody like Einstein,

00:51:14 we look at Hitler, Stalin, modern day Jeffrey Epstein,

00:51:19 just folks who through their lives have done,

00:51:24 in Einstein's case, works of genius,

00:51:27 and in the others' cases, evil in this world.

00:51:30 What do we think about that in a livewired brain?

00:51:34 Like how do we think about these extreme people?

00:51:39 Here’s what I’d say.

00:51:41 This is a very big and difficult question,

00:51:43 but what I would say briefly on it is,

00:51:47 first of all, I saw a cover of Time Magazine some years ago

00:51:51 and it was a big sagittal slice of the brain

00:51:55 and it said something like, what makes us good and evil?

00:51:59 And there was a little spot pointing

00:52:00 to a picture of Gandhi

00:52:01 and a little spot pointing to Hitler.

00:52:03 And these Time Magazine covers always make me mad

00:52:05 because it’s so goofy to think that we’re gonna find

00:52:08 some spot in the brain or something.

00:52:10 Instead, the interesting part is because we’re livewired,

00:52:16 we are all about the world and the culture around us.

00:52:20 So somebody like Adolf Hitler got all this positive feedback

00:52:25 about what was going on, and his ideas got

00:52:28 crazier and crazier: let's set up death camps

00:52:31 and murder a bunch of people and so on.

00:52:34 Somehow he was getting positive feedback from that

00:52:37 and all these other people, they all spun each other up.

00:52:40 And you look at anything like, I mean, look at the Cultural

00:52:45 Revolution in China or the Russian Revolution

00:52:51 or things like this, where you look at these things and think,

00:52:52 my God, how do people all behave like this?

00:52:55 But it’s easy to see groups of people spinning themselves up

00:52:58 in particular ways where they all say,

00:53:00 well, would I have thought this was right

00:53:03 in a different circumstance?

00:53:04 I don’t know, but Fred thinks it’s right

00:53:05 and Steve thinks it’s right,

00:53:06 everyone around me seems to think it’s right.

00:53:08 And so part of the maybe downside of having a livewired brain

00:53:13 is that you can get crowds of people doing things as a group.

00:53:17 So it’s interesting to, we would pinpoint Hitler

00:53:20 as saying that’s the evil guy.

00:53:21 But in a sense, I think it was Tolstoy who said

00:53:24 the king becomes slave to the people.

00:53:30 In other words, Hitler was just a representation

00:53:34 of whatever was going on with that huge crowd

00:53:37 that he was surrounded with.

00:53:39 So I only bring that up to say that it’s very difficult

00:53:45 to say what it is about this person’s brain

00:53:48 or that person’s brain.

00:53:49 He obviously got feedback for what he was doing.

00:53:51 The other thing, by the way,

00:53:52 about what we often think of as being evil in society

00:53:57 is my lab recently published some work

00:54:01 on in groups and out groups,

00:54:04 which is a very important part of this puzzle.

00:54:08 So it turns out that we are very much engineered

00:54:13 to care about in groups versus out groups.

00:54:16 And this seems to be like a really fundamental thing.

00:54:18 So we did this experiment in my lab

00:54:20 where we brought people in and stuck them in the scanner.

00:54:23 And we, I don’t know if you noticed,

00:54:25 but we show them on the screen six hands

00:54:30 and the computer goes around and randomly picks a hand.

00:54:33 And then you see that hand gets stabbed

00:54:34 with a syringe needle.

00:54:36 So you actually see a syringe needle enter the hand

00:54:38 and come out.

00:54:39 And it’s really, what that does is that triggers

00:54:42 parts of the pain matrix,

00:54:44 this areas in your brain that are involved

00:54:46 in feeling physical pain.

00:54:47 Now, the interesting thing is it’s not your hand

00:54:48 that was stabbed.

00:54:49 So what you’re seeing is empathy.

00:54:51 This is you seeing someone else's hand get stabbed.

00:54:54 You feel like, oh God, this is awful, right?

00:54:56 Okay.

00:54:57 We contrast that by the way,

00:54:58 with somebody’s hand getting poked as a Q tip,

00:55:00 which is, you know, looks visually the same,

00:55:02 but you don’t have that same level of response.

00:55:06 Now what we do is we label each hand with a one word label,

00:55:10 Christian, Jewish, Muslim, atheist, Scientologist, Hindu.

00:55:14 And now the computer goes around, picks a hand,

00:55:16 stabs the hand.

00:55:17 And the question is, how much does your brain care

00:55:21 about all the people in your out group

00:55:23 versus the one label that happens to match you?

00:55:26 And it turns out for everybody across all religions,

00:55:29 they care much more about their in group

00:55:31 than their out group.

00:55:31 And when I say they care, what I mean is

00:55:33 you get a bigger response from their brain.

00:55:35 Everything’s the same.

00:55:36 It’s the same hands.

00:55:38 It’s just a one word label.

00:55:40 You care much more about your in group than your out group.

00:55:42 And I wish this weren’t true, but this is how humans are.

00:55:45 I wonder how fundamental that is,

00:55:47 or if it’s the emergent thing about culture.

00:55:53 Like if we lived alone with like,

00:55:55 if it’s genetically built into the brain,

00:55:57 like this longing for tribe.

00:56:00 So I’ll tell you, we addressed that.

00:56:02 So here’s what we did.

00:56:03 There are two, actually there are two other things

00:56:06 we did as part of this study

00:56:07 that I think matter for this point.

00:56:09 One is, so okay, so we show that you have

00:56:11 a much bigger response.

00:56:13 And by the way, this is not a cognitive thing.

00:56:14 This is a very low level basic response

00:56:17 to seeing pain in somebody, okay.

00:56:19 Great study by the way.

00:56:20 Thanks, thanks, thanks.

00:56:21 What we did next is we have it where we say,

00:56:24 okay, the year is 2025 and these three religions

00:56:28 are now in a war against these three religions.

00:56:30 And it’s all randomized, right?

00:56:31 But what you see is your thing and you have two allies now

00:56:34 against these others.

00:56:37 And now, over the course of many trials,

00:56:38 you see everybody get stabbed at different times.

00:56:41 And the question is, do you care more about your allies?

00:56:43 And the answer is yes.

00:56:44 Suddenly people who a moment ago,

00:56:45 you didn’t really care when they got stabbed.

00:56:47 Now, simply with this one word thing

00:56:49 that they’re now your allies, you care more about them.

00:56:52 But then what I wanted to do was look at

00:56:55 how ingrained is this or how arbitrary is it?

00:56:57 So we brought new participants in and we said,

00:57:01 here’s a coin, toss the coin.

00:57:02 If it’s heads, you’re an Augustinian.

00:57:04 If it’s a tails, you’re a Justinian.

00:57:06 These are totally made up.

00:57:08 Okay, so they toss it, they get whatever.

00:57:10 We give them a band that says Augustinian on it,

00:57:13 whatever tribe they’re in now, and they get in the scanner

00:57:16 and they see a thing on the screen that says

00:57:18 the Augustinians and Justinians are two warring tribes.

00:57:21 Then you see a bunch of hands,

00:57:22 some labeled Augustinian, some Justinian.

00:57:24 And now you care more about whichever team you’re on

00:57:27 than the other team, even though it’s totally arbitrary

00:57:29 and you know it’s arbitrary

00:57:30 because you’re the one who tossed the coin.

00:57:32 So it’s a state that’s very easy to find ourselves in.

00:57:36 In other words, just before walking in the door,

00:57:39 they’d never even heard of Augustinian versus Justinian

00:57:41 and now their brain is representing it

00:57:43 simply because they’re told they’re on this team.

00:57:46 You know, now I did my own personal study of this.

00:57:49 So once you’re an Augustinian, that tends to be sticky

00:57:55 because I’ve been a Packers fan,

00:57:57 grew to be a Packers fan my whole life.

00:57:59 Now when I’m in Boston with like the Patriots,

00:58:03 it’s been tough going for my livewired brain

00:58:05 to switch to the Patriots.

00:58:07 So once you become, it’s as interesting,

00:58:10 once the tribe is sticky.

00:58:12 Yeah, I’ll admit that’s true.

00:58:14 That’s it, you know.

00:58:15 You know, we never tried saying,

00:58:16 okay, now you’re a Justinian and you were an Augustinian.

00:58:19 We never saw how sticky it is.

00:58:21 But there are studies of this,

00:58:24 of monkey troops on some island.

00:58:30 And what happens is they look at the way monkeys behave

00:58:33 when they’re part of this tribe

00:58:34 and how they treat members of the other tribe of monkeys.

00:58:37 And then what they do, I’ve forgotten how they do that,

00:58:39 exactly, but they end up switching a monkey

00:58:42 so he ends up in the other troop.

00:58:43 And very quickly they end up becoming a part

00:58:45 of that other troop and hating and behaving badly

00:58:48 towards the original troop.

00:58:50 These are fascinating studies, by the way.

00:58:52 This is beautiful.

00:58:55 In your book, you have a good light bulb joke.

00:59:01 How many psychiatrists does it take to change a light bulb?

00:59:04 Only one, but the light bulb has to want to change.

00:59:07 Sorry.

00:59:09 I’m a sucker for a good light bulb joke.

00:59:11 Okay, so given, you know, I’ve been interested

00:59:15 in psychiatry my whole life, just maybe tangentially.

00:59:19 I’ve kind of early on dreamed to be a psychiatrist

00:59:22 until I understood what it entails.

00:59:25 But, you know, is there hope for psychiatry

00:59:31 for somebody else to help this livewired brain to adjust?

00:59:37 Oh yeah, I mean, in the sense that,

00:59:40 and this has to do with this issue

00:59:41 about us being trapped on our own planet.

00:59:43 Forget psychiatrists, just think of like

00:59:45 when you’re talking with a friend

00:59:47 and you say, oh, I’m so upset about this.

00:59:48 And your friend says, hey, just look at it this way.

00:59:53 You know, all we have access to under normal circumstances

00:59:55 is just the way we’re seeing something.

00:59:57 And so it’s super helpful to have friends and communities

01:00:02 and psychiatrists and so on to help things change that way.

01:00:05 So that’s how psychiatrists sort of helped us.

01:00:07 But more importantly, the role that psychiatrists have played

01:00:10 is that there’s this sort of naive assumption

01:00:13 that we all come to the table with,

01:00:15 which is that everyone is fundamentally just like us.

01:00:18 And when you’re a kid, you believe this entirely,

01:00:21 but as you get older and you start realizing,

01:00:22 okay, there’s something called schizophrenia

01:00:25 and that’s a real thing.

01:00:26 And to be inside that person’s head is totally different

01:00:29 than what it is to be inside my head. Or there's psychopathy.

01:00:32 And to be inside the psychopath’s head,

01:00:34 he doesn’t care about other people.

01:00:36 He doesn’t care about hurting other people.

01:00:37 He’s just doing what he needs to do to get what he needs.

01:00:40 That’s a different head.

01:00:42 There’s a million different things going on

01:00:45 and it is different to be inside those heads.

01:00:48 This is where the field of psychiatry comes in.

01:00:51 Now, I think it’s an interesting question

01:00:53 about the degree to which neuroscience is leaking into

01:00:57 and taking over psychiatry

01:00:58 and what the landscape will look like 50 years from now.

01:01:00 It may be that psychiatry as a profession changes a lot

01:01:05 or maybe goes away entirely,

01:01:07 and neuroscience will essentially be able

01:01:09 to take over some of these functions,

01:01:10 but it has been extremely useful to understand

01:01:14 the differences between how people behave and why

01:01:18 and what you can tell about what’s going on

01:01:19 inside their brain just based on observation

01:01:22 of their behavior.

01:01:25 This might be from years ago, but I'm not sure.

01:01:28 There’s an Atlantic article you’ve written

01:01:32 about moving away from a distinction

01:01:34 between neurological disorders,

01:01:36 quote unquote, brain problems,

01:01:39 and psychiatric disorders or quote unquote, mind problems.

01:01:43 So on that topic, how do you think about this gray area?

01:01:47 Yeah, this is exactly the direction things are evolving.

01:01:50 There was psychiatry and then there were guys and gals

01:01:54 in labs poking cells and so on.

01:01:55 Those were the neuroscientists.

01:01:57 But yeah, I think these are moving together

01:01:58 for exactly the reason you just cited.

01:02:00 And where this matters a lot,

01:02:02 the Atlantic article that I wrote

01:02:04 was called The Brain on Trial,

01:02:06 where this matters a lot is the legal system

01:02:09 because the way we run our legal system now,

01:02:12 and this is true everywhere in the world,

01:02:13 is someone shows up in front of the judge’s bench,

01:02:17 or let’s say there’s five people

01:02:18 in front of the judge’s bench,

01:02:20 and they’ve all committed the same crime.

01:02:21 What we do, because we feel like, hey, this is fair,

01:02:23 is we say, all right, you’re gonna get the same sentence.

01:02:25 You’ll all get three years in prison or whatever it is.

01:02:27 But in fact, brains can be so different.

01:02:29 This guy’s got schizophrenia, this guy’s a psychopath,

01:02:31 this guy’s tweaked down on drugs, and so on and so on,

01:02:33 that it actually doesn’t make sense to keep doing that.

01:02:37 And what we do in this country more than anywhere

01:02:41 in the world is we imagine that incarceration

01:02:44 is a one size fits all solution.

01:02:45 And you may know,

01:02:47 America has the highest incarceration rate

01:02:49 in the whole world in terms of the percentage

01:02:50 of our population we put behind bars.

01:02:52 So there’s a much more refined thing we can do

01:02:56 as neuroscience comes in

01:02:59 and has the opportunity to change the legal system.

01:03:01 Which is to say, this doesn’t let anybody off the hook.

01:03:03 It doesn’t say, oh, it’s not your fault, and so on.

01:03:06 But what it does is it changes the equation

01:03:09 so it’s not about, hey, how blameworthy are you?

01:03:12 But instead is about, hey, what do we do from here?

01:03:15 What’s the best thing to do from here?

01:03:16 So if you take somebody with schizophrenia

01:03:17 and you have them break rocks in the hot summer sun

01:03:21 in a chain gang, that doesn’t help their schizophrenia.

01:03:24 That doesn’t fix the problem.

01:03:25 If you take somebody with a drug addiction

01:03:28 who’s in jail for being caught with two ounces

01:03:30 of some illegal substance, and you put them in prison,

01:03:34 it doesn’t actually fix the addiction.

01:03:36 It doesn’t help anything.

01:03:38 Happily, what neuroscience and psychiatry

01:03:40 bring to the table is lots of really useful things

01:03:43 you can do with schizophrenia, with drug addiction,

01:03:45 things like this.

01:03:46 And that’s why, so I don’t know if you guys

01:03:49 better run a national law and profit

01:03:50 called the Center for Science and Law.

01:03:52 And it’s all about this intersection

01:03:53 of neuroscience and psychiatry.

01:03:55 It’s the intersection of neuroscience and legal system.

01:03:57 And we’re trying to implement changes

01:03:59 in every county, in every state.

01:04:02 I’ll just, without going down that rabbit hole,

01:04:04 I’ll just say one of the very simplest things to do

01:04:07 is to set up specialized court systems

01:04:09 where you have a mental health court

01:04:12 that has judges and juries with expertise

01:04:14 in mental illness.

01:04:15 Because if you go, by the way, to a regular court

01:04:17 and the person says, or the defense lawyer says,

01:04:21 this person has schizophrenia, most of the jury will say,

01:04:24 man, I call bullshit on that.

01:04:25 Why?

01:04:26 Because they don’t know about schizophrenia.

01:04:28 They don’t know what it’s about.

01:04:30 And it turns out people who know about schizophrenia

01:04:34 feel very differently as a juror

01:04:35 than someone who happens not to know anybody with

01:04:37 schizophrenia; those jurors think it's an excuse.

01:04:39 So you have judges and juries with expertise

01:04:42 in mental illness and they know the rehabilitative

01:04:44 strategies that are available.

01:04:45 That’s one thing.

01:04:46 Having a drug court where you have judges and juries

01:04:48 with expertise in rehabilitative strategies

01:04:50 and what can be done and so on.

01:04:51 A specialized prostitution court and so on.

01:04:54 All these different things.

01:04:55 By the way, it's very easy for counties

01:04:57 to implement this sort of thing.

01:04:59 And this is, I think, where this matters

01:05:01 to get neuroscience into public policy.

01:05:05 What’s the process of injecting expertise into this?

01:05:08 Yeah, I’ll tell you exactly what it is.

01:05:10 A county needs to run out of money first.

01:05:12 I’ve seen this happen over and over.

01:05:14 So what happens is a county has a completely full jail

01:05:17 and they say, you know what?

01:05:18 We need to build another jail.

01:05:19 And then they realize, God, we don’t have any money.

01:05:21 We can’t afford this.

01:05:22 We’ve got too many people in jail.

01:05:23 And that’s when they turn to,

01:05:25 God, we need something smarter.

01:05:26 And that’s when they set up specialized court systems.

01:05:28 Yeah.

01:05:30 We’re all function best when our back is against the wall.

01:05:34 And that’s what COVID is good for.

01:05:36 It’s because we’ve all had our routines

01:05:38 and we are optimized for the things we do.

01:05:40 And suddenly our backs are against the wall, all of us.

01:05:43 Yeah, it’s really, I mean,

01:05:44 one of the exciting things about COVID.

01:05:47 I mean, I’m a big believer in the possibility

01:05:51 of what government can do for the people.

01:05:56 And when it becomes too big of a bureaucracy,

01:05:59 it starts functioning poorly, it starts wasting money.

01:06:02 It’s nice to, I mean, COVID reveals that nicely.

01:06:07 And there are lessons to be learned about who gets elected

01:06:11 and who goes into government.

01:06:14 Hopefully this inspires talented

01:06:18 young people to go into government

01:06:20 to revolutionize different aspects of it.

01:06:23 Yeah, so that’s the positive silver lining of COVID.

01:06:28 I mean, I thought it’d be fun to ask you,

01:06:30 I don’t know if you’re paying attention

01:06:31 to the machine learning world and GPT3.

01:06:37 So GPT3 is this language model,

01:06:39 this neural network that

01:06:41 has 175 billion parameters.

01:06:44 So it’s very large and it’s trained

01:06:47 in an unsupervised way on the internet.

01:06:51 It just reads a lot of unstructured text

01:06:55 and it’s able to generate some pretty impressive things.

01:06:59 The human brain compared to that has about,

01:07:02 you know, a thousand times more synapses.
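
As a rough sanity check of that comparison, using commonly cited estimates of $10^{14}$ to $10^{15}$ synapses in the human brain:

$$
\frac{10^{14}\ \text{to}\ 10^{15}\ \text{synapses}}{1.75 \times 10^{11}\ \text{parameters}} \approx 600\ \text{to}\ 6000,
$$

so "a thousand times more" is in the right ballpark, with the caveat, made just below, that a synapse is far more complex than a single parameter.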

01:07:05 People get so upset when machine learning people

01:07:10 compare the brain, and we know synapses are different.

01:07:14 They're very different, very different.

01:07:16 But like, what do you think about GPT3?

01:07:20 Here’s what I think, here’s what I think, a few things.

01:07:22 What GPT3 is doing is extremely impressive,

01:07:25 but it’s very different from what the brain does.

01:07:27 So it’s a good impersonator, but just as one example,

01:07:33 everybody takes a passage that GPT3 has written

01:07:37 and they say, wow, look at this, and it’s pretty good, right?

01:07:40 But it’s already gone through a filtering process

01:07:42 of humans looking at it and saying,

01:07:43 okay, well that’s crap, that’s crap, okay.

01:07:45 Oh, here’s a sentence that’s pretty cool.

01:07:47 Now here’s the thing, human creativity

01:07:49 is about absorbing everything around it

01:07:51 and remixing that and coming up with stuff.

01:07:53 So in that sense, we’re sort of like GPT3,

01:07:55 you know, we’re remixing what we’ve gotten in before.

01:07:59 But we also know, we also have very good models

01:08:03 of what it is to be another human.

01:08:04 And so, you know, I don’t know if you speak French

01:08:08 or something, but I’m not gonna start speaking in French

01:08:09 because then you’ll say, wait, what are you doing?

01:08:11 I don’t understand it.

01:08:12 Instead, everything coming out of my mouth

01:08:14 is meant for your ears.

01:08:16 I know what you’ll understand.

01:08:18 I know the vocabulary that you know and don’t know.

01:08:20 I know what parts you care about.

01:08:23 That’s a huge part of it.

01:08:25 And so of all the possible sentences I could say,

01:08:29 I’m navigating this thin bandwidth

01:08:31 so that it’s something useful for our conversation.

01:08:34 Yeah, in real time, but also throughout your life.

01:08:36 I mean, we’re co evolving together.

01:08:39 We’re learning how to communicate together.

01:08:42 Exactly, but this is what GPT3 does not do.

01:08:46 All it’s doing is saying, okay,

01:08:47 I’m gonna take all these senses and remix stuff

01:08:49 and pop some stuff out.

01:08:51 But it doesn’t know how to make it

01:08:52 so that you, Lex, will feel like,

01:08:54 oh yeah, that’s exactly what I needed to hear.

01:08:56 That’s the next sentence that I needed to know about

01:08:59 for something.

01:09:00 Well, of course, that could be behind

01:09:02 all the impressive results we see.

01:09:04 The question is, if you raise the number of parameters,

01:09:07 whether it’s going to be after some…

01:09:09 It will not be.

01:09:11 It will not be.

01:09:11 Raising more parameters won’t…

01:09:14 Here’s the thing.

01:09:15 It’s not that I don’t think neural networks

01:09:16 can’t be like the human brain,

01:09:18 because I suspect they will be at some point, 50 years.

01:09:20 Who knows?

01:09:21 But what we are missing in artificial neural networks

01:09:26 is we’ve got this basic structure where you’ve got units

01:09:29 and you’ve got synapses that are connected.

01:09:32 And that’s great.

01:09:33 And it’s done incredibly mind blowing, impressive things,

01:09:35 but it’s not doing the same algorithms as the human brain.

01:09:40 So when I look at my children, as little kids,

01:09:43 as infants, they can do things that no GPT3 can do.

01:09:47 They can navigate a complex room.

01:09:50 They can navigate social conversation with an adult.

01:09:54 They can lie.

01:09:55 They can do a million things.

01:09:58 They are active thinkers in our world and doing things.

01:10:03 And this, of course, I mean, look,

01:10:04 we totally agree on how incredibly awesome

01:10:07 artificial neural networks are right now,

01:10:09 but we also know the things that they can’t do well,

01:10:12 like be generally intelligent,

01:10:14 do all these different things.

01:10:16 Reason about the world,

01:10:17 efficiently learn, efficiently adapt.

01:10:19 Exactly.

01:10:20 But it’s still the rate of improvement.

01:10:23 It’s, to me, it’s possible that we’ll be surprised.

01:10:28 I agree, possible we’ll be surprised.

01:10:30 But what I would assert,

01:10:33 and I’m glad I’m getting to say this on your podcast,

01:10:36 we can look back at this in two years and 10 years,

01:10:38 is that we’ve got to be much more sophisticated

01:10:41 than units and synapses between them.

01:10:44 Let me give you an example,

01:10:45 and this is something I talk about in LiveWired,

01:10:47 is despite the amazing impressiveness,

01:10:50 mind blowing impressiveness,

01:10:52 computers don’t have some basic things,

01:10:54 artificial neural networks don’t have some basic things

01:10:56 that we have, like caring about relevance, for example.

01:10:59 So as humans, we are confronted

01:11:02 with tons of data all the time,

01:11:03 and we only encode particular things

01:11:05 that are relevant to us.

01:11:07 We have this very deep sense of relevance

01:11:10 that I mentioned earlier is based on survival

01:11:11 at the most basic level,

01:11:12 but then all the things about my life and your life,

01:11:16 what’s relevant to you, that we encode.

01:11:19 This is very useful.

01:11:20 Computers at the moment don’t have that.

01:11:21 They don’t even have a yen to survive

01:11:24 and things like that.

01:11:24 So we filter out a bunch of the junk we don't need.

01:11:27 We’re really good at efficiently

01:11:29 zooming in on things we need.

01:11:32 Again, could be argued, you know,

01:11:34 let me put on my Freud hat.

01:11:35 Maybe it’s, I mean, that’s our conscious mind.

01:11:42 There’s no reason that neural networks

01:11:44 aren’t doing the same kind of filtration.

01:11:46 I mean, in the sense of what GPT3 is doing,

01:11:48 so there’s a priming step.

01:11:50 It’s doing an essential kind of filtration

01:11:53 when you ask it to generate tweets from,

01:11:58 I don’t know, from an Elon Musk or something like that.

01:12:00 It’s doing a filtration of it’s throwing away

01:12:04 all the parameters it doesn’t need for this task.

01:12:06 And it’s figuring out how to do that successfully.

01:12:09 And then ultimately it’s not doing a very good job

01:12:12 right now, but it’s doing a lot better job

01:12:14 than we expected.

01:12:15 But it won’t ever do a really good job.

01:12:17 And I’ll tell you why.

01:12:18 I mean, so let’s say we say,

01:12:20 hey, produce an Elon Musk tweet.

01:12:21 And we see like, oh, wow, it produced these three.

01:12:23 That’s great.

01:12:24 But again, we're not seeing the 3000 it produced

01:12:27 that didn’t really make any sense.

01:12:28 It’s because it has no idea what it is like to be a human.

01:12:32 And all the things that you might want to say

01:12:34 and all the reasons you wouldn’t,

01:12:35 like when you go to write a tweet,

01:12:37 you might write something you think,

01:12:38 ah, it’s not gonna come off quite right

01:12:39 in this modern political climate or whatever.

01:12:41 Like, you know, you can change things.

01:12:43 So.

01:12:44 And it somehow boils down to fear of mortality

01:12:46 and all of these human things at the end of the day,

01:12:49 all contained within that tweeting experience.

01:12:52 Well, interestingly, the fear of mortality

01:12:55 is at the bottom of this,

01:12:56 but you’ve got all these more things like,

01:12:58 you know, oh, I want to,

01:13:01 just in case the chairman of my department reads this,

01:13:03 I want it to come off well there.

01:13:04 Just in case my mom looks at this tweet,

01:13:05 I want to make sure she, you know, and so on.

01:13:08 So those are all the things that humans are able

01:13:10 to sort of throw into the calculation.

01:13:13 I mean.

01:13:14 What it requires, though,

01:13:16 is having a model of your chairman,

01:13:18 having a model of your mother,

01:13:19 having a model of, you know,

01:13:22 the person you want to go on a date with

01:13:24 who might look at your tweet and so on.

01:13:26 All these things are,

01:13:27 you’re running models of what it is like to be them.

01:13:30 So in terms of the structure of the brain,

01:13:34 again, this may be going into speculation land.

01:13:37 I hope you go along with me.

01:13:39 Yeah, of course.

01:13:39 Yep.

01:13:40 Is, okay, so the brain seems to be intelligent

01:13:45 and our AI systems aren't very intelligent currently.

01:13:48 So where do you think intelligence arises in the brain?

01:13:52 Like what is it about the brain?

01:13:55 So if you mean where, location-wise,

01:13:58 it's no single spot.

01:13:59 It would be equivalent to asking,

01:14:01 I'm looking at New York City,

01:14:04 where is the economy?

01:14:06 The answer is you can’t point to anywhere.

01:14:08 The economy is all about the interaction

01:14:09 of all of the pieces and parts of the city.

01:14:12 And that’s what, you know, intelligence,

01:14:14 whatever we mean by that in the brain

01:14:15 is interacting from everything going on at once.

01:14:18 In terms of a structure.

01:14:19 So look, humans are much smarter than fish,

01:14:23 maybe not dolphins, but dolphins are mammals, right?

01:14:26 I assert that what we mean by smarter

01:14:28 has to do with live wiring.

01:14:30 So what we mean when we say, oh, we’re smart

01:14:32 is, oh, we can figure out a new thing

01:14:33 and figure out a new pathway to get where we need to go.

01:14:36 And that’s because fish are essentially coming to the table

01:14:39 with, you know, okay, here’s the hardware, go swim, mate.

01:14:43 But we have the capacity to say,

01:14:46 okay, look, I’m gonna absorb, oh, oh,

01:14:47 but you know, I saw someone else do this thing

01:14:49 and I read once that you could do this other thing

01:14:51 and so on.

01:14:52 So do you think there’s, is there something,

01:14:54 I know these are mysteries,

01:14:56 but like architecturally speaking,

01:15:00 what feature of the brain, of the livewired aspect of it,

01:15:06 is really useful for intelligence?

01:15:08 So like, is it the ability of neurons to reconnect?

01:15:15 Like, is there something,

01:15:16 is there any lessons about the human brain

01:15:18 you think might be inspiring for us

01:15:21 to take into the artificial, into the machine learning world?

01:15:26 Yeah, I’m actually just trying to write some up on this now

01:15:29 called, you know, if you wanna build a robot,

01:15:31 start with the stomach.

01:15:32 And what I mean by that, what I mean by that is

01:15:35 a robot has to care, it has to have hunger,

01:15:37 it has to care about surviving, that kind of thing.

01:15:40 Here’s an example.

01:15:41 So the penultimate chapter of my book,

01:15:44 I titled The Wolf and the Mars Rover.

01:15:46 And I just look at this simple comparison

01:15:48 of you look at a wolf, it gets its leg caught in a trap.

01:15:52 What does it do?

01:15:53 It gnaws its leg off,

01:15:55 and then it figures out how to walk on three legs.

01:15:58 No problem.

01:15:59 Now, the Mars Rover Curiosity got its front wheel stuck

01:16:02 in some Martian soil, and it died.

01:16:05 This project that cost billions of dollars died

01:16:09 because it got its wheel stuck.

01:16:09 Wouldn’t it be terrific if we could build a robot

01:16:12 that chewed off its front wheel and figured out

01:16:15 how to operate with a slightly different body plan?

01:16:17 That’s the kind of thing that we wanna be able to build.

01:16:21 And to get there, what we need,

01:16:23 the whole reason the wolf is able to do that

01:16:25 is because its motor and somatosensory systems

01:16:27 are livewired.

01:16:28 So it says, oh, you know what?

01:16:29 Turns out we’ve got a body plan that’s different

01:16:31 than what I thought a few minutes ago,

01:16:34 but I have a yen to survive and I care about relevance,

01:16:38 which in this case is getting to food,

01:16:40 getting back to my pack and so on.

01:16:42 So I’m just gonna figure out how to operate with this.

01:16:44 Oh, whoops, that didn’t work.

01:16:46 Oh, okay, I’m kind of getting it to work.

01:16:48 But the Mars Rover doesn’t do that.

01:16:49 It just says, oh geez, I was preprogrammed

01:16:51 with four wheels, now I have three, I'm screwed.

01:16:53 Yeah, you know, I don’t know if you’re familiar

01:16:55 with a philosopher named Ernest Becker.

01:17:00 He wrote a book called The Denial of Death.

01:17:03 And there's a few psychologists, Sheldon Solomon,

01:17:07 I think I just spoke with him on this podcast,

01:17:09 who developed terror management theory,

01:17:12 which builds on Ernest Becker, the philosopher

01:17:17 who basically said that fear of mortality

01:17:18 is at the core of it.

01:17:18 Yeah.

01:17:19 And so I don’t know if it sounds compelling as an idea

01:17:23 that all of the civilization we’ve constructed

01:17:26 is based on this, but it’s.

01:17:29 I’m familiar with his work.

01:17:30 Here’s what I think.

01:17:31 I think that yes, fundamentally this desire to survive

01:17:35 is at the core of it, I would agree with that.

01:17:37 But how that expresses itself in your life

01:17:40 ends up being very different.

01:17:41 The reason you do what you do is, I mean,

01:17:45 you could list the 100 reasons why you chose

01:17:47 to write your tweet this way and that way.

01:17:49 And it really has nothing to do with the survival part.

01:17:51 It has to do with, you know, trying to impress fellow humans

01:17:53 and surprise them and say something.

01:17:55 Yeah, so many things built on top of each other,

01:17:56 but it’s fascinating to think

01:17:58 that in artificial intelligence systems,

01:18:00 we wanna be able to somehow engineer this drive

01:18:05 for survival, for immortality.

01:18:08 I mean, because as humans, we’re not just about survival,

01:18:11 we’re aware of the fact that we’re going to die,

01:18:14 which is a very kind of, we’re aware of like space time.

01:18:17 Most people aren’t, by the way.

01:18:18 Aren’t?

01:18:19 Aren’t.

01:18:20 Confucius said, he said, each person has two lives.

01:18:25 The second one begins when you realize

01:18:28 that you have just one.

01:18:29 Yeah.

01:18:30 But most people, it takes a long time

01:18:31 for most people to get there.

01:18:32 I mean, you could argue this kind of Freudian thing,

01:18:34 which Ernest Becker argues, is they actually figured it out

01:18:41 early on and the terror they felt

01:18:44 was like the reason it’s been suppressed.

01:18:47 And the reason most people, when I ask them

01:18:49 about whether they’re afraid of death,

01:18:50 they basically say no.

01:18:53 They basically say like, I'm afraid I won't get to,

01:18:56 like, submit the paper before I die.

01:18:59 Like they kind of see, they see death

01:19:01 as a kind of inconvenient deadline

01:19:04 for a particular set of things, like a book you're writing.

01:19:08 As opposed to like, what the hell?

01:19:10 This thing ends at any moment.

01:19:14 Like most people, as I’ve encountered,

01:19:17 do not meditate on the idea that like right now

01:19:20 you could die.

01:19:21 Like right now, like in the next five minutes,

01:19:26 it could be all over and, you know, meditate on that idea.

01:19:29 I think that somehow brings you closer

01:19:32 to like the core of the motivations

01:19:36 and the core of the human condition.

01:19:40 I think it might be the core, but like I said,

01:19:41 it is not what drives us day to day.

01:19:43 Yeah, there’s so many things on top of it,

01:19:45 but it is interesting.

01:19:46 I mean, as the ancient poet said,

01:19:49 death whispers at my ear, live, for I come.

01:19:53 So it’s, it is certainly motivating

01:19:56 when we think about that.

01:19:58 Okay, I’ve got some deadline.

01:19:59 I don’t know exactly when it is,

01:20:00 but I better make stuff happen.

01:20:02 It is motivating, but I don’t think,

01:20:04 I mean, I know for just speaking for me personally,

01:20:06 that’s not what motivates me day to day.

01:20:08 It’s instead, oh, I want to get this, you know,

01:20:13 program up and running before this,

01:20:14 or I want to make sure my coauthor isn’t mad at me

01:20:17 because I haven’t gotten this in,

01:20:18 or I don’t want to miss this grant deadline,

01:20:19 or, you know, whatever the thing is.

01:20:21 Yeah, it’s too distant in a sense.

01:20:24 Nevertheless, it is good to reconnect.

01:20:26 But for the AI systems, none of that is there.

01:20:31 Like a neural network does not fear its mortality.

01:20:34 And that seems to be somehow

01:20:37 fundamentally missing the point.

01:20:39 I think that’s missing the point,

01:20:40 but I wonder, it’s an interesting speculation

01:20:42 about whether you can build an AI system

01:20:43 that is much closer to being a human

01:20:45 without the mortality and survival piece,

01:20:48 but just the thing of relevance,

01:20:51 just I care about this versus that.

01:20:52 Right now, if you have a robot roll into the room,

01:20:54 it’s going to be frozen

01:20:55 because it doesn’t have any reason to go there versus there.

01:20:57 It doesn’t have any particular set of things

01:21:02 about this is how I should navigate my next move

01:21:05 because I want something.

01:21:07 Yeah, the thing about humans

01:21:10 is they seem to generate goals.

01:21:13 They’re like, you said livewired.

01:21:15 I mean, it’s very flexible in terms of the goals

01:21:19 and creative in terms of the goals we generate

01:21:21 when we enter a room.

01:21:23 You show up to a party without a goal,

01:21:25 usually, and then you figure it out along the way.

01:21:27 Yes, but this goes back to the question about free will,

01:21:29 which is when I walk into the party,

01:21:33 if you rewound it 10,000 times,

01:21:35 would I go and talk to that couple over there

01:21:37 versus that person?

01:21:38 Like, I might do this exact same thing every time

01:21:41 because I’ve got some goal stack and I think,

01:21:44 okay, well, at this party,

01:21:45 I really want to meet these kind of people

01:21:47 or I feel awkward or whatever my goals are.

01:21:51 By the way, so there was something

01:21:52 that I meant to mention earlier.

01:21:54 If you don’t mind going back,

01:21:56 which is this, when we were talking about BCI.

01:21:59 So I don’t know if you know this,

01:22:00 but what I’m spending 90% of my time doing now

01:22:02 is running a company.

01:22:03 Do you know about this?

01:22:04 Yes, I wasn’t sure what the company is involved in.

01:22:08 Right, so. Can you talk about it?

01:22:09 Yeah, yeah.

01:22:10 So when it comes to the future of BCI,

01:22:15 you can put stuff into the brain invasively,

01:22:18 but my interest has been how you can get data streams

01:22:22 into the brain noninvasively.

01:22:24 So I run a company called Neosensory

01:22:26 and what we build is this little wristband.

01:22:29 We’ve built this in many different form factors.

01:22:30 Oh, wow, that’s it?

01:22:31 Yeah, this is it.

01:22:32 And it’s got these vibratory motors in it.

01:22:35 So these things, as I’m speaking, for example,

01:22:38 it’s capturing my voice and running algorithms

01:22:41 and then turning that into patterns of vibration here.

01:22:44 So people who are deaf, for example,

01:22:48 learn to hear through their skin.

01:22:50 So the information is getting up to their brain this way

01:22:54 and they learn how to hear.
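
As a rough illustration of that sound-to-touch pipeline, here is a minimal sketch; the frame length, the band splitting, and the four-motor mapping are illustrative assumptions, not Neosensory's actual algorithm:

```python
import numpy as np

N_MOTORS = 4           # vibratory motors on the wristband (assumed)
SAMPLE_RATE = 16000    # microphone sample rate in Hz (assumed)

def frame_to_vibrations(frame):
    """Map one short audio frame to per-motor vibration intensities."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    # Split the spectrum into one frequency band per motor.
    bands = np.array_split(spectrum, N_MOTORS)
    energies = np.array([band.sum() for band in bands])
    # Normalize to 0..1 drive levels for the motors.
    peak = energies.max()
    return energies / peak if peak > 0 else energies

# Example: a 50 ms frame containing a 440 Hz tone mostly
# drives the lowest-frequency motor.
t = np.arange(int(0.05 * SAMPLE_RATE)) / SAMPLE_RATE
print(frame_to_vibrations(np.sin(2 * np.pi * 440 * t)))
```

On this view the brain's job is the same as with the ear: find the correlations between these patterns and the world and turn them into an experience.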

01:22:55 So it turns out on day one, people are pretty good,

01:22:58 like better than you’d expect at being able to say,

01:23:00 oh, that’s weird, was that a dog barking?

01:23:02 Was that a baby crying?

01:23:03 Was that a door knock, a doorbell?

01:23:04 Like people are pretty good at it,

01:23:06 but with time they get better and better

01:23:09 and what it becomes is a new qualia.

01:23:12 In other words, a new subjective internal experience.

01:23:15 So on day one, they say, whoa, what was that?

01:23:18 Oh, oh, that was the dog barking.

01:23:20 But by three months later, they say,

01:23:23 oh, there’s a dog barking somewhere.

01:23:24 Oh, there’s the dog.

01:23:25 That’s fascinating.

01:23:26 And by the way, that’s exactly how you learn

01:23:27 how to use your ears.

01:23:29 So of course you don’t remember this,

01:23:30 but when you were an infant, all you have is

01:23:33 your eardrum vibrating, causing spikes to go down

01:23:36 your auditory nerves and impinge on your auditory cortex.

01:23:40 Your brain doesn’t know what those mean automatically,

01:23:43 but what happens is you learn how to hear

01:23:44 by looking for correlations.

01:23:46 You clap your hands as a baby,

01:23:48 you look at your mother’s mouth moving

01:23:50 and that correlates with what’s going on there.

01:23:53 And eventually your brain says, all right,

01:23:54 I’m just gonna summarize this as an internal experience,

01:23:57 as a conscious experience.

01:23:59 And that’s exactly what happens here.

01:24:01 The weird part is that you can feed data into the brain,

01:24:04 not through the ears, but through any channel

01:24:06 that gets there.

01:24:07 As long as the information gets there,

01:24:08 your brain figures out what to do with it.

01:24:10 That’s fascinating.

01:24:11 Like expanding the set of sensors,

01:24:14 it could be arbitrarily, yeah,

01:24:19 it could expand arbitrarily, which is fascinating.

01:24:21 Well, exactly.

01:24:22 And by the way, the reason I use this skin,

01:24:24 there’s all kinds of cool stuff going on

01:24:26 in the AR world with glasses.

01:24:28 But the fact is your eyes are overtaxed

01:24:29 and your ears are overtaxed

01:24:30 and you need to be able to see and hear other stuff.

01:24:33 But you’re covered with the skin,

01:24:34 which is this incredible computational material

01:24:38 with which you can feed information.

01:24:39 And we don’t use our skin for much of anything nowadays.

01:24:42 My joke in the lab is that I say,

01:24:44 we don’t call this the waste for nothing.

01:24:45 Because originally we built this as the vest

01:24:47 and you’re passing in all this information that way.

01:24:51 And what I’m doing here with the deaf community

01:24:56 is what’s called sensory substitution,

01:24:59 where I’m capturing sound and I’m just replacing the ears

01:25:02 with the skin and that works.

01:25:05 One of the things I talk about in Livewired

01:25:07 is sensory expansion.

01:25:09 So what if you took something like your visual system,

01:25:11 which picks up on a very thin slice

01:25:13 of the electromagnetic spectrum,

01:25:15 and you could see infrared or ultraviolet.

01:25:18 So we’ve hooked that up, infrared and ultraviolet detectors,

01:25:21 and I can feel what’s going on.

01:25:22 So just as an example, the first night I built the infrared,

01:25:25 one of my engineers built it, the infrared detector,

01:25:27 I was walking in the dark between two houses

01:25:29 and suddenly I felt all this infrared radiation.

01:25:31 I was like, where’s that come from?

01:25:32 And I just followed my wrist and I found an infrared camera,

01:25:37 a night vision camera that was there,

01:25:40 and I immediately knew, oh, there's that thing there.

01:25:40 Of course, I would have never seen it,

01:25:42 but now it’s just part of my reality.

01:25:45 That’s fascinating.

01:25:46 Yeah, and then of course,

01:25:46 what I’m really interested in is sensory addition.

01:25:49 What if you could pick up on stuff

01:25:51 that isn’t even part of what we normally pick up on,

01:25:55 like the magnetic field of the earth

01:25:57 or Twitter or stock market or things like that.

01:26:00 Or the, I don’t know, some weird stuff

01:26:02 like the moods of other people or something like that.

01:26:04 Sure, now what you need is a way to measure this.

01:26:06 So as long as there’s a machine that can measure it,

01:26:08 it’s easy, it’s trivial to feed this in here

01:26:09 and it comes to be part of your reality.

01:26:14 It’s like you have another sensor.

01:26:16 And that kind of thing is without,

01:26:19 if you look at Neuralink,

01:26:20 I forgot how you put it, but it was eloquent,

01:26:23 without cutting into the brain, basically.

01:26:26 Yeah, exactly, exactly.

01:26:27 So this costs, at the moment, $399.

01:26:30 That’s not gonna kill you.

01:26:32 Yeah, it’s not gonna kill you.

01:26:33 You just put it on and when you’re done, you take it off.

01:26:36 Yeah, and so, and the name of the company, by the way,

01:26:39 is Neosensory for new senses, because the whole idea is.

01:26:42 Beautiful, that’s.

01:26:43 You can, as I said, you come to the table

01:26:45 with certain plug and play devices and then that’s it.

01:26:48 Like I can pick up on this little bit

01:26:49 of the electromagnetic radiation,

01:26:50 you can pick up on this little frequency band

01:26:53 for hearing and so on, but I’m stuck there

01:26:56 and there’s no reason we have to be stuck there.

01:26:57 We can expand our umwelt by adding new senses, yeah.

01:27:01 What’s umwelt?

01:27:02 Oh, I’m sorry, the umwelt is the slice of reality

01:27:05 that you pick up on.

01:27:06 So each animal has its own umwelt.

01:27:09 Yeah, exactly.

01:27:10 Nice.

01:27:11 I’m sorry, I forgot to define it before.

01:27:12 It’s such an important concept, which is to say,

01:27:17 for example, if you are a tick,

01:27:19 you pick up on odor, butyric acid,

01:27:22 and you pick up on temperature, that's it.

01:27:24 That's how you construct your reality,

01:27:25 with those two sensors.

01:27:26 If you are a blind echolocating bat,

01:27:28 you’re picking up on air compression waves coming back,

01:27:31 you know, echolocation.

01:27:32 If you are the black ghost knife fish,

01:27:34 you’re picking up on changes in the electrical field

01:27:37 around you with electroreception.

01:27:40 That’s how they swim around

01:27:41 and tell there’s a rock there and so on.

01:27:43 But that’s all they pick up on.

01:27:45 That’s their umwelt.

01:27:47 That’s the signals they get from the world

01:27:49 from which to construct their reality.

01:27:51 And they can be totally different umwelts.

01:27:53 That’s fantastic.

01:27:54 And so our human umwelt is, you know,

01:27:57 we’ve got little bits that we can pick up on.

01:27:59 One of the things I like to do with my students

01:28:01 is talk about, imagine that you are a bloodhound dog, right?

01:28:05 You are a bloodhound dog with a huge snout

01:28:07 with 200 million scent receptors in it.

01:28:09 And your whole world is about smelling.

01:28:11 You know, you’ve got slits in your nostrils,

01:28:13 for taking big nosefuls of air and so on.

01:28:15 Do you have a dog?

01:28:16 Nope, used to.

01:28:17 Used to, okay, right.

01:28:18 So you know, you walk your dog around

01:28:19 and your dog is smelling everything.

01:28:21 The whole world is full of signals

01:28:22 that you do not pick up on.

01:28:23 And so imagine if you were that dog

01:28:25 and you looked at your human master and thought,

01:28:26 my God, what is it like to have

01:28:28 the pitiful little nose of a human?

01:28:30 How could you not know that there’s a cat 100 yards away

01:28:32 or that your friend was here six hours ago?

01:28:34 And so the idea is, because we're stuck in our own umwelt,

01:28:37 because we have these little pitiful noses,

01:28:39 we think, okay, well, yeah, we’re seeing reality,

01:28:41 but you can have very different sorts of realities

01:28:44 depending on the peripheral plug and play devices

01:28:46 you’re equipped with.

01:28:47 It’s fascinating to think that like,

01:28:49 if we’re being honest, probably our own belt

01:28:52 is, you know, some infinitely tiny percent

01:28:57 of the possibilities of how you can sense,

01:29:01 quote unquote, reality, even if you could,

01:29:04 I mean, there’s a guy named Donald Hoffman, yeah,

01:29:08 who basically says we’re really far away from reality

01:29:13 in terms of our ability to sense anything.

01:29:15 Like we’re very, we’re almost like we’re floating out there

01:29:20 that’s almost like completely attached

01:29:22 to the actual physical reality.

01:29:24 It’s fascinating that we can have extra senses

01:29:27 that could help us get a little bit closer.

01:29:29 Exactly, and by the way, this has been the fruits

01:29:33 of science is realizing, like, for example,

01:29:36 you know, you open your eyes

01:29:36 and there’s the world around you, right?

01:29:38 But of course, depending on how you calculate it,

01:29:40 it’s less than a 10 trillionth of the electromagnetic

01:29:42 spectrum that we call visible light.

01:29:45 The reason I say it depends is

01:29:46 because, you know, it’s actually infinite

01:29:47 in all directions presumably.

01:29:49 Yeah, and so that’s exactly that.

01:29:51 And then science allows you to actually look

01:29:53 into the rest of it.

01:29:54 Exactly, so understanding how big the world is out there.

01:29:57 And the same with the world of really small

01:29:59 and the world of really large.

01:30:00 Exactly.

01:30:01 That’s beyond our ability to sense.

01:30:03 Exactly, and so the reason I think this kind of thing

01:30:04 matters is because we now have an opportunity

01:30:07 for the first time in human history to say,

01:30:10 okay, well, I’m just gonna include other things

01:30:13 in my umwelt.

01:30:13 So I’m gonna include infrared radiation

01:30:15 and have a direct perceptual experience of that.

01:30:19 And so I’m very, you know, I mean,

01:30:21 so, you know, I’ve given up my lab

01:30:22 and I run this company 90% of my time now.

01:30:25 That’s what I’m doing.

01:30:26 I still teach at Stanford and I’m, you know,

01:30:27 teaching courses and stuff like that.

01:30:29 But this is like, this is your passion.

01:30:32 The fire is on this.

01:30:35 Yeah, I feel like this is the most important thing

01:30:37 that’s happening right now.

01:30:38 I mean, obviously I think that,

01:30:40 because that’s what I’m devoting my time in my life to.

01:30:42 But I mean, it’s a brilliant set of ideas.

01:30:45 It certainly is like, it’s a step in a very vibrant future,

01:30:50 I would say.

01:30:50 Like the possibilities there are endless.

01:30:54 Exactly.

01:30:55 So if you ask what I think about Neuralink,

01:30:57 I think it’s amazing what those guys are doing

01:30:59 and working on,

01:31:00 but I think it's not practical for most people.

01:31:02 For example, for people who are deaf, they buy this

01:31:05 and, you know, every day we’re getting tons of emails

01:31:08 and tweets and whatever from people saying, wow,

01:31:09 I picked up on this and then I had no idea that was a,

01:31:12 I didn’t even know that was happening out there.

01:31:14 And they’re coming to hear, by the way,

01:31:16 this is, you know, less than a 10 year old,

01:31:18 by the way, this is less than a 10th of the price

01:31:20 of a hearing aid and like 250 times less

01:31:23 than a cochlear implant.

01:31:25 That’s amazing.

01:31:27 People love hearing about what, you know,

01:31:30 brilliant folks like yourself could recommend

01:31:33 in terms of books.

01:31:35 Of course, you’re an author of many books.

01:31:36 So I’ll, in the introduction,

01:31:38 mention all the books you’ve written.

01:31:40 People should definitely read LiveWired.

01:31:42 I’ve gotten a chance to read some of it and it’s amazing.

01:31:44 But is there three books, technical, fiction,

01:31:48 philosophical that had an impact on you

01:31:52 when you were younger or today and books,

01:31:56 perhaps some of which you would want to recommend

01:31:59 that others read?

01:32:01 You know, as an undergraduate,

01:32:02 I majored in British and American literature.

01:32:04 That was my major because I love literature.

01:32:06 I grew up with literature.

01:32:08 My father had these extensive bookshelves.

01:32:10 And so I grew up in the mountains in New Mexico.

01:32:13 And so that was mostly how I spent my time, reading books.

01:32:16 But, you know, I love, you know, Faulkner, Hemingway.

01:32:23 I love many South American authors,

01:32:26 Gabriel Garcia Marquez and Italo Calvino.

01:32:28 I would actually recommend Invisible Cities.

01:32:29 I just, I loved that book by Italo Calvino.

01:32:33 Sorry, it’s a book of fiction.

01:32:37 Anthony Dorr wrote a book called

01:32:39 All the Light We Cannot See,

01:32:41 which actually was inspired by Incognito,

01:32:44 by exactly what we were talking about earlier

01:32:45 about how you can only see a little bit of the

01:32:48 electromagnetic spectrum, the part we call visible light.

01:32:51 I wrote about this in Incognito,

01:32:52 and then he reviewed Incognito for the Washington Post.

01:32:54 Oh no, that’s awesome.

01:32:56 And then he wrote this book called,

01:32:57 the book has nothing to do with that,

01:32:58 but that’s where the title comes from.

01:33:00 All the Light We Cannot See

01:33:01 is about the rest of the spectrum.

01:33:02 But the, that’s an absolutely gorgeous book.

01:33:08 That’s a book of fiction.

01:33:09 Yeah, it’s a book of fiction.

01:33:10 What’s it about?

01:33:12 It takes place during World War II

01:33:14 about these two young people,

01:33:15 one of whom is blind and yeah.

01:33:18 Anything else?

01:33:19 So what, any, so you mentioned Hemingway?

01:33:21 I mean.

01:33:22 The Old Man and the Sea? What’s your favorite?

01:33:27 Snow’s a Kilimanjaro.

01:33:29 Oh wow, okay.

01:33:30 It’s a collection of short stories that I love.

01:33:31 As far as nonfiction goes,

01:33:33 I grew up with Cosmos,

01:33:35 both watching the PBS series and then reading the book,

01:33:38 and that influenced me a huge amount in terms of what I do.

01:33:41 I, from the time I was a kid,

01:33:42 I felt like I want to be Carl Sagan.

01:33:44 Like, I just, that’s what I loved.

01:33:46 And in the end, I just, you know,

01:33:47 I studied space physics for a while as an undergrad,

01:33:50 but then, in my last semester,

01:33:53 I discovered neuroscience,

01:33:55 and I just thought, wow, I’m hooked on that.

01:33:57 So the Carl Sagan of the brain.

01:34:01 That was my aspiration.

01:34:02 Is the aspiration.

01:34:03 I mean, you’re doing an incredible job of it.

01:34:07 So you open the book Livewired with a quote by Heidegger.

01:34:11 Every man is born as many men and dies as a single one.

01:34:17 Well, what did you mean by it, or what did he mean?

01:34:20 I’ll tell you what I meant by it.

01:34:21 So he had his own reason why he was writing that,

01:34:23 but I meant this in terms of brain plasticity,

01:34:25 in terms of livewiring,

01:34:26 which is this issue that I mentioned before

01:34:28 about this, you know, this cone,

01:34:29 the space time cone that we are in,

01:34:31 which is that when you dropped into the world,

01:34:35 you, Lex, had all this different potential.

01:34:37 You could have been a great surfer

01:34:40 or a great chess player or a,

01:34:42 you could have been thousands of different men

01:34:45 when you grew up,

01:34:46 but through things that were not your choice

01:34:49 and were your choice along the way,

01:34:50 You know, you ended up navigating a particular path

01:34:52 and now you’re exactly who you are.

01:34:54 You used to have lots of potential,

01:34:55 but the day you die, you will be exactly Lex.

01:34:59 You will be that one person, yeah.

01:35:01 So on that, in that context,

01:35:03 I mean, first of all, it’s just a beautiful,

01:35:06 it’s a humbling picture, but it’s a beautiful one

01:35:09 because it’s all the possible trajectories

01:35:12 and you pick one and you walk down that road

01:35:14 and it’s the Robert Frost poem.

01:35:16 But on that topic, let me ask the biggest

01:35:18 and the most ridiculous question.

01:35:21 So in this livewired brain,

01:35:23 when we choose all these different trajectories

01:35:25 and end up with one, what’s the meaning of it all?

01:35:27 What’s, is there a why here?

01:35:32 What’s the meaning of life?

01:35:34 Yeah.

01:35:34 David Eagleman.

01:35:36 That’s it.

01:35:37 I mean, this is the question that everyone has attacked

01:35:42 from their own life or point of view,

01:35:45 by which I mean, culturally,

01:35:47 if you grew up in a religious society,

01:35:49 you have one way of attacking that question.

01:35:51 So if you grew up in a secular or scientific society,

01:35:53 you have a different way of attacking that question.

01:35:55 Obviously, I don’t know, I abstain on that question.

01:35:59 Yeah.

01:36:00 I mean, I think one of the fundamental things,

01:36:03 I guess, in that, in all those possible trajectories

01:36:06 is you’re always asking.

01:36:09 I mean, that’s the act of asking

01:36:11 what the heck is this thing for,

01:36:14 is equivalent to, or at least runs in parallel

01:36:18 to all the choices that you’re making.

01:36:20 Cause it’s kind of, that’s the underlying question.

01:36:23 Well, that’s right.

01:36:24 And by the way, you know,

01:36:25 this is the interesting thing about human psychology.

01:36:27 You know, we’ve got all these layers of things

01:36:29 at which we can ask questions.

01:36:30 And so if you keep asking yourself the question about,

01:36:33 what is the optimal way for me to be spending my time?

01:36:36 What should I be doing?

01:36:37 What charity should I get involved with and so on?

01:36:39 If you’re asking those big questions

01:36:42 that steers you appropriately,

01:36:44 if you’re the type of person who never asks,

01:36:46 hey, is there something better I can be doing with my time,

01:36:48 then presumably you won’t optimize

01:36:51 whatever it is that is important to you.

01:36:53 So you’ve, I think just in your eyes, in your work,

01:36:58 there’s a passion that just is obvious and it’s inspiring.

01:37:03 It’s contagious.

01:37:04 What if you were to give advice to

01:37:09 a young person today,

01:37:10 in the crazy chaos that we live in today, about life,

01:37:14 about how to discover their passion,

01:37:20 are there some words that you could give?

01:37:24 First of all, I would say the main thing

01:37:26 for a young person is stay adaptable.

01:37:29 And this is back to this issue of why COVID

01:37:31 is useful for us because it forces us off our tracks.

01:37:35 The fact is the jobs that will exist 20 years from now,

01:37:39 we don’t even have names for it.

01:37:40 We can’t even imagine the jobs that are gonna exist.

01:37:42 And so when young people that I know go into college

01:37:44 and they say, hey, what should I major in and so on,

01:37:47 college is and should be less and less vocational,

01:37:50 as in, oh, I’m gonna learn how to do this

01:37:52 and then I’m gonna do that the rest of my career.

01:37:54 The world just isn’t that way anymore

01:37:55 with the exponential speed of things.

01:37:57 So the important thing is learning how to learn,

01:38:00 learning how to be livewired and adaptable.

01:38:03 That’s really key.

01:38:04 And what I advise young people when I talk to them is,

01:38:09 what you digest, that’s what gives you the raw storehouse

01:38:13 of things that you can remix and be creative with.

01:38:17 And so eat broadly and widely.

01:38:21 And obviously this is the wonderful thing

01:38:23 about the internet world we live in now

01:38:24 is you kind of can’t help it.

01:38:25 You’re constantly, whoa.

01:38:27 You go down some mole hole of Wikipedia

01:38:28 and you think, oh, I didn’t even realize that was a thing.

01:38:31 I didn’t know that existed.

01:38:32 And so.

01:38:33 Embrace that.

01:38:34 Embrace that, yeah, exactly.

01:38:36 And what I tell people is just always do a gut check

01:38:39 about, okay, I’m reading this paper

01:38:41 and, yeah, fine, but this paper, wow,

01:38:44 I really cared about that in some way.

01:38:47 I tell them just to keep a real sniff out for that.

01:38:50 And when you find those things, keep going down those paths.

01:38:54 Yeah, don’t be afraid.

01:38:55 I mean, that’s one of the challenges and the downsides

01:38:58 of having so many beautiful options

01:39:00 is that sometimes people are a little bit afraid

01:39:02 to really commit, but that’s very true.

01:39:05 If there’s something that just sparks your interest

01:39:09 and passion, just run with it.

01:39:10 I mean, that’s, it goes back to the Haider quote.

01:39:14 I mean, we only get this one life

01:39:16 and that trajectory, it doesn’t last forever.

01:39:20 So if something sparks your imagination,

01:39:23 your passion, just run with it.

01:39:24 Yeah, exactly.

01:39:26 I don’t think there’s a more beautiful way to end it.

01:39:29 David, it’s a huge honor to finally meet you.

01:39:32 Your work is inspiring so many people.

01:39:34 I’ve talked to so many people who are passionate

01:39:36 about neuroscience, about the brain, even outside the field,

01:39:39 who read your books.

01:39:40 So I hope you keep doing so.

01:39:43 I think you’re already there with Carl Sagan.

01:39:46 I hope you continue growing.

01:39:48 Yeah, it was an honor talking with you today.

01:39:50 Thanks so much.

01:39:50 Great, you too, Lex, wonderful.

01:39:53 Thanks for listening to this conversation

01:39:54 with David Eagleman, and thank you to our sponsors,

01:39:58 Athletic Greens, BetterHelp, and Cash App.

01:40:01 Click the sponsor links in the description

01:40:03 to get a discount and to support this podcast.

01:40:07 If you enjoy this thing, subscribe on YouTube,

01:40:09 review it with Five Stars on Apple Podcast,

01:40:11 follow on Spotify, support on Patreon,

01:40:14 or connect with me on Twitter at Lex Friedman.

01:40:18 And now let me leave you with some words

01:40:20 from David Eagleman in his book,

01:40:21 Sum: Forty Tales from the Afterlives.

01:40:25 Imagine for a moment that we are nothing but

01:40:28 the product of billions of years of molecules

01:40:30 coming together and ratcheting up through natural selection,

01:40:35 that we are composed only of highways of fluids

01:40:37 and chemicals sliding along roadways

01:40:39 within billions of dancing cells,

01:40:42 that trillions of synaptic conversations hum in parallel,

01:40:46 that this vast egg-like fabric of micro-thin circuitry

01:40:50 runs algorithms undreamt of in modern science,

01:40:54 and that these neural programs give rise to

01:40:56 our decision making, loves, desires, fears, and aspirations.

01:41:02 To me, understanding this would be a numinous experience,

01:41:07 better than anything ever proposed in any holy text.

01:41:11 Thank you for listening and hope to see you next time.