John Hopfield: Physics View of the Mind and Neurobiology #76

Transcript

00:00:00 The following is a conversation with John Hopfield,

00:00:03 professor at Princeton, whose life’s work weaved beautifully

00:00:07 through biology, chemistry, neuroscience, and physics.

00:00:11 Most crucially, he saw the messy world of biology

00:00:15 through the piercing eyes of a physicist.

00:00:18 He’s perhaps best known for his work

00:00:20 on associative neural networks,

00:00:22 now known as Hopfield networks,

00:00:24 that were one of the early ideas that catalyzed

00:00:27 the development of the modern field of deep learning.

00:00:31 As his 2019 Franklin Medal in Physics Award states,

00:00:35 he applied concepts of theoretical physics

00:00:37 to provide new insights on important biological questions

00:00:41 in a variety of areas, including genetics and neuroscience

00:00:45 with significant impact on machine learning.

00:00:48 And as John says in his 2018 article titled,

00:00:51 Now What?, his accomplishments have often come about

00:00:55 by asking that very question, now what?

00:00:59 And often responding by a major change of direction.

00:01:04 This is the Artificial Intelligence Podcast.

00:01:07 If you enjoy it, subscribe on YouTube,

00:01:09 give it five stars on Apple Podcast,

00:01:11 support it on Patreon, or simply connect with me on Twitter,

00:01:14 at Lex Fridman, spelled F R I D M A N.

00:01:18 As usual, I’ll do one or two minutes of ads now

00:01:21 and never any ads in the middle

00:01:22 that can break the flow of the conversation.

00:01:25 I hope that works for you

00:01:26 and doesn’t hurt the listening experience.

00:01:29 This show is presented by Cash App,

00:01:31 the number one finance app in the App Store.

00:01:34 When you get it, use code LexPodcast.

00:01:37 Cash App lets you send money to friends, buy Bitcoin,

00:01:41 and invest in the stock market with as little as $1.

00:01:44 Since Cash App does fractional share trading,

00:01:47 let me mention that the order execution algorithm

00:01:49 that works behind the scenes

00:01:50 to create the abstraction of fractional orders

00:01:53 is to me an algorithmic marvel.

00:01:56 So big props to the Cash App engineers

00:01:58 for solving a hard problem

00:02:00 that in the end provides an easy interface

00:02:02 that takes a step up the next layer of abstraction

00:02:05 over the stock market,

00:02:06 making trading more accessible for new investors

00:02:09 and diversification much easier.

00:02:12 So again, if you get Cash App from the App Store,

00:02:15 Google Play, and use code LexPodcast,

00:02:18 you’ll get $10,

00:02:19 and Cash App will also donate $10 to FIRST,

00:02:22 one of my favorite organizations

00:02:24 that is helping advance robotics and STEM education

00:02:27 for young people around the world.

00:02:29 And now here’s my conversation with John Hopfield.

00:02:35 What difference between biological neural networks

00:02:37 and artificial neural networks

00:02:39 is most captivating and profound to you?

00:02:44 At the higher philosophical level,

00:02:47 let’s not get technical just yet.

00:02:49 But one of the things that very much intrigues me

00:02:53 is the fact that neurons have all kinds of components,

00:03:00 properties to them.

00:03:03 And in evolutionary biology,

00:03:05 if you have some little quirk

00:03:07 in how a molecule works or how a cell works,

00:03:11 and it can be made use of,

00:03:13 evolution will sharpen it up

00:03:15 and make it into a useful feature rather than a glitch.

00:03:20 And so you expect in neurobiology for evolution

00:03:24 to have captured all kinds of possibilities

00:03:27 of getting neurons,

00:03:29 of how you get neurons to do things for you.

00:03:33 And that aspect has been completely suppressed

00:03:36 in artificial neural networks.

00:03:38 So the glitches become features

00:03:43 in the biological neural network.

00:03:46 They can.

00:03:48 Look, let me take one of the things

00:03:50 that I used to do research on.

00:03:54 If you take things which oscillate,

00:03:58 they have rhythms which are sort of close to each other.

00:04:02 Under some circumstances,

00:04:04 these things will have a phase transition

00:04:06 and suddenly the rhythm will,

00:04:08 everybody will fall into step.

00:04:10 There was a marvelous physical example of that

00:04:14 in the Millennium Bridge across the Thames River,

00:04:17 built around 2001.

00:04:21 And pedestrians walking across,

00:04:23 pedestrians don’t walk synchronized,

00:04:26 they don’t walk in lockstep.

00:04:28 But they’re all walking about the same frequency

00:04:31 and the bridge could sway at that frequency

00:04:33 and the slight sway made pedestrians tend a little bit

00:04:36 to lock into step and after a while,

00:04:39 the bridge was oscillating back and forth

00:04:41 and the pedestrians were walking in step to it.

00:04:43 And you could see it in the movies made out of the bridge.

00:04:46 And the engineers made a simple minor mistake.

00:04:50 They assumed when you walk, it's step, step, step

00:04:53 and it’s back and forth motion.

00:04:56 But when you walk, it's also right foot, left foot,

00:04:58 with side to side motion.

00:05:00 And it’s the side to side motion

00:05:01 for which the bridge was strong enough,

00:05:04 but it wasn’t stiff enough.

00:05:09 And as a result, you would feel the motion

00:05:11 and you’d fall into step with it.

00:05:12 And people were very uncomfortable with it.

00:05:15 They closed the bridge for two years

00:05:16 while they built stiffening for it.

00:05:20 Now, nerve cells produce action potentials.

00:05:23 You have a bunch of cells which are loosely coupled together

00:05:26 producing action potentials at the same rate.

00:05:29 There’ll be some circumstances

00:05:31 under which these things can lock together.

00:05:34 Other circumstances in which they won’t.

00:05:39 Well, if they fire together,

00:05:40 you can be sure that other cells are gonna notice it.

00:05:43 So you can make a computational feature out of this

00:05:45 in an evolving brain.

00:05:50 Most artificial neural networks

00:05:51 don’t even have action potentials,

00:05:53 let alone have the possibility for synchronizing them.
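
A minimal numerical sketch of the phase locking being described, using a Kuramoto-style model of weakly coupled oscillators (the model choice, sizes, and coupling values are illustrative assumptions, not anything stated in the conversation). Below a critical coupling the rhythms stay incoherent; above it they fall into step, like the pedestrians on the bridge.

```python
import numpy as np

def kuramoto_order(n=100, coupling=1.5, dt=0.01, steps=5000, seed=0):
    """Integrate d(theta_i)/dt = w_i + (K/n) * sum_j sin(theta_j - theta_i)
    and return the order parameter r (0 = incoherent, 1 = fully locked)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)        # natural rhythms "sort of close to each other"
    theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    for _ in range(steps):
        # each oscillator is nudged toward the phases of all the others
        dtheta = omega + (coupling / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = (theta + dt * dtheta) % (2 * np.pi)
    return np.abs(np.exp(1j * theta).mean())

print("weak coupling :", kuramoto_order(coupling=0.05))  # stays low: no synchrony
print("strong coupling:", kuramoto_order(coupling=1.5))  # near 1: everybody falls into step
```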

00:05:56 And you mentioned the evolutionary process.

00:06:01 So the evolutionary process

00:06:04 that builds on top of biological systems

00:06:08 leverages the weird mess of it somehow.

00:06:15 So how do you make sense of that ability

00:06:18 to leverage all the different kinds of complexities

00:06:22 in the biological brain?

00:06:24 Well, look, in the biological molecule level,

00:06:29 you have a piece of DNA

00:06:31 which encodes for a particular protein.

00:06:35 You could duplicate that piece of DNA

00:06:37 and now one part of it can code for that protein,

00:06:41 but the other one could itself change a little bit

00:06:45 and thus start coding for a molecule

00:06:46 which is slightly different.

00:06:48 Now, if that molecule was just slightly different,

00:06:51 had a function which helped any old chemical reaction

00:06:56 which was important to the cell,

00:07:00 you would go ahead and let that try,

00:07:03 and evolution would slowly improve that function.

00:07:07 And so you have the possibility of duplicating

00:07:12 and then having things drift apart.

00:07:14 One of them retain the old function,

00:07:16 the other one do something new for you.

00:07:18 And there’s evolutionary pressure to improve.

00:07:23 Look, there is in computers too,

00:07:25 but improvement has to do with closing some companies

00:07:28 and opening some others.

00:07:30 The evolutionary process looks a little different.

00:07:34 Yeah, similar timescale perhaps.

00:07:37 Much shorter in timescale.

00:07:39 Companies close, yeah, go bankrupt and are born,

00:07:42 yeah, shorter, but not much shorter.

00:07:45 Some companies last a century, but yeah, you’re right.

00:07:51 I mean, if you think of companies as a single organism

00:07:53 that builds things, you know, yeah,

00:07:55 it's a fascinating correspondence there

00:08:00 between biological organisms.

00:08:02 And companies have difficulty having a new product

00:08:05 competing with an old product.

00:08:10 When IBM built its first PC, you probably read the book,

00:08:14 they made a little isolated internal unit to make the PC.

00:08:18 And for the first time in IBM’s history,

00:08:22 they didn’t insist that you build it out of IBM components.

00:08:27 But they understood that they could get into this market,

00:08:31 which is a very different thing

00:08:33 by completely changing their culture.

00:08:35 And biology finds other markets in a more adaptive way.

00:08:44 Yeah, it’s better at it.

00:08:47 It’s better at that kind of integration.

00:08:50 So maybe you’ve already said it,

00:08:52 but what to you is the most beautiful aspect

00:08:55 or mechanism of the human mind?

00:09:01 Is it the adaptive, the ability to adapt

00:09:05 as you’ve described, or is there some other little quirk

00:09:07 that you particularly like?

00:09:11 Adaptation is everything when you get down to it.

00:09:16 But the difference, there are differences between adaptation

00:09:21 where your learning goes on only over generations

00:09:25 and over evolutionary time,

00:09:28 where your learning goes on at the time scale

00:09:30 of one individual who must learn from the environment

00:09:34 during that individual’s lifetime.

00:09:39 And biology has both kinds of learning in it.

00:09:43 And the thing which makes neurobiology hard

00:09:47 is that it's a mathematical system, as it were,

00:09:53 built on this other kind of evolutionary system.

00:09:58 What do you mean by mathematical system?

00:10:01 Where’s the math in the biology?

00:10:03 Well, when you talk to a computer scientist

00:10:05 about neural networks, it’s all math.

00:10:08 The fact that biology actually came about from evolution,

00:10:13 and the fact that biology is about a system

00:10:19 which you can build in three dimensions.

00:10:25 If you look at computer chips,

00:10:27 computer chips are basically two dimensional structures,

00:10:31 maybe 2.1 dimensions, but they really have difficulty

00:10:36 doing three dimensional wiring.

00:10:39 Biology is, the neocortex is actually also sheet like,

00:10:45 and it sits on top of the white matter,

00:10:47 which is about 10 times the volume of the gray matter

00:10:50 and contains all what you might call the wires.

00:10:53 But there’s a huge, the effect of computer structure

00:11:01 on what is easy and what is hard is immense.

00:11:09 And biology does, it makes some things easy

00:11:13 that are very difficult to understand

00:11:16 how to do computationally.

00:11:17 On the other hand, you can’t do simple floating point

00:11:21 arithmetic because it’s awfully stupid.

00:11:23 And you’re saying this kind of three dimensional

00:11:25 complicated structure makes, it’s still math.

00:11:30 It’s still doing math.

00:11:32 The kind of math it’s doing enables you to solve problems

00:11:36 of a very different kind.

00:11:38 That’s right, that’s right.

00:11:40 So you mentioned two kinds of adaptation,

00:11:43 the evolutionary adaptation and the adaptation

00:11:46 or learning at the scale of a single human life.

00:11:50 Which is particularly beautiful to you

00:11:56 and interesting from a research

00:11:59 and from just a human perspective?

00:12:02 And which is more powerful?

00:12:05 I find things most interesting that I begin to see

00:12:10 how to get into the edges of them

00:12:12 and tease them apart a little bit and see how they work.

00:12:15 And since I can’t see the evolutionary process going on,

00:12:21 I’m in awe of it.

00:12:26 But I find it just a black hole as far as trying

00:12:30 to understand what to do.

00:12:32 And so in a certain sense, I’m in awe of it,

00:12:35 but I couldn’t be interested in working on it.

00:12:39 The human lifetime scale is, however, something

00:12:43 you can tease apart and study.

00:12:47 Yeah, you can do, there’s developmental neurobiology

00:12:51 which understands how the connections

00:12:56 and how the structure evolves from a combination

00:13:00 of what the genetics is like and the real,

00:13:04 the fact that you’re building a system in three dimensions.

00:13:08 In just days and months, those early days

00:13:14 of a human life are really interesting.

00:13:17 They are and of course, there are times

00:13:21 of immense cell multiplication.

00:13:24 There are also times of cell death. The greatest cell death

00:13:28 in the brain is during infancy.

00:13:32 It’s turnover.

00:13:33 So what is not effective, what is not wired well enough

00:13:39 to use at the moment, throw it out.

00:13:42 It’s a mysterious process.

00:13:45 Let me ask, from what field do you think

00:13:49 the biggest breakthrough is in understanding

00:13:51 the mind will come in the next decades?

00:13:56 Is it neuroscience, computer science, neurobiology,

00:13:59 psychology, physics, maybe math, maybe literature?

00:14:09 Well, of course, I see the world always

00:14:11 through a lens of physics.

00:14:12 I grew up in physics and the way I pick problems

00:14:19 is very characteristic of physics

00:14:21 and of an intellectual background which is not psychology,

00:14:25 which is not chemistry and so on and so on.

00:14:28 Yeah, both of your parents are physicists.

00:14:30 Both of my parents were physicists

00:14:31 and the real thing I got out of that was a feeling

00:14:36 that the world is an understandable place

00:14:41 and if you do enough experiments and think about

00:14:45 what they mean and structure things

00:14:48 so you can do the mathematics

00:14:50 relevant to the experiments, you ought to be able

00:14:53 to understand how things work.

00:14:55 But that was, that was a few years ago.

00:14:58 Did you change your mind at all through many decades

00:15:03 of trying to understand the mind,

00:15:06 of studying in different kinds of ways?

00:15:07 Not even the mind, just biological systems.

00:15:11 You still have hope that physics, that you can understand?

00:15:17 There’s a question of what do you mean by understand?

00:15:20 Of course.

00:15:21 When I taught freshman physics, I used to say,

00:15:24 I wanted to get them to understand the subject,

00:15:26 to understand Newton’s laws.

00:15:28 I didn’t want them simply to memorize a set of examples

00:15:33 to which they knew the equations to write down

00:15:36 to generate the answers.

00:15:38 I had this nebulous idea of understanding

00:15:42 so that if you looked at a situation,

00:15:44 you could say, oh, I expect the ball to make that trajectory

00:15:48 or I expect some intuitive notion of understanding

00:15:52 and I don’t know how to express that very well

00:15:58 and I’ve never known how to express it well.

00:16:01 And you run smack up against it when you do these,

00:16:04 look at these simple neural nets,

00:16:07 feed forward neural nets, which do amazing things

00:16:13 and yet, you know, contain nothing of the essence

00:16:16 of what I would have felt was understanding.

00:16:20 Understanding is more than just an enormous lookup table.

00:16:24 Let’s linger on that.

00:16:26 How sure you are of that?

00:16:28 What if the table gets really big?

00:16:31 So, I mean, asked another way,

00:16:34 these feed forward neural networks,

00:16:37 do you think they’ll ever understand?

00:16:40 I could answer that in two ways.

00:16:41 I think if you look at real systems,

00:16:45 feedback is an essential aspect

00:16:50 of how these real systems compute.

00:16:53 On the other hand, if I have a mathematical system

00:16:55 with feedback, I know I can unlayer this and do it,

00:16:58 but I have an exponential expansion

00:17:03 in the amount of stuff I have to build

00:17:06 if I can resolve the problem that way.

00:17:08 So feedback is essential.

00:17:10 So we can talk even about recurrent neural nets,

00:17:13 so recurrence, but do you think all the pieces are there

00:17:17 to achieve understanding through these simple mechanisms?

00:17:22 Like back to our original question,

00:17:25 what is the fundamental, is there a fundamental difference

00:17:28 between artificial neural networks and biological

00:17:31 or is it just a bunch of surface stuff?

00:17:34 Suppose you ask a neurosurgeon, when is somebody dead?

00:17:41 Yeah.

00:17:42 They’ll probably go back to saying,

00:17:44 well, I can look at the brain rhythms

00:17:47 and tell you this is a brain

00:17:49 which never could have functioned again,

00:17:51 and this other one is one which, if

00:17:53 we treat it well, is still recoverable.

00:17:58 And they can do that just by some electrodes

00:18:00 looking at simple electrical patterns,

00:18:05 which don’t look in any detail at all

00:18:08 what individual neurons are doing.

00:18:13 These rhythms are utterly absent

00:18:17 from anything which goes on at Google.

00:18:23 Yeah, but the rhythms.

00:18:26 But the rhythms what?

00:18:27 So, well, that’s like comparing, okay, I’ll tell you,

00:18:31 it’s like you’re comparing the greatest classical musician

00:18:36 in the world to a child first learning to play.

00:18:39 But the question I’m asking is, they’re still both

00:18:41 playing the piano.

00:18:42 I’m asking, will it ever go on at Google?

00:18:48 Do you have a hope?

00:18:49 Because you’re one of the seminal figures

00:18:52 in both launching both disciplines,

00:18:55 both sides of the river.

00:18:59 I think it’s going to go on generation after generation,

00:19:04 the way it has, where what you might call

00:19:09 the AI computer science community says,

00:19:12 let’s take the following.

00:19:14 This is our model of neurobiology at the moment.

00:19:16 Let’s pretend it’s good enough

00:19:20 and do everything we can with it.

00:19:24 And it does interesting things.

00:19:25 And after a while it sort of grinds into the sand

00:19:30 and you say, ah, something else is needed for neurobiology.

00:19:35 And some other grand thing comes in

00:19:38 and enables you to go a lot further.

00:19:42 Which will go into the sand again.

00:19:44 And I think it could be generations of this evolution.

00:19:47 I don’t know how many of them.

00:19:48 And each one is going to get you further

00:19:50 into what a brain does.

00:19:53 And in some sense, past the Turing test longer

00:19:58 and in more broad aspects.

00:20:05 And how many of these are going to have to be

00:20:08 before you say, I’ve made something,

00:20:11 I’ve made a human, I don’t know.

00:20:15 But your sense is it might be a couple.

00:20:17 My sense is it might be a couple more.

00:20:19 Yeah.

00:20:20 And going back to my brainwaves as it were.

00:20:25 Yes, from the AI point of view,

00:20:32 they would say, ah, maybe these are an epiphenomenon

00:20:35 and not important at all.

00:20:40 The first car I had, a real wreck of a 1936 Dodge,

00:20:46 go above about 45 miles an hour and the wheels would shimmy.

00:20:50 Yeah.

00:20:52 Good speedometer that.

00:20:56 Now, nobody designed the car that way.

00:20:59 The car is malfunctioning to have that.

00:21:02 But in biology, if it were useful to know

00:21:05 when are you going more than 45 miles an hour,

00:21:08 you just capture that.

00:21:10 And you wouldn’t worry about where it came from.

00:21:15 Yeah.

00:21:16 It’s going to be a long time before that kind of thing,

00:21:18 which can take place in large complex networks of things

00:21:25 is actually used in the computation.

00:21:27 Look, how many transistors are there

00:21:32 in your laptop these days?

00:21:34 Actually, I don’t know the number.

00:21:36 It’s on the scale of 10 to the 10.

00:21:38 I can’t remember the number either.

00:21:40 Yeah.

00:21:43 And all the transistors are somewhat similar.

00:21:45 And most physical systems with that many parts,

00:21:49 all of which are similar, have collective properties.

00:21:54 Yes.

00:21:55 Sound waves in air, earthquakes,

00:21:57 what have you, have collective properties.

00:21:59 Weather.

00:22:02 There are no collective properties used

00:22:05 in artificial neural networks, in AI.

00:22:10 Yeah, it’s very.

00:22:12 If biology uses them,

00:22:14 it’s going to take us to more generations of things

00:22:17 for people to actually dig in

00:22:18 and see how they are used and what they mean.

00:22:22 See, you’re very right.

00:22:25 We might have to return several times to neurobiology

00:22:28 and try to make our transistors more messy.

00:22:32 Yeah, yeah.

00:22:35 At the same time, the simple ones will conquer big aspects.

00:22:40 And I think one of the biggest surprises to me was

00:22:47 how well learning systems work,

00:22:49 because they’re manifestly nonbiological,

00:22:52 how important they can be actually,

00:22:54 and how useful they can be in AI.

00:22:59 So if we can just take a stroll to some of your work

00:23:10 that is incredibly surprising,

00:23:12 that it works as well as it does,

00:23:14 that launched a lot of the recent work with neural networks.

00:23:18 If we go to what are now called Hopfield networks,

00:23:26 can you tell me what is associative memory in the mind

00:23:29 for the human side?

00:23:31 Let’s explore memory for a bit.

00:23:33 Okay, what do you mean by associative memory is,

00:23:37 ah, you have a memory of each of your friends.

00:23:42 Your friend has all kinds of properties

00:23:43 from what they look like, what their voice sounds like,

00:23:46 to where they went to college, where you met them,

00:23:50 go on and on, what science papers they’ve written.

00:23:55 And if I start talking about a 5 foot 10 wiry

00:24:00 cognitive scientist who’s got a very bad back,

00:24:03 it doesn’t take very long for you to say,

00:24:06 oh, he’s talking about Geoff Hinton.

00:24:07 I never mentioned the name or anything very particular.

00:24:14 But somehow a few facts that are associated

00:24:18 with a particular person enables you to get a hold

00:24:21 of the rest of the facts.

00:24:23 Or not the rest of them, another subset of them.

00:24:26 And it’s this ability to link things together,

00:24:33 link experiences together, which goes under

00:24:37 the general name of associative memory.

00:24:40 And a large part of intelligent behavior

00:24:43 is actually just large associative memories at work,

00:24:47 as far as I can see.

00:24:49 What do you think is the mechanism of how it works

00:24:57 in the mind?

00:24:58 Is it a mystery to you still?

00:25:03 Do you have inklings of how this essential thing

00:25:07 for cognition works?

00:25:10 What I made 35 years ago was, of course,

00:25:14 a crude physics model to actually enable you

00:25:19 to understand my old sense of understanding

00:25:24 as a physicist, because you could say,

00:25:26 ah, I understand why this goes to stable states.

00:25:29 It’s like things going downhill.

00:25:32 And that gives you something with which to think

00:25:39 in physical terms rather than only in mathematical terms.
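
A minimal sketch of the kind of crude physics model being described: binary neurons, Hebbian (outer-product) connections storing a few patterns, and asynchronous updates that go "downhill" on an energy function until the state settles into a stored memory. The sizes, patterns, and amount of corruption below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_patterns = 200, 5
patterns = rng.choice([-1, 1], size=(n_patterns, n))    # the memories to store

# Hebbian / outer-product learning rule, no self-connections
W = (patterns.T @ patterns).astype(float) / n
np.fill_diagonal(W, 0.0)

def energy(state):
    # E = -1/2 s^T W s; asynchronous updates never increase it
    return -0.5 * state @ W @ state

def recall(cue, sweeps=10):
    """Asynchronously update each neuron in random order until the state settles."""
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# corrupt a quarter of the bits of one stored memory, then let the dynamics complete it
cue = patterns[0].copy()
cue[rng.choice(n, size=n // 4, replace=False)] *= -1
out = recall(cue)
print("overlap before:", cue @ patterns[0] / n)    # about 0.5
print("overlap after :", out @ patterns[0] / n)    # about 1.0: the memory is retrieved
print("energy before/after:", energy(cue), energy(out))
```

This is also the "valley" picture of error correction that comes up later in the conversation: a state pushed a little off a stored pattern flows back down into that pattern's basin.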

00:25:42 So you’ve created these associative artificial networks.

00:25:47 That’s right.

00:25:48 Now, if you look at what I did,

00:25:53 I didn’t at all describe a system which gracefully learns.

00:25:59 I described a system in which you could understand

00:26:02 how learning could link things together,

00:26:06 how very crudely it might learn.

00:26:09 One of the things which intrigues me

00:26:11 as I reinvestigate that system now to some extent is,

00:26:15 look, I see you, I’ll see you every second

00:26:20 for the next hour or what have you.

00:26:23 Each look at you is a little bit different.

00:26:26 I don’t store all those second by second images.

00:26:30 I don’t store 3,000 images.

00:26:32 I somehow compact this information.

00:26:34 So I now have a view of you,

00:26:37 which I can use.

00:26:44 It doesn’t slavishly remember anything in particular,

00:26:47 but it compacts the information into useful chunks,

00:26:50 which are somehow these chunks,

00:26:54 which are not just activities of neurons,

00:26:57 bigger things than that,

00:26:59 which are the real entities which are useful to you.

00:27:03 Which are useful to you.

00:27:06 Useful to you to describe,

00:27:10 to compress this information coming at you.

00:27:13 And you have to compress it in such a way

00:27:15 that if the information comes in just like this again,

00:27:19 I don’t bother to rewrite it or efforts to rewrite it

00:27:24 simply do not yield anything

00:27:26 because those things are already written.

00:27:29 And that needs to be not,

00:27:32 look this up, have I stored it somewhere already?

00:27:36 There’ll be something which is much more automatic

00:27:39 in the machine hardware.

00:27:41 Right, so in the human mind,

00:27:44 how complicated is that process do you think?

00:27:47 So you’ve created,

00:27:50 feels weird to be sitting with John Hopfield

00:27:52 calling them Hopfield networks, but.

00:27:54 It is weird.

00:27:55 Yeah, but nevertheless, that’s what everyone calls them.

00:28:00 So here we are.

00:28:02 So that’s a simplification.

00:28:04 That’s what a physicist would do.

00:28:06 You and Richard Feynman sat down

00:28:08 and talked about associative memory.

00:28:09 Now, if you look at the mind

00:28:14 where you can’t quite simplify it so perfectly,

00:28:17 do you think that?

00:28:18 Well, let me backtrack just a little bit.

00:28:21 Yeah.

00:28:22 Biology is about dynamical systems.

00:28:25 Computers are dynamical systems.

00:28:29 You can ask, if you want to model biology,

00:28:35 if you want to model neurobiology,

00:28:38 what is the time scale?

00:28:39 There’s a dynamical system

00:28:42 of a fairly fast time scale in which you could say,

00:28:46 the synapses don’t change much during this computation,

00:28:49 so I’ll think of the synapses fixed

00:28:51 and just do the dynamics of the activity.

00:28:54 Or you can say, the synapses are changing fast enough

00:28:58 that I have to have the synaptic dynamics

00:29:01 working at the same time as the system dynamics

00:29:05 in order to understand the biology.

00:29:11 Most, if you look at the feedforward artificial neural nets,

00:29:16 they’re all done in two stages.

00:29:18 First, I spend some time learning, not performing,

00:29:21 and then I turn off learning

00:29:23 and I perform.

00:29:26 Right.

00:29:27 That’s not biology.

00:29:30 And so as I look more deeply at neurobiology,

00:29:34 even as associative memory,

00:29:37 I’ve got to face the fact that the dynamics

00:29:39 of the synapse change is going on all the time.

00:29:44 And I can’t just get by by saying,

00:29:46 I’ll do the dynamics of activity with fixed synapses.
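
A schematic of the two modeling choices just contrasted, as a minimal sketch (the particular dynamics, rates, and sizes are illustrative assumptions, not a model from the conversation): either freeze the weights and run only the activity dynamics, or let a slow, Hebbian-style weight drift run concurrently with the fast activity dynamics.

```python
import numpy as np

def step_activity(s, W, dt_fast=0.1):
    # fast dynamics: activity relaxes toward a nonlinearity of its own weighted input
    return (1 - dt_fast) * s + dt_fast * np.tanh(W @ s)

def step_synapses(W, s, dt_slow=0.001):
    # slow dynamics: a crude Hebbian drift of the weights driven by the ongoing activity
    return W + dt_slow * (np.outer(s, s) - W)

rng = np.random.default_rng(0)
n = 50
s = rng.standard_normal(n)
W = rng.standard_normal((n, n)) / np.sqrt(n)

for t in range(1000):
    s = step_activity(s, W)
    # Option 1 ("learn, then perform"): skip the next line and treat W as fixed.
    # Option 2 (no separation of time scales): both evolve together, as in the biology described.
    W = step_synapses(W, s)
```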

00:29:50 Yeah.

00:29:52 So the synaptic, the dynamics of the synapses

00:29:56 is actually fundamental to the whole system.

00:29:58 Yeah, yeah.

00:30:00 And there’s nothing necessarily separating the time scales.

00:30:04 When the time scales can be separated,

00:30:06 it’s neat from the physicist’s

00:30:08 or the mathematician’s point of view,

00:30:10 but it’s not necessarily true in neurobiology.

00:30:13 So you’re kind of dancing beautifully

00:30:16 between showing a lot of respect to physics

00:30:20 and then also saying that physics

00:30:24 cannot quite reach the complexity of biology.

00:30:29 So where do you land?

00:30:30 Or do you continuously dance between the two points?

00:30:33 I continuously dance between them

00:30:34 because my whole notion of understanding

00:30:39 is that you can describe to somebody else

00:30:43 how something works in ways which are honest and believable

00:30:47 and still not describe all the nuts and bolts in detail.

00:30:54 Weather.

00:30:55 I can describe weather

00:30:59 as 10 to the 32 molecules colliding in the atmosphere.

00:31:04 I can simulate weather that way if I have a big enough machine.

00:31:07 I’ll simulate it accurately.

00:31:11 It’s no good for understanding.

00:31:13 If I want to understand things, I want to understand things

00:31:16 in terms of wind patterns, hurricanes,

00:31:19 pressure differentials, and so on,

00:31:21 all things that are collective.

00:31:24 And the physicist in me always hopes

00:31:29 that biology will have some things

00:31:32 that can be said about it which are both true

00:31:35 and for which you don’t need all the molecular details

00:31:38 as the molecules colliding.

00:31:39 That’s what I mean from the roots of physics,

00:31:42 by understanding.

00:31:45 So what did, again, sorry,

00:31:47 but Hopfield Networks help you understand

00:31:51 what insight did give us about memory, about learning?

00:31:57 They didn’t give insights about learning.

00:32:02 They gave insights about how things having learned

00:32:06 could be expressed, how having learned a picture of you,

00:32:13 a picture of you reminds me of your name.

00:32:16 That would, but it didn’t describe a reasonable way

00:32:20 of actually doing the learning.

00:32:24 They only said if you had previously learned

00:32:27 the connections of this kind of pattern,

00:32:30 it would now be able to

00:32:31 behave in a physical way, which is to say,

00:32:34 ah, if I put the part of the pattern in here,

00:32:37 the other part of the pattern will complete over here.

00:32:40 I could understand that physics,

00:32:43 if the right learning stuff had already been put in.

00:32:46 And I could understand why then putting in a picture

00:32:48 of somebody else would generate something else over here.

00:32:52 But it did not have a reasonable description

00:32:56 of the learning that was going on.

00:32:59 It did not have a reasonable description

00:33:01 of the learning process.

00:33:03 But even, so forget learning.

00:33:05 I mean, that’s just a powerful concept

00:33:07 that sort of forming representations

00:33:11 that are useful and robust,

00:33:14 you know, for error correction, that kind of thing.

00:33:17 So this is kind of what the biology

00:33:20 we’re talking about does.

00:33:22 Yeah, and what my paper did was simply enable you,

00:33:26 there are lots of ways of being robust.

00:33:34 If you think of a dynamical system,

00:33:36 you think of a system where a path is going on in time.

00:33:42 And if you think for a computer,

00:33:43 there’s a computational path,

00:33:45 which is going on in a huge dimensional space

00:33:48 of ones and zeros.

00:33:51 And an error correction system is a system,

00:33:55 which if you get a little bit off that trajectory,

00:33:58 will push you back onto that trajectory again.

00:34:00 So you get to the same answer in spite of the fact

00:34:03 that there were things,

00:34:04 so that the computation wasn’t being ideally done

00:34:07 all the way along the line.

00:34:10 And there are lots of models for error correction.

00:34:13 But one of the models for error correction is to say,

00:34:17 there’s a valley that you’re following, flowing down.

00:34:20 And if you push a little bit off the valley,

00:34:23 just like water being pushed a little bit by a rock,

00:34:26 it gets back and follows the course of the river.

00:34:30 And that’s basically the analog

00:34:35 in the physical system, which enables you to say,

00:34:38 oh yes, error free computation and an associative memory

00:34:43 are very much like things that I can understand

00:34:46 from the point of view of a physical system.

00:34:49 The physical system can be, under some circumstances,

00:34:54 an accurate metaphor.

00:34:58 It’s not the only metaphor.

00:34:59 There are error correction schemes,

00:35:01 which don’t have a valley and energy behind them.

00:35:06 But those are error correction schemes,

00:35:09 which a mathematician may be able to understand,

00:35:11 but I don’t.

00:35:13 So there’s the physical metaphor that seems to work here.

00:35:18 That’s right, that’s right.

00:35:20 So these kinds of networks actually led to a lot of the work

00:35:26 that is going on now in neural networks,

00:35:29 artificial neural networks.

00:35:30 So the follow on work with restricted Boltzmann machines

00:35:34 and deep belief nets followed on from these ideas

00:35:40 of the Hopfield network.

00:35:41 So what do you think about this continued progress

00:35:46 of that work towards the now reinvigorated exploration

00:35:51 of feed forward neural networks

00:35:54 and recurrent neural networks

00:35:55 and convolutional neural networks

00:35:57 and kinds of networks that are helping solve

00:36:01 image recognition, natural language processing,

00:36:03 all that kind of stuff.

00:36:05 It always intrigued me that one of the most long lived

00:36:09 of the learning systems is the Boltzmann machine,

00:36:14 which is intrinsically a feedback network.

00:36:18 And with the brilliance of Hinton and Sejnowski

00:36:24 to understand how to do learning in that.

00:36:28 And it’s still a useful way to understand learning

00:36:30 and the learning that you understand in that

00:36:34 has something to do with the way

00:36:36 that feed forward systems work.

00:36:39 But it’s not always exactly simple

00:36:41 to express that intuition.

00:36:45 But it always amuses me to see Hinton

00:36:49 going back to the well yet again

00:36:51 on a form of the Boltzmann machine

00:36:53 because really that which has feedback

00:36:59 and interesting probabilities in it

00:37:02 is a lovely encapsulation of something computational.

00:37:07 Something computational?

00:37:09 Something both computational and physical.

00:37:12 Computational and it’s very much related

00:37:15 to feed forward networks.

00:37:17 Physical in that Boltzmann machine learning

00:37:21 is really learning a set of parameters

00:37:24 for a physics Hamiltonian or energy function.
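
For reference, the standard textbook form of what that amounts to (not quoted from the conversation): a Boltzmann machine assigns an Ising-style energy to each binary state, samples states with Boltzmann probabilities, and the Hinton-Sejnowski learning rule nudges each weight by the difference between how often its two units fire together on the data and under the model.

```latex
E(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j \;-\; \sum_i b_i s_i,
\qquad
P(\mathbf{s}) = \frac{e^{-E(\mathbf{s})}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')}},
\qquad
\Delta w_{ij} \;\propto\; \langle s_i s_j \rangle_{\text{data}} - \langle s_i s_j \rangle_{\text{model}}.
```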

00:37:29 What do you think about learning in this whole domain?

00:37:32 Do you think the aforementioned guy,

00:37:37 Geoff Hinton, all the work there with backpropagation,

00:37:42 all the kind of learning that goes on in these networks,

00:37:49 if we compare it to learning in the brain, for example,

00:37:53 is there echoes of the same kind of power

00:37:55 that backpropagation reveals

00:37:59 about these kinds of recurrent networks?

00:38:01 Or is it something fundamentally different

00:38:03 going on in the brain?

00:38:10 I don’t think the brain is as deep

00:38:13 as the deepest networks go,

00:38:17 the deepest computer science networks.

00:38:22 And I do wonder whether part of that depth

00:38:24 of the computer science networks is necessitated

00:38:28 by the fact that the only learning

00:38:29 that’s easily done on a machine is feed forward.

00:38:36 And so there’s the question of to what extent

00:38:39 has the biology, which has some feed forward

00:38:42 and some feed back,

00:38:46 been captured by something which has got many more neurons

00:38:51 and much more depth than the neurobiology has in it.

00:38:56 So part of you wonders if the feedback is actually

00:39:00 more essential than the number of neurons or the depth,

00:39:03 the dynamics of the feedback.

00:39:06 The dynamics of the feedback.

00:39:08 Look, if you don’t have feedback,

00:39:11 it’s a little bit like building a big computer

00:39:14 and running it through one clock cycle.

00:39:17 And then you can’t do anything

00:39:19 until you reload something coming in.

00:39:24 How do you use the fact that there are multiple clock cycles?

00:39:28 How do I use the fact that you can close your eyes,

00:39:30 stop listening to me and think about a chessboard

00:39:33 for two minutes without any input whatsoever?

00:39:38 Yeah, that memory thing,

00:39:42 that’s fundamentally a feedback kind of mechanism.

00:39:45 You’re going back to something.

00:39:47 Yes, it’s hard to understand.

00:39:51 It’s hard to introspect,

00:39:53 let alone consciousness.

00:39:57 Oh, let alone consciousness, yes, yes.

00:40:01 Because that’s tied up in there too.

00:40:02 You can’t just put that on another shelf.

00:40:06 Every once in a while I get interested in consciousness

00:40:09 and then I go and I’ve done that for years

00:40:12 and ask one of my betters, as it were,

00:40:17 their view on consciousness.

00:40:18 It’s been interesting collecting them.

00:40:21 What is consciousness?

00:40:25 Let’s try to take a brief step into that room.

00:40:30 Well, I asked Marvin Minsky

00:40:32 his view on consciousness.

00:40:33 And Marvin said,

00:40:36 consciousness is basically overrated.

00:40:40 It may be an epiphenomenon.

00:40:42 After all, all the things your brain does

00:40:45 that are actually hard computations,

00:40:49 you do nonconsciously.

00:40:55 And there’s so much evidence

00:40:57 that even the simple things you do,

00:41:00 you can make decisions,

00:41:03 you can make committed decisions about them,

00:41:05 the neurobiologist can say,

00:41:07 he’s now committed, he’s going to move the hand left

00:41:12 before you know it.

00:41:14 So his view is that consciousness is

00:41:16 just like a little icing on the cake.

00:41:19 The real cake is in the subconscious.

00:41:21 Yum, yum.

00:41:22 Subconscious, nonconscious.

00:41:24 Nonconscious, what’s the better word, sir?

00:41:27 It’s only that Freud captured the other word.

00:41:29 Yeah, it’s a confusing word, subconscious.

00:41:33 Nick Chater wrote an interesting book.

00:41:38 I think the title of it is The Mind is Flat.

00:41:44 Flat in a neural net sense, might be flat

00:41:49 as something which is a very broad neural net

00:41:53 without any layers in depth,

00:41:56 whereas a deep brain would be many layers

00:41:58 and not so broad.

00:42:00 In the same sense that if you push Minsky hard enough,

00:42:05 he would probably have said,

00:42:07 consciousness is your effort to explain to yourself

00:42:12 that which you have already done.

00:42:16 Yeah, it’s the weaving of the narrative

00:42:20 around the things that have already been computed for you.

00:42:23 That’s right, and so much of what we do

00:42:27 for our memories of events, for example.

00:42:32 If there’s some traumatic event you witness,

00:42:35 you will have a few facts about it correctly done.

00:42:39 If somebody asks you about it, you will weave a narrative

00:42:42 which is actually much more rich in detail than that

00:42:47 based on some anchor points you have of correct things

00:42:50 and pulling together general knowledge on the other,

00:42:53 but you will have a narrative.

00:42:56 And once you generate that narrative,

00:42:58 you are very likely to repeat that narrative

00:43:00 and claim that all the things you have in it

00:43:02 are actually the correct things.

00:43:05 There was a marvelous example of that

00:43:06 in the Watergate slash impeachment era of John Dean.

00:43:16 John Dean, you’re too young to know,

00:43:19 had been the personal lawyer of Nixon.

00:43:26 And so John Dean was involved in the coverup

00:43:28 and John Dean ultimately realized

00:43:32 the only way to keep himself out of jail for a long time

00:43:35 was actually to tell some of the truths about Nixon.

00:43:38 And John Dean was a tremendous witness.

00:43:41 He would remember these conversations in great detail

00:43:45 and very convincing detail.

00:43:49 And long afterward, some of the tapes,

00:43:54 the secret tapes as it were from which

00:43:57 Dean was recalling these conversations

00:44:01 were published, and one found out that John Dean

00:44:04 had a good but not exceptional memory.

00:44:07 What he had was an ability to paint vividly

00:44:10 and in some sense accurately the tone of what was going on.

00:44:16 By the way, that’s a beautiful description of consciousness.

00:44:23 Do you, like where do you stand today?

00:44:32 So perhaps it changes day to day,

00:44:34 but where do you stand on the importance of consciousness

00:44:37 in our whole big mess of cognition?

00:44:42 Is it just a little narrative maker

00:44:45 or is it actually fundamental to intelligence?

00:44:51 That’s a very hard one.

00:44:56 When I asked Francis Crick about consciousness,

00:45:00 he launched forward in a long monologue

00:45:03 about Mendel and the peas and how Mendel knew

00:45:07 that there was something and how biologists understood

00:45:10 that there was something in inheritance,

00:45:13 which was just very, very different.

00:45:16 And the fact that inherited traits didn’t just wash out

00:45:21 into a gray, but were this or that and propagated,

00:45:27 that was absolutely fundamental to the biology.

00:45:30 And it took generations of biologists to understand

00:45:34 that there was genetics and it took another generation

00:45:37 or two to understand that genetics came from DNA.

00:45:42 But very shortly after Mendel, thinking biologists

00:45:47 did realize that there was a deep problem about inheritance.

00:45:54 And Francis would have liked to have said,

00:45:58 and that’s why I’m working on consciousness.

00:46:01 But of course, he didn’t have any smoking gun

00:46:03 in the sense of Mendel.

00:46:08 And that’s the weakness of his position.

00:46:10 If you read his book, which he wrote with Koch, I think.

00:46:16 Yeah, Christof Koch, yeah.

00:46:18 I find it unconvincing for the smoking gun reason.

00:46:22 So I’m going on collecting views without actually having taken

00:46:30 a very strong one myself,

00:46:32 because I haven’t seen the entry point.

00:46:35 Not seeing the smoking gun from the point of view

00:46:38 of physics, I don’t see the entry point.

00:46:41 Whereas in neurobiology, once I understood the idea

00:46:44 of a collective, an evolution of dynamics,

00:46:48 which could be described as a collective phenomenon,

00:46:52 I thought, ah, there’s a point where what I know

00:46:55 about physics is so different from any neurobiologist

00:46:59 that I have something that I might be able to contribute.

00:47:01 And right now, there’s no way to grasp at consciousness

00:47:05 from a physics perspective.

00:47:07 From my point of view, that’s correct.

00:47:11 And of course, people, physicists, like everybody else,

00:47:16 think very muddily about things.

00:47:18 You ask the closely related question about free will.

00:47:23 Do you believe you have free will?

00:47:27 Physicists will give an offhand answer,

00:47:30 and then backtrack, backtrack, backtrack,

00:47:32 where they realize that the answer they gave

00:47:34 must fundamentally contradict the laws of physics.

00:47:38 So answering questions of free will

00:47:40 and consciousness naturally leads to contradictions

00:47:42 from a physics perspective.

00:47:45 Because it eventually ends up with quantum mechanics,

00:47:48 and then you get into that whole mess

00:47:50 of trying to understand how much,

00:47:54 from a physics perspective, how much is determined,

00:47:58 already predetermined, how much is already deterministic

00:48:01 about our universe, and there’s lots of different things.

00:48:03 And if you don’t push quite that far, you can say,

00:48:07 essentially, all of neurobiology, which is relevant,

00:48:10 can be captured by classical equations of motion.

00:48:13 Right, because in my view, the mysteries of the brain

00:48:18 are not the mysteries of quantum mechanics,

00:48:22 but the mysteries of what can happen

00:48:24 when you have a dynamical system, driven system,

00:48:28 with 10 to the 14 parts.

00:48:32 That that complexity is something which is,

00:48:37 that the physics of complex systems

00:48:39 is at least as badly understood

00:48:42 as the physics of phase coherence in quantum mechanics.

00:48:46 Can we go there for a second?

00:48:48 You’ve talked about attractor networks,

00:48:51 and just maybe you could say what are attractor networks,

00:48:54 and more broadly, what are interesting network dynamics

00:48:58 that emerge in these or other complex systems?

00:49:05 You have to be willing to think

00:49:06 in a huge number of dimensions,

00:49:08 because in a huge number of dimensions,

00:49:11 the behavior of a system can be thought of

00:49:12 as just the motion of a point over time

00:49:15 in this huge number of dimensions.

00:49:17 All right.

00:49:19 And an attractor network is simply a network

00:49:22 where there is a line and other lines

00:49:25 converge on it in time.

00:49:28 That’s the essence of an attractor network.

00:49:31 That’s how you.

00:49:32 In a highly dimensional space.

00:49:34 And the easiest way to get that

00:49:37 is to do it in a highly dimensional space,

00:49:40 where some of the dimensions provide the dissipation,

00:49:44 which, if I have a physical system,

00:49:50 trajectories can’t contract everywhere.

00:49:53 They have to contract in some places and expand in others.

00:49:56 There’s a fundamental classical theorem

00:49:59 of statistical mechanics,

00:50:00 which goes under the name of Liouville’s theorem,

00:50:04 which says you can’t contract everywhere.

00:50:08 If you contract somewhere, you expand somewhere else.

00:50:12 In interesting physical systems,

00:50:15 you’ve got driven systems

00:50:17 where you have a small subsystem,

00:50:19 which is the interesting part.

00:50:21 And the rest of the contraction and expansion,

00:50:24 the physicists would say it’s entropy flow

00:50:26 in this other part of the system.

00:50:30 But basically, attractor networks are dynamics

00:50:35 that are funneling down,

00:50:40 so that if you start somewhere in the dynamical system,

00:50:42 you will soon find yourself

00:50:44 on a pretty well determined pathway, which goes somewhere.

00:50:47 If you start somewhere else,

00:50:48 you’ll wind up on a different pathway,

00:50:50 but you don’t have just all possible things.

00:50:53 You have some defined pathways which are allowed

00:50:56 and onto which you will converge.

00:51:00 And that’s the way you make a stable computer,

00:51:01 and that’s the way you make a stable behavior.

00:51:06 So in general, looking at the physics

00:51:08 of the emergent stability in networks,

00:51:15 what are some interesting characteristics that,

00:51:19 what are some interesting insights

00:51:20 from studying the dynamics of such high dimensional systems?

00:51:24 Most dynamical systems, most driven dynamical systems,

00:51:29 are driven, they’re coupled somehow to an energy source.

00:51:33 And so their dynamics keeps going

00:51:35 because it’s coupling to the energy source.

00:51:40 Most of them, it’s very difficult to understand at all

00:51:42 what the dynamical behavior is going to be.

00:51:47 You have to run it.

00:51:49 You have to run it.

00:51:50 There’s a subset of systems which has

00:51:54 what is actually known to the mathematicians

00:51:57 as a Lyapunov function, and those systems,

00:52:02 you can understand convergent dynamics

00:52:05 by saying you’re going downhill on something or other.

00:52:10 And what I found, without ever knowing

00:52:13 what Lyapunov functions were in the simple model

00:52:17 I made in the early 80s, was an energy function

00:52:20 so you could understand how you could get this channeling

00:52:23 on the pathways without having to follow the dynamics

00:52:28 in infinite detail.

00:52:31 You start rolling a ball at the top of a mountain,

00:52:34 it’s gonna wind up at the bottom of a valley.

00:52:36 You know that’s true without actually watching

00:52:40 the ball roll down.
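
The energy function he is describing, in its standard form (a textbook statement, not quoted from the transcript): with symmetric weights and no self-connections, each asynchronous update can only lower E, so E is a Lyapunov function and the state rolls downhill into an attractor without your having to follow the trajectory in detail.

```latex
E(\mathbf{s}) = -\tfrac{1}{2}\sum_{i \ne j} w_{ij}\, s_i s_j - \sum_i b_i s_i,
\qquad
s_i \leftarrow \operatorname{sign}\!\Big(\sum_j w_{ij} s_j + b_i\Big),
\qquad
\Delta E \le 0 \;\text{ per update, given } w_{ij} = w_{ji},\; w_{ii} = 0.
```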

00:52:43 There’s certain properties of the system

00:52:45 that let you know that.

00:52:48 That’s right.

00:52:49 And not all systems behave that way.

00:52:53 Most don’t, probably.

00:52:55 Most don’t, but it provides you with a metaphor

00:52:57 for thinking about systems which are stable

00:53:00 and which have this attractor behavior,

00:53:03 even if you can’t find a Lyapunov function behind them

00:53:07 or an energy function behind them.

00:53:09 It gives you a metaphor for thought.

00:53:11 Yeah, speaking of thought,

00:53:17 if I had a glint in my eye with excitement

00:53:21 and said I’m really excited about this something

00:53:25 called deep learning and neural networks

00:53:28 and I would like to create an intelligent system

00:53:32 and came to you as an advisor, what would you recommend?

00:53:37 Is it a hopeless pursuit to use neural networks

00:53:42 to achieve thought?

00:53:44 Is it, what kind of mechanisms should we explore?

00:53:48 What kind of ideas should we explore?

00:53:52 Well, you look at the simple networks,

00:53:56 the one-pass networks.

00:54:01 They don’t support multiple hypotheses very well.

00:54:04 Hmm.

00:54:06 As I have tried to work with very simple systems

00:54:09 which do something which you might consider to be thinking,

00:54:12 thought has to do with the ability to do mental exploration

00:54:17 before you take a physical action.

00:54:22 Almost like we were mentioning, playing chess,

00:54:25 visualizing, simulating inside your head different outcomes.

00:54:30 Yeah, yeah.

00:54:31 And now you would do that in a feed forward network

00:54:37 because you’ve pre calculated all kinds of things.

00:54:41 But I think the way neurobiology does it

00:54:44 hasn’t pre calculated everything.

00:54:49 It actually has parts of a dynamical system

00:54:52 in which you’re doing exploration in a way which is.

00:54:57 There’s a creative element.

00:55:01 Like there’s an.

00:55:02 There’s a creative element.

00:55:04 And in a simple minded neural net,

00:55:13 you have a constellation of instances

00:55:20 of which you’ve learned.

00:55:23 And if you are within that space,

00:55:25 if a new question is a question within this space,

00:55:32 you can actually rely on that system pretty well

00:55:37 to come up with a good suggestion for what to do.

00:55:41 If on the other hand,

00:55:42 the query comes from outside the space,

00:55:46 you have no way of knowing how the system

00:55:48 is gonna behave.

00:55:49 There are no limitations on what can happen.

00:55:51 And so with the artificial neural net world

00:55:55 is always very much,

00:55:57 I have a population of examples.

00:56:01 The test set must be drawn from the equivalent population.

00:56:04 If the test set has examples,

00:56:06 which are from a population which is completely different,

00:56:11 there’s no way that you could expect

00:56:14 to get the answer right.

00:56:16 Yeah, what they call outside the distribution.

00:56:20 That’s right, that’s right.

00:56:22 And so if you see a ball rolling across the street at dusk,

00:56:28 if that wasn’t in your training set,

00:56:33 the idea that a child may be coming close behind that

00:56:37 is not going to occur to the neural net.

00:56:40 And it is to us.

00:56:42 there’s something in your biology that allows that.

00:56:45 Yeah, there’s something in the way

00:56:47 of what it means to be outside of the population

00:56:52 of the training set.

00:56:53 The population of the training set

00:56:55 isn’t just sort of this set of examples.

00:57:01 There’s more to it than that.

00:57:03 And it gets back to my question of,

00:57:06 what is it to understand something?

00:57:09 Yeah.

00:57:12 You know, in a small tangent,

00:57:14 you’ve talked about the value of thinking

00:57:16 of deductive reasoning in science

00:57:18 versus large data collection.

00:57:21 So sort of thinking about the problem.

00:57:25 I suppose it’s the physics side of you

00:57:27 of going back to first principles and thinking,

00:57:31 but what do you think is the value of deductive reasoning

00:57:33 in the scientific process?

00:57:37 Well, there are obviously scientific questions

00:57:39 in which the route to the answer to it

00:57:42 comes through the analysis of one hell of a lot of data.

00:57:46 Right.

00:57:49 Cosmology, that kind of stuff.

00:57:50 And that’s never been the kind of problem

00:57:56 in which I’ve had any particular insight.

00:57:58 Though I must say, if you look at,

00:58:01 cosmology is one of those.

00:58:04 If you look at the actual things that Jim Peebles,

00:58:06 one of this year’s Nobel Prize winners in physics,

00:58:10 who’s from the local physics department,

00:58:12 the kinds of things he’s done,

00:58:13 he’s never crunched large data.

00:58:17 Never, never, never.

00:58:19 He’s used the encapsulation of the work of others

00:58:23 in this regard.

00:58:25 Right.

00:58:27 But it ultimately boiled down to thinking

00:58:30 through the problem.

00:58:31 Like what are the principles under which

00:58:33 a particular phenomenon operates?

00:58:35 Yeah, yeah.

00:58:37 And look, physics is always going to look

00:58:39 for ways in which you can describe the system

00:58:42 in a way which rises above the details.

00:58:47 And to the hard, dyed in the wool biologist,

00:58:53 biology works because of the details.

00:58:56 In physics, to the physicists,

00:58:58 we want an explanation which is right

00:59:01 in spite of the details.

00:59:03 And there will be questions which we cannot answer

00:59:05 as physicists because the answer cannot be found that way.

00:59:13 There’s, I’m not sure if you’re familiar

00:59:15 with the entire field of brain computer interfaces

00:59:19 that’s become more and more intensely researched

00:59:24 and developed recently, especially with companies

00:59:25 like Neuralink with Elon Musk.

00:59:29 Yeah, I know there have always been the interests

00:59:31 both in things like getting the eyes

00:59:35 to be able to control things

00:59:38 or getting the thought patterns

00:59:40 to be able to move what had been a connected limb

00:59:45 which is now connected through a computer.

00:59:48 That’s right.

00:59:48 So in the case of Neuralink,

00:59:51 they’re doing 1,000 plus connections

00:59:54 where they’re able to do two way,

00:59:56 activate and read spikes, neural spikes.

01:00:01 Do you have hope for that kind of computer brain interaction

01:00:06 in the near or maybe even far future

01:00:09 of being able to expand the ability

01:00:13 of the mind of cognition or understand the mind?

01:00:20 It’s interesting watching things go.

01:00:23 When I first became interested in neurobiology,

01:00:27 most of the practitioners thought you would be able

01:00:29 to understand neurobiology by techniques

01:00:32 which allowed you to record only one cell at a time.

01:00:36 One cell, yeah.

01:00:38 People like David Hubel,

01:00:43 very strongly reflected that point of view.

01:00:47 And that’s been taken over by a generation,

01:00:50 a couple of generations later,

01:00:52 by a set of people who say not until we can record

01:00:56 from 10 to the four, 10 to the five at a time,

01:00:59 will we actually be able to understand

01:01:00 how the brain actually works.

01:01:03 And in a general sense, I think that’s right.

01:01:09 You have to begin to be able to look

01:01:12 for the collective modes, the collective operations of things.

01:01:18 It doesn’t rely on this action potential or that cell.

01:01:21 It relies on the collective properties of this set of cells

01:01:24 connected with this kind of patterns and so on.

01:01:27 And you’re not going to succeed in seeing

01:01:29 what those collective activities are

01:01:31 without recording many cells at once.

01:01:38 The question is how many at once?

01:01:40 What’s the threshold?

01:01:41 And that’s the…

01:01:42 Yeah, and look, it’s being pursued hard

01:01:47 in the motor cortex.

01:01:48 The motor cortex does something which is complex,

01:01:53 and yet the problem you’re trying to address

01:01:55 is fairly simple.

01:02:00 Now, neurobiology does it in ways that differ

01:02:02 from the way an engineer would do it.

01:02:04 An engineer would put in six highly accurate stepping motors

01:02:10 to control a limb, rather than 100,000 muscle fibers,

01:02:15 each of which has to be individually controlled.

01:02:19 And so understanding how to do things in a way

01:02:22 which is much more forgiving and much more neural,

01:02:26 I think would benefit the engineering world.

01:02:33 The engineering world of touch:

01:02:36 Let’s put in a pressure sensor or two,

01:02:38 rather than an array of a gazillion pressure sensors,

01:02:42 none of which are accurate,

01:02:44 all of which are perpetually recalibrating themselves.

01:02:48 So you’re saying your hope is,

01:02:50 your advice for the engineers of the future

01:02:53 is to embrace the large chaos of a messy, error-prone system

01:03:00 like those of the biological systems.

01:03:03 Like that’s probably the way to solve some of these.

01:03:05 I think you’ll be able to make better computations

01:03:10 slash robotics that way than by trying to force things

01:03:17 into a robotics where joint motors are powerful

01:03:22 and stepping motors are accurate.

01:03:25 But then the physicist in you

01:03:27 will be lost forever in such systems

01:03:31 because there’s no simple fundamentals to explore

01:03:33 in systems that are so large and messy.

01:03:38 Well, you say that, and yet there’s a lot of physics

01:03:43 in the Navier-Stokes equations,

01:03:45 the equations of nonlinear hydrodynamics,

01:03:49 huge amount of physics in them.

01:03:51 All the physics of atoms and molecules has been lost,

01:03:55 but it’s been replaced by this other set of equations,

01:03:58 which is just as true as the equations at the bottom.
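[For reference, a sketch of the equations being alluded to here: the incompressible Navier-Stokes equations, in which the physics of individual atoms and molecules has been replaced by coarse-grained fields, a velocity field u and a pressure field p, plus two parameters, the density rho and the kinematic viscosity nu.]

```latex
% Incompressible Navier-Stokes equations: momentum balance plus
% an incompressibility constraint. \mathbf{u} is the velocity field,
% p the pressure, \rho the density, \nu the kinematic viscosity.
\[
  \frac{\partial \mathbf{u}}{\partial t}
  + (\mathbf{u}\cdot\nabla)\mathbf{u}
  = -\frac{1}{\rho}\,\nabla p + \nu\,\nabla^{2}\mathbf{u},
  \qquad
  \nabla\cdot\mathbf{u} = 0 .
\]
```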

01:04:02 Now those equations are going to be harder to find

01:04:06 in general biology, but the physicist in me says

01:04:10 there are probably some equations of that sort.

01:04:13 They’re out there.

01:04:14 They’re out there, and if physics

01:04:17 is going to contribute anything,

01:04:19 it may contribute to trying to find out

01:04:22 what those equations are and how to capture them

01:04:24 from the biology.

01:04:26 Would you say that’s one of the main open problems

01:04:29 of our age is to discover those equations?

01:04:34 Yeah, if you look at, there’s molecules

01:04:38 and there’s psychological behavior,

01:04:42 and these two are somehow related.

01:04:45 There are layers of detail, there are layers of collectiveness,

01:04:51 and the task is to capture that, in some vague way,

01:04:58 at several stages on the way up, to see how these things

01:05:01 can actually be linked together.

01:05:04 So it seems in our universe, there’s a lot of elegant

01:05:08 equations that can describe the fundamental way

01:05:11 that things behave, which is a surprise.

01:05:13 I mean, it’s compressible into equations.

01:05:15 It’s simple and beautiful, but it’s still an open question

01:05:20 whether the link between molecules

01:05:25 and the brain is equally compressible

01:05:29 into elegant equations.

01:05:31 But your sense, well, you’re both a physicist

01:05:36 and a dreamer, you have a sense that…

01:05:38 Yeah, but I can only dream physics dreams.

01:05:42 Physics dreams.

01:05:44 There was an interesting book called Einstein’s Dreams,

01:05:46 which alternates between chapters on his life

01:05:52 and descriptions of the way time might have been but isn’t.

01:05:57 The linking between these being important ideas

01:06:04 that Einstein might have had about

01:06:06 the essence of time as he was thinking about time.

01:06:11 So speaking of the essence of time in your biology,

01:06:14 you’re one human, famous, impactful human,

01:06:18 but just one human with a brain living the human condition.

01:06:22 But you’re ultimately mortal, just like all of us.

01:06:27 Has studying the mind as a mechanism

01:06:30 changed the way you think about your own mortality?

01:06:38 It has, really, because particularly as you get older

01:06:41 and the body comes apart in various ways,

01:06:47 I became much more aware of the fact

01:06:52 that what is somebody is contained in the brain

01:06:59 and not in the body that you worry about burying.

01:07:02 And it is to a certain extent true

01:07:07 that for people who write things down,

01:07:10 equations, dreams, notepads, diaries,

01:07:15 fractions of their thought does continue to live

01:07:18 after they’re dead and gone,

01:07:20 after their body is dead and gone.

01:07:24 And there’s a sea change in that going on in my lifetime,

01:07:29 between when my father died, when, except for the things

01:07:33 which were actually written by him, as it were,

01:07:37 very few facts about him will have ever been recorded,

01:07:40 and now, when the number of facts which are recorded

01:07:42 about each and every one of us is kept forever,

01:07:46 as far as I can see, in the digital world.

01:07:51 And so the whole question of what is death

01:07:58 may be different for people a generation ago

01:08:00 and a generation further ahead.

01:08:04 Maybe we have become immortal under some definitions.

01:08:07 Yeah, yeah.

01:08:09 Last easy question, what is the meaning of life?

01:08:17 Looking back, you’ve studied the mind,

01:08:23 us weird descendants of apes.

01:08:27 What’s the meaning of our existence on this little earth?

01:08:31 What’s the meaning of our existence on this little earth?

01:08:39 Oh, that word meaning is as slippery as the word understand.

01:08:46 Interconnected somehow, perhaps.

01:08:51 Is there, it’s slippery, but is there something

01:08:55 that you, despite being slippery,

01:08:58 can hold long enough to express?

01:09:03 I’ve been amazed at how hard it is

01:09:07 to define the things in a living system

01:09:14 in the sense that one hydrogen atom

01:09:17 is pretty much like another,

01:09:19 but one bacterium is not so much like another bacterium,

01:09:24 even of the same nominal species.

01:09:26 In fact, the whole notion of what is the species

01:09:28 gets a little bit fuzzy.

01:09:31 And do species exist in the absence

01:09:33 of certain classes of environments?

01:09:36 And pretty soon one winds up with a biology

01:09:40 in which the whole thing is living,

01:09:43 but whether there’s actually any element of it

01:09:47 which by itself would be said to be living

01:09:52 becomes a little bit vague in my mind.

01:09:54 So in a sense, the idea of meaning

01:09:58 is something that’s possessed by an individual,

01:10:01 like a conscious creature.

01:10:03 And you’re saying that it’s all interconnected

01:10:07 in some kind of way that there might not even

01:10:09 be an individual.

01:10:10 We’re all kind of this complicated mess

01:10:14 of biological systems at all different levels

01:10:17 where the human starts and where the human ends is unclear.

01:10:20 Yeah, yeah, and in neurobiology,

01:10:25 oh, you say the neocortex is where the thinking is,

01:10:27 but there are lots of things that are done in the spinal cord.

01:10:31 And so where’s the essence of thought?

01:10:35 Is it just gonna be neocortex?

01:10:37 Can’t be, can’t be.

01:10:40 Yeah, maybe to understand and to build thought

01:10:43 you have to build the universe along with the neocortex.

01:10:47 It’s all interlinked through the spinal cord.

01:10:51 John, it’s a huge honor talking today.

01:10:54 Thank you so much for your time.

01:10:55 I really appreciate it.

01:10:57 Well, thank you for the challenge of talking with you.

01:10:59 And it’ll be interesting to see whether you can wring

01:11:01 five minutes out of this that is coherent

01:11:04 to anyone or not.

01:11:06 Beautiful.

01:11:08 Thanks for listening to this conversation

01:11:09 with John Hopfield and thank you

01:11:12 to our presenting sponsor, Cash App.

01:11:14 Download it, use code LexPodcast.

01:11:17 You’ll get $10 and $10 will go to FIRST,

01:11:20 an organization that inspires and educates young minds

01:11:23 to become science and technology innovators of tomorrow.

01:11:26 If you enjoy this podcast, subscribe on YouTube,

01:11:29 get five stars on Apple Podcast, support on Patreon,

01:11:32 or simply connect with me on Twitter at Lex Friedman.

01:11:37 And now let me leave you with some words of wisdom

01:11:39 from John Hopfield in his article titled, Now What?

01:11:43 Choosing problems is the primary determinant

01:11:46 of what one accomplishes in science.

01:11:49 I have generally had a relatively short attention span

01:11:52 in science problems.

01:11:53 Thus, I have always been on the lookout

01:11:56 for more interesting questions,

01:11:57 either as my present ones get worked out

01:12:00 or as they get classified by me as intractable,

01:12:03 given my particular talents.

01:12:06 He then goes on to say,

01:12:08 what I have done in science relies entirely

01:12:11 on experimental and theoretical studies by experts.

01:12:15 I have a great respect for them,

01:12:16 especially for those who are willing to attempt

01:12:19 communication with someone who is not an expert in the field.

01:12:24 I would only add that experts are good

01:12:26 at answering questions.

01:12:28 If you’re brash enough, ask your own.

01:12:32 Don’t worry too much about how you found them.

01:12:34 Thank you for listening and hope to see you next time.