Transcript
00:00:00 The following is a conversation with Karl Friston,
00:00:03 one of the greatest neuroscientists in history.
00:00:06 Cited over 245,000 times,
00:00:10 known for many influential ideas in brain imaging,
00:00:13 neuroscience, and theoretical neurobiology,
00:00:16 including especially the fascinating idea
00:00:19 of the free energy principle for action and perception.
00:00:24 Karl’s mix of humor, brilliance, and kindness,
00:00:28 to me, are inspiring and captivating.
00:00:31 This was a huge honor and a pleasure.
00:00:34 This is the Artificial Intelligence Podcast.
00:00:36 If you enjoy it, subscribe on YouTube,
00:00:38 review it with five stars on Apple Podcast,
00:00:41 support it on Patreon,
00:00:42 or simply connect with me on Twitter,
00:00:44 at Lex Fridman, spelled F R I D M A N.
00:00:48 As usual, I’ll do a few minutes of ads now,
00:00:50 and never any ads in the middle
00:00:52 that can break the flow of the conversation.
00:00:54 I hope that works for you,
00:00:55 and doesn’t hurt the listening experience.
00:00:58 This show is presented by Cash App,
00:01:00 the number one finance app in the App Store.
00:01:03 When you get it, use code LEXPODCAST.
00:01:06 Cash App lets you send money to friends by Bitcoin,
00:01:09 and invest in the stock market with as little as $1.
00:01:12 Since Cash App allows you to send
00:01:14 and receive money digitally,
00:01:16 let me mention a surprising fact related to physical money.
00:01:20 Of all the currency in the world,
00:01:22 roughly 8% of it is actual physical money.
00:01:25 The other 92% of money only exists digitally.
00:01:29 So again, if you get Cash App from the App Store,
00:01:32 Google Play, and use the code LEXPODCAST, you get $10,
00:01:37 and Cash App will also donate $10 to FIRST,
00:01:39 an organization that is helping to advance robotics
00:01:42 and STEM education for young people around the world.
00:01:45 And now, here’s my conversation with Karl Friston.
00:01:50 How much of the human brain do we understand
00:01:53 from the low level of neuronal communication
00:01:56 to the functional level to the highest level,
00:02:01 maybe the psychiatric disorder level?
00:02:04 Well, we’re certainly in a better position
00:02:06 than we were last century.
00:02:08 How far we’ve got to go, I think,
00:02:10 is almost an unanswerable question.
00:02:13 So you’d have to set the parameters,
00:02:16 you know, what constitutes understanding, what level
00:02:20 of understanding do you want?
00:02:21 I think we’ve made enormous progress
00:02:25 in terms of broad brush principles.
00:02:29 Whether that affords a detailed cartography
00:02:32 of the functional anatomy of the brain and what it does,
00:02:35 right down to the microcircuitry and the neurons,
00:02:38 that’s probably out of reach at the present time.
00:02:42 So the cartography, so mapping the brain,
00:02:44 do you think mapping of the brain,
00:02:47 the detailed, perfect imaging of it,
00:02:50 does that get us closer to understanding
00:02:54 of the mind, of the brain?
00:02:56 So how far does it get us if we have
00:02:59 that perfect cartography of the brain?
00:03:01 I think there are lower bounds on that.
00:03:03 It’s a really interesting question.
00:03:06 And it would determine the sort of scientific career
00:03:09 you’d pursue if you believe that knowing
00:03:11 every dendritic connection, every sort of microscopic,
00:03:16 synaptic structure right down to the molecular level
00:03:18 was gonna give you the right kind of information
00:03:22 to understand the computational anatomy,
00:03:25 then you’d choose to be a microscopist
00:03:27 and you would study little cubic millimeters of brain
00:03:32 for the rest of your life.
00:03:33 If on the other hand you were interested
00:03:35 in holistic functions and a sort of functional anatomy
00:03:40 of the sort that a neuropsychologist would understand,
00:03:44 you’d study brain lesions and strokes,
00:03:46 just looking at the whole person.
00:03:48 So again, it comes back to at what level
00:03:50 do you want understanding?
00:03:52 I think there are principled reasons not to go too far.
00:03:57 If you commit to a view of the brain
00:04:01 as a machine that’s performing a form of inference
00:04:06 and representing things, that level of understanding
00:04:15 is necessarily cast in terms of probability densities
00:04:20 and ensemble densities, distributions.
00:04:24 And what that tells you is that you don’t really want
00:04:27 to look at the atoms to understand the thermodynamics
00:04:30 of probabilistic descriptions of how the brain works.
00:04:34 So I personally wouldn’t look at the molecules
00:04:38 or indeed the single neurons in the same way
00:04:41 if I wanted to understand the thermodynamics
00:04:44 of some non equilibrium steady state of a gas
00:04:47 or an active material, I wouldn’t spend my life
00:04:49 looking at the individual molecules
00:04:54 that constitute that ensemble.
00:04:55 I’d look at their collective behavior.
00:04:57 On the other hand, if you go too coarse grain,
00:05:00 you’re gonna miss some basic canonical principles
00:05:03 of connectivity and architectures.
00:05:06 I’m being a bit colloquial here,
00:05:10 but this current excitement about high field
00:05:13 magnetic resonance imaging at seven Tesla, why?
00:05:17 Well, it gives us for the first time the opportunity
00:05:19 to look at the brain in action at the level
00:05:22 of a few millimeters that distinguish
00:05:24 between different layers of the cortex
00:05:27 that may be very important in terms of evincing
00:05:32 generic principles of canonical microcircuitry
00:05:35 that are replicated throughout the brain
00:05:37 that may tell us something fundamental
00:05:39 about message passing in the brain
00:05:41 and these density dynamics, or neuronal
00:05:44 population dynamics,
00:05:46 that underwrite our brain function.
00:05:49 So somewhere between a millimeter and a meter.
00:05:53 Lingering for a bit on the big questions if you allow me,
00:05:58 what to you is the most beautiful or surprising characteristic
00:06:01 of the human brain?
00:06:03 I think it’s its hierarchical and recursive aspect.
00:06:06 It’s recurrent aspect.
00:06:08 Of the structure or of the actual
00:06:10 representational power of the brain?
00:06:12 Well, I think one speaks to the other.
00:06:15 I was actually answering in a dull minded way
00:06:18 from the point of view of purely its anatomy
00:06:20 and its structural aspects.
00:06:22 I mean, there are many marvelous organs in the body.
00:06:26 Let’s take your liver for example.
00:06:28 Without it, you wouldn’t be around for very long
00:06:32 and it does some beautiful and delicate biochemistry
00:06:35 and homeostasis and evolved with a finesse
00:06:41 that would easily parallel the brain
00:06:43 but it doesn’t have a beautiful anatomy.
00:06:45 It has a simple anatomy which is attractive
00:06:47 in a minimalist sense but it doesn’t have
00:06:48 that crafted structure of sparse connectivity
00:06:52 and that recurrence and that specialization
00:06:55 that the brain has.
00:06:56 So you said a lot of interesting terms here.
00:06:58 So the recurrence, the sparsity,
00:07:00 but you also started by saying hierarchical.
00:07:03 So I’ve never thought of our brain as hierarchical.
00:07:11 Sort of I always thought it’s just like a giant mess,
00:07:14 interconnected mess where it’s very difficult
00:07:16 to figure anything out.
00:07:18 But in what sense do you see the brain as hierarchical?
00:07:21 Well, I see it, it’s not a magic soup.
00:07:24 Which of course is what I used to think
00:07:28 before I studied medicine and the like.
00:07:34 So a lot of those terms imply each other.
00:07:38 So hierarchies, if you just think about
00:07:41 the nature of a hierarchy,
00:07:43 how would you actually build one?
00:07:46 And what you would have to do is basically
00:07:48 carefully remove the right connections
00:07:51 that destroy the completely connected soups
00:07:54 that you might have in mind.
00:07:56 So a hierarchy is in and of itself defined
00:08:00 by a sparse and particular connectivity structure.
00:08:04 I’m not committing to any particular form of hierarchy.
00:08:08 But your sense is there is some.
00:08:10 Oh, absolutely, yeah.
00:08:11 In virtue of the fact that there is a sparsity
00:08:14 of connectivity, not necessarily of a qualitative sort,
00:08:19 but certainly of a quantitative sort.
00:08:20 So it is demonstrably so that the further apart
00:08:27 two parts of the brain are,
00:08:29 the less likely they are to be wired,
00:08:32 to possess axonal processes, neuronal processes
00:08:35 that directly communicate one message
00:08:39 or messages from one part of that brain
00:08:41 to the other part of the brain.
00:08:42 So we know there’s a sparse connectivity.
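To make that distance dependence concrete, here is a minimal toy sketch, assuming synthetic region positions and an exponential fall-off chosen purely for illustration, not real anatomical data:

```python
import numpy as np

# Toy illustration (not real anatomical data): wiring probability
# that decays with distance, yielding a sparse connectivity matrix.
rng = np.random.default_rng(0)
n = 50                                     # hypothetical "regions"
coords = rng.uniform(0, 100, size=(n, 2))  # arbitrary 2D positions, in mm
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

lam = 15.0                                 # assumed decay length, in mm
p_wire = np.exp(-dist / lam)               # closer pairs are likelier to connect
adjacency = rng.random((n, n)) < p_wire
np.fill_diagonal(adjacency, False)

print(f"connection density: {adjacency.mean():.2f}")
```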
00:08:45 And furthermore, on the basis of anatomical connectivity
00:08:50 tracing studies, we know that that sparsity
00:08:57 underwrites a hierarchical and very structured
00:09:00 sort of connectivity that might be best understood
00:09:07 a little bit like an onion.
00:09:09 There is a concentric, sometimes referred to as centripetal
00:09:15 by people like Marsel Mesulam,
00:09:17 hierarchical organization to the brain.
00:09:19 So you can think of the brain as in a rough sense,
00:09:23 like an onion, and all the sensory information
00:09:28 and all the afferent outgoing messages
00:09:31 that supply commands to your muscles
00:09:33 or to your secretory organs come from the surface.
00:09:37 So there’s a massive exchange interface
00:09:40 with the world out there on the surface.
00:09:43 And then underneath, there’s a little layer
00:09:45 that sits and looks at the exchange on the surface.
00:09:49 And then underneath that, there’s a layer
00:09:51 right the way down to the very center,
00:09:53 to the deepest part of the onion.
00:09:55 That’s what I mean by a hierarchical organization.
00:09:58 There’s a discernible structure defined
00:10:02 by the sparsity of connections
00:10:04 that lends the architecture a hierarchical structure
00:10:08 that tells one a lot about the kinds of representations
00:10:12 and messages.
00:10:13 Coming back to your earlier question,
00:10:15 is this about the representational capacity
00:10:19 or is it about the anatomy?
00:10:20 Well, one underwrites the other.
00:10:24 If one simply thinks of the brain
00:10:26 as a message passing machine,
00:10:29 a process that is in the service of doing something,
00:10:33 then the circuitry and the connectivity
00:10:37 that shape that message passing also dictate its function.
00:10:42 So you’ve done a lot of amazing work
00:10:44 in a lot of directions.
00:10:46 So let’s look at one aspect of that,
00:10:49 of looking into the brain
00:10:51 and trying to study this onion structure.
00:10:54 What can we learn about the brain by imaging it?
00:10:57 Which is one way to sort of look at the anatomy of it.
00:11:00 Broadly speaking, what are the methods of imaging,
00:11:04 but even bigger, what can we learn about it?
00:11:07 Right, so well, most human neuroimaging
00:11:13 that you might see in science journals
00:11:18 that speaks to the way the brain works,
00:11:22 measures brain activity over time.
00:11:24 So that’s the first thing to say,
00:11:26 that we’re effectively looking at fluctuations
00:11:30 in neuronal responses,
00:11:32 usually in response to some sensory input
00:11:36 or some instruction, some task.
00:11:40 Not necessarily, there’s a lot of interest
00:11:42 in just looking at the brain
00:11:44 in terms of resting state, endogenous,
00:11:46 or intrinsic activity.
00:11:48 But crucially, at every point,
00:11:50 looking at these fluctuations,
00:11:52 either induced or intrinsic in the neural activity,
00:11:57 and understanding them at two levels.
00:11:59 So normally, people would recourse
00:12:03 to two principles of brain organization
00:12:06 that are complementary.
00:12:07 One, functional specialization or segregation.
00:12:10 So what does that mean?
00:12:11 It simply means that there are certain parts of the brain
00:12:16 that may be specialized for certain kinds of processing.
00:12:19 For example, visual motion,
00:12:21 our ability to recognize or to perceive movement
00:12:26 in the visual world.
00:12:27 And furthermore, that specialized processing
00:12:31 may be spatially or anatomically segregated,
00:12:34 leading to functional segregation.
00:12:37 Which means that if I were to compare your brain activity
00:12:40 during a period of viewing a static image,
00:12:45 and then compare that to the responses of fluctuations
00:12:49 in the brain when you were exposed to a moving image,
00:12:52 say a flying bird,
00:12:54 we’d expect to see
00:12:56 restricted, segregated differences in activity.
00:13:01 And those are basically the hotspots
00:13:03 that you see in the statistical parametric maps
00:13:06 that test for the significance of the responses
00:13:08 that are circumscribed.
00:13:11 So now, basically, we’re talking about what
00:13:13 some people have perhaps unkindly called a neo-cartography.
00:13:19 This is a phrenology augmented by modern day neuroimaging,
00:13:24 basically finding blobs or bumps on the brain
00:13:28 that do this or do that,
00:13:30 and trying to understand the cartography
00:13:33 of that functional specialization.
00:13:35 So how much is there such,
00:13:38 this is such a beautiful sort of ideal to strive for.
00:13:43 We humans, scientists, would like this,
00:13:46 to hope that there’s a beautiful structure to this
00:13:48 where it’s, like you said, there’s segregated regions
00:13:51 that are responsible for the different function.
00:13:54 How much hope is there to find such regions
00:13:57 in terms of looking at the progress of studying the brain?
00:14:00 Oh, I think enormous progress has been made
00:14:02 in the past 20 or 30 years.
00:14:06 So this is beyond incremental.
00:14:08 At the advent of brain imaging,
00:14:11 the very notion of functional segregation
00:14:14 was just a hypothesis based upon a century,
00:14:18 if not more, of careful neuropsychology,
00:14:21 looking at people who had lost via insult
00:14:24 or traumatic brain injury particular parts of the brain,
00:14:29 and then saying, well, they can’t do this
00:14:30 or they can’t do that.
00:14:32 For example, losing the visual cortex
00:14:34 and not being able to see,
00:14:35 or losing particular parts of the visual cortex
00:14:39 or regions known as V5
00:14:44 or the middle temporal region, MT,
00:14:47 and noticing that they selectively
00:14:49 could not see moving things.
00:14:51 And so that created the hypothesis
00:14:55 that perhaps visual movement processing
00:14:59 was located in this functionally segregated area.
00:15:02 And you could then go and put invasive electrodes
00:15:05 in animal models and say, yes, indeed,
00:15:08 we can excite activity here.
00:15:10 We can form receptive fields that are sensitive to
00:15:13 or defined in terms of visual motion.
00:15:16 But at no point could you exclude the possibility
00:15:18 that everywhere else in the brain
00:15:20 was also very interested in visual motion.
00:15:23 By the way, I apologize to interrupt,
00:15:24 but a tiny little tangent.
00:15:26 You said animal models, just out of curiosity,
00:15:31 from your perspective, how different is the human brain
00:15:34 versus the other animals
00:15:35 in terms of our ability to study the brain?
00:15:38 Well, clearly, the further away you go from a human brain,
00:15:43 the greater the differences,
00:15:45 but not as remarkable as you might think.
00:15:48 So people will choose their level of approximation
00:15:53 to the human brain,
00:15:54 depending upon the kinds of questions
00:15:56 that they want to answer.
00:15:57 So if you’re talking about sort of canonical principles
00:16:00 of microcircuitry, it might be perfectly okay
00:16:02 to look at a mouse, indeed.
00:16:04 You could even look at flies, worms.
00:16:08 If, on the other hand, you wanted to look at
00:16:09 the finer details of organization of visual cortex
00:16:13 and V1, V2, these are designated patches of cortex
00:16:17 that may do different things, and indeed do.
00:16:20 You’d probably want to use a primate
00:16:23 that looked a little bit more like a human,
00:16:26 because there are lots of ethical issues
00:16:28 in terms of the use of nonhuman primates
00:16:32 to answer questions about human anatomy.
00:16:37 But I think most people assume
00:16:39 that most of the important principles are conserved
00:16:43 in a continuous way, right from, well, yes,
00:16:48 worms right through to you and me.
00:16:53 So now returning to, so that was the early sort of ideas
00:16:56 of studying the functional regions of the brain
00:17:00 by if there’s some damage to it,
00:17:02 to try to infer that that part of the brain
00:17:06 might be somewhat responsible for this type of function.
00:17:09 So where does that lead us?
00:17:11 What are the next steps beyond that?
00:17:12 Right, well, I’ll just actually just reverse a bit,
00:17:16 come back to your sort of notion
00:17:17 that the brain is a magic soup.
00:17:19 That was actually a very prominent idea at one point,
00:17:23 notions such as Lashley’s law of mass action
00:17:28 inherited from the observation that for certain animals,
00:17:33 if you just took out spoonfuls of the brain,
00:17:36 it didn’t matter where you took these spoonfuls out,
00:17:38 they always showed the same kinds of deficits.
00:17:40 So it was very difficult to infer functional specialization
00:17:44 purely on the basis of lesion deficit studies.
00:17:49 But once we had the opportunity
00:17:50 to look at the brain lighting up
00:17:52 and literally its sort of excitement, neuronal excitement
00:17:57 when looking at this versus that,
00:18:01 one was able to say, yes, indeed,
00:18:03 these functionally specialized responses
00:18:05 are very restricted and they’re here or they’re over there.
00:18:08 If I do this, then this part of the brain lights up.
00:18:11 And that became doable in the early 90s.
00:18:16 In fact, shortly before with the advent
00:18:19 of positron emission tomography.
00:18:21 And then functional magnetic resonance imaging
00:18:23 came along in the early 90s.
00:18:26 And since that time, there has been an explosion
00:18:29 of discovery, refinement, confirmation.
00:18:36 There are people who believe that it’s all in the anatomy.
00:18:38 If you understand the anatomy,
00:18:40 then you understand the function at some level.
00:18:43 And many, many hypotheses were predicated
00:18:45 on a deep understanding of the anatomy and the connectivity,
00:18:49 but they were all confirmed
00:18:51 and taken much further with neuroimaging.
00:18:55 So that’s what I meant by we’ve made an enormous amount
00:18:57 of progress in this century indeed,
00:19:01 and in relation to the previous century,
00:19:03 by looking at these functionally selective responses.
00:19:09 But that wasn’t the whole story.
00:19:11 So there’s this sort of neo-phrenology
00:19:13 of finding bumps and hot spots in the brain
00:19:15 that did this or that.
00:19:17 The bigger question was, of course,
00:19:20 the functional integration.
00:19:22 How all of these regionally specific responses
00:19:26 were orchestrated, how they were distributed,
00:19:29 how did they relate to distributed processing
00:19:32 and indeed representations in the brain.
00:19:35 So then you turn to the more challenging issue
00:19:39 of the integration, the connectivity.
00:19:42 And then we come back to this beautiful,
00:19:44 sparse, recurrent, hierarchical connectivity
00:19:49 that seems characteristic of the brain
00:19:51 and probably not many other organs.
00:19:53 But nevertheless, we come back to this challenge
00:19:58 of trying to figure out how everything is integrated.
00:20:01 But what’s your feeling?
00:20:02 What’s the general consensus?
00:20:04 Have we moved away from the magic soup view of the brain?
00:20:07 So there is a deep structure to it.
00:20:11 And then maybe a further question.
00:20:14 You said some people believe that the structure
00:20:16 is most of it, that you can really get
00:20:19 at the core of the function
00:20:20 by just deeply understanding the structure.
00:20:22 Where do you sit on that, do you?
00:20:25 I think it’s got some mileage to it, yes, yeah.
00:20:28 So it’s a worthy pursuit of going,
00:20:31 of studying through imaging and all the different methods
00:20:34 to actually study the structure.
00:20:36 No, absolutely, yeah, yeah.
00:20:38 Sorry, I’m just noting, you were accusing me
00:20:41 of using lots of long words
00:20:42 and then you introduced one there, which is deep,
00:20:44 which is interesting.
00:20:46 Because deep is the sort of millennial equivalent
00:20:50 of hierarchical.
00:20:51 So if you’ve put deep in front of anything,
00:20:53 not only are you very millennial and very trending,
00:20:57 but you’re also implying a hierarchical architecture.
00:21:01 So it is a depth, which is, for me, the beautiful thing.
00:21:05 That’s right, the word deep kind of,
00:21:07 yeah, exactly, it implies hierarchy.
00:21:10 I didn’t even think about that.
00:21:11 That indeed, the implicit meaning
00:21:14 of the word deep is hierarchy.
00:21:16 Yep. Yeah.
00:21:18 So deep inside the onion is the center of your soul.
00:21:22 Beautifully put.
00:21:23 Maybe briefly, if you could paint a picture
00:21:26 of the kind of methods of neuroimaging,
00:21:30 maybe the history which you were a part of,
00:21:33 from statistical parametric mapping.
00:21:35 I mean, just what’s out there that’s interesting
00:21:37 for people maybe outside the field
00:21:40 to understand of what are the actual methodologies
00:21:43 of looking inside the human brain?
00:21:45 Right, well, you can answer that question
00:21:47 from two perspectives.
00:21:48 Basically, it’s the modality.
00:21:49 What kind of signal are you measuring?
00:21:52 And they can range from,
00:21:55 and let’s limit ourselves
00:21:56 to sort of imaging based noninvasive techniques.
00:22:01 So you’ve essentially got brain scanners,
00:22:03 and brain scanners can either measure
00:22:05 the structural attributes, the amount of water,
00:22:07 the amount of fat, or the amount of iron
00:22:09 in different parts of the brain,
00:22:10 and you can make lots of inferences
00:22:11 about the structure of the organ of the sort
00:22:15 that you might have produced from an X-ray,
00:22:17 but a very nuanced X-ray that is looking
00:22:21 at this kind of property or that kind of property.
00:22:24 So looking at the anatomy noninvasively
00:22:27 would be the first sort of neuroimaging
00:22:30 that people might want to employ.
00:22:32 Then you move on to the kinds of measurements
00:22:34 that reflect dynamic function,
00:22:38 and the most prevalent of those fall into two camps.
00:22:42 You’ve got these metabolic, sometimes hemodynamic,
00:22:46 blood related signals.
00:22:48 So these metabolic and or hemodynamic signals
00:22:53 are basically proxies for elevated activity
00:22:58 and message passing and neuronal dynamics
00:23:03 in particular parts of the brain.
00:23:05 Characteristically though, the time constants
00:23:07 of these hemodynamic or metabolic responses
00:23:11 to neural activity are much longer
00:23:14 than the neural activity itself.
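To illustrate that mismatch in time constants, here is a minimal sketch. It assumes an illustrative double-gamma impulse response of the kind commonly used to model hemodynamics; the exact parameters are assumptions for illustration, not a calibrated model:

```python
import numpy as np
from scipy.stats import gamma

# Minimal sketch: brief neural events smeared out by a slow
# hemodynamic response. HRF parameters are illustrative, loosely
# following the common double-gamma shape used in fMRI analysis.
dt = 0.1                                  # seconds per sample
t = np.arange(0, 30, dt)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)  # peak ~5 s, undershoot ~15 s
hrf /= hrf.max()

neural = np.zeros(600)                    # 60 s of "activity"
neural[[50, 55, 300]] = 1.0               # three brief neural events

bold = np.convolve(neural, hrf)[:neural.size]    # predicted BOLD-like signal
print(f"neural events last ~{dt:.1f} s; "
      f"hemodynamic response peaks ~{t[hrf.argmax()]:.1f} s after onset")
```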
00:23:15 And this is referring,
00:23:19 forgive me for the dumb questions,
00:23:20 but this would be referring to blood,
00:23:22 like the flow of blood.
00:23:24 Absolutely, absolutely.
00:23:25 So there’s a ton of,
00:23:26 it seems like there’s a ton of blood vessels in the brain.
00:23:29 Yeah.
00:23:30 So what’s the interaction between the flow of blood
00:23:33 and the function of the neurons?
00:23:35 Is there an interplay there or?
00:23:37 Yup, yup, and that interplay accounts for several careers
00:23:42 of world renowned scientists, yes, absolutely.
00:23:47 So this is known as neurovascular coupling,
00:23:49 is exactly what you said.
00:23:50 It’s how does the neural activity,
00:23:52 the neuronal infrastructure, the actual message passing
00:23:54 that we think underlies our capacity to perceive and act,
00:24:01 how is that coupled to the vascular responses
00:24:06 that supply the energy for that neural processing?
00:24:09 So there’s a delicate web of large vessels,
00:24:13 arteries and veins, that gets progressively finer
00:24:16 and finer in detail until it perfuses
00:24:18 at a microscopic level,
00:24:20 the machinery where little neurons lie.
00:24:23 So coming back to this sort of onion perspective,
00:24:27 we were talking before using the onion as a metaphor
00:24:30 for a deep hierarchical structure,
00:24:32 but also I think it’s just anatomically quite
00:24:36 a useful metaphor.
00:24:37 All the action, all the heavy lifting
00:24:40 in terms of neural computation is done
00:24:41 on the surface of the brain,
00:24:43 and then the interior of the brain is constituted
00:24:47 by fatty wires, essentially, axonal processes
00:24:52 that are enshrouded by myelin sheaths.
00:24:55 And these, when you dissect them, they look fatty and white,
00:24:59 and so it’s called white matter,
00:25:01 as opposed to the actual neuropil,
00:25:04 which does the computation constituted largely by neurons,
00:25:07 and that’s known as gray matter.
00:25:08 So the gray matter is a surface or a skin
00:25:13 that sits on top of this big ball,
00:25:16 now we are talking magic soup,
00:25:17 but a big ball of connections like spaghetti,
00:25:20 very carefully structured with sparse connectivity
00:25:23 that preserves this deep hierarchical structure,
00:25:25 but all the action takes place on the surface,
00:25:28 on the cortex of the onion, and that means
00:25:34 that you have to supply the right amount of blood flow,
00:25:38 the right amount of nutrient,
00:25:41 which is rapidly absorbed and used by neural cells
00:25:45 that don’t have the same capacity
00:25:46 that your leg muscles would have
00:25:48 to basically spend their energy budget
00:25:52 and then claim it back later.
00:25:55 So one peculiar thing about cerebral metabolism,
00:25:58 brain metabolism, is it really needs to be driven
00:26:01 in the moment, which means you basically
00:26:03 have to turn on the taps.
00:26:04 So if there’s lots of neural activity
00:26:08 in one part of the brain, a little patch
00:26:10 of a few millimeters, even less possibly,
00:26:14 you really do have to water that piece
00:26:16 of the garden now and quickly,
00:26:18 and by quickly I mean within a couple of seconds.
00:26:21 So that contains a lot of, hence the imaging
00:26:26 could tell you a story of what’s happening.
00:26:28 Absolutely, but it is slightly compromised
00:26:32 in terms of the resolution.
00:26:33 So the deployment of these little microvessels
00:26:37 that water the garden to enable the neural activity
00:26:42 to play out, the spatial resolution
00:26:45 is in order of a few millimeters,
00:26:48 and crucially, the temporal resolution
00:26:50 is the order of a few seconds.
00:26:52 So you can’t get right down and dirty
00:26:54 into the actual spatial and temporal scale
00:26:57 of neural activity in and of itself.
00:26:59 To do that, you’d have to turn
00:27:00 to the other big imaging modality,
00:27:02 which is the recording of electromagnetic signals
00:27:05 as they’re generated in real time.
00:27:07 So here, the temporal bandwidth, if you like,
00:27:10 or the lower limit on the temporal resolution
00:27:12 is incredibly small, talking about milliseconds.
00:27:17 And then you can get into the phasic fast responses
00:27:23 that is in and of itself the neural activity,
00:27:27 and start to see the succession or cascade
00:27:32 of hierarchical recurrent message passing
00:27:35 evoked by a particular stimulus.
00:27:37 But the problem is you’re looking
00:27:39 at electromagnetic signals that have passed
00:27:42 through an enormous amount of magic soup
00:27:45 or spaghetti of connectivity, and through the scalp
00:27:49 and the skull, and it’s become spatially very diffuse.
00:27:52 So it’s very difficult to know where you are.
00:27:54 So you’ve got this sort of catch-22.
00:27:58 You can either use an imaging modality
00:28:00 that tells you within millimeters
00:28:02 which part of the brain is activated,
00:28:04 but you don’t know when,
00:28:05 or you’ve got these electromagnetic EEG, MEG setups
00:28:10 that tell you to within a few milliseconds
00:28:15 when something has responded, but you don’t know where.
00:28:19 So you’ve got these two complementary measures,
00:28:22 either indirect via the blood flow,
00:28:25 or direct via the electromagnetic signals
00:28:28 caused by neural activity.
00:28:31 These are the two big imaging devices.
00:28:33 And then the second level of responding to your question,
00:28:36 what are the, from the outside,
00:28:39 what are the big ways of using this technology?
00:28:44 So once you’ve chosen the kind of neural imaging
00:28:47 that you want to use to answer your set questions,
00:28:50 and sometimes it would have to be both,
00:28:53 then you’ve got a whole raft of analyses,
00:28:57 time series analyses usually, that you can bring to bear
00:29:01 in order to answer your questions
00:29:04 or address your hypothesis about those data.
00:29:07 And interestingly, they both fall
00:29:08 into the same two camps we were talking about before,
00:29:11 this dialectic between specialization and integration,
00:29:14 differentiation and integration.
00:29:17 So it’s the cartography, the blobology analyses.
00:29:20 I apologize, I probably shouldn’t interrupt so much,
00:29:23 but just heard a fun word, the blah.
00:29:27 Blobology.
00:29:27 Blobology.
00:29:29 It’s a neologism, which means the study of blobs.
00:29:33 So, nothing but blobs.
00:29:34 Are you being witty and humorous,
00:29:36 or does the word blobology ever appear
00:29:39 in a textbook somewhere?
00:29:40 It would appear in a popular book.
00:29:43 It would not appear in a worthy specialist journal.
00:29:47 Yeah, I thought so.
00:29:48 It’s the fond word for the study of literally little blobs
00:29:53 on brain maps showing activations.
00:29:56 So the kind of thing that you’d see in the newspapers
00:29:59 on ABC or BBC reporting the latest finding
00:30:04 from brain imaging.
00:30:05 Interestingly though, the maths involved
00:30:10 in that stream of analysis does actually call upon
00:30:15 the mathematics of blobs.
00:30:17 So seriously, they’re actually called Euler characteristics
00:30:21 and they have a lot of fancy names in mathematics.
00:30:27 We’ll talk about it, about your ideas
00:30:28 in free energy principle.
00:30:30 I mean, there’s echoes of blobs there
00:30:33 when you consider sort of entities,
00:30:36 mathematically speaking.
00:30:38 Yes, absolutely.
00:30:40 Well, circumscribed, well defined
00:30:43 entities, well, from the free energy point of view,
00:30:48 entities of anything, but from the point of view
00:30:50 of the analysis, the cartography of the brain,
00:30:55 these are the entities that constitute the evidence
00:30:59 for this functional segregation.
00:31:01 You have segregated this function in this blob
00:31:04 and it is not outside of the blob.
00:31:06 And that’s basically the, if you were a map maker
00:31:10 of America and you did not know its structure,
00:31:14 the first thing you would do in constituting
00:31:16 or creating a map would be to identify the cities,
00:31:19 for example, or the mountains or the rivers.
00:31:22 All of these uniquely spatially localizable features,
00:31:26 possibly topological features have to be placed somewhere
00:31:30 because that requires a mathematics of identifying
00:31:33 what does a city look like on a satellite image
00:31:36 or what does a river look like
00:31:37 or what does a mountain look like?
00:31:39 What, you know, what data features
00:31:42 would evidence that particular topological feature,
00:31:46 that particular thing
00:31:48 that you wanted to put on the map?
00:31:50 And they normally are characterized
00:31:52 in terms of literally these blobs
00:31:54 or these sort of, another way of looking at this
00:31:57 is that a certain statistical measure
00:32:01 of the degree of activation crosses a threshold
00:32:04 and in crossing that threshold
00:32:06 in the spatially restricted part of the brain,
00:32:09 it creates a blob.
00:32:11 And that’s basically what statistical parametric mapping does.
00:32:14 It’s basically mathematically finessed blobology.
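For readers who want the flavor of that thresholding step, here is a hedged sketch with synthetic data and an arbitrary, uncorrected threshold; real SPM additionally corrects the threshold using random field theory, which is where the Euler characteristic mathematics mentioned above comes in:

```python
import numpy as np
from scipy import ndimage

# Hedged sketch of "mathematically finessed blobology": threshold a
# synthetic statistical map and label contiguous supra-threshold blobs.
rng = np.random.default_rng(1)
stat_map = rng.normal(size=(64, 64))   # fake z-statistic image
stat_map[20:26, 30:36] += 4.0          # plant one "activation"

mask = stat_map > 3.1                  # illustrative, uncorrected z threshold
blobs, n_blobs = ndimage.label(mask)   # connected-component labelling
sizes = ndimage.sum(mask, blobs, index=range(1, n_blobs + 1))
print(f"{n_blobs} blob(s); sizes in voxels: {sizes}")
```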
00:32:19 Okay, so those are the,
00:32:20 you kind of described these two methodologies for,
00:32:24 one is temporally noisy, one is spatially noisy
00:32:26 and you kind of have to play and figure out
00:32:28 what can be useful.
00:32:31 It’d be great if you can sort of comment.
00:32:33 I got a chance recently to spend a day
00:32:34 at a company called Neuralink
00:32:37 that uses brain computer interfaces
00:32:39 and their dream is to, well,
00:32:42 there’s a bunch of sort of dreams,
00:32:45 but one of them is to understand the brain
00:32:47 by sort of, you know, getting in there
00:32:51 past the so called sort of factory wall,
00:32:53 getting in there and be able to listen,
00:32:55 communicate both directions.
00:32:57 What are your thoughts about this,
00:32:59 the future of this kind of technology
00:33:01 of brain computer interfaces
00:33:02 to be able to now have a window
00:33:06 or direct contact within the brain
00:33:08 to be able to measure some of the signals,
00:33:10 to be able to sense signals,
00:33:11 to understand some of the functionality of the brain?
00:33:15 Ambivalent, my sense is ambivalent.
00:33:17 So it’s a mixture of good and bad
00:33:19 and I acknowledge that freely.
00:33:22 So the good bits, if you just look at the legacy
00:33:24 of that kind of reciprocal but invasive
00:33:29 brain stimulation,
00:33:31 I didn’t paint a complete picture
00:33:33 when I was talking about sort of the ways
00:33:34 we understand the brain prior to neuroimaging.
00:33:37 It wasn’t just lesion deficit studies.
00:33:39 Some of the early work, in fact,
00:33:41 literally 100 years from where we’re sitting
00:33:43 at the institution of neurology,
00:33:45 was done by stimulating the brain of say dogs
00:33:50 and looking at how they responded
00:33:51 with their muscles or with their salivation
00:33:56 and imputing what that part of the brain must be doing.
00:34:00 If I stimulate it and evoke this kind of response,
00:34:06 then that tells me quite a lot
00:34:07 about the functional specialization.
00:34:09 So there’s a long history of brain stimulation
00:34:12 which continues to enjoy a lot of attention nowadays.
00:34:16 Positive attention.
00:34:17 Oh yes, absolutely.
00:34:19 You know, deep brain stimulation for Parkinson’s disease
00:34:22 is now a standard treatment
00:34:23 and also a wonderful vehicle
00:34:25 to try and understand the neuronal dynamics
00:34:29 underlying movement disorders like Parkinson’s disease.
00:34:33 Even interest in magnetic stimulation,
00:34:37 stimulating with magnetic fields,
00:34:39 and whether it will work in people who are depressed, for example.
00:34:43 Quite a crude level of understanding what you’re doing,
00:34:45 but there is historical evidence
00:34:49 that these kinds of brute force interventions
00:34:51 do change things.
00:34:53 They, you know, it’s a little bit like banging the TV
00:34:56 when the valves aren’t working properly,
00:34:58 but it’s still, it works.
00:35:00 So, you know, there is a long history.
00:35:04 Brain computer interfacing or BCI,
00:35:08 I think is a beautiful example of that.
00:35:11 It’s sort of carved out its own niche
00:35:12 and its own aspirations
00:35:14 and there’ve been enormous advances within limits.
00:35:20 Advances in terms of our ability to understand
00:35:25 how the brain, the embodied brain,
00:35:29 engages with the world.
00:35:32 I’m thinking here of sensory substitution,
00:35:34 of augmenting our sensory capacities
00:35:37 by giving ourselves extra ways of sensing
00:35:40 and sampling the world,
00:35:42 ranging from sort of trying to replace lost visual signals
00:35:48 through to giving people completely new signals.
00:35:50 So, one of the, I think, most engaging examples of this
00:35:57 is equipping people with a sense of magnetic fields.
00:36:00 So you can actually give them magnetic sensors
00:36:03 that enable them to feel,
00:36:05 should we say, tactile pressure around their tummy,
00:36:08 where they are in relation to the magnetic field of the Earth.
00:36:13 And after a few weeks, they take it for granted.
00:36:17 They integrate it, they embody this,
00:36:19 assimilate this new sensory information
00:36:22 into the way that they literally feel their world,
00:36:25 but now equipped with this sense of magnetic direction.
00:36:29 So that tells you something
00:36:31 about the brain’s plastic potential
00:36:32 to remodel and its plastic capacity
00:36:37 to suddenly try to explain the sensory data at hand
00:36:43 by augmenting the sensory sphere
00:36:48 and the kinds of things that you can measure.
00:36:51 Clearly, that’s purely for entertainment
00:36:54 and understanding the nature and the power of our brains.
00:37:00 I would imagine that most BCI is pitched
00:37:03 at solving clinical and human problems
00:37:08 such as locked in syndrome, such as paraplegia,
00:37:12 or replacing lost sensory capacities
00:37:16 like blindness and deafness.
00:37:18 So then we come to the negative part of my ambivalence,
00:37:24 the other side of it.
00:37:26 So I don’t want to be deflationary
00:37:30 because many of my deflationary comments
00:37:33 are probably born more out of ignorance than anything else.
00:37:37 But generally speaking, the bandwidth
00:37:42 and the bit rates that you get
00:37:44 from brain computer interfaces as we currently know them,
00:37:49 we’re talking about bits per second.
00:37:51 So that would be like me only being able to communicate
00:37:55 with the world or with you using very, very, very slow Morse code.
00:38:06 And it is not even within an order of magnitude
00:38:13 near what we actually need for an enactive realization
00:38:18 of what people aspire to when they think about
00:38:21 sort of curing people with paraplegia or replacing sight
00:38:28 despite heroic efforts.
00:38:30 So one has to ask, is there a lower bound
00:38:33 on the kinds of recurrent information exchange
00:38:41 between a brain and some augmented or artificial interface?
00:38:46 And then we come back to, interestingly,
00:38:51 what I was talking about before,
00:38:52 which is if you’re talking about function
00:38:56 in terms of inference, and I presume we’ll get to that
00:39:00 later on in terms of the free energy principle,
00:39:01 then at the moment, there may be fundamental reasons
00:39:05 to assume that is the case.
00:39:06 We’re talking about ensemble activity.
00:39:08 We’re talking about basically, for example,
00:39:13 let’s paint the challenge facing brain computer interfacing
00:39:20 in terms of controlling another system
00:39:24 that is highly and deeply structured,
00:39:27 very relevant to our lives, very nonlinear,
00:39:30 that rests upon the kind of nonequilibrium
00:39:34 steady states and dynamics that the brain does,
00:39:37 the weather, all right?
00:39:39 So imagine you had some very aggressive satellites
00:39:45 that could produce signals that could perturb
00:39:48 some little parts of the weather system.
00:39:53 And then what you’re asking now is,
00:39:55 can I meaningfully get into the weather
00:39:58 and change it meaningfully and make the weather respond
00:40:01 in a way that I want it to?
00:40:03 You’re talking about chaos control on a scale
00:40:06 which is almost unimaginable.
00:40:08 So there may be fundamental reasons
00:40:11 why BCI, as you might read about it in a science fiction novel,
00:40:18 aspirational BCI may never actually work
00:40:22 in the sense that to really be integrated
00:40:26 and be part of the system is a requirement
00:40:32 that requires you to have evolved with that system,
00:40:35 that you have to be part of a very delicately structured,
00:40:43 deeply structured, dynamic, ensemble activity
00:40:48 that is not like rewiring a broken computer
00:40:51 or plugging in a peripheral interface adapter.
00:40:54 It is much more like getting into the weather patterns
00:40:58 or, come back to your magic soup,
00:41:00 getting into the active matter
00:41:02 and meaningfully relating it to the outside world.
00:41:07 So I think there are enormous challenges there.
00:41:09 So I think the example of the weather is a brilliant one.
00:41:13 And I think you paint a really interesting picture
00:41:15 and it wasn’t as negative as I thought.
00:41:17 It’s essentially saying that it might be
00:41:19 incredibly challenging, including the lower bound
00:41:22 on the bandwidth and so on.
00:41:23 I kind of, so just to full disclosure,
00:41:26 I come from the machine learning world.
00:41:28 So my natural thought is the hardest part
00:41:32 is the engineering challenge of controlling the weather,
00:41:34 of getting those satellites up and running and so on.
00:41:37 And once they are, then the rest is fundamentally
00:41:42 the same approaches that allow you to win in the game of Go
00:41:46 will allow you to potentially play in this soup,
00:41:49 in this chaos.
00:41:51 So I have a hope that sort of machine learning methods
00:41:54 will help us play in this soup.
00:41:58 But perhaps you’re right that it is a biology
00:42:04 and the brain is just an incredible system
00:42:08 that may be almost impossible to get into.
00:42:12 But for me, what seems impossible
00:42:15 is the incredible mess of blood vessels
00:42:19 that you also described. And
00:42:22 we also value the brain.
00:42:24 You can’t make any mistakes, you can’t damage things.
00:42:27 So to me, that engineering challenge seems nearly impossible.
00:42:31 One of the things I was really impressed by at Neuralink
00:42:35 is just talking to brilliant neurosurgeons
00:42:39 and the roboticists that made me realize
00:42:43 that even though it seems impossible,
00:42:45 if anyone can do it, it’s some of these world class
00:42:48 engineers that are trying to take it on.
00:42:50 So I think the conclusion of our discussion here
00:42:55 of this part is basically that the problem is really hard
00:43:00 but hopefully not impossible.
00:43:02 Absolutely.
00:43:03 So if it’s okay, let’s start with the basics.
00:43:07 So you’ve also formulated a fascinating principle,
00:43:12 the free energy principle.
00:43:13 Could we maybe start at the basics
00:43:15 and what is the free energy principle?
00:43:19 Well, in fact, the free energy principle
00:43:23 inherits a lot from the building
00:43:29 of these data analytic approaches
00:43:31 to these very high dimensional time series
00:43:34 you get from the brain.
00:43:35 So I think it’s interesting to acknowledge that.
00:43:37 And in particular, the analysis tools
00:43:39 that try to address the other side,
00:43:43 which is a functional integration,
00:43:44 so the connectivity analyses.
00:43:46 So that’s on the one hand, but I should also acknowledge
00:43:51 it inherits an awful lot from machine learning as well.
00:43:55 So the free energy principle is just a formal statement
00:44:02 that the existential imperatives for any system
00:44:08 that manages to survive in a changing world
00:44:11 can be cast as an inference problem
00:44:18 in the sense that you can interpret
00:44:21 the probability of existing as the evidence that you exist.
00:44:25 And if you can write down that problem of existence
00:44:29 as a statistical problem,
00:44:30 then you can use all the maths that has been developed
00:44:33 for inference to understand and characterize
00:44:38 the ensemble dynamics that must be in play
00:44:42 in the service of that inference.
00:44:45 So technically, what that means is
00:44:48 you can always interpret anything that exists
00:44:52 in virtue of being separate from the environment
00:44:55 in which it exists as trying to minimize
00:45:02 variational free energy.
00:45:03 And if you’re from the machine learning community,
00:45:05 you will know that as a negative evidence lower bound
00:45:09 or a negative elbow, which is the same as saying
00:45:13 you’re trying to maximize or it will look as if
00:45:16 all your dynamics are trying to maximize
00:45:19 the complement of that which is the marginal likelihood
00:45:24 or the evidence for your own existence.
00:45:26 So that’s basically the free energy principle.
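In the notation of the variational inference literature he is drawing on, with generic symbols (observations o, hidden states z, approximate posterior q), the quantity reads:

```latex
F[q, o] \;=\; \mathbb{E}_{q(z)}\!\left[\ln q(z) - \ln p(o, z)\right]
        \;=\; -\mathrm{ELBO}
        \;\geq\; -\ln p(o)
```

So minimizing variational free energy F is the same as maximizing a lower bound on the log evidence ln p(o), the marginal likelihood mentioned above.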
00:45:30 But to even take a sort of a small step backwards,
00:45:34 you said the existential imperative.
00:45:38 There’s a lot of beautiful poetic words here,
00:45:40 but to put it crudely, it’s a fascinating idea
00:45:46 of basically just of trying to describe
00:45:49 if you’re looking at a blob,
00:45:51 how do you know this thing is alive?
00:45:54 What does it mean to be alive?
00:45:55 What does it mean to exist?
00:45:57 And so you can look at the brain,
00:45:59 you can look at parts of the brain,
00:46:00 or this is just a general principle
00:46:02 that applies to almost any system.
00:46:07 That’s just a fascinating sort of philosophically
00:46:10 at every level question and a methodology
00:46:13 to try to answer that question.
00:46:14 What does it mean to be alive?
00:46:16 So that’s a huge endeavor and it’s nice
00:46:21 that there’s at least some,
00:46:23 from some perspective, a clean answer.
00:46:25 So maybe can you talk about that optimization view of it?
00:46:30 So what’s trying to be minimized, maximized?
00:46:33 A system that’s alive, what is it trying to minimize?
00:46:36 Right, you’ve made a big move there.
00:46:40 First of all, it’s good to make big moves.
00:46:45 But you’ve assumed that the thing exists
00:46:52 in a state that could be living or nonliving.
00:46:54 So I may ask you, what licenses you
00:46:57 to say that something exists?
00:47:00 That’s why I use the word existential.
00:47:02 It’s beyond living, it’s just existence.
00:47:05 So if you drill down onto the definition
00:47:08 of things that exist, then they have certain properties
00:47:13 if you borrow the maths
00:47:16 from nonequilibrium steady state physics
00:47:19 that enable you to interpret their existence
00:47:26 in terms of this optimization procedure.
00:47:29 So it’s good you introduced the word optimization.
00:47:32 So what the free energy principle
00:47:36 in its sort of most ambitious,
00:47:39 but also most deflationary and simplest, says
00:47:44 is that if something exists,
00:47:47 then it must, by the mathematics
00:47:51 of nonequilibrium steady state,
00:47:55 exhibit properties that make it look
00:47:59 as if it is optimizing a particular quantity.
00:48:03 And it turns out that particular quantity
00:48:06 happens to be exactly the same
00:48:08 as the evidence lower bound in machine learning
00:48:11 or Bayesian model evidence in Bayesian statistics.
00:48:15 Or, and then I can list a whole other list
00:48:18 of ways of understanding this key quantity,
00:48:23 which is a bound on surprise or self information
00:48:29 if you have information theory.
00:48:31 There are a number of different perspectives
00:48:34 on this quantity.
00:48:34 It’s just basically the negative log probability
00:48:36 of being in a particular state.
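The different perspectives listed here correspond to standard rearrangements of the same quantity, again with generic symbols:

```latex
F \;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(z)\,\|\,p(z \mid o)\right]}_{\text{divergence}\,\geq\,0} \;-\; \ln p(o)
  \;=\; \underbrace{D_{\mathrm{KL}}\!\left[q(z)\,\|\,p(z)\right]}_{\text{complexity}} \;-\; \underbrace{\mathbb{E}_{q}\!\left[\ln p(o \mid z)\right]}_{\text{accuracy}}
  \;=\; \underbrace{\mathbb{E}_{q}\!\left[-\ln p(o, z)\right]}_{\text{expected energy}} \;-\; \underbrace{H[q]}_{\text{entropy}}
```

The first form gives the bound on surprise (the negative log probability), and the last gives the thermodynamic reading, energy minus entropy, hence "free energy."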
00:48:40 I’m telling this story as an honest,
00:48:42 an attempt to answer your question.
00:48:45 And I’m answering it as if I was pretending
00:48:49 to be a physicist who was trying to understand
00:48:52 the fundaments of nonequilibrium steady state.
00:48:58 And I shouldn’t really be doing that
00:48:59 because the last time I was taught physics,
00:49:02 I was in my 20s.
00:49:03 What kind of systems,
00:49:04 when you think about the free energy principle,
00:49:06 what kind of systems are you imagining
00:49:08 as a sort of more specific kind of case study?
00:49:11 Yeah, I’m imagining a range of systems,
00:49:15 but at its simplest, a single celled organism
00:49:23 that can be identified from its eco niche
00:49:26 or its environment.
00:49:27 So at its simplest, that’s basically
00:49:31 what I always imagined in my head.
00:49:33 And you may ask, well, is there any,
00:49:36 how on earth can you even elaborate questions
00:49:41 about the existence of a single drop of oil, for example?
00:49:48 But there are deep questions there.
00:49:49 Why doesn’t the oil, why doesn’t the thing,
00:49:52 the interface between the drop of oil
00:49:55 that contains an interior
00:49:57 and the thing that is not the drop of oil,
00:50:00 which is the solvent in which it is immersed,
00:50:03 how does that interface persist over time?
00:50:07 Why doesn’t the oil just dissolve into solvent?
00:50:10 So what special properties of the exchange
00:50:16 between the surface of the oil drop
00:50:18 and the external states in which it’s immersed,
00:50:22 if you’re a physicist, say it would be the heat bath.
00:50:24 You’ve got a physical system, an ensemble again,
00:50:28 we’re talking about density dynamics, ensemble dynamics,
00:50:30 an ensemble of atoms or molecules immersed in the heat bath.
00:50:35 But the question is, how did the heat bath get there?
00:50:38 And why does it not dissolve?
00:50:41 How is it maintaining itself?
00:50:42 Exactly.
00:50:43 What actions is it?
00:50:44 I mean, it’s such a fascinating idea of a drop of oil
00:50:47 and I guess it would dissolve in water,
00:50:50 no, it wouldn’t dissolve in water.
00:50:51 So what?
00:50:52 Precisely, so why not?
00:50:53 So why not?
00:50:54 Why not?
00:50:55 And how do you mathematically describe,
00:50:57 I mean, it’s such a beautiful idea.
00:50:58 And also the idea of like, where does the thing,
00:51:02 where does the drop of oil end and where does it begin?
00:51:07 Right, so I mean, you’re asking deep questions,
00:51:10 deep in a nonmillennial sense here.
00:51:12 In a hierarchical sense.
00:51:16 But what you can do, so this is the deflationary part of it.
00:51:21 Can I just qualify my answer by saying that normally
00:51:23 when I’m asked this question,
00:51:24 I answer from the point of view of a psychologist,
00:51:26 we talk about predictive processing and predictive coding
00:51:29 and the brain as an inference machine,
00:51:31 but you haven’t asked me from that perspective,
00:51:33 I’m answering from the point of view of a physicist.
00:51:36 So the question is not so much why,
00:51:41 but if it exists, what properties must it display?
00:51:44 So that’s the deflationary part of the free energy principle.
00:51:46 The free energy principle does not supply an answer
00:51:50 as to why, it’s saying if something exists,
00:51:54 then it must display these properties.
00:51:57 That’s the sort of thing that’s on offer.
00:52:01 And it so happens that these properties it must display
00:52:05 are actually intriguing and have this inferential gloss,
00:52:10 this sort of self evidencing gloss that follows from the fact
00:52:14 that the very preservation of the boundary
00:52:19 between the oil drop and the not oil drop
00:52:22 requires an optimization of a particular function
00:52:26 or a functional that defines the presence
00:52:30 or the existence of this oil drop,
00:52:33 which is why I started with existential imperatives.
00:52:36 It is a necessary condition for existence
00:52:39 that this must occur because the boundary
00:52:42 basically defines the thing that’s existing.
00:52:46 So it’s that self-assembly aspect
00:52:47 that you were hinting at, known in biology
00:52:53 sometimes as autopoiesis and
00:52:56 in computational chemistry as self-assembly.
00:53:00 It’s the, what does it look like?
00:53:03 Sorry, how would you describe things
00:53:06 that configure themselves out of nothing?
00:53:08 The way they clearly demarcate themselves
00:53:12 from the states or the soup in which they are immersed.
00:53:18 So from the point of view of computational chemistry,
00:53:20 for example, you would just understand that
00:53:23 as a configuration of a macro molecule
00:53:25 to minimize its free energy, its thermodynamic free energy.
00:53:28 It’s exactly the same principle that we’ve been talking about
00:53:31 that thermodynamic free energy is just the negative ELBO.
00:53:35 It’s the same mathematical construct.
00:53:38 So the very emergence of existence, of structure, of form
00:53:42 that can be distinguished from the environment
00:53:45 or the thing that is not the thing
00:53:49 necessitates the existence of an objective function
00:53:56 that it looks as if it is minimizing.
00:53:58 It’s finding a free energy minima.
00:54:00 And so just to clarify, I’m trying to wrap my head around.
00:54:04 So the free energy principle says that if something exists,
00:54:09 these are the properties it should display.
00:54:11 Yeah.
00:54:12 So what that means is we can’t just look,
00:54:17 we can’t just go into a soup and there’s no mechanism.
00:54:21 Free energy principle doesn’t give us a mechanism
00:54:23 to find the things that exist.
00:54:25 Is that what’s implying, is being implied
00:54:28 that you can kind of use it to reason,
00:54:33 to think about like, study a particular system
00:54:36 and say, does this exhibit these qualities?
00:54:40 That’s an excellent question.
00:54:42 But to answer that, I’d have to return
00:54:44 to your previous question about what’s the difference
00:54:46 between living and nonliving things.
00:54:48 Yes, well, actually, sorry.
00:54:51 So yeah, maybe we can go there.
00:54:54 Maybe we can go there, you kind of drew a line
00:54:57 and forgive me for the stupid questions,
00:54:58 but you kind of drew a line between living and existing.
00:55:02 Is there an interesting sort of distinction?
00:55:07 Yeah, I think there is.
00:55:09 So things do exist, grains of sand,
00:55:15 rocks on the moon, trees, you.
00:55:19 So all of these things can be separated from the environment
00:55:24 in which they are immersed.
00:55:26 And therefore, they must at some level
00:55:28 be optimizing their free energy,
00:55:32 taking this sort of model evidence interpretation
00:55:36 of this quantity that basically means
00:55:38 they’re self evidencing.
00:55:39 Another nice little twist of phrase here
00:55:42 is that you are your own existence proof,
00:57:45 statistically speaking, which I don’t think
00:57:48 I said first, somebody else did, but I love that phrase.
00:55:53 You are your own existence proof.
00:55:55 Yeah, so it’s so existential, isn’t it?
00:55:59 I’m gonna have to think about that for a few days.
00:56:01 That’s a beautiful line.
00:56:06 So the step through to answer your question
00:56:09 about what’s it good for,
00:56:13 we’ll go along the following lines.
00:56:15 First of all, you have to define what it means
00:56:18 to exist, which now, as you’ve rightly pointed out,
00:56:22 you have to define what probabilistic properties
00:56:25 must the states of something possess
00:56:27 so it knows where it finishes.
00:56:30 And then you write that down in terms
00:56:32 of statistical dependencies, again, sparsity.
00:56:36 Again, it’s not what’s connected or what’s correlated
00:56:39 or what depends upon, it’s what’s not correlated
00:56:43 and what doesn’t depend upon something.
00:56:45 Again, it comes down to the deep structures,
00:56:49 not in this instance, hierarchical,
00:56:50 but the structures that emerge
00:56:54 from removing connectivity and dependency.
00:56:56 And in this instance, basically being able
00:57:00 to identify the surface of the oil drop
00:57:02 from the water in which it is immersed.
00:57:06 And when you do that, you start to realize,
00:57:09 well, there are actually four kinds of states
00:57:12 in any given universe that contains anything.
00:57:15 The things that are internal to the surface,
00:57:18 the things that are external to the surface
00:57:20 and the surface in and of itself,
00:57:22 which is why I use a metaphor,
00:57:24 a little single celled organism
00:57:25 that has an interior and exterior
00:57:27 and then the surface of the cell.
00:57:29 And that’s mathematically a Markov blanket.
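Formally, writing internal states as mu, external states as eta, and blanket states as b, the Markov blanket condition is the conditional independence:

```latex
p(\mu, \eta \mid b) \;=\; p(\mu \mid b)\, p(\eta \mid b)
```

That is, given the blanket (the surface), the inside and the outside carry no further information about one another.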
00:57:32 Just to pause, I’m in awe of this concept
00:57:34 that there’s the stuff outside the surface,
00:57:36 stuff inside the surface and the surface itself,
00:57:38 the Markov blanket.
00:57:40 It’s just the most beautiful kind of notion
00:57:43 about trying to explore what it means
00:57:46 to exist mathematically.
00:57:48 I apologize, it’s just a beautiful idea.
00:57:50 But it came out of California, so that's...
00:57:53 I changed my mind.
00:57:54 I take it all back.
00:57:55 So anyway, so you were just talking
00:57:59 about the surface, about the Markov blanket.
00:58:01 So this surface, these blanket states,
00:58:04 are now defined
00:58:09 in relation to these independencies,
00:58:17 in terms of which states, internal, blanket,
00:58:21 or external,
00:58:23 can influence each other
00:58:25 and which cannot influence each other.
00:58:27 You can now take standard results
00:58:30 that you would find in nonequilibrium physics
00:58:33 or steady-state thermodynamics or hydrodynamics,
00:58:37 usually out-of-equilibrium solutions,
00:58:41 and apply them to this partition.
00:58:43 And what it looks like is that all the normal gradient flows
00:58:48 that you would associate with any nonequilibrium system
00:58:52 apply in such a way that part of the Markov blanket
00:58:57 and the internal states seem to be hill climbing
00:59:01 or doing a gradient descent on the same quantity.
00:59:05 And that means that you can now describe
00:59:09 the very existence of this oil drop.
00:59:13 You can write down the existence of this oil drop
00:59:16 in terms of flows, dynamics, equations of motion,
00:59:20 where the blanket states, or part of them,
00:59:24 which we call active states, and the internal states
00:59:28 now seem to be, and must be, trying to look
00:59:32 as if they're minimizing the same function,
00:59:35 which is the surprise, the low probability of occupying these states.
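One common way this is written in the free-energy literature, sketched here from the standard formulation rather than derived in the conversation: at nonequilibrium steady state with density $p^{*}(x)$, the average flow of states decomposes into a dissipative descent on surprisal plus a conservative, solenoidal part,

$$\dot{x} = f(x) + \omega, \qquad f(x) = (\Gamma - Q)\,\nabla \ln p^{*}(x)$$

where $\Gamma$ scales the random fluctuations $\omega$ and $Q$ is antisymmetric; the gradient term is the hill climbing on the same quantity being described here.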
00:59:39 The interesting thing is, what would they be called
00:59:44 if you were trying to describe these things?
00:59:45 So what we’re talking about are internal states,
00:59:50 external states and blanket states.
00:59:52 Now let’s carve the blanket states
00:59:54 into two sensory states and active states.
00:59:57 Operationally, it has to be the case
00:59:59 that in order for this carving up
01:00:01 into different sets of states to exist,
01:00:04 the active states of the Markov blanket
01:00:06 cannot be influenced by the external states.
01:00:09 And we already know that the internal states
01:00:11 can’t be influenced by the external states
01:00:13 because the blanket separates them.
01:00:15 So what does that mean?
01:00:16 Well, it means the active states and the internal states
01:00:19 are now jointly not influenced by external states.
01:00:23 They only have autonomous dynamics.
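Stated as a constraint on a toy influence matrix, with the encoding entirely my own illustration, the partition requires that no external state directly drives an internal or active state:

```python
import numpy as np

# influence[i, j] = True means state j directly influences state i.
INTERNAL, SENSORY, ACTIVE, EXTERNAL = 0, 1, 2, 3

influence = np.zeros((4, 4), dtype=bool)
influence[SENSORY, EXTERNAL] = True   # external -> sensory
influence[INTERNAL, SENSORY] = True   # sensory  -> internal
influence[ACTIVE, INTERNAL]  = True   # internal -> active
influence[EXTERNAL, ACTIVE]  = True   # active   -> external

# Autonomy: internal and active states are jointly closed to direct
# influence from external states, exactly as described above.
autonomous = [INTERNAL, ACTIVE]
assert not influence[np.ix_(autonomous, [EXTERNAL])].any()
```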
01:00:26 So now you’ve got a picture of an oil drop
01:00:30 that has autonomy, it has autonomous states,
01:00:34 it has autonomous states in the sense
01:00:35 that there must be some parts of the surface of the oil drop
01:00:38 that are not influenced by the external states
01:00:40 and all the interior.
01:00:41 And together, those two sets of states endow
01:00:44 even a little oil drop with autonomous states
01:00:47 that look as if they are optimizing
01:00:51 their variational free energy or their negative ELBO,
01:00:56 their model evidence.
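For reference, a hedged restatement of the quantities being named here: the variational free energy $F$ upper-bounds surprisal, so its negative is the evidence lower bound (ELBO) on log model evidence,

$$F(s, \mu) = \underbrace{D_{\mathrm{KL}}\!\big[q_{\mu}(\eta)\,\|\,p(\eta \mid s)\big]}_{\geq 0} - \ln p(s \mid m) \;\geq\; -\ln p(s \mid m)$$

so minimizing $F$ maximizes a bound on the evidence for the model $m$, which is what self evidencing means here.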
01:00:59 And that would be an interesting intellectual exercise.
01:01:03 And you could say, you could even go into the realms
01:01:05 of panpsychism, that everything that exists
01:01:08 is implicitly making inferences, self evidencing.
01:01:13 Now we make the next move, but what about living things?
01:01:17 I mean, so let me ask you,
01:01:19 what’s the difference between an oil drop
01:01:21 and a little tadpole or a little larva or a plankton?
01:01:27 The picture was just painted of an oil drop.
01:01:30 Just immediately in a matter of minutes
01:01:32 took me into the world of panpsychism,
01:01:35 where you’ve just convinced me,
01:01:38 made me feel like an oil drop is a living,
01:01:41 it’s certainly an autonomous system,
01:01:43 but almost a living system.
01:01:44 So it has sensory capabilities and acting capabilities
01:01:48 and it maintains something.
01:01:50 So what is the difference between that
01:01:53 and something that we traditionally
01:01:56 think of as a living system?
01:01:59 That it could die, or that it can't?
01:02:02 I mean, yeah, mortality, I'm not exactly sure.
01:02:05 I’m not sure what the right answer there is
01:02:08 because they can move,
01:02:09 like movement seems like an essential element
01:02:11 to being able to act in the environment,
01:02:13 but the oil drop is doing that.
01:02:15 So I don’t know.
01:02:16 Is it?
01:02:18 The oil drop will be moved,
01:02:19 but does it in and of itself move autonomously?
01:02:22 Well, the surface is performing actions
01:02:26 that maintain its structure.
01:02:29 Yeah, you’re being too clever.
01:02:30 I was, I had in mind a passive little oil drop
01:02:34 that's sitting there
01:02:37 on the top of a glass of water.
01:02:39 Sure, I guess.
01:02:40 What I’m trying to say is you’re absolutely right.
01:02:42 You’ve nailed it.
01:02:44 It’s movement.
01:02:45 So where does that movement come from?
01:02:47 If it comes from the inside,
01:02:49 then you’ve got, I think, something that’s living.
01:02:53 What do you mean from the inside?
01:02:54 What I mean is that the internal states
01:02:58 that can influence the active states,
01:03:01 where the active states can influence,
01:03:02 but they’re not influenced by the external states,
01:03:05 can cause movement.
01:03:07 So there are two types of oil drops, if you like.
01:03:10 There are oil drops where the internal states
01:03:12 are so random that they average themselves away,
01:03:20 and the thing cannot, on average,
01:03:23 when you do the averaging, move.
01:03:25 So a nice example of that would be the Sun.
01:03:29 The Sun certainly has internal states.
01:03:31 There’s lots of intrinsic autonomous activity going on,
01:03:34 but because it’s not coordinated,
01:03:35 because it doesn’t have the deep, in the millennial sense,
01:03:38 the hierarchical structure that the brain does,
01:03:40 there is no overall mode or pattern or organization
01:03:45 that expresses itself on the surface
01:03:48 that allows it to actually swim.
01:03:51 It can certainly have a very active surface,
01:03:54 but en masse, at the scale of the actual surface of the Sun,
01:03:58 the average position of that surface cannot, in itself, move,
01:04:02 because the internal dynamics are more like a hot gas.
01:04:06 They are literally like a hot gas,
01:04:08 whereas your internal dynamics are much more structured
01:04:11 and deeply structured,
01:04:12 and now you can express on your active states
01:04:16 with your muscles and your secretory organs,
01:04:19 your autonomic nervous system and its effectors,
01:04:22 you can actually move, and that’s all you can do.
01:04:26 And that’s something which,
01:04:28 if you haven’t thought of it like this before,
01:04:30 I think it’s nice to just realize
01:04:32 there is no other way that you can change the universe
01:04:37 other than simply moving.
01:04:39 Whether that moving is articulating with my voice box
01:04:43 or walking around or squeezing juices
01:04:46 out of my secretory organs,
01:04:48 there’s only one way you can change the universe.
01:04:51 It’s moving.
01:04:53 And the fact that you do so nonrandomly makes you alive.
01:04:58 Yeah, so it’s that nonrandomness.
01:05:00 And that would be manifested,
01:05:04 we realize, in terms of essentially swimming,
01:05:07 essentially moving, changing one’s shape,
01:05:10 a morphogenesis that is dynamic and possibly adaptive.
01:05:15 So that’s what I was trying to get at
01:05:17 between the difference between the oil drop
01:05:19 and the little tadpole.
01:05:21 The tadpole is moving around.
01:05:23 Its active states are actually changing the external states.
01:05:26 And there’s now a cycle,
01:05:28 an action perception cycle, if you like,
01:05:30 a recurrent dynamic that’s going on
01:05:34 that depends upon this deeply structured autonomous behavior
01:05:39 that rests upon internal dynamics
01:05:44 that are not only modeling
01:05:48 the data impressed upon their surface or the blanket states,
01:05:53 but they are actively resampling those data by moving.
01:05:58 They're moving towards chemical gradients, in chemotaxis.
01:06:03 So they’ve gone beyond just being good little models
01:06:08 of the kind of world they live in.
01:06:11 For example, an oil droplet could, in a panpsychic sense,
01:06:15 be construed as a little being
01:06:18 that has now perfectly inferred
01:06:20 it's a passive, nonliving oil drop
01:06:23 living in a bowl of water.
01:06:25 No problem.
01:06:27 But now equip that oil drop with the ability to go out
01:06:31 and test that hypothesis about different states of being,
01:06:34 so it can actually push its surface over here and over there
01:06:36 and test for chemical gradients,
01:06:38 and then you start to move to a much more lifelike form.
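A toy sketch of that hypothesis-testing behavior, with the chemical field, step sizes, and probing scheme all invented for illustration:

```python
import numpy as np

def concentration(pos):
    """Assumed chemical field with a source (food) at the origin."""
    return np.exp(-np.sum(pos ** 2))

def sense(pos, noise=0.01):
    """A noisy sample of the field at a given location."""
    return concentration(pos) + noise * np.random.randn()

pos = np.array([2.0, 1.5])   # the 'tadpole' starts away from the source
for _ in range(200):
    # Action: probe nearby locations and move toward higher expected
    # concentration -- actively resampling the data, not just passively
    # modeling whatever happens to arrive at the surface.
    probes = pos + 0.1 * np.random.randn(8, 2)
    best = probes[np.argmax([sense(p) for p in probes])]
    pos = pos + 0.2 * (best - pos)

print(pos)   # ends up near the source at the origin
```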
01:06:42 This is all fun, theoretically interesting,
01:06:44 but it actually is quite important
01:06:47 in terms of reflecting what I have seen
01:06:49 since the turn of the millennium,
01:06:53 which is this move towards an enactive
01:06:56 and embodied understanding of intelligence.
01:07:00 And you say you’re from machine learning.
01:07:03 So what that means,
01:07:05 the central importance of movement,
01:07:10 I think has yet to really hit machine learning.
01:07:14 It certainly has now diffused itself throughout robotics.
01:07:20 And perhaps you could say certain problems in active vision
01:07:23 where you actually have to move the camera
01:07:25 to sample this and that.
01:07:27 But machine learning of the data-mining, deep-learning sort
01:07:31 simply hasn’t contended with this issue.
01:07:34 What it's done, instead of dealing with the movement problem
01:07:37 and the active sampling of data,
01:07:39 is just said, we don't need to worry about it,
01:07:40 we can see all the data because we've got big data.
01:07:43 So we can ignore movement.
01:07:45 So that for me is an important omission
01:07:50 in current machine learning.
01:07:52 The current machine learning is much more like the oil drop.
01:07:54 Yes.
01:07:55 But an oil drop that enjoys exposure
01:07:59 to nearly all the data that it will ever need to be exposed to,
01:08:03 as opposed to the tadpole swimming out
01:08:05 to find the right data.
01:08:07 For example, it likes food.
01:08:10 That’s a good hypothesis.
01:08:11 Let’s test it out.
01:08:12 Let's go and move and ingest food, for example,
01:08:15 and see whether that's evidence that I'm the kind of thing
01:08:18 that likes this kind of food.
01:08:20 So the next natural question, and forgive this question,
01:08:24 but if we think of even artificial intelligence
01:08:27 systems, for which you just painted a beautiful picture
01:08:29 of existence and life.
01:08:32 Do you ascribe, do you find within this framework
01:08:39 a possibility of defining consciousness
01:08:45 or exploring the idea of consciousness?
01:08:47 You know, self awareness,
01:08:52 expanded to consciousness?
01:08:55 Yeah.
01:08:56 How can we start to think about consciousness
01:08:58 within this framework?
01:08:59 Is it possible?
01:09:00 Well, yeah, I think it's possible to think about it.
01:09:03 Whether you'll get anywhere
01:09:04 is another question.
01:09:06 And again, I’m not sure that I’m licensed
01:09:10 to answer that question.
01:09:12 I think you’d have to speak to a qualified philosopher
01:09:15 to get a definitive answer to that.
01:09:17 But certainly, there’s a lot of interest
01:09:19 in using not just these ideas, but related ideas
01:09:23 from information theory to try and tie down
01:09:27 the maths and the calculus and the geometry of consciousness,
01:09:34 either in terms of sort of a minimal consciousness,
01:09:39 even less than a minimal selfhood.
01:09:42 And what I’m talking about is the ability, effectively,
01:09:48 to plan, to have agency.
01:09:52 So you could argue that a virus does have a form of agency
01:09:57 in virtue of the way that it selectively
01:10:00 finds hosts and cells to live in and moves around.
01:10:05 But you wouldn’t endow it with the capacity
01:10:09 to think about planning and moving in a purposeful way
01:10:14 where it countenances the future.
01:10:17 Whereas you might an ant.
01:10:18 You might think an ant’s not quite as unconscious
01:10:22 as a virus.
01:10:24 It certainly seems to have a purpose.
01:10:26 It talks to its friends en route during its foraging.
01:10:29 It has a different kind of autonomy, which is biotic,
01:10:37 but beyond a virus.
01:10:38 So there’s something about, so there’s
01:10:41 some line that has to do with the complexity of planning
01:10:45 that may contain an answer.
01:10:48 I mean, it would be beautiful if we
01:10:49 can find a line beyond which we could say a being is conscious.
01:10:55 Yes, it will be.
01:10:56 These are wonderful lines that we’ve drawn with existence,
01:11:00 life, and consciousness.
01:11:02 Yes, it will be very nice.
01:11:05 One little wrinkle there, and this
01:11:07 is something I’ve only learned in the past few months,
01:11:09 is the philosophical notion of vagueness.
01:11:12 So you’re saying it would be wonderful to draw a line.
01:11:14 I had always assumed that that line at some point
01:11:18 would be drawn until about four months ago,
01:11:22 and the philosopher taught me about vagueness.
01:11:24 So I don’t know if you’ve come across this,
01:11:26 but it’s a technical concept.
01:11:28 And I think most revealingly illustrated by asking,
01:11:33 at what point does a collection of sand become a pile?
01:11:37 Is it one grain, two grains, three grains, or four grains?
01:11:41 So at what point would you draw the line
01:11:44 between being a pile of sand and a collection of grains of sand?
01:11:51 In the same way, is it right to ask,
01:11:53 where would I draw the line between conscious
01:11:55 and unconscious?
01:11:56 And it might be a vague concept.
01:11:59 Having said that, I agree with you entirely.
01:12:02 Systems that have the ability to plan.
01:12:06 So just technically, what that means
01:12:08 is your inferential self evidencing,
01:12:13 by which I simply mean the thermodynamics and gradient
01:12:19 flows that underwrite the preservation of your oil
01:12:22 droplet like form, can be described
01:12:29 as an optimization of log Bayesian model
01:12:32 evidence, your ELBO.
01:12:36 That self evidencing must be evidence
01:12:39 for a model of what’s causing the sensory impressions
01:12:44 on the sensory part of your surface or your Markov
01:12:47 blanket.
01:12:48 If that model is capable of planning,
01:12:51 it must include a model of the future consequences
01:12:53 of your active states or your actions. That's planning.
01:12:56 So we’re now in the game of planning as inference.
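In the active-inference literature, planning as inference is often written with an expected free energy $G$ per policy $\pi$; this is a hedged sketch of the standard form, not something derived in the conversation:

$$G(\pi) = \sum_{\tau} \mathbb{E}_{q(o_{\tau}, \eta_{\tau} \mid \pi)}\big[\ln q(\eta_{\tau} \mid \pi) - \ln p(o_{\tau}, \eta_{\tau})\big], \qquad q(\pi) = \sigma(-G(\pi))$$

Each course of action is scored by the free energy expected under its future consequences, and the posterior over policies is a softmax $\sigma$ of those scores.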
01:12:59 Now notice what we’ve made, though.
01:13:00 We’ve made quite a big move away from big data and machine
01:13:04 learning, because again, it’s the consequences of moving.
01:13:08 It's the consequences of selecting these data or those
01:13:11 data, or looking over there.
01:13:14 And that tells you immediately that even
01:13:17 to be a contender for a conscious artifact or a strong
01:13:22 AI or generalized AI, I don't know what that's called nowadays,
01:13:26 then you’ve got to have movement in the game.
01:13:29 And furthermore, you’ve got to have a generative model
01:13:32 of the sort you might find in, say, a variational
01:13:34 autoencoder that is thinking about the future conditioned
01:13:39 upon different courses of action.
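A toy sketch of that selection step, with the moves, the quadratic preference, and the two-step policies all illustrative assumptions rather than Friston's formulation:

```python
import numpy as np

# Toy generative model: actions move a point; predicted observations
# are scored by their divergence from a preferred location.
target = np.array([1.0, 0.0])
moves = {'left': np.array([-0.5, 0.0]), 'right': np.array([0.5, 0.0])}

def expected_free_energy(policy):
    """Roll the model forward and accumulate predicted 'risk'."""
    pos, G = np.zeros(2), 0.0
    for action in policy:
        pos = pos + moves[action]
        G += np.sum((pos - target) ** 2)   # divergence from preferences
    return G

policies = [('left', 'left'), ('left', 'right'),
            ('right', 'left'), ('right', 'right')]
G = np.array([expected_free_energy(p) for p in policies])
q = np.exp(-G) / np.exp(-G).sum()          # softmax posterior over policies
print(policies[np.argmax(q)])              # -> ('right', 'right')
```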
01:13:41 Now that brings a number of things to the table, which
01:13:44 now you start to think, well, those
01:13:46 have got all the right ingredients
01:13:47 to talk about consciousness.
01:13:48 I’ve now got to select among a number of different courses
01:13:51 of action into the future as part of planning.
01:13:54 I’ve now got free will.
01:13:56 The act of selecting this course of action or that policy
01:13:59 or that action suddenly
01:14:02 makes me into an inference machine,
01:14:04 a self evidencing artifact that now
01:14:09 looks as if it’s selecting amongst different alternative
01:14:12 ways forward, as I actively swim here or swim there
01:14:15 or look over here, look over there.
01:14:17 So I think you’ve now got to a situation
01:14:19 if there is planning in the mix.
01:14:22 You’re now getting much closer to that line
01:14:25 if that line were ever to exist.
01:14:27 I don’t think it gets you quite as far as self aware, though.
01:14:32 And then you have to, I think, grapple with the question,
01:14:39 how would you formally write down a calculus or a maths
01:14:43 of self awareness?
01:14:44 I don’t think it’s impossible to do.
01:14:47 But I think there would be pressure on you
01:14:51 to actually commit to a formal definition of what
01:14:53 you mean by self awareness.
01:14:55 I think most people that I know would probably
01:15:00 say that a goldfish, a pet fish, was not self aware.
01:15:07 They would probably argue about their favorite cat,
01:15:10 but would be quite happy to say that their mom was self aware.
01:15:14 So.
01:15:15 I mean, but that might very well connect
01:15:17 to some level of complexity in planning.
01:15:20 It seems like self awareness is essential for complex planning.
01:15:26 Yeah.
01:15:27 Do you want to take that further?
01:15:28 Because I think you’re absolutely right.
01:15:29 Again, the line is unclear, but it
01:15:31 seems like integrating yourself into the world,
01:15:36 into your planning, is essential for constructing complex plans.
01:15:42 Yes.
01:15:43 Yeah.
01:15:43 So mathematically describing that in the same elegant way
01:15:47 as you have with the free energy principle might be difficult.
01:15:51 Well, yes and no.
01:15:53 I don’t think that, well, perhaps we should just,
01:15:55 can we just go back?
01:15:57 That’s a very important answer you gave.
01:15:58 And I think if I just unpacked it,
01:16:01 you’d see the truisms that you’ve just exposed for us.
01:16:06 But let me, sorry, I’m mindful that I didn’t answer
01:16:10 your question before.
01:16:11 Well, what’s the free energy principle good for?
01:16:13 Is it just a pretty theoretical exercise
01:16:15 to explain nonequilibrium steady states?
01:16:17 Yes, it is.
01:16:19 It does nothing more for you than that.
01:16:21 It can be regarded, and it's going to sound very arrogant,
01:16:24 but it is of the same sort as the theory of natural selection,
01:16:32 or the hypothesis of natural selection.
01:16:32 Beautiful, undeniably true, but tells you
01:16:37 absolutely nothing about why you have legs and eyes.
01:16:42 It tells you nothing about the actual phenotype,
01:16:44 and it wouldn’t allow you to build something.
01:16:48 So the free energy principle by itself
01:16:51 is as vacuous as most tautological theories.
01:16:54 And by tautological, of course,
01:16:56 I'm talking about the theory of natural selection,
01:16:58 the survival of the fittest.
01:17:00 Who are the fittest? Those that survive.
01:17:01 Why do they survive?
01:17:02 Because they're the fittest.
01:17:03 It just goes around in circles.
01:17:05 In a sense, the free energy principle has that same
01:17:08 deflationary tautology under the hood.
01:17:15 It’s a characteristic of things that exist.
01:17:17 Why do they exist?
01:17:18 Because they minimize their free energy.
01:17:19 Why do they minimize their free energy?
01:17:21 Because they exist.
01:17:22 And you just keep on going round and round and round.
01:17:24 But the practical thing,
01:17:28 which you don’t get from natural selection,
01:17:32 but you could say has now manifested in things
01:17:35 like differential evolution or genetic algorithms
01:17:38 and MCMC, for example, in machine learning.
01:17:41 The practical thing you can get is,
01:17:43 if it looks as if things that exist
01:17:45 are trying to have density dynamics
01:17:49 that look as though they're optimizing
01:17:51 a variational free energy,
01:17:53 and a variational free energy has to be
01:17:55 a functional of a generative model,
01:17:57 a probabilistic description of causes and consequences,
01:18:01 causes out there, consequences in the sensorium
01:18:04 on the sensory parts of the Markov blanket,
01:18:07 then it should, in theory, be possible
01:18:08 to write down the generative model,
01:18:10 work out the gradients,
01:18:11 and then cause it to autonomously self evidence.
01:18:15 So you should be able to write down oil droplets.
01:18:18 You should be able to create artifacts
01:18:20 where you have supplied the objective function
01:18:24 that supplies the gradients,
01:18:25 that supplies the self organizing dynamics
01:18:28 to non equilibrium steady state.
01:18:30 So there is actually a practical application
01:18:32 of the free energy principle
01:18:34 when you can write down your required evidence
01:18:37 in terms of, well, when you can write down
01:18:40 the generative model that is the thing
01:18:43 that has the evidence.
01:18:44 The probability of these sensory data
01:18:46 or this data, given that model,
01:18:50 is effectively the thing that the ELBO
01:18:54 or the variational free energy bounds or approximates.
01:18:57 That means that you can actually write down the model
01:19:01 and the kind of thing that you want to engineer,
01:19:04 the kind of AGI or artificial general intelligence
01:19:10 that you want to manifest probabilistically,
01:19:14 and then you engineer, a lot of hard work,
01:19:16 but you would engineer a robot and a computer
01:19:19 to perform a gradient descent on that objective function.
01:19:23 So it does have a practical implication.
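As a minimal illustration of that recipe, with a deliberately tiny linear-Gaussian generative model and a point (Laplace-style) estimate, all assumptions made for the sketch: write the model down, take gradients of the free energy, and let the internal state descend them.

```python
# Generative model (assumed): o = 2 * eta + noise, eta ~ N(0.5, 1).
prior_mean = 0.5
g = lambda eta: 2.0 * eta              # likelihood mapping

def free_energy(mu, o):
    """Laplace-style free energy: prediction error under the likelihood
    plus divergence from the prior (up to constants)."""
    return 0.5 * (o - g(mu)) ** 2 + 0.5 * (mu - prior_mean) ** 2

def dF_dmu(mu, o):
    return -2.0 * (o - g(mu)) + (mu - prior_mean)

o, mu = 3.0, 0.0                       # observed datum, internal state
for _ in range(100):
    mu -= 0.1 * dF_dmu(mu, o)          # perception as gradient descent on F

print(mu)                              # converges to the posterior mode, 1.3
```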
01:19:26 Now, why am I wittering on about that?
01:19:27 It did seem relevant to, yes.
01:19:28 So what kinds of, so the answer to,
01:19:32 would it be easier or would it be hard?
01:19:34 Well, mathematically, it’s easy.
01:19:36 I’ve just told you all you need to do
01:19:38 is write down your perfect artifact,
01:19:43 probabilistically, in the form
01:19:45 of a probabilistic generative model,
01:19:46 a probability distribution over the causes
01:19:48 and consequences of the world
01:19:52 in which this thing is immersed.
01:19:54 And then you just engineer a computer and a robot
01:19:58 to perform a gradient descent on that objective function.
01:20:00 No problem.
01:20:02 But of course, the big problem
01:20:04 is writing down the generative model.
01:20:05 So that’s where the heavy lifting comes in.
01:20:08 So it’s the form and the structure of that generative model
01:20:12 which basically defines the artifact that you will create
01:20:15 or, indeed, the kind of artifact that has self awareness.
01:20:19 So that’s where all the hard work comes,
01:20:22 very much like natural selection doesn’t tell you
01:20:24 in the slightest why you have eyes.
01:20:27 So you have to drill down on the actual phenotype,
01:20:29 the actual generative model.
01:20:31 So with that in mind, what did you tell me
01:20:36 that tells me immediately the kinds of generative models
01:20:40 I would have to write down in order to have self awareness?
01:20:43 What you said to me was I have to have a model
01:20:48 that is effectively fit for purpose
01:20:50 for this kind of world in which I operate.
01:20:53 And if I now make the observation
01:20:55 that this kind of world is effectively largely populated
01:20:59 by other things like me, i.e. you,
01:21:02 then it makes enormous sense
01:21:04 that if I can develop a hypothesis
01:21:07 that we are similar kinds of creatures,
01:21:11 in fact, the same kind of creature,
01:21:13 but I am me and you are you,
01:21:16 then it becomes, again, mandated to have a sense of self.
01:21:21 So if I live in a world
01:21:23 that is constituted by things like me,
01:21:26 basically a social world, a community,
01:21:29 then it becomes necessary now for me to infer
01:21:32 that it’s me talking and not you talking.
01:21:34 I wouldn’t need that if I was on Mars by myself
01:21:37 or if I was in the jungle as a feral child.
01:21:40 If there was nothing like me around,
01:21:43 there would be no need to have an inference
01:21:46 or a hypothesis, oh yes, it is me
01:21:49 that is experiencing or causing these sounds
01:21:51 and it is not you.
01:21:52 It’s only when there’s ambiguity in play
01:21:54 induced by the fact that there are others in that world.
01:21:58 So I think that the special thing about self aware artifacts
01:22:03 is that they have learned to, or they have acquired,
01:22:08 or at least are equipped with, possibly by evolution,
01:22:11 generative models that allow for the fact
01:22:14 there are lots of copies of things like them around,
01:22:17 and therefore they have to work out it’s you and not me.
01:22:20 That’s brilliant.
01:22:23 I’ve never thought of that.
01:22:24 I never thought of that, that the purpose,
01:22:28 the real usefulness, of consciousness
01:22:31 or self awareness in the context of planning
01:22:34 and existing in the world is so you can operate
01:22:36 with other things like you.
01:22:39 And it doesn't have to necessarily be human.
01:22:40 It could be other kind of similar creatures.
01:22:43 Absolutely, well, we project a lot of our attributes
01:22:46 onto our pets, don't we?
01:22:47 Or we try to make our robots humanoid.
01:22:49 And I think there’s a deep reason for that,
01:22:51 that it’s just much easier to read the world
01:22:54 if you can make the simplifying assumption
01:22:56 that basically you’re me, and it’s just your turn to talk.
01:23:00 I mean, when we talk about planning,
01:23:01 when you talk specifically about planning,
01:23:04 the highest, if you like, manifestation or realization
01:23:07 of that planning is what we’re doing now.
01:23:09 I mean, the human condition doesn’t get any higher
01:23:12 than this talking about the philosophy of existence
01:23:16 and the conversation.
01:23:17 But in that conversation, there is a beautiful art
01:23:23 of turn taking and mutual inference, theory of mind.
01:23:28 I have to know when you wanna listen.
01:23:29 I have to know when you want to interrupt.
01:23:31 I have to make sure that you’re online.
01:23:32 I have to have a model in my head
01:23:34 of your model in your head.
01:23:35 That’s the highest, the most sophisticated form
01:23:38 of generative model, where the generative model
01:23:40 actually has a generative model
01:23:41 of somebody else’s generative model.
01:23:42 And I think that, and what we are doing now evinces
01:23:47 the kinds of generative models
01:23:49 that would support self awareness,
01:23:51 because without that, we’d both be talking over each other,
01:23:54 or we’d be singing together in a choir.
01:23:58 That’s not a brilliant analogy for what I’m trying to say,
01:24:01 but yeah, we wouldn’t have this discourse.
01:24:05 We wouldn’t have this.
01:24:06 Yeah, the dance of it.
01:24:06 Yeah, that’s right.
01:24:07 As I interrupt, I mean, that’s beautifully put.
01:24:12 I’ll re listen to this conversation many times.
01:24:17 There’s so much poetry in this, and mathematics.
01:24:21 Let me ask the silliest, or perhaps the biggest question
01:24:26 as a last kind of question.
01:24:29 We’ve talked about living in existence
01:24:33 and the objective function under which
01:24:35 these objects would operate.
01:24:37 What do you think is the objective function
01:24:39 of our existence?
01:24:41 What’s the meaning of life?
01:24:44 What do you think is the, for you, perhaps,
01:24:47 the purpose, the source of fulfillment,
01:24:50 the source of meaning for your existence,
01:24:53 as one blob in this soup?
01:24:57 I’m tempted to answer that, again, as a physicist,
01:25:00 and say it's the free energy I expect
01:25:03 consequent upon my behavior.
01:25:05 So technically, we could get into a really interesting
01:25:08 conversation about what that comprises
01:25:10 in terms of searching for information,
01:25:13 resolving uncertainty about the kind of thing that I am.
01:25:16 But I suspect that you want a slightly more personal
01:25:20 and fun answer, but which can be consistent with that.
01:25:25 And I think it’s reassuringly simple
01:25:30 and hops back to what you were taught as a child,
01:25:36 that you have certain beliefs about the kind of creature
01:25:39 and the kind of person you are.
01:25:41 And all that self evidencing,
01:25:44 all that minimizing variational free energy
01:25:46 in an enactive and embodied way,
01:25:50 means is fulfilling the beliefs about
01:25:53 what kind of thing you are.
01:25:55 And of course, we’re all given those scripts,
01:25:58 those narratives, at a very early age,
01:26:01 usually in the form of bedtime stories or fairy stories
01:26:04 that I’m a princess and I’m gonna meet a beast
01:26:07 who’s gonna transform and he’s gonna be a prince.
01:26:09 And so the narratives are all around you
01:26:11 from your parents to the friends
01:26:14 to the society feeds these stories.
01:26:17 And then your objective function is to fulfill.
01:26:21 Exactly, that narrative that has been encultured
01:26:24 by your immediate family, but as you say,
01:26:27 also the sort of the culture in which you grew up
01:26:29 and you create for yourself.
01:26:30 I mean, again, because of this active inference,
01:26:33 this enactive aspect of self evidencing,
01:26:36 not only am I modeling my environment,
01:26:40 my eco niche, my external states out there,
01:26:44 but I’m actively changing them all the time
01:26:46 and doing the same back, we’re doing it together.
01:26:49 So there’s a synchrony that means that I’m creating
01:26:53 my own culture over different timescales.
01:26:56 So the question now is for me being very selfish,
01:27:00 what scripts was I given?
01:27:02 It basically was a mixture between Einstein and Sherlock Holmes.
01:27:06 So I smoke as heavily as possible,
01:27:09 try to avoid too much interpersonal contact,
01:27:15 enjoy the fantasy that you’re a popular scientist
01:27:21 who’s gonna make a difference in a slightly quirky way.
01:27:23 So that’s what I grew up on.
01:27:25 My father was an engineer and loved science
01:27:28 and he loved sort of things like Sir Arthur Eddington's
01:27:33 Space, Time and Gravitation, which was the first
01:27:37 understandable version of general relativity.
01:27:41 So all the fairy stories I was told as I was growing up
01:27:45 were all about these characters.
01:27:48 I’m keeping the Hobbit out of this
01:27:50 because that doesn’t quite fit my narrative.
01:27:53 There’s a journey of exploration, I suppose, of sorts.
01:27:56 So yeah, I’ve just grown up to be what I imagine
01:28:01 a mild mannered Sherlock Holmes slash Albert Einstein
01:28:05 would do in my shoes.
01:28:07 And you did it elegantly and beautifully.
01:28:10 Carl was a huge honor talking today, it was fun.
01:28:12 Thank you so much for your time.
01:28:13 No, thank you. Appreciate it.
01:28:15 Thank you for listening to this conversation
01:28:17 with Carl Friston and thank you
01:28:19 to our presenting sponsor, Cash App.
01:28:21 Please consider supporting the podcast
01:28:23 by downloading Cash App and using code LexPodcast.
01:28:27 If you enjoy this podcast, subscribe on YouTube,
01:28:29 review it with five stars on Apple Podcast,
01:28:32 support on Patreon, or simply connect with me on Twitter
01:28:35 at LexFriedman.
01:28:37 And now let me leave you with some words from Carl Friston.
01:28:41 Your arm moves because you predict it will
01:28:44 and your motor system seeks to minimize prediction error.
01:28:48 Thank you for listening and hope to see you next time.