Transcript
00:00:00 turns out that if you train a planarian and then cut their heads off, the tail will regenerate a
00:00:04 brand new brain that still remembers the original information. I think planaria hold the answer to
00:00:09 pretty much every deep question of life. For one thing, they’re similar to our ancestors. So they
00:00:14 have true symmetry, they have a true brain, they’re not like earthworms; they’re, you know,
00:00:17 a much more advanced life form. They have lots of different internal organs, but they’re
00:00:20 these little things, about, you know, maybe a centimeter to two centimeters in size.
00:00:24 And they have a head and a tail. And the first thing is planaria are immortal. So they do not
00:00:30 age. There’s no such thing as an old planarian. So that right there tells you that these theories
00:00:34 of thermodynamic limitations on lifespan are wrong. It’s not that, well, over time everything
00:00:40 degrades. No, planaria can keep it going for, you know, how long have they been
00:00:44 around? 400 million years, right? So the planaria in our lab
00:00:48 are actually in physical continuity with planaria that were here 400 million years ago.
00:00:54 The following is a conversation with Michael Levin, one of the most fascinating and brilliant
00:01:00 biologists I’ve ever talked to. He and his lab at Tufts University work on novel ways to understand
00:01:07 and control complex pattern formation in biological systems. Andrej Karpathy, a world
00:01:12 class AI researcher, is the person who first introduced me to Michael Levin’s work. I bring
00:01:18 this up because these two people make me realize that biology has a lot to teach us about AI,
00:01:25 and AI might have a lot to teach us about biology. This is the Lex Fridman podcast.
00:01:32 To support it, please check out our sponsors in the description. And now, dear friends,
00:01:37 here’s Michael Levin. Embryogenesis is the process of building the human body from a single cell. I
00:01:44 think it’s one of the most incredible things that exists on earth. So how does
00:01:50 this process work? Yeah, it is an incredible process. I think it’s maybe the most magical
00:01:56 process there is. And I think one of the most fundamentally interesting things about it is that
00:02:01 it shows that each of us takes the journey from so called just physics to mind, right? Because we
00:02:07 all start life as a single quiescent, unfertilized oocyte, and it’s basically a bag of chemicals,
00:02:12 and you look at that and you say, okay, this is chemistry and physics. And then nine months and
00:02:16 some years later, you have an organism with high level cognition and preferences and an inner life
00:02:22 and so on. And what embryogenesis tells us is that that transformation from physics to mind is
00:02:27 gradual. It’s smooth. There is no special place where, you know, a lightning bolt says, boom,
00:02:32 now you’ve gone from physics to true cognition. That doesn’t happen. And so we can see in this
00:02:37 process that the whole mystery, you know, the biggest mystery of the universe, basically,
00:02:41 how you get mind from matter. From just physics, in quotes. Yeah. So where does the magic enter into
00:02:47 the thing? How do we get from information encoded in DNA to making physical reality out of that
00:02:54 information? So one of the things that I think is really important if we’re going to bring in DNA
00:02:59 into this picture is to think about the fact that what DNA encodes is the hardware of life. DNA
00:03:05 contains the instructions for the kind of micro level hardware that every cell gets to play with.
00:03:09 So all the proteins, all the signaling factors, the ion channels, all the cool little pieces of
00:03:14 hardware that cells have, that’s what’s in the DNA. The rest of it is in so called generic laws.
00:03:20 And these are laws of mathematics. These are laws of computation. These are laws of physics,
00:03:25 of all kinds of interesting things that are not directly in the DNA. And that process, you know,
00:03:32 I think the reason I always put just physics in quotes is because I don’t think there is such a
00:03:36 thing as just physics. I think that thinking about these things in binary categories, like this is
00:03:41 physics, this is true cognition, this is “as if,” it’s only faking these kinds of things. I think
00:03:45 that’s what gets us in trouble. I think that we really have to understand that it’s a continuum
00:03:49 and we have to work up the scaling, the laws of scaling. And we can certainly talk about that.
00:03:53 There’s a lot of really interesting thoughts to be had there.
00:03:56 So the physics is deeply integrated with the information. So the DNA doesn’t exist on its own.
00:04:03 The DNA is integrated, in some sense, with the laws of physics at every scale,
00:04:10 the laws of the environment it exists in.
00:04:14 Yeah, the environment and also the laws of the universe. I mean, the thing about the DNA is that
00:04:18 once evolution discovers a certain kind of machine, if the physical implementation is
00:04:25 appropriate, and this is hard to talk about because we don’t have a good vocabulary
00:04:29 for this yet, but it’s a very kind of platonic notion: if the machine is there, it pulls down
00:04:36 interesting things that you do not have to evolve from scratch, because the laws of physics give them
00:04:42 to you for free. So just as a really stupid example, if you’re trying to evolve a particular
00:04:47 triangle, you can evolve the first angle and you evolve the second angle, but you don’t need to
00:04:50 evolve the third. You know what it is already. Now, why do you know? That’s a gift for free
00:04:54 from geometry in a particular space. You know what that angle has to be. And if you evolve
00:04:58 an ion channel, which is, ion channels are basically transistors, right? They’re voltage
00:05:01 gated current conductances. If you evolve that ion channel, you immediately get to use things
00:05:06 like truth tables. You get logic functions. You don’t have to evolve the logic function.
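Levin’s point about getting logic “for free” can be sketched in a toy way: once a single NAND primitive exists, every other Boolean function is just a composition of it rather than something that has to be specified separately. The snippet below is purely illustrative ordinary Python, nothing to do with real ion-channel biophysics:

```python
# Toy sketch: NAND is functionally complete, so NOT, AND, OR, XOR
# all come "for free" as compositions of the one primitive.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return nand(nand(a, nand(a, b)), nand(b, nand(a, b)))

# Check every derived gate against its truth table.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert xor_(a, b) == (a != b)
print("NOT, AND, OR, XOR all recovered from NAND alone")
```

Nothing beyond `nand` had to be defined from scratch; the rest falls out of composition, which is the sense in which evolving the first little machine buys all the downstream logic.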
00:05:10 You don’t have to evolve a truth table. It doesn’t have to be in the DNA. You get it for free,
00:05:14 right? And the fact that if you have NAND gates, you can build anything you want, you get that for
00:05:17 free. All you have to evolve is that first step, that first little machine that enables you to
00:05:22 couple to those laws. And there’s laws of adhesion and many other things. And this is all that
00:05:27 interplay between the hardware that’s set up by the genetics and the software that’s made, right?
00:05:33 The physiological software that basically does all the computation and the cognition and everything
00:05:38 else is a real interplay between the information and the DNA and the laws of physics of computation
00:05:43 and so on. So is it fair to say, just like this idea that the laws of mathematics are discovered,
00:05:50 they’re latent within the fabric of the universe in that same way the laws of biology are kind of
00:05:55 discovered? Yeah, I think that’s absolutely, and it’s probably not a popular view, but I think
00:05:59 that’s right on the money. Yeah. Well, I think that’s a really deep idea. Then embryogenesis
00:06:05 is the process of revealing, of embodying, of manifesting these laws. You’re not building the
00:06:16 laws. You’re just creating the capacity to reveal. Yes. I think, again, not the standard view of
00:06:23 molecular biology by any means, but I think that’s right on the money. I’ll give you a simple example.
00:06:27 Some of our latest work with these xenobots, right? So what we’ve done is to take some skin
00:06:31 cells off of an early frog embryo and basically ask about their plasticity. If we give you a
00:06:36 chance to sort of reboot your multicellularity in a different context, what would you do?
00:06:40 Because what you might assume by… The thing about embryogenesis is that it’s super reliable,
00:06:45 right? It’s very robust. And that really obscures some of its most interesting features. We get
00:06:50 used to it. We get used to the fact that acorns make oak trees and frog eggs make frogs. And we
00:06:54 say, well, what else is it going to make? That’s what it makes. That’s a standard story.
00:06:57 But the reality is… And so you look at these skin cells and you say, well, what do they know
00:07:03 how to do? Well, they know how to be a passive, boring, two-dimensional outer layer, keeping the
00:07:07 bacteria from getting into the embryo. That’s what they know how to do. Well, it turns out that if
00:07:11 you take these skin cells and you remove the rest of the embryo, so you remove all of the rest of
00:07:17 the cells and you say, well, you’re by yourself now, what do you want to do? So what they do is
00:07:20 they form this little multicellular creature that runs around the dish. They have all kinds of
00:07:26 incredible capacities. They navigate through mazes. They have various behaviors that they do
00:07:30 both independently and together. Basically, they implement von Neumann’s dream of self replication,
00:07:38 because if you sprinkle a bunch of loose cells into the dish, what they do is they run around,
00:07:42 they collect those cells into little piles. They sort of mush them together until those little
00:07:46 piles become the next generation of xenobots. So you’ve got this machine that builds copies of
00:07:50 itself from loose material in its environment. None of these are things that you would have expected
00:07:56 from the frog genome. In fact, the genome is wild type. There’s nothing wrong with their genetics.
00:08:01 Nothing has been added, no nanomaterials, no genomic editing, nothing. And so what we have
00:08:06 done there is engineered by subtraction. What you’ve done is you’ve removed the other cells
00:08:11 that normally basically bully these cells into being skin cells. And you find out that what they
00:08:15 really want to do is to be this; their default behavior is to be a xenobot. But in vivo, in the
00:08:21 embryo, they get told to be skin by these other cell types. And so now here comes this really
00:08:28 interesting question that you just posed. When you ask where does the form of the tadpole and
00:08:33 the frog come from, the standard answer is, well, it’s selection. So over millions of years,
00:08:39 it’s been shaped to produce the specific body that’s fit for froggy environments.
00:08:44 Where does the shape of the xenobot come from? There’s never been any xenobots. There’s never
00:08:48 been selection to be a good xenobot. These cells find themselves in the new environment.
00:08:51 In 48 hours, they figure out how to be an entirely different proto-organism with new capacities like
00:08:57 kinematic self replication. That’s not how frogs or tadpoles replicate. We’ve made it impossible
00:09:02 for them to replicate their normal way. Within a couple of days, these guys find a new way of
00:09:05 doing it that’s not done anywhere else in the biosphere. Well, actually, let’s step back and
00:09:09 define, what are xenobots? So a xenobot is a self-assembling little proto-organism. It’s also a
00:09:16 biological robot. Those things are not distinct; it’s a member of both classes. How much of it is
00:09:22 biology? How much of it is robot? At this point, most of it is biology, because what we’re doing is
00:09:28 we’re discovering natural behaviors of the cells and also of the cell collectives. Now, one of the
00:09:35 really important parts of this was that we’re working together with Josh Bongard’s group at
00:09:39 the University of Vermont. They’re computer scientists, they do AI, and they’ve basically been able to
00:09:45 use a simulated evolution approach to ask, how can we manipulate these cells, give them signals,
00:09:51 not rewire their DNA, so not hardware, but experience signals? So can we remove some cells?
00:09:56 Can we add some cells? Can we poke them in different ways to get them to do other things?
00:09:59 So now, and this is future unpublished work, but
00:10:04 we’re exploring all sorts of interesting ways to reprogram them to new behaviors. But before you
00:10:08 can start to reprogram these things, you have to understand what their innate capacities are.
00:10:13 Okay, so that means engineering, programming, you’re engineering them in the future. And in
00:10:19 some sense, the definition of a robot is something you in part engineer versus evolve. I mean,
00:10:28 it’s such a fuzzy definition anyway, in some sense, many of the organisms within our body
00:10:33 are kinds of robots. And I think “robot” is a weird line to draw, because we tend to see robots
00:10:40 as the other. I think there will be a time in the future when there’s going to be something akin to
00:10:45 the civil rights movements for robots, but we’ll talk about that later perhaps. Anyway, so how do
00:10:52 you, can we just linger on it? How do you build a xenobot? What are we talking about here?
00:11:00 When does it start, and how does it become the glorious xenobot?
00:11:06 Yeah, so just to take one step back, one of the things that a lot of people get stuck on is they
00:11:12 say, well, you know, engineering requires new DNA circuits or it requires new nanomaterials,
00:11:19 you know. The thing is, we are now moving from old-school engineering, which used passive
00:11:24 materials, right? Things like wood, metal, things like this, where basically the only
00:11:28 thing you could depend on is that they were going to keep their shape. That’s it. They don’t do
00:11:31 anything else. It’s on you as an engineer to make them do everything they’re going to do.
00:11:35 And then there were active materials, and now computational materials. This is a whole new era.
00:11:39 These are agential materials. This is you’re now collaborating with your substrate because your
00:11:43 material has an agenda. These cells have, you know, billions of years of evolution. They have goals.
00:11:51 They have preferences. They’re not just going to sit where you put them. That’s hilarious that you
00:11:54 have to talk your material into keeping its shape. That’s it. That is exactly right. That is exactly
00:11:58 right. Stay there. It’s like getting a bunch of cats or something and trying to organize the shape
00:12:04 out of them. It’s funny. We’re on the same page here, because in a paper that has just
00:12:08 been accepted at Nature Bioengineering, one of the figures I have is building a tower
00:12:12 out of Legos versus dogs, right? So think about the difference, right? If you build out of Legos,
00:12:17 you have full control over where it’s going to go. But if somebody knocks it over, it’s game over.
00:12:22 With the dogs, you cannot just come and stack them. They’re not going to stay that way. But
00:12:26 the good news is that if you train them, then somebody knocks it over, they’ll get right back
00:12:29 up. So it’s all right. So as an engineer, what you really want to know is what can you depend
00:12:33 on this thing to do, right? That’s really, you know, a lot of people have definitions of robots
00:12:37 as far as what they’re made of or how they got here, you know, design versus evolve, whatever.
00:12:41 I don’t think any of that is useful. I think as an engineer, what you want to know is
00:12:45 how much can I depend on this thing to do when I’m not around to micromanage it? What level of
00:12:50 dependency can I give this thing? How much agency does it have?
00:12:54 Which then tells you what techniques do you use? So do you use micromanagement,
00:12:57 like you put everything where it goes? Do you train it? Do you give it signals? Do you try
00:13:01 to convince it to do things, right? How much, you know, how intelligent is your substrate?
00:13:04 And so now we’re moving into this area where you’re working with
00:13:08 agential materials. That’s a collaboration. That’s not the old style.
00:13:12 What’s the word you’re using? Agential?
00:13:14 Agential.
00:13:14 Yeah.
00:13:15 What’s that mean?
00:13:15 Agency. It comes from the word agency. So basically the material has agency, meaning that
00:13:20 it has some level of, obviously not human level, but some level of preferences, goals,
00:13:26 memories, ability to remember things, to compute into the future, meaning anticipate.
00:13:30 You know, when you’re working with cells, they have all of that to various degrees.
00:13:34 Is that empowering or limiting having material as a mind of its own, literally?
00:13:39 I think it’s both, right? So it raises difficulties, because it means that
00:13:43 if you’re using the old mindset, which is a linear kind of extrapolation of what’s going
00:13:48 to happen, you’re going to be surprised and shocked all the time, because biology does not
00:13:54 do what we linearly expect materials to do. On the other hand, it’s massively liberating,
00:13:59 in the following way: I’ve argued that advances in regenerative medicine require us to take
00:14:04 advantage of this because what it means is that you can get the material to do things that you
00:14:09 don’t know how to micromanage. So just as a simple example, right? If you had a rat
00:14:13 and you wanted this rat to do a circus trick, put a ball in the little hoop, you can do it the
00:14:19 micromanagement way, which is try to control every neuron and try to play the thing like a puppet,
00:14:22 right? And maybe someday that’ll be possible, maybe. Or you can train the rat. And this is
00:14:26 how humanity, for thousands of years before we knew any neuroscience, when we had no idea what’s
00:14:31 between the ears of any animal, was able to train these animals. Because once you
00:14:35 recognize the level of agency of a certain system, you can use appropriate techniques. If you know
00:14:40 the currency of motivation, reward and punishment, you know how smart it is, you know what kinds of
00:14:44 things it likes to do. You are searching a much more, much smoother, much nicer problem space than
00:14:50 if you try to micromanage the thing. And in regenerative medicine, when you’re trying to get,
00:14:54 let’s say, an arm to grow back or an eye to repair a birth defect or something,
00:14:57 do you really want to be controlling tens of thousands of genes at each point to try to
00:15:02 micromanage it? Or do you want to find the high level modular controls that say,
00:15:07 build an arm here. You already know how to build an arm. You did it before, do it again.
00:15:11 So that’s, I think it’s both, it’s both difficult and it challenges us to develop new ways of
00:15:15 engineering and it’s hugely empowering. Okay. So how do you do it? I mean, maybe sticking with
00:15:21 the metaphor of dogs and cats, I presume you have to find the dogs and dispose of
00:15:31 the cats. Because, you know, there’s the old herding-cats issue. So you may be able to
00:15:38 train dogs. I suspect you will not be able to train cats. Or if you do, you’re never going to
00:15:44 be able to trust them. So is there a way to figure out which material is amenable to herding? Is it in
00:15:53 the lab work or is it in simulation? Right now it’s largely in the lab, because our simulations
00:15:59 do not capture yet the most interesting and powerful things about biology. What we’re
00:16:04 pretty good at simulating are feed-forward emergent types of things,
00:16:10 right? So cellular automata: if you have simple rules and you sort of roll those forward for
00:16:15 every agent or every cell in the simulation, then complex things happen, you know, ant colony
00:16:19 algorithms, things like that. We’re good at that. And that’s fine. The difficulty
00:16:23 with all of that is that it’s incredibly hard to reverse. So this is a really hard inverse problem,
00:16:28 right? If you look at a bunch of termites and they make, you know, a thing with a single chimney,
00:16:31 and you say, well, I like it, but I’d like two chimneys, how do you change the rules of behavior
00:16:36 for the termites so they make two chimneys, right? Or if you say, here are a bunch of cells that are
00:16:40 creating this kind of organism. I don’t think that’s optimal. I’d like to repair that birth
00:16:44 defect. How do you control all the individual low-level rules, right? All the protein
00:16:49 interactions and everything else? Rolling it back from the anatomy that you want to the low-level
00:16:53 hardware rules is in general intractable. It’s an inverse problem that’s generally not
00:16:57 solvable. So right now it’s mostly in the lab because what we need to do is we need to understand
00:17:02 how biology uses top-down controls. So the idea is not bottom-up emergence, but the idea of
00:17:09 things like goal-directed, test-operate-test-exit (TOTE) kinds of loops, where it’s basically an
00:17:14 error minimization function over a new space: not a space of gene expression but, for example,
00:17:19 a space of anatomy. So just as a simple example, if you have a salamander and it’s got
00:17:23 an arm, you can amputate that arm anywhere along the length. It will grow exactly
00:17:29 what’s needed and then it stops. That’s the most amazing thing about regeneration: it stops.
00:17:32 It knows when to stop. When does it stop? It stops when a correct salamander arm has been completed.
00:17:37 So that tells you it’s a means-ends kind of analysis, where it has to
00:17:42 know what the correct limb is supposed to look like, right? So it has a way to ascertain the
00:17:47 current shape. It has a way to measure the delta from what shape it’s supposed to be. And it
00:17:51 will keep taking actions, meaning remodeling and growing and everything else, until that’s complete.
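That error-minimization loop can be caricatured in a few lines of code. Everything here (the setpoint, the `measure` and `remodel` functions, counting an arm in "segments") is a made-up toy, just to show the test-operate-exit structure being described: the loop never encodes how long to grow, only a remembered target and a way to measure the remaining delta:

```python
# Toy sketch of an anatomical homeostat: keep remodeling until the
# measured shape matches a stored setpoint, then stop.
TARGET_SEGMENTS = 10  # the collective's remembered "correct arm"

def measure(limb):
    # Test: ascertain the current shape.
    return len(limb)

def remodel(limb):
    # Operate: take an action that reduces the error.
    return limb + ["segment"]

def regenerate(limb):
    while TARGET_SEGMENTS - measure(limb) > 0:  # delta from the setpoint
        limb = remodel(limb)
    return limb  # Exit: the delta is zero, so growth stops on its own.

# Amputate anywhere along the length; the loop grows exactly what's needed.
print(measure(regenerate(["segment"] * 3)))  # → 10
print(measure(regenerate(["segment"] * 8)))  # → 10
```

The interesting control knob is the stored setpoint rather than the growth rule: change `TARGET_SEGMENTS` and the same loop builds a different arm, which is the flavor of high-level intervention described next.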
00:17:55 So once you know that, and we’ve taken advantage of this in the lab to do some really wild
00:17:59 things with both planaria and frog embryos and so on, you can start
00:18:04 playing with that homeostatic cycle. You can ask, for example, well, how does it remember
00:18:08 what the correct shape is? And can we mess with that memory? Can we give it a false memory of
00:18:12 what the shape should be and let the cells build something else? Or can we mess with the measurement
00:18:16 apparatus, right? So it gives you those kinds of options. So the idea is to
00:18:21 basically appropriate a lot of the approaches and concepts from cognitive neuroscience and
00:18:28 behavioral science into things that previously were taken to be dumb materials. And, you know,
00:18:33 you get yelled at in class for being anthropomorphic if you said, well, my cells
00:18:37 want to do this and my cells want to do that. And I think that’s a major mistake
00:18:41 that leaves a ton of capabilities on the table. So thinking about biological systems as things that
00:18:45 have memory, have almost something like cognitive ability. But I mean, how incredible is it,
00:18:56 you know, that the salamander arm is being rebuilt, not with a dictator. It’s kind of like
00:19:03 the cellular automata system. All the individual workers are doing their own thing. So where’s that
00:19:10 top down signal that does the control coming from? Like, how can you find it? Like, why does it stop
00:19:16 growing? How does it know the shape? How does it have memory of the shape? And how does it tell
00:19:21 everybody to be like, whoa, whoa, whoa, slow down, we’re done. So the first thing to think about,
00:19:26 I think, is that there are no examples anywhere in this kind of science of a central dictator,
00:19:33 because everything is made of parts. And so even though we feel like a unified central
00:19:40 sort of intelligence and kind of point of cognition, we are a bag of neurons, right?
00:19:45 All intelligence is collective intelligence. This is important to kind of
00:19:50 think about, because a lot of people think, okay, there’s real intelligence, like me,
00:19:54 and then there’s collective intelligence, which is ants and flocks of birds and termites and
00:19:59 things like that. And maybe it’s appropriate to think of them as an individual, and maybe it’s
00:20:05 not, and a lot of people are skeptical about that and so on. But you’ve got to realize that
00:20:09 we are not, there’s no such thing as this like indivisible diamond of intelligence that’s like
00:20:13 this one central thing that’s not made of parts. We are all made of parts. And so if you believe,
00:20:19 which I think is hard to get around, that we in fact have a centralized set of goals and
00:20:25 preferences and we plan and we do things and so on, you are already committed to the fact that
00:20:30 a collection of cells is able to do this, because we are a collection of cells. There’s no getting
00:20:34 around that. In our case, what we do is we navigate the three dimensional world and we
00:20:37 have behavior. This is blowing my mind right now, because we are just a collection of cells.
00:20:41 Oh yeah. So when I’m moving this arm, I feel like I’m the central dictator of that action,
00:20:50 but there’s a lot of stuff going on. All the cells here are collaborating in some interesting way.
00:20:57 They’re getting signal from the central nervous system.
00:21:00 Well, even the central nervous system is misleadingly named because it isn’t really
00:21:05 central. Again, it’s just a bunch of cells. I mean, all of them, right? There are no,
00:21:10 there are no singular indivisible intelligences anywhere. We are all, every example that we’ve
00:21:16 ever seen is a collective of something. It’s just that we’re used to it. We’re
00:21:21 used to, okay, this thing is kind of a single thing, but it’s really not. You zoom in, you know
00:21:24 what you see. You see a bunch of cells running around. Is there something unifying? I mean, we’re
00:21:29 jumping around, but something that you look at as the bioelectrical signal versus the
00:21:36 biochemical, the chemistry versus the electricity, maybe the life is in that rather than in the cells.
00:21:47 It’s like there’s an orchestra playing, and the resulting music is the dictator.
00:21:57 That’s not bad. That’s Dennis Noble’s kind of view of things. He has two really good books
00:22:02 where he talks about this musical analogy, right? So I think that’s, I like it. I like it.
00:22:07 Is it wrong though?
00:22:08 No, I don’t think it’s wrong. I think the important
00:22:13 thing about it is that we have to come to grips with the fact that a true, proper cognitive
00:22:23 intelligence can still be made of parts. And in fact it has to be. And I think
00:22:27 it’s a real shame, but I see this all the time: when you have a collective like this, whether it
00:22:32 be a group of robots or a collection of cells or neurons or whatever, as soon as we gain some
00:22:40 insight into how it works, meaning that, oh, I see, in order to take this action, here’s the
00:22:45 information that got processed via this chemical mechanism or whatever. Immediately people say,
00:22:50 oh, well then that’s not real cognition. That’s just physics. I think this is fundamentally
00:22:54 flawed because if you zoom into anything, what are you going to see? Of course you’re just going to
00:22:58 see physics. What else could be underneath, right? It’s not going to be fairy dust. It’s going to be
00:23:01 physics and chemistry, but that doesn’t take away from the magic of the fact that there are certain
00:23:05 ways to arrange that physics and chemistry and in particular the bioelectricity, which I like a lot,
00:23:11 to give you an emergent collective with goals and preferences and memories and anticipations
00:23:18 that do not belong to any of the subunits. So I think what we’re getting into here,
00:23:22 and we can talk about how this happens during embryogenesis and so on, what we’re getting into
00:23:26 is the origin of a self with a capital S. So we ourselves, there are many other kinds of
00:23:33 selves, and we can tell some really interesting stories about where selves come from and how they
00:23:37 become unified. Yeah, is this the first, or at least humans tend to think that this is the
00:23:42 level at which the self with a capital S is first born, and we really don’t want to see
00:23:49 human civilization or Earth itself as one living organism. Yeah, that’s very uncomfortable to us.
00:23:54 It is, yeah. We have to grow up past that. But yeah, where’s the self born? So what I like to do
00:24:01 is, I’ll tell you two quick stories about that. I like to roll backwards. So if
00:24:06 you start and you say, okay, here’s a paramecium, and you see it, you know, it’s a single-cell
00:24:10 organism, you see it doing various things, and people will say, okay, I’m sure there’s some
00:24:14 chemical story to be told about how it’s doing it,
00:24:18 so that’s not true cognition, right? And people will argue about that. I like to work it backwards.
00:24:23 I say, let’s agree that you and I, as we sit here, are examples of true cognition, if anything,
00:24:28 as if there’s anything that’s true cognition, we are examples of it. Now let’s just roll back
00:24:32 slowly, right? So you roll back to the time when you were a small child doing whatever,
00:24:36 and then just sort of day by day, you roll back, and eventually you become more or less that
00:24:41 paramecium, and then you go even below that, right, to an unfertilized oocyte. So
00:24:46 to my knowledge, no one has come up with any convincing discrete step at which
00:24:53 my cognitive powers disappear, right? The biology just doesn’t offer any specific
00:24:59 step. It’s incredibly smooth and slow and continuous. And so I think this idea that it just
00:25:04 sort of magically shows up at one point, and then, you know, humans have true selves that don’t exist
00:25:10 elsewhere, I think it runs against everything we know about evolution, everything we know about
00:25:13 developmental biology, these are all slow continua. And the other really important story I
00:25:18 want to tell is where embryos come from. So think about this for a second. Amniote embryos, so this
00:25:23 is mammals and birds and so on, humans included. Imagine a flat disk of cells, so there’s
00:25:29 maybe 50,000 cells. So let’s say you buy a
00:25:35 fertilized egg from a farm, right? That egg will have about 50,000 cells in a flat disk; it looks
00:25:42 like a tiny little frisbee. And in that flat disk, what’ll happen is there’ll be one set
00:25:50 of cells will become special, and it will tell all the other cells, I’m going to be the head,
00:25:56 you guys don’t be the head. And so through symmetry breaking and amplification, you get one
00:26:00 embryo; some neural tissue and some other stuff forms. Now, you say, okay, I had one egg
00:26:06 and one embryo, and there you go, what else could it be? Well, the reality is, and I did
00:26:10 all of this as a grad student, if you take a little needle and you make a scratch in that
00:26:16 blastoderm, in that disk, such that the cells can’t talk to each other for a while (it heals up, but
00:26:20 for a while they can’t talk to each other), what will happen is that both regions will decide that
00:26:26 they can be the embryo, and there will be two of them. And then when they heal up, they become
00:26:29 conjoined twins, and you can make two, you can make three, you can make lots. So the question of how
00:26:33 many individuals are in there cannot be answered until it’s actually played all the way through. It isn’t
00:26:40 necessarily that there’s just one; there can be many. So what you have is this medium,
00:26:44 this undifferentiated, I’m sure there’s a psychological version of this somewhere
00:26:49 that I don’t know the proper terminology for. But you have this ocean of
00:26:53 potentiality, you have these thousands of cells, and some number of individuals are going to be formed
00:26:58 out of it, usually one, sometimes zero, sometimes several. And they form out of these cells,
00:27:05 because a region of these cells organizes into a collective that will have goals, goals that
00:27:10 individual cells don’t have, for example, make a limb, make an eye, how many eyes? Well, exactly
00:27:15 two. So individual cells don’t know what an eye is, they don’t know how many eyes you’re supposed
00:27:19 to have, but the collective does. The collective has goals and memories and anticipations that the
00:27:23 individual cells don’t. And the establishment of that boundary, with its own
00:27:27 ability to maintain, to pursue certain goals, that’s the origin of selfhood.
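The symmetry-breaking story above can be cartooned in a few lines of code. This is purely an illustrative sketch, not a real developmental model; the random signal values, the group sizes, and the winner-take-all rule are all invented for the example:

```python
# Cartoon of symmetry breaking in the blastoderm: every cell carries a
# random "organizer" signal, and within any group of cells that can still
# talk to each other, the strongest cell becomes the head organizer and
# suppresses the rest. Scratching the disk splits one communicating group
# into two, so two organizers (conjoined twins) emerge instead of one.
import random

def organizers(signal, groups):
    """Return one winning organizer cell per communicating group."""
    return [max(group, key=lambda cell: signal[cell]) for group in groups]

random.seed(42)
n_cells = 50_000
signal = [random.random() for _ in range(n_cells)]

intact = [range(n_cells)]                                        # whole disk in contact
scratched = [range(n_cells // 2), range(n_cells // 2, n_cells)]  # scratch splits it

print(len(organizers(signal, intact)))     # 1 organizer -> one embryo
print(len(organizers(signal, scratched)))  # 2 organizers -> conjoined twins
```

The point of the toy is only that "how many individuals" is a property of the communication structure, not of the cell count.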
00:27:33 But I, is that goal in there somewhere? Were they always destined? Like, are they discovering
00:27:42 that goal? Like, where the hell did evolution discover this when you went from the prokaryotes
00:27:49 to eukaryotic cells? And then they started making groups. And when you make a certain group,
00:27:55 it’s such a tricky thing to try to understand. You make it
00:28:03 sound like the cells didn’t get together and come up with a goal, but the very act of them
00:28:09 getting together revealed the goal that was always there. There was always that potential
00:28:16 for that goal. So the first thing to say is that there are way more questions here than
00:28:20 certainties. Okay, so everything I’m telling you is cutting-edge, developing stuff. So
00:28:25 it’s not as if any of us knows the answer to this. But here’s my opinion on
00:28:29 this. I don’t think that evolution produces solutions to specific problems,
00:28:36 in other words, specific environments, like here’s a frog that can live well in a froggy
00:28:39 environment. I think what evolution produces is problem-solving machines that will
00:28:46 solve problems in different spaces, not just three-dimensional space.
00:28:50 This goes back to what we were talking about before: the brain is,
00:28:55 evolutionarily, a late development. It’s a system that is able to pursue goals in three
00:29:01 dimensional space by giving commands to muscles. Where did that system come from? It
00:29:05 evolved from a much more ancient system, where collections of
00:29:10 cells gave instructions for cell behaviors, meaning cells moving, dividing, dying, changing type,
00:29:18 in order to navigate morphospace, the space of anatomies, the space of all possible anatomies.
00:29:23 And before that, cells were navigating transcriptional space, which is a space of all
00:29:27 possible gene expressions. And before that metabolic space. So what evolution has done,
00:29:31 I think, is produced hardware that is very good at navigating different spaces using a
00:29:38 bag of tricks, many of which I’m sure we can steal for autonomous vehicles and
00:29:42 robotics and various things. And what happens is that they navigate these spaces without a whole
00:29:47 lot of commitment to what the space is. In fact, they don’t know what the space is, right? We are
00:29:51 all brains in a vat, so to speak. Every cell does not know, right? Every cell is
00:29:57 some other cell’s external environment, right? So that border between
00:30:02 you and the outside world, you don’t really know where that is, right? Every collection of
00:30:05 cells has to figure that out from scratch. And the fact that evolution requires all of these things
00:30:10 to figure out what they are, what effectors they have, what sensors they have, where does it make
00:30:15 sense to draw a boundary between me and the outside world? The fact that you have to build all
00:30:18 that from scratch, this autopoiesis, is what defines the border of a self. Now, biology uses a
00:30:26 multi-scale competency architecture, meaning that every level has goals. So
00:30:31 molecular networks have goals, cells have goals, tissues, organs, colonies. And it’s the
00:30:38 interplay of all of those that enables biology to solve problems in new ways, for example, in
00:30:43 xenobots and various other things. This is, you know, it’s exactly as you said, in many ways,
00:30:50 the cells are discovering new ways of being. But at the same time, evolution certainly shapes all
00:30:56 this. So evolution is very good at this agential bioengineering, right? When evolution
00:31:01 is discovering a new way of being an animal or a plant or something,
00:31:06 sometimes it’s by changing the hardware, changing proteins, protein structure,
00:31:10 and so on. But much of the time, it’s not by changing the hardware, it’s by changing the
00:31:14 signals that the cells give to each other. It’s doing what we as engineers do, which is try to
00:31:17 convince the cells to do various things by using signals, experiences, stimuli. That’s what biology
00:31:22 does. It has to, because it’s not dealing with a blank slate. If you’re
00:31:27 evolution, and you’re trying to make an organism, you’re not dealing with a passive
00:31:32 material that is fresh and that you have to fully specify; it already wants to do certain things. So the easiest
00:31:37 way to do that search, to find whatever is going to be adaptive, is to find the signals that are
00:31:42 going to convince cells to do various things, right? Your sense is that evolution operates
00:31:48 both in the software and the hardware. And it’s just easier, more efficient to operate in the
00:31:54 software. Yes. And I should also say, I don’t think the distinction is sharp. In other words,
00:31:58 I think it’s a continuum. But I think it’s a meaningful distinction where you can
00:32:03 make changes to a particular protein, and now the enzymatic function is different, and it metabolizes
00:32:08 differently, and whatever, and that will have implications for fitness. Or you can change the
00:32:14 huge amount of information in the genome that isn’t structural at all. It’s signaling,
00:32:20 it’s when and how do cells say certain things to each other. And that can have massive changes,
00:32:25 as far as how it’s going to solve problems. I mean, this idea of multi hierarchical
00:32:29 competence architecture, which is incredible to think about. So this hierarchy that evolution
00:32:35 builds, I don’t know who’s responsible for this. I also see the incompetence of bureaucracies
00:32:43 of humans when they get together. So how the hell does evolution build this, where at every level,
00:32:53 only the best get to stick around, they somehow figure out how to do their job without knowing
00:32:57 the bigger picture. And then there’s like the bosses that do the bigger thing somehow, or that
00:33:04 you can now abstract away the small group of cells as an organ or something. And then
00:33:11 that organ does something bigger in the context of the full body or something like this.
00:33:17 How is that built? Is there some intuition you can kind of provide of how that’s constructed,
00:33:23 that hierarchical competence architecture? I love that word, competence,
00:33:29 just the word competence is pretty cool in this context, because everybody’s good at their job.
00:33:34 Yeah, no, it’s really key. And the other nice thing about competency is this: my central
00:33:39 belief in all of this is that engineering is the right perspective on all of this stuff,
00:33:43 because it gets you away from subjective terms. You know, people talk about sentience and this
00:33:50 and that; those things are very hard to define, and people argue about them philosophically.
00:33:54 I think that engineering terms like competency, like pursuit of goals, all of
00:34:02 these things are empirically incredibly useful, because you know it when you see it,
00:34:06 and it helps you build. If I can pick the right level, I say,
00:34:11 I believe this thing has some level of competency, I think it’s like a thermostat, or I think it’s
00:34:17 like a better thermostat, or like various other kinds of
00:34:22 complex systems. If that helps me to control and predict and build
00:34:28 such systems, then that’s all there is to say, there’s no more philosophy to argue about. So I
00:34:32 like competency in that way, because you can quantify it. In fact, you
00:34:35 have to make a claim: competent at what? And if I tell you
00:34:38 it has a goal, the question is, what’s the goal? And how do you know? And I say, well, because
00:34:42 every time I deviated from this particular state, that’s what it spends energy to get back to,
00:34:46 that’s the goal. And we can quantify it, and we can be objective about it. So the thing is,
00:34:51 we’re not used to thinking about this, I give a talk sometimes called Why don’t robots get cancer,
00:34:56 right? And the reason robots don’t get cancer is because generally speaking, with a few exceptions,
00:35:00 our architectures have been: you’ve got a bunch of dumb parts, and you hope that if you put them
00:35:05 together, the overlying machine will have some intelligence and do something or other,
00:35:09 right? But the individual parts don’t care, they don’t have an agenda. Biology isn’t like
00:35:13 that. Every level has an agenda. And the final outcome is the result of cooperation and competition,
00:35:20 both within and across levels. So for example, during embryogenesis, your tissues and organs are
00:35:25 competing with each other. And it’s actually a really important part of development, there’s a
00:35:28 reason they compete with each other, they’re not all just, you know, sort of helping each other,
00:35:33 they’re also competing for information, for limited metabolic resources.
00:35:38 But to get back to your other point, which is that this seems
00:35:43 really efficient and good and so on compared to some of our human efforts. We also have to keep
00:35:48 in mind that what happens here is that each level bends the option space for the level beneath, so
00:35:56 that your parts basically don’t see the geometry of that space.
00:36:03 I take this terminology from relativity, right, where the space
00:36:10 is literally bent. So the option space is deformed by the higher level so that the lower levels, all
00:36:15 they really have to do is go down their concentration gradient. In
00:36:18 fact, they can’t know what the big picture is. But if you bend the space just right,
00:36:22 if they do what locally seems right, they end up doing your bidding, they end up doing things that
00:36:26 are optimal in the higher space. Conversely, because the components are good at getting their
00:36:33 job done, you as the higher level don’t need to try to compute all the low level controls,
00:36:38 all you’re doing is bending the space, you don’t know or care how they’re going to do it.
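The "bending the option space" idea can be sketched algorithmically. Nothing here is from Levin's actual models; it's a toy under the assumption that a low-level part only follows its local gradient, while the higher level's only job is to choose the potential being descended:

```python
# Toy of "bending the option space": the low-level part only follows its
# local gradient downhill; it never sees the big picture. The higher level
# doesn't micromanage the part -- it just bends the landscape (chooses the
# potential) so that dumb local descent lands the part at the collective's
# target position.
def descend(x, gradient, steps=1000, rate=0.1):
    """Dumb local rule: step downhill; no knowledge of the global goal."""
    for _ in range(steps):
        x -= rate * gradient(x)
    return x

target = 3.0  # the collective's goal (say, where the eye should end up)

# The higher level "bends the space": a bowl whose minimum is the target.
bent_gradient = lambda x: 2 * (x - target)

part_position = descend(-10.0, bent_gradient)
print(round(part_position, 3))  # 3.0: local descent achieves the global goal
```

The design choice mirrors the passage: the higher level never computes low-level controls, it only reshapes what "locally downhill" means.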
00:36:42 I’ll give you a super simple example in the tadpole. We found that, okay, tadpoles need
00:36:47 to become frogs, and to go from a tadpole head to a frog head, you have to rearrange the
00:36:51 face. So the eyes have to move forward, the jaws have to come out, the nostrils move, everything
00:36:55 moves. It used to be thought that because all tadpoles look the same, and all frogs look the
00:36:59 same, every piece just moves in the right direction by the right amount,
00:37:03 and then you get your frog, right? So we decided to test it. I had this hypothesis
00:37:08 that actually, the system is probably more intelligent than that. So what did we do?
00:37:11 We made what we call Picasso tadpoles. Everything is scrambled: the eyes are on the
00:37:15 back of the head, the jaws are off to the side, everything is scrambled. Well, guess what they
00:37:18 make? They make pretty normal frogs, because all the different things move around in novel
00:37:23 paths and configurations until they get to the correct froggy face configuration,
00:37:28 and then they stop. So the thing about that is, now imagine evolution, right? You make some
00:37:34 sort of mutation, and it does, like every mutation, it does many things. So something good comes of it,
00:37:40 but also it moves your mouth off to the side, right? Now, if there wasn’t this multi-scale
00:37:46 competency, you can see where this is going,
00:37:49 the organism would be dead, your fitness would be zero, because you can’t eat. And you would never get to
00:37:53 explore the other beneficial consequences of that mutation; you’d have to wait until you find some
00:37:57 other way of doing it without moving the mouth, and that’s really hard. So the fitness landscape
00:38:01 would be incredibly rugged, and evolution would take forever. One of the reasons
00:38:06 it works so well is because you can do that, and no worries, the mouth will find its way to where
00:38:11 it belongs, right? So now you get to explore. What that means is that all of these mutations
00:38:15 that otherwise would be deleterious are now neutral, because the competency of the parts
00:38:21 makes up for all kinds of things. So all the noise of development, all the variability in the
00:38:26 environment, all these things, the competency of the parts makes up for it. So that’s
00:38:32 all fantastic, right? That’s all great. The only other thing to remember when
00:38:36 we compare this to human efforts is this. Every component has its own goals in various spaces,
00:38:41 usually with very little regard for the welfare of the other levels. As a simple example,
00:38:46 you as a complex system will go out and do jiu jitsu
00:38:52 or whatever, or you’ll go rock climbing and scrape a bunch of cells off your
00:38:55 hands. And then you’re happy as a system, right? You come back, and you’ve accomplished some goals,
00:38:59 and you’re really happy. Those cells are dead. They’re gone. Right? Did you think about those
00:39:03 cells? Not really, right? You had some bruising, you selfish SOB. That’s it. And so
00:39:08 that’s the thing to remember, and we know this from history:
00:39:13 just being a collective isn’t enough. Because what the goals of that collective will
00:39:19 be relative to the welfare of the individual parts is a massively open question. The ends justify the means,
00:39:24 I’m telling you, Stalin was onto something. No, that’s the danger. Exactly, that’s the
00:39:29 danger for us humans: we have to construct ethical systems under which we don’t take seriously
00:39:39 the full mechanism of biology and apply it to the way the world functions,
00:39:43 which is an interesting line we’ve drawn. The world that built us is the one we
00:39:51 reject in some sense, when we construct human societies. The idea that this country was founded
00:39:59 on, that all men are created equal, that’s such a fascinating idea. That’s like, you’re fighting
00:40:05 against nature and saying, well, there’s something bigger here than a hierarchical competency
00:40:14 architecture. But there’s so many interesting things you said. So from an algorithmic perspective,
00:40:21 the act of bending the option space. That’s really profound. Because if you
00:40:29 look at the way AI systems are built today, there’s a big system, like I said, with robots,
00:40:36 and it has a goal, and it gets better and better at optimizing that goal, at accomplishing that goal.
00:40:42 But biology built a hierarchical system where everything is doing computation
00:40:49 and everything is accomplishing a goal. Not only that, each piece is kind of dumb:
00:40:56 within its bent option space, it’s just doing the thing that’s easiest
00:41:03 for it, in some sense. And somehow that allows you to have turtles on top of turtles,
00:41:10 literally dumb systems on top of dumb systems that as a whole create something incredibly smart.
00:41:18 Yeah, I mean, every system has some degree of intelligence in its own problem domain.
00:41:25 So cells will have problems they’re trying to solve in physiological space and transcriptional
00:41:30 space. And I can give you some cool examples of that. But the collective is trying
00:41:34 to solve problems in anatomical space, right, forming a creature and growing your
00:41:38 blood vessels and so on. And then the whole body is solving yet other
00:41:44 problems, maybe in social space and linguistic space and three-dimensional space.
00:41:48 And who knows, you know, the group might be solving problems in, you know, I don’t know,
00:41:52 some sort of financial space or something. So one of the major differences with most
00:41:59 AIs today is, first, the kind of flatness of the architecture, but also the fact that
00:42:06 they’re constructed from outside; their borders are given to them.
00:42:14 To a large extent, and of course there are counterexamples now, but to a large extent,
00:42:18 our technology has been such that you create a machine or a robot, it knows what its sensors are,
00:42:23 it knows what its effectors are, it knows the boundary between it and the outside world,
00:42:27 although this is given from the outside. Biology constructs this from scratch. Now the best example
00:42:32 of this in robotics was originally Josh Bongard’s work in 2006, where he
00:42:38 made these robots that did not know their shape to start with. So like a baby, they sort of
00:42:43 floundered around, they made some hypotheses: well, I did this, and I moved in this way, so
00:42:47 maybe I’m a whatever, maybe I have wheels, or maybe I have six legs, or whatever, right? And
00:42:50 they would make a model and eventually learn to crawl around. So that’s, I mean, that’s really good.
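A toy version of that self-modeling loop, an invented sketch rather than Bongard's actual 2006 algorithm: the agent holds several candidate body models, issues motor commands, observes how far it actually moved, and keeps the model whose predictions best match reality:

```python
# The agent doesn't know its own body. It compares candidate self-models
# ("maybe I have wheels, maybe I have six legs") against what actually
# happens when it issues motor commands, and keeps the best predictor.
candidate_models = {
    "wheels":   lambda cmd: 2.0 * cmd,  # wheels would move far per command
    "six_legs": lambda cmd: 0.5 * cmd,  # legs would move less per command
    "no_motor": lambda cmd: 0.0,        # wouldn't move at all
}

def true_body(cmd):
    """The body it actually has (unknown to the agent): six legs."""
    return 0.5 * cmd

def infer_self(models, observe, commands):
    """Pick the self-model with the lowest total prediction error."""
    def error(model):
        return sum((model(c) - observe(c)) ** 2 for c in commands)
    return min(models, key=lambda name: error(models[name]))

print(infer_self(candidate_models, true_body, [1.0, 2.0, 3.0]))  # six_legs
```

All model names and motion constants here are made up; the point is only the hypothesize-act-compare loop described above.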
00:42:54 That’s part of the autopoiesis, but we can go a step further, and some people are doing this, and
00:42:58 we’re working on some of this too: the idea that we go back even further.
00:43:02 You don’t even know what sensors you have, you don’t know where you end and the outside world
00:43:06 begins. All you have is certain things like active inference, meaning you’re trying to minimize
00:43:11 surprise, right? You have some metabolic constraints, you don’t have all the energy you
00:43:14 need, you don’t have all the time in the world to think about everything you want to think about. So
00:43:18 that means that you can’t afford to be a micro-reductionist with all this data coming in;
00:43:23 you have to coarse-grain it and say, I’m gonna take all this stuff, and I’m gonna call that a
00:43:26 cat. I’m gonna take all this, I’m gonna call that the edge of the table I don’t want to fall off of.
00:43:30 And I don’t want to know anything about the micro states, what I want to know is what is the optimal
00:43:34 way to cut up my world. And by the way, this thing over here, that’s me. And the reason that’s me is
00:43:38 because I have more control over this than I have over any of this other stuff. And so now you can
00:43:42 begin to, right? So that’s self-construction: figuring out, making models of the
00:43:46 outside world, and then turning that inwards, and starting to make a model of yourself, right, which
00:43:51 immediately starts to get into issues of agency and control. Because if you are under
00:43:58 metabolic constraints, meaning you don’t have all the energy in the world,
00:44:02 you have to be efficient, and that immediately forces you to start telling stories about coarse-grained
00:44:08 agents that do things, right? You don’t have the energy to, like Laplace’s demon,
00:44:11 calculate every possible state that’s going to happen; you have to coarse-grain,
00:44:17 and you have to say, that is the kind of creature that does things, either things that I avoid
00:44:21 or things that I will go towards, a mate or food or whatever it’s going to be.
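One piece of that description, the surprise-minimization loop, can be sketched in code. This is a minimal illustration with made-up numbers, not a full active-inference model: the agent keeps one cheap, coarse expectation about a sensor instead of tracking every micro-state, and nudges it so that future observations become less surprising:

```python
# The agent can't afford to model every micro-state, so it keeps a single
# coarse-grained expectation and updates it to reduce surprise
# (prediction error) over time.
def surprise(expected, observed):
    return (expected - observed) ** 2

def adapt(expected, observations, rate=0.3):
    """Nudge the coarse expectation toward what the world keeps delivering."""
    history = []
    for obs in observations:
        history.append(surprise(expected, obs))
        expected += rate * (obs - expected)  # move expectation toward reality
    return expected, history

expected, history = adapt(0.0, [1.0] * 20)
print(round(expected, 2))        # expectation has converged near 1.0
print(history[0] > history[-1])  # True: the world has become less surprising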
00:44:25 And so right at the base, very simple organisms starting to make
00:44:31 models of agents doing things, that is the origin of models of free will, basically, right? Because
00:44:39 you see the world around you as having agency. And then you turn that on yourself. And you say,
00:44:42 wait, I have agency too, I can do things, right? And then you make decisions about what you’re
00:44:47 going to do. So one model is to view all of those kinds of things as
00:44:53 being driven by that early need to determine what you are, and to then take
00:44:59 actions in the most energetically efficient space possible. Right. So free will emerges
00:45:04 when you try to simplify, tell a nice narrative about your environment. I think that’s very
00:45:10 plausible. Yeah. You think free will is an illusion? So you’re kind of implying that it’s a useful hack.
00:45:19 Well, I’ll say two things. The first thing is, I think it’s very plausible to say that
00:45:24 any organism, or any agent, whether it’s biological or not, that
00:45:30 self-constructs under energy constraints is going to believe in free will; we’ll get to whether it
00:45:36 has free will momentarily. But I think what it definitely drives is a view of
00:45:41 yourself and the outside world as agential; I think that’s inescapable. So that’s true
00:45:45 for even primitive organisms? I think so. Now, obviously,
00:45:50 you have to scale down, right? They don’t have the kinds of complex metacognition
00:45:55 that we have, so they can’t do long-term planning and thinking about free will and so on.
00:45:59 But the sense of agency is really useful to accomplish tasks, simple or complicated. That’s
00:46:05 right. In all kinds of spaces, not just in obvious three-dimensional space. The thing is,
00:46:09 humans are very good at detecting agency of medium-sized objects
00:46:16 moving at medium speeds in the three-dimensional world, right? We see a bowling ball and we see a
00:46:20 mouse, and we immediately know what the difference is. Mostly things
00:46:23 you can eat or get eaten by. Yeah, yeah. That’s our training set, right? From the time
00:46:28 you’re little, your training set is visual data on this little chunk of your experience.
00:46:33 But imagine if, from the time we were born, we had an innate sense of our blood
00:46:39 chemistry, if you could feel your blood chemistry the way you can see, right, a high-bandwidth
00:46:42 connection, and you could sense all
00:46:46 the things that your organs were doing, your pancreas, your liver, all of it. If we had
00:46:51 that, we would be very good at detecting intelligence in physiological space; we would
00:46:55 know the level of intelligence that our various organs were deploying to deal with things that
00:47:00 were coming, to anticipate stimuli. But we’re just terrible at that. We
00:47:04 don’t. In fact, when you talk about intelligence in
00:47:07 these other spaces, a lot of people think that’s just crazy, because all we know
00:47:12 is motion. We do have access to that information, though. So it’s actually possible that evolution could,
00:47:18 if we wanted to construct an organism that’s able to perceive the flow of blood through your body,
00:47:24 the way you see an old friend and say, yo, what’s up? How’s the wife and the kids? In that same way,
00:47:32 you would feel a connection to the liver. Yeah, yeah, I think,
00:47:37 you know, maybe other people’s liver and not just your own, because you don’t have access to other
00:47:41 people’s. Not yet. But you could imagine some really interesting connection, right? But like
00:47:46 sexual selection, like, oh, that girl’s got a nice liver. Well, that’s like, the way her blood flows,
00:47:52 the dynamics of the blood is very interesting. It’s novel. I’ve never seen one of those.
00:47:58 But you know, that’s exactly what we’re trying to half-ass when we judge
00:48:03 beauty by facial symmetry and so on. That’s a half-assed assessment of exactly that. Because
00:48:09 if your cells could not cooperate enough to keep your organism symmetrical, you know,
00:48:13 you can make some inferences about what else is wrong, right?
00:48:17 That’s a very basic assessment. Interesting. Yeah. So in some deep sense, actually, that is what we’re
00:48:23 doing. We’re trying to infer, we use the word healthy, but basically, how functional
00:48:33 is this biological system I’m looking at, so I can hook up with that one and make offspring? Yeah,
00:48:41 yeah. Well, what kind of hardware might their genomics give me that might be useful in
00:48:45 the future? I wonder why evolution didn’t give us a higher-resolution signal. Like, why the whole
00:48:50 peacock thing with the feathers? It’s a very low-bandwidth signal for
00:48:58 sexual selection. I’m not an expert on this stuff, or on peacocks. Well,
00:49:02 you know, but I’ll take a stab at the reason. I think that it’s because it’s an arms race. You
00:49:08 see, you don’t want everybody to know everything about you.
00:49:14 And in fact, there’s another interesting part of this arms race, which is, if you think about this,
00:49:21 the most adaptive, evolvable system is one that has the most level of top down control, right?
00:49:27 It’s really easy to say to a bunch of cells, make another finger, versus, okay, here’s 10,000
00:49:33 gene expression changes that you need to make to change your finger, right? The system
00:49:38 with good top down control that has memory and when we need to get back to that, by the way,
00:49:42 that’s a question I neglected to answer about where the memory is and so on. A system that uses
00:49:48 all of that is really highly evolvable and that’s fantastic. But guess what? It’s also highly subject
00:49:53 to hijacking by parasites, by cheaters of various kinds, by conspecifics. Like we found that,
00:50:01 and then that goes back to the story of the pattern memory in these planaria,
00:50:04 there’s a bacterium that lives on these planaria. That bacterium has an input into how many heads
00:50:09 the worm is going to have, because it hijacks that control system and is able to make a
00:50:14 chemical that basically interfaces with the system that calculates how many heads you’re
00:50:18 supposed to have, and they can make them have two heads. And so you can imagine the tension:
00:50:22 you want to be understandable, for your own parts to understand each other,
00:50:25 but you don’t want to be too understandable, because you’ll be too easily controllable.
00:50:28 And so my guess is that that opposing pressure keeps us from being a super high
00:50:36 bandwidth kind of thing where we can just look at somebody and know everything about them.
00:50:40 So it’s a kind of biological game of Texas hold ’em. You’re showing some cards and you’re hiding
00:50:45 other cards, and there’s bluffing and all that. And then there’s
00:50:50 probably whole species that would do way too much bluffing. That’s probably where peacocks fall.
00:50:56 There’s a book, I don’t remember if I read it or just summaries of it,
00:51:04 but it’s about the evolution of beauty in birds. Where is that from? Is that a book, or does
00:51:10 Richard Dawkins talk about it? But basically, some species start to, not over-select,
00:51:15 just for some reason select for beauty. There is a case to be made.
00:51:21 Actually now I’m starting to remember, I think Darwin himself made a case that you can select
00:51:27 based on beauty alone. There’s a point where beauty doesn’t represent some underlying biological
00:51:35 truth; you start to select for beauty itself. And I think the deep question is whether there’s some evolutionary
00:51:44 value to beauty. But it’s an interesting thought: can we deviate completely from
00:51:53 the deep biological truth and come to appreciate some kind of summarization in itself?
00:52:00 Let me get back to memory because this is a really interesting idea. How do a collection of cells
00:52:07 remember anything? How do biological systems remember anything? How is that akin to the kind
00:52:13 of memory we think of humans as having within our big cognitive engine?
00:52:17 Yeah. One of the ways to start thinking about bioelectricity is to ask ourselves: where did
00:52:25 neurons, and all these cool tricks that the brain uses to run these amazing problem-solving abilities
00:52:32 on basically an electrical network, come from? They didn’t just
00:52:36 appear out of nowhere. It must have evolved from something. And what it evolved from
00:52:40 was a much more ancient ability of cells to form networks to solve other kinds of problems. For
00:52:46 example, to navigate morphospace, to control the body shape. And so all of the components
00:52:52 of neurons, so ion channels, neurotransmitter machinery, electrical synapses, all this stuff
00:52:58 is way older than brains, way older than neurons, in fact, older than multicellularity.
00:53:03 It was already there even in bacterial biofilms; there’s some beautiful work from UCSD on
00:53:09 brain-like dynamics in bacterial biofilms. So evolution figured out very early on that electrical networks
00:53:14 are amazing at having memories, at integrating information across distance, at different kinds
00:53:19 of optimization tasks, you know, image recognition and so on, long before there were brains.
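One standard toy for how any network of coupled elements, not just neurons, can hold a memory is a Hopfield-style attractor network: a stored pattern becomes a stable state that corrupted inputs relax back to. This is an illustration of the general principle, not a model of bioelectric circuits specifically:

```python
# A stored pattern of +1/-1 states becomes an attractor of the network;
# start the network in a corrupted version of the pattern, and the local
# update rule pulls it back -- the network "remembers".
def store(pattern):
    """Hebbian weights that make `pattern` a stable attractor."""
    n = len(pattern)
    return [[pattern[i] * pattern[j] if i != j else 0 for j in range(n)]
            for i in range(n)]

def recall(weights, state, steps=5):
    """Repeatedly update every unit from its weighted inputs until it settles."""
    for _ in range(steps):
        state = [1 if sum(w * s for w, s in zip(row, state)) >= 0 else -1
                 for row in weights]
    return state

memory = [1, -1, 1, 1, -1, -1, 1, -1]
weights = store(memory)
corrupted = [-1, -1, 1, 1, -1, -1, 1, 1]  # two units flipped

print(recall(weights, corrupted) == memory)  # True
```

Nothing in the update rule knows where the memory "is"; it is distributed across the couplings, which is part of why electrical networks are such good memory substrates.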
00:53:24 Can you actually just step back? We’ll return to it. What is bioelectricity? What is biochemistry?
00:53:30 What are electrical networks? I think a lot of the biology community focuses on
00:53:36 the chemicals as the signaling mechanisms that make the whole thing work. You have, I think,
00:53:47 to a large degree uniquely, maybe you can correct me on that, focused on the bioelectricity,
00:53:53 which is using electricity for signaling. There’s also probably mechanical. Sure, sure. Like knocking
00:54:00 on the door. So what’s the difference? And what’s an electrical network? Yeah, so I want to make
00:54:07 sure and kind of give credit where credit is due. So as far back as 1903, and probably late 1800s
00:54:14 already, people were thinking about the importance of electrical phenomena in life. So I’m for sure
00:54:20 not the first person to stress the importance of electricity. There were waves of research
00:54:25 in the 30s, in the 40s, and then again in the 70s, 80s, and 90s, of sort of the
00:54:33 pioneers of bioelectricity, who did some amazing work on all this. I think what
00:54:37 we’ve done that’s new, and I’ll describe what the bioelectricity is, is to step away from
00:54:43 the idea that, well, here’s another piece of physics that you
00:54:46 need to keep track of to understand physiology and development. And to really start looking at this
00:54:51 as saying, no, this is a privileged computational layer that gives you access to the actual
00:54:57 cognition of the tissue, to basal cognition. So merging that developmental biophysics with
00:55:02 ideas in cognition and computation and so on, I think that’s what we’ve done that’s new.
00:55:05 But people have been talking about bioelectricity for a really long time. And so I’ll
00:55:09 define that. So what happens is that if you have a single cell, the cell has a membrane,
00:55:16 in that membrane are proteins called ion channels, and those proteins allow charged molecules,
00:55:21 potassium, sodium, chloride, to go in and out under certain circumstances. And when there’s
00:55:27 an imbalance of those ions, there becomes a voltage gradient across that membrane. And so
00:55:33 all cells, all living cells try to hold a particular kind of voltage difference across
00:55:38 the membrane, and they spend a lot of energy to do so. Now, so that’s a single cell.
00:55:44 When you have multiple cells, the cells sitting next to each other,
00:55:48 they can communicate their voltage state to each other via a number of different ways. But one of
00:55:53 them is this thing called a gap junction, which is basically like a little submarine hatch that
00:55:56 just kind of docks, right? And the ions from one side can flow to the other side, and vice versa.
00:56:02 So…
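An editorial aside: the voltage gradient described above can be made concrete with the Nernst equation, the standard textbook relation for the equilibrium potential a single ion species would produce across a membrane. This is not something computed in the conversation; the ion concentrations below are typical illustrative values.

```python
import math

# Nernst equation: equilibrium potential for one ion species,
#   E = (R*T / (z*F)) * ln([ion]_outside / [ion]_inside)
R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # roughly body temperature, K

def nernst_mV(z, conc_out_mM, conc_in_mM):
    """Equilibrium potential in millivolts for an ion of valence z."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical mammalian concentrations (illustrative, in mM)
E_K = nernst_mV(+1, 5.0, 140.0)    # potassium: high inside -> negative potential
E_Na = nernst_mV(+1, 145.0, 12.0)  # sodium: high outside -> positive potential
print(f"E_K  ~ {E_K:.0f} mV")      # about -89 mV
print(f"E_Na ~ {E_Na:.0f} mV")     # about +66 mV
```

The cell’s actual resting potential sits between these equilibria, weighted by the relative permeabilities of the open channels, which is why changing which ion channels are expressed changes the cell’s voltage state.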
00:56:02 Isn’t it incredible that this evolved? Isn’t that wild? Because that didn’t exist.
00:56:09 Correct. This had to be, this had to be evolved.
00:56:11 It had to be invented.
00:56:12 That’s right.
00:56:13 Somebody invented electricity in the ocean. When did this get invented?
00:56:17 Yeah. So, I mean, it is incredible. The guy who discovered gap junctions,
00:56:22 Werner Loewenstein, I visited him. He was really old.
00:56:25 A human being?
00:56:26 He discovered them.
00:56:27 Because whoever really discovered them lived probably four billion years ago.
00:56:32 Good point.
00:56:32 So you give credit where credit is due, I’m just saying.
00:56:35 He rediscovered gap junctions. But when I visited him in Woods Hole, maybe 20 years ago now,
00:56:43 he told me that he was writing, and unfortunately, he passed away, and I think this book never got
00:56:47 written. He was writing a book on gap junctions and consciousness. And I think it would have been
00:56:52 an incredible book, because gap junctions are magic. I’ll explain why in a minute.
00:56:57 What happens is that, just imagine, the thing about both these ion channels and these gap
00:57:02 junctions is that many of them are themselves voltage sensitive. So that’s a voltage-sensitive
00:57:08 conductance. That’s a transistor. And as soon as you’ve invented one, immediately,
00:57:13 you now get access to, from this platonic space of mathematical truths, you get access to all of the
00:57:20 cool things that transistors do. So now, when you have a network of cells, not only do they talk to
00:57:26 each other, but they can send messages to each other, and the differences of voltage can propagate.
00:57:30 Now, to neuroscientists, this is old hat, because you see this in the brain, right? These action
00:57:34 potentials, the electricity. They have these awesome movies where you can take
00:57:40 a transparent animal, like a zebrafish, and you can literally look down, and you can see all
00:57:45 the firings as the fish is making decisions about what to eat and things like this. It’s amazing.
00:57:49 Well, your whole body is doing that all the time, just much slower. So there are very few things
00:57:54 that neurons do that all the cells in your body don’t do. They all do very similar things, just
00:57:59 on a much slower timescale. And whereas your brain is thinking about how to solve problems in
00:58:04 three dimensional space, the cells in an embryo are thinking about how to solve problems in
00:58:08 anatomical space. They’re trying to have memories like, hey, how many fingers are we supposed to
00:58:12 have? Well, how many do we have now? What do we do to get from here to there? That’s the kind of
00:58:15 problems they’re thinking about. And the reason that gap junctions are magic is, imagine,
00:58:20 from the earliest times, here are two cells. How can they communicate? Well,
00:58:29 the simple version is this cell could send a chemical signal, it floats over, and it hits
00:58:34 a receptor on this cell, right? Because it comes from outside, this cell can very easily tell that
00:58:39 that came from outside. Whatever information is coming, that’s not my information. That information
00:58:44 is coming from the outside. So I can trust it, I can ignore it, I can do various things with it,
00:58:48 whatever, but I know it comes from the outside. Now imagine
00:58:52 instead that you have two cells with a gap junction between them. Something happens,
00:58:55 let’s say this cell gets poked, there’s a calcium spike, the calcium spike or whatever small
00:58:59 molecule signal propagates through the gap junction to this cell. There’s no ownership
00:59:04 metadata on that signal. This cell does not know now that it came from outside because it looks
00:59:10 exactly like its own memories would have looked like of whatever had happened, right? So gap
00:59:15 junctions to some extent wipe ownership information on data, which means that if you and
00:59:21 I are sharing memories and we can’t quite tell who the memories belong to, that’s the beginning of a
00:59:26 mind meld. That’s the beginning of a scale-up of cognition from here’s me and here’s you to, no,
00:59:31 now there’s just us. So gap junctions enforce a collective intelligence. That’s right. It
00:59:36 helps. It’s the beginning. It’s not the whole story by any means, but it’s the start.
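The “no ownership metadata” point lends itself to a toy model. This is a hypothetical sketch, not anything from Levin’s lab: a receptor-mediated signal arrives as an event tagged with its source, while gap-junction coupling simply equilibrates internal state between neighbors, leaving no record of where the change originated.

```python
# Hypothetical toy model of the two signaling routes between cells.

class Cell:
    def __init__(self, calcium=0.0):
        self.calcium = calcium  # internal state, e.g. a calcium level
        self.inbox = []         # receptor-mediated signals, tagged with sender

    def receive_ligand(self, sender, amount):
        # Outside-in signaling: the signal arrives with provenance attached.
        self.inbox.append((sender, amount))

def gap_junction_step(a, b, conductance=0.5):
    # Ions flow down the gradient; each cell just sees a new internal level.
    flow = conductance * (a.calcium - b.calcium)
    a.calcium -= flow
    b.calcium += flow

a, b = Cell(calcium=1.0), Cell()
b.receive_ligand("cell_a", 0.3)  # receptor route: b knows the source
gap_junction_step(a, b)          # gap-junction route: no source attached
print(b.inbox)    # [('cell_a', 0.3)] -- b can tell this came from outside
print(b.calcium)  # 0.5 -- indistinguishable from b's own activity
```

The receptor route preserves a “that came from outside” tag; the gap-junction route leaves the receiving cell with a state change it cannot distinguish from its own memories, which is the start of the merge into a shared “us.”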
00:59:39 Where is the state of the system stored? Is it in part in the gap junctions themselves? Is it in the
00:59:48 cells? There are many, many layers to this as always in biology. So there are chemical networks.
00:59:55 So for example, gene regulatory networks, right? Which, or basically any kind of chemical pathway
01:00:00 where different chemicals activate and repress each other, they can store memories. So in a
01:00:04 dynamical system sense, they can store memories. They can get into stable states that are hard to
01:00:09 pull them out of. So once they get in, that becomes a memory, a permanent or
01:00:13 semi-permanent memory of something that’s happened. There are cytoskeletal structures that
01:00:17 store memories physically, in their physical configuration. There are electrical memories
01:00:24 like flip-flops, where nothing physical moves. I showed my students this example
01:00:30 of a flip-flop. And the reason that it stores a zero or a one is not because some piece of the hardware
01:00:37 moved. It’s because there’s a cycling of the current on one side of the thing. If I come over
01:00:42 and I hold the other side at a high voltage for a brief period of time, it flips over and now it’s
01:00:50 here. But none of the hardware moved. The information is stored in a stable dynamical state. And
01:00:54 if you were to X-ray the thing, you couldn’t tell me if it was a zero or a one, because all you would
01:00:58 see is where the hardware is. You wouldn’t see the energetic state of the system. So there are
01:01:03 bioelectrical states that are held in that exact way, like volatile RAM basically, like in the
01:01:09 electrical state. It’s very akin to the different ways that memory is stored in a computer.
01:01:15 So there’s RAM, there’s the hard drive. You can make that mapping, right? So I think the interesting
01:01:21 thing is that based on the biology, we can have something more sophisticated. I think we can
01:01:26 revise some of our computer engineering methods, because there are some interesting things in
01:01:32 biology we haven’t done yet. But that mapping is not bad. I mean, I think it works in many ways.
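The flip-flop point, that a memory can live in the dynamics rather than in any moved part, can be sketched with a one-variable positive-feedback system. This is a generic bistable toy model chosen for illustration, not taken from the conversation: the parameters never change, yet a brief input pulse flips which attractor the system settles into.

```python
# A minimal dynamical "flip-flop": dx/dt = -x + k*x^2/(1+x^2) + input.
# For k = 3 this has two stable states (x ~ 0 and x ~ 2.62) separated by
# an unstable point near x ~ 0.38. The "hardware" (k) never changes;
# only the ongoing activity stores the bit.

def step(x, pulse, dt=0.01, k=3.0):
    dx = -x + k * x * x / (1.0 + x * x) + pulse
    return x + dt * dx

def settle(x, pulse=0.0, steps=5000):
    for _ in range(steps):
        x = step(x, pulse)
    return x

low = settle(0.1)                      # starts below threshold -> decays toward 0
x = settle(low, pulse=2.0, steps=200)  # brief "write" pulse pushes it up
high = settle(x)                       # pulse removed -> holds the high state
print(f"low state  ~ {low:.3f}")
print(f"high state ~ {high:.3f}")
```

An X-ray of this system would show identical “hardware” in both cases; only the ongoing activity differs, which is the sense in which a bioelectrical state is like volatile RAM.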
01:01:38 Yeah, I wonder because I mean, the way we build computers at the root of computer science is the
01:01:43 idea of proof of correctness. We program things to be perfect, reliable. You know, this idea of
01:01:52 resilience and robustness to unknown conditions is not as important. So that’s what biology is really
01:01:58 good at. So I don’t know what kind of systems, or how we go from a computer to a
01:02:04 biological system in the future. Yeah, I think that, you know, the thing about biology is all
01:02:10 about making really important decisions really quickly on very limited information. I mean,
01:02:15 that’s what biology is all about. You have to act, you have to act now. The stakes are very high,
01:02:19 and you don’t know most of what you need to know to be perfect. And so there’s not even an attempt
01:02:24 to be perfect or to get it right in any sense. There are just things like active inference,
01:02:29 minimize surprise, optimize some efficiency and some things like this that guides the whole
01:02:37 business. I mentioned to you offline that somebody who’s a fan of your work is Andrej Karpathy.
01:02:44 And he’s, amongst many things, also writes occasionally a great blog. He came up with
01:02:52 this idea, I don’t know if he coined the term, of Software 2.0, where the programming is
01:03:00 done in the space of configuring these artificial neural networks. Is there some sense in which that
01:03:08 would be the future of programming for us humans, where we’re doing less Python-like programming
01:03:16 and more… how would that look? Basically tuning the hyperparameters of something
01:03:25 akin to a biological system and watching it go and adjusting it and creating some kind of feedback
01:03:33 loop within the system so it corrects itself. And then we watch it over time accomplish the goals
01:03:40 we want it to accomplish. Is that kind of the dream of the dogs that you described in the Nature
01:03:46 paper? Yeah. I mean, that’s what you just painted is a very good description of our efforts at
01:03:54 regenerative medicine as a kind of somatic psychiatry. So the idea is that you’re not trying
01:04:01 to micromanage. I mean, think about the limitations of a lot of the medicines today. We try to
01:04:07 interact down at the level of pathways. So we’re trying to micromanage it. What’s the problem? Well,
01:04:14 one problem is that for almost every medicine other than antibiotics, once you stop it, the
01:04:20 problem comes right back. You haven’t fixed anything. You were addressing symptoms. You
01:04:23 weren’t actually curing anything, again, except for antibiotics. That’s one problem. The other
01:04:28 problem is you have a massive amount of side effects, because you’re trying to interact at the lowest
01:04:33 level. It’s like, I’m going to try to program this computer by changing the melting point of
01:04:40 copper. Maybe you can do things that way, but my God, it’s hard to program at the hardware level.
01:04:46 So what I think we’re starting to understand is that, and by the way, this goes back to what you
01:04:53 were saying before about that we could have access to our internal state. So people who practice that
01:04:58 kind of stuff, yoga and biofeedback and so on, those are all the people that uniformly will say
01:05:04 things like, well, the body has an intelligence and this and that. Those two sets overlap perfectly
01:05:08 because that’s exactly right. Because once you start thinking about it that way, you realize that
01:05:13 the better locus of control is not always at the lowest level. This is why we don’t all program
01:05:18 with a soldering iron. We take advantage of the high-level intelligences that are there,
01:05:24 which means trying to figure out, okay, which of your tissues can
01:05:28 learn? What can they learn? Why is it that certain drugs stop working after you take them for a while
01:05:35 with this habituation, right? And so can we understand habituation, sensitization, associative
01:05:40 learning, these kinds of things in chemical pathways? I think we’re going to have a completely
01:05:44 different way of using drugs and of
01:05:49 medicine in general when we start focusing on the goal states and on the intelligence of our
01:05:54 subsystems as opposed to treating everything as if the only path was micromanagement from
01:05:59 chemistry upwards. Well, can you speak to this idea of somatic psychiatry? What are somatic cells?
01:06:05 How do they form networks that use bioelectricity to have memory and all those kinds of things?
01:06:11 Yeah. What are somatic cells like basics here? Somatic cells just means the cells of your body.
01:06:16 Soma just means body, right? So somatic cells are just the… I’m not even specifically making a
01:06:20 distinction between somatic cells and stem cells or anything like that. I mean, basically all the
01:06:23 cells in your body, not just neurons, but all the cells in your body. They form electrical
01:06:28 networks during embryogenesis, during regeneration. What those networks are doing
01:06:33 in part is processing information about what our current shape is and what the goal shape is.
01:06:39 Now, how do I know this? Because I can give you a couple of examples. One example is when we started
01:06:45 studying this, we said, okay, here’s a planarian. A planarian is a flatworm. It has one head and one
01:06:50 tail normally. And the amazing… There’s several amazing things about planaria, but basically they
01:06:55 kind of… I think planaria hold the answer to pretty much every deep question of life.
01:07:00 For one thing, they’re similar to our ancestors. So they have true symmetry. They have a true
01:07:04 brain. They’re not like earthworms. They’re a much more advanced life form. They have lots
01:07:08 of different internal organs, but they’re these little things, about
01:07:12 a centimeter to two in size. They have a head and a tail. And the first thing is planaria are
01:07:17 immortal. So they do not age. There’s no such thing as an old planarian. So that right there
01:07:22 tells you that these theories of thermodynamic limitations on lifespan are wrong. It’s not that
01:07:27 well, over time everything degrades. No, planaria can keep it going for, well, how long have
01:07:33 they been around? 400 million years. So the planaria in our lab are actually in physical
01:07:38 continuity with planaria that were here 400 million years ago. So there’s planaria that
01:07:43 have lived that long, essentially. What does physical continuity mean? Because what they do
01:07:49 is they split in half. The way they reproduce is they split in half. So the planaria, the back end
01:07:54 grabs the petri dish, the front end takes off and they rip themselves in half. But isn’t it some
01:07:59 sense where, like, you are a physical continuation? Yes, except that we go through a bottleneck of one
01:08:07 cell, which is the egg. They do not. I mean, they can; there are certain planaria that do. Got it. So we go
01:08:11 through a very ruthless compression process and they don’t. Yes. Like an autoencoder, you know,
01:08:17 sort of squashed down to one cell and then back out. These guys just tear themselves in half.
01:08:22 And so the other amazing thing about them is they regenerate. So you can cut them into pieces.
01:08:26 The record is, I think, 276 or something like that by Thomas Hunt Morgan. And each piece regrows a
01:08:32 perfect little worm. They know exactly, every piece knows exactly what’s missing, what needs
01:08:36 to happen. In fact, if you chop it in half, as it grows the other half, the original tissue shrinks
01:08:45 so that when the new tiny head shows up, they’re proportional. So it keeps perfect proportion.
01:08:50 If you starve them, they shrink. If you feed them again, they expand. Their control,
01:08:54 their anatomical control is just insane. Somebody cut them into over 200 pieces.
01:08:58 Yeah. Thomas Hunt Morgan did. Hashtag science. Amazing. And maybe more. I mean,
01:09:03 they didn’t have antibiotics back then. I bet he lost some due to infection. I bet it’s
01:09:06 actually more than that. I bet you could do more than that. Humans can’t do that.
01:09:11 Well, yes. I mean, again, true, except that… Maybe you can at the embryonic level.
01:09:16 Well, that’s the thing, right? So when I talk about this, I say, just remember that
01:09:21 as amazing as it is to grow a whole planarian from a tiny fragment,
01:09:24 half of the human population can grow a full body from one cell. So development is really,
01:09:30 you can look at development as just an example of regeneration.
01:09:34 Yeah. To think, we’ll talk about regenerative medicine, but there’s some sense of, would we
01:09:39 be like that worm in like 500 years, where I can just go regrow a hand?
01:09:46 Yep. With given time, it takes time to grow large things.
01:09:49 For now.
01:09:50 Yeah, I think so. I think.
01:09:51 You can probably… Why not accelerate? Oh, biology takes its time?
01:09:56 I’m not going to say anything is impossible, but I don’t know of a way to accelerate these
01:10:00 processes. I think it’s possible. I think we are going to be regenerative, but I don’t know of a
01:10:04 way to make it faster.
01:10:04 I could just think people from a few centuries from now would be like, well, they used to have
01:10:10 to wait a week for the hand to regrow. It’s like when the microwave was invented. You can toast
01:10:17 your… What’s that called when you put a cheese on a toast? It’s delicious is all I know. I’m
01:10:27 blanking. Anywho. All right. So planaria, why were we talking about the magical planaria that
01:10:33 hold the mystery of life?
01:10:34 Yeah. So the reason we’re talking about planaria is not only are they immortal,
01:10:37 not only do they regenerate every part of the body, they generally don’t get cancer,
01:10:43 which we can talk about why that’s important. They’re smart. They can learn things. You can
01:10:47 train them. And it turns out that if you train a planarian and then cut their heads off, the tail
01:10:52 will regenerate a brand new brain that still remembers the original information.
01:10:56 Do they have a biological network going on or no?
01:10:58 Yes.
01:10:59 So their somatic cells are forming a network. And that’s what you mean by a true brain? What’s the
01:11:05 requirement for a true brain?
01:11:07 Like everything else, it’s a continuum, but a true brain has certain characteristics as far as the
01:11:12 density, like a localized density of neurons that guides behavior.
01:11:15 In the head.
01:11:16 Exactly. Exactly. If you cut their head off, the tail doesn’t do anything. It just sits there
01:11:22 until a new brain regenerates. They have all the same neurotransmitters that you and I have.
01:11:28 But here’s why we’re talking about them in this context. So here’s your planaria. You cut off the
01:11:32 head. You cut off the tail. You have a middle fragment. That middle fragment has to make one
01:11:35 head and one tail. How does it know how many of each to make? And where do they go? How come it
01:11:40 doesn’t switch? How come, right? So we did a very simple thing. And we said, okay, let’s make the
01:11:46 hypothesis that there’s a somatic electrical network that remembers the correct pattern,
01:11:52 and that what it’s doing is recalling that memory and building to that pattern.
01:11:55 So what we did was we used a way to visualize electrical activity in these cells, right? It’s a
01:12:01 variant of what people use to look for electricity in the brain. And we saw that that fragment has a
01:12:08 very particular electrical pattern. You can literally see it once we developed the technique.
01:12:12 It has a very particular electrical pattern that shows you where the head and the tail goes,
01:12:17 right? You can just see it. And then we said, okay, well now let’s test the idea that that’s
01:12:22 a memory that actually controls where the head and the tail goes. Let’s change that pattern. So
01:12:25 basically, incept the false memory. And so what you can do is you can do that in many different
01:12:29 ways. One way is with drugs that target ion channels. And so you pick these drugs
01:12:34 and you say, okay, I’m going to do it so that instead of this one head, one tail electrical
01:12:39 pattern, you have a two headed pattern, right? You’re just editing the electrical information
01:12:43 in the network. When you do that, guess what the cells build? They build a two headed worm.
01:12:47 And the coolest thing about it, no genetic changes. So we haven’t touched the genome.
01:12:51 The genome is totally wild type. But the amazing thing about it is that when you take these two
01:12:54 headed animals and you cut them into pieces again, some of those pieces will continue to
01:12:59 make two headed animals. So that information, that memory, that electrical circuit, not only does it
01:13:05 hold the information for how many heads, not only does it use that information to tell the cells
01:13:09 what to do to regenerate, but it stores it. Once you’ve reset it, it keeps it. And we can go back,
01:13:14 we can take a two headed animal and put it back to one headed. So now imagine, so there’s a couple
01:13:18 of interesting things here that have implications for understanding genomes
01:13:22 and things like that. Imagine I take this two headed animal. Oh, and by the way, when they
01:13:27 reproduce, when they tear themselves in half, you still get two headed animals. So imagine I take
01:13:31 them and I throw them in the Charles River over here. So 100 years later, some scientists come
01:13:34 along and they scoop up some samples and they go, oh, there’s a single headed form and a two headed
01:13:38 form. Wow, a speciation event. Cool. Let’s sequence the genome and see why, what happened. The genomes
01:13:43 are identical. There’s nothing wrong with the genome. So if you ask the question, how does,
01:13:47 so, so this goes back to your very first question is where do body plans come from, right? How does
01:13:51 the planarian know how many heads it’s supposed to have? Now it’s interesting because you could
01:13:55 say DNA, but as it turns out, the DNA produces a piece of hardware
01:14:01 that by default says one head the way that when you turn on a calculator, by default, it’s a zero
01:14:07 every single time, right? When you turn it on, it just says zero, but it’s a programmable calculator
01:14:11 as it turns out. So once you’ve changed that next time, it won’t say zero. It’ll say something else
01:14:16 and the same thing here. So you can make one-headed, two-headed, you can make
01:14:19 no-headed worms. We’ve done some other things along these lines, some other really weird constructs.
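Levin’s programmable-calculator analogy can be sketched as a toy object model. This is purely illustrative, with hypothetical names: the genome fixes a default target pattern, the running (electrical) state is rewritable, and it is that state, not the genome, which fragments inherit when they regenerate.

```python
# Hypothetical toy model of the "programmable calculator" point.

class Planarian:
    GENOME_DEFAULT = "one head, one tail"  # hardware default, like a calculator booting to 0

    def __init__(self, inherited_pattern=None):
        # A fresh build boots into the genome default; a fragment inherits
        # whatever pattern its parent's electrical network was holding.
        self.pattern = inherited_pattern or self.GENOME_DEFAULT

    def rewrite_pattern(self, new_pattern):
        # e.g. ion-channel drugs editing the stored electrical pattern;
        # the genome is untouched.
        self.pattern = new_pattern

    def cut_into_fragments(self, n):
        # Each fragment regenerates toward the *stored* pattern, even though
        # every fragment carries the identical wild-type genome.
        return [Planarian(self.pattern) for _ in range(n)]

worm = Planarian()
worm.rewrite_pattern("two heads")
fragments = worm.cut_into_fragments(3)
print([f.pattern for f in fragments])  # ['two heads', 'two heads', 'two heads']
```

Sequencing the “genome” here (the class definition) would show no difference between one-headed and two-headed lineages, which is exactly the imagined Charles River speciation puzzle.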
01:14:24 So again, this question is really important. The hardware-
01:14:28 software distinction is really important, because the hardware is essential; without proper
01:14:33 hardware, you’re never going to get to the right physiology of having that memory. But once you
01:14:38 have it, it doesn’t fully determine what the information is going to be. You can have other
01:14:42 information in there and it’s reprogrammable by us, by bacteria, by various parasites, probably
01:14:47 things like that. The other amazing thing about these planarias, think about this, most animals,
01:14:52 when we get a mutation in our bodies, our children don’t inherit it, right? You
01:14:56 could run around for 50, 60 years getting mutations. Your children don’t have those mutations
01:15:00 because we go through the egg stage. Planaria tear themselves in half and that’s how they reproduce.
01:15:05 So for 400 million years, they keep every mutation that they’ve had that doesn’t kill the cell that
01:15:10 it’s in. So when you look at these planaria, their bodies are what’s called mixoploid, meaning that
01:15:14 every cell might have a different number of chromosomes. They look like a tumor. If you look
01:15:17 at the genome, it’s an incredible mess, because they accumulate all this stuff.
01:15:22 And yet their body structure is, they are the best regenerators on the planet. Their anatomy is
01:15:28 rock solid, even though their genome is always full of all kinds of crap. So this is a kind of scandal,
01:15:32 right? You know, we learn that, well, genomes determine
01:15:37 your body. Okay, then why does the animal with the worst genome have the best anatomical control, the most
01:15:41 cancer resistance, the most regenerative ability, right? Really, we’re just beginning to understand
01:15:46 this relationship between the genomically determined hardware and… and by the way,
01:15:50 just as of a couple of months ago, I think I now somewhat understand why this is,
01:15:55 but it’s really a major puzzle.
01:15:57 I mean, that really throws a wrench into the whole nature versus nurture because you usually
01:16:05 associate electricity with the nurture and the hardware with the nature.
01:16:13 And it’s, there’s just this weird integrated mess that propagates through generations.
01:16:19 Yeah. It’s much more fluid. It’s much more complex. You can imagine what’s
01:16:25 happening here. Just imagine the evolution of an animal like this.
01:16:29 This goes back to this multi-scale competency, right? Imagine that you
01:16:33 have an animal where its tissues have some degree
01:16:38 of multi-scale competency. So for example, like we saw in the tadpole,
01:16:42 if you put an eye on its tail, they can still see out of that eye, right?
01:16:46 There’s incredible plasticity. So if you have an animal and it comes up for selection
01:16:50 and the fitness is quite good, evolution doesn’t know whether the fitness is good because the
01:17:01 genome was awesome or because the genome was kind of junky, but the competency made up for it,
01:17:01 right? And things kind of ended up good. So what that means is that the more competency you have,
01:17:06 the harder it is for selection to pick the best genomes. It hides information, right? And so that
01:17:11 means that what happens is, you know,
01:17:16 all the hard work is being done to increase the competency, because it’s harder and harder to see
01:17:21 the genomes. And so I think in planaria, what happened is that there’s this runaway phenomenon
01:17:25 where all the effort went into the algorithm, such that: we know you’ve got a crappy genome, we
01:17:31 can’t clean up the genome, we can’t keep track of it. So what’s going to happen is what
01:17:35 survives are the algorithms that can create a great worm no matter what the genome is. So
01:17:40 everything went into the algorithm and which, which of course then reduces the pressure on
01:17:44 keeping a clean genome. So there’s this idea, right, and different animals have
01:17:49 this to different levels, but this idea of putting energy into an algorithm that
01:17:54 does not overtrain on priors, right? It can’t assume. I mean, I think biology is this way in
01:17:59 general. Evolution doesn’t take the past too seriously, because it makes these basically
01:18:04 problem-solving machines, as opposed to machines that deal with
01:18:08 exactly what happened last time. Yeah. Problem solving versus memory recall. So a little memory,
01:18:14 but a lot of problem solving. I think so. Yeah. In many cases, yeah. Problem solving.
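The claim that competency hides genomic information from selection can be illustrated with a toy calculation. This is an assumption-laden sketch, not Levin’s model: if development repairs a fraction c of any genomic shortfall, the fitness spread that selection can act on shrinks by a factor of (1 - c) squared.

```python
import random

# Toy model: genome quality g in [0, 1]; developmental competency c repairs
# part of the shortfall, so phenotypic fitness f = g + c*(1 - g).
# Since f = (1-c)*g + c, var(f) = (1-c)^2 * var(g): the more competency,
# the less phenotypic variance reflects genomic differences, and the
# weaker the signal selection has for picking good genomes over bad ones.

random.seed(0)

def fitness_variance(c, n=2000):
    genomes = [random.random() for _ in range(n)]          # genome qualities
    fitness = [g + c * (1.0 - g) for g in genomes]         # competency buffers bad genomes
    mean = sum(fitness) / n
    return sum((f - mean) ** 2 for f in fitness) / n

for c in (0.0, 0.5, 0.9):
    print(f"competency {c}: selectable fitness variance {fitness_variance(c):.4f}")
```

At high competency the variance nearly vanishes, so selection can no longer reward a clean genome, which is the runaway dynamic proposed for planaria: the effort migrates into the algorithm rather than the genome.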
01:18:22 I mean, it’s incredible that those kinds of systems are able to be constructed,
01:18:25 um, especially how much they contrast with the way we build problem solving systems in the AI world.
01:18:32 Um, back to Xenobots. I’m not sure if we ever described how Xenobots are built, but
01:18:39 I mean, you have a paper titled biological robots perspectives on an emerging interdisciplinary
01:18:45 field. In the beginning, you mentioned that the word Xenobots is, like, controversial.
01:18:51 Do you guys get in trouble for using Xenobots or what? Do people not like the word Xenobots?
01:18:57 Are you trying to be provocative with the word Xenobots versus biological robots?
01:19:02 I don’t know. Is there some drama that we should be aware of? There’s a little bit of drama.
01:19:07 I think the drama is basically related to people having very fixed ideas about what
01:19:15 terms mean. And I think in many cases, these ideas are completely out of date with, with where science
01:19:22 is now. And for sure they’re out of date with what’s going to be. I mean, these
01:19:28 concepts are not going to survive the next couple of decades. So if you ask a person
01:19:33 and including, you know, a lot of people in biology who kind of want to keep a sharp
01:19:38 distinction between biologicals and robots, right? See, what’s a robot? Well, a robot
01:19:42 comes out of a factory. It’s made by humans. It is boring, meaning that you can predict
01:19:46 everything it’s going to do. It’s made of metal and certain other inorganic materials. Living
01:19:50 organisms are magical. They arise, right? And so on. These distinctions,
01:19:54 I think, were never good, but they’re going to be
01:20:00 completely useless going forward. And so part of, there’s a couple of papers that that’s one paper
01:20:05 and there’s another one that Josh Bongar and I wrote where we really attack the terminology.
01:20:09 And we say these binary categories are based on very, um, nonessential kind of surface, uh,
01:20:16 limitations of, of technology and imagination that were true before, but they’ve got to go. And so,
01:20:22 and so we call them Zenobots. So, so Xeno for Xenopus Levus, where this is, it’s the frog that,
01:20:27 that these guys are made of, but we think it’s an example of, of, of, uh, of a biobot technology,
01:20:32 because ultimately, once we understand how to, uh, communicate and manipulate,
01:20:39 um, the inputs to these cells, we will be able to get them to build whatever we want them to build.
01:20:45 And that’s robotics, right? It’s the rational construction of machines that have
01:20:49 useful purposes. I absolutely think that this is a robotics platform, whereas some biologists
01:20:54 don’t, but it’s built in a way that, uh, all the different components are doing their own computation.
01:21:02 So in a way that we’ve been talking about, so you’re trying to do top down control in that
01:21:06 biological system. And in the future, all of this will, will, will merge together because
01:21:09 of course at some point we’re going to throw in synthetic biology circuits, right? New, new, um,
01:21:13 you know, new transcriptional circuits to get them to do new things. Of course we’ll throw some of
01:21:17 that in, but we specifically stayed away from all of that because in the first few papers,
01:21:21 and there’s some more coming down the pike that are, I think going to be pretty, pretty dynamite,
01:21:25 um, that, uh, we want to show what the native cells are made of. Because what happens is,
01:21:30 you know, if you engineer the heck out of them, right, if we were to put in new, you know,
01:21:33 new transcription factors and some new metabolic machinery and whatever, people will say, well,
01:21:38 okay, you engineered this and you made it do whatever. And fine. I wanted to show,
01:21:44 and the whole team wanted to show, the plasticity and the intelligence in the biology.
01:21:50 What does it do that’s surprising before you even start manipulating the hardware in that way?
01:21:55 Yeah. Don’t try to, uh, over control the thing. Let it flourish. The, the full beauty of the
01:22:02 biological system. Why Xenopus laevis? How do you pronounce it? The frog.
01:22:07 Xenopus laevis. Yeah. It’s a very popular one.
01:22:09 Why this frog?
01:22:10 It’s been used since, uh, I think the fifties. Uh, it’s just very convenient because you can,
01:22:15 you know, we, we keep the adults in this, in this, uh, very fine frog habitat. They lay eggs. They
01:22:19 lay tens of thousands of eggs at a time. Um, the eggs develop right in front of your eyes. It’s the
01:22:24 most magical thing you can see because normally, you know, if you were to deal
01:22:29 with mice or rabbits or whatever, you don’t see the early stages, right? Cause everything’s inside
01:22:32 the mother. Everything’s in a Petri dish at room temperature. So you just, you, you have an egg,
01:22:36 it’s fertilized and you can just watch it divide and divide and divide. And all the organs
01:22:40 form, you can just see it. And at that point, um, the community has developed lots of
01:22:44 different tools for understanding what’s going on and also for manipulating, right? So
01:22:50 people use it for, um, you know, understanding birth defects and neurobiology
01:22:54 and cancer, immunology. So you get the whole, uh, embryogenesis in the Petri dish.
01:23:00 That’s so cool to watch. Are there videos of this? Oh yeah.
01:23:03 Yeah, there are amazing videos online. I mean, mammalian embryos are super cool
01:23:08 too. For example, monozygotic twins are what happens when you cut a mammalian embryo in half.
01:23:12 You don’t get two half bodies. You get two perfectly normal bodies because it’s a
01:23:15 regeneration event, right? Development is really just a kind of regeneration.
01:23:19 And why this particular frog? It’s just, uh, ’cause they were using it since the fifties, and...
01:23:25 It breeds well in, um, you know, in, in, it’s easy to raise in, in the laboratory and, uh,
01:23:32 it’s very prolific and all the tools basically for decades, people have been developing tools.
01:23:36 There’s other, some people use other frogs, but I have to say, this is important:
01:23:40 Xenobots are fundamentally not anything about frogs. So, um, I can’t say too much about this
01:23:46 cause it’s not published and peer reviewed yet, but we’ve made Xenobots out of other things that
01:23:50 have nothing to do with frogs. This is not a frog phenomenon. We started with
01:23:54 frog because it’s so convenient, but this plasticity is not a frog thing. You know, it’s
01:23:59 not related to the fact that they’re frogs. What happens when you kiss it? Does it turn
01:24:02 into a prince? No. Or a princess? Which way? Uh, prince. Yeah. Prince should be a prince.
01:24:07 Yeah. Uh, that’s an experiment that I don’t believe we’ve done. And if we haven’t, I’d
01:24:10 love to collaborate; I can take the lead, uh, on that effort. Okay, cool. Uh,
01:24:17 how does the cells coordinate? Let’s focus in on just the embryogenesis. So there’s one cell,
01:24:24 so it divides. Does it have to be very careful about what each cell starts doing once they divide?
01:24:32 Yes. And like, when there’s three of them, it’s like the cofounders or whatever,
01:24:37 like, well, like slow down, you’re responsible for this. When do they become specialized and
01:24:44 how do they coordinate that specialization? So, so this is the basic science of developmental
01:24:49 biology. There’s a lot known about all of that, but, um, but I’ll tell you what I think is kind
01:24:55 of the most important part, which is, yes, it’s very important who does what. However,
01:25:01 because going back to this issue of why I made this claim that, um, biology doesn’t take the past
01:25:07 too seriously. And what I mean by that is it doesn’t assume that everything is the way it’s,
01:25:12 it’s expected to be. Right. And here’s an example of that. Um, this was, this was done, this was,
01:25:17 this was an old experiment going back to the forties, but, um, basically imagine
01:25:21 a newt, a salamander, and it’s got these little tubules that go to the kidneys, right? It’s
01:25:25 a little tube. Take a cross section of that tube. You see eight to 10 cells that have
01:25:30 cooperated to make this little tube in cross section, right? So one amazing
01:25:34 thing you can do is, um, you can mess with a very early cell division to make the cells
01:25:41 gigantic, bigger. You can force them to be different
01:25:44 sizes. So if you make the cells different sizes, the whole newt is still the same size.
01:25:50 So if you take a cross section through that tubule, instead of eight to 10
01:25:53 cells, you might have four or five or you might have, you know, three until you make the cells so
01:25:59 enormous that one single cell wraps around itself and, and gives you that same large scale structure
01:26:06 with a completely different molecular mechanism. So now instead of cell to cell communication to
01:26:11 make a tubule, instead of that, it’s one cell using the cytoskeleton to bend itself around.
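The compensation just described can be put as a toy model (an editorial sketch in Python, not anything from the conversation and not a real biological model; all numbers and labels are made up): the large-scale spec, a tubule of a given circumference, stays fixed, while the cell count and the mechanism that satisfy it fall out of whatever cell size happens to be available.

```python
# Toy model of the newt tubule example: the large-scale goal (a tubule of
# a given circumference) is fixed; the number of cells and the mechanism
# used to meet it depend on cell size. All values here are hypothetical.

def build_tubule(target_circumference: float, cell_width: float):
    """Return (cell_count, mechanism) for one tubule cross-section."""
    n = max(1, round(target_circumference / cell_width))
    if n == 1:
        # A single giant cell wraps around itself via its cytoskeleton.
        mechanism = "cytoskeletal bending of a single cell"
    else:
        # Several cells cooperate through cell-to-cell communication.
        mechanism = "cell-to-cell cooperation"
    return n, mechanism

print(build_tubule(10.0, 1.2))   # normal cells: about 8 per cross-section
print(build_tubule(10.0, 2.5))   # enlarged cells: fewer cells, same tubule
print(build_tubule(10.0, 12.0))  # one enormous cell, different mechanism
```

The point is not the arithmetic but the shape of the logic: the goal is specified at the large scale, and a completely different low-level mechanism gets recruited when the usual one can no longer satisfy it.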
01:26:15 So think about what that means in the service of a large scale, talk about top down control,
01:26:20 right? In the service of a large scale anatomical feature, different molecular mechanisms get
01:26:24 called up. So now think about this: you’re a newt cell trying to make an embryo.
01:26:30 If you had a fixed idea of who was supposed to do what, you’d be screwed because now your cells
01:26:34 are gigantic. Nothing would work. There’s an incredible tolerance for changes in the size of
01:26:40 the parts and the amount of DNA in those parts. Um, all sorts of stuff you can do. Life
01:26:45 is highly interoperable. You can put electrodes in there and you can put weird nanomaterials. It
01:26:49 still works. This is that problem-solving action, right? It’s able to do what it
01:26:54 needs to do, even when circumstances change. That is, you know, the hallmark of intelligence,
01:27:00 right? William James defined intelligence as the ability to get to the same goal by different
01:27:04 means. That’s this, you get to the same goal by completely different means. And so, so,
01:27:08 so why am I bringing this up is just to say that, yeah, it’s important for the cells to do the right
01:27:12 stuff, but they have incredible tolerances for things not being what you expect and to still
01:27:17 get their job done. So if you’re, you know, um, all of these things are not hardwired.
01:27:23 There are organisms that might be hardwired. For example, the nematode C. elegans: in that organism,
01:27:28 every cell is numbered, meaning that every C. elegans has exactly the same number of cells
01:27:32 as every other C. elegans. They’re all in the same place. They all divide. There’s literally a map
01:27:36 of how it works. In that sort of system, it’s much more cookie-cutter,
01:27:40 but most organisms are incredibly plastic in that way. Is there something particularly
01:27:47 magical to you about the whole developmental biology process? Um, is there something you
01:27:53 could say, cause you just said it, they’re very good at accomplishing the goal of the job they
01:27:58 need to do, the competency thing. But you get a fricking organism from one cell. It’s like, uh,
01:28:06 I mean, it’s very hard, hard to intuit that whole process to even think about reverse engineering
01:28:14 that process. Right. Very hard to the point where I often just imagine, I, I sometimes ask my
01:28:19 students to do this thought experiment. Imagine you were, you were shrunk down to the, to the scale
01:28:23 of a single cell and you were in the middle of an embryo and you were looking around at what’s going
01:28:27 on, and the cells running around, some cells are dying, and, you know, every time you look,
01:28:30 it’s kind of a different number of cells for most organisms. And so I think that if you didn’t know
01:28:35 what embryonic development was, you would have no clue that what you’re seeing is always going to
01:28:40 make the same thing. Never mind knowing what that is. Never mind being able to say, even
01:28:44 with full genomic information, what the hell they are building. We have no way
01:28:48 to do that. But just even to guess that, wow, the outcome of all this activity is
01:28:54 always going to build the same thing. The imperative to create the final you
01:29:00 as you are now is there already. So you start from the same embryo,
01:29:06 you create a very similar organism. Yeah. Except for cases like the Xenobots, when you give them
01:29:14 a different environment, they come up with a different way to be adaptive in that environment.
01:29:18 But overall, I mean, I think, to, you know, kind of summarize it,
01:29:24 I think what evolution is really good at is creating hardware that has a very stable baseline
01:29:31 mode, meaning that left to its own devices, it’s very good at doing the same thing. But it has a
01:29:36 bunch of problem solving capacity such that if any, if any assumptions don’t hold, if your cells are
01:29:41 a weird size, or you get the wrong number of cells, or there’s, you know, somebody stuck
01:29:45 an electrode halfway through the body, whatever, it will still get most of what it needs to do done.
01:29:52 You’ve talked about the magic and the power of biology here. If we look at the human brain,
01:29:57 how special is the brain in this context? You’re kind of minimizing the importance of the brain
01:30:03 or lessening its, we think of all the special computation happens in the brain,
01:30:08 everything else is like the help. You’re kind of saying that the whole thing is the whole thing
01:30:14 is doing computation. But nevertheless, how special is the human brain in this full context of
01:30:22 biology? Yeah, I mean, look, there’s no getting away from the fact that the human brain allows
01:30:27 us to do things that we could not do without it. You can say the same thing about the liver.
01:30:31 Yeah, no, this is true. And so, you know, my goal is not... No, you’re right. My goal
01:30:37 is just being polite to the brain right now. Well, being a politician, like, listen,
01:30:42 everybody has a role. Yeah, it’s a very important role. That’s right. We have to
01:30:46 acknowledge the importance of the brain, you know, there are more than enough people who are
01:30:52 cheerleading the brain, right? So I don’t feel like anything I say is going to reduce people’s
01:30:58 excitement about the human brain. And so I emphasize other things. Credit? I don’t think it
01:31:04 gets too much credit. I think other things don’t get enough credit. I think the human
01:31:08 brain is incredible and special and all that. I think other things need more credit. And I
01:31:13 also think that this, and I’m sort of this way about everything, I don’t like binary categories
01:31:19 about almost anything; I like a continuum. And the thing about the human brain is that by
01:31:24 accepting it as some kind of an important category or essential thing, we end
01:31:32 up with all kinds of weird pseudo-problems and conundrums. So for example, when we talk about,
01:31:38 you know, if you want to talk about ethics and other things like that, there’s, you
01:31:44 know, this idea that surely, if we look out into the universe, surely we don’t believe that
01:31:50 this human brain is the only way to be sentient, right? Surely we don’t, you know, and to have high
01:31:54 level cognition. I just can’t even wrap my mind around this, this idea that that is the only way
01:31:59 to do it. No doubt there are other architectures made of completely different principles
01:32:04 that achieve the same thing. And once we believe that, then that tells us something important. It
01:32:09 tells us that things that are not quite human brains or chimeras of human brains and other
01:32:15 tissue or human brains or other kinds of brains and novel configurations or things that are sort
01:32:20 of brains, but not really, or plants or embryos or whatever, might also have important cognitive
01:32:26 status. So that’s the only thing I think we have to be really careful about treating the human
01:32:32 brain as if it was some kind of like sharp binary category. You know, you are or you aren’t. I don’t
01:32:37 believe that exists. So when we look out at all the beautiful variety of human brains,
01:32:44 semi-biological architectures out there in the universe, how many intelligent alien civilizations
01:32:52 do you think are out there? Boy, I have no expertise in that whatsoever. You haven’t met
01:32:59 any? I have met the ones we’ve made. I think that I mean, exactly. In some sense with synthetic
01:33:06 biology, are you not creating aliens? I absolutely think so because look, all of life,
01:33:12 all standard model systems are an N of 1 course of evolution on Earth, right? And trying
01:33:19 to make conclusions about biology from looking at life on Earth is like testing your theory on the
01:33:26 same data that generated it. It’s all it’s all kind of like locked in. So we absolutely have to
01:33:32 create novel examples that have no history on Earth that don’t, you know, xenobots have no
01:33:40 history of selection to be a good xenobot. The cells have selection for various things, but the
01:33:44 xenobot itself never existed before. And so we can make chimeras, you know, we make frogolotls
01:33:48 that are sort of half frog, half axolotl. You can make all sorts of hybrots, right, constructions
01:33:53 of living tissue with robots and whatever. We need to be making these things until we find actual
01:33:58 aliens, because otherwise we’re just looking at an N of 1 set of examples, all kinds of frozen
01:34:03 accidents of evolution and so on. We need to go beyond that to really understand biology. But
01:34:08 we’re still, even when you do synthetic biology, locked into the basic components of the
01:34:17 way biology is done on this Earth. Yeah, right. And also the basic constraints
01:34:23 of the environment; even artificial environments we construct in the lab are tied up to the
01:34:27 environment. I mean, what do you... okay, let’s say, I mean, what I think is there’s
01:34:34 a nearly infinite number of intelligent civilizations living or dead out there.
01:34:41 If you pick one out of the box, what do you think it would look like? So when you think about
01:34:50 synthetic biology, or creating synthetic organisms, how hard is it to create something that’s very
01:34:58 different? Yeah, I think it’s very hard to create something that’s very different, right? We
01:35:06 are just locked in, both experimentally and in terms of our imagination, right? It’s very
01:35:12 hard. And you also emphasized several times the idea of shape. Yeah. The individual cells get
01:35:18 together with other cells and they’re going to build a shape. So it’s shape and function,
01:35:23 but shape is a critical thing. Yeah. So here, I’ll take a stab. I mean, I agree with you. I do,
01:35:29 to whatever extent that we can say anything, I do think that there’s, you know, probably an
01:35:33 infinite number of different architectures with interesting
01:35:38 cognitive properties out there. What can we say about them? I think the only things that are
01:35:45 going... I don’t think we can rely on any of the typical stuff, you know, carbon-based, none of
01:35:50 that. Like, I think all of that is just, you know, us having a lack of imagination. But
01:35:56 I think the things that are going to be universal, if anything is, are things, for example, driven by
01:36:03 resource limitation, the fact that you are fighting a hostile world, and you have to draw a
01:36:09 boundary between yourself and the world somewhere, the fact that that boundary is not given to you
01:36:13 by anybody, you have to assume it, you know, estimate it yourself. And the fact that
01:36:18 you have to coarse-grain your experience, and the fact that you’re going to try to minimize surprise,
01:36:22 and the fact that like these, these are the things that I think are fundamental about biology,
01:36:25 none of the, you know, the facts about the genetic code, or even the fact that we have genes or the
01:36:30 biochemistry of it, I don’t think any of those things are fundamental. But it’s going to be a
01:36:34 lot more about the information and about the creation of the self. So in
01:36:38 my framework, selves are demarcated by the scale of the goals that they can pursue. So from little
01:36:44 tiny local goals to like massive, you know, planetary scale goals for certain humans,
01:36:49 and everything in between. So you can draw this, like, cognitive light cone
01:36:53 that determines the scale of the goals you could possibly pursue. I think those kinds of
01:36:58 frameworks, like active inference and so on, are going to be universally applicable,
01:37:04 but none of the other things that are typically discussed. Quick pause,
01:37:08 do you need a bathroom break? We were just talking about, you know, aliens and all that. That’s a
01:37:16 funny thing, which is, I don’t know if you’ve seen them, there’s a kind of debate that goes on about
01:37:20 cognition in plants, and what can you say about different kinds of computation and cognition in
01:37:24 plants. And I always look at that as something like: if you’re weirded out by cognition
01:37:28 in plants, you’re not ready for exobiology, right? If, you know, something that’s that similar
01:37:34 here on Earth is already like freaking you out, then I think there’s going to be all kinds of
01:37:38 cognitive life out there that we’re gonna have a really hard time recognizing. I think robots will
01:37:44 help us, yeah, like expand our minds about cognition. Either that, or the word, like, xenobots.
01:37:54 And maybe they become the same thing. It’s, you know, really, when the human engineers
01:38:01 the thing, at least in part, and then is able to achieve some kind of cognition that’s different
01:38:08 than what you’re used to, then you start to understand like, oh, you know, every living
01:38:14 organism is capable of cognition. Oh, I need to kind of broaden my understanding what cognition
01:38:19 is. But do you think plants, like when you when you eat them, are they screaming? I don’t know
01:38:25 about screaming. I think you have to, let’s see, what do I think when I eat a salad? Yeah, good. Yeah,
01:38:30 I think you have to scale down the expectations in terms of right, so probably they’re not
01:38:34 screaming in the way that we would be screaming. However, there’s plenty of data on plants being
01:38:39 able to do anticipation and certain kinds of memory and so on. I think, you know, what you
01:38:46 just said about robots, I hope you’re right. But there are two ways
01:38:51 that people can take that, right? So one way is exactly what you just said, to try to kind of
01:38:54 expand their notions of that category. The other way people often go is
01:39:02 they just sort of define the term so that if it’s not a natural product, it’s just faking,
01:39:08 right? It’s not really intelligence if it was made by somebody else, because it’s that same
01:39:11 thing: they can see how it’s done. It’s like a magic trick:
01:39:16 when you see how it’s done, it’s not as fun anymore. And I think people have a real
01:39:21 tendency for that, which I find really strange in the sense that if somebody
01:39:25 said to me, we have this sort of blind, like, hill-climbing search,
01:39:32 and then we have a really smart team of engineers, which one do you think is going to
01:39:36 produce a system that has good intelligence? I think it’s really weird to say that it only
01:39:41 comes from the blind search, right? It can’t be done by people who, by the way, can also use
01:39:45 evolutionary techniques if they want to, but also rational design. I think it’s really weird to say
01:39:49 that real intelligence only comes from natural evolution. So I hope you’re right. I hope people
01:39:55 take it the other way. But there’s a nice shortcut. So I work with legged robots a lot now,
01:40:01 for my own personal pleasure. Not in that way, internet. So, four legs. And one of the things
01:40:13 that changes my experience with the robots a lot is when I can’t understand why it did a certain
01:40:21 thing. And there’s a lot of ways to engineer that. Me, the person that created the software that runs
01:40:27 it. There’s a lot of ways for me to build that software in such a way that I don’t exactly know
01:40:33 why it made a certain basic decision. Of course, as an engineer, you can go in and start to look at
01:40:40 logs. You can log all kinds of data, sensory data, the decisions it made, you know, all the outputs
01:40:45 of your networks and so on. But I also try to really experience that surprise, to really
01:40:52 experience it as another person would who totally doesn’t know how it’s built. And I think the magic
01:40:57 is there in not knowing how it works. I think biology does that for you through the layers of
01:41:06 abstraction. Yeah, because nobody really knows what’s going on inside the biological system. Like each
01:41:14 one component is clueless about the big picture. I think there’s actually really cheap systems that
01:41:20 can illustrate that kind of thing, which is even like, you know, fractals, right? Like,
01:41:27 you have a very small, short formula in Z, and you see it and there’s no magic, you’re just going to
01:41:32 crank through, you know, Z squared plus C, whatever, you’re just going to crank through it. But the
01:41:36 result of it is this incredibly rich, beautiful image, right? That’s just like, wow, all of
01:41:43 that was in this, like, 10-character-long string. Like, amazing. So the fact that you can
01:41:49 know everything there is to know about the details and the process and all the parts, and, like,
01:41:54 there’s literally no magic of any kind there, and yet the outcome is something that you would never
01:42:01 have expected, and it just, you know, is incredibly rich and complex and beautiful. So
01:42:07 there’s a lot of that. You write that you work on developing conceptual frameworks for understanding
01:42:13 unconventional cognition. So the kind of thing we’ve been talking about, I just like the term
01:42:17 unconventional cognition. And you want to figure out how to detect, study and communicate with
01:42:23 the thing. You’ve already mentioned a few examples, but what is unconventional cognition? Is it as
01:42:29 simply as everything else outside of what we define usually as cognition, cognitive science,
01:42:34 the stuff going on between our ears? Or is there some deeper way to get at the fundamentals of
01:42:41 what is cognition? Yeah, I think, like... I’m certainly not the only person who works in
01:42:47 unconventional cognition. So that’s the term used? Yeah. So, I’ve
01:42:53 coined a number of weird terms, but that’s not one of mine. That’s an existing thing.
01:42:56 So, for example, somebody like Andy Adamatzky, who, I don’t know if you’ve had him on,
01:43:00 if you haven’t, you should. He’s a, you know, very interesting guy. He’s a computer
01:43:05 scientist, and he does unconventional cognition in slime molds, all kinds of weird stuff. He’s a real
01:43:10 weird cat, really interesting. Anyway, so, you know, there’s a bunch of terms that
01:43:15 I’ve come up with, but that’s not one of mine. So I think, like many terms, that one is really
01:43:21 defined by the times, meaning that things that are unconventional cognition
01:43:26 today are not going to be considered unconventional cognition at some point. It’s one of
01:43:31 those things. And so, you know, it’s this really deep
01:43:37 question of how do you recognize, communicate with, classify cognition, when you cannot rely
01:43:46 on the typical milestones, right? So, you know, again, if you stick with the
01:43:52 history of life on Earth, like these exact model systems, you would say, ah, here’s a particular
01:43:56 structure of the brain, and this one has fewer of those, and this one has a bigger frontal cortex,
01:44:00 and this one, right? So these are landmarks that we’re used to
01:44:04 and that allow us to make very kind of rapid judgments about things. But if you can’t rely on
01:44:10 that, either because you’re looking at a synthetic thing, or an engineered thing, or an alien thing,
01:44:16 then what do you do, right? And so that’s what I’m really interested in. I’m
01:44:19 interested in mind in all of its possible implementations, not just the obvious ones
01:44:25 that we know from from looking at brains here on Earth. Whenever I think about something like
01:44:31 unconventional cognition, I think about cellular automata, I’m just captivated by the beauty of the
01:44:36 thing. The fact that from simple little objects, you can create some such beautiful complexity
01:44:46 that very quickly, you forget about the individual objects, and you see the things that it creates
01:44:53 as its own organisms. That blows my mind every time. Like, honestly, I could full time just
01:45:01 eat mushrooms and watch cellular automata. Don’t even have to do mushrooms.
01:45:06 Just cellular automata. It feels like, I mean, from the engineering perspective, I love
01:45:13 when a very simple system captures something really powerful, because then you can study
01:45:18 that system to understand something fundamental about complexity about life on Earth.
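That simple-rules-to-rich-behavior point can be seen in a few lines of code (an editorial sketch in Python, not anything from the conversation; it assumes only the standard elementary cellular automaton convention, where Rule 110 is a classic example known to be Turing-complete): a one-line update rule, and the global pattern that falls out of it is endlessly intricate.

```python
# Elementary cellular automaton, Rule 110: each cell's next state depends
# only on itself and its two neighbors, yet the global pattern that
# emerges is complex (Rule 110 is even Turing-complete).

RULE = 110  # 8-bit lookup table: bit b is the next state for neighborhood b

def step(cells):
    """Apply the rule once to a row of 0/1 cells, wrapping at the edges."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=64, steps=30):
    row = [0] * width
    row[width // 2] = 1  # start from a single live cell
    for _ in range(steps):
        print("".join("#" if c else "." for c in row))
        row = step(row)

run()
```

Watching the printed rows scroll by is a small version of the experience described above: you can hold the entire rule in your head, and the outcome still surprises you.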
01:45:24 Anyway, how do I communicate with a thing? If cellular automata can do cognition, if a plant
01:45:32 can do cognition, if a xenobot can do cognition, how do I like whisper in its ear and get an
01:45:40 answer back to how do I have a conversation? How do I have a xenobot on a podcast?
01:45:46 It’s a really interesting line of investigation that opens up. I mean, we’ve thought about this.
01:45:53 So you need a few things. You need to understand the space in which they live. So not just the
01:46:00 physical modality, like can they see light, can they feel vibration? I mean, that’s important,
01:46:03 of course, because that’s how you deliver your message. But the idea is, for a communication
01:46:08 medium, not just the physical medium, but saliency, right? So what’s important to this
01:46:16 system? And systems have all kinds of different levels of sophistication of what you could expect
01:46:22 to get back. And I think what’s really important, I call this the spectrum of persuadability,
01:46:28 which is this idea that when you’re looking at a system, you can’t assume where on the spectrum
01:46:33 it is. You have to do experiments. And so for example, if you look at a gene regulatory network,
01:46:41 which is just a bunch of nodes that turn each other on and off at various rates, you might
01:46:45 look at that and you say, well, there’s no magic here. I mean, clearly this thing is as deterministic
01:46:50 as it gets. It’s a piece of hardware. The only way we’re going to be able to control it is by
01:46:54 rewiring it, which is the way molecular biology works, right? We can add nodes, remove nodes,
01:46:57 whatever. Well, we’ve done simulations, and now we’re doing this in the
01:47:03 lab, showing that biological networks like that have associative memory. So they can actually learn,
01:47:08 they can learn from experience. They have habituation, they have sensitization, they
01:47:12 have associative memory, which you wouldn’t have known if you assumed that they have to be on the
01:47:15 left side of that spectrum. So when you’re going to communicate with something, and we’ve even,
01:47:21 Charles Abramson and I have written a paper on behaviorist approaches to synthetic organisms,
01:47:26 meaning that if you’re given something, you have no idea what it is or what it can do,
01:47:29 how do you figure out what its psychology is, what its level is, what does it, and so we literally
01:47:34 lay out a set of protocols, starting with the simplest things and then moving up to more complex
01:47:38 things where you can make no assumptions about what this thing can do, right? You have to start
01:47:42 and you’ll find out. So here’s a simple, I mean, here’s one way to
01:47:47 communicate with something. If you can train it, that’s a way of communicating. So if you can
01:47:51 provide, if you can figure out what the currency of reward of positive and negative reinforcement is,
01:47:56 right, and you can get it to do something it wasn’t doing before based on experiences you’ve
01:48:01 given, you have taught it one thing. You have communicated one thing, that such and such an
01:48:06 action is good, some other action is not good. That’s like a basic, primitive atom
01:48:11 of communication. What about, in some sense, if it gets you to do something you haven’t done before,
01:48:19 is it answering back? Yeah, most certainly. I’ve seen a cartoon, I think maybe by Gary
01:48:24 Larson or somebody, of these rats in a maze, and one rat
01:48:29 says to the other, “You see this? Every time I walk over here, he starts scribbling
01:48:32 on that clipboard he has.” It’s awesome. If we step outside ourselves
01:48:38 and really measure how much I’ve changed because of my
01:48:46 interaction with certain cellular automata, I mean, you really have to take into
01:48:52 consideration that these things are changing you too. Yes. I know how it
01:48:58 works and so on, but you’re being changed by the thing. Yeah, absolutely. I think I read,
01:49:04 I don’t know any details, but I think I read something about how wheat and other crops
01:49:08 have, in a sense, domesticated humans: their properties changed
01:49:13 human behavior and societal structures. In that sense, cats are running the world
01:49:20 because they took over. First off, while not giving a shit about humans,
01:49:27 clearly with every ounce of their being, they’ve somehow got millions and millions of humans
01:49:35 to take them home and feed them. And they took over not only physical space,
01:49:43 they took over the digital space. They dominate the internet in terms of cuteness, in terms of
01:49:48 memeability. And so they got themselves literally inside the memes; they’ve
01:49:55 become viral and spread on the internet. And they’re the ones that are probably controlling
01:50:01 humans. That’s my theory. That’s a follow-up paper after the frog kissing one. Okay.
01:50:06 I mean, you mentioned sentience and consciousness. You have a paper titled Generalizing Frameworks
01:50:18 for Sentience Beyond Natural Species. So beyond normal cognition, if we look at sentience and
01:50:30 consciousness, and I wonder if you draw an interesting distinction between those two
01:50:34 elsewhere, outside of humans, and maybe outside of Earth. Do you think aliens have sentience? And
01:50:45 if they do, how do we think about it? So when you have this framework, what is this paper? What is
01:50:50 the way you propose to think about sentience? Yeah, that particular paper was a very short
01:50:57 commentary on another paper that was written about crabs. It was a really good paper,
01:51:01 laying out a rubric of different types of behaviors that could be applied to
01:51:07 different creatures, and they tried to apply it to crabs and so on. Consciousness,
01:51:13 we can talk about if you want, but it’s a whole separate kettle of fish. I almost never talk about
01:51:18 crabs. In this case, yes. I almost never talk about consciousness, per se. I’ve said very,
01:51:24 very little about it, but we can talk about it if you want. Mostly what I talk about is cognition,
01:51:29 because I think that that’s much easier to deal with in a kind of rigorous experimental way.
01:51:36 I think that all of these terms have, you know, sentience and so on, have different definitions,
01:51:45 and fundamentally I think that, as long as they specify what they mean ahead of time,
01:51:53 people can define them in various ways. The only thing
01:51:58 that I really insist on is that the right way to think about all this stuff is
01:52:06 from an engineering perspective. Does it help me control and predict, and does it help
01:52:12 me do my next experiment? That’s not a universal perspective. Some people have philosophical
01:52:20 kind of underpinnings, and those are primary, and if anything runs against that, then it must
01:52:25 automatically be wrong. Some people will say, I don’t care about anything else: if your theory
01:52:31 says to me that thermostats have little tiny goals, then I’m out. That’s my philosophical
01:52:38 preconception: thermostats do not have goals, and that’s it. That’s one way of doing it,
01:52:43 and some people do it that way. I do not do it that way, and
01:52:47 I don’t think we can know much of anything from a philosophical armchair. I think that
01:52:51 all of these theories and ways of doing things stand or fall based on just basically one set
01:52:57 of criteria. Does it help you run a rich research program? That’s it.
01:53:01 I agree with you totally, but forget philosophy. What about the poetry of ambiguity? What about
01:53:08 at the limits of the things you can engineer using terms that can be defined in multiple ways
01:53:14 and living within that uncertainty in order to play with words until something lands that you
01:53:22 can engineer? I mean, that’s to me where consciousness sits currently. Nobody really
01:53:27 understands the hard problem of consciousness, the subjective experience, what it feels like,
01:53:33 because it really does feel like something to be this biological system. This conglomerate of a bunch
01:53:39 of cells in this hierarchy of competencies feels like something, and yeah, I feel like one thing,
01:53:45 and is that just a side effect of a complex system, or is there something more that humans have,
01:53:58 or is there something more that any biological system has? Some kind of magic, some kind of,
01:54:03 not just a sense of agency, but a real sense with a capital letter S of agency.
01:54:10 Yeah.
01:54:12 Ah, boy, yeah, that’s a deep question.
01:54:13 Is there room for poetry in engineering or no?
01:54:16 No, there definitely is, and a lot of the poetry comes in when we realize that none of the
01:54:22 categories we deal with are as sharp as we think they are, right? And so the different areas of all
01:54:29 these spectra are where a lot of the poetry sits. I have many new theories about things,
01:54:34 but I, in fact, do not have a good theory about consciousness that I plan to trot out.
01:54:38 And you almost don’t see it as useful for your current work to think about consciousness?
01:54:42 I think it will come. I have some thoughts about it, but I don’t feel like they’re going to move
01:54:46 the needle yet on that.
01:54:47 And you want to ground it in engineering always.
01:54:50 Well, I mean, if we really tackle consciousness per se, in terms of the
01:54:58 hard problem, that isn’t necessarily going to be groundable in engineering, right? The
01:55:04 cognition aspect is, but actual consciousness per se, the first-person perspective, I’m not sure
01:55:10 that that’s groundable in engineering. And I think specifically what’s different about it is
01:55:16 there’s a couple of things. So let’s, you know, here we go. I’ll say a couple of things about
01:55:20 consciousness. One thing that makes it different is that for every other
01:55:28 aspect of science, when we think about having a correct or a good theory of it,
01:55:35 we have some idea of what format that theory makes predictions in. So whether those be numbers
01:55:41 or whatever, we have some idea. We may not know the answer, we may not have the theory,
01:55:45 but we know that when we get the theory, here’s what it’s going to output, and then we’ll know
01:55:49 if it’s right or wrong. For actual consciousness, not behavior, not neural correlates, but actual
01:55:54 first person consciousness. If we had a correct theory of consciousness, or even a good one,
01:55:59 what the hell, what format would it make predictions in, right? Because all the things
01:56:05 that we know about basically boil down to observable behaviors. So the only thing I can
01:56:10 think of when I think about that is, it’ll be poetry. If I ask you,
01:56:19 okay, you’ve got a great theory of consciousness, and here’s this creature, maybe it’s a natural one,
01:56:23 maybe it’s an engineered one, whatever. And I want you to tell me what your theory says about this
01:56:30 being, what it’s like to be this being. The only thing I can imagine you giving me is some piece
01:56:36 of art, a poem or something, such that once I’ve taken it in, I now have a similar state to
01:56:45 that being’s. That’s about as good as I can come up with. Well, it’s possible that once you have a
01:56:51 good understanding of consciousness, it would be mapped to some things that are more measurable.
01:56:56 So for example, it’s possible that a conscious being is one that’s able to suffer. So you start
01:57:07 to look at pain and suffering. You can start to connect it closer to things that you can measure
01:57:16 that, in terms of how they reflect themselves in behavior and problem solving and creation and
01:57:25 attainment of goals, for example. Suffering, you know, “life is suffering,”
01:57:31 is one of the big aspects of the human condition. And so if consciousness is somehow,
01:57:40 maybe, at least a catalyst for suffering, you could start to get echoes of it. You start to see
01:57:48 the actual effects of consciousness in behavior. That it’s not just about subjective
01:57:52 experience. It’s like it’s really deeply integrated in the problem solving decision making of a
01:57:59 system, something like this. But also it’s possible that we realize, this is not a philosophical
01:58:06 statement. Philosophers can write their books. I welcome it. You know, I take the Turing test
01:58:13 really seriously. I don’t know why people really don’t like it. When a robot convinces you that
01:58:20 it’s intelligent, I think that’s a really incredible accomplishment. And there’s some deep
01:58:26 sense in which that is intelligence. If it looks like it’s intelligent, it is intelligent. And I
01:58:32 think there’s some deep aspect of a system that appears to be conscious. In some deep sense,
01:58:43 it is conscious. At least for me, we have to consider that possibility. And a system that
01:58:51 appears to be conscious is an engineering challenge. Yeah, I don’t disagree with any of
01:58:58 that. I mean, especially intelligence, I think, is a publicly observable thing. Science fiction
01:59:06 has dealt with this for a century or much more, maybe. This idea that when you are confronted with
01:59:12 something that just doesn’t meet any of your typical assumptions, so you can’t look in the
01:59:17 skull and say, oh, well, there’s that frontal cortex, so then I guess we’re good. So this thing
01:59:23 lands on your front lawn, and the little door opens, and something trundles out, and it’s shiny
01:59:30 and aluminum looking, and it hands you this poem that it wrote while it was flying over,
01:59:35 and how happy it is to meet you. What’s going to be your criteria for whether you get to take it
01:59:40 apart and see what makes it tick, or whether you have to be nice to it and whatever? All the
01:59:46 criteria that we have now and that people are using, and as you said, a lot of people are
01:59:51 down on the Turing test and things like this, but what else have we got? Because measuring
01:59:55 the cortex size isn’t going to cut it in the broader scheme of things. So I think it’s a
02:00:03 wide open problem. Our solution to the problem of other minds is very simplistic: we give each
02:00:11 other credit for having minds just because, on an anatomical level, we’re pretty
02:00:15 similar, and so it’s good enough. But how far is that going to go? So I think that’s really primitive.
02:00:21 So yeah, I think it’s a major unsolved problem. What you talked about, embodied minds, is a really
02:00:28 challenging direction of thought for the human race. If you start to think that other
02:00:36 things besides humans have minds, that’s really challenging. Because “all men are created equal”
02:00:43 starts being like, all right, well, we should probably treat not just cows with respect,
02:00:52 but like plants, and not just plants, but some kind of organized conglomerates of cells
02:01:02 in a petri dish. In fact, some of the work we’re doing, like you’re doing and the whole community
02:01:08 of science is doing with biology, people might be like, we were really mean to viruses.
02:01:13 Yeah. I mean, yeah, the thing is, you’re right. And I certainly get phone calls from people
02:01:20 complaining about frog skin and so on. But I think we have to separate the sort of deep
02:01:26 philosophical aspects versus what actually happens. So what actually happens on Earth
02:01:30 is that people with exactly the same anatomical structure kill each other on a daily basis.
02:01:37 So I think it’s clear that simply knowing that something else is equally, or maybe more,
02:01:44 cognitive or conscious than you are is not a guarantee of kind behavior; that much we know.
02:01:51 And so then we look at a commercial farming of mammals and various other things. And so I think
02:01:56 on a practical basis, long before we get to worrying about things like frog skin,
02:02:03 we have to ask ourselves what we can do about the way that we’ve been behaving
02:02:08 towards creatures which we know for a fact, because of our similarities, are basically just
02:02:13 like us. That’s kind of a whole other social thing. But fundamentally, of course, you’re
02:02:18 absolutely right. And think about this: we are, on this planet, in some way
02:02:24 incredibly lucky. It’s just dumb luck that we really only have one dominant animal.
02:02:31 We only have one dominant species. It didn’t have to work out that way. So you could easily
02:02:37 imagine that there could be a planet somewhere with more than one equally or maybe near equally
02:02:43 intelligent species. But they may not look anything like each other. So there may be
02:02:49 multiple ecosystems where there are things of similar to human like intelligence. And then
02:02:54 you’d have all kinds of issues about how you relate to them when they’re not physically
02:02:59 like you at all, but in terms of behavior and culture and whatever, it’s pretty obvious
02:03:04 that they’ve got as much on the ball as you have. Or maybe imagine that there was another
02:03:10 group of beings that was on average 40 IQ points lower. We’re pretty lucky in many ways,
02:03:18 even though we still act badly in many ways. But the fact is, all humans are more
02:03:24 or less in that same range, but it didn’t have to work out that way. Well, but I think that’s part
02:03:30 of the way life works on Earth, maybe how human civilization works: it seems like we want
02:03:38 ourselves to be quite similar. And then within that, where everybody’s about the same
02:03:45 in relative IQ, intelligence, problem-solving capabilities, even physical characteristics.
02:03:49 But then we’ll find some aspect of that that’s different. And that seems to be,
02:03:58 I mean, it’s really dark to say, but that seems to be not even a bug but a feature
02:04:07 of the early development of human civilization. You pick the other, your tribe versus the other
02:04:14 tribe and you war. It’s a kind of evolution in the space of memes, a space of ideas, I think,
02:04:22 and you war with each other. So we’re very good at finding the other, even when the characteristics
02:04:28 are really the same. I’m sure so many of these things
02:04:35 echo in the biological world in some way. Yeah. There’s a fun experiment that I did. My son
02:04:41 actually came up with this when we did a biology unit together; he’s homeschooled. So we did
02:04:46 this a couple of years ago. We did this thing where, imagine you get this slime mold, right?
02:04:50 Physarum polycephalum, and it grows on a Petri dish of agar and sort of spreads out. It’s a
02:04:57 single-cell protist, but it’s this giant thing. And so you put down a piece of oat and
02:05:02 it wants to go get the oat and it sort of grows towards the oat. So what you do is you take a
02:05:05 razor blade and you just separate the piece of the culture that’s growing towards the
02:05:10 oat. And so now think about the interesting decision-making
02:05:15 calculus for that little piece. I can go get the oat and therefore I won’t have to share those
02:05:20 nutrients with this giant mass over there. So the nutrients per unit volume is going to be amazing.
02:05:25 I should go eat the oat. But Physarum, once you cut it, has the ability
02:05:30 to join back up, and if I first rejoin, then that whole calculus becomes impossible because there
02:05:36 is no more “me” anymore. There’s just “we,” and then we will go eat this thing, right? So this
02:05:40 interesting, you can imagine a kind of game theory where the number of agents isn’t fixed
02:05:46 and that it’s not just cooperate or defect, but it’s actually merge and whatever, right?
02:05:50 Yeah. So that computation, how does it do that decision making?
02:05:54 Yeah. So it’s really interesting. Empirically, what we found is that it tends
02:06:00 to merge first, and then the whole thing goes. But it’s really interesting,
02:06:04 that calculus. I mean, I’m not an expert in economic game theory and all that,
02:06:09 but maybe there’s some sort of hyperbolic discounting or something. But maybe this idea
02:06:14 that the actions you take not only change your payoff, but they change who or what you are,
02:06:22 and that you could take an action after which you don’t exist anymore, or you are radically
02:06:27 changed, or you are merged with somebody else. As far as I know, that’s a whole different
02:06:33 thing. As far as I know, we’re still missing a formalism for even knowing how to model
02:06:38 any of that.
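The missing formalism can at least be gestured at with a toy model. The sketch below is purely illustrative: the oat payoff, the volumes, the delays, and the hyperbolic discount rate k are invented numbers, not measurements from any Physarum experiment. Its only point is that the "merge" option assigns its payoff to a collective in which the deciding agent no longer exists:

```python
# Toy model of the slime-mold fragment's choice: eat the oat alone ("defect"),
# or rejoin the larger mass first ("merge"). All numbers are invented.

def discounted(reward, delay, k=1.0):
    """Hyperbolic discounting: value = reward / (1 + k * delay)."""
    return reward / (1 + k * delay)

def defect_value(oat=10.0, fragment_volume=1.0, delay=1.0):
    # Stay separate and eat the oat alone: high nutrients per unit
    # volume, available soon (small delay).
    return discounted(oat / fragment_volume, delay)

def merge_value(oat=10.0, total_volume=20.0, delay=3.0):
    # Rejoin first: the reward is diluted across the whole merged body,
    # arrives later, and there is no longer a separate "me" to collect it.
    return discounted(oat / total_volume, delay)

print("defect:", defect_value())
print("merge:", merge_value())
```

Standard cooperate/defect games have no slot for an action that deletes the agent; here that shows up as `merge_value` being a payoff "for" something that is no longer a separate player, which is exactly where the formalism runs out.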
02:06:39 Do you see evolution, by the way, as a process that applies here on Earth? Where did evolution
02:06:45 come from?
02:06:46 Yeah.
02:06:47 So this thing from the very origin of life that took us to today, what the heck is that?
02:06:54 I think evolution is inevitable, in the sense that, I think,
02:07:00 one of the most useful things that was done in early computing, I guess in the 60s,
02:07:05 starting with evolutionary computation, was just showing how simple it is: if you have
02:07:13 imperfect heredity and competition together, three things really, heredity,
02:07:19 imperfect heredity, and competition, or selection, and that’s it. Now you’re
02:07:25 off to the races. And so that can be, it’s not just on Earth because it can be done in
02:07:29 the computer, it can be done in chemical systems, it can be done in, you know, Lee Smolin says
02:07:33 it works on cosmic scales. So I think that that kind of thing is incredibly pervasive
02:07:42 and general. It’s a general feature of life. It’s interesting to think about, you know,
02:07:49 the standard thought about this is that it’s blind, right? Meaning that the intelligence
02:07:55 of the process is zero, it’s stumbling around. And I think that back in the day, when the
02:08:01 options were it’s dumb like machines, or it’s smart like humans, then of course, the scientists
02:08:07 went in this direction, because nobody wanted creationism. They said, okay, it’s got to
02:08:10 be like completely blind. I’m not actually sure, right? Because I think that everything
02:08:15 is a continuum. And I think that it doesn’t have to be smart with foresight like us, but
02:08:20 it doesn’t have to be completely blind either. I think there may be aspects of it. And in
02:08:25 particular, this kind of multi scale competency might give it a little bit of look ahead maybe
02:08:30 or a little bit of problem solving sort of baked in. But that’s going to be completely
02:08:36 different in different systems. I do think it’s general. I don’t think it’s just on Earth.
02:08:41 I think it’s a very fundamental thing.
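The three-ingredient recipe described here, heredity, imperfect heredity (variation), and selection, really is enough to get adaptation, which a few lines of code can demonstrate. The target string, mutation rate, and population size below are arbitrary choices for illustration, not anything from the conversation:

```python
import random

# Minimal evolutionary loop: copying (heredity), occasional miscopying
# (imperfect heredity), and competition for survival (selection).
TARGET = "PLANARIA"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(genome):
    # Number of positions matching the target.
    return sum(a == b for a, b in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Imperfect heredity: each character may be miscopied.
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in genome
    )

def evolve(pop_size=100, generations=200, seed=0):
    random.seed(seed)
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: the fitter half survives and reproduces imperfectly.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Nothing in the loop "knows" the target; matching strings simply out-reproduce the rest, which is why the same recipe works in computers, chemistry, or anywhere the three ingredients are present.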
02:08:44 And it does seem to have a kind of direction that it’s taking us that’s somehow perhaps
02:08:50 is defined by the environment itself. It feels like we’re headed towards something. Like
02:08:57 we’re playing out a script, just as a single cell’s script defines the entire organism.
02:09:03 It feels like, from the origin of Earth itself, it’s been playing out a kind of script. You can’t
02:09:10 really go any other way.
02:09:12 I mean, so this is very controversial, and I don’t know the answer. But people have argued
02:09:17 that this is called, you know, rewinding the tape of life, right? And some people have
02:09:22 argued, I think Simon Conway Morris maybe, that there is a deep
02:09:28 attractor, for example, toward the human kind of structure, and that if you
02:09:34 were to rewind it again, you’d basically get more or less the same thing. And then other
02:09:37 people have argued that, no, it’s incredibly sensitive to frozen accidents. And then once
02:09:41 certain stochastic decisions are made downstream, everything is going to be different. I don’t
02:09:46 know. I don’t know. You know, we’re very bad at predicting attractors in the space of complex
02:09:52 systems, generally speaking, right? We don’t know. So maybe, so maybe evolution on Earth
02:09:56 has these deep attractors such that, no matter what happened, it pretty much was likely
02:10:01 to end up there, or maybe not. I don’t know.
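The two possibilities, a deep attractor versus sensitivity to frozen accidents, can be contrasted with a toy simulation. The two maps below are standard textbook examples chosen purely for illustration, not models of evolution: a contracting map forgets its initial condition, while the chaotic logistic map amplifies a tiny perturbation of the "replay":

```python
def iterate(f, x, n):
    # Apply the map f to x, n times.
    for _ in range(n):
        x = f(x)
    return x

def contracting(x):
    # Contracting map: every trajectory converges to the fixed point 0.5,
    # so the starting point is forgotten (a "deep attractor").
    return 0.5 * x + 0.25

def chaotic(x):
    # Logistic map at r = 4: chaotic on [0, 1], so tiny differences in
    # the starting point blow up ("frozen accidents").
    return 4.0 * x * (1.0 - x)

a = iterate(contracting, 0.1, 50)
b = iterate(contracting, 0.1 + 1e-6, 50)  # replay the tape with a tiny nudge
c = iterate(chaotic, 0.1, 50)
d = iterate(chaotic, 0.1 + 1e-6, 50)      # same nudge, chaotic dynamics

print(abs(a - b))  # vanishingly small: history washes out
print(abs(c - d))  # typically order 1: the replay diverges
```

Whether evolution on Earth behaves more like the first map or the second is exactly the open "rewind the tape" question; the toy only shows that both regimes are mathematically easy to realize.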
02:10:03 It’s a really difficult idea to imagine that if you ran Earth a million times, 500,000
02:10:10 of those times you would get Hitler. Like, yeah, we don’t like to think like that,
02:10:17 because, at least maybe in America, you’d like to think that individual decisions can change
02:10:23 the world. And if individual decisions could change the world, then surely any perturbation
02:10:30 could result in a totally different trajectory. But maybe there’s a, in this competency hierarchy,
02:10:38 it’s a self-correcting system. Ultimately, there’s a bunch of chaos that
02:10:43 is leading towards something like a superintelligent artificial intelligence
02:10:47 system that answers 42. I mean, there might be a kind of imperative for life that it’s
02:10:56 headed to. And we’re too focused on our day to day life of getting coffee and snacks and
02:11:04 having sex and getting a promotion at work, not to see the big imperative of life on Earth
02:11:12 that is headed towards something.
02:11:14 Yeah, maybe, maybe. It’s difficult. I think one of the important things about chimeric
02:11:24 bioengineering technologies, all of those things, is that we have to start developing
02:11:29 a better science of predicting the cognitive goals of composite systems. So we’re just
02:11:35 not very good at it, right? If I create a composite system, and this could
02:11:41 be Internet of Things or swarm robotics or a cellular swarm or whatever, what is the
02:11:48 emergent intelligence of this thing? First of all, what level is it going to be at? And
02:11:51 if it has goal directed capacity, what are the goals going to be? Like, we are just not
02:11:56 very good at predicting that yet. And I think that it’s an existential level need for us
02:12:06 to be able to because we’re building these things all the time, right? We’re building
02:12:10 both physical structures like swarm robotics, and we’re building social financial structures
02:12:16 and so on, with very little ability to predict what sort of autonomous goals that system
02:12:21 is going to have, of which we are now cogs. And so learning to predict and control those
02:12:26 things is going to be critical. So in fact, if you’re right and there is some kind of
02:12:31 attractor to evolution, it would be nice to know what that is and then to make a rational
02:12:36 decision of whether we’re going to go along or we’re going to pop out of it or try to
02:12:39 pop out of it because there’s no guarantee. I mean, that’s the other kind of important
02:12:44 thing. A lot of people, I get a lot of complaints from people who email me and say, you know,
02:12:49 what you’re doing, it isn’t natural. And I’ll say, look, natural, that’d be nice if somebody
02:12:56 was making sure that natural was matched up to our values, but no one’s doing that. Evolution
02:13:02 optimizes for biomass. That’s it. It’s not optimizing for your happiness, and
02:13:07 I don’t think it’s necessarily optimizing for intelligence or fairness or any of that
02:13:11 stuff.
02:13:12 I’m going to find that person that emailed you, beat them up, take their place, steal
02:13:18 everything they own and say, no, this is natural.
02:13:22 This is natural. Yeah, exactly. Because it comes from an old worldview where you could
02:13:28 assume that whatever is natural is probably for the best. And I think we’re long
02:13:32 out of that Garden of Eden kind of view. So I think we can do better, and
02:13:37 we have to, right? Natural just isn’t great for a lot of life forms.
02:13:42 What are some cool synthetic organisms that you think about, you dream about? When you
02:13:46 think about embodied mind, what do you imagine? What do you hope to build?
02:13:51 Yeah, on a practical level, what I really hope to do is to gain enough of an understanding
02:13:57 of the embodied intelligence of the organs and tissues such that we can achieve a radically
02:14:04 different regenerative medicine. And I think about it
02:14:11 in terms of, okay, what’s the end game for this
02:14:18 whole thing? To me, the end game is something that you would call an anatomical compiler.
02:14:22 So the idea is you would sit down in front of the computer and you would draw the body
02:14:27 or the organ that you wanted. Not molecular details, but like, yeah, this is what I want.
02:14:31 I want a six legged, you know, frog with a propeller on top, or I want a heart that looks
02:14:36 like this, or I want a leg that looks like this. And what it would do, if we knew what
02:14:39 we were doing, is convert that anatomical description into a set of stimuli that would
02:14:47 have to be given to cells to convince them to build exactly that thing, right? I probably
02:14:51 won’t live to see it, but I think it’s achievable. And I think with that, if we can have that,
02:14:56 then that is basically the solution to all of medicine except for infectious disease.
02:15:03 So birth defects, traumatic injury, cancer, aging, degenerative disease: if we
02:15:07 knew how to tell cells what to build, all of those things go
02:15:11 away. And the positive feedback spiral of economic costs, where all of the advances
02:15:18 are increasingly heroic and expensive interventions on a sinking ship, when you’re
02:15:22 like 90 and so on, right? All of that goes away, because basically, instead of trying
02:15:26 to fix you up as you degrade, you progressively regenerate, you apply the regenerative medicine
02:15:33 early before things degrade. So I think that that’ll have massive economic impacts over
02:15:38 what we’re trying to do now, which is not at all sustainable. And that’s what I hope.
02:15:43 I hope that we get it. So to me, yes, the xenobots will be doing useful things, cleaning
02:15:50 up the environment, cleaning out your joints and all that kind of stuff. But more important
02:15:55 than that, I think we can use these synthetic systems to try to develop a science of detecting
02:16:04 and manipulating the goals of collective intelligences of cells specifically for regenerative medicine.
02:16:10 And then sort of beyond that, if we think further beyond that, what I hope is that kind
02:16:15 of like what you said, all of this drives a reconsideration of how we formulate ethical
02:16:22 norms. Because in the olden days, if you were confronted
02:16:29 with something, you could tap on it, right? And if you heard a metallic clanging sound,
02:16:33 you’d say, ah, fine: it was made in a factory, I can take it apart,
02:16:37 I can do whatever. If you did that and you got a sort of squishy, warm
02:16:40 sensation, you’d say, ah, I need to be more or less nice to it. That’s not
02:16:46 going to be feasible. It was never really feasible, but it was good enough because we
02:16:49 didn’t know any better. That needs to go. And I think that by breaking
02:16:55 down those artificial barriers, someday we can try to build a system of ethical norms
02:17:03 that does not rely on these completely contingent facts of our earthly history, but on something
02:17:08 much, much deeper that really takes agency and the capacity to suffer and all that takes
02:17:15 that seriously.
02:17:16 The capacity to suffer and the deep questions I would ask of a system is can I eat it and
02:17:21 can I have sex with it? Which are the two fundamental tests of, again, the human condition. So I
02:17:30 could basically do what DALL·E does, but in physical space. So, print out, like, 3D
02:17:39 print Pepe the Frog with a propeller hat. That’s the dream.
02:17:46 Well, yes and no. I mean, I want to get away from the 3D printing thing, because that will
02:17:50 be available for some things much earlier. I mean, we can already do bladders and ears
02:17:55 and things like that, because it’s micro-level control, right? When you 3D print, you are
02:17:59 in charge of where every cell goes. And for some things like
02:18:02 that, they had it, I think, 20 years ago or maybe earlier; you could
02:18:06 do that.
02:18:07 So yeah, I would like to emphasize the DALL·E part, where you provide a few words and it
02:18:11 generates a painting. So here you’d say, I want a frog with these features, and then it would
02:18:19 go direct a complex biological system to construct something like that.
02:18:25 Yeah. The main magic would be, I mean, I think from looking at DALL·E and so on, it looks
02:18:30 like the first part is kind of solved now, where you go from the words to the image;
02:18:34 that seems more or less solved. The next step is really hard. This is what limits things
02:18:39 like CRISPR and genomic editing and so on; that’s what limits all the impact for regenerative
02:18:46 medicine. Because going back to, okay, this is the knee joint that I want, or this is
02:18:51 the eye that I want. Now, what genes do I edit to make that happen, right? Going back
02:18:56 in that direction is really hard. So instead of that, it’s going to be, okay, I understand
02:18:59 how to motivate cells to build particular structures. Can I rewrite the memory of what
02:19:03 they think they’re supposed to be building, such that I can then take my hands
02:19:07 off the wheel and let them do their thing?
02:19:09 So some of that is experiment, but some of that maybe AI can help with too. Just like with
02:19:13 protein folding: this is exactly the problem that protein folding, in the simplest medium,
02:19:23 has solved with AlphaFold, which is, how does the sequence of letters result
02:19:31 in this three-dimensional shape? Although I guess it didn’t fully solve it, because
02:19:37 the inverse remains: if you say, I want this shape, how do I then find a sequence of letters? Yeah.
02:19:43 The reverse engineering step is really tricky.
02:19:45 It is. I think some of what we’re doing now is using
02:19:51 AI to try and build actionable models of the intelligence of the cellular collectives,
02:19:57 to help us gain models of them, and we’ve had some
02:20:02 success in this. So we did something like this for repairing birth
02:20:08 defects of the brain in frog. We’ve done some of this for normalizing melanoma, where you
02:20:14 can really start to use AI to make models of how I would impact this thing if I wanted
02:20:20 to, given all the complexities, and given all the controls
02:20:25 that it knows how to do.
02:20:27 So when you say regenerative medicine: we talked about creating biological organisms,
02:20:34 but if you regrow a hand, that information is already there, right? The biological system
02:20:41 has that information. So how does regenerative medicine work today? How do you hope it works?
02:20:48 What’s the hope there?
02:20:49 Yeah.
02:20:50 Yeah. How do you make it happen?
02:20:52 Well, today there's a set of popular approaches. One is 3D printing. The idea is,
02:20:57 I'm going to make a scaffold of the thing that I want, I'm going to seed it with cells,
02:21:00 and then there it is, right? Very direct. And that works for certain
02:21:03 things: you can make a bladder that way, or an ear, something like that. The
02:21:08 other idea is some sort of stem cell transplant. The idea is that if we put in stem
02:21:14 cells with appropriate factors, we can get them to generate certain kinds of neurons
02:21:17 for certain diseases and so on. All of those things are good for relatively simple structures,
02:21:24 but when you want an eye or a hand or something else, and this is maybe an unpopular opinion,
02:21:30 I think the only hope we have in any reasonable kind of timeframe is to understand how the
02:21:36 thing was motivated to get made in the first place. What is it that made those
02:21:41 cells, in the beginning, create a particular arm with a particular set of sizes and shapes
02:21:48 and number of fingers and all that? And why is it that a salamander can keep losing theirs
02:21:51 and keep regrowing theirs, and a planarian can do the same even more? So to me, the kind
02:21:57 of ultimate regenerative medicine is when you can tell the cells to build whatever it is
02:22:02 you need them to build, so that we can all be like planaria, basically.
02:22:07 Do you have to start at the very beginning, or can you do a shortcut? Because if we're
02:22:13 regrowing a hand, you've already got the whole organism. Yeah. So here's what we've done.
02:22:19 We've more or less solved that in frogs. Frogs, unlike salamanders, do not regenerate
02:22:24 their legs as adults. And we've shown that with a very kind of simple
02:22:31 intervention. There are two things you need: you need to have a
02:22:36 signal that tells the cells what to do, and then you need some way of delivering it.
02:22:39 This is work together with David Kaplan, and I should do a disclosure
02:22:44 here: we have a company called Morphoceuticals, a spin-off where we're trying to
02:22:48 address limb regeneration. So we've solved it in the frog,
02:22:52 and we're now in trials in mice. So we're in mammals now. I can't
02:22:56 say anything about how it's going, but the frog thing is solved. So what you do is,
02:22:59 after you have a little frog Luke Skywalker with a regrowing hand... Yeah, basically.
02:23:04 Yeah. We did it with legs instead of forearms.
02:23:07 And what you do is, after amputation, normally they don't regenerate, you
02:23:11 put on a wearable bioreactor. It's this thing that goes on, and Dave
02:23:15 Kaplan's lab makes these things, and inside it's a very controlled environment.
02:23:21 It is a silk gel that carries some drugs, for example ion channel drugs. And what you're
02:23:26 doing is you're saying to the cells, you should regrow what normally goes here. That
02:23:33 whole thing is on for 24 hours, and then you take it off and you don't touch the leg again.
02:23:37 This is really important, because what we're not looking for is micromanagement,
02:23:41 you know, printing or controlling the cells. We want to trigger: we want
02:23:45 to interact with it early on and then not touch it again, because we don't know
02:23:49 how to make a frog leg, but the frog knows how to make a frog leg. So 24 hours, then 18 months
02:23:54 of leg growth after that without us touching it again. And after 18 months, you get a pretty
02:23:58 good leg. That kind of shows this proof of concept: early on, right
02:24:02 after injury, when the cells are first making a decision about what they're going to do, you
02:24:05 can impact them. And once they've decided to make a leg, they don't need you
02:24:09 after that. They can do their own thing. So that's an approach that we're now taking.
02:24:14 What about cancer suppression? That’s something you mentioned earlier. How can all of these
02:24:18 ideas help with cancer suppression?
02:24:20 So let's go back to the beginning and ask what cancer is. I
02:24:23 think asking why there's cancer is the wrong question. I think the right question
02:24:28 is, why is there ever anything but cancer? In the normal state, you have a bunch
02:24:33 of cells that are all cooperating towards a large-scale goal. If that process of cooperation
02:24:38 breaks down and you've got a cell that is isolated from that electrical network that
02:24:42 lets you remember what the big goal is, you revert back to your unicellular lifestyle.
02:24:47 Now think about that border between self and world, right? Normally, when all these
02:24:51 cells are connected by gap junctions into an electrical network, they are all one self,
02:24:56 meaning that they have these large tissue-level goals and
02:25:01 so on. As soon as a cell is disconnected from that, the self is tiny, right? And at that
02:25:06 point... a lot of people model cancer cells as being more selfish
02:25:11 and all that. They're not more selfish. They're equally selfish. It's just that their self
02:25:14 is smaller. Normally the self is huge; now they've got tiny little selves. And what are
02:25:18 the goals of tiny little selves? Well, proliferate, right? And migrate to wherever life is good.
02:25:22 That's proliferation and metastasis. So one thing we found,
02:25:26 and people noticed years ago, is that when cells convert to cancer, the first thing you
02:25:31 see is they close the gap junctions. And I think it's a lot like that experiment
02:25:36 with the slime mold, where until you close that gap junction, you can't even entertain
02:25:41 the idea of leaving the collective, because there is no you at that point, right? Your
02:25:44 mind is melded with this whole other network. But as soon as the gap junction is
02:25:48 closed, there's a boundary between you and the rest of the body, and the rest of the body is just outside environment
02:25:53 to you. You're just a unicellular organism, and the rest of the body is environment.
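The "size of the self" idea here can be caricatured as a toy graph model: treat cells as nodes and open gap junctions as edges, so a cell's "self" is its connected component. Closing a cell's junctions shrinks its self to a single node. This is purely an illustrative abstraction of what Levin describes, not his actual model; all names and numbers are invented for the sketch.

```python
# Toy model: a cell's "self" is the set of cells it is electrically
# connected to via open gap junctions (a connected component of a graph).
# Purely illustrative; identifiers are invented for this sketch.

def connected_component(cell, junctions):
    """Return the set of cells reachable from `cell` via open junctions."""
    seen = {cell}
    frontier = [cell]
    while frontier:
        current = frontier.pop()
        for a, b in junctions:
            neighbor = b if a == current else a if b == current else None
            if neighbor is not None and neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

# A tiny tissue: five cells wired into one electrical network.
cells = ["c1", "c2", "c3", "c4", "c5"]
junctions = {("c1", "c2"), ("c2", "c3"), ("c3", "c4"), ("c4", "c5")}

print(len(connected_component("c3", junctions)))  # the "self" spans all 5 cells

# "Conversion": c3 closes its gap junctions and is cut out of the network.
junctions = {(a, b) for a, b in junctions if "c3" not in (a, b)}

print(len(connected_component("c3", junctions)))  # its self is now just itself: 1
```

In this cartoon, the cell's goals do not change; only the boundary of the system pursuing them does, which is the "equally selfish, smaller self" point.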
02:25:58 So we studied this process, and we worked out a way to artificially control
02:26:04 the bioelectric state of these cells to physically force them to remain in that network.
02:26:10 What that means is that nasty mutations like KRAS and things
02:26:15 like that, these really tough oncogenic mutations that cause tumors: if you introduce them,
02:26:20 but then artificially control the bioelectrics, you greatly reduce
02:26:29 tumorigenesis, or you normalize cells that had already begun to convert. They basically go back
02:26:33 to being normal cells. So, much like with the planaria, this is another
02:26:38 way in which the bioelectric state kind of dominates the genetic state.
02:26:43 If you sequence the nucleic acid, you'll see the
02:26:47 KRAS mutation and you'll say, ah, well, that's going to be a tumor. But there isn't a tumor,
02:26:50 because bioelectrically you've kept the cells connected, and they're just working
02:26:54 on making nice skin and kidneys and whatever else. So we've started moving that
02:26:59 to human glioblastoma cells, and we're hoping for interaction
02:27:04 with patients in the future.
02:27:07 So is this one of the possible ways in which we may, quote, cure cancer?
02:27:12 I think so. Yeah, I think so. I mean, there are other technologies:
02:27:17 immunotherapy, I think, is a great technology. Chemotherapy, I don't think, is
02:27:21 a good technology. I think we've got to get off of that.
02:27:25 So chemotherapy just kills cells.
02:27:27 Yeah. Well, chemotherapy hopes to kill more of the tumor cells than of your cells. That’s
02:27:32 it. It’s a fine balance. The problem is the cells are very similar because they are your
02:27:36 cells. And so if you don’t have a very tight way of distinguishing between them, then the
02:27:43 toll that chemo takes on the rest of the body is just unbelievable.
02:27:46 And immunotherapy tries to get the immune system to do some of the work.
02:27:49 Exactly. Yeah. I think that's potentially a very good approach. If
02:27:54 the immune system can be taught to recognize enough of the cancer cells, that's
02:27:59 a pretty good approach. But I think our approach is, in a way, more
02:28:02 fundamental, because if you can keep the cells harnessed towards organ-level
02:28:08 goals as opposed to individual-cell goals, then nobody will be making a tumor or metastasizing
02:28:13 and so on.
02:28:15 So we’ve been living through a pandemic. What do you think about viruses in this full beautiful
02:28:21 biological context we’ve been talking about? Are they beautiful to you? Are they terrifying?
02:28:30 Also, since we've been making these kinds of distinctions this whole conversation,
02:28:36 are they living? Are they embodied minds? Embodied minds that are assholes?
02:28:43 As far as I know, and I haven't been able to find this paper again, somewhere
02:28:47 in the last couple of months I saw a paper showing an
02:28:51 example of a virus that actually had physiology. Something was going on,
02:28:55 I think proton flux or something, on the virus itself. But barring that, generally speaking,
02:29:01 viruses are very passive. They don't do anything by themselves, and so I don't see any particular
02:29:06 reason to attribute much of a mind to them. I think they represent a way to
02:29:14 hijack other minds, for sure, like cells and other things.
02:29:18 But that's an interesting interplay, though. If they're hijacking other minds, you know,
02:29:24 the way we were talking about living organisms, they can interact with each
02:29:28 other and alter each other's trajectory by having interacted. I mean, that's
02:29:36 a deep, meaningful connection between a virus and a cell, and I think both are transformed
02:29:45 by the experience. And so in that sense, both are living.
02:29:49 Yeah. You know, this whole question of what's living and what's
02:29:56 not living, I'm really not sure about. And I know there are people that work on this, and
02:30:00 I don't want to piss anybody off, but I have not found it particularly useful
02:30:05 to try to make that a binary kind of a distinction. I think level of cognition
02:30:11 is very interesting, but as a continuum. Living versus nonliving, you know,
02:30:17 I don't really know what to do with that. I don't know what you do next
02:30:20 after making that distinction.
02:30:21 That's why I make the very binary distinctions: can I have sex with it or not? Can I eat it
02:30:27 or not? Because those are actionable, right?
02:30:30 Yeah. Well, I think that's a critical point you brought up, because how you relate
02:30:34 to something is really what this is all about, right? As an engineer, how do I control it?
02:30:40 But maybe I shouldn't be controlling it. Maybe I should be asking, can I have a relationship
02:30:44 with it? Should I be listening to its advice? All the way from, I need
02:30:48 to take it apart, to, I'd better do what it says because it seems to be pretty
02:30:52 smart, and everything in between, right? That's really what we're asking about.
02:30:56 Yeah. We need to understand our relationship to it. We’re searching for that relationship,
02:31:01 even in the most trivial senses. You came up with a lot of interesting terms. We’ve mentioned
02:31:08 some of them. Agential material. That’s a really interesting one. That’s a really interesting
02:31:14 one for the future of computation and artificial intelligence and computer science and all
02:31:19 of that. Let me go through some of them and see if they spark some interesting thoughts
02:31:25 for you. There's teleophobia: the unwarranted fear of erring on the side of too much agency
02:31:32 when considering a new system.
02:31:35 Yeah.
02:31:36 That’s the opposite. I mean, being afraid of maybe anthropomorphizing the thing.
02:31:41 This will get some people ticked off, I think. But I think the whole notion
02:31:47 of anthropomorphizing is a holdover from a pre-scientific age where humans were magic
02:31:54 and everything else wasn't magic, and you were anthropomorphizing when you dared suggest
02:32:00 that something else has some features of humans. I think we need to be way beyond that.
02:32:05 This issue of anthropomorphizing, I think it's a cheap charge. I don't think it holds
02:32:12 any water at all, other than when somebody makes a cognitive claim. I think all cognitive
02:32:18 claims are really engineering claims. So when somebody says this thing knows, or this
02:32:22 thing hopes, or this thing wants, or this thing predicts, all you can say is, fabulous: give
02:32:27 me the engineering protocol that you've derived using that hypothesis, and we will see if this
02:32:33 thing helps us or not. And then we can make a rational
02:32:36 decision.
02:32:37 I also like anatomical compiler: a future system representing the long-term endgame
02:32:43 of the science of morphogenesis, one that reminds us how far away from true understanding we
02:32:49 are. Someday you will be able to sit in front of an anatomical compiler, specify the shape
02:32:54 of the animal or plant that you want, and it will convert that shape specification into
02:32:59 a set of stimuli that will have to be given to cells to build exactly that shape, no matter
02:33:05 how weird it ends up being. You have total control. Just imagine the possibility for
02:33:12 memes in physical space. One of the glorious accomplishments of human civilization is
02:33:18 memes in digital space. Now this could create memes in physical space. I am both excited
02:33:25 and terrified by that possibility. Cognitive light cone, I think we also talked about: the
02:33:31 outer boundary in space and time of the largest goal a given system can work towards. Is this
02:33:39 kind of like shaping the set of options?
02:33:42 It's a little different from options. It's really focused on... I first came up with
02:33:49 this back in 2018, I want to say. There was a conference, a Templeton conference, where
02:33:55 they challenged us to come up with frameworks. I think actually it was the Diverse Intelligence
02:34:01 community.
02:34:02 Summer Institute.
02:34:03 Yeah, they had a Summer Institute.
02:34:04 That's the logo, the bee with some circuits.
02:34:06 Yeah, it's got different life forms. The whole program is called Diverse Intelligence. They
02:34:13 challenged us to come up with a framework that was suitable for analyzing different
02:34:18 kinds of intelligence together, because the kinds of things you do with a human are not
02:34:23 good with an octopus, not good with a plant, and so on. So I started thinking about this.
02:34:29 I asked myself, what do all cognitive agents, no matter what their provenance, no matter
02:34:35 what their architecture is, have in common? It seems to me that
02:34:41 what they have in common is some degree of competency to pursue a goal. What you can
02:34:46 do then is draw. What I ended up drawing was this thing that's kind of like a backwards
02:34:51 Minkowski cone diagram, where all of space is collapsed into one axis
02:34:58 and time is the other axis. Then what you can do is, for any creature, you can
02:35:04 semi-quantitatively estimate the spatial and temporal extent of the goals it's capable
02:35:12 of pursuing.
02:35:13 For example, if you are a tick or a
02:35:20 bacterium, and all you're really able to pursue is maximizing the level of some chemical
02:35:24 in your vicinity, that's all you've got: a tiny little cone, and you're a simple system.
02:35:29 If you are something like a dog, well, you've got some ability to care about some spatial
02:35:37 region, some temporal region. You can remember a little bit backwards, you can predict a little
02:35:41 bit forwards, but you're never, ever going to care about what happens in the next town
02:35:46 over four weeks from now. As far as we know, it's just impossible for that kind of architecture.
02:35:51 If you're a human, you might be working towards world peace long after you're dead. You might
02:35:56 have a planetary-scale goal that's enormous. Then there may be other, greater intelligences
02:36:04 somewhere that can care, in the linear range, about numbers of creatures: some sort of
02:36:08 Buddha-like character that can care about everybody's welfare, really care, the way we can't.
02:36:16 It's not a mapping of what you can sense or how far you can sense. It's not a mapping
02:36:20 of how far you can act. It's a mapping of how big the goals are that you're capable of envisioning
02:36:25 and working towards. I think that enables you to put synthetic kinds of constructs,
02:36:33 AIs, aliens, swarms, whatever, on the same diagram, because we're not talking about what
02:36:40 you're made of or how you got here. We're talking about the size and complexity
02:36:44 of the goals towards which you can work.
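Levin's semi-quantitative framing can be sketched in a few lines: represent each agent by the rough spatial and temporal extent of the largest goal it can pursue, and compare the resulting "cone" sizes. The class, fields, and numbers below are invented purely for illustration of the idea, not measurements.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLightCone:
    """Caricature of Levin's diagram: the spatial and temporal extent
    of the largest goal an agent can pursue. All values are invented."""
    name: str
    goal_radius_m: float   # how far away its largest goal can reach
    goal_horizon_s: float  # how far into the future that goal extends

    def size(self) -> float:
        # One crude scalar: the space-time extent of its goals.
        return self.goal_radius_m * self.goal_horizon_s

agents = [
    CognitiveLightCone("bacterium", 1e-5, 60),   # chemical gradient, seconds
    CognitiveLightCone("dog", 1e3, 3600 * 24),   # local territory, about a day
    CognitiveLightCone("human", 1e7, 3.15e9),    # planetary goals, beyond a lifetime
]

for agent in sorted(agents, key=CognitiveLightCone.size):
    print(f"{agent.name}: ~{agent.size():.3g} m*s of goal space-time")
```

Note that the axes measure the goals an agent can represent and work towards, not its sensory or motor range, which is exactly the distinction drawn above.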
02:36:46 Are there any other terms that pop into mind that are interesting?
02:36:50 I'm trying to remember. I have a list of them somewhere on my website.
02:36:54 Yeah, definitely check it out. Morphoceutical, I like that one. Ionoceutical.
02:37:01 Yeah. Those refer to different types of interventions in the regenerative medicine space. A morphoceutical
02:37:08 is a kind of intervention that really targets the cells' decision-making
02:37:16 process about what they're going to build. Ionoceuticals are like that, but focused more
02:37:20 specifically on the bioelectrics. There are also, of course, biochemical, biomechanical,
02:37:24 and who knows what else, maybe optical kinds of signaling systems there as well.
02:37:29 Target morphology is interesting. It's designed to capture the idea that it's not just feedforward
02:37:37 emergence. Of course that happens too, but in many cases
02:37:41 in biology, the system is specifically working towards a target in anatomical morphospace.
02:37:48 It's a navigation task, really. These kinds of problem-solving can be formalized as navigation
02:37:57 tasks, where the system is really going towards a particular region. How do you know? Because
02:38:00 you deviate them and then they go back.
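That "deviate them and they go back" test can be illustrated with a toy attractor: a system whose state is repeatedly nudged toward a stored target point in an abstract morphospace, so that after an external perturbation it settles back into the same region. This is a cartoon of the navigation framing, not a biological model; all numbers are made up.

```python
# Cartoon of target morphology as navigation: the system stores a target
# in an abstract 2-D "morphospace" and keeps reducing its error toward it,
# so perturbations get corrected. Purely illustrative numbers.

TARGET = (5.0, 2.0)  # stored target region in morphospace
STEP = 0.2           # fraction of the remaining error corrected per step

def settle(state, steps=100):
    """Move `state` toward TARGET by simple error-reducing feedback."""
    x, y = state
    for _ in range(steps):
        x += STEP * (TARGET[0] - x)
        y += STEP * (TARGET[1] - y)
    return (x, y)

# "Develop" from some starting state: we end up near the target.
grown = settle((0.0, 0.0))

# Perturb it ("deviate them...") and let it run again ("...they go back").
perturbed = (grown[0] + 3.0, grown[1] - 4.0)
regrown = settle(perturbed)

print(grown, regrown)  # both land very close to (5.0, 2.0)
```

The point of the cartoon is the feedback loop: because the system measures error against a stored setpoint rather than replaying a fixed sequence of steps, it reaches the same region from different starting states.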
02:38:03 Let me ask you, because you've really challenged a lot of ideas in biology in the work you
02:38:12 do, probably because some of your rebelliousness comes from the fact that you came from a different
02:38:18 field, computer engineering. Could you give advice to young people today, in high
02:38:23 school or college, who are trying to pave their life story, whether it's in science
02:38:31 or elsewhere? How can they have a career they can be proud of, or a life they can be proud
02:38:36 of?
02:38:37 Boy, it's dangerous to give advice because things change so fast, but one central thing
02:38:42 I can say is this: moving up through academia and whatnot, you will be surrounded by really
02:38:47 smart people. What you need to do is be very careful to distinguish specific critique
02:38:56 from meta-level advice. What I mean by that is, if somebody really smart and successful
02:39:03 and obviously competent is giving you specific critiques on what you’ve done, that’s gold.
02:39:11 It’s an opportunity to hone your craft, to get better at what you’re doing, to learn,
02:39:15 to find your mistakes. That’s great.
02:39:17 If they are telling you what you ought to be studying, how you ought to approach things,
02:39:23 what is the right way to think about things, you should probably ignore most of that. The
02:39:28 reason I make that distinction is that a lot of really successful people are very well
02:39:36 calibrated on their own ideas and their own field and their own area. They know exactly
02:39:43 what works and what doesn’t and what’s good and what’s bad, but they’re not calibrated
02:39:46 on your ideas. The things they will say, oh, this is a dumb idea, don’t do this and you
02:39:53 shouldn’t do that, that stuff is generally worse than useless. It can be very demoralizing
02:40:01 and really limiting. What I say to people is read very broadly, work really hard, know
02:40:09 what you’re talking about, take all specific criticism as an opportunity to improve what
02:40:14 you’re doing and then completely ignore everything else. I just tell you from my own experience,
02:40:21 most of what I consider to be interesting and useful things that we’ve done, very smart
02:40:26 people have said, this is a terrible idea, don’t do that. I think we just don’t know.
02:40:32 We have no idea beyond our own work. At best, we know what we ought to be doing. We very rarely
02:40:37 know what anybody else should be doing.
02:40:39 Yeah, and their perspective has been calibrated not just on their field
02:40:45 and specific situation, but also on the state of that field at a particular time in the
02:40:51 past. There are not many people in this world who are able to achieve revolutionary success
02:40:57 multiple times in their life. Whenever you say somebody is very smart, usually what that
02:41:02 means is somebody who is smart, who achieved a success at a certain point in their life,
02:41:09 and people often get stuck in the place where they found success. To be constantly challenging
02:41:14 your worldview is a very difficult thing. Also, at the same time, that's the weird thing about
02:41:23 life: if a lot of people tell you that something
02:41:29 is stupid or is not going to work, that either means it is stupid and not going to work,
02:41:36 or it's actually a great opportunity to do something new, and you don't know which one
02:41:42 it is. It's probably equally likely to be either. Well, I don't know the probabilities.
02:41:49 It depends how lucky you are, how brilliant you are. But you don't know, and so you can't
02:41:53 take that advice as actual data.
02:41:55 Yeah. This is kind of hard to describe, and fuzzy, but I'm a firm believer
02:42:03 that you have to build up your own intuition. Over time, you have to take your own risks,
02:42:09 ones that seem like they make sense to you, and then learn from that and build up so that
02:42:13 you can trust your own gut about what's a good idea. Sometimes you'll
02:42:18 make mistakes and they'll turn out to be dead ends, and that's fine, that's science.
02:42:21 But what I tell my students is: life is hard and science is hard, and you're going to sweat
02:42:28 and bleed through it, so you should be doing that for ideas that really fire you
02:42:34 up inside. Don't let the common denominator of standardized approaches
02:42:44 to things slow you down.
02:42:46 So you mentioned planaria being in some sense immortal. What’s the role of death in life?
02:42:53 What’s the role of death in this whole process we have? Is it, when you look at biological
02:42:58 systems, is death an important feature, especially as you climb up the hierarchy of competency?
02:43:08 Boy, that's an interesting question. I think it's certainly a factor that promotes
02:43:17 change and turnover, and an opportunity for a larger-scale
02:43:24 system to do something different the next time. So apoptosis, and death in general, is really interesting
02:43:29 in a number of ways. One is, you could think about, what was the first thing
02:43:33 to die? That's an interesting question. What was the first creature of which you could say
02:43:37 it actually died? It's a tough question, because we don't have a great definition. If
02:43:42 you bring a cabbage home and you put it in your fridge, at what point are you going
02:43:48 to say it's died, right? So it's kind of hard to know. There's one paper in which I talk
02:43:58 about this idea. Think about this: imagine that you have a creature
02:44:04 that's aquatic, let's say a frog or a tadpole, and the animal dies
02:44:11 in the pond, for whatever reason. Most of the cells are still alive. So you could
02:44:17 imagine that, when it died, there was some sort of breakdown of the connectivity between
02:44:23 the cells, and a bunch of cells crawled off; they could have a life as amoebas. Some of them
02:44:28 could join together and become a xenobot and twiddle around, right? We know from planaria
02:44:33 that there are cells that don't obey the Hayflick limit and just sort of live forever. So you
02:44:37 could imagine an organism that, when it dies, doesn't disappear; rather, the individual
02:44:42 cells that are still alive crawl off and have a completely different kind of lifestyle,
02:44:46 and maybe come back together as something else, or maybe they don't. All of this,
02:44:50 I'm sure, is happening somewhere on some planet. So death, in any case... we already kind
02:44:57 of knew this from the molecules: we know that when something dies, the molecules go
02:45:00 through the ecosystem. But even the cells don't necessarily die at that point; they
02:45:05 might have another life in a different way. You can think about something like HeLa, right?
02:45:09 The HeLa cell line has had this incredible life. There are
02:45:14 way more HeLa cells now than there ever were when she
02:45:18 was alive.
02:45:19 It seems like as organisms become more and more complex, if you look at the
02:45:22 mammals, their relationship with death becomes more and more complex. The survival imperative
02:45:29 starts becoming interesting, and humans are arguably the first species to have invented
02:45:37 the fear of death: the understanding that you're going to die. Let's put it this way:
02:45:43 not instinctual, like, I need to run away from the thing that's going
02:45:49 to eat me, but starting to contemplate the finiteness of life.
02:45:53 Yeah. One thing about the human cognitive light cone
02:45:59 is that, for the first time as far as we know, you might have goals that
02:46:04 are longer than your lifespan and therefore not achievable, right? Let's say,
02:46:08 and I don't know if this is true, that you're a goldfish and you have a
02:46:11 10-minute attention span. I'm not sure if that's true, but say there's some
02:46:14 organism with a short kind of cognitive light cone that way. All of your goals are
02:46:20 potentially achievable, because you're probably going to live the next 10 minutes. So whatever
02:46:23 goals you have, they are totally achievable. If you're a human, you could have all kinds
02:46:27 of goals that are guaranteed not achievable because they just take too long; guaranteed
02:46:31 you're not going to achieve them. So I wonder, is that
02:46:35 a perennial sort of thorn in our psychology that drives some
02:46:39 psychosis or whatever? I have no idea. Another interesting thing about that,
02:46:43 actually, I’ve been thinking about this a lot in the last couple of weeks, this notion
02:46:47 of giving up. So you would think that evolutionarily, the most adaptive way of being is that you
02:46:58 go, you, you, you, you fight as long as you physically can. And then when you can’t, you
02:47:02 can’t, and there’s in, there’s this photograph, there’s videos you can find of insects are
02:47:06 crawling around where like, you know, like, like most of it is already gone, and it’s
02:47:10 still sort of crawling, you know, like, Terminator style, right? Like, as far as as long as you
02:47:15 physically can, you keep going. Mammals don’t do that. So a lot of mammals, including rats,
02:47:20 have this thing where when, when they think it’s a hopeless situation, they literally
02:47:25 give up and die when physically, they could have kept going. I mean, humans certainly
02:47:29 do this. And there’s, there’s some like, really unpleasant experiments that the this guy forget
02:47:33 his name did with drowning rats, where if he where where rats normally drown after a
02:47:37 couple of minutes, but if you teach them that if you just tread water for a couple of minutes,
02:47:41 you’ll get rescued, they can tread water for like an hour. And so right, and so they literally
02:47:45 just give up and die. And so evolutionarily, that doesn’t seem like a good strategy at
02:47:49 all evolutionarily, since why would you like, what’s the benefit ever of giving up, you
02:47:53 just do what you can, and you know, one time out of 1000, you’ll actually get rescued, right?
02:47:57 But this issue of actually giving up suggests some very interesting metacognitive controls
02:48:03 where you’ve now gotten to the point where survival actually isn’t the top drive. And
02:48:08 that for whatever, you know, there are other considerations that have like taken over.
02:48:11 And I think that’s uniquely a mammalian thing. But then I don’t know.
02:48:15 Yeah, Camus, the existentialist question of why live. Just the fact that humans commit
02:48:23 suicide is a really fascinating question from an evolutionary perspective.
02:48:27 And that's the other thing: what is the simplest system,
02:48:33 whether evolved or natural or whatever, that is able to do that? Right? You
02:48:38 can ask, what other animals are actually able to do that? I'm not sure.
02:48:42 Maybe you could see animals over time, for some reason, lowering the value of survive
02:48:49 at all costs, gradually, until other objectives might become more important.
02:48:55 Maybe. I don’t know how evolutionarily how that how that gets off the ground. That just
02:48:59 seems like that would have such a strong pressure against it, you know. Just imagine, you know,
02:49:06 a population with a lower, you know, if you were a mutant in a population that had less
02:49:13 of a less of a survival imperative, would you put your genes outperform the others?
02:49:19 Is there such a thing as population selection? Because maybe suicide is a way for organisms
02:49:26 to decide for themselves that they’re not fit for the environment? Somehow?
02:49:31 Yeah, population-level selection is a kind
02:49:36 of deep, controversial area. But it’s tough because, on the face of it, if that was in your
02:49:42 genome, it wouldn’t get propagated, because you would die, and then your neighbor who didn’t
02:49:47 have it would have all the kids.
02:49:49 It feels like there could be some deep truth there that we’re not understanding. What about
02:49:55 you yourself as one biological system? Are you afraid of death?
02:49:59 To be honest, especially now, getting older and having helped a couple
02:50:05 of people pass, I’m more concerned with what’s a good way to go. Nowadays,
02:50:14 I don’t know what that is. Sitting in a facility that sort of tries
02:50:19 to stretch you out as long as it can, that doesn’t seem good. And there are
02:50:24 not a lot of opportunities to, I don’t know, sacrifice yourself for something useful,
02:50:29 right? There aren’t terribly many opportunities for that in modern society. So I don’t know.
02:50:33 I’m not particularly worried about death itself.
02:50:38 But I’ve seen it happen, and it’s not pretty. And I don’t know what
02:50:46 a better alternative is.
02:50:48 So the existential aspect of it does not worry you deeply? The fact that this ride ends?
02:50:56 No, it began. I mean, the ride began, right? There were, I don’t know, however many billions
02:51:01 of years before that when I wasn’t around. So that’s okay.
02:51:04 But isn’t the experience of life such that it almost feels like you’re immortal? Because of the
02:51:10 way you make plans, the way you think about the future. I mean, if you look at
02:51:15 your own personal rich experience, yes, you can understand: okay, eventually I’ll die, as
02:51:22 people I love have died. So surely I will die, and it hurts, and so on. But
02:51:28 it sure doesn’t feel that way. It’s so easy to get lost in feeling like this is going to go on forever.
02:51:34 Yeah, it’s a little bit like the people who say they don’t believe in free will, right?
02:51:37 I mean, you can say that, but when you go to a restaurant, you still have to pick
02:51:41 a soup and stuff. I’ve actually seen that
02:51:46 happen at lunch with a well-known philosopher. He didn’t believe in free
02:51:49 will, and the waitress came around and he was like, well, let me see. I was like,
02:51:53 what are you doing here? You’re going to choose a sandwich, right? So I think it’s one
02:51:58 of those things. I think you can know that you’re not going to live forever,
02:52:02 but it’s not practical to live that way. So you buy
02:52:07 insurance, and you do some stuff like that. But mostly, I think,
02:52:11 you just live as if you can make plans.
02:52:17 We talked about all kinds of life. We talked about all kinds of embodied minds. What do
02:52:22 you think is the meaning of it all? What’s the meaning of all the biological lives we’ve
02:52:28 been talking about here on Earth? Why are we here?
02:52:33 I don’t know that that’s a well-posed question, other than the existential
02:52:38 question you posed before.
02:52:40 Is that question hanging out with the question of what is consciousness at a retreat
02:52:47 somewhere, sipping pina coladas, because they’re both ambiguously defined?
02:52:55 Maybe. I’m not sure that any of these things really ride on the correctness of our scientific
02:53:01 understanding. But just for an example, right? I’ve always
02:53:06 found it weird that people get really worked up to find out realities about their
02:53:16 bodies, for example. Right? You’ve seen Ex Machina, right? So there’s this great
02:53:22 scene where he’s cutting his hand open to find out, you know, whether he’s full of cogs. Now,
02:53:26 to me, right, if I open up and I find a bunch of cogs, my conclusion
02:53:31 is not, oh, crap, I must not have true cognition. That sucks. My conclusion is, wow, cogs can
02:53:37 have true cognition. Great. So it seems to me, I guess I’m with
02:53:42 Descartes on this one, that whatever the truth ends up being of what
02:53:48 consciousness is and how it can be conscious, none of that is going to alter my primary
02:53:53 experience, which is: this is what it is. And if a bunch of molecular networks can
02:53:56 do it, fantastic. If it turns out that there’s something non-corporeal, so great.
02:54:03 We can study that, whatever. But the fundamental existential aspect of it is,
02:54:09 if somebody told me today that, yeah, you were created yesterday
02:54:13 and all your memories are sort of fake, you know, kind of like Boltzmann
02:54:18 brains, right, and Humean skepticism, all that. Yeah, okay.
02:54:23 But here I am now. So it’s the experience. It’s primal. That’s the
02:54:31 thing that matters. So the backstory doesn’t matter? I think so. From a first-person
02:54:36 perspective. Now, from a third-person perspective, scientifically, it’s all very interesting.
02:54:39 I could say, wow, that’s amazing that this
02:54:43 happens, and how does it happen, and whatever. But from a first-person perspective, I couldn’t
02:54:48 care less. What I learn from any of these scientific
02:54:52 facts is, okay, well, then I guess that’s what is sufficient
02:54:57 to give me my amazing first-person perspective. I think if you dig deeper
02:55:01 and deeper and get surprising answers to why the hell we’re here, it might give
02:55:10 you some guidance on how to live. Maybe, maybe. I don’t know. That would be nice. On the one
02:55:18 hand, you might be right, because I don’t know what else could possibly
02:55:23 give you that guidance, right? So you would think it would
02:55:26 have to be science, because there isn’t anything else.
02:55:30 On the other hand, I am really not sure how you go from, what they call,
02:55:36 an is to an ought, from any factual description of what’s going on. This goes
02:55:41 back to the naturalistic fallacy, right? Just because somebody says, oh, man, that’s completely not
02:55:44 natural, it’s never happened on Earth before, I’m not impressed by that whatsoever.
02:55:50 I think whatever hasn’t happened, we are now in a position to do better if we can.
02:55:56 Right. Well, this is also because you said there’s science and there’s nothing else. It’s
02:56:03 really tricky to know how to intellectually deal with a thing that science doesn’t currently
02:56:12 understand. Right. So the thing is, if you believe that science solves everything,
02:56:22 you can too easily in your mind think our current understanding has solved
02:56:30 everything. Right. Right. It jumps really quickly from science as a
02:56:36 process to the science of today. You could just look at human
02:56:43 history: throughout human history, physicists and everybody would claim we’ve
02:56:48 solved everything. Sure. Sure. Like, there’s a few small things to figure out,
02:56:53 and we’ve basically solved everything. When in reality, I think asking what is the
02:56:58 meaning of life is resetting the palette: we might be tiny and confused and don’t
02:57:08 have anything figured out. It’s almost going to be hilarious a few centuries from now when
02:57:12 they look back at how dumb we were. Yeah, I 100% agree. So when I say science and nothing else,
02:57:21 I certainly don’t mean the science of today, because I think overall we
02:57:27 know very little. I think most of the things that we’re sure of now are going to, as
02:57:32 you said, look hilarious down the line. So I think we’re just at the beginning
02:57:36 of a lot of really important things. When I say nothing but science, I also include
02:57:42 the kind of first-person science that you do. So the interesting thing
02:57:48 about studying consciousness and things like that in the first person is,
02:57:52 unlike doing science in the third person, where you as the scientist are minimally changed
02:57:57 by it, maybe not at all. When I do an experiment, I’m still me; there’s the experiment, whatever
02:58:01 I’ve done, I’ve learned something, so that’s a small change. But overall, that’s it.
02:58:04 In order to really study consciousness, you are part of the experiment. You will
02:58:10 be altered by that experiment, right? Whatever it is that you’re doing, whether
02:58:13 it’s some sort of contemplative practice or some sort of psychoactive, you know, whatever,
02:58:22 you are now your own experiment. And so I fold
02:58:26 that in. I think that’s part of it. I think that exploring our own mind and our
02:58:29 own consciousness is very important. I think much of it is not captured by what currently
02:58:34 is third-person science, for sure. But ultimately, I include all of that in science with a capital
02:58:41 S, in terms of a rational investigation of both first- and third-person aspects of
02:58:48 our world.
02:58:50 We are our own experiment. Beautifully put. And when two systems get to interact
02:58:57 with each other, that’s a kind of experiment. So I’m deeply honored that you would do this
02:59:03 experiment with me today. Thanks so much. I’m a huge fan of your work. Likewise, thank
02:59:07 you for doing everything you’re doing. I can’t wait to see the kind of incredible things
02:59:13 you build. So thank you for talking. Really appreciate being here. Thank you.
02:59:18 Thank you for listening to this conversation with Michael Levin. To support this podcast,
02:59:22 please check out our sponsors in the description. And now let me leave you with some words from
02:59:26 Charles Darwin in The Origin of Species. From the war of nature, from famine and death,
02:59:35 the most exalted object which we are capable of conceiving, namely, the production of the
02:59:41 higher animals, directly follows. There is grandeur in this view of life, with its several
02:59:47 powers, having been originally breathed into a few forms, or into one, and that, whilst
02:59:54 this planet has gone cycling on according to the fixed law of gravity, from so
02:59:59 simple a beginning, endless forms most beautiful and most wonderful have been, and are being,
03:00:06 evolved. Thank you for listening, and hope to see you next time.