Transcript
00:00:00 The following is a conversation with Bret Weinstein,
00:00:03 evolutionary biologist, author, cohost
00:00:05 of the Dark Horse podcast, and, as he says,
00:00:09 reluctant radical.
00:00:11 Even though we’ve never met or spoken before this,
00:00:14 we both felt like we’ve been friends for a long time.
00:00:17 I don’t agree with Bret on everything,
00:00:19 but I’m sure as hell happy he exists
00:00:22 in this weird and wonderful world of ours.
00:00:25 Quick mention of our sponsors,
00:00:27 Jordan Harbinger’s show, ExpressVPN, Magic Spoon,
00:00:30 and Four Sigmatic.
00:00:32 Check them out in the description to support this podcast.
00:00:35 As a side note, let me say a few words about COVID-19
00:00:39 and about science broadly.
00:00:41 I think science is beautiful and powerful.
00:00:44 It is the striving of the human mind
00:00:46 to understand and to solve the problems of the world.
00:00:50 But as an institution,
00:00:51 it is susceptible to the flaws of human nature,
00:00:54 to fear, to greed, power, and ego.
00:00:58 2020 is the story of all of these,
00:01:00 of both scientific triumph and tragedy.
00:01:04 We needed great leaders and we didn’t get them.
00:01:07 What we needed is leaders who communicate
00:01:09 in an honest, transparent, and authentic way
00:01:12 about the uncertainty of what we know
00:01:14 and the large scale scientific efforts
00:01:16 to reduce that uncertainty and to develop solutions.
00:01:19 I believe there are several candidates for solutions
00:01:21 that could have all saved hundreds of billions of dollars
00:01:25 and lessened or eliminated
00:01:27 the suffering of millions of people.
00:01:30 Let me mention five of the categories of solutions.
00:01:33 Masks, at-home testing, anonymized contact tracing,
00:01:37 antiviral drugs, and vaccines.
00:01:39 Within each of these categories,
00:01:41 institutional leaders should have constantly asked
00:01:44 and answered publicly, honestly,
00:01:46 the following three questions.
00:01:48 One, what data do we have on the solution
00:01:52 and what studies are we running
00:01:53 to get more and better data?
00:01:55 Two, given the current data and uncertainty,
00:01:57 how effective and how safe is the solution?
00:02:01 Three, what is the timeline and cost involved
00:02:04 with mass manufacturing and distribution of the solution?
00:02:07 In the service of these questions,
00:02:09 no voices should have been silenced,
00:02:11 no ideas left off the table.
00:02:13 Open data, open science,
00:02:15 open, honest scientific communication and debate
00:02:17 were the way, not censorship.
00:02:20 There are a lot of ideas out there
00:02:21 that are bad, wrong, dangerous,
00:02:24 but the moment we have the hubris
00:02:26 to say we know which ideas those are
00:02:29 is the moment we’ll lose our ability to find the truth,
00:02:32 to find solutions,
00:02:33 the very things that make science beautiful and powerful
00:02:37 in the face of all the dangers that threaten the wellbeing
00:02:40 and the existence of humans on Earth.
00:02:43 This conversation with Bret
00:02:44 is less about the ideas we talk about.
00:02:46 We agree on some, disagree on others.
00:02:49 It is much more about the very freedom to talk,
00:02:52 to think, to share ideas.
00:02:54 This freedom is our only hope.
00:02:57 Brett should never have been censored.
00:03:00 I asked Brett to do this podcast to show solidarity
00:03:03 and to show that I have hope for science and for humanity.
00:03:08 This is the Lex Fridman Podcast,
00:03:10 and here’s my conversation with Bret Weinstein.
00:03:13 What to you is beautiful about the study of biology,
00:03:18 the science, the engineering, the philosophy of it?
00:03:21 It’s a very interesting question.
00:03:22 I must say at one level, it’s not a conscious thing.
00:03:27 I can say a lot about why as an adult
00:03:30 I find biology compelling,
00:03:32 but as a kid I was completely fascinated with animals.
00:03:36 I loved to watch them and think about why they did
00:03:40 what they did and that developed into a very conscious
00:03:44 passion as an adult.
00:03:46 But I think in the same way that one is drawn to a person,
00:03:51 I was drawn to the never ending series of near miracles
00:03:59 that exists across biological nature.
00:04:02 When you see a living organism,
00:04:03 do you see it from an evolutionary biology perspective
00:04:08 of like this entire thing that moves around
00:04:10 in this world or do you see from an engineering perspective
00:04:14 that first principles almost down to the physics,
00:04:18 like the little components that build up hierarchies
00:04:21 that you have cells, the first proteins and cells
00:04:24 and organs and all that kind of stuff.
00:04:27 So do you see low level or do you see high level?
00:04:30 Well, the human mind is a strange thing
00:04:32 and I think it’s probably a bit like a time sharing machine
00:04:37 in which I have different modules.
00:04:40 We don’t know enough about biology for them to connect.
00:04:43 So they exist in isolation and I’m always aware
00:04:46 that they do connect, but I basically have to step
00:04:48 into a module in order to see the evolutionary dynamics
00:04:53 of the creature and the lineage that it belongs to.
00:04:56 I have to step into a different module to think
00:04:59 of that lineage over a very long time scale,
00:05:02 a different module still to understand
00:05:04 what the mechanisms inside would have to look like
00:05:06 to account for what we can see from the outside.
00:05:11 And I think that probably sounds really complicated,
00:05:15 but one of the things about being involved
00:05:20 in a topic like biology, and doing so for,
00:05:25 really, not even just my adult life but my whole life,
00:05:27 is that it becomes second nature.
00:05:29 And when we see somebody do an amazing parkour routine
00:05:34 or something like that, we think about what they must
00:05:38 be doing in order to accomplish that.
00:05:41 But of course, what they are doing is tapping
00:05:43 into some kind of zone, right?
00:05:46 They are in a zone in which they are in such command
00:05:51 of their center of gravity, for example,
00:05:53 that they know how to hurl it around a landscape
00:05:56 so that they always land on their feet.
00:05:59 And I would just say for anyone who hasn’t found a topic
00:06:04 on which they can develop that kind of facility,
00:06:08 it is absolutely worthwhile.
00:06:11 It’s really something that human beings are capable
00:06:13 of doing across a wide range of topics,
00:06:16 many things our ancestors didn’t even have access to.
00:06:19 And that flexibility of humans,
00:06:21 that ability to repurpose our machinery
00:06:26 for topics that are novel means really,
00:06:29 the world is your oyster.
00:06:30 You can figure out what your passion is
00:06:32 and then figure out all of the angles
00:06:34 that one would have to pursue to really deeply understand it.
00:06:38 And it is well worth having at least one topic like that.
00:06:42 You mean embracing the full adaptability
00:06:45 of both the body and the mind.
00:06:49 So like, I don’t know what to attribute the parkour to,
00:06:53 like biomechanics of how our bodies can move,
00:06:56 or is it the mind?
00:06:58 Like how much percent wise,
00:07:00 is it the entirety of the hierarchies of biology
00:07:04 that we’ve been talking about,
00:07:06 or is it just all the mind?
00:07:09 The way to think about creatures
00:07:10 is that every creature is two things simultaneously.
00:07:14 A creature is a machine of sorts, right?
00:07:17 It’s not a machine in the technological sense,
00:07:20 I call it an aqueous machine, right?
00:07:22 And it’s run by an aqueous computer, right?
00:07:24 So it’s not identical to our technological machines.
00:07:29 But every creature is both a machine
00:07:31 that does things in the world
00:07:33 sufficient to accumulate enough resources
00:07:36 to continue surviving, to reproduce.
00:07:39 It is also a potential.
00:07:41 So each creature is potentially, for example,
00:07:45 the most recent common ancestor
00:07:47 of some future clade of creatures
00:07:48 that will look very different from it.
00:07:50 And if a creature is very, very good at being a creature,
00:07:53 but not very good in terms of the potential
00:07:56 it has going forward,
00:07:57 then that lineage will not last very long into the future
00:08:01 because change will throw challenges at it
00:08:04 that its descendants will not be able to meet.
00:08:07 So the thing about humans is we are a generalist platform,
00:08:13 and we have the ability to swap out our software
00:08:17 to exist in many, many different niches.
00:08:20 And I was once watching an interview
00:08:24 with this British group of parkour experts
00:08:27 who were being interviewed, discussing what it is they do
00:08:31 and how it works.
00:08:31 And what they essentially said is,
00:08:33 look, you’re tapping into deep monkey stuff, right?
00:08:39 And I thought, yeah, that’s about right.
00:08:41 And anybody who is proficient at something
00:08:46 like skiing or skateboarding, you know,
00:08:49 has the experience of flying down the hill
00:08:54 on skis, for example,
00:08:56 bouncing from the top of one mogul to the next.
00:08:59 And if you really pay attention,
00:09:02 you will discover that your conscious mind
00:09:04 is actually a spectator.
00:09:05 It’s there, it’s involved in the experience,
00:09:08 but it’s not driving.
00:09:09 Some part of you knows how to ski,
00:09:10 and it’s not the part of you that knows how to think.
00:09:12 And I would just say that what accounts
00:09:17 for this flexibility in humans
00:09:19 is the ability to bootstrap a new software program
00:09:24 and then drive it into the unconscious layer
00:09:27 where it can be applied very rapidly.
00:09:30 And, you know, I will be shocked
00:09:31 if the exact thing doesn’t exist in robotics.
00:09:36 You know, if you programmed a robot
00:09:37 to deal with circumstances that were novel to it,
00:09:40 how would you do it?
00:09:41 It would have to look something like this.
00:09:43 There’s a certain kind of magic, you’re right,
00:09:46 with the consciousness being an observer.
00:09:48 When you play guitar, for example, or piano for me,
00:09:51 music, when you get truly lost in it,
00:09:55 I don’t know what the heck is responsible
00:09:57 for the flow of the music,
00:09:59 the kind of the loudness of the music going up and down,
00:10:02 the timing, the intricate, like even the mistakes,
00:10:06 all those things,
00:10:07 that doesn’t seem to be the conscious mind.
00:10:09 It is just observing,
00:10:12 and yet it’s somehow intricately involved.
00:10:14 More, like, because you mentioned parkour,
00:10:17 the dance is like that too.
00:10:18 When you start out in tango dancing,
00:10:20 when you truly lose yourself in it,
00:10:24 then it’s just like you’re an observer,
00:10:26 and how the hell is the body able to do that?
00:10:29 And not only that, it’s the physical motion
00:10:31 is also creating the emotion,
00:10:33 the, like, the “damn, it’s good to be alive” feeling.
00:10:40 So, but then that’s also intricately connected
00:10:44 to the full biology stack that we’re operating in.
00:10:47 I don’t know how difficult it is to replicate that.
00:10:50 We’re talking offline about Boston Dynamics robots.
00:10:54 They’ve recently done both parkour and flips,
00:10:57 they’ve also done some dancing,
00:11:02 and it’s something I think a lot about
00:11:03 because what most people don’t realize
00:11:07 because they don’t look deep enough
00:11:09 is those robots are hard coded to do those things.
00:11:13 The robots didn’t figure it out by themselves,
00:11:16 and yet the fundamental aspect of what it means to be human
00:11:20 is that process of figuring out, of making mistakes,
00:11:23 and then there’s something about overcoming
00:11:26 those challenges and the mistakes
00:11:27 and, like, figuring out how to lose yourself
00:11:30 in the magic of the dancing or just movement
00:11:34 is what it means to be human.
00:11:35 That learning process, so that’s what I want to do
00:11:38 with the, almost as a fun side thing
00:11:42 with the Boston Dynamics robots,
00:11:44 is to have them learn and see what they figure out,
00:11:48 even if they make mistakes.
00:11:50 I want to let Spot make mistakes
00:11:55 and in so doing discover what it means to be alive,
00:12:00 discover beauty, because I think
00:12:02 that’s the essential aspect of mistakes.
00:12:05 Boston Dynamics folks want Spot to be perfect
00:12:09 because they don’t want Spot to ever make mistakes
00:12:11 because it wants to operate in the factories,
00:12:13 it wants to be very safe and so on.
00:12:16 For me, if you construct the environment,
00:12:19 if you construct a safe space for robots
00:12:22 and allow them to make mistakes,
00:12:24 something beautiful might be discovered,
00:12:26 but that requires a lot of brain power.
00:12:29 So Spot is currently very dumb
00:12:32 and I’m gonna give it a brain.
00:12:34 So first make it see, currently it can’t see,
00:12:36 meaning computer vision, it has to understand
00:12:39 its environment, it has to see all the humans,
00:12:41 but then also has to be able to learn,
00:12:43 learn about its movement, learn how to use its body
00:12:47 to communicate with others, all those kinds of things
00:12:49 that dogs know how to do well,
00:12:51 humans know how to do somewhat well.
00:12:54 I think that’s a beautiful challenge,
00:12:56 but first you have to allow the robot to make mistakes.
00:13:00 Well, I think your objective is laudable,
00:13:03 but you’re gonna realize
00:13:04 that the Boston Dynamics folks are right
00:13:07 the first time Spot poops on your rug.
00:13:11 I hear the same thing about kids and so on.
00:13:13 I still wanna have kids.
00:13:14 No, you should, it’s a great experience.
00:13:18 So let me step back into what you said
00:13:19 in a couple of different places.
00:13:21 One, I have always believed that the missing element
00:13:24 in robotics and artificial intelligence
00:13:27 is a proper development, right?
00:13:30 It is no accident, it is no mere coincidence
00:13:33 that human beings are the most dominant species
00:13:36 on planet Earth and that we have the longest childhoods
00:13:38 of any creature on Earth by far, right?
00:13:42 The development is the key to the flexibility.
00:13:44 And so the capability of a human at adulthood
00:13:49 is the mirror image, it’s the flip side
00:13:53 of our helplessness at birth.
00:13:57 So I’ll be very interested to see what happens
00:13:59 in your robot project if you do not end up
00:14:03 reinventing childhood for robots,
00:14:05 which of course is foreshadowed in 2001: A Space Odyssey quite brilliantly.
00:14:10 But I also wanna point out,
00:14:12 you can see this issue of your conscious mind
00:14:16 becoming a spectator very well
00:14:18 if you compare tennis to table tennis, right?
00:14:24 If you watch a tennis game, you could imagine
00:14:28 that the players are highly conscious as they play.
00:14:32 You cannot imagine that
00:14:33 if you’ve ever played ping pong decently.
00:14:36 A volley in ping pong is so fast
00:14:39 that if your reactions
00:14:42 had to go through your conscious mind,
00:14:43 you wouldn’t be able to play.
00:14:44 So you can detect that your conscious mind,
00:14:47 while very much present, isn’t there.
00:14:49 And you can also detect where consciousness
00:14:52 does usefully intrude.
00:14:54 If you go up against an opponent in table tennis
00:14:57 that knows a trick that you don’t know how to respond to,
00:15:01 you will suddenly detect that something
00:15:03 about your game is not effective,
00:15:06 and you will start thinking about what might be,
00:15:08 how do you position yourself so that move
00:15:10 that puts the ball just in that corner of the table
00:15:12 or something like that doesn’t catch you off guard.
00:15:15 And this, I believe, is where we highly conscious folks,
00:15:22 those of us who try to think through things
00:15:23 very deliberately and carefully,
00:15:26 mistake consciousness for the highest kind of thinking.
00:15:30 And I really think that this is an error.
00:15:33 Consciousness is an intermediate level of thinking.
00:15:36 What it does is it allows you,
00:15:37 it’s basically like uncompiled code.
00:15:40 And it doesn’t run very fast.
00:15:42 It is capable of being adapted to new circumstances.
00:15:45 But once the code is roughed in,
00:15:48 it gets driven into the unconscious layer,
00:15:50 and you become highly effective at whatever it is.
00:15:52 And from that point, your conscious mind
00:15:55 basically remains there to detect things
00:15:57 that aren’t anticipated by the code you’ve already written.
00:16:00 And so I don’t exactly know how one would establish this,
00:16:05 how one would demonstrate it.
00:16:07 But it must be the case that the human mind
00:16:10 contains sandboxes in which things are tested, right?
00:16:15 Maybe you can build a piece of code
00:16:16 and run it in parallel next to your active code
00:16:19 so you can see how it would have done comparatively.
00:16:23 But there’s gotta be some way of writing new code
00:16:26 and then swapping it in.
00:16:28 And frankly, I think this has a lot to do
00:16:29 with things like sleep cycles.
00:16:31 Very often, when I get good at something,
00:16:34 I often don’t get better at it while I’m doing it.
00:16:36 I get better at it when I’m not doing it,
00:16:38 especially if there’s time to sleep and think on it.
00:16:41 So there’s some sort of new program
00:16:44 swapping in for old program phenomenon,
00:16:46 which will be a lot easier to see in machines.
00:16:50 It’s gonna be hard with the wetware.
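A minimal sketch of that sandbox idea in software terms, running a candidate routine in parallel with the active one and comparing results before swapping it in; the names and logic here are purely illustrative, not anything Bret specifies.

```python
# Illustrative only: run a candidate "program" in a sandbox alongside the
# active version on the same inputs and compare, before promoting it.

def active_version(x):
    # The trusted, already "compiled" behavior.
    return x * x

def candidate_version(x):
    # The new code being evaluated in parallel.
    return x ** 2

def shadow_run(inputs):
    """Run both versions on the same inputs and collect any disagreements."""
    disagreements = [
        (x, active_version(x), candidate_version(x))
        for x in inputs
        if active_version(x) != candidate_version(x)
    ]
    # Only swap the candidate in if it matched the active version everywhere.
    return len(disagreements) == 0, disagreements

if __name__ == "__main__":
    safe_to_swap, diffs = shadow_run(range(10))
    print("safe to swap in:", safe_to_swap, "| disagreements:", diffs)
```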
00:16:53 I like, I mean, it is true,
00:16:55 because somebody that played,
00:16:56 I played tennis for many years,
00:16:58 I do still think the highest form of excellence in tennis
00:17:01 is when the conscious mind is a spectator.
00:17:05 So the compiled code is the highest form of being human.
00:17:11 And then consciousness is just some specific compiler.
00:17:16 You used to have like Borland C++ compiler.
00:17:19 You could just have different kind of compilers.
00:17:22 Ultimately, the thing by which we measure
00:17:28 the power of life, the intelligence of life,
00:17:30 is the compiled code.
00:17:31 And you can probably do that compilation all kinds of ways.
00:17:34 Yeah, I’m not saying that tennis is played consciously
00:17:37 and table tennis isn’t.
00:17:38 I’m saying that because tennis is slowed down
00:17:41 by just the space on the court,
00:17:43 you could imagine that it was your conscious mind playing.
00:17:47 But when you shrink the court down,
00:17:48 it becomes obvious,
00:17:49 it becomes obvious that your conscious mind
00:17:51 is just present rather than knowing where to put the paddle.
00:17:54 And weirdly for me,
00:17:58 I would say this probably isn’t true
00:17:59 in a podcast situation.
00:18:01 But if I have to give a presentation,
00:18:03 especially if I have not overly prepared,
00:18:06 I often find the same phenomenon
00:18:08 when I’m giving the presentation.
00:18:10 My conscious mind is there watching
00:18:11 some other part of me present,
00:18:13 which is a little jarring, I have to say.
00:18:17 Well, that means you’ve gotten good at it.
00:18:20 Not letting the conscious mind get in the way
00:18:22 of the flow of words.
00:18:24 Yeah, that’s the sensation to be sure.
00:18:27 And that’s the highest form of podcasting too.
00:18:29 I mean, that’s what it looks like
00:18:32 when a podcast is really in the pocket,
00:18:34 like Joe Rogan, just having fun
00:18:38 and just losing themselves.
00:18:39 And that’s something I aspire to as well,
00:18:41 just losing yourself in conversation.
00:18:43 As somebody that has a lot of anxiety with people,
00:18:45 like I’m such an introvert.
00:18:47 I’m scared.
00:18:48 I was scared before you showed up.
00:18:49 I’m scared right now.
00:18:50 There’s just anxiety.
00:18:52 There’s just, it’s a giant mess.
00:18:55 It’s hard to lose yourself.
00:18:56 It’s hard to just get out of the way of your own mind.
00:19:00 Yeah, actually, trust is a big component of that.
00:19:04 Your conscious mind retains control
00:19:08 if you are very uncertain.
00:19:11 But when you do get into that zone when you’re speaking,
00:19:14 I realize it’s different for you
00:19:15 with English as a second language,
00:19:16 although maybe you present in Russian and it happens.
00:19:20 But do you ever hear yourself say something
00:19:22 and you think, oh, that’s really good, right?
00:19:25 Like you didn’t come up with it,
00:19:26 some other part of you that you don’t exactly know
00:19:30 came up with it?
00:19:31 I don’t think I’ve ever heard myself in that way
00:19:36 because I have a much louder voice
00:19:38 that’s constantly yelling in my head,
00:19:41 why the hell did you say that?
00:19:43 There’s a very self critical voice that’s much louder.
00:19:47 So I’m very, maybe I need to deal with that voice,
00:19:51 but it’s been like, what is it called?
00:19:53 Like a megaphone just screaming
00:19:54 so I can’t hear the other voice that says,
00:19:56 good job, you said that thing really nicely.
00:19:58 So I’m kind of focused right now on the megaphone person
00:20:02 in the audience versus the positive,
00:20:05 but that’s definitely something to think about.
00:20:07 It’s been productive, but the place where I find gratitude
00:20:12 and beauty and appreciation of life is in the quiet moments
00:20:16 when I don’t talk, when I listen to the world around me,
00:20:20 when I listen to others, when I talk,
00:20:23 I’m extremely self critical in my mind.
00:20:26 When I produce anything out into the world
00:20:29 that originated with me,
00:20:32 like any kind of creation, extremely self critical.
00:20:35 It’s good for productivity,
00:20:37 for always striving to improve and so on.
00:20:40 It might be bad for just appreciating
00:20:45 the things you’ve created.
00:20:46 I’m a little bit with Marvin Minsky on this
00:20:49 where he says the key to a productive life
00:20:54 is to hate everything you’ve ever done in the past.
00:20:57 I didn’t know he said that.
00:20:59 I must say, I resonate with it a bit.
00:21:01 And unfortunately, my life currently has me putting
00:21:06 a lot of stuff into the world,
00:21:08 and I effectively watch almost none of it.
00:21:12 I can’t stand it.
00:21:15 Yeah, what do you make of that?
00:21:16 I don’t know.
00:21:18 I just yesterday read The Metamorphosis by Kafka,
00:21:23 where he turns into a giant bug
00:21:25 because of the stress that the world puts on him,
00:21:29 that his parents put on him to succeed.
00:21:31 And I think that you have to find the balance
00:21:35 because if you allow the self critical voice
00:21:39 to become too heavy, the burden of the world,
00:21:41 the pressure that the world puts on you
00:21:44 to be the best version of yourself and so on to strive,
00:21:47 then you become a bug and that’s a big problem.
00:21:51 And then the world turns against you because you’re a bug.
00:21:56 You become some kind of caricature of yourself.
00:21:59 I don’t know, you become the worst version of yourself
00:22:03 and then thereby end up destroying yourself
00:22:07 and then the world moves on.
00:22:09 That’s the story.
00:22:10 That’s a lovely story.
00:22:12 I do think this is one of these places,
00:22:14 and frankly, you could map this onto
00:22:17 all of modern human experience,
00:22:19 but this is one of these places
00:22:20 where our ancestral programming
00:22:23 does not serve our modern selves.
00:22:25 So I used to talk to students
00:22:27 about the question of dwelling on things.
00:22:30 Dwelling on things is famously understood to be bad
00:22:35 and yet it can’t possibly be bad.
00:22:36 It wouldn’t exist, the tendency toward it
00:22:38 wouldn’t exist, if it was bad.
00:22:40 So what is bad is dwelling on things
00:22:42 past the point of utility.
00:22:45 And that’s obviously easier to say than to operationalize,
00:22:49 but if you realize that your dwelling is the key, in fact,
00:22:53 to upgrading your program for future well being
00:22:57 and that there’s a point, presumably,
00:23:00 of diminishing returns, if not counterproductivity,
00:23:03 there is a point at which you should stop
00:23:05 because that is what is in your best interest,
00:23:08 then knowing that you’re looking for that point is useful.
00:23:12 This is the point at which it is no longer useful
00:23:14 for me to dwell on this error I have made.
00:23:16 That’s what you’re looking for.
00:23:17 And it also gives you license, right?
00:23:20 If some part of you feels like it’s punishing you
00:23:23 rather than searching, then that also has a point
00:23:27 at which it’s no longer valuable
00:23:29 and there’s some liberty in realizing,
00:23:32 yep, even the part of me that was punishing me
00:23:35 knows it’s time to stop.
00:23:37 So if we map that onto the compiled code discussion,
00:23:40 as a computer science person, I find that very compelling.
00:23:43 You know, when you compile code, you get warnings sometimes.
00:23:48 And usually, if you’re a good software engineer,
00:23:54 you’re going to make sure there are none,
00:23:56 you know, you treat warnings as errors.
00:23:58 So you make sure that the compilation produces no warnings.
00:24:02 But at a certain point, when you have a large enough system,
00:24:05 you just let the warnings go.
00:24:06 It’s fine.
00:24:07 Like, I don’t know where that warning came from,
00:24:10 but, you know, just ultimately you need to compile the code
00:24:15 and run with it and hope nothing terrible happens.
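A small illustration of the “treat warnings as errors” practice Lex describes, assuming Python purely for the example; in compiled languages the same idea is a compiler flag such as -Werror in gcc or clang.

```python
# Illustrative only: escalate every warning to an exception so the program
# refuses to run until the warning is resolved, instead of letting them pile up.
import warnings

warnings.simplefilter("error")  # any warning now raises instead of printing

def legacy_average(values):
    # A stand-in for code that emits a warning you might be tempted to ignore.
    warnings.warn("legacy_average is deprecated", DeprecationWarning)
    return sum(values) / len(values)

if __name__ == "__main__":
    try:
        legacy_average([1, 2, 3])
    except DeprecationWarning as exc:
        print("treated as an error:", exc)
```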
00:24:19 Well, I think what you will find, and believe me,
00:24:21 I think what you’re talking about
00:24:24 with respect to robots and learning
00:24:27 is gonna end up having to go to a deep developmental state
00:24:31 and a helplessness that evolves into hyper competence
00:24:34 and all of that.
00:24:36 But I live, I noticed that I live by something
00:24:41 that I, for lack of a better descriptor,
00:24:44 call the theory of close calls.
00:24:47 And the theory of close calls says that people
00:24:50 typically miscategorize the events in their life
00:24:55 where something almost went wrong.
00:24:58 And, you know, for example,
00:25:01 I was walking down the street
00:25:04 with my college friends and one of my friends
00:25:06 stepped into the street thinking it was clear
00:25:08 and was nearly hit by a car going 45 miles an hour,
00:25:12 would have been an absolute disaster, might have killed her,
00:25:14 certainly would have permanently injured her.
00:25:18 But she didn’t, you know, the car didn’t touch her, right?
00:25:21 Now you could walk away from that and think nothing of it
00:25:25 because, well, what is there to think?
00:25:26 Nothing happened.
00:25:28 Or you could think, well, what is the difference
00:25:30 between what did happen and my death?
00:25:33 The difference is luck.
00:25:35 I never want that to be true, right?
00:25:37 I never want the difference between what did happen
00:25:40 and my death to be luck.
00:25:41 Therefore, I should count this as very close to death
00:25:45 and I should prioritize coding
00:25:47 so it doesn’t happen again at a very high level.
00:25:50 So anyway, my basic point is
00:25:53 the accidents and disasters and misfortune
00:25:58 describe a distribution that tells you
00:26:02 what’s really likely to get you in the end.
00:26:04 And so personally, you can use them to figure out
00:26:10 where the dangers are so that you can afford
00:26:12 to take great risks because you have a really good sense
00:26:14 of how they’re gonna go wrong.
00:26:15 But I would also point out civilization has this problem.
00:26:19 Civilization is now producing these events
00:26:22 that are major disasters,
00:26:24 but they’re not existential scale yet, right?
00:26:27 They’re very serious errors that we can see.
00:26:30 And I would argue that the pattern is
00:26:32 you discover that we are involved in some industrial process
00:26:35 at the point it has gone wrong, right?
00:26:37 So I’m now always asking the question,
00:26:40 okay, in light of the Fukushima triple meltdown,
00:26:44 the financial collapse of 2008,
00:26:46 the Deepwater Horizon blowout, COVID-19,
00:26:51 and its probable origins in the Wuhan lab,
00:26:55 what processes do I not know the name of yet
00:26:58 that I will discover at the point
00:27:00 that some gigantic accident has happened?
00:27:03 And can we talk about the wisdom or lack thereof
00:27:06 of engaging in that process before the accident, right?
00:27:09 That’s what a wise civilization would be doing.
00:27:11 And yet we don’t.
00:27:12 I just wanna mention something that happened
00:27:15 a couple of days ago.
00:27:17 I don’t know if you know who JB Straubel is.
00:27:20 He’s the co founder of Tesla,
00:27:21 CTO of Tesla for many, many years.
00:27:24 His wife just died.
00:27:26 She was riding a bicycle.
00:27:28 And in the same thin line between death and life
00:27:35 that many of us have been in,
00:27:37 where you walk into the intersection
00:27:39 and there’s this close call.
00:27:41 Every once in a while, you get the short straw.
00:27:50 I wonder how much of our own individual lives
00:27:54 and the entirety of the human civilization
00:27:57 rests on this little roll of the dice.
00:28:00 Well, this is sort of my point about the close calls
00:28:03 is that there’s a level at which we can’t control it, right?
00:28:06 The gigantic asteroid that comes from deep space
00:28:11 that you don’t have time to do anything about.
00:28:13 There’s not a lot we can do to hedge that out,
00:28:15 or at least not short term.
00:28:17 But there are lots of other things.
00:28:20 Obviously, the financial collapse of 2008
00:28:23 didn’t break down the entire world economy.
00:28:27 It threatened to, but a Herculean effort
00:28:28 managed to pull us back from the brink.
00:28:31 The triple meltdown at Fukushima was awful,
00:28:34 but every one of the seven fuel pools held,
00:28:37 there wasn’t a major fire that made it impossible
00:28:39 to manage the disaster going forward.
00:28:41 We got lucky.
00:28:44 We could say the same thing about the blowout
00:28:47 at the Deepwater Horizon,
00:28:49 where a hole in the ocean floor, large enough
00:28:52 that we couldn’t have plugged it, could have opened up.
00:28:54 All of these things could have been much, much worse, right?
00:28:57 And I think we can say the same thing about COVID,
00:28:59 as terrible as it is.
00:29:00 And we cannot say for sure that it came from the Wuhan lab,
00:29:04 but there’s a strong likelihood that it did.
00:29:06 And it also could be much, much worse.
00:29:10 So in each of these cases, something is telling us,
00:29:13 we have a process that is unfolding
00:29:16 that keeps creating risks where it is luck
00:29:18 that is the difference between us
00:29:19 and some scale of disaster that is unimaginable.
00:29:22 And that wisdom, you can be highly intelligent
00:29:26 and cause these disasters.
00:29:28 To be wise is to stop causing them, right?
00:29:31 And that would require a process of restraint,
00:29:36 a process that I don’t see a lot of evidence of yet.
00:29:38 So I think we have to generate it.
00:29:41 And somehow, at the moment,
00:29:45 we don’t have a political structure
00:29:47 that would be capable of taking
00:29:51 a protective algorithm and actually deploying it, right?
00:29:55 Because it would have important economic consequences.
00:29:57 And so it would almost certainly be shot down.
00:30:00 But we can obviously also say,
00:30:03 we paid a huge price for all of the disasters
00:30:07 that I’ve mentioned.
00:30:09 And we have to factor that into the equation.
00:30:12 Something can be very productive short term
00:30:13 and very destructive long term.
00:30:17 Also, the question is how many disasters we avoided
00:30:20 because of the ingenuity of humans
00:30:23 or just the integrity and character of humans.
00:30:28 That’s sort of an open question.
00:30:30 We may be more intelligent than lucky.
00:30:35 That’s the hope.
00:30:36 Because the optimistic message here that you’re getting at
00:30:40 is maybe that, with the right process,
00:30:44 we can overcome luck with ingenuity.
00:30:48 Meaning, I guess you’re suggesting the process is
00:30:51 we should be listing all the ways
00:30:53 that human civilization can destroy itself,
00:30:57 assigning a likelihood to each,
00:30:59 and thinking through how we can avoid that.
00:31:03 And being very honest with the data out there
00:31:06 about the close calls and using those close calls
00:31:10 to then create sort of mechanism
00:31:13 by which we minimize the probability of those close calls.
00:31:17 And just being honest and transparent
00:31:21 with the data that’s out there.
00:31:23 Well, I think we need to do a couple things for it to work.
00:31:27 So I’ve been an advocate for the idea
00:31:30 that sustainability is actually,
00:31:32 it’s difficult to operationalize,
00:31:33 but it is an objective that we have to meet
00:31:35 if we’re to be around long term.
00:31:38 And I realized that we also need to have reversibility
00:31:41 of all of our processes.
00:31:43 Because processes very frequently when they start
00:31:46 do not appear dangerous.
00:31:47 And then when they scale, they become very dangerous.
00:31:51 So for example, if you imagine
00:31:54 the first internal combustion engine vehicle
00:31:58 driving down the street,
00:31:59 and you imagine somebody running after them saying,
00:32:01 hey, if you do enough of that,
00:32:02 you’re gonna alter the atmosphere
00:32:04 and it’s gonna change the temperature of the planet.
00:32:05 It’s preposterous, right?
00:32:07 Why would you stop the person
00:32:08 who’s invented this marvelous new contraption?
00:32:10 But of course, eventually you do get to the place
00:32:13 where you’re doing enough of this
00:32:14 that you do start changing the temperature of the planet.
00:32:17 So if we built the capacity,
00:32:20 if we basically said, look, you can’t involve yourself
00:32:23 in any process that you couldn’t reverse if you had to,
00:32:27 then progress would be slowed,
00:32:30 but our safety would go up dramatically.
00:32:33 And I think in some sense, if we are to be around long term,
00:32:38 we have to begin thinking that way.
00:32:40 We’re just involved in too many very dangerous processes.
00:32:43 So let’s talk about one of the things
00:32:46 that, if it did not threaten human civilization,
00:32:50 certainly hurt it at a deep level, which is COVID-19.
00:32:56 What percent probability would you currently place
00:33:00 on the hypothesis that COVID-19 leaked
00:33:02 from the Wuhan Institute of Virology?
00:33:06 So I maintain a flow chart of all the possible explanations,
00:33:10 and it doesn’t break down exactly that way.
00:33:15 The likelihood that it emerged from a lab is very, very high.
00:33:20 If it emerged from a lab,
00:33:21 the likelihood that the lab was the Wuhan Institute
00:33:23 is very, very high.
00:33:27 There are multiple different kinds of evidence
00:33:30 that point to the lab,
00:33:31 and there is literally no evidence that points to nature.
00:33:35 Either the evidence points nowhere or it points to the lab,
00:33:38 and the lab could mean any lab,
00:33:39 but geographically, obviously,
00:33:41 the labs in Wuhan are the most likely,
00:33:44 and the lab that was most directly involved
00:33:46 with research on viruses that look like COVID,
00:33:50 that look like SARS-CoV-2,
00:33:52 is obviously the place that one would start.
00:33:55 But I would say the likelihood that this virus
00:33:59 came from a lab is well above 95%.
00:34:04 We can talk about the question of could a virus
00:34:06 have been brought into the lab and escaped from there
00:34:08 without being modified.
00:34:09 That’s also possible,
00:34:11 but it doesn’t explain any of the anomalies
00:34:13 in the genome of SARS-CoV-2.
00:34:17 Could it have been delivered from another lab?
00:34:20 Could Wuhan be a distraction
00:34:23 in order that we would connect the dots in the wrong way?
00:34:26 That’s conceivable.
00:34:27 I currently have that below 1% on my flowchart,
00:34:30 but I think…
00:34:31 A very dark thought that somebody would do that
00:34:34 almost as a political attack on China.
00:34:37 Well, it depends.
00:34:39 I don’t even think that’s the only possibility.
00:34:42 Sometimes when Eric and I talk about these issues,
00:34:44 we will generate a scenario just to prove
00:34:48 that something could live in that space, right?
00:34:51 It’s a placeholder for whatever may actually have happened.
00:34:53 And so it doesn’t have to have been an attack on China.
00:34:57 That’s certainly one possibility.
00:34:59 But I would point out,
00:35:01 if you can predict the future in some unusual way
00:35:06 better than others, you can print money, right?
00:35:10 That’s what markets that allow you to bet for
00:35:12 or against virtually any sector allow you to do.
00:35:16 So you can imagine a simply amoral person
00:35:23 or entity generating a pandemic,
00:35:26 attempting to cover their tracks
00:35:28 because it would allow them to bet against things
00:35:30 like cruise ships, air travel, whatever it is,
00:35:35 and bet in favor of, I don’t know,
00:35:39 sanitizing gel and whatever else you would do.
00:35:43 So am I saying that I think somebody did that?
00:35:46 No, I really don’t think it happened.
00:35:47 We’ve seen zero evidence
00:35:49 that this was intentionally released.
00:35:51 However, were it to have been intentionally released
00:35:54 by somebody who did not want it known
00:35:56 where it had come from,
00:35:59 releasing it into Wuhan would be one way
00:36:00 to cover their tracks.
00:36:01 So we have to leave the possibility formally open,
00:36:05 but acknowledge there’s no evidence.
00:36:07 And the probability therefore is low.
00:36:09 I tend to believe maybe this is the optimistic nature
00:36:13 that I have that people who are competent enough
00:36:18 to do the kind of thing we just described
00:36:21 are not going to do that
00:36:23 because it requires a certain kind of,
00:36:26 I don’t wanna use the word evil,
00:36:27 but whatever word you wanna use to describe
00:36:29 the kind of disregard for human life required to do that,
00:36:36 that’s just not going to be coupled with competence.
00:36:40 I feel like there’s a trade off chart
00:36:42 where competence on one axis and evil is on the other.
00:36:45 And the more evil you become,
00:36:48 the crappier you are at doing great engineering,
00:36:52 scientific work required to deliver weapons
00:36:55 of different kinds, whether it’s bioweapons
00:36:57 or nuclear weapons, all those kinds of things.
00:36:59 That seems to be the lessons I take from history,
00:37:02 but that doesn’t necessarily mean
00:37:04 that’s what’s going to be happening in the future.
00:37:08 But to stick on the lab leak idea,
00:37:11 because the flow chart is probably huge here
00:37:13 because there’s a lot of fascinating possibilities.
00:37:16 One question I wanna ask is,
00:37:18 what would evidence for natural origins look like?
00:37:20 So one piece of evidence for natural origins
00:37:25 is that it’s happened in the past
00:37:30 that viruses have jumped.
00:37:33 Oh, they do jump.
00:37:35 So like that’s possible to have happened.
00:37:39 So that’s sort of like historical evidence,
00:37:42 like, okay, well, it’s possible that it have…
00:37:46 It’s not evidence of the kind you think it is.
00:37:48 It’s a justification for a presumption, right?
00:37:52 So the presumption upon discovering
00:37:54 a new virus circulating is certainly
00:37:55 that it came from nature, right?
00:37:57 The problem is the presumption evaporates
00:38:00 in the face of evidence, or at least it logically should.
00:38:04 And it didn’t in this case.
00:38:05 It was maintained by people who privately
00:38:08 in their emails acknowledged that they had grave doubts
00:38:11 about the natural origin of this virus.
00:38:14 Is there some other piece of evidence
00:38:17 that we could look for and see that would say,
00:38:21 this increases the probability that it’s natural origins?
00:38:24 Yeah, in fact, there is evidence.
00:38:27 I always worry that somebody is going to make up
00:38:31 some evidence in order to reverse the flow.
00:38:34 Oh, boy.
00:38:35 Well, let’s say I am…
00:38:36 There’s a lot of incentive for that actually.
00:38:38 There’s a huge amount of incentive.
00:38:39 On the other hand, why didn’t the powers that be,
00:38:43 the powers that lied to us about weapons
00:38:45 of mass destruction in Iraq,
00:38:46 why didn’t they ever fake weapons
00:38:48 of mass destruction in Iraq?
00:38:49 Whatever force it is, I hope that force is here too.
00:38:52 And so whatever evidence we find is real.
00:38:54 It’s the competence thing I’m talking about,
00:38:56 but okay, go ahead, sorry.
00:38:58 Well, we can get back to that.
00:39:00 But I would say, yeah, the giant piece of evidence
00:39:03 that will shift the probabilities in the other direction
00:39:07 is the discovery of either a human population
00:39:10 in which the virus circulated prior to showing up in Wuhan
00:39:14 that would explain where the virus learned all of the tricks
00:39:16 that it knew instantly upon spreading from Wuhan.
00:39:20 So that would do it, or an animal population
00:39:24 in which an ancestor epidemic can be found
00:39:27 in which the virus learned this before jumping to humans.
00:39:30 But I point out in that second case,
00:39:33 you would certainly expect to see a great deal of evolution
00:39:36 in the early epidemic, which we don’t see.
00:39:39 So there almost has to be a human population
00:39:42 somewhere else that had the virus circulate
00:39:44 or an ancestor of the virus that we first saw
00:39:47 in Wuhan circulating.
00:39:48 And it has to have gotten very sophisticated
00:39:50 in that prior epidemic before hitting Wuhan
00:39:54 in order to explain the total lack of evolution
00:39:56 and extremely effective virus that emerged
00:40:00 at the end of 2019.
00:40:01 So you don’t believe in the magic of evolution
00:40:03 to spring up with all the tricks already there?
00:40:05 Like everybody who doesn’t have the tricks,
00:40:07 they die quickly.
00:40:09 And then you just have this beautiful virus
00:40:11 that comes in with a spike protein
00:40:13 and through mutation and selection,
00:40:17 just like the ones that succeed and succeed big
00:40:23 are the ones that are going to just spring into life
00:40:25 with the tricks.
00:40:26 Well, no, that’s called a hopeful monster.
00:40:30 And hopeful monsters don’t work.
00:40:33 The job of becoming a new pandemic virus is too difficult.
00:40:37 It involves two very difficult steps
00:40:39 and they both have to work.
00:40:40 One is the ability to infect a person and spread
00:40:43 in their tissues sufficient to make an infection.
00:40:46 And the other is to jump between individuals
00:40:49 at a sufficient rate that it doesn’t go extinct
00:40:51 for one reason or another.
00:40:53 Those are both very difficult jobs.
00:40:55 They require, as you describe, selection.
00:40:58 And the point is selection would leave a mark.
00:41:00 We would see evidence that it would stay.
00:41:02 In animals or humans, we would see.
00:41:04 Both, right?
00:41:05 And we see this evolutionary trace of the virus
00:41:09 gathering the tricks up.
00:41:10 Yeah, you would see the virus,
00:41:12 you would see the clumsy virus get better and better.
00:41:14 And yes, I am a full believer in the power of that process.
00:41:17 In fact, I believe it.
00:41:19 What I know from studying the process
00:41:22 is that it is much more powerful than most people imagine.
00:41:25 That what we teach in the Evolution 101 textbook
00:41:28 is too clumsy a process to do what we see it doing
00:41:32 and that actually people should increase their expectation
00:41:35 of the rapidity with which that process can produce
00:41:39 just jaw dropping adaptations.
00:41:42 That said, we just don’t see evidence that it happened here
00:41:45 which doesn’t mean it doesn’t exist,
00:41:46 but it means in spite of immense pressure
00:41:49 to find it somewhere, there’s been no hint
00:41:51 which probably means it took place inside of a laboratory.
00:41:55 So inside the laboratory,
00:41:58 gain of function research on viruses.
00:42:00 And I believe most of that kind of research
00:42:04 is doing this exact thing that you’re referring to
00:42:07 which is accelerated evolution
00:42:09 and just watching evolution do its thing
00:42:11 and a bunch of viruses
00:42:12 and seeing what kind of tricks get developed.
00:42:16 The other method is engineering viruses.
00:42:21 So manually adding on the tricks.
00:42:26 Which do you think we should be thinking about here?
00:42:30 So mind you, I learned what I know
00:42:33 in the aftermath of this pandemic emerging.
00:42:35 I started studying the question and I would say
00:42:39 based on the content of the genome and other evidence
00:42:43 in publications from the various labs
00:42:45 that were involved in generating this technology,
00:42:50 a couple of things seem likely.
00:42:52 This SARS-CoV-2 does not appear to be entirely the result
00:42:57 of either a splicing process or serial passaging.
00:43:02 It appears to have both things in its past
00:43:07 or it’s at least highly likely that it does.
00:43:09 So for example, the furin cleavage site
00:43:11 looks very much like it was added into the virus
00:43:15 and it was known that that would increase its infectivity
00:43:18 in humans and increase its tropism.
00:43:22 The virus appears to be excellent
00:43:27 at spreading in humans and minks and ferrets.
00:43:32 Now minks and ferrets are very closely related to each other
00:43:34 and ferrets are very likely to have been used
00:43:36 in a serial passage experiment.
00:43:38 The reason being that they have an ACE2 receptor
00:43:41 that looks very much like the human ACE2 receptor.
00:43:43 And so were you going to passage the virus
00:43:46 or its ancestor through an animal
00:43:49 in order to increase its infectivity in humans,
00:43:51 which would have been necessary,
00:43:53 ferrets would have been very likely.
00:43:55 It is also quite likely
00:43:57 that humanized mice were utilized
00:44:01 and it is possible that human airway tissue was utilized.
00:44:05 I think it is vital that we find out
00:44:07 what the protocols were.
00:44:09 If this came from the Wuhan Institute,
00:44:11 we need to know it
00:44:12 and we need to know what the protocols were exactly
00:44:14 because they will actually give us some tools
00:44:17 that would be useful in fighting SARS-CoV-2
00:44:20 and hopefully driving it to extinction,
00:44:22 which ought to be our priority.
00:44:24 It is a priority that is not
00:44:26 apparent from our behavior,
00:44:28 but it really is, it should be our objective.
00:44:31 If we understood where our interests lie,
00:44:33 we would be much more focused on it.
00:44:36 But those protocols would tell us a great deal.
00:44:39 If it wasn’t the Wuhan Institute, we need to know that.
00:44:42 If it was nature, we need to know that.
00:44:44 And if it was some other laboratory,
00:44:45 we need to figure out what and where
00:44:48 so that we can determine what we can determine
00:44:51 about what was done.
00:44:53 You’re opening up my mind about why we should investigate,
00:44:57 why we should know the truth of the origins of this virus.
00:45:01 So for me personally,
00:45:03 let me just tell the story of my own kind of journey.
00:45:07 When I first started looking into the lab leak hypothesis,
00:45:12 what became terrifying to me
00:45:15 and important to understand and obvious
00:45:19 is the sort of like Sam Harris way of thinking,
00:45:22 which is it’s obvious that a lab leak of a deadly virus
00:45:27 will eventually happen.
00:45:29 My mind was, it doesn’t even matter
00:45:32 if it happened in this case.
00:45:34 It’s obvious that it’s going to happen in the future.
00:45:37 So why the hell are we not freaking out about this?
00:45:40 And COVID-19 is not even that deadly
00:45:42 relative to the possible future viruses.
00:45:45 It’s this, the way I disagree with Sam on this,
00:45:47 but he thinks this way about AGI as well,
00:45:50 not about artificial intelligence.
00:45:52 It’s a different discussion, I think,
00:45:54 but with viruses, it seems like something that could happen
00:45:56 on the scale of years, maybe a few decades.
00:46:00 AGI is a little bit farther out for me,
00:46:02 but it seemed, the terrifying thing,
00:46:04 it seemed obvious that this will happen very soon
00:46:08 for a much deadlier virus as we get better and better
00:46:11 at both engineering viruses
00:46:13 and doing this kind of evolutionary driven research,
00:46:16 gain of function research.
00:46:18 Okay, but then you started speaking out about this as well,
00:46:23 but also started to say, no, no, no,
00:46:25 we should hurry up and figure out the origins now
00:46:27 because it will help us figure out
00:46:29 how to actually respond to this particular virus,
00:46:35 how to treat this particular virus.
00:46:37 What is in terms of vaccines, in terms of antiviral drugs,
00:46:40 in terms of just all the number of responses
00:46:45 that we should have.
00:46:46 Okay, I still am much more freaking out about the future.
00:46:53 Maybe you can break that apart a little bit.
00:46:57 Which are you most focused on now?
00:47:03 Which are you most freaking out about now
00:47:06 in terms of the importance of figuring out
00:47:08 the origins of this virus?
00:47:10 I am most freaking out about both of them
00:47:13 because they’re both really important
00:47:15 and we can put bounds on this.
00:47:18 Let me say first that this is a perfect test case
00:47:20 for the theory of close calls
00:47:22 because as much as COVID is a disaster,
00:47:25 it is also a close call from which we can learn much.
00:47:28 You are absolutely right.
00:47:29 If we keep playing this game in the lab,
00:47:34 especially if we do it under pressure
00:47:36 and when we are told that a virus
00:47:37 is going to leap from nature any day
00:47:40 and that the more we know,
00:47:41 the better we’ll be able to fight it,
00:47:42 we’re gonna create the disaster,
00:47:44 all the sooner.
00:47:46 So yes, that should be an absolute focus.
00:47:49 The fact that there were people saying
00:47:50 that this was dangerous back in 2015
00:47:54 ought to tell us something.
00:47:55 The fact that the system bypassed a ban
00:47:57 and offshored the work to China
00:48:00 ought to tell us this is not a Chinese failure.
00:48:02 This is a failure of something larger and harder to see.
00:48:07 But I also think that there’s a clock ticking
00:48:11 with respect to SARS-CoV-2 and COVID,
00:48:14 the disease that it creates.
00:48:16 And that has to do with whether or not
00:48:18 we are stuck with it permanently.
00:48:20 So if you think about the cost to humanity
00:48:22 of being stuck with influenza,
00:48:24 it’s an immense cost year after year.
00:48:27 And we just stop thinking about it because it’s there.
00:48:30 Some years you get the flu, most years you don’t.
00:48:32 Maybe you get the vaccine to prevent it.
00:48:34 Maybe the vaccine isn’t particularly well targeted.
00:48:37 But imagine just simply doubling that cost.
00:48:40 Imagine we get stuck with SARS-CoV-2
00:48:44 and its descendants going forward
00:48:45 and that it just settles in
00:48:48 and becomes a fact of modern human life.
00:48:51 That would be a disaster, right?
00:48:52 The number of people we will ultimately lose
00:48:54 is incalculable.
00:48:55 The amount of suffering that will be caused is incalculable.
00:48:58 The loss of wellbeing and wealth, incalculable.
00:49:01 So that ought to be a very high priority,
00:49:04 driving this extinct before it becomes permanent.
00:49:08 And the ability to drive it extinct goes down
00:49:12 the longer we delay effective responses.
00:49:15 To the extent that we let it have this very large canvas,
00:49:18 large numbers of people who have the disease
00:49:21 in which mutation and selection can result in adaptation
00:49:25 that we will not be able to counter
00:49:26 the greater its ability to figure out features
00:49:29 of our immune system and use them to its advantage.
00:49:33 So I’m feeling the pressure of driving it extinct.
00:49:37 I believe we could have driven it extinct six months ago
00:49:40 and we didn’t do it because of very mundane concerns
00:49:43 among a small number of people.
00:49:44 And I’m not alleging that they were brazen about it,
00:49:52 or that they were callous about the deaths that would be caused.
00:49:55 I have the sense that they were working
00:49:56 from a kind of autopilot in which you,
00:50:00 let’s say you’re in some kind of a corporation,
00:50:02 a pharmaceutical corporation,
00:50:04 you have a portfolio of therapies
00:50:08 that in the context of a pandemic might be very lucrative.
00:50:11 Those therapies have competitors.
00:50:13 You of course wanna position your product
00:50:15 so that it succeeds and the competitors don’t.
00:50:18 And lo and behold, at some point through means
00:50:22 that I think those of us on the outside
00:50:23 can’t really intuit, you end up saying things
00:50:28 about competing therapies that work better
00:50:30 and much more safely than the ones you’re selling
00:50:33 that aren’t true and do cause people to die
00:50:36 in large numbers.
00:50:38 But it’s some kind of autopilot, at least part of it is.
00:50:43 So there’s a complicated coupling of the autopilot
00:50:47 of institutions, companies, governments.
00:50:53 And then there’s also the geopolitical game theory thing
00:50:57 going on where you wanna keep secrets.
00:51:00 It’s the Chernobyl thing where if you messed up,
00:51:04 there’s a big incentive, I think,
00:51:07 to hide the fact that you messed up.
00:51:10 So how do we fix this?
00:51:12 And what’s more important to fix?
00:51:14 The autopilot, which is the response
00:51:18 that we often criticize about our institutions,
00:51:21 especially the leaders in those institutions,
00:51:23 Anthony Fauci and so on,
00:51:25 some of the members of the scientific community.
00:51:29 And the second part is the game with China
00:51:35 of hiding the information
00:51:37 in terms of on the fight between nations.
00:51:40 Well, in our live streams on Dark Horse,
00:51:42 Heather and I have been talking from the beginning
00:51:44 about the fact that although, yes,
00:51:47 what happened began in China,
00:51:50 it very much looks like a failure
00:51:51 of the international scientific community.
00:51:54 That’s frightening, but it’s also hopeful
00:51:57 in the sense that actually if we did the right thing now,
00:52:01 we’re not navigating a puzzle about Chinese responsibility.
00:52:05 We’re navigating a question of collective responsibility
00:52:10 for something that has been terribly costly to all of us.
00:52:14 So that’s not a very happy process.
00:52:17 But as you point out, what’s at stake
00:52:20 is in large measure at the very least
00:52:22 the strong possibility this will happen again
00:52:24 and that at some point it will be far worse.
00:52:27 So just as a person that does not learn the lessons
00:52:32 of their own errors doesn’t get smarter
00:52:34 and they remain in danger,
00:52:37 we collectively, humanity has to say,
00:52:40 well, there sure is a lot of evidence
00:52:43 that suggests that this is a self-inflicted wound.
00:52:46 When you have done something
00:52:47 that has caused a massive self-inflicted wound,
00:52:51 it makes sense to dwell on it
00:52:55 exactly to the point that you have learned the lesson
00:52:57 that makes it very, very unlikely
00:52:59 that something similar will happen again.
00:53:01 I think this is a good place to kind of ask you
00:53:04 to do almost like a thought experiment
00:53:07 or to steel man the argument against the lab leak hypothesis.
00:53:15 So if you were to argue, you said 95% chance
00:53:20 that the virus leaked from a lab.
00:53:26 There’s a bunch of ways I think you can argue
00:53:29 that even talking about it is bad for the world.
00:53:37 So if I just put something on the table,
00:53:40 it’s to say that, for one,
00:53:44 it would invite racism against Chinese people,
00:53:46 that in talking about it leaking from a lab,
00:53:51 there’s a kind of immediate blame
00:53:53 and it can spiral down into this idea
00:53:56 that somehow the people are responsible for the virus
00:54:00 and this kind of thing.
00:54:02 Is it possible for you to come up
00:54:03 with other steel man arguments against talking
00:54:08 or against the possibility of the lab leak hypothesis?
00:54:12 Well, so I think steel manning is a tool
00:54:16 that is extremely valuable,
00:54:19 but it’s also possible to abuse it.
00:54:22 I think that you can only steel man a good faith argument.
00:54:26 And the problem is we now know
00:54:28 that we have not been engaged with opponents
00:54:31 who were wielding good faith arguments
00:54:32 because privately their emails reflect their own doubts.
00:54:36 And what they were doing publicly was actually a punishment,
00:54:39 a public punishment for those of us who spoke up
00:54:43 with I think the purpose of either backing us down
00:54:46 or more likely warning others
00:54:49 not to engage in the same kind of behavior.
00:54:51 And obviously for people like you and me
00:54:53 who regard science as our likely best hope
00:54:58 for navigating difficult waters,
00:55:01 shutting down people who are using those tools honorably
00:55:05 is itself dishonorable.
00:55:07 So I don’t feel that there’s anything to steel man.
00:55:13 And I also think that immediately at the point
00:55:17 that the world suddenly with no new evidence on the table
00:55:21 switched gears with respect to the lab leak,
00:55:24 at the point that Nicholas Wade had published his article
00:55:26 and suddenly the world was going to admit
00:55:28 that this was at least a possibility, if not a likelihood,
00:55:32 we got to see something of the rationalization process
00:55:36 that had taken place inside the institutional world.
00:55:39 And it very definitely involved the claim
00:55:41 that what was being avoided was the targeting
00:55:45 of Chinese scientists.
00:55:49 And my point would be,
00:55:50 I don’t wanna see the targeting of anyone.
00:55:53 I don’t want to see racism of any kind.
00:55:55 On the other hand, once you create license to lie
00:56:00 in order to protect individuals when the world has a stake
00:56:05 in knowing what happened, then it is inevitable
00:56:08 that that process, that license to lie will be used
00:56:12 by the thing that captures institutions
00:56:14 for its own purposes.
00:56:15 So my sense is it may be very unfortunate
00:56:19 if the story of what happened here
00:56:22 can be used against Chinese people.
00:56:26 That would be very unfortunate.
00:56:27 And as I think I mentioned,
00:56:30 Heather and I have taken great pains to point out
00:56:33 that this doesn’t look like a Chinese failure.
00:56:35 It looks like a failure
00:56:36 of the international scientific community.
00:56:38 So I think it is important to broadcast that message
00:56:41 along with the analysis of the evidence.
00:56:43 But no matter what happened, we have a right to know.
00:56:46 And I frankly do not take the institutional layer
00:56:50 at its word that its motivations are honorable
00:56:53 and that it was protecting good hearted scientists
00:56:56 at the expense of the world.
00:56:58 That explanation does not add up.
00:57:00 Well, this is a very interesting question about
00:57:04 whether it’s ever okay to lie at the institutional layer
00:57:08 to protect the populace.
00:57:12 I think both you and I are probably on the same page,
00:57:18 have the same sense that it’s a slippery slope.
00:57:21 Even if it’s an effective mechanism in the short term,
00:57:25 in the long term, it’s going to be destructive.
00:57:27 This happened with masks.
00:57:30 This happened with other things.
00:57:32 If you look at just the history of pandemics,
00:57:35 there’s an idea that panic is destructive
00:57:40 amongst the populace.
00:57:41 So you want to construct a narrative,
00:57:44 whether it’s a lie or not to minimize panic.
00:57:49 But you’re suggesting that almost in all cases,
00:57:52 and I think that was the lesson from the pandemic
00:57:57 in the early 20th century,
00:57:59 that lying creates distrust
00:58:03 and distrust in the institutions is ultimately destructive.
00:58:08 That’s your sense that lying is not okay?
00:58:10 Well, okay.
00:58:12 There are obviously places where complete transparency
00:58:15 is not a good idea, right?
00:58:17 To the extent that you broadcast a technology
00:58:19 that allows one individual to hold the world hostage,
00:58:24 obviously you’ve got something to be navigated.
00:58:27 But in general, I don’t believe that the scientific system
00:58:32 should be lying to us.
00:58:36 In the case of this particular lie,
00:58:39 the idea that the wellbeing of Chinese scientists
00:58:45 outweighs the wellbeing of the world is preposterous.
00:58:50 Right, as you point out,
00:58:51 one thing that rests on this question
00:58:53 is whether we continue to do this kind of research
00:58:55 going forward.
00:58:56 And the scientists in question, all of them,
00:58:58 American, Chinese, all of them were pushing the idea
00:59:03 that the risk of a zoonotic spillover event
00:59:06 causing a major and highly destructive pandemic
00:59:08 was so great that we had to risk this.
00:59:12 Now, if they themselves have caused it,
00:59:14 and if they are wrong, as I believe they are,
00:59:16 about the likelihood of a major world pandemic
00:59:19 spilling out of nature
00:59:20 in the way that they wrote into their grant applications,
00:59:24 then the danger is the call is coming from inside the house
00:59:28 and we have to look at that.
00:59:31 And yes, whatever we have to do
00:59:33 to protect scientists from retribution, we should do,
00:59:38 but we cannot protect them by lying to the world.
00:59:42 And even worse,
00:59:45 by demonizing people like me, like Josh Rogin,
00:59:54 like Yuri Deigin, the entire DRASTIC group on Twitter,
00:59:58 by demonizing us for simply following the evidence
01:00:02 is to set a terrible precedent, right?
01:00:05 You’re demonizing people for using the scientific method
01:00:08 to evaluate evidence that is available to us in the world.
01:00:11 What a terrible crime it is to teach that lesson, right?
01:00:16 Thou shalt not use scientific tools.
01:00:18 No, I’m sorry.
01:00:19 Whatever your license to lie is, it doesn’t extend to that.
01:00:22 Yeah, I’ve seen the attacks on you,
01:00:25 the pressure on you has a very important effect
01:00:29 on thousands of world class biologists actually.
01:00:36 At MIT, colleagues of mine, people I know,
01:00:40 there’s a slight pressure on them not to,
01:00:44 one, speak publicly and, two, actually think.
01:00:51 Like do you even think about these ideas?
01:00:53 It sounds kind of ridiculous,
01:00:55 but just in the privacy of your own home,
01:00:58 to read things, to think. Many people,
01:01:03 many world class biologists that I know,
01:01:06 will just avoid looking at the data.
01:01:10 There’s not even that many people
01:01:12 that are publicly opposing gain of function research.
01:01:15 They’re also like, it’s not worth it.
01:01:18 It’s not worth the battle.
01:01:20 And there’s many people that kind of argue
01:01:21 that those battles should be fought in private,
01:01:27 with colleagues in the privacy of the scientific community
01:01:31 that the public is maybe somehow not intelligent enough
01:01:35 to be able to deal with the complexities
01:01:38 of this kind of discussion.
01:01:39 I don’t know, but the final result,
01:01:41 combined with the bullying of you
01:01:44 and all the different pressures
01:01:47 in the academic institutions, is that
01:01:49 it’s just people are self censoring
01:01:51 and silencing themselves
01:01:53 and silencing the most important thing,
01:01:55 which is the power of their brains.
01:01:58 Like these people are brilliant.
01:02:01 And the fact that they’re not utilizing their brain
01:02:04 to come up with solutions
01:02:06 outside of the conformist line of thinking is tragic.
01:02:11 Well, it is.
01:02:12 I also think that we have to look at it
01:02:15 and understand it for what it is.
01:02:17 For one thing, it’s kind of a cryptic totalitarianism.
01:02:20 Somehow people’s sense of what they’re allowed
01:02:23 to think about, talk about, discuss
01:02:25 is causing them to self censor.
01:02:27 And I can tell you it’s causing many of them to rationalize,
01:02:30 which is even worse.
01:02:31 They’re blinding themselves to what they can see.
01:02:34 But it is also the case, I believe,
01:02:37 that what you’re describing is what happened:
01:02:40 a great many people understood
01:02:43 that the lab leak hypothesis
01:02:45 could not be taken off the table,
01:02:47 but they didn’t say so publicly.
01:02:48 And I think that their discussions with each other
01:02:52 about why they did not say what they understood,
01:02:55 that’s what capture sounds like on the inside.
01:02:59 I don’t know exactly what force captured the institutions.
01:03:02 I don’t think anybody knows for sure out here in public.
01:03:07 I don’t even know that it wasn’t just simply a process.
01:03:10 But you have these institutions.
01:03:13 They are behaving towards a kind of somatic obligation.
01:03:19 They have lost sight of what they were built to accomplish.
01:03:22 And on the inside, the way they avoid
01:03:26 going back to their original mission
01:03:28 is to say things to themselves,
01:03:30 like the public can’t have this discussion.
01:03:32 It can’t be trusted with it.
01:03:34 Yes, we need to be able to talk about this,
01:03:35 but it has to be private.
01:03:36 Whatever it is they say to themselves,
01:03:38 that is what capture sounds like on the inside.
01:03:40 It’s an institutional rationalization mechanism.
01:03:44 And it’s very, very deadly.
01:03:46 And at the point you go from lab leak to repurposed drugs,
01:03:50 you can see that it’s very deadly in a very direct way.
01:03:54 Yeah, I see this in my field with things
01:03:59 like autonomous weapon systems.
01:04:01 People in AI do not talk about the use of AI
01:04:04 in weapon systems.
01:04:05 They kind of avoid the idea that AI is used
01:04:08 in the military.
01:04:09 It’s kind of funny, there’s this kind of discomfort,
01:04:13 and they all hurry away,
01:04:14 like something scary happens and a bunch of sheep
01:04:17 kind of run away.
01:04:19 That’s what it looks like.
01:04:21 And I don’t even know what to do about it.
01:04:23 And then I feel this natural pull
01:04:26 every time I bring up autonomous weapon systems
01:04:29 to go along with the sheep.
01:04:30 There’s a natural kind of pull towards that direction
01:04:33 because it’s like, what can I do as one person?
01:04:37 Now there’s currently nothing destructive happening
01:04:40 with autonomous weapon systems.
01:04:42 So we’re in the early days of this race
01:04:44 that in 10, 20 years might become a real problem.
01:04:48 Whereas with the discussion we’re having now,
01:04:50 we’re facing the result of that in the space of viruses,
01:04:55 after many years of avoiding the conversations here.
01:05:00 I don’t know what to do about that in the early days,
01:05:03 but I think we have to, I guess, create institutions
01:05:05 where people can stand out.
01:05:08 People can stand out and like basically be individual
01:05:12 thinkers and break out into all kinds of spaces of ideas
01:05:16 that allow us to think freely, freedom of thought.
01:05:19 And maybe that requires a decentralization of institutions.
01:05:22 Well, years ago, I came up with a concept
01:05:26 called cultivated insecurity.
01:05:28 And the idea is, let’s just take the example
01:05:31 of the average Joe, right?
01:05:34 The average Joe has a job somewhere
01:05:37 and their mortgage, their medical insurance,
01:05:42 their retirement, their connection with the economy
01:05:46 is to one degree or another dependent
01:05:49 on their relationship with the employer.
01:05:54 That means that there is a strong incentive,
01:05:57 especially in any industry where it’s not easy to move
01:06:00 from one employer to the next.
01:06:02 There’s a strong incentive to stay
01:06:05 in your employer’s good graces, right?
01:06:07 So it creates a very top down dynamic,
01:06:09 not only in terms of who gets to tell other people
01:06:13 what to do, but it really comes down to
01:06:16 who gets to tell other people how to think.
01:06:18 So that’s extremely dangerous.
01:06:21 The way out of it is to cultivate security
01:06:25 to the extent that somebody is in a position
01:06:28 to go against the grain and have it not be a catastrophe
01:06:32 for their family and their ability to earn,
01:06:34 you will see that behavior a lot more.
01:06:36 So I would argue that some of what you’re talking about
01:06:38 is just a simple predictable consequence
01:06:41 of the concentration of the sources of wellbeing
01:06:48 and that this is a solvable problem.
01:06:51 You got a chance to talk with Joe Rogan yesterday.
01:06:55 Yes, I did.
01:06:56 And I just saw the episode was released
01:06:59 and Ivermectin is trending on Twitter.
01:07:04 Joe told me it was an incredible conversation.
01:07:06 I look forward to listening to it today.
01:07:07 Many people probably, by the time this is released,
01:07:10 have already listened to it.
01:07:13 I think it would be interesting to discuss a postmortem.
01:07:18 How do you feel about how that conversation went?
01:07:21 And maybe broadly, how do you see the story
01:07:25 of Ivermectin as it’s unfolding, from the origins
01:07:30 before COVID 19 through 2020 to today?
01:07:34 I very much enjoyed talking to Joe
01:07:36 and I’m indescribably grateful
01:07:41 that he would take the risk of such a discussion,
01:07:44 that he would, as he described it,
01:07:46 do an emergency podcast on the subject,
01:07:49 which I think that was not an exaggeration.
01:07:52 This needed to happen for various reasons.
01:07:55 He took us down the road of talking about
01:07:59 the censorship campaign against Ivermectin,
01:08:01 which I find utterly shocking,
01:08:04 and talking about the drug itself.
01:08:07 And I should say we talked, we had Pierre Kory available.
01:08:10 He came on the podcast as well.
01:08:12 He is, of course, the face of the FLCCC,
01:08:17 the Frontline COVID 19 Critical Care Alliance.
01:08:20 These are doctors who have innovated ways
01:08:23 of treating COVID patients and they happened on Ivermectin
01:08:26 and have been using it.
01:08:29 And I hesitate to use the word advocating for it
01:08:32 because that’s not really the role of doctors or scientists,
01:08:36 but they are advocating for it in the sense
01:08:38 that there is this pressure not to talk about
01:08:41 its effectiveness for reasons that we can go into.
01:08:44 So maybe step back and say, what is Ivermectin
01:08:48 and how many studies have been done
01:08:52 to show its effectiveness?
01:08:54 So Ivermectin is an interesting drug.
01:08:56 It was discovered in the 70s
01:08:58 by a Japanese scientist named Satoshi Omura
01:09:03 and he found it in soil near a Japanese golf course.
01:09:08 So I would just point out in passing
01:09:10 that if we were to stop self silencing
01:09:12 over the possibility that Asians will be demonized
01:09:17 over the possible lab leak in Wuhan
01:09:20 and to recognize that actually the natural course
01:09:22 of the story has a likely lab leak in China,
01:09:27 it has an unlikely hero in Japan,
01:09:32 the story is naturally not a simple one.
01:09:36 But in any case, Omura discovered this molecule.
01:09:40 He sent it to a friend who was at Merck,
01:09:45 a scientist named Campbell.
01:09:46 They won a Nobel Prize for the discovery
01:09:50 of the Ivermectin molecule in 2015.
01:09:54 Its initial use was in treating parasitic infections.
01:09:58 It’s very effective in treating the worm
01:10:02 that causes river blindness,
01:10:04 the pathogen that causes elephantiasis, scabies.
01:10:08 It’s a very effective anti parasite drug.
01:10:10 It’s extremely safe.
01:10:11 It’s on the WHO’s list of essential medications.
01:10:14 It’s safe for children.
01:10:16 It has been administered something like 4 billion times
01:10:20 in the last four decades.
01:10:22 It has been given away in the millions of doses
01:10:25 by Merck in Africa.
01:10:27 People have been on it for long periods of time.
01:10:30 And in fact, one of the reasons
01:10:32 that Africa may have had less severe impacts from COVID 19
01:10:36 is that Ivermectin is widely used there to prevent parasites
01:10:40 and the drug appears to have a long lasting impact.
01:10:43 So it’s an interesting molecule.
01:10:45 It was discovered some time ago apparently
01:10:49 that it has antiviral properties.
01:10:50 And so it was tested early in the COVID 19 pandemic
01:10:54 to see if it might work to treat humans with COVID.
01:10:58 It turned out to have very promising evidence
01:11:02 that it did treat humans.
01:11:03 It was tested in tissues.
01:11:04 It was tested at a very high dosage, which confuses people.
01:11:08 They think that those of us who believe
01:11:10 that Ivermectin might be useful in confronting this disease
01:11:14 are advocating those high doses, which is not the case.
01:11:17 But in any case, there have been quite a number of studies.
01:11:20 A wonderful meta analysis was finally released.
01:11:23 We had seen it in preprint version,
01:11:25 but it was finally peer reviewed and published this last week.
01:11:29 It reveals that the drug, as the clinicians
01:11:34 who have been using it have been telling us,
01:11:35 is highly effective at treating people with the disease,
01:11:37 especially if you get to them early.
01:11:39 And it showed an 86% effectiveness as a prophylactic
01:11:43 to prevent people from contracting COVID.
01:11:46 And that number, 86%, is high enough
01:11:49 to drive SARS CoV2 to extinction if we wished to deploy it.
01:11:55 First of all, the meta analysis,
01:11:58 is this the Ivermectin for COVID 19
01:12:01 real time meta analysis of 60 studies?
01:12:04 Or there’s a bunch of meta analyses there.
01:12:06 Because I was really impressed by the real time meta analysis
01:12:09 that keeps getting updated.
01:12:11 I don’t know if it’s the same kind of thing.
01:12:12 The one at ivmmeta.com?
01:12:18 Well, I saw it at c19ivermeta.com.
01:12:21 No, this is not that meta analysis.
01:12:24 So that is, as you say, a living meta analysis
01:12:26 where you can watch as evidence rolls in.
01:12:27 Which is super cool, by the way.
01:12:29 It’s really cool.
01:12:29 And they’ve got some really nice graphics
01:12:32 that allow you to understand, well, what is the evidence?
01:12:35 It’s concentrated around this level of effectiveness,
01:12:37 et cetera.
01:12:38 So anyway, it’s a great site, well worth paying attention to.
01:12:40 No, this is a meta analysis.
01:12:43 I don’t know any of the authors but one.
01:12:46 The second author is Tess Lawrie of the BIRD group.
01:12:49 BIRD being a group of analysts and doctors in Britain
01:12:55 that is playing a role similar to the FLCCC here in the US.
01:13:00 So anyway, this is a meta analysis
01:13:02 that Tess Lawrie and others did
01:13:06 of all of the available evidence.
01:13:08 And it’s quite compelling.
01:13:10 People can look for it on my Twitter.
01:13:12 I will put it up and people can find it there.
01:13:15 So what about dose here?
01:13:18 In terms of safety, what do we understand
01:13:22 about the kind of dose required
01:13:23 to have that level of effectiveness?
01:13:26 And what do we understand about the safety
01:13:29 of that kind of dose?
01:13:30 So let me just say, I’m not a medical doctor.
01:13:32 I’m a biologist.
01:13:34 I’m on ivermectin in lieu of vaccination.
01:13:39 In terms of dosage, there is one reason for concern,
01:13:42 which is that the most effective dose for prophylaxis
01:13:45 involves something like weekly administration.
01:13:49 And because that is not a historical pattern of use
01:13:53 for the drug, it is possible
01:13:56 that there is some longterm implication
01:13:58 of being on it weekly for a long period of time.
01:14:02 There’s not a strong indication of that.
01:14:04 The safety signal that we have comes from people using the drug
01:14:07 over many years and using it in high doses.
01:14:10 In fact, Dr. Kory told me yesterday
01:14:13 that there are cases in which people
01:14:15 have made calculation errors
01:14:17 and taken a massive overdose of the drug
01:14:19 and had no ill effect.
01:14:21 So anyway, there’s lots of reasons
01:14:23 to think the drug is comparatively safe,
01:14:24 but no drug is perfectly safe.
01:14:27 And I do worry about the longterm implications
01:14:29 of taking it.
01:14:30 I also think it’s very likely
01:14:32 that because the drug is administered
01:14:37 in a dose something like, let’s say 15 milligrams
01:14:42 for somebody my size once a week
01:14:44 after you’ve gone through the initial double dose
01:14:48 that you take 48 hours apart,
01:14:51 it is apparent that if the amount of drug in your system
01:14:55 is sufficient to be protective at the end of the week,
01:14:58 then it was probably far too high
01:15:00 at the beginning of the week.
01:15:01 So there’s a question about whether or not
01:15:03 you could flatten out the intake
01:15:05 so that the amount of ivermectin goes down,
01:15:09 but the protection remains.
01:15:10 I have little doubt that that would be discovered
01:15:13 if we looked for it.
01:15:15 But that said, it does seem to be quite safe,
01:15:18 highly effective at preventing COVID.
01:15:21 The 86% number is plenty high enough
01:15:23 for us to drive SARS CoV2 to extinction
01:15:27 in light of its R0 number of slightly more than two.
01:15:33 And so why we are not using it is a bit of a mystery.
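A rough sketch of the arithmetic behind that claim, using only the numbers quoted in the conversation (the 86% prophylactic figure and an R0 slightly above two). It assumes the textbook homogeneous-mixing threshold and ignores waning protection, variants, and imperfect uptake, so it is illustrative only.

```python
# Illustrative only: classic herd immunity threshold with the figures quoted above.
# Assumes a homogeneous, well-mixed population and a fixed R0 -- a big simplification.

r0 = 2.1                  # "slightly more than two", as stated in the conversation
efficacy = 0.86           # claimed prophylactic effectiveness quoted above

threshold = 1 - 1 / r0                   # fraction of people who must be protected
coverage_needed = threshold / efficacy   # uptake needed if protection is only 86% effective

print(f"herd immunity threshold: {threshold:.0%}")             # ~52%
print(f"uptake needed at 86% efficacy: {coverage_needed:.0%}")  # ~61%
```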
01:15:36 So even if everything you said now
01:15:39 turns out to be not correct,
01:15:42 it is nevertheless obvious that it’s sufficiently promising
01:15:46 and it always has been in order to merit rigorous
01:15:50 scientific exploration, investigation,
01:15:53 doing a lot of studies and certainly not censoring
01:15:57 the science or the discussion of it.
01:16:00 So before we talk about the various vaccines for COVID 19,
01:16:06 I’d like to talk to you about censorship.
01:16:08 Given everything you’re saying,
01:16:10 why did YouTube and other places
01:16:14 censor discussion of ivermectin?
01:16:19 Well, there’s a question about why they say they did it
01:16:21 and there’s a question about why they actually did it.
01:16:24 Now, it is worth mentioning
01:16:27 that YouTube is part of a consortium.
01:16:31 It is partnered with Twitter, Facebook, Reuters, AP,
01:16:36 Financial Times, Washington Post,
01:16:40 some other notable organizations.
01:16:42 And that this group has appointed itself
01:16:46 the arbiter of truth.
01:16:48 In effect, they have decided to control discussion
01:16:53 ostensibly to prevent the distribution of misinformation.
01:16:57 Now, how have they chosen to do that?
01:16:59 In this case, they have chosen to simply utilize
01:17:03 the recommendations of the WHO and the CDC
01:17:06 and apply them as if they are synonymous
01:17:09 with scientific truth.
01:17:11 Problem, even at their best,
01:17:14 the WHO and CDC are not scientific entities.
01:17:17 They are entities that are about public health.
01:17:20 And public health has, whether it’s right or not,
01:17:24 and I believe it’s not right, I disagree with it,
01:17:26 this self assigned right to lie
01:17:34 that comes from the fact that there is game theory
01:17:36 that works against, for example,
01:17:38 a successful vaccination campaign.
01:17:40 That if everybody else takes a vaccine
01:17:44 and therefore the herd becomes immune through vaccination
01:17:48 and you decide not to take a vaccine,
01:17:50 then you benefit from the immunity of the herd
01:17:52 without having taken the risk.
01:17:55 So people who do best are the people who opt out.
01:17:58 That’s a hazard.
01:17:59 And the WHO and CDC as public health entities
01:18:02 effectively oversimplify stories
01:18:07 in order that that game theory
01:18:11 does not cause a predictable tragedy of the commons.
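The free-rider logic described here can be made concrete with a toy expected-cost calculation. Every number below is invented purely to show the structure of the incentive, not to estimate anything about a real vaccine or disease.

```python
# Toy sketch of the vaccination free-rider problem described above.
# All numbers are made up for illustration; none are epidemiological estimates.

def expected_cost(i_vaccinate, others_coverage,
                  infection_cost=10.0, vaccine_cost=0.5, base_risk=0.3):
    """Expected cost to one individual given the fraction of others who vaccinate."""
    risk = base_risk * (1 - others_coverage)   # infection risk falls as the herd is protected
    if i_vaccinate:
        return vaccine_cost                    # assume full protection at a small fixed cost
    return risk * infection_cost               # otherwise bear the residual infection risk

# Once nearly everyone else is protected, opting out looks individually cheaper:
print(expected_cost(True,  others_coverage=0.95))   # 0.5
print(expected_cost(False, others_coverage=0.95))   # 0.15 -> free riding "wins"
# ...but if everyone reasons that way, coverage collapses and everyone's risk rises:
print(expected_cost(False, others_coverage=0.10))   # 2.7
```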
01:18:15 With that said, once that right to lie exists,
01:18:19 then it turns out to serve the interests of,
01:18:23 for example, pharmaceutical companies,
01:18:25 which have emergency use authorizations
01:18:27 that require that there not be a safe
01:18:28 and effective treatment and have immunity from liability
01:18:31 for harms caused by their product.
01:18:34 So that’s a recipe for disaster, right?
01:18:37 You don’t need to be a sophisticated thinker
01:18:40 about complex systems to see the hazard
01:18:43 of immunizing a company from the harm of its own product
01:18:48 at the same time that that product can only exist
01:18:51 in the market if some other product that works better
01:18:55 somehow fails to be noticed.
01:18:57 So somehow YouTube is doing the bidding of Merck and others.
01:19:02 Whether it knows that that’s what it’s doing,
01:19:03 I have no idea.
01:19:05 I think this may be another case of an autopilot
01:19:08 that thinks it’s doing the right thing
01:19:09 because it’s parroting the corrupt wisdom
01:19:12 of the WHO and the CDC,
01:19:14 but the WHO and the CDC have been wrong again and again
01:19:17 in this pandemic.
01:19:18 And the irony here is that with YouTube coming after me,
01:19:22 well, my channel has been right where the WHO and CDC
01:19:25 have been wrong consistently over the whole pandemic.
01:19:29 So how is it that YouTube is censoring us
01:19:32 because the WHO and CDC disagree with us
01:19:35 when in fact, in past disagreements,
01:19:36 we’ve been right and they’ve been wrong?
01:19:38 There’s so much to talk about here.
01:19:41 So I’ve heard this many times actually
01:19:47 from people on the inside of YouTube and from colleagues
01:19:49 that I’ve talked with: they kind of, in a very casual way,
01:19:55 say their job is simply to slow
01:19:59 or prevent the spread of misinformation.
01:20:03 And they say like, that’s an easy thing to do.
01:20:06 Like to know what is true or not is an easy thing to do.
01:20:11 And so from the YouTube perspective,
01:20:14 I think they basically outsource the task
01:20:21 of knowing what is true or not to public institutions
01:20:25 that on a basic Google search claim
01:20:29 to be the arbiters of truth.
01:20:32 So if you were YouTube, which is exceptionally profitable
01:20:38 and exceptionally powerful in terms of controlling
01:20:43 what people get to see or not, what would you do?
01:20:46 Would you take a stand, a public stand
01:20:49 against the WHO, CDC?
01:20:54 Or would you instead say, you know what?
01:20:57 Let’s open the dam and let any video on anything fly.
01:21:02 What do you do here?
01:21:04 Say you were put, if Brett Weinstein was put in charge
01:21:08 of YouTube for a month in this most critical of times
01:21:13 where YouTube actually has incredible amounts of power
01:21:16 to educate the populace, to give power of knowledge
01:21:20 to the populace such that they can reform institutions.
01:21:24 What would you do?
01:21:25 How would you run YouTube?
01:21:26 Well, unfortunately, or fortunately,
01:21:29 this is actually quite simple.
01:21:32 The founders, the American founders,
01:21:34 settled on a counterintuitive formulation
01:21:37 that people should be free to say anything.
01:21:41 They should be free from the government
01:21:43 blocking them from doing so.
01:21:45 They did not imagine that in formulating that right,
01:21:48 that most of what was said would be of high quality,
01:21:51 nor did they imagine it would be free of harmful things.
01:21:54 What they correctly reasoned was that the benefit
01:21:57 of leaving everything so it can be said exceeds the cost,
01:22:02 which everyone understands to be substantial.
01:22:05 What I would say is they could not have anticipated
01:22:09 the impact, the centrality of platforms
01:22:13 like YouTube, Facebook, Twitter, et cetera.
01:22:16 If they had, they would not have limited
01:22:20 the First Amendment as they did.
01:22:21 They clearly understood that the power of the federal
01:22:24 government was so great that it needed to be limited
01:22:29 by granting explicitly the right of citizens
01:22:32 to say anything.
01:22:34 In fact, YouTube, Twitter, Facebook may be more powerful
01:22:39 in this moment than the federal government
01:22:42 of their worst nightmares could have been.
01:22:44 The power that these entities have to control thought
01:22:47 and to shift civilization is so great
01:22:50 that we need to have those same protections.
01:22:52 It doesn’t mean that harmful things won’t be said,
01:22:54 but it means that nothing has changed
01:22:56 about the cost benefit analysis
01:22:59 of building the right to censor.
01:23:01 So if I were running YouTube,
01:23:03 the limit of what should be allowed
01:23:06 is the limit of the law, right?
01:23:08 If what you are doing is legal,
01:23:10 then it should not be YouTube’s place
01:23:12 to limit what gets said or who gets to hear it.
01:23:15 That is between speakers and audience.
01:23:18 Will harm come from that? Of course it will.
01:23:20 But will net harm come from it?
01:23:22 No, I don’t believe it will.
01:23:24 I believe that allowing everything to be said
01:23:26 does allow a process in which better ideas
01:23:29 do come to the fore and win out.
01:23:31 So you believe that in the end,
01:23:33 when there’s complete freedom to share ideas,
01:23:37 that truth will win out.
01:23:39 So what I’ve noticed, just as a brief side comment,
01:23:44 that certain things become viral
01:23:48 regardless of their truth.
01:23:51 I’ve noticed that things that are dramatic and or funny,
01:23:55 like things that become memes,
01:23:58 don’t have to be grounded in truth.
01:24:00 And so what worries me there
01:24:03 is that we basically maximize for drama
01:24:08 versus maximize for truth in a system
01:24:10 where everything is free.
01:24:12 And that is worrying in the time of emergency.
01:24:16 Well, yes, it’s all worrying in time of emergency,
01:24:18 to be sure.
01:24:19 But I want you to notice that what you’ve happened on
01:24:22 is actually an analog for a much deeper and older problem.
01:24:26 Human beings are the, we are not a blank slate,
01:24:31 but we are the blankest slate that nature has ever devised.
01:24:34 And there’s a reason for that, right?
01:24:35 It’s where our flexibility comes from.
01:24:39 We have effectively, we are robots
01:24:42 in which a large fraction of the cognitive capacity
01:24:47 has been, or of the behavioral capacity,
01:24:50 has been offloaded to the software layer,
01:24:52 which gets written and rewritten over evolutionary time.
01:24:57 That means effectively that much of what we are,
01:25:00 in fact, the important part of what we are
01:25:02 is housed in the cultural layer and the conscious layer
01:25:06 and not in the hardware hard coding layer.
01:25:08 So that layer is prone to make errors, right?
01:25:14 And anybody who’s watched a child grow up
01:25:17 knows that children make absurd errors all the time, right?
01:25:20 That’s part of the process, as we were discussing earlier.
01:25:24 It is also true that as you look across
01:25:26 a field of people discussing things,
01:25:29 a lot of what is said is pure nonsense, it’s garbage.
01:25:33 But the tendency of garbage to emerge
01:25:37 and even to spread in the short term
01:25:39 does not say that over the long term,
01:25:41 what sticks is not the valuable ideas.
01:25:45 So there is a high tendency for novelty
01:25:49 to be created in the cultural space,
01:25:51 but there’s also a high tendency for it to go extinct.
01:25:54 And you have to keep that in mind.
01:25:55 It’s not like the genome, right?
01:25:57 Everything is happening at a much higher rate.
01:25:59 Things are being created, they’re being destroyed.
01:26:01 And I can’t say that, I mean, obviously,
01:26:04 we’ve seen totalitarianism arise many times,
01:26:08 and it’s very destructive each time it does.
01:26:10 So it’s not like, hey, freedom to come up
01:26:13 with any idea you want hasn’t produced a whole lot of carnage.
01:26:16 But the question is, over time,
01:26:18 does it produce more open, fairer, more decent societies?
01:26:23 And I believe that it does.
01:26:24 I can’t prove it, but that does seem to be the pattern.
01:26:27 I believe so as well.
01:26:29 The thing is, in the short term, freedom of speech,
01:26:35 absolute freedom of speech can be quite destructive.
01:26:38 But you nevertheless have to hold on to that,
01:26:42 because in the long term, I think you and I, I guess,
01:26:46 are optimistic in the sense that good ideas will win out.
01:26:51 I don’t know how strongly I believe that it will work,
01:26:54 but I will say I haven’t heard a better idea.
01:26:56 I would also point out that there’s something
01:27:01 very significant in this question of the hubris involved
01:27:06 in imagining that you’re going to improve the discussion
01:27:08 by censoring, which is the majority of concepts
01:27:14 at the fringe are nonsense.
01:27:18 That’s automatic.
01:27:19 But the heterodoxy at the fringe,
01:27:23 which is indistinguishable at the beginning
01:27:25 from the nonsense ideas, is the key to progress.
01:27:30 So if you decide, hey, the fringe is 99% garbage,
01:27:34 let’s just get rid of it, right?
01:27:35 Hey, that’s a strong win.
01:27:36 We’re getting rid of 99% garbage for 1% something or other.
01:27:40 And the point is, yeah, but that 1% something or other
01:27:42 is the key.
01:27:43 You’re throwing out the key.
01:27:45 And so that’s what YouTube is doing.
01:27:48 Frankly, I think at the point that it started censoring
01:27:50 my channel, in the immediate aftermath
01:27:53 of this major reversal over the lab leak,
01:27:56 it should have looked at itself and said,
01:27:57 well, what the hell are we doing?
01:27:59 Who are we censoring?
01:28:00 We’re censoring somebody who was just right, right?
01:28:03 In a conflict with the very same people
01:28:05 on whose behalf we are now censoring, right?
01:28:07 That should have caused them to wake up.
01:28:09 So you said one approach, if you’re on YouTube,
01:28:11 is to basically let all videos go
01:28:15 that do not violate the law.
01:28:16 Well, I should fix that, okay?
01:28:18 I believe that that is the basic principle.
01:28:20 Eric makes an excellent point about the distinction
01:28:23 between ideas and personal attacks,
01:28:26 doxxing, these other things.
01:28:28 So I agree, there’s no value in allowing people
01:28:31 to destroy each other’s lives,
01:28:33 even if there’s a technical legal defense for it.
01:28:36 Now, how you draw that line, I don’t know.
01:28:39 But what I’m talking about is,
01:28:41 yes, people should be free to traffic in bad ideas,
01:28:44 and they should be free to expose that the ideas are bad.
01:28:47 And hopefully that process results
01:28:49 in better ideas winning out.
01:28:50 Yeah, there’s an interesting line between ideas,
01:28:55 like the earth is flat,
01:28:56 which I believe you should not censor.
01:28:59 And then you start to encroach on personal attacks.
01:29:04 So not doxxing, yes, but not even getting to that.
01:29:08 There’s a certain point where it’s like,
01:29:10 that’s no longer ideas, that’s more,
01:29:15 that’s somehow not productive, even if it’s wrong.
01:29:18 It feels like believing the earth is flat
01:29:20 is somehow productive,
01:29:22 because maybe there’s a tiny percent chance it is.
01:29:27 It just feels like personal attacks, it doesn’t,
01:29:31 well, I’m torn on this
01:29:33 because there’s assholes in this world,
01:29:36 there’s fraudulent people in this world.
01:29:37 So sometimes personal attacks are useful to reveal that,
01:29:41 but there’s a line you can cross.
01:29:44 There’s comedy, where people make fun of others.
01:29:48 I think that’s amazing, that’s very powerful,
01:29:50 and that’s very useful, even if it’s painful.
01:29:53 But then there’s like, once it gets to be,
01:29:57 yeah, there’s a certain line,
01:29:58 it’s a gray area where you cross,
01:30:00 where it’s no longer in any possible world productive.
01:30:04 And that’s a really weird gray area
01:30:07 for YouTube to operate in.
01:30:09 And that feels like it should be a crowdsourced thing,
01:30:12 where people vote on it.
01:30:13 But then again, do you trust the majority to vote
01:30:16 on what is crossing the line and not?
01:30:19 I mean, this is where,
01:30:21 this is really interesting on this particular,
01:30:24 like the scientific aspect of this.
01:30:27 Do you think YouTube should take more of a stance,
01:30:30 not censoring, but to actually have scientists
01:30:35 within YouTube having these kinds of discussions,
01:30:39 and then be able to almost speak out in a transparent way,
01:30:42 this is what we’re going to let this video stand,
01:30:45 but here’s all these other opinions.
01:30:47 Almost like take a more active role
01:30:49 in its recommendation system,
01:30:52 in trying to present a full picture to you.
01:30:55 Right now they’re not,
01:30:57 the recommender systems are not human fine tuned.
01:31:01 They’re all based on how you click,
01:31:03 and there are these clustering algorithms.
01:31:05 They’re not taking an active role
01:31:07 on giving you the full spectrum of ideas
01:31:09 in the space of science.
01:31:11 They just censor or not.
01:31:12 Well, at the moment,
01:31:15 it’s gonna be pretty hard to compel me
01:31:17 that these people should be trusted
01:31:18 with any sort of curation or comment
01:31:22 on matters of evidence,
01:31:24 because they have demonstrated
01:31:26 that they are incapable of doing it well.
01:31:29 You could make such an argument,
01:31:30 and I guess I’m open to the idea of institutions
01:31:34 that would look something like YouTube,
01:31:36 that would be capable of offering something valuable.
01:31:39 I mean, and even just the fact of them
01:31:41 literally curating things and putting some videos
01:31:43 next to others implies something.
01:31:47 So yeah, there’s a question to be answered,
01:31:49 but at the moment, no.
01:31:51 At the moment, what it is doing
01:31:53 is quite literally putting not only individual humans
01:31:57 in tremendous jeopardy by censoring discussion
01:32:00 of useful tools and making tools that are more hazardous
01:32:04 than has been acknowledged seem safe, right?
01:32:07 But it is also placing humanity in danger
01:32:10 of a permanent relationship with this pathogen.
01:32:13 I cannot emphasize enough how expensive that is.
01:32:16 It’s effectively incalculable.
01:32:18 If the relationship becomes permanent,
01:32:20 the number of people who will ultimately suffer
01:32:23 and die from it is indefinitely large.
01:32:26 Yeah, currently the algorithm is very rabbit hole driven,
01:32:30 meaning if you click on Flat Earth videos,
01:32:35 that’s all you’re going to be presented with
01:32:38 and you’re not going to be nicely presented
01:32:40 with arguments against the Flat Earth.
01:32:42 And the flip side of that,
01:32:46 if you watch like quantum mechanics videos
01:32:48 or no, general relativity videos,
01:32:50 it’s very rare you’re going to get a recommendation.
01:32:53 Have you considered the Earth is flat?
01:32:54 And I think you should have both.
01:32:57 Same with vaccine.
01:32:58 If you watch videos that present the power and the incredible
01:33:01 biology, genetics, virology behind the vaccine,
01:33:06 you’re rarely going to get videos
01:33:09 from well respected scientific minds
01:33:14 presenting possible dangers of the vaccine.
01:33:16 And the vice versa is true as well,
01:33:19 which is if you’re looking at the dangers of the vaccine
01:33:22 on YouTube, you’re not going to get the highest quality
01:33:25 of videos recommended to you.
01:33:27 And I’m not talking about like manually inserted CDC videos
01:33:30 that are like the most untrustworthy things
01:33:33 you can possibly watch about how everybody
01:33:35 should take the vaccine, it’s the safest thing ever.
01:33:38 No, it’s about incredible, again, MIT colleagues of mine,
01:33:42 incredible biologists, virologists that talk about
01:33:45 the details of how the mRNA vaccines work
01:33:49 and all those kinds of things.
01:33:50 I think maybe this is me with the AI hat on,
01:33:55 is I think the algorithm can fix a lot of this
01:33:58 and YouTube should build better algorithms
01:34:00 and trust that, coupled with complete freedom of speech,
01:34:06 to expand what people are able to think about,
01:34:10 present always varied views,
01:34:12 not balanced in some artificial, hard coded way,
01:34:16 but balanced in a way that’s crowdsourced.
01:34:18 I think that’s an algorithm problem that can be solved
01:34:21 because then you can delegate it to the algorithm
01:34:25 as opposed to this hard code censorship
01:34:29 of basically creating artificial boundaries
01:34:34 on what can and can’t be discussed,
01:34:36 instead creating a full spectrum of exploration
01:34:39 that can be done and trusting the intelligence of people
01:34:43 to do the exploration.
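One way to picture the kind of algorithm being proposed here is a re-ranker that trades predicted engagement off against viewpoint diversity. The sketch below is hypothetical: the viewpoint labels, scores, and weighting are invented for illustration, and nothing here reflects how YouTube’s actual recommender works.

```python
# Hypothetical sketch of a diversity-aware re-ranker: rather than ranking purely by
# predicted engagement, penalize candidates whose viewpoint is already represented
# in the slate. Labels and scores are illustrative, not any platform's real API.

from dataclasses import dataclass

@dataclass
class Candidate:
    title: str
    engagement: float   # predicted watch/click score from a base model
    viewpoint: str      # coarse perspective label (assumed to exist; hard in practice)

def rerank(candidates, slate_size=3, diversity_weight=0.5):
    slate, shown = [], {}
    pool = list(candidates)
    while pool and len(slate) < slate_size:
        # Each repeat of an already-shown viewpoint costs diversity_weight.
        best = max(pool, key=lambda c: c.engagement - diversity_weight * shown.get(c.viewpoint, 0))
        slate.append(best)
        shown[best.viewpoint] = shown.get(best.viewpoint, 0) + 1
        pool.remove(best)
    return slate

videos = [
    Candidate("Mainstream explainer", 0.90, "consensus"),
    Candidate("More of the same argument", 0.88, "consensus"),
    Candidate("A careful counter-argument", 0.60, "heterodox"),
    Candidate("Neutral background piece", 0.55, "neutral"),
]
for v in rerank(videos):
    print(v.title)   # consensus, heterodox, neutral -- a varied slate instead of a rabbit hole
```

The only point of the sketch is that diversity can be an explicit term in the ranking objective rather than an afterthought.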
01:34:45 Well, there’s a lot there.
01:34:47 I would say we have to keep in mind
01:34:49 that we’re talking about a publicly held company
01:34:53 with shareholders and obligations to them
01:34:55 and that that may make it impossible.
01:34:57 And I remember many years ago,
01:35:01 back in the early days of Google,
01:35:03 I remember a sense of terror at the loss of general search.
01:35:10 It used to be that Google, if you searched,
01:35:14 came up with the same thing for everyone
01:35:16 and then it got personalized and for a while
01:35:19 it was possible to turn off the personalization,
01:35:21 which was still not great
01:35:22 because if everybody else is looking
01:35:24 at a personalized search and you can tune into one
01:35:26 that isn’t personalized, that doesn’t tell you
01:35:30 why the world is sounding the way it is.
01:35:33 But nonetheless, it was at least an option.
01:35:34 And then that vanished.
01:35:35 And the problem is I think this is literally deranging us.
01:35:40 That in effect, I mean, what you’re describing
01:35:43 is unthinkable.
01:35:44 It is unthinkable that in the face of a campaign
01:35:48 to vaccinate people in order to reach herd immunity
01:35:51 that YouTube would give you videos on hazards of vaccines
01:35:59 when this, how hazardous the vaccines are,
01:36:02 is an unsettled question.
01:36:04 Why is it unthinkable?
01:36:06 That doesn’t make any sense from a company perspective.
01:36:09 If intelligent people in large amounts are open minded
01:36:16 and are thinking through the hazards
01:36:19 and the benefits of a vaccine, a company should find
01:36:23 the best videos to present what people are thinking about.
01:36:28 Well, let’s come up with a hypothetical.
01:36:30 Okay, let’s come up with a very deadly disease
01:36:34 for which there’s a vaccine that is very safe,
01:36:37 though not perfectly safe.
01:36:40 And we are then faced with YouTube trying to figure out
01:36:43 what to do for somebody searching on vaccine safety.
01:36:47 Suppose it is necessary in order to drive
01:36:50 the pathogen to extinction, something like smallpox,
01:36:53 that people get on board with the vaccine.
01:36:57 But there’s a tiny fringe of people who thinks
01:36:59 that the vaccine is a mind control agent.
01:37:05 So should YouTube direct people to the only claims
01:37:11 against this vaccine, which is that it’s a mind control
01:37:13 agent when in fact the vaccine is very safe,
01:37:20 whatever that means.
01:37:22 If that were the actual configuration of the puzzle,
01:37:25 then YouTube would be doing active harm,
01:37:28 pointing you to this other video potentially.
01:37:33 Now, yes, I would love to live in a world where people
01:37:36 are up to the challenge of sorting that out.
01:37:39 But my basic point would be, if it’s an evidentiary
01:37:42 question, and there is essentially no evidence
01:37:45 that the vaccine is a mind control agent,
01:37:48 and there’s plenty of evidence that the vaccine is safe,
01:37:50 then while you look for this video,
01:37:52 we’re gonna give you this one, puts it on a par, right?
01:37:55 So for the mind that’s tracking how much thought
01:37:59 there is behind it’s safe versus how much thought
01:38:01 there is behind it’s a mind control agent,
01:38:04 it will result in artificially elevating this.
01:38:07 Now in the current case, what we’ve seen is not this at all.
01:38:11 We have seen evidence obscured in order to create
01:38:15 a false story about safety.
01:38:18 And we saw the inverse with ivermectin.
01:38:22 We saw a campaign to portray the drug as more dangerous
01:38:27 and less effective than the evidence
01:38:29 clearly suggested it was.
01:38:30 So we’re not talking about a comparable thing,
01:38:33 but I guess my point is the algorithmic solution
01:38:36 that you point to creates a problem of its own,
01:38:39 which is that it means that the way to get exposure
01:38:42 is to generate something fringy.
01:38:44 If you’re the only thing on some fringe,
01:38:46 then suddenly YouTube would be recommending those things,
01:38:49 and that’s obviously a gameable system at best.
01:38:53 Yeah, but the solution to that,
01:38:54 I know you’re creating a thought experiment,
01:38:57 maybe playing a little bit of a devil’s advocate.
01:39:00 I think the solution to that is not to limit the algorithm
01:39:03 in the case of the super deadly virus.
01:39:05 It’s for the scientists to step up
01:39:08 and become better communicators, more charismatic,
01:39:11 fight the battle of ideas, sort of create better videos.
01:39:16 Like if the virus is truly deadly,
01:39:19 you have a lot more ammunition, a lot more data,
01:39:22 a lot more material to work with
01:39:23 in terms of communicating with the public.
01:39:26 So be better at communicating and stop being,
01:39:30 you have to start trusting the intelligence of people
01:39:33 and also being transparent
01:39:35 and playing the game of the internet,
01:39:37 which is like, what is the internet hungry for?
01:39:40 I believe authenticity. Stop looking like you’re full of shit.
01:39:46 The scientific community,
01:39:47 if there’s any flaw that I currently see,
01:39:50 especially the people that are in public office,
01:39:53 that like Anthony Fauci,
01:39:54 they look like they’re full of shit
01:39:56 and I know they’re brilliant.
01:39:57 Why don’t they look more authentic?
01:39:59 So they’re losing that game
01:40:01 and I think a lot of people observing this entire system now,
01:40:05 younger scientists are seeing this and saying,
01:40:09 okay, if I want to continue being a scientist
01:40:12 in the public eye and I want to be effective at my job,
01:40:16 I’m gonna have to be a lot more authentic.
01:40:18 So they’re learning the lesson,
01:40:19 this evolutionary system is working.
01:40:22 So there’s just a younger generation of minds coming up
01:40:25 that I think will do a much better job
01:40:27 in this battle of ideas
01:40:28 that when the much more dangerous virus comes along,
01:40:32 they’ll be able to be better communicators.
01:40:34 At least that’s the hope.
01:40:36 Using the algorithm to control that is,
01:40:40 I feel like is a big problem.
01:40:41 So you’re going to have the same problem with a deadly virus
01:40:45 as with the current virus
01:40:46 if you let YouTube draw hard lines
01:40:50 by the PR and the marketing people
01:40:52 versus the broad community of scientists.
01:40:56 Well, in some sense you’re suggesting something
01:40:59 that’s close kin to what I was saying
01:41:01 about freedom of expression ultimately
01:41:05 provides an advantage to better ideas.
01:41:07 So I’m in agreement broadly speaking,
01:41:10 but I would also say there’s probably some sort of,
01:41:13 let’s imagine the world that you propose
01:41:15 where YouTube shows you the alternative point of view.
01:41:19 That has the problem that I suggest,
01:41:21 but one thing you could do is you could give us the tools
01:41:24 to understand what we’re looking at, right?
01:41:27 You could give us,
01:41:28 so first of all, there’s something I think myopic,
01:41:32 solipsistic, narcissistic about an algorithm
01:41:37 that serves shareholders by showing you what you want to see
01:41:40 rather than what you need to know, right?
01:41:42 That’s the distinction. Flattering you,
01:41:45 playing to your blind spot
01:41:47 is something that the algorithm will figure out,
01:41:49 but it’s not healthy for us all
01:41:51 to have Google playing to our blind spot.
01:41:53 It’s very, very dangerous.
01:41:54 So what I really want is analytics that allow me
01:41:59 or maybe options and analytics.
01:42:02 The options should allow me to see
01:42:05 what alternative perspectives are being explored, right?
01:42:09 So here’s the thing I’m searching
01:42:10 and it leads me down this road, right?
01:42:12 Let’s say it’s ivermectin, okay?
01:42:14 I find all of this evidence that ivermectin works.
01:42:16 I find all of these discussions
01:42:17 and people talk about various protocols and this and that.
01:42:20 And then I could say, all right, what is the other side?
01:42:24 And I could see who is searching, not as individuals,
01:42:28 but what demographics are searching alternatives.
01:42:32 And maybe you could even combine it
01:42:33 with something Reddit like where effectively,
01:42:37 let’s say that there was a position that, I don’t know,
01:42:40 that a vaccine is a mind control device
01:42:44 and you could have a steel man this argument competition,
01:42:48 effectively, and the answers that steel man it
01:42:51 as well as possible would rise to the top.
01:42:53 And so you could read the top three or four explanations
01:42:56 about why this really credibly is a mind control product.
01:43:01 And you can say, well, that doesn’t really add up.
01:43:03 I can check these three things myself
01:43:05 and they can’t possibly be right, right?
01:43:07 And you could dismiss it.
01:43:08 And then as an argument that was credible,
01:43:10 let’s say plate tectonics before
01:43:12 that was an accepted concept,
01:43:15 you’d say, wait a minute,
01:43:16 there is evidence for plate tectonics.
01:43:19 As crazy as it sounds that the continents
01:43:21 are floating around on liquid,
01:43:23 actually that’s not so implausible.
01:43:26 We’ve got these subduction zones,
01:43:27 we’ve got a geology that is compatible,
01:43:30 we’ve got puzzle piece continents
01:43:31 that seem to fit together.
01:43:33 Wow, that’s a surprising amount of evidence
01:43:35 for that position.
01:43:36 So I’m gonna file some Bayesian probability with it
01:43:39 that’s updated for the fact that actually
01:43:40 the steel man arguments better than I was expecting, right?
01:43:43 So I could imagine something like that
01:43:45 where A, I would love the search to be indifferent
01:43:48 to who’s searching, right?
01:43:49 The solipsistic thing is too dangerous.
01:43:51 So the search could be general,
01:43:53 so we would all get a sense
01:43:54 for what everybody else was seeing too.
01:43:56 And then some layer that didn’t have anything to do
01:43:59 with what YouTube points you to or not,
01:44:01 but allowed you to see, you know,
01:44:04 the general pattern of adherence
01:44:08 to searching for information.
01:44:11 And again, a layer in which those things could be defended.
01:44:14 So you could hear what a good argument sounded like
01:44:17 rather than just hear a caricatured argument.
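The “file some Bayesian probability with it” step can be written out explicitly. A minimal sketch under made-up numbers: treat the strength of the best steel man as evidence and update a prior on the claim accordingly; the prior and likelihoods below are invented purely to show the mechanics.

```python
# Minimal Bayesian update of the kind described above. The prior and likelihoods
# are invented for illustration; only the mechanics matter here.

def posterior(prior, p_evidence_if_true, p_evidence_if_false):
    """Bayes' rule: P(claim | evidence)."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

# Claim: "the continents drift" (the pre-plate-tectonics example from the conversation).
prior = 0.05  # initial skepticism

# A strong steel man (fitting coastlines, subduction zones, matching geology) is much
# likelier to exist if the claim is true than if it is false:
print(posterior(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.1))   # ~0.30

# A weak steel man, nearly as likely either way, barely moves the needle:
print(posterior(prior, p_evidence_if_true=0.3, p_evidence_if_false=0.25))  # ~0.06
```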
01:44:19 Yeah, and also reward people,
01:44:21 creators that have demonstrated
01:44:23 like a track record of open mindedness
01:44:26 and correctness as much as it could be measured
01:44:29 over a long term and sort of,
01:44:33 I mean, a lot of this maps
01:44:36 to incentivizing good longterm behavior,
01:44:41 not immediate kind of dopamine rush kind of signals.
01:44:50 I think ultimately the algorithm on the individual level
01:44:55 should optimize for personal growth,
01:45:00 longterm happiness, just growth intellectually,
01:45:04 growth in terms of lifestyle personally and so on,
01:45:07 as opposed to immediate.
01:45:10 I think that’s going to build a better society,
01:45:12 not even just like truth,
01:45:13 because I think truth is a complicated thing.
01:45:16 It’s more just you growing as a person,
01:45:19 exploring the space of ideas, changing your mind often,
01:45:23 increasing the level to which you’re open minded,
01:45:25 the knowledge base you’re operating from,
01:45:28 the willingness to empathize with others,
01:45:31 all those kinds of things the algorithm should optimize for.
01:45:34 Like creating a better human at the individual level.
01:45:37 I think that’s a great business model
01:45:40 because the person that’s using this tool
01:45:44 will then be happier with themselves for having used it
01:45:47 and will be a lifelong quote unquote customer.
01:45:50 I think it’s a great business model
01:45:53 to make a happy, open minded, knowledgeable,
01:45:57 better human being.
01:45:58 It’s a terrible business model under the current system.
01:46:02 What you want is to build the system
01:46:04 in which it is a great business model.
01:46:05 Why is it a terrible model?
01:46:07 Because it will be decimated by those
01:46:10 who play to the short term.
01:46:12 I don’t think so.
01:46:14 Why?
01:46:15 I mean, I think we’re living it.
01:46:16 We’re living it.
01:46:17 Well, no, because if you have the alternative
01:46:19 that presents itself,
01:46:21 it points out the emperor has no clothes.
01:46:24 I mean, it points out that YouTube is operating in this way,
01:46:27 Twitter is operating in this way,
01:46:29 Facebook is operating in this way.
01:46:30 How long term would you like the wisdom to prove out?
01:46:35 Well, even a week is better when it’s currently happening.
01:46:40 Right, but the problem is,
01:46:42 if a week loses out to an hour, right?
01:46:45 And I don’t think it loses out.
01:46:48 It loses out in the short term.
01:46:49 That’s my point.
01:46:50 At least you’re a great communicator
01:46:52 and you basically say, look, here’s the metrics.
01:46:55 And a lot of it is like how people actually feel.
01:46:59 Like this is what people experience with social media.
01:47:02 They look back at the previous month and say,
01:47:06 I felt shitty on a lot of days because of social media.
01:47:09 Right.
01:47:11 If you look back at the previous few weeks and say,
01:47:14 wow, I’m a better person because of that month happened.
01:47:18 That’s, they immediately choose the product
01:47:20 that’s going to lead to that.
01:47:22 That’s what love for products looks like.
01:47:24 If you love, like a lot of people love their Tesla car,
01:47:28 like that’s, or iPhone or like beautiful design.
01:47:31 That’s what love looks like.
01:47:33 You look back, I’m a better person
01:47:35 for having used this thing.
01:47:36 Well, you got to ask yourself the question though,
01:47:38 if this is such a great business model,
01:47:40 why isn’t it evolving?
01:47:42 Why don’t we see it?
01:47:44 Honestly, it’s competence.
01:47:46 It’s like people are just, it’s not easy to build new,
01:47:50 it’s not easy to build products, tools, systems
01:47:55 on new ideas.
01:47:57 It’s kind of a new idea.
01:47:59 We’ve gone through this, everything we’re seeing now
01:48:02 comes from the ideas of the initial birth of the internet.
01:48:06 There just needs to be new sets of tools
01:48:08 that are incentivizing long term personal growth
01:48:12 and happiness.
01:48:13 That’s it.
01:48:14 Right, but what we have is a market
01:48:16 that doesn’t favor this, right?
01:48:18 I mean, for one thing, we had an alternative to Facebook,
01:48:23 right, one where you owned your own data,
01:48:25 it wasn’t exploitative, and Facebook bought
01:48:29 a huge interest in it and it died.
01:48:32 I mean, who do you know who’s on Diaspora?
01:48:34 The execution there was not good.
01:48:37 Right, but it could have gotten better, right?
01:48:40 I don’t think that the argument of why hasn’t somebody
01:48:43 done it yet is a good argument that it’s not going to completely
01:48:47 destroy all of Twitter and Facebook when somebody does do it,
01:48:51 or that Twitter will catch up and pivot to that kind of algorithm.
01:48:54 This is not what I’m saying.
01:48:56 There’s obviously great ideas that remain unexplored
01:48:59 because nobody has gotten to the foothill
01:49:01 that would allow you to explore them.
01:49:03 That’s true, but you know, an internet
01:49:05 that was non predatory is an obvious idea
01:49:08 and many of us know that we want it
01:49:10 and many of us have seen prototypes of it
01:49:13 and we don’t move because there’s no audience there.
01:49:15 So the network effects cause you to stay
01:49:17 with the predatory internet.
01:49:19 But let me just say, I wasn’t kidding about building the system
01:49:24 in which your idea is a great business plan.
01:49:28 So in our upcoming book, Heather and I in our last chapter
01:49:32 explore something called the fourth frontier
01:49:34 and the fourth frontier has to do with sort of a 2.0 version
01:49:38 of civilization, which we freely admit
01:49:40 we can’t tell you very much about.
01:49:42 It’s something that would have to be,
01:49:44 we would have to prototype our way there.
01:49:45 We would have to effectively navigate our way there.
01:49:48 But the result would be very much
01:49:49 like what you’re describing.
01:49:51 It would be something that effectively liberates humans
01:49:54 meaningfully and most importantly,
01:49:57 it has to feel like growth without depending on growth.
01:50:02 In other words, human beings are creatures
01:50:05 that, like every other creature,
01:50:07 are effectively looking for growth, right?
01:50:09 We are looking for underexploited
01:50:11 or unexploited opportunities and when we find them,
01:50:14 our ancestors, for example, happened into a new valley
01:50:18 that was unexplored by people.
01:50:20 Their population would grow until it hit carrying capacity.
01:50:23 So there would be this great feeling of there’s abundance
01:50:25 until you hit carrying capacity, which is inevitable
01:50:27 and then zero sum dynamics would set in.
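The valley story here is essentially logistic growth: abundance while the population is far below carrying capacity, zero-sum dynamics once it arrives. A minimal sketch of that dynamic in Python, with arbitrary placeholder parameters rather than estimates of anything real:

```python
# Toy discrete logistic growth: per-capita surplus shrinks toward zero
# as the population approaches the carrying capacity K.
# r and K are illustrative placeholders only.

r, K = 0.3, 1_000.0  # growth rate and carrying capacity (arbitrary)
population = 10.0

for generation in range(40):
    growth = r * population * (1 - population / K)
    population += growth
    if generation % 5 == 0:
        print(f"gen {generation:2d}: pop {population:7.1f}, "
              f"per-capita surplus {growth / population:.3f}")
```

The felt "abundance" corresponds to the per-capita surplus, which is large early on and vanishes as the population saturates, which is where the zero-sum dynamics described above take over.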
01:50:30 So in order for human beings to flourish long term,
01:50:34 the way to get there is to satisfy the desire for growth
01:50:37 without hooking it to actual growth,
01:50:39 which only moves in fits and starts.
01:50:42 And this is actually, I believe the key
01:50:45 to avoiding these spasms of human tragedy
01:50:48 when in the absence of growth,
01:50:50 people do something that causes their population
01:50:54 to experience growth, which is they go and make war on
01:50:57 or commit genocide against some other population,
01:50:59 which is something we obviously have to stop.
01:51:02 By the way, this is A Hunter-Gatherer’s Guide
01:51:06 to the 21st Century, coauthored.
01:51:08 That’s right.
01:51:09 With your wife, Heather, being released in September.
01:51:11 I believe you said you’re going to do
01:51:13 a little bit of a preview videos on each chapter
01:51:16 leading up to the release.
01:51:17 So I’m looking forward to the last chapter
01:51:19 as well as all the previous ones.
01:51:23 I have a few questions on that.
01:51:24 So, to clarify, you generally have faith that technology
01:51:30 could be the thing that empowers this kind of future.
01:51:36 Well, if you just let technology evolve,
01:51:40 it’s going to be our undoing, right?
01:51:43 One of the things that I fault my libertarian friends for
01:51:48 is this faith that the market is going to find solutions
01:51:51 without destroying us.
01:51:52 And my sense is I’m a very strong believer in markets.
01:51:56 I believe in their power
01:51:57 even above some market fundamentalists.
01:52:00 But what I don’t believe is that they should be allowed
01:52:03 to plot our course, right?
01:52:06 Markets are very good at figuring out how to do things.
01:52:09 They are not good at all about figuring out
01:52:12 what we should do, right?
01:52:13 What we should want.
01:52:14 We have to tell markets what we want
01:52:16 and then they can tell us how to do it best.
01:52:19 And if we adopted that kind of pro market
01:52:22 but in a context where it’s not steering,
01:52:25 where human wellbeing is actually the driver,
01:52:28 we can do remarkable things.
01:52:30 And the technology that emerges
01:52:32 would naturally be enhancing of human wellbeing.
01:52:35 Perfectly so?
01:52:36 No, but overwhelmingly so.
01:52:38 But at the moment, markets are finding
01:52:40 every defect in our character and exploiting them
01:52:43 and making huge profits
01:52:44 and making us worse to each other in the process.
01:52:49 Before we leave COVID 19,
01:52:52 let me ask you about a very difficult topic,
01:52:57 which is the vaccines.
01:53:00 So I took the Pfizer vaccine, the two shots.
01:53:05 You did not.
01:53:07 You have been taking ivermectin.
01:53:10 Yep.
01:53:12 So one of the arguments
01:53:15 against the discussion of ivermectin
01:53:17 is that it prevents people
01:53:21 from being fully willing to get the vaccine.
01:53:24 How would you compare ivermectin
01:53:27 and the vaccine for COVID 19?
01:53:31 All right, that’s a good question.
01:53:33 I would say, first of all,
01:53:34 there are some hazards with the vaccine
01:53:37 that people need to be aware of.
01:53:38 There are some things that we cannot rule out
01:53:41 and for which there is some evidence.
01:53:44 The two that I think people should be tracking
01:53:46 are the possibility, some would say a likelihood,
01:53:50 that a vaccine of this nature,
01:53:53 that is to say very narrowly focused on a single antigen,
01:53:58 is an evolutionary pressure
01:54:02 that will drive the emergence of variants
01:54:05 that will escape the protection
01:54:06 that comes from the vaccine.
01:54:08 So this is a hazard.
01:54:11 It is a particular hazard in light of the fact
01:54:14 that these vaccines have a substantial number
01:54:16 of breakthrough cases.
01:54:18 So one danger is that a person who has been vaccinated
01:54:22 will shed viruses that are specifically less visible
01:54:27 or invisible to the immunity created by the vaccines.
01:54:31 So we may be creating the next pandemic
01:54:34 by applying the pressure of vaccines
01:54:37 at a point that it doesn’t make sense to.
01:54:40 The other danger has to do with something called
01:54:42 antibody dependent enhancement,
01:54:45 which is something that we see in certain diseases
01:54:47 like dengue fever.
01:54:48 You may know that dengue, one gets a case,
01:54:51 and then their second case is much more devastating.
01:54:54 So break bone fever is when you get your second case
01:54:57 of dengue, and dengue effectively utilizes
01:55:00 the immune response that is produced by prior exposure
01:55:04 to attack the body in ways that it is incapable
01:55:06 of doing before exposure.
01:55:08 So this is apparently, this pattern has apparently blocked
01:55:12 past efforts to make vaccines against coronaviruses.
01:55:17 Whether it will happen here or not,
01:55:19 it is still too early to say.
01:55:20 But before we even get to the question
01:55:22 of harm done to individuals by these vaccines,
01:55:26 we have to ask about what the overall impact is going to be.
01:55:29 And it’s not clear in the way people think it is
01:55:32 that if we vaccinate enough people, the pandemic will end.
01:55:35 It could be that we vaccinate people
01:55:37 and make the pandemic worse.
01:55:38 And while nobody can say for sure
01:55:40 that that’s where we’re headed,
01:55:42 it is at least something to be aware of.
01:55:43 So don’t vaccines usually create
01:55:46 that kind of evolutionary pressure
01:55:48 to create deadlier, different strains of the virus?
01:55:55 So is there something particular with these mRNA vaccines
01:55:58 that’s uniquely dangerous in this regard?
01:56:01 Well, it’s not even just the mRNA vaccines.
01:56:03 The mRNA vaccines and the adenovector DNA vaccine
01:56:07 all share the same vulnerability,
01:56:09 which is they are very narrowly focused
01:56:11 on one subunit of the spike protein.
01:56:14 So that is a very concentrated evolutionary signal.
01:56:18 We are also deploying it in mid pandemic
01:56:20 and it takes time for immunity to develop.
01:56:23 So part of the problem here,
01:56:25 if you inoculated a population before encounter
01:56:29 with a pathogen, then there might be substantially
01:56:32 enough immunity to prevent this phenomenon from happening.
01:56:37 But in this case, we are inoculating people
01:56:40 as they are encountering those who are sick with the disease.
01:56:43 And what that means is the disease is now faced
01:56:47 with a lot of opportunities
01:56:48 to effectively evolutionarily practice escape strategies.
01:56:52 So one thing is the timing,
01:56:54 the other thing is the narrow focus.
01:56:56 Now in a traditional vaccine,
01:56:58 you would typically not have one antigen, right?
01:57:01 You would have basically a virus full of antigens
01:57:04 and the immune system would therefore
01:57:06 produce a broader response.
01:57:08 So that is the case for people who have had COVID, right?
01:57:11 They have an immunity that is broader
01:57:13 because it wasn’t so focused
01:57:14 on one part of the spike protein.
01:57:17 So anyway, there is something unique here.
01:57:19 So these platforms create that special hazard.
01:57:21 They also have components that we haven’t used before
01:57:25 in people.
01:57:26 So for example, the lipid nanoparticles
01:57:28 that coat the RNAs are distributing themselves
01:57:32 around the body in a way that will have unknown consequences.
01:57:37 So anyway, there’s reason for concern.
01:57:40 Is it possible for you to steel man the argument
01:57:45 that everybody should get vaccinated?
01:57:48 Of course.
01:57:49 The argument that everybody should get vaccinated
01:57:51 is that nothing is perfectly safe.
01:57:54 Phase three trials showed good safety for the vaccines.
01:57:59 Now that may or may not be actually true,
01:58:01 but what we saw suggested a high degree of efficacy
01:58:05 and a high degree of safety for the vaccines;
01:58:09 that inoculating people quickly,
01:58:11 and therefore dropping the landscape of available victims
01:58:15 for the pathogen to a very low number
01:58:19 so that herd immunity drives it to extinction,
01:58:22 requires us all to take our share of the risk;
01:58:25 and that because driving it to extinction
01:58:30 should be our highest priority, really
01:58:32 people shouldn’t think too much about the various nuances,
01:58:36 because overwhelmingly fewer people will die
01:58:39 from the vaccine if the population is vaccinated
01:58:43 than will die from COVID if they’re not vaccinated.
01:58:45 And with the vaccine as it currently is being deployed,
01:58:48 that is quite a likely scenario,
01:58:51 that, you know, the virus will fade away.
01:58:58 In the following sense that the probability
01:59:01 that a more dangerous strain will be created is nonzero,
01:59:05 but it’s not 50%, it’s something smaller.
01:59:10 And so the most likely, well, I don’t know,
01:59:11 maybe you disagree with that,
01:59:12 but the scenario we’re most likely to see now
01:59:15 that the vaccine is here is that the virus,
01:59:19 the effects of the virus will fade away.
01:59:21 First of all, I don’t believe that the probability
01:59:23 of creating a worse pandemic is low enough to discount.
01:59:27 I think the probability is fairly high
01:59:29 and frankly, we are seeing a wave of variants
01:59:32 that we will have to do a careful analysis
01:59:37 to figure out what exactly that has to do
01:59:39 with campaigns of vaccination,
01:59:40 where they have been, where they haven’t been,
01:59:42 where the variants emerged from.
01:59:43 But I believe that what we are seeing is a disturbing pattern
01:59:47 that reflects that those who were advising caution
01:59:50 may well have been right.
01:59:51 The data here, by the way, and this is a small tangent, is terrible.
01:59:55 Terrible, right.
01:59:56 And why is it terrible is another question, right?
01:59:59 This is where I started getting angry.
02:00:01 Yes.
02:00:02 It’s like, there’s an obvious opportunity
02:00:04 for exceptionally good data, for exceptionally rigorous,
02:00:07 like even the self, like the website for self reporting,
02:00:10 side effects for, not side effects,
02:00:12 but negative effects, right?
02:00:14 Adverse events.
02:00:15 Adverse events, sorry, for the vaccine.
02:00:18 Like, there’s many things I could say
02:00:20 from both the study perspective,
02:00:22 but mostly, let me just put on my hat of like HTML
02:00:27 and like web design.
02:00:29 Like, it’s like the worst website.
02:00:32 It makes it so unpleasant to report.
02:00:34 It makes it so unclear what you’re reporting.
02:00:37 If somebody actually has serious effect,
02:00:38 like if you have very mild effects,
02:00:40 what are the incentives for you to even use
02:00:43 that crappy website with many pages and forms
02:00:46 that don’t make any sense?
02:00:47 If you have adverse effects,
02:00:49 what are the incentives for you to use that website?
02:00:53 What is the trust that you have
02:00:55 that this information will be used well?
02:00:56 All those kinds of things.
02:00:58 And the data about who’s getting vaccinated,
02:01:01 anonymized data about who’s getting vaccinated,
02:01:04 where, when, with what vaccine,
02:01:06 coupled with the adverse effects,
02:01:09 all of that we should be collecting.
02:01:10 Instead, we’re completely not.
02:01:13 We’re doing it in a crappy way
02:01:14 and using that crappy data to make conclusions
02:01:18 that you then twist.
02:01:19 You’re basically collecting in a way
02:01:21 that can arrive at whatever conclusions you want.
02:01:25 And the data is being collected by the institutions,
02:01:29 by governments, and so therefore,
02:01:31 it’s obviously they’re going to try
02:01:33 to construct any kind of narratives they want
02:01:35 based on this crappy data.
02:01:36 Reminds me of much of psychology, the field that I love,
02:01:39 but is flawed in many fundamental ways.
02:01:42 So rant over, but coupled with the dangers
02:01:46 that you’re speaking to,
02:01:47 we don’t have even the data to understand the dangers.
02:01:52 Yeah, I’m gonna pick up on your rant and say,
02:01:55 estimates of the degree of underreporting in VAERS
02:02:00 suggest that what gets reported is somewhere between 10% and 100% of the real number.
02:02:05 And that’s the system for reporting.
02:02:08 Yeah, the VAERS system is the system
02:02:10 for reporting adverse events.
02:02:11 So in the US, we have over 5,000 unexpected deaths
02:02:18 that seem, in their timing, to be associated with vaccination.
02:02:22 That is an undercount, almost certainly,
02:02:24 and by a large factor.
02:02:27 We don’t know how large.
02:02:29 I’ve seen estimates, 25,000 dead in the US alone.
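A back-of-the-envelope sketch, in Python, of the scaling being described here, using only the figures quoted in this conversation (roughly 5,000 reported deaths and an assumed reporting fraction somewhere between 10% and 100%); these are the speakers' numbers, not verified data, and the sketch only makes the arithmetic explicit.

```python
# Illustrative only: scale a reported count by an assumed reporting fraction.
# The 5,000 figure and the 10%-100% range are the numbers quoted above.

reported_deaths = 5_000  # adverse-event deaths reported to VAERS, as quoted

for reporting_fraction in (0.10, 0.20, 0.50, 1.00):
    implied_true_count = reported_deaths / reporting_fraction
    print(f"reporting fraction {reporting_fraction:.0%} -> "
          f"implied count {implied_true_count:,.0f}")
```

On these assumptions, the 25,000 figure mentioned corresponds to a reporting fraction of roughly 20%; nothing in the sketch validates either the reported count or the assumed fraction.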
02:02:34 Now, you can make the argument that, okay,
02:02:37 that’s a large number,
02:02:39 but the necessity of immunizing the population
02:02:42 to drive SARS-CoV-2 to extinction
02:02:45 is such that it’s an acceptable number.
02:02:47 But I would point out
02:02:48 that that actually does not make any sense.
02:02:51 And the reason it doesn’t make any sense
02:02:52 is actually there are several reasons.
02:02:54 One, if that was really your point,
02:02:57 that yes, many, many people are gonna die,
02:02:59 but many more will die if we don’t do this.
02:03:02 Were that your approach,
02:03:05 you would not be inoculating people who had had COVID 19,
02:03:08 which is a large population.
02:03:10 There’s no reason to expose those people to danger.
02:03:13 Their risk of adverse events
02:03:14 in the case that they have them is greater.
02:03:18 So there’s no reason that we would be allowing
02:03:20 those people to face a risk of death
02:03:22 if this was really about an acceptable number of deaths
02:03:25 arising out of this set of vaccines.
02:03:29 I would also point out
02:03:30 there’s something incredibly bizarre.
02:03:32 And I struggle to find language that is strong enough
02:03:37 for the horror of vaccinating children in this case
02:03:43 because children suffer a greater risk of long-term effects
02:03:48 because they are going to live longer.
02:03:49 And because this is earlier in their development,
02:03:51 therefore it impacts systems that are still forming.
02:03:55 They tolerate COVID well.
02:03:57 And so the benefit to them is very small.
02:04:01 And so the only argument for doing this
02:04:04 is that they may cryptically be carrying more COVID
02:04:06 than we think, and therefore they may be integral
02:04:09 to the way the virus spreads to the population.
02:04:11 But if that’s the reason that we are inoculating children,
02:04:14 and there has been some revision in the last day or two
02:04:16 about the recommendation on this
02:04:17 because of the adverse events
02:04:19 that have shown up in children,
02:04:20 but to the extent that we were vaccinating children,
02:04:24 we were doing it to protect old, infirm people
02:04:28 who are the most likely to succumb to COVID 19.
02:04:32 What society puts children in danger,
02:04:37 robs children of life to save old, infirm people?
02:04:40 That’s upside down.
02:04:43 So there’s something about the way we are going about
02:04:46 vaccinating, who we are vaccinating,
02:04:48 what dangers we are pretending don’t exist
02:04:52 that suggests that to some set of people,
02:04:55 vaccinating people is a good in and of itself,
02:04:58 that that is the objective of the exercise,
02:05:00 not herd immunity.
02:05:01 And the last thing, and I’m sorry,
02:05:03 I don’t wanna prevent you from jumping in here,
02:05:05 but the second reason, in addition to the fact
02:05:07 that we’re exposing people to danger
02:05:09 that we should not be exposing them to.
02:05:11 By the way, as a tiny tangent,
02:05:13 another huge part of this soup
02:05:16 that should have been part of it
02:05:17 that’s an incredible solution is large scale testing.
02:05:20 Mm hmm.
02:05:22 But that might be another couple hour conversation,
02:05:26 but there’s these solutions that are obvious
02:05:28 that were available from the very beginning.
02:05:30 So you could argue that ivermectin is not that obvious,
02:05:34 but maybe the whole point is you have aggressive,
02:05:38 very fast research that leads to a meta analysis
02:05:43 and then large scale production and deployment.
02:05:46 Okay, at least that possibility
02:05:49 should be seriously considered,
02:05:51 coupled with a serious consideration
02:05:53 of large scale deployment of testing,
02:05:55 at home testing that could have accelerated
02:06:00 the speed at which we reached that herd immunity.
02:06:07 But I don’t even wanna.
02:06:08 Well, let me just say, I am also completely shocked
02:06:11 that we did not get on high quality testing early
02:06:15 and that we are still suffering from this even now,
02:06:19 because just the simple ability to track
02:06:21 where the virus moves between people
02:06:23 would tell us a lot about its mode of transmission,
02:06:26 which would allow us to protect ourselves better.
02:06:28 Instead, that information was hard won
02:06:32 and for no good reason.
02:06:33 So I also find this mysterious.
02:06:35 You’ve spoken with Eric Weinstein, your brother,
02:06:39 on his podcast, The Portal,
02:06:41 about the ideas that eventually led to the paper
02:06:45 you published titled, The Reserve-Capacity Hypothesis.
02:06:50 I think first, can you explain this paper
02:06:56 and the ideas that led up to it?
02:06:59 Sure, easier to explain the conclusion of the paper.
02:07:05 There’s a question about why a creature
02:07:08 that can replace its cells with new cells
02:07:11 grows feeble and inefficient with age.
02:07:14 We call that process, which is otherwise called aging,
02:07:18 senescence.
02:07:20 And senescence, in this paper, it is hypothesized,
02:07:26 is the unavoidable downside of a cancer prevention
02:07:32 feature of our bodies.
02:07:36 That each cell has a limit on the number of times
02:07:39 it can divide.
02:07:40 There are a few cells in the body that are exceptional,
02:07:42 but most of our cells can only divide
02:07:45 a limited number of times.
02:07:46 That’s called the Hayflick limit.
02:07:47 And the Hayflick limit reduces the ability
02:07:52 of the organism to replace tissues.
02:07:55 It therefore results in a failure over time
02:07:58 of maintenance and repair.
02:08:01 And that explains why we become decrepit as we grow old.
02:08:06 The question was why would that be,
02:08:09 especially in light of the fact that the mechanism
02:08:12 that seems to limit the ability of cells to reproduce
02:08:16 is something called a telomere.
02:08:18 A telomere is, it’s not a gene, but it’s a DNA sequence
02:08:22 at the ends of our chromosomes
02:08:24 that is just simply repetitive.
02:08:26 And the number of repeats functions like a counter.
02:08:30 So there’s a number of repeats that you have
02:08:33 after development is finished.
02:08:34 And then each time the cell divides a little bit
02:08:36 of telomere is lost.
02:08:37 And at the point that the telomere becomes critically short,
02:08:40 the cell stops dividing even though it still has
02:08:42 the capacity to do so.
02:08:44 It stops dividing and starts transcribing different genes
02:08:47 than it did when it had more telomere.
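A minimal sketch of the counter mechanism just described, with arbitrary placeholder numbers (real telomere repeat counts and Hayflick limits vary by cell type and species); it captures only the counting, not the change in which genes are transcribed once the limit is reached:

```python
# Toy model of a telomere acting as a division counter.
# Numbers are placeholders, not biological measurements.

class Cell:
    CRITICAL_LENGTH = 5  # below this repeat count the cell stops dividing

    def __init__(self, telomere_repeats=60):
        self.telomere_repeats = telomere_repeats

    def can_divide(self):
        return self.telomere_repeats > self.CRITICAL_LENGTH

    def divide(self):
        """Each division costs a bit of telomere; the daughter inherits the shorter count."""
        if not self.can_divide():
            raise RuntimeError("Hayflick limit reached: cell is senescent")
        self.telomere_repeats -= 1
        return Cell(self.telomere_repeats)

# Count how many divisions this lineage gets before hitting the limit.
cell, divisions = Cell(), 0
while cell.can_divide():
    cell = cell.divide()
    divisions += 1
print(f"divisions before senescence: {divisions}")
```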
02:08:50 So what my work did was it looked at the fact
02:08:53 that the telomeric shortening was being studied
02:08:56 by two different groups.
02:08:57 It was being studied by people who were interested
02:09:00 in counteracting the aging process.
02:09:03 And it was being studied in exactly the opposite fashion
02:09:06 by people who were interested in tumorigenesis and cancer.
02:09:10 The thought being because it was true that when one looked
02:09:13 into tumors, they always had telomerase active.
02:09:16 That’s the enzyme that lengthens our telomeres.
02:09:19 So those folks were interested in bringing about a halt
02:09:24 to the lengthening of telomeres
02:09:25 in order to counteract cancer.
02:09:27 And the folks who were studying the senescence process
02:09:30 were interested in lengthening telomeres
02:09:32 in order to generate greater repair capacity.
02:09:35 And my point was evolutionarily speaking,
02:09:38 this looks like a pleiotropic effect
02:09:42 that the genes which create the tendency of the cells
02:09:49 to be limited in their capacity to replace themselves
02:09:53 are providing a benefit in youth,
02:09:55 which is that we are largely free of tumors and cancer
02:09:59 at the inevitable late life cost that we grow feeble
02:10:02 and inefficient and eventually die.
02:10:04 And that matches a very old hypothesis in evolutionary theory
02:10:10 by somebody I was fortunate enough to know, George Williams,
02:10:13 one of the great 20th century evolutionists
02:10:16 who argued that senescence would have to be caused
02:10:19 by pleiotropic genes that cause early life benefits
02:10:23 at unavoidable late life costs.
02:10:26 And although this isn’t the exact nature of the system
02:10:29 he predicted, it matches what he was expecting
02:10:32 in many regards to a shocking degree.
02:10:35 That said, the focus of the paper is about the,
02:10:41 well, let me just read the abstract.
02:10:43 This is the end of the abstract:
02:10:49 we observed that captive rodent breeding protocols
02:10:51 designed to increase reproductive output,
02:10:53 simultaneously exert strong selection
02:10:55 against reproductive senescence
02:10:58 and virtually eliminate selection
02:11:00 that would otherwise favor tumor suppression.
02:11:03 This appears to have greatly elongated
02:11:05 the telomeres of laboratory mice.
02:11:07 With their telomeric failsafe effectively disabled,
02:11:10 these animals are unreliable models
02:11:12 of normal senescence and tumor formation.
02:11:15 So basically using these mice is not going to lead
02:11:19 to the right kinds of conclusions.
02:11:21 Safety tests employing these animals
02:11:24 likely overestimate cancer risks
02:11:26 and underestimate tissue damage
02:11:29 and consequent accelerated senescence.
02:11:32 So I think, especially with your discussion with Eric,
02:11:38 the conclusion of this paper has to do with the fact that,
02:11:43 like we shouldn’t be using these mice to test the safety
02:11:48 or to make conclusions about cancer or senescence.
02:11:53 Is that the basic takeaway?
02:11:55 Like basically saying that the length of these telomeres
02:11:57 is an important variable to consider.
02:12:00 Well, let’s put it this way.
02:12:01 I think there was a reason that the world of scientists
02:12:05 who was working on telomeres
02:12:07 did not spot the pleiotropic relationship
02:12:10 that was the key argument in my paper.
02:12:16 The reason they didn’t spot it was that there was a result
02:12:19 that everybody knew, which seemed inconsistent.
02:12:22 The result was that mice have very long telomeres,
02:12:26 but they do not have very long lives.
02:12:30 Now, we can talk about what the actual meaning
02:12:32 of don’t have very long lives is,
02:12:34 but in the end, I was confronted with a hypothesis
02:12:39 that would explain a great many features
02:12:41 of the way mammals and indeed vertebrates age,
02:12:44 but it was inconsistent with one result.
02:12:46 And at first I thought,
02:12:48 maybe there’s something wrong with the result.
02:12:50 Maybe this is one of these cases
02:12:51 where the result was achieved once
02:12:54 through some bad protocol and everybody else
02:12:56 was repeating it. That didn’t turn out to be the case.
02:12:58 Many laboratories had established
02:13:00 that mice had ultra long telomeres.
02:13:02 And so I began to wonder whether or not
02:13:05 there was something about the breeding protocols
02:13:09 that generated these mice.
02:13:11 And what that would predict is that the mice
02:13:13 that have long telomeres would be laboratory mice
02:13:16 and that wild mice would not.
02:13:18 And Carol Greider, who agreed to collaborate with me,
02:13:23 tested that hypothesis and showed that it was indeed true,
02:13:27 that wild derived mice, or at least mice
02:13:29 that had been in captivity for a much shorter period of time
02:13:32 did not have ultra long telomeres.
02:13:35 Now, what this implied though, as you read,
02:13:38 is that our breeding protocols
02:13:41 generate lengthening of telomeres.
02:13:43 And the implication of that is that the animals
02:13:45 that have these very long telomeres
02:13:47 will be hyper prone to create tumors.
02:13:50 They will be extremely resistant to toxins
02:13:54 because they have effectively an infinite capacity
02:13:56 to replace any damaged tissue.
02:13:58 And so ironically, if you give one of these
02:14:02 ultra long telomere lab mice a toxin,
02:14:06 if the toxin doesn’t outright kill it,
02:14:08 it may actually increase its lifespan
02:14:10 because it functions as a kind of chemotherapy.
02:14:14 So the reason that chemotherapy works
02:14:16 is that dividing cells are more vulnerable
02:14:19 than cells that are not dividing.
02:14:21 And so if this mouse has effectively
02:14:23 had its cancer protection turned off,
02:14:26 and it has cells dividing too rapidly,
02:14:28 and you give it a toxin, you will slow down its tumors
02:14:31 faster than you harm its other tissues.
02:14:33 And so you’ll get a paradoxical result
02:14:35 that actually some drug that’s toxic
02:14:38 seems to benefit the mouse.
02:14:40 Now, I don’t think that that was understood
02:14:43 before I published my paper.
02:14:44 Now I’m pretty sure it has to be.
02:14:46 And the problem is that this actually is a system
02:14:50 that serves pharmaceutical companies
02:14:53 that have the difficult job of bringing compounds to market,
02:14:57 many of which will be toxic.
02:14:59 Maybe all of them will be toxic.
02:15:01 And these mice predispose our system
02:15:04 to declare these toxic compounds safe.
02:15:07 And in fact, I believe we’ve seen the errors
02:15:10 that result from using these mice a number of times,
02:15:12 most famously with Vioxx, which turned out
02:15:15 to do conspicuous heart damage.
02:15:18 Why do you think this paper and this idea
02:15:20 has not gotten significant traction?
02:15:23 Well, my collaborator, Carol Greider,
02:15:27 said something to me that rings in my ears to this day.
02:15:32 She initially, after she showed that laboratory mice
02:15:35 have anomalously long telomeres
02:15:37 and that wild mice don’t have long telomeres,
02:15:39 I asked her where she was going to publish that result
02:15:42 so that I could cite it in my paper.
02:15:44 And she said that she was going to keep the result in house
02:15:47 rather than publish it.
02:15:49 And at the time, I was a young graduate student.
02:15:54 I didn’t really understand what she was saying.
02:15:56 But in some sense, the knowledge that a model organism
02:16:01 is broken in a way that creates the likelihood
02:16:04 that certain results will be reliably generatable,
02:16:08 you can publish a paper and make a big splash
02:16:10 with such a thing, or you can exploit the fact
02:16:13 that you know how those models will misbehave
02:16:16 and other people don’t.
02:16:17 So there’s a question, if somebody is motivated cynically
02:16:22 and what they want to do is appear to have deeper insight
02:16:25 into biology because they predict things
02:16:27 better than others do, knowing where the flaw is
02:16:31 so that your predictions come out true is advantageous.
02:16:34 At the same time, I can’t help but imagine
02:16:38 that the pharmaceutical industry,
02:16:40 when it figured out that the mice were predisposed
02:16:42 to suggest that drugs were safe,
02:16:45 didn’t leap to fix the problem because in some sense,
02:16:49 it was the perfect cover for the difficult job
02:16:51 of bringing drugs to market and then discovering
02:16:55 their actual toxicity profile, right?
02:16:57 This made things look safer than they were
02:16:59 and I believe a lot of profits
02:17:01 have likely been generated downstream.
02:17:04 So to kind of play devil’s advocate,
02:17:06 it’s also possible that this particular,
02:17:10 the length of the telomeres is not a strong variable
02:17:12 for the drug development and for the conclusions
02:17:16 that Carol and others have been studying.
02:17:18 Is it possible for that to be the case?
02:17:22 So one reason she and others could be ignoring this
02:17:27 is because it’s not a strong variable.
02:17:29 Well, I don’t believe so and in fact,
02:17:31 at the point that I went to publish my paper,
02:17:34 Carol published her result.
02:17:36 She did so in a way that did not make a huge splash.
02:17:39 Did she, I apologize if I don’t know how,
02:17:44 what was the emphasis of her publication of that paper?
02:17:49 Was it purely just kind of showing data
02:17:52 or is there more, because in your paper,
02:17:54 there’s a kind of more of a philosophical statement as well.
02:17:57 Well, my paper was motivated by interest
02:18:00 in the evolutionary dynamics around senescence.
02:18:03 I wasn’t pursuing grants or anything like that.
02:18:07 I was just working on a puzzle I thought was interesting.
02:18:10 Carol has, of course, gone on to win a Nobel Prize
02:18:14 for her co-discovery with Elizabeth Blackburn
02:18:17 of telomerase, the enzyme that lengthens telomeres.
02:18:21 But anyway, she’s a heavy hitter in the academic world.
02:18:25 I don’t know exactly what her purpose was.
02:18:27 I do know that she told me she wasn’t planning to publish
02:18:30 and I do know that I discovered that she was
02:18:32 in the process of publishing very late
02:18:34 and when I asked her to send me the paper
02:18:36 to see whether or not she had put evidence in it
02:18:40 that the hypothesis had come from me,
02:18:43 she grudgingly sent it to me
02:18:45 and my name was nowhere mentioned
02:18:46 and she broke contact at that point.
02:18:50 What it is that motivated her, I don’t know,
02:18:53 but I don’t think it can possibly be
02:18:55 that this result is unimportant.
02:18:57 The fact is, the reason I called her in the first place,
02:19:00 an established contact that generated our collaboration,
02:19:04 was that she was a leading light in the field
02:19:07 of telomeric studies and because of that,
02:19:11 this question about whether the model organisms
02:19:14 are distorting the understanding
02:19:18 of the functioning of telomeres, it’s central.
02:19:20 Do you feel like you’ve been,
02:19:23 as a young graduate student, do you think Carol
02:19:27 or do you think the scientific community
02:19:28 broadly screwed you over in some way?
02:19:31 I don’t think of it in those terms.
02:19:33 Probably partly because it’s not productive
02:19:37 but I have a complex relationship with this story.
02:19:42 On the one hand, I’m livid with Carol Greider
02:19:44 for what she did.
02:19:46 She absolutely pretended that I didn’t exist in this story
02:19:50 and I don’t think I was a threat to her.
02:19:51 My interest was as an evolutionary biologist,
02:19:54 I had made an evolutionary contribution,
02:19:57 she had tested a hypothesis and frankly,
02:19:59 I think it would have been better for her
02:20:01 if she had acknowledged what I had done.
02:20:03 I think it would have enhanced her work
02:20:07 and I was, let’s put it this way,
02:20:10 when I watched her Nobel lecture,
02:20:12 and I should say there’s been a lot of confusion
02:20:13 about this Nobel stuff.
02:20:15 I’ve never said that I should have gotten a Nobel prize.
02:20:17 People have misportrayed that.
02:20:23 In listening to her lecture,
02:20:25 I had one of the most bizarre emotional experiences
02:20:29 of my life because she presented the work
02:20:33 that resulted from my hypothesis.
02:20:35 She presented it as she had in her paper
02:20:38 with no acknowledgement of where it had come from
02:20:42 and she had in fact portrayed the distortion
02:20:47 of the telomeres as if it were a lucky fact
02:20:50 because it allowed testing hypotheses
02:20:53 that would otherwise not be testable.
02:20:55 You have to understand as a young scientist
02:21:00 to watch work that you have done presented
02:21:04 in what’s surely the most important lecture
02:21:07 of her career, it’s thrilling.
02:21:11 It was thrilling to see her figures
02:21:16 projected on the screen there.
02:21:18 To have been part of work that was important enough
02:21:21 for that felt great and of course,
02:21:23 to be erased from the story felt absolutely terrible.
02:21:27 So anyway, that’s sort of where I am with it.
02:21:30 My sense is what I’m really troubled by in this story
02:21:35 is the fact that as far as I know,
02:21:41 the flaw with the mice has not been addressed.
02:21:45 And actually, Eric did some looking into this.
02:21:48 He tried to establish by calling the Jack’s lab
02:21:50 and trying to ascertain what had happened with the colonies,
02:21:54 whether any change in protocol had occurred
02:21:57 and he couldn’t get anywhere.
02:21:58 There was seemingly no awareness that it was even an issue.
02:22:02 So I’m very troubled by the fact that as a father,
02:22:06 for example, I’m in no position to protect my family
02:22:10 from the hazard that I believe lurks
02:22:12 in our medicine cabinets, right?
02:22:15 Even though I’m aware of where the hazard comes from,
02:22:17 it doesn’t tell me anything useful
02:22:18 about which of these drugs will turn out to do damage
02:22:21 if that is ultimately tested.
02:22:23 And that’s a very frustrating position to be in.
02:22:26 On the other hand, there’s a part of me
02:22:28 that’s even still grateful to Carol for taking my call.
02:22:31 She didn’t have to take my call
02:22:33 and talk to some young graduate student
02:22:34 who had some evolutionary idea
02:22:36 that wasn’t in her wheelhouse specifically, and yet she did.
02:22:41 And for a while, she was a good collaborator, so.
02:22:44 Well, can I, I have to proceed carefully here because
02:22:49 it’s a complicated topic.
02:22:52 So she took the call.
02:22:55 And you kind of, you’re kind of saying that
02:23:01 she basically erased credit, you know,
02:23:05 pretending you didn’t exist in some kind of,
02:23:07 in a certain sense.
02:23:11 Let me phrase it this way.
02:23:12 I’ve, as a research scientist at MIT,
02:23:17 I’ve had, and especially just part of
02:23:22 a large set of collaborations,
02:23:25 I’ve had a lot of students come to me
02:23:28 and talk to me about ideas,
02:23:31 perhaps less interesting than what we’re discussing here
02:23:33 in the space of AI, that I’ve been thinking about anyway.
02:23:38 In general, with everything I’m doing with robotics, people
02:23:45 have told me a bunch of ideas
02:23:47 that I’m already thinking about.
02:23:49 The point is taking that idea, see, this is different
02:23:53 because the idea has more power in the space
02:23:55 that we’re talking about here,
02:23:56 and robotics is like your idea means shit
02:23:58 until you build it.
02:24:00 Like, so the engineering world is a little different,
02:24:03 but there’s a kind of sense that I probably forgot
02:24:07 a lot of brilliant ideas have been told to me.
02:24:11 Do you think she pretended you don’t exist?
02:24:14 Do you think she was so busy that she kind of forgot,
02:24:19 you know, that she has like the stream
02:24:21 of brilliant people around her,
02:24:23 there’s a bunch of ideas that are swimming in the air,
02:24:26 and you just kind of forget people
02:24:28 that are a little bit on the periphery
02:24:30 on the idea generation, like, or is it some mix of both?
02:24:34 It’s not a mix of both.
02:24:36 I know that because we corresponded.
02:24:39 She put a graduate student on this work.
02:24:41 He emailed me excitedly when the results came in.
02:24:46 So there was no ambiguity about what had happened.
02:24:50 What’s more, when I went to publish my work,
02:24:52 I actually sent it to Carol in order to get her feedback
02:24:56 because I wanted to be a good collaborator to her,
02:24:59 and she absolutely panned it,
02:25:02 made many critiques that were not valid,
02:25:06 but it was clear at that point
02:25:07 that she became an antagonist,
02:25:10 and none of this adds up.
02:25:12 She couldn’t possibly have forgotten the conversation.
02:25:16 I believe I even sent her tissues at some point,
02:25:21 not related to this project, but as a favor.
02:25:23 She was doing another project that involved telomeres,
02:25:25 and she needed samples that I could get ahold of
02:25:28 because of the Museum of Zoology that I was in.
02:25:30 So this was not a one off conversation.
02:25:34 I certainly know that those sorts of things can happen,
02:25:36 but that’s not what happened here.
02:25:37 This was a relationship that existed
02:25:41 and then was suddenly cut short
02:25:43 at the point that she published her paper by surprise
02:25:46 without saying where the hypothesis had come from
02:25:48 and began to be an opposing force to my work.
02:25:54 Is there, there’s a bunch of trajectories
02:25:57 you could have taken through life.
02:25:58 Do you think about the trajectory of being a researcher,
02:26:06 of then going to war in the space of ideas,
02:26:10 of publishing further papers along this line?
02:26:13 I mean, that’s often the dynamic of that fascinating space
02:26:18 is you have a junior researcher with brilliant ideas
02:26:21 and a senior researcher that starts out as a mentor
02:26:24 that becomes a competitor.
02:26:26 I mean, that happens.
02:26:27 But then the way to,
02:26:31 it’s almost an opportunity to shine
02:26:33 is to publish a bunch more papers in this place
02:26:36 to tear it apart, to dig into,
02:26:39 like really make it a war of ideas.
02:26:42 Did you consider that possible trajectory?
02:26:45 I did.
02:26:46 A couple of things to say about it.
02:26:48 One, this work was not central for me.
02:26:51 I took a year on the telomere project
02:26:54 because something fascinating occurred to me
02:26:57 and I pursued it.
02:26:58 And the more I pursued it,
02:26:59 the clearer it was there was something there.
02:27:01 But it wasn’t the focus of my graduate work.
02:27:03 And I didn’t want to become a telomere researcher.
02:27:08 What I want to do is to be an evolutionary biologist
02:27:12 who upgrades the toolkit of evolutionary concepts
02:27:15 so that we can see more clearly
02:27:17 how organisms function and why.
02:27:20 And the telomere work was a proof of concept, right?
02:27:24 That paper was a proof of concept
02:27:26 that the toolkit in question works.
02:27:30 As for the need to pursue it further,
02:27:35 I think it’s kind of absurd
02:27:37 and you’re not the first person to say
02:27:38 maybe that was the way to go about it.
02:27:40 But the basic point is, look, the work was good.
02:27:43 It turned out to be highly predictive.
02:27:47 Frankly, the model of senescence that I presented
02:27:50 is now widely accepted.
02:27:52 And I don’t feel any misgivings at all
02:27:55 about having spent a year on it, said my piece,
02:27:58 and moved on to other things
02:28:00 which frankly I think are bigger.
02:28:02 I think there’s a lot of good to be done
02:28:03 and it would be a waste to get overly narrowly focused.
02:28:08 There’s so many ways through the space of science
02:28:12 and the most common way is to just publish a lot.
02:28:16 Just publish a lot of papers, do this incremental work,
02:28:19 exploring the space kind of like ants looking for food.
02:28:24 You’re tossing out a bunch of different ideas.
02:28:26 Some of them could be brilliant breakthrough ideas, Nature papers.
02:28:29 Some of them are more conference kind of publications,
02:28:32 all those kinds of things.
02:28:33 Did you consider that kind of path in science?
02:28:38 Of course I considered it,
02:28:39 but I must say the experience of having my first encounter
02:28:44 with the process of peer review be this story,
02:28:48 which was frankly a debacle from one end to the other
02:28:52 with respect to the process of publishing.
02:28:55 It did not, it was not a very good sales pitch
02:28:58 for trying to make a difference through publication.
02:29:01 And I would point out part of what I ran into
02:29:03 and I think frankly part of what explains Carol’s behavior
02:29:06 is that in some parts of science,
02:29:10 there is this dynamic where PIs parasitize their underlings
02:29:16 and if you’re very, very good, you rise to the level
02:29:20 where one day instead of being parasitized,
02:29:23 you get to parasitize others.
02:29:25 Now I find that scientifically despicable
02:29:28 and it wasn’t the culture of the lab I grew up in at all.
02:29:31 My lab, in fact, the PI, Dick Alexander, who’s now gone,
02:29:35 but who was an incredible mind and a great human being,
02:29:40 he didn’t want his graduate students working
02:29:42 on the same topics he was on,
02:29:44 not because it wouldn’t have been useful and exciting,
02:29:47 but because in effect, he did not want any confusion
02:29:51 about who had done what because he was a great mentor
02:29:55 and the idea was actually a great mentor
02:29:58 is not stealing ideas and you don’t want people
02:30:02 thinking that they are.
02:30:03 So anyway, my point would be,
02:30:08 I wasn’t up for being parasitized.
02:30:11 I don’t like the idea that if you are very good,
02:30:14 you get parasitized until it’s your turn
02:30:16 to parasitize others.
02:30:17 That doesn’t make sense to me.
02:30:21 Crossing over from evolution into cellular biology
02:30:23 may have exposed me to that.
02:30:25 That may have been par for the course,
02:30:26 but it doesn’t make it acceptable.
02:30:29 And I would also point out that my work falls
02:30:33 in the realm of synthesis.
02:30:36 My work generally takes evidence accumulated by others
02:30:41 and places it together in order to generate hypotheses
02:30:46 that explain sets of phenomena
02:30:48 that are otherwise intractable.
02:30:51 And I am not sure that that is best done
02:30:55 with narrow publications that are read by few.
02:30:59 And in fact, I would point to the very conspicuous example
02:31:03 of Richard Dawkins, who I must say I’ve learned
02:31:05 a tremendous amount from and I greatly admire.
02:31:07 Dawkins has almost no publication record
02:31:12 in the sense of peer reviewed papers in journals.
02:31:15 What he’s done instead is synthetic work
02:31:18 and he’s published it in books,
02:31:19 which are not peer reviewed in the same sense.
02:31:22 And frankly, I think there’s no doubting
02:31:24 his contribution to the field.
02:31:27 So my sense is if Richard Dawkins can illustrate
02:31:32 that one can make contributions to the field
02:31:34 without using journals as the primary mechanism
02:31:38 for distributing what you’ve come to understand,
02:31:40 then it’s obviously a valid mechanism
02:31:42 and it’s a far better one from the point of view
02:31:44 of accomplishing what I want to accomplish.
02:31:46 Yeah, it’s really interesting.
02:31:47 There are of course several levels
02:31:49 at which you can do that kind of synthesis,
02:31:50 and it does require a lot of both broad
02:31:53 and deep thinking, which is exceptionally valuable.
02:31:56 You could also, I’m working on something
02:31:58 with Andrew Huberman now, you can also publish synthesis.
02:32:02 That’s like review papers that are exceptionally valuable
02:32:05 for the communities.
02:32:06 It brings the community together, tells a history,
02:32:09 tells a story of where the community has been.
02:32:11 It paints a picture of where the path lays for the future.
02:32:14 I think it’s really valuable.
02:32:15 And Richard Dawkins is a good example
02:32:17 of somebody that does that in book form
02:32:20 that he kind of walks the line really interestingly.
02:32:23 You have somebody like Neil deGrasse Tyson,
02:32:26 who’s more like a science communicator.
02:32:28 Richard Dawkins sometimes is a science communicator,
02:32:31 but he gets like close to the technical
02:32:34 to where it’s a little bit, it’s not shying away
02:32:36 from being really a contribution to science.
02:32:41 No, he’s made real contributions.
02:32:44 In book form.
02:32:45 Yes, he really has.
02:32:46 Which is fascinating.
02:32:47 I mean, Roger Penrose, I mean, similar kind of idea.
02:32:51 That’s interesting, that’s interesting.
02:32:53 Synthesis, especially synthesis work,
02:32:56 work that synthesizes ideas, does not necessarily need
02:33:00 to be peer reviewed.
02:33:03 It’s peer reviewed by peers reading it.
02:33:08 Well, and reviewing it.
02:33:10 That’s it, it is reviewed by peers,
02:33:11 which is not synonymous with peer review.
02:33:13 And that’s the thing is people don’t understand
02:33:15 that the two things aren’t the same, right?
02:33:17 Peer review is an anonymous process
02:33:20 that happens before publication
02:33:23 in a place where there is a power dynamic, right?
02:33:26 I mean, the joke of course is that peer review
02:33:28 is actually peer preview, right?
02:33:30 Your biggest competitors get to see your work
02:33:32 before it sees the light of day
02:33:34 and decide whether or not it gets published.
02:33:37 And again, when your formative experience
02:33:41 with the publication apparatus is the one I had
02:33:43 with the telomere paper, there’s no way
02:33:46 that that seems like the right way
02:33:48 to advance important ideas.
02:33:50 And what’s the harm in publishing them
02:33:54 so that your peers have to review them in public
02:33:55 where they actually, if they’re gonna disagree with you,
02:33:58 they actually have to take the risk of saying,
02:34:00 I don’t think this is right and here’s why, right?
02:34:03 With their name on it.
02:34:04 I’d much rather that.
02:34:05 It’s not that I don’t want my work reviewed by peers,
02:34:07 but I want it done in the open, you know,
02:34:10 for the same reason you don’t meet
02:34:11 with dangerous people in private, you meet at the cafe.
02:34:14 I want the work reviewed out in public.
02:34:18 Can I ask you a difficult question?
02:34:20 Sure.
02:34:23 There is popularity in martyrdom.
02:34:26 There’s popularity in pointing out
02:34:30 that the emperor has no clothes.
02:34:33 That can become a drug in itself.
02:34:40 I’ve confronted this in scientific work I’ve done at MIT
02:34:46 where there are certain things that are not done well.
02:34:49 People are not being the best version of themselves.
02:34:52 And particular aspects of a particular field
02:34:59 are in need of a revolution.
02:35:02 And part of me wanted to point that out
02:35:06 versus doing the hard work of publishing papers
02:35:11 and doing the revolution.
02:35:13 Basically just pointing out, look,
02:35:15 you guys are doing it wrong and then just walking away.
02:35:19 Are you aware of the drug of martyrdom,
02:35:23 of the ego involved in it,
02:35:29 that it can cloud your thinking?
02:35:32 Probably one of the best questions I’ve ever been asked.
02:35:35 So let me try to sort it out.
02:35:39 First of all, we are all mysteries to ourself at some level.
02:35:43 So it’s possible there’s stuff going on in me
02:35:46 that I’m not aware of that’s driving.
02:35:48 But in general, I would say one of my better strengths
02:35:52 is that I’m not especially ego driven.
02:35:55 I have an ego, I clearly think highly of myself,
02:35:58 but it is not driving me.
02:36:00 I do not crave that kind of validation.
02:36:03 I do crave certain things.
02:36:05 I do love a good eureka moment.
02:36:07 There is something great about it.
02:36:09 And there’s something even better about the phone calls
02:36:11 you make next when you share it, right?
02:36:14 It’s pretty fun, right?
02:36:15 I really like it.
02:36:17 I also really like my subject, right?
02:36:20 There’s something about a walk in the forest
02:36:23 when you have a toolkit in which you can actually look
02:36:26 at creatures and see something deep, right?
02:36:30 I like it, that drives me.
02:36:33 And I could entertain myself for the rest of my life, right?
02:36:35 If I was somehow isolated from the rest of the world,
02:36:39 but I was in a place that was biologically interesting,
02:36:42 hopefully I would be with people that I love
02:36:45 and pets that I love, believe it or not.
02:36:48 But if I were in that situation and I could just go out
02:36:51 every day and look at cool stuff and figure out
02:36:54 what it means, I could be all right with that.
02:36:56 So I’m not heavily driven by the ego thing, as you put it.
02:37:02 So I am completely the same except instead of the pets,
02:37:07 I would put robots.
02:37:08 But so it’s not, it’s the eureka, it’s the exploration
02:37:12 of the subject that brings you joy and fulfillment.
02:37:16 It’s not the ego.
02:37:17 Well, there’s more to say.
02:37:18 No, I really don’t think it’s the ego thing.
02:37:21 I will say I also have kind of a secondary passion
02:37:24 for robot stuff.
02:37:25 I’ve never made anything useful, but I do believe,
02:37:29 I believe I found my calling.
02:37:30 But if this wasn’t my calling,
02:37:32 my calling would have been inventing stuff.
02:37:34 I really enjoy that too.
02:37:36 So I get what you’re saying about the analogy quite well.
02:37:39 But as far as the martyrdom thing,
02:37:46 I understand the drug you’re talking about
02:37:47 and I’ve seen it more than I’ve felt it.
02:37:51 I do, if I’m just to be completely candid
02:37:53 and this question is so good, it deserves a candid answer.
02:37:57 I do like the fight, right?
02:38:01 I like fighting against people I don’t respect
02:38:04 and I like winning, but I have no interest in martyrdom.
02:38:10 One of the reasons I have no interest in martyrdom
02:38:12 is that I’m having too good a time, right?
02:38:15 I very much enjoy my life and.
02:38:17 It’s such a good answer.
02:38:18 I have a wonderful wife.
02:38:21 I have amazing children.
02:38:23 I live in a lovely place.
02:38:26 I don’t wanna exit any quicker than I have to.
02:38:29 That said, I also believe in things
02:38:32 and a willingness to exit if that’s the only way
02:38:35 is not exactly inviting martyrdom,
02:38:37 but it is an acceptance that fighting is dangerous
02:38:41 and going up against powerful forces
02:38:43 means who knows what will come of it, right?
02:38:46 I don’t have the sense that the thing that used to
02:38:48 kill inconvenient people is still out there.
02:38:51 I don’t think that’s how it’s done anymore.
02:38:52 It’s primarily done through destroying them reputationally,
02:38:56 which is not something I relish the possibility of,
02:39:00 but there is a difference between
02:39:03 a willingness to face the hazard
02:39:07 rather than a desire to face it because of the thrill, right?
02:39:13 For me, the thrill is in fighting when I’m in the right.
02:39:19 I think I feel that that is a worthwhile way
02:39:22 to take what I see as the kind of brutality
02:39:27 that is built into men and to channel it
02:39:30 to something useful, right?
02:39:33 If it is not channeled into something useful,
02:39:35 it will be channeled into something else,
02:39:36 so it damn well better be channeled into something useful.
02:39:38 It’s not motivated by fame or popularity,
02:39:41 those kinds of things.
02:39:42 It’s, you know what, you’re just making me realize
02:39:45 that enjoying the fight,
02:39:50 fighting the powerful for an idea that you believe is right
02:39:53 is a kind of optimism for the human spirit.
02:40:01 It’s like, we can win this.
02:40:05 It’s almost like you’re turning this hope for humanity
02:40:08 into action, into personal action,
02:40:13 by saying like, we can win this.
02:40:15 And that makes you feel good about the rest of humanity,
02:40:20 that if there’s people like me, then we’re going to be okay.
02:40:26 Even if you’re like, your ideas might be wrong or not,
02:40:29 but if you believe they’re right
02:40:31 and you’re fighting the powerful against all odds,
02:40:36 then we’re going to be okay.
02:40:39 If I were to project, I mean,
02:40:42 because I enjoy the fight as well,
02:40:44 I think that’s the way I, that’s what brings me joy,
02:40:48 is it’s almost like it’s optimism in action.
02:40:54 Well, it’s a little different for me.
02:40:55 And again, I think, you know, I recognize you.
02:40:58 You’re a familiar, your construction is familiar,
02:41:01 even if it isn’t mine, right?
02:41:03 For me, I actually expect us not to be okay.
02:41:08 And I’m not okay with that.
02:41:10 But what’s really important, if I feel like what I’ve said
02:41:14 is I don’t know of any reason that it’s not okay,
02:41:17 or any reason that it’s too late.
02:41:19 As far as I know, we could still save humanity
02:41:22 and we could get to the fourth frontier
02:41:24 or something akin to it.
02:41:26 But I expect us not to, I expect us to fuck it up, right?
02:41:29 I don’t like that thought, but I’ve looked into the abyss
02:41:32 and I’ve done my calculations
02:41:34 and the number of ways we could not succeed are many
02:41:38 and the number of ways that we could manage
02:41:40 to get out of this very dangerous phase of history is small.
02:41:44 The thing I don’t have to worry about is
02:41:47 that I didn’t do enough, right?
02:41:50 That I was a coward, that I prioritized other things.
02:41:57 At the end of the day, I think I will be able to say
02:41:59 to myself, and in fact, the thing that allows me to sleep,
02:42:02 is that when I saw clearly what needed to be done,
02:42:05 I tried to do it to the extent that it was in my power.
02:42:08 And if we fail, as I expect us to,
02:42:12 I can’t say, well, geez, that’s on me, you know?
02:42:16 And frankly, I regard what I just said to you
02:42:18 as something like a personality defect, right?
02:42:22 I’m trying to free myself from the sense
02:42:24 that this is my fault.
02:42:25 On the other hand, my guess is that personality defect
02:42:28 is probably good for humanity, right?
02:42:31 It’s a good one for me to have; the externalities
02:42:34 of it are positive, so I don’t feel too bad about it.
02:42:38 Yeah, that’s funny, so yeah, our perspectives on the world
02:42:42 are different, but they rhyme, like you said.
02:42:45 Because I’ve also looked into the abyss,
02:42:47 and it kind of smiled nervously back.
02:42:51 So I have a more optimistic sense that we’re gonna win
02:42:55 more than likely we’re going to be okay.
02:42:59 Right there with you, brother.
02:43:00 I’m hoping you’re right.
02:43:01 I’m expecting me to be right.
02:43:03 But back to Eric, you had a wonderful conversation.
02:43:07 In that conversation, he played the big brother role,
02:43:11 and he was very happy about it.
02:43:13 He was self congratulatory about it.
02:43:17 Can you talk to the ways in which Eric made you
02:43:21 a better man throughout your life?
02:43:24 Yeah, hell yeah.
02:43:25 I mean, for one thing, you know,
02:43:27 Eric and I are interestingly similar in some ways
02:43:30 and radically different in some other ways,
02:43:33 and it’s often a matter of fascination
02:43:35 to people who know us both because almost always
02:43:38 people meet one of us first, and they sort of
02:43:40 get used to that thing, and then they meet the other,
02:43:41 and it throws the model into chaos.
02:43:44 But you know, I had a great advantage,
02:43:47 which is I came second, right?
02:43:49 So although it was kind of a pain in the ass
02:43:51 to be born into a world that had Eric in it
02:43:53 because he’s a force of nature, right?
02:43:55 It was also terrifically useful because A,
02:43:59 he was a very awesome older brother
02:44:02 who made interesting mistakes, learned from them,
02:44:06 and conveyed the wisdom of what he had discovered,
02:44:08 and that was, you know, I don’t know who else
02:44:12 ends up so lucky as to have that kind of person
02:44:16 blazing the trail.
02:44:18 It also probably, you know, my hypothesis
02:44:22 for what birth order effects are
02:44:24 is that they’re actually adaptive, right?
02:44:27 That the reason that a second born is different
02:44:30 than a first born is that they’re not born
02:44:32 into a world with the same niches in it, right?
02:44:35 And so the thing about Eric is he’s been
02:44:38 completely dominant in the realm of fundamental thinking,
02:44:44 right, like what he’s fascinated by
02:44:45 is the fundamental of fundamentals,
02:44:48 and he’s excellent at it, which meant
02:44:50 that I was born into a world where somebody
02:44:52 was becoming excellent in that, and for me
02:44:54 to be anywhere near the fundamental of fundamentals
02:44:57 was going to be pointless, right?
02:44:58 I was going to be playing second fiddle forever,
02:45:00 and I think that that actually drove me
02:45:02 to the other end of the continuum
02:45:04 between fundamental and emergent,
02:45:06 and so I became fascinated with biology
02:45:09 and have been since I was three years old, right?
02:45:13 I think Eric drove that, and I have to thank him for it
02:45:16 because, you know, I mean.
02:45:19 I never thought of, so Eric drives towards the fundamental,
02:45:24 and you drive towards the emergent,
02:45:26 the physics and the biology.
02:45:28 Right, opposite ends of the continuum,
02:45:30 and as Eric would be quick to point out
02:45:32 if he was sitting here, I treat the emergent layer,
02:45:36 I seek the fundamentals in it,
02:45:37 which is sort of an echo of Eric’s style of thinking
02:45:40 but applied to the very far complexity.
02:45:43 He overpoweringly argues for the importance of physics,
02:45:50 the fundamental of fundamentals.
02:45:55 He’s not here to defend himself.
02:45:57 Is there an argument to be made against that?
02:46:00 Or biology, the emergent,
02:46:03 the study of the thing that emerged
02:46:06 when the fundamental acts at the cosmic scale
02:46:10 and then builds the beautiful thing that is us
02:46:13 is much more important.
02:46:16 Psychology, biology, the systems
02:46:19 that we’re actually interacting with in this human world
02:46:23 are much more important to understand
02:46:25 than the low level theories of quantum mechanics
02:46:31 and general relativity.
02:46:33 Yeah, I can’t say that one is more important.
02:46:35 I think there’s probably a different time scale.
02:46:38 I think understanding the emergent layer
02:46:40 is more often useful, but the bang for the buck
02:46:44 at the far fundamental layer may be much greater.
02:46:48 So for example, the fourth frontier,
02:46:51 I’m pretty sure it’s gonna have to be fusion powered.
02:46:55 I don’t think anything else will do it,
02:46:57 but once you had fusion power,
02:46:58 assuming we didn’t just dump fusion power on the market
02:47:01 the way we would be likely to
02:47:02 if it was invented usefully tomorrow,
02:47:05 but if we had fusion power
02:47:08 and we had a little bit more wisdom than we have,
02:47:10 you could do an awful lot.
02:47:12 And that’s not gonna come from people like me
02:47:15 who look at the dynamics of it.
02:47:17 Can I argue against that?
02:47:19 Please.
02:47:21 I think the way to unlock fusion power
02:47:25 is through artificial intelligence.
02:47:28 So I think most of the breakthrough ideas
02:47:32 in the future of science will be developed by AI systems.
02:47:35 And I think in order to build intelligent AI systems,
02:47:38 you have to be a scholar of the fundamental
02:47:41 of the emergent, of biology, of the neuroscience,
02:47:46 of the way the brain works,
02:47:48 of intelligence, of consciousness.
02:47:50 And those things, at least directly,
02:47:53 don’t have anything to do with physics.
02:47:56 Well.
02:47:56 You’re making me a little bit sad
02:47:58 because my addiction to the aha moment thing
02:48:02 is incompatible with outsourcing that job.
02:48:06 Like the outsource thing.
02:48:07 I don’t wanna outsource that thing to the AI.
02:48:09 You reap the moment.
02:48:11 And actually, I’ve seen this happen before
02:48:13 because some of the people who trained Heather and me
02:48:16 were phylogenetic systematists,
02:48:19 Arnold Kluge in particular.
02:48:21 And the problem with systematics
02:48:24 is that to do it right when your technology is primitive,
02:48:28 you have to be deeply embedded in the philosophical
02:48:32 and the logical, right?
02:48:33 Your method has to be based in the highest level of rigor.
02:48:40 Once you can sequence genes,
02:48:42 genes can spit so much data at you
02:48:44 that you can overwhelm high quality work
02:48:46 with just lots and lots and lots of automated work.
02:48:49 And so in some sense,
02:48:51 there’s like a generation of phylogenetic systematists
02:48:54 who are the last of the greats
02:48:56 because what’s replacing them is sequencers.
02:48:59 So anyway, maybe you’re right about the AI.
02:49:03 And I guess I’m…
02:49:03 What makes you sad?
02:49:06 I like figuring stuff out.
02:49:07 Is there something that you disagree with Eric on,
02:49:11 where you’ve tried to convince him and failed so far,
02:49:14 but you will eventually succeed?
02:49:18 You know, that is a very long list.
02:49:20 Eric and I have tensions over certain things
02:49:24 that recur all the time.
02:49:26 And I’m trying to think what would be the ideal…
02:49:29 Is it in the space of science,
02:49:30 in the space of philosophy, politics, family, love, robots?
02:49:35 Well, all right, let me…
02:49:39 I’m just gonna use your podcast
02:50:42 to wage a bit of a cryptic war
02:49:44 and just say there are many places
02:49:47 in which I believe that I have butted heads with Eric
02:49:50 over the course of decades
02:49:52 and I have seen him move in my direction
02:49:55 substantially over time.
02:49:56 So you’ve been winning.
02:49:57 He might win a battle here or there,
02:49:59 but you’ve been winning the war.
02:50:00 I would not say that.
02:50:01 It’s quite possible he could say the same thing about me.
02:50:04 And in fact, I know that it’s true.
02:50:06 There are places where he’s absolutely convinced me.
02:50:08 But in any case, I do believe it’s at least…
02:50:11 It may not be a totally even fight,
02:50:13 but it’s more even than some will imagine.
02:50:16 But yeah, we have…
02:50:18 There are things I say that drive him nuts, right?
02:50:22 Like when something, like you heard me talk about the…
02:50:28 What was it?
02:50:29 It was the autopilot that seems to be putting
02:50:33 a great many humans in needless medical jeopardy
02:50:37 over the COVID 19 pandemic.
02:50:40 And my feeling is we can say this almost for sure.
02:50:45 Anytime you have the appearance
02:50:47 of some captured gigantic entity
02:50:50 that is censoring you on YouTube
02:50:52 and handing down dictates from the WHO and all of that,
02:50:56 it is sure that there will be
02:50:59 a certain amount of collusion, right?
02:51:01 There’s gonna be some embarrassing emails in some places
02:51:04 that are gonna reveal some shocking connections.
02:51:05 And then there’s gonna be an awful lot of emergence
02:51:09 that didn’t involve collusion, right?
02:51:11 In which people were doing their little part of a job
02:51:13 and something was emerging.
02:51:14 And you never know what the admixture is.
02:51:16 How much are we looking at actual collusion
02:51:19 and how much are we looking at an emergent process?
02:51:21 But you should always walk in with the sense
02:51:23 that it’s gonna be a ratio.
02:51:24 And the question is, what is the ratio in this case?
02:51:27 I think this drives Eric nuts
02:51:29 because he is very focused on the people.
02:51:32 I think he’s focused on the people who have a choice
02:51:34 and make the wrong one.
02:51:36 And anyway, he may.
02:51:38 Discussion of the ratio is a distraction from that.
02:51:41 I think he takes it almost as an offense
02:51:45 because it grants cover to people who are harming others.
02:51:51 And I think it offends him morally.
02:51:56 And if I had to say, I would say it alters his judgment
02:52:00 on the matter.
02:52:02 But anyway, certainly useful just to leave open
02:52:05 the two possibilities and say it’s a ratio,
02:52:07 but we don’t know which one.
02:52:10 Brother to brother, do you love the guy?
02:52:13 Hmm, hell yeah, hell yeah.
02:52:15 And I’d love him if he was just my brother,
02:52:18 but he’s also awesome.
02:52:19 So I love him and I love him for who he is.
02:52:21 So let me ask you, going back to your book,
02:52:25 A Hunter-Gatherer’s Guide to the 21st Century.
02:52:29 I can’t wait both for the book and the videos
02:52:32 you do on the book.
02:52:33 That’s really exciting that there’s like a structured,
02:52:35 organized way to present this.
02:52:39 A kind of from an evolutionary biology perspective,
02:52:44 a guide for the future,
02:52:46 using our past as the fundamental, the emergent way
02:52:52 to present a picture of the future.
02:52:56 Let me ask you about something that,
02:53:00 I think about a little bit in this modern world,
02:53:02 which is monogamy.
02:53:07 So I personally value monogamy.
02:53:10 One girl, ride or die.
02:53:12 There you go.
02:53:13 Ride or, no, that’s exactly it now.
02:53:15 But that said, I don’t know what’s the right way
02:53:21 to approach this,
02:53:23 but from an evolutionary biology perspective
02:53:27 or from just looking at modern society,
02:53:30 that seems to be an idea that’s not,
02:53:33 what’s the right way to put it, flourishing?
02:53:37 It is waning.
02:53:38 It’s waning.
02:53:41 So I suppose based on your reaction,
02:53:44 you’re also a supporter of monogamy
02:53:45 or you value monogamy.
02:53:47 Are you and I just delusional?
02:53:53 What can you say about monogamy
02:53:56 from the context of your book,
02:53:58 from the context of evolutionary biology,
02:54:00 from the context of being human?
02:54:02 Yeah, I can say that I fully believe
02:54:05 that we are actually enlightened
02:54:06 and that although monogamy is waning,
02:54:09 that it is not waning because there is a superior system.
02:54:12 It is waning for predictable other reasons.
02:54:15 So let us just say it is,
02:54:18 there is a lot of pre/trans fallacy here
02:54:21 where people go through a phase
02:54:24 where they recognize that actually
02:54:26 we know a lot about the evolution of monogamy
02:54:31 and we can tell from the fact
02:54:33 that humans are somewhat sexually dimorphic
02:54:36 that there has been a lot of polygyny in human history.
02:54:39 And in fact, most of human history was largely polygynous.
02:54:45 But it is also the case that most of the people
02:54:48 on earth today belong to civilizations
02:54:51 that are at least nominally monogamous
02:54:53 and have practiced monogamy.
02:54:54 And that’s not anti evolutionary.
02:54:58 What that is is part of what I mentioned before
02:55:01 where human beings can swap out their software program
02:55:05 and different mating patterns are favored
02:55:09 in different periods of history.
02:55:11 So I would argue that the benefit of monogamy,
02:55:15 the primary one that drives the evolution
02:55:17 of monogamous patterns in humans
02:55:19 is that it brings all adults into child rearing.
02:55:24 Now the reason that that matters
02:55:26 is because human babies are very labor intensive.
02:55:29 In order to raise them properly,
02:55:31 having two parents is a huge asset
02:55:34 and having more than two parents,
02:55:35 having an extended family also very important.
02:55:39 But what that means is that for a population
02:55:43 that is expanding, a monogamous mating system makes sense.
02:55:48 It makes sense because it means that the number of offspring
02:55:50 that can be raised is elevated.
02:55:52 It’s elevated because all potential parents
02:55:56 are involved in parenting.
02:55:58 Whereas if you sideline a bunch of males
02:56:00 by having a polygynous system
02:56:01 in which one male has many females,
02:56:03 which is typically the way that works,
02:56:05 what you do is you sideline all those males,
02:56:07 which means the total amount of parental effort is lower
02:56:09 and the population can’t grow.
02:56:12 So what I’m arguing is that you should expect to see
02:56:16 populations that face the possibility of expansion
02:56:20 endorse monogamy.
02:56:21 And at the point that they have reached carrying capacity,
02:56:24 you should expect to see polygyny break back out.
02:56:26 And what we are seeing
02:56:28 is a kind of false sophistication around polyamory,
02:56:31 which will end up breaking down into polygyny,
02:56:35 which will not be in the interest of most people.
02:56:37 Really the only people whose interest
02:56:38 it could be argued to be in
02:56:41 would be the very small number of males at the top
02:56:44 who have many partners and everybody else suffers.
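A minimal Python sketch of the parental-effort argument above, offered purely as an illustration of the reasoning and not as a model from Weinstein or the book: it assumes each child needs a fixed amount of care (CARE_PER_CHILD), that every mated adult contributes one unit of parenting effort, that sidelined males contribute nothing, and that the population sizes and ratios are arbitrary placeholders.

```python
# Toy model (illustrative assumptions only): compare how many children can be
# raised to independence under monogamy vs. polygyny, holding the adult
# population and the per-child care requirement fixed.

CARE_PER_CHILD = 2.0  # effort units needed to raise one child (assumed value)

def children_raised(males: int, females: int, females_per_mated_male: int) -> float:
    """Children that can be raised when each mated male pairs with
    `females_per_mated_male` females and unmated males are sidelined."""
    mated_males = min(males, females // females_per_mated_male)
    mated_females = mated_males * females_per_mated_male
    parenting_adults = mated_males + mated_females  # sidelined males contribute nothing
    return parenting_adults / CARE_PER_CHILD

if __name__ == "__main__":
    males, females = 100, 100
    print("monogamy :", children_raised(males, females, 1))  # 200 parenting adults -> 100.0
    print("polygyny :", children_raised(males, females, 4))  # 25 mated males + 100 females -> 62.5
```

Under these toy assumptions, the same 200 adults can raise 100 children under monogamy but only about 62 under four-to-one polygyny, because the 75 unmated males withdraw their parental effort; that is the sense in which an expanding population benefits from bringing all adults into child rearing.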
02:56:48 Is it possible to make the argument
02:56:51 if we focus in on those males at the quote unquote top
02:56:55 with many female partners,
02:56:57 is it possible to say that that’s a suboptimal life,
02:57:02 that a single partner is the optimal life?
02:57:05 Well, it depends what you mean.
02:57:06 I have a feeling that you and I wouldn’t have to go very far
02:57:09 to figure out that what might be evolutionarily optimal
02:57:15 doesn’t match my values as a person
02:57:17 and I’m sure it doesn’t match yours either.
02:57:20 Can we try to dig into that gap between those two?
02:57:23 Sure.
02:57:24 I mean, we can do it very simply.
02:57:29 Selection might favor your engaging in war
02:57:33 against a defenseless enemy or genocide, right?
02:57:38 It’s not hard to figure out
02:57:40 how that might put your genes at advantage.
02:57:43 I don’t know about you, Lex.
02:57:44 I’m not getting involved in no genocide.
02:57:46 It’s not gonna happen.
02:57:47 I won’t do it.
02:57:48 I will do anything to avoid it.
02:57:49 So some part of me has decided that my conscious self
02:57:54 and the values that I hold trump my evolutionary self
02:57:59 and once you figure out that in some extreme case,
02:58:03 that’s true and then you realize
02:58:04 that that means it must be possible in many other cases
02:58:07 and you start going through all of the things
02:58:09 that selection would favor
02:58:10 and you realize that a fair fraction of the time,
02:58:12 actually, you’re not up for this.
02:58:14 You don’t wanna be some robot on a mission
02:58:17 that involves genocide when necessary.
02:58:19 You wanna be your own person and accomplish things
02:58:22 that you think are valuable.
02:58:24 And so among those are not advocating,
02:58:30 let’s suppose you were in a position
02:58:32 to be one of those males at the top of a polygynous system.
02:58:35 We both know why that would be rewarding, right?
02:58:38 But we also both recognize.
02:58:39 Do we?
02:58:40 Yeah, sure.
02:58:41 Lots of sex?
02:58:42 Yeah.
02:58:43 Okay, what else?
02:58:43 Lots of sex and lots of variety, right?
02:58:45 So look, every red blooded American slash Russian male
02:58:51 can understand why that’s appealing, right?
02:58:53 On the other hand, it is up against an alternative
02:58:57 which is having a partner with whom one is bonded
02:59:03 especially closely, right?
02:59:05 Right.
02:59:06 And so.
02:59:07 A love.
02:59:08 Right.
02:59:09 Well, I don’t wanna straw man the polygyny position.
02:59:14 Obviously polygyny is complex
02:59:15 and there’s nothing that stops a man presumably
02:59:19 from loving multiple partners and from them loving him back.
02:59:23 But in terms of, if love is your thing,
02:59:25 there’s a question about, okay, what is the quality of love
02:59:28 if it is divided over multiple partners, right?
02:59:31 And what is the net consequence for love in a society
02:59:36 when multiple people will be frozen out
02:59:38 for every individual male in this case who has it?
02:59:41 And what I would argue is, and you know,
02:59:46 this is weird to even talk about,
02:59:47 but this is partially me just talking
02:59:49 from personal experience.
02:59:51 I think there actually is a monogamy program in us
02:59:54 and it’s not automatic.
02:59:55 But if you take it seriously, you can find it
03:00:00 and frankly, marriage, and it doesn’t have to be marriage,
03:00:04 but whatever it is that results in a lifelong bond
03:00:07 with a partner has gotten a very bad rap.
03:00:10 You know, it’s the butt of too many jokes.
03:00:12 But the truth is, it’s hugely rewarding, it’s not easy.
03:00:18 But if you know that you’re looking for something, right?
03:00:20 If you know that the objective actually exists
03:00:22 and it’s not some utopian fantasy that can’t be found,
03:00:25 if you know that there’s some real world, you know,
03:00:28 warts and all version of it, then you might actually think,
03:00:32 hey, that is something I want and you might pursue it
03:00:34 and my guess is you’d be very happy when you find it.
03:00:36 Yeah, I think there is, getting to the fundamental
03:00:39 and the emergent, I feel like there is some kind of physics
03:00:43 of love.
03:00:44 So one, there’s a conservation thing going on.
03:00:47 So if you have like many partners, yeah, in theory,
03:00:51 you should be able to love all of them deeply.
03:00:54 But it seems like in reality that love gets split.
03:00:57 Yep.
03:00:58 Now, there’s another law that’s interesting
03:01:01 in terms of monogamy.
03:01:02 I don’t know if it’s at the physics level,
03:01:04 but if you are in a monogamous relationship by choice
03:01:10 and almost as in slight rebellion to social norms,
03:01:16 that’s much more powerful.
03:01:17 Like if you choose that one partnership,
03:01:20 that’s also more powerful.
03:01:22 If like everybody’s in a monogamous,
03:01:24 this pressure to be married and this pressure of society,
03:01:27 that’s different because that’s almost like a constraint
03:01:30 on your freedom that is enforced by something
03:01:33 other than your own ideals.
03:01:35 It’s by somebody else.
03:01:37 When you yourself choose to, I guess,
03:01:40 create these constraints, that enriches that love.
03:01:45 So there’s some kind of love function,
03:01:47 like E equals MC squared, but for love,
03:01:50 that I feel like if you have less partners
03:01:53 and it’s done by choice, that can maximize that.
03:01:56 And that love can transcend the biology,
03:02:00 transcend the evolutionary biology forces
03:02:03 that have to do much more with survival
03:02:06 and all those kinds of things.
03:02:07 It can transcend to take us to a richer experience,
03:02:11 which we have the luxury of having,
03:02:13 exploring of happiness, of joy, of fulfillment,
03:02:17 all those kinds of things.
03:02:19 Totally agree with this.
03:02:21 And there’s no question that by choice,
03:02:24 when there are other choices,
03:02:26 imbues it with meaning that it might not otherwise have.
03:02:30 I would also say, I’m really struck by,
03:02:35 and I have a hard time not feeling terrible sadness
03:02:40 over what younger people are coming
03:02:44 to think about this topic.
03:02:46 I think they’re missing something so important
03:02:49 and so hard to phrase,
03:02:51 and they don’t even know that they’re missing it.
03:02:54 They might know that they’re unhappy,
03:02:55 but they don’t understand what it is
03:02:58 they’re even looking for,
03:02:58 because nobody’s really been honest with them
03:03:00 about what their choices are.
03:03:02 And I have to say, if I was a young person,
03:03:05 or if I was advising a young person,
03:03:06 which I used to do, again, a million years ago
03:03:09 when I was a college professor four years ago,
03:03:12 but I used to talk to students.
03:03:13 I knew my students really well,
03:03:15 and they would ask questions about this,
03:03:16 and they were always curious
03:03:17 because Heather and I seemed to have a good relationship,
03:03:19 and many of them knew both of us.
03:03:22 So they would talk to us about this.
03:03:24 If I was advising somebody, I would say,
03:03:28 do not bypass the possibility
03:03:30 that what you are supposed to do is find somebody worthy,
03:03:36 somebody who can handle it,
03:03:37 somebody who you are compatible with,
03:03:39 and that you don’t have to be perfectly compatible.
03:03:41 It’s not about dating until you find the one.
03:03:44 It’s about finding somebody whose underlying values
03:03:48 and viewpoint are complementary to yours,
03:03:51 sufficient that you fall in love.
03:03:53 If you find that person, opt out together.
03:03:58 Get out of this damn system
03:04:00 that’s telling you what’s sophisticated
03:04:02 to think about love and romance and sex.
03:04:04 Ignore it together, all right?
03:04:06 That’s the key, and I believe you’ll end up laughing
03:04:11 in the end if you do it.
03:04:12 You’ll discover, wow, that’s a hellscape
03:04:16 that I opted out of, and this thing I opted into?
03:04:19 Complicated, difficult, worth it.
03:04:22 Nothing that’s worth it is ever not difficult,
03:04:25 so we should even just skip
03:04:27 the whole statement about difficult.
03:04:30 Yeah, all right.
03:04:30 I just, I wanna be honest.
03:04:32 It’s not like, oh, it’s nonstop joy.
03:04:35 No, it’s fricking complex, but worth it?
03:04:38 No question in my mind.
03:04:41 Is there advice outside of love
03:04:42 that you can give to young people?
03:04:45 You were a million years ago a professor.
03:04:49 Is there advice you can give to young people,
03:04:51 high schoolers, college students about career, about life?
03:04:56 Yeah, but it’s not, they’re not gonna like it
03:04:58 because it’s not easy to operationalize,
03:05:00 and this was a problem when I was a college professor, too.
03:05:03 People would ask me what they should do.
03:05:04 Should they go to graduate school?
03:05:06 I had almost nothing useful to say
03:05:08 because the job market and the market of prejob training
03:05:14 and all of that, these things are all so distorted
03:05:19 and corrupt that I didn’t wanna point anybody to anything
03:05:23 because it’s all broken, and I would tell them that,
03:05:26 but I would say that results in a kind of meta level advice
03:05:31 that I do think is useful.
03:05:33 You don’t know what’s coming.
03:05:35 You don’t know where the opportunities will be.
03:05:38 You should invest in tools rather than knowledge.
03:05:42 To the extent that you can do things,
03:05:44 you can repurpose that no matter what the future brings
03:05:47 to the extent that if you, as a robot guy,
03:05:51 you’ve got the skills of a robot guy.
03:05:53 Now, if civilization failed
03:05:56 and the stuff of robot building disappeared with it,
03:06:00 you’d still have the mind of a robot guy,
03:06:02 and the mind of a robot guy can retool
03:06:04 around all kinds of things, whether you’re forced to work
03:06:08 with fibers that are made into ropes.
03:06:12 Your mechanical mind would be useful in all kinds of places,
03:06:15 so invest in tools like that that can be easily repurposed,
03:06:19 and invest in combinations of tools, right?
03:06:23 If civilization keeps limping along,
03:06:28 you’re gonna be up against all sorts of people
03:06:30 who have studied the things that you studied, right?
03:06:33 If you think, hey, computer programming
03:06:34 is really, really cool, and you pick up computer programming,
03:06:37 guess what, you just entered a large group of people
03:06:40 who have that skill, and many of them will be better
03:06:42 than you, almost certainly.
03:06:44 On the other hand, if you combine that with something else
03:06:48 that’s very rarely combined with it,
03:06:50 if you have, I don’t know if it’s carpentry
03:06:54 and computer programming, if you take combinations
03:06:57 of things that are, even if they’re both common,
03:07:00 but they’re not commonly found together,
03:07:03 then those combinations create a rarefied space
03:07:06 where you inhabit it, and even if the things
03:07:08 don’t even really touch, but nonetheless,
03:07:11 they create a mind in which the two things are live
03:07:13 and you can move back and forth between them
03:07:15 and step out of your own perspective
03:07:18 by moving from one to the other,
03:07:20 that will increase what you can see
03:07:22 and the quality of your tools.
03:07:24 And so anyway, that isn’t useful advice.
03:07:26 It doesn’t tell you whether you should go
03:07:27 to graduate school or not, but it does tell you
03:07:30 the one thing we can say for certain about the future
03:07:33 is that it’s uncertain, and so prepare for it.
03:07:36 And like you said, there’s cool things to be discovered
03:07:38 in the intersection of fields and ideas.
03:07:42 And I would look at grad school that way,
03:07:44 actually, if you do go, or I see,
03:07:50 I mean, this is such a, like every course
03:07:52 in grad school, undergrad too,
03:07:55 was like this little journey that you’re on
03:07:57 that explores a particular field.
03:08:00 And it’s not immediately obvious how useful it is,
03:08:03 but it allows you to discover intersections
03:08:08 between that thing and some other thing.
03:08:11 So you’re bringing to the table these pieces of knowledge,
03:08:16 some of which when intersected might create a niche
03:08:19 that’s completely novel, unique, and will bring you joy.
03:08:23 I mean, I took a huge number of courses
03:08:25 in theoretical computer science.
03:08:28 Most of them seem useless, but they totally changed
03:08:31 the way I see the world in ways that I’m not prepared to,
03:08:34 or it’s a little bit difficult to, kind of make explicit,
03:08:38 but taken together, they’ve allowed me to see,
03:08:44 for example, the world of robotics totally different
03:08:48 and different from many of my colleagues
03:08:50 and friends and so on.
03:08:51 And I think that’s a good way to see it if you go
03:08:54 to grad school, as an opportunity
03:08:59 to explore intersections of fields,
03:09:01 even if the individual fields seem useless.
03:09:04 Yeah, and useless doesn’t mean useless, right?
03:09:07 Useless means not directly applicable,
03:09:09 but a good, useless course can be the best one
03:09:12 you ever took.
03:09:14 Yeah, I took James Joyce, a course on James Joyce,
03:09:18 and that was truly useless.
03:09:21 Well, I took immunobiology in the medical school
03:09:25 when I was at Penn as, I guess I would have been
03:09:29 a freshman or a sophomore.
03:09:30 I wasn’t supposed to be in this class.
03:09:33 It blew my goddamn mind, and it still does, right?
03:09:37 I mean, we had this, I don’t even know who it was,
03:09:39 but we had this great professor who was highly placed
03:09:42 in the world of immunobiology.
03:09:44 The course is called Immunobiology, not immunology.
03:09:47 Immunobiology, it had the right focus,
03:09:50 and as I recall it, the professor stood sideways
03:09:54 to the chalkboard, staring off into space,
03:09:57 literally stroking his beard with this bemused look
03:10:01 on his face through the entire lecture.
03:10:04 And you had all these medical students
03:10:05 who were so furiously writing notes
03:10:07 that I don’t even think they were noticing
03:10:08 the person delivering this thing,
03:10:09 but I got what this guy was smiling about.
03:10:13 It was like so, what he was describing,
03:10:16 adaptive immunity is so marvelous, right?
03:10:18 That it was like almost a privilege to even be saying it
03:10:21 to a room full of people who were listening, you know?
03:10:23 But anyway, yeah, I took that course,
03:10:25 and lo and behold, COVID.
03:10:27 That’s gonna be useful.
03:10:28 Well, yeah, suddenly it’s front and center,
03:10:32 and wow, am I glad I took it.
03:10:33 But anyway, yeah, useless courses are great.
03:10:37 And actually, Eric gave me one of the greater pieces
03:10:40 of advice, at least for college, that anyone’s ever given,
03:10:43 which was don’t worry about the prereqs.
03:10:46 Take it anyway, right?
03:10:48 But now, I don’t even know if kids can do this now
03:10:50 because the prereqs are now enforced by a computer.
03:10:53 But back in the day, if you didn’t mention
03:10:56 that you didn’t have the prereqs,
03:10:58 nobody stopped you from taking the course.
03:10:59 And what he told me, which I didn’t know,
03:11:01 was that often the advanced courses are easier in some way.
03:11:06 The material’s complex, but it’s not like intro bio
03:11:11 where you’re learning a thousand things at once, right?
03:11:14 It’s like focused on something.
03:11:16 So if you dedicate yourself, you can pull it off.
03:11:18 Yeah, stay with an idea for many weeks at a time,
03:11:21 and it’s ultimately rewarding,
03:11:22 and not as difficult as it looks.
03:11:25 Can I ask you a ridiculous question?
03:11:27 Please.
03:11:28 What do you think is the meaning of life?
03:11:34 Well, I feel terrible having to give you the answer.
03:11:38 I realize you asked the question,
03:11:40 but if I tell you, you’re gonna again feel bad.
03:11:43 I don’t wanna do that.
03:11:44 But look, there’s two.
03:11:46 There can be a disappointment.
03:11:47 No, it’s gonna be a horror, right?
03:11:50 Because we actually know the answer to the question.
03:11:52 Oh no.
03:11:53 It’s completely meaningless.
03:11:56 There is nothing that we can do
03:11:58 that escapes the heat death of the universe
03:12:00 or whatever it is that happens at the end.
03:12:02 And we’re not gonna make it there anyway.
03:12:04 But even if you were optimistic about our ability
03:12:07 to escape every existential hazard indefinitely,
03:12:13 ultimately it’s all for naught and we know it, right?
03:12:17 That said, once you stare into that abyss,
03:12:20 and then it stares back and laughs or whatever happens,
03:12:24 then the question is, okay, given that,
03:12:27 can I relax a little bit, right?
03:12:29 And figure out, well, what would make sense
03:12:31 if that were true, right?
03:12:34 And I think there’s something very clear to me.
03:12:37 I think if you do all of the,
03:12:38 if I just take the values that I’m sure we share
03:12:41 and extrapolate from them,
03:12:43 I think the following thing is actually a moral imperative.
03:12:48 Being a human and having opportunity
03:12:51 is absolutely fucking awesome, right?
03:12:54 A lot of people don’t make use of the opportunity
03:12:56 and a lot of people don’t have opportunity, right?
03:12:58 They get to be human, but they’re too constrained
03:13:00 by keeping a roof over their heads to really be free.
03:13:03 But being a free human is fantastic.
03:13:07 And being a free human on this beautiful planet,
03:13:10 crippled as it may be, is unparalleled.
03:13:13 I mean, what could be better?
03:13:15 How lucky are we that we get that, right?
03:13:17 So if that’s true, that it is awesome to be human
03:13:21 and to be free, then surely it is our obligation
03:13:25 to deliver that opportunity to as many people as we can.
03:13:29 And how do you do that?
03:13:30 Well, I think I know what job one is.
03:13:33 Job one is we have to get sustainable.
03:13:36 The way to get the maximum number of humans
03:13:39 to have that opportunity to be both here and free
03:13:42 is to make sure that there isn’t a limit
03:13:44 on how long we can keep doing this.
03:13:46 That effectively requires us to reach sustainability.
03:13:50 And then at sustainability, you could have a horror show
03:13:54 of sustainability, right?
03:13:55 You could have a totalitarian sustainability.
03:13:58 That’s not the objective.
03:14:00 The objective is to liberate people.
03:14:02 And so the question, the whole fourth frontier question,
03:14:04 frankly, is how do you get to a sustainable
03:14:08 and indefinitely sustainable state
03:14:10 in which people feel liberated,
03:14:13 in which they are liberated,
03:14:14 to pursue the things that actually matter,
03:14:16 to pursue beauty, truth, compassion, connection,
03:14:22 all of those things that we could list as unalloyed goods,
03:14:27 those are the things that people should be most liberated
03:14:29 to do in a system that really functions.
03:14:31 And anyway, my point is,
03:14:35 I don’t know how precise that calculation is,
03:14:37 but I’m pretty sure it’s not wrong.
03:14:38 It’s accurate enough.
03:14:39 And if it is accurate enough, then the point is, okay,
03:14:43 well, there’s no ultimate meaning,
03:14:45 but the proximate meaning is that one.
03:14:47 How many people can we get to have this wonderful experience
03:14:50 that we’ve gotten to have, right?
03:14:52 And there’s no way that’s so wrong
03:14:54 that if I invest my life in it,
03:14:56 that I’m making some big error.
03:14:58 I’m sure of that.
03:14:59 Life is awesome, and we wanna spread the awesome
03:15:02 as much as possible.
03:15:03 Yeah, you sum it up that way, spread the awesome.
03:15:05 Spread the awesome.
03:15:06 So that’s the fourth frontier.
03:15:07 And if that fails, if the fourth frontier fails,
03:15:10 the fifth frontier will be defined by robots,
03:15:12 and hopefully they’ll learn the lessons
03:15:15 of the mistakes that the humans made
03:15:18 and build a better world with more awesome.
03:15:20 I hope they’re very happy here
03:15:21 and that they do a better job with the place than we did.
03:15:23 Yeah.
03:15:25 Brett.
03:15:26 I can’t believe it took us this long to talk,
03:15:29 as I mentioned to you before,
03:15:31 that we haven’t actually spoken, I think, at all.
03:15:35 And I’ve always felt that we’re already friends.
03:15:39 I don’t know how that works
03:15:40 because I’ve listened to your podcasts a lot.
03:15:42 I’ve also sort of loved your brother.
03:15:46 And so it was like,
03:15:48 we’ve known each other for the longest time,
03:15:49 and I hope we can be friends and talk often again.
03:15:53 And I hope that you get a chance to meet
03:15:56 some of my robot friends as well and fall in love.
03:15:59 And I’m so glad that you love robots as well.
03:16:02 So we get to share in that love.
03:16:04 So I can’t wait for us to interact together.
03:16:07 So we went from talking about some of the worst failures
03:16:11 of humanity to some of the most beautiful
03:16:14 aspects of humanity.
03:16:16 What else can you ask for from a conversation?
03:16:18 Thank you so much for talking today.
03:16:20 You know, Lex, I feel the same way towards you,
03:16:23 and I really appreciate it.
03:16:24 This has been a lot of fun,
03:16:25 and I’m looking forward to our next one.
03:16:27 Thanks for listening to this conversation
03:16:29 with Brett Weinstein,
03:16:30 and thank you to the Jordan Harbinger Show,
03:16:32 ExpressVPN, Magic Spoon, and Four Sigmatic.
03:16:36 Check them out in the description to support this podcast.
03:16:39 And now, let me leave you with some words
03:16:41 from Charles Darwin.
03:16:43 Ignorance more frequently begets confidence
03:16:46 than does knowledge.
03:16:47 It is those who know little, not those who know much,
03:16:51 who so positively assert that this or that problem
03:16:55 will never be solved by science.
03:16:57 Thank you for listening, and hope to see you next time.