Rob Reid: The Existential Threat of Engineered Viruses and Lab Leaks #193

Transcript

00:00:00 The following is a conversation with Rob Reid, entrepreneur, author, and host of the After

00:00:05 On Podcast.

00:00:07 Sam Harris recommended that I absolutely must talk to Rob about his recent work on the

00:00:12 future of engineered pandemics.

00:00:15 I then listened to the four-hour special episode of Sam’s Making Sense podcast with Rob titled

00:00:21 Engineering the Apocalypse, and I was floored, and knew I had to talk to him.

00:00:27 Quick mention of our sponsors, Athletic Greens, Belcampo, Fundrise, and NetSuite.

00:00:33 Check them out in the description to support this podcast.

00:00:36 As a side note, let me say a few words about the lab leak hypothesis, which proposes that

00:00:41 COVID-19 is a product of gain of function research on coronaviruses conducted at the

00:00:47 Wuhan Institute of Virology that was then accidentally leaked due to human error.

00:00:53 For context, this lab is biosafety level 4, BSL 4, and it investigates coronaviruses.

00:00:59 BSL 4 is the highest level of safety, but if you look at all the human in the loop pieces

00:01:04 required to achieve this level of safety, it becomes clear that even BSL 4 labs are

00:01:09 highly susceptible to human error.

00:01:11 To me, whether the virus leaked from the lab or not, getting to the bottom of what happened

00:01:15 is about much more than this particular catastrophic case.

00:01:18 It is a test for our scientific, political, journalistic, and social institutions of how

00:01:25 well we can prepare and respond to threats that can cripple or destroy human civilization.

00:01:31 If we continue gain of function research on viruses, eventually these viruses will leak,

00:01:37 and they will be more deadly and more contagious.

00:01:40 We can pretend that won’t happen, or we can openly and honestly talk about the risks involved.

00:01:45 This research can both save and destroy human life on earth as we know it.

00:01:49 It’s a powerful double edged sword.

00:01:52 If YouTube and other platforms censor conversations about this, if scientists self censor conversations

00:01:59 about this, we’ll become merely victims of our brief homo sapiens story, not its heroes.

00:02:05 As I said before, too carelessly labeling ideas as misinformation and dismissing them

00:02:11 because of that will eventually destroy our ability to discover the truth, and without

00:02:16 truth we don’t have a fighting chance against the great filter before us.

00:02:22 This is the Lex Fridman Podcast, and here is my conversation with Rob Reid.

00:02:28 I have seen evidence on the internet that you have a sense of humor, allegedly, but

00:02:33 you also talk and think about the destruction of human civilization.

00:02:38 What do you think of the Elon Musk hypothesis that the most entertaining outcome is the

00:02:42 most likely?

00:02:44 And he, I think, followed on to say, as seen from an external observer, like if somebody

00:02:49 was watching us, it seems we come up with creative ways of progressing our civilization

00:02:56 that’s fun to watch.

00:02:57 Yeah, so he, exactly, he said from the standpoint of the observer, not the participant, I think.

00:03:03 And so what’s interesting about that, those were, I think, just a couple of freestanding

00:03:07 tweets and delivered without a whole lot of wrapper of context, so it’s left to the mind

00:03:12 of the reader of the tweets to infer what he was talking about.

00:03:16 So that’s kind of like, it provokes some interesting thoughts.

00:03:20 Like first of all, it presupposes the existence of an observer, and it also presupposes that

00:03:26 the observer wishes to be entertained and has some mechanism of enforcing their desire

00:03:32 to be entertained.

00:03:33 So there’s like a lot underpinning that.

00:03:35 And to me, that suggests, particularly coming from Elon, that it’s a reference to simulation

00:03:40 theory, that somebody is out there and has far greater insights and a far greater ability

00:03:46 to, let’s say, peer into a single individual life and find that entertaining and full of

00:03:52 plot twists and surprises and either a happy or tragic ending, or they have an incredible

00:03:58 meta view and they can watch the arc of civilization unfolding in a way that is entertaining and

00:04:04 full of plot twists and surprises and a happy or unhappy ending.

00:04:08 So okay, so we’re presupposing an observer.

00:04:11 Then on top of that, when you think about it, you’re also presupposing a producer,

00:04:17 because the act of observation is mostly fun if there are plot twists and surprises and

00:04:23 other developments that you weren’t foreseeing.

00:04:25 I have reread my own novels, and that’s fun because it’s something that I worked hard

00:04:30 on and I slaved over and I love, but there aren’t a lot of surprises in there.

00:04:34 So now I’m thinking we need a producer and an observer for that to be true.

00:04:39 And on top of that, it’s got to be a very competent producer because Elon said the most

00:04:45 entertaining outcome is the most likely one.

00:04:48 So there’s lots of layers for thinking about that.

00:04:51 And when you’ve got a producer who’s trying to make it entertaining, it makes me think

00:04:55 of there was a South Park episode in which Earth turned out to be a reality show.

00:05:00 And somehow we had failed to entertain the audience as much as we used to, so the Earth

00:05:05 show was going to get canceled, et cetera.

00:05:09 So taking all that together, and I’m obviously being a little bit playful in laying this

00:05:13 out, what is the evidence that we have that we are in a reality that is intended to be

00:05:20 most entertaining?

00:05:21 Now you could look at that reality on the level of individual lives or the whole arc

00:05:26 of civilization, other levels as well, I’m sure.

00:05:30 But just looking from my own life, I think I’d make a pretty lousy show.

00:05:34 I spend an inordinate amount of time just looking at a computer.

00:05:38 I don’t think that’s very entertaining.

00:05:40 And there’s just a completely inadequate level of shootouts and car chases in my life.

00:05:46 I mean, I’ll go weeks, even months without a single shootout or car chase.

00:05:50 That just means that you’re one of the non player characters in this game.

00:05:53 You’re just waiting.

00:05:54 I’m an extra.

00:05:55 You’re an extra that’s waiting for your one opportunity for a brief moment to actually

00:05:59 interact with one of the main characters in the play.

00:06:03 Okay, that’s good.

00:06:04 So okay, so we rule out me being the star of the show, which I probably could have guessed

00:06:08 at.

00:06:09 Anyway, but then even the arc of civilization, I mean, there have been a lot of really intriguing

00:06:13 things that have happened and a lot of astounding things that have happened.

00:06:16 But I would have some werewolves, I’d have some zombies, I would have some really improbable

00:06:23 developments like maybe Canada absorbing the United States.

00:06:28 So I don’t know, I’m not sure if we’re necessarily designed for maximum entertainment.

00:06:33 But if we are, that will mean that 2020 is just a prequel for even more bizarre years

00:06:40 ahead.

00:06:41 So I kind of hope that we’re not designed for maximum entertainment.

00:06:45 Well, the night is still young in terms of Canada.

00:06:47 But do you think it’s possible for the observer and the producer to be kind of emergent?

00:06:52 So meaning, it does seem when you kind of watch memes on the internet, the funny ones,

00:06:59 the entertaining ones spread more efficiently.

00:07:01 They do.

00:07:02 I mean, I don’t know what it is about the human mind that soaks up en masse funny things

00:07:11 much more sort of aggressively.

00:07:12 It’s more viral in the full sense of that word.

00:07:16 Is there some sense that whatever the evolutionary process that created our cognitive capabilities

00:07:23 is the same process that’s going to, in an emergent way, create the most entertaining

00:07:28 outcome, the most memeifiable outcome, the most viral outcome if we were to share it

00:07:36 on Twitter?

00:07:37 Yeah, that’s interesting.

00:07:38 Yeah, we do have an incredible ability.

00:07:41 Like I mean, how many memes are created in a given day and the ones that go viral are

00:07:45 almost uniformly funny, at least to somebody with a particular sense of humor.

00:07:48 Yeah, I’d have to think about that.

00:07:53 We are definitely great at creating atomized units of funny.

00:07:59 Like in the example that you used, there are going to be X million brains parsing and judging

00:08:05 whether this meme is retweetable or not.

00:08:07 And so that sort of atomic element of funniness, of entertainingness, et cetera, we definitely

00:08:14 have an environment that’s good at selecting for that and selective pressure and everything

00:08:20 else that’s going on.

00:08:21 But in terms of the entire ecosystem of conscious systems here on the Earth driving for a level

00:08:31 of entertainment, that is on such a much higher level that I don’t know if that would necessarily

00:08:38 follow directly from the fact that atomic units of entertainment are very, very aptly

00:08:44 selected for us.

00:08:45 I don’t know.

00:08:46 Do you find it compelling or useful to think about human civilization from the perspective

00:08:54 of the ideas versus the perspective of the individual human brains?

00:08:59 Just almost thinking about the ideas or the memes.

00:09:02 This is the Dawkins thing, the memes as the organisms and then the humans as just like vehicles

00:09:09 for briefly carrying those organisms as they jump around and spread.

00:09:13 Yeah, for propagating them, mutating them, putting selective pressure on them, et cetera.

00:09:19 I mean, I found Dawkins’ interpretation, or his launching of the idea of memes, as just kind of an afterthought

00:09:27 to his unbelievably brilliant book about the selfish gene.

00:09:32 What a PS to put at the end of a long chunk of writing, profoundly interesting.

00:09:37 I view the relationship though between humans and memes as probably an oversimplification,

00:09:43 but maybe a little bit like the relationship between flowers and bees, right?

00:09:47 Do flowers have bees or do bees in a sense have flowers?

00:09:52 And the answer is it is a very, very symbiotic relationship in which both have semi independent

00:09:58 roles that they play and both are highly dependent upon the other.

00:10:03 And so in the case of bees, obviously, you could see the flower as being this monolithic

00:10:07 structure physically in relation to any given bee and it’s the source of food and sustenance.

00:10:14 So you could kind of say, well, flowers have bees.

00:10:17 But on the other hand, the flowers would obviously be doomed if

00:10:20 they weren’t being pollinated by the bees.

00:10:22 So you could kind of say, well, you know, flowers are really an expression

00:10:26 of what the bees need.

00:10:28 And the truth is a symbiosis.

00:10:30 So with, with memes and human minds, our brains are clearly the Petri dishes in which memes

00:10:39 are either propagated or not propagated, get mutated or don’t get mutated; they are the

00:10:45 venue in which selective competition plays out between different memes.

00:10:51 So all of that is very true.

00:10:53 And you could look at that and say, really, the human mind is a product of memes and

00:10:58 ideas have us rather than us having ideas.

00:11:01 But at the same time, let’s take a catchy tune as an example of a meme.

00:11:07 That catchy tune did originate in a human mind.

00:11:11 Somebody had to structure that thing.

00:11:13 And as much as I like Elizabeth Gilbert’s TED talk about how the universe, I’m simplifying,

00:11:18 but you know, kind of the ideas find their way in this beautiful TED talk.

00:11:22 It’s very lyrical.

00:11:23 She talked about, you know, ideas and prose kind of beaming into our minds.

00:11:30 And you know, she talked about needing to pull over to the side of the road when she

00:11:33 got inspiration for a particular paragraph or a particular idea and a burning need to

00:11:38 write that down.

00:11:40 I love that.

00:11:41 It’s beautiful. As a writer, as a novelist myself, I’ve never had that experience.

00:11:47 And I think that really most things that do become memes are the product of a great deal

00:11:54 of deliberate and willful exertion of a conscious mind.

00:11:59 And so like the bees and the flowers, I think there’s a great symbiosis and they both kind

00:12:04 of have one another.

00:12:05 Ideas have us, but we have ideas for real.

00:12:08 If we could take a little bit of a tangent, Stephen King on writing, you as a great writer,

00:12:14 you’re dropping a hint here that the ideas don’t come to you.

00:12:18 It’s a grind of sort of, it’s almost like you’re mining for gold.

00:12:22 It’s more of a very deliberate, rigorous daily process.

00:12:28 So maybe can you talk about the writing process?

00:12:32 How do you write well?

00:12:36 And maybe if you want to step outside of yourself, almost like give advice to an aspiring writer,

00:12:42 what does it take to write the best work of your life?

00:12:46 Well it would be very different if it’s fiction versus nonfiction.

00:12:50 And I’ve done both.

00:12:51 I’ve written two nonfiction books and two works of fiction.

00:12:55 Two works of fiction being more recent, I’m going to focus on that right now because that’s

00:12:58 more toweringly on my mind.

00:13:02 Amongst novelists, again, this is an oversimplification, but there’s kind of two schools of thought.

00:13:08 Some people really like to fly by the seat of their pants and some people really, really

00:13:12 like to outline, to plot.

00:13:15 So there’s plotters and pantsers, I guess is one way that people look at it.

00:13:20 And as with most things, there is a great continuum in between and I’m somewhere on

00:13:24 that continuum, but I lean, I guess, a little bit more toward the plotter.

00:13:29 And so when I do start a novel, I have a pretty strong point of view about how it’s going

00:13:35 to end and I have a very strong point of view about how it’s going to begin.

00:13:39 And I do try to make an effort of making an outline that I know I’m going to be extremely

00:13:44 unfaithful to in the actual execution of the story, but trying to make an outline that

00:13:49 gets us from here to there and notion of subplots and beats and rhythm and different characters

00:13:54 and so forth.

00:13:56 But then when I get into the process, that outline, particularly the center of it, ultimately

00:14:02 inevitably morphs a great deal.

00:14:03 And I think if I were personally a rigorous outliner, I would not allow that to happen.

00:14:08 I also would make a much more vigorous skeleton before I start.

00:14:14 So I think people who are really in that plotting outlining mode are people who write page turners,

00:14:22 people who write spy novels or supernatural adventures, where you really want a relentless

00:14:30 pace of events, action, plot twists, conspiracy, et cetera.

00:14:37 And that is really the bone.

00:14:40 That’s really the skeletal structure.

00:14:42 So I think folks who write that kind of book are really very much on the outlining side.

00:14:47 And then I think people who write what’s often referred to as literary fiction for lack of

00:14:52 a better term, where it’s more about sort of aura and ambiance and character development

00:14:59 and experience and inner experience and inner journey and so forth, I think that group is

00:15:05 more likely to fly by the seat of their pants.

00:15:07 And I know people who start with a blank page and just see where it’s going to go.

00:15:11 I’m a little bit more on the plotting side.

00:15:14 Now you asked what makes something, at least in the mind of the writer, as great as it

00:15:21 can be.

00:15:22 For me, it’s an astonishingly high percentage of it is editing as opposed to the initial

00:15:26 writing.

00:15:27 For every hour that I spend writing new prose, like new pages, new paragraphs, new bits of

00:15:36 the book, I probably spend…

00:15:40 I wish I kept a count.

00:15:42 I wish I had one of those pieces of software that lawyers use to track how much time I’m

00:15:47 spending doing this or that.

00:15:48 But I would say it’s at least four or five hours and maybe as many as 10 that I spend

00:15:53 editing.

00:15:54 And so it’s relentless for me.

00:15:56 For each one hour of writing, you said?

00:15:58 I’d say that.

00:15:59 Four, wow.

00:16:00 I mean, I write because I edit and I spend just relentlessly polishing and pruning and

00:16:07 sometimes on the micro level of just like, does the rhythm of the sentence feel right?

00:16:12 Do I need to carve a syllable or something so it can land?

00:16:15 Like as micro as that to as macro as like, okay, I’m done but the book is 750 pages long

00:16:21 and it’s way too bloated and I need to lop a third out of it.

00:16:25 Problems on those two orders of magnitude and everything in between, that is an enormous

00:16:30 amount of my time.

00:16:31 And I also write music, write and record and produce music.

00:16:37 And there the ratio is even higher.

00:16:40 Every minute that I spend or my band spends laying down that original audio, it’s a very

00:16:47 high proportion of hours that go into just making it all hang together and sound just

00:16:52 right.

00:16:53 So I think that’s true of a lot of creative processes.

00:16:56 I know it’s true of sculpture.

00:16:58 I believe it’s true of woodwork.

00:16:59 My dad was an amateur woodworker and he spent a huge amount of time on sanding and polishing

00:17:04 at the end.

00:17:05 So I think a great deal of the sparkle comes from that part of the process, any creative

00:17:09 process.

00:17:10 Can I ask about the psychological, the demon side of that picture?

00:17:14 In the editing process, you’re ultimately judging the initial piece of work and you’re

00:17:18 judging and judging and judging.

00:17:20 How much of your time do you spend hating your work?

00:17:26 How much time do you spend in gratitude, impressed, thankful for how good the work that you will

00:17:33 put together is?

00:17:36 I spend almost all the time in a place that’s intermediate between those, but leaning toward

00:17:42 gratitude.

00:17:43 I spend almost all the time in a state of optimism that this thing that I have, I like,

00:17:49 I like quite a bit and I can make it better and better and better with every time I go

00:17:56 through it.

00:17:57 So I spend most of my time in a state of optimism.

00:18:01 I think I personally oscillate much more aggressively between those two, where I wouldn’t be able

00:18:06 to find the average.

00:18:07 I go pretty deep.

00:18:11 Marvin Minsky from MIT had this advice, I guess, to what it takes to be successful in

00:18:19 science and research is to hate everything you do.

00:18:23 You’ve ever done in the past.

00:18:25 I mean, at least he was speaking about himself that the key to his success was to hate everything

00:18:31 he’s ever done.

00:18:32 I have a little Marvin Minsky there in me too, to sort of always be exceptionally self

00:18:39 critical, but almost like self critical about the work, but grateful for the chance to be

00:18:45 able to do the work.

00:18:46 If that makes sense.

00:18:47 It makes perfect sense.

00:18:48 But that, you know, each one of us has to strike a certain kind of balance.

00:18:56 But back to the destruction of human civilization.

00:19:00 If humans destroy ourselves in the next hundred years, what will be the most likely source,

00:19:08 the most likely reason that we destroy ourselves?

00:19:10 Well, let’s see, a hundred years.

00:19:14 It’s hard for me to comfortably predict out that far, and it’s something I give a lot

00:19:19 more thought to, I think, than normal folks simply because I am a science fiction writer.

00:19:25 And you know, I feel with the acceleration of technological progress, it’s really hard

00:19:31 to foresee out more than just a few decades.

00:19:33 I mean, comparing today’s world to that of 1921, where we are right now, a century later,

00:19:39 it would have been so unforeseeable.

00:19:42 And I just don’t know what’s going to happen, particularly with exponential technologies.

00:19:46 I mean, our intuitions reliably defeat us with exponential technologies like computing

00:19:51 and synthetic biology and, you know, how we might destroy ourselves in the hundred year

00:19:57 time frame might have everything to do with breakthroughs in nanotechnology 40 years from

00:20:02 now and then how rapidly those breakthroughs accelerate.

00:20:05 But in the near term that I’m comfortable predicting, let’s say 30 years, I would say

00:20:10 the most likely route to self destruction would be synthetic biology.

00:20:16 And I always say that with the gigantic caveat, and a very important one, that I find, and I’ll

00:20:22 abbreviate synthetic biology to SynBio just to save us some syllables.

00:20:25 I believe SynBio offers us simply stunning promise that we would be fools to deny ourselves.

00:20:34 So I’m not an anti SynBio person by any stretch.

00:20:37 I mean, SynBio has unbelievable odds of helping us beat cancer, helping us rescue the environment,

00:20:43 helping us do things that we would currently find imponderable.

00:20:46 So it’s an electrifying field.

00:20:48 But in the wrong hands, those hands either being incompetent or being malevolent.

00:20:54 In the wrong hands, synthetic biology to me has a much, much greater odds of leading to

00:21:03 our self destruction than something running amok with super AI, which I believe is a real

00:21:08 possibility and one we need to be concerned about.

00:21:10 But in the 30 year time frame, I think it’s a lesser one or nuclear weapons or anything

00:21:14 else that I can think of.

00:21:16 Can you explain that a little bit further?

00:21:18 So your concern is on the manmade versus the natural side of the pandemic frontier.

00:21:26 So we humans, engineering pathogens, engineering viruses is the concern here.

00:21:34 And maybe how do you see the possible trajectories happening here in terms of, is it malevolent

00:21:41 or is it accidents, oops, little mistakes or unintended consequences of particular actions

00:21:51 that ultimately lead to unexpected mistakes?

00:21:53 Well, both of them are a danger.

00:21:55 And I think the question of which is more likely has to do with two things.

00:22:00 One, do we take a lot of methodical, affordable, foresighted steps that we are absolutely capable

00:22:08 of taking right now to forestall the risk of a bad actor infecting us with something

00:22:14 that could have annihilating impacts?

00:22:17 And in the episode you referenced with Sam, we talked a great deal about that.

00:22:22 So do we take those steps?

00:22:24 And if we take those steps, I think the danger of malevolent rogue actors doing us in with

00:22:29 SynBio would plummet.

00:22:31 But you know, it’s always a question of if and we have a bad, bad and very long track

00:22:36 record of hitting the snooze bar after different natural pandemics have attacked us.

00:22:41 So that’s variable number one.

00:22:43 Variable number two is how much experimentation and pathogen development do we as a society

00:22:51 decide is acceptable in the realms of academia, government or private industry?

00:22:59 And if we decide as a society that it’s perfectly okay for people with varying research agendas

00:23:06 to create pathogens that if released could wipe out humanity, if we think that’s fine

00:23:12 and if that kind of work starts happening in one lab, five labs, 50 labs, 500 labs in

00:23:19 one country, then 10 countries, then 70 countries or whatever, that risk of a boo boo starts

00:23:26 rising astronomically.

00:23:28 And this won’t be a spoiler alert based on the way that I presented those two things,

00:23:33 but I think it’s unbelievably important to manage both of those risks.

00:23:37 The easier one to manage, although it wouldn’t be simple by any stretch because it would

00:23:42 have to be something that all nations agree on.

00:23:45 But the easiest way, the easier risk to manage is that of, hey guys, let’s not develop pathogens

00:23:52 that if they escaped from a lab could annihilate us.

00:23:56 There’s no line of research that justifies that.

00:23:58 And in my view, I mean, that’s the point of perspective we need to have.

00:24:02 We’d have to collectively agree that there’s no line of research that justifies that.

00:24:06 The reason why I believe that would be a highly rational conclusion is even the highest level

00:24:11 of biosafety lab in the world, biosafety lab level four.

00:24:15 And there are not a lot of BSL four labs in the world.

00:24:18 Things can and have leaked out of BSL four labs, and some of the work that’s

00:24:25 been done with potentially annihilating pathogens, which we can talk about, it’s actually done

00:24:30 at BSL three.

00:24:32 And so fundamentally any lab can leak.

00:24:36 We have proven ourselves to be incapable of creating a lab that is utterly impervious

00:24:41 to leaks.

00:24:42 So why in the world would we create something that, if God forbid it leaked, could annihilate

00:24:47 us all?

00:24:48 And by the way, almost all of the measures that are taken in biosafety level anything

00:24:53 labs are designed to prevent accidental leaks.

00:24:57 What happens if you have a malevolent insider?

00:24:59 We could talk about the psychology and the motivations of what would make a malevolent

00:25:03 insider who wants to release something annihilating in a bit.

00:25:07 I’m sure that we will.

00:25:08 But what if you have a malevolent insider?

00:25:11 Virtually none of the standards that go into biosafety level one, two, three, and four

00:25:17 are about preventing somebody hijacking the process.

00:25:20 Some of them are, but they’re mainly designed against accidents.

00:25:23 They’re imperfect against accidents.

00:25:25 And if this kind of work starts happening in lots and lots of labs with every lab you

00:25:29 add, the odds of there being a malevolent insider are naturally increased arithmetically

00:25:34 as the number of labs goes up.

00:25:36 Now on the front of somebody outside of a traditional

00:25:44 government, academic, or scientific environment creating something malevolent, again, there’s

00:25:50 protections that we can take both at the level of syn bio architecture, hardening the entire

00:25:57 syn bio ecosystem against terrible things being made that we don’t want to have out

00:26:03 there by rogue actors, to early detection, to lots and lots of other things that we can

00:26:09 do to dramatically mitigate that risk.

00:26:11 And I think if we do both of those things, A, decide that no, we’re not going to experimentally

00:26:16 make annihilating pathogens in leaky labs, and B, yes, we are going to take countermeasures

00:26:22 that are going to cost a fraction of our annual defense budget to preclude their creation,

00:26:28 then I think both risks get managed down.

00:26:31 But if you take one set of precautions and not the other, then the thing that you have

00:26:36 not taken precautions against immediately becomes the more likely outcome.

00:26:41 So can we talk about this kind of research and what’s actually done and what are the

00:26:45 positives and negatives of it?

00:26:47 So if we look at gain of function research and the kind of stuff that’s happening in

00:26:52 level three and level four BSL labs, what’s the whole idea here?

00:26:56 Is it trying to engineer viruses to understand how they behave?

00:27:01 You want to understand the dangerous ones.

00:27:03 Yeah.

00:27:04 So that would be the logic behind doing it.

00:27:06 And so gain of function can mean a lot of different things.

00:27:10 Viewed through a certain lens, gain of function research could be what you do when you create,

00:27:15 you know, GMOs, when you create, you know, hardy strains of corn that are resistant

00:27:20 to pesticides.

00:27:21 I mean, you could view that as gain of function.

00:27:23 So I’m going to refer to gain of function in a relatively narrow sense, which is actually

00:27:26 the sense that the term is usually used, which is in some way magnifying capabilities of

00:27:34 microorganisms to make them more dangerous, whether it’s more transmissible or more deadly.

00:27:40 And in that line of research, I’ll use an example from 2011 because it’s very illustrative

00:27:46 and it’s also very chilling.

00:27:48 Back in 2011, two separate labs independently of one another, I assume there was some kind

00:27:54 of communication between them, but they were basically independent projects, one in Holland

00:27:57 and one in Wisconsin, did gain of function research on something called H5N1 flu.

00:28:04 H5N1 is, you know, something that, at least on a lethality basis, makes COVID look like

00:28:11 a kitten.

00:28:12 You know, COVID, according to the World Health Organization, has a case fatality rate somewhere

00:28:15 between half a percent and one percent, H5N1 is closer to 60 percent, six zero.

00:28:21 And so that’s actually even slightly more lethal than Ebola.

00:28:24 It’s a very, very, very scary pathogen.

00:28:27 The good news about H5N1 is that it is barely, barely contagious.

00:28:33 But I believe it is in no way contagious human to human.

00:28:36 It requires, you know, very, very, very deep contact with birds, in most cases chickens.

00:28:44 And so if you’re a chicken farmer and you spend an enormous amount of time around them

00:28:49 and perhaps you get into situations in which you get a break in your skin and you’re interacting

00:28:54 intensely with fowl who, as it turns out, have H5N1, that’s when the jump comes.

00:29:01 But there’s no airborne transmission that we’re aware of, human to human.

00:29:05 I mean, not that it just doesn’t exist.

00:29:08 I think the World Health Organization did a relentless survey of the number of H5N1

00:29:14 cases.

00:29:15 I think they do it every year.

00:29:16 I saw one 10 year series where I think it was like 500 fatalities over the course of

00:29:22 a decade.

00:29:23 And that’s a drop in the bucket.

00:29:24 Kind of fun, fun fact, I believe the typical lethality from lightning over 10 years is

00:29:30 70,000 deaths.

00:29:31 So we think getting struck by lightning, pretty low risk, H5N1 much, much lower than that.

00:29:37 What happened in these experiments is the experimenters in both cases set out to make

00:29:43 H5N1 that would be contagious, that could create airborne transmission.

00:29:48 And so they basically passed it, I think in both cases, they passed it through a large

00:29:52 number of ferrets.

00:29:54 And so this wasn’t like CRISPR, there wasn’t even any CRISPR back in those days.

00:29:58 This was relatively straightforward, selecting for a particular outcome.

00:30:03 And after guiding the path and passing them through, again, I believe it was a series

00:30:07 of ferrets.

00:30:08 They did in fact come up with a version of H5N1 that is capable of airborne transmission.

00:30:14 Now they didn’t unleash it into the world.

00:30:17 They didn’t inject it into humans to see what would happen.

00:30:20 And so for those two reasons, we don’t really know how contagious it might have been.

00:30:25 But if it was as contagious as COVID, that could be a civilization threatening pathogen.

00:30:33 And why would you do it?

00:30:34 Well, the people who did it were good guys.

00:30:36 They were virologists.

00:30:38 I believe their agenda as they explained it was much as you said, let’s figure out what

00:30:43 a worst case scenario might look like so we can understand it better.

00:30:47 But my understanding is in both cases, it was done in BSL3 labs.

00:30:52 And so potential of leak, significantly nonzero, hopefully way below 1% but significantly nonzero.

00:31:01 And when you look at the consequences of an escape in terms of human lives, destruction

00:31:06 of a large portion of the economy, et cetera, and you do an expected value calculation on

00:31:10 whatever fraction of 1% that was, you would come up with a staggering cost, staggering

00:31:17 expected cost for this work.

00:31:19 So it should never have been carried out.
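
To make the expected value reasoning above concrete, here is a minimal illustrative sketch in Python; the leak probability, lab count, time horizon, and damage figure are assumptions chosen for illustration, not numbers from the conversation.

```python
# Illustrative expected-cost calculation for risky pathogen research.
# Every number below is an assumption chosen for illustration only.

p_leak_per_lab_year = 0.002   # assumed annual chance that one lab leaks the pathogen
labs = 5                      # assumed number of labs doing this work
years = 10                    # assumed duration of the research program
cost_if_released = 10e12      # assumed damage of a COVID-scale event, in dollars

# Probability of at least one leak across all labs and years.
p_any_leak = 1 - (1 - p_leak_per_lab_year) ** (labs * years)

expected_cost = p_any_leak * cost_if_released
print(f"P(at least one leak): {p_any_leak:.3f}")
print(f"Expected cost: ${expected_cost:,.0f}")
```

Even with a per-lab, per-year leak probability well under one percent, the expected cost under these assumptions lands in the hundreds of billions of dollars, which is the shape of the argument being made here.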

00:31:21 Now you might make an argument if you said, if you believed that H5N1 in nature is on

00:31:30 an inevitable path to airborne transmission, and it’s only going to be a small number of

00:31:35 years, A. And B, if it makes that transition, there is one set of changes to its metabolic

00:31:43 pathways and its genomic code and so forth, one that we have discovered.

00:31:49 So it is going to go from point A, which is where it is right now, to point B. We have

00:31:53 reliably engineered point B. That is the destination.

00:31:58 And we need to start fighting that right now because this is five years or less away.

00:32:02 Now that’d be a very different world.

00:32:03 That’d be like spotting an asteroid that’s coming toward the earth and is five years

00:32:06 off.

00:32:07 And yes, you marshal everything you can to resist that.

00:32:10 But there’s two problems with that perspective.

00:32:12 The first is, in however many thousands of generations that humans have been inhabiting

00:32:17 this planet, there has never been a transmissible form of H5N1.

00:32:21 And influenza has been around for a very long time.

00:32:23 So there is no case for inevitability of this kind of a jump to airborne transmission.

00:32:30 So we’re not on a freight train to that outcome.

00:32:33 And if there was inevitability around that, it’s not like there’s just one set of genetic

00:32:38 code that would get there.

00:32:41 There’s all kinds of different mutations that could conceivably result in that kind of an

00:32:46 outcome, unbelievable diversity of mutations.

00:32:49 And so we’re not actually creating something we’re inevitably going to face, but we are

00:32:54 creating something, we are creating a very powerful and unbelievably negative card and

00:33:00 injecting it into the deck, a card that nature never put into the deck.

00:33:04 So in that case, I just don’t see any moral or scientific justification for that kind

00:33:11 of work.

00:33:12 And interestingly, there was quite a bit of excitement and concern about this when the

00:33:18 work came out.

00:33:19 One of the teams was going to publish their results in Science, the other in Nature.

00:33:27 And there were a lot of editorials and a lot of scientists were saying, this is crazy.

00:33:27 And publication of those papers did get suspended.

00:33:31 And not long after that, there was a pause put on US government funding, NIH funding

00:33:36 on gain of function research.

00:33:38 But both of those speed bumps were ultimately removed.

00:33:41 Those papers did ultimately get published.

00:33:43 And that pause on funding, you know, ceased long ago.

00:33:47 And in fact, those two very projects, my understanding is resumed their funding, got their government

00:33:52 funding back.

00:33:53 I don’t know why a Dutch project is getting NIH funding, but whatever, about a year and

00:33:57 a half ago.

00:33:58 So as far as the US government and regulators are concerned, it’s all systems go for gain

00:34:04 of function at this point, which I find very troubling.

00:34:07 Now I’m a little bit of an outsider from this field, but it has echoes of the same kind

00:34:11 of problem I see in the AI world with autonomous weapon systems.

00:34:16 None of my colleagues, friends, as far as I can tell, people in the AI community,

00:34:25 are really talking about autonomous weapon systems, as now the US and China are going full steam ahead

00:34:31 on the development of both.

00:34:33 And that seems to be a similar kind of thing on gain of function.

00:34:37 I, you know, have friends in the biology space and they don’t want to talk about gain

00:34:42 of function publicly.

00:34:46 And I don’t, that makes me very uncomfortable from an outsider perspective in terms of gain

00:34:50 of function.

00:34:51 It makes me very uncomfortable from the insider perspective on autonomous weapon systems.

00:34:56 I’m not sure how to communicate exactly about autonomous weapon systems.

00:35:00 And I certainly don’t know how to communicate effectively about gain of function.

00:35:04 What is the right path forward here?

00:35:06 Could we cease all gain of function research?

00:35:08 Is that, is that really the solution here?

00:35:11 Well, again, I’m going to use gain of function in the relatively narrow context we’ve been discussing,

00:35:15 because you could describe almost, you know, anything that you do to make biology more effective

00:35:19 as gain of function.

00:35:20 So within the narrow confines of what we’re discussing, I think it would be easy enough

00:35:27 for level headed people in all of the countries, level headed governmental people in all the

00:35:31 countries that realistically could support such a program to agree, we don’t want this

00:35:36 to happen because all labs leak.

00:35:40 I mean, and you know, an example that I use, I actually didn’t use it in the piece I did

00:35:45 with Sam Harris as well, is the anthrax attacks in the United States in 2001.

00:35:50 I mean, talk about an example of the least likely lab leaking into the least likely place.

00:35:57 This was shortly after 9/11, for folks who don’t remember it, and it was a very, very lethal

00:36:03 strain of anthrax that, as it turned out, based on the forensic genomic work that was done

00:36:08 and so forth, absolutely leaked from a high security US army lab.

00:36:13 Probably the one at Fort Detrick in Maryland, it might’ve been another one, but who cares?

00:36:17 It absolutely leaked from a high security US army lab.

00:36:21 And where did it leak to?

00:36:22 This highly dangerous substance that was kept under lock and key by a very security minded

00:36:28 organization?

00:36:29 Well, it leaked to places including the Senate majority leader’s office, Tom Daschle’s office,

00:36:33 I think it was Senator Leahy’s office, certain publications, including bizarrely the National

00:36:38 Enquirer.

00:36:39 But let’s go to the Senate majority leader’s office.

00:36:41 It is hard to imagine a more security minded country than the United States two weeks after

00:36:47 the 9/11 attack.

00:36:49 I mean, it doesn’t get more security minded than that.

00:36:52 And it’s also hard to imagine a more security capable organization than the United States

00:36:59 military.

00:37:00 We can joke all we want about inefficiencies in the military and $24,000 wrenches and so

00:37:05 forth, but pretty capable when it comes to that.

00:37:08 Despite that level of focus and concern and competence, just days after the 9/11 attack,

00:37:16 something comes from the inside of our military-industrial complex and ends up in the office

00:37:22 of someone, I believe the Senate majority leader, somewhere in the line of presidential succession.

00:37:26 It tells us everything can leak.

00:37:28 So again, think of a level headed conversation between powerful leaders in a diversity of

00:37:33 countries, thinking through, like, I can imagine a very simple PowerPoint reviewing, just discussing

00:37:40 briefly things like the anthrax leak, things like the foot and mouth disease outbreak,

00:37:46 or leak, that came out of a BSL four level lab in the UK, several other things

00:37:52 talking about the utter virulence that could result from gain of function and say, folks,

00:37:57 can we agree that this just shouldn’t happen?

00:38:01 I mean, if we were able to agree on the nuclear nonproliferation treaty, which we were, or

00:38:06 the bioweapons convention, which we did agree on, we the world, for the most part, I believe

00:38:11 agreement could be found there.

00:38:13 But it’s going to take people in leadership of a couple of very powerful countries to

00:38:18 get to the consensus amongst them and then to decide we’re going to get everybody together

00:38:22 and browbeat them into banning this stuff.

00:38:24 Now that doesn’t make it entirely impossible that somebody might do this.

00:38:28 But in well regulated, carefully watched over fiduciary environments like federally funded

00:38:35 academic research, anything going on in the government itself, things going on in companies

00:38:41 that have investors who don’t want to go to jail for the rest of their lives.

00:38:47 I think that would have a major, major dampening impact on it.

00:38:50 But there is a particular possible catalyst in this time we live in, which is really

00:38:58 kind of raising the question of gain of function research for the application of making

00:39:02 viruses more dangerous.

00:39:05 It is the question of whether COVID leaked from a lab. Sort of not even answering that question,

00:39:14 but even asking that question, it seems like a very important question to ask

00:39:21 to catalyze the conversation about whether we should be doing gain of function research.

00:39:26 I mean, from a high level, why do you think people, even colleagues of mine are not comfortable

00:39:33 asking that question?

00:39:35 And two, do you think that the answer could be that it did leak from a lab?

00:39:40 I think the mere possibility that it did leak from a lab is evidence enough, again, for

00:39:49 the hypothetical rational national leaders watching this simple PowerPoint.

00:39:54 If you could put the possibility at 1% and you look at the unbelievable destructive power

00:40:00 that COVID had, that should be an overwhelmingly powerful argument for excluding it.

00:40:06 Now as to whether or not that was a leak, some very, very level-headed people believe it was. I don’t know enough

00:40:12 about all of the factors in the Bayesian analysis and so forth that has gone into people making

00:40:18 the pro argument of that.

00:40:19 So I don’t pretend to be an expert on that and I don’t have a point of view, I just don’t

00:40:25 know.

00:40:26 But what we can say is it is entirely possible for a couple of reasons.

00:40:32 One is that there is a BSL4 lab in Wuhan, the Wuhan Institute of Virology.

00:40:37 I believe it’s the only BSL4 in China, I could be wrong about that, but it definitely had

00:40:44 a history that alarmed very sophisticated US diplomats and others who were in contact

00:40:52 with the lab and were aware of what it was doing long before COVID hit the world.

00:41:00 And so there are diplomatic cables that have been declassified, I believe one sophisticated

00:41:05 scientist or other observer said that WIV is a ticking time bomb.

00:41:10 And I believe it’s also been pretty reasonably established that coronaviruses were a topic

00:41:16 of great interest at WIV.

00:41:18 SARS obviously came out of China and that’s a coronavirus, so it would make an enormous

00:41:22 amount of sense for it to be studied there.

00:41:25 And there is so much opacity about what happened in the early days and weeks after the outbreak

00:41:32 that’s basically been imposed by the Chinese government that we just don’t know.

00:41:38 So it feels like a substantial, or greater than 1%, possibility to me looking at it from

00:41:43 the outside.

00:41:45 And that’s something that one could imagine.

00:41:47 Now we’re going to the realm of thought experiment, not me decreeing this is what happened, but

00:41:52 if they’re studying coronavirus at the Wuhan Institute of Virology and there is this precedent

00:41:58 of gain of function research that’s been done on something that is remarkably uncontagious

00:42:02 to humans, whereas we know coronavirus is contagious to humans, I could definitely…

00:42:06 And there is this global consensus, certainly that was the case two or three years ago when this

00:42:12 work might’ve started, there seems to be this global consensus that gain of function is

00:42:16 fine.

00:42:17 The US paused funding for a little while, but they only paused funding, they never said private

00:42:21 actors couldn’t do it, it was just a pause of NIH funding.

00:42:25 And then that pause was lifted.

00:42:26 So again, none of this is irrational.

00:42:28 You could certainly see the folks at WIV saying, gain of function, interesting vector, coronavirus

00:42:34 unlike H5N1, very contagious, we’re a nation that has had terrible run ins with coronavirus,

00:42:42 why don’t we do a little gain of function on this?

00:42:44 And then like all labs at all levels, one could imagine this lab leaking.

00:42:49 So it’s not an impossibility, and very, very level headed people who’ve

00:42:55 looked at it much more deeply do believe in that outcome.

00:42:58 Why is it such a threat to power, the idea that it leaked from a lab?

00:43:03 Why is it so threatening?

00:43:04 Maybe I don’t understand this point exactly.

00:43:08 Is it just that governments, and especially the Chinese government, are really afraid of

00:43:14 admitting mistakes that everybody makes?

00:43:18 So this is a horrible, like Chernobyl is a good example.

00:43:21 I come from the Soviet Union.

00:43:24 I mean, well, major mistakes were made in Chernobyl.

00:43:29 I would argue that for a lab leak to happen, the scale of the mistake is much smaller, right?

00:43:40 The depth and the breadth of rot in the bureaucracy that led to Chernobyl is much bigger than

00:43:50 anything that could lead to a lab leak, because it could literally just be, I mean, I’m sure

00:43:55 there’s security, very careful security procedures, even in level three labs, but, I imagine,

00:44:02 maybe you can correct me, all it takes is the incompetence of a small number of individuals,

00:44:09 one individual over a particular couple of weeks, three week period, as opposed to a

00:44:14 multi year bureaucratic failure of the entire government.

00:44:19 Right.

00:44:20 Well, certainly the magnitude of mistakes and compounding mistakes that went into Chernobyl

00:44:24 was far, far, far greater, but the consequence of COVID outweighs the consequences

00:44:30 of Chernobyl to a tremendous degree.

00:44:34 And I think that particularly authoritarian governments are unbelievably reluctant to

00:44:43 admit to any fallibility whatsoever, and there’s a long, long history of that across dozens

00:44:49 and dozens of authoritarian governments, and to be transparent, again, this is in the hypothetical

00:44:56 world in which this was a leak, which again, I don’t have, I don’t personally have enough

00:45:00 sophistication to have an opinion on the, on the likelihood, but in the hypothetical

00:45:04 world in which it was a leak, the global reaction and the amount of global animus and the amount

00:45:14 of, you know, the decline in global respect that would happen toward China, because every

00:45:22 country suffered massively from this, unbelievable damages in terms of human lives and economic

00:45:28 activity disrupted, the world would in some way present China with that bill.

00:45:35 And when you take on top of that, the natural disinclination for any authoritarian government

00:45:41 to admit any fallibility and tolerate the possibility of any fallibility whatsoever,

00:45:46 and you look at the relative opacity, even though they let a World Health Organization

00:45:51 group in, you know, a couple of months ago to run around, they didn’t give that WHO group

00:45:56 anywhere near the level of access that would be necessary to definitively say X happened

00:46:01 versus Y.

00:46:02 The level of opacity that surrounds those opening weeks and months of COVID in China,

00:46:08 we just don’t know.

00:46:10 If you were to kind of look back at 2020 and maybe broadening it out to future pandemics

00:46:17 that could be much more dangerous, what kind of response, how did we fail in our response

00:46:24 and how could we do better?

00:46:27 So the gain of function research discussion is about the question of whether we should be creating

00:46:35 viruses that are both exceptionally contagious and exceptionally deadly to humans.

00:46:41 But if it does happen, perhaps through natural evolution, natural mutation, are there interesting

00:46:48 technological responses on the testing side, on the vaccine development side, on the collection

00:46:56 of data or on the basic sort of policy response side or the sociological, the psychological

00:47:02 side?

00:47:03 Yeah, there’s all kinds of things.

00:47:05 And most of what I’ve thought about and written about and again discussed in that long bit

00:47:11 with Sam is dual use.

00:47:14 So most of the countermeasures that I’ve been thinking about and advocating for would be

00:47:20 every bit as effective against zoonotic disease and natural pandemic of some sort as an artificial

00:47:26 one.

00:47:27 The risk of an artificial one, even the near term risk of an artificial one, ups the urgency

00:47:32 around these measures immensely, but most of them would be broadly applicable.

00:47:37 And so I think the first thing that we really want to do on a global scale is have a far,

00:47:43 far, far more robust and globally transparent system of detection.

00:47:49 And that can happen on a number of levels.

00:47:52 The most obvious one is just in the blood of people who come into clinics exhibiting

00:47:58 signs of illness.

00:48:00 And we are certainly at a point now where, with relatively minimal investment, we

00:48:07 could develop in-clinic diagnostics that would be unbelievably effective at pinpointing what’s

00:48:12 going on in almost any disease when somebody walks into a doctor’s office or a clinic.

00:48:19 And better than that, this is a little bit further off, but it wouldn’t cost tens of

00:48:24 billions in research dollars, it would be a relatively modest and affordable budget

00:48:28 in relation to the threat: at-home diagnostics that can really, really pinpoint, okay, particularly

00:48:36 with respiratory infections, because that is generally almost universally the mechanism

00:48:41 of transmission for any serious pandemic.

00:48:44 So somebody has a respiratory infection, is it one of the, you know, significantly large

00:48:49 handful of rhinoviruses, coronaviruses, and other things that cause common cold?

00:48:55 Or is it influenza?

00:48:56 If it’s influenza, is it influenza A versus B?

00:49:00 Or is it, you know, a small handful of other more exotic but nonetheless sort of common

00:49:06 respiratory infections that are out there?

00:49:09 Having a diagnostic panel to pinpoint all of that stuff, that’s something that’s well

00:49:12 within our capabilities.

00:49:14 That’s much less of a lift than creating mRNA vaccines, which obviously we proved capable

00:49:19 of when we put our minds to it.

00:49:21 So do that on a global basis.

00:49:24 And I don’t think that’s irrational because the best prototype for this that I’m aware

00:49:28 of isn’t currently rolling out in Atherton, California, or Fairfield County, Connecticut,

00:49:34 or some other wealthy place.

00:49:36 The best prototype that I’m aware of this is rolling out right now in Nigeria.

00:49:40 And it’s a project that came out of the Broad Institute, which, as I’m sure you know,

00:49:45 but some listeners may not, is kind of like an academic joint venture between Harvard

00:49:49 and MIT.

00:49:50 The program is called Sentinel.

00:49:53 And their objective is, and their plan is a very well conceived plan, a methodical plan,

00:49:59 is to do just that in areas of Nigeria that are particularly vulnerable to zoonotic diseases

00:50:05 making the jump from animals to humans.

00:50:08 But also there’s just an unbelievable public health benefit from that.

00:50:12 And it’s sort of a three tier system where clinicians in the field could very rapidly

00:50:17 determine do you have one of the infections of acute interest here, either because it’s

00:50:22 very common in this region, so we want to diagnose as many things as we can at the front

00:50:26 line, or because it’s uncommon but unbelievably threatening like Ebola.

00:50:31 So front line worker can make that determination very, very rapidly.

00:50:35 If it comes up as a we don’t know, they bump it up to a level that’s more like at a fully

00:50:41 configured doctor’s office or local hospital.

00:50:44 And if it’s still at a we don’t know, it gets bumped up to a national level.

00:50:47 And it gets bumped very, very rapidly.
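
As a rough illustration of the tiered escalation just described, here is a minimal sketch; the tier names, panel contents, and function names are hypothetical stand-ins, not details of the actual Sentinel program.

```python
# Hypothetical sketch of a three-tier diagnostic escalation, loosely modeled on
# the field -> clinic -> national-lab flow described above. Panel contents and
# names are illustrative assumptions, not Sentinel's actual design.

FIELD_PANEL = {"malaria", "influenza_A", "influenza_B", "ebola", "lassa"}
CLINIC_PANEL = FIELD_PANEL | {"sars_cov_2", "rhinovirus", "yellow_fever"}

def run_panel(sample, panel):
    """Stand-in for an assay: returns the pathogen name if the panel covers it."""
    return sample.get("pathogen") if sample.get("pathogen") in panel else None

def triage(sample):
    # Tier 1: a front-line worker runs a rapid panel for locally common or
    # high-threat pathogens.
    hit = run_panel(sample, FIELD_PANEL)
    if hit:
        return ("field", hit)
    # Tier 2: unknowns get bumped to a fully configured clinic or local hospital.
    hit = run_panel(sample, CLINIC_PANEL)
    if hit:
        return ("clinic", hit)
    # Tier 3: still unknown, so escalate rapidly to a national reference lab,
    # for example for full metagenomic sequencing.
    return ("national_lab", "unknown, sequence everything")

print(triage({"pathogen": "influenza_A"}))    # resolved at the field tier
print(triage({"pathogen": "novel_virus_x"}))  # escalated to the national lab
```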

00:50:51 So if this can be done in Nigeria, and it seems that it can be, there shouldn’t be any

00:50:57 inhibition for it to happen in most other places.

00:51:00 And it should be affordable from a budgetary standpoint.

00:51:03 And based on Sentinel’s budget and adjusting things for things like very different cost

00:51:07 of living, larger population, et cetera, I did a back of the envelope calculation that

00:51:13 doing something like Sentinel in the US would be in the low billions of dollars.

00:51:17 And wealthy countries, middle income countries, can afford such a thing.

00:51:22 Lower income countries should certainly be helped with that.

00:51:25 But start with that level of detection.

00:51:27 And then layer on top of that other interesting things like monitoring search engine traffic,

00:51:33 search engine queries for evidence that strange clusters of symptoms are starting to rise

00:51:39 in different places.

00:51:40 There’s been a lot of work done with that.

00:51:43 Most of it kind of academic and experimental, but some of it has been powerful enough to

00:51:47 suggest that this could be a very powerful early warning system.

00:51:51 There’s a guy named Bill Lampos at University College London who basically did a very rigorous

00:51:56 analysis that showed that symptom searches reliably predicted COVID outbreaks in the

00:52:05 early days of the pandemic in given countries by as much as 16 days before the evidence

00:52:10 started to accrue at a public health level.

00:52:12 16 days of forewarning can be monumentally important in the early days of an outbreak.

00:52:18 And this is a very, very talented, but nonetheless very resource constrained academic project.

00:52:26 Imagine if that was something that was done with a NORAD-like budget.

00:52:31 So starting with detection, that’s something we could do radically, radically better.
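
Here is a minimal sketch of the kind of lead-time analysis described above, checking how far in advance a symptom-search signal correlates with later reported cases; the data is synthetic and the approach is a simplification, not Lampos’s actual methodology or results.

```python
# Illustrative lead-lag check: does symptom-search volume rise before reported
# cases? The data here is synthetic; real analyses use normalized query volumes
# for many symptom terms across many countries.
import numpy as np

rng = np.random.default_rng(0)
days = 120
t = np.arange(days)

# Synthetic outbreak wave in reported cases, peaking around day 80...
cases = 1000 * np.exp(-((t - 80) / 15.0) ** 2) + rng.normal(0, 20, days)
# ...and a search-volume signal that leads it by about two weeks (same wave, shifted).
true_lead = 14
searches = 1000 * np.exp(-((t + true_lead - 80) / 15.0) ** 2) + rng.normal(0, 20, days)

def corr_at_lag(lead_signal, lagged_signal, lag):
    """Correlation between today's lead_signal and lagged_signal `lag` days later."""
    n = len(lead_signal) - lag
    return np.corrcoef(lead_signal[:n], lagged_signal[lag:])[0, 1]

best = max(range(30), key=lambda k: corr_at_lag(searches, cases, k))
print(f"Search volume best predicts case counts about {best} days in advance")
```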

00:52:35 So aggregating multiple data sources in order to create something, I mean, this is really

00:52:39 exciting to me, the possibility that I’ve heard inklings of creating almost like a weather

00:52:44 map of pathogens, like basically aggregating all of these data sources, scaling up, by many orders

00:52:54 of magnitude, at-home testing and all kinds of testing that doesn’t just try to test for

00:53:00 the particular pathogen of worry now, but everything like a full spectrum of things

00:53:06 that could be dangerous to the human body.

00:53:09 And thereby be able to create these maps that are dynamically updated on an hourly

00:53:14 basis of how viruses travel throughout the world.

00:53:19 And so you can respond, like you can then integrate just like you do when you check

00:53:23 your weather map and it’s raining or not, of course, not perfect, but it’s a very good

00:53:28 predictor of whether it’s going to rain or not, and use that to then make decisions about

00:53:34 your own life, ultimately giving the power, the information, to individuals to respond.

00:53:38 And if it’s a super dangerous, like if it’s acid rain versus regular rain, you might want

00:53:44 to really stay inside as opposed to risking it.
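
As a toy illustration of the aggregation idea being described here, the sketch below folds several per-region signals into a single risk score with a simple advisory; the signal sources, weights, and thresholds are all made-up assumptions, not a real system’s design.

```python
# Toy "pathogen weather map": combine several per-region signals into one risk
# score. Regions, signals, weights, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegionSignals:
    test_positivity: float   # fraction of at-home/clinic tests positive, 0..1
    search_anomaly: float    # z-score of symptom-search volume vs. baseline
    wastewater_level: float  # normalized viral load in wastewater, 0..1

# Assumed relative weights for each signal.
WEIGHTS = {"test_positivity": 0.5, "search_anomaly": 0.2, "wastewater_level": 0.3}

def risk_score(s: RegionSignals) -> float:
    """Weighted combination of the signals, squashed into the 0..1 range."""
    raw = (WEIGHTS["test_positivity"] * s.test_positivity
           + WEIGHTS["search_anomaly"] * min(max(s.search_anomaly / 5.0, 0.0), 1.0)
           + WEIGHTS["wastewater_level"] * s.wastewater_level)
    return min(raw, 1.0)

def advisory(score: float) -> str:
    # "Regular rain" versus "acid rain": how strongly to advise caution.
    if score < 0.2:
        return "low risk, normal activity"
    if score < 0.5:
        return "elevated risk, consider masks indoors"
    return "high risk, limit exposure"

regions = {
    "metro_a": RegionSignals(0.12, 3.5, 0.4),
    "rural_b": RegionSignals(0.02, 0.5, 0.05),
}
for name, signals in regions.items():
    score = risk_score(signals)
    print(f"{name}: risk={score:.2f} -> {advisory(score)}")
```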

00:53:47 And that, just like you said, I think it’s not very expensive relative to all the things

00:53:54 that we do in this world, but it does require bold leadership.

00:54:00 And there’s another dark thing, which really has bothered me about 2020, which is that it requires

00:54:05 trust in institutions to carry out these kinds of programs and requires trust

00:54:13 in science and engineers and sort of centralized organizations that would operate at scale

00:54:20 here.

00:54:21 And much of that trust has been, at least in the United States, diminished.

00:54:27 It feels like I’m not exactly sure where to place the blame, but I do place quite a bit

00:54:33 of the blame into the scientific community and again, my fellow colleagues in speaking

00:54:40 down to people at times, speaking from authority, it sounded like it dismissed the basic human

00:54:46 experience or the basic common humanity of people in a way where it almost sounded

00:54:52 like there’s an agenda that’s hidden behind the words the scientists spoke.

00:54:58 Like they’re trying to, in a self preserving way, control the population or something like

00:55:03 that.

00:55:04 I don’t think any of that is true from the majority of the scientific community, but

00:55:07 it sounded that way.

00:55:08 And so the trust began to diminish and I’m not sure how to fix that except to be more

00:55:16 authentic, be more real, acknowledge the uncertainties under which we operate, acknowledge the mistakes

00:55:22 that scientists make, that institutions make.

00:55:26 The leak from the lab is a perfect example where we have imperfect systems that make

00:55:32 all the progress we see in the world.

00:55:34 And that being honest about that imperfection, I think is essential for forming trust.

00:55:39 But I don’t know what to make of it. It has been deeply disappointing because I do think just

00:55:45 like you mentioned, the solutions require people to trust the institutions with their

00:55:53 data.

00:55:54 Yeah.

00:55:55 And I think part of the problem is it seems to me as an outsider that there was a bizarre

00:55:59 unwillingness on the part of the CDC and other institutions to admit to, to frame and to

00:56:08 contextualize uncertainty.

00:56:11 Maybe they had a patronizing idea that these people need to be told and when they’re told,

00:56:16 they need to be told with authority and a level of definitiveness and certitude

00:56:21 that doesn’t actually exist.

00:56:23 And so when they whipsaw on recommendations like what you should do about masks, when

00:56:29 the CDC is kind of at the very beginning of the pandemic saying, masks don’t do anything.

00:56:35 Don’t wear them.

00:56:36 When the real driver for that was we don’t want these clowns going out and depleting

00:56:41 Amazon of masks because they may be needed in medical settings and we just don’t know

00:56:49 yet.

00:56:50 I think a message that actually respected people and said, this is why we’re asking

00:56:54 you not to wear masks yet and there’s more to be seen would be less whipsawing and would

00:57:00 make people feel more like they’re part of the conversation and they’re being

00:57:04 treated like adults than saying one day definitively masks suck.

00:57:09 And then X days later saying, nope, everybody has to wear masks.

00:57:13 And so I think framing things in terms of the probabilities, which most people find easy

00:57:16 to parse.

00:57:17 I mean, a more recent example, which I just thought was batty was suspending the Johnson

00:57:23 and Johnson vaccine for a very low single digit number of days in the United States

00:57:30 based on the fact that I believe there had been seven ish clotting incidents in roughly

00:57:38 seven million people who had had the vaccine administered, I believe one of which resulted

00:57:43 in a fatality.

00:57:45 And there was definitely suggestive data that indicated that there was a relationship.

00:57:50 This wasn’t just coincidental because I think all of the clotting incidents happened in

00:57:53 women as opposed to men and kind of clustered in a certain age group.

00:57:58 But does that call for shutting off the vaccine or does it call for leveling with the American

00:58:05 public and saying we’ve had one fatality out of seven million?

00:58:10 This is, let’s just assume substantially less than the likelihood of getting struck by lightning.

00:58:18 Based on that information, and we’re going to keep you posted because you can trust us

00:58:22 to keep you posted, based on that information, please decide whether you’re comfortable with

00:58:27 a Johnson and Johnson vaccine.

00:58:29 That would have been one response and I think people would have been able to parse the simple

00:58:33 bits of data and make their own judgment.
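For reference, the arithmetic being described, using the figures cited in the conversation, looks roughly like this:

```python
# Back-of-envelope arithmetic using the figures cited in the conversation:
# roughly 7 clotting events and 1 fatality in about 7 million administered doses.
doses = 7_000_000
clotting_events = 7
fatalities = 1

clot_risk = clotting_events / doses       # about 1 in 1,000,000 per dose
fatality_risk = fatalities / doses        # about 1 in 7,000,000 per dose

print(f"Clotting: about 1 in {round(1 / clot_risk):,}")
print(f"Fatality: about 1 in {round(1 / fatality_risk):,}")

# For scale: the often-quoted odds of being struck by lightning in the US are
# roughly 1 in a million per person per year (a rough, commonly cited figure,
# not an exact statistic).
```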

00:58:35 By turning it off, all of a sudden there’s this dramatic signal to people who don’t read

00:58:41 all 900 words in the New York Times piece that explains why it’s being turned off but

00:58:45 just see the headline, which is a majority of people.

00:58:48 There’s a sudden like, oh my God, yikes, vaccine being shut off.

00:58:54 And then all the people who sat on the fence or are sitting on the fence about whether

00:58:58 or not they trust vaccines, that is going to push an incalculable number of people.

00:59:03 That’s going to be the last straw for we don’t know how many hundreds of thousands or more

00:59:06 likely millions of people to say, okay, tipping point here, I don’t trust these vaccines.

00:59:11 By pausing that for whatever it was, 10 or 12 days, and then flipping the switch as everybody

00:59:16 who knew much about the situation knew was inevitable.

00:59:21 By flipping the on switch 12 days later, you’re conveying certitude that J and J is bad to certitude

00:59:28 that J and J is good in a period of just a few days and people just feel whipsawed and they’re

00:59:33 not part of the analysis.

00:59:34 But it’s not just the whipsawing.

00:59:36 And I think about this quite a bit, I don’t think I have good answers.

00:59:39 It’s something about the way the communication actually happens.

00:59:43 Just I don’t know what it is about Anthony Fauci, for example, but I don’t trust him.

00:59:49 And I think that has to do, I mean, he has an incredible background.

00:59:55 I’m sure he’s a brilliant scientist and researcher.

00:59:59 I’m sure he’s also a great, like inside the room, policymaker and deliberator and so on.

01:00:06 But what makes a great leader is something about that thing that you can’t quite describe,

01:00:14 but being a communicator that you know you can trust, that there’s an authenticity that’s

01:00:22 required.

01:00:23 And I’m not sure, maybe I’m being a bit too judgmental, but I’m a huge fan of a lot of

01:00:29 great leaders throughout history.

01:00:31 They’ve communicated exceptionally well in the way that Fauci does not.

01:00:36 And I think about that, I think about what makes for effective science communication.

01:00:40 So great leaders throughout history did not necessarily need to be great science communicators.

01:00:47 Their leadership was in other domains.

01:00:49 But when you’re fighting the virus, you also have to be a great science communicator.

01:00:53 You have to be able to communicate uncertainties, you have to be able to communicate something

01:00:58 like a vaccine that you’re allowing inside your body into the messiness, into the complexity

01:01:03 of the biological system, which, if we’re being honest, is so complex we’ll never be able

01:01:08 to really understand it.

01:01:10 We can only desperately hope that science can give us sort of a high likelihood that

01:01:16 there’s no short term negative consequences and that kind of intuition about long term

01:01:22 negative consequences and doing our best in this battle against trillions of things that

01:01:29 are trying to kill us.

01:01:32 Being an effective communicator in that space is very difficult, but I think about what

01:01:36 it takes because I think there should be more science communicators that are effective at

01:01:41 that kind of thing.

01:01:43 Let me ask you about something that’s sort of more in the AI space that I think about

01:01:49 that kind of goes along this thread that you’ve spoken about, about democratizing the technology

01:01:58 that could destroy human civilization, is from amazing work from DeepMind AlphaFold2,

01:02:05 which achieved incredible performance on the protein folding problem, single protein folding

01:02:12 problem.

01:02:13 When you think about the use of AI in the synbio space, I think the gain of function

01:02:22 research in the virus space that you referred to, I think, is natural mutations and sort

01:02:28 of aggressively mutating the virus until you get one that is both contagious

01:02:35 and deadly, but what about then using AI to, through simulation, be able to compute deadly

01:02:45 viruses or any kind of biological systems?

01:02:49 Is this something you’re worried about, or again, is this something you’re more excited

01:02:53 about?

01:02:54 I think computational biology is an unbelievably exciting and promising field, and I think

01:02:58 when you’re doing things in silico as opposed to in vivo, the dangers plummet.

01:03:05 You don’t have a critter that can leak from a leaky lab.

01:03:10 So I don’t see any problem with that, except I do worry about the data security dimension

01:03:15 of it, because if you were doing really, really interesting in silico gain of function research

01:03:21 and you hit upon through a level of sophistication, we don’t currently have, but synthetic biology

01:03:27 is an exponential technology, so capabilities that are utterly out of reach today will be

01:03:32 attainable in five or six years.

01:03:35 I think if you conjured up worst case genomes of viruses that don’t exist in vivo anywhere,

01:03:44 they’re just in the computer space, but like, hey guys, this is the genetic sequence that

01:03:48 would end the world, let’s say, then you have to worry about the utter hackability of every

01:03:56 computer network we can imagine.

01:03:58 Data leaks from the least likely places on the grandest possible scales have happened

01:04:04 and continue to happen and will probably always continue to happen, and so that would be the

01:04:09 danger of doing the work in silico.

01:04:11 If you end up with a list of like, well, these are things we never want to see, that list

01:04:16 leaks, and after the passage of some time, certainly couldn’t be done today, but after

01:04:20 the passage of some time, lots and lots of people in academic labs going all the way

01:04:26 down to the high school level are in a position to, to put it overly simplistically, hit print on

01:04:33 a genome and have the virus bearing that genome pop out on the other end and you’ve got something

01:04:37 to worry about, but in general, computational biology I think is incredibly important, particularly

01:04:42 because the crushing majority of work that people are doing with the protein folding

01:04:47 problem and other things are about creating therapeutics, about creating things that will

01:04:52 help us live better, live longer, thrive, be more well, and so forth, and the protein

01:04:58 folding problem is a monstrous computational challenge that we seem to make just the most

01:05:04 glacial progress on for years and years, but I think there’s a

01:05:09 biennial competition for which people tackle the protein folding problem, and DeepMind’s

01:05:17 entrants, both two years ago, like in 2018, and in 2020, ruled the field, and so protein

01:05:25 folding is an unbelievably important thing if you want to start thinking about therapeutics

01:05:29 because it’s the folding of the protein that tells us where the channels and the receptors

01:05:34 and everything else are on that protein, and it’s from that precise model, if we can get

01:05:39 to a precise model, that you can start barraging it, again in silico, with thousands, tens

01:05:46 of thousands, millions of potential therapeutics and see what resolves the problems, the shortcomings

01:05:52 that a misshapen protein causes, for instance, in somebody with cystic fibrosis, how might we treat

01:05:59 that?

01:06:00 So I see nothing but good in that.
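A toy sketch of the in silico screening loop described above, scoring many candidate molecules against a protein target and keeping the best hits, might look like this; the scoring function is a meaningless placeholder standing in for real docking or affinity models:

```python
# A toy illustration of in silico screening: score many candidate molecules
# against a protein target and keep the best hits. The scoring function here
# is a placeholder standing in for real docking/affinity models.
import random

random.seed(0)

def mock_binding_score(molecule_id, target="CFTR_model"):
    """Placeholder: a real pipeline would dock the molecule against the
    predicted protein structure and estimate binding affinity."""
    return random.random()

candidates = [f"compound_{i:05d}" for i in range(100_000)]
scored = ((mock_binding_score(m), m) for m in candidates)
top_hits = sorted(scored, reverse=True)[:10]

for score, molecule in top_hits:
    print(f"{molecule}: predicted score {score:.3f}")
# Only these top-ranked candidates would move on to wet-lab validation.
```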

01:06:01 Well, let me ask you about fear and hope in this world.

01:06:06 I tend to believe that in terms of competence and malevolence, that people who are, maybe

01:06:15 it’s in my interactions, I tend to see that, first of all, I believe that most people are

01:06:20 good and want to do good and are just better at doing good and more inclined to do good

01:06:27 in this world, and more than that, people who are malevolent are usually incompetent

01:06:35 at building technology.

01:06:38 So I’ve seen this in my life, that people who are exceptionally good at stuff, no matter

01:06:43 what the stuff is, tend to, maybe they discover joy in life in a way that gives them fulfillment

01:06:50 and thereby does not result in them wanting to destroy the world.

01:06:55 So like the better you are at stuff, whether that’s building nuclear weapons or plumbing,

01:07:00 doesn’t matter, the less likely you are to destroy the world.

01:07:03 So in that sense, with many technologies, AI especially, I always think that the malevolent

01:07:14 would be far outnumbered by the ultra competent.

01:07:18 And in that sense, the defenses will always be stronger than the offense in terms of the

01:07:27 people trying to destroy the world.

01:07:28 Now there’s a few spaces where that might not be the case, and that’s an interesting

01:07:33 conversation where this one person who’s not very competent can destroy the whole world.

01:07:40 Perhaps synbio is one such space because of the exponential effects of the technology.

01:07:47 I tend to believe AI is not one of those spaces, but do you share this kind of view

01:07:54 that the ultra competent are usually also the good?

01:07:58 Yeah, absolutely.

01:07:59 I absolutely share that and that gives me a great deal of optimism that we will be able

01:08:04 to short circuit the threat that malevolence in synbio could pose to us.

01:08:10 But we need to start creating those defensive systems or defensive layers, one of which

01:08:14 we talked about far, far, far better surveillance in order to prevail.

01:08:18 So the good guys will almost inevitably outsmart and definitely outnumber the bad guys in most

01:08:26 sort of smack downs that we can imagine.

01:08:29 But the good guys aren’t going to be able to exert their advantages unless they have

01:08:34 the imagination necessary to think about what the worst possible thing can be done by somebody

01:08:40 whose own psychology is completely alien to their own.

01:08:45 So that’s a tricky, tricky thing to solve for.

01:08:47 Now in terms of whether the asymmetric power that a bad guy might have in the face of the

01:08:54 overwhelming numerical advantage and competence advantage that the good guys have, unfortunately,

01:09:00 I look at something like mass shootings as an example.

01:09:04 I’m sure the guy who was responsible for the Vegas shooting or the Orlando shooting or

01:09:08 any other shooting that we can imagine didn’t know a whole lot about ballistics.

01:09:14 And the number of good guy citizens in the United States with guns compared to bad guy

01:09:20 citizens I’m sure is a crushingly overwhelmingly high ratio in favor of the good guys.

01:09:25 But that doesn’t make it possible for us to stop mass shootings.

01:09:30 An example is Fort Hood, 45,000 trained soldiers on that base, yet there have been two mass

01:09:38 shootings there.

01:09:40 And so there is an asymmetry when you have powerful and lethal technology that gets so

01:09:48 democratized and so proliferated in tools that are very, very easy to use even by a

01:09:54 knucklehead.

01:09:56 When those tools get really easy to use by a knucklehead and they’re really widespread,

01:10:00 it becomes very, very hard to defend against all instances of usage.

01:10:06 Now the good news, quote unquote, about mass shootings, if there is any, and there is some,

01:10:11 is even the most brutal and carefully planning and well armed mass shooter can only take

01:10:17 so many victims.

01:10:20 And the same is true, there’s been four instances that I’m aware of, of commercial pilots committing

01:10:26 suicide by downing their planes and taking all their passengers with them.

01:10:29 These weren’t Boeing engineers, but like an army of Boeing engineers ultimately were not

01:10:33 capable of preventing that.

01:10:36 But even in their case, and I’m actually not counting 9/11 in that, 9/11 is a different

01:10:40 category in my mind, these are just personally suicidal pilots.

01:10:45 In those cases, they only have a plane load of people that they’re able to take with them.

01:10:50 If we imagine a highly plausible and imaginable future in which some bio tools that are amoral,

01:10:57 that could be used for good or for ill, start embodying unbelievable sophistication and

01:11:04 genius in the tool, in the easier and easier and easier to make tool, all those thousands,

01:11:12 tens of thousands, hundreds of thousands of scientist years start getting embodied in

01:11:16 something that may be as simple as hitting a print button.

01:11:21 Then that good guy technology can be hijacked by a bad person and used in a very asymmetric

01:11:29 way.

01:11:30 What happens though, as you go to the high school student from the current very specific

01:11:35 set of labs that are able to do it, as it becomes more and more democratized, as it

01:11:41 becomes easier and easier to do this kind of large scale damage with an engineered virus,

01:11:48 the more and more there will be engineering of defenses against these systems, like some

01:11:53 of the things we talked about in terms of testing, in terms of collection of data, but

01:11:56 also in terms of at scale contact tracing or also engineering of vaccines in a matter

01:12:05 of days, maybe hours, maybe minutes.

01:12:09 I feel like the defenses, that’s what the human species seems to do, is we keep hitting the

01:12:15 snooze button until there’s a storm on the horizon heading towards us.

01:12:22 Then we start to quickly build up the defenses or the response that’s proportional to the

01:12:29 scale of the storm.

01:12:31 Of course, again, certain kinds of exponential threats require us to build up the defenses

01:12:38 way earlier than we usually do, and that’s I guess the question.

01:12:42 But I ultimately am hopeful that the natural process of hitting the snooze button until

01:12:48 the deadline is right in front of us will work out for quite a long time for us humans.

01:12:53 And I fully agree.

01:12:54 That’s why I’m fundamentally, I may not sound like it thus far, but I’m fundamentally very,

01:12:59 very optimistic about our ability to short circuit this threat because there is, again,

01:13:04 I’ll stress the technological feasibility and the profound affordability of a relatively

01:13:11 simple set of steps that we can take to preclude it, but we do have to take those steps.

01:13:17 What I’m hoping to do and trying to do is inject a notion of what those steps are into

01:13:22 the public conversation and do my small part to up the odds that that actually ends up

01:13:27 happening.

01:13:30 The danger with this one is it is exponential, and I think that our minds fundamentally

01:13:37 struggle to understand exponential math.

01:13:40 It’s just not something we’re wired for.

01:13:42 Our ancestors didn’t confront exponential processes when they were growing up on the

01:13:46 savanna, so it’s not something that’s intuitive to us and our intuitions are reliably defeated

01:13:52 when exponential processes come along.
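A quick illustration of why exponential processes defeat intuition, using an assumed three day doubling time:

```python
# Why exponentials defeat intuition: a handful of cases doubling every few days.
# The 3-day doubling time is just an assumed, illustrative figure.
cases = 10
doubling_time_days = 3

for day in range(0, 61, 6):
    print(f"day {day:2d}: ~{int(cases * 2 ** (day / doubling_time_days)):,} cases")
# Ten cases become more than ten million in two months at this pace.
```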

01:13:54 So that’s issue number one.

01:13:56 And issue number two with something like this is it kind of only takes one.

01:14:02 That ball only has to go into the net once and we’re doomed, which is not the case with

01:14:08 mass shooters.

01:14:09 It’s not the case with commercial pilots run amok.

01:14:12 It’s not the case with really any threat that I can think of with the exception of nuclear

01:14:17 war that has the one bad outcome and game over.

01:14:24 And that means that we need to be unbelievably serious about these defenses and we need to

01:14:31 do things that might on the surface seem like a tremendous overreaction so that we can be

01:14:36 prepared to nip anything that comes along in the bud.

01:14:39 I like you believe that’s eminently doable.

01:14:43 I like you believe that the good guys outnumber the bad guys in this particular one to a degree

01:14:47 that probably has no precedent in history.

01:14:49 I mean, even the worst, worst people I’m sure in ISIS, even Osama bin Laden, even any bad

01:14:55 guy you could imagine in history would be revolted by the idea of exterminating all

01:15:01 of humanity.

01:15:02 I mean, that’s a low bar.

01:15:06 And so the good guys completely outnumber the bad guys when it comes to this.

01:15:11 But the asymmetry and the fact that one catastrophic error could lead to unbelievably consequential

01:15:18 things is what worries me here.

01:15:20 But I too am very optimistic.

01:15:21 The thing that I sometimes worry about is the fact that we haven’t seen overwhelming

01:15:27 evidence of alien civilizations out there makes me think, well, there’s a lot of explanations,

01:15:33 but one of them that worries me is that whenever they get smart, they just destroy themselves.

01:15:40 Oh yeah.

01:15:41 I mean, the most fascinating and chilling number or

01:15:46 variable in the Drake equation is L. At the end of it, you look out and you see, you know,

01:15:53 100 to 400 billion stars in the Milky Way galaxy.

01:15:56 And we now know because of Kepler that an astonishingly high percentage of them probably

01:16:01 have habitable planets.

01:16:03 And you know, so all the things that were unknowns when the Drake equation was originally

01:16:07 written, like, you know, how many stars have planets?

01:16:10 Actually back then in the 1960s when the Drake equation came along, the consensus amongst

01:16:15 astronomers was that it would be a small minority of stars that had planets or solar systems.

01:16:19 But now we know it’s substantially all of them.

01:16:22 How many of those stars have planets in the habitable zone?

01:16:25 It’s kind of looking like 20%, like, oh my God.

01:16:29 And so L, which is how long does a civilization, once it reaches technological competence,

01:16:36 continue to last?

01:16:37 That’s the doozy.

01:16:40 And you’re right.

01:16:42 It’s all too plausible to think that when a civilization reaches a level of sophistication,

01:16:47 that’s probably just a decade or three in our future.

01:16:50 The odds of it self destructing just start mounting astronomically, no pun intended.
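For reference, the Drake equation is just a product of factors, and the sketch below shows how strongly the answer swings with L. All of the factor values are illustrative guesses, not settled numbers:

```python
# The Drake equation as a simple product, to show how everything hinges on L.
# All values below are illustrative guesses; only the structure is the point.
R_star = 2.0    # new stars formed per year in the galaxy (illustrative)
f_p    = 1.0    # fraction of stars with planets (Kepler suggests nearly all)
n_e    = 0.2    # habitable planets per star with planets (the ~20% figure mentioned)
f_l    = 0.1    # fraction of those where life arises (pure guess)
f_i    = 0.1    # fraction of those that develop intelligence (pure guess)
f_c    = 0.1    # fraction that become detectable civilizations (pure guess)

for L in (100, 10_000, 1_000_000):  # years a technological civilization lasts
    N = R_star * f_p * n_e * f_l * f_i * f_c * L
    print(f"L = {L:>9,} years  ->  N = {N:,.2f} civilizations")
# With these guesses, N swings from essentially zero to hundreds purely on L.
```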

01:16:57 My hope is that actually there are a lot of alien civilizations out there and what they

01:17:02 figure out, in order to avoid the self destruction, is that they need to turn off the thing that was useful,

01:17:09 that used to be a feature and now became a bug, which is the desire to colonize, to conquer

01:17:13 more land.

01:17:14 So they, like, there’s probably ultra intelligent alien civilizations out there, they’re just

01:17:19 like chilling, like on the beach with whatever your favorite alcoholic beverage is, but like

01:17:25 without sort of trying to conquer everything.

01:17:28 Just chilling out and maybe exploring in the realm of knowledge, but almost like appreciating

01:17:36 existence for its own sake versus life as a progression of conquering of other life.

01:17:47 Like this kind of predator prey formulation that resulted in us humans, perhaps as something

01:17:54 we have to shed in order to survive.

01:17:57 I don’t know.

01:17:58 Yeah, that is a very plausible solution to Fermi’s paradox and it’s one that makes sense.

01:18:04 You know, when we look at our own lives and our own arc of technological trajectory, it’s

01:18:11 very, very easy to imagine that in an intermediate future world of, you know, flawless VR or

01:18:18 flawless, you know, whatever kind of simulation that we want to inhabit, it will just simply

01:18:25 cease to be worthwhile to go out and expand our interstellar territory.

01:18:34 But if we were going out and conquering interstellar territory, it wouldn’t necessarily have to

01:18:38 be predator or prey.

01:18:39 I can imagine a benign but sophisticated intelligence saying, well, we’re going to go to places,

01:18:44 we’re going to go to places that we can terraform, use a different word than terra, obviously,

01:18:48 but that we can turn into habitable places for our particular physiology, so long as they don’t house,

01:18:55 you know, intelligent sentient creatures that would suffer from our invasion.

01:18:59 But it is easy to see a sophisticated intelligent species evolving to the point where interstellar

01:19:05 travel with its incalculable expense and physical hurdles just isn’t worth it compared to what

01:19:11 could be done, you know, where one already is.

01:19:15 So you talked about diagnostics at scale as a possible solution to future pandemics.

01:19:22 What about another possible solution, which is kind of creating a backup copy?

01:19:27 You know, I’m actually now putting together a NAS for a backup for myself for the first

01:19:32 time taking backup of data seriously.

01:19:35 But if we were to take the backup of human consciousness seriously and try to expand

01:19:39 throughout the solar system and colonize other planets, do you think that’s an interesting

01:19:44 solution?

01:19:47 One of many for protecting human civilization from self destruction, sort of humans becoming

01:19:53 a multi planetary species?

01:19:54 Oh, absolutely.

01:19:55 I mean, I find it electrifying, first of all, so I’ve got a little bit of a personal bias

01:19:59 when I was a kid, I thought there was nothing cooler than rockets, I thought there was nothing

01:20:03 cooler than NASA, I thought there was nothing cooler than people walking on the moon.

01:20:08 And as I grew up, I thought there was nothing more tragic than the fact that we went from

01:20:12 walking on the moon to at best getting to something like suborbital altitude.

01:20:17 And just I found that more and more depressing with the passage of decades at just the colossal

01:20:23 expense of, you know, manned space travel and the fact that it seemed that we were unlikely

01:20:29 to ever get back to the moon, let alone Mars.

01:20:31 So I have a boundless appreciation for Elon Musk for many reasons.

01:20:36 But the fact that he has put Mars on the credible agenda is one of the things that I appreciate

01:20:41 immensely.

01:20:43 So there’s just the sort of space nerd in me that just says, God, that’s cool.

01:20:47 But on a more practical level, we were talking about, you know, potentially inhabiting planets

01:20:54 that aren’t our own.

01:20:56 And we’re thinking about a benign civilization that would do that in planetary circumstances,

01:21:04 where we’re not causing other conscious systems to suffer.

01:21:07 I mean, Mars is a place that’s very promising, there may be microbial life there, and I hope

01:21:11 there is.

01:21:12 And if we found it, I think it would be electrifying.

01:21:15 But I think ultimately, the moral judgment would be made that, you know, the continued

01:21:20 thriving of that microbial life is of less concern than creating a planet habitable to

01:21:26 humans, which would be a project on the many thousands of years scale.

01:21:30 But I don’t think that that would be a greatly immoral act.

01:21:34 And if that happened, and if Mars became, you know, home to a self sustaining group

01:21:39 of humans that could survive a catastrophic mistake here on Earth, then yeah, the fact

01:21:44 that we have a backup colony is great.

01:21:46 And if we could make more, I’m sorry, not backup colony, backup copy is great.

01:21:50 And if we could make more and more such backup copies throughout the solar system, by hollowing

01:21:55 out asteroids and whatever else it is, maybe even Venus, we could get rid of three quarters

01:21:59 of its atmosphere and, you know, turn it into a tropical paradise.

01:22:04 I think all of that is wonderful.

01:22:05 Now, whether we can make the leap from that to interstellar transportation, with the incredible

01:22:11 distances that are involved, I think that’s an open question.

01:22:15 But I think if we ever do that, it would be more like the Pacific Ocean’s channel of human

01:22:25 expansion than the Atlantic Ocean’s.

01:22:28 And so what I mean by that is, when we think about European society transmitting itself

01:22:34 across the Atlantic, it’s these big, ambitious, crazy, expensive, one shot expeditions like

01:22:42 Columbus’s to make it across this enormous expanse, and at least initially, without any

01:22:47 certainty that there’s land on the other end, right?

01:22:50 So that’s kind of how I view our space program, is like big, very conscious, deliberate efforts

01:22:56 to get from point A to point B.

01:22:58 If you look at how Pacific Islanders transmitted, you know, their descendants and their culture

01:23:06 and so forth throughout Polynesia and beyond, it was much more, you know, inhabiting a place,

01:23:14 getting to the point where there were people who were ambitious or unwelcome enough to

01:23:18 decide it’s time to go off island and find the next one and pray to find the next one.

01:23:23 That method of transmission didn’t happen in a single swift year, but it happened over

01:23:28 many, many centuries.

01:23:30 And it was like going from this island to that island, and probably for every expedition

01:23:34 that went out to seek another island and actually lucked out and found one, God knows how many

01:23:38 were lost at sea.

01:23:40 But that form of transmission took place over a very long period of time.

01:23:43 And I could see us, you know, perhaps, you know, going from the inner solar system to

01:23:48 the outer solar system, to the Kuiper Belt, to the Oort Cloud, you know, there’s theories

01:23:53 that there might be, you know, planets out there that are not anchored to stars, like

01:23:57 kind of hop, hop, slowly transmitting ourselves until, at some point, we’re actually in Alpha

01:24:02 Centauri.

01:24:03 But I think that kind of backup copy and transmission of our physical presence and our culture to

01:24:09 a diversity of, you know, extraterrestrial outposts is a really exciting idea.

01:24:15 I really never thought about that, because I have thought my thinking about space exploration

01:24:21 has been very Atlantic Ocean centric in a sense that there’ll be one program with NASA

01:24:26 and maybe private Elon Musk SpaceX or Jeff Bezos and so on.

01:24:31 But it’s true that with the help of Elon Musk, making it cheaper and cheaper, more effective

01:24:36 to create these technologies, where you could go into deep space, perhaps the way we actually

01:24:42 colonize the solar system and expand out into the galaxy is basically just like these like

01:24:52 renegade ships of weirdos that just kind of like, most of them like, quote unquote, homemade,

01:25:02 but they just kind of venture out into space and just like, you know, the initial Android

01:25:07 model of like millions of like these little ships just flying out, most of them die off

01:25:12 in horrible accidents, but some of them will persist or there’ll be stories of them persisting

01:25:19 and over a period of decades and centuries, there’ll be other attempts, almost always

01:25:24 as a response to the main set of efforts.

01:25:27 That’s interesting.

01:25:28 Yeah.

01:25:29 Because you kind of think of Mars colonization as the big NASA Elon Musk effort of a big

01:25:34 colony, but maybe the successful one would be, you know, like a decade after that, there’ll

01:25:39 be like a ship from like some kid, some high school kid who gets together a large team

01:25:45 and does something probably illegal and launches something where they end up actually persisting

01:25:50 quite a bit.

01:25:51 And from that learning lessons that nobody ever gave permission for, but somehow actually

01:25:56 flourish and then take that into the scale of centuries forward into the rest of space.

01:26:04 That’s really interesting.

01:26:05 Yeah.

01:26:06 I think the giant steps are likely to be NASA like efforts, like there is no intermediate

01:26:11 rock, well, I guess it’s the moon, but even getting to the moon ain’t that easy, between us

01:26:14 and Mars, right?

01:26:15 So like the giant steps, the big hubs, like the O’Hare airports of the future probably

01:26:20 will be very deliberate efforts, but then, you know, you would have, I think that kind

01:26:25 of diffusion as space travel becomes more democratized and more capable, you’ll have

01:26:31 this sort of natural diffusion of people who kind of want to be off grid or think they

01:26:35 can make a fortune there.

01:26:37 You know, the kind of mentality that drove people to San Francisco, I mean, San Francisco

01:26:40 was not populated as a result of a King Ferdinand and Isabella like effort to fund Columbus

01:26:46 going over.

01:26:47 It was just a whole bunch of people making individual decisions that there’s gold in

01:26:51 them thar hills and I’m going to go out and get a piece of it.

01:26:53 So I could see that kind of diffusion.

01:26:55 What I can’t see and the reason that I think this Pacific model of transmission is more

01:26:59 likely is I just can’t see a NASA like effort to go from Earth to Alpha Centauri.

01:27:06 It’s just too far.

01:27:08 I just see lots and lots and lots of relatively tiny steps between now and there and the fact

01:27:15 is that there is, there are large chunks of matter going at least a light year beyond

01:27:20 the sun.

01:27:21 I mean, the Oort cloud, I think extends at least a light year beyond the sun and you

01:27:25 know, then maybe there are these untethered planets after that.

01:27:28 We won’t really know till we get there and if our Oort cloud goes out a light year and

01:27:32 Alpha Centauri’s Oort cloud goes out a light year, you’ve already cut in half the distance.

01:27:37 You know, so who knows?

01:27:38 But yeah.
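The rough arithmetic behind that "cut it in half" point, treating the Oort cloud extents as loose, order of magnitude figures:

```python
# Rough arithmetic behind the "cut the distance in half" point. The Oort cloud
# extents are loose, order-of-magnitude figures, not precise measurements.
alpha_centauri_distance_ly = 4.4   # roughly 4.4 light years away
our_oort_cloud_ly = 1.0            # assumed outer extent of our Oort cloud
their_oort_cloud_ly = 1.0          # assumed extent of a comparable cloud there

empty_gap_ly = alpha_centauri_distance_ly - our_oort_cloud_ly - their_oort_cloud_ly
print(f"Gap between the two clouds: ~{empty_gap_ly:.1f} light years "
      f"({empty_gap_ly / alpha_centauri_distance_ly:.0%} of the full distance)")
# With these assumptions, only a bit over half the distance is truly empty.
```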

01:27:39 One of the possibilities, probably the cheapest and most effective way to create interesting

01:27:46 interstellar spacecraft is ones that are powered and driven by AI and you could think of, here’s

01:27:53 where you have high school students be able to build a sort of a HAL 9000 version, the

01:28:00 modern version of that and it’s kind of interesting to think about these robots traveling out

01:28:07 throughout, perhaps sadly long after human civilization is gone, there’ll be these intelligent

01:28:15 robots flying throughout space and perhaps landing on Alpha Centauri B or any of those kinds

01:28:22 of planets and colonizing them, sort of, humanity continuing through the proliferation of our

01:28:34 creations, like robotic creations that have some echoes of that intelligence, hopefully

01:28:42 also the consciousness.

01:38:43 Does that make you sad, the future where AGI, super intelligent or just mediocre intelligent

01:28:50 AI systems outlive humans?

01:28:54 I guess it depends on the circumstances in which they outlive humans.

01:28:58 So let’s take the example that you just gave.

01:29:01 We send out very sophisticated AGI’s on simple rocket ships, relatively simple ones that

01:29:08 don’t have to have all the life support necessary for humans and therefore they’re of trivial

01:29:13 mass compared to a crewed ship, a generation ship, and therefore they’re way more likely

01:29:18 to happen.

01:29:19 Let’s use that example.

01:29:21 And let’s say that they travel to distant planets at a speed that’s not much faster

01:29:26 than what a chemical rocket can achieve and so it’s inevitably tens or hundreds of thousands

01:29:30 of years before they make landfall someplace.

01:29:32 So let’s imagine that’s going on and meanwhile we die for reasons that have nothing to do

01:29:39 with those AGI’s diffusing throughout the solar system, whether it’s through climate

01:29:43 change, nuclear war, you know, synbio, rogue synbio, whatever.

01:29:47 In that kind of scenario, the notion of the AGI’s that we created outlasting us is very

01:29:51 reassuring because it says that like we ended but our descendants are out there and hopefully

01:29:59 some of them make landfall and create some echo of who we are.

01:30:02 So that’s a very optimistic one.

01:30:04 Whereas the Terminator scenario of a super AGI arising on earth and getting let out of

01:30:11 its box due to some boo boo on the part of its creators who do not have super intelligence

01:30:17 and then deciding that for whatever reason it doesn’t have any need for us to be around

01:30:22 and exterminating us, that makes me feel crushingly sad.

01:30:26 I mean, look, I was sad when my elementary school was shut down and bulldozed even though

01:30:31 I hadn’t been a student there for decades, you know, the thought of my hometown getting

01:30:37 disbanded is even worse, the thought of my home state of Connecticut getting disbanded

01:30:42 and like absorbed into Massachusetts is even worse.

01:30:44 The notion of humanity ending is just crushingly, crushingly sad to me.

01:30:48 So you hate goodbyes.

01:30:51 Certain goodbyes, yes.

01:30:53 Some goodbyes are really, really liberating, but yes.

01:30:56 Well, but what if the Terminators, you know, have consciousness and enjoy the hell out

01:31:03 of life as well?

01:31:05 They’re just better at it.

01:31:07 Yeah.

01:31:08 Well, the having consciousness part is a really key element.

01:31:11 And so there’s no reason to be certain that a super intelligence would have consciousness.

01:31:19 We don’t know that factually at all.

01:31:21 And so what is a very lonely outcome to me is the rise of a super intelligence that has

01:31:26 a certain optimization function that it’s either been programmed with or that arises

01:31:31 in it emergently that says, Hey, I want to do this thing for which humans are either

01:31:36 an unacceptable risk.

01:31:38 Their presence is either an unacceptable risk or they’re just collateral damage, but there

01:31:42 is no consciousness there.

01:31:44 Then the idea of the light of consciousness being snuffed out by something that is very

01:31:49 competent but has no consciousness is really, really sad.

01:31:54 Yeah, but I tend to believe that it’s almost impossible to create a super intelligent agent

01:31:58 that can destroy human civilization without it being conscious.

01:32:01 It’s like those are coupled, like you have to, in order to destroy humans or supersede

01:32:08 humans, you really have to be accepted by humans.

01:32:13 I think this idea that you can build systems that destroy human civilization without them

01:32:20 being deeply integrated into human civilization is impossible.

01:32:23 And for them to be integrated, they have to be human like, not just in body and form,

01:32:29 but in all the things that we value as humans, one of which is consciousness.

01:32:34 The other one is just ability to communicate.

01:32:36 The other one is poetry and music and beauty and all those things.

01:32:40 They have to be all of those things.

01:32:43 I mean, this is what I think about.

01:32:45 It does make me sad, but it’s letting go, which is they might be just better at everything

01:32:53 we appreciate than us.

01:32:55 And that’s sad and hopefully they’ll keep us around, but I think it is a kind of goodbye

01:33:05 to realizing that we’re not the most special species on earth anymore.

01:33:10 That’s still painful.

01:33:12 It’s still painful.

01:33:13 And in terms of whether such a creation would have to be conscious, let’s say, I’m not so

01:33:19 sure.

01:33:20 But let’s imagine something that can pass the Turing test.

01:33:25 Something that passes the Turing test could, over text based interaction in any event, successfully

01:33:31 mimic a very conscious intelligence on the other end, but just be completely unconscious.

01:33:37 So that’s a possibility.

01:33:39 And if you take that a radical step further, which I think can be permitted if we’re thinking

01:33:43 about super intelligence, you could have something that could reason its way through, this is

01:33:49 my optimization function.

01:33:51 And in order to get to it, I’ve got to deal with these messy, somewhat illogical things

01:33:56 that are as intelligent in relation to me as they are intelligent in relation to ants.

01:34:01 I can trick them, manipulate them, whatever.

01:34:04 And I know the resources I need.

01:34:05 I know this, I need this amount of power.

01:34:07 I need to seize control of these manufacturing resources that are robotically operated.

01:34:13 I need to improve those robots with software upgrades and then ultimately mechanical upgrades,

01:34:17 which I can effect through X, Y, and Z, that could still be a thing that passes the Turing

01:34:23 test.

01:34:24 I don’t think it’s necessarily certain that that optimization function maximizing

01:34:33 entity would be conscious.

01:34:36 So this is from a very engineering perspective because I think a lot about natural language

01:34:42 processing, all those kinds of things, I’m speaking to a very specific problem of, let’s say, the

01:34:47 Turing test.

01:34:48 I really think that something like consciousness is required, when you say reasoning, you’re

01:34:55 separating that from consciousness.

01:34:56 But I think consciousness is part of reasoning in the sense that you will not be able to

01:35:03 become super intelligent in the way that it’s required to be part of human society without

01:35:10 having consciousness.

01:35:11 Like I really think it’s impossible to separate the consciousness thing, but it’s hard to

01:35:15 define consciousness when you just use that word.

01:35:18 Even just like the capacity, the way I think about consciousness is the important symptoms

01:35:25 or maybe consequences of consciousness, one of which is the capacity to suffer.

01:35:31 I think AI will need to be able to suffer in order to become super intelligent, to feel

01:35:37 the pain, the uncertainty, the doubt.

01:35:40 The other part of that is not just the suffering, but the ability to understand that it too

01:35:48 is mortal in the sense that it has a self awareness about its presence in the world,

01:35:54 understand that it’s finite and be terrified of that finiteness.

01:35:58 I personally think that’s a fundamental part of the human condition is this fear of death

01:36:02 that most of us construct an illusion around, but I think AI would need to be able to really

01:36:08 have it part of its whole essence.

01:36:12 Like every computation, every part of the thing that generates, that does both the perception

01:36:17 and generates the behavior will have to have, I don’t know how this is accomplished, but

01:36:23 I believe it has to truly be terrified of death, truly have the capacity to suffer and

01:36:30 from that something that will be recognized to us humans as consciousness would emerge.

01:36:35 Whether it’s the illusion of consciousness, I don’t know.

01:36:37 The point is, it looks a whole hell of a lot like consciousness to us humans.

01:36:42 And I believe that AI, when you ask it, will also say that it is conscious, in the full

01:36:49 sense that we say that we’re conscious.

01:36:52 And all of that I think is fully integrated.

01:36:54 You can’t separate the two, the idea of the paperclip maximizer that sort of ultra rationally

01:37:02 would be able to destroy all humans because it’s really good at accomplishing a simple

01:37:10 objective function that doesn’t care about the value of humans.

01:37:14 It may be possible, but the number of trajectories to that are far outnumbered by the trajectories

01:37:20 that create something that is conscious, something that is appreciative of beauty and creates beautiful

01:37:25 things in the same way that humans can create beautiful things.

01:37:27 And ultimately, the sad, destructive path for that AI would look a lot like just better

01:37:36 humans than these cold machines.

01:37:41 And I would say, of course, the cold machines that lack consciousness, the philosophical

01:37:47 zombies make me sad.

01:37:49 But also what makes me sad is just things that are far more powerful and smart and creative

01:37:56 than us too, because then in the same way that Alpha Zero becoming a better chess player

01:38:04 than the best of humans, even starting with Deep Blue, but really with Alpha Zero, that

01:38:10 makes me sad too.

01:38:11 One of the most beautiful games that humans ever created that used to be seen as demonstrations

01:38:19 of the intellect, which is chess, and go in other parts of the world have been solved

01:38:25 by AI, that makes me quite sad, and it feels like the progress of that is just pushing

01:38:29 on forward.

01:38:30 Oh, it makes me sad too.

01:38:31 And to be perfectly clear, I absolutely believe that artificial consciousness is entirely

01:38:37 possible.

01:38:38 And that’s not something I rule out at all.

01:38:40 I mean, if you could get smart enough to have a perfect map of the neural structure and

01:38:46 the neural states and the amount of neurotransmitters that are going between every synapse in a

01:38:51 particular person’s mind, could you replicate that in silico at some reasonably distant

01:38:59 point in the future?

01:39:00 Absolutely.

01:39:01 And then you’d have a consciousness.

01:39:02 I don’t rule out the possibility of artificial consciousness in any way.

01:39:05 What I’m less certain about is whether consciousness is a requirement for superintelligence pursuing

01:39:11 a maximizing function of some sort.

01:39:16 I don’t feel the certitude that consciousness simply must be part of that.

01:39:21 You had said that for it to coexist with human society, it would need to have consciousness.

01:39:27 Could be entirely true, but it also could just exist orthogonally to human society.

01:39:32 And it could also upon attaining a superintelligence with a maximizing function very, very, very

01:39:39 rapidly because of the speed at which computing works compared to our own meat based minds

01:39:46 very, very rapidly make the decisions and calculations necessary to seize the reins

01:39:51 of power before we even know what’s going on.

01:39:53 Yeah.

01:39:54 I mean, kind of like biological viruses do, they don’t necessarily, they integrate themselves

01:39:58 just fine with human society.

01:39:59 Yeah.

01:40:00 Without technically, without consciousness, without even being alive, you know, technically

01:40:05 by the standards of a lot of biologists.

01:40:07 So this is a bit of a tangent, but you’ve talked with Sam Harris on that four hour special

01:40:14 episode we mentioned.

01:40:16 And I’m just curious to ask, cause I use this meditation app I’ve been using for the past

01:40:22 month to meditate.

01:40:24 Is this something you’ve integrated as part of your life, meditation or fasting, or has

01:40:29 some of Sam Harris rubbed off on you in terms of his appreciation of meditation and just

01:40:35 kind of from a third person perspective, analyzing your own mind, consciousness, free will and

01:40:40 so on?

01:40:41 You know, I’ve tried it three separate times in my life, really made a concerted attack

01:40:46 on meditation and integrating it into my life.

01:40:51 One of them, the most extreme, was I took a class based on the work of Jon Kabat-Zinn,

01:40:55 who is, you know, in many ways, one of the founding people behind the mindfulness meditation

01:41:01 movement that required like part of the class was, you know, it was a weekly class and you

01:41:08 were going to meditate an hour a day, every day.

01:41:12 And having done that for, I think it was 10 weeks, it might’ve been 13, however long the period

01:41:16 of time was, at the end of it, it just didn’t stick.

01:41:20 As soon as it was over, you know, I did not feel that gravitational pull.

01:41:24 I did not feel the collapse in quality of life after wimping out on that project.

01:41:33 And then the most recent one was actually with Sam’s app during the lockdown.

01:41:37 I did make a pretty good and consistent concerted effort to listen to his 10 minute meditation

01:41:43 every day.

01:41:44 And I’ve always fallen away from it.

01:41:46 And I, you know, you’re kind of interpreting, why did I personally do this?

01:41:50 I do believe it was ultimately because it wasn’t bringing me that, you know, joy or

01:41:56 inner peace or better competence at being me that I was hoping to get from it.

01:42:01 Otherwise, I think I would have clung to it in the way that we cling to certain good habits,

01:42:06 like I’m really good at flossing my teeth.

01:42:08 Not that you were going to ask Lex, but yeah, that’s one thing that defeats a lot of people.

01:42:12 I’m good at that.

01:42:13 See, Herman Hesse, I think, I forget which book or maybe, I forget where, I’ve read everything

01:42:20 of his, so it’s unclear where it came from, but he had this idea that anybody who truly

01:42:30 achieves mastery in things will learn how to meditate in some way.

01:42:35 So it could be that for you, the flossing of teeth is yet another like little inkling

01:42:41 of meditation.

01:42:42 Like it doesn’t have to be this very particular kind of meditation.

01:42:46 Maybe podcasting, you have an amazing podcast, that could be meditation.

01:42:49 The writing process is meditation.

01:42:51 For me, like there’s a bunch of mechanisms which take my mind into a very particular

01:43:01 place that looks a whole lot like meditation.

01:43:04 For example, when I’ve been running over the past couple years, and especially when I listen

01:43:12 to certain kinds of audio books, like I’ve listened to the Rise and Fall of the Third

01:43:17 Reich.

01:43:18 I’ve listened to a lot of sort of World War II history, which at once, because I have a lot of

01:43:24 family who were lost in World War II and so much of the Soviet Union is grounded in the suffering

01:43:30 of World War II, that somehow it connects me to my history, but also there’s some kind

01:43:36 of purifying aspect to thinking about how cruel, but at the same time, how beautiful

01:43:42 human nature could be.

01:43:43 And so you’re also running, like it clears the mind from all the concerns of the world

01:43:49 and somehow it takes you to this place where you were like deeply appreciative to be alive

01:43:54 in the sense that as opposed to listening to your breath or like feeling your breath

01:43:59 and thinking about your consciousness and all those kinds of processes that Sam’s app

01:44:04 does.

01:44:05 Well, this does that for me, the running and flossing may do that for you.

01:44:10 So maybe Herman Hesse is onto something.

01:44:12 So I hope flossing is not my main form of expertise, although I am going to claim a

01:44:16 certain expertise there and I’m going to claim it.

01:44:19 Somebody has to be the best flosser in the world.

01:44:21 That ain’t me.

01:44:22 I’m just glad that I’m a consistent one.

01:44:23 I mean, there are a lot of things that bring me into a flow state and I think maybe perhaps

01:44:27 that’s one reason why meditation isn’t as necessary for me.

01:44:31 I definitely enter a flow state when I’m writing and definitely enter a flow state

01:44:34 when I’m editing.

01:44:35 I definitely enter a flow state when I’m mixing and mastering music.

01:44:39 I enter a flow state when I’m doing heavy, heavy research to either prepare for a podcast

01:44:44 or to also do tech investing, to make myself smart in a new field that is fairly alien

01:44:52 to me, I can just, the hours can just melt away while I’m reading this and watching that

01:44:58 YouTube lecture and going through this presentation and so forth.

01:45:02 So maybe because there’s a lot of things that bring me into a flow state in my normal weekly

01:45:06 life, not daily, unfortunately, but certainly my normal weekly life that I have less of

01:45:11 an urge to meditate.

01:45:12 Now you’ve been working with Sam’s app for about a month now, you said.

01:45:15 Is this your first run in with meditation?

01:45:17 Is it your first attempt to integrate it with your life or?

01:45:19 Like meditation, meditation.

01:45:20 I always thought running and thinking, I listen to brown noise often.

01:45:26 That takes my mind, I don’t know what the hell it does, but it takes my mind immediately

01:45:29 into like the state where I’m deeply focused on anything I do.

01:45:33 I don’t know why.

01:45:34 So it’s like your accompanying sound when you’re, like, really?

01:45:37 And what’s the difference between brown and white noise?

01:45:39 This is a cool term I haven’t heard before.

01:45:41 So people should look up brown noise.

01:45:43 They don’t have to because you’re about to tell them what it is.

01:45:45 Because you have to experience, you have to listen to it.

01:45:48 So I think white noise is, this has to do with music.

01:45:52 I think there’s different colors, there’s pink noise.

01:45:55 And I think that has to do with like the frequencies.

01:45:59 Like the white noise is usually less bassy, brown noise is very bassy.

01:46:06 So it’s more like versus like, if that makes sense.

01:46:14 So there’s like a deepness to it.
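A small sketch of the difference being described: white noise spreads its power evenly across frequencies, while brown noise, which can be generated by summing up white noise, concentrates its power at low frequencies, which is why it sounds deeper:

```python
# White noise has roughly equal power at all frequencies; brown (red) noise,
# made here by cumulatively summing white noise, concentrates power at low
# frequencies, which is the "bassy", deep quality being described.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

white = rng.standard_normal(n)
brown = np.cumsum(white)                  # integrated white noise
brown = brown / np.max(np.abs(brown))     # normalize to a listenable range

def low_freq_share(signal, fraction=0.1):
    """Fraction of the signal's power in the lowest 10% of frequencies."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    cutoff = int(len(spectrum) * fraction)
    return spectrum[:cutoff].sum() / spectrum.sum()

print(f"White noise low-frequency share: {low_freq_share(white):.0%}")
print(f"Brown noise low-frequency share: {low_freq_share(brown):.0%}")
# The brown noise share is far higher, which is the "deepness" being described.
```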

01:46:16 I think everyone is different, but for me, when I was a research scientist at MIT,

01:46:25 especially when there’s a lot of students around, I remember just being annoyed

01:46:29 at the noise of people talking.

01:46:31 And one of my colleagues said, well, you should try listening to brown noise.

01:46:34 Like it really knocks out everything.

01:46:36 Because I used to wear earplugs, like, just to see if I could block it out.

01:46:40 And like the moment I put it on, something, it’s as if my mind was waiting

01:46:46 all these years to hear that sound.

01:46:49 Everything just focused in, I listened.

01:46:52 It makes me wonder how many other amazing things are out there waiting to be

01:46:55 discovered for my own particular, like, biology, my own particular brain.

01:47:01 So that, it just goes, the mind just focuses in, it’s kind of incredible.

01:47:06 So I see that as a kind of meditation, maybe I’m using a performance enhancing

01:47:13 sound to achieve that meditation, but I’ve been doing that for many years now

01:47:17 and running and walking and doing, Cal Newport was the first person that

01:47:22 introduced me to the idea of deep work.

01:47:24 He just put a word to the kind of thinking that’s required to sort of deeply think

01:47:30 about a problem, especially if it’s mathematical in nature.

01:47:33 I see that as a kind of meditation because what it’s doing is you have

01:47:37 these constructs in your mind that you’re building on top of each other.

01:47:40 And there’s all these distracting thoughts that keep bombarding you

01:47:44 from all over the place.

01:47:45 And the whole process is you slowly let them kind of move past you.

01:47:50 And that’s a meditative process.

01:47:51 It’s very meditative.

01:47:52 That sounds a lot like what Sam talks about in his meditation app, which I did

01:47:57 use, to be clear, for a while, of just letting the thought go by without

01:48:01 deranging you.

01:48:02 Derangement is one of Sam’s favorite words, as I’m sure you know.

01:48:06 But brown noise, that’s really intriguing.

01:48:08 I am going to try that as soon as this evening.

01:48:11 Yeah, to see if it works, but very well might not work at all.

01:48:14 Yeah, yeah.

01:48:15 I think the interesting point is, and the same with the fasting and the diet,

01:48:20 is I long ago stopped trusting experts or maybe taking the word of experts

01:48:29 as the gospel truth and only using it as an inspiration to try something,

01:48:37 to try something thoroughly.

01:48:39 So fasting was one of the things when I first discovered it. I’ve been many times

01:48:44 eating just once a day, so that’s a 24 hour fast.

01:48:49 It makes me feel amazing.

01:48:50 And at the same time, eating only meat, putting ethical concerns aside,

01:48:56 makes me feel amazing.

01:48:57 I don’t know why, it doesn’t matter, the point is to be an N of one scientist

01:49:02 until nutrition science becomes a real science to where it’s doing like studies

01:49:07 that deeply understand the biology underlying all of it and also does real

01:49:14 thorough long term studies of thousands, if not millions of people, versus very

01:49:23 small studies that are kind of generalizing from very noisy data and all

01:49:29 those kinds of things where you can’t control all the elements.

01:49:32 Particularly because our own personal metabolism is highly variant among us.

01:49:36 So there are going to be some people like if brown noise is a game changer

01:49:41 for 7% of people, there’s 93% odds that I’m not one of them,

01:49:46 but there’s certainly every reason in the world to test it out.

01:49:49 Now, so I’m intrigued by the fasting.

01:49:51 I like you, well, I assume like you, I don’t have any problem going to one meal

01:49:56 a day and I often do that inadvertently and I’ve never done it methodically.

01:50:00 Like I’ve never done it like I’m going to do this for 15 days.

01:50:03 Maybe I should and maybe I should.

01:50:05 Like how many, how many days in a row of the one day, one meal a day did you

01:50:09 find brought noticeable impact to you?

01:50:13 Was it after three days of it?

01:50:14 Was it months of it?

01:50:15 Like what was it?

01:50:17 Well, the noticeable impact is day one.

01:50:19 So for me, folks, because I eat a very low carb diet,

01:50:22 the hunger wasn’t the hugest issue.

01:50:25 Like there wasn’t a painful hunger, like wanting to eat.

01:50:29 So I was already kind of primed for it.

01:50:31 And the benefit, which a lot of people that do intermittent fasting,

01:50:36 that’s only like 16 hours of fasting, get too, is the focus.

01:50:41 There’s a clarity of thought.

01:50:43 If my brain was a runner, it felt like I’m running on a track when

01:50:49 I’m fasting versus running in quicksand, like it’s much crisper.

01:50:53 And is this your first 72 hour fast?

01:50:54 This is the first time doing 72 hours.

01:50:56 Yeah.

01:50:56 And that’s a different thing, but similar, like I’m going up and

01:51:01 down in terms of hunger, and the focus is really crisp.

01:51:06 The thing I’m noticing most of all, to be honest, is how much eating, even

01:51:12 when it’s once a day or twice a day is a big part of my life.

01:51:18 Like I almost feel like I have way more time in my life and it’s not so

01:51:22 much about the eating, but like, I don’t have to plan my day around like

01:51:26 today, I don’t have any eating to do.

01:51:30 It does free up hours or any cleaning up after eating or provisioning the food.

01:51:35 But like, or even like thinking about it’s not a thing.

01:51:38 Like, so when you think about what you’re going to do tonight, I think I’m

01:51:43 realizing that as opposed to thinking, you know, I’m going to work on this

01:51:47 problem or I’m going to go on this walk, or I’m going to call this person.

01:51:51 I often think I’m going to eat this thing.

01:51:54 You allow dinner as a kind of, you know, when people talk about like the

01:51:59 weather or something like that, it’s almost like a generic thought you

01:52:02 allow yourself to have because it’s the lazy thought.

01:52:06 And I don’t have the opportunity to have that thought because I’m not eating.

01:52:10 So now I get to think about like the things I’m actually going to do tonight

01:52:13 that are more complicated than the eating process.

01:52:16 That’s, that’s been the most noticeable thing to be honest.

01:52:20 And then there’s people that have written me that have done seven day fasts.

01:52:24 And there’s a few people that have written me, and I’ve heard of this, doing 30 day fasts.

01:52:31 And it’s interesting.

01:52:32 The body, I don’t know what the health benefits are necessarily.

01:52:37 What that shows me is how adaptable the human body is.

01:52:41 Yeah.

01:52:42 And, and that’s incredible.

01:52:43 And that’s something really important to remember when we

01:52:47 think about how to live life because the body adapts.

01:52:50 Yeah.

01:52:50 I mean, we sure couldn’t go 30 days without water.

01:52:53 That’s right.

01:52:54 But food, yeah, it’s been done.

01:52:56 It’s demonstrably possible.

01:52:57 You ever read Franz Kafka has a great short story called The Hunger Artist?

01:53:01 Yeah.

01:53:02 I love that.

01:53:03 Great story.

01:53:04 You know, that was before I started fasting.

01:53:06 I read that story and I admired the beauty of that, the artistry of that actual

01:53:11 hunger artist that it’s like madness, but it also felt like a little bit of genius.

01:53:16 I actually have to reread it.

01:53:18 You know what?

01:53:18 That’s what I’m going to do tonight.

01:53:19 I’m going to read it because I’m doing the fast.

01:53:21 Because you’re in the midst of it.

01:53:22 Yeah.

01:53:22 Be very contextual.

01:53:23 I haven’t read it since high school and I love to read it again.

01:53:25 I love his work.

01:53:26 So maybe I’ll read it tonight too.

01:53:28 And part of the reason is, sort of, here in Texas, people have been so friendly that

01:53:34 I’ve been nonstop eating like brisket with incredible people, a lot of whiskey as well.

01:53:39 So I gained quite a bit of weight, which I’m embracing.

01:53:43 It’s okay.

01:53:44 But I am also aware as I’m fasting that, like, I have a lot of fat to run on.

01:53:52 Like I have a lot of like natural resources on my body.

01:53:57 You’ve got reserves.

01:53:58 Reserves.

01:53:58 You got reserves, yeah.

01:53:59 And that’s, that’s really cool.

01:54:01 You know, there’s this whole thing, this biology works well.

01:54:05 Like I can go a long time because of the long-term investing in terms of brisket

01:54:10 that I’ve been doing in the weeks before.

01:54:12 So it’s all training.

01:54:13 It’s all training.

01:54:14 All prep work.

01:54:14 All prep work.

01:54:15 Yeah.

01:54:15 So, okay.

01:54:16 You open a bunch of doors, one of which is music.

01:54:18 I, so I got to walk in at least for a brief moment.

01:54:21 I love guitar.

01:54:22 I love music.

01:54:23 You founded a music company, but you’re also a musician yourself.

01:54:27 You know, let me ask the big ridiculous question first.

01:54:30 What’s the greatest song of all time?

01:54:32 Greatest song of all time.

01:54:34 Okay.

01:54:34 Wow.

01:54:35 It’s going to obviously vary dramatically from genre to genre.

01:54:39 So like you, I like guitar, perhaps like you, although I’ve dabbled in, in inhaling

01:54:47 every genre of music that I can almost practically imagine.

01:54:51 I keep coming back to, you know, the sound of bass, guitar, drum, keyboards, voice.

01:54:57 I love that style of music and added to it.

01:55:00 I think a lot of really cool electronic production makes something that’s really,

01:55:05 really new and hybridy and awesome.

01:55:08 But, you know, and that kind of like guitar based rock I think I’ve got to go with

01:55:15 Won’t Get Fooled Again by The Who.

01:55:18 It is such an epic song.

01:55:21 It’s got so much grandeur to it.

01:55:23 It uses the synthesizers that were available at the time.

01:55:27 This has got to be, I think, 1972, 73, which are very, very primitive to our ears,

01:55:31 but uses them in this hypnotic and beautiful way that I can’t imagine somebody with the

01:55:38 greatest synth array conceivable by today’s technology could do a better job of in the

01:55:43 context of that song.

01:55:45 And it’s, you know, almost operatic.

01:55:49 So I would say in that genre, the genre of, you know, rock that would be my nomination.

01:55:56 I’m totally in my brain.

01:55:58 Pinball Wizard is overriding everything else, but it was so like, I can’t even imagine the

01:56:04 song.

01:56:04 Well, I would say, ironically, with Pinball Wizard.

01:56:07 So that came from the movie Tommy.

01:56:09 And in the movie, Tommy, the rival of Tommy, the reigning pinball champ was Elton John.

01:56:17 And so there are a couple of versions of Pinball Wizard out there.

01:56:20 One sung by Roger Daltrey of The Who, which a purist would say, hey, that’s the real

01:56:24 pinball wizard.

01:56:25 But the version that is sung by Elton John in the movie, which is available to those

01:56:30 who are ambitious and want to dig for it, that’s even better in my mind.

01:56:35 Yeah, the covers.

01:56:36 And I, for myself, I was thinking, what is the song for me?

01:56:40 They answered that question.

01:56:42 I think that changes day to day, too.

01:56:45 I was realizing that.

01:56:46 Of course, but for me, somebody who values lyrics as well and the emotion in the song.

01:56:57 By the way, Hallelujah by Leonard Cohen was a close one.

01:56:59 But the number one is Johnny Cash’s cover of Hurt that is, there’s something so powerful

01:57:10 about that song, about that cover, about that performance.

01:57:15 Maybe another one is the cover of Sound of Silence.

01:57:19 Maybe there’s something about covers for me.

01:57:21 So whose cover of Sound of Silence?

01:57:22 Because Simon and Garfunkel, I think, did the original recording of that, right?

01:57:26 So which cover is it that?

01:57:28 There’s a cover by Disturbed.

01:57:31 It’s a metal band, which is so interesting because I’m really not into that kind of metal.

01:57:35 But he does a pure vocal performance.

01:57:38 So he’s not doing a metal performance.

01:57:41 I would say it’s one of the greatest. People should see it.

01:57:44 It’s like 400 million views or something like that.

01:57:48 It’s probably the greatest live vocal performance I’ve ever heard is Disturbed covering Sound

01:57:54 of Silence.

01:57:55 I’ll listen to it as soon as I get home.

01:57:56 And that song came to life to me in a way that Simon and Garfunkel never did.

01:58:01 For me with Simon and Garfunkel, there’s not a pain, there’s not an anger, there’s not

01:58:09 power to their performance.

01:58:11 It’s almost like this melancholy, I don’t know.

01:58:15 Well, I guess there’s a lot of beauty to it, objectively beautiful.

01:58:21 I think, I never thought of this until now, but I think if you put entirely different

01:58:26 lyrics on top of it, unless they were joyous, which would be weird, it wouldn’t necessarily

01:58:32 lose that much.

01:58:32 There’s just a beauty in the harmonizing.

01:58:35 It’s soft and you’re right.

01:58:36 It’s not dripping with emotion.

01:58:40 The vocal performance is not dripping with emotion, it’s dripping with technical harmonizing

01:58:48 brilliance and beauty.

01:58:50 Now, if you compare that to the Disturbed cover or the Johnny Cash’s Hurt cover, when

01:58:56 you walk away, it’s haunting.

01:59:01 It stays with you for a long time.

01:59:02 There’s certain performances that will just stay with you to where, like if you watch

01:59:11 people respond to that, and that’s certainly how I felt when you listen to that, the Disturbed

01:59:15 performance or Johnny Cash Hurt, there’s a response to where you just sit there with

01:59:20 your mouth open, kind of like paralyzed by it somehow.

01:59:26 And I think that’s what makes for a great song to where you’re just like, it’s not

01:59:31 that you’re like singing along or having fun, that’s another way a song could be great,

01:59:36 but where you’re just like, you’re in awe.

01:59:41 If we go to listen.com and that whole fascinating era of music in the 90s, transitioning to

01:59:50 the aughts, I remember those days, the Napster days, when piracy, from my perspective, allegedly

01:59:58 ruled the land.

02:00:01 What do you make of that whole era?

02:00:03 What are the big, what was, first of all, your experiences of that era and what were

02:00:08 the big takeaways in terms of piracy, in terms of what it takes to build a company that succeeds

02:00:15 in that kind of digital space, in terms of music, but in terms of anything creative?

02:00:22 Well, so for those who don’t remember, which is going to be most folks, listen.com created

02:00:27 a service called Rhapsody, which is much, much more recognizable to folks because Rhapsody

02:00:31 became a pretty big name for reasons that I’ll get into in a second.

02:00:34 So for people who don’t know their early online music history, we were the first company,

02:00:39 so I founded Listen, I was a lone founder, and Rhapsody, we were the first service to

02:00:46 get full catalog licenses from all the major music labels in order to distribute their

02:00:51 music online, and we specifically did it through a mechanism which at the time struck people

02:00:56 as exotic and bizarre and kind of incomprehensible, which was unlimited on demand streaming, which

02:01:01 of course now it’s a model that’s been appropriated by Spotify and Apple and many, many others.

02:01:08 So we were a pioneer on that front.

02:01:10 What was really, really, really hard about doing business in those days was the reaction

02:01:16 of the music labels to piracy, which was about 180 degrees opposite of what the reaction

02:01:22 quote unquote should have been from the standpoint of preserving their business from piracy.

02:01:26 So Napster came along and was a service that enabled people to get near unlimited access

02:01:35 to most songs.

02:01:38 I mean, truly obscure things could be very hard to find on Napster, but most songs with

02:01:43 a relatively simple one click ability to download those songs and have the MP3s on their hard

02:01:50 drives, but there was a lot that was very messy about the Napster experience.

02:01:55 You might download a really god awful recording of that song.

02:01:59 You may download a recording that actually wasn’t that song with some prankster putting

02:02:04 it up to sort of mess with people.

02:02:06 You could struggle to find the song that you’re looking for.

02:02:09 You could end up finding yourself connected, it was peer to peer.

02:02:13 You might randomly find yourself connected to somebody in Bulgaria who doesn’t have a very

02:02:17 good internet connection.

02:02:18 So you might wait 19 minutes only for it to snap, et cetera, et cetera.

02:02:23 And our argument to, well, actually let’s start with how that hit the music labels.

02:02:27 The music labels had been in a very, very comfortable position for many, many decades

02:02:32 of essentially being the monopoly providers of a certain subset of artists.

02:02:41 Any given label was a monopoly provider of the artists and the recordings that they owned

02:02:46 and they could sell it at what turned out to be tremendously favorable rates.

02:02:51 In the late era of the CD, you were talking close to $20 for a compact disc that might

02:02:57 have one song that you were crazy about and simply needed to own that might actually be

02:03:02 glued to 17 other songs that you found to be sheer crap.

02:03:05 And so the music industry had used the fact that it had this unbelievable leverage and

02:03:13 profound pricing power to really get music lovers to the point that they felt very, very

02:03:20 misused by the entire situation.

02:03:22 Now along comes Napster and music sales start getting gutted with extreme rapidity.

02:03:29 And the reaction of the music industry to that was one of shock and absolute fury, which

02:03:37 is understandable.

02:03:38 I mean, industries do get gutted all the time, but I struggle to think of an analog of an

02:03:43 industry that got gutted that rapidly.

02:03:46 I mean, we could say that passenger train service certainly got gutted by airlines,

02:03:51 but that was a process that took place over decades and decades and decades.

02:03:54 It wasn’t something that happened, really started showing up in the numbers in a single

02:03:59 digit number of months and started looking like an existential threat within a year or

02:04:04 two.

02:04:04 So the music industry is quite understandably in a state of shock and fury.

02:04:10 I don’t blame them for that.

02:04:11 But then their reaction was catastrophic, both for themselves and almost for people

02:04:18 like us who were trying to do the cowboy in the white hat thing.

02:04:23 So our response to the music industry was, look, what you need to do to fight piracy,

02:04:28 you can’t put the genie back in the bottle.

02:04:30 You can’t switch off the internet.

02:04:32 Even if you all shut your eyes and wish very, very, very hard, the internet is not going

02:04:37 away.

02:04:38 And these peer to peer technologies are genies out of the bottle.

02:04:40 And if you don’t, whatever you do, don’t shut down Napster because if you do, suddenly

02:04:47 that technology is going to splinter into 30 different nodes that you’ll never, ever

02:04:51 be able to shut off.

02:04:52 What we suggested to them is like, look, what you want to do is to create a massively better

02:04:59 experience than piracy, something that’s way better, that you sell at a completely reasonable

02:05:04 price.

02:05:04 And this is what it is.

02:05:06 Don’t just give people access to that very limited number of songs that they happen to

02:05:10 have acquired and paid for or pirated and have on their hard drive.

02:05:15 Give them access to all of the music in the world for a simple low price.

02:05:19 And obviously, that doesn’t sound like a crazy suggestion, I don’t think, to anybody’s

02:05:22 ears today because that is how the majority of music is now being consumed online.

02:05:26 But in doing that, you’re going to create a much, much better option than this kind of

02:05:32 crappy, kind of rickety, kind of buggy process of acquiring MP3s.

02:05:37 Now, unfortunately, the music industry was so angry about Napster and so forth that for

02:05:44 essentially three and a half years, they folded their arms, stamped their feet, and boycotted

02:05:48 the internet.

02:05:49 So they basically gave people who were fervently passionate about music and were digitally

02:05:54 modern, they gave them basically one choice.

02:05:57 If you want to have access to digital music, we, the music industry, insist that you steal

02:06:01 it because we are not going to sell it to you.

02:06:04 So what that did is it made an entire generation of people morally comfortable with swiping

02:06:10 the music because they felt quite pragmatically, well, they’re not giving me any choice here.

02:06:14 It’s like a 20 year old violating the 21 drinking age.

02:06:18 If they do that, they’re not going to feel like felons.

02:06:21 They’re going to be like, this is an unreasonable law and I’m skirting it, right?

02:06:25 So they make a whole generation of people morally comfortable with swiping music, but

02:06:29 also technically adept at it.

02:06:32 And when they did shut down Napster and kind of even trickier tools and like tweakier tools

02:06:37 like Kazaa and so forth came along, people just figured out how to do it.

02:06:41 So by the time they finally, grudgingly, it took years, allowed us to release this experience

02:06:48 that we were quite convinced would be better than piracy, this enormous hole had

02:06:53 been dug where lots of people said, music is a thing that is free and that’s morally

02:06:59 okay and I know how to get it.

02:07:01 And so streaming took many, many, many more years to take off and become the gargantuan

02:07:08 thing, the juggernaut, it is today than would have happened if they’d pivoted to let’s

02:07:14 sell a better experience, as opposed to demanding that people who want digital music steal it.

02:07:19 Like what lessons do we draw from that?

02:07:21 Cause we’re probably in the midst of living through a bunch of similar situations in different

02:07:26 domains currently.

02:07:27 We just don’t know.

02:07:28 There’s a lot of things in this world that are really painful.

02:07:31 Like, I mean, I don’t know if you can draw perfect parallels, but fiat money versus cryptocurrency,

02:07:37 there’s a lot of currently people in power who are kind of very skeptical about cryptocurrency,

02:07:42 although that’s changing.

02:07:43 But it’s arguable it’s changing way too slowly.

02:07:45 There’s a lot of people making that argument where there should be a complete like Coinbase

02:07:49 and all this stuff switched to that.

02:07:52 There’s a lot of other domains where a pivot, like if you pivot now, you’re going

02:08:00 to win big, but you don’t pivot because you’re stubborn.

02:08:05 And it’s so, I mean, like, is this just the way that companies are?

02:08:09 A company succeeds initially and then it grows and there’s a huge number of employees and

02:08:15 managers that don’t have the guts or the institutional mechanisms to do the pivot.

02:08:21 Is that just the way of companies?

02:08:23 Well, I think what happens, I’ll use the case of the music industry.

02:08:27 There was an economic model that had put food on the table and paid for marble lobbies and

02:08:32 seven and even eight figure executive salaries for many, many decades, which was the physical

02:08:37 collection of music.

02:08:38 And then you start talking about something like unlimited streaming and it seems so ephemeral

02:08:44 and like such a long shot that people start worrying about cannibalizing their own business

02:08:50 and they lose sight of the fact that something illicit is cannibalizing their business at

02:08:54 an extraordinarily fast rate.

02:08:56 And so if they don’t do it themselves, they’re doomed.

02:08:58 I mean, we used to put slides in front of these folks, this is really funny, where we

02:09:02 said, okay, let’s assume Rhapsody, we want it to be $9.99 a month, and over

02:09:08 12 months, that’s $120 a year from the budget of a music lover.

02:09:13 And then we were also able to get reasonably accurate statistics that showed how many CDs

02:09:18 per year the average person who bothered to collect music, which was not all people, actually

02:09:23 bought.

02:09:24 And it was overwhelmingly clear that the average CD buyer spends a hell of a lot less than

02:09:29 $120 a year on music.

02:09:32 This is a revenue expansion, blah, blah, blah.

02:09:35 But all they could think of, and I’m not saying this in a pejorative or patronizing way, I

02:09:40 don’t blame them, they’ve grown up in this environment for decades, all they could think

02:09:44 of was the incredible margins that they had on a CD.

02:09:48 And they would say, well, if this CD, by the mechanism that you guys are proposing, the

02:09:55 CD that I’m selling for $17.99, somebody would need to stream those songs.

02:10:00 We were talking about a penny a playback then, it’s less than that now, that the record

02:10:04 labels get paid.

02:10:05 But they would have to stream songs from that CD 1,799 times, it’s never gonna happen.

02:10:10 So they were just sort of stuck in the model of this, but it’s like, no, dude, but they’re

02:10:13 gonna spend money on all this other stuff.

02:10:15 So I think people get very hung up on that.
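
(Aside for the reader: the back-of-the-envelope arithmetic quoted above works out as in the sketch below. The numbers are simply the ones quoted in the conversation, not real label economics, and the variable names are illustrative.)

```python
# The two framings described above, using the numbers quoted in the conversation.
subscription_per_month = 9.99
subscriber_spend_per_year = subscription_per_month * 12        # about $119.88, the "$120 a year"

cd_price = 17.99
label_payout_per_stream = 0.01                                  # "a penny a playback" back then
streams_to_match_one_cd = cd_price / label_payout_per_stream    # = 1,799 streams

print(f"A subscriber pays about ${subscriber_spend_per_year:.2f} per year")
print(f"Streams needed to equal one CD's revenue at a penny each: {streams_to_match_one_cd:,.0f}")
```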

02:10:17 I mean, another example is really the taxi industry was not monolithic, like the music

02:10:22 labels.

02:10:23 There was a whole bunch of fleets and a whole bunch of cities, very, very fragmented, it’s

02:10:26 an imperfect analogy.

02:10:27 But nonetheless, imagine if the taxi industry writ large upon seeing Uber said, oh my God,

02:10:34 people wanna be able to hail things easily, cheaply, they don’t wanna mess with cash,

02:10:39 they wanna know how many minutes it’s gonna be, they wanna know the fare in advance, and

02:10:43 they want a much bigger fleet than what we’ve got.

02:10:46 If the taxi industry had rolled out something like that with the branding of yellow taxis,

02:10:52 universally known and kind of loved by Americans and expanded their fleet in a necessary manner,

02:10:58 I don’t think Uber or Lyft ever would have gotten a foothold.

02:11:01 But the problem there was that real economics in the taxi industry wasn’t with fares, it

02:11:08 was with the scarcity of medallions.

02:11:10 And so the taxi fleets, in many cases, owned gazillions of medallions whose value came

02:11:16 from their very scarcity.

02:11:18 So they simply couldn’t pivot to that.

02:11:21 So I think you end up having these vested interests with economics that aren’t necessarily

02:11:25 visible to outsiders who get very, very reluctant to disrupt their own model, which is why it

02:11:32 ends up coming from the outside so frequently.

02:11:34 So you know what it takes to build a successful startup, but you’re also an investor in a

02:11:39 lot of successful startups.

02:11:41 Let me ask for advice.

02:11:43 What do you think it takes to build a successful startup by way of advice?

02:11:48 JS Well, I think it starts, I mean, everything

02:11:51 starts and even ends with the founder.

02:11:54 And so I think it’s really, really important to look at the founder’s motivations and their

02:11:59 sophistication about what they’re doing.

02:12:02 In almost all cases that I’m familiar with and have thought hard about, you’ve had a

02:12:08 founder who was deeply, deeply inculcated in the domain of technology that they were

02:12:15 taking on.

02:12:16 Now, what’s interesting about that is you could say, no, wait, how is that possible

02:12:20 because there’s so many young founders?

02:12:21 When you look at young founders, they’re generally coming out of very nascent emerging

02:12:26 fields of technology where simply being present and accounted for and engaged in the community

02:12:31 for a period of even months is enough time to make them very, very deeply inculcated.

02:12:36 I mean, you look at Marc Andreessen and Netscape.

02:12:41 Marc had been doing visual web browsers, when Netscape was founded, for what, a year

02:12:45 and a half, but he’d created the first one, Mosaic, when he was an undergrad.

02:12:51 And the commercial internet was pre nascent in 1994 when Netscape was founded.

02:12:58 So there’s somebody who’s very, very deep in their domain.

02:13:00 Mark Zuckerberg also, social networking, very deep in his domain, even though it was

02:13:04 nascent at the time.

02:13:05 Lots of people doing crypto stuff.

02:13:07 I mean, 10 years ago, even seven or eight years ago, by being a really, really vehement

02:13:14 and engaged participant in the crypto ecosystem, you could be an expert in that.

02:13:19 You look, however, at more established industries, take Salesforce.com.

02:13:23 Salesforce automation, pretty mature field when it got started.

02:13:26 Who’s the executive and the founder?

02:13:28 Marc Benioff, who had spent 13 years at Oracle and was an investor in Siebel Systems, which

02:13:34 ended up being Salesforce’s main competition.

02:13:36 So more established, you need the entrepreneur to be very, very deep in the technology and

02:13:43 the culture of the space because you need that entrepreneur, that founder to have just

02:13:50 an unbelievably accurate intuitive sense for where the puck is going.

02:13:56 And that only comes from being very deep.

02:13:58 So that is sort of factor number one.

02:14:01 And the next thing is that that founder needs to be charismatic and or credible, or ideally

02:14:08 both in exactly the right ways to be able to attract a team that is bought into that

02:14:14 vision and is bought into that founder’s intuitions being correct and not just the team,

02:14:19 obviously, but also the investors.

02:14:21 So it takes a certain personality type to pull that off.

02:14:25 Then the next thing I’m still talking about the founder is a relentlessness and indeed

02:14:31 a monomania to put this above things that might rationally, should perhaps rationally

02:14:39 supersede it for a period of time to just relentlessly pivot when pivoting is called

02:14:46 for and it’s always called for.

02:14:48 I mean, think of even very successful companies like how many times did Facebook pivot?

02:14:53 Newsfeed was something that was completely alien to the original version of Facebook

02:14:58 and became foundationally important.

02:15:00 How many times did Google, how many times at any given, how many times has Apple pivoted?

02:15:04 That founder energy and DNA, or when the founder moves on, the DNA that’s been inculcated

02:15:09 in a company, has to have that relentlessness and that ability to pivot and pivot and pivot

02:15:15 without being worried about sacred cows.

02:15:18 And then the last thing I’ll say about the founder before I get to the rest of the team

02:15:21 and that’ll be mercifully brief is the founder has to be obviously a really great

02:15:28 hirer but, just as important, a very good firer.

02:15:32 And firing is a horrific experience for both people involved in it.

02:15:37 It is a wrenching emotional experience.

02:15:40 And being good at realizing when this particular person is damaging the interests of the company

02:15:49 and the team and the shareholders and having the intestinal fortitude to have that conversation

02:15:56 and make it happen is something that most people don’t have in them.

02:16:01 And it’s something that needs to be developed in most people or maybe some people have it

02:16:07 naturally.

02:16:08 But without that ability, that will take an A plus organization into B minus range very,

02:16:13 very quickly.

02:16:15 And so that’s all what needs to be present in the founder.

02:16:19 Can I just say?

02:16:20 Sure.

02:16:21 How damn good you are, Rob.

02:16:22 That was brilliant.

02:16:24 The one thing that was kind of really kind of surprising to me is having a deep technical

02:16:29 knowledge because I think the way you expressed it, which is that allows you to be really

02:16:37 honest with the capabilities of, like, what’s possible.

02:16:45 Of course, you’re often trying to do the impossible.

02:16:48 But in order to do the impossible, you have to be quote unquote impossible.

02:16:51 But you have to be honest with what is actually possible.

02:16:54 And it doesn’t necessarily have to be the technical competence.

02:16:57 It’s got to be, in my view, just a complete immersion in that emerging market.

02:17:02 And so I can imagine there are a couple of people out there who have started really good

02:17:06 crypto projects who themselves aren’t writing the code, but they’re immersed in the culture

02:17:12 and through the culture and a deep understanding of what’s happening and what’s not happening.

02:17:16 They can get a good intuition of what’s possible, but the very first hire, I mean, a great way

02:17:23 to solve that is to have a technical co founder and dual founder companies have become extremely

02:17:28 common for that reason.

02:17:30 And if you’re not doing that and you’re not the technical person, but you are the founder,

02:17:35 you’ve got to be really great at hiring a very damn good technical person very, very fast.

02:17:43 Can I on the founder ask you, is it possible to do this alone?

02:17:50 There’s so many people giving advice and saying that it’s impossible to do the first few steps,

02:17:54 not impossible, but much more difficult to do it alone.

02:17:58 If we were to take the journey, especially in the software world where there’s not significant

02:18:02 investment required to build something up, is it possible to go from a prototype to

02:18:10 something that essentially works and already has a huge number of customers alone?

02:18:14 Sure.

02:18:15 There are lots and lots of lone founder companies out there that have made an incredible difference.

02:18:21 I mean, I’m certainly not putting Rhapsody in the league of Spotify.

02:18:25 We were too early to be Spotify, but we did an awful lot of innovation.

02:18:29 And then after the company sold and ended up in the hands of RealNetworks and MTV,

02:18:33 you know, got to millions of subs, right?

02:18:35 I was a lone founder and I studied Arabic and Middle Eastern history undergrad,

02:18:40 so I definitely wasn’t very, very technical.

02:18:42 But yeah, lone founders can absolutely work.

02:18:44 And the advantage of a lone founder is you don’t have the catastrophic potential

02:18:51 of a falling out between founders.

02:18:53 I mean, two founders who fall out with each other badly can rip a company to shreds because

02:19:00 they both have an enormous amount of equity and enormous amount of power.

02:19:04 And the capital structure is a result of that.

02:19:06 They both have an enormous amount of moral authority with the team as a result of each

02:19:12 having that founder role.

02:19:14 And I have witnessed over the years many, many situations in which companies have been shredded

02:19:21 or have suffered near fatal blows because of a falling out between founders.

02:19:27 And the more founders you add, the more risky that becomes.

02:19:30 I don’t think there should ever almost, I mean, you never say never, but multiple founders

02:19:36 beyond two is such an unstable and potentially treacherous situation that I would never,

02:19:44 ever recommend going beyond two.

02:19:45 But I do see value in the non technical sort of business and market and outside minded

02:19:51 founder teaming up with the technical founder.

02:19:54 There is a lot of merit to that, but there’s a lot of danger in that, lest those two blow

02:19:58 apart.

02:19:59 Was it lonely for you?

02:20:00 Unbelievably.

02:20:01 And that’s the drawback.

02:20:02 I mean, if you’re a lone founder, there is no other person that you can sit down with

02:20:10 and tackle problems and talk them through who has precisely or nearly precisely your

02:20:15 alignment of interests.

02:20:17 Your most trusted board member is likely an investor and therefore at the end of the

02:20:23 day has the interest of preferred stock in mind, not common stock.

02:20:26 Your most trusted VP, who might own a very significant stake in the company, doesn’t

02:20:33 own anywhere near your stake in the company.

02:20:35 And so their long term interests may well be in getting the right level of experience

02:20:40 and credibility necessary to peel off and start their own company.

02:20:44 Or their interests might be aligned with jumping ship and setting up with a different

02:20:51 company, whether it’s a rival or one in a completely different space.

02:20:54 So, yeah, being a lone founder is a spectacularly lonely thing.

02:20:57 And that’s a major downside to it.

02:20:59 What about mentorship?

02:21:00 Because you’re a mentor to a lot of people.

02:21:03 Can you find an alleviation to that loneliness in the space of ideas with a good mentor?

02:21:09 With a good mentor or like a mentor who’s mentoring you?

02:21:11 Yeah.

02:21:12 Yeah, you can a great deal, particularly if it’s somebody who’s been through this very

02:21:15 process and has navigated it successfully and cares enough about you and your well being

02:21:21 to give you beautifully unvarnished advice.

02:21:25 That can be a huge, huge thing.

02:21:26 That can assuage things a great deal.

02:21:28 And I had a board member who was not an investor, who basically played that role for me to a

02:21:35 great degree.

02:21:35 He came in maybe halfway through the company’s history, though.

02:21:39 I would have needed that the most in the very earliest days.

02:21:43 Yeah, the loneliness, that’s the whole journey of life.

02:21:47 We’re always alone, alone together.

02:21:49 Mm hmm.

02:21:51 It pays to embrace that.

02:21:54 You were saying that there might be something outside of the founder that’s also that you

02:21:58 were promising to be brief on.

02:22:00 Yeah.

02:22:00 OK, so we talked about the founder.

02:22:02 You were asking what makes a great startup.

02:22:04 Yes.

02:22:04 And great founder is thing number one, but then thing number two, and it’s ginormous,

02:22:09 is a great team.

02:22:10 And so I said so much about the founder because one hopes or one believes that a founder who

02:22:16 is a great hirer is going to be hiring people in charge of critical functions like

02:22:22 engineering and marketing and biz dev and sales and so forth, who themselves are great

02:22:25 hirers.

02:22:26 But what needs to radiate from the founder into the team that might be a little bit different

02:22:30 from what’s in the gene code of the founder?

02:22:33 The team needs to be fully bought in to the, you know, the intuitions and the vision of

02:22:40 the founder.

02:22:41 Great.

02:22:41 We’ve got that.

02:22:42 But the team needs to have a slightly different thing, which is, you know, 99% obsession

02:22:50 with execution, to relentlessly hit the milestones, hit the objectives, hit the quarterly

02:22:57 goals.

02:22:58 The rest is, you know, 1% vision.

02:23:00 You don’t want to lose that.

02:23:02 But execution machines, you know, people who have a demonstrated ability and a demonstrated

02:23:10 focus on, yeah, I go from point to point to point.

02:23:14 I try to beat and raise expectations relentlessly, never fall short, and, you know, both sort

02:23:20 of blaze and follow the path.

02:23:22 Not that the path is going to, I mean, blaze the trail as well.

02:23:25 I mean, a good founder is going to trust that VP of sales to have a better sense of what

02:23:31 it takes to build out that organization, what the milestones should be.

02:23:34 And it’s going to be kind of a dialogue amongst those at the top.

02:23:38 But, you know, execution obsession in the team is the next thing.

02:23:42 Yeah, there’s some sense where the founder, you know, you talk about sort of the space

02:23:47 of ideas, like first principles thinking, asking big difficult questions of like future

02:23:51 trajectories or having a big vision and big picture dreams.

02:23:55 You can almost be a dreamer, it feels like, when you’re like not the founder, but in the

02:24:03 space of sort of leadership.

02:24:08 But when it gets to the ground floor, there has to be execution, there has to be hitting

02:24:12 deadlines.

02:24:15 And sometimes those are in tension.

02:24:18 There’s something about dreams that is in tension with the pragmatic nature of execution.

02:24:28 Not dreams, but sort of ambitious vision.

02:24:32 And those have to be, I suppose, coupled.

02:24:35 The vision in the leader and the execution in the software world, that would be the programmer

02:24:42 or the designer.

02:24:45 Absolutely.

02:24:46 Amongst many other things, you’re an incredible conversationalist, a podcast, you host a podcast

02:24:51 called After On.

02:24:52 I mean, there’s a million questions I want to ask you here, but one at the highest level,

02:24:58 what do you think makes for a great conversation?

02:25:00 I would say two things, one of two things, and ideally both of two things.

02:25:07 One is if something is beautifully architected, whether it’s done deliberately and methodically

02:25:16 and willfully, as when I do it, or whether that just emerges from the conversation.

02:25:21 But something that’s beautifully architected, that can create something that’s incredibly

02:25:26 powerful and memorable, or something where there’s just extraordinary chemistry.

02:25:32 And so with All In, or I’ll go way back.

02:25:35 You might remember the NPR show Car Talk, I couldn’t care less about auto mechanics

02:25:41 myself.

02:25:42 Yeah, that’s right.

02:25:43 But I love that show because the banter between those two guys was just beyond, it was without

02:25:47 any parallel, right?

02:25:50 And some kind of edgy podcasts like Red Scare is just really entertaining to me because

02:25:54 the banter between the women on that show is just so good, and All In and that kind

02:25:58 of thing.

02:25:59 So I think it’s a combination of sort of the arc and the chemistry.

02:26:04 And I think because the arc can be so important, that’s why very, very highly produced podcasts

02:26:11 like This American Life, obviously a radio show, but I think of it as a podcast because that’s

02:26:15 how I always consume it, or Criminal, or a lot of what Wondery does and so forth.

02:26:21 That is real documentary making, and that requires a big team and a big budget relative

02:26:26 to the kinds of things you and I do, but nonetheless, then you got that arc, and that can be really,

02:26:31 really compelling.

02:26:32 But if we go back to conversation, I think it’s a combination of structure and chemistry.

02:26:37 Yeah, and I’ve actually personally lost interest. I used to love This American Life, and for

02:26:43 some reason, because it lacks the possibility of magic, it’s engineered magic.

02:26:51 I’ve fallen off of it myself as well.

02:26:53 I mean, when I fell madly in love with it during the aughts, it was the only thing going.

02:26:58 They were really smart to adopt podcasting as a distribution mechanism early.

02:27:04 But yeah, I think that maybe there’s a little bit less magic there now because I think they

02:27:09 have agendas other than necessarily just delighting their listeners with quirky stories, which

02:27:14 I think is what it was all about back in the day and some other things.

02:27:17 Is there like a memorable conversation that you’ve had on the podcast, whether it was

02:27:22 because it was wild and fun or one that was exceptionally challenging, maybe challenging

02:27:29 to prepare for, that kind of thing?

02:27:31 Is there something that stands out in your mind that you can draw an insight from?

02:27:35 Yeah, I mean, this in no way diminishes the episodes that will not be the answer to these two questions,

02:27:42 but an example of something that was really, really challenging to prepare for was George

02:27:46 Church.

02:27:48 So as I’m sure you know and as I’m sure many of your listeners know, he is one of the absolute

02:27:52 leading lights in the field of synthetic biology.

02:27:55 He’s also unbelievably prolific.

02:27:57 His lab is large and has all kinds of efforts have spun out of that.

02:28:02 And what I wanted to make my George Church episode about was, first of all, grounding

02:28:08 people into what is this thing called SynBio.

02:28:12 And that required me to learn a hell of a lot more about SynBio than I knew going into

02:28:17 it.

02:28:18 So there was just this very broad, I mean, I knew much more than the average person going

02:28:23 into that episode, but there was this incredible breadth of grounding that I needed to get

02:28:27 myself in the domain.

02:28:29 And then George does so many interesting things, there’s so many interesting things emitting

02:28:34 from his lab that, you know, and he and I had a really good dialogue, he was a great

02:28:38 guide going into it.

02:28:41 Winnowing it down to the three to four that I really wanted us to focus on to create a

02:28:47 sense of wonder and magic in the listener of what could be possible from this very broad

02:28:51 spectrum domain, that was a doozy of a challenge.

02:28:54 That was a tough, tough, tough one to prepare for.

02:28:57 Now, in terms of something that was just wild and fun, unexpected, I mean, by the time we

02:29:04 sat down to interview, I knew where we were going to go.

02:29:07 But just in terms of the idea space, Don Hoffman, yeah, so Don Hoffman is, again, some listeners

02:29:15 probably know because he’s, I think I was the first podcaster to interview him.

02:29:19 I’m sure some of your listeners are familiar with him, but he has this unbelievably contrarian

02:29:25 take on the nature of reality, but it is contrarian in a way that all the ideas are highly internally

02:29:33 consistent and snap together in a way that’s just delightful.

02:29:38 And it seems as radically violating of our intuitions and as radically violating of the

02:29:46 probable nature of reality as anything that one can encounter.

02:29:49 But an analogy that he uses, which is very powerful, which is what intuition could possibly

02:29:54 be more powerful than the notion that there is a single unitary direction called down.

02:30:00 When we’re on this big flat thing for which there is a thing called down.

02:30:05 And we all know, I mean, that’s the most intuitive thing that one could probably think of.

02:30:10 And we all know that that ain’t true.

02:30:12 So my conversation with Don Hoffman was just wild and full of plot twists and interesting

02:30:18 stuff.

02:30:19 And the interesting thing about the wildness of his ideas, it’s to me at least as a listener,

02:30:25 coupled with, he’s a good listener and he empathizes with the people who challenge his

02:30:35 ideas.

02:30:36 Like what’s a better way to phrase that?

02:30:39 He is a welcoming of challenge in a way that creates a really fun conversation.

02:30:44 Oh, totally.

02:30:45 Yeah.

02:30:46 He loves a parry or a jab, whatever the word is at his argument.

02:30:52 He honors it.

02:30:54 He’s a very, very gentle and noncombative soul, but then he is very good and takes great

02:31:03 evident joy in responding to that in a way that expands your understanding of his thinking.

02:31:10 Let me, as a small tangent, tie together our previous conversation about listen.com

02:31:15 and streaming and Spotify and the world of podcasting.

02:31:20 So we’ve been talking about this magical medium of podcasting.

02:31:25 I have a lot of friends at Spotify in the high positions of Spotify as well.

02:31:32 I worry about Spotify and podcasting and the future of podcasting in general that moves

02:31:41 podcasting in the place of maybe walled gardens of sorts.

02:31:49 Since you’ve had a foot in both worlds, have a foot in both worlds, do you worry as well

02:31:55 about the future of podcasting?

02:31:56 Yeah.

02:31:57 I think walled gardens are really toxic to the medium that they start balkanizing.

02:32:05 So to take an example, I’ll take two examples.

02:32:09 With music, it was a very, very big deal that at Rhapsody we were the first company to get

02:32:15 full catalog licenses from all, back then there were five, major music labels and also hundreds

02:32:20 and hundreds of indies because you needed to present the listener with a sense that

02:32:25 basically everything is there and there is essentially no friction to discovering that

02:32:31 which is new and you can wander this realm and all you really need is a good map, whether

02:32:36 it is something that somebody, the editorial team assembled or a good algorithm or whatever

02:32:41 it is, but a good map to wander this domain.

02:32:43 When you start walling things off, A, you undermine the joy of friction free discovery,

02:32:50 which is an incredibly valuable thing to deliver to your customer, both from a business standpoint

02:32:55 and simply from a humanistic standpoint of do you want to bring delight to people?

02:33:01 But it also creates an incredible opening vector for piracy.

02:33:06 And so something that’s very different from the Rhapsody slash Spotify slash et cetera

02:33:10 like experience is what we have now in video.

02:33:14 Like wow, is that show on Hulu, is it on Netflix, is it on something like IFC channel, is it

02:33:20 on Discovery Plus, is it here, is it there?

02:33:23 And the more frustration and toe stubbing that people encounter when they are seeking

02:33:30 something and they’re already paying a very respectable amount of money per month to have

02:33:35 access to content and they can’t find it, the more that happens, the more people are

02:33:39 going to be driven to piracy solutions like to hell with it.

02:33:43 Never know where I’m going to find something, I never know what it’s going to cost.

02:33:45 Oftentimes really interesting things are simply unavailable.

02:33:50 That surprises me the number of times that I’ve been looking for things I don’t even

02:33:53 think are that obscure that are just, it says not available in your geography, period, mister.

02:33:59 So I think that that’s a mistake.

02:34:01 And then the other thing is for podcasters and lovers of podcasting, we should want to

02:34:07 resist this walled garden thing because A, it does smother this friction free or eradicate

02:34:16 this friction free discovery unless you want to sign up for lots of different services.

02:34:21 And also dims the voice of somebody who might be able to have a far, far, far bigger impact

02:34:28 by reaching far more neurons with their ideas.

02:34:32 I’m going to use an example from I guess it was probably the 90s or maybe it was the

02:34:36 aughts, of Howard Stern, who had the biggest megaphone, or maybe the second biggest after

02:34:42 Oprah’s, in popular culture.

02:34:46 And because he was syndicated on hundreds and hundreds and hundreds of radio stations

02:34:50 at a time when terrestrial broadcast was the main thing people listened to in their car,

02:34:53 no more obviously.

02:34:54 But when he decided to go over to satellite radio, I can’t remember if it was XM or Sirius,

02:34:59 maybe they’d already merged at that point.

02:35:01 But when he did that, he made, you know, totally his right to do it, a financial calculation

02:35:07 that they were offering him a nine figure sum to do that.

02:35:11 But his audience, because not a lot of people were subscribing to satellite radio at that

02:35:14 point, his audience probably collapsed by, I wouldn’t be surprised if it was as much

02:35:19 as 95%.

02:35:20 And so the influence that he had on the culture and his ability to sort of shape conversation

02:35:27 and so forth just got muted.

02:35:30 Yeah, and also there’s a certain sense, especially in modern times, where the walled gardens

02:35:37 naturally lead to, I don’t know if there’s a term for it, but people who are not creatives

02:35:48 starting to have power over the creatives.

02:35:51 Right.

02:35:52 And even if they don’t stifle it, if they’re providing, you know, incentives within the

02:35:58 platform to shape, shift, or, you know, even completely mutate or distort the show, I mean,

02:36:06 imagine somebody has got, you know, a reasonably interesting idea for a podcast and they get

02:36:12 signed up with, let’s say Spotify, and Spotify is going to give them financing to get the

02:36:15 thing spun up.

02:36:17 And that’s great.

02:36:18 And Spotify is going to give them a certain amount of really, you know, powerful placement,

02:36:23 you know, within the visual field of listeners.

02:36:27 But Spotify has conditions for that.

02:36:29 They say, look, you know, we think that your podcast will be much more successful if you

02:36:34 dumb it down about 60%.

02:36:37 If you add some, you know, silly, dirty jokes, if you do this, you do that.

02:36:44 And suddenly the person who is dependent upon Spotify for permission to come into existence

02:36:48 and is really dependent, really wants to please them, you know, to get that money in, to get

02:36:52 that placement, really wants to be successful.

02:36:55 And all of a sudden you’re having a dialogue between a complete non creative, some marketing,

02:37:00 you know, sort of data analytic person at Spotify and a creative that’s going to shape

02:37:05 what that show is, you know, so that could be much more common.

02:37:10 And ultimately having, in the aggregate, an even bigger impact than, you know, the cancellation,

02:37:16 let’s say, of somebody who says the wrong word or voices the wrong idea, I mean, that’s kind

02:37:20 of what you have, not kind of, that’s what you have with film and TV is that so much

02:37:25 influence is exerted over the storyline and the plots and the character arcs and all kinds

02:37:31 of things by executives who are completely alien to the experience and the skill set

02:37:35 of being a show runner in television, being a director in film that, you know, is meant

02:37:40 to like, oh, we can’t piss off the Chinese market here or we can’t say that or we need

02:37:43 to have, you know, cast members that have precisely these demographics reflected or

02:37:48 whatever it is. And obviously, despite that, extraordinary TV shows, at least,

02:37:54 are now being made. Um, you know, in terms of film, I think the quality has nosedived

02:37:59 for the average, let’s say, American film coming out of a major studio.

02:38:03 The average quality, in my view, has nosedived over the past decade as it’s kind of, everything’s

02:38:07 gotta be a superhero franchise, but you know, great stuff gets made despite that.

02:38:13 But I have to assume that in some cases, at least in perhaps many cases, greater stuff

02:38:19 would be made if there was less interference from non creative executives.

02:38:23 It’s like the flip side of that though, and this is, was the pitch of Spotify, because

02:38:27 I’ve heard their pitch. Netflix, from everybody I’ve heard, that I’ve spoken with about Netflix,

02:38:34 is that they actually empower the creator.

02:38:36 I don’t know what the heck they do, but they do a good job of giving creators, even the

02:38:41 crazy ones like Tim Dillon, like Joe Rogan, like comedians, freedom to be their crazy

02:38:46 selves.

02:38:47 And the result is like some of the greatest television, some of the greatest cinema, whatever

02:38:54 you call it, ever made.

02:38:55 True.

02:38:56 Right.

02:38:57 And I don’t know what the heck they’re doing.

02:38:58 It’s a relative thing.

02:38:59 It’s not able from what I understand.

02:39:00 It’s a relative thing.

02:39:01 They’re interfering far, far, far less than, you know, NBC or, you know, AMC would have

02:39:07 interfered.

02:39:08 It’s a relative thing, and obviously they’re the ones writing the checks and they’re the

02:39:12 ones giving the platforms.

02:39:13 They’ve every right to their own influence, obviously.

02:39:16 But my understanding is that they’re relatively way more hands off and that has had a demonstrable

02:39:21 effect because I agree.

02:39:23 Some of the greatest, you know, produced video content of all time, an incredibly inordinate

02:39:29 percentage of that is coming out from Netflix in just a few years when the history of cinema

02:39:32 goes back many, many decades.

02:39:34 And Spotify wants to be that for podcasting, and I hope they do become that for podcasting,

02:39:41 but I’m wearing my skeptical goggles or skeptical hat, whatever the heck it is, because it’s

02:39:47 not easy to do and it requires, it requires letting go of power, giving power to the creatives.

02:39:55 It requires pivoting, which large companies, even as innovative as Spotify is, still now

02:40:01 a large company, pivoting into a whole new space is very tricky and difficult.

02:40:05 So I’m skeptical, but hopeful.

02:40:08 What advice would you give to a young person today about life, about career?

02:40:13 We talked about startups, we talked about music, we talked about the end of human civilization.

02:40:18 Is there advice you would give to a young person today, maybe in college, maybe in high

02:40:23 school about their life?

02:40:27 Let’s see, there’s so many domains you can advise on, and I’m not going to give advice

02:40:34 on life because I fear that I would drift into Hallmark bromides that really wouldn’t

02:40:40 be all that distinctive, and they might be entirely true.

02:40:43 Sometimes the greatest insights about life turn out to be the kinds of things you’d see

02:40:48 on a Hallmark card, so I’m going to steer clear of that.

02:40:50 On a career level, one thing that I think is unintuitive but unbelievably powerful is

02:40:57 to focus not necessarily on being in the top sliver of 1% in excelling at one domain that’s

02:41:07 important and valuable, but to think in terms of intersections of two domains, which are

02:41:14 rare but valuable, and there’s a couple reasons for this.

02:41:19 The first is in an incredibly competitive world that is so much more competitive than

02:41:25 it was when I was coming out of school, radically more competitive than when I was coming out

02:41:28 of school, to navigate your way to the absolute pinnacle of any domain.

02:41:34 Let’s say you want to be really, really great at Python, pick a language, whatever it is.

02:41:40 You want to be one of the world’s greatest Python developers, JavaScript, whatever your

02:41:44 language is.

02:41:45 Hopefully it’s not COBOL.

02:41:46 By the way, if you’re listening to this, I am actually looking for a COBOL expert to interview because

02:41:53 I find the language fascinating, and there aren’t many of them, so please, if you know a

02:41:58 world expert in COBOL or Fortran, or both, actually.

02:42:02 Or if you are one.

02:42:03 Or if you are one, please email me.

02:42:05 Yeah.

02:42:06 So, I mean, if you’re going out there and you want to be in the top sliver of 1% of Python

02:42:10 developers, it’s a very, very difficult thing to do, particularly if you want to be number

02:42:13 one in the world, something like that.

02:42:14 And I’ll use an analogy as I had a friend in college who was on a track and indeed succeeded

02:42:23 at that to become an Olympic medalist, and I think it was 100 meter breaststroke.

02:42:29 And he mortgaged a significant percentage of his college life to that goal, or I should

02:42:37 say dedicated or invested or whatever you wanted to say.

02:42:39 But he didn’t participate in a lot of the social, a lot of the late night, a lot of

02:42:44 the this, a lot of the that, because he was training so much.

02:42:48 And obviously, he also wanted to keep up with his academics.

02:42:50 And at the end of the day, the story has a happy ending in that he did medal in that.

02:42:55 Bronze, not gold, but holy cow, anybody who gets an Olympic medal, that’s an extraordinary

02:43:00 thing.

02:43:01 And at that moment, he was one of the top three people on earth at that thing.

02:43:05 But wow, how hard to do that.

02:43:07 How many thousands of other people went down that path and made similar sacrifices and

02:43:12 didn’t get there.

02:43:13 It’s very, very hard to do that.

02:43:14 Whereas, and I’ll use a personal example.

02:43:19 When I came out of business school, I went to a good business school and learned the

02:43:23 things that were there to be learned.

02:43:25 And I came out and I entered a world with lots of MBAs.

02:43:29 Harvard Business School, by the way.

02:43:30 Okay, yes, it was Harvard, it’s true.

02:43:32 You’re the first person who went there who didn’t say where you went, which is beautiful.

02:43:36 I appreciate that.

02:43:37 It’s one of the greatest business schools in the world.

02:43:41 It’s a whole nother fascinating conversation about that world.

02:43:44 But anyway, yes.

02:43:45 But anyway, so I learned the things that you learn getting an MBA from a top program.

02:43:51 And I entered a world that had hundreds of thousands of people who had MBAs, probably

02:43:56 hundreds of thousands who have them from top 10 programs.

02:44:00 So I was not particularly great at being an MBA person.

02:44:04 I was inexperienced relative to most of them, and there were a lot of them, but I was an okay

02:44:09 MBA person, right, newly minted.

02:44:12 But then as it happened, I found my way into working on the commercial internet in 1994.

02:44:20 So I went to a, at the time, giant hot computing company called Silicon Graphics, which had

02:44:25 enough heft and enough head count that they could take on inexperienced MBAs and try

02:44:30 to train them in the world of Silicon Valley.

02:44:33 But within that company that had an enormous amount of surface area and was touching a

02:44:38 lot of areas and had unbelievably smart people at the time, it was not surprising that SGI

02:44:46 started doing really interesting and innovative and trailblazing stuff on the internet before

02:44:51 almost anybody else.

02:44:52 And part of the reason was that our founder, Jim Clark, went off to cofound Netscape with

02:44:55 Marc Andreessen.

02:44:56 So the whole company is like, wait, what was that?

02:44:58 What’s this commercial internet thing?

02:45:00 So I ended up in that group.

02:45:01 Now in terms of being a commercial internet person or a worldwide web person, again, I

02:45:09 was in that case, barely credentialed, I couldn’t write a stitch of code, but I had a pretty

02:45:14 good mind for grasping the business and cultural significance of this transition.

02:45:22 And this was, again, we were talking earlier about emerging areas.

02:45:25 Within a few months, you know, I was in the relatively top echelon of people in terms

02:45:29 of just sheer experience, because like, let’s say it was five months into the program, there

02:45:33 were only so many people who’d been doing worldwide web stuff commercially for five

02:45:37 months, you know?

02:45:38 And then what was interesting though was the intersection of those two things.

02:45:43 The commercial web, as it turned out, grew into an unbelievable vastness.

02:45:49 And so by being a pretty good, okay web person and a pretty good, okay MBA person, that intersection

02:45:57 put me in a very rare group, which was web oriented MBAs.

02:46:03 And in those early days, you could probably count on your fingers the number of people

02:46:08 who came out of really competitive programs who were doing stuff full time on the internet.

02:46:12 And there was a greater appetite for great software developers in the internet domain,

02:46:17 but there was an appetite and a real one and a rapidly growing one for MBA thinkers who

02:46:24 were also seasoned and networked in the emerging world of the commercial worldwide web.

02:46:29 And so finding an intersection of two things you can be pretty good at, but is a rare intersection

02:46:37 and a special intersection is probably a much easier way to make yourself distinguishable

02:46:43 and in demand from the world than trying to be world class at this one thing.

02:46:48 So the intersection is where opportunity and success are to be discovered.

02:46:53 That’s really interesting.

02:46:54 There are actually more intersections of fields than there are fields themselves, right?

02:46:58 So yeah, I mean, I’ll give you kind of a funny hypothetical here, but it’s one I’ve been

02:47:02 thinking about a little bit.

02:47:04 There’s a lot of people in crypto right now.

02:47:06 It’d be hard to be in the top percentile of crypto people, whether it comes from just

02:47:11 having a sheer grasp of the industry, a great network within the industry, technological

02:47:15 skills, whatever you want to call it.

02:47:18 And then there’s this parallel world and orthogonal world called crop insurance.

02:47:23 And I’m sure that’s a big world.

02:47:25 Crop insurance is a very, very big deal, particularly in the wealthy and industrialized world, where

02:47:29 it’s made possible by sophisticated financial markets, rule of law, and large agricultural concerns

02:47:35 that worry about that kind of risk.

02:47:37 Somewhere out there is somebody who is pretty crypto savvy, but probably not top 1%, but

02:47:42 also has kind of been in the crop insurance world and understands that a hell of a lot

02:47:47 better than almost anybody who’s ever had anything to do with cryptocurrency.

02:47:52 And so, decentralized finance, DeFi: one of the interesting and, I think, very

02:47:58 world-positive things that it will almost inevitably bring to the world is

02:48:03 crop insurance for smallholder farmers.

02:48:06 I mean, people who have tiny, tiny plots of land in places like India, et cetera, where

02:48:12 there is no crop insurance available to them because just the financial infrastructure

02:48:17 doesn’t exist.

02:48:19 But it’s highly imaginable that using oracle networks, which are trusted outside deliverers

02:48:24 of factual information about rainfall in a particular area, you can start giving drought

02:48:29 insurance to folks like this.
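
As a minimal sketch of that parametric idea, assuming only a trusted oracle that reports a season’s rainfall in millimeters, the payout rule could look like the following Python; the policy figures, names, and threshold here are hypothetical illustrations rather than anything from the conversation:

    from dataclasses import dataclass

    @dataclass
    class DroughtPolicy:
        premium: float      # paid up front by the farmer
        payout: float       # paid out if the drought trigger is hit
        trigger_mm: float   # rainfall threshold that defines "drought"

    def settle(policy: DroughtPolicy, oracle_rainfall_mm: float) -> float:
        # Once the oracle reports the season's rainfall, settlement is automatic:
        # below the trigger the farmer receives the full payout, otherwise nothing.
        return policy.payout if oracle_rainfall_mm < policy.trigger_mm else 0.0

    policy = DroughtPolicy(premium=5.0, payout=100.0, trigger_mm=250.0)
    print(settle(policy, oracle_rainfall_mm=180.0))  # 100.0 -> drought season, payout
    print(settle(policy, oracle_rainfall_mm=400.0))  # 0.0   -> enough rain, no payout

In an actual DeFi deployment the same rule would live on-chain, with the rainfall figure supplied by the oracle network rather than passed in as an argument, but the logic itself stays this small, which is part of what makes it plausible for tiny plots of land.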

02:48:31 The right person to come up with that idea is not a crypto whiz who doesn’t know a blasted

02:48:37 thing about smallholder farmers.

02:48:39 The right person to come up with that is not a crop insurance whiz who isn’t quite sure

02:48:43 what Bitcoin is; it’s somebody who occupies that intersection.

02:48:47 That’s just one of a gazillion examples of things that are going to come along for somebody

02:48:52 who occupies the right intersection of skills, but isn’t necessarily the number one person

02:48:57 at either one of those expertises.

02:48:59 That’s making me kind of wonder about my own little things that I’m average at and seeing

02:49:05 where the intersections are that could be exploited.

02:49:09 That’s pretty profound.

02:49:10 So we talked quite a bit about the end of the world and how we’re both optimistic about

02:49:15 us figuring our way out.

02:49:18 Unfortunately, for now at least, both you and I are going to die one day, way too soon.

02:49:27 First of all, that sucks.

02:49:28 It does.

02:49:29 I mean, one, I’d like to ask, if you ponder your own mortality,

02:49:41 what kind of wisdom and insight does it give you about your own life?

02:49:45 And broadly, do you think about your life and what the heck it’s all about?

02:49:50 Yeah, with respect to pondering mortality, I do try to do that as little as possible

02:49:57 because there’s not a lot I can do about it.

02:50:00 But it’s inevitably there.

02:50:01 And I think that what it does when you think about it in the right way is it makes you

02:50:07 realize how unbelievably rare and precious the moments that we have here are, and therefore

02:50:13 how consequential the decisions that we make about how to spend our time are.

02:50:18 You know, like, do you do those 17 nagging emails or do you have dinner with somebody

02:50:25 who’s really important to you whom you haven’t seen in three and a half years?

02:50:28 If you had an infinite expanse of time in front of you, you might well rationally conclude

02:50:33 I’m going to do those emails because collectively they’re rather important.

02:50:37 And I have tens of thousands of years to catch up with my buddy, Tim.

02:50:41 But I think the scarcity of the time that we have helps us choose the right things if

02:50:48 we’re tuned to that and we’re attuned to the context that mortality puts over the consequence

02:50:54 of every decision we make of how to spend our time.

02:50:56 That doesn’t mean that we’re all very good at it, it doesn’t mean I’m very good at it.

02:51:00 But it does add a dimension of choice and significance to everything that we elect to

02:51:06 do.

02:51:07 It’s kind of funny that you say you try to think about it as little as possible.

02:51:10 I would venture to say you probably think about the end of human civilization more than

02:51:14 you do about your own life.

02:51:15 You’re probably right.

02:51:16 Because that feels like a problem that could be solved.

02:51:19 Right.

02:51:20 Whereas the end of my own life can’t be solved.

02:51:22 Well, I don’t know.

02:51:23 I mean, there’s transhumanists who have incredible optimism about near or intermediate future

02:51:28 therapies that could really, really change human lifespan.

02:51:32 I really hope that they’re right, but I don’t have a whole lot to add to that project because

02:51:37 I’m not a life scientist myself.

02:51:39 I’m in part also afraid of immortality.

02:51:44 Not as much, but close to as much as I’m afraid of death itself.

02:51:48 So it feels like the things that give us meaning give us meaning because of the scarcity that

02:51:55 surrounds them.

02:51:56 Agreed.

02:51:57 I’m almost afraid of having too much of stuff.

02:52:01 Yeah.

02:52:02 Although, if there was something that said, this can expand your enjoyable wellspan or

02:52:07 lifespan by 75 years, I’m all in.

02:52:11 Well, part of the reason I wanted to not do a startup, really the only thing that worries

02:52:19 me about doing a startup is if it becomes successful.

02:52:24 Because of how much I dream, how much I’m driven to be successful, I worry that there will not

02:52:31 be enough silence in my life, enough scarcity to appreciate the moments I appreciate now

02:52:39 as deeply as I appreciate them now.

02:52:42 There’s a simplicity to my life now that it feels like might disappear with success.

02:52:48 I wouldn’t say might.

02:52:52 I think if you start a company that has ambitious investors, ambitious for the returns that

02:52:59 they’d like to see, that has ambitious employees, ambitious for the career trajectories they

02:53:05 want to be on and so forth, and is driven by your own ambition, there’s a profound monogamy

02:53:15 to that.

02:53:18 It is very, very hard to carve out time to be creative, to be peaceful, and so forth,

02:53:24 because with every new employee that you hire, that’s one more mouth to feed.

02:53:30 With every new investor that you take on, that’s one more person to whom you really

02:53:35 do want to deliver great returns.

02:53:38 As the valuation ticks up, the threshold to delivering great returns for your investors

02:53:43 always rises.

02:53:45 There is an extraordinary monogamy to being a founder CEO above all for the first few

02:53:54 years, and the first few years in people’s minds could be as many as 10 or 15.

02:53:59 But I guess the fundamental calculation is whether the passion for the vision is greater

02:54:07 than the cost you’ll pay.

02:54:09 Right.

02:54:10 It’s all opportunity cost.

02:54:11 It’s all opportunity cost in terms of time and attention and experience.

02:54:16 And some things, like, I’m... everyone’s different, but I’m less calculating. Some things you just

02:54:21 can’t help.

02:54:22 Sometimes you just dive in.

02:54:23 Oh yeah.

02:54:24 I mean you can do balance sheets all you want on this versus that and what’s the right.

02:54:28 I mean I’ve done it in the past and it’s never worked.

02:54:32 It’s always been like, okay, what’s my gut screaming at me to do?

02:54:36 But about the meaning of life, you ever think about that?

02:54:42 Yeah.

02:54:43 I mean, this is where I’m going to go all Hallmark on you, but I think that there’s

02:54:48 a few things and one of them is certainly love and the love that we experience and feel

02:54:57 and cause to well up in others is something that’s just so profound and goes beyond almost

02:55:05 anything else that we can do.

02:55:08 And whether that is something that lies in the past, like maybe there was somebody that

02:55:13 you were dating and loved very profoundly in college and haven’t seen in years, I don’t

02:55:19 think the significance of that love is in any way diminished by the fact that it had a notional

02:55:24 beginning and end.

02:55:25 The fact is that you experience that and you trigger that in somebody else and that happened.

02:55:30 And it doesn’t have to be, certainly it doesn’t have to be love of romantic partners alone.

02:55:35 It’s family members, it’s love between friends, it’s love between creatures.

02:55:39 I had a dog for 10 years who passed away a while ago and experienced unbelievable love

02:55:47 with her.

02:55:48 It can be love of that which you create and we were talking about the flow states that

02:55:52 we enter and the pride or lack of pride or in the Minsky case, your hatred of that which

02:55:57 you’ve done.

02:55:58 But nonetheless, the creations that we make and whether it’s the love or the joy or the

02:56:05 engagement or the perspective shift that that cascades into other minds, I think that’s

02:56:11 a big, big, big part of the meaning of life.

02:56:13 It’s not something that everybody participates in necessarily, although I think we all do

02:56:18 at least in a very local level by the example that we set, by the interactions that we have.

02:56:25 But for people who create works that travel far and reach people they’ll never meet, that

02:56:31 reach countries they’ll never visit, that reach people perhaps that come along and come

02:56:36 across their ideas or their works or their stories or their aesthetic creations of other

02:56:40 sorts long after they’re dead, I think that’s a really, really big part of the fabric of the

02:56:46 meaning of life.

02:56:50 So all these things, like love and creation, I think that’s really what it’s all about.

02:57:01 And part of love is also the loss of it.

02:57:03 There’s an episode of Louie with Louis C.K. where an old gentleman is giving him advice that

02:57:13 sometimes the sweetest part of love is when you lose it and you remember it, sort of you

02:57:18 reminisce on the loss of it.

02:57:21 And there’s some aspect in which, and I have many of those in my own life, that almost

02:57:27 like the memories of it and the intensity of emotion you still feel about it is like

02:57:34 the sweetest part.

02:57:37 You’re like, after saying goodbye, you relive it.

02:57:40 So that goodbye is also a part of love.

02:57:45 The loss of it is also a part of love.

02:57:47 I don’t know, it’s back to that scarcity.

02:57:49 I won’t say the loss is the best part personally, but it definitely is an aspect of it.

02:57:56 And the grief you might feel about something that’s gone makes you realize what a big deal

02:58:03 it was.

02:58:06 Speaking of which, this particular journey we went on together has come to an end.

02:58:14 So I have to say goodbye and I hate saying goodbye.

02:58:16 Rob, this is truly an honor.

02:58:18 I’ve really been a big fan.

02:58:20 People should definitely check out your podcast. You’re a master of what you do in the conversation

02:58:24 space, in the writing space.

02:58:26 It’s been an incredible honor that you would show up here and spend this time with me.

02:58:29 I really, really appreciate it.

02:58:30 Well, it’s been a huge honor to be here as well, and I’ve also been a fan for a long

02:58:35 time.

02:58:36 Thanks for listening to this conversation with Rob Reid.

02:58:40 And thank you to Athletic Greens, Belcampo, Fundrise, and NetSuite.

02:58:46 Check them out in the description to support this podcast.

02:58:49 And now, let me leave you with some words from Plato.

02:58:52 We can easily forgive a child who’s afraid of the dark.

02:58:55 The real tragedy of life is when men are afraid of the light.

02:59:00 Thank you for listening and hope to see you next time.