Grant Sanderson: Math, Manim, Neural Networks & Teaching with 3Blue1Brown #118

Transcript

00:00:00 The following is a conversation with Grant Sanderson, his second time on the podcast.

00:00:05 He’s known to millions of people as the mind behind 3Blue1Brown, a YouTube channel where

00:00:10 he educates and inspires the world with the beauty and power of mathematics.

00:00:15 Quick summary of the sponsors, Dollar Shave Club, DoorDash, and CashApp.

00:00:20 Click the sponsor links in the description to get a discount and to support this podcast,

00:00:24 especially for the two new sponsors, Dollar Shave Club and DoorDash.

00:00:29 Let me say, as a side note, I think that this pandemic challenged millions of educators

00:00:33 to rethink how they teach, to rethink the nature of education.

00:00:38 As people know, Grant is a master elucidator of mathematical concepts that may otherwise

00:00:43 seem difficult or out of reach for students and curious minds, but he’s also an inspiration

00:00:49 to teachers, researchers, and people who just enjoy sharing knowledge.

00:00:53 Like me, for what it’s worth.

00:00:56 It’s one thing to give a semester’s worth of multi-hour lectures, it’s another to extract

00:01:00 from those lectures the most important, interesting, beautiful, and difficult concepts and present

00:01:06 them in a way that makes everything fall into place.

00:01:09 That is the challenge that is worth taking on.

00:01:11 My dream is to see more and more of my colleagues at MIT and experts across the world

00:01:16 summon their inner 3Blue1Brown and create the canonical explainer videos on a topic

00:01:22 that they know better than almost anyone else in the world.

00:01:25 Amidst the political division, the economic pain, the psychological and medical toll of

00:01:30 the virus, masterfully crafted educational content feels like one of the beacons of hope

00:01:36 that we can hold onto.

00:01:37 If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts,

00:01:43 follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman.

00:01:48 Of course, after that, go immediately, which you probably have already done a long time

00:01:52 ago, and subscribe to 3Blue1Brown’s YouTube channel. You will not regret it.

00:01:59 As usual, I’ll do a few minutes of ads now and no ads in the middle.

00:02:02 I try to make these interesting, but I give you timestamps so you can skip.

00:02:06 But still, please do check out the sponsors by clicking the links in the description.

00:02:10 Especially the two new ones, DoorDash and Dollar Shave Club, they’re evaluating us,

00:02:15 looking at how many people go to their site and get their stuff in order to determine

00:02:19 if they want to support us for the long term, so you know what to do.

00:02:22 It’s the best way to support this podcast, as always.

00:02:26 This show is sponsored by Dollar Shave Club.

00:02:28 Try them out with a one time offer for only $5 and free shipping at DollarShaveClub.com

00:02:34 slash Lex.

00:02:36 The starter kit comes with a six-blade razor, refills, and all kinds of other stuff that

00:02:41 makes shaving feel great.

00:02:43 I’ve been a member of Dollar Shave Club for over five years now, and actually signed up

00:02:47 when I first heard about them on the Joe Rogan podcast, and now we have come full circle.

00:02:53 I feel like I’ve made it, now that I can do a read for them just like Joe did all those

00:02:57 years ago.

00:02:58 For the most part, I’ve just used the razor and the refills, but they encouraged me to

00:03:02 try the shave butter, which I’ve never used before, so I did, and I love it.

00:03:07 Not sure how the chemistry of it works out, but it’s translucent somehow, which is a

00:03:11 cool new experience.

00:03:13 Again, try the Ultimate Shave Starter Set today for just $5 plus free shipping at DollarShaveClub.com

00:03:19 slash Lex.

00:03:21 This show is also sponsored by DoorDash.

00:03:24 Get $5 off and zero delivery fees on your first order of $15 or more when you download

00:03:29 the DoorDash app and enter code LEX.

00:03:32 I have so many memories of working late nights for a deadline with a team of engineers and

00:03:37 eventually taking a break to argue about which DoorDash restaurant to order from, and when

00:03:42 the food came, those moments of bonding, of exchanging ideas, of pausing to shift attention

00:03:48 from the programs to the humans were special.

00:03:53 These days, for a bit of time, I’m on my own, sadly, so I miss that camaraderie.

00:03:58 But actually DoorDash is still there for me.

00:04:00 There’s a million options that fit into my keto diet ways.

00:04:04 Also it’s a great way to support restaurants in these challenging times.

00:04:07 Once again, download the DoorDash app and enter code LEX to get $5 off and zero delivery

00:04:13 fees on your first order of $15 or more.

00:04:16 Finally, this show is presented by Cash App, the number one finance app in the App Store.

00:04:21 When you get it, use code LEX PODCAST.

00:04:24 Cash App lets you send money to friends, buy bitcoin, and invest in the stock market with

00:04:28 as little as $1.

00:04:31 It’s one of the best-designed interfaces of an app that I’ve ever used.

00:04:34 To me, good design is when everything is easy and natural.

00:04:38 Bad design is when the app gets in the way either because it’s buggy or because it tries

00:04:42 too hard to be helpful.

00:04:44 I’m looking at you, Clippy.

00:04:46 Anyway, there’s a big part of my brain and heart that loves to design things and also

00:04:50 to appreciate great design by others.

00:04:52 So again, if you get Cash App from the App Store or Google Play and use code LEX PODCAST,

00:04:57 you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping

00:05:03 to advance robotics and STEM education for young people around the world.

00:05:08 And now here’s my conversation with Grant Sanderson.

00:05:13 You’ve spoken about Richard Feynman as someone you admire.

00:05:17 I think last time we spoke, we ran out of time.

00:05:21 So I wanted to talk to you about him.

00:05:24 Who is Richard Feynman to you in your eyes?

00:05:27 What impact did he have on you?

00:05:29 I mean, I think a ton of people like Feynman.

00:05:31 It’s a little bit cliche to say that you like Feynman, right?

00:05:34 That’s almost like when you don’t know what to say about sports and you just point to

00:05:38 the Super Bowl or something you enjoy watching.

00:05:41 But I do actually think there’s a layer to Feynman that sits behind the iconography.

00:05:45 One thing that just really struck me was this letter that he wrote to his wife two years

00:05:50 after she died.

00:05:51 So during the Manhattan Project, she had tuberculosis.

00:05:54 Tragically she died.

00:05:55 They were just young, madly in love.

00:05:58 And the icon of Feynman is almost this mildly sexist, womanizing philanderer, at least on

00:06:06 the personal side.

00:06:08 But you read this letter, and I can try to pull it up for you if you want.

00:06:10 And it’s just this absolutely heartfelt letter to his wife saying how much he loves her even

00:06:15 though she’s dead and what she means to him, how no woman can ever measure up to her.

00:06:21 And it shows you that the Feynman that we’ve all seen in Surely You’re Joking is different

00:06:26 from the Feynman in reality.

00:06:28 And I think the same kind of goes in his science, where he sometimes has this air of being

00:06:35 this aw-shucks character: everyone else is coming in with these fancy, highfalutin formulas,

00:06:39 but I’m just going to try to whittle it down to its essentials, which is so appealing because

00:06:43 we love to see that kind of thing.

00:06:45 But when you get into it, what he was doing was actually quite deep, very much mathematical.

00:06:52 That should go without saying, but I remember reading a book about Feynman in a cafe once,

00:06:55 and this woman looked at me and saw that it was about Feynman.

00:06:58 She was like, oh, I love him.

00:06:59 I read Surely You’re Joking.

00:07:01 And she started explaining to me how he was never really a math person.

00:07:04 And I don’t understand how that can possibly be a public perception about any physicist,

00:07:10 but for whatever reason, that worked into his art that he shooed off math in place of

00:07:14 true science.

00:07:15 The reality of it is he was deeply in love with math and was much more going in that

00:07:18 direction and had a clicking point into seeing that physics was a way to realize that and

00:07:22 all the creativity that he could output in that direction was instead poured towards

00:07:27 things like fundamental, not even fundamental theories, just emergent phenomena and everything

00:07:31 like that.

00:07:32 So to answer your actual question, what I like about his way of going

00:07:37 at things is this constant desire to reinvent it for himself.

00:07:42 Like when he would consume papers, the way he’d describe it, he would start to

00:07:45 see what problem the paper was trying to solve and then just try to solve it himself to get a

00:07:48 sense of personal ownership.

00:07:50 And then from there, see what others had done.

00:07:52 Is that how you see problems yourself?

00:07:54 Like that’s actually an interesting point when you first are inspired by a certain idea

00:08:00 that you maybe want to teach or visualize or just explore on your own.

00:08:05 I’m sure you’re captured by some possibility and magic of it.

00:08:08 Do you read the work of others?

00:08:10 Like do you go through the proofs or do you try to rediscover everything yourself?

00:08:15 So I think the things that I’ve learned best and have the deepest ownership of are the

00:08:19 ones that have some element of rediscovery.

00:08:21 The problem is that really slows you down.

00:08:23 And for my part, it’s actually a big fault.

00:08:25 This is part of why I’m not an active researcher.

00:08:28 I’m not at the depth of the field that

00:08:29 a lot of other people are. The stuff that I do learn,

00:08:32 I try to learn really well.

00:08:33 But other times you do need to get through it at a certain pace.

00:08:37 You do need to get to a point of a problem you’re trying to solve.

00:08:39 So obviously you need to be well equipped to read things without that reinvention

00:08:43 component and see how others have done it.

00:08:45 But I think if you choose a few core building blocks along the way and you say, I’m really

00:08:49 going to try to approach this before I see how this person went at it, I’m really

00:08:54 going to try to approach it for myself.

00:08:56 No matter what, you gain all sorts of inarticulable intuitions about that topic, which aren’t

00:09:02 going to be there

00:09:03 if you simply go through the proof. For example, you’re going to be trying to come up

00:09:06 with counterexamples.

00:09:07 You’re going to try to come up with intuitive examples, all sorts of things where you’re

00:09:11 populating your brain with data.

00:09:13 And the ones that you come up with are likely to be different than the ones that the text

00:09:16 comes up with, and that lends a different angle.

00:09:19 So that aspect also slowed Feynman down in a lot of respects.

00:09:22 I think there was a period when like the rest of physics was running away from him.

00:09:26 But insofar as it got him to where he was, I kind of resonate with that.

00:09:32 I would be nowhere near it, because I’m not like him at all, but it’s like

00:09:37 a state to aspire to.

00:09:41 You know, just to linger on a small point you made, that you’re not a quote unquote active

00:09:46 researcher: you’re swimming often at reasonably good depth in a lot of topics.

00:09:55 Do you sometimes want to dive deep at a certain moment? Because you

00:10:01 probably built up a hell of an amazing intuition about what is and isn’t true within these

00:10:07 worlds.

00:10:08 Do you ever want to just dive in and see if you can discover something new?

00:10:14 Yeah.

00:10:15 I think one of my biggest regrets from undergrad is not having built better relationships

00:10:20 with the professors I had there.

00:10:21 And I think a big part of success in research is that element of mentorship,

00:10:26 of people giving you the kind of scaffolded problems to carry you along. For my own goals right

00:10:30 now,

00:10:31 I feel like I’m pretty good at exposing math to others and want to continue doing

00:10:37 that for my personal learning.

00:10:39 Are you familiar with the hedgehog and fox dynamic?

00:10:44 I think either the ancient Greeks came up with it, or it was pretended to be

00:10:50 something drawn from the ancient Greeks. I don’t know who to point it to; probably

00:10:53 Mark Twain.

00:10:54 The idea is that you’ve got two types of people, or especially two types of researchers.

00:10:59 There’s the fox that knows many different things and then the hedgehog that knows one

00:11:03 thing very deeply.

00:11:04 So like von Neumann would have been the fox.

00:11:06 Obviously someone who knows many different things, just very foundational, a lot of different

00:11:10 fields.

00:11:11 Einstein would have been more of a hedgehog, thinking really deeply about one particular

00:11:14 thing, and both are very necessary for making progress.

00:11:17 So between those two, I would definitely see myself as the fox, where I’ll

00:11:22 try to get my paws in a whole bunch of different things.

00:11:24 And at the moment I just think I don’t know enough of anything to make like a significant

00:11:28 contribution to any of them.

00:11:30 But I do see value in having a decently deep understanding of a wide variety of things.

00:11:36 Like most people who know computer science really deeply don’t necessarily know physics

00:11:42 very deeply, or many of the different fields in math. Even, let’s say, you

00:11:47 have analytic number theory versus algebraic number theory.

00:11:49 These two things end up being related to very different fields,

00:11:52 some of them more complex analysis, some of them more algebraic geometry.

00:11:56 And then when you go so far as to take those adjacent fields and place one, you

00:12:00 know, PhD student into a seminar of the other’s, they don’t understand what the other

00:12:03 one’s saying at all.

00:12:04 Like you take the complex analysis specialist inside the algebraic geometry seminar, they’re

00:12:08 as lost as you or I would be.

00:12:10 But I think going around and trying to have some sense of what the big picture

00:12:14 is certainly has personal value for me.

00:12:16 I don’t know if I would ever make new contributions in those fields, but I do think

00:12:20 I could make new expositional contributions, where there’s kind of a notion of things

00:12:26 that are known but haven’t been explained very well.

00:12:29 Well, first of all, I think most people would agree your videos, your teaching, the way you

00:12:34 see the world is fundamentally often new; you’re creating something new, and it

00:12:42 almost feels like research, even just the visualizations, the multidimensional

00:12:47 visualizations we’ll talk about.

00:12:48 I mean, you’re revealing something very interesting that just feels like research, feels

00:12:54 like science, feels like the cutting edge, the very thing of which new ideas and

00:13:01 new discoveries are made.

00:13:03 I do think you’re being a little bit more generous than is necessary.

00:13:06 And I promise that’s not even false humility because I sometimes think when I research

00:13:10 a video, I’ll learn like 10 times as much as I need for the video itself and it ends

00:13:14 up feeling kind of elementary.

00:13:15 So I have a sense of just how far away the stuff that I cover is from the actual

00:13:20 depth.

00:13:22 I think that’s natural, but I think that could also be a mathematics thing.

00:13:26 I feel like in the machine learning world, two weeks in, you feel like you’ve

00:13:30 basically mastered it. In mathematics,

00:13:34 it’s like, well, everything is either trivial or impossible.

00:13:37 And it’s like a shockingly thin line between the two where you can find something that’s

00:13:41 totally impenetrable.

00:13:42 And then after you get a feel for it, it’s like, oh yeah, that whole subject

00:13:45 is actually trivial in some way.

00:13:48 So maybe that’s what goes on.

00:13:50 Every researcher is just on the other end of that hump and it feels like it’s so far

00:13:53 away, but one step actually gets them there.

00:13:56 What do you think about Feynman’s teaching style, or, from another perspective, his use

00:14:02 of visualization?

00:14:05 Well his teaching style is interesting because people have described like the Feynman effect

00:14:09 where while you’re watching his lectures or reading his lectures, everything

00:14:13 makes such perfect sense.

00:14:15 So as an entertainment session, it’s wonderful because it gives you this intellectual

00:14:21 satisfaction that you don’t get from anywhere else, that you finally understand it.

00:14:26 But the Feynman effect is that you can’t really recall what it is that gave you that insight,

00:14:31 you know, even a week later.

00:14:32 And this is true of a lot of books and a lot of lectures, where the retention

00:14:36 is never quite what we hope it is.

00:14:38 So there is a risk that the stuff that I do also fits that same bill, where at

00:14:46 best it’s giving this kind of intellectual candy, giving a glimpse of feeling like

00:14:50 you understand something. You need to do something active, like reinventing it yourself,

00:14:55 like doing problems to solidify it, or even things like spaced repetition memory to

00:15:00 just make sure that you have the building blocks of what all the terms mean.

00:15:04 Unless you’re doing something like that, it’s not actually going to stick.

00:15:07 So the very same thing that’s so admirable about Feynman’s lectures, which is how damn

00:15:12 satisfying they are to consume, might actually also reveal a little bit of the flaw that

00:15:18 we should as educators all look out for, which is that that does not correlate with long

00:15:22 term learning.

00:15:23 We’ll talk about it a little bit.

00:15:25 I think you’ve done some interactive stuff.

00:15:28 I mean, even in your videos, the awesome thing that Feynman couldn’t do at the time is you

00:15:34 could, since it’s programmed, tinker, play with stuff.

00:15:40 You could take this value and change it.

00:15:42 You can take the value of this variable and change it to build up an intuition,

00:15:47 to move along the surface or to change the shape of something.

00:15:51 I think that’s almost an equivalent of you doing it yourself.

00:15:57 It’s not quite there, but as a viewer, do you think there’s some value

00:16:02 in that interactive element?

00:16:04 Yeah, well, so what’s interesting is you’re saying that, and the videos are non-interactive

00:16:08 in the sense that there’s a play button and a pause button.

00:16:10 And you could ask, hey, while you’re programming these things, why don’t you program

00:16:13 it into an interactable version?

00:16:15 You know, make it a Jupyter notebook that people can play with, which I should do.

00:16:19 And that would be better.

00:16:20 I think the thing about interactives, though, is most people consuming them just sort

00:16:25 of consume what the author had in mind.

00:16:27 And that’s kind of what they want.

00:16:29 Like I have a ton of friends who make interactive explanations.

00:16:33 And when you look into the analytics of how people use them, there’s a small sliver that

00:16:37 genuinely uses it as a playground to run experiments.

00:16:40 And maybe that small sliver is actually who you’re targeting and the rest don’t matter.

00:16:43 But most people consume it just as a piece of well-constructed literature, where

00:16:48 maybe you tweak the example a little bit to see what it’s getting at.

00:16:51 But in that way, I do think a video can get most of the benefits of the interactive,

00:16:56 the interactive app, as long as you make the interactive for yourself and you decide

00:17:01 what the best narrative to spin is.

00:17:03 As a more concrete example, my process with this: I made this video about SIR models

00:17:08 for epidemics, and it’s this agent-based modeling thing where you tweak some things

00:17:13 about how the epidemic spreads and you want to see how that affects its evolution.

00:17:17 My format for making that was very different than others, where rather than

00:17:21 scripting it ahead of time, I just made the playground and then I played a bunch,

00:17:26 and then I saw what stories there were to tell within that.

00:17:30 That’s cool.

00:17:31 So your video had that kind of structure; it had five or six stories or whatever

00:17:36 it was.

00:17:37 And it was basically, okay, here’s a simulation, here’s a model.

00:17:42 What can we discover with this model?

00:17:44 And here’s five things I found after playing with it.

00:17:46 Well, because here, the thing is, one way that you could do that project is you make the

00:17:51 model and then you put it out and you say, here’s a thing for the world to play with,

00:17:54 like come to my website where you interact with this thing.

00:17:57 And people did sort of remake it in a JavaScript way so that you can

00:18:01 go to that website and you can test your own hypotheses.

00:18:04 But I think a meaningful part of the value to add is not just the technology, but to

00:18:08 give the story around it as well.

00:18:11 And that’s kind of my job.

00:18:12 It’s not just to make the visuals that someone will look at; it’s to be the one

00:18:17 to decide what’s the interesting thing to walk through here.

00:18:20 And even though there’s lots of other interesting paths that one could take, that

00:18:24 can be kind of daunting when you’re just sitting there in a sandbox and you’re given this

00:18:27 tool with five different sliders and you’re told to play and discover things.

00:18:32 What do you do?

00:18:33 Where do you start?

00:18:34 What are my hypotheses?

00:18:35 What should I be asking?

00:18:36 Like a little bit of guidance in that direction can be what actually sparks curiosity to make

00:18:40 someone want to imagine more about it.

00:18:43 A few videos I’ve seen you do, I don’t know how often you do it, but there’s almost a

00:18:47 tangential pause where you say, here’s a cool thing,

00:18:54 but it’s outside the scope of this video essentially, so I’ll leave it to you as homework

00:18:59 to figure out. It’s a cool thing to explore.

00:19:02 I wish I could say that wasn’t a function of laziness. It’s like, you’ve worked

00:19:06 so hard on making the 20 minutes already that to extend it out even further would take

00:19:11 more time.

00:19:12 And in one of your cooler videos, the homeomorphism, like from the Mobius strip to this, yeah,

00:19:19 that’s the surface, and you’re like, yeah, you can’t transform the Mobius

00:19:26 strip into a surface without it intersecting itself, but I’ll leave it to you to see why

00:19:35 that is.

00:19:36 Well, I hope that’s not exactly how I phrased it, because I think what my hope would be is

00:19:41 that I leave it to you to think about why you would expect that to be true and then

00:19:45 to want to know what aspects of a Mobius strip do you want to formalize such that you can

00:19:50 prove that intuition that you have because at some point now you’re starting to invent

00:19:55 algebraic topology.

00:19:56 If you have these vague instincts, like, I want to take this Mobius strip, I want to fit

00:20:02 it such that it’s all above the plane, but its boundary sits exactly on the plane.

00:20:08 I don’t think I can do that without it crossing itself, but that feels really vague.

00:20:11 How do I formalize it?

00:20:12 And as you’re starting to formalize that, that’s what’s going to get you to try to come

00:20:16 up with a definition for what it means to be orientable or non orientable.

00:20:19 And once you have that motivation, a lot of the otherwise arbitrary things that

00:20:22 are sitting at the very beginning of a topology textbook start to make a little more

00:20:26 sense.

00:20:27 Yeah.

00:20:28 And I mean, that whole video beautifully was a motivation for studying topology.

00:20:32 That was my hope. Well, my hope with that is, I feel like topology is... I don’t want to

00:20:36 say it’s taught wrong, but I do think sometimes it’s popularized in the wrong way, where,

00:20:41 you know, you’ll hear people saying, oh, topologists, they’re very interested

00:20:44 in surfaces that you can bend and stretch, but you can’t cut or glue.

00:20:50 Are they?

00:20:51 Why?

00:20:52 Yeah.

00:20:53 There’s all sorts of things you can be interested in with random, imaginative manipulations

00:20:56 of things.

00:20:57 Is that really what mathematicians are into?

00:21:00 And the short answer is, not really.

00:21:03 It’s not as if someone was sitting there thinking, I wonder what the properties

00:21:07 of clay are if I add some arbitrary rules about when I can’t cut it and when I can’t

00:21:12 glue it. Instead, there’s a ton of pieces of math that can actually be equivalent

00:21:18 to these very general structures that are like geometry, except you don’t have

00:21:23 exact distances.

00:21:24 You just want to maintain a notion of closeness.

00:21:26 And once you get to those general structures, constructing mappings between them translates

00:21:31 into nontrivial facts about other parts of math, and I just don’t think that’s

00:21:36 actually popularized.

00:21:38 I don’t even think it’s emphasized well enough when you’re starting to take a topology

00:21:41 class, because you kind of have these two problems.

00:21:43 It’s like either it’s too squishy,

00:21:45 you’re just talking about coffee mugs and donuts, or it’s a little bit too rigor-first,

00:21:49 and you’re talking about the axiom systems with open sets, and an open set is not the

00:21:55 opposite of a closed set.

00:21:56 So sorry about that,

00:21:57 everyone: we have a notion of clopen sets for ones that are both at the same time.
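
As a concrete illustration of that asymmetry, here is a standard example (added for reference, not part of the conversation), using the usual topology on the real line:

```latex
% In the standard topology on the real line:
%   (0,1)  is open
%   [0,1]  is closed
%   [0,1)  is neither open nor closed
%   the empty set and R itself are both, i.e. "clopen"
\[
(0,1)\ \text{open}, \qquad [0,1]\ \text{closed}, \qquad
[0,1)\ \text{neither}, \qquad \emptyset,\ \mathbb{R}\ \text{clopen}.
\]
```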

00:22:01 Yeah.

00:22:02 It’s just, it’s not, it’s not an intuitive axiom system in comparison to other fields

00:22:06 of math.

00:22:07 So you as the student like really have to walk through mud to get there and you’re constantly

00:22:11 confused about how this relates to the beautiful things about coffee mugs and Mobius strips

00:22:15 and such.

00:22:16 And it takes a really long time to actually see like see topology in the way that mathematicians

00:22:21 see topology.

00:22:22 But I don’t think it needs to take that time.

00:22:23 I think there’s... this is making me feel like I need to make more videos on the topic.

00:22:27 Because I think you do. But you know, I’ve also seen it in my narrow view.

00:22:32 Like, I find game theory very beautiful, and I know topology has been used elegantly

00:22:39 to prove things in game theory.

00:22:41 Yeah.

00:22:42 You have like facts that seem very strange.

00:22:43 Like I could tell you, you stir your coffee, and after you stir it, let’s

00:22:47 say all the molecules settle to not moving again, one of the molecules will be

00:22:51 basically in the same position it was before.

00:22:53 You have all sorts of fixed point theorems like this, right?

00:22:57 That kind of fixed point theorem is directly relevant to Nash equilibria, right?
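
The coffee fact is an instance of Brouwer’s fixed point theorem, stated here for reference in one standard form:

```latex
% Brouwer fixed point theorem: a continuous map from a closed ball
% (or any compact convex set) to itself must fix at least one point.
\[
f : D^n \to D^n \ \text{continuous} \;\Longrightarrow\; \exists\, x \in D^n : f(x) = x.
\]
```

Nash’s existence proof for equilibria applies a fixed point argument of exactly this flavor, via Kakutani’s generalization, to the best-response map of a game.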

00:23:01 So you can imagine popularizing it by describing the coffee fact, but then you’re

00:23:05 left to wonder, who cares about whether a molecule of coffee stays in the same spot?

00:23:09 Is this what we’re paying our mathematicians for?

00:23:11 You have this very elegant mapping onto economics in a way that’s very concrete, or,

00:23:15 I shouldn’t say concrete, very tangible, like it actually adds value to people’s

00:23:20 lives through the predictions that it makes.

00:23:22 But that line isn’t always drawn, because you have to get a little bit technical

00:23:26 in order to properly draw that line out, and often I think popularized forms of media

00:23:34 just shy away from being a little too technical, for sure.

00:23:37 Uh, by the way, for people who are watching the video, I do not condone the message in

00:23:41 this mug.

00:23:42 It’s the only one I have, which is this.

00:23:44 The snuggle is real.

00:23:47 By the way, for anyone watching, I do condone the message of that mug.

00:23:50 The snuggle is real.

00:23:51 The snuggle is real.

00:23:52 Okay, so you mentioned the SIR model.

00:23:57 I think there are certain ideas there of growth, of exponential growth.

00:24:03 What, maybe, have you learned about pandemics from making that video?

00:24:11 Because it was kind of exploratory.

00:24:12 You were kind of building up an intuition and it’s, again, people should watch the video.

00:24:17 It’s kind of an abstract view.

00:24:19 It’s not really modeling in detail.

00:24:23 The whole field of epidemiology, those people, they go really far in terms of modeling,

00:24:31 like how people move about.

00:24:32 I don’t know if you’ve seen it, but there are the mobility patterns: they

00:24:37 track how many people you encounter in certain situations, when you go to a school,

00:24:42 when you go to a mall; they model every aspect of that for a particular city.

00:24:46 Like they have maps of actual city streets.

00:24:49 They model it really well, the natural movement patterns that people have. It’s crazy.

00:24:54 So you don’t do any of that.

00:24:55 You’re just doing an abstract model to explore different ideas of a simple epidemic.

00:24:59 Well, because I don’t want to pretend I’m an epidemiologist.

00:25:02 We have a ton of armchair epidemiologists, and the spirit of that was more like,

00:25:08 can we, through a little bit of play, draw reasonable-ish conclusions?

00:25:12 And also just get ourselves in a position where we can judge the validity

00:25:17 of a model.

00:25:18 Like, I think people should look at that and they should criticize it.

00:25:21 They should point to all the ways that it’s wrong, because it’s definitely naive

00:25:24 in the way that it’s set up, right?

00:25:25 But to say what lessons from that hold: thinking about the R naught

00:25:30 value and what that represents and what it can imply.

00:25:33 So R naught is, if you are infectious and you’re in a population which is completely

00:25:40 susceptible, what’s the average number of people that you’re going to infect during

00:25:44 your infectiousness?

00:25:46 So certainly during the beginning of an epidemic, this basically gives you kind of

00:25:51 the exponential growth rate.

00:25:53 Like if every person infects two others, you’ve got that one, two, four, eight exponential

00:25:57 growth pattern.

00:25:58 As it goes on, and let’s say it’s something endemic where you’ve got

00:26:04 a ton of people who have had it and are recovered, then

00:26:10 the R naught value doesn’t tell you that as directly, because a lot of the people you interact

00:26:13 with aren’t susceptible, but in the early phases it does.

00:26:16 And this is the fundamental constant that it seems like epidemiologists look at,

00:26:21 and you know, the whole goal is to get that down.

00:26:23 If you can get it below one, then it’s no longer an epidemic.

00:26:26 If it’s equal to one, then it’s endemic, and if it’s above one, then you have an epidemic.

00:26:30 So just teaching what that value is and giving some intuitions on how certain

00:26:36 changes in behavior change that value and then what does that imply for exponential

00:26:40 growth?

00:26:41 I think those are general enough lessons, and they’re resilient enough to all of the

00:26:47 chaos of the world, that it’s still valid to take them from the video.
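
For reference, here is a minimal sketch of the classic, non-agent-based SIR dynamics being described; this is illustrative code, not the actual simulation from the video, and the parameter values are made up. In this formulation, R naught equals beta divided by gamma:

```python
# Minimal SIR sketch (illustrative only, not 3Blue1Brown's agent-based code).
# s, i, r are population fractions; beta is the transmission rate and
# gamma the recovery rate, so R0 = beta / gamma.

def simulate_sir(beta=0.3, gamma=0.1, i0=1e-4, days=365, dt=0.1):
    s, i, r = 1.0 - i0, i0, 0.0
    record_every = round(1 / dt)             # record once per simulated day
    history = []
    for step in range(round(days / dt)):
        new_infections = beta * s * i * dt   # susceptible people meeting infectious ones
        new_recoveries = gamma * i * dt      # infectious people flowing into recovered
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if step % record_every == 0:
            history.append((s, i, r))
    return history

history = simulate_sir()
print("R0 =", 0.3 / 0.1)  # 3.0, so the early phase grows exponentially
print("peak infected fraction: %.3f" % max(i for _, i, _ in history))
```

The epidemic turns over exactly when new infections per step drop below recoveries, that is, when s falls below gamma / beta = 1 / R0, matching the threshold behavior described above.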

00:26:52 I mean, one of the interesting aspects of that is just exponential growth and we think

00:26:56 about growth.

00:26:57 Is that one of the first times you’ve done a video on... no, of course not,

00:27:03 the whole... well, there’s Euler’s identity.

00:27:07 Okay.

00:27:08 So...

00:27:09 Sure.

00:27:10 I guess I’ve done a lot of videos about exponential growth in the circular direction, only

00:27:13 minimal in the normal direction.

00:27:14 I mean, another way to ask: do you think we’re able to reason intuitively about exponential

00:27:22 growth?

00:27:23 It’s funny.

00:27:25 I think it’s extremely intuitive to humans, and then we train it out of ourselves

00:27:30 such that it’s then really not intuitive and then I think it can become intuitive again

00:27:33 when you study a technical field.

00:27:35 So what I mean by that is, have you ever heard of these studies in an

00:27:41 anthropological setting, where you’re studying a group that has been dissociated

00:27:46 from a lot of modern society, and you ask what number is between one and nine?

00:27:51 Maybe you would say, you’ve got one rock and you’ve got nine rocks;

00:27:54 what pile is halfway in between these? And our instinct is usually to say five.

00:27:59 That’s the number that sits right between one and nine.

00:28:02 But sometimes, when numeracy and the kind of basic arithmetic that we

00:28:07 have isn’t in a society, the natural instinct is three, because it’s in between in an

00:28:12 exponential sense, a geometric sense: three is three times bigger than one, and the

00:28:16 next one is three times bigger than that.

00:28:18 So it’s like, you know, if you have one friend versus a hundred friends, what’s

00:28:22 in between that?

00:28:23 Ten friends seems like the social status in between those two states.
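
In other words, the instinct being described picks out the geometric mean rather than the arithmetic mean (a gloss on the two examples above):

```latex
% "Halfway" in the multiplicative sense is the geometric mean:
\[
\sqrt{1 \cdot 9} = 3, \qquad \sqrt{1 \cdot 100} = 10,
\]
% whereas the additive halfway point between 1 and 9 is (1+9)/2 = 5.
```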

00:28:26 So that’s like deeply intuitive to us to think logarithmically like that.

00:28:30 And for some reason we kind of train it out of ourselves to start thinking linearly

00:28:35 about things.

00:28:36 So in that sense, yeah, early basic math forces us to take a step

00:28:42 back.

00:28:43 It’s the same criticism, if there’s any, of science: the lessons of science make

00:28:51 us see the world in a slightly narrow sense, to where we have an overexaggerated

00:28:57 confidence that we understand everything, as opposed to just understanding a small slice

00:29:02 of it.

00:29:03 But I think that probably only really goes for small numbers, because the real counterintuitive

00:29:07 thing about exponential growth is as the numbers start to get big.

00:29:10 So I bet if you took that same setup and you asked them, oh, if I keep tripling the size

00:29:14 of this rock pile, you know, seven times, how big will it be?

00:29:18 I bet it would be surprisingly big even to a society without numeracy.

00:29:23 And that’s the side of it that I think is pretty counterintuitive to us, but

00:29:28 that you can basically train into people. Like, I think computer scientists and physicists,

00:29:33 when they were looking at the early numbers of COVID, were the ones

00:29:38 thinking, oh God, this is following an exact exponential curve.

00:29:42 And I heard that from a number of people, and almost all of them are

00:29:47 techies in some capacity, probably just because I live in the Bay Area, but

00:29:51 for sure they’re cognizant that this kind of growth is present in a lot

00:29:56 of natural systems, in a lot of systems.

00:30:01 I don’t know if you’ve seen it, I mean, there’s a lot of ways to visualize this obviously,

00:30:04 but Ray Kurzweil, I think, was the one that had this chessboard where on

00:30:12 every square of the chessboard, you double the number of stones or something on that

00:30:15 chessboard.

00:30:16 I’ve heard this as an old proverb, where, you know, the king offered

00:30:20 someone a gift, and he said, ah, the only gift I would like is very modest: give me a single

00:30:24 grain of rice for the first square of the chessboard, and then two grains of rice for the next square,

00:30:29 then twice that for the next square, and just continue on.

00:30:32 That’s my only modest ask, sire. And it’s, you know, more grains of rice than

00:30:37 there is anything in the world by the time you get to the end.
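
For the record, the arithmetic behind the proverb:

```latex
% Total grains over the 64 squares of the chessboard:
\[
\sum_{k=0}^{63} 2^{k} \;=\; 2^{64} - 1 \;\approx\; 1.8 \times 10^{19} \ \text{grains},
\]
% with $2^{63} \approx 9.2 \times 10^{18}$ of them on the final square alone.
```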

00:30:41 And my intuition falls apart there; I would have never predicted that.

00:30:47 For some reason, that’s a really compelling illustration of how poorly our intuition holds up.

00:30:54 Just like you said, maybe we’re okay for the first few piles of rocks,

00:30:58 but after a while it’s game over.

00:31:00 You know, the other classic example for gauging someone’s intuitive understanding

00:31:04 of exponential growth is, I’ve got a lily pad on a really big lake,

00:31:10 like Lake Michigan, and that lily pad replicates; it doubles one day, and then it doubles

00:31:16 the next day, and it doubles the next day.

00:31:17 And after 50 days, it actually is going to cover the entire lake.

00:31:22 Okay.

00:31:23 So after how many days does it cover half the lake?

00:31:25 49.

00:31:26 So you have a good instinct for exponential growth.

00:31:30 Right.

00:31:31 So I think the knee-jerk reaction is sometimes to think that it’s

00:31:35 half the amount of time, or to at least be surprised that after 49 days, you’ve

00:31:41 only covered half of it.

00:31:42 Yeah.

00:31:43 I mean, that’s the reason you heard a pause from me.

00:31:46 I literally thought, that can’t be right.

00:31:48 Right.

00:31:49 Yeah, exactly.

00:31:50 So even when you know the fact and you do the division, it’s like, wow.

00:31:53 So you’ve gone that whole time, and then day 49, it’s only covering half.

00:31:57 And then after that it gets the whole thing.

00:31:58 But I think you can make that even more visceral if, rather than going one day before, you say,

00:32:02 how long until it’s covered 1% of the lake, right?

00:32:06 So what would that be?

00:32:08 How many times do you have to double to get over a hundred? Like seven; six and a half

00:32:12 times, something like that.

00:32:13 Right.

00:32:14 So at that point you’re looking at 43, 44 days into it.

00:32:17 You’re not even at 1% of the lake.

00:32:19 So you’ve experienced, you know, 44 out of 50 days, and you’re like, ah, that’s

00:32:23 not so bad.

00:32:24 It’s just 1% of the lake.

00:32:25 But then next thing you know, it’s the entire lake.
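
The arithmetic behind those numbers: if coverage doubles daily and the lake is full on day 50, the covered fraction on day t is 2^(t-50), so

```latex
% Day on which 1% of the lake is covered:
\[
2^{\,t-50} = 0.01
\;\Longrightarrow\;
t = 50 - \log_2 100 \approx 43.4\ \text{days},
\]
% and half coverage is one doubling before the end, on day 49.
```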

00:32:28 You’re wearing a SpaceX shirt.

00:32:30 So let me ask you: one person who talks about exponentials, you know,

00:32:38 just the miracle of the exponential function in general is Elon Musk.

00:32:42 So he kind of advocates the idea of exponential thinking, you know, realizing that technological

00:32:51 development can, at least in the short term, follow exponential improvement, which breaks

00:32:58 apart our intuition, our ability to reason about what is and isn’t impossible.

00:33:03 So he’s a big one.

00:33:04 It’s a good leadership kind of style of saying like, look, the thing that everyone thinks

00:33:09 is impossible is actually possible because of exponentials.

00:33:13 But what’s your sense about that kind of way to see the world?

00:33:19 Well, so I think it can be very inspiring. Moore’s

00:33:25 law is another great example, where you have this exponential pattern that holds shockingly

00:33:29 well.

00:33:30 And it enables just better lives to be led.

00:33:33 I think the people who took Moore’s law seriously in the sixties were seeing that, wow, it’s

00:33:37 not going to be too long before these giant computers that are either batch processing

00:33:41 or time-shared, you could actually have one small enough to put on top of

00:33:45 your desk, and you could do things.

00:33:46 And if they took it seriously, you have people predicting smartphones a long

00:33:49 time ago.

00:33:50 And it’s only out of kind of this, I don’t want to say faith in exponentials,

00:33:55 but an understanding that that’s what’s happening.

00:33:58 What’s more interesting, I think, is to really understand why exponential growth happens

00:34:03 and that the mechanism behind it is when the rate of change is proportional to the thing

00:34:07 in and of itself.
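
Written out, the mechanism he is pointing at is the defining differential equation of exponential growth:

```latex
% Rate of change proportional to the current amount:
\[
\frac{dx}{dt} = k\,x
\quad\Longrightarrow\quad
x(t) = x_0\, e^{kt},
\]
% with doubling time $\ln 2 / k$ when $k > 0$.
```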

00:34:08 So the reason that technology would grow exponentially is only going to be if the rate of progress

00:34:14 is proportional to the amount that you have.

00:34:16 So that the software you write enables you to write more software.

00:34:19 And I think we see this with the internet: the advent of the internet makes it faster

00:34:25 to learn things, which makes it faster to create new things.

00:34:29 I think this is oftentimes why investment will grow exponentially: the

00:34:35 more resources a company has, if it knows how to use them well, the more

00:34:40 it can actually grow.

00:34:41 So, I mean, you know, you referenced Elon Musk.

00:34:43 I think he seems to really be into vertically integrating his companies.

00:34:47 I think a big part of that is because you have the sense that what you want is to make sure

00:34:49 that the things that you develop, you have ownership of, and they enable further development

00:34:54 of the adjacent parts, right?

00:34:55 So it’s not just that you see a curve and you’re blindly drawing a line through

00:35:00 it.

00:35:01 What’s much more interesting is to ask, when do you have this proportional growth property?

00:35:04 Because then you can also recognize when it breaks down: like in an epidemic, as you

00:35:09 approach saturation, that would break down.

00:35:11 As you do anything that skews what that proportionality constant is, you

00:35:16 can make it maybe not break down as being an exponential, but it can seriously slow

00:35:20 what that exponential rate is.

00:35:21 So the opposite of a pandemic is, in terms of ideas, you want to minimize

00:35:28 barriers that prevent the spread.

00:35:31 You want to maximize the spread of impact.

00:35:33 So you want it to grow when you’re doing technological development, so that

00:35:38 that rate holds up.

00:35:42 And that’s almost like an operational challenge of how you run

00:35:47 a company, how you run a group of people: that any one invention has a ripple that’s

00:35:52 unstopped.

00:35:54 And that ripple effect then has its own ripple effects and so on.

00:35:58 And that continues.

00:35:59 Yeah.

00:36:00 Like Moore’s law is fascinating.

00:36:01 And on a psychological level and a human level, because it’s not exponential.

00:36:06 It’s just a consistent set of what you would call S-curves, which is,

00:36:13 it’s constant breakthrough innovations nonstop.

00:36:18 That’s a good point.

00:36:19 Like it might not actually be an example of exponentials, in the sense of something which grows

00:36:22 in proportion to itself.

00:36:23 But instead it’s almost like a benchmark that was set out that everyone’s been pressured

00:36:27 to meet.

00:36:29 And it’s all these innovations and micro-inventions along the way, rather than some

00:36:33 consistent sit-back-and-let-the-lily-pad-grow-across-the-lake phenomenon.

00:36:38 And there’s also a human psychological level, for sure, like the four-minute mile;

00:36:42 there’s something about it.

00:36:45 Like saying that, look, there is, you know, Moore’s law; it’s a law.

00:36:51 So it’s certainly an achievable thing.

00:36:56 You know, we achieved it for the last decade, for the last two decades, for the last three

00:36:59 decades, you just keep going and it somehow makes it happen.

00:37:04 I mean, I’m continuously surprised in this world how few people do

00:37:10 the best work in the world, in whatever that particular field is. It’s very often

00:37:18 that it’s the genius. I mean, you could argue that community matters, but it’s certain,

00:37:25 like I’ve been in groups of engineers where one person is clearly doing an incredible

00:37:32 amount of work and just is the genius. And it’s fascinating to see. Basically, it’s kind

00:37:38 of the Steve Jobs idea: maybe the whole point is to create an atmosphere where the

00:37:46 genius can discover themselves, have the opportunity to do the best work of their

00:37:52 life. And yeah, the exponential is just milking that.

00:37:57 It’s like rippling the idea that it’s possible and that idea that it’s possible finds the

00:38:03 right people for the four-minute mile, and the idea that it’s possible finds the right

00:38:07 runners to run it, and then the number of people who can run faster than four minutes explodes.

00:38:12 It’s kind of interesting. I don’t know; basically, the positive way to see that is

00:38:17 most of us are way more intelligent, have way more potential than we ever realized.

00:38:22 I guess that’s kind of depressing, but I mean like the ceiling for most of us is much higher

00:38:26 than we ever realized.

00:38:28 That is true.

00:38:29 A good book to read if you want that sense is Peak, which essentially talks about peak

00:38:34 performance in a lot of different ways, like chess, London cab drivers, how many pushups

00:38:39 people can do, short term memory tasks, and it’s meant to be like a concrete manifesto

00:38:45 about deliberate practice and such, but the one sensation you come out with is wow, no

00:38:50 matter how good people are at something, they can get better and like way better than we

00:38:54 think they could.

00:38:55 I don’t know if that’s actually related to exponential growth, but I do think it’s a

00:38:59 true phenomenon that’s interesting.

00:39:01 Yeah, I mean, there’s certainly no law of exponential growth in human innovation.

00:39:06 Well, I don’t know.

00:39:09 Well kind of, there is.

00:39:11 I think it’s very interesting to see when innovations in one field allow for innovations

00:39:15 in another.

00:39:16 Like the advent of computing seems like a prerequisite for the advent of chaos theory.

00:39:20 You have this truth about physics and the world that in theory could be known.

00:39:25 You could find Lorenz’s equations without computers, but in practice, it was just never

00:39:29 going to be analyzed that way unless you were doing like a bunch of simulations and that

00:39:34 you could computationally see these models.

00:39:36 So it’s like physics allowed for computers, computers allowed for better physics, and

00:39:40 you know, wash, rinse and repeat.

00:39:42 That self proportionality, that’s exponential.

00:39:45 So I think I wouldn’t think it’s too far to say that that’s a law of some kind.

00:39:50 Yeah, a fundamental law of the universe is that these descendants of apes will exponentially

00:39:59 improve their technology and one day be taken over by the AGI.

00:40:05 That’s built in.

00:40:06 That’ll make the video game fun, whoever created this thing.

00:40:09 So I mean, since you’re wearing a SpaceX shirt, let me ask.

00:40:12 I didn’t realize I was wearing a SpaceX shirt.

00:40:15 I apologize.

00:40:16 It’s on point.

00:40:17 So it’s on topic.

00:40:18 I’ll take it.

00:40:19 It’s the first crewed mission out into space since the Space Shuttle, and the first

00:40:32 time ever by a commercial company. I mean, it’s an incredible accomplishment, I think,

00:40:37 but it’s also just incredible how it inspires imagination amongst people, that this is the

00:40:44 first step in a long, vibrant journey of humans into space.

00:40:50 So how do you feel?

00:40:52 Is this exciting to you?

00:40:55 Yeah, it is.

00:40:56 I think it’s great.

00:40:57 The idea of seeing it basically done by smaller entities instead of by governments.

00:41:00 I mean, it’s a heavy collaboration between SpaceX and NASA in this case, but moving

00:41:04 in the direction of not necessarily requiring an entire country and its government to make

00:41:09 it happen, but that you can have something closer to a single company doing it.

00:41:14 We’re not there yet because it’s not like they’re unilaterally saying like we’re just

00:41:18 shooting people up into space.

00:41:20 It’s just a sign that we’re able to do more powerful things with smaller groups of people.

00:41:25 I find that inspiring.

00:41:26 Innovate quickly.

00:41:27 I hope we see people land on Mars in my lifetime.

00:41:30 Do you think we will?

00:41:32 I think so.

00:41:33 I mean, I think there’s a ton of challenges there, right?

00:41:35 Like radiation being kind of the biggest one.

00:41:37 And I think there’s a ton of people who look at that and say, why?

00:41:42 Why would you want to do that?

00:41:43 Let’s let the robots do the science for us.

00:41:46 But I think there’s enough people who are genuinely inspired about broadening the worlds

00:41:50 that we’ve touched or people who think about things like backing up the light of consciousness

00:41:55 with super long-term visions of terraforming.

00:41:59 Backing up the light of consciousness.

00:42:00 Yeah.

00:42:01 The thought that if Earth goes to hell, we’ve got to have a backup somewhere.

00:42:07 A lot of people see that as pretty out there and it’s like not in the short term future,

00:42:10 but I think that’s an inspiring thought.

00:42:12 I think that’s a reason to get up in the morning and I feel like most employees at SpaceX feel

00:42:17 that way too.

00:42:18 Do you think we’ll colonize Mars one day?

00:42:21 No idea.

00:42:22 Like, either AGI kills us first, or, if we’re allowed to, I don’t know how long it’ll take

00:42:26 us.

00:42:27 Well, like honestly, it would take such a long time.

00:42:29 Like, okay, you might have a small colony, right?

00:42:32 Something like what you see in The Martian, but not people living comfortably there.

00:42:39 But if you want to talk about actual second-Earth kind of stuff, that’s just

00:42:46 way far out there and the future moves so fast that it’s hard to predict.

00:42:50 We might just kill ourselves before that even becomes viable.

00:42:53 Yeah.

00:42:54 I mean, there’s a lot of possibilities. It doesn’t have to be on

00:42:58 a planet; we could be floating out in space, have a space-faring backup solution that doesn’t

00:43:07 have to deal with the constraints of a planet. I mean, a planet provides a lot of possibilities

00:43:11 and resources, but also has some constraints.

00:43:13 Yeah.

00:43:14 I mean, for me, for some reason, it’s a deeply exciting possibility.

00:43:19 Oh yeah.

00:43:20 Yeah.

00:43:21 All of the people who were like skeptical about it are like, why do we care about going

00:43:24 to Mars?

00:43:25 Like, what makes you care about anything? It’s inspiring.

00:43:28 It’s hard.

00:43:29 It actually is hard to hear that because exactly as you put it on a philosophical level, it’s

00:43:34 hard to say, why do anything?

00:43:37 I don’t know.

00:43:38 It’s like what people say. Like, I’ve been doing an insane challenge the last 30-something

00:43:45 days.

00:43:46 Your pull ups?

00:43:47 The pull-ups and push-ups, and a bunch of people are like, awesome.

00:43:55 They’re insane, but awesome.

00:43:57 And then some people are like, why?

00:43:59 Why do anything?

00:44:00 I don’t know.

00:44:02 There’s a calling.

00:44:03 I’m with JFK a little bit: we do these things because they’re hard.

00:44:09 There’s something in the human spirit that says... it’s the same with a math problem.

00:44:14 You fail once, and there’s this feeling that, you know what, I’m not

00:44:20 going to back down from this.

00:44:21 There’s something to be discovered in overcoming this thing.

00:44:25 So what I like about it is, and I also like this about the moon missions, sure, it’s kind

00:44:29 of arbitrary, but you can’t move the target.

00:44:32 So you can’t make it easier and say that you’ve accomplished the goal.

00:44:36 And when that happens, it just demands actual innovation, right?

00:44:39 Like protecting humans from the radiation in space on the flight there and while they’re

00:44:44 there: hard problem, demands innovation.

00:44:46 You can’t move the goalpost to make that easier.

00:44:49 But certainly the innovations required for things like that will be relevant in a bunch

00:44:52 of other domains too.

00:44:54 So the idea of doing something merely because it’s hard is, like, loosely productive.

00:44:59 Great.

00:45:00 But as long as you can’t move the goalposts, there’s probably going to be these secondary

00:45:03 benefits that we should all strive for.

00:45:07 Yeah.

00:45:08 I mean, it’s hard to formulate the Mars colonization problem as something that has a deadline,

00:45:13 which is the problem.

00:45:15 But if there was a deadline, then the amount of things we would come up with by forcing

00:45:23 ourselves to figure out how to colonize that place would be just incredible.

00:45:29 This is what people... like, the internet didn’t get created because people sat down and tried

00:45:34 to figure out, how do I, you know, send TikTok videos of myself dancing to people.

00:45:41 You know, there was an application.

00:45:44 I mean, actually, I don’t even know. What do you think the application for the internet

00:45:48 was? It must’ve been very low-level, basic network communication within DARPA,

00:45:53 like military-based: how do I send information securely

00:46:00 between two places?

00:46:03 Maybe it was encryption.

00:46:04 I’m speaking totally outside of my knowledge, but it was probably intended

00:46:08 for a very narrow, small group of people.

00:46:10 Well, so I mean, there was this small community of people who were really interested

00:46:13 in timesharing computing, interactive computing, in contrast with batch processing.

00:46:20 And then the idea was that as you set up a timesharing center, basically meaning

00:46:24 multiple people logged in and using that central computer, why not make it

00:46:28 accessible to others?

00:46:30 And this was kind of what I had always thought: oh, it’s this fringe group that was

00:46:33 interested in this new kind of computing, and they all got themselves together.

00:46:37 But the thing is, DARPA wouldn’t... you wouldn’t have the U.S. government funding

00:46:41 that just for the fun of it, right?

00:46:44 In some sense, that’s what ARPA was all about: just really advanced research for

00:46:48 the sake of having advanced research, and it doesn’t have to pay out with utility soon.

00:46:53 But the core parts of its development were happening like in the middle of the Vietnam

00:46:57 war when there was budgetary constraints all over the place.

00:47:01 I only learned this recently, actually, like if you look at the documents, basically justifying

00:47:06 the budget for the ARPANET as they were developing it, and not just keeping it where it was,

00:47:12 but actively growing it while all sorts of other departments were having their funding

00:47:15 cut because of the war, a big part of it was national defense in terms of having like a

00:47:21 more robust communication system, like the idea of packet switching versus circuit switching.

00:47:26 You could kind of make this case that in some calamitous circumstance where a central location

00:47:31 gets nuked, this is a much more resilient way to keep your communication lines up,

00:47:36 in a way that traditional telephone lines weren’t, which I just found very interesting.

00:47:45 Even something that we see as so happy-go-lucky, just a bunch of computer nerds trying

00:47:48 to get interactive computing out there.

00:47:51 The actual thing that got it funded, the thing that made it advance when it did, was

00:47:57 this direct national security question and concern.

00:48:00 I don’t know if you’ve read it.

00:48:02 I haven’t read it.

00:48:03 I’ve been meaning to read it, but Neil deGrasse Tyson actually came out

00:48:06 with a book that talks about science in the context of the military, basically

00:48:11 saying all the great science we did in the 20th century was because of the military.

00:48:18 He paints a positive picture; it’s not a critical take, like a lot of people who say military-industrial

00:48:23 complex and so on.

00:48:25 Another way to see the military and national security is as a source of, like you said,

00:48:30 deadlines and hard things you can’t move, almost like scaring yourself into being

00:48:37 productive.

00:48:38 It is that.

00:48:39 I mean, Manhattan Project is a perfect example, probably the quintessential example.

00:48:43 That one is a little bit more macabre than others because of like what they were building,

00:48:48 but in terms of how many focused, smart hours of human intelligence get pointed towards

00:48:54 a topic per day, you’re just maxing it out with that sense of worry.

00:48:58 In that context, everyone there was saying, we’ve got to get the bomb before Hitler

00:49:01 does, and that just lights a fire under you. Again, the circumstance is macabre,

00:49:08 but I think that’s actually pretty healthy, especially for researchers that are otherwise

00:49:11 going to be really theoretical, to take these theorizers and say, make this real physical

00:49:17 thing happen.

00:49:18 Meaning a lot of it is going to be unsexy, a lot of it’s going to be like young Feynman

00:49:22 sitting there kind of inventing a notion of computation in order to like compute what

00:49:28 they needed to compute more quickly with like the rudimentary automated tools that they

00:49:31 had available.

00:49:34 I think you see this with Bell Labs also where you’ve got otherwise very theorizing minds

00:49:39 in very pragmatic contexts that I think is like really helpful for the theory as well

00:49:43 as for the applications.

00:49:46 I think that stuff can be positive for progress.

00:49:50 You mentioned Bell Labs and Manhattan Project.

00:49:52 This kind of makes me curious about the things you’ve created, which are quite singular.

00:49:58 Like if you look at all of YouTube, or not just YouTube, it doesn’t matter what it is.

00:50:04 It’s just teaching content, art, it doesn’t matter.

00:50:06 It’s like, yep, that’s, that’s Grant, right?

00:50:11 That’s unique.

00:50:12 I know your teaching style and everything.

00:50:15 Manhattan Project and Bell Labs famously had a lot of brilliant people, but

00:50:22 there were a lot of them.

00:50:23 They play off of each other.

00:50:25 So like my question for you is that, does it get lonely?

00:50:28 Honestly, that right there, I think is the biggest part of my life that I would like

00:50:32 to change in some way that I look at a Bell Labs type situation and I’m like, God damn,

00:50:39 I love that whole situation and I’m so jealous of it and you’re like reading about Hamming

00:50:44 and then you see that he also shared an office with Shannon and you’re like, of course he

00:50:47 did.

00:50:48 Of course they shared an office.

00:50:49 That’s how these ideas get.

00:50:50 And they actually probably very likely worked separately.

00:50:53 Yeah, totally, totally separate.

00:50:55 But, and sorry to interrupt, there’s literally a magic that happens when

00:50:59 you run into each other, like on the way to getting a snack or something.

00:51:06 It’s conversations you overhear, it’s other projects you’re pulled into, it’s puzzles that

00:51:09 colleagues are sharing, all of that.

00:51:12 I have some extent of it just because I try to stay well connected in communities of people

00:51:17 who think in similar ways.

00:51:19 But it’s not in the day to day in the same way, which I would like to fix somehow.

00:51:25 That’s one of the, I would say one of the biggest, well, one of the many drawbacks,

00:51:35 negative things about this current pandemic: whatever the term is, but chance

00:51:40 collisions are significantly reduced.

00:51:44 I saw, I don’t know why I saw this, but on my brother’s work calendar, he had a scheduled

00:51:50 slot with someone, a meeting he’d scheduled, and the title of the whole meeting was: no

00:51:57 specific agenda,

00:51:58 I just miss the happenstance, serendipitous conversations that we used to have, which

00:52:02 the pandemic and remote work has so cruelly taken away from us.

00:52:05 Brilliant.

00:52:06 That’s brilliant.

00:52:07 I’m like, that’s the way to do it.

00:52:09 You just schedule those things, schedule the serendipitous interaction.

00:52:13 That’s like, I mean, you can’t do it in an academic setting, but it’s basically like

00:52:16 going to a bar and sitting there just for the strangers you might meet, just the strangers

00:52:22 or striking up a conversation with strangers on the train.

00:52:26 Harder to do when you’re, like maybe myself or maybe a lot of academic types,

00:52:33 introverted and avoiding human contact as much as possible.

00:52:38 So it’s nice when it’s forced, those chance collisions, but maybe scheduling is a possibility

00:52:43 but for the most part, do you work alone?

00:52:48 I’m sure you struggle a lot.

00:52:53 You probably hit moments when you look at this and you say, this is the wrong way

00:53:00 to show it.

00:53:01 It’s the wrong way to visualize it.

00:53:02 I’m making it too hard for myself.

00:53:04 I’m going down the wrong direction.

00:53:05 This is too long.

00:53:06 This is too short.

00:53:07 All those self-doubts that can be paralyzing.

00:53:11 What do you do in those moments?

00:53:13 I actually much prefer like work to be a solitary affair for me.

00:53:18 That’s like a personality quirk.

00:53:19 I would like it to be in an environment with others and like collaborative in the sense

00:53:23 of ideas exchanged.

00:53:24 But those phenomena you’re describing when you say this is too long, this is too short,

00:53:27 this visualization sucks, it’s way easier to say that to yourself than it is to say

00:53:31 to a collaborator.

00:53:34 And I know that’s just a thing that I’m not good at.

00:53:36 So in that way, it’s very easy to just throw away a script because the script isn’t working.

00:53:41 It’s hard to tell someone else they should do the same.

00:53:43 Actually, last time we talked, I think it was very close to me talking to Don Knuth, which was

00:53:47 kind of cool.

00:53:48 Like two people that…

00:53:49 I can’t believe you got that interview.

00:53:51 It’s the hard…

00:53:52 No, can I brag about something?

00:53:54 Please.

00:53:55 My favorite thing is, after we did the interview, Don Knuth offered to go out for hot

00:54:01 dogs with me.

00:54:02 To get hot dogs.

00:54:03 That was never…

00:54:04 Like people ask me what’s the favorite interview you’ve ever done and that has to be…

00:54:09 But unfortunately I couldn’t, I had a thing after.

00:54:12 So I had to turn down Don Knuth.

00:54:14 You missed Knuth dogs?

00:54:15 Knuth dogs.

00:54:16 Sorry.

00:54:17 So that was a little bragging, but the hot dogs, he’s such a sweetheart.

00:54:22 But the reason I bring that up is he works through problems alone as well.

00:54:28 He prefers that struggle, the struggle of it.

00:54:33 Writers like Stephen King often talk about their process of what they do, what they eat

00:54:40 when they wake up, when they sit down, how they like their desk on a perfectly productive

00:54:49 day.

00:54:51 What they like to do, how long they like to work for, what enables them to think deeply,

00:54:56 all that kind of stuff.

00:54:57 Hunter S. Thompson did a lot of drugs.

00:55:00 Everybody has their own thing.

00:55:03 Do you have a thing?

00:55:05 If you were to lay out a perfect productive day, what would that schedule look like do

00:55:10 you think?

00:55:11 Part of that’s hard to answer because like the mode of work I do changes a lot from day

00:55:19 to day.

00:55:20 Like some days I’m writing.

00:55:21 The thing I have to do is write a script.

00:55:22 Some days I’m animating.

00:55:23 The thing I have to do is animate.

00:55:24 Sometimes I’m like working on the animation library.

00:55:27 The thing I have to do is like a little, I’m not a software engineer, but something in

00:55:30 the direction of software engineering.

00:55:32 Some days it’s like a variant of research.

00:55:34 It’s like learn this topic well and try to learn it differently.

00:55:37 So those are like four very different modes.

00:55:41 Some days it’s like get through the email backlog of people I’ve been, tasks I’ve been

00:55:45 putting off.

00:55:46 It goes research, scripting, like the idea starts with research and then there’s scripting

00:55:51 and then there’s programming and then there’s the showtime.

00:55:56 And the research side, by the way, I think a problematic way to do it is to say I’m starting

00:56:01 this project and therefore I’m starting the research.

00:56:03 Instead it should be that you’re like ambiently learning a ton of things just in the background

00:56:08 and then once you feel like you have the understanding for one, you put it on the list of things

00:56:11 that there can be a video for.

00:56:14 Otherwise either you’re going to end up roadblocked forever or you’re just not going to like have

00:56:18 a good way of talking about it.

00:56:21 But still some of the days it’s like the thing to do is learn new things.

00:56:25 So what’s the most painful one?

00:56:26 I think you mentioned scripting.

00:56:29 Scripting is yeah, that’s the worst.

00:56:30 Yeah, writing is the worst.

00:56:31 So what’s your, on a perfectly, so let’s take the hardest one.

00:56:35 What’s a perfectly productive day?

00:56:37 You wake up and it’s like, damn it, this is the day I need to do some scripting.

00:56:41 And like you didn’t do anything the last two days so you came up with excuses to procrastinate

00:56:45 so today must be the day.

00:56:47 Yeah, I wake up early, I guess I exercise and then I turn the internet off.

00:56:57 If we’re writing, yeah, that’s what’s required is having the internet off and then maybe

00:57:01 you keep notes on the things that you want to Google when you’re allowed to have the

00:57:04 internet again.

00:57:05 I’m not great about doing that, but when I do, that makes it happen.

00:57:08 And then when I hit writer’s block, like the solution to writer’s block is to read.

00:57:13 Doesn’t even have to be related.

00:57:14 Just read something different just for like 15 minutes, half an hour and then go back

00:57:18 to writing.

00:57:20 That when it’s a nice cycle, I think can work very well.

00:57:22 And when you’re writing the script, you don’t know where it ends, right?

00:57:26 Like, you have problem-solving videos…

00:57:28 Problem-solving videos, I know where they end; expositional videos,

00:57:30 I don’t know where they end. Coming up with the magical thing that makes this

00:57:36 whole story, that ties this whole story together, when does that happen?

00:57:41 That’s the thing that makes it such that a topic gets put on the list of

00:57:45 videos.

00:57:46 Oh, that’s an issue.

00:57:47 You shouldn’t start the project unless there’s one of those. And you have

00:57:50 such a big bag of aha moments already that you could just

00:57:55 pull from it.

00:57:56 That’s one of the things.

00:57:57 And one of the sad things about time, and that nothing lasts forever, and that we’re all mortal,

00:58:05 let’s not get into that discussion, is, you know, when I did a call for questions,

00:58:14 and people wanted to ask you questions, there were so

00:58:19 many requests from people about certain videos they would love you to do.

00:58:23 It’s such a pile and I think that’s a, that’s a sign of like admiration from people for

00:58:30 sure.

00:58:31 But it makes me sad, cause whenever I see them, people giving ideas, they’re

00:58:35 all very often really good ideas.

00:58:38 And it’s like, it’s such a, it makes me sad in the same kind of way when I go through

00:58:44 a library or through a bookstore, you see all these amazing books that you’ll never

00:58:49 get to open.

00:58:52 So yeah.

00:58:53 So you gotta enjoy the ones that you have, enjoy the books that are open and don’t let

00:58:59 yourself lament the ones that stay closed.

00:59:02 What else?

00:59:03 Is there any other magic to that day?

00:59:05 So do you try to dedicate like a certain number of hours?

00:59:08 Do you, Cal Newport has this deep work kind of idea.

00:59:13 There’s systematic people who like get really on top of, you know, they checklist of what

00:59:17 they’re going to do in the day and they like count their hours.

00:59:20 And I am not a systematic person in that way.

00:59:23 Which is probably a problem.

00:59:24 I very likely would get more done if I was systematic in that way, but that doesn’t happen.

00:59:30 So you talk to me later in life and maybe I’ll have like changed my ways and give you

00:59:35 a very different answer.

00:59:37 I think Benjamin Franklin, later in life, figured out the rigor, these very rigorous

00:59:42 schedules, and how to be productive.

00:59:45 I think those schedules are much more fun to write.

00:59:47 Like it’s very fun to like write a schedule and make a blog post about like the perfect

00:59:50 productive day that like might work for one person.

00:59:54 But I don’t know how much people get out of like reading them or trying to adopt someone

00:59:57 else’s style.

00:59:59 And I’m not even sure that they’re ever followed.

01:00:01 Exactly.

01:00:02 You’re always going to write it as the best version of yourself.

01:00:05 You’re not going to explain the phenomenon of like wanting to get out of the bed, but

01:00:10 not really wanting to get out of the bed and all of that.

01:00:13 And just like zoning out for random reasons. Or the one that people probably don’t touch

01:00:18 on at all: I try to check social media once a day, and only when I post.

01:00:24 So I post, and that’s it.

01:00:26 When I post, I check the previous day’s.

01:00:28 That’s what I try to do.

01:00:31 That’s what I do like 90% of the days.

01:00:34 But then I’ll go, I’ll have like a two week period where it’s just like, I’m checking

01:00:38 the internet like, I mean, it’s some, probably some scary number of times and a lot of people

01:00:44 can resonate with that.

01:00:45 I think it’s a legitimate addiction.

01:00:47 It’s like, it’s a dopamine addiction and it’s, I don’t know if it’s a problem because as

01:00:52 long as it’s the kind of socializing, like if you’re actually engaging with friends and

01:00:55 engaging with other people’s ideas, uh, I think it can be really useful.

01:01:00 Well, I don’t know.

01:01:01 So like for sure I agree with you, but I’m, it’s a, it’s definitely an addiction because

01:01:07 for me, I think it’s true for a lot of people.

01:01:09 I am very cognizant of the fact I just don’t feel that happy.

01:01:14 If I look at a day where I’ve checked social media a lot, if I just aggregate it, if I did

01:01:20 a self-report, I’m sure I would find that I’m just literally less happy

01:01:26 with my life and myself after I’ve done that check.

01:01:29 When I check it once a day, I’m happy, even, cause I’ve seen it.

01:01:36 Okay.

01:01:37 One way to measure that is when somebody says something not nice to you on the internet.

01:01:42 When I check it once a day, I’m able to just smile. I virtually,

01:01:48 I think about them positively, empathetically, I send them love.

01:01:51 I don’t ever respond, but I just feel positively about the whole thing.

01:01:56 If I check it more than that, it starts eating at me.

01:02:01 There’s an eating-at-you thing that happens, like anxiety.

01:02:07 It occupies a part of your mind in a way that doesn’t seem to be healthy.

01:02:11 Same with, I mean, you, you, you put stuff out on YouTube.

01:02:15 I think it’s important.

01:02:17 I think you have a million dimensions that are interesting to you, but yeah, one of,

01:02:21 one of the interesting ones is the study of education and the psychological aspect of

01:02:26 putting stuff up on YouTube.

01:02:28 I like now have completely stopped checking statistics of any kind.

01:02:34 I released episode 100 with my dad, a conversation with my dad.

01:02:39 He checks, he’s probably listening to this. Stop.

01:02:44 He checks the number of views on his video, on his conversation.

01:02:49 So he discovered it, he’s new to this whole addiction, and he just checks, and

01:02:54 he’ll text me or write to me, I just passed Dawkins, and I love that so much.

01:03:04 Yeah.

01:03:05 So he’s, uh, can I tell you a funny story to that effect, of parental use of YouTube?

01:03:09 Uh, early on in the channel, uh, my mom would like text me.

01:03:14 She’s like, uh, the channel, the channel has had 990,000 views.

01:03:19 The channel has had 991,000 views.

01:03:20 I’m like, oh, that’s cute.

01:03:22 She’s going to the little part on the about page where you see the total number of channel

01:03:24 views.

01:03:25 No, she didn’t know about that.

01:03:27 She had been going every day through all the videos and then adding them up and she thought

01:03:33 she was like doing me this favor of providing me this like global analytic that, uh, otherwise

01:03:38 wouldn’t be visible.

01:03:39 That’s awesome.

01:03:40 It’s just like this addiction where you have some number you want to follow and like, yeah,

01:03:43 it’s funny that your dad had this.

01:03:44 I think a lot of people have it.

01:03:46 I think that’s probably a beautiful thing for like parents cause they’re legitimately,

01:03:52 they’re proud.

01:03:53 Yeah.

01:03:54 It’s, it’s born of love.

01:03:55 It’s great.

01:03:56 The downside, I feel, one of them, is this one interesting experience that you probably

01:04:03 don’t know much about, cause comments on your videos are super positive.

01:04:07 Uh, but people judge the quality of how something went.

01:04:12 Like I see that with these conversations by the comments.

01:04:16 Yeah.

01:04:17 Like, I’m not talking about like, you know, people in their twenties and their thirties.

01:04:22 I’m talking about like CEOs of major companies who don’t have time.

01:04:27 They basically, they literally, this is their evaluation metric.

01:04:31 They’re like, Ooh, the comments seem to be positive and that’s really concerning to me.

01:04:35 Most important lesson for any content creator to learn is that the commenting public is

01:04:40 not representative of the actual public.

01:04:42 And this is easy to see.

01:04:44 Ask yourself, how often do you write comments on YouTube videos?

01:04:47 Most people will realize I never do it.

01:04:49 Some people realize they do, but the people who realize they never do it should understand

01:04:53 that that’s a sign.

01:04:54 The kind of people who are like you aren’t the ones leaving comments.

01:04:58 And I think this is important in

01:04:59 a number of respects. Like, uh, in my case, I think I would think my content was better

01:05:03 than it was if I just read comments, cause people are super nice.

01:05:06 The thing is the people who are bored by it are, are put off by it in some way or frustrated

01:05:10 by it.

01:05:11 Usually they just go away.

01:05:13 They’re certainly not going to watch the whole video, much less leave a comment on it.

01:05:16 So there’s a huge under representation of like negative feedback, like well intentioned

01:05:20 negative feedback because very few people actively do that.

01:05:23 Like watch the whole thing that they dislike, figure out what they disliked, articulate

01:05:26 what they dislike.

01:05:27 Um, there’s plenty of negative feedback that’s not well intentioned, but for that

01:05:32 golden kind, I think a lot of YouTuber friends I have have at least gone through

01:05:38 phases of anxiety about the nature of comments that stem from basically just

01:05:44 this, that it’s people who aren’t necessarily representative of who they were going for,

01:05:48 or who misinterpreted what they’re trying to say, or whatever have you, or were focusing on

01:05:52 things like personal appearance as opposed to substance.

01:05:55 Um, and they come away thinking like, oh, that’s what everyone thinks, right?

01:05:59 That’s what everyone’s response to this video was.

01:06:01 Um, but a lot of the people who had the reaction you wanted them to have, like they probably

01:06:05 didn’t write it down.

01:06:07 So very important to learn.

01:06:09 It also translates to, um, realizing that you’re not as important as you might think

01:06:14 you are, right?

01:06:15 Because all of the people commenting are the ones who love you the most and are like really

01:06:19 asking you to like create certain things or like mad that you didn’t create like a past

01:06:22 thing.

01:06:23 Um, I don’t, I have such a problem.

01:06:26 Like I have a very real problem with making promises about a type of content that I’ll

01:06:30 make and then either not following up on it soon or just like never following up on it.

01:06:34 Yeah.

01:06:35 Like the last time we talked, I think you promised, I’m not sure, promised me that you’ll have

01:06:38 music incorporated into your videos. I’ll share a private link with you, but there’s

01:06:44 an example of what I had in mind.

01:06:45 I did a version of it, um, and I’m like, oh, I think there’s a better version of this

01:06:50 that might exist one day.

01:06:52 So it’s now on the back burner; it’s sitting there.

01:06:55 It was a live performance at this one thing. I think the next circumstance where

01:06:59 I’m doing another recorded live performance that fits having that, then, in a better

01:07:03 recording context, maybe I’ll make it nice and public.

01:07:06 Maybe a while, but exactly.

01:07:08 Right.

01:07:09 Um, the point I was going to make, though, is, like, I know I’m bad about following up on stuff,

01:07:12 uh, which is an actual problem.

01:07:14 It’s born of the fact that I have a sense of when something will be good content and when it

01:07:18 won’t be.

01:07:19 Um, but this can actually be incredibly disheartening, because a ton of comments that I see are people

01:07:24 who are, uh, frustrated, usually in a benevolent way, that I haven’t followed

01:07:29 through on X and Y, which I get, and I should do that.

01:07:32 But what’s comforting thought for me is that when there’s a topic I haven’t promised, but

01:07:36 I am working on and I’m excited about, it’s like the people who would really like this

01:07:40 don’t know that it’s coming and don’t know to like comment to that effect and like the

01:07:44 commenting public that I’m seeing is not representative of like who I think this other project will

01:07:49 touch meaningfully.

01:07:50 Yeah.

01:07:51 So focus on the future on the thing you’re creating now, just like the, uh, yeah, the

01:07:54 art of it.

01:07:55 One of the people who is really inspiring to me in that regard, because I’ve really seen it

01:08:00 in person, is Joe Rogan. He doesn’t read comments, but it’s not just that: he doesn’t give

01:08:08 a damn.

01:08:09 Hmm.

01:08:10 He legitimately, he’s not clueless about it.

01:08:13 It’s the richness and the depth of the smile he has when he just experiences

01:08:19 the moment with you, offline. You can tell he doesn’t give a damn

01:08:28 about anything, about what people think, whether it’s on a podcast when you talk to

01:08:31 them or whether it’s offline. It’s just not there,

01:08:36 what other people think, or even what the rest of the day looks like.

01:08:41 He’s just deeply in the moment, or, especially, is what we’re doing going

01:08:48 to make for a good Instagram photo or something like that?

01:08:50 He doesn’t think like that at all.

01:08:52 I think, for actually quite a lot of people, he’s an inspiration in that way,

01:08:58 and in real life he shows that you can be very successful not giving a damn about

01:09:06 comments.

01:09:07 And it sounds bad not to read comments, cause it’s like, well, there’s a huge number

01:09:12 of people who are deeply passionate about what you do.

01:09:15 So you’re, what, ignoring them? But at the same time, the nature of our platforms is such

01:09:20 that the cost of listening to all the positive people who are really close to you, who are

01:09:27 incredible people, who’ve made a great community that you can learn

01:09:32 a lot from, the cost of listening to those folks is also the cost of your psychology

01:09:40 slowly being degraded by the natural underlying toxicity of the internet.

01:09:47 Engage with a handful of people deeply rather than like as many people as you can in a shallow

01:09:51 way.

01:09:52 I think that’s a good lesson for social media usage.

01:09:55 Um, like platforms in general, like choose, choose just a handful of things to engage

01:10:00 with and engage with it very well in a way that you feel proud of and don’t worry about

01:10:03 the rest.

01:10:04 Honestly, I think the best social media platform is texting.

01:10:09 That’s my favorite.

01:10:10 That’s my go to social media platform.

01:10:12 Well, yeah, the best social media interactions like real life, not social media, but social

01:10:17 interaction.

01:10:18 Oh yeah.

01:10:19 No, no, no question there.

01:10:20 I think everyone should agree with that.

01:10:21 Which sucks, because, uh, it’s been challenged now with the current situation, and we’re trying

01:10:26 to figure out what kind of platform can be created where we can do remote communication

01:10:31 that still is effective.

01:10:32 It’s important for education.

01:10:34 It’s important for just the question of education right now.

01:10:38 Yeah.

01:10:39 So on that topic, uh, you’ve done a series of live streams called Lockdown Math, and you

01:10:44 know, you went live, which is different than what you usually do.

01:10:48 Maybe one, can you talk about how that felt?

01:10:53 What’s that experience like, when you look back? Is that an effective

01:10:58 way?

01:10:59 Did you find it a good way to teach?

01:11:01 And if so, are there lessons for this world where all of these educators are now trying

01:11:07 to figure out how the heck do I teach remotely?

01:11:11 For me, it was very different, as different as you can get.

01:11:13 I’m on camera, which I’m usually not.

01:11:15 I’m doing it live, which is nerve wracking.

01:11:17 Um, it was a slightly different like level of topics, although realistically I’m just

01:11:21 talking about things I’m interested in no matter what.

01:11:24 I think the reason I did that was this thought that a ton of people are looking to learn

01:11:28 remotely. The rate at which I usually put out content is too slow to be actively helpful.

01:11:33 Let me just do some biweekly lectures that if you’re looking for a place to point your

01:11:36 students, if you’re a student looking for a place to be edified about math, just tune

01:11:39 in at these times.

01:11:40 Um, and in that sense, I think it was, you know, a success for those who followed with

01:11:45 it.

01:11:46 It was a really rewarding experience for me to see how people engaged with it.

01:11:50 Um, part of the fun of the live interaction was that I would do these live quizzes

01:11:54 and see how people would answer, and try to shape the lesson based on that, or see what

01:11:57 questions people were asking in the audience.

01:11:59 I would love to, if I did more things like that in the future, kind of tighten that feedback

01:12:03 loop even more.

01:12:05 Um, I think for, you know, you asked about like if this can be relevant to educators,

01:12:10 like 100% online teaching is basically a form of live streaming now.

01:12:15 Um, and usually it happens through Zoom.

01:12:17 I think if teachers view what they’re doing as a kind of performance, a kind of live

01:12:22 stream performance, um, that would probably be pretty healthy, because Zoom can be kind

01:12:27 of awkward.

01:12:28 Um, and I put up this little blog post, actually, just on what our setup

01:12:32 looked like, if you want to adopt it yourself, and how to integrate, um, the broadcasting

01:12:37 software OBS with Zoom, or things like that.

01:12:39 It was really, sorry to pause on that.

01:12:40 I mean, yeah, maybe we could look at the blog post, but it looked really nice.

01:12:45 The thing is, I knew nothing about any of that stuff before I started.

01:12:48 I had a friend who knew a fair bit.

01:12:50 Um, and so he kind of helped show me the ropes.

01:12:52 One of the things that I realized is that you could, as a teacher, like it doesn’t take

01:12:57 that much to make things look and feel pretty professional.

01:12:59 Um, like one component of it is as soon as you hook things up with the broadcasting software,

01:13:04 rather than just doing like screen sharing, you can set up different scenes and then you

01:13:07 can like have keyboard shortcuts to transition between those scenes.

01:13:11 So you don’t need a production studio with a director calling like, go to camera three,

01:13:14 go to camera two, like onto the screen capture.

01:13:17 Instead you can have control of that.
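
For anyone wanting to script that scene-switching themselves, here is a minimal sketch; none of it comes from the conversation. It assumes OBS with the obs-websocket 5.x server enabled on its default port 4455 and authentication turned off, and the scene names are placeholders for your own setup.

```python
# Minimal sketch: switch OBS scenes programmatically over obs-websocket 5.x.
# Assumes OBS is running, the WebSocket server is enabled on the default
# port 4455, and authentication is disabled (otherwise an auth handshake
# is needed after the Hello message).
import json
from websocket import create_connection  # pip install websocket-client

def switch_scene(scene_name: str, url: str = "ws://localhost:4455") -> None:
    ws = create_connection(url)
    ws.recv()  # Hello (op 0) from the server
    ws.send(json.dumps({"op": 1, "d": {"rpcVersion": 1}}))  # Identify
    ws.recv()  # Identified (op 2)
    ws.send(json.dumps({
        "op": 6,  # Request
        "d": {
            "requestType": "SetCurrentProgramScene",
            "requestId": "switch-1",
            "requestData": {"sceneName": scene_name},
        },
    }))
    ws.recv()  # RequestResponse (op 7)
    ws.close()

# Bind calls like these to your own hotkeys, one per scene:
# switch_scene("Face Camera")
# switch_scene("Paper Overhead")
```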

01:13:18 And it took a little bit of practice and I would mess it up now and then, but I think

01:13:21 I had it decently smooth such that, you know, I’m talking to the camera and then we’re doing

01:13:25 something on the paper.

01:13:26 Then we’re doing like a, um, playing with a Desmos graph or something.

01:13:31 And something that I think in the past would have required a production team, you can actually

01:13:34 do as a solo operation, um, and in particular as a teacher.

01:13:38 And I think it’s worth it to try to do that because, uh, two reasons, one, you might get

01:13:42 more engagement from the students, but the biggest reason I think one of the like best

01:13:46 things that can come out of this pandemic education wise is if we turn a bunch of teachers

01:13:50 into content creators.

01:13:51 And if we take lessons that are usually done in these one-off settings and start to

01:13:55 get in the habit of, um, sometimes I’ll use the phrase “commoditizing explanation,” where

01:14:01 what you want is, whatever the thing a student wants to learn is, to have it covered well once.

01:14:06 It just seems inefficient to me that that lesson is taught millions of times over in

01:14:11 parallel across many different classrooms in the world.

01:14:14 Like year to year, you’ve got a given algebra one lesson that’s just taught like literally

01:14:18 millions of times, um, by different people.

01:14:21 What should happen is that there’s a small handful of explanations online that exist,

01:14:27 so that when someone needs that explanation, they can go to it, and the time in classroom

01:14:30 is spent on all of the parts of teaching and education that aren’t explanation, which is

01:14:34 most of it.

01:14:35 Right.

01:14:36 Um, and the way to get there is to basically have more people who are already explaining,

01:14:40 publish their explanations and have it in a publicized forum.

01:14:43 So during a pandemic, you have people automatically creating online content, cause

01:14:49 it has to be online. But getting into the habit of doing it in a way that

01:14:53 doesn’t just feel like a Zoom call that happened to be recorded, but actually feels like

01:14:57 a piece that was always going to be publicized to more people than just your students, that

01:15:03 can be really powerful.

01:15:05 And there’s an improvement process there, being self-critical and growing. Like,

01:15:11 you know, I guess YouTubers go through this process of putting out some content,

01:15:17 and nobody caring about it, and then trying to figure out, basically improving,

01:15:24 figuring out, why did nobody care?

01:15:28 What can I do better? And they come up with all kinds of answers, which may or may not

01:15:31 be correct, but it doesn’t matter, because the answer leads to improvement.

01:15:35 So you’re being constantly self-critical, self-analytical, it would be better to say.

01:15:40 So you think of like, how can I make the audio better?

01:15:43 Like all the basic things.

01:15:45 Maybe one question to ask, cause, uh, well, by way of context: Russ Tedrake is a robotics

01:15:52 professor at MIT, one of my favorite people, a big fan of yours.

01:15:55 Uh, he watched our first conversation.

01:15:57 I just interviewed him a couple of weeks ago.

01:16:01 He, uh, he teaches this course on underactuated robotics, which is about, um, robotic

01:16:08 systems where you can’t control everything. Like, we as humans, when

01:16:13 we walk, we’re always falling forward, and that’s gravity.

01:16:17 You can’t control it.

01:16:18 You just hope you can catch yourself, but that’s not at all guaranteed.

01:16:21 It depends on the surface.

01:16:23 So that’s underactuated.

01:16:24 You can’t control everything.

01:16:26 The number of actuators, uh, the degrees of freedom you have, is not enough to fully control

01:16:31 the system.
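
A compact way to state that condition, as a sketch of the standard definition rather than anything said here: write the robot’s dynamics with a control input and compare the rank of the input matrix to the number of degrees of freedom.

```latex
% Dynamics with configuration q and control input u:
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = B\,u
% Underactuated when the inputs can't command arbitrary accelerations,
% i.e. fewer independent actuators than degrees of freedom:
\operatorname{rank}(B) < \dim(q)
```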

01:16:32 So I don’t know.

01:16:33 It’s a really, I think, beautiful, fascinating class.

01:16:35 He puts it online.

01:16:37 Um, it’s quite popular.

01:16:39 He does an incredible job teaching.

01:16:40 He puts it online every time, but he’s kind of been interested in like crisping it up,

01:16:45 like, you know, making it, uh, you know, innovating in different kinds of ways.

01:16:50 And he was inspired by the work you do, because I think in his work, he can do similar kinds

01:16:56 of explanations as you’re doing, like revealing the beauty of it and spending like months

01:17:01 in preparing a single video.

01:17:03 Uh, and he’s interested in how to do that.

01:17:06 That’s why he listened to the conversation.

01:17:07 He’s playing with Manim, but he had this question of, you know, um, like in

01:17:16 my apartment where we did the interview, I have curtains, a black

01:17:21 curtain. Not this, uh, this is an adjacent mansion that we’re in,

01:17:28 but I basically just have a black curtain, whatever, that, you know,

01:17:33 makes it really easy to set up a filming situation with the cameras that we have here, these microphones.

01:17:38 He was asking, you know, what kind of equipment do you recommend?

01:17:41 I guess your blog post is a good one.

01:17:43 I said, I don’t recommend mine; this is excessive and actually really hard to work with.

01:17:49 So I wonder, I mean, uh, is there something you would recommend in terms of equipment?

01:17:55 Like, do you think lapel mics, USB mics, what do you…

01:18:00 For my narration, I use a USB mic; for the streams, I used a lapel mic. The narration one,

01:18:06 it’s a Blue Yeti.

01:18:07 Um, I’m forgetting actually the name of the lapel mic, but it was probably a Rode

01:18:12 of some kind.

01:18:13 Um, but is it hard to figure out how to make the audio sound good?

01:18:17 Oh, I mean, listen to all the early videos on my channel, and clearly I’m terrible

01:18:21 at this, for some reason.

01:18:23 Um, I just couldn’t get audio right for a while.

01:18:25 I think it’s weird when you hear your own voice.

01:18:28 So you hear it, you’re like, this sounds weird, and it’s hard to tell whether it sounds weird because

01:18:31 you’re not used to your own voice or because there are actual audio artifacts at play.

01:18:36 Um, so, uh, and then video, just for the lockdown streams, just the cameras, like you said,

01:18:43 streaming somehow through the, yeah, there were two GH5 cameras.

01:18:47 One was mounted overhead over a piece of paper.

01:18:49 You could also use an iPad or a Wacom tablet to do your writing electronically,

01:18:53 but I just wanted the paper feel. Um, one on the face.

01:18:57 There’s two.

01:18:58 Um, again, I don’t know, I’m like just not actually the one to ask this cause I like

01:19:02 animate stuff usually, but, uh, each of them like has a compressor object that makes it

01:19:08 such that the camera output goes into the computer USB, but like gets compressed before

01:19:12 it does that.

01:19:13 The, the live aspect of it, do you, do you regret doing it live?

01:19:20 Not at all.

01:19:21 Um, I think I do think the content might be like much less sharp and tight than if it

01:19:26 were something, even that I just recorded like that and then edited later.

01:19:30 But I do like something that I do to be out there to show like, Hey, this is what it’s

01:19:34 like.

01:19:35 Raw.

01:19:36 This is what it’s like when I make mistakes.

01:19:37 Um, this is like the pace of thinking, um, I like the live interaction of it.

01:19:41 I think that made it better.

01:19:42 Uh, I probably would do it on a different channel.

01:19:45 I think, um, if I did series like that in the future, just because it’s, it’s a different

01:19:49 style.

01:19:50 It’s probably a different target audience, and, um, it would keep clean what 3Blue1Brown

01:19:53 is about versus, uh, the benefits of live lectures.

01:19:58 Do you, uh, suggest, in this time of COVID, that people like Russ or other educators try

01:20:04 to go for the shorter, like 20-minute videos that are really well planned out

01:20:12 or scripted?

01:20:13 You really think through, you slowly design.

01:20:15 So it’s not live.

01:20:16 Do you see like that being an important part of, um, what they do?

01:20:20 Yeah.

01:20:21 Well, what I think teachers like Russ should do is, um, choose the small handful of topics

01:20:25 that they’re going to do just really well.

01:20:27 They want to create the best short explanation of it in the world that will be one of those

01:20:31 handfuls in a world where you have commoditized explanation, right?

01:20:35 Most of the lectures should be done just normally.

01:20:37 Um, so put thought and planning into it.

01:20:39 I’m sure he’s a wonderful teacher and like knows all about that, but maybe choose those

01:20:42 small handful of topics.

01:20:44 Um, something that’s beneficial for me sometimes is to do sample lessons with people on that topic

01:20:49 to get some sense of how other people think about it.

01:20:52 Let that inform how you want to, um, edit it or script it or whatever format you want

01:20:56 to do.

01:20:57 Some people are comfortable just explaining it and editing later.

01:20:59 I’m more comfortable like writing it out and thinking in that setting.

01:21:02 Yeah.

01:21:03 It’s kind of sad.

01:21:04 Sorry to interrupt.

01:21:05 Uh, it’s, it’s a little bit sad to me to see how much knowledge is lost.

01:21:10 Like just, just like you mentioned, there’s professors, like we can take my dad, for example,

01:21:16 to blow up his ego a little bit, but he’s a, he’s a great teacher and he knows plasma,

01:21:21 plasma chemistry, plasma physics really well.

01:21:23 So he can very simply explain some beautiful, but otherwise, uh, complicated concepts.

01:21:31 And it’s sad that like, if you Google plasma or like for plasma physics, like there’s no

01:21:37 videos.

01:21:38 And just imagine if every one of those excellent teachers like your father or like Russ, um,

01:21:43 even if they just chose one topic this year, they’re like, I’m going to make the best video

01:21:46 that I can on this topic.

01:21:48 If every one of the great teachers did that, the internet would be replete and it’s already

01:21:52 replete with great explanations.

01:21:53 But it would be even more so with all the niche, great explanations and like anything

01:21:56 you want to learn.

01:21:57 Um, and there’s a self-interest to it for teachers, too. So

01:22:02 if you take Russ, for example, it’s not that he’s teaching something random; he teaches his

01:22:08 main thing, the thing he’s deeply passionate about.

01:22:11 And from a selfish perspective, it’s also just, I mean, it’s like

01:22:20 publishing a paper in a really accessible publication, like Nature has with its Letters.

01:22:27 It’s just going to guarantee that your work, that your passion, is seen by a huge number

01:22:35 of people, whatever the definition of huge is, doesn’t matter.

01:22:39 It’s much more than it otherwise, uh, would be.

01:22:42 And it’s those lectures that tell early students what to be interested in at the moment.

01:22:47 I think students are disproportionately interested in the things that are well represented on

01:22:51 YouTube.

01:22:52 So to any educator out there, if you’re wondering, Hey, I want more like grad students in my

01:22:56 department, like what’s the best way to recruit grad students?

01:22:59 It’s like, make the best video you can and then wait eight years.

01:23:02 And then you’re going to have a pile of like excellent grad students for that department.

01:23:05 And one of the lessons I think your channel teaches is there’s appeal of explaining just

01:23:12 something beautiful, explaining it cleanly, technically not doing a marketing video about

01:23:19 why topology is great.

01:23:21 There’s, yeah, there’s people interested in this stuff.

01:23:24 I mean, uh, one of the greatest channels, it’s not even a math channel, but the

01:23:29 channel with the greatest math content, is Vsauce.

01:23:33 Imagine you were to propose making a video that explains the Banach-Tarski paradox substantively,

01:23:38 right?

01:23:39 Like, not shying away from it, maybe not describing things in terms of the group-theoretic

01:23:45 terminology that you’d usually see in a paper, but the actual results that went into

01:23:51 this idea of like breaking apart a sphere, proposing that to like a network TV station

01:23:56 saying, yeah, I’m going to do this in-depth talk on the Banach-Tarski paradox.

01:23:59 I’m pretty sure it’s going to reach 20 million people.

01:24:02 It’s like, get out of here.

01:24:04 Like no, no one cares about that.

01:24:05 No one’s interested in anything even anywhere near that.

01:24:08 But then you have Michael’s quirky personality around it.

01:24:11 And just people that are actually hungry for that kind of depth, um, then you don’t need

01:24:16 like the approval of some higher network.

01:24:19 You can just do it and let the people speak for themselves.

01:24:22 So I think, you know, if your father was to make something on plasma physics, or, um, if

01:24:26 we were to have, like, uh, underactualized robotics… underactuated, underactuated, yes,

01:24:32 not underactualized. Plenty actualized, underactuated robotics.

01:24:37 Robotics is underactualized currently.

01:24:41 So even if it’s things that you might think are niche, I bet you’ll be surprised by how

01:24:46 many people, um, actually engage with it really deeply.

01:24:49 Although I just psychologically watching him, I can’t speak for a lot of people.

01:24:52 I can speak for my dad.

01:24:53 I think there’s a, there’s a little bit of a skill gap, but I think that could be overcome.

01:25:00 That’s pretty basic.

01:25:01 None of us know how to make videos when we start. The first stuff I made was terrible

01:25:04 in a number of respects.

01:25:05 Like, look at the earliest videos of anyone on YouTube, except for Captain Disillusion,

01:25:09 and they’re all terrible versions of whatever they are now.

01:25:13 But the thing I’ve noticed, especially with world experts, is it’s the same thing

01:25:19 that I’m sure you went through, which is, um, fear of embarrassment.

01:25:25 Like, they definitely feel it, for the same reason.

01:25:29 Like I feel that anytime I put out a video, I don’t know if you still feel that.

01:25:35 But like, I don’t know, it’s this imposter syndrome.

01:25:39 Like who am I to talk about this?

01:25:41 And that that’s true for like even things that you’ve studied for like your whole life.

01:25:46 Uh, I don’t know.

01:25:47 It’s scary to post stuff on YouTube.

01:25:50 It is scary.

01:25:51 Uh, I honestly wish that more of the people who had that modesty, to say, who am I to

01:25:57 post this,

01:25:58 were the ones actually posting it.

01:26:00 That’s right.

01:26:01 I mean, the honest problem is that a lot of the educational content is posted by people

01:26:04 who were just starting to research it two weeks ago and are on a certain schedule,

01:26:09 and who maybe should think, who am I to explain, choose your favorite topic,

01:26:15 quantum mechanics or something.

01:26:17 Um, and the people who have the self awareness, uh, to not post are probably the people also

01:26:23 best positioned to give a good, honest explanation of it.

01:26:27 That’s why there’s a lot of value in a channel like Numberphile, where they basically trap

01:26:32 a really smart person and force them to explain stuff on a brown sheet of paper.

01:26:38 So, but of course that’s not scalable as a single channel.

01:26:41 The beautiful thing would be if people took it into their

01:26:45 own hands, uh, educators, which is, again, circling back, I do think the pandemic will

01:26:51 serve to force a lot of people’s hands.

01:26:54 You’re going to be making online content anyway.

01:26:56 It’s happening, right?

01:26:58 Just hit that publish button and see how it goes.

01:27:01 Yeah.

01:27:02 See how it goes.

01:27:03 The cool thing about YouTube is it might not go for a while, but like 10 years later, right?

01:27:10 Yeah.

01:27:11 It’ll be like, the thing, what people don’t understand with YouTube, at least

01:27:14 for now, at least that’s my hope with it, is, uh, it’s a legacy.

01:27:20 It’s literally better than publishing a book in terms of the legacy.

01:27:24 It will live for a long, long time.

01:27:27 Of course it’s, um, one of the things I mentioned Joe Rogan before, it’s kinda, there’s a sad

01:27:34 thing cause I’m a fan.

01:27:36 He’s moving to Spotify.

01:27:38 Yeah.

01:27:39 Yeah.

01:27:40 Nine digit numbers will do that to you.

01:27:41 Yeah.

01:27:42 But he doesn’t really, he was one of the people that doesn’t actually care that much

01:27:46 about money.

01:27:47 Like, having talked to him here, it wasn’t because of money.

01:27:50 It’s because he legitimately thinks that they’re going to do a better job.

01:27:58 So from his perspective, with YouTube, you have to understand where they’re

01:28:03 coming from.

01:28:04 YouTube has been cracking down on people. You know, Joe Rogan talks to Alex Jones

01:28:10 and conspiracy theories and stuff.

01:28:13 And YouTube is really careful about that kind of stuff.

01:28:16 And that’s not a good feeling.

01:28:18 Like, Joe doesn’t feel like YouTube was on his side.

01:28:22 You know, he often has videos that they don’t put in trending that obviously

01:28:28 should be in trending, because they’re nervous about, you know, is this

01:28:34 content going to upset people, all that kind of stuff, have misinformation.

01:28:41 And that’s not a good place for a person to be in.

01:28:44 And Spotify is giving them a, we’re never going to censor you.

01:28:48 We’re never going to do that.

01:28:50 But the reason I bring that up, whatever you think about that, I personally think it’s bullshit,

01:28:55 because podcasting should be free and not constrained to a platform.

01:28:59 It’s pirate radio.

01:29:00 What the hell?

01:29:01 You can’t, as much as I love Spotify, you can’t just, you can’t put fences around it.

01:29:08 But anyway, the reason I bring that up is Joe’s going to remove his entire library from

01:29:13 YouTube.

01:29:14 Whoa, really?

01:29:15 I didn’t know that.

01:29:16 His full-length ones. The clips are going to stay, but the full-length videos are all, I mean,

01:29:20 made private or deleted.

01:29:22 That’s part of the deal.

01:29:23 And like, that’s the first time where I was like, Oh, YouTube videos might not live forever.

01:29:29 Like things you find like, okay, I’m sorry.

01:29:32 This is why you need an IPFS or something where it’s like, if there’s a content link,

01:29:36 are you familiar with this system at all?

01:29:39 Like right now, if you have a URL, it points to a server.

01:29:41 There’s like a system where the address points to content and then it’s like distributed.

01:29:46 So you, you can’t actually delete what’s at an address because it’s, it’s content addressed.

01:29:50 And as long as there’s someone on the network who hosts it, it’s always accessible at the

01:29:54 address that it once was.
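
As a toy sketch of that content-addressing idea (deliberately simplified; real IPFS uses multihashes, CIDs, and a peer-to-peer network rather than a dictionary):

```python
# Content addressing in miniature: the address of data is a hash of the
# data itself, so the address never dangles as long as someone hosts the
# bytes, and the content behind it can't silently change.
import hashlib

store = {}  # stand-in for whoever on the network happens to host the content

def put(content: bytes) -> str:
    address = hashlib.sha256(content).hexdigest()
    store[address] = content
    return address

def get(address: str) -> bytes:
    content = store[address]
    # Anyone can verify they received the right bytes: re-hash and compare.
    assert hashlib.sha256(content).hexdigest() == address
    return content

addr = put(b"a video that should outlive any one server")
print(addr[:16], get(addr))
```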

01:29:56 But I mean, that raises a question.

01:29:58 I’m not going to put you on the spot, but like somebody like Vsauce, right?

01:30:03 Spotify comes along and gives him, let’s say $100 billion.

01:30:07 Okay.

01:30:08 Let’s say some crazy number and then removes it from YouTube, right?

01:30:13 It made me, I don’t know, for some reason I thought YouTube was forever.

01:30:19 I don’t think it will be.

01:30:20 I mean, you know, another variant that this might take is like, uh, that, you know, um,

01:30:25 you fast forward 50 years and, uh, you know, Google or Alphabet isn’t the company that

01:30:30 it once was.

01:30:31 And it’s kind of struggling to make ends meet.

01:30:33 And, you know, it’s been supplanted by whoever wins the AR game or whatever it

01:30:38 might be.

01:30:39 And then they’re like, you know, all of these videos that we’re hosting are pretty costly.

01:30:43 So we’re just, we’re going to start deleting the ones that aren’t watched that much and

01:30:47 tell people to like try to back them up on their own or whatever it is.

01:30:51 Um, or even if it does exist in some form forever, it’s like if people are, um, not

01:30:56 habituated to watching YouTube in 50 years, they’re watching something else, which seems

01:30:59 pretty likely.

01:31:00 Like it would be shocking if YouTube remained as popular as it is now indefinitely into

01:31:06 the future.

01:31:07 So, uh, it won’t be forever.

01:31:10 Makes me sad still, cause it’s such a nice, it’s just like you said, the canonical

01:31:16 videos.

01:31:17 Sorry.

01:31:18 I didn’t mean to interrupt.

01:31:19 You know, you should get Juan Benet on the, uh, on the thing and then talk to him about

01:31:21 permanence.

01:31:22 I think you would have a good conversation.

01:31:24 Who’s that?

01:31:25 So he’s the one that founded this thing called IPFS that I’m talking about.

01:31:28 And if you have him talk about basically what you’re describing, like, Oh, it’s sad that

01:31:32 this isn’t forever.

01:31:33 Then you’ll get some articulate pontification around it that’s like been pretty well thought

01:31:38 through.

01:31:39 Uh, but yeah, I do see YouTube, just like you said, as a, as a place, like what your

01:31:44 channel creates, which is like a set of canonical videos on a topic.

01:31:47 Now others could create videos on that topic as well, but as a collection, it creates a

01:31:54 nice set of places to go.

01:31:56 Uh, if you’re curious about a particular topic and it seems like coronavirus is a nice opportunity

01:32:02 to, uh, put that knowledge out there in the world at, uh, MIT and beyond, I have to talk

01:32:10 to you a little bit about machine learning, deep learning and so on.

01:32:13 Again, we talked about last time you have a set of beautiful videos on neural networks.

01:32:19 Uh, let me ask you first, what is the most beautiful aspect of neural networks and machine

01:32:28 learning to you, like for making those videos from watching how the field is evolving?

01:32:35 Is there something mathematically or in applied sense, just beautiful to you about them?

01:32:42 Well, I think what I would go to is the layered structure and how, um, you can have what feel

01:32:48 like qualitatively distinct things happening, going from one layer to another, but that

01:32:52 are, um, following the same mathematical rule when you look at it as a piece of math.

01:32:56 It’s like, you’ve got a nonlinearity and then you’ve got a matrix multiplication.

01:33:00 That’s what’s happening on all the layers.

01:33:02 Um, but especially if you look at some of the visualizations that, uh, Chris

01:33:06 Olah has done with respect to, um, convolutional nets that have been trained on ImageNet, trying

01:33:12 to say, what does this neuron do?

01:33:14 What does this, uh, family of neurons do?

01:33:17 What you can see is that, um, the ones closer to the input side are picking up on very low

01:33:22 level ideas like the texture, right?

01:33:24 And then as you get further back, you have higher level ideas.

01:33:26 Like, where are the eyes in this picture?

01:33:29 And then how do the eyes form an animal? Is this animal a cat or a dog or a deer?

01:33:33 You have this series of qualitatively different things happening, even though it’s the same

01:33:37 piece of math on each one.
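
To make “the same piece of math on each layer” concrete, here is a minimal NumPy sketch; the sizes and the choice of ReLU are illustrative only, not anything specified in the conversation.

```python
# Every layer applies the identical rule: a matrix multiplication
# followed by an elementwise nonlinearity.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [784, 128, 64, 10]  # e.g. an MNIST-shaped network

# One (W, b) pair per layer; the rule applied with them never changes.
params = [(rng.standard_normal((m, n)) * 0.01, np.zeros(m))
          for n, m in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for W, b in params:
        x = np.maximum(0.0, W @ x + b)  # multiply, then nonlinearity
    return x

print(forward(rng.standard_normal(784)).shape)  # (10,)
```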

01:33:39 So that’s a pretty beautiful idea, that you can have a generalizable object that

01:33:44 runs through the layers of abstraction, which in some sense constitutes intelligence:

01:33:50 having, um, those many different layers of understanding of something, forming abstractions

01:33:55 in an automated way.

01:33:57 Exactly.

01:33:58 It’s automated abstracting, which, I mean, that just feels very powerful.

01:34:02 Um, and the idea that it can be so simply mathematically represented.

01:34:06 I mean, a ton of like modern ML research seems a little bit like you do a bunch of ad hoc

01:34:10 things, then you decide which one worked and then you retrospectively come up with the

01:34:14 mathematical reason that it always had to work.

01:34:16 Um, but you know, who cares how you came to it when you have like that elegant piece of

01:34:19 math?

01:34:20 Uh, it’s hard not to just smile seeing it work in action.

01:34:24 Well, and when you talked about topology before, one of the really interesting things that is

01:34:30 beginning to be investigated, under kind of the field of like the science of deep learning,

01:34:34 is like the craziness of the surface that, uh, is trying to be optimized, uh, in

01:34:42 neural networks.

01:34:43 I mean, the amount of local minima, local optima there is in these surfaces, and somehow

01:34:51 a dumb gradient descent algorithm was able to find really good solutions.

01:34:55 That’s like, that’s really surprising.

01:34:58 Well, so on the one hand it is, but also it’s not terribly surprising that

01:35:04 you have these interesting points that exist when you make your space so high dimensional,

01:35:08 like GPT-3, what did it have?

01:35:10 175 billion parameters.

01:35:12 So it doesn’t feel as mesmerizing to think about, Oh, there’s some surface of intelligent

01:35:19 behavior in this crazy high dimensional space.

01:35:21 It’s like, there’s so many parameters that of course, but what’s more interesting is

01:35:24 like, how is it that you’re able to efficiently get there, which is maybe what you’re describing,

01:35:28 that something as dumb as gradient descent does it, but the reason that gradient

01:35:35 descent works well with neural networks and not just, you know, choose however you want

01:35:38 to parameterize this space and then like apply gradient descent to it is that that layered

01:35:42 structure lets you decompose the derivative in a way that makes it computationally feasible.
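
A sketch of that decomposition (my own illustration, assuming tanh layers and no biases for brevity): the chain rule factors the derivative layer by layer, so the gradient flows backward through the same stages the data flowed forward through.

```python
import numpy as np

def forward(x, Ws):
    # Record each layer's activation; the backward pass reuses them.
    activations = [x]
    for W in Ws:
        x = np.tanh(W @ x)
        activations.append(x)
    return activations

def backward(activations, Ws, grad_out):
    # Walk the layers in reverse, multiplying in each local derivative.
    grads = []
    g = grad_out
    for W, a_in, a_out in zip(reversed(Ws), reversed(activations[:-1]),
                              reversed(activations[1:])):
        g = g * (1 - a_out ** 2)          # derivative of tanh at this layer
        grads.append(np.outer(g, a_in))   # gradient with respect to this W
        g = W.T @ g                       # hand the gradient to the layer below
    return list(reversed(grads))

rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
acts = forward(rng.standard_normal(3), Ws)
print([g.shape for g in backward(acts, Ws, np.ones(2))])  # [(4, 3), (2, 4)]
```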

01:35:47 Um, yeah, it’s just that there’s so many good solutions, probably infinitely

01:35:54 many good solutions, not best solutions, but good solutions.

01:35:58 That’s what’s interesting.

01:36:00 It’s similar to, uh, Stephen Wolfram has this idea that if you just look at the

01:36:07 space of all computations, the space of basically all algorithms, you’d be surprised how many

01:36:13 of them are actually intelligent.

01:36:15 Like if you just randomly pick from the bucket, uh, that’s surprising.

01:36:19 We tend to think like a tiny, tiny minority of them would be intelligent, but his sense

01:36:26 is like, it seems weirdly easy to find computations that do something interesting.

01:36:32 Well, okay, so from like a Kolmogorov complexity standpoint, almost

01:36:38 everything will be interesting.

01:36:40 What’s fascinating is to find the stuff that’s describable with low information, but still

01:36:44 does interesting things.

01:36:45 Uh, like one fun example of this, you know, um, Shannon’s noisy channel

01:36:51 coding theorem in information theory, that basically says if, you know, I

01:36:55 want to send some bits to you, um, maybe, uh, some of them are going to get flipped.

01:36:59 Uh, there’s some noise along the channel.

01:37:01 I can come up with some way of coding it.

01:37:04 That’s resilient to that noise.

01:37:06 That’s very good.

01:37:07 Um, and then he quantitatively describes what very good is.

01:37:10 What’s funny about how he proves the existence of good error correction codes is rather than

01:37:15 saying like, here’s how to construct it or even like a sensible nonconstructive proof.

01:37:20 The nature of his nonconstructive proof is to say, um, if we chose a random encoding,

01:37:25 it would be almost at the limit, which is weird because then it took decades for people

01:37:30 to actually find any that were anywhere close to the limit.

01:37:33 And what his proof was saying is choose a random one.

01:37:35 And it’s like the best kind of encoding you’ll ever find.

01:37:39 But what that tells us is that sometimes when you choose a random element from this

01:37:44 ungodly huge set, that’s a very different task from finding an efficient way to actually

01:37:49 describe it.

01:37:50 Because in that case, to actually implement the random element as a bit of code, you

01:37:52 would just have this huge table, um, telling you how to encode one thing into another.

01:37:58 That’s totally computationally infeasible.
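
A toy version of that trade-off, as a sketch (nothing like Shannon's actual analysis, and the parameters are arbitrary): pick a completely random codebook for 4-bit messages and decode by nearest Hamming distance. It works, but the code is literally the giant lookup table he describes, and it grows as 2 to the k entries.

```python
import itertools
import random

random.seed(0)
k, n, flip_p = 4, 12, 0.05  # 4 message bits, 12 channel bits, 5% bit flips

# A completely random codebook: one random n-bit word per k-bit message.
codebook = {msg: tuple(random.randint(0, 1) for _ in range(n))
            for msg in itertools.product([0, 1], repeat=k)}

def transmit(word):
    # Flip each bit independently with probability flip_p.
    return tuple(b ^ (random.random() < flip_p) for b in word)

def decode(received):
    # Nearest-codeword decoding by Hamming distance.
    def dist(msg):
        return sum(a != b for a, b in zip(codebook[msg], received))
    return min(codebook, key=dist)

trials = 200
errors = sum(decode(transmit(codebook[m])) != m
             for m in codebook for _ in range(trials))
print(f"block error rate: {errors / (len(codebook) * trials):.4f}")
```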

01:38:00 So on the side of like how many possible programs are interesting in some way, it’s like, yeah,

01:38:06 tons of them.

01:38:07 But the much, much more delicate question is when you can have a low information description

01:38:11 of something that still becomes interesting.

01:38:14 And thereby this kind of gives you a blueprint for how to engineer that kind of thing.

01:38:18 Right.

01:38:19 Yeah.

01:38:20 Chaos theory is another good instance there where it’s like, yeah, a ton of things are

01:38:22 hard to describe, but how do you have ones that have a simple set of governing equations

01:38:27 that remain like arbitrarily hard to describe?

01:38:30 Well, let me ask you, uh, you mentioned GPT-3.

01:38:33 It’s interesting to ask, uh, what are your thoughts about the recently released

01:38:40 OpenAI GPT-3 model that I believe is already trying to learn how to communicate like Grant

01:38:46 Sanderson?

01:38:47 You know, I think I got an email a day or two ago about someone who wanted to, um, try

01:38:51 to use GPT-3 with Manim, where you would like give it a high level description of something

01:38:57 and then it’ll like automatically create the mathematical animation, like trying to put

01:39:01 me out of a job here.

01:39:03 I mean, it probably won’t put you out of a job, but it’ll create something visually beautiful

01:39:08 for sure.

01:39:09 I would be surprised if that worked as stated, but maybe there’s like variants of it like

01:39:15 that you can get to.

01:39:16 Um, I mean like a lot of those demos, it’s interesting.

01:39:18 I think, uh, there’s a lot of failed experiments, like depending on how you prime the thing,

01:39:26 you’re going to have a lot of failures, certainly with code and program synthesis.

01:39:30 Most of it won’t even run, but eventually I think if you, if you’re, if you pick the

01:39:35 right examples, you’ll be able to generate something cool.

01:39:38 And I think even that’s good enough. Even if you’re being

01:39:42 very selective, it’s still cool that something can be generated.

01:39:46 Yeah.

01:39:47 That’s a huge value.

01:39:48 Um, I mean, think of the writing process.

01:39:50 Sometimes a big part of it is just getting a bunch of stuff on the page and then you

01:39:52 can decide what to whittle down to.

01:39:54 So if it can be used in like a man-machine symbiosis, where it’s just giving you a spew

01:39:59 of potential ideas that then you can refine down, um, like it’s serving as the generator

01:40:05 and then the human serves as the refiner.

01:40:07 That seems like a pretty powerful dynamic.

01:40:09 Yeah.

01:40:10 Have you, uh, have you gotten a chance to see any of the demos like on Twitter?

01:40:14 Is there a favorite you’ve seen or?

01:40:15 Oh, my absolute favorite.

01:40:17 Yeah.

01:40:18 Uh, so Tim Blais, who runs a channel called A Capella Science, he was like tweeting a bunch

01:40:23 about playing with it.

01:40:24 Um, and so he, so GPT-3 was trained on the internet from before COVID.

01:40:30 So in a sense it doesn’t know about the coronavirus.

01:40:33 So what he seeded it with was just a short description about like, um, a novel virus,

01:40:37 uh, emerges in Wuhan, China and starts to spread around the globe.

01:40:41 What follows is a month by month description of what happens, January, colon, right?

01:40:46 That’s what he seeded it with.

01:40:47 So then what GPT-3 generates is like January, then a paragraph of description, February

01:40:51 and such.

01:40:52 And it’s the funniest thing you’ll ever read because, um, it predicts a zombie apocalypse,

01:40:58 which of course it would because it’s trained on like the internet, the stories, but what

01:41:02 you see unfolding is a description of COVID-19 if it were a zombie apocalypse.

01:41:08 And like the early aspects of it are kind of shockingly in line with what’s reasonable

01:41:12 and then it gets out of hand so quickly.

01:41:14 And the other flip side of that is, uh, I wouldn’t be surprised if it’s onto something

01:41:19 at some point here when, you know, 2020 has been full of surprises, who knows, like we

01:41:25 might all be in like this crazy militarized zone as it predicts just a couple of months

01:41:30 off.

01:41:31 Yeah.

01:41:32 I think there’s definitely an interesting tool of storytelling.

01:41:36 It has struggled with mathematics, which is interesting, or even in just numbers. It’s

01:41:40 not able to generate like patterns, you know, like you give it, um, like five

01:41:49 digit numbers and it’s not able to figure out the sequence, you know, or like, um, I

01:41:55 didn’t look into it too much, but I’m talking about sequences, like the Fibonacci numbers,

01:42:00 to see how far it can go, because obviously it’s leveraging stuff from the internet and

01:42:04 it starts to lose it, but it is also cool that I’ve seen it able to generate some interesting

01:42:09 patterns, um, that are mathematically correct.

01:42:12 Yeah.

01:42:13 I honestly haven’t dug into like what’s going on within it, uh, in a way that I can speak

01:42:18 intelligently to. I guess it doesn’t surprise me that it’s bad at numerical patterns because

01:42:24 I mean, maybe I should be more impressed with it, but like that requires having, um, a weird

01:42:30 combination of an intuitive and, uh, a formulaic worldview.

01:42:35 So you’re not just going off of intuition.

01:42:37 When you see Fibonacci numbers, you’re not saying like intuitively, what do I think will

01:42:39 follow the 13?

01:42:40 Like I’ve seen patterns a lot where like 13s are followed by 21s instead.

01:42:45 It’s like, the way you’re starting to see the shape of things is by knowing what hypotheses

01:42:50 to test where you’re saying, oh, maybe it’s generated based on the previous terms or maybe

01:42:54 it’s generated based on like multiplying by a constant or whatever it is you like have

01:42:58 a bunch of different hypotheses and your intuitions are around those hypotheses, but you still

01:43:01 need to actively test it.
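
In code, that kind of active hypothesis testing might look something like this sketch (the two hypotheses are my own toy choices; the last test sequence is the circle-regions one that comes up a bit later in the conversation):

```python
def is_additive(seq):
    # Hypothesis: each term is the sum of the previous two (Fibonacci-like).
    return all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq)))

def is_geometric(seq):
    # Hypothesis: constant ratio, tested without floating point division.
    return all(seq[i] * seq[i - 2] == seq[i - 1] ** 2 for i in range(2, len(seq)))

for seq in [[1, 1, 2, 3, 5, 8, 13, 21],   # Fibonacci: additive
            [3, 6, 12, 24, 48],           # doubling: geometric
            [1, 2, 4, 8, 16, 31]]:        # looks geometric, then breaks
    print(seq, "additive:", is_additive(seq), "geometric:", is_geometric(seq))
```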

01:43:04 Um, and it seems like GPT-3 is extremely good at, um, like that sort of pattern matching

01:43:10 recognition that usually is very hard for computers.

01:43:13 That is what humans get good at through expertise and exposure to lots of things.

01:43:17 It’s why it’s good to learn from as many examples as you can rather than just from the definitions,

01:43:21 to get that level of intuition. But to actually concretize it into a piece of math,

01:43:27 you do need to, um, like test your hypotheses and if not prove it, um, like have an actual

01:43:33 explanation for what’s going on, not just a, uh, a pattern that you’ve seen.

01:43:37 Yeah.

01:43:38 But then the flip side, to play devil’s advocate: that’s a very kind of probably correct

01:43:43 intuitive understanding of, just like we said, a few layers creating abstractions,

01:43:49 but it’s been able to form something that looks like, uh, a compression of the data

01:43:58 that it’s seen, something that looks an awful lot like it understands what the heck it’s talking

01:44:02 about.

01:44:03 Well, I think a lot of understanding is like, I don’t mean to denigrate pattern recognition.

01:44:08 Pattern recognition is most of understanding and it’s super important and it’s super hard.

01:44:12 Um, and so like when it’s demonstrating this kind of real understanding, compressing down

01:44:16 some data, like that, that might be pattern recognition at its finest.

01:44:20 My only point would be that like what differentiates math, I think to a large extent is that, um,

01:44:27 the pattern recognition isn’t sufficient and that the kind of patterns that you’re recognizing

01:44:32 are not like the end goals, but instead they’re, they are the little bits and paths that get

01:44:37 you to the end goal.

01:44:39 That’s certainly true for mathematics in general.

01:44:41 It’s an interesting question if that might, uh, for certain kinds of series of numbers,

01:44:47 it might not be true.

01:44:48 Like for basic ones, you know, like Taylor series, certain kinds

01:44:53 of series, it feels like compressing the internet, uh, is enough to figure out, because those

01:45:01 patterns in some form appear in the text somewhere.

01:45:05 Yeah.

01:45:06 Well, I mean, there’s, uh, there’s all sorts of wonderful examples of false patterns in

01:45:09 math where, um, one of the earliest videos I put on the channel was talking about

01:45:13 dividing a circle up using these chords.

01:45:15 And you see this pattern of one, two, four, eight, 16, I was like, okay, pretty easy to

01:45:20 see what that pattern is.

01:45:21 It’s powers of two.

01:45:22 You’ve seen it a million times.

01:45:23 Um, but it’s not powers of two.

01:45:25 The next term is 31.

01:45:27 And so it’s like almost a power of two, but it’s a little bit shy.

01:45:30 And there’s actually a very good explanation for what’s going on.
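
For reference, the count that problem actually follows (this is the formula the video derives; the point here is just how closely it shadows powers of two before drifting away):

```python
from math import comb  # Python 3.8+

def circle_regions(n):
    # Regions formed by n points on a circle with all chords drawn,
    # assuming no three chords meet at a single interior point.
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 11)])
# [1, 2, 4, 8, 16, 31, 57, 99, 163, 256]
```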

01:45:33 Um, but I think it’s a good test of whether you’re thinking clearly about mechanistic

01:45:40 explanations of things, how quickly you jump to thinking it must be powers of two because

01:45:44 the problem itself, there’s really no good way to, I mean, there can’t be a good

01:45:49 way to think about it as like doubling a set because ultimately it doesn’t, but even before

01:45:53 it starts to, it’s not something that screams out as being a doubling phenomenon.

01:45:58 So at best, if it did turn out to be powers of two, it would have only been so very subtly.

01:46:03 And I think the difference between like, you know, a math student making the mistake and

01:46:06 a mathematician who’s experienced in seeing that kind of pattern is that they’ll have

01:46:10 a sense from what the problem itself is, whether the pattern that they’re observing is reasonable

01:46:15 and how to test it.

01:46:16 And like, uh, I would just be very impressed if there was any algorithm that, um, was actively

01:46:23 accomplishing that goal.

01:46:24 Yeah.

01:46:25 Like a learning-based algorithm.

01:46:26 Yeah.

01:46:27 Like a little scientist, I guess.

01:46:29 Basically.

01:46:30 Yeah.

01:46:31 That’s a fascinating thought because GPT-3, these language models are already accomplishing

01:46:36 way more than I expected.

01:46:38 So I’m learning not to doubt, but we’ll get there.

01:46:42 Yeah.

01:46:43 I’m not saying I’d be impressed, but like surprised. Like, I’ll be impressed, but

01:46:46 I think we’ll get there on, um, algorithms doing math like that.

01:46:52 So one of the amazing things you’ve done for the world is to some degree, open sourcing

01:47:00 the tooling that you use to make your videos with Manim, uh, this Python library.

01:47:08 Now it’s quickly evolving because I think you’re inventing new things every time you

01:47:11 make a video.

01:47:12 In fact, I’ve been working on playing around with something.

01:47:17 I wanted to do like an ode to 3Blue1Brown.

01:47:20 Like I love playing Hendrix.

01:47:22 I wanted to do like a cover, you know, of a concept I wanted to visualize, and use Manim.

01:47:27 And I saw that you had like a little piece of code on a Möbius strip, and I tried to

01:47:31 do some cool things with spinning a Möbius strip, like continuing, um, twisting it, I guess

01:47:39 is the term, uh, and it was, uh, it was tough.

01:47:44 So I haven’t figured it out yet.

01:47:45 Well, so I guess the question I want to ask is so many people love it, uh, that you’ve

01:47:50 put that out there.

01:47:51 They want to, uh, do the same thing as I do with Hendrix and want to cover it.

01:47:54 They want to explain an idea using the tool, including Russ.

01:47:58 How would you recommend they try to, I’m very sorry,

01:48:02 they try to go about it, and what kind of choices should they make

01:48:11 to be most effective?

01:48:13 That I can answer.

01:48:14 So I always feel guilty if this comes up because, um, I think of it like this scrappy tool.

01:48:19 It’s like a math teacher who put together some code.

01:48:22 People asked what it was, so they made it open source and they kept scrapping it together.

01:48:26 And there’s a lot, like a lot of things about it that make it harder to work with than it

01:48:29 needs to be that are a function of like me not being a software engineer.

01:48:33 Um, I, I’ve, I’ve put some work this year trying to like make it better and more flexible.

01:48:39 Um, that is still just kind of like a work in progress.

01:48:43 Um, one thing I would love to do is just get my act together about properly integrating

01:48:48 with what like the community wants to work with and like what stuff I work on and making

01:48:53 that, um, not like deviate, uh, and just like actually fostering that community in a way

01:48:58 that I’ve, I’ve been like shamefully neglectful of.

01:49:01 So I’m just always guilty if it comes up.

01:49:03 So let’s put that guilt aside, just kind of Zen, like I’ll pretend like it isn’t terrible

01:49:08 for someone like Russ.

01:49:09 Um, I think step one is like, make sure that what you’re animating should be done programmatically,

01:49:14 because a lot of things maybe shouldn’t.

01:49:16 Um, like if you’re just making a quick graph of something, uh, if it’s a graphical intuition

01:49:20 that maybe has a little motion to it, use Desmos, use Grapher, use GeoGebra, use Mathematica,

01:49:26 certain things that are like really oriented around graphs.

01:49:28 GeoGebra is kind of cool.

01:49:29 It’s super amazing.

01:49:31 You can get very, very far with it.

01:49:33 Um, and in a lot of ways, like it would make more sense for STEM stuff that I do to just

01:49:37 do in GeoGebra, but I kind of have this cycle of liking to try to improve Manim by doing

01:49:42 videos and such.

01:49:43 So, uh, do as I say, not as I do.

01:49:45 The original like thought I had in making Manim was that there’s so many different ways

01:49:49 of representing functions other than graphs, um, in particular things like transformations,

01:49:55 like use movement over time to communicate relationships between inputs and outputs instead

01:49:59 of like X direction and Y direction, um, or like vector fields or things like that.

01:50:04 So I wanted something that was flexible enough that you didn’t feel constrained into a graphical

01:50:08 environment.

01:50:09 Um, by graphical, I mean like graphs with like X coordinate, Y coordinate kind of stuff,

01:50:15 but also make sure that, um, you’re taking advantage of the fact that it’s programmatic.

01:50:20 You have loops, you have conditionals, you have abstraction.

01:50:23 If any of those are like well fit for what you want to teach, to, you know, have a scene

01:50:27 type that you tweak a little bit based on parameters, or to have conditionals so that

01:50:31 things can go one way or another or loops so that you can create these things of like

01:50:34 arbitrarily increasing complexity.

01:50:37 That’s the stuff that’s like meant to be animated programmatically.
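
As a small illustration of that programmatic flavor, here is a sketch in the style of the community edition of Manim (API details vary between versions, so treat this as a sketch rather than canonical usage): the loop generates animation steps that would be tedious to produce by hand.

```python
from manim import Scene, Square, Circle, Create, Transform

class ShrinkingCircles(Scene):
    def construct(self):
        shape = Square()
        self.play(Create(shape))
        # Loops, parameters, and conditionals are the payoff of animating
        # in code: each pass produces another step of the animation.
        for scale in (1.0, 0.7, 0.4):
            self.play(Transform(shape, Circle().scale(scale)))
```

Rendered with something like `manim -pql scene.py ShrinkingCircles`.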

01:50:39 If it’s just like writing some text on the screen or shifting around objects or something

01:50:43 like that, um, you should probably just use Keynote, right?

01:50:48 Um, it’d be a lot simpler.

01:50:50 So, uh, try to find a workflow that distills down that which should be programmatic into

01:50:55 Manim and that which doesn’t need to be into like other domains.

01:50:58 Again, do as I say, not as I do.

01:51:01 I mean, Python is an integral part of it.

01:51:03 Just for the fun of it, let me ask, uh, what, uh, what’s your most and least favorite aspects

01:51:09 of Python?

01:51:10 Ooh, most and least.

01:51:12 I mean, I love that it’s like object-oriented and functional, I guess, that you can kind

01:51:18 of like get both of those, um, uh, benefits for how you structure things.

01:51:23 So if you would just want to quickly whip something together, the functional aspects

01:51:26 are nice.

01:51:27 It’s your primary language, like for programmatically generating stuff.

01:51:31 Yeah.

01:51:32 It’s home for me.

01:51:33 It’s home.

01:51:34 Yeah.

01:51:35 Sometimes you travel, but it’s home.

01:51:36 Got it.

01:51:37 It’s home.

01:51:38 Uh, I mean, the biggest disadvantage is that it’s slow.

01:51:39 So when you’re doing computationally intensive things, either you have to like think about

01:51:42 it more than you should about how to make it efficient, or it just like takes long.

01:51:47 Do you run into that at all?

01:51:48 Like with your work?

01:51:49 Well, so, uh, certainly old Manim is like way slower than it needs to be because of, uh,

01:51:54 how it renders things on the backend is like kind of absurd.

01:51:58 I’ve rewritten things such that it’s all done with like shaders in such a way that it should

01:52:02 be just like live and actually like interactive while you’re coding it.

01:52:06 If you want to have like a 3D scene, you can move around, you can, um, have elements

01:52:12 respond to where your mouse is or things.

01:52:14 That’s not something that a user of a video is going to get to experience because there’s

01:52:17 just a play button and a pause button.

01:52:19 But while you’re developing, that can be nice.

01:52:21 Um, so it’s gotten better in speed in that sense, but that’s basically because the hard

01:52:25 work is being done in the language that’s not Python, but GLSL, right?

01:52:29 Um, but yeah, there are some times when, um, there’s just a lot of data that

01:52:35 goes into the object that I want to animate, and then it’s just like, Python is slow.

01:52:40 Well, let me ask, quickly ask, what do you think about the walrus operator, if you’re

01:52:44 familiar with it at all?

01:52:46 The reason it’s interesting, there’s a new operator in Python 3.8.

01:52:49 I find it psychologically interesting because the toxicity over it led Guido to resign,

01:52:54 to step down from his position.

01:52:55 Is that actually true?

01:52:56 Or was it like, there’s a bunch of surrounding things that also, was it actually the walrus

01:53:00 operator that, that.

01:53:02 Well, it was an accumulation of toxicity, but that

01:53:08 was the most toxic one, like the discussion.

01:53:11 That was the largest number of Python core developers that were opposed to Guido’s decision.

01:53:16 Um, he didn’t particularly, I don’t think, care about it either way.

01:53:20 He just thought it was a good idea.

01:53:21 So he approved it.

01:53:23 And like the structure of the idea of a BDFL is like you listen to everybody, hear everybody

01:53:30 out.

01:53:31 You make a decision and you move forward.

01:53:33 And he didn’t like the negativity that burdened him after that.

01:53:37 People like some parts of the benevolent dictator for life mantra, but once the dictator does

01:53:41 things differently than you want, suddenly dictatorship doesn’t seem so great.

01:53:44 Yeah.

01:53:45 I mean, they still liked it.

01:53:46 He just couldn’t, because he truly is the B in BDFL, the benevolent.

01:53:50 He really is a nice guy.

01:53:52 I mean, I think he just couldn’t take it. It’s a lot of toxicity.

01:53:56 It’s difficult.

01:53:57 It’s a difficult job.

01:53:58 And that’s why Linus Torvalds is perhaps the way he is.

01:54:01 You have to have a thick skin to fight off the warring masses.

01:54:06 It’s kind of surprising to me how many people can like threaten to murder each other over

01:54:11 whether we should have braces or not. It’s incredible.

01:54:15 Yeah.

01:54:16 I mean, that’s my knee-jerk reaction to the walrus operator.

01:54:18 Like I don’t actually care that much either way.

01:54:20 I’m not going to get personally passionate.

01:54:22 My initial reaction was like, yeah, this seems to make things more confusing to read.

01:54:26 But then again, so does list comprehension until you’re used to it.

01:54:29 So like if there’s a use for it, great, if not, great, but like, let’s just all calm down

01:54:33 about our spaces versus tabs debates here and like, be chill.
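
For reference, the operator in question (assignment expressions from PEP 572, Python 3.8 and later) lets an assignment live inside an expression:

```python
import random

random.seed(0)

# Without the walrus operator: assign, then test, with the call written twice.
value = random.random()
while value < 0.9:
    value = random.random()

# With the walrus operator: the assignment happens inside the loop condition.
while (value := random.random()) < 0.9:
    pass

print(f"stopped at {value:.3f}")
```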

01:54:37 Yeah.

01:54:38 To me, it just represents the value of great leadership, even in open source communities.

01:54:44 Does it represent that if he stepped down as a leader?

01:54:46 Well, he fought for it.

01:54:48 No, he got it passed.

01:54:49 I guess, but it could represent multiple things too.

01:54:54 It can represent like failed dictatorships, or it can represent a lot of things,

01:54:59 but to me, great leaders take risks.

01:55:03 Even if it’s a mistake at the end, like, you have to make decisions.

01:55:09 The thing is, this world won’t go anywhere

01:55:11 if, whenever there’s a divisive thing, you wait until the division is no longer

01:55:17 there.

01:55:18 Like that’s the paralysis we experienced with like Congress and political systems.

01:55:22 It’s good to be slow when there’s indecision, when people disagree, it’s good to

01:55:28 take your time.

01:55:29 But like at a certain point it results in paralysis and you just have to make a decision.

01:55:34 The background of the site, whether it’s yellow, blue, or red, can cause people to like go to

01:55:40 war with each other, and I’ve seen this with design.

01:55:43 People are very touchy on color choices. At the end of the day, just make a decision

01:55:49 and go with it.

01:55:50 And that’s what the walrus operator represents to me: it represents the fighter

01:55:55 pilot instinct, that like quick action is more important than just like hearing everybody

01:56:01 out and really thinking it through, because that’s going to lead to paralysis.

01:56:05 Yeah.

01:56:06 Like if that’s the actual case that, you know, it’s something where he’s consciously hearing

01:56:10 people’s disagreement, disagreeing with that disagreement and saying he wants to move forward

01:56:16 anyway, that’s an admirable aspect of leadership.

01:56:21 So we don’t have much time, but I want to ask just cause it’s some beautiful mathematics

01:56:26 involved.

01:56:27 2020 brought us, in the physics world, a couple of theories of everything. Eric Weinstein,

01:56:35 kind of, I mean, he’s been working on it for probably decades, but he put out this idea of geometric

01:56:41 unity, or started sort of publicly thinking and talking about it more, and Stephen Wolfram

01:56:46 put out his physics project, which is kind of this hypergraph view of a theory of everything.

01:56:53 Do you find interesting, beautiful things in these theories of everything?

01:56:58 What do you think about the physics world and sort of the beautiful, interesting, insightful

01:57:04 mathematics in that world, whether we’re talking about quantum mechanics, which you touched

01:57:09 on in a bunch of your videos a little bit, quaternions, like just the mathematics involved

01:57:13 or general relativity, which is more about surfaces and topology, all that stuff.

01:57:19 Well, I think, um, as far as like popularized science is concerned, people are more interested

01:57:25 in theories of everything than they should be, because the problem is whether we’re

01:57:29 talking about trying to make sense of Weinstein’s lectures or Wolfram’s project, or let’s just

01:57:34 say like listening to, uh, Witten talk about string theory, whatever proposed path to a

01:57:40 theory of everything, um, you’re not actually going to understand it.

01:57:44 Some physicists will, but like, you’re just not actually going to understand the substance

01:57:48 of what they’re saying.

01:57:49 What I think is way, way more productive is, um, to let yourself get really interested

01:57:53 in the phenomena that are still deep, but which you have a chance of understanding because

01:57:58 the path to getting to like even understanding what questions these theories of everything

01:58:02 are trying to answer involves like walking down that path. Um, I mean, I was watching a video

01:58:06 before I came here from Steve Mould talking about, um, why sugar polarizes light in a

01:58:11 certain way.

01:58:12 So fascinating, like really, really interesting.

01:58:15 It’s not like this novel theory of everything type thing, but to understand what’s going

01:58:19 on there really requires digging in in depth to certain ideas.

01:58:23 And if you let yourself think past what the video tells you about what does circularly

01:58:27 polarized light mean and things like that, it actually would get you to a pretty good

01:58:31 appreciation of like two-state quantum systems, um, in a way that just trying

01:58:36 to read about like, Oh, what’s the, um, what are the hard parts about resolving quantum

01:58:40 field theories with general relativity is never going to get you.

01:58:44 So as far as popularizing science is concerned, like the audience should be less interested

01:58:50 than they are in theories of everything.

01:58:52 Um, the popularizers should be less emphatic than they are about that. For like actual practicing

01:58:59 physicists,

01:59:00 that might be the case.

01:59:01 Maybe more people should think about fundamental questions, but it’s difficult to create, uh,

01:59:06 like a 3Blue1Brown video on the theory of everything.

01:59:09 So basically we should really try to find the beauty in mathematics or physics by looking

01:59:16 at concepts that are like within reach.

01:59:18 Yeah, I think that’s super important.

01:59:20 I mean, so you see this in math too with, um, the big unsolved problems.

01:59:25 So like the Clay Millennium Problems, the Riemann hypothesis, um, have you ever done a video

01:59:29 on Fermat’s last theorem?

01:59:30 No, I have not yet.

01:59:31 No.

01:59:32 But if I did, do you know what I would do?

01:59:34 I would talk about, um, proving Fermat’s last theorem in the specific case of N equals

01:59:38 three.

01:59:39 Okay.

01:59:40 Is that still accessible though?

01:59:41 Yes.

01:59:42 Actually barely.

01:59:43 Um, Mathologer might be able to do like a great job on this.

01:59:46 He does a good job of taking stuff that’s barely accessible and making it accessible. But

01:59:50 the core ideas of proving it for N equals three are hard, but they do get you real ideas

01:59:55 about algebraic number theory.

01:59:56 And it involves looking at a number field that, uh, lives in the complex plane.

02:00:00 It looks like a hexagonal lattice and you start asking questions about factoring numbers

02:00:04 in this hexagonal lattice.

02:00:06 So it takes a while, but I’ve talked about this sort of like lattice arithmetic, um,

02:00:10 in other contexts, and you can get to an okay understanding of that.
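
The lattice in question is the Eisenstein integers, and the identity that opens up the N equals three case is the factorization below (stated for reference, not as the full argument):

```latex
% Over the hexagonal lattice \mathbb{Z}[\omega], where \omega = e^{2\pi i/3}
% satisfies \omega^2 + \omega + 1 = 0:
x^3 + y^3 = (x + y)(x + \omega y)(x + \omega^2 y)
% Fermat's equation x^3 + y^3 = z^3 then becomes a question about how a cube
% factors in this lattice, where unique factorization happens to hold.
```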

02:00:15 And the things that make Fermat’s last theorem hard are actually quite deep.

02:00:18 Um, and so the cases that we can solve it for, it’s like you can get these broad sweeps

02:00:23 based on some hard, but like accessible, um, bits of number theory.

02:00:28 But before you can even understand why the general case is as hard as it is, you have

02:00:32 to walk through those.

02:00:33 And so any other attempt to describe it would just end up being like shallow and not really

02:00:38 productive for the viewer’s time.

02:00:39 Um, I think the same goes for, uh, most like unsolved problem type things where I think,

02:00:45 you know, as a kid, I was actually very inspired by the twin prime conjecture, um, that like

02:00:49 totally sucked me in as this thing that was understandable.

02:00:52 I kind of had this dream like, Oh, maybe I’ll be the one to prove the twin prime conjecture

02:00:55 and new math that I would learn would be like viewed through this lens of like, Oh, maybe

02:01:00 I can apply it to that in some way.

02:01:01 But, uh, you sort of mature to a point where you realize that, uh, you should spend your

02:01:08 brain cycles on problems that you will see resolved because then you’re going to grow

02:01:12 to see what it feels like for these things to be resolved rather than spending your brain

02:01:16 cycles on something where it’s not going to pan out.

02:01:19 Um, and the people who do make progress towards these things, like James Maynard, uh, is a

02:01:24 great example here of like a young creative mathematician who pushes in the direction

02:01:28 of things like the twin prime conjecture, and rather than hitting that head on, just sees all the

02:01:33 interesting questions that are hard for similar reasons, but become more tractable, and lets

02:01:36 himself really engage with those.

02:01:38 Um, so I think people should get in that habit.

02:01:41 I think the popularization of physics should encourage that habit through things like the

02:01:46 physics of simple everyday phenomena, because it can get quite deep.

02:01:50 And um, yeah, I’ve heard a lot of the interest, you know,

02:01:54 people send me messages asking to explain Weinstein’s thing or asking to explain Wolfram’s

02:01:58 thing.

02:01:59 One, I don’t understand them, but more importantly, um, it’s too big a bite. You shouldn’t

02:02:06 be interested in those, right?

02:02:08 It’s a giant sort of ball of interesting ideas.

02:02:12 There’s probably a million of interesting ideas in there that individually could be

02:02:16 explored effectively.

02:02:17 And to be clear, you should be interested in fundamental questions.

02:02:20 I think that’s a good habit to ask what the fundamentals of things are, but I think it

02:02:25 takes a lot of steps to like, certainly you shouldn’t be trying to answer that unless

02:02:29 you actually understand quantum field theory and you actually understand general relativity.

02:02:33 That’s the cool thing about like your videos, people who haven’t done mathematics, like

02:02:37 if you really give it time, watch it a couple of times and like try to try to reason about

02:02:42 it, you can actually understand the concept that’s being explained.

02:02:45 And it’s not a coincidence that the things I’m describing aren’t like the most, um, up

02:02:49 to date, uh, progress on the Riemann hypothesis’s cousins, or, um, like, there are contexts in which

02:02:55 the analog of the Riemann hypothesis has been solved, in like more, uh, discrete-feeling

02:03:00 finite settings that are more well behaved.

02:03:02 I’m not describing that because it just takes a ton to get there.

02:03:05 And instead I think it’ll be like productive to have an actual understanding of something

02:03:10 that you can pack into 20 minutes.

02:03:12 I think that’s beautifully put.

02:03:15 Ultimately, that’s where like the most satisfying thing is, when you really understand, um, yeah, really

02:03:20 understand, build a habit of feeling what it’s like to actually come to resolution.

02:03:25 Yeah.

02:03:26 Yeah.

02:03:27 As opposed to, which it can also be enjoyable, but just being in awe of the fact that you

02:03:32 don’t understand anything.

02:03:33 Yeah.

02:03:34 That’s not like, I don’t know.

02:03:35 Maybe we’ll get entertainment out of that, but it’s not as fulfilling as understanding.

02:03:40 You won’t grow.

02:03:41 Yeah.

02:03:42 But also, just the fulfillment, it really does feel good when you first don’t understand

02:03:47 something and then you do. That’s a beautiful feeling.

02:03:51 Hey, let me ask you one last one. Uh, last time we got awkward and weird about, uh, a fear

02:03:57 of mortality, which you made fun of me for, but let me ask you the other absurd

02:04:01 question, which is, um, what do you think is, uh, the meaning of our life, the meaning of life?

02:04:08 I’m sorry if I made fun of you about that. No, you didn’t.

02:04:11 I’m just joking.

02:04:12 It was great.

02:04:13 I don’t think life has a meaning.

02:04:15 I think like meaning, I don’t understand the question.

02:04:18 I think meaning is something that’s ascribed to stuff that’s created with purpose.

02:04:22 There’s a meaning to, uh, like this water bottle label, in that someone created it with

02:04:26 a purpose of conveying meaning.

02:04:27 And there was like one consciousness that wanted to get its ideas into another consciousness.

02:04:31 Um, most things don’t have that property.

02:04:35 It’s a little bit like if I asked you, um, like, what is the height?

02:04:41 All right, so it’s all relative.

02:04:42 Yeah.

02:04:43 You’d be like, the height of what?

02:04:44 You can’t ask, what is the height, without an object.

02:04:46 You can’t ask what is the meaning of life without like an intentful consciousness putting

02:04:50 it there. I guess I’m revealing I’m not very religious, but you know, the mathematics of

02:04:56 everything seems kind of beautiful.

02:04:58 It seems like there’s some kind of structure relative to which, I mean,

02:05:05 you could calculate the height.

02:05:06 Well, so, but what I’m saying is I don’t understand the question.

02:05:09 What is the meaning of life in that?

02:05:10 I think people might be asking something very real.

02:05:13 I don’t understand what they’re asking.

02:05:14 Are they asking like, why does life exist?

02:05:16 Like how did it come about?

02:05:17 What are the natural laws?

02:05:19 Are they asking, um, as I’m making decisions day by day for what should I do?

02:05:23 What is the guiding light that inspires like, what should I do?

02:05:26 I think that’s what people are kind of asking.

02:05:27 But also like why the thing that gives you joy about education, about mathematics, what

02:05:36 the hell is that?

02:05:37 Like, interactions with other people, interactions with like-minded people, I think,

02:05:41 is the meaning, in that sense, bringing others joy, essentially, like when something

02:05:46 you’ve created connects with others somehow, and vice versa.

02:05:53 I think that that is what, um, when we use the word meaning to mean like you’re sort

02:05:57 of filled with a sense of happiness and energy to create more things, like I have so much

02:06:01 meaning taken from this, like that, yeah, that’s what fuels my pump, at least.

02:06:06 So a life alone on a desert island would be kind of meaningless.

02:06:10 Yeah.

02:06:11 You want to be alone together with someone.

02:06:13 I think we’re all alone together.

02:06:15 I think there’s no better way to end it, Grant.

02:06:18 You’ve been, first time we talked, it was amazing again, it’s a huge honor that you

02:06:22 make time for me.

02:06:23 I appreciate talking with you.

02:06:24 Thanks, man.

02:06:25 Awesome.

02:06:26 Thanks for listening.

02:06:27 Thanks for listening to this conversation with Grant Sanderson.

02:06:29 And thank you to our sponsors, Dollar Shave Club, DoorDash, and Cash App.

02:06:34 Click the sponsor links in the description to get a discount and to support this podcast.

02:06:39 If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcast,

02:06:44 follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Friedman.

02:06:50 And now let me leave you with some words from Richard Feynman.

02:06:54 I have a friend who’s an artist and has sometimes taken a view which I don’t agree with very

02:06:59 well.

02:07:00 He’ll hold up a flower and say, look how beautiful it is, and I’ll agree.

02:07:04 Then he says, I as an artist can see how beautiful this is, but you as a scientist take this

02:07:10 all apart and it becomes a dull thing.

02:07:13 And I think he’s kind of nutty.

02:07:16 First of all, the beauty that he sees is available to other people and to me too, I believe.

02:07:21 Although I may not be quite as refined aesthetically as he is, I can appreciate the beauty of a

02:07:26 flower.

02:07:27 At the same time, I see much more about the flower than he sees.

02:07:31 I can imagine the cells in there, the complicated actions inside, which also have a beauty.

02:07:37 I mean, it’s not just beauty at this dimension at one centimeter, there’s also beauty at

02:07:40 smaller dimensions, the inner structure, also the processes.

02:07:46 The fact that the colors in the flower evolved in order to attract insects to pollinate it

02:07:50 is interesting.

02:07:52 It means that insects can see the color.

02:07:54 It adds a question.

02:07:56 Does this aesthetic sense also exist in the lower forms?

02:07:59 Why is it aesthetic?

02:08:01 All kinds of interesting questions which the science knowledge only adds to the excitement,

02:08:05 the mystery and the awe of a flower.

02:08:08 It only adds.

02:08:09 I don’t understand how it subtracts.

02:08:13 Thank you for listening and hope to see you next time.