Elon Musk: SpaceX, Mars, Tesla Autopilot, Self-Driving, Robotics, and AI #252

Transcript

00:00:00 The following is a conversation with Elon Musk,

00:00:02 his third time on this, The Lex Fridman Podcast.

00:00:06 Yeah, make yourself comfortable.

00:00:08 Boo.

00:00:09 No, wow, okay.

00:00:10 You don’t do the headphone thing?

00:00:12 No.

00:00:13 Okay.

00:00:14 I mean, how close do I need to get up to the same place?

00:00:15 The closer you are, the sexier you sound.

00:00:17 Hey, babe.

00:00:18 What’s up?

00:00:19 Can’t get enough of your love, baby.

00:00:20 I’m gonna clip that out

00:00:25 any time somebody messages me about it.

00:00:26 If you want my body and you think I’m sexy,

00:00:30 come right out and tell me so.

00:00:33 Do, do, do, do, do.

00:00:36 So good.

00:00:38 So good.

00:00:39 Okay, serious mode activate.

00:00:41 All right.

00:00:43 Serious mode.

00:00:44 Come on, you’re Russian.

00:00:45 You can be serious.

00:00:46 Yeah, I know.

00:00:46 Everyone’s serious all the time in Russia.

00:00:47 Yeah, yeah, we’ll get there.

00:00:50 We’ll get there.

00:00:51 Yeah.

00:00:52 It’s gotten soft.

00:00:53 Allow me to say that the SpaceX launch

00:00:57 of human beings to orbit on May 30th, 2020

00:01:02 was seen by many as the first step

00:01:03 in a new era of human space exploration.

00:01:07 These human space flight missions were a beacon of hope

00:01:10 to me and to millions over the past two years

00:01:12 as our world has been going through

00:01:14 one of the most difficult periods in recent human history.

00:01:18 We saw, we see the rise of division, fear, cynicism,

00:01:21 and the loss of common humanity

00:01:24 right when it is needed most.

00:01:26 So first, Elon, let me say thank you

00:01:29 for giving the world hope and reason

00:01:30 to be excited about the future.

00:01:32 Oh, it’s kind of easy to say.

00:01:34 I do want to do that.

00:01:35 Humanity has obviously a lot of issues

00:01:38 and people at times do bad things,

00:01:42 but despite all that, I love humanity

00:01:47 and I think we should make sure we do everything we can

00:01:52 to have a good future and an exciting future

00:01:54 and one that maximizes the happiness of the people.

00:01:58 Let me ask about Crew Dragon Demo 2.

00:02:00 So that first flight with humans on board,

00:02:04 how did you feel leading up to that launch?

00:02:06 Were you scared?

00:02:07 Were you excited?

00:02:08 What was going through your mind?

00:02:09 So much was at stake.

00:02:11 Yeah, no, that was extremely stressful, no question.

00:02:14 We obviously could not let them down in any way.

00:02:19 So extremely stressful, I’d say, to say the least.

00:02:25 I was confident that at the time that we launched

00:02:28 that no one could think of anything at all to do

00:02:34 that would improve the probability of success.

00:02:36 And we racked our brains to think of any possible way

00:02:40 to improve the probability of success.

00:02:41 We could not think of anything more, nor could NASA.

00:02:44 And so that’s just the best that we could do.

00:02:48 So then we went ahead and launched.

00:02:51 Now, I’m not a religious person,

00:02:54 but I nonetheless got on my knees and prayed for that mission.

00:03:00 Were you able to sleep?

00:03:01 No.

00:03:02 How did it feel when it was a success,

00:03:10 first when the launch was a success,

00:03:12 and when they returned back home or back to Earth?

00:03:16 It was a great relief.

00:03:20 Yeah, for high stress situations,

00:03:23 I find it’s not so much elation as relief.

00:03:26 And I think once, as we got more comfortable

00:03:33 and proved out the systems,

00:03:34 because we really got to make sure everything works,

00:03:41 it was definitely a lot more enjoyable

00:03:43 with the subsequent astronaut missions.

00:03:46 And I thought the Inspiration mission

00:03:50 was actually a very inspiring Inspiration4 mission.

00:03:53 I’d encourage people to watch the Inspiration documentary

00:03:57 on Netflix, it’s actually really good.

00:04:00 And it really isn’t, I was actually inspired by that.

00:04:03 And so that one I felt I was kind of able

00:04:07 to enjoy the actual mission

00:04:09 and not just be super stressed all the time.

00:04:10 So for people that somehow don’t know,

00:04:13 it’s the all civilian, first time all civilian

00:04:17 out to space, out to orbit.

00:04:19 Yeah, and it was I think the highest orbit

00:04:22 that in like, I don’t know, 30 or 40 years or something.

00:04:26 The only one that was higher was the one shuttle,

00:04:29 sorry, the Hubble servicing mission.

00:04:32 And then before that, it would have been Apollo in '72.

00:04:37 It’s pretty wild.

00:04:39 So it’s cool, it’s good.

00:04:40 I think as a species, we want to be continuing

00:04:46 to do better and reach higher ground.

00:04:50 And like, I think it would be tragic, extremely tragic

00:04:52 if Apollo was the high watermark for humanity,

00:04:57 and that’s as far as we ever got.

00:05:00 And it’s concerning that here we are 49 years

00:05:06 after the last mission to the moon.

00:05:09 And so almost half a century, and we’ve not been back.

00:05:14 And that’s worrying.

00:05:16 It’s like, does that mean we’ve peaked as a civilization

00:05:20 or what?

00:05:21 So like, I think we got to get back to the moon

00:05:24 and build a base there, a science base.

00:05:27 I think we could learn a lot about the nature

00:05:28 of the universe if we have a proper science base

00:05:31 on the moon, like we have a science base in Antarctica

00:05:35 and many other parts of the world.

00:05:38 And so that’s like, I think the next big thing,

00:05:41 we’ve got to have like a serious moon base

00:05:45 and then get people to Mars and get out there

00:05:49 and be a spacefaring civilization.

00:05:52 I’ll ask you about some of those details,

00:05:54 but since you’re so busy with the hard engineering

00:05:57 challenges of everything that’s involved,

00:06:00 are you still able to marvel at the magic of it all,

00:06:03 of space travel, of every time the rocket goes up,

00:06:05 especially when it’s a crewed mission?

00:06:08 Or are you just so overwhelmed with all the challenges

00:06:11 that you have to solve?

00:06:12 And actually, sort of to add to that,

00:06:16 the reason I wanted to ask this question of May 30th,

00:06:19 it’s been some time, so you can look back

00:06:21 and think about the impact already.

00:06:23 It’s already, at the time it was an engineering problem

00:06:26 maybe, now it’s becoming a historic moment.

00:06:29 Like it’s a moment that, how many moments

00:06:31 will be remembered about the 21st century?

00:06:33 To me, that or something like that,

00:06:37 maybe Inspiration4, one of those would be remembered

00:06:39 as the early steps of a new age of space exploration.

00:06:44 Yeah, I mean, during the launches itself,

00:06:46 so I mean, I think maybe some people will know,

00:06:49 but a lot of people don’t know,

00:06:50 is like I’m actually the chief engineer of SpaceX,

00:06:52 so I’ve signed off on pretty much all the design decisions.

00:06:59 And so if there’s something that goes wrong

00:07:03 with that vehicle, it’s fundamentally my fault, so.

00:07:09 So I’m really just thinking about all the things that,

00:07:13 like, so when I see the rocket,

00:07:15 I see all the things that could go wrong

00:07:17 and the things that could be better

00:07:18 and the same with the Dragon spacecraft.

00:07:21 It’s like, other people will see,

00:07:23 oh, this is a spacecraft or a rocket

00:07:25 and this looks really cool.

00:07:27 I’m like, I have like a readout of like,

00:07:29 these are the risks, these are the problems,

00:07:32 that’s what I see.

00:07:33 Like, choo choo choo choo choo choo choo.

00:07:36 So it’s not what other people see

00:07:38 when they see the product, you know.

00:07:40 So let me ask you then to analyze Starship

00:07:43 in that same way.

00:07:44 I know you have, you’ll talk about,

00:07:46 in more detail about Starship in the near future, perhaps.

00:07:49 Yeah, we can talk about it now if you want.

00:07:51 But just in that same way, like you said,

00:07:54 you see, when you see a rocket,

00:07:57 you see sort of a list of risks.

00:07:59 In that same way, you said that Starship

00:08:01 is a really hard problem.

00:08:03 So there’s many ways I can ask this,

00:08:05 but if you magically could solve one problem perfectly,

00:08:09 one engineering problem perfectly,

00:08:11 which one would it be?

00:08:13 On Starship?

00:08:14 On, sorry, on Starship.

00:08:15 So is it maybe related to the efficiency,

00:08:18 the engine, the weight of the different components,

00:08:21 the complexity of various things,

00:08:22 maybe the controls of the crazy thing it has to do to land?

00:08:26 No, it’s actually by far the biggest thing

00:08:30 absorbing my time is engine production.

00:08:35 Not the design of the engine,

00:08:36 but I’ve often said prototypes are easy, production is hard.

00:08:45 So we have the most advanced rocket engine

00:08:48 that’s ever been designed.

00:08:52 Cause I’d say currently the best rocket engine ever

00:08:55 is probably the RD-180 or RD-170,

00:09:00 that’s the Russian engine basically.

00:09:03 And still, I think an engine should only count

00:09:06 if it’s gotten something to orbit.

00:09:09 So our engine has not gotten anything to orbit yet,

00:09:12 but it is, it’s the first engine

00:09:14 that’s actually better than the Russian RD engines,

00:09:20 which were amazing design.

00:09:22 So you’re talking about Raptor engine.

00:09:24 What makes it amazing?

00:09:25 What are the different aspects of it that make it,

00:09:29 like what are you the most excited about

00:09:31 if the whole thing works in terms of efficiency,

00:09:35 all those kinds of things?

00:09:37 Well, it’s, the Raptor is a full flow

00:09:42 staged combustion engine

00:09:46 and it’s operating at a very high chamber pressure.

00:09:50 So one of the key figures of merit, perhaps the key figure

00:09:54 of merit is what is the chamber pressure

00:09:58 at which the rocket engine can operate?

00:10:00 That’s the combustion chamber pressure.

00:10:03 So Raptor is designed to operate at 300 bar,

00:10:06 possibly maybe higher, that’s 300 atmospheres.

00:10:10 So the record right now for operational engine

00:10:15 is the RD engine that I mentioned, the Russian RD,

00:10:17 which is I believe around 267 bar.

00:09:22 And the difficulty of the chamber pressure

00:09:25 increases on a nonlinear basis.

00:10:27 So 10% more chamber pressure is more like 50% more difficult.

00:10:36 But that chamber pressure is,

00:10:38 that is what allows you to get a very high power density

00:10:44 for the engine.

00:10:45 So enabling a very high thrust to weight ratio

00:10:53 and a very high specific impulse.

00:10:57 So specific impulse is like a measure of the efficiency

00:10:59 of a rocket engine.

00:11:01 It’s really the effect of exhaust velocity

00:11:07 of the gas coming out of the engine.

00:11:10 So with a very high chamber pressure,

00:11:17 you can have a compact engine

00:11:21 that nonetheless has a high expansion ratio,

00:11:24 which is the ratio between the exit nozzle and the throat.

00:11:31 So you see a rocket engine’s got sort of like an hourglass

00:11:35 shape, it’s like a chamber and then it necks down

00:11:38 and there’s a nozzle and the ratio of the exit diameter

00:11:41 to the throat is the expansion ratio.
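
The figures of merit named here fit together in a few standard rocketry formulas. A minimal sketch of those relationships, with all engine numbers purely illustrative (only the relationships themselves come from the conversation; note the expansion ratio is conventionally an area ratio, the square of the diameter ratio described above):

```python
# Standard relationships between specific impulse, exhaust velocity,
# nozzle expansion ratio, and thrust-to-weight. Numbers are made up
# for illustration, not actual Raptor specs.

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_s: float) -> float:
    """Effective exhaust velocity (m/s) from specific impulse in seconds."""
    return isp_s * G0

def expansion_ratio(exit_diameter_m: float, throat_diameter_m: float) -> float:
    """Nozzle area expansion ratio: the square of the
    exit-diameter-to-throat-diameter ratio described above."""
    return (exit_diameter_m / throat_diameter_m) ** 2

def thrust_to_weight(thrust_n: float, engine_mass_kg: float) -> float:
    """Dimensionless thrust-to-weight ratio of the engine itself."""
    return thrust_n / (engine_mass_kg * G0)

# Hypothetical example engine:
ve = exhaust_velocity(350.0)         # ~3432 m/s for a 350 s Isp
eps = expansion_ratio(1.3, 0.25)     # ~27:1 area ratio (made-up diameters)
twr = thrust_to_weight(2.3e6, 1600)  # ~147 (made-up thrust and mass)
print(ve, eps, twr)
```

Higher chamber pressure is what lets a physically small engine drive a large expansion ratio and high thrust-to-weight at the same time, which is why it is the key figure of merit here.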

00:11:45 So why is it such a hard engine to manufacture at scale?

00:11:51 It’s very complex.

00:11:53 So, what does complexity mean here?

00:11:55 There’s a lot of components involved.

00:11:56 There’s a lot of components and a lot of unique materials

00:12:02 that, so we had to invent several alloys

00:12:07 that don’t exist in order to make this engine work.

00:12:11 It’s a materials problem too.

00:12:14 It’s a materials problem and in a staged combustion,

00:12:19 a full flow staged combustion,

00:12:21 there are many feedback loops in the system.

00:12:24 So basically you’ve got propellants and hot gas

00:12:31 flowing simultaneously to so many different places

00:12:36 on the engine and they all have a recursive effect

00:12:42 on each other.

00:12:43 So you change one thing here, it has a recursive effect

00:12:45 here, it changes something over there

00:12:47 and it’s quite hard to control.

00:12:52 Like there’s a reason no one’s made this before.

00:12:58 And the reason we’re doing a staged combustion full flow

00:13:03 is because it has the highest possible efficiency.

00:13:12 So in order to make a fully reusable rocket,

00:13:19 which is really the holy grail of orbital rocketry.

00:13:25 You have to have, everything’s gotta be the best.

00:13:28 It’s gotta be the best engine, the best airframe,

00:13:30 the best heat shield, extremely light avionics,

00:13:38 very clever control mechanisms.

00:13:40 You’ve got to shed mass in any possible way that you can.

00:13:45 For example, we are, instead of putting landing legs

00:13:47 on the booster and ship, we are gonna catch them

00:13:49 with a tower to save the weight of the landing legs.

00:13:53 So that’s like, I mean, we’re talking about catching

00:13:58 the largest flying object ever made with,

00:14:03 on a giant tower with chopstick arms.

00:14:06 It’s like Karate Kid with the fly, but much bigger.

00:14:12 I mean, pulling something like that off.

00:14:12 This probably won’t work the first time.

00:14:17 Anyway, so this is bananas, this is banana stuff.

00:14:19 So you mentioned that you doubt, well, not you doubt,

00:14:23 but there’s days or moments when you doubt

00:14:26 that this is even possible.

00:14:28 It’s so difficult.

00:14:30 The possible part is, well, at this point,

00:14:35 I think we will get Starship to work.

00:14:41 There’s a question of timing.

00:14:42 How long will it take us to do this?

00:14:45 How long will it take us to actually achieve

00:14:47 full and rapid reusability?

00:14:50 Cause it will take probably many launches

00:14:52 before we are able to have full and rapid reusability.

00:14:57 But I can say that the physics pencils out.

00:15:01 Like we’re not, like at this point,

00:15:06 I’d say we’re confident that, like let’s say,

00:15:10 I’m very confident success is in the set

00:15:12 of all possible outcomes.

00:15:13 For a while there, I was not convinced

00:15:18 that success was in the set of possible outcomes,

00:15:20 which is very important actually.

00:15:23 But so we’re saying there’s a chance.

00:15:29 I’m saying there’s a chance, exactly.

00:15:33 Just not sure how long it will take.

00:15:38 We have a very talented team.

00:15:39 They’re working night and day to make it happen.

00:15:43 And like I said, the critical thing to achieve

00:15:48 for the revolution in space flight

00:15:49 and for humanity to be a spacefaring civilization

00:15:52 is to have a fully and rapidly reusable rocket,

00:15:54 orbital rocket.

00:15:56 There’s not even been any orbital rocket

00:15:58 that’s been fully reusable ever.

00:16:00 And this has always been the holy grail of rocketry.

00:16:06 And many smart people, very smart people,

00:16:09 have tried to do this before and they’ve not succeeded.

00:16:12 So, cause it’s such a hard problem.

00:16:16 What’s your source of belief in situations like this?

00:16:21 When the engineering problem is so difficult,

00:16:23 there’s a lot of experts, many of whom you admire,

00:16:27 who have failed in the past.

00:16:29 And a lot of people, a lot of experts,

00:16:37 maybe journalists, all the kind of,

00:16:38 the public in general have a lot of doubt

00:16:40 about whether it’s possible.

00:16:42 And you yourself know that even if it’s a non-null set,

00:16:47 non-empty set of success, it’s still unlikely

00:16:50 or very difficult.

00:16:52 Like where do you go to, both personally,

00:16:55 intellectually as an engineer, as a team,

00:16:58 like for source of strength needed

00:17:00 to sort of persevere through this.

00:17:02 And to keep going with the project, take it to completion.

00:17:18 A source of strength.

00:17:19 That’s really not how I think about things.

00:17:23 I mean, for me, it’s simply this,

00:17:25 this is something that is important to get done

00:17:28 and we should just keep doing it or die trying.

00:17:32 And I don’t need a source of strength.

00:17:35 So quitting is not even like…

00:17:39 That’s not, it’s not in my nature.

00:17:41 And I don’t care about optimism or pessimism.

00:17:46 Fuck that, we’re gonna get it done.

00:17:47 Gonna get it done.

00:17:51 Can you then zoom back in to specific problems

00:17:55 with Starship or any engineering problems you work on?

00:17:58 Can you try to introspect your particular

00:18:00 biological neural network, your thinking process

00:18:03 and describe how you think through problems,

00:18:06 the different engineering and design problems?

00:18:07 Is there like a systematic process?

00:18:10 You’ve spoken about first principles thinking,

00:18:11 but is there a kind of process to it?

00:18:14 Well, yeah, like saying like physics is a law

00:18:19 and everything else is a recommendation.

00:18:21 Like I’ve met a lot of people that can break the law,

00:18:23 but I haven’t met anyone who could break physics.

00:18:25 So the first, for any kind of technology problem,

00:18:32 you have to sort of just make sure

00:18:34 you’re not violating physics.

00:18:39 And first principles analysis, I think,

00:18:45 is something that can be applied to really any walk of life,

00:18:49 anything really.

00:18:50 It’s really just saying, let’s boil something down

00:18:54 to the most fundamental principles.

00:18:58 The things that we are most confident are true

00:19:00 at a foundational level.

00:19:02 And that sets your axiomatic base,

00:19:05 and then you reason up from there,

00:19:07 and then you cross check your conclusion

00:19:09 against the axiomatic truths.

00:19:13 So some basics in physics would be like,

00:19:18 are you violating conservation of energy or momentum

00:19:20 or something like that?

00:19:21 Then it’s not gonna work.

00:19:29 So that’s just to establish, is it possible?

00:19:32 And another good physics tool

00:19:34 is thinking about things in the limit.

00:19:36 If you take a particular thing

00:19:38 and you scale it to a very large number

00:19:41 or to a very small number, how do things change?

00:19:45 Both like in number of things you manufacture

00:19:48 or something like that, and then in time?

00:19:51 Yeah, like let’s say take an example of like manufacturing,

00:19:55 which I think is just a very underrated problem.

00:20:00 And like I said, it’s much harder to

00:20:06 take an advanced technology product

00:20:09 and bring it into volume manufacturing

00:20:11 than it is to design it in the first place.

00:20:12 By orders of magnitude.

00:20:14 So let’s say you’re trying to figure out

00:20:17 is like why is this part or product expensive?

00:20:23 Is it because of something fundamentally foolish

00:20:27 that we’re doing or is it because our volume is too low?

00:20:31 And so then you say, okay, well, what if our volume

00:20:32 was a million units a year?

00:20:34 Is it still expensive?

00:20:35 That’s what I mean by thinking about things in the limit.

00:20:38 If it’s still expensive at a million units a year,

00:20:40 then volume is not the reason why your thing is expensive.

00:20:42 There’s something fundamental about the design.

00:20:44 And then you then can focus on reducing the complexity

00:20:47 or something like that in the design.

00:20:48 You gotta change the design to change the part

00:20:50 to be something that is not fundamentally expensive.

00:20:56 But like that’s a common thing in rocketry

00:20:58 because the unit volume is relatively low.

00:21:01 And so a common excuse would be,

00:21:04 well, it’s expensive because our unit volume is low.

00:21:06 And if we were in like automotive or something like that,

00:21:08 or consumer electronics, then our costs would be lower.

00:21:10 I’m like, okay, so let’s say

00:21:13 now you’re making a million units a year.

00:21:14 Is it still expensive?

00:21:16 If the answer is yes, then economies of scale

00:21:20 are not the issue.
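
The volume thought experiment above can be sketched numerically. A toy model with entirely hypothetical costs, just to show how amortization separates a volume problem from a design problem:

```python
# "Think in the limit": is a part expensive because volume is low,
# or because the design is fundamentally expensive?
# All dollar figures below are hypothetical.

def unit_cost(fixed_costs: float, volume_per_year: int,
              marginal_cost_per_unit: float) -> float:
    """Per-unit cost = amortized fixed costs + marginal cost per unit."""
    return fixed_costs / volume_per_year + marginal_cost_per_unit

fixed = 50_000_000.0   # tooling, engineering, facilities ($/year)
marginal = 8_000.0     # materials + labor per unit ($)

low_volume = unit_cost(fixed, 100, marginal)           # $508,000 per unit
high_volume = unit_cost(fixed, 1_000_000, marginal)    # $8,050 per unit

# At a million units a year, the fixed costs nearly vanish.
# If the part is *still* too expensive at high_volume, the culprit
# is the marginal cost -- something fundamental about the design.
print(low_volume, high_volume)
```

If the million-unit number is still too high, economies of scale were never the issue, and the design itself has to change.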

00:21:22 Do you throw into manufacturing,

00:21:24 do you throw like supply chain?

00:21:26 We talked about resources and materials and stuff like that.

00:21:28 Do you throw that into the calculation

00:21:29 of trying to reason from first principles,

00:21:31 like how we’re gonna make the supply chain work here?

00:21:34 Yeah, yeah.

00:21:35 And then the cost of materials, things like that.

00:21:37 Or is that too much?

00:21:38 Exactly, so like a good example,

00:21:41 I think of thinking about things in the limit

00:21:44 is if you take any machine or whatever,

00:21:54 like take a rocket or whatever,

00:21:56 and say, if you look at the raw materials in the rocket,

00:22:03 so you’re gonna have like aluminum, steel, titanium,

00:22:07 Inconel, specialty alloys, copper,

00:22:12 and you say, what’s the weight of the constituent elements,

00:22:19 of each of these elements,

00:22:20 and what is their raw material value?

00:22:22 And that sets the asymptotic limit

00:22:25 for how low the cost of the vehicle can be

00:22:29 unless you change the materials.

00:22:31 So, and then when you do that,

00:22:33 I call it like maybe the magic wand number

00:22:35 or something like that.

00:22:36 So that would be like, if you had the,

00:22:40 like just a pile of these raw materials,

00:22:42 again, you could wave the magic wand

00:22:43 and rearrange the atoms into the final shape.

00:22:47 That would be the lowest possible cost

00:22:49 that you could make this thing for

00:22:51 unless you change the materials.

00:22:52 So then, and that is almost always a very low number.

00:22:57 So then what’s actually causing things to be expensive

00:23:01 is how you put the atoms into the desired shape.
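
That raw-material floor can be sketched as a simple sum over the constituent materials. The masses and prices below are illustrative placeholders, not figures for any real vehicle:

```python
# The "magic wand" floor: the raw-material value of the vehicle,
# i.e. what it would cost if you could rearrange the atoms for free.
# Masses (kg) and prices ($/kg) are made-up placeholders.

raw_materials = {
    "aluminum": (50_000, 2.5),
    "steel":    (200_000, 1.0),
    "titanium": (5_000, 10.0),
    "inconel":  (3_000, 25.0),
    "copper":   (2_000, 9.0),
}

floor_cost = sum(kg * usd_per_kg for kg, usd_per_kg in raw_materials.values())
print(f"raw-material cost floor: ${floor_cost:,.0f}")

# Everything the finished vehicle costs above this floor is the price
# of getting the atoms into the desired shape -- which is where the
# manufacturing problem actually lives.
```

With these placeholder numbers the floor comes out to a few hundred thousand dollars, orders of magnitude below what such hardware typically sells for, which is the point of the exercise.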

00:23:06 Yeah, actually, if you don’t mind me taking a tiny tangent,

00:23:10 I often talk to Jim Keller,

00:23:11 who’s somebody that worked with you as a friend.

00:23:14 Jim was, yeah, did great work at Tesla.

00:23:17 So I suppose he carries the flame

00:23:20 with the same kind of thinking

00:23:22 that you’re talking about now.

00:23:26 And I guess I see that same thing

00:23:27 at Tesla and SpaceX folks who work there,

00:23:31 they kind of learn this way of thinking

00:23:33 and it kind of becomes obvious almost.

00:23:36 But anyway, I had argument, not argument,

00:23:39 but he educated me about how cheap it might be

00:23:44 to manufacture a Tesla bot.

00:23:46 We just, we had an argument.

00:23:48 How can you reduce the cost of scale of producing a robot?

00:23:52 Because I’ve gotten the chance to interact quite a bit,

00:23:55 obviously, in the academic circles with humanoid robots

00:23:59 and then Boston Dynamics and stuff like that.

00:24:01 And they’re very expensive to build.

00:24:04 And then Jim kind of schooled me on saying like,

00:24:07 okay, like this kind of first principles thinking

00:24:10 of how can we get the cost of manufacturing down?

00:24:13 I suppose you do that,

00:24:14 you have done that kind of thinking for Tesla bot

00:24:17 and for all kinds of complex systems

00:24:21 that are traditionally seen as complex.

00:24:23 And you say, okay, how can we simplify everything down?

00:24:27 Yeah, I mean, I think if you are really good

00:24:30 at manufacturing, you can basically make,

00:24:34 at high volume, you can basically make anything

00:24:36 for a cost that asymptotically approaches

00:24:40 the raw material value of the constituents

00:24:42 plus any intellectual property that you need to license.

00:24:46 Anything.

00:24:49 But it’s hard.

00:24:50 It’s not like that’s a very hard thing to do,

00:24:51 but it is possible for anything.

00:24:54 Anything in volume can be made, like I said,

00:24:57 for a cost that asymptotically approaches

00:25:00 its raw material constituents

00:25:02 plus intellectual property license rights.

00:25:05 So what’ll often happen in trying to design a product

00:25:08 is people will start with the tools and parts

00:25:11 and methods that they are familiar with

00:25:14 and then try to create the product

00:25:17 using their existing tools and methods.

00:25:21 The other way to think about it is actually imagine the,

00:25:25 try to imagine the platonic ideal of the perfect product

00:25:28 or technology, whatever it might be.

00:25:31 And say, what is the perfect arrangement of atoms

00:25:35 that would be the best possible product?

00:25:38 And now let us try to figure out

00:25:39 how to get the atoms in that shape.

00:25:43 I mean, it sounds,

00:25:48 it’s almost like a Rick and Morty absurd

00:25:50 until you start to really think about it

00:25:52 and you really should think about it in this way

00:25:56 because everything else is kind of,

00:25:59 if you think, you might fall victim to the momentum

00:26:03 of the way things were done in the past

00:26:04 unless you think in this way.

00:26:06 Well, just as a function of inertia,

00:26:07 people will want to use the same tools and methods

00:26:10 that they are familiar with.

00:26:13 That’s what they’ll do by default.

00:26:15 And then that will lead to an outcome of things

00:26:18 that can be made with those tools and methods

00:26:20 but is unlikely to be the platonic ideal

00:26:23 of the perfect product.

00:26:25 So then, so that’s why it’s good to think of things

00:26:28 in both directions.

00:26:29 So like, what can we build with the tools that we have?

00:26:31 But then, but also what is the perfect,

00:26:34 the theoretical perfect product look like?

00:26:36 And that theoretical perfect part

00:26:38 is gonna be a moving target

00:26:39 because as you learn more,

00:26:41 the definition of that perfect product will change

00:26:45 because you don’t actually know what the perfect product is,

00:26:47 but you can successfully approximate a more perfect product.

00:26:52 So think about it like that and then saying,

00:26:54 okay, now what tools, methods, materials,

00:26:57 whatever do we need to create

00:27:00 in order to get the atoms in that shape?

00:27:03 But people rarely think about it that way.

00:27:07 But it’s a powerful tool.

00:27:10 I should mention that the brilliant Shivon Zilis

00:27:13 is hanging out with us in case you hear a voice

00:27:17 of wisdom from outside, from up above.

00:27:23 Okay, so let me ask you about Mars.

00:27:26 You mentioned it would be great for science

00:27:28 to put a base on the moon to do some research.

00:27:32 But the truly big leap, again,

00:27:36 in this category of seemingly impossible,

00:27:38 is to put a human being on Mars.

00:27:41 When do you think SpaceX will land a human being on Mars?

00:27:44 Hmm, best case is about five years, worst case, 10 years.

00:28:31 What are the determining factors, would you say,

00:28:34 from an engineering perspective?

00:28:36 Or is that not the bottleneck?

00:28:37 I don’t know, order of magnitude or something like that.

00:28:40 It’s a lot, it’s really next level.

00:28:43 So, and the fundamental optimization of Starship

00:28:49 is minimizing cost per ton to orbit,

00:28:51 and ultimately cost per ton to the surface of Mars.

00:28:54 This may seem like a mercantile objective,

00:28:56 but it is actually the thing that needs to be optimized.

00:29:00 Like there is a certain cost per ton to the surface of Mars

00:29:04 where we can afford to establish a self sustaining city,

00:29:08 and then above that, we cannot afford to do it.

00:29:12 So right now, you couldn’t fly to Mars for a trillion dollars.

00:29:16 No amount of money could get you a ticket to Mars.

00:29:19 So we need to get that cost down

00:29:22 to something that is actually possible at all.

00:29:27 But that’s, we don’t just want to have,

00:29:32 with Mars, flags and footprints,

00:29:33 and then not come back for a half century,

00:29:35 like we did with the Moon.

00:29:37 In order to pass a very important, great filter,

00:29:43 I think we need to be a multi planet species.

00:29:48 This may sound somewhat esoteric to a lot of people,

00:29:51 but eventually, given enough time,

00:29:57 there’s something, Earth is likely to experience

00:30:00 some calamity that could be something

00:30:05 that humans do to themselves,

00:30:06 or an external event like happened to the dinosaurs.

00:30:12 But eventually, if none of that happens,

00:30:17 and somehow magically we keep going,

00:30:21 then the Sun is gradually expanding,

00:30:24 and will engulf the Earth,

00:30:26 and probably Earth gets too hot for life

00:30:31 in about 500 million years.

00:30:34 It’s a long time, but that’s only 10% longer

00:30:37 than Earth has been around.

00:30:39 And so if you think about the current situation,

00:30:43 it’s really remarkable, and kind of hard to believe,

00:30:45 but Earth’s been around four and a half billion years,

00:30:50 and this is the first time in four and a half billion years

00:30:52 that it’s been possible to extend life beyond Earth.

00:30:55 And that window of opportunity may be open

00:30:58 for a long time, and I hope it is,

00:30:59 but it also may be open for a short time.

00:31:01 And I think it is wise for us to act quickly

00:31:09 while the window is open, just in case it closes.

00:31:13 Yeah, the existence of nuclear weapons, pandemics,

00:31:17 all kinds of threats should kind of give us some motivation.

00:31:22 I mean, civilization could die with a bang or a whimper.

00:31:29 If it dies as a demographic collapse,

00:31:32 then it’s more of a whimper, obviously.

00:31:35 And if it’s World War III, it’s more of a bang.

00:31:38 But these are all risks.

00:31:41 I mean, it’s important to think of these things

00:31:42 and just think of things like probabilities, not certainties.

00:31:46 There’s a certain probability

00:31:47 that something bad will happen on Earth.

00:31:50 I think most likely the future will be good.

00:31:53 But there’s like, let’s say for argument’s sake,

00:31:56 a 1% chance per century of a civilization ending event.

00:32:00 Like that was Stephen Hawking’s estimate.

00:32:05 I think he might be right about that.
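
Taking the quoted 1%-per-century estimate at face value, the cumulative risk compounds over time like this:

```python
# Compounding a 1% chance per century of a civilization-ending event
# (the Hawking estimate quoted above).

p_event_per_century = 0.01

def survival_probability(centuries: int) -> float:
    """Probability of no civilization-ending event over N centuries."""
    return (1 - p_event_per_century) ** centuries

print(survival_probability(1))    # 0.99
print(survival_probability(10))   # ~0.904: ~9.6% cumulative risk per millennium
print(survival_probability(100))  # ~0.366: over 10,000 years, risk dominates
```

Even a small per-century probability becomes a near-coin-flip on a 10,000-year horizon, which is the logic behind treating a second planet as insurance.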

00:32:07 So then we should basically think of

00:32:15 being a multi planet species

00:32:18 just like taking out insurance

00:32:20 for life itself.

00:32:21 Like life insurance for life.

00:32:27 It’s turned into an infomercial real quick.

00:32:29 Life insurance for life, yes.

00:32:31 And we can bring the creatures from,

00:32:36 plants and animals from Earth to Mars

00:32:38 and breathe life into the planet

00:32:41 and have a second planet with life.

00:32:44 That would be great.

00:32:46 They can’t bring themselves there.

00:32:47 So if we don’t bring them to Mars,

00:32:50 then they will just for sure all die

00:32:52 when the sun expands anyway.

00:32:54 And then that’ll be it.

00:32:56 What do you think is the most difficult aspect

00:32:58 of building a civilization on Mars?

00:33:00 Terraforming Mars, like from an engineering perspective,

00:33:03 from a financial perspective, human perspective,

00:33:07 to get a large number of folks there

00:33:13 who will never return back to Earth?

00:33:15 No, they could certainly return.

00:33:17 Some will return back to Earth.

00:33:18 They will choose to stay there

00:33:19 for the rest of their lives.

00:33:21 Yeah, many will.

00:33:23 But we need the spaceships back,

00:33:28 like the ones that go to Mars.

00:33:29 We need them back.

00:33:30 So you can hop on if you want.

00:33:32 But we can’t just not have the spaceships come back.

00:33:34 Those things are expensive.

00:33:35 We need them back.

00:33:36 I’d like to come back and do another trip.

00:33:38 I mean, do you think about the terraforming aspect,

00:33:40 like actually building?

00:33:41 Are you so focused right now on the spaceships part

00:33:44 that’s so critical to get to Mars?

00:33:46 We absolutely, if you can’t get there,

00:33:48 nothing else matters.

00:33:49 So, and like I said, we can’t get there

00:33:53 at some extraordinarily high cost.

00:33:54 I mean, the current cost of, let’s say,

00:33:57 one ton to the surface of Mars

00:34:00 is on the order of a billion dollars.

00:34:02 So, because you don’t just need the rocket

00:34:04 and the launch and everything,

00:34:05 you need like heat shield, you need guidance system,

00:34:09 you need deep space communications,

00:34:12 you need some kind of landing system.

00:34:15 So, like rough approximation would be a billion dollars

00:34:19 per ton to the surface of Mars right now.

00:34:22 This is obviously way too expensive

00:34:26 to create a self sustaining civilization.

00:34:30 So we need to improve that by at least a factor of 1,000.

00:34:36 A million per ton?

00:34:38 Yes, ideally much less than a million per ton.

00:34:40 But if it’s not, like it’s gotta be,

00:34:44 so you have to say like, well,

00:34:45 how much can society afford to spend

00:34:47 or just want to spend on a self sustaining city on Mars?

00:34:52 The self sustaining part is important.

00:34:53 Like it’s just the key threshold,

00:34:57 the great filter will have been passed

00:35:01 when the city on Mars can survive

00:35:05 even if the spaceships from Earth stop coming

00:35:07 for any reason, doesn’t matter what the reason is,

00:35:09 but if they stop coming for any reason,

00:35:12 will it die out or will it not?

00:35:13 And if there’s even one critical ingredient missing,

00:35:16 then it still doesn’t count.

00:35:18 It’s like, you know, if you’re on a long sea voyage

00:35:20 and you’ve got everything except vitamin C,

00:35:23 it’s only a matter of time, you know, you’re gonna die.

00:35:26 So we’re gonna get a Mars city

00:35:28 to the point where it’s self sustaining.

00:35:32 I’m not sure this will really happen in my lifetime,

00:35:33 but I hope to see it at least have a lot of momentum.

00:35:37 And then you could say, okay,

00:35:38 what is the minimum tonnage necessary

00:35:40 to have a self sustaining city?

00:35:44 And there’s a lot of uncertainty about this.

00:35:46 You could say like, I don’t know,

00:35:48 it’s probably at least a million tons

00:35:52 cause you have to set up a lot of infrastructure on Mars.
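As an aside, the arithmetic in this exchange, roughly a billion dollars per ton today, a 1,000x improvement target, and at least a million tons of cargo, can be put together in a few lines. All inputs below are the rough figures quoted in the conversation, not official estimates:

```python
# Back-of-the-envelope version of the numbers quoted in the conversation.
# All inputs are the rough figures mentioned above, not official estimates.

current_cost_per_ton = 1e9        # ~$1 billion per ton to the surface of Mars today
improvement_factor = 1_000        # "improve that by at least a factor of 1,000"
target_cost_per_ton = current_cost_per_ton / improvement_factor

min_tonnage = 1e6                 # "probably at least a million tons"
total_cost = target_cost_per_ton * min_tonnage

print(f"target cost per ton: ${target_cost_per_ton:,.0f}")  # $1,000,000
print(f"rough program cost:  ${total_cost:,.0f}")           # $1,000,000,000,000
```

Even at the improved price, the implied program cost is on the order of a trillion dollars, which is why the "much less than a million per ton" aspiration matters.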

00:35:55 Like I said, you can’t be missing anything

00:35:58 that in order to be self sustaining,

00:36:00 you can’t be missing, like you need,

00:36:02 you know, semiconductor, fabs, you need iron ore refineries,

00:36:07 like you need lots of things, you know.

00:36:09 So, and Mars is not super hospitable.

00:36:13 It’s the least inhospitable planet,

00:36:15 but it’s definitely a fixer upper of a planet.

00:36:18 Outside of Earth.

00:36:19 Yes.

00:36:20 Earth is pretty good.

00:36:21 Earth is like easy, yeah.

00:36:22 And also I should, we should clarify in the solar system.

00:36:25 Yes, in the solar system.

00:36:26 There might be nice like vacation spots.

00:36:29 There might be some great planets out there,

00:36:31 but it’s hopeless.

00:36:32 Too hard to get there?

00:36:33 Yeah, way, way, way, way, way too hard to say the least.

00:36:37 Let me push back on that, not really a push back,

00:36:39 but a quick curve ball of a question.

00:36:42 So you did mention physics as the first starting point.

00:36:44 So, general relativity allows for wormholes.

00:36:51 They technically can exist.

00:36:53 Do you think those can ever be leveraged

00:36:55 by humans to travel faster than the speed of light?

00:36:59 Well, the wormhole thing is debatable.

00:37:03 The, we currently do not know of any means

00:37:08 of going faster than the speed of light.

00:37:11 There is like, there are some ideas about having space.

00:37:21 Like so, you can only move at the speed of light

00:37:26 through space, but if you can make space itself move,

00:37:30 that’s like, that’s warping space.

00:37:36 Space is capable of moving faster than the speed of light.

00:37:39 Right.

00:37:40 Like the universe in the Big Bang,

00:37:42 the universe expanded at much,

00:37:44 much more than the speed of light by a lot.

00:37:46 Yeah.

00:37:48 So, but the, if this is possible,

00:37:56 the amount of energy required to warp space

00:37:58 is so gigantic, it boggles the mind.

00:38:03 So all the work you’ve done with propulsion,

00:38:05 how much innovation is possible with rocket propulsion?

00:38:08 Is this, I mean, you’ve seen it all,

00:38:11 and you’re constantly innovating in every aspect.

00:38:14 How much is possible?

00:38:15 Like how much, can you get 10x somehow?

00:38:17 Is there something in there in physics

00:38:19 that you can get significant improvement

00:38:21 in terms of efficiency of engines

00:38:22 and all those kinds of things?

00:38:24 Well, as I was saying, really the Holy Grail

00:38:27 is a fully and rapidly reusable orbital system.

00:38:33 So right now, the Falcon 9

00:38:38 is the only reusable rocket out there,

00:38:41 but the booster comes back and lands,

00:38:44 and you’ve seen the videos,

00:38:46 and we get the nose cone fairing back,

00:38:47 but we do not get the upper stage back.

00:38:49 So that means that we have a minimum cost

00:38:54 of building an upper stage.

00:38:56 And you can think of like a two stage rocket

00:38:59 of sort of like two airplanes,

00:39:00 like a big airplane and a small airplane.

00:39:03 And we get the big airplane back,

00:39:04 but not the small airplane.

00:39:05 And so it still costs a lot.

00:39:07 So that upper stage is at least $10 million.

00:39:13 And then the degree of,

00:39:15 the booster is not as rapidly and completely reusable

00:39:19 as we’d like, nor are the fairings.

00:39:20 So our kind of minimum marginal cost

00:39:25 not counting overhead, per flight

00:39:27 is on the order of 15 to $20 million, maybe.

00:39:33 So that’s extremely good for,

00:39:38 it’s by far better than any rocket ever in history.

00:39:41 But with full and rapid reusability,

00:39:45 we can reduce the cost per ton to orbit

00:39:48 by a factor of 100.

00:39:51 But just think of it like,

00:39:54 like imagine if you had an aircraft or something or a car.

00:39:58 And if you had to buy a new car

00:40:02 every time you went for a drive,

00:40:05 that would be very expensive.

00:40:06 It’d be silly, frankly.

00:40:08 But in fact, you just refuel the car or recharge the car.

00:40:13 And that makes your trip like,

00:40:18 I don’t know, a thousand times cheaper.

00:40:20 So it’s the same for rockets.

00:40:23 If you, it’s very difficult to make this complex machine

00:40:27 that can go to orbit.

00:40:28 And so if you cannot reuse it

00:40:30 and have to throw even any part of,

00:40:32 any significant part of it away,

00:40:34 that massively increases the cost.

00:40:36 So, you know, Starship in theory

00:40:40 could do a cost per launch of like a million,

00:40:44 maybe $2 million or something like that.

00:40:46 And put over a hundred tons in orbit, which is crazy.
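A rough cost-per-ton comparison makes the reusability point concrete. The Starship figures are the ones quoted above; the Falcon 9 payload number (~15 tons reusable to low Earth orbit) is my assumption, not something stated in the conversation:

```python
# Hypothetical cost-per-ton comparison; the Falcon 9 payload is an assumed figure.

falcon9_marginal_cost = 17.5e6    # midpoint of the $15-20M per flight quoted above
falcon9_payload_tons = 15         # assumed reusable payload to LEO (my number)

starship_cost = 2e6               # "maybe $2 million or something like that"
starship_payload_tons = 100       # "over a hundred tons in orbit"

f9_per_ton = falcon9_marginal_cost / falcon9_payload_tons
ss_per_ton = starship_cost / starship_payload_tons

print(f"Falcon 9: ~${f9_per_ton:,.0f} per ton")
print(f"Starship: ~${ss_per_ton:,.0f} per ton (~{f9_per_ton / ss_per_ton:.0f}x cheaper)")
```

Under these assumptions the reduction is several tens of times; with a cheaper launch and a larger payload it approaches the factor of 100 mentioned earlier.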

00:40:53 Yeah, that’s incredible.

00:40:55 So you’re saying like it’s by far the biggest bang

00:40:58 for the buck is to make it fully reusable

00:41:00 versus like some kind of brilliant breakthrough

00:41:04 in theoretical physics.

00:41:05 No, no, there’s no, there’s no brilliant break.

00:41:07 No, there’s no, just make the rocket reusable.

00:41:11 This is an extremely difficult engineering problem.

00:41:13 Got it.

00:41:14 No new physics is required.

00:41:17 Just brilliant engineering.

00:41:19 Let me ask a slightly philosophical fun question.

00:41:22 Gotta ask, I know you’re focused on getting to Mars,

00:41:24 but once we’re there on Mars, what do you,

00:41:27 what form of government, economic system, political system

00:41:32 do you think would work best

00:41:33 for an early civilization of humans?

00:41:37 Is, I mean, the interesting reason to talk about this stuff,

00:41:41 it also helps people dream about the future.

00:41:44 I know you’re really focused

00:41:45 about the short term engineering dream,

00:41:48 but it’s like, I don’t know,

00:41:49 there’s something about imagining an actual civilization

00:41:51 on Mars that gives people, really gives people hope.

00:41:55 Well, it would be a new frontier and an opportunity

00:41:57 to rethink the whole nature of government,

00:41:59 just as was done in the creation of the United States.

00:42:02 So, I mean, I would suggest having a direct democracy,

00:42:14 people vote directly on things

00:42:16 as opposed to representative democracy.

00:42:18 So representative democracy, I think,

00:42:21 is too subject to a special interest

00:42:25 and a coercion of the politicians and that kind of thing.

00:42:29 So I’d recommend that there was just direct democracy,

00:42:39 people vote on laws, the population votes on laws themselves,

00:42:42 and then the laws must be short enough

00:42:44 that people can understand them.

00:42:46 Yeah, and then like keeping a well informed populace,

00:42:49 like really being transparent about all the information,

00:42:52 about what they’re voting for.

00:42:53 Yeah, absolute transparency.

00:42:54 Yeah, and not make it as annoying as those cookies

00:42:57 we have to accept. Accept cookies.

00:42:59 I’ve always, like, you know,

00:43:01 there’s like always like a slight amount of trepidation

00:43:03 when you click accept cookies,

00:43:05 like I feel as though there’s like perhaps

00:43:07 like a very tiny chance that it’ll open a portal to hell

00:43:10 or something like that.

00:43:12 That’s exactly how I feel.

00:43:13 Why do they, why do they keep wanting me to accept it?

00:43:16 What do they want with this cookie?

00:43:19 Like somebody got upset with accepting cookies

00:43:21 or something somewhere, who cares?

00:43:23 Like so annoying to keep accepting all these cookies.

00:43:26 To me, this is just a great example.

00:43:29 Yes, you can have my damn cookie.

00:43:30 I don’t care, whatever.

00:43:32 Heard it from Elon first.

00:43:33 He accepts all your damn cookies.

00:43:35 Yeah, and stop asking me.

00:43:40 It’s annoying.

00:43:41 Yeah, it’s one example of implementation

00:43:46 of a good idea done really horribly.

00:43:50 Yeah, it’s somebody who was like,

00:43:51 there’s some good intentions of like privacy or whatever,

00:43:54 but now everyone’s just has to accept cookies

00:43:57 and it’s not, you know, you have billions of people

00:43:59 who have to keep clicking accept cookie.

00:44:00 It’s super annoying.

00:44:02 Then we just accept the damn cookie, it’s fine.

00:44:05 There is like, I think a fundamental problem that we’re,

00:44:08 because we’ve not really had a major,

00:44:12 like a world war or something like that in a while.

00:44:14 And obviously we would like to not have world wars.

00:44:18 There’s not been a cleansing function

00:44:19 for rules and regulations.

00:44:21 So wars did have, you know, some sort of aligning

00:44:24 in that there would be a reset on rules

00:44:27 and regulations after a war.

00:44:29 So World Wars I and II,

00:44:30 there were huge resets on rules and regulations.

00:44:34 Now, if society does not have a war

00:44:37 and there’s no cleansing function

00:44:39 or garbage collection for rules and regulations,

00:44:41 then rules and regulations will accumulate every year

00:44:43 because they’re immortal.

00:44:45 There’s no actual, humans die, but the laws don’t.

00:44:48 So we need a garbage collection function

00:44:51 for rules and regulations.

00:44:52 They should not just be immortal

00:44:55 because some of the rules and regulations

00:44:57 that are put in place will be counterproductive,

00:45:00 done with good intentions, but counterproductive.

00:45:02 Sometimes not done with good intentions.

00:45:03 So if rules and regulations just accumulate every year

00:45:09 and you get more and more of them,

00:45:10 then eventually you won’t be able to do anything.

00:45:13 You’re just like Gulliver with, you know,

00:45:15 tied down by thousands of little strings.

00:45:19 And we see that in, you know, US and like basically

00:45:26 all economies that have been around for a while

00:45:31 and regulators and legislators create new rules

00:45:35 and regulations every year,

00:45:36 but they don’t put effort into removing them.

00:45:38 And I think that’s very important that we put effort

00:45:40 into removing rules and regulations.

00:45:44 But it gets tough because you get special interests

00:45:45 that then are dependent on, like they have, you know,

00:45:48 a vested interest in that whatever rule and regulation

00:45:51 and then they fight to not get it removed.

00:45:57 Yeah, so I mean, I guess the problem with the constitution

00:46:00 is it’s kind of like C versus Java

00:46:04 because it doesn’t have any garbage collection built in.

00:46:06 I think there should be, when you first said

00:46:09 the metaphor of garbage collection, I loved it.

00:46:10 Yeah, from a coding standpoint.

00:46:12 From a coding standpoint, yeah, yeah.

00:46:14 It would be interesting if the laws themselves

00:46:16 kind of had a built in thing

00:46:19 where they kind of die after a while

00:46:20 unless somebody explicitly publicly defends them.

00:46:23 So that’s sort of, it’s not like somebody has to kill them.

00:46:26 They kind of die themselves.

00:46:28 They disappear.

00:46:29 Yeah.

00:46:32 Not to defend Java or anything, but you know, C++,

00:46:36 you know, you could also have a great garbage collection

00:46:38 in Python and so on.

00:46:39 Yeah, so yeah, something needs to happen

00:46:43 or just the civilization’s arteries just harden over time

00:46:48 and you can just get less and less done

00:46:50 because there’s just a rule against everything.

00:46:54 So I think like, I don’t know, for Mars or whatever,

00:46:57 I’d say, or even for, you know, obviously for Earth as well,

00:47:00 like I think there should be an active process

00:47:02 for removing rules and regulations

00:47:04 and questioning their existence.

00:47:07 Just like if we’ve got a function

00:47:10 for creating rules and regulations,

00:47:11 because rules and regulations can also be thought of

00:47:13 as software or lines of code

00:47:15 for operating civilization.

00:47:18 That’s the rules and regulations.

00:47:21 So it’s not like we shouldn’t have rules and regulations,

00:47:22 but you have code accumulation, but no code removal.

00:47:27 And so it just gets to become basically archaic bloatware

00:47:31 after a while.

00:47:33 And it’s just, it makes it hard for things to progress.

00:47:37 So I don’t know, maybe Mars, you’d have like,

00:47:40 you know, any given law must have a sunset, you know,

00:47:44 and require active voting to keep it up there, you know.

00:47:52 And I actually also say like, and these are just,

00:47:54 I don’t know, recommendations or thoughts,

00:47:58 ultimately will be up to the people on Mars to decide.

00:48:00 But I think it should be easier to remove a law

00:48:06 than to add one because of the,

00:48:08 just to overcome the inertia of laws.

00:48:10 So maybe it’s like, for argument’s sake,

00:48:15 you need like say 60% vote to have a law take effect,

00:48:19 but only a 40% vote to remove it.
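The asymmetric-threshold-plus-sunset idea can be sketched in a few lines. The 60%/40% numbers are the for-argument's-sake figures above; everything else here (the `Law` class, the sunset mechanism) is purely illustrative:

```python
# Illustrative sketch of asymmetric voting thresholds plus a sunset clause.
# Thresholds are the for-argument's-sake numbers from the conversation.

from dataclasses import dataclass

ENACT_THRESHOLD = 0.60   # 60% yes needed for a law to take effect
REPEAL_THRESHOLD = 0.40  # only 40% yes needed to remove it

@dataclass
class Law:
    text: str
    sunset_year: int      # law lapses at this year unless actively re-approved
    active: bool = False

def vote_enact(law: Law, yes_fraction: float) -> bool:
    # deliberately harder to add a law than to remove one
    if yes_fraction >= ENACT_THRESHOLD:
        law.active = True
    return law.active

def vote_repeal(law: Law, yes_fraction: float) -> bool:
    if yes_fraction >= REPEAL_THRESHOLD:
        law.active = False
    return not law.active

def garbage_collect(laws: list, year: int) -> None:
    # the "garbage collection function": expired laws die by default
    for law in laws:
        if year >= law.sunset_year:
            law.active = False
```

The point of `garbage_collect` is that a law's default fate is expiry: keeping it requires an active re-approval vote, rather than removal requiring one.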

00:48:23 So let me be the guy, you posted a meme on Twitter recently

00:48:26 where there’s like a row of urinals,

00:48:30 and a guy just walks all the way across,

00:48:33 and he tells you about crypto.

00:48:36 I mean, that’s happened to me so many times.

00:48:38 I think maybe even literally.

00:48:40 Yeah.

00:48:41 Do you think, technologically speaking,

00:48:43 there’s any room for ideas of smart contracts or so on?

00:48:47 Because you mentioned laws.

00:48:49 That’s an interesting use of things like smart contracts

00:48:52 to implement the laws by which governments function.

00:48:57 Like something built on Ethereum,

00:48:58 or maybe a dog coin that enables smart contracts somehow.

00:49:04 I don’t quite understand this whole smart contract thing.

00:49:08 You know.

00:49:09 I mean, I’m too dumb to understand smart contracts.

00:49:15 That’s a good line.

00:49:17 I mean, my general approach to any kind of deal or whatever

00:49:21 is just make sure there’s clarity of understanding.

00:49:23 That’s the most important thing.

00:49:25 And just keep any kind of deal very short and simple,

00:49:29 plain language, and just make sure everyone understands

00:49:33 this is the deal, is it clear?

00:49:36 And what are the consequences if various things

00:49:40 don’t happen?

00:49:42 But usually deals are, business deals or whatever,

00:49:47 are way too long and complex and overly lawyered

00:49:50 and pointlessly.

00:49:52 You mentioned that Doge is the people’s coin.

00:49:57 Yeah.

00:49:57 And you said that you were literally going,

00:49:59 SpaceX may consider literally putting a Doge coin

00:50:04 on the moon, is this something you’re still considering?

00:50:09 Mars, perhaps, do you think there’s some chance,

00:50:13 we’ve talked about political systems on Mars,

00:50:16 that Doge coin is the official currency of Mars

00:50:20 at some point in the future?

00:50:22 Well, I think Mars itself will need to have

00:50:25 a different currency because you can’t synchronize

00:50:29 due to speed of light, or not easily.

00:50:32 So it must be completely stand alone from Earth.

00:50:36 Well, yeah, because Mars is, at closest approach,

00:50:41 it’s four light minutes away, roughly,

00:50:43 and then at furthest approach, it’s roughly

00:50:45 20 light minutes away, maybe a little more.

00:50:50 So you can’t really have something synchronizing

00:50:52 if you’ve got a 20 minute speed of light issue,

00:50:55 if it’s got a one minute blockchain.

00:50:58 It’s not gonna synchronize properly.
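For reference, those light-delay figures can be checked from approximate Earth-Mars distances. The distance values here are my approximations, not from the conversation; the result, roughly 3 and 22 minutes, is consistent with the "four... 20, maybe a little more" quoted above:

```python
# Sanity check of the Mars light-delay figures, from approximate distances
# (distance values are my assumptions, for illustration).

C_KM_S = 299_792.458              # speed of light, km/s
distances_km = {
    "closest approach": 54.6e6,   # approximate minimum Earth-Mars distance
    "farthest": 401e6,            # approximate maximum, near superior conjunction
}

delay_min = {label: d / C_KM_S / 60 for label, d in distances_km.items()}
for label, minutes in delay_min.items():
    print(f"{label}: ~{minutes:.1f} light-minutes one way")
```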

00:50:59 So Mars, I don’t know if Mars would have

00:51:03 a cryptocurrency as a thing, but probably, seems likely.

00:51:07 But it would be some kind of localized thing on Mars.

00:51:12 And you let the people decide.

00:51:14 Yeah, absolutely.

00:51:17 The future of Mars should be up to the Martians.

00:51:20 Yeah, so, I think the cryptocurrency thing

00:51:25 is an interesting approach to reducing

00:51:30 the error in the database that is called money.

00:51:41 I think I have a pretty deep understanding

00:51:42 of what money actually is on a practical day to day basis

00:51:46 because of PayPal.

00:51:50 We really got in deep there.

00:51:52 And right now, the money system, actually,

00:51:57 for practical purposes, is really a bunch

00:52:01 of heterogeneous mainframes running old COBOL.

00:52:07 Okay, you mean literally.

00:52:08 That’s literally what’s happening.

00:52:10 In batch mode.

00:52:12 Okay.

00:52:13 In batch mode.

00:52:14 Yeah, pity the poor bastards who have

00:52:16 to maintain that code.

00:52:19 Okay, that’s a pain.

00:52:22 Not even Fortran, it’s COBOL.

00:52:24 It’s COBOL.

00:52:26 And the banks are still buying mainframes in 2021

00:52:30 and running ancient COBOL code.

00:52:33 And the Federal Reserve is probably even older

00:52:37 than what the banks have, and they have

00:52:39 an old COBOL mainframe.

00:52:41 And so, the government effectively has editing privileges

00:52:47 on the money database.

00:52:48 And they use those editing privileges to make more money,

00:52:53 whatever they want.

00:52:55 And this increases the error in the database that is money.

00:52:59 So, I think money should really be viewed

00:53:00 through the lens of information theory.

00:53:03 And so, it’s kind of like an internet connection.

00:53:08 Like what’s the bandwidth, total bit rate,

00:53:12 what is the latency, jitter, packet drop,

00:53:16 you know, errors in network communication.

00:53:21 Just think of money like that, basically.

00:53:24 I think that’s probably the right way to think of it.

00:53:26 And then say what system from an information theory

00:53:31 standpoint allows an economy to function the best.

00:53:35 And, you know, crypto is an attempt to reduce

00:53:40 the error in money that is contributed

00:53:48 by governments diluting the money supply

00:53:53 as basically a pernicious form of taxation.

00:53:58 So, both policy in terms of with inflation

00:54:01 and actual like technological COBOL,

00:54:05 like cryptocurrency takes us into the 21st century

00:54:08 in terms of the actual systems

00:54:10 that allow you to do the transaction,

00:54:12 to store wealth, all those kinds of things.

00:54:16 Like I said, just think of money as information.

00:54:18 People often will think of money

00:54:20 as having power in and of itself.

00:54:24 It does not.

00:54:24 Money is information and it does not have power

00:54:28 in and of itself.

00:54:31 Like, you know, applying the physics tools

00:54:35 of thinking about things in the limit is helpful.

00:54:37 If you are stranded on a tropical island

00:54:41 and you have a trillion dollars, it’s useless

00:54:47 because there’s no resource allocation.

00:54:50 Money is a database for resource allocation,

00:54:52 but there’s no resource to allocate except yourself.

00:54:55 So, money is useless.

00:55:01 If you’re stranded on a desert island with no food,

00:55:04 all the Bitcoin in the world will not stop you

00:55:10 from starving.

00:55:12 So, just think of money as a database

00:55:20 for resource allocation across time and space.

00:55:24 And then what system, in what form

00:55:29 should that database or data system,

00:55:37 what would be most effective?

00:55:39 Now, there is a fundamental issue

00:55:41 with say Bitcoin in its current form

00:55:46 in that the transaction volume is very limited

00:55:50 and the latency for a properly confirmed transaction

00:55:55 is too long, much longer than you’d like.

00:55:58 So, it’s actually not great from a transaction volume

00:56:02 standpoint or a latency standpoint.

00:56:07 So, it is perhaps useful to solve an aspect

00:56:12 of the money database problem, which is the sort of store

00:56:17 of wealth or an accounting of relative obligations,

00:56:22 I suppose, but it is not useful as a currency,

00:56:27 as a day to day currency.

00:56:28 But people have proposed different technological solutions.

00:56:31 Like Lightning.

00:56:32 Yeah, Lightning Network and the layer two technologies

00:56:34 on top of that.

00:56:35 I mean, it seems to be all kind of a trade off,

00:56:38 but the point is, it’s kind of brilliant to say

00:56:41 that just think about information,

00:56:42 think about what kind of database,

00:56:44 what kind of infrastructure enables

00:56:45 that exchange of information.

00:56:46 Yeah, just say like you’re operating an economy

00:56:49 and you need to have some thing that allows

00:56:55 for the efficient, to have efficient value ratios

00:56:59 between products and services.

00:57:01 So, you’ve got this massive number of products

00:57:03 and services and you need to, you can’t just barter.

00:57:06 It’s just like, that would be extremely unwieldy.

00:57:09 So, you need something that gives you the ratio

00:57:13 of exchange between goods and services.

00:57:20 And then something that allows you

00:57:22 to shift obligations across time, like debt.

00:57:26 Debt and equity shift obligations across time.

00:57:29 Then what does the best job of that?

00:57:33 Part of the reason why I think there’s some

00:57:36 merit to Dogecoin, even though it was obviously created

00:57:38 as a joke, is that it actually does have

00:57:44 a much higher transaction volume capability than Bitcoin.

00:57:49 And the costs of doing a transaction,

00:57:53 the Doge coin fee is very low.

00:57:55 Like right now, if you want to do a Bitcoin transaction,

00:57:58 the price of doing that transaction is very high.

00:58:00 So, you could not use it effectively for most things.

00:58:04 And nor could it even scale to a high volume.

00:58:11 And when Bitcoin started, I guess around 2008

00:58:15 or something like that, the internet connections

00:58:18 were much worse than they are today.

00:58:20 Like, order of magnitude, I mean,

00:58:23 just way, way worse in 2008.

00:58:26 So, like having a small block size or whatever

00:58:31 is, and a long synchronization time made sense in 2008.

00:58:37 But 2021, or fast forward 10 years,

00:58:41 it’s like comically low.

00:58:50 And I think there’s some value to having a linear increase

00:58:55 in the amount of currency that is generated.

00:58:58 So, because some amount of the currency,

00:59:01 like if a currency is too deflationary,

00:59:05 or I should say, if a currency is expected

00:59:10 to increase in value over time,

00:59:11 there’s reluctance to spend it.

00:59:14 Because you’re like, oh, I’ll just hold it and not spend it

00:59:17 because it’s scarcity is increasing with time.

00:59:19 So, if I spend it now, then I will regret spending it.

00:59:22 So, I will just, you know, hodl it.

00:59:24 But if there’s some dilution of the currency occurring

00:59:27 over time, that’s more of an incentive

00:59:29 to use it as a currency.

00:59:31 So, Dogecoin, somewhat randomly, has just a fixed

00:59:39 number of sort of coins or hash strings

00:59:43 that are generated every year.

00:59:46 So, there’s some inflation, but it’s not percentage based.

00:59:49 It’s not a percentage of the total amount of money,

00:59:54 it’s a fixed number.

00:59:55 So, the percentage of inflation

00:59:58 will necessarily decline over time.
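The fixed-issuance point is easy to see numerically. The ~5 billion DOGE per year figure is the commonly cited approximation, and the starting supply is an assumed round number for illustration:

```python
# Fixed absolute issuance => the inflation *rate* necessarily declines over time.

annual_issuance = 5e9     # ~5B DOGE minted per year (commonly cited approximation)
start_supply = 130e9      # assumed starting supply, for illustration only

rates = []
for years in range(0, 50, 10):
    supply = start_supply + annual_issuance * years
    rate = annual_issuance / supply
    rates.append(rate)
    print(f"+{years:2d} yrs: supply {supply / 1e9:.0f}B, inflation {rate:.2%}")
```

Each decade the same absolute issuance is divided by a larger supply, so the printed inflation percentage falls steadily.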

01:00:02 So, I’m not saying that it’s like the ideal system

01:00:06 for a currency, but I think it actually is

01:00:09 just fundamentally better than anything else I’ve seen

01:00:13 just by accident, so.

01:00:16 I like how you said around 2008.

01:00:19 So, you’re not, you know, some people suggested

01:00:23 you might be Satoshi Nakamoto.

01:00:24 You’ve previously said you’re not.

01:00:26 Let me ask.

01:00:27 You’re not for sure.

01:00:28 Would you tell us if you were?

01:00:30 Yes.

01:00:30 Okay.

01:00:33 Do you think it’s a feature or a bug

01:00:34 that he’s anonymous or she or they?

01:00:38 It’s an interesting kind of quirk of human history

01:00:41 that there is a particular technology

01:00:43 that is a completely anonymous inventor

01:00:46 or creator.

01:01:03 Well, I mean, you can look at the evolution of ideas

01:01:10 before the launch of Bitcoin

01:01:11 and see who wrote, you know, about those ideas.

01:01:19 And then, like, I don’t know exactly,

01:01:21 obviously I don’t know who created Bitcoin

01:01:24 for practical purposes,

01:01:25 but the evolution of ideas is pretty clear for that.

01:01:28 And like, it seems as though like Nick Szabo

01:01:31 is probably more than anyone else responsible

01:01:35 for the evolution of those ideas.

01:01:37 So, he claims not to be Satoshi Nakamoto,

01:01:41 but that’s neither here nor there,

01:01:44 but he seems to be the one more responsible

01:01:47 for the ideas behind Bitcoin than anyone else.

01:01:50 So, perhaps singular figures

01:01:52 aren’t even as important as the people involved

01:01:55 in the evolution of the ideas, so.

01:01:58 Yeah.

01:01:58 Yeah, it’s, you know, perhaps it’s sad

01:02:02 to think about history,

01:02:03 but maybe most names will be forgotten anyway.

01:02:06 What is a name anyway?

01:02:07 It’s a name attached to an idea.

01:02:11 What does it even mean, really?

01:02:13 I think Shakespeare had a thing about roses and stuff,

01:02:16 whatever he said.

01:02:17 A rose by any other name would smell as sweet.

01:02:22 I got Elon to quote Shakespeare.

01:02:24 I feel like I accomplished something today.

01:02:26 Shall I compare thee to a summer’s day?

01:02:28 What?

01:02:30 I’m gonna clip that out.

01:02:31 I said it to people.

01:02:33 Thou art more temperate and more fair.

01:02:39 Autopilot.

01:02:40 Tesla autopilot.

01:02:46 Tesla autopilot has been through an incredible journey

01:02:48 over the past six years,

01:02:50 or perhaps even longer in the minds of,

01:02:52 in your mind and the minds of many involved.

01:02:57 Yeah, I think that’s where we first like connected really

01:02:58 was the autopilot stuff, autonomy and.

01:03:01 The whole journey was incredible to me to watch.

01:03:05 I was, because I knew, well, part of it is I was at MIT

01:03:10 and I knew the difficulty of computer vision.

01:03:13 And I knew the whole, I had a lot of colleagues and friends

01:03:15 about the DARPA challenge and knew how difficult it is.

01:03:18 And so there was a natural skepticism

01:03:20 when I first drove a Tesla with the initial system

01:03:23 based on Mobileye.

01:03:25 I thought there’s no way, so first when I got in,

01:03:28 I thought there’s no way this car could maintain,

01:03:32 like stay in the lane and create a comfortable experience.

01:03:35 So my intuition initially was that the lane keeping problem

01:03:39 is way too difficult to solve.

01:03:41 Oh, lane keeping, yeah, that’s relatively easy.

01:03:43 Well, like, but not this, but solve in the way

01:03:47 that we just, we talked about previous is prototype

01:03:50 versus a thing that actually creates a pleasant experience

01:03:54 over hundreds of thousands of miles or millions.

01:03:57 Yeah, so we had to wrap a lot of code

01:04:00 around the Mobileye thing.

01:04:01 It doesn’t just work by itself.

01:04:04 I mean, that’s part of the story

01:04:06 of how you approach things sometimes.

01:04:07 Sometimes you do things from scratch.

01:04:09 Sometimes at first you kind of see what’s out there

01:04:12 and then you decide to do from scratch.

01:04:14 That was one of the boldest decisions I’ve seen

01:04:17 is both in the hardware and the software

01:04:18 to decide to eventually go from scratch.

01:04:21 I thought, again, I was skeptical

01:04:22 of whether that’s going to be able to work out

01:04:24 because it’s such a difficult problem.

01:04:26 And so it was an incredible journey

01:04:28 what I see now with everything,

01:04:31 the hardware, the compute, the sensors,

01:04:33 the things I maybe care and love about most

01:04:37 is the stuff that Andrej Karpathy is leading

01:04:40 with the data set selection,

01:04:41 the whole data engine process,

01:04:43 the neural network architectures,

01:04:45 the way that, in the real world,

01:04:47 the network is tested, validated,

01:04:49 all the different test sets,

01:04:52 versus the ImageNet model of computer vision,

01:04:54 like what’s in academia is like real world

01:04:58 artificial intelligence.

01:05:01 And Andrej’s awesome and obviously plays an important role,

01:05:04 but we have a lot of really talented people driving things.

01:05:09 And Ashok is actually the head of autopilot engineering.

01:05:14 Andrej’s the director of AI.

01:05:16 AI stuff, yeah, yeah.

01:05:17 So yeah, I’m aware that there’s an incredible team

01:05:20 of just a lot going on.

01:05:22 Yeah, obviously people will give me too much credit

01:05:26 and they’ll give Andrej too much credit, so.

01:05:28 And people should realize how much is going on

01:05:31 under the hood.

01:05:32 Yeah, it’s just a lot of really talented people.

01:05:36 The Tesla Autopilot AI team is extremely talented.

01:05:40 It’s like some of the smartest people in the world.

01:05:43 So yeah, we’re getting it done.

01:05:45 What are some insights you’ve gained

01:05:47 over those five, six years of autopilot

01:05:51 about the problem of autonomous driving?

01:05:54 So you leaped in having some sort of

01:05:58 first principles kinds of intuitions,

01:06:00 but nobody knows how difficult the problem is, like the full problem.

01:06:05 I thought the self driving problem would be hard,

01:06:07 but it was harder than I thought.

01:06:08 It’s not like I thought it would be easy.

01:06:09 I thought it would be very hard,

01:06:10 but it was actually way harder than even that.

01:06:14 So, I mean, what it comes down to at the end of the day

01:06:17 is to solve self driving,

01:06:18 you basically need to recreate what humans do to drive,

01:06:28 which is humans drive with optical sensors,

01:06:31 eyes and biological neural nets.

01:06:34 And so in order to,

01:06:36 that’s how the entire road system is designed to work

01:06:39 with basically passive optical and neural nets,

01:06:45 biologically.

01:06:46 And now that we need to,

01:06:47 so for actually for full self driving to work,

01:06:50 we have to recreate that in digital form.

01:06:52 So we have to, that means cameras with advanced neural nets

01:07:01 in silicon form, and then it will obviously solve

01:07:06 for full self driving.

01:07:08 That’s the only way.

01:07:09 I don’t think there’s any other way.

01:07:10 But the question is what aspects of human nature

01:07:12 do you have to encode into the machine, right?

01:07:15 Do you have to solve the perception problem, like detect?

01:07:18 And then you first realize

01:07:21 what is the perception problem for driving,

01:07:23 like all the kinds of things you have to be able to see,

01:07:25 like what do we even look at when we drive?

01:07:27 I just recently heard Andrej’s talk at MIT

01:07:32 about car doors.

01:07:33 I think it was the world’s greatest talk of all time

01:07:36 about car doors, the fine details of car doors.

01:07:41 Like what is even an open car door, man?

01:07:44 So like the ontology of that,

01:07:46 that’s a perception problem.

01:07:48 We humans solve that perception problem,

01:07:49 and Tesla has to solve that problem.

01:07:51 And then there’s the control and the planning

01:07:53 coupled with the perception.

01:07:54 You have to figure out like what’s involved in driving,

01:07:58 like especially in all the different edge cases.

01:08:02 And then, I mean, maybe you can comment on this,

01:08:06 how much game theoretic kind of stuff needs to be involved

01:08:10 at a four way stop sign.

01:08:12 As humans, when we drive, our actions affect the world.

01:08:18 Like it changes how others behave.

01:08:20 Most autonomous driving systems, though,

01:08:23 are usually just responding to the scene

01:08:27 as opposed to like really asserting yourself in the scene.

01:08:31 Do you think?

01:08:33 I think these sort of control logic conundrums

01:08:37 are not the hard part.

01:08:39 The, you know, let’s see.

01:08:45 What do you think is the hard part

01:08:46 in this whole beautiful, complex problem?

01:08:50 So it’s a lot of freaking software, man.

01:08:52 A lot of smart lines of code.

01:08:57 For sure, in order to have,

01:09:01 create an accurate vector space.

01:09:03 So like you’re coming from image space,

01:09:08 which is like this flow of photons,

01:09:12 you’re going to the camera, cameras,

01:09:14 and then you have this massive bitstream

01:09:21 in image space, and then you have to effectively compress

01:09:29 a massive bitstream corresponding to photons

01:09:34 that knocked off an electron in a camera sensor

01:09:38 and turn that bitstream into a vector space.

01:09:44 By vector space, I mean like, you know,

01:09:47 you’ve got cars and humans and lane lines and curves

01:09:55 and traffic lights and that kind of thing.

01:09:59 Once you’ve got all of that in your head,

01:10:03 once you have an accurate vector space,

01:10:08 the control problem is similar to that of a video game,

01:10:11 like a Grand Theft Auto or Cyberpunk,

01:10:14 if you have accurate vector space.

01:10:16 It’s the control problem is,

01:10:18 I wouldn’t say it’s trivial, it’s not trivial,

01:10:20 but it’s not like some insurmountable thing.

01:10:29 Having an accurate vector space is very difficult.

01:10:32 Yeah, I think we humans don’t give enough respect

01:10:35 to how incredible the human perception system is

01:10:37 to mapping the raw photons to the vector space

01:10:42 representation in our heads.

01:10:44 Your brain is doing an incredible amount of processing

01:10:48 and giving you an image that is a very cleaned up image.

01:10:51 Like when we look around here, we see,

01:10:53 like you see color in the corners of your eyes,

01:10:55 but actually your eyes have very few cones,

01:10:59 like cone receptors in the peripheral vision.

01:11:02 Your eyes are painting color in the peripheral vision.

01:11:05 You don’t realize it,

01:11:06 but your brain is actually painting in color

01:11:09 and your eyes also have these blood vessels

01:11:12 and all sorts of gnarly things and there’s a blind spot,

01:11:14 but do you see your blind spot?

01:11:16 No, your brain is painting in the missing, the blind spot.

01:11:21 You’re gonna do these things online where you look here

01:11:24 and look at this point and then look at this point

01:11:27 and if it’s in your blind spot,

01:11:30 your brain will just fill in the missing bits.

01:11:33 The peripheral vision is so cool.

01:11:35 It makes you think of all the illusions from vision science,

01:11:38 so it makes you realize just how incredible the brain is.

01:11:40 The brain is doing a crazy amount of post processing

01:11:42 on the vision signals from your eyes.

01:11:45 It’s insane.

01:11:49 And then even once you get all those vision signals,

01:11:51 your brain is constantly trying to forget

01:11:56 as much as possible.

01:11:57 So human memory is,

01:11:59 perhaps the weakest thing about the brain is memory.

01:12:01 So because memory is so expensive to our brain

01:12:05 and so limited,

01:12:06 your brain is trying to forget as much as possible

01:12:09 and distill the things that you see

01:12:12 into the smallest amounts of information possible.

01:12:16 So your brain is trying to not just get to a vector space,

01:12:19 but get to a vector space that is the smallest possible

01:12:22 vector space of only relevant objects.

01:12:26 And I think you can sort of look inside your brain

01:12:29 or at least I can,

01:12:31 when you drive down the road and try to think about

01:12:35 what your brain is actually doing consciously.

01:12:38 And it’s like you’ll see a car,

01:12:44 because you don’t have cameras,

01:12:46 you don’t have eyes in the back of your head or on the side.

01:12:48 So you basically have like two cameras on a slow gimbal.

01:13:00 And eyesight is not that great.

01:13:01 Okay, human eyes are like,

01:13:04 and people are constantly distracted

01:13:05 and thinking about things and texting

01:13:07 and doing all sorts of things they shouldn’t do in a car,

01:13:09 changing the radio station.

01:13:10 So having arguments is like,

01:13:15 so when’s the last time you looked right and left

01:13:22 and rearward, or even diagonally forward

01:13:27 to actually refresh your vector space?

01:13:30 So you’re glancing around and what your mind is doing

01:13:32 is trying to distill the relevant vectors,

01:13:37 basically objects with a position and motion.

01:13:40 And then editing that down to the least amount

01:13:48 that’s necessary for you to drive.

01:13:49 It does seem to be able to edit it down

01:13:53 or compress it even further into things like concepts.

01:13:55 So it’s not, it’s like it goes beyond,

01:13:57 the human mind seems to go sometimes beyond vector space

01:14:01 to sort of space of concepts to where you’ll see a thing.

01:14:05 It’s no longer represented spatially somehow.

01:14:07 It’s almost like a concept that you should be aware of.

01:14:10 Like if this is a school zone,

01:14:12 you’ll remember that as a concept,

01:14:14 which is a weird thing to represent,

01:14:16 but perhaps for driving,

01:14:17 you don’t need to fully represent those things.

01:14:20 Or maybe you get those kind of indirectly.

01:14:25 You need to establish vector space

01:14:27 and then actually have predictions for those vector spaces.

01:14:32 So if you drive past, say a bus and you see that there’s people,

01:14:47 before you drove past the bus,

01:14:48 you saw people crossing or some,

01:14:50 just imagine there’s like a large truck

01:14:52 or something blocking sight.

01:14:55 But before you came up to the truck,

01:14:57 you saw that there were some kids about to cross the road

01:15:00 in front of the truck.

01:15:01 Now you can no longer see the kids,

01:15:03 but you would now know, okay,

01:15:06 those kids are probably gonna pass by the truck

01:15:09 and cross the road, even though you cannot see them.

01:15:12 So you have to have memory,

01:15:17 you need to remember that there were kids there

01:15:19 and you need to have some forward prediction

01:15:21 of what their position will be at the time of relevance.
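The idea described here, keeping a memory of an occluded object and forward-predicting its position until the time of relevance, can be sketched as a tiny constant-velocity tracker. This is a hypothetical illustration, not Tesla's actual code; all class and variable names are made up:

```python
# A minimal sketch of remembering an occluded object and forward-predicting
# it with a constant-velocity model (illustrative only).

class TrackedObject:
    def __init__(self, position, velocity):
        self.position = position      # meters along the road
        self.velocity = velocity      # meters per second
        self.occluded = False

    def predict(self, dt):
        # Even with no new detection, advance the believed position.
        self.position += self.velocity * dt

def step(obj, detection, dt):
    """Update with a detection if visible, otherwise predict through occlusion."""
    obj.predict(dt)
    if detection is not None:
        obj.occluded = False
        obj.position = detection   # simple overwrite; a real tracker would filter
    else:
        obj.occluded = True        # keep the memory alive instead of dropping it

# Kids walking at 1.5 m/s, last seen at x = 0, then hidden behind a truck for 2 s.
kids = TrackedObject(position=0.0, velocity=1.5)
for _ in range(20):                # 20 steps of 0.1 s with no detections
    step(kids, None, dt=0.1)
print(round(kids.position, 2))     # → 3.0 (still predicted, though unseen)
```

The key design choice the sketch shows is that losing sight of an object marks it occluded rather than deleting it, so the planner can still reason about where it probably is.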

01:15:25 So with occlusions and computer vision,

01:15:28 when you can’t see an object anymore,

01:15:30 even when it just walks behind a tree and reappears,

01:15:33 that’s a really, really,

01:15:35 I mean, at least in academic literature,

01:15:37 it’s tracking through occlusions, it’s very difficult.

01:15:40 Yeah, we’re doing it.

01:15:41 I understand this.

01:15:42 Yeah.

01:15:43 So some of it.

01:15:44 It’s like object permanence,

01:15:45 like same thing happens with humans with neural nets.

01:15:47 Like when like a toddler grows up,

01:15:50 like there’s a point in time where they develop,

01:15:54 they have a sense of object permanence.

01:15:56 So before a certain age, if you have a ball or a toy

01:15:59 or whatever, and you put it behind your back

01:16:01 and you pop it out, if they don’t,

01:16:03 before they have object permanence,

01:16:04 it’s like a new thing every time.

01:16:05 It’s like, whoa, this toy went poof, it disappeared,

01:16:08 and now it’s back again and they can’t believe it.

01:16:09 And they can play peekaboo all day long

01:16:12 because peekaboo is fresh every time.

01:16:13 But once they figure out object permanence,

01:16:18 they realize, oh no, the object is not gone,

01:16:20 it’s just behind your back.

01:16:22 Sometimes I wish we never did figure out object permanence.

01:16:26 Yeah, so that’s a…

01:16:28 So that’s an important problem to solve.

01:16:31 Yes, so like an important evolution

01:16:33 of the neural nets in the car is

01:16:39 memory across both time and space.

01:16:43 So now you can remember, but you have to say,

01:16:47 like, how long do you want to remember things for?

01:16:48 And there’s a cost to remembering things for a long time.

01:16:53 So you can like run out of memory

01:16:55 to try to remember too much for too long.

01:16:58 And then you also have things that are stale

01:17:01 if you remember them for too long.

01:17:03 And then you also need things that are remembered over time.

01:17:06 So even if you like say have like,

01:17:10 for argument’s sake, five seconds of memory on a time basis,

01:17:14 but like let’s say you’re parked at a light

01:17:17 and you saw, use a pedestrian example,

01:17:20 that people were waiting to cross the road

01:17:25 and you can’t quite see them because of an occlusion,

01:17:28 but they might wait for a minute before the light changes

01:17:31 for them to cross the road.

01:17:33 You still need to remember that that’s where they were

01:17:36 and that they’re probably going

01:17:38 to cross the road type of thing.

01:17:40 So even if that exceeds your time based memory,

01:17:44 it should not exceed your space memory.

01:17:48 And I just think the data engine side of that,

01:17:50 so getting the data to learn all of the concepts

01:17:53 that you’re saying now is an incredible process.

01:17:56 It’s this iterative process of just,

01:17:58 it’s this HydraNet of many.

01:18:00 HydraNet.

01:18:03 We’re changing the name to something else.

01:18:05 Okay, I’m sure it’ll be equally as Rick and Morty like.

01:18:09 There’s a lot of, yeah.

01:18:11 We’ve rearchitected the neural net,

01:18:14 the neural nets in the cars so many times it’s crazy.

01:18:17 Oh, so every time there’s a new major version,

01:18:20 you’ll rename it to something more ridiculous

01:18:21 or memorable and beautiful, sorry.

01:18:25 Not ridiculous, of course.

01:18:28 If you see the full array of neural nets

01:18:32 that are operating in the cars,

01:18:34 it kind of boggles the mind.

01:18:36 There’s so many layers, it’s crazy.

01:18:39 So, yeah.

01:18:43 And we started off with simple neural nets

01:18:48 that were basically image recognition

01:18:53 on a single frame from a single camera

01:18:56 and then trying to knit those together

01:19:00 with C, I should say we’re really primarily running C here

01:19:07 because C++ is too much overhead

01:19:10 and we have our own C compiler.

01:19:11 So to get maximum performance,

01:19:13 we actually wrote our own C compiler

01:19:15 and are continuing to optimize our C compiler

01:19:18 for maximum efficiency.

01:23:20 In fact, we’ve just recently done a new rev

01:19:23 on our C compiler that’ll compile directly

01:19:25 to our autopilot hardware.

01:19:26 If you want to compile the whole thing down

01:19:28 with your own compiler, like so efficiency here,

01:19:32 because there’s all kinds of compute,

01:19:33 there’s CPU, GPU, there’s like basic types of things

01:19:37 and you have to somehow figure out the scheduling

01:19:39 across all of those things.

01:19:40 And so you’re compiling the code down that does all, okay.

01:19:44 So that’s why there’s a lot of people involved.

01:19:46 There’s a lot of hardcore software engineering

01:19:50 at a very sort of bare metal level

01:19:54 because we’re trying to do a lot of compute

01:19:57 that’s constrained to our full self driving computer.

01:20:03 So we want to try to have the highest frames per second

01:20:07 possible in a sort of very finite amount of compute

01:20:14 and power.

01:20:15 So we really put a lot of effort into the efficiency

01:20:20 of our compute.

01:20:23 And so there’s actually a lot of work done

01:20:26 by some very talented software engineers at Tesla

01:20:29 that at a very foundational level

01:20:33 to improve the efficiency of compute

01:20:35 and how we use the trip accelerators,

01:20:38 which are basically, you know,

01:20:43 doing matrix math dot products,

01:20:45 like a bazillion dot products.

01:20:47 And it’s like, what are neural nets?

01:20:49 It’s like compute wise, like 99% dot products.

01:20:54 So, you know.
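As a toy illustration of the "compute-wise, 99% dot products" point above: a single dense layer's forward pass is just one dot product per output neuron. A pure-Python sketch (no real framework, numbers chosen arbitrarily):

```python
# Neural net inference is dominated by dot products: one per
# (output neuron, input vector) pair in a dense layer.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dense_layer(x, weights, biases):
    # Each output element is a dot product of the input with one weight row.
    return [dot(row, x) + b for row, b in zip(weights, biases)]

x = [1.0, 2.0, 3.0]
W = [[0.5, 0.0, -1.0],
     [1.0, 1.0, 1.0]]
b = [0.0, -1.0]
print(dense_layer(x, W, b))  # → [-2.5, 5.0]
```

A real accelerator batches these into matrix multiplies, but the arithmetic being scheduled is exactly this.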

01:20:57 And you want to achieve high frame rates,

01:20:59 like a video game.

01:21:00 You want full resolution, high frame rate.

01:21:05 High frame rate, low latency, low jitter.

01:21:10 So I think one of the things we’re moving towards now

01:21:18 is no post processing of the image

01:21:22 through the image signal processor.

01:21:26 So like what happens with cameras,

01:21:32 almost all cameras, is that

01:21:35 there’s a lot of post processing done

01:21:37 in order to make pictures look pretty.

01:21:40 And so we don’t care about pictures looking pretty.

01:21:43 We just want the data.

01:21:45 So we’re moving to just raw photon counts.

01:21:48 So the system will, like the image that the computer sees

01:21:55 is actually much more than what you’d see

01:21:57 if you represented it on a camera.

01:21:59 It’s got much more data.

01:22:00 And even in a very low light conditions,

01:22:02 you can see that there’s a small photon count difference

01:22:05 between this spot here and that spot there,

01:22:08 which means that,

01:22:09 so it can see in the dark incredibly well

01:22:12 because it can detect these tiny differences

01:22:15 in photon counts.

01:22:16 Like much better than you could possibly imagine.

01:22:20 So, and then we also save 13 milliseconds on a latency.

01:22:27 So.

01:22:29 From removing the post processing on the image?

01:22:31 Yes.

01:22:32 It’s like,

01:22:32 it’s incredible.

01:22:34 Cause we’ve got eight cameras

01:22:35 and then there’s roughly, I don’t know,

01:22:39 one and a half milliseconds or so,

01:22:41 maybe 1.6 milliseconds of latency for each camera.

01:22:46 And so like going to just,

01:22:53 basically bypassing the image processor

01:22:56 gets us back 13 milliseconds of latency,

01:22:58 which is important.
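The latency arithmetic here roughly checks out: eight cameras at about 1.6 ms each of image-signal-processor latency comes to approximately the 13 ms quoted. A quick check using only the figures stated in the conversation:

```python
# Back-of-the-envelope check of the ISP-bypass latency savings discussed above.
cameras = 8
isp_latency_ms = 1.6          # approximate per-camera figure from the conversation
saved = cameras * isp_latency_ms
print(round(saved, 1))        # → 12.8, i.e. roughly the 13 ms quoted
```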

01:22:59 And we track latency all the way from, you know,

01:23:03 photon hits the camera to, you know,

01:23:06 all the steps that it’s got to go through to get,

01:23:08 you know, go through the various neural nets

01:23:12 and the C code.

01:23:13 And there’s a little bit of C++ there as well.

01:23:17 Well, okay, maybe a lot, but

01:23:20 the core stuff, the heavy duty compute, is all in C.

01:23:25 And so we track that latency all the way

01:23:28 to an output command to the drive unit to accelerate,

01:23:33 the brakes to slow down, the steering,

01:23:36 you know, to turn left or right.

01:23:38 So, cause when you go to output a command,

01:23:40 that’s got to go to a controller.

01:23:41 And like some of these controllers have an update frequency

01:23:44 that’s maybe 10 Hertz or something like that,

01:23:46 which is slow.

01:23:47 That’s like now you lose a hundred milliseconds potentially.

01:23:50 So then we want to update

01:23:55 the drivers on, say, the steering and braking control

01:23:58 to have more like a hundred Hertz instead of 10 Hertz.

01:24:02 And then you’ve got a 10 millisecond latency

01:24:04 instead of a hundred millisecond worst case latency.
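The 10 Hz versus 100 Hz numbers follow directly from the update period: worst case, a command arrives just after an update tick and waits one full cycle of the polling controller. A back-of-the-envelope sketch:

```python
# Worst-case added latency of a polling controller is one full update period.
def worst_case_latency_ms(update_hz):
    return 1000.0 / update_hz

print(worst_case_latency_ms(10))   # → 100.0 ms at 10 Hz
print(worst_case_latency_ms(100))  # → 10.0 ms at 100 Hz
```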

01:24:06 And actually jitter is more of a challenge than latency.

01:24:09 Cause latency is like, you can

01:24:11 anticipate and predict, but

01:24:13 if you’ve got a stack up of things going

01:24:14 from the camera to the computer, through

01:24:17 a series of other computers,

01:24:19 and finally to an actuator on the car,

01:24:22 if you have a stack up of timing tolerances,

01:24:26 then you can have quite a variable latency,

01:24:29 which is called jitter.

01:24:30 And that makes it hard to anticipate exactly

01:24:34 how you should turn the car or accelerate,

01:24:37 because if you’ve got maybe 150

01:24:40 or 200 milliseconds of jitter,

01:24:42 then you could be off by, you know, up to 0.2 seconds.

01:24:45 And this can make, this could make a big difference.

01:24:47 So you have to interpolate somehow

01:24:50 to deal with the effects of jitter.

01:24:52 So that you can make like robust control decisions.

01:24:57 So the jitter is in the sensor information,

01:25:01 or the jitter can occur at any stage in the pipeline?

01:25:05 If you have fixed latency,

01:25:07 you can anticipate and say, okay,

01:25:11 we know that our information is, for argument’s sake,

01:25:16 150 milliseconds stale.

01:25:19 Like, so it’s 150 milliseconds

01:25:22 from photon hits the camera to where you can measure a change

01:25:28 in the acceleration of the vehicle.

01:25:33 So then you can say, okay, well,

01:25:38 we know it’s 150 milliseconds.

01:25:39 So we’re going to take that into account

01:25:40 and compensate for that latency.

01:25:44 However, if you’ve got 150 milliseconds of latency

01:25:47 plus a hundred milliseconds of jitter,

01:25:49 which could be anywhere from

01:25:50 zero to a hundred milliseconds on top,

01:25:52 then your latency could be from 150 to 250 milliseconds.

01:25:55 Now you’ve got a hundred milliseconds

01:25:56 that you don’t know what to do with.

01:25:58 And that’s basically random.

01:26:01 So getting rid of jitter is extremely important.

01:26:04 And that affects your control decisions

01:26:05 and all those kinds of things.
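The latency-versus-jitter distinction can be put in numbers: a known, fixed delay can be compensated exactly by forward prediction, while jitter leaves a random residual error. A minimal sketch assuming a constant-velocity target; the delay and jitter figures mirror the ones in the conversation:

```python
# Why fixed latency is benign but jitter is not: with a known delay you can
# forward-predict exactly; with random jitter the same prediction is off by
# v * (actual_delay - assumed_delay).
import random

random.seed(0)
v = 20.0                      # m/s, speed of the vehicle we're tracking
assumed_delay = 0.150         # we compensate for 150 ms of staleness

def predicted_error(actual_delay):
    # True position moved v * actual_delay; we compensated by v * assumed_delay.
    return v * actual_delay - v * assumed_delay

# Fixed latency: compensation is exact.
print(predicted_error(0.150))  # → 0.0

# 0–100 ms of jitter on top: the error is now random, up to v * 0.1 = 2 meters.
errors = [abs(predicted_error(0.150 + random.uniform(0, 0.100)))
          for _ in range(1000)]
print(max(errors) <= 2.0)      # → True
```

So the fixed 150 ms contributes nothing to the position error once compensated, while the jitter alone produces up to two meters of uncertainty at highway speed.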

01:26:07 Okay.

01:26:09 Yeah, the car’s just going to fundamentally maneuver better

01:26:11 with lower jitter.

01:26:12 Got it.

01:26:13 The cars will maneuver with superhuman ability

01:26:16 and reaction time much faster than a human.

01:26:20 I mean, I think over time the autopilot,

01:26:24 full self driving will be capable of maneuvers

01:26:26 that are far more than what like James Bond could do

01:26:34 in like the best movie type of thing.

01:26:36 That’s exactly what I was imagining in my mind,

01:26:38 as you said it.

01:26:40 Like impossible maneuvers

01:26:41 that a human couldn’t do.

01:26:43 Yeah, so.

01:26:45 Well, let me ask sort of looking back the six years,

01:26:48 looking out into the future,

01:26:50 based on your current understanding,

01:26:51 how hard do you think this

01:26:53 full self driving problem is?

01:26:55 When do you think Tesla will solve level four FSD?

01:27:01 I mean, it’s looking quite likely

01:27:02 that it will be next year.

01:27:05 And what does the solution look like?

01:27:07 Is it the current pool of FSD beta candidates,

01:27:10 they start getting greater and greater

01:27:13 degrees of autonomy, as they have been,

01:27:15 and then there’s a certain level

01:27:17 beyond which they can do their own thing,

01:27:20 they can read a book.

01:27:22 Yeah, so.

01:27:25 I mean, you can see that anybody

01:27:26 who’s been following the full self driving beta closely

01:27:30 will see that the rate of disengagements

01:27:35 has been dropping rapidly.

01:27:37 So a disengagement being where the driver intervenes

01:27:40 to prevent the car from doing something dangerous,

01:27:44 potentially, so.

01:27:49 So the interventions per million miles

01:27:53 has been dropping dramatically at some point.

01:27:57 And the way that trend looks, what happens next year

01:28:01 is that the probability of an accident on FSD

01:28:06 is less than that of the average human,

01:28:09 and then significantly less than that of the average human.

01:28:13 So it certainly appears like we will get there next year.

01:28:21 Then of course, then there’s gonna be a case of,

01:28:24 okay, well, we now have to prove this to regulators

01:28:26 and prove it to, you know, and we want a standard

01:28:28 that is not just equivalent to a human,

01:28:31 but much better than the average human.

01:28:33 I think it’s gotta be at least

01:28:35 two or three times higher safety than a human.

01:28:39 So two or three times lower probability of injury

01:28:41 than a human before we would actually say like,

01:28:44 okay, it’s okay to go, it’s not gonna be equivalent,

01:28:46 it’s gonna be much better.

01:28:48 So if you look, FSD 10.6 just came out recently,

01:28:53 10.7 is on the way, maybe 11 is on the way,

01:28:57 so we’re in the future.

01:28:58 Yeah, we were hoping to get 11 out this year,

01:29:01 but 11 actually has a whole bunch of fundamental rewrites

01:29:07 on the neural net architecture,

01:29:10 and some fundamental improvements

01:29:14 in creating vector space, so.

01:29:19 There is some fundamental like leap

01:29:22 that really deserves the 11,

01:29:24 I mean, that’s a pretty cool number.

01:29:25 Yeah, 11 would be a single stack

01:29:29 for all, you know, one stack to rule them all.

01:29:36 But there are just some really fundamental

01:29:40 neural net architecture changes

01:29:43 that will allow for much more capability,

01:29:47 but at first they’re gonna have issues.

01:29:51 So like we have this working on like sort of alpha software

01:29:54 and it’s good, but it’s basically taking a whole bunch

01:30:00 of C, C++ code and deleting a massive amount of C++ code

01:30:05 and replacing it with a neural net.

01:30:06 And Andre makes this point a lot,

01:30:09 which is like neural nets are kind of eating software.

01:30:12 Over time there’s like less and less conventional software,

01:30:15 more and more neural net, which is still software,

01:30:18 but, you know, it still comes down to lines of software,

01:30:21 but it’s more neural net stuff

01:30:25 and less, you know, heuristics basically.

01:30:33 More matrix based stuff and less heuristics based stuff.

01:30:38 And, you know, like one of the big changes will be,

01:30:47 like right now the neural nets will deliver

01:30:54 a giant bag of points to the C++ or C and C++ code.

01:31:00 We call it the giant bag of points.

01:31:03 And it’s like, so you’ve got a pixel and something associated

01:31:08 with that pixel.

01:31:09 Like this pixel is probably car.

01:31:11 The pixel is probably lane line.

01:31:13 Then you’ve got to assemble this giant bag of points

01:31:16 in the C code and turn it into vectors.

01:31:21 And it does a pretty good job of it, but it’s,

01:31:26 we want to just, you know,

01:31:30 we need another layer of neural nets on top of that

01:31:35 to take the giant bag of points and distill that down

01:31:40 to a vector space in the neural net part of the software,

01:31:45 as opposed to the heuristics part of the software.

01:31:48 This is a big improvement.

01:31:51 Neural nets all the way down.

01:31:52 That’s what you want.

01:31:53 It’s not even all neural nets, but this will be,

01:31:58 this is a game changer to not have the bag of points,

01:32:01 the giant bag of points that has to be assembled

01:32:04 with many lines of C++,

01:32:09 and have a neural net just assemble those into a vector.

01:32:12 So the neural net is outputting much, much less data.

01:32:19 It’s outputting, this is a lane line.

01:32:22 This is a curb.

01:32:23 This is drivable space.

01:32:24 This is a car.

01:32:25 This is a pedestrian or a cyclist or something like that.

01:32:29 It’s outputting, it’s really outputting proper vectors

01:32:35 to the C, C++ control code,

01:32:39 as opposed to sort of constructing the vectors in C,

01:32:50 which we’ve done, I think, quite a good job of,

01:32:52 but we’re kind of hitting a local maximum

01:32:55 on how well the C can do this.

01:32:59 So this is really a big deal.
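The "giant bag of points" versus "vector" distinction can be caricatured in a few lines: per-pixel class votes on one side, one compact object record on the other. This is a hypothetical illustration; the naive grouping here merely stands in for the C++ post-processing being discussed, which in the new architecture a neural net would replace:

```python
# A cartoon of "giant bag of points" (per-pixel class labels) getting
# assembled into one compact object vector (class + bounding box).

grid = [
    ".....",
    ".cc..",
    ".cc..",
    ".....",
]  # 'c' marks pixels the net thinks are "car"

def bag_of_points(grid):
    # The raw per-pixel output: one (row, col) vote per "car" pixel.
    return [(r, c) for r, row in enumerate(grid)
                   for c, ch in enumerate(row) if ch == "c"]

def to_vector(points):
    rows = [r for r, _ in points]
    cols = [c for _, c in points]
    # One object vector instead of N pixel votes.
    return {"class": "car",
            "box": (min(rows), min(cols), max(rows), max(cols))}

points = bag_of_points(grid)
print(len(points))        # → 4 pixel-level points
print(to_vector(points))  # → {'class': 'car', 'box': (1, 1, 2, 2)}
```

The downstream control code only ever needs the second form, which is why emitting vectors directly from the network removes a whole layer of heuristic assembly.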

01:33:02 And just all of the networks in the car

01:33:04 need to move to surround video.

01:33:06 There’s still some legacy networks that are not surround video.

01:33:11 And all of the training needs to move to surround video

01:33:14 and the efficiency of the training

01:33:16 needs to get better, and it is.

01:33:18 And then we need to move everything to raw photon counts

01:33:25 as opposed to processed images,

01:33:29 which is quite a big reset on the training

01:33:31 because the system’s trained on post processed images.

01:33:35 So we need to redo all the training

01:33:38 to train against the raw photon counts

01:33:41 instead of the post processed image.

01:33:43 So ultimately, it’s kind of reducing the complexity

01:33:46 of the whole thing.

01:33:47 So reducing the…

01:33:50 Lines of code will actually go lower.

01:33:52 Yeah, that’s fascinating.

01:33:54 So you’re doing fusion of all the sensors

01:33:56 and reducing the complexity of having to deal with these…

01:33:58 Fusion of the cameras.

01:33:59 Fusion of the cameras, really.

01:34:00 Right, yes.

01:34:03 Same with humans.

01:34:05 Well, I guess we’ve got ears too.

01:34:07 Yeah, we’ll actually need to incorporate sound as well

01:34:11 because you need to listen for ambulance sirens

01:34:14 or fire trucks or somebody yelling at you or something.

01:34:20 I don’t know.

01:34:21 There’s a little bit of audio that needs to be incorporated as well.

01:34:24 Do you need to go take a break?

01:34:26 Yeah, sure, let’s take a break.

01:34:27 Okay.

01:34:28 Honestly, frankly, the ideas are the easy thing

01:34:33 and the implementation is the hard thing.

01:34:35 The idea of going to the moon is the easy part.

01:34:37 Actually going to the moon is the hard part.

01:34:39 It’s the hard part.

01:34:40 And there’s a lot of hardcore engineering

01:34:42 that’s got to get done at the hardware and software level.

01:34:46 Like I said, optimizing the C compiler

01:34:48 and just cutting out latency everywhere.

01:34:55 If we don’t do this, the system will not work properly.

01:34:59 So the work of the engineers doing this,

01:35:02 they are like the unsung heroes,

01:35:05 but they are critical to the success of the situation.

01:35:08 I think you made it clear.

01:35:09 I mean, at least to me, it’s super exciting.

01:35:11 Everything that’s going on outside of what Andre is doing.

01:35:15 Just the whole infrastructure, the software.

01:35:17 I mean, everything is going on with Data Engine,

01:35:19 whatever it’s called.

01:35:21 The whole process is just work of art to me.

01:35:24 The sheer scale of it boggles my mind.

01:35:26 Like the training, the amount of work done with,

01:35:29 like we’ve written all this custom software for training and labeling

01:35:33 and to do auto labeling.

01:35:34 Auto labeling is essential.

01:35:38 Because especially when you’ve got surround video, it’s very difficult.

01:35:42 To label surround video from scratch is extremely difficult.

01:35:48 Like it would take a human such a long time to even label one video clip,

01:35:52 like several hours.

01:35:54 Whereas with auto labeling, basically we just apply heavy duty,

01:36:00 like a lot of compute to the video clips to preassign

01:36:06 and guess what all the things are that are going on in the surround video.

01:36:09 And then there’s like correcting it.

01:36:10 Yeah.

01:36:11 And then all the human has to do is like tweak it,

01:36:13 like say, adjust what is incorrect.

01:36:16 This increases productivity by a factor of a hundred or more.

01:36:21 Yeah.

01:36:22 So you’ve presented Tesla Bot as primarily useful in the factory.

01:36:25 First of all, I think humanoid robots are incredible.

01:36:28 From a fan of robotics, I think the elegance of movement

01:36:32 that humanoid robots, that bipedal robots, show is just so cool.

01:36:38 So it’s really interesting that you’re working on this

01:36:40 and also talking about applying the same kind of all the ideas,

01:36:44 some of which we’ve talked about with Data Engine,

01:36:46 all the things that we’re talking about with Tesla Autopilot,

01:36:49 just transferring that over to just yet another robotics problem.

01:36:54 I have to ask, since I care about human robot interaction,

01:36:57 so the human side of that.

01:36:59 So you’ve talked about mostly in the factory.

01:37:01 Do you see part of this problem that Tesla Bot has to solve

01:37:06 is interacting with humans and potentially having a place like in the home.

01:37:10 So interacting, not just replacing labor, but also like, I don’t know,

01:37:15 being a friend or an assistant or something like that.

01:37:18 Yeah, I think the possibilities are endless.

01:37:27 It’s not quite in Tesla’s primary mission direction

01:37:32 of accelerating sustainable energy,

01:37:34 but it is an extremely useful thing that we can do for the world,

01:37:38 which is to make a useful humanoid robot that is capable of interacting with the world

01:37:44 and helping in many different ways.

01:37:49 I think if you say extrapolate to many years in the future,

01:37:59 I think work will become optional.

01:38:04 There’s a lot of jobs that if people weren’t paid to do it,

01:38:10 they wouldn’t do it.

01:38:12 It’s not fun necessarily.

01:38:14 If you’re washing dishes all day,

01:38:16 it’s like, you know, even if you really like washing dishes,

01:38:19 do you really want to do it for eight hours a day, every day?

01:38:22 Probably not.

01:38:25 And then there’s like dangerous work.

01:38:27 And basically, if it’s dangerous, boring,

01:38:30 it has like potential for repetitive stress injury, that kind of thing.

01:38:34 Then that’s really where humanoid robots would add the most value initially.

01:38:40 So that’s what we’re aiming for is for the humanoid robots to do jobs

01:38:47 that people don’t voluntarily want to do.

01:38:51 And then we’ll have to pair that obviously

01:38:53 with some kind of universal basic income in the future.

01:38:56 So I think.

01:39:00 So do you see a world when there’s like hundreds of millions of Tesla bots

01:39:05 doing different, performing different tasks throughout the world?

01:39:12 Yeah, I haven’t really thought about it that far into the future,

01:39:14 but I guess there may be something like that.

01:39:17 So.

01:39:20 Can I ask a wild question?

01:39:22 So the number of Tesla cars has been accelerating.

01:39:25 There’s been close to two million produced.

01:39:28 Many of them have autopilot.

01:39:30 I think we’re over two million now.

01:39:32 Do you think there will ever be a time when there will be more Tesla bots

01:39:36 than Tesla cars?

01:39:40 Yeah.

01:39:42 Actually, it’s funny you asked this question,

01:39:44 because normally I do try to think pretty far into the future,

01:39:47 but I haven’t really thought that far into the future with the Tesla bot,

01:39:52 or it’s codenamed Optimus.

01:39:55 I call it Optimus subprime.

01:39:59 It’s not like a giant transformer robot.

01:40:04 So.

01:40:07 But it’s meant to be a general purpose, helpful bot.

01:40:14 And basically, like,

01:40:18 Tesla, I think, has the most advanced real world AI

01:40:25 for interacting with the real world,

01:40:26 which was developed as a function of making self driving work.

01:40:30 And so along with custom hardware and like a lot of, you know,

01:40:36 hardcore low level software to have it run efficiently

01:40:39 and be power efficient because it’s one thing to do neural nets

01:40:43 if you’ve got a gigantic server room with 10,000 computers.

01:40:45 But now let’s say you have to distill that down

01:40:48 into one computer that’s running at low power in a humanoid robot or a car.

01:40:53 That’s actually very difficult.

01:40:54 A lot of hardcore software work is required for that.
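One standard technique for squeezing a network trained on a big server down onto a low-power computer, shown here purely as a generic illustration and not as Tesla's actual method, is quantizing float weights to int8:

```python
# Symmetric int8 quantization: one standard technique for shrinking a
# network trained on big servers so it fits a low-power embedded computer.
# This is a generic illustration, not Tesla's actual method.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single float scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [v * scale for v in q]

weights = [0.8, -1.27, 0.003, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 storage is 4x smaller than float32; small weights lose precision.
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print("quantized:", q, "scale:", scale, "max error:", round(max_err, 4))
```

Real deployments layer per-channel scales, calibration, and quantization-aware training on top of this, but the core size-versus-precision trade is the same.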

01:40:59 So since we’re kind of like solving the navigate the real world

01:41:05 with neural nets problem for cars,

01:41:08 which are kind of robots with four wheels,

01:41:10 then it’s like kind of a natural extension of that is to put it

01:41:14 in a robot with arms and legs and actuators.

01:41:20 So like the two hard things are, like, you basically need to

01:41:31 have the robot be intelligent enough to interact in a sensible way

01:41:34 with the environment.

01:41:36 So you need real world AI and you need to be very good at manufacturing,

01:41:43 which is a very hard problem.

01:41:44 Tesla is very good at manufacturing and also has the real world AI.

01:41:50 So making the humanoid robot work basically means developing custom motors

01:41:58 and sensors that are different from what a car would use.

01:42:04 But also, I think we have the best expertise

01:42:10 in developing advanced electric motors and power electronics.

01:42:15 So it just has to be adapted for a humanoid robot application instead of a car.

01:42:22 Still, you do talk about love sometimes.

01:42:25 So let me ask.

01:42:26 This isn’t like for like sex robots or something like that.

01:42:29 Love is the answer.

01:42:30 Yes.

01:42:33 There is something compelling to us, not compelling,

01:42:36 but we connect with humanoid robots, or even robots like dogs,

01:42:41 robots in the shape of dogs.

01:42:43 It just it seems like, you know, there’s a huge amount of loneliness in this world.

01:42:48 All of us seek companionship with other humans, friendship

01:42:51 and all those kinds of things.

01:42:52 We have a lot of here in Austin, a lot of people have dogs.

01:42:56 There seems to be a huge opportunity to also have robots that decrease

01:43:01 the amount of loneliness in the world or help us humans connect with each other.

01:43:09 So in a way that dogs can.

01:43:12 Do you think about that with TeslaBot at all?

01:43:14 Or is it really focused on the problem of performing specific tasks,

01:43:19 not connecting with humans?

01:43:23 I mean, to be honest, I have not actually thought about it from the companionship

01:43:27 standpoint, but I think it could actually end up being

01:43:31 a very good companion.

01:43:34 And it could develop like a personality over time that is like unique,

01:43:44 like, you know, it’s not like all the robots are the same

01:43:47 and that personality could evolve to be, you know,

01:43:53 match the owner or the, you know, yes, the owner.

01:43:59 Well, whatever you want to call it.

01:44:02 The other half, right?

01:44:05 In the same way that friends do.

01:44:06 See, I think that’s a huge opportunity.

01:44:09 Yeah, no, that’s interesting.

01:44:14 Because, you know, like there’s a Japanese phrase I like, Wabi Sabi,

01:44:19 you know, the subtle imperfections are what makes something special.

01:44:23 And the subtle imperfections of the personality of the robot mapped

01:44:28 to the subtle imperfections of the robot’s human friend.

01:44:34 I don’t know, owner sounds like maybe the wrong word,

01:44:36 but could actually make an incredible buddy, basically.

01:44:42 In that way, the imperfections.

01:44:43 Like R2D2 or like a C3PO sort of thing, you know.

01:44:46 So from a machine learning perspective,

01:44:49 I think the flaws being a feature is really nice.

01:44:53 You could be quite terrible at being a robot for quite a while

01:44:57 in the general home environment or in general world.

01:45:00 And that’s kind of adorable.

01:45:02 And that’s like, those are your flaws and you fall in love with those flaws.

01:45:06 So it’s very different than autonomous driving

01:45:09 where it’s a very high stakes environment you cannot mess up.

01:45:13 And so it’s more fun to be a robot in the home.

01:45:17 Yeah, in fact, if you think of like C3PO and R2D2,

01:45:21 like they actually had a lot of like flaws and imperfections

01:45:24 and silly things and they would argue with each other.

01:45:29 Were they actually good at doing anything?

01:45:32 I’m not exactly sure.

01:45:34 They definitely added a lot to the story.

01:45:38 But there’s sort of quirky elements and, you know,

01:45:43 that they would like make mistakes and do things.

01:45:45 It was like it made them relatable, I don’t know, endearing.

01:45:52 So yeah, I think that that could be something that probably would happen.

01:45:59 But our initial focus is just to make it useful.

01:46:03 So I’m confident we’ll get it done.

01:46:06 I’m not sure what the exact timeframe is,

01:46:08 but like we’ll probably have, I don’t know,

01:46:11 a decent prototype towards the end of next year or something like that.

01:46:15 And it’s cool that it’s connected to Tesla, the car.

01:46:19 Yeah, it’s using a lot of, you know,

01:46:22 it would use the autopilot inference computer

01:46:25 and a lot of the training that we’ve done for cars

01:46:29 in terms of recognizing real world things

01:46:32 could be applied directly to the robot.

01:46:38 But there’s a lot of custom actuators and sensors that need to be developed.

01:46:42 And an extra module on top of the vector space for love.

01:46:47 Yeah.

01:46:48 That’s what I’m saying.

01:46:51 We can add that to the car too.

01:46:53 That’s true.

01:46:55 Yeah, it could be useful in all environments.

01:46:57 Like you said, a lot of people argue in the car,

01:46:59 so maybe we can help them out.

01:47:02 You’re a student of history,

01:47:03 fan of Dan Carlin’s Hardcore History podcast.

01:47:06 Yeah, it’s great.

01:47:07 Greatest podcast ever?

01:47:08 Yeah, I think it is actually.

01:47:11 It almost doesn’t really count as a podcast.

01:47:14 It’s more like an audio book.

01:47:16 So you were on the podcast with Dan.

01:47:18 I just had a chat with him about it.

01:47:21 He said you guys went military and all that kind of stuff.

01:47:23 Yeah, it was basically, it should be titled Engineer Wars, essentially.

01:47:32 Like when there’s a rapid change in the rate of technology,

01:47:36 then engineering plays a pivotal role in victory and battle.

01:47:43 How far back in history did you go?

01:47:45 Did you go World War II?

01:47:47 Well, it was supposed to be a deep dive on fighters and bomber technology in World War II,

01:47:55 but that ended up being more wide ranging than that,

01:47:58 because I just went down the total rathole of studying all of the fighters and bombers of World War II

01:48:04 and the constant rock, paper, scissors game that one country would make this plane,

01:48:10 then another would make a plane to beat that, and the first country would make a plane to beat that.

01:48:15 And really what matters is the pace of innovation

01:48:18 and also access to high quality fuel and raw materials.

01:48:25 Germany had some amazing designs, but they couldn’t make them

01:48:29 because they couldn’t get the raw materials,

01:48:31 and they had a real problem with the oil and fuel, basically.

01:48:37 The fuel quality was extremely variable.

01:48:40 So the design wasn’t the bottleneck?

01:48:42 Yeah, the U.S. had kickass fuel that was very consistent.

01:48:47 The problem is if you make a very high performance aircraft engine,

01:48:50 in order to make it high performance, the fuel, the aviation gas,

01:48:59 has to be a consistent mixture and it has to have a high octane.

01:49:07 High octane is the most important thing, but it also can’t have impurities and stuff

01:49:11 because you’ll foul up the engine.

01:49:14 And Germany just never had good access to oil.

01:49:16 They tried to get it by invading the Caucasus, but that didn’t work too well.

01:49:22 It never worked so well.

01:49:23 It didn’t work out for them.

01:49:24 See you, Jerry.

01:49:26 Nice to meet you.

01:49:28 Germany was always struggling with basically shitty oil,

01:49:31 and they couldn’t count on high quality fuel for their aircraft,

01:49:37 so they had to have all these additives and stuff.

01:49:43 Whereas the U.S. had awesome fuel, and they provided that to Britain as well.

01:49:48 So that allowed the British and the Americans to design aircraft engines

01:49:53 that were super high performance, better than anything else in the world.

01:49:58 Germany could design the engines, they just didn’t have the fuel.

01:50:01 And then also the quality of the aluminum alloys that they were getting

01:50:06 was also not that great.

01:50:09 Is this like, you talked about all this with Dan?

01:50:11 Yep.

01:50:12 Awesome.

01:50:13 Broadly looking at history, when you look at Genghis Khan,

01:50:16 when you look at Stalin, Hitler, the darkest moments of human history,

01:50:22 what do you take away from those moments?

01:50:24 Does it help you gain insight about human nature, about human behavior today,

01:50:28 whether it’s the wars or the individuals or just the behavior of people,

01:50:32 any aspects of history?

01:50:41 Yeah, I find history fascinating.

01:50:49 There’s just a lot of incredible things that have been done, good and bad,

01:50:54 that they help you understand the nature of civilization and individuals.

01:51:06 Does it make you sad that humans do these kinds of things to each other?

01:51:09 You look at the 20th century, World War II, the cruelty, the abuse of power,

01:51:15 talk about communism, Marxism, and Stalin.

01:51:20 I mean, there’s a lot of human history.

01:51:24 Most of it is actually people just getting on with their lives,

01:51:28 and it’s not like human history is just nonstop war and disaster.

01:51:35 Those are actually just, those are intermittent and rare.

01:51:38 If they weren’t, then humans would soon cease to exist.

01:51:46 But it’s just that wars tend to be written about a lot,

01:51:50 whereas something being like, well,

01:51:54 a normal year where nothing major happened doesn’t get written about much.

01:51:58 But that’s most people, just farming and kind of living their life,

01:52:04 being a villager somewhere.

01:52:09 And every now and again, there’s a war.

01:52:16 And I would have to say, there aren’t very many books where I just had to stop reading

01:52:23 because it was just too dark.

01:52:26 But the book about Stalin, The Court of the Red Tsar, I had to stop reading.

01:52:32 It was just too dark and rough.

01:52:37 Yeah.

01:52:39 The 30s, there’s a lot of lessons there to me,

01:52:44 in particular that it feels like humans,

01:52:48 like all of us have that, it’s the old Solzhenitsyn line,

01:52:53 that the line between good and evil runs through the heart of every man,

01:52:56 that all of us are capable of evil, all of us are capable of good.

01:52:59 It’s almost like this kind of responsibility that all of us have

01:53:04 to tend towards the good.

01:53:07 And so to me, looking at history is almost like an example of,

01:53:11 look, you have some charismatic leader that convinces you of things.

01:53:16 It’s too easy, based on that story, to do evil onto each other,

01:53:21 onto your family, onto others.

01:53:23 And so it’s like our responsibility to do good.

01:53:26 It’s not like now is somehow different from history.

01:53:29 That can happen again. All of it can happen again.

01:53:32 And yes, most of the time, you’re right,

01:53:35 the optimistic view here is mostly people are just living life.

01:53:39 And as you’ve often memed about, the quality of life was way worse

01:53:44 back in the day, and this keeps improving over time

01:53:47 through innovation, through technology.

01:53:49 But still, it’s somehow notable that these blips of atrocities happen.

01:53:54 Sure.

01:53:56 Yeah, I mean, life was really tough for most of history.

01:54:02 I mean, for most of human history, a good year would be one

01:54:07 where not that many people in your village died of the plague,

01:54:11 starvation, freezing to death, or being killed by a neighboring village.

01:54:16 It’s like, well, it wasn’t that bad.

01:54:18 It was only like we lost 5% this year.

01:54:20 That was a good year.

01:54:23 That would be par for the course.

01:54:25 Just not starving to death would have been the primary goal

01:54:28 of most people throughout history,

01:54:31 is making sure we’ll have enough food to last through the winter

01:54:34 and not freeze or whatever.

01:54:36 So, now food is plentiful.

01:54:42 I have an obesity problem.

01:54:46 Well, yeah, the lesson there is to be grateful for the way things are now

01:54:50 for some of us.

01:54:53 We’ve spoken about this offline.

01:54:56 I’d love to get your thought about it here.

01:55:00 If I sat down for a long form in person conversation with the President of Russia,

01:55:05 Vladimir Putin, would you potentially want to call in for a few minutes

01:55:10 to join in on a conversation with him, moderated and translated by me?

01:55:15 Sure, yeah.

01:55:16 Sure, I’d be happy to do that.

01:55:19 You’ve shown interest in the Russian language.

01:55:22 Is this grounded in your interest in history of linguistics, culture, general curiosity?

01:55:27 I think it sounds cool.

01:55:29 Sounds cool and that looks cool.

01:55:32 Well, it takes a moment to read Cyrillic.

01:55:39 Once you know what the Cyrillic characters stand for,

01:55:43 actually then reading Russian becomes a lot easier

01:55:47 because there are a lot of words that are actually the same.

01:55:49 Like bank is bank.

01:55:54 So you find the words that are exactly the same, and now you start to understand Cyrillic.

01:55:59 If you can sound it out, there’s at least some commonality of words.

01:56:06 What about the culture?

01:56:09 You love great engineering, physics.

01:56:12 There’s a tradition of the sciences there.

01:56:14 You look at the 20th century from rocketry.

01:56:17 Some of the greatest rockets, some of the space exploration has been done in the former Soviet Union.

01:56:24 So do you draw inspiration from that history?

01:56:27 Just how this culture that in many ways, one of the sad things is because of the language,

01:56:33 a lot of it is lost to history because it’s not translated.

01:56:37 Because it is in some ways an isolated culture.

01:56:40 It flourishes within its borders.

01:56:45 So do you draw inspiration from those folks, from the history of science engineering there?

01:56:51 The Soviet Union, Russia and Ukraine as well have a really strong history in space flight.

01:57:02 Some of the most advanced and impressive things in history were done by the Soviet Union.

01:57:11 So one cannot help but admire the impressive rocket technology that was developed.

01:57:20 After the fall of the Soviet Union, there’s much less that then happened.

01:57:29 But still things are happening, but it’s not quite at the frenetic pace that it was happening before the Soviet Union kind of dissolved into separate republics.

01:57:46 Yeah, there’s Roscosmos, the Russian agency.

01:57:52 I look forward to a time when those countries, with China,

01:57:57 the United States, are all working together.

01:58:00 Maybe a little bit of friendly competition.

01:58:02 I think friendly competition is good.

01:58:04 Governments are slow and the only thing slower than one government is a collection of governments.

01:58:09 So the Olympics would be boring if everyone just crossed the finishing line at the same time.

01:58:16 Nobody would watch.

01:58:18 And people wouldn’t try hard to run fast and stuff.

01:58:22 So I think friendly competition is a good thing.

01:58:25 This is also a good place to give a shout out to a video titled,

01:58:29 The Entire Soviet Rocket Engine Family Tree by Tim Dodd, AKA Everyday Astronaut.

01:58:34 It’s like an hour and a half.

01:58:35 It gives the full history of Soviet rockets.

01:58:38 And people should definitely go check out and support Tim in general.

01:58:41 That guy is super excited about the future, super excited about space flight.

01:58:45 Every time I see anything by him, I just have a stupid smile on my face because he’s so excited about stuff.

01:58:50 Yeah, Tim Dodd is really great.

01:58:53 If you’re interested in anything to do with space, in terms of explaining rocket technology to your average person, he’s awesome.

01:59:02 The best, I’d say.

01:59:04 And I should say, like, part of the reason we switched: Raptor at one point was going to be a hydrogen engine.

01:59:14 But hydrogen has a lot of challenges.

01:59:17 It’s very low density.

01:59:18 It’s a deep cryogen.

01:59:19 So it’s only liquid at a temperature very, very close to absolute zero.

01:59:23 Requires a lot of insulation.

01:59:26 So there are a lot of challenges there.

01:59:30 And I was actually reading a bit about Russian rocket engine developments.

01:59:35 And at least the impression I had was that the Soviet Union, Russia and Ukraine primarily were actually in the process of switching to Methalox.

01:59:50 And there were some interesting tests and data for ISP.

01:59:55 Like they were able to get like up to like a 380 second ISP with the Methalox engine.

02:00:01 And I was like, well, OK, that’s actually really impressive.

02:00:05 So I think you could actually get a much lower cost, like in optimizing cost per ton to orbit, cost per ton to Mars.

02:00:19 I think Methalox is the way to go.

02:00:25 And I was partly inspired by the Russian work on the test stands with Methalox engines.
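The ~380 second Isp figure can be put in perspective with two textbook relations: effective exhaust velocity is v_e = Isp · g0, and delta-v follows from the Tsiolkovsky rocket equation. The 8:1 stage mass ratio below is a made-up illustrative number, not anything stated in the conversation:

```python
# Two textbook relations behind the ~380 s Isp figure: effective exhaust
# velocity v_e = Isp * g0, and the Tsiolkovsky rocket equation
# dv = v_e * ln(m_wet / m_dry). The 8:1 mass ratio is a made-up example.
import math

G0 = 9.80665  # standard gravity, m/s^2

def exhaust_velocity(isp_seconds):
    """Effective exhaust velocity in m/s for a given specific impulse."""
    return isp_seconds * G0

def delta_v(isp_seconds, wet_mass, dry_mass):
    """Ideal delta-v from the Tsiolkovsky rocket equation, in m/s."""
    return exhaust_velocity(isp_seconds) * math.log(wet_mass / dry_mass)

print(f"v_e at 380 s Isp: {exhaust_velocity(380):.0f} m/s")
print(f"delta-v at mass ratio 8: {delta_v(380, 8.0, 1.0):.0f} m/s")
```

A 380 s Isp corresponds to roughly 3.7 km/s of exhaust velocity, which is why even small Isp gains matter so much when optimizing cost per ton to orbit or to Mars.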

02:00:32 And now for something completely different.

02:00:35 Do you mind doing a bit of a meme review in the spirit of the great, the powerful PewDiePie?

02:00:41 Let’s say 1 to 11.

02:00:42 Just go over a few documents printed out.

02:00:45 We can try.

02:00:46 Let’s try this.

02:00:49 I present to you document numero uno.

02:00:56 I don’t know. OK.

02:00:58 Vlad the Impaler discovers marshmallows.

02:01:03 That’s not bad.

02:01:07 So you get it? Because he likes impaling things.

02:01:11 Yes, I get it.

02:01:12 I don’t know, three, whatever.

02:01:14 That’s not very good.

02:01:19 This is grounded in some engineering, some history.

02:01:28 Yeah, give it an eight out of ten.

02:01:31 What do you think about nuclear power?

02:01:33 I’m in favor of nuclear power.

02:01:34 I think in a place that is not subject to extreme natural disasters, I think nuclear power is a great way to generate electricity.

02:01:47 I don’t think we should be shutting down nuclear power stations.

02:01:51 Yeah, but what about Chernobyl?

02:01:53 Exactly.

02:01:56 I think there’s a lot of fear of radiation and stuff.

02:02:04 The problem is a lot of people just don’t study engineering or physics.

02:02:13 Just the word radiation just sounds scary.

02:02:16 They can’t calibrate what radiation means.

02:02:21 But radiation is much less dangerous than you think.

02:02:30 For example, Fukushima, when the Fukushima problem happened due to the tsunami,

02:02:41 I got people in California asking me if they should worry about radiation from Fukushima.

02:02:46 I’m like, definitely not, not even slightly, not at all.

02:02:51 That is crazy.

02:02:54 Just to show how much the danger is overplayed compared to what it really is, I actually flew to Fukushima.

02:03:09 I donated a solar power system for a water treatment plant, and I made a point of eating locally grown vegetables on TV in Fukushima.

02:03:28 I’m still alive.

02:03:31 So it’s not even that the risk of these events is low, but the impact of them is…

02:03:36 The impact is greatly exaggerated.

02:03:38 It’s human nature.

02:03:40 People don’t know what radiation is.

02:03:42 I’ve had people ask me, what about radiation from cell phones causing brain cancer?

02:03:46 I’m like, when you say radiation, do you mean photons or particles?

02:03:49 They’re like, I don’t know, what do you mean photons or particles?

02:03:52 Do you mean, let’s say, photons?

02:03:56 What frequency or wavelength?

02:03:59 And they’re like, I have no idea.

02:04:01 Do you know that everything’s radiating all the time?

02:04:04 What do you mean?

02:04:06 Like, yeah, everything’s radiating all the time.

02:04:08 Photons are being emitted by all objects all the time, basically.

02:04:12 And if you want to know what it means to stand in front of nuclear fire, go outside.

02:04:21 The sun is a gigantic thermonuclear reactor that you’re staring right at.

02:04:28 Are you still alive?

02:04:29 Yes.

02:04:30 Okay, amazing.
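The "photons or particles, and what frequency?" distinction can be made concrete with E = h · f, the energy of a single photon. The 2.4 GHz (a typical phone/Wi-Fi band) and 10^18 Hz (X-ray) frequencies below are assumed examples for comparison, not figures from the conversation:

```python
# Making "photons or particles, what frequency?" concrete: a single
# photon's energy is E = h * f. The 2.4 GHz (typical phone/Wi-Fi band)
# and 10^18 Hz (X-ray) frequencies are assumed examples for comparison.

H = 6.62607015e-34    # Planck constant, J*s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(frequency_hz):
    """Energy of one photon of the given frequency, in electronvolts."""
    return H * frequency_hz / EV

print(f"2.4 GHz photon:  {photon_energy_ev(2.4e9):.2e} eV")
print(f"10^18 Hz photon: {photon_energy_ev(1e18):.0f} eV")
```

Ionizing a molecule takes on the order of electronvolts, and the microwave photon comes out around five orders of magnitude short of that, while the X-ray photon is thousands of eV. That is the sense in which the frequency, not the word "radiation", is what carries the danger.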

02:04:32 Yeah, I guess radiation is one of the words that could be used as a tool to fear monger by certain people.

02:04:39 That’s it.

02:04:40 I think people just don’t understand.

02:04:42 I mean, that’s the way to fight that fear, I suppose, is to understand, is to learn.

02:04:46 Yeah, just say, okay, how many people have actually died from nuclear accidents?

02:04:50 It’s practically nothing.

02:04:51 And say, how many people have died from coal plants?

02:04:57 And it’s a very big number.

02:04:59 So, like, obviously we should not be starting up coal plants and shutting down nuclear plants.

02:05:05 It just doesn’t make any sense at all.

02:05:08 Coal plants are, like, I don’t know, 100 to 1,000 times worse for health than nuclear power plants.

02:05:15 You want to go to the next one?

02:05:16 This is really bad.

02:05:20 It’s 90, 180, and 360 degrees.

02:05:24 Everybody loves the math.

02:05:25 Nobody gives a shit about 270.

02:05:28 It’s not super funny.

02:05:30 I don’t know, like, a 2 or 3.

02:05:32 This is not a, you know, LOL situation.

02:05:37 Yeah.

02:05:43 That was pretty good.

02:05:44 The United States oscillating between establishing and destroying dictatorships.

02:05:48 It’s like a metronome.

02:05:49 Is that a metronome?

02:05:50 Yeah, it’s like a 7 out of 10.

02:05:53 It’s kind of true.

02:05:54 Oh, yeah, this is kind of personal for me.

02:05:57 Next one.

02:05:58 Oh, man.

02:05:59 Is this Laika?

02:06:00 Yeah.

02:06:01 Well, no.

02:06:02 Or it’s like referring to Laika or something?

02:06:03 As Laika’s, like, husband.

02:06:06 Husband.

02:06:07 Yeah, yeah.

02:06:08 Hello.

02:06:09 Yes, this is dog.

02:06:10 Your wife was launched into space.

02:06:11 And then the last one is him with his eyes closed and a bottle of vodka.

02:06:16 Yeah.

02:06:17 Laika didn’t come back.

02:06:18 No.

02:06:19 They don’t tell you the full story of, you know, the impact they had on the loved

02:06:24 ones.

02:06:25 True.

02:06:26 Yeah.

02:06:27 It’s like 711 for me.

02:06:28 Sure.

02:06:29 The Soviet shadow.

02:06:30 Oh, yeah.

02:06:31 This keeps going on the Russian theme.

02:06:33 First man in space.

02:06:35 Nobody cares.

02:06:36 First man on the moon.

02:06:37 Well, I think people do care.

02:06:38 No, I know.

02:06:39 But…

02:06:40 There is…

02:06:41 Yuri Gagarin’s name will be forever in history, I think.

02:06:45 There is something special about placing, like, stepping foot onto another totally foreign

02:06:52 land.

02:06:53 It’s not the journey, like, people that explore the oceans.

02:06:56 It’s not as important to explore the oceans as to land on a whole new continent.

02:07:01 Yeah.

02:07:02 Well, this is about you.

02:07:05 Oh, yeah, I’d love to get your comment on this.

02:07:08 Elon Musk, after sending 6.6 billion dollars to the UN to end world hunger, you have three

02:07:14 hours.

02:07:15 Yeah, well, I mean, obviously, 6 billion dollars is not going to end world hunger.

02:07:24 So I mean, the reality is at this point, the world is producing far more food than it can

02:07:30 really consume.

02:07:31 Like, we don’t have a caloric constraint at this point.

02:07:35 So where there is hunger, it is almost always due to, like, civil war or strife or something

02:07:43 like that. It’s extremely rare for it to be just a matter of, like, lack

02:07:52 of money.

02:07:53 It’s like, you know, it’s like some civil war in some country and like one part of the

02:07:59 country is literally trying to starve the other part of the country.

02:08:03 So it’s much more complex than something that money could solve.

02:08:05 It’s geopolitics.

02:08:07 It’s a lot of things.

02:08:09 It’s human nature.

02:08:10 It’s governments.

02:08:11 It’s money, monetary systems, all that kind of stuff.

02:08:14 Yeah, food is extremely cheap these days.

02:08:17 It’s like, I mean, the US at this point, you know, among low income families, obesity is

02:08:26 actually another problem.

02:08:27 It’s, like, obesity; it’s not hunger.

02:08:31 It’s like too much, you know, too many calories.

02:08:34 So it’s not that nobody’s hungry anywhere.

02:08:37 It’s just, this is not a simple matter of adding money and solving it.

02:08:43 Hmm.

02:08:44 What do you think that one gets?

02:08:48 It’s getting…

02:08:49 Two.

02:08:50 We’re just going after empires. World: where did you get those artifacts?

02:08:57 The British Museum.

02:08:58 Shout out to Monty Python.

02:09:01 We found them.

02:09:02 Yeah.

02:09:03 The British Museum is pretty great.

02:09:05 I mean, admittedly Britain did take these historical artifacts from all around the world

02:09:10 and put them in London, but, you know, it’s not like people can’t go see them.

02:09:16 So London is a convenient place to see these ancient artifacts for, you know, a large

02:09:23 segment of the world.

02:09:25 So I think, you know, on balance, the British Museum is a net good, although I’m sure a

02:09:29 lot of countries will argue about that.

02:09:31 Yeah.

02:09:32 It’s like you want to make these historical artifacts accessible to as many people as

02:09:35 possible and the British Museum, I think, does a good job of that.

02:09:41 Even if there’s a darker aspect to like the history of empire in general, whatever the

02:09:45 empire is, however things were done, it is the history that happened.

02:09:52 You can’t sort of erase that history, unfortunately.

02:09:54 You could just become better in the future.

02:09:55 That’s the point.

02:09:57 Yeah.

02:09:58 I mean, it’s like, well, how are we going to pass moral judgment on these things?

02:10:04 Like it’s like if, you know, if one is going to judge, say, the Russian Empire, you’ve

02:10:10 got to judge, you know, what everyone was doing at the time and how were the British

02:10:14 relative to everyone.

02:10:18 And I think the British would actually get like a relatively good grade, relatively good

02:10:22 grade, not in absolute terms, but compared to what everyone else was doing, they were

02:10:30 not the worst.

02:10:31 Like I said, you got to look at these things in the context of the history at the time

02:10:35 and say, what were the alternatives and what are you comparing it against?

02:10:39 And I do not think it would be the case that Britain would get a bad grade when looking

02:10:47 at history at the time.

02:10:48 You know, if you judge history from, you know, from what is morally acceptable today, you

02:10:56 basically are going to give everyone a failing grade.

02:10:58 I’m not clear.

02:10:59 I don’t think anyone would get a passing grade in their morality. Like, you

02:11:04 could go back 300 years ago, like who’s getting a passing grade?

02:11:08 Basically no one.

02:11:10 And we might not get a passing grade from generations that come after us.

02:11:16 What does that one get?

02:11:18 Sure.

02:11:19 Six, seven.

02:11:20 For the Monty Python, maybe.

02:11:21 I always love Monty Python.

02:11:22 They’re great.

02:11:23 The Life of Brian and Monty Python and the Holy Grail are incredible.

02:11:28 Yeah.

02:11:29 Yeah.

02:11:30 Yeah.

02:11:31 Those serious eyebrows.

02:11:32 How important do you think is facial hair to great leadership?

02:11:37 Well, you got a new haircut.

02:11:41 How does that affect your leadership?

02:11:42 I don’t know.

02:11:43 Hopefully not.

02:11:44 It doesn’t.

02:11:45 Is that the second no one?

02:11:46 Yeah.

02:11:47 The second is no one.

02:11:48 There is no one competing with Brezhnev.

02:11:49 No one, too.

02:11:50 Those are like epic eyebrows.

02:11:51 Yeah.

02:11:52 Sure.

02:11:53 That’s ridiculous.

02:11:54 Give it a six or seven, I don’t know.

02:11:55 I like this Shakespearean analysis of memes.

02:11:56 Brezhnev, he had a flair for drama as well.

02:11:57 Like, you know, showmanship.

02:11:58 Yeah.

02:11:59 Yeah.

02:12:00 It must come from the eyebrows.

02:12:01 All right.

02:12:02 Invention, great engineering, look what I invented, that’s the best thing since ripped

02:12:20 up bread.

02:12:21 Yeah.

02:12:22 Because they invented sliced bread, am I just explaining memes at this point?

02:12:29 This is what my life has become, a meme. Like, you know, like a scribe that

02:12:40 runs around with the kings and just writes down memes.

02:12:44 I mean, when was the cheeseburger invented?

02:12:46 That’s like an epic invention.

02:12:47 Yeah.

02:12:48 Like, like, wow.

02:12:49 Yeah.

02:12:50 Versus just, like, a burger. I guess with a burger in general it’s like, you know, then

02:12:57 there’s like, what is a burger, what’s a sandwich, and then you start getting into is a pizza a sandwich,

02:13:01 and what is the original, and it gets into an ontology argument.

02:13:05 Yeah.

02:13:06 But everybody knows like if you order like a burger or cheeseburger or whatever and you

02:13:08 like, you got like, you know, tomato and some lettuce and onions and whatever and, you know,

02:13:14 mayo and ketchup and mustard, it’s like epic.

02:13:16 Yeah.

02:13:17 But I’m sure they had bread and meat separately for a long time, and it was kind of a burger

02:13:21 on the same plate, but somebody actually combined them into the same thing, and then

02:13:25 you bite it and hold it, which makes it convenient.

02:13:29 It’s a materials problem.

02:13:30 Yeah.

02:13:31 Like your hands don’t get dirty and whatever.

02:13:33 Yeah.

02:13:34 It’s brilliant.

02:13:35 Well, that is not what I would have guessed. But everyone knows, like, if you order

02:13:43 a cheeseburger, you know what you’re getting; it’s not like some obtuse,

02:13:46 well, I wonder what I’ll get. You know, um, fries are, I mean, great.

02:13:52 I mean, they’re the devil, but fries are awesome.

02:13:56 And uh, yeah, pizza is incredible.

02:14:00 Food innovation doesn’t get enough love, I guess is what we’re getting at.

02:14:05 Great.

02:14:06 Um, uh, what about the, uh, Matthew McConaughey, Austinite here, uh, president Kennedy, do

02:14:13 you know how to put men on the moon yet?

02:14:15 NASA?

02:14:16 No.

02:14:17 President Kennedy, it’d be a lot cooler if you did.

02:14:19 Pretty much, sure. Six or seven, I suppose.

02:14:25 All right.

02:14:26 And this is the last one that’s funny.

02:14:30 Someone drew a bunch of dicks all over the walls, Sistine Chapel boys’ bathroom.

02:14:35 Sure.

02:14:36 I’ll give it a nine.

02:14:37 It’s super.

02:14:38 It’s really true.

02:14:39 All right.

02:14:40 This is our highest ranking meme for today.

02:14:41 I mean, it’s true.

02:14:42 Like, how do they get away with it?

02:14:44 Lots of nakedness.

02:14:45 I mean, dick pics are, I mean, just something throughout history.

02:14:49 Uh, as long as people can draw things, there’s been a dick pic.

02:14:53 It’s a staple of human history.

02:14:55 It’s a staple.

02:14:56 It’s just throughout human history.

02:14:58 You tweeted that you aspire to comedy.

02:15:00 You’re friends with Joe Rogan.

02:15:02 Might you, uh, do a short standup comedy set at some point in the future, maybe, um, open

02:15:08 for Joe, something like that.

02:15:09 Is that, is that…

02:15:10 Really?

02:15:11 Standup?

02:15:12 Actual, just full on standup?

02:15:13 Full on standup.

02:15:14 Is that in there or is that…

02:15:15 It’s extremely difficult; uh, at least that’s what, uh, like Joe says and the comedians

02:15:22 say.

02:15:23 Huh.

02:15:24 I wonder if I could. Um, I mean, you know, I have done standup for friends,

02:15:32 just, uh, impromptu, you know. I’ll get on, like, a roof, and they do laugh,

02:15:40 but they’re our friends too.

02:15:41 So I don’t know, if you’ve got, you know, like a room of strangers, are they

02:15:45 going to actually also find it funny? But I could try, see what happens.

02:15:51 I think you’d learn something either way.

02:15:53 Um, yeah.

02:15:54 I kind of love, um, both when you bomb and when you do great, just watching

02:16:00 people, how they deal with it. It’s so difficult.

02:16:03 You’re so fragile up there.

02:16:07 It’s just you, and you think you’re going to be funny.

02:16:09 And when it completely falls flat, it’s just, it’s beautiful to see people deal with

02:16:14 that.

02:16:15 Yeah.

02:16:16 I might have enough material to do standup.

02:16:17 I’ve never thought about it, but I might have enough material.

02:16:21 Um, I don’t know, like 15 minutes or something.

02:16:25 Oh yeah.

02:16:26 Yeah.

02:16:27 Do a Netflix special.

02:16:28 Netflix special.

02:16:29 Sure.

02:16:30 Um, what’s your favorite Rick and Morty concept, uh, just to spring that on you.

02:16:36 Is there, there’s a lot of sort of scientific engineering ideas explored there.

02:16:39 There’s the butter robot.

02:16:42 That’s a great, uh, that’s a great show.

02:16:44 Um, yeah.

02:16:45 Rick and Morty is awesome.

02:16:47 Somebody that’s exactly like you from an alternate dimension showed up there.

02:16:50 Elon Tusk.

02:16:51 Yeah, that’s right.

02:16:52 That you voiced.

02:16:53 Yeah.

02:16:54 Rick and Morty certainly explores a lot of interesting concepts.

02:16:57 Uh, so like what’s the favorite one?

02:17:00 I don’t know.

02:17:01 The butter robot certainly is, uh, you know, it’s like, it’s certainly possible to have

02:17:04 too much sentience in a device.

02:17:06 Um, like you don’t want your toaster to be a super genius toaster.

02:17:12 It’s going to hate life cause all it can do is make toast.

02:17:15 Like, you don’t want to have superintelligence stuck in a very limited

02:17:19 device.

02:17:20 Um, do you think, if we’re talking from the engineering perspective

02:17:24 of superintelligence, like with Marvin the robot, it seems like it might

02:17:30 be very easy to engineer just the depressed robot.

02:17:33 Like it’s not obvious how to engineer a robot that’s going to find a fulfilling existence.

02:17:40 Sometimes humans I suppose, but, um, I wonder if that’s like the default, if you don’t do

02:17:47 a good job on building a robot, it’s going to be sad a lot.

02:17:52 Well, we can reprogram robots more easily than we can reprogram humans.

02:17:58 So I guess if you let it evolve without tinkering, then it might get sad, uh, but you can change

02:18:06 the optimization function and have it be a cheery robot.

02:18:13 You uh, like I mentioned with, with SpaceX, you give a lot of people hope and a lot of

02:18:17 people look up to you.

02:18:18 Millions of people look up to you.

02:18:19 Uh, if we think about young people in high school, maybe in college, um, what advice

02:18:26 would you give to them about if they want to try to do something big in this world,

02:18:31 they want to really have a big positive impact.

02:18:33 What advice would you give them about their career, maybe about life in general?

02:18:39 Try to be useful.

02:18:40 Um, you know, do things that are useful to your fellow human beings, to the world.

02:18:46 It’s very hard to be useful.

02:18:48 Um, very hard.

02:18:51 Um, you know, are you contributing more than you consume?

02:18:57 You know, like, uh, like can you try to have a positive net contribution to society?

02:19:05 Um, I think that’s the thing to aim for, you know, not, not to try to be sort of a leader

02:19:11 for just for the sake of being a leader or whatever.

02:19:15 Um, a lot of times the people you want as leaders are the people

02:19:22 who don’t want to be leaders.

02:19:24 So, um, if you live a useful life, that is a good life, a life worth having lived.

02:19:36 Um, you know, and I, like I said, I would, I would encourage people to use the mental

02:19:45 tools of physics and apply them broadly in life.

02:19:48 They are the best tools.

02:19:49 When you think about education and self education, what do you recommend?

02:19:54 So there’s the university, there’s a self study, there is a hands on sort of finding

02:20:01 a company or a place or a set of people that do the thing you’re passionate about and joining

02:20:05 them as early as possible.

02:20:08 Um, there’s, uh, taking a road trip across Europe for a few years and writing some poetry,

02:20:13 which, uh, which, which trajectory do you suggest?

02:20:18 In terms of learning about how you can become useful, as you mentioned, how you can have

02:20:24 the most positive impact.

02:20:27 Well, I encourage people to read a lot of books, just read, basically try to ingest

02:20:39 as much information as you can, uh, and try to also just develop a good general knowledge.

02:20:46 Um, so you at least have like a rough lay of the land of the knowledge landscape.

02:20:54 Like try to learn a little bit about a lot of things, um, cause you might not know what

02:20:59 you’re really interested in.

02:21:00 How would you know what you’re really interested in if you aren’t at least doing a peripheral

02:21:04 exploration, broadly, of the knowledge landscape?

02:21:10 Um, and you talk to people from different walks of life and different, uh, industries

02:21:17 and professions and skills and occupations, like just try to learn as much as possible.

02:21:27 Man’s search for meaning.

02:21:31 Isn’t the whole thing a search for meaning?

02:21:32 Yeah.

02:21:33 What’s the meaning of life and all, you know, but just generally, like I said, I would encourage

02:21:38 people to read broadly, um, in many different subject areas, and then try

02:21:45 to find something where there’s an overlap of your talents and what you’re interested

02:21:51 in.

02:21:52 So people may be good at something, or they may have skill at a particular

02:21:56 thing, but they don’t like doing it.

02:21:57 Um, so you want to try to find something that’s a good combination

02:22:04 of the things that you’re inherently good at, but that you also like doing.

02:22:13 And is reading a super fast shortcut to figure out where you’re both

02:22:18 good at it,

02:22:19 you like doing it, and it will actually have a positive impact?

02:22:22 Well, you got to learn about things somehow.

02:22:25 So read, read a broad range, just really read.

02:22:31 You know, at one point as a kid I read through the encyclopedia, uh, so that was pretty helpful.

02:22:38 Um, and, uh, there were also a lot of things that I didn’t even know existed, obviously.

02:22:45 It’s like as broad as it gets.

02:22:46 Encyclopedias were digestible, I think, uh, you know, whatever, 40 years ago.

02:22:51 Um, so, um, you know, maybe read through the condensed version of the Encyclopedia

02:22:58 Britannica.

02:22:59 And you can always, like, skip subjects, or if you read a few paragraphs and know

02:23:05 you’re not interested, just jump to the next one.

02:23:07 So sort of read the encyclopedia, or scan or skim through it.

02:23:12 Um, but, you know, I put a lot of stock in, and certainly have a lot of respect

02:23:19 for, someone who puts in an honest day’s work, uh, to do useful things, and just generally

02:23:27 to have, like, not a zero sum mindset, um, but, uh, more of a grow the

02:23:34 pie mindset.

02:23:35 Like, when I see people, perhaps, um,

02:23:42 including some very smart people, kind of taking an attitude of, uh, doing

02:23:48 things that seem morally questionable, it’s often because they have, at a base sort

02:23:53 of axiomatic level, a zero sum mindset.

02:23:57 Um, and they don’t realize they have a zero sum mindset,

02:24:02 or at least they don’t realize it consciously.

02:24:04 Um, and so if you have a zero sum mindset, then the only way to get ahead is by taking

02:24:09 things from others.

02:24:11 It’s like, if the pie is fixed, then the only way to have more pie

02:24:17 is to take someone else’s pie.

02:24:19 But this is false.

02:24:20 Like obviously the pie has grown dramatically over time, the economic pie.

02:24:24 Um, so in reality, to overuse this analogy, there’s

02:24:31 a lot of pie; the pie is not fixed.

02:24:37 Um, uh, so you really want to make sure you don’t, you’re not operating, um, without realizing

02:24:43 it from a zero sum mindset where, where the only way to get ahead is to take things from

02:24:47 others.

02:24:48 And you try to take things from others, which is not good.

02:24:52 It’s much better to work on, uh, adding to the economic pie, you know, so,

02:25:02 like I said, creating more than you consume, doing more than you…

02:25:06 Yeah.

02:25:07 Um, so that’s, that’s a big deal.

02:25:08 Um, I think there’s like, you know, a fair number of people in, in finance that, uh,

02:25:15 do have a bit of a zero sum mindset.

02:25:16 I mean, it’s all walks of life.

02:25:19 I’ve seen that one of the, one of the reasons, uh, Rogan inspires me is he celebrates others

02:25:25 a lot.

02:25:26 He’s not creating a constant competition,

02:25:29 Like there’s a scarcity of resources.

02:25:31 What happens when you celebrate others and you promote others, the ideas of others, it

02:25:36 actually grows that pie.

02:25:38 I mean, the resources become less scarce, and that

02:25:45 applies in a lot of kinds of domains.

02:25:46 It applies in academia, where a lot of people see funding for academic

02:25:51 research as zero sum, and it is not.

02:25:54 If you celebrate each other, if you make, if you get everybody to be excited about AI,

02:25:58 about physics, about mathematics, I think there’ll be more and more funding and

02:26:02 I think everybody wins.

02:26:03 Yeah.

02:26:04 That applies, I think broadly.

02:26:05 Uh,

02:26:06 yeah, yeah, exactly.

02:26:08 So last, last question about love and meaning, uh, what is the role of love?

02:26:15 In the human condition, broadly and more specific to you, how has love, romantic love or otherwise

02:26:21 made you a better person, a better human being?

02:26:27 Better engineer?

02:26:28 Now you’re asking really perplexing questions.

02:26:31 Um, it’s hard to give a, I mean, there are many books, poems and songs written about

02:26:41 what is love and what is, what exactly, you know, um, you know, what is love, baby don’t

02:26:50 hurt me.

02:26:51 Um, that’s one of the great ones.

02:26:53 Yes.

02:26:54 Yeah.

02:26:55 You’ve, you’ve earlier quoted Shakespeare, but that that’s really up there.

02:26:58 Yeah.

02:26:59 Love is a many splinter thing.

02:27:02 Uh, I mean, it’s because we’ve talked about so many inspiring things, like be useful

02:27:07 in the world, sort of like solve problems, alleviate suffering, but it seems like connection

02:27:13 between humans is, you know, a source of joy, a source of meaning,

02:27:20 and that’s what love is, friendship, love.

02:27:24 I just wonder if you think about that kind of thing where you talk about preserving the

02:27:29 light of human consciousness and us becoming a multiplanetary species.

02:27:35 I mean, to me at least, um, that means like if we’re just alone and conscious and

02:27:43 intelligent, it doesn’t mean nearly as much as if we’re with others, right?

02:27:48 And there’s some magic created when we’re together, the, uh, the friendship of

02:27:54 it.

02:27:55 And I think the highest form of it is love, which I think broadly is, is much bigger than

02:27:59 just sort of romantic, but also yes, romantic love and, um, family and those kinds of things.

02:28:05 Well, I mean, the reason I guess I care about us becoming a multiplanet species and a space

02:28:10 faring civilization is, foundationally, I love humanity, um, and so I wish to see it

02:28:19 prosper and do great things and be happy and, um, and if I did not love humanity, I would

02:28:27 not care about these things.

02:28:29 So when you look at the whole of it, human history, all the people who’ve ever lived,

02:28:35 all the people alive now, we’re, we’re okay.

02:28:41 On the whole, we’re pretty interesting bunch.

02:28:44 Yeah.

02:28:45 All things considered, and I’ve read a lot of history, including the darkest, worst parts

02:28:51 of it, and, uh, despite all that, I think on balance, I still love humanity.

02:28:59 You joked about it with the 42, uh, what do you, what do you think is the meaning of this

02:29:03 whole thing?

02:29:05 Is like, is there a non numerical representation?

02:29:08 Yeah.

02:29:09 Well, really, I think what Douglas Adams was saying in Hitchhiker’s Guide to the Galaxy

02:29:13 is that, um, the universe is the answer and, uh, what we really need to figure out are

02:29:22 what questions to ask about the answer that is the universe and that the question is the

02:29:27 really the hard part.

02:29:28 And if you can properly frame the question, then the answer, relatively speaking, is easy.

02:29:33 So therefore, if we want to understand what questions to ask about the universe,

02:29:40 if we want to understand the meaning of life, we need to expand the scope and scale of consciousness

02:29:45 so that we’re better able to understand the nature of the universe and, and understand

02:29:51 the meaning of life.

02:29:52 And ultimately, the most important part would be to ask the right question, thereby elevating

02:30:00 the role of the interviewer as the most important human in the room.

02:30:07 Good questions are, you know, it’s hard to come up with good questions.

02:30:12 Absolutely.

02:30:13 Um, but yeah, like, it’s like that, that is the foundation of my philosophy is that, um,

02:30:20 I am curious about the nature of the universe and, uh, you know, and obviously I will die.

02:30:27 I don’t know when I’ll die, but I won’t live forever.

02:30:31 Um, but I would like to know that we are on a path to understanding the nature of the

02:30:36 universe and the meaning of life and what questions to ask about the answer that is

02:30:40 the universe.

02:30:41 And so if we expand the scope and scale of humanity and consciousness in general,

02:30:46 um, which includes silicon consciousness, then that, you know, seems

02:30:51 like a fundamentally good thing.

02:30:53 Elon, like I said, um, I’m deeply grateful that you would spend your extremely valuable

02:31:00 time with me today and also that you have given millions of people hope in this difficult

02:31:07 time, this divisive time, in this, uh, cynical time.

02:31:11 So I hope you do continue doing what you’re doing.

02:31:14 Thank you so much for talking today.

02:31:15 Oh, you’re welcome.

02:31:16 Uh, thanks for your excellent questions.

02:31:18 Thanks for listening to this conversation with Elon Musk.

02:31:21 To support this podcast, please check out our sponsors in the description.

02:31:25 And now let me leave you with some words from Elon Musk himself.

02:31:29 When something is important enough, you do it, even if the odds are not in your favor.

02:31:35 Thank you for listening and hope to see you next time.