George Hotz: Hacking the Simulation & Learning to Drive with Neural Nets #132

Transcript

00:00:00 The following is a conversation with George Hotz,

00:00:02 AKA Geohot, his second time on the podcast.

00:00:06 He’s the founder of Comma AI,

00:00:09 an autonomous and semi autonomous vehicle technology company

00:00:12 that seeks to be to Tesla Autopilot

00:00:15 what Android is to iOS.

00:00:18 They sell the Comma 2 device for $1,000

00:00:22 that when installed in many of their supported cars

00:00:25 can keep the vehicle centered in the lane

00:00:27 even when there are no lane markings.

00:00:30 It includes driver sensing

00:00:32 that ensures that the driver’s eyes are on the road.

00:00:35 As you may know, I’m a big fan of driver sensing.

00:00:38 I do believe Tesla Autopilot and others

00:00:40 should definitely include it in their sensor suite.

00:00:43 Also, I’m a fan of Android and a big fan of George

00:00:47 for many reasons,

00:00:48 including his nonlinear out of the box brilliance

00:00:51 and the fact that he’s a superstar programmer

00:00:55 of a very different style than myself.

00:00:57 Styles make fights and styles make conversations.

00:01:01 So I really enjoyed this chat

00:01:02 and I’m sure we’ll talk many more times on this podcast.

00:01:06 Quick mention of a sponsor

00:01:07 followed by some thoughts related to the episode.

00:01:10 First is Four Sigmatic,

00:01:12 the maker of delicious mushroom coffee.

00:01:15 Second is Decoding Digital,

00:01:17 a podcast on tech and entrepreneurship

00:01:19 that I listen to and enjoy.

00:01:22 And finally, ExpressVPN,

00:01:24 the VPN I’ve used for many years to protect my privacy

00:01:27 on the internet.

00:01:29 Please check out the sponsors in the description

00:01:31 to get a discount and to support this podcast.

00:01:34 As a side note, let me say that my work at MIT

00:01:38 on autonomous and semi autonomous vehicles

00:01:40 led me to study the human side of autonomy

00:01:43 enough to understand that it’s a beautifully complicated

00:01:46 and interesting problem space,

00:01:48 much richer than what can be studied in the lab.

00:01:51 In that sense, the data that Comma AI, Tesla Autopilot

00:01:55 and perhaps others like Cadillac Super Cruise are collecting

00:01:58 gives us a chance to understand

00:02:00 how we can design safe semi autonomous vehicles

00:02:03 for real human beings in real world conditions.

00:02:07 I think this requires bold innovation

00:02:09 and a serious exploration of the first principles

00:02:13 of the driving task itself.

00:02:15 If you enjoyed this thing, subscribe on YouTube,

00:02:17 review it with five stars on Apple Podcast,

00:02:20 follow on Spotify, support on Patreon

00:02:22 or connect with me on Twitter at Lex Fridman.

00:02:26 And now here’s my conversation with George Hotz.

00:02:31 So last time we started talking about the simulation,

00:02:34 this time let me ask you,

00:02:35 do you think there’s intelligent life out there

00:02:37 in the universe?

00:02:38 I’ve always maintained my answer to the Fermi paradox.

00:02:41 I think there has been intelligent life

00:02:44 elsewhere in the universe.

00:02:45 So intelligent civilizations existed

00:02:47 but they’ve blown themselves up.

00:02:49 So your general intuition is that

00:02:50 intelligent civilizations quickly,

00:02:54 like there’s that parameter in the Drake equation.

00:02:57 Your sense is they don’t last very long.

00:02:59 Yeah.

00:03:00 How are we doing on that?

00:03:01 Like, have we lasted pretty good?

00:03:03 Oh no.

00:03:04 Are we due?

00:03:05 Oh yeah.

00:03:06 I mean, not quite yet.

00:03:09 Well, I was going to tell you,

00:03:10 as you'd asked, the IQ required to destroy the world

00:03:13 falls by one point every year.

00:03:15 Okay.

00:03:16 Technology democratizes the destruction of the world.

00:03:21 When can a meme destroy the world?

00:03:23 It kind of is already, right?

00:03:27 Somewhat.

00:03:28 I don’t think we’ve seen anywhere near the worst of it yet.

00:03:32 Well, it’s going to get weird.

00:03:34 Well, maybe a meme can save the world.

00:03:36 You thought about that?

00:03:37 The meme Lord Elon Musk fighting on the side of good

00:03:40 versus the meme Lord of the darkness,

00:03:44 which is not saying anything bad about Donald Trump,

00:03:48 but he is the Lord of the meme on the dark side.

00:03:51 He’s a Darth Vader of memes.

00:03:53 I think in every fairy tale they always end it with,

00:03:58 and they lived happily ever after.

00:03:59 And I’m like, please tell me more

00:04:00 about this happily ever after.

00:04:02 I’ve heard 50% of marriages end in divorce.

00:04:05 Why doesn’t your marriage end up there?

00:04:07 You can’t just say happily ever after.

00:04:09 So it’s the thing about destruction

00:04:12 is it’s over after the destruction.

00:04:14 We have to do everything right in order to avoid it.

00:04:18 And one thing wrong,

00:04:20 I mean, actually this is what I really like

00:04:21 about cryptography.

00:04:22 Cryptography, it seems like we live in a world

00:04:24 where the defense wins versus like nuclear weapons.

00:04:29 The opposite is true.

00:04:30 It is much easier to build a warhead

00:04:32 that splits into a hundred little warheads

00:04:34 than to build something that can, you know,

00:04:36 take out a hundred little warheads.

00:04:38 The offense has the advantage there.

00:04:41 So maybe our future is in crypto, but.

00:04:44 So cryptography, right.

00:04:45 The Goliath is the defense.

00:04:49 And then all the different hackers are the Davids.

00:04:54 And that equation is flipped for nuclear war.

00:04:57 Cause there’s so many,

00:04:58 like one nuclear weapon destroys everything essentially.

00:05:01 Yeah, and it is much easier to attack with a nuclear weapon

00:05:06 than it is to like the technology required to intercept

00:05:09 and destroy a rocket is much more complicated

00:05:12 than the technology required to just, you know,

00:05:13 orbital trajectory, send a rocket to somebody.

00:05:17 So, okay.

00:05:18 Your intuition that there were intelligent civilizations

00:05:21 out there, but it’s very possible

00:05:24 that they’re no longer there.

00:05:26 That’s kind of a sad picture.

00:05:27 They enter some steady state.

00:05:29 They all wirehead themselves.

00:05:31 What’s wirehead?

00:05:33 Stimulate, stimulate their pleasure centers

00:05:35 and just, you know, live forever in this kind of stasis.

00:05:39 They become, well, I mean,

00:05:42 I think the reason I believe this is because where are they?

00:05:46 If there’s some reason they stopped expanding,

00:05:50 cause otherwise they would have taken over the universe.

00:05:52 The universe isn’t that big.

00:05:53 Or at least, you know,

00:05:54 let’s just talk about the galaxy, right?

00:05:56 That’s 70,000 light years across.

00:05:58 I took that number from Star Trek Voyager.

00:05:59 I don’t know how true it is, but yeah, that’s not big.

00:06:04 Right? 70,000 light years is nothing.

00:06:07 For some possible technology that you can imagine

00:06:10 that can leverage like wormholes or something like that.

00:06:12 Or you don’t even need wormholes.

00:06:13 Just a von Neumann probe is enough.

00:06:15 A von Neumann probe and a million years of sublight travel

00:06:18 and you’d have taken over the whole universe.

00:06:20 That clearly didn’t happen.

00:06:22 So something stopped it.

00:06:24 So you mean if you, right,

00:06:25 for like a few million years,

00:06:27 if you sent out probes that travel close,

00:06:29 what’s sublight?

00:06:30 You mean close to the speed of light?

00:06:32 Let’s say 0.1 C.

00:06:33 And it just spreads.

00:06:34 Interesting.

00:06:35 Actually, that’s an interesting calculation, huh?

00:06:38 So what makes you think that we’d be able

00:06:40 to communicate with them?

00:06:42 Like, yeah, what’s,

00:06:45 why do you think we would be able

00:06:47 to comprehend intelligent lives that are out there?

00:06:51 Like even if they were among us kind of thing,

00:06:54 like, or even just flying around?

00:06:57 Well, I mean, that’s possible.

00:07:01 It’s possible that there is some sort of prime directive.

00:07:04 That’d be a really cool universe to live in.

00:07:07 And there’s some reason

00:07:08 they’re not making themselves visible to us.

00:07:10 But it makes sense that they would use the same,

00:07:15 well, at least the same entropy.

00:07:16 Well, you’re implying the same laws of physics.

00:07:18 I don’t know what you mean by entropy in this case.

00:07:20 Oh, yeah.

00:07:21 I mean, if entropy is the scarce resource in the universe.

00:07:25 So what do you think about like Stephen Wolfram

00:07:26 and everything is a computation?

00:07:28 And then what if they are traveling through

00:07:31 this world of computation?

00:07:32 So if you think of the universe

00:07:34 as just information processing,

00:07:36 then what you’re referring to with entropy

00:07:40 and then these pockets of interesting complex computation

00:07:44 swimming around, how do we know they’re not already here?

00:07:47 How do we know that this,

00:07:51 like all the different amazing things

00:07:53 that are full of mystery on earth

00:07:55 are just like little footprints of intelligence

00:07:58 from light years away?

00:08:01 Maybe.

00:08:02 I mean, I tend to think that as civilizations expand,

00:08:05 they use more and more energy

00:08:07 and you can never overcome the problem of waste heat.

00:08:10 So where is there waste heat?

00:08:11 So we’d be able to, with our crude methods,

00:08:13 be able to see like, there’s a whole lot of energy here.

00:08:18 But it could be something we’re not,

00:08:20 I mean, we don’t understand dark energy, right?

00:08:22 Dark matter.

00:08:23 It could be just stuff we don’t understand at all.

00:08:26 Or they can have a fundamentally different physics,

00:08:29 you know, like that we just don’t even comprehend.

00:08:32 Well, I think, okay,

00:08:33 I mean, it depends how far out you wanna go.

00:08:35 I don’t think physics is very different

00:08:36 on the other side of the galaxy.

00:08:39 I would suspect that they have,

00:08:41 I mean, if they’re in our universe,

00:08:43 they have the same physics.

00:08:45 Well, yeah, that’s the assumption we have,

00:08:47 but there could be like super trippy things

00:08:50 like our cognition only gets to a slice,

00:08:57 and all the possible instruments that we can design

00:08:59 only get to a particular slice of the universe.

00:09:01 And there’s something much like weirder.

00:09:04 Maybe we can try a thought experiment.

00:09:06 Would people from the past

00:09:10 be able to detect the remnants of our,

00:09:14 or would we be able to detect our modern civilization?

00:09:16 I think the answer is obviously yes.

00:09:18 You mean past from a hundred years ago?

00:09:20 Well, let’s even go back further.

00:09:22 Let’s go to a million years ago, right?

00:09:24 The humans who were lying around in the desert

00:09:26 probably didn’t even have,

00:09:27 maybe they just barely had fire.

00:09:31 They would understand if a 747 flew overhead.

00:09:35 Oh, in this vicinity, but not if a 747 flew on Mars.

00:09:43 Like, cause they wouldn’t be able to see far,

00:09:45 cause we’re not actually communicating that well

00:09:47 with the rest of the universe.

00:09:48 We’re doing okay.

00:09:50 Just sending out random like fifties tracks of music.

00:09:54 True.

00:09:55 And yeah, I mean, they’d have to, you know,

00:09:57 we’ve only been broadcasting radio waves for 150 years.

00:10:02 And well, there’s your light cone.

00:10:04 So.

00:10:05 Yeah. Okay.

00:10:06 What do you make about all the,

00:10:08 I recently came across this having talked to David Fravor.

00:10:14 I don’t know if you caught what the videos

00:10:16 of the Pentagon released

00:10:18 and the New York Times reporting of the UFO sightings.

00:10:23 So I kind of looked into it, quote unquote.

00:10:26 And there’s actually been like hundreds

00:10:30 of thousands of UFO sightings, right?

00:10:33 And a lot of it you can explain away

00:10:35 in different kinds of ways.

00:10:37 So one is it could be interesting physical phenomena.

00:10:40 Two, it could be people wanting to believe

00:10:44 and therefore they conjure up a lot of different things

00:10:46 that just, you know, when you see different kinds of lights,

00:10:48 some basic physics phenomena,

00:10:50 and then you just conjure up ideas

00:10:53 of possible out there mysterious worlds.

00:10:56 But, you know, it’s also possible,

00:10:58 like you have a case of David Fravor,

00:11:02 who is a Navy pilot, who’s, you know,

00:11:06 as legit as it gets in terms of humans

00:11:08 who are able to perceive things in the environment

00:11:13 and make conclusions,

00:11:15 whether those things are a threat or not.

00:11:17 And he and several other pilots saw a thing,

00:11:22 I don’t know if you followed this,

00:11:23 but they saw a thing that they've since then called the Tic Tac

00:11:26 that moved in all kinds of weird ways.

00:11:29 They don’t know what it is.

00:11:30 It could be technology developed by the United States

00:11:36 and they’re just not aware of it

00:11:38 at the surface level of the Navy, right?

00:11:40 It could be different kind of lighting technology

00:11:42 or drone technology, all that kind of stuff.

00:11:45 It could be the Russians and the Chinese,

00:11:46 all that kind of stuff.

00:11:48 And of course their mind, our mind,

00:11:51 can also venture into the possibility

00:11:54 that it’s from another world.

00:11:56 Have you looked into this at all?

00:11:58 What do you think about it?

00:11:59 I think all the news is a psyop.

00:12:01 I think that the most plausible.

00:12:05 Nothing is real.

00:12:06 Yeah, I listened to the, I think it was Bob Lazar

00:12:10 on Joe Rogan.

00:12:12 And like, I believe everything this guy is saying.

00:12:15 And then I think that it’s probably just some like MKUltra

00:12:18 kind of thing, you know?

00:12:20 What do you mean?

00:12:21 Like they, you know, they made some weird thing

00:12:24 and they called it an alien spaceship.

00:12:26 You know, maybe it was just to like

00:12:27 stimulate young physicists minds.

00:12:29 We’ll tell them it’s alien technology

00:12:31 and we’ll see what they come up with, right?

00:12:33 Do you find any conspiracy theories compelling?

00:12:36 Like have you pulled at the string

00:12:38 of the rich complex world of conspiracy theories

00:12:42 that’s out there?

00:12:43 I think that I’ve heard a conspiracy theory

00:12:46 that conspiracy theories were invented by the CIA

00:12:48 in the 60s to discredit true things.

00:12:52 Yeah.

00:12:53 So, you know, you can go to ridiculous conspiracy theories

00:12:58 like Flat Earth and Pizza Gate.

00:13:01 And, you know, these things are almost to hide

00:13:05 like conspiracy theories that like,

00:13:08 you know, remember when the Chinese like locked up

00:13:09 the doctors who discovered coronavirus?

00:13:11 Like I tell people this and I’m like,

00:13:12 no, no, no, that’s not a conspiracy theory.

00:13:14 That actually happened.

00:13:15 Do you remember the time that the money used to be backed

00:13:18 by gold and now it’s backed by nothing?

00:13:20 This is not a conspiracy theory.

00:13:21 This actually happened.

00:13:23 Well, that’s one of my worries today

00:13:26 with the idea of fake news is that when nothing is real,

00:13:32 then like you dilute the possibility of anything being true

00:13:37 by conjuring up all kinds of conspiracy theories.

00:13:41 And then you don’t know what to believe.

00:13:42 And then like the idea of truth of objectivity

00:13:46 is lost completely.

00:13:47 Everybody has their own truth.

00:13:50 So you used to control information by censoring it.

00:13:53 And then the internet happened and governments were like,

00:13:55 oh shit, we can’t censor things anymore.

00:13:58 I know what we’ll do.

00:14:00 You know, it’s the old story of the story of like

00:14:04 tying a flag with a leprechaun tells you his gold is buried

00:14:07 and you tie one flag and you make the leprechaun swear

00:14:09 to not remove the flag.

00:14:10 And you come back to the field later with a shovel

00:14:11 and there’s flags everywhere.

00:14:14 That’s one way to maintain privacy, right?

00:14:16 It’s like in order to protect the contents

00:14:20 of this conversation, for example,

00:14:21 we could just generate like millions of deep,

00:14:25 fake conversations where you and I talk

00:14:27 and say random things.

00:14:29 So this is just one of them

00:14:30 and nobody knows which one was the real one.

00:14:32 This could be fake right now.

00:14:34 Classic steganography technique.
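
A toy sketch of that idea in Python: publish one real transcript among many decoys, where only someone holding a shared secret can pick out the genuine one. The HMAC tagging scheme here is just an illustration, not something described in the conversation:

```python
import hmac, hashlib, os, random

SECRET = os.urandom(32)  # shared only with people allowed to find the real one

def tag(transcript: str, key: bytes) -> str:
    # Keyed MAC over the transcript; decoys get random tags instead.
    return hmac.new(key, transcript.encode(), hashlib.sha256).hexdigest()

real = "the actual conversation"
decoys = [f"fake conversation #{i}" for i in range(999)]

published = [(t, os.urandom(32).hex()) for t in decoys]
published.append((real, tag(real, SECRET)))
random.shuffle(published)

# Only a key holder can tell which transcript is genuine.
genuine = [t for t, m in published if hmac.compare_digest(m, tag(t, SECRET))]
print(genuine)  # ['the actual conversation']
```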

00:14:37 Okay, another absurd question about intelligent life.

00:14:39 Cause you know, you’re an incredible programmer

00:14:43 outside of everything else we’ll talk about

00:14:45 just as a programmer.

00:14:49 Do you think intelligent beings out there,

00:14:52 the civilizations that were out there,

00:14:54 had computers and programming?

00:14:58 Did they, do we naturally have to develop something

00:15:01 where we engineer machines and are able to encode

00:15:05 both knowledge into those machines

00:15:08 and instructions that process that knowledge,

00:15:11 process that information to make decisions

00:15:14 and actions and so on?

00:15:15 And would those programming languages,

00:15:18 if you think they exist, be at all similar

00:15:21 to anything we’ve developed?

00:15:24 So I don’t see that much of a difference

00:15:26 between quote unquote natural languages

00:15:29 and programming languages.

00:15:34 Yeah.

00:15:35 I think there’s so many similarities.

00:15:36 So when asked the question,

00:15:39 what do alien languages look like?

00:15:42 I imagine they’re not all that dissimilar from ours.

00:15:46 And I think translating in and out of them

00:15:51 wouldn’t be that crazy.

00:15:52 Well, it’s difficult to compile like DNA to Python

00:15:57 and then to C.

00:15:59 There’s a little bit of a gap in the kind of languages

00:16:02 we use for Turing machines

00:16:06 and the kind of languages nature seems to use a little bit.

00:16:10 Maybe that’s just, we just haven’t understood

00:16:13 the kind of language that nature uses well yet.

00:16:16 DNA is a CAD model.

00:16:19 It’s not quite a programming language.

00:16:21 It has no sort of a serial execution.

00:16:25 It’s not quite a, yeah, it’s a CAD model.

00:16:29 So I think in that sense,

00:16:30 we actually completely understand it.

00:16:32 The problem is, well, simulating on these CAD models,

00:16:37 I played with it a bit this year,

00:16:38 is super computationally intensive.

00:16:41 If you wanna go down to like the molecular level

00:16:43 where you need to go to see a lot of these phenomena

00:16:45 like protein folding.

00:16:48 So yeah, it’s not that we don’t understand it.

00:16:52 It just requires a whole lot of compute to kind of compile it.

00:16:55 For our human minds, it’s inefficient,

00:16:56 both for the data representation and for the programming.

00:17:00 Yeah, it runs well on raw nature.

00:17:02 It runs well on raw nature.

00:17:03 And when we try to build emulators or simulators for that,

00:17:07 well, they’re mad slow, but I’ve tried it.

00:17:10 It runs in that, yeah, you’ve commented elsewhere,

00:17:14 I don’t remember where,

00:17:15 that one of the problems is simulating nature is tough.

00:17:20 And if you want to sort of deploy a prototype,

00:17:25 I forgot how you put it, but it made me laugh,

00:17:28 but animals or humans would need to be involved

00:17:31 in order to try to run some prototype code on,

00:17:38 like if we’re talking about COVID and viruses and so on,

00:17:41 if you were trying to engineer

00:17:42 some kind of defense mechanisms,

00:17:45 like a vaccine against COVID and all that kind of stuff

00:17:49 that doing any kind of experimentation,

00:17:52 like you can with like autonomous vehicles

00:17:53 would be very technically and ethically costly.

00:17:59 I’m not sure about that.

00:18:00 I think you can do tons of crazy biology and test tubes.

00:18:05 I think my bigger complaint is more,

00:18:08 oh, the tools are so bad.

00:18:11 Like literally, you mean like libraries and?

00:18:14 I don’t know, I’m not pipetting shit.

00:18:16 Like you’re handing me a, I got a, no, no, no,

00:18:20 there has to be some.

00:18:22 Like automating stuff.

00:18:24 And like the, yeah, but human biology is messy.

00:18:28 Like it seems.

00:18:29 But like, look at those Theranos videos.

00:18:31 They were a joke.

00:18:32 It’s like a little gantry.

00:18:33 It’s like little X, Y gantry,

00:18:34 high school science project with the pipette.

00:18:36 I’m like, really?

00:18:38 Gotta be something better.

00:18:39 You can’t build like nice microfluidics

00:18:41 and I can program the computation-to-bio interface.

00:18:45 I mean, this is gonna happen.

00:18:47 But like right now, if you are asking me

00:18:50 to pipette 50 milliliters of solution, I’m out.

00:18:54 This is so crude.

00:18:55 Yeah.

00:18:56 Okay, let’s get all the crazy out of the way.

00:18:59 So a bunch of people asked me,

00:19:02 since we talked about the simulation last time,

00:19:05 we talked about hacking the simulation.

00:19:06 Do you have any updates, any insights

00:19:09 about how we might be able to go about hacking simulation

00:19:13 if we indeed do live in a simulation?

00:19:17 I think a lot of people misinterpreted

00:19:19 the point of that South by talk.

00:19:22 The point of the South by talk

00:19:23 was not literally to hack the simulation.

00:19:26 I think that this is an idea is literally just,

00:19:33 I think theoretical physics.

00:19:34 I think that’s the whole goal, right?

00:19:39 You want your grand unified theory, but then, okay,

00:19:42 build a grand unified theory search for exploits, right?

00:19:45 I think we’re nowhere near actually there yet.

00:19:47 My hope with that was just more to like,

00:19:51 are you people kidding me

00:19:52 with the things you spend time thinking about?

00:19:54 Do you understand like kind of how small you are?

00:19:58 You are bytes in God's computer, really?

00:20:02 And the things that people get worked up about, you know?

00:20:06 So basically, it was more a message

00:20:10 of we should humble ourselves.

00:20:12 That we get to, like what are we humans in this byte code?

00:20:19 Yeah, and not just humble ourselves,

00:20:22 but like I’m not trying to like make people guilty

00:20:24 or anything like that.

00:20:25 I’m trying to say like, literally,

00:20:27 look at what you are spending time on, right?

00:20:30 What are you referring to?

00:20:31 You’re referring to the Kardashians?

00:20:32 What are we talking about?

00:20:34 Twitter?

00:20:34 No, the Kardashians, everyone knows that’s kind of fun.

00:20:38 I’m referring more to like the economy, you know?

00:20:42 This idea that we gotta up our stock price.

00:20:50 Or what is the goal function of humanity?

00:20:55 You don’t like the game of capitalism?

00:20:57 Like you don’t like the games we’ve constructed

00:20:59 for ourselves as humans?

00:21:00 I’m a big fan of capitalism.

00:21:02 I don’t think that’s really the game we’re playing right now.

00:21:05 I think we’re playing a different game

00:21:07 where the rules are rigged.

00:21:10 Okay, which games are interesting to you

00:21:12 that we humans have constructed and which aren’t?

00:21:14 Which are productive and which are not?

00:21:18 Actually, maybe that’s the real point of the talk.

00:21:21 It’s like, stop playing these fake human games.

00:21:25 There’s a real game here.

00:21:26 We can play the real game.

00:21:28 The real game is, you know, nature wrote the rules.

00:21:31 This is a real game.

00:21:32 There still is a game to play.

00:21:35 But if you look at, sorry to interrupt,

00:21:36 I don’t know if you’ve seen the Instagram account,

00:21:38 nature is metal.

00:21:40 The game that nature seems to be playing

00:21:42 is a lot more cruel than we humans want to put up with.

00:21:47 Or at least we see it as cruel.

00:21:49 It’s like the bigger thing eats the smaller thing

00:21:53 and does it to impress another big thing

00:21:58 so it can mate with that thing.

00:22:00 And that’s it.

00:22:01 That seems to be the entirety of it.

00:22:04 Well, there’s no art, there’s no music,

00:22:07 there’s no comma AI, there’s no comma one,

00:22:10 no comma two, no George Hots with his brilliant talks

00:22:14 at South by Southwest.

00:22:17 I disagree, though.

00:22:17 I disagree that this is what nature is.

00:22:19 I think nature just provided basically an open world MMORPG.

00:22:26 And, you know, here it’s open world.

00:22:29 I mean, if that’s the game you want to play,

00:22:31 you can play that game.

00:22:32 But isn’t that beautiful?

00:22:33 I don’t know if you played Diablo.

00:22:35 They used to have, I think, cow level where it’s…

00:22:39 So everybody will go just, they figured out this,

00:22:44 like the best way to gain like experience points

00:22:48 is to just slaughter cows over and over and over.

00:22:52 And so they figured out this little sub game

00:22:55 within the bigger game that this is the most efficient way

00:22:58 to get experience points.

00:22:59 And everybody somehow agreed

00:23:01 that getting experience points in RPG context

00:23:04 where you always want to be getting more stuff,

00:23:06 more skills, more levels, keep advancing.

00:23:09 That seems to be good.

00:23:10 So might as well sacrifice actual enjoyment

00:23:14 of playing a game, exploring a world,

00:23:17 and spending like hundreds of hours of your time

00:23:21 at cow level.

00:23:22 I mean, the number of hours I spent in cow level,

00:23:26 I’m not like the most impressive person

00:23:28 because people have spent probably thousands of hours there,

00:23:30 but it’s ridiculous.

00:23:31 So that’s a little absurd game that brought me joy

00:23:35 in some weird dopamine drug kind of way.

00:23:37 So you don’t like those games.

00:23:40 You don’t think that’s us humans feeling the nature.

00:23:46 I think so.

00:23:47 And that was the point of the talk.

00:23:49 Yeah.

00:23:50 So how do we hack it then?

00:23:51 Well, I want to live forever.

00:23:52 And I want to live forever.

00:23:55 And this is the goal.

00:23:56 Well, that’s a game against nature.

00:23:59 Yeah, immortality is the good objective function to you?

00:24:03 I mean, start there and then you can do whatever else

00:24:05 you want because you got a long time.

00:24:07 What if immortality makes the game just totally not fun?

00:24:10 I mean, like, why do you assume immortality

00:24:13 is somehow a good objective function?

00:24:18 It’s not immortality that I want.

00:24:19 A true immortality where I could not die,

00:24:22 I would prefer what we have right now.

00:24:25 But I want to choose my own death, of course.

00:24:27 I don’t want nature to decide when I die,

00:24:29 I’m going to win.

00:24:30 I’m going to be you.

00:24:33 And then at some point, if you choose to commit suicide,

00:24:36 like how long do you think you’d live?

00:24:41 Until I get bored.

00:24:43 See, I don’t think people like brilliant people like you

00:24:48 that really ponder living a long time

00:24:52 are really considering how meaningless life becomes.

00:24:58 Well, I want to know everything and then I’m ready to die.

00:25:03 As long as there’s…

00:25:04 Yeah, but why do you want,

00:25:05 isn’t it possible that you want to know everything

00:25:06 because it’s finite?

00:25:09 Like the reason you want to know quote unquote everything

00:25:12 is because you don’t have enough time to know everything.

00:25:16 And once you have unlimited time,

00:25:18 then you realize like, why do anything?

00:25:22 Like why learn anything?

00:25:25 I want to know everything and then I’m ready to die.

00:25:27 So you have, yeah.

00:25:28 It’s not a, like, it’s a terminal value.

00:25:30 It’s not in service of anything else.

00:25:34 I’m conscious of the possibility, this is not a certainty,

00:25:37 but the possibility of that engine of curiosity

00:25:41 that you’re speaking to is actually

00:25:47 a symptom of the finiteness of life.

00:25:49 Like without that finiteness, your curiosity would vanish.

00:25:55 Like a morning fog.

00:25:57 All right, cool.

00:25:57 Bukowski talked about love like that.

00:25:59 Then let me solve immortality

00:26:01 and let me change the thing in my brain

00:26:02 that reminds me of the fact that I’m immortal,

00:26:04 tells me that life is finite shit.

00:26:06 Maybe I’ll have it tell me that life ends next week.

00:26:09 Right?

00:26:10 I’m okay with some self manipulation like that.

00:26:12 I’m okay with deceiving myself.

00:26:14 Oh, Rika, changing the code.

00:26:17 Yeah, well, if that’s the problem, right?

00:26:18 If the problem is that I will no longer have that,

00:26:20 that curiosity, I’d like to have backup copies of myself,

00:26:24 which I check in with occasionally

00:26:27 to make sure they’re okay with the trajectory

00:26:29 and they can kind of override it.

00:26:31 Maybe a nice, like, I think of like those WaveNets,

00:26:33 those like logarithmic go back to the copies.

00:26:35 Yeah, but sometimes it’s not reversible.

00:26:36 Like I’ve done this with video games.

00:26:39 Once you figure out the cheat code

00:26:41 or like you look up how to cheat old school,

00:26:43 like single player, it ruins the game for you.

00:26:46 Absolutely.

00:26:47 It ruins that feeling.

00:26:48 But again, that just means our brain manipulation

00:26:51 technology is not good enough yet.

00:26:53 Remove that cheat code from your brain.

00:26:54 Here you go.

00:26:55 So it’s also possible that if we figure out immortality,

00:27:00 that all of us will kill ourselves

00:27:03 before we advance far enough

00:27:06 to be able to revert the change.

00:27:08 I’m not killing myself till I know everything, so.

00:27:11 That’s what you say now, because your life is finite.

00:27:15 You know, I think yes, self modifying systems gets,

00:27:19 comes up with all these hairy complexities

00:27:21 and can I promise that I’ll do it perfectly?

00:27:23 No, but I think I can put good safety structures in place.

00:27:27 So that talk and your thinking here

00:27:29 is not literally referring to a simulation

00:27:36 and that our universe is a kind of computer program

00:27:40 running on a computer.

00:27:42 That’s more of a thought experiment.

00:27:45 Do you also think of the potential of the sort of Bostrom,

00:27:51 Elon Musk and others that talk about an actual program

00:27:57 that simulates our universe?

00:27:59 Oh, I don’t doubt that we’re in a simulation.

00:28:01 I just think that it’s not quite that important.

00:28:05 I mean, I’m interested only in simulation theory

00:28:06 as far as like it gives me power over nature.

00:28:09 If it’s totally unfalsifiable, then who cares?

00:28:13 I mean, what do you think that experiment would look like?

00:28:15 Like somebody on Twitter asks,

00:28:17 asks George what signs we would look for

00:28:20 to know whether or not we’re in the simulation,

00:28:22 which is exactly what you’re asking is like,

00:28:25 the step that precedes the step of knowing

00:28:29 how to get more power from this knowledge

00:28:32 is to get an indication that there’s some power to be gained.

00:28:35 So get an indication that there,

00:28:37 you can discover and exploit cracks in the simulation

00:28:42 or it doesn’t have to be in the physics of the universe.

00:28:45 Yeah.

00:28:46 Show me, I mean, like a memory leak could be cool.

00:28:51 Like some scrying technology, you know?

00:28:54 What kind of technology?

00:28:55 Scrying?

00:28:56 What’s that?

00:28:57 Oh, that’s a weird,

00:28:58 scrying is the paranormal ability to like remote viewing,

00:29:03 like being able to see somewhere where you’re not.

00:29:08 So, you know, I don’t think you can do it

00:29:10 by chanting in a room,

00:29:11 but if we could find, it’s a memory leak, basically.

00:29:16 It’s a memory leak.

00:29:17 Yeah, you’re able to access parts you’re not supposed to.

00:29:19 Yeah, yeah, yeah.

00:29:20 And thereby discover a shortcut.

00:29:22 Yeah, maybe memory leak means the other thing as well,

00:29:24 but I mean like, yeah,

00:29:25 like an ability to read arbitrary memory, right?

00:29:28 And that one’s not that horrifying, right?

00:29:29 The right ones start to be horrifying.

00:29:31 Read, right.

00:29:32 It’s the reading is not the problem.

00:29:34 Yeah, it’s like Heartfleet for the universe.

00:29:37 Oh boy, the writing is a big, big problem.

00:29:40 It’s a big problem.

00:29:43 It’s the moment you can write anything,

00:29:44 even if it’s just random noise.

00:29:47 That’s terrifying.

00:29:49 I mean, even without that,

00:29:51 like even some of the, you know,

00:29:52 the nanotech stuff that’s coming, I think is.

00:29:57 I don’t know if you’re paying attention,

00:29:58 but actually Eric Weinstein came out

00:30:00 with the theory of everything.

00:30:02 I mean, that came out.

00:30:03 He’s been working on a theory of everything

00:30:05 in the physics world called geometric unity.

00:30:08 And then for me, from computer science person like you,

00:30:11 Stephen Wolfram's theory of everything,

00:30:14 of like hypergraphs is super interesting and beautiful,

00:30:17 but not from a physics perspective,

00:30:19 but from a computational perspective.

00:30:20 I don’t know, have you paid attention to any of that?

00:30:23 So again, like what would make me pay attention

00:30:26 and like why like I hate string theory is,

00:30:29 okay, make a testable prediction, right?

00:30:31 I’m only interested in,

00:30:33 I’m not interested in theories for their intrinsic beauty.

00:30:36 I’m interested in theories

00:30:37 that give me power over the universe.

00:30:39 So if these theories do, I’m very interested.

00:30:43 Can I just say how beautiful that is?

00:30:45 Because a lot of physicists say,

00:30:47 I’m interested in experimental validation

00:30:49 and they skip out the part where they say

00:30:52 to give me more power in the universe.

00:30:55 I just love the.

00:30:57 No, I want. The clarity of that.

00:30:59 I want 100 gigahertz processors.

00:31:02 I want transistors that are smaller than atoms.

00:31:04 I want like power.

00:31:08 That’s true.

00:31:10 And that’s where people from aliens

00:31:12 to this kind of technology where people are worried

00:31:14 that governments, like who owns that power?

00:31:19 Is it George Hotz?

00:31:20 Is it thousands of distributed hackers across the world?

00:31:25 Is it governments?

00:31:26 Is it Mark Zuckerberg?

00:31:28 There’s a lot of people that,

00:31:32 I don’t know if anyone trusts any one individual with power.

00:31:35 So they’re always worried.

00:31:37 It’s the beauty of blockchains.

00:31:39 That’s the beauty of blockchains, which we’ll talk about.

00:31:43 On Twitter, somebody pointed me to a story,

00:31:46 a bunch of people pointed me to a story a few months ago

00:31:49 where you went into a restaurant in New York.

00:31:51 And you can correct me if any of this is wrong.

00:31:53 And ran into a bunch of folks from a company

00:31:56 in a crypto company who are trying to scale up Ethereum.

00:32:01 And they had a technical deadline

00:32:03 related to the Solidity to OVM compiler.

00:32:07 So these are all Ethereum technologies.

00:32:09 So you stepped in, they recognized you,

00:32:14 pulled you aside, explained their problem.

00:32:16 And you stepped in and helped them solve the problem,

00:32:19 thereby creating legend status story.

00:32:23 So can you tell me the story in a little more detail?

00:32:28 It seems kind of incredible.

00:32:31 Did this happen?

00:32:32 Yeah, yeah, it’s a true story, it’s a true story.

00:32:34 I mean, they wrote a very flattering account of it.

00:32:39 So Optimism is the company called Optimism,

00:32:43 spin off of Plasma.

00:32:45 They’re trying to build L2 solutions on Ethereum.

00:32:47 So right now, every Ethereum node

00:32:52 has to run every transaction on the Ethereum network.

00:32:56 And this kind of doesn’t scale, right?

00:32:58 Because if you have N computers,

00:33:00 well, if that becomes two N computers,

00:33:02 you actually still get the same amount of compute, right?

00:33:05 This is like O(1) scaling

00:33:09 because they all have to run it.

00:33:10 Okay, fine, you get more blockchain security,

00:33:12 but like, blockchain is already so secure.

00:33:15 Can we trade some of that off for speed?

00:33:17 So that’s kind of what these L2 solutions are.

00:33:20 They built this thing, which kind of,

00:33:23 kind of sandbox for Ethereum contracts.

00:33:26 So they can run it in this L2 world

00:33:28 and it can’t do certain things in L world, in L1.

00:33:30 Can I ask you for some definitions?

00:33:32 What’s L2?

00:33:33 Oh, L2 is layer two.

00:33:34 So L1 is like the base Ethereum chain.

00:33:37 And then layer two is like a computational layer

00:33:40 that runs elsewhere,

00:33:44 but still is kind of secured by layer one.

00:33:47 And I’m sure a lot of people know,

00:33:49 but Ethereum is a cryptocurrency,

00:33:51 probably one of the most popular cryptocurrencies,

00:33:53 second to Bitcoin.

00:33:55 And a lot of interesting technological innovation there.

00:33:58 Maybe you could also slip in whenever you talk about this

00:34:03 and things that are exciting to you in the Ethereum space.

00:34:06 And why Ethereum?

00:34:07 Well, I mean, Bitcoin is not Turing complete.

00:34:12 Ethereum is not technically Turing complete

00:34:13 with the gas limit, but close enough.

00:34:16 With the gas limit?

00:34:16 What’s the gas limit, resources?

00:34:19 Yeah, I mean, no computer is actually Turing complete.

00:34:21 Right.

00:34:23 You’re gonna find out RAM, you know?

00:34:24 I can actually solve the whole thing.

00:34:25 What’s the word gas limit?

00:34:26 You just have so many brilliant words.

00:34:28 I’m not even gonna ask.

00:34:29 That’s not my word, that’s Ethereum’s word.

00:34:32 Gas limit.

00:34:33 Ethereum, you have to spend gas per instruction.

00:34:35 So like different op codes use different amounts of gas

00:34:37 and you buy gas with ether to prevent people

00:34:40 from basically DDoSing the network.
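
A minimal sketch of that metering idea in Python; the opcode names and gas costs here are illustrative, not Ethereum's actual fee schedule:

```python
# Toy gas metering: every instruction consumes gas, and execution halts
# when the limit is exhausted, so no program can run forever for free.
GAS_COST = {"ADD": 3, "MUL": 5, "SSTORE": 20_000}  # illustrative costs

def run(program, gas_limit):
    gas = gas_limit
    for op in program:
        cost = GAS_COST[op]
        if cost > gas:
            raise RuntimeError("out of gas")
        gas -= cost
        # ... execute op ...
    return gas_limit - gas  # total gas used, paid for in ether

print(run(["ADD", "MUL", "ADD"], gas_limit=100))  # 11
```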

00:34:42 So Bitcoin is proof of work.

00:34:45 And then what’s Ethereum?

00:34:47 It’s also proof of work.

00:34:48 They’re working on some proof of stake,

00:34:49 Ethereum 2.0 stuff.

00:34:51 But right now it’s proof of work.

00:34:52 It uses a different hash function from Bitcoin.

00:34:54 That’s more ASIC resistance, because you need RAM.

00:34:57 So we’re all talking about Ethereum 1.0.

00:34:59 So what were they trying to do to scale this whole process?

00:35:03 So they were like, well, if we could run contracts elsewhere

00:35:07 and then only save the results of that computation,

00:35:13 well, we don’t actually have to do the compute on the chain.

00:35:14 We can do the compute off chain

00:35:15 and just post what the results are.

00:35:17 Now, the problem with that is,

00:35:18 well, somebody could lie about what the results are.

00:35:21 So you need a resolution mechanism.

00:35:23 And the resolution mechanism can be really expensive

00:35:26 because you just have to make sure

00:35:29 that the person who is saying,

00:35:31 look, I swear that this is the real computation.

00:35:33 I’m staking $10,000 on that fact.

00:35:36 And if you prove it wrong,

00:35:39 yeah, it might cost you $3,000 in gas fees to prove wrong,

00:35:42 but you’ll get the $10,000 bounty.

00:35:44 So you can secure using those kinds of systems.
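
A stripped-down sketch of that incentive game in Python, using the dollar figures from the example above. Real optimistic rollups resolve disputes with an on-chain fraud proof; this only shows why the bond and bounty make honesty the rational move:

```python
# Optimistic-rollup style dispute, boiled down to its incentives.
BOND = 10_000           # staked by whoever posts the result
CHALLENGE_COST = 3_000  # gas spent to prove a result wrong on-chain

def settle(claimed_result, true_result):
    if claimed_result == true_result:
        return {"poster": 0, "challenger": 0}            # nothing to dispute
    # A wrong claim gets challenged: challenger pays gas, wins the bond.
    return {"poster": -BOND, "challenger": BOND - CHALLENGE_COST}

print(settle(claimed_result=42, true_result=42))  # honest: no one loses money
print(settle(claimed_result=41, true_result=42))  # lying costs the poster $10,000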

00:35:47 So it’s effectively a sandbox, which runs contracts.

00:35:52 And like, it’s like any kind of normal sandbox,

00:35:55 you have to like replace syscalls

00:35:57 with calls into the hypervisor.

00:36:02 Sandbox, syscalls, hypervisor.

00:36:05 What do these things mean?

00:36:06 As long as it’s interesting to talk about.

00:36:09 Yeah, I mean, you can take like the Chrome sandbox

00:36:11 is maybe the one to think about, right?

00:36:12 So the Chrome process that’s doing a rendering,

00:36:16 can’t, for example, read a file from the file system.

00:36:18 It has, if it tries to make an open syscall in Linux,

00:36:21 the open syscall, you can’t make it open syscall,

00:36:23 no, no, no.

00:36:24 You have to request from the kind of hypervisor process

00:36:29 or like, I don’t know what it’s called in Chrome,

00:36:31 but the, hey, could you open this file for me?

00:36:36 And then it does all these checks

00:36:37 and then it passes the file handle back in

00:36:39 if it’s approved.

00:36:41 So that’s, yeah.

00:36:42 So what’s the, in the context of Ethereum,

00:36:45 what are the boundaries of the sandbox

00:36:47 that we’re talking about?

00:36:48 Well, like one of the calls that you,

00:36:50 actually reading and writing any state

00:36:53 to the Ethereum contract,

00:36:55 or to the Ethereum blockchain.

00:36:58 Writing state is one of those calls

00:37:01 that you’re going to have to sandbox in layer two,

00:37:04 because if you let layer two just arbitrarily write

00:37:08 to the Ethereum blockchain.

00:37:09 So layer two is really sitting on top of layer one.

00:37:15 So you’re going to have a lot of different kinds of ideas

00:37:17 that you can play with.

00:37:18 And they’re all, they’re not fundamentally changing

00:37:21 the source code level of Ethereum.

00:37:25 Well, you have to replace a bunch of calls

00:37:28 with calls into the hypervisor.

00:37:31 So instead of doing the syscall directly,

00:37:33 you replace it with a call to the hypervisor.

00:37:37 So originally they were doing this

00:37:39 by first running the, so Solidity is the language

00:37:43 that most Ethereum contracts are written in.

00:37:45 It compiles to a bytecode.

00:37:47 And then they wrote this thing they called the transpiler.

00:37:50 And the transpiler took the bytecode

00:37:52 and it transpiled it into OVM safe bytecode.

00:37:56 Basically bytecode that didn’t make any

00:37:57 of those restricted syscalls

00:37:58 and added the calls to the hypervisor.

00:38:01 This transpiler was a 3000 line mess.

00:38:05 And it’s hard to do.

00:38:07 It’s hard to do if you’re trying to do it like that,

00:38:09 because you have to kind of like deconstruct the bytecode,

00:38:12 change things about it, and then reconstruct it.

00:38:15 And I mean, as soon as I hear this, I’m like,

00:38:17 well, why don’t you just change the compiler, right?

00:38:20 Why not the first place you build the bytecode,

00:38:22 just do it in the compiler.
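
To make the contrast concrete, here is a toy Python sketch of the two approaches on an invented instruction stream; the opcode names are made up, and the real fix was a roughly 300-line diff to the Solidity compiler, as discussed below:

```python
# Approach 1: transpile after the fact -- scan the finished bytecode and
# rewrite every restricted instruction into a call to the hypervisor.
RESTRICTED = {"SSTORE": "CALL_HYPERVISOR_SSTORE",
              "SLOAD": "CALL_HYPERVISOR_SLOAD"}   # invented opcode names

def transpile(bytecode):
    # Fragile in practice: real bytecode has jumps and offsets you must
    # deconstruct, patch, and reconstruct.
    return [RESTRICTED.get(op, op) for op in bytecode]

# Approach 2: change the compiler -- emit the safe call at code-generation
# time, so the restricted opcode is never produced in the first place.
def emit_storage_write(output):
    output.append("CALL_HYPERVISOR_SSTORE")

print(transpile(["PUSH1", "SLOAD", "SSTORE"]))
# ['PUSH1', 'CALL_HYPERVISOR_SLOAD', 'CALL_HYPERVISOR_SSTORE']
```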

00:38:25 So yeah, I asked them how much they wanted it.

00:38:29 Of course, measured in dollars and I’m like, well, okay.

00:38:33 And yeah.

00:38:34 And you wrote the compiler.

00:38:35 Yeah, I modified, I wrote a 300 line diff to the compiler.

00:38:39 It’s open source, you can look at it.

00:38:40 Yeah, it’s, yeah, I looked at the code last night.

00:38:43 It’s, yeah, exactly.

00:38:46 Cute is a good word for it.

00:38:49 And it’s C++.

00:38:52 C++, yeah.

00:38:54 So when asked how you were able to do it,

00:38:57 you said, you just gotta think and then do it right.

00:39:03 So can you break that apart a little bit?

00:39:04 What’s your process of one, thinking and two, doing it right?

00:39:09 You know, the people that I was working for

00:39:12 were amused that I said that.

00:39:13 It doesn’t really mean anything.

00:39:14 Okay.

00:39:16 I mean, is there some deep, profound insights

00:39:19 to draw from like how you problem solve from that?

00:39:23 This is always what I say.

00:39:24 I’m like, do you wanna be a good programmer?

00:39:26 Do it for 20 years.

00:39:27 Yeah, there’s no shortcuts.

00:39:29 No.

00:39:31 What are your thoughts on crypto in general?

00:39:33 So what parts technically or philosophically

00:39:38 do you find especially beautiful maybe?

00:39:40 Oh, I’m extremely bullish on crypto longterm.

00:39:42 Not any specific crypto project, but this idea of,

00:39:48 well, two ideas.

00:39:50 One, the Nakamoto Consensus Algorithm

00:39:54 is I think one of the greatest innovations

00:39:57 of the 21st century.

00:39:58 This idea that people can reach consensus.

00:40:01 You can reach a group consensus.

00:40:03 Using a relatively straightforward algorithm is wild.

00:40:08 And like, you know, Satoshi Nakamoto,

00:40:14 people always ask me who I look up to.

00:40:15 It’s like, whoever that is.

00:40:17 Who do you think it is?

00:40:19 I mean, Elon Musk?

00:40:21 Is it you?

00:40:22 It is definitely not me.

00:40:24 And I do not think it’s Elon Musk.

00:40:26 But yeah, this idea of groups reaching consensus

00:40:31 in a decentralized yet formulaic way

00:40:34 is one extremely powerful idea from crypto.

00:40:40 Maybe the second idea is this idea of smart contracts.

00:40:45 When you write a contract between two parties,

00:40:49 any contract, this contract, if there are disputes,

00:40:53 it’s interpreted by lawyers.

00:40:56 Lawyers are just really shitty overpaid interpreters.

00:41:00 Imagine you had, let’s talk about them in terms of a,

00:41:02 in terms of like, let’s compare a lawyer to Python, right?

00:41:05 So lawyer, well, okay.

00:41:07 That’s really, I never thought of it that way.

00:41:10 It’s hilarious.

00:41:11 So Python, I’m paying even 10 cents an hour.

00:41:15 I’ll use the nice Azure machine.

00:41:17 I can run Python for 10 cents an hour.

00:41:19 Lawyers cost $1,000 an hour.

00:41:21 So Python is 10,000x better on that axis.

00:41:25 Lawyers don’t always return the same answer.

00:41:31 Python almost always does.

00:41:36 Cost.

00:41:37 Yeah, I mean, just cost, reliability,

00:41:40 everything about Python is so much better than lawyers.

00:41:43 So if you can make smart contracts,

00:41:46 this whole concept of code is law.

00:41:50 I love, and I would love to live in a world

00:41:53 where everybody accepted that fact.

00:41:55 So maybe you can talk about what smart contracts are.

00:42:01 So let’s say, let’s say, you know,

00:42:05 we have a, even something as simple

00:42:08 as a safety deposit box, right?

00:42:11 Safety deposit box that holds a million dollars.

00:42:14 I have a contract with the bank that says

00:42:17 two out of these three parties must be present

00:42:22 to open the safety deposit box and get the money out.

00:42:25 So that’s a contract for the bank,

00:42:26 and it’s only as good as the bank and the lawyers, right?

00:42:29 Let’s say, you know, somebody dies and now,

00:42:32 oh, we’re gonna go through a big legal dispute

00:42:34 about whether, oh, well, was it in the will,

00:42:36 was it not in the will?

00:42:37 What, like, it’s just so messy,

00:42:39 and the cost to determine truth is so expensive.

00:42:44 Versus a smart contract, which just uses cryptography

00:42:47 to check if two out of three keys are present.

00:42:50 Well, I can look at that, and I can have certainty

00:42:53 in the answer that it’s going to return.

00:42:55 And that’s what, all businesses want is certainty.

00:42:58 You know, they say businesses don’t care.

00:42:59 Viacom, YouTube, YouTube’s like,

00:43:02 look, we don’t care which way this lawsuit goes.

00:43:04 Just please tell us so we can have certainty.
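
A minimal sketch of that two-of-three release condition in Python. A real contract would verify digital signatures on-chain (in Solidity on Ethereum); this only shows how mechanical the "interpretation" of such a contract is:

```python
# Toy two-of-three release condition, the kind of rule a smart contract encodes.
AUTHORIZED_KEYS = {"key_alice", "key_bob", "key_bank"}   # hypothetical parties
THRESHOLD = 2

def can_open(presented_keys) -> bool:
    # Deterministic: the same inputs always give the same answer,
    # unlike a legal dispute over a will.
    return len(AUTHORIZED_KEYS & set(presented_keys)) >= THRESHOLD

print(can_open(["key_alice", "key_bank"]))       # True: box opens
print(can_open(["key_alice", "key_mallory"]))    # False: only one valid key
```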

00:43:07 Yeah, I wonder how many agreements in this,

00:43:09 because we’re talking about financial transactions only

00:43:12 in this case, correct, the smart contracts?

00:43:15 Oh, you can go to anything.

00:43:17 You can put a prenup in the Ethereum blockchain.

00:43:21 A marriage smart contract?

00:43:23 Sorry, divorce lawyer, sorry.

00:43:24 You’re going to be replaced by Python.

00:43:29 Okay, so that’s another beautiful idea.

00:43:34 Do you think there’s something that’s appealing to you

00:43:37 about any one specific implementation?

00:43:40 So if you look 10, 20, 50 years down the line,

00:43:45 do you see any, like, Bitcoin, Ethereum,

00:43:48 any of the other hundreds of cryptocurrencies winning out?

00:43:51 Is there, like, what’s your intuition about the space?

00:43:53 Or are you just sitting back and watching the chaos

00:43:55 and look who cares what emerges?

00:43:57 Oh, I don’t.

00:43:58 I don’t speculate.

00:43:59 I don’t really care.

00:43:59 I don’t really care which one of these projects wins.

00:44:02 I’m kind of in the Bitcoin as a meme coin camp.

00:44:05 I mean, why does Bitcoin have value?

00:44:07 It’s technically kind of, you know,

00:44:11 not great, like the block size debate.

00:44:14 When I found out what the block size debate was,

00:44:16 I’m like, are you guys kidding?

00:44:18 What’s the block size debate?

00:44:21 You know what?

00:44:22 It’s really, it’s too stupid to even talk.

00:44:23 People can look it up, but I’m like, wow.

00:44:27 You know, Ethereum seems,

00:44:28 the governance of Ethereum seems much better.

00:44:31 I’ve come around a bit on proof of stake ideas.

00:44:35 You know, very smart people thinking about some things.

00:44:37 Yeah, you know, governance is interesting.

00:44:40 It does feel like Vitalik,

00:44:44 like it does feel like an open,

00:44:46 even in these distributed systems,

00:44:48 leaders are helpful

00:44:51 because they kind of help you drive the mission

00:44:54 and the vision and they put a face to a project.

00:44:58 It’s a weird thing about us humans.

00:45:00 Geniuses are helpful, like Vitalik.

00:45:02 Yeah, brilliant.

00:45:06 Leaders are not necessarily, yeah.

00:45:10 So you think the reason he’s the face of Ethereum

00:45:15 is because he’s a genius.

00:45:17 That’s interesting.

00:45:18 I mean, that was,

00:45:21 it’s interesting to think about

00:45:22 that we need to create systems

00:45:25 in which the quote unquote leaders that emerge

00:45:30 are the geniuses in the system.

00:45:33 I mean, that’s arguably why

00:45:35 the current state of democracy is broken

00:45:36 is the people who are emerging as the leaders

00:45:39 are not the most competent,

00:45:40 are not the superstars of the system.

00:45:43 And it seems like at least for now

00:45:45 in the crypto world oftentimes

00:45:47 the leaders are the superstars.

00:45:49 Imagine at the debate they asked,

00:45:51 what’s the sixth amendment?

00:45:53 What are the four fundamental forces in the universe?

00:45:56 What’s the integral of two to the X?

00:45:59 I’d love to see those questions asked

00:46:01 and that’s what I want as our leader.

00:46:03 It’s a little bit.

00:46:04 What’s Bayes rule?

00:46:07 Yeah, I mean, even, oh wow, you’re hurting my brain.

00:46:10 It’s that my standard was even lower

00:46:15 but I would have loved to see

00:46:17 just this basic brilliance.

00:46:20 Like I’ve talked to historians.

00:46:22 There’s just these, they’re not even like

00:46:23 they don’t have a PhD or even education history.

00:46:26 They just like a Dan Carlin type character

00:46:30 who just like, holy shit.

00:46:32 How did all this information get into your head?

00:46:35 They’re able to just connect Genghis Khan

00:46:38 to the entirety of the history of the 20th century.

00:46:41 They know everything about every single battle that happened

00:46:46 and they know the Game of Thrones

00:46:51 of the different power plays and all that happened there.

00:46:55 And they know like the individuals

00:46:56 and all the documents involved

00:46:58 and they integrate that into their regular life.

00:47:02 It’s not like they’re ultra history nerds.

00:47:03 They’re just, they know this information.

00:47:06 That’s what competence looks like.

00:47:08 Yeah.

00:47:09 Cause I’ve seen that with programmers too, right?

00:47:10 That’s what great programmers do.

00:47:12 But yeah, it would be, it’s really unfortunate

00:47:15 that those kinds of people aren’t emerging as our leaders.

00:47:19 But for now, at least in the crypto world

00:47:21 that seems to be the case.

00:47:23 I don’t know if that always, you could imagine

00:47:26 that in a hundred years, it’s not the case, right?

00:47:28 Crypto world has one very powerful idea going for it

00:47:31 and that’s the idea of forks, right?

00:47:35 I mean, imagine, we’ll use a less controversial example.

00:47:42 This was actually in my joke app in 2012.

00:47:47 I was like, Barack Obama, Mitt Romney,

00:47:49 let’s let them both be president, right?

00:47:51 Like imagine we could fork America

00:47:52 and just let them both be president.

00:47:54 And then the Americas could compete

00:47:56 and people could invest in one,

00:47:58 pull their liquidity out of one, put it in the other.

00:48:00 You have this in the crypto world.

00:48:02 Ethereum forks into Ethereum and Ethereum classic.

00:48:05 And you can pull your liquidity out of one

00:48:07 and put it in another.

00:48:08 And people vote with their dollars,

00:48:11 which forks, companies should be able to fork.

00:48:16 I’d love to fork Nvidia, you know?

00:48:20 Yeah, like different business strategies

00:48:22 and then try them out and see what works.

00:48:26 Like even take, yeah, take comma AI that closes its source

00:48:34 and then take one that’s open source and see what works.

00:48:38 Take one that’s purchased by GM

00:48:41 and one that remains Android Renegade

00:48:43 and all these different versions and see.

00:48:45 The beauty of comma AI is someone can actually do that.

00:48:47 Please take comma AI and fork it.

00:48:50 That’s right, that’s the beauty of open source.

00:48:53 So you’re, I mean, we’ll talk about autonomous vehicle space,

00:48:56 but it does seem that you’re really knowledgeable

00:49:02 about a lot of different topics.

00:49:03 So the natural question a bunch of people ask this,

00:49:06 which is how do you keep learning new things?

00:49:09 Do you have like practical advice

00:49:12 if you were to introspect, like taking notes,

00:49:15 allocate time, or do you just mess around

00:49:19 and just allow your curiosity to drive?

00:49:21 I’ll write these people a self help book

00:49:23 and I’ll charge $67 for it.

00:49:25 And I will write, I will write,

00:49:28 I will write on the cover of the self help book.

00:49:30 All of this advice is completely meaningless.

00:49:32 You’re gonna be a sucker and buy this book anyway.

00:49:34 And the one lesson that I hope they take away from the book

00:49:38 is that I can’t give you a meaningful answer to that.

00:49:42 That’s interesting.

00:49:44 Let me translate that.

00:49:45 Is you haven’t really thought about what it is you do

00:49:51 systematically because you could reduce it.

00:49:53 And there’s some people, I mean, I’ve met brilliant people

00:49:56 that this is really clear with athletes.

00:50:00 Some are just, you know, the best in the world

00:50:03 at something and they have zero interest

00:50:06 in writing like a self help book,

00:50:09 or how to master this game.

00:50:11 And then there’s some athletes who become great coaches

00:50:15 and they love the analysis, perhaps the over analysis.

00:50:18 And you right now, at least at your age,

00:50:20 which is an interesting, you’re in the middle of the battle.

00:50:23 You’re like the warriors that have zero interest

00:50:25 in writing books.

00:50:27 So you’re in the middle of the battle.

00:50:29 So you have, yeah.

00:50:30 This is a fair point.

00:50:31 I do think I have a certain aversion

00:50:34 to this kind of deliberate intentional way of living life.

00:50:39 You’re eventually, the hilarity of this,

00:50:41 especially since this is recorded,

00:50:43 it will reveal beautifully the absurdity

00:50:47 when you finally do publish this book.

00:50:49 I guarantee you, you will.

00:50:51 The story of comma AI, maybe it’ll be a biography

00:50:56 written about you.

00:50:57 That’ll be better, I guess.

00:50:58 And you might be able to learn some cute lessons

00:51:00 if you’re starting a company like comma AI from that book.

00:51:03 But if you’re asking generic questions,

00:51:05 like how do I be good at things?

00:51:07 How do I be good at things?

00:51:10 Dude, I don’t know.

00:51:11 Do them a lot.

00:51:14 Do them a lot.

00:51:15 But the interesting thing here is learning things

00:51:18 outside of your current trajectory,

00:51:22 which is what it feels like from an outsider’s perspective.

00:51:28 I don’t know if there’s advice on that,

00:51:30 but it is an interesting curiosity.

00:51:33 When you become really busy, you’re running a company.

00:51:38 Hard time.

00:51:40 Yeah.

00:51:41 But there’s a natural inclination and trend.

00:51:46 Just the momentum of life carries you

00:51:48 into a particular direction of wanting to focus.

00:51:51 And this kind of dispersion that curiosity can lead to

00:51:55 gets harder and harder with time.

00:51:58 Because you get really good at certain things

00:52:00 and it sucks trying things that you’re not good at,

00:52:03 like trying to figure them out.

00:52:05 When you do this with your live streams,

00:52:07 you’re on the fly figuring stuff out.

00:52:10 You don’t mind looking dumb.

00:52:11 No.

00:52:14 You just figure it out pretty quickly.

00:52:16 Sometimes I try things and I don’t figure them out quickly.

00:52:19 My chess rating is like a 1400,

00:52:20 despite putting like a couple of hundred hours in.

00:52:23 It’s pathetic.

00:52:24 I mean, to be fair, I know that I could do it better

00:52:26 if I did it better.

00:52:27 Like don’t play five minute games,

00:52:29 play 15 minute games at least.

00:52:31 Like I know these things, but it just doesn’t,

00:52:34 it doesn’t stick nicely in my knowledge stream.

00:52:37 All right, let’s talk about Comma AI.

00:52:39 What’s the mission of the company?

00:52:42 Let’s like look at the biggest picture.

00:52:44 Oh, I have an exact statement.

00:52:46 Solve self driving cars

00:52:48 while delivering shippable intermediaries.

00:52:51 So longterm vision is have fully autonomous vehicles

00:52:56 and make sure you’re making money along the way.

00:52:59 I think it doesn’t really speak to money,

00:53:00 but I can talk about what solve self driving cars means.

00:53:03 Solve self driving cars of course means

00:53:06 you’re not building a new car,

00:53:08 you’re building a person replacement.

00:53:10 That person can sit in the driver’s seat

00:53:12 and drive you anywhere a person can drive

00:53:14 with a human or better level of safety,

00:53:17 speed, quality, comfort.

00:53:21 And what’s the second part of that?

00:53:23 Delivering shippable intermediaries is well,

00:53:26 it’s a way to fund the company, that’s true.

00:53:28 But it’s also a way to keep us honest.

00:53:31 If you don’t have that, it is very easy

00:53:34 with this technology to think you’re making progress

00:53:39 when you’re not.

00:53:40 I’ve heard it best described on Hacker News as

00:53:43 you can set any arbitrary milestone,

00:53:46 meet that milestone and still be infinitely far away

00:53:49 from solving self driving cars.

00:53:51 So it’s hard to have like real deadlines

00:53:53 when you’re like Cruise or Waymo, when you don’t have revenue.

00:54:02 Is that, I mean, is revenue essentially

00:54:06 the thing we’re talking about here?

00:54:07 Revenue is, capitalism is based around consent.

00:54:11 Capitalism, the way that you get revenue

00:54:13 is, well, real capitalism, I come from the real capitalism camp.

00:54:16 There’s definitely scams out there,

00:54:17 but real capitalism is based around consent.

00:54:19 It’s based around this idea that like,

00:54:20 if we’re getting revenue, it’s because we’re providing

00:54:22 at least that much value to another person.

00:54:24 When someone buys $1,000 comma two from us,

00:54:27 we’re providing them at least $1,000 of value

00:54:29 or they wouldn’t buy it.

00:54:30 Brilliant, so can you give a whirlwind overview

00:54:32 of the products that Comma AI provides,

00:54:34 like throughout its history and today?

00:54:38 I mean, yeah, the past ones aren’t really that interesting.

00:54:40 It’s kind of just been refinement of the same idea.

00:54:45 The real only product we sell today is the Comma 2.

00:54:48 Which is a piece of hardware with cameras.

00:54:50 Mm, so the Comma 2, I mean, you can think about it

00:54:54 kind of like a person.

00:54:57 Future hardware will probably be

00:54:58 even more and more personlike.

00:55:00 So it has eyes, ears, a mouth, a brain,

00:55:07 and a way to interface with the car.

00:55:09 Does it have consciousness?

00:55:10 Just kidding, that was a trick question.

00:55:13 I don’t have consciousness either.

00:55:15 Me and the Comma 2 are the same.

00:55:16 You’re the same?

00:55:17 I have a little more compute than it.

00:55:18 It only has like the same compute as a bee, you know.

00:55:23 You’re more efficient energy wise

00:55:25 for the compute you’re doing.

00:55:26 Far more efficient energy wise.

00:55:29 20 petaflops, 20 watts, crazy.

00:55:30 Do you lack consciousness?

00:55:32 Sure.

00:55:33 Do you fear death?

00:55:33 You do, you want immortality.

00:55:35 Of course I fear death.

00:55:36 Does Comma AI fear death?

00:55:38 I don’t think so.

00:55:39 Of course it does.

00:55:40 It very much fears, well, it fears negative loss.

00:55:42 Oh yeah.

00:55:43 Okay, so Comma 2, when did that come out?

00:55:49 That was a year ago?

00:55:50 No, two.

00:55:52 Early this year.

00:55:53 Wow, time, it feels like, yeah.

00:55:56 2020 feels like it’s taken 10 years to get to the end.

00:56:00 It’s a long year.

00:56:01 It’s a long year.

00:56:03 So what’s the sexiest thing about Comma 2 feature wise?

00:56:08 So, I mean, maybe you can also linger on like, what is it?

00:56:14 Like what’s its purpose?

00:56:15 Cause there’s a hardware, there’s a software component.

00:56:18 You’ve mentioned the sensors,

00:56:20 but also like what is it, its features and capabilities?

00:56:23 I think our slogan summarizes it well.

00:56:25 Comma slogan is make driving chill.

00:56:28 I love it, okay.

00:56:30 Yeah, I mean, it is, you know,

00:56:33 if you like cruise control, imagine cruise control,

00:56:35 but much, much more.

00:56:36 So it can do adaptive cruise control things,

00:56:41 which is like slow down for cars in front of it,

00:56:42 maintain a certain speed.

00:56:44 And it can also do lane keeping.

00:56:46 So staying in the lane and doing it better

00:56:48 and better and better over time.

00:56:50 It’s very much machine learning based.

00:56:53 So this camera is, there’s a driver facing camera too.

00:57:01 What else is there?

00:57:02 What am I thinking?

00:57:02 So the hardware versus software.

00:57:04 So open pilot versus the actual hardware of the device.

00:57:09 What’s, can you draw that distinction?

00:57:10 What’s one, what’s the other?

00:57:11 I mean, the hardware is pretty much a cell phone

00:57:13 with a few additions.

00:57:14 A cell phone with a cooling system

00:57:16 and with a car interface connected to it.

00:57:20 And by cell phone, you mean like Qualcomm Snapdragon.

00:57:25 Yeah, the current hardware is a Snapdragon 821.

00:57:29 It has wifi radio, it has an LTE radio, it has a screen.

00:57:32 We use every part of the cell phone.

00:57:35 And then the interface with the car

00:57:37 is specific to the car.

00:57:38 So you keep supporting more and more cars.

00:57:41 Yeah, so the interface to the car,

00:57:42 I mean, the device itself just has four CAN buses.

00:57:45 It has four CAN interfaces on it

00:57:46 that are connected through the USB port to the phone.

00:57:49 And then, yeah, on those four CAN buses,

00:57:53 you connect it to the car.

00:57:54 And there’s a little harness to do this.

00:57:56 Cars are actually surprisingly similar.

00:57:58 So CAN is the protocol by which cars communicate.

00:58:01 And then you’re able to read stuff and write stuff

00:58:04 to be able to control the car depending on the car.
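
To make the CAN idea concrete, here is a minimal sketch of reading and writing frames from Python with the python-can library. This is not openpilot's actual car interface (which goes through Comma's own hardware and per-car signal definitions); the channel name, message ID, and payload below are invented for illustration.

```python
# A minimal sketch of talking to a CAN bus from Python with the python-can
# library. NOT openpilot's real car interface; the channel name, message ID,
# and payload here are invented for illustration.
import can

# Open a SocketCAN interface (assumes a Linux host exposing a 'can0' device).
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# Read a few frames off the bus and print them.
for _ in range(10):
    msg = bus.recv(timeout=1.0)  # blocks for up to one second
    if msg is not None:
        print(f"id=0x{msg.arbitration_id:x} data={msg.data.hex()}")

# Write a frame. On a real car, what each ID and byte means comes from a
# DBC file for that specific make and model; these values are made up.
cmd = can.Message(arbitration_id=0x200,
                  data=bytes([0x01, 0x00, 0x00, 0x00]),
                  is_extended_id=False)
bus.send(cmd)
```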

00:58:06 So what’s the software side?

00:58:08 What’s OpenPilot?

00:58:10 So I mean, OpenPilot is,

00:58:11 the hardware is pretty simple compared to OpenPilot.

00:58:13 OpenPilot is, well, so you have a machine learning model,

00:58:21 which it’s in OpenPilot, it’s a blob.

00:58:24 It’s just a blob of weights.

00:58:25 It’s not like people are like, oh, it’s closed source.

00:58:27 I’m like, it’s a blob of weights.

00:58:28 What do you expect?

00:58:29 So it’s primarily neural network based.

00:58:33 You, well, OpenPilot is all the software

00:58:36 kind of around that neural network.

00:58:37 That if you have a neural network that says,

00:58:39 here’s where you wanna send the car,

00:58:40 OpenPilot actually goes and executes all of that.

00:58:44 It cleans up the input to the neural network.

00:58:46 It cleans up the output and executes on it.

00:58:49 So it connects, it’s the glue

00:58:50 that connects everything together.

00:58:51 Runs the sensors, does a bunch of calibration

00:58:54 for the neural network, deals with like,

00:58:58 if the car is on a banked road,

00:59:00 you have to counter steer against that.

00:59:02 And the neural network can’t necessarily know that

00:59:03 by looking at the picture.

00:59:06 So you do that with other sensors

00:59:08 and Fusion and Localizer.

00:59:09 OpenPilot also is responsible

00:59:11 for sending the data up to our servers.

00:59:14 So we can learn from it, logging it, recording it,

00:59:17 running the cameras, thermally managing the device,

00:59:21 managing the disk space on the device,

00:59:23 managing all the resources on the device.
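
As a rough illustration of the "glue around the neural network" idea, here is a toy sketch of one control step in Python. Every name is a placeholder, not openpilot's real API; it only shows the shape of the loop: sensors in, calibrate, run the model, clean up the output, actuate, log.

```python
# Toy sketch of the "glue" loop described above. Every name is a placeholder,
# not openpilot's real API; it only shows the shape of one control step:
# sensors in -> calibrate -> neural net -> clean up output -> actuate -> log.

def control_step(get_frame, calibrate, model, postprocess, send_to_car, log):
    frame = calibrate(get_frame())        # clean up the input to the network
    plan = model(frame)                   # "here's where you wanna send the car"
    commands = postprocess(plan)          # e.g. counter-steer for a banked road,
                                          # using sensors the camera can't see
    send_to_car(commands)                 # write actuator commands to the car
    log(frame, plan, commands)            # record everything for later training

# Dummy wiring, just to make the sketch runnable and show the flow.
if __name__ == "__main__":
    control_step(
        get_frame=lambda: [0.0] * 10,
        calibrate=lambda f: f,
        model=lambda f: {"curvature": 0.0, "speed": 29.0},
        postprocess=lambda p: {"steer": p["curvature"], "gas": 0.1, "brake": 0.0},
        send_to_car=lambda c: print("actuators:", c),
        log=lambda *args: None,
    )
```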

00:59:24 So what, since we last spoke,

00:59:26 I don’t remember when, maybe a year ago,

00:59:28 maybe a little bit longer,

00:59:30 how has OpenPilot improved?

00:59:33 We did exactly what I promised you.

00:59:34 I promised you that by the end of the year,

00:59:36 you’d be able to remove the lanes.

00:59:40 The lateral policy is now almost completely end to end.

00:59:46 You can turn the lanes off and it will drive,

00:59:48 drive slightly worse on the highway

00:59:49 if you turn the lanes off,

00:59:51 but you can turn the lanes off and it will drive well,

00:59:54 trained completely end to end on user data.

00:59:57 And this year we hope to do the same

00:59:58 for the longitudinal policy.

01:00:00 So that’s the interesting thing is you’re not doing,

01:00:03 you don’t appear to be, maybe you can correct me,

01:00:05 you don’t appear to be doing lane detection

01:00:08 or lane marking detection or kind of the segmentation task

01:00:12 or any kind of object detection task.

01:00:15 You’re doing what’s traditionally more called

01:00:17 like end to end learning.

01:00:19 So, and trained on actual behavior of drivers

01:00:24 when they’re driving the car manually.

01:00:27 And this is hard to do.

01:00:29 It’s not supervised learning.

01:00:32 Yeah, but so the nice thing is there’s a lot of data.

01:00:34 So it’s hard and easy, right?

01:00:37 It’s a…

01:00:37 We have a lot of high quality data, yeah.

01:00:40 Like more than you need, in a sense.

01:00:41 Well…

01:00:42 We have way more than we need.

01:00:43 We have way more data than we need.

01:00:44 I mean, it’s an interesting question actually,

01:00:47 because in terms of amount, you have more than you need,

01:00:50 but the driving is full of edge cases.

01:00:54 So how do you select the data you train on?

01:00:58 I think this is an interesting open question.

01:01:00 Like what’s the cleverest way to select data?

01:01:04 That’s the question Tesla is probably working on.

01:01:07 That’s, I mean, the entirety of machine learning can be,

01:01:09 they don’t seem to really care.

01:01:11 They just kind of select data.

01:01:12 But I feel like that if you want to solve,

01:01:14 if you want to create intelligent systems,

01:01:16 you have to pick data well, right?

01:01:18 And so do you have any hints, ideas of how to do it well?

01:01:22 So in some ways that is…

01:01:25 The definition I like of reinforcement learning

01:01:27 versus supervised learning.

01:01:29 In supervised learning, the weights depend on the data.

01:01:32 Right?

01:01:34 And this is obviously true,

01:01:35 but in reinforcement learning,

01:01:38 the data depends on the weights.

01:01:40 Yeah.

01:01:41 And actually both ways.

01:01:42 That’s poetry.

01:01:43 So how does it know what data to train on?

01:01:46 Well, let it pick.

01:01:47 We’re not there yet, but that’s the eventual.

01:01:49 So you’re thinking this almost like

01:01:51 a reinforcement learning framework.

01:01:53 We’re going to do RL on the world.

01:01:55 Every time a car makes a mistake, user disengages,

01:01:58 we train on that and do RL on the world.

01:02:00 Ship out a new model, that’s an epoch, right?
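
A hedged sketch of that "RL on the world" loop, with placeholder functions rather than Comma's actual pipeline: the fleet drives the current model, disengagements become the error signal, the model is retrained with those segments emphasized, and the new model ships over the air; that round trip is the "epoch."

```python
# Hedged sketch of the "do RL on the world" loop; all of this is a
# placeholder outline, not Comma's pipeline.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Segment:
    frames: list
    disengaged: bool  # did the human take over during this segment?

def fleet_epoch(weights,
                collect_logs: Callable[[object], List[Segment]],
                retrain: Callable[[object, List[Segment]], object],
                ship: Callable[[object], None]):
    logs = collect_logs(weights)                 # fleet drives the current model
    hard = [s for s in logs if s.disengaged]     # every disengagement is a mistake
    new_weights = retrain(weights, logs + hard)  # over-sample the failure cases
    ship(new_weights)                            # OTA update: one epoch done
    return new_weights
```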

01:02:03 And for now you’re not doing the Elon style promising

01:02:08 that it’s going to be fully autonomous.

01:02:09 You really are sticking to level two

01:02:12 and like it’s supposed to be supervised.

01:02:15 It is definitely supposed to be supervised

01:02:16 and we enforce the fact that it’s supervised.

01:02:19 We look at our rate of improvement in disengagements.

01:02:23 OpenPilot now has an unplanned disengagement

01:02:25 about every hundred miles.

01:02:27 This is up from 10 miles, like maybe,

01:02:32 maybe a year ago.

01:02:36 Yeah.

01:02:37 So maybe we’ve seen 10 X improvement in a year,

01:02:38 but a hundred miles is still a far cry

01:02:41 from the hundred thousand you’re going to need.

01:02:43 So you’re going to somehow need to get three more 10 Xs

01:02:48 in there.

01:02:49 And you’re, what’s your intuition?

01:02:52 You’re basically hoping that there’s exponential

01:02:54 improvement baked into the cake somewhere.

01:02:56 Well, that’s even, I mean, 10 X improvement,

01:02:58 that’s already assuming exponential, right?

01:03:00 There’s definitely exponential improvement.

01:03:02 And I think when Elon talks about exponential,

01:03:04 like these things, these systems are going to

01:03:06 exponentially improve, just exponential doesn’t mean

01:03:09 you’re getting a hundred gigahertz processors tomorrow.

01:03:12 Right? Like it’s going to still take a while

01:03:15 because the gap between even our best system

01:03:18 and humans is still large.

01:03:20 So that’s an interesting distinction to draw.

01:03:22 So if you look at the way Tesla is approaching the problem

01:03:26 and the way you’re approaching the problem,

01:03:28 which is very different than the rest of the self driving

01:03:31 car world.

01:03:32 So let’s put them aside. You’re treating most of

01:03:35 the driving task as a machine learning problem.

01:03:37 And the way Tesla is approaching it is with the multitask

01:03:40 learning where you break the task of driving into hundreds

01:03:44 of different tasks and you have this multiheaded

01:03:47 neural network that’s very good at performing each task.

01:03:51 And there’s presumably something on top that’s

01:03:54 stitching stuff together in order to make control

01:03:59 decisions, policy decisions about how you move the car.

01:04:02 But what that allows you, there’s a brilliance to this

01:04:04 because it allows you to master each task,

01:04:08 like lane detection, stop sign detection,

01:04:13 the traffic light detection, drivable area segmentation,

01:04:19 you know, vehicle, bicycle, pedestrian detection.

01:04:23 There’s some localization tasks in there.

01:04:25 Also predicting of like, yeah,

01:04:30 predicting how the entities in the scene are going to move.

01:04:34 Like everything is basically a machine learning task.

01:04:36 So there’s a classification, segmentation, prediction.

01:04:40 And it’s nice because you can have this entire engine,

01:04:44 data engine that’s mining for edge cases for each one of

01:04:48 these tasks.

01:04:49 And you can have people like engineers that are basically

01:04:52 masters of that task,

01:04:53 like become the best person in the world at,

01:04:56 as you talk about the cone guy for Waymo,

01:04:59 becoming the best person in the world at cone detection.

01:05:06 So that’s a compelling notion from a supervised learning

01:05:10 perspective, automating much of the process of edge case

01:05:15 discovery and retraining neural network for each of the

01:05:17 individual perception tasks.

01:05:19 And then you’re looking at the machine learning in a more

01:05:22 holistic way, basically doing end to end learning on the

01:05:27 driving tasks, supervised, trained on the data of the

01:05:31 actual driving of people.

01:05:34 that use Comma AI, like actual human drivers,

01:05:37 their manual control,

01:05:38 plus the moments of disengagement that maybe with some

01:05:44 labeling could indicate the failure of the system.

01:05:47 So you have the,

01:05:48 you have a huge amount of data for positive control of the

01:05:52 vehicle, like successful control of the vehicle,

01:05:55 both maintaining the lane as,

01:05:58 as I think you’re also working on longitudinal control of

01:06:01 the vehicle and then failure cases where the vehicle does

01:06:04 something wrong that needs disengagement.
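
For readers unfamiliar with the multi-headed setup described above, here is a minimal PyTorch-style sketch of the idea: one shared backbone feeding a separate small head per perception task. This is not Tesla's actual HydraNet; the backbone, head sizes, and task list are invented.

```python
# Minimal PyTorch sketch of the multi-headed idea: one shared backbone with a
# small separate head per perception task. Not Tesla's actual HydraNet; the
# backbone, head sizes, and the task list are invented for illustration.
import torch
import torch.nn as nn

class MultiHeadPerception(nn.Module):
    def __init__(self, feature_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a real vision backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feature_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({              # one head per task
            "lane": nn.Linear(feature_dim, 8),          # toy lane-line parameters
            "stop_sign": nn.Linear(feature_dim, 1),     # presence logit
            "traffic_light": nn.Linear(feature_dim, 4), # red/yellow/green/none
            "vehicles": nn.Linear(feature_dim, 16),     # toy detection output
        })

    def forward(self, image: torch.Tensor) -> dict:
        features = self.backbone(image)
        return {name: head(features) for name, head in self.heads.items()}

# model = MultiHeadPerception()
# outputs = model(torch.randn(1, 3, 128, 256))  # dict of per-task predictions
```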

01:06:08 So like what,

01:06:09 why do you think you’re right and Tesla is wrong on this?

01:06:14 And do you think,

01:06:15 do you think you’ll come around the Tesla way?

01:06:17 Do you think Tesla will come around to your way?

01:06:21 If you were to start a chess engine company,

01:06:23 would you hire a Bishop guy?

01:06:26 See, we have a,

01:06:27 this is Monday morning

01:06:29 quarterbacking, but yes, probably.

01:06:36 Oh, our Rook guy.

01:06:37 Oh, we stole the Rook guy from that company.

01:06:39 Oh, we’re going to have real good Rooks.

01:06:40 Well, there’s not many pieces, right?

01:06:43 You can,

01:06:46 there’s not many guys and gals to hire.

01:06:48 You just have a few that work in the Bishop,

01:06:51 a few that work in the Rook.

01:06:52 Is that not ludicrous today to think about

01:06:55 in a world of AlphaZero?

01:06:57 But AlphaZero is a chess game.

01:06:58 So the fundamental question is,

01:07:01 how hard is driving compared to chess?

01:07:04 Because, so long term,

01:07:07 end to end,

01:07:08 will be the right solution.

01:07:10 The question is how many years away is that?

01:07:13 End to end is going to be the only solution for level five.

01:07:15 For the only way we’ll get there.

01:07:17 Of course, and of course,

01:07:18 Tesla is going to come around to my way.

01:07:19 And if you’re a Rook guy out there, I’m sorry.

01:07:22 The cone guy.

01:07:24 I don’t know.

01:07:25 We’re going to specialize each task.

01:07:26 We’re going to really understand Rook placement.

01:07:29 Yeah.

01:07:30 I understand the intuition you have.

01:07:32 I mean, that,

01:07:35 that is a very compelling notion

01:07:36 that we can learn the task end to end,

01:07:39 like the same compelling notion you might have

01:07:40 for natural language conversation.

01:07:42 But I’m not

01:07:44 sure,

01:07:47 because one thing you sneaked in there

01:07:48 is the assertion that it’s impossible to get to level five

01:07:53 without this kind of approach.

01:07:55 I don’t know if that’s obvious.

01:07:57 I don’t know if that’s obvious either.

01:07:58 I don’t actually mean that.

01:08:01 I think that it is much easier

01:08:03 to get to level five with an end to end approach.

01:08:05 I think that the other approach is doable,

01:08:08 but the magnitude of the engineering challenge

01:08:11 may exceed what humanity is capable of.

01:08:13 But what do you think of the Tesla data engine approach,

01:08:19 which to me is an active learning task,

01:08:21 is kind of fascinating,

01:08:22 is breaking it down into these multiple tasks

01:08:25 and mining their data constantly for like edge cases

01:08:29 for these different tasks.

01:08:30 Yeah, but the tasks themselves are not being learned.

01:08:32 This is feature engineering.

01:08:35 Yeah, I mean, it’s a higher abstraction level

01:08:40 of feature engineering for the different tasks.

01:08:43 Task engineering in a sense.

01:08:44 It’s slightly better feature engineering,

01:08:46 but it still fundamentally is feature engineering.

01:08:49 And if anything about the history of AI

01:08:51 has taught us anything,

01:08:52 it’s that feature engineering approaches

01:08:54 will always be replaced and lose to end to end.

01:08:57 Now, to be fair, I cannot really make promises on timelines,

01:09:02 but I can say that when you look at the code for Stockfish

01:09:05 and the code for AlphaZero,

01:09:06 one is a lot shorter than the other,

01:09:09 a lot more elegant,

01:09:09 required a lot less programmer hours to write.

01:09:12 Yeah, but there was a lot more murder of bad agents

01:09:21 on the AlphaZero side.

01:09:24 By murder, I mean agents that played a game

01:09:29 and failed miserably.

01:09:30 Yeah.

01:09:31 Oh, oh.

01:09:32 In simulation, that failure is less costly.

01:09:34 Yeah.

01:09:35 In real world, it’s…

01:09:37 Do you mean in practice,

01:09:38 like AlphaZero has lost games miserably?

01:09:40 No.

01:09:41 Wow.

01:09:42 I haven’t seen that.

01:09:43 No, but I know, but the requirement for AlphaZero is…

01:09:47 A simulator.

01:09:48 To be able to like evolution, human evolution,

01:09:51 not human evolution, biological evolution of life on earth

01:09:54 from the origin of life has murdered trillions

01:09:58 upon trillions of organisms on the path to humans.

01:10:02 Yeah.

01:10:03 So the question is, can we stitch together

01:10:05 a human like object without having to go

01:10:07 through the entirety process of evolution?

01:10:09 Well, no, but do the evolution in simulation.

01:10:11 Yeah, that’s the question.

01:10:12 Can we simulate?

01:10:13 So do you have a sense that it’s possible

01:10:15 to simulate some aspect?

01:10:16 MuZero is exactly this.

01:10:18 MuZero is the solution to this.

01:10:21 MuZero I think is going to be looked back

01:10:23 as the canonical paper.

01:10:25 And I don’t think deep learning is everything.

01:10:26 I think that there’s still a bunch of things missing

01:10:28 to get there, but MuZero I think is going to be looked back

01:10:31 as the kind of cornerstone paper

01:10:34 of this whole deep learning era.

01:10:37 And MuZero is the solution to self driving cars.

01:10:39 You have to make a few tweaks to it,

01:10:41 but MuZero does effectively that.

01:10:42 It does those rollouts and those murdering

01:10:45 in a learned simulator and a learned dynamics model.
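
A very rough sketch of the MuZero idea being referenced, with the three learned networks passed in as placeholders: a representation function, a dynamics function, and a prediction function. Real MuZero plans with Monte Carlo tree search; this simplified version just scores a handful of candidate action sequences entirely inside the learned model.

```python
# Very rough sketch of the MuZero-style idea: plan by rolling out candidate
# actions inside a *learned* model, so bad agents get "murdered" in latent
# space instead of the real world. h, g, f stand in for trained networks.

def plan(observation, h, g, f, candidate_action_sequences):
    """h: obs -> latent state; g: (state, action) -> (next state, reward);
    f: state -> (policy, value). All three are placeholders here."""
    best_seq, best_return = None, float("-inf")
    for actions in candidate_action_sequences:
        state = h(observation)
        total = 0.0
        for action in actions:
            state, reward = g(state, action)  # imagined step, never touches the world
            total += reward
        _, value = f(state)                   # bootstrap with the final state's value
        total += value
        if total > best_return:
            best_seq, best_return = actions, total
    return best_seq
```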

01:10:50 That’s interesting.

01:10:51 It doesn’t get enough love.

01:10:51 I was blown away when I read that paper.

01:10:54 I’m like, okay, I’ve always said at Comma,

01:10:57 I’m going to sit and I’m going to wait for the solution

01:10:58 to self driving cars to come along.

01:11:00 This year I saw it.

01:11:01 It’s MuZero.

01:11:05 So.

01:11:06 Sit back and let the winnings roll in.

01:11:09 So your sense, just to elaborate a little bit,

01:11:12 and linger on the topic.

01:11:13 Your sense is neural networks will solve driving.

01:11:16 Yes.

01:11:17 Like we don’t need anything else.

01:11:18 I think the same way chess,

01:11:21 and maybe Go, are the pinnacle of like search algorithms

01:11:25 and things that look kind of like A star.

01:11:28 The pinnacle of this era is going to be self driving cars.

01:11:34 But on the path of that, you have to deliver products

01:11:38 and it’s possible that the path to full self driving cars

01:11:42 will take decades.

01:11:44 I doubt it.

01:11:45 How long would you put on it?

01:11:47 Like what are we, you’re chasing it, Tesla’s chasing it.

01:11:53 What are we talking about?

01:11:54 Five years, 10 years, 50 years.

01:11:56 Let’s say in the 2020s.

01:11:58 In the 2020s.

01:11:59 The later part of the 2020s.

01:12:03 With the neural network.

01:12:05 Well, that would be nice to see.

01:12:06 And then the path to that, you’re delivering products,

01:12:09 which is a nice L2 system.

01:12:10 That’s what Tesla’s doing, a nice L2 system.

01:12:13 Just gets better every time.

01:12:14 L2, the only difference between L2 and the other levels

01:12:16 is who takes liability.

01:12:17 And I’m not a liability guy, I don’t wanna take liability.

01:12:20 I’m gonna be level two forever.

01:12:22 Now on that little transition,

01:12:25 I mean, how do you make the transition work?

01:12:29 Is this where driver sensing comes in?

01:12:32 Like how do you make the, cause you said a hundred miles,

01:12:35 like, is there some sort of human factor psychology thing

01:12:41 where people start to overtrust the system,

01:12:43 all those kinds of effects,

01:12:45 once it gets better and better and better and better,

01:12:46 they get lazier and lazier and lazier.

01:12:49 Is that, like, how do you get that transition right?

01:12:52 First off, our monitoring is already adaptive.

01:12:54 Our monitoring is already scene adaptive.

01:12:56 Driver monitoring is just the camera

01:12:58 that’s looking at the driver.

01:13:00 You have an infrared camera in the…

01:13:03 Our policy for how we enforce the driver monitoring

01:13:06 is scene adaptive.

01:13:07 What’s that mean?

01:13:08 Well, for example, in one of the extreme cases,

01:13:12 if the car is not moving,

01:13:14 we do not actively enforce driver monitoring, right?

01:13:19 If you are going through a,

01:13:22 like a 45 mile an hour road with lights

01:13:25 and stop signs and potentially pedestrians,

01:13:27 we enforce a very tight driver monitoring policy.

01:13:30 If you are alone on a perfectly straight highway,

01:13:33 and this is, it’s all machine learning.

01:13:35 None of that is hand coded.

01:13:36 Actually, the stopped case is hand coded, but…

01:13:39 So there’s some kind of machine learning

01:13:41 estimation of risk.

01:13:42 Yes.

01:13:43 Yeah.
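
As a toy illustration of what a scene-adaptive enforcement policy could look like, here is a hand-coded sketch; the risk signal and every threshold are invented, and per the conversation the real policy is learned rather than written like this (aside from the stopped case).

```python
# Toy, hand-coded sketch of a scene-adaptive driver-monitoring policy. The
# risk signal and thresholds are invented; the real policy is learned, with
# only the stopped case hand-coded, per the conversation above.

def allowed_eyes_off_seconds(speed_mps: float, scene_risk: float) -> float:
    """How long the driver may look away before an alert, given the scene."""
    if speed_mps < 0.5:
        return float("inf")            # car isn't moving: don't enforce at all
    lenient = 6.0                      # empty, straight highway
    # scene_risk in [0, 1]: lights, stop signs, pedestrians push it toward 1
    return max(1.0, lenient * (1.0 - scene_risk))

def should_alert(eyes_off_seconds: float, speed_mps: float, scene_risk: float) -> bool:
    return eyes_off_seconds > allowed_eyes_off_seconds(speed_mps, scene_risk)
```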

01:13:44 I mean, I’ve always been a huge fan of that.

01:13:45 That’s a…

01:13:47 Because…

01:13:48 It’s difficult to do. Every step in that direction

01:13:53 is a worthwhile step to take.

01:13:55 It might be difficult to do really well.

01:13:56 Like us humans are able to estimate risk pretty damn well,

01:13:59 whatever the hell that is.

01:14:01 That feels like one of the nice features of us humans.

01:14:06 Cause like we humans are really good drivers

01:14:08 when we’re really like tuned in

01:14:11 and we’re good at estimating risk.

01:14:12 Like when are we supposed to be tuned in?

01:14:14 Yeah.

01:14:15 And, you know, people are like,

01:14:17 oh, well, you know,

01:14:18 why would you ever make the driver monitoring policy

01:14:20 less aggressive?

01:14:21 Why would you not always keep it at its most aggressive?

01:14:23 Because then people are just going to get fatigued from it.

01:14:25 Yes.

01:14:26 When they get annoyed.

01:14:27 You want them…

01:14:28 Yeah.

01:14:29 You want the experience to be pleasant.

01:14:30 Obviously I want the experience to be pleasant,

01:14:32 but even just from a straight up safety perspective,

01:14:35 if you alert people when they look around and they’re like,

01:14:39 why is this thing alerting me?

01:14:41 There’s nothing I could possibly hit right now.

01:14:42 People will just learn to tune it out.

01:14:45 People will just learn to tune it out,

01:14:46 to put weights on the steering wheel,

01:14:48 to do whatever to overcome it.

01:14:49 And remember that you’re always part

01:14:52 of this adaptive system.

01:14:53 So all I can really say about, you know,

01:14:55 how this scales going forward is yeah,

01:14:57 it’s something we have to monitor for.

01:14:59 Ooh, we don’t know.

01:15:00 This is a great psychology experiment at scale.

01:15:02 Like we’ll see.

01:15:03 Yeah, it’s fascinating.

01:15:04 Track it.

01:15:04 And making sure you have a good understanding of attention

01:15:09 is a very key part of that psychology problem.

01:15:11 Yeah.

01:15:12 I think you and I probably have a different,

01:15:14 come to it differently, but to me,

01:15:16 it’s a fascinating psychology problem

01:15:19 to explore something much deeper than just driving.

01:15:22 It’s such a nice way to explore human attention

01:15:26 and human behavior, which is why, again,

01:15:30 we’ve probably both criticized Mr. Elon Musk

01:15:34 on this one topic from different avenues.

01:15:38 So both offline and online,

01:15:39 I had little chats with Elon and like,

01:15:44 I love human beings as a computer vision problem,

01:15:48 as an AI problem, it’s fascinating.

01:15:51 He wasn’t so much interested in that problem.

01:15:53 It’s like in order to solve driving,

01:15:56 the whole point is you want to remove the human

01:15:58 from the picture.

01:16:01 And it seems like you can’t do that quite yet.

01:16:04 Eventually, yes, but you can’t quite do that yet.

01:16:07 So this is the moment where you can’t yet say,

01:16:12 I told you so to Tesla, but it’s getting there

01:16:17 because I don’t know if you’ve seen this,

01:16:19 there’s some reporting that they’re in fact

01:16:21 starting to do driver monitoring.

01:16:23 Yeah, they ship the model in shadow mode.

01:16:26 With, I believe, only a visible light camera,

01:16:29 it might even be fisheye.

01:16:31 It’s like a low resolution.

01:16:33 Low resolution, visible light.

01:16:34 I mean, to be fair, that’s what we have in the Eon as well,

01:16:37 our last generation product.

01:16:38 This is the one area where I can say

01:16:41 our hardware is ahead of Tesla.

01:16:42 The rest of our hardware, way, way behind,

01:16:43 but our driver monitoring camera.

01:16:46 So you think, I think on the Third Row Tesla podcast,

01:16:50 or somewhere else, I’ve heard you say that obviously,

01:16:54 eventually they’re gonna have driver monitoring.

01:16:57 I think what I’ve said is Elon will definitely ship

01:16:59 driver monitoring before he ships level five.

01:17:01 Before level five.

01:17:02 And I’m willing to bet 10 grand on that.

01:17:04 And you bet 10 grand on that.

01:17:07 I mean, now I don’t wanna take the bet,

01:17:08 but before, maybe someone would have,

01:17:09 oh, I should have got my money in.

01:17:10 Yeah.

01:17:11 It’s an interesting bet.

01:17:12 I think you’re right.

01:17:16 I’m actually on a human level

01:17:19 because he’s made the decision.

01:17:24 Like he said that driver monitoring is the wrong way to go.

01:17:27 But like, you have to think of as a human, as a CEO,

01:17:31 I think that’s the right thing to say when,

01:17:36 like sometimes you have to say things publicly

01:17:40 that are different than what you actually believe,

01:17:41 because when you’re producing a large number of vehicles

01:17:45 and the decision was made not to include the camera,

01:17:47 like what are you supposed to say?

01:17:49 Like our cars don’t have the thing

01:17:51 that I think is right to have.

01:17:54 It’s an interesting thing.

01:17:55 But like on the other side, as a CEO,

01:17:58 I mean, something you could probably speak to as a leader,

01:18:01 I think about me as a human

01:18:04 to publicly change your mind on something.

01:18:07 How hard is that?

01:18:08 Especially when assholes like George Hotz say,

01:18:10 I told you so.

01:18:12 All I will say is I am not a leader

01:18:14 and I am happy to change my mind.

01:18:17 And I will.

01:18:17 You think Elon will?

01:18:20 Yeah, I do.

01:18:22 I think he’ll come up with a good way

01:18:24 to make it psychologically okay for him.

01:18:27 Well, it’s such an important thing, man.

01:18:29 Especially for a first principles thinker,

01:18:31 because he made a decision that driver monitoring

01:18:34 is not the right way to go.

01:18:35 And I could see that decision.

01:18:37 And I could even make that decision.

01:18:39 Like I was on the fence too.

01:18:41 Like I’m not a,

01:18:42 driver monitoring is such an obvious,

01:18:47 simple solution to the problem of attention.

01:18:49 It’s not obvious to me that just by putting a camera there,

01:18:52 you solve things.

01:18:54 You have to create an incredible, compelling experience.

01:18:59 Just like you’re talking about.

01:19:01 I don’t know if it’s easy to do that.

01:19:03 It’s not at all easy to do that, in fact, I think.

01:19:05 So as a creator of a car that’s trying to create a product

01:19:10 that people love, which is what Tesla tries to do, right?

01:19:14 It’s not obvious to me that as a design decision,

01:19:18 whether adding a camera is a good idea.

01:19:20 From a safety perspective either,

01:19:22 like in the human factors community,

01:19:25 everybody says that you should obviously

01:19:27 have driver sensing, driver monitoring.

01:19:30 But that’s like saying it’s obvious as parents,

01:19:36 you shouldn’t let your kids go out at night.

01:19:39 But okay, but like,

01:19:43 they’re still gonna find ways to do drugs.

01:19:45 Like, you have to also be good parents.

01:19:49 So like, it’s much more complicated than just the,

01:19:52 you need to have driver monitoring.

01:19:54 I totally disagree on, okay, if you have a camera there

01:19:58 and the camera’s watching the person,

01:20:00 but never throws an alert, they’ll never think about it.

01:20:03 Right?

01:20:04 The driver monitoring policy that you choose to,

01:20:08 how you choose to communicate with the user

01:20:10 is entirely separate from the data collection perspective.

01:20:14 Right?

01:20:15 Right?

01:20:15 So, you know, like, there’s one thing to say,

01:20:20 like, you know, tell your teenager they can’t do something.

01:20:24 There’s another thing to like, you know, gather the data.

01:20:27 So you can make informed decisions.

01:20:28 That’s really interesting.

01:20:29 But you have to make that,

01:20:30 that’s the interesting thing about cars.

01:20:33 But it’s even true with Comma AI,

01:20:35 like you don’t have to manufacture the thing

01:20:37 into the car, is you have to make a decision

01:20:40 that anticipates the right strategy longterm.

01:20:44 So like, you have to start collecting the data

01:20:46 and start making decisions.

01:20:47 Started it three years ago.

01:20:49 I believe that we have the best driver monitoring solution

01:20:52 in the world.

01:20:54 I think that when you compare it to Super Cruise,

01:20:57 which is the only other one that I really know of that shipped,

01:20:59 ours is better.

01:21:01 What do you like and not like about Super Cruise?

01:21:06 I mean, I had a few issues with Super Cruise,

01:21:08 the sun would be shining through the window,

01:21:12 would blind the camera,

01:21:13 and it would say I wasn’t paying attention.

01:21:14 When I was looking completely straight,

01:21:16 I couldn’t reset the attention with a steering wheel touch

01:21:19 and Super Cruise would disengage.

01:21:21 Like I was communicating to the car, I’m like, look,

01:21:22 I am here, I am paying attention.

01:21:24 Why are you really gonna force me to disengage?

01:21:26 And it did.

01:21:28 So it’s a constant conversation with the user.

01:21:32 And yeah, there’s no way to ship a system

01:21:33 like this if you can’t OTA.

01:21:35 We’re shipping a new one every month.

01:21:37 Sometimes we balance it with our users on Discord.

01:21:40 Like sometimes we make the driver monitoring

01:21:41 a little more aggressive and people complain.

01:21:43 Sometimes they don’t.

01:21:45 We want it to be as aggressive as possible

01:21:47 where people don’t complain and it doesn’t feel intrusive.

01:21:49 So being able to update the system over the air

01:21:51 is an essential component.

01:21:52 I mean, that’s probably to me, you mentioned,

01:21:56 I mean, to me that is the biggest innovation of Tesla,

01:22:01 that it made people realize that over the air updates

01:22:04 is essential.

01:22:06 Yeah.

01:22:07 I mean, was that not obvious from the iPhone?

01:22:10 The iPhone was the first real product that OTA’d, I think.

01:22:13 Was it actually, that’s brilliant, you’re right.

01:22:15 I mean, the game consoles used to not, right?

01:22:17 The game consoles were maybe the second thing that did.

01:22:18 Wow, I didn’t really think about one of the amazing features

01:22:22 of a smartphone isn’t just like the touchscreen

01:22:26 isn’t the thing, it’s the ability to constantly update.

01:22:30 Yeah, it gets better.

01:22:31 It gets better.

01:22:35 Love my iOS 14.

01:22:36 Yeah.

01:22:38 Well, one thing that I probably disagree with you

01:22:41 on driver monitoring is you said that it’s easy.

01:22:46 I mean, you tend to say stuff is easy.

01:22:48 I’m sure the, I guess you said it’s easy

01:22:52 relative to the external perception problem.

01:22:58 Can you elaborate why you think it’s easy?

01:23:00 Feature engineering works for driver monitoring.

01:23:03 Feature engineering does not work for the external.

01:23:05 So human faces are not, human faces and the movement

01:23:10 of human faces and head and body is not as variable

01:23:14 as the external environment, is your intuition?

01:23:17 Yes, and there’s another big difference as well.

01:23:20 Your reliability of a driver monitoring system

01:23:22 doesn’t actually need to be that high.

01:23:24 The uncertainty, if you have something that’s detecting

01:23:27 whether the human’s paying attention and it only works

01:23:29 92% of the time, you’re still getting almost all

01:23:31 the benefit of that because the human,

01:23:33 like you’re training the human, right?

01:23:35 You’re dealing with a system that’s really helping you out.

01:23:39 It’s a conversation.

01:23:40 It’s not like the external thing where guess what?

01:23:43 If you swerve into a tree, you swerve into a tree, right?

01:23:46 Like you get no margin for error there.

01:23:48 Yeah, I think that’s really well put.

01:23:49 I think that’s the right, exactly the place

01:23:54 where comparing to the external perception,

01:23:58 the control problem, the driver monitoring is easier

01:24:01 because you don’t, the bar for success is much lower.

01:24:05 Yeah, but I still think like the human face

01:24:09 is more complicated actually than the external environment,

01:24:12 but for driving, you don’t give a damn.

01:24:14 I don’t need, yeah, I don’t need something,

01:24:15 I don’t need something that complicated

01:24:18 to have to communicate the idea to the human

01:24:22 that I want to communicate, which is,

01:24:23 yo, system might mess up here.

01:24:25 You gotta pay attention.

01:24:26 Yeah, see, that’s my love and fascination is the human face.

01:24:32 And it feels like this is a nice place to create products

01:24:38 that create an experience in the car.

01:24:40 So like, it feels like there should be

01:24:42 more richer experiences in the car, you know?

01:24:47 Like that’s an opportunity for like something like Comma AI

01:24:51 or just any kind of system like a Tesla

01:24:53 or any of the autonomous vehicle companies

01:24:56 is because software is, there’s much more sensors

01:24:59 and so much is on our software

01:25:00 and you’re doing machine learning anyway,

01:25:02 there’s an opportunity to create totally new experiences

01:25:06 that we’re not even anticipating.

01:25:08 You don’t think so?

01:25:10 Nah.

01:25:10 You think it’s a box that gets you from A to B

01:25:12 and you want to do it chill?

01:25:14 Yeah, I mean, I think as soon as we get to level three

01:25:16 on highways, okay, enjoy your candy crush,

01:25:19 enjoy your Hulu, enjoy your, you know, whatever, whatever.

01:25:23 Sure, you get this, you can look at screens basically

01:25:26 versus right now where you have music and audio books.

01:25:28 So level three is where you can kind of disengage

01:25:31 in stretches of time.

01:25:34 Well, you think level three is possible?

01:25:37 Like on the highway going for 100 miles

01:25:39 and you can just go to sleep?

01:25:40 Oh yeah, sleep.

01:25:43 So again, I think it’s really all on a spectrum.

01:25:47 I think that being able to use your phone

01:25:50 while you’re on the highway and like this all being okay

01:25:53 and being aware that the car might alert you

01:25:55 and you have five seconds to basically.

01:25:57 So the five second thing is you think is possible?

01:25:59 Yeah, I think it is, oh yeah.

01:26:00 Not in all scenarios, right?

01:26:02 Some scenarios it’s not.

01:26:03 It’s the whole risk thing that you mentioned is nice

01:26:06 is to be able to estimate like how risky is this situation?

01:26:10 That’s really important to understand.

01:26:12 One other thing you mentioned comparing Comma

01:26:15 and Autopilot is that something about the haptic feel

01:26:20 of the way Comma controls the car when things are uncertain.

01:26:25 Like it behaves a little bit more uncertain

01:26:27 when things are uncertain.

01:26:29 That’s kind of an interesting point.

01:26:31 And then Autopilot is much more confident always

01:26:34 even when it’s uncertain until it runs into trouble.

01:26:39 That’s a funny thing.

01:26:40 I actually mentioned that to Elon, I think.

01:26:42 And then the first time we talked, he wasn’t biting.

01:26:46 It’s like communicating uncertainty.

01:26:48 I guess Comma doesn’t really communicate uncertainty

01:26:51 explicitly, it communicates it through haptic feel.

01:26:55 Like what’s the role of communicating uncertainty

01:26:57 do you think?

01:26:58 Oh, we do some stuff explicitly.

01:26:59 Like we do detect the lanes when you’re on the highway

01:27:01 and we’ll show you how many lanes we’re using to drive with.

01:27:04 You can look at where it thinks the lanes are.

01:27:06 You can look at the path.

01:27:08 And we want to be better about this.

01:27:10 We’re actually hiring, want to hire some new UI people.

01:27:12 UI people, you mentioned this.

01:27:14 Cause it’s such an, it’s a UI problem too, right?

01:27:17 We have a great designer now, but you know,

01:27:19 we need people who are just going to like build this

01:27:21 and debug these UIs, Qt people.

01:27:23 Qt.

01:27:24 Is that what the UI is done with, is Qt?

01:27:26 The new UI is in Qt.

01:27:29 C++ Qt?

01:27:32 Tesla uses it too.

01:27:33 Yeah.

01:27:34 We had some React stuff in there.

01:27:37 React JS or just React?

01:27:39 React is its own language, right?

01:27:41 React Native, React is a JavaScript framework.

01:27:44 Yeah.

01:27:45 So it’s all based on JavaScript, but it’s, you know,

01:27:48 I like C++.

01:27:51 What do you think about Dojo with Tesla

01:27:55 and their foray into what appears to be

01:28:00 specialized hardware for training your own nets?

01:28:05 I guess it’s something, maybe you can correct me,

01:28:07 from my shallow looking at it,

01:28:10 it seems like something like Google did with TPUs,

01:28:12 but specialized for driving data.

01:28:15 I don’t think it’s specialized for driving data.

01:28:18 It’s just legit, just TPU.

01:28:20 They want to go the Apple way,

01:28:22 basically everything required in the chain is done in house.

01:28:25 Well, so you have a problem right now,

01:28:27 and this is one of my concerns.

01:28:31 I really would like to see somebody deal with this.

01:28:33 If anyone out there is doing it,

01:28:35 I’d like to help them if I can.

01:28:38 You basically have two options right now to train.

01:28:40 One, your options are NVIDIA or Google.

01:28:45 So Google is not even an option.

01:28:50 Their TPUs are only available in Google Cloud.

01:28:53 Google has absolutely onerous

01:28:55 terms of service restrictions.

01:28:58 They may have changed it,

01:28:59 but back in Google’s terms of service,

01:29:00 it said explicitly you are not allowed to use Google Cloud ML

01:29:03 for training autonomous vehicles

01:29:05 or for doing anything that competes with Google

01:29:07 without Google’s prior written permission.

01:29:09 Wow, okay.

01:29:10 I mean, Google is not a platform company.

01:29:14 I wouldn’t touch TPUs with a 10 foot pole.

01:29:16 So that leaves you with the monopoly.

01:29:19 NVIDIA? NVIDIA.

01:29:21 So, I mean.

01:29:22 That you’re not a fan of.

01:29:23 Well, look, I was a huge fan of in 2016 NVIDIA.

01:29:28 Jensen came sat in the car.

01:29:31 Cool guy.

01:29:32 When the stock was $30 a share.

01:29:35 NVIDIA stock has skyrocketed.

01:29:38 I witnessed a real change

01:29:39 in who was in management over there in like 2018.

01:29:43 And now they are, let’s exploit.

01:29:46 Let’s take every dollar we possibly can

01:29:48 out of this ecosystem.

01:29:49 Let’s charge $10,000 for A100s

01:29:51 because we know we got the best shit in the game.

01:29:54 And let’s charge $10,000 for an A100

01:29:57 when it’s really not that different from a 3080,

01:30:00 which is $699.

01:30:03 The margins that they are making

01:30:05 off of those high end chips are so high

01:30:08 that, I mean, I think they’re shooting themselves

01:30:10 in the foot just from a business perspective.

01:30:12 Because there’s a lot of people talking like me now

01:30:14 who are like, somebody’s gotta take NVIDIA down.

01:30:19 Yeah.

01:30:19 Where they could dominate it.

01:30:21 NVIDIA could be the new Intel.

01:30:22 Yeah, to be inside everything essentially.

01:30:26 And yet the winners in certain spaces

01:30:30 like autonomous driving, the winners,

01:30:33 only the people who are like desperately falling back

01:30:36 and trying to catch up and have a ton of money,

01:30:38 like the big automakers are the ones

01:30:40 interested in partnering with NVIDIA.

01:30:43 Oh, and I think a lot of those things

01:30:44 are gonna fall through.

01:30:45 If I were NVIDIA, sell chips.

01:30:49 Sell chips at a reasonable markup.

01:30:52 To everybody.

01:30:53 To everybody.

01:30:53 Without any restrictions.

01:30:54 Without any restrictions.

01:30:56 Intel did this.

01:30:57 Look at Intel.

01:30:58 They had a great long run.

01:30:59 NVIDIA is trying to turn their,

01:31:01 they’re like trying to productize their chips

01:31:04 way too much.

01:31:05 They’re trying to extract way more value

01:31:07 than they can sustainably.

01:31:09 Sure, you can do it tomorrow.

01:31:10 Is it gonna up your share price?

01:31:12 Sure, if you’re one of those CEOs

01:31:13 who’s like, how much can I strip mine this company?

01:31:15 And I think, you know, and that’s what’s weird about it too.

01:31:17 Like the CEO is the founder.

01:31:19 It’s the same guy.

01:31:20 Yeah.

01:31:21 I mean, I still think Jensen’s a great guy.

01:31:22 He is great.

01:31:23 Why do this?

01:31:25 You have a choice.

01:31:26 You have a choice right now.

01:31:27 Are you trying to cash out?

01:31:28 Are you trying to buy a yacht?

01:31:30 If you are, fine.

01:31:32 But if you’re trying to be

01:31:34 the next huge semiconductor company, sell chips.

01:31:37 Well, the interesting thing about Jensen

01:31:40 is he is a big vision guy.

01:31:42 So he has a plan like for 50 years down the road.

01:31:48 So it makes me wonder like.

01:31:50 How does price gouging fit into it?

01:31:51 Yeah, how does that, like it’s,

01:31:54 it doesn’t seem to make sense as a plan.

01:31:57 I worry that he’s listening to the wrong people.

01:31:59 Yeah, that’s the sense I have too sometimes.

01:32:02 Because I, despite everything, I think NVIDIA

01:32:07 is an incredible company.

01:32:09 Well, one, so I’m deeply grateful to NVIDIA

01:32:12 for the products they’ve created in the past.

01:32:13 Me too.

01:32:14 Right?

01:32:15 And so.

01:32:16 The 1080 Ti was a great GPU.

01:32:18 Still have a lot of them.

01:32:18 Still is, yeah.

01:32:21 But at the same time, it just feels like,

01:32:26 feels like you don’t want to put all your stock in NVIDIA.

01:32:29 And so like Elon is doing, what Tesla is doing

01:32:32 with Autopilot and Dojo is the Apple way is,

01:32:37 because they’re not going to share Dojo with George Hotz.

01:32:40 I know.

01:32:42 They should sell that chip.

01:32:43 Oh, they should sell that.

01:32:44 Even their accelerator.

01:32:46 The accelerator that’s in all the cars, the 30 watt one.

01:32:49 Sell it, why not?

01:32:51 So open it up.

01:32:52 Like make, why does Tesla have to be a car company?

01:32:55 Well, if you sell the chip, here’s what you get.

01:32:58 Yeah.

01:32:59 Make some money off the chips.

01:33:00 It doesn’t take away from your chip.

01:33:02 You’re going to make some money, free money.

01:33:03 And also the world is going to build an ecosystem

01:33:07 of tooling for you.

01:33:09 Right?

01:33:09 You’re not going to have to fix the bug in your tanh layer.

01:33:12 Someone else already did.

01:33:15 Well, the question, that’s an interesting question.

01:33:16 I mean, that’s the question Steve Jobs asked.

01:33:18 That’s the question Elon Musk is perhaps asking is,

01:33:24 do you want Tesla stuff inside other vehicles?

01:33:28 Inside, potentially inside like an iRobot vacuum cleaner.

01:33:32 Yeah.

01:33:34 I think you should decide where your advantages are.

01:33:37 I’m not saying Tesla should start selling battery packs

01:33:39 to automakers.

01:33:40 Because battery packs to automakers,

01:33:41 they are straight up in competition with you.

01:33:43 If I were Tesla, I’d keep the battery technology totally in house.

01:33:46 Yeah.

01:33:46 As far as we make batteries.

01:33:47 But the thing about the Tesla TPU is anybody can build that.

01:33:53 It’s just a question of, you know,

01:33:54 are you willing to spend the money?

01:33:57 It could be a huge source of revenue potentially.

01:34:00 Are you willing to spend a hundred million dollars?

01:34:02 Anyone can build it.

01:34:03 And someone will.

01:34:04 And a bunch of companies now are starting

01:34:06 trying to build AI accelerators.

01:34:08 Somebody is going to get the idea right.

01:34:10 And yeah, hopefully they don’t get greedy

01:34:13 because they’ll just lose to the next guy who finally,

01:34:15 and then eventually the Chinese are going to make knockoff

01:34:17 NVIDIA chips and that’s.

01:34:19 From your perspective,

01:34:20 I don’t know if you’re also paying attention

01:34:21 to stay on Tesla for a moment.

01:34:24 But Elon Musk has talked about a complete rewrite

01:34:28 of the neural net that they’re using.

01:34:31 That seems to, again, I’m half paying attention,

01:34:34 but it seems to involve basically a kind of integration

01:34:39 of all the sensors to where it’s a four dimensional view.

01:34:44 You know, you have a 3D model of the world over time.

01:34:47 And then you can, I think it’s done both for the,

01:34:52 for the actually, you know,

01:34:53 so the neural network is able to,

01:34:55 in a more holistic way,

01:34:56 deal with the world and make predictions and so on,

01:34:59 but also to make the annotation task more, you know, easier.

01:35:04 Like you can annotate the world in one place

01:35:08 and then kind of distribute itself across the sensors

01:35:10 and across a different,

01:35:12 like the hundreds of tasks that are involved

01:35:15 in the HydraNet.

01:35:16 What are your thoughts about this rewrite?

01:35:19 Is it just like some details that are kind of obvious

01:35:22 that are steps that should be taken,

01:35:24 or is there something fundamental

01:35:26 that could challenge your idea

01:35:27 that end to end is the right solution?

01:35:31 We’re in the middle of a big rewrite now as well.

01:35:33 We haven’t shipped a new model in a bit.

01:35:34 Of what kind?

01:35:36 We’re going from 2D to 3D.

01:35:38 Right now, all our stuff, like for example,

01:35:39 when the car pitches back,

01:35:40 the lane lines also pitch back

01:35:43 because we’re assuming the flat world hypothesis.

01:35:47 The new models do not do this.

01:35:48 The new models output everything in 3D.

01:35:50 But there’s still no annotation.

01:35:53 So the 3D is, it’s more about the output.

01:35:56 Yeah.

01:35:57 We have Zs in everything.

01:36:00 We’ve…

01:36:00 Zs.

01:36:01 Yeah.

01:36:02 We added Zs.

01:36:04 We unified a lot of stuff as well.

01:36:06 We switched from TensorFlow to PyTorch.

01:36:10 My understanding of what Tesla’s thing is,

01:36:13 is that their annotator now annotates

01:36:15 across the time dimension.

01:36:16 Mm hmm.

01:36:19 I mean, cute.

01:36:22 Why are you building an annotator?

01:36:24 I find their entire pipeline.

01:36:28 I find your vision, I mean,

01:36:30 the vision of end to end very compelling,

01:36:32 but I also like the engineering of the data engine

01:36:35 that they’ve created.

01:36:37 In terms of supervised learning pipelines,

01:36:41 that thing is damn impressive.

01:36:43 You’re basically, the idea is that you have

01:36:47 hundreds of thousands of people

01:36:49 that are doing data collection for you

01:36:51 by doing their experience.

01:36:52 So that’s kind of similar to the Comma AI model.

01:36:55 And you’re able to mine that data

01:36:59 based on the kind of edge cases you need.

01:37:02 I think it’s harder to do in the end to end learning.

01:37:07 The mining of the right edge cases.

01:37:09 Like that’s where feature engineering

01:37:11 is actually really powerful

01:37:14 because like us humans are able to do

01:37:17 this kind of mining a little better.

01:37:19 But yeah, there’s obvious, as we know,

01:37:21 there’s obvious constraints and limitations to that idea.

01:37:25 Karpathy just tweeted, he’s like,

01:37:28 you get really interesting insights

01:37:29 if you sort your validation set by loss

01:37:33 and look at the highest loss examples.

01:37:36 Yeah.

01:37:37 So yeah, I mean, you can do,

01:37:39 we have a little data engine like thing.

01:37:42 We’re training a segnet.

01:37:43 I know it’s not fancy.

01:37:44 It’s just like, okay, train the new segnet,

01:37:48 run it on 100,000 images

01:37:50 and now take the thousand with highest loss.

01:37:52 Select a hundred of those by human,

01:37:54 put those, get those ones labeled, retrain, do it again.
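
That loop, sketched in Python with placeholder helpers. One assumption worth flagging: scoring a "loss" over the image pool implies some reference signal to compare against (existing labels, or a proxy); the numbers, 100,000 images, the top 1,000, roughly 100 hand-picked, come straight from the description above.

```python
# Sketch of the little data-engine loop just described, with placeholder
# helpers. Assumes loss_of can score an image against some reference signal
# (existing labels, or a proxy).
from typing import Callable, List, Tuple

def data_engine_round(model,
                      image_pool: list,
                      loss_of: Callable[[object, object], float],
                      pick_by_human: Callable[[list], list],
                      label: Callable[[list], list],
                      retrain: Callable[[object, list], object]):
    # Run the model over the pool (e.g. 100,000 images) and score each one.
    scored: List[Tuple[float, object]] = [(loss_of(model, img), img) for img in image_pool]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    hardest = [img for _, img in scored[:1000]]   # the thousand with highest loss
    chosen = pick_by_human(hardest)[:100]         # a human picks ~100 of those
    labeled = label(chosen)                       # get those ones labeled
    return retrain(model, labeled)                # retrain, then do it again
```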

01:37:57 And so it’s a much less well written data engine.

01:38:01 And yeah, you can take these things really far

01:38:03 and it is impressive engineering.

01:38:06 And if you truly need supervised data for a problem,

01:38:09 yeah, things like data engine are the high end of that.

01:38:12 What is attention?

01:38:14 Is a human paying attention?

01:38:15 I mean, we’re going to probably build something

01:38:17 that looks like data engine

01:38:18 to push our driver monitoring further.

01:38:21 But for driving itself,

01:38:22 you have it all annotated beautifully by what the human does.

01:38:26 Yeah, that’s interesting.

01:38:27 I mean, that applies to driver attention as well.

01:38:30 Do you want to detect the eyes?

01:38:31 Do you want to detect blinking and pupil movement?

01:38:33 Do you want to detect all the like face alignments

01:38:36 or landmark detection and so on,

01:38:38 and then doing kind of reasoning based on that?

01:38:41 Or do you want to take the entirety of the face over time

01:38:43 and do end to end?

01:38:45 I mean, it’s obvious that eventually you have to do end

01:38:48 to end with some calibration, some fixes and so on,

01:38:51 but it’s like, I don’t know when that’s the right move.

01:38:55 Even if it’s end to end, there actually is,

01:38:58 there is no kind of, you have to supervise that with humans.

01:39:03 Whether a human is paying attention or not

01:39:05 is a completely subjective judgment.

01:39:08 Like you can try to like automatically do it

01:39:11 with some stuff, but you don’t have,

01:39:13 if I record a video of a human,

01:39:15 I don’t have true annotations anywhere in that video.

01:39:18 The only way to get them is with,

01:39:21 you know, other humans labeling it really.

01:39:22 Well, I don’t know.

01:39:26 If you think deeply about it,

01:39:28 you could, you might be able to just,

01:39:30 depending on the task,

01:39:31 maybe discover self annotating things, like,

01:39:34 you know, you can look at, like, steering wheel reversals

01:39:36 or something like that.

01:39:37 You can discover little moments of lapse of attention.
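
A hypothetical sketch of that idea, treating abrupt steering reversals as weak labels for possible lapses of attention; the thresholds, sampling rate, and the premise itself are assumptions for illustration only:

```python
# Hypothetical: mine sharp steering corrections as candidate inattention moments.
import numpy as np

def candidate_inattention_events(steering_deg: np.ndarray, hz: float = 20.0,
                                 jerk_thresh: float = 30.0) -> list:
    """Return timestamps (seconds) where the steering rate flips sign sharply,
    a crude proxy for a late correction after drifting."""
    rate = np.diff(steering_deg) * hz                 # deg/s
    sign_flip = np.sign(rate[1:]) != np.sign(rate[:-1])
    abrupt = np.abs(np.diff(rate)) > jerk_thresh      # large change in rate
    events = np.where(sign_flip & abrupt)[0]
    return (events / hz).tolist()
```

Events like these could then be cross-checked against the camera to see whether they actually line up with inattention.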

01:39:41 I mean, that’s where psychology comes in.

01:39:44 Are there indicators?

01:39:45 Cause you have so much data to look at.

01:39:48 So you might be able to find moments when there’s like,

01:39:51 just inattention that even with smartphone,

01:39:54 if you want to detect smartphone use,

01:39:56 you can start to zoom in.

01:39:57 I mean, that’s the gold mine, sort of the comma AI.

01:40:01 I mean, Tesla is doing this too, right?

01:40:02 Is they’re doing annotation based on,

01:40:06 it’s like a self supervised learning too.

01:40:10 It’s just a small part of the entire picture.

01:40:13 That’s kind of the challenge of solving a problem

01:40:17 in machine learning.

01:40:18 If you can discover self annotating parts of the problem,

01:40:24 right?

01:40:25 Our driver monitoring team is half a person right now.

01:40:27 I would, you know, once we have,

01:40:29 once we have two, three people on that team,

01:40:33 I definitely want to look at self annotating stuff

01:40:35 for attention.

01:40:38 Let’s go back for a sec to Comma and what,

01:40:43 you know, for people who are curious to try it out,

01:40:46 how do you install a Comma in, say, a 2020 Toyota Corolla

01:40:51 or like, what are the cars that are supported?

01:40:53 What are the cars that you recommend?

01:40:55 And what does it take?

01:40:57 You have a few videos out, but maybe through words,

01:41:00 can you explain what’s it take to actually install a thing?

01:41:02 So we support, I think it’s 91 cars, 91 makes and models.

01:41:08 We’ve got to get to 100 this year.

01:41:10 Nice.

01:41:11 The, yeah, the 2020 Corolla, great choice.

01:41:16 The 2020 Sonata, it’s using the stock longitudinal.

01:41:21 It’s using just our lateral control,

01:41:23 but it’s a very refined car.

01:41:25 Their longitudinal control is not bad at all.

01:41:28 So yeah, Corolla, Sonata,

01:41:31 or if you’re willing to get your hands a little dirty

01:41:34 and look in the right places on the internet,

01:41:35 the Honda Civic is great,

01:41:37 but you’re going to have to install a modified EPS firmware

01:41:40 in order to get a little bit more torque.

01:41:42 And I can’t help you with that.

01:41:43 Comma does not officially endorse that,

01:41:45 but we have been doing it.

01:41:47 We didn’t ever release it.

01:41:49 We waited for someone else to discover it.

01:41:51 And then, you know.

01:41:52 And you have a Discord server where people,

01:41:55 there’s a very active developer community, I suppose.

01:42:00 So depending on the level of experimentation

01:42:04 you’re willing to do, that’s the community.

01:42:07 If you just want to buy it and you have a supported car,

01:42:11 it’s 10 minutes to install.

01:42:13 There’s YouTube videos.

01:42:15 It’s Ikea furniture level.

01:42:17 If you can set up a table from Ikea,

01:42:19 you can install a Comma 2 in your supported car

01:42:21 and it will just work.

01:42:22 Now you’re like, oh, but I want this high end feature

01:42:24 or I want to fix this bug.

01:42:26 Okay, well, welcome to the developer community.

01:42:29 So what, if I wanted to,

01:42:31 this is something I asked you offline like a few months ago.

01:42:34 If I wanted to run my own code to,

01:42:39 so use Comma as a platform

01:42:43 and try to run something like OpenPilot,

01:42:46 what does it take to do that?

01:42:48 So there’s a toggle in the settings called enable SSH.

01:42:51 And if you toggle that, you can SSH into your device.

01:42:54 You can modify the code.

01:42:55 You can upload whatever code you want to it.

01:42:58 There’s a whole lot of people.

01:42:59 So about 60% of people are running stock comma.

01:43:03 About 40% of people are running forks.

01:43:05 And there’s a community of,

01:43:07 there’s a bunch of people who maintain these forks

01:43:10 and these forks support different cars

01:43:13 or they have different toggles.

01:43:15 We try to keep away from the toggles

01:43:17 that are like disabled driver monitoring,

01:43:18 but there’s some people might want that kind of thing

01:43:21 and like, yeah, you can, it’s your car.

01:43:24 I’m not here to tell you.

01:43:29 We have some, we ban,

01:43:31 if you’re trying to subvert safety features,

01:43:32 you’re banned from our Discord.

01:43:33 I don’t want anything to do with you,

01:43:35 but there’s some forks doing that.

01:43:37 Got it.

01:43:39 So you encourage responsible forking.

01:43:42 Yeah, yeah.

01:43:43 We encourage, some people, yeah, some people,

01:43:46 like there’s forks that will do,

01:43:48 some people just like having a lot of readouts on the UI,

01:43:52 like a lot of like flashing numbers.

01:43:53 So there’s forks that do that.

01:43:55 Some people don’t like the fact that it disengages

01:43:57 when you press the gas pedal.

01:43:58 There’s forks that disable that.

01:44:00 Got it.

01:44:01 Now the stock experience is what like,

01:44:04 so it does both lane keeping

01:44:06 and longitudinal control all together.

01:44:08 So it’s not separate like it is in autopilot.

01:44:11 No, so, okay.

01:44:12 Some cars we use the stock longitudinal control.

01:44:15 We don’t do the longitudinal control in all the cars.

01:44:17 Some cars, the ACCs are pretty good in the cars.

01:44:19 It’s the lane keep that’s atrocious in anything

01:44:21 except for autopilot and super cruise.

01:44:23 But, you know, you just turn it on and it works.

01:44:27 What does the engagement look like?

01:44:29 Yeah, so we have, I mean,

01:44:30 I’m very concerned about mode confusion.

01:44:32 I’ve experienced it on super cruise and autopilot

01:44:36 where like autopilot, like autopilot disengages.

01:44:39 I don’t realize that the ACC is still on.

01:44:42 The lead car moves slightly over

01:44:44 and then the Tesla accelerates

01:44:46 to like whatever my set speed is super fast.

01:44:48 I’m like, what’s going on here?

01:44:51 We have engaged and disengaged.

01:44:53 And this is similar to my understanding, I’m not a pilot,

01:44:56 but my understanding is either the pilot is in control

01:45:00 or the copilot is in control.

01:45:02 And we have the same kind of transition system.

01:45:05 Either open pilot is engaged or open pilot is disengaged.

01:45:08 Engage with cruise control,

01:45:10 disengage with either gas brake or cancel.
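
A toy sketch of that single engaged/disengaged transition, purely illustrative rather than openpilot’s actual code:

```python
# Toy engagement state machine: engage only via cruise control,
# disengage on gas, brake, or cancel.
from dataclasses import dataclass

@dataclass
class Engagement:
    engaged: bool = False

    def update(self, cruise_set: bool, gas: bool, brake: bool, cancel: bool) -> bool:
        if self.engaged and (gas or brake or cancel):
            self.engaged = False
        elif not self.engaged and cruise_set:
            self.engaged = True
        return self.engaged
```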

01:45:13 Let’s talk about money.

01:45:14 What’s the business strategy for Comma?

01:45:17 Profitable.

01:45:18 Well, so you’re.

01:45:19 We did it.

01:45:20 So congratulations.

01:45:23 What, so basically selling,

01:45:25 so we should say Comma costs a thousand bucks, Comma two?

01:45:29 200 for the interface to the car as well.

01:45:31 It’s 1,200, I’ll say that.

01:45:34 Nobody’s usually upfront like this.

01:45:36 Yeah, you gotta add the tack on, right?

01:45:38 Yeah.

01:45:39 I love it.

01:45:39 I’m not gonna lie to you.

01:45:41 Trust me, it will add $1,200 of value to your life.

01:45:43 Yes, it’s still super cheap.

01:45:45 30 days, no questions asked, money back guarantee,

01:45:47 and prices are only going up.

01:45:50 If there ever is future hardware,

01:45:52 it could cost a lot more than $1,200.

01:45:53 So Comma three is in the works.

01:45:56 It could be.

01:45:57 All I will say is future hardware

01:45:59 is going to cost a lot more than the current hardware.

01:46:02 Yeah, the people that use,

01:46:05 the people I’ve spoken with that use Comma,

01:46:07 that use open pilot,

01:46:10 first of all, they use it a lot.

01:46:12 So people that use it, they fall in love with it.

01:46:14 Oh, our retention rate is insane.

01:46:16 It’s a good sign.

01:46:17 Yeah.

01:46:18 It’s a really good sign.

01:46:19 70% of Comma two buyers are daily active users.

01:46:23 Yeah, it’s amazing.

01:46:27 Oh, also, we don’t plan on stopping selling the Comma two.

01:46:30 Like it’s, you know.

01:46:31 So whatever you create that’s beyond Comma two,

01:46:36 it would be potentially a phase shift.

01:46:40 Like it’s so much better that,

01:46:42 like you could use Comma two

01:46:44 and you can use Comma whatever.

01:46:45 Depends what you want.

01:46:46 It’s 3.41, 42.

01:46:48 Yeah.

01:46:49 You know, autopilot hardware one versus hardware two.

01:46:52 The Comma two is kind of like hardware one.

01:46:53 Got it, got it.

01:46:54 You can still use both.

01:46:55 Got it, got it.

01:46:56 I think I heard you talk about retention rate

01:46:58 with the VR headsets that the average is just once.

01:47:01 Yeah.

01:47:02 Just fast.

01:47:02 I mean, it’s such a fascinating way

01:47:03 to think about technology.

01:47:05 And this is a really, really good sign.

01:47:07 And the other thing that people say about Comma

01:47:09 is like they can’t believe they’re getting this for a thousand bucks.

01:47:12 Right?

01:47:12 It seems like some kind of steal.

01:47:17 So, but in terms of like longterm business strategies

01:47:20 that basically to put,

01:47:21 so it’s currently in like a thousand plus cars.

01:47:27 1,200.

01:47:28 More, more.

01:47:30 So yeah, dailies is about, dailies is about 2,000.

01:47:35 Weeklys is about 2,500, monthlys is over 3,000.

01:47:38 Wow.

01:47:39 We’ve grown a lot since we last talked.

01:47:42 Is the goal, like can we talk crazy for a second?

01:47:44 I mean, what’s the goal to overtake Tesla?

01:47:48 Let’s talk, okay, so.

01:47:49 I mean, Android did overtake iOS.

01:47:51 That’s exactly it, right?

01:47:52 So they did it.

01:47:55 I actually don’t know the timeline of that one.

01:47:57 But let’s talk, because everything is in alpha now.

01:48:02 The autopilot you could argue is in alpha

01:48:03 in terms of towards the big mission

01:48:05 of autonomous driving, right?

01:48:07 And so what, yeah, is your goal to overtake

01:48:11 millions of cars essentially?

01:48:13 Of course.

01:48:15 Where would it stop?

01:48:16 Like it’s open source software.

01:48:18 It might not be millions of cars

01:48:19 with a piece of comma hardware, but yeah.

01:48:21 I think open pilot at some point

01:48:24 will cross over autopilot in users,

01:48:26 just like Android crossed over iOS.

01:48:29 How does Google make money from Android?

01:48:31 It’s complicated.

01:48:34 Their own devices make money.

01:48:37 Google, Google makes money

01:48:39 by just kind of having you on the internet.

01:48:42 Yes.

01:48:43 Google search is built in, Gmail is built in.

01:48:45 Android is just a shill

01:48:46 for the rest of Google’s ecosystem.

01:48:48 Yeah, but the problem is Android is not,

01:48:50 is a brilliant thing.

01:48:52 I mean, Android arguably changed the world.

01:48:55 So there you go.

01:48:56 That’s, you can feel good ethically speaking.

01:49:00 But as a business strategy, it’s questionable.

01:49:04 Or sell hardware.

01:49:05 Sell hardware.

01:49:06 I mean, it took Google a long time to come around to it,

01:49:08 but they are now making money on the Pixel.

01:49:10 You’re not about money, you’re more about winning.

01:49:13 Yeah, of course.

01:49:14 No, but if only 10% of open pilot devices

01:49:18 come from comma AI.

01:49:19 They still make a lot.

01:49:20 That is still, yes.

01:49:21 That is a ton of money for our company.

01:49:22 But can’t somebody create a better comma using open pilot?

01:49:27 Or are you basically saying, well, I’ll compete them?

01:49:28 Well, I’ll compete you.

01:49:29 Can you create a better Android phone than the Google Pixel?

01:49:32 Right.

01:49:32 I mean, you can, but like, you know.

01:49:34 I love that.

01:49:35 So you’re confident, like, you know

01:49:37 what the hell you’re doing.

01:49:38 Yeah.

01:49:40 It’s confidence and merit.

01:49:43 I mean, our money comes from, we’re

01:49:44 a consumer electronics company.

01:49:46 Yeah.

01:49:46 And put it this way.

01:49:48 So we sold like 3,000 comma twos.

01:49:51 2,500 right now.

01:49:54 And like, OK, we’re probably going

01:49:59 to sell 10,000 units next year.

01:50:01 10,000 units, even just $1,000 a unit, OK,

01:50:04 we’re at 10 million in revenue.

01:50:09 Get that up to 100,000, maybe double the price of the unit.

01:50:12 Now we’re talking like 200 million revenue.

01:50:13 We’re talking, like, serious.

01:50:14 Yeah, actually making money.

01:50:15 One of the rare semi autonomous or autonomous vehicle companies

01:50:19 that are actually making money.

01:50:21 Yeah.
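
The back-of-envelope math restated; the unit counts and prices are the hypotheticals from the conversation, not reported figures:

```python
# Rough revenue scenarios mentioned above (illustrative only).
print(10_000 * 1_000)    # 10,000,000  -> ~$10M at 10k units, $1k each
print(100_000 * 2_000)   # 200,000,000 -> ~$200M at 100k units, doubled price
```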

01:50:22 You know, if you look at a model,

01:50:24 and we were just talking about this yesterday.

01:50:26 If you look at a model, and like you’re AB testing your model,

01:50:29 and if you’re one branch of the AB test,

01:50:32 the losses go down very fast in the first five epochs.

01:50:35 That model is probably going to converge

01:50:37 to something considerably better than the one

01:50:39 where the losses are going down slower.
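
A hedged sketch of that heuristic, with `train_one_epoch` and `val_loss` as hypothetical stand-ins for a real training loop:

```python
# Compare A/B branches by how fast validation loss falls early on,
# then keep the faster-improving branch.
def pick_branch(branches, train_one_epoch, val_loss, probe_epochs=5):
    history = {}
    for name, model in branches.items():
        losses = []
        for _ in range(probe_epochs):
            train_one_epoch(model)
            losses.append(val_loss(model))
        history[name] = losses
    # Branch whose loss dropped the most over the probe window wins.
    return max(history, key=lambda n: history[n][0] - history[n][-1])
```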

01:50:41 Why do people think this is going to stop?

01:50:43 Why do people think one day there’s

01:50:44 going to be a great like, well, Waymo’s eventually

01:50:46 going to surpass you guys?

01:50:49 Well, they’re not.

01:50:52 Do you see like a world where like a Tesla or a car

01:50:55 like a Tesla would be able to basically press a button

01:50:59 and you like switch to open pilot?

01:51:01 You know, you load in.

01:51:04 No, so I think so first off, I think

01:51:06 that we may surpass Tesla in terms of users.

01:51:10 I do not think we’re going to surpass Tesla ever

01:51:12 in terms of revenue.

01:51:13 I think Tesla can capture a lot more revenue per user

01:51:16 than we can.

01:51:17 But this mimics the Android iOS model exactly.

01:51:20 There may be more Android devices,

01:51:22 but there’s a lot more iPhones than Google Pixels.

01:51:24 So I think there’ll be a lot more Tesla cars sold

01:51:26 than pieces of Comma hardware.

01:51:30 And then as far as a Tesla owner being

01:51:34 able to switch to open pilot, do iPhones run Android?

01:51:40 No, but it doesn’t make sense.

01:51:42 You can if you really want to do it,

01:51:43 but it doesn’t really make sense.

01:51:44 Like it’s not.

01:51:45 It doesn’t make sense.

01:51:46 Who cares?

01:51:46 What about if a large company like automakers, Ford, GM,

01:51:53 Toyota came to George Hotz?

01:51:53 Or on the tech space, Amazon, Facebook, Google

01:51:58 came with a large pile of cash?

01:52:01 Would you consider being purchased?

01:52:07 Do you see that as a one possible?

01:52:10 Not seriously, no.

01:52:12 I would probably see how much shit they’d entertain from me.

01:52:19 And if they’re willing to jump through a bunch of my hoops,

01:52:22 then maybe.

01:52:22 But no, not the way that M&A works today.

01:52:25 I mean, we’ve been approached.

01:52:26 And I laugh in these people’s faces.

01:52:28 I’m like, are you kidding?

01:52:31 Yeah.

01:52:31 Because it’s so demeaning.

01:52:33 The M&A people are so demeaning to companies.

01:52:36 They treat the startup world as their innovation ecosystem.

01:52:41 And they think that I’m cool with going along with that,

01:52:43 so I can have some of their scam fake Fed dollars.

01:52:46 Fed coin.

01:52:47 What am I going to do with more Fed coin?

01:52:49 Fed coin.

01:52:50 Fed coin, man.

01:52:51 I love that.

01:52:52 So that’s the cool thing about podcasting,

01:52:54 actually, is people criticize.

01:52:56 I don’t know if you’re familiar with Spotify giving Joe Rogan

01:53:00 $100 million.

01:53:01 I don’t know about that.

01:53:03 And they respect, despite all the shit

01:53:08 that people are talking about Spotify,

01:53:11 people understand that podcasters like Joe Rogan

01:53:15 know what the hell they’re doing.

01:53:17 So they give them money and say, just do what you do.

01:53:21 And the equivalent for you would be like,

01:53:25 George, do what the hell you do, because you’re good at it.

01:53:28 Try not to murder too many people.

01:53:31 There’s some kind of common sense things,

01:53:33 like just don’t go on a weird rampage of it.

01:53:37 Yeah.

01:53:38 It comes down to what companies I could respect, right?

01:53:43 Could I respect GM?

01:53:44 Never.

01:53:46 No, I couldn’t.

01:53:47 I mean, could I respect a Hyundai?

01:53:50 More so.

01:53:52 That’s a lot closer.

01:53:53 Toyota?

01:53:54 What’s your?

01:53:55 Nah.

01:53:56 Nah.

01:53:57 Korean is the way.

01:53:59 I think that the Japanese, the Germans, the US, they’re all

01:54:02 too, they’re all too, they all think they’re too great.

01:54:05 What about the tech companies?

01:54:07 Apple?

01:54:08 Apple is, of the tech companies that I could respect,

01:54:11 Apple’s the closest.

01:54:12 Yeah.

01:54:12 I mean, I could never.

01:54:13 It would be ironic.

01:54:14 It would be ironic if Comma AI is acquired by Apple.

01:54:19 I mean, Facebook, look, I quit Facebook 10 years ago

01:54:21 because I didn’t respect the business model.

01:54:24 Google has declined so fast in the last five years.

01:54:28 What are your thoughts about Waymo and its present

01:54:32 and its future?

01:54:33 Let me start by saying something nice, which is I’ve

01:54:39 visited them a few times and have ridden in their cars.

01:54:45 And the engineering that they’re doing,

01:54:49 both the research and the actual development

01:54:51 and the engineering they’re doing

01:54:53 and the scale they’re actually achieving

01:54:55 by doing it all themselves is really impressive.

01:54:58 And the balance of safety and innovation.

01:55:01 And the cars work really well for the routes they drive.

01:55:07 It drives fast, which was very surprising to me.

01:55:10 It drives the speed limit or faster than the speed limit.

01:55:14 It goes.

01:55:16 And it works really damn well.

01:55:17 And the interface is nice.

01:55:19 In Chandler, Arizona, yeah.

01:55:20 Yeah, in Chandler, Arizona, very specific environment.

01:55:22 So it gives me enough material in my mind

01:55:27 to push back against the madmen of the world,

01:55:30 like George Hotz, to be like, because you kind of imply

01:55:36 there’s zero probability they’re going to win.

01:55:38 And after I’ve used, after I’ve ridden in it, to me,

01:55:43 it’s not zero.

01:55:44 Oh, it’s not for technology reasons.

01:55:46 Bureaucracy?

01:55:48 No, it’s worse than that.

01:55:49 It’s actually for product reasons, I think.

01:55:51 Oh, you think they’re just not capable of creating

01:55:53 an amazing product?

01:55:55 No, I think that the product that they’re building

01:55:58 doesn’t make sense.

01:56:01 So a few things.

01:56:03 You say the Waymos are fast.

01:56:05 Benchmark a Waymo against a competent Uber driver.

01:56:09 Right.

01:56:09 Right?

01:56:10 The Uber driver’s faster.

01:56:11 It’s not even about speed.

01:56:12 It’s the thing you said.

01:56:13 It’s about the experience of being stuck at a stop sign

01:56:16 because pedestrians are crossing nonstop.

01:56:20 I like when my Uber driver doesn’t come to a full stop

01:56:22 at the stop sign.

01:56:22 Yeah.

01:56:23 You know?

01:56:24 And so let’s say the Waymos are 20% slower than an Uber.

01:56:31 Right?

01:56:33 You can argue that they’re going to be cheaper.

01:56:35 And I argue that users already have the choice

01:56:37 to trade off money for speed.

01:56:39 It’s called UberPool.

01:56:42 I think it’s like 15% of rides are UberPools.

01:56:45 Right?

01:56:46 Users are not willing to trade off money for speed.

01:56:49 So the whole product that they’re building

01:56:52 is not going to be competitive with traditional ride sharing

01:56:56 networks.

01:56:56 Right.

01:56:59 And also, whether there’s profit to be made

01:57:04 depends entirely on one company having a monopoly.

01:57:07 I think that the level four autonomous ride sharing

01:57:11 vehicles market is going to look a lot like the scooter market

01:57:14 if even the technology does come to exist, which I question.

01:57:18 Who’s doing well in that market?

01:57:20 It’s a race to the bottom.

01:57:22 Well, it could be closer like an Uber and a Lyft,

01:57:25 where it’s just one or two players.

01:57:28 Well, the scooter people have given up

01:57:31 trying to market scooters as a practical means

01:57:34 of transportation.

01:57:35 And they’re just like, they’re super fun to ride.

01:57:37 Look at wheels.

01:57:38 I love those things.

01:57:39 And they’re great on that front.

01:57:40 Yeah.

01:57:41 But from an actual transportation product

01:57:43 perspective, I do not think scooters are viable.

01:57:46 And I do not think level four autonomous cars are viable.

01:57:49 If you, let’s play a fun experiment.

01:57:51 If you ran, let’s do a Tesla and let’s do Waymo.

01:57:56 If Elon Musk took a vacation for a year, he just said,

01:58:01 screw it, I’m going to go live on an island, no electronics.

01:58:05 And the board decides that we need to find somebody

01:58:07 to run the company.

01:58:09 And they did decide that you should run the company

01:58:11 for a year.

01:58:12 How do you run Tesla differently?

01:58:14 I wouldn’t change much.

01:58:16 Do you think they’re on the right track?

01:58:17 I wouldn’t change.

01:58:18 I mean, I’d have some minor changes.

01:58:21 But even my debate with Tesla about end

01:58:25 to end versus SegNets, that’s just software.

01:58:29 Who cares?

01:58:30 It’s not like you’re doing something terrible with SegNets.

01:58:33 You’re probably building something that’s

01:58:35 at least going to help you debug the end to end system a lot.

01:58:39 It’s very easy to transition from what they have

01:58:42 to an end to end kind of thing.

01:58:45 And then I presume you would, in the Model Y

01:58:50 or maybe in the Model 3, start adding driver

01:58:52 sensing with infrared.

01:58:53 Yes, I would add infrared camera, infrared lights

01:58:58 right away to those cars.

01:59:02 And start collecting that data and do all that kind of stuff,

01:59:04 yeah.

01:59:05 Very much.

01:59:06 I think they’re already kind of doing it.

01:59:07 It’s an incredibly minor change.

01:59:09 If I actually were CEO of Tesla, first off,

01:59:11 I’d be horrified that I wouldn’t be able to do

01:59:13 a better job as Elon.

01:59:14 And then I would try to understand

01:59:16 the way he’s done things before.

01:59:17 You would also have to take over his Twitter.

01:59:20 I don’t tweet.

01:59:22 Yeah, what’s your Twitter situation?

01:59:24 Why are you so quiet on Twitter?

01:59:25 Since, you know, with Comma, what’s your social network presence like?

01:59:30 Because on Instagram, you do live streams.

01:59:34 You understand the music of the internet,

01:59:39 but you don’t always fully engage into it.

01:59:41 You’re part time.

01:59:42 Well, I used to have a Twitter.

01:59:44 Yeah, I mean, Instagram is a pretty place.

01:59:47 Instagram is a beautiful place.

01:59:49 It glorifies beauty.

01:59:49 I like Instagram’s values as a network.

01:59:53 Twitter glorifies conflict, glorifies shots,

02:00:00 taking shots at people.

02:00:01 And it’s like, you know, Twitter and Donald Trump

02:00:05 are perfectly, they’re perfect for each other.

02:00:08 So Tesla’s on the right track in your view.

02:00:12 OK, so let’s try, let’s really try this experiment.

02:00:16 If you ran Waymo, let’s say they’re,

02:00:19 I don’t know if you agree, but they

02:00:21 seem to be at the head of the pack of the kind of,

02:00:25 what would you call that approach?

02:00:27 Like it’s not necessarily lighter based

02:00:29 because it’s not about lighter.

02:00:30 Level four robotaxi.

02:00:31 Level four robotaxi, all in before making any revenue.

02:00:37 So they’re probably at the head of the pack.

02:00:38 If you were said, hey, George, can you

02:00:42 please run this company for a year, how would you change it?

02:00:47 I would go.

02:00:47 I would get Anthony Levandowski out of jail,

02:00:49 and I would put him in charge of the company.

02:00:56 Well, let’s try to break that apart.

02:00:58 Why do you want to destroy the company by doing that?

02:01:01 Or do you mean you like renegade style thinking that pushes,

02:01:09 that throws away bureaucracy and goes

02:01:11 to first principle thinking?

02:01:12 What do you mean by that?

02:01:14 I think Anthony Levandowski is a genius,

02:01:16 and I think he would come up with a much better idea of what

02:01:19 to do with Waymo than me.

02:01:22 So you mean that unironically.

02:01:23 He is a genius.

02:01:24 Oh, yes.

02:01:25 Oh, absolutely.

02:01:26 Without a doubt.

02:01:27 I mean, I’m not saying there’s no shortcomings,

02:01:30 but in the interactions I’ve had with him, yeah.

02:01:34 What?

02:01:35 He’s also willing to take, like, who knows

02:01:38 what he would do with Waymo?

02:01:39 I mean, he’s also out there, like far more out there

02:01:41 than I am.

02:01:41 Yeah, there’s big risks.

02:01:43 What do you make of him?

02:01:44 I was going to talk to him on this podcast,

02:01:47 and I was going back and forth.

02:01:48 I’m such a gullible, naive human.

02:01:51 Like, I see the best in people.

02:01:53 And I slowly started to realize that there

02:01:56 might be some people out there that, like,

02:02:02 have multiple faces to the world.

02:02:05 They’re, like, deceiving and dishonest.

02:02:08 I still refuse to, like, I just, I trust people,

02:02:13 and I don’t care if I get hurt by it.

02:02:14 But, like, you know, sometimes you

02:02:16 have to be a little bit careful, especially platform

02:02:18 wise and podcast wise.

02:02:21 What do you, what am I supposed to think?

02:02:23 So you think, you think he’s a good person?

02:02:26 Oh, I don’t know.

02:02:27 I don’t really make moral judgments.

02:02:30 It’s difficult to.

02:02:30 Oh, I mean this about the Waymo.

02:02:32 I actually, I mean that whole idea very nonironically

02:02:34 about what I would do.

02:02:36 The problem with putting me in charge of Waymo

02:02:38 is Waymo is already $10 billion in the hole, right?

02:02:41 Whatever idea Waymo does, look, Comma’s profitable, Comma’s

02:02:44 raised $8.1 million.

02:02:46 That’s small, you know, that’s small money.

02:02:48 Like, I can build a reasonable consumer electronics company

02:02:50 and succeed wildly at that and still never be able to pay back

02:02:54 Waymo’s $10 billion.

02:02:55 So I think the basic idea with Waymo, well,

02:02:58 forget the $10 billion because they have some backing,

02:03:00 but your basic thing is, like, what can we do

02:03:04 to start making some money?

02:03:05 Well, no, I mean, my bigger idea is, like,

02:03:07 whatever the idea is that’s gonna save Waymo,

02:03:10 I don’t have it.

02:03:11 It’s gonna have to be a big risk idea

02:03:13 and I cannot think of a better person

02:03:15 than Anthony Levandowski to do it.

02:03:17 So that is completely what I would do as CEO of Waymo.

02:03:20 I would call myself a transitionary CEO,

02:03:22 do everything I can to fix that situation up.

02:03:24 I’m gonna see.

02:03:25 Yeah.

02:03:27 Yeah.

02:03:28 Because I can’t do it, right?

02:03:29 Like, I can’t, I mean, I can talk about how

02:03:33 what I really wanna do is just apologize

02:03:35 for all those corny, you know, ad campaigns

02:03:38 and be like, here’s the real state of the technology.

02:03:40 Yeah, that’s, like, I have several criticism.

02:03:42 I’m a little bit more bullish on Waymo

02:03:44 than you seem to be, but one criticism I have

02:03:48 is it went into corny mode too early.

02:03:50 Like, it’s still a startup.

02:03:52 It hasn’t delivered on anything.

02:03:53 So it should be, like, more renegade

02:03:56 and show off the engineering that they’re doing,

02:03:59 which just can be impressive,

02:04:00 as opposed to doing these weird commercials

02:04:02 of, like, your friendly car company.

02:04:07 I mean, that’s my biggest snipe at Waymo is always,

02:04:10 that guy’s a paid actor.

02:04:11 That guy’s not a Waymo user.

02:04:12 He’s a paid actor.

02:04:13 Look here, I found his call sheet.

02:04:15 Do kind of like what SpaceX is doing

02:04:17 with the rocket launches.

02:04:18 Just put the nerds up front, put the engineers up front,

02:04:22 and just, like, show failures too, just.

02:04:25 I love SpaceX’s, yeah.

02:04:27 Yeah, the thing that they’re doing is right,

02:04:29 and it just feels like the right.

02:04:31 But.

02:04:32 We’re all so excited to see them succeed.

02:04:34 Yeah.

02:04:35 I can’t wait to see when it won’t fail, you know?

02:04:37 Like, you lie to me, I want you to fail.

02:04:39 You tell me the truth, you be honest with me,

02:04:41 I want you to succeed.

02:04:42 Yeah.

02:04:44 Ah, yeah, and that requires the renegade CEO, right?

02:04:50 I’m with you, I’m with you.

02:04:51 I still have a little bit of faith in Waymo

02:04:54 for the renegade CEO to step forward, but.

02:04:57 It’s not, it’s not John Krafcik.

02:05:00 Yeah, it’s, you can’t.

02:05:02 It’s not Chris Urmson.

02:05:04 And those people may be very good at certain things.

02:05:07 Yeah.

02:05:08 But they’re not renegades.

02:05:10 Yeah, because these companies are fundamentally,

02:05:12 even though we’re talking about billion dollars,

02:05:14 all these crazy numbers,

02:05:15 they’re still, like, early stage startups.

02:05:19 I mean, and I just, if you are pre revenue

02:05:21 and you’ve raised 10 billion dollars,

02:05:23 I have no idea, like, this just doesn’t work.

02:05:26 You know, it’s against everything Silicon Valley.

02:05:28 Where’s your minimum viable product?

02:05:29 You know, where’s your users?

02:05:31 Where’s your growth numbers?

02:05:33 This is traditional Silicon Valley.

02:05:36 Why do you not apply it? What, do you think

02:05:38 you’re too big to fail already, like?

02:05:41 How do you think autonomous driving will change society?

02:05:45 So the mission is, for comma, to solve self driving.

02:05:52 Do you have, like, a vision of the world

02:05:54 of how it’ll be different?

02:05:57 Is it as simple as A to B transportation?

02:06:00 Or is there, like, cause these are robots.

02:06:03 It’s not about autonomous driving in and of itself.

02:06:05 It’s what the technology enables.

02:06:09 It’s, I think it’s the coolest applied AI problem.

02:06:12 I like it because it has a clear path to monetary value.

02:06:17 But as far as that being the thing that changes the world,

02:06:21 I mean, no, like, there’s cute things we’re doing at Comma.

02:06:25 Like, who’d have thought you could stick a phone

02:06:26 on the windshield and it’ll drive.

02:06:29 But like, really, the product that you’re building

02:06:31 is not something that people were not capable

02:06:33 of imagining 50 years ago.

02:06:35 So no, it doesn’t change the world on that front.

02:06:37 Could people have imagined the internet 50 years ago?

02:06:39 Only true genius visionaries.

02:06:42 Everyone could have imagined autonomous cars 50 years ago.

02:06:45 It’s like a car, but I don’t drive it.

02:06:47 See, I have this sense, and I told you, like,

02:06:49 my longterm dream is robots with which you have deep,

02:06:55 with whom you have deep connections, right?

02:06:59 And there’s different trajectories towards that.

02:07:03 And I’ve been thinking,

02:07:04 so I’ve been thinking of launching a startup.

02:07:07 I see autonomous vehicles

02:07:09 as a potential trajectory to that.

02:07:11 That’s not where the direction I would like to go,

02:07:16 but I also see Tesla or even Comma AI,

02:07:19 like, pivoting into robotics broadly defined

02:07:24 at some stage in the way, like you’re mentioning,

02:07:27 the internet didn’t expect.

02:07:29 Let’s solve, you know, when this comes up at Comma,

02:07:32 we could talk about this,

02:07:33 but let’s solve self driving cars first.

02:07:35 You gotta stay focused on the mission.

02:07:37 Don’t, don’t, don’t, you’re not too big to fail.

02:07:39 For however much I think Comma’s winning,

02:07:41 like, no, no, no, no, no, you’re winning

02:07:43 when you solve level five self driving cars.

02:07:45 And until then, you haven’t won.

02:07:46 And you know, again, you wanna be arrogant

02:07:48 in the face of other people, great.

02:07:50 You wanna be arrogant in the face of nature, you’re an idiot.

02:07:53 Stay mission focused, brilliantly put.

02:07:56 Like I mentioned, thinking of launching a startup,

02:07:58 I’ve been considering, actually, before COVID,

02:08:01 I’ve been thinking of moving to San Francisco.

02:08:03 Ooh, ooh, I wouldn’t go there.

02:08:06 So why is, well, and now I’m thinking

02:08:09 about potentially Austin and we’re in San Diego now.

02:08:13 San Diego, come here.

02:08:14 So why, what, I mean, you’re such an interesting human.

02:08:20 You’ve launched so many successful things.

02:08:23 What, why San Diego?

02:08:26 What do you recommend?

02:08:27 Why not San Francisco?

02:08:29 Have you thought, so in your case,

02:08:31 San Diego with Qualcomm and Snapdragon,

02:08:33 I mean, that’s an amazing combination.

02:08:36 But.

02:08:37 That wasn’t really why.

02:08:38 That wasn’t the why?

02:08:39 No, I mean, Qualcomm was an afterthought.

02:08:41 Qualcomm was, it was a nice thing to think about.

02:08:42 It’s like, you can have a tech company here.

02:08:45 Yeah.

02:08:45 And a good one, I mean, you know, I like Qualcomm, but.

02:08:48 No.

02:08:49 Well, so why is San Diego better than San Francisco?

02:08:50 Why does San Francisco suck?

02:08:51 Well, so, okay, so first off,

02:08:53 we all kind of said like, we wanna stay in California.

02:08:55 People like the ocean.

02:08:57 You know, California, for its flaws,

02:09:00 it’s like a lot of the flaws of California

02:09:02 are not necessarily California as a whole,

02:09:03 and they’re much more San Francisco specific.

02:09:05 Yeah.

02:09:06 San Francisco, so I think first tier cities in general

02:09:09 have stopped wanting growth.

02:09:13 Well, you have like in San Francisco, you know,

02:09:15 the voting class always votes to not build more houses

02:09:18 because they own all the houses.

02:09:19 And they’re like, well, you know,

02:09:21 once people have figured out how to vote themselves

02:09:23 more money, they’re gonna do it.

02:09:25 It is so insanely corrupt.

02:09:27 It is not balanced at all, like political party wise,

02:09:31 you know, it’s a one party city and.

02:09:34 For all the discussion of diversity,

02:09:42 it starts lacking real diversity of thought,

02:09:42 of background, of approaches, of strategies, of ideas.

02:09:48 It’s kind of a strange place

02:09:51 that it’s the loudest people about diversity

02:09:54 and the biggest lack of diversity.

02:09:56 I mean, that’s what they say, right?

02:09:58 It’s the projection.

02:10:00 Projection, yeah.

02:10:02 Yeah, it’s interesting.

02:10:02 And even people in Silicon Valley tell me

02:10:04 that’s like high up people,

02:10:07 everybody is like, this is a terrible place.

02:10:10 It doesn’t make sense.

02:10:10 I mean, and coronavirus is really what killed it.

02:10:13 San Francisco was the number one exodus

02:10:17 during coronavirus.

02:10:18 We still think San Diego is a good place to be.

02:10:21 Yeah.

02:10:23 Yeah, I mean, we’ll see.

02:10:24 We’ll see what happens with California a bit longer term.

02:10:29 Like Austin’s an interesting choice.

02:10:32 I wouldn’t, I don’t have really anything bad to say

02:10:33 about Austin either,

02:10:35 except for the extreme heat in the summer,

02:10:37 which, but that’s like very on the surface, right?

02:10:40 I think as far as like an ecosystem goes, it’s cool.

02:10:43 I personally love Colorado.

02:10:45 Colorado’s great.

02:10:47 Yeah, I mean, you have these states that are,

02:10:49 like just way better run.

02:10:51 California is, you know, it’s especially San Francisco.

02:10:55 It’s not a tie horse and like, yeah.

02:10:58 Can I ask you for advice to me and to others

02:11:02 about what’s it take to build a successful startup?

02:11:07 Oh, I don’t know.

02:11:08 I haven’t done that.

02:11:09 Talk to someone who did that.

02:11:10 Well, you’ve, you know,

02:12:14 this is like another book of yours

02:11:16 that I’ll buy for $67, I suppose.

02:11:18 So there’s, um.

02:11:20 One of these days I’ll sell out.

02:11:24 Yeah, that’s right.

02:11:24 Jailbreaks are going to be a dollar

02:11:26 and books are going to be 67.

02:12:27 How I Jailbroke the iPhone, by George Hotz.

02:11:32 That’s right.

02:12:32 How I jailbroke the iPhone and you can too.

02:11:35 You can too.

02:11:36 67 dollars.

02:11:37 In 21 days.

02:11:39 That’s right.

02:11:39 That’s right.

02:11:40 Oh God.

02:11:41 Okay, I can’t wait.

02:11:42 But quite, so you have an introspective,

02:11:44 you have built a very unique company.

02:11:49 I mean, not you, but you and others.

02:11:53 But I don’t know.

02:11:55 There’s no, there’s nothing.

02:11:56 You have an introspective,

02:11:57 you haven’t really sat down and thought about like,

02:12:01 well, like if you and I were having a bunch of,

02:12:04 we’re having some beers

02:12:06 and you’re seeing that I’m depressed

02:12:08 and whatever, I’m struggling.

02:12:09 There’s no advice you can give?

02:12:11 Oh, I mean.

02:12:13 More beer?

02:12:13 More beer?

02:12:15 Um, yeah, I think it’s all very like situation dependent.

02:12:23 Here’s, okay, if I can give a generic piece of advice,

02:12:25 it’s the technology always wins.

02:12:28 The better technology always wins.

02:12:30 And lying always loses.

02:12:35 Build technology and don’t lie.

02:12:38 I’m with you.

02:12:39 I agree very much.

02:12:40 The long run, long run.

02:12:41 Sure.

02:12:42 That’s the long run, yeah.

02:12:43 The market can remain irrational longer

02:12:44 than you can remain solvent.

02:12:46 True fact.

02:12:47 Well, this is an interesting point

02:12:49 because I ethically and just as a human believe that

02:12:54 like hype and smoke and mirrors is not

02:12:58 at any stage of the company is a good strategy.

02:13:02 I mean, there’s some like, you know,

02:13:04 PR magic kind of like, you know.

02:13:07 Oh, hype around a new product, right?

02:13:08 If there’s a call to action,

02:13:09 if there’s like a call to action,

02:13:10 like buy my new GPU, look at it.

02:13:13 It takes up three slots and it’s this big.

02:13:14 It’s huge.

02:13:15 Buy my GPU.

02:13:16 Yeah, that’s great.

02:13:17 If you look at, you know,

02:13:18 especially in the AI space broadly,

02:13:20 but autonomous vehicles,

02:13:22 like you can raise a huge amount of money on nothing.

02:13:26 And the question to me is like, I’m against that.

02:13:30 I’ll never be part of that.

02:13:31 I don’t think, I hope not, willingly not.

02:13:36 But like, is there something to be said

02:13:40 to essentially lying to raise money,

02:13:44 like fake it till you make it kind of thing?

02:13:47 I mean, this is Billy McFarland in the Fyre Festival.

02:13:50 Like we all experienced, you know,

02:13:53 what happens with that.

02:13:54 No, no, don’t fake it till you make it.

02:13:57 Be honest and hope you make it the whole way.

02:14:00 The technology wins.

02:14:01 Right, the technology wins.

02:14:02 And like, there is, I’m not used to like the anti hype,

02:14:06 you know, that’s a Slava KPSS reference,

02:14:08 but hype isn’t necessarily bad.

02:14:13 I loved camping out for the iPhones, you know,

02:14:17 and as long as the hype is backed by like substance,

02:14:21 as long as it’s backed by something I can actually buy,

02:14:23 and like it’s real, then hype is great

02:14:26 and it’s a great feeling.

02:14:28 It’s when the hype is backed by lies

02:14:30 that it’s a bad feeling.

02:14:32 I mean, a lot of people call Elon Musk a fraud.

02:14:34 How could he be a fraud?

02:14:35 I’ve noticed this, this kind of interesting effect,

02:14:37 which is he does tend to over promise

02:14:42 and deliver, what’s the better way to phrase it?

02:14:45 Promise a timeline that he doesn’t deliver on,

02:14:49 he delivers much later on.

02:14:51 What do you think about that?

02:14:52 Cause I do that, I think that’s a programmer thing too.

02:14:56 I do that as well.

02:14:57 You think that’s a really bad thing to do or is that okay?

02:15:01 I think that’s, again, as long as like,

02:15:03 you’re working toward it and you’re gonna deliver on it,

02:15:06 it’s not too far off, right?

02:15:10 Right?

02:15:11 Like, you know, the whole autonomous vehicle thing,

02:15:14 it’s like, I mean, I still think Tesla’s on track

02:15:18 to beat us.

02:15:19 I still think even with their missteps,

02:15:21 they have advantages we don’t have.

02:15:25 You know, Elon is better than me

02:15:28 at like marshaling massive amounts of resources.

02:15:33 So, you know, I still think given the fact

02:15:36 they’re maybe making some wrong decisions,

02:15:38 they’ll end up winning.

02:15:39 And like, it’s fine to hype it

02:15:42 if you’re actually gonna win, right?

02:15:44 Like if Elon says, look, we’re gonna be landing rockets

02:15:47 back on earth in a year and it takes four,

02:15:49 like, you know, he landed a rocket back on earth

02:15:53 and he was working toward it the whole time.

02:15:55 I think there’s some amount of like,

02:15:57 I think when it becomes wrong is if you know

02:15:59 you’re not gonna meet that deadline.

02:16:00 If you’re lying.

02:16:01 Yeah, that’s brilliantly put.

02:16:03 Like this is what people don’t understand, I think.

02:16:06 Like Elon believes everything he says.

02:16:09 He does, as far as I can tell, he does.

02:16:12 And I detected that in myself too.

02:16:14 Like if I, it’s only bullshit

02:16:17 if you’re like conscious of yourself lying.

02:16:21 Yeah, I think so.

02:16:22 Yeah.

02:16:23 Now you can’t take that to such an extreme, right?

02:16:25 Like in a way, I think maybe Billy McFarland

02:16:27 believed everything he said too.

02:16:30 Right, that’s how you start a cult

02:16:31 and everybody kills themselves.

02:16:33 Yeah.

02:16:34 Yeah, like it’s, you need, you need,

02:16:36 if there’s like some factor on it, it’s fine.

02:16:39 And you need some people to like, you know,

02:16:41 keep you in check, but like,

02:16:44 if you deliver on most of the things you say

02:16:46 and just the timelines are off, yeah.

02:16:48 It does piss people off though.

02:16:50 I wonder, but who cares?

02:16:53 In a long arc of history, the people,

02:16:55 everybody gets pissed off at the people who succeed,

02:16:58 which is one of the things

02:16:59 that frustrates me about this world,

02:17:01 is they don’t celebrate the success of others.

02:17:07 Like there’s so many people that want Elon to fail.

02:17:12 It’s so fascinating to me.

02:17:14 Like what is wrong with you?

02:17:18 Like, so Elon Musk talks about like people shorting,

02:17:21 like they talk about financial,

02:17:23 but I think it’s much bigger than the financials.

02:17:25 I’ve seen like the human factors community,

02:17:27 they want, they want other people to fail.

02:17:31 Why, why, why?

02:17:32 Like even people, the harshest thing is like,

02:17:36 you know, even people that like seem

02:17:38 to really hate Donald Trump, they want him to fail

02:17:41 or like the other president

02:17:43 or they want Barack Obama to fail.

02:17:45 It’s like.

02:17:47 Yeah, we’re all on the same boat, man.

02:17:49 It’s weird, but I want that,

02:17:51 I would love to inspire that part of the world to change

02:17:54 because damn it, if the human species is gonna survive,

02:17:58 we should celebrate success.

02:18:00 Like it seems like the efficient thing to do

02:18:02 in this objective function that we’re all striving for

02:18:06 is to celebrate the ones that like figure out

02:18:09 how to like do better at that objective function

02:18:11 as opposed to like dragging them down back into the mud.

02:18:16 I think there is, this is the speech I always give

02:18:19 about the commenters on Hacker News.

02:18:21 So first off, something to remember

02:18:23 about the internet in general is commenters

02:18:26 are not representative of the population.

02:18:29 I don’t comment on anything.

02:18:31 You know, commenters are representative

02:18:34 of a certain sliver of the population.

02:18:36 And on Hacker News, a common thing I’ll see

02:18:39 is when you’ll see something that’s like,

02:18:42 you know, promises to be wild out there and innovative.

02:18:47 There is some amount of, you know,

02:18:49 checking them back to earth,

02:18:50 but there’s also some amount of if this thing succeeds,

02:18:55 well, I’m 36 and I’ve worked

02:18:57 at large tech companies my whole life.

02:19:02 They can’t succeed because if they succeed,

02:19:05 that would mean that I could have done something different

02:19:07 with my life, but we know that I couldn’t have,

02:19:09 we know that I couldn’t have,

02:19:10 and that’s why they’re gonna fail.

02:19:11 And they have to root for them to fail

02:19:13 to kind of maintain their world image.

02:19:15 So tune it out.

02:19:17 And they comment, well, it’s hard, I, so one of the things,

02:19:21 one of the things I’m considering startup wise

02:19:25 is to change that.

02:19:27 Cause I think the, I think it’s also a technology problem.

02:19:31 It’s a platform problem.

02:19:33 I agree.

02:19:33 It’s like, because the thing you said,

02:19:35 most people don’t comment.

02:19:39 I think most people want to comment.

02:19:42 They just don’t because it’s all the assholes

02:19:45 who are commenting.

02:19:46 Exactly, I don’t want to be grouped in with them.

02:19:47 You don’t want to be at a party

02:19:49 where everyone is an asshole.

02:19:50 And so they, but that’s a platform problem.

02:19:54 I can’t believe what Reddit’s become.

02:19:56 I can’t believe the group thinking, Reddit comments.

02:20:00 There’s a, Reddit is an interesting one

02:20:02 because they’re subreddits.

02:20:05 And so you can still see, especially small subreddits

02:20:09 that like, that are a little like havens

02:20:11 of like joy and positivity and like deep,

02:20:16 even disagreement, but like nuanced discussion.

02:20:18 But it’s only like small little pockets,

02:20:21 but that’s emergent.

02:20:23 The platform is not helping that or hurting that.

02:20:26 So I guess naturally something about the internet,

02:20:31 if you don’t put in a lot of effort to encourage

02:20:34 nuance and positive, good vibes,

02:20:37 it’s naturally going to decline into chaos.

02:20:41 I would love to see someone do this well.

02:20:42 Yeah.

02:20:43 I think it’s, yeah, very doable.

02:20:45 I think actually, so I feel like Twitter

02:20:49 could be overthrown.

02:20:52 Joscha Bach talked about how, like,

02:20:55 if you have like and retweet,

02:20:58 like that’s only positive wiring, right?

02:21:02 The only way to do anything like negative there

02:21:05 is with a comment.

02:21:08 And that’s like that asymmetry is what gives,

02:21:12 you know, Twitter its particular toxicness.

02:21:15 Whereas I find YouTube comments to be much better

02:21:18 because YouTube comments have an up and a down

02:21:21 and they don’t show the downvotes.

02:21:23 Without getting into depth of this particular discussion,

02:21:26 the point is to explore possibilities

02:21:29 and get a lot of data on it.

02:21:30 Because I mean, I could disagree with what you just said.

02:21:34 The point is it’s unclear.

02:21:36 It hasn’t been explored in a really rich way.

02:21:39 Like these questions of how to create platforms

02:21:44 that encourage positivity.

02:21:47 Yeah, I think it’s a technology problem.

02:21:49 And I think we’ll look back at Twitter as it is now.

02:21:51 Maybe it’ll happen within Twitter,

02:21:53 but most likely somebody overthrows them

02:21:56 is we’ll look back at Twitter and say,

02:22:00 can’t believe we put up with this level of toxicity.

02:22:03 You need a different business model too.

02:22:05 Any social network that fundamentally has advertising

02:22:07 as a business model, this was in The Social Dilemma,

02:22:10 which I didn’t watch, but I liked it.

02:22:11 It’s like, you know, there’s always the, you know,

02:22:12 you’re the product, you’re not the,

02:22:15 but they had a nuanced take on it that I really liked.

02:22:17 And it said, the product being sold is influence over you.

02:22:24 The product being sold is literally your,

02:22:27 you know, influence on you.

02:22:29 Like that can’t be, if that’s your idea, okay.

02:22:33 Well, you know, guess what?

02:22:35 It can’t not be toxic.

02:22:37 Yeah, maybe there’s ways to spin it,

02:22:39 like with giving a lot more control to the user

02:22:42 and transparency to see what is happening to them

02:22:44 as opposed to in the shadows, it’s possible,

02:22:47 but that can’t be the primary source of.

02:22:49 But the users aren’t, no one’s gonna use that.

02:22:51 It depends, it depends, it depends.

02:22:54 I think that the, you’re not going to,

02:22:57 you can’t depend on self awareness of the users.

02:23:00 It’s a longer discussion because you can’t depend on it,

02:23:04 but you can reward self awareness.

02:23:09 Like if for the ones who are willing to put in the work

02:23:12 of self awareness, you can reward them and incentivize

02:23:16 and perhaps be pleasantly surprised how many people

02:23:20 are willing to be self aware on the internet.

02:23:23 Like we are in real life.

02:23:24 Like I’m putting in a lot of effort with you right now,

02:23:26 being self aware about if I say something stupid or mean,

02:23:30 I’ll like look at your like body language.

02:23:32 Like I’m putting in that effort.

02:23:33 It’s costly for an introvert, very costly.

02:23:36 But on the internet, fuck it.

02:23:39 Like most people are like, I don’t care if this hurts

02:23:42 somebody, I don’t care if this is not interesting

02:23:46 or if this is, yeah, it’s a mean or whatever.

02:23:48 I think so much of the engagement today on the internet

02:23:50 is so disingenuous too.

02:23:53 You’re not doing this out of a genuine,

02:23:54 this is what you think.

02:23:55 You’re doing this just straight up to manipulate others.

02:23:57 Whether you’re in, you just became an ad.

02:23:59 Yeah, okay, let’s talk about a fun topic,

02:24:02 which is programming.

02:24:04 Here’s another book idea for you.

02:24:05 Let me pitch.

02:24:07 What’s your perfect programming setup?

02:24:09 So like this, by George Hotz.

02:24:12 So like what, listen, you’re.

02:24:17 Give me a MacBook Air, sit me in a corner of a hotel room

02:24:20 and you know I’ll still ask you.

02:24:21 So you really don’t care.

02:24:22 You don’t fetishize like multiple monitors, keyboard.

02:24:27 Those things are nice and I’m not gonna say no to them,

02:24:30 but did they automatically unlock tons of productivity?

02:24:33 No, not at all.

02:24:34 I have definitely been more productive on a MacBook Air

02:24:36 in a corner of a hotel room.

02:24:38 What about IDE?

02:24:41 So which operating system do you love?

02:24:45 What text editor do you use IDE?

02:24:49 What, is there something that is like the perfect,

02:24:53 if you could just say the perfect productivity setup

02:24:57 for George Hotz.

02:24:57 It doesn’t matter.

02:24:58 It literally doesn’t matter.

02:25:00 You know, I guess I code most of the time in Vim.

02:25:03 Like literally I’m using an editor from the 70s.

02:25:05 You know, you didn’t make anything better.

02:25:07 Okay, VS code is nice for reading code.

02:25:09 There’s a few things that are nice about it.

02:25:10 I think that you can build much better tools.

02:25:13 Like, IDA’s xrefs work way better than VS Code’s, why?

02:25:18 Yeah, actually that’s a good question, like why?

02:25:20 I still use, sorry, Emacs for most.

02:25:25 I’ve actually never, I have to confess something dark.

02:25:28 So I’ve never used Vim.

02:25:32 I think maybe I’m just afraid

02:25:36 that my life has been like a waste.

02:25:39 I’m so, I’m not evangelical about Emacs.

02:25:43 I think this.

02:25:44 This is how I feel about TensorFlow versus PyTorch.

02:25:47 Having just like, we’ve switched everything to PyTorch now.

02:25:50 Put months into the switch.

02:25:51 I have felt like I’ve wasted years on TensorFlow.

02:25:54 I can’t believe it.

02:25:56 I can’t believe how much better PyTorch is.

02:25:58 Yeah.

02:25:59 I’ve used Emacs and Vim, doesn’t matter.

02:26:01 Yeah, it’s still just my heart.

02:26:03 Somehow I fell in love with Lisp.

02:26:04 I don’t know why.

02:26:05 You can’t, the heart wants what the heart wants.

02:26:08 I don’t understand it, but it just connected with me.

02:26:10 Maybe it’s the functional language

02:26:11 that first I connected with.

02:26:13 Maybe it’s because so many of the AI courses

02:26:15 before the deep learning revolution

02:26:17 were taught with Lisp in mind.

02:26:19 I don’t know.

02:26:20 I don’t know what it is, but I’m stuck with it.

02:26:22 But at the same time, like,

02:26:23 why am I not using a modern IDE

02:26:25 for some of these programming?

02:26:26 I don’t know.

02:26:27 They’re not that much better.

02:26:28 I’ve used modern IDEs too.

02:26:30 But at the same time, so to just,

02:26:32 well, not to disagree with you,

02:26:33 but like, I like multiple monitors.

02:26:35 Like I have to do work on a laptop

02:26:38 and it’s a pain in the ass.

02:26:41 And also I’m addicted to the Kinesis weird keyboard.

02:26:45 You could see there.

02:26:46 Yeah, yeah, yeah.

02:26:48 Yeah, so you don’t have any of that.

02:26:50 You can just be on a MacBook.

02:26:51 I mean, look at work.

02:26:53 I have three 24 inch monitors.

02:26:55 I have a happy hacking keyboard.

02:26:56 I have a Razer DeathAdder mouse, like.

02:26:59 But it’s not essential for you.

02:27:01 No.

02:27:02 Let’s go to a day in the life of George Hotz.

02:27:04 What is the perfect day productivity wise?

02:27:08 So we’re not talking about like Hunter S. Thompson drugs.

02:27:12 Yeah, yeah, yeah.

02:27:13 And let’s look at productivity.

02:27:16 Like what’s the day look like, like hour by hour?

02:27:19 Is there any regularities that create

02:27:23 a magical George Hotz experience?

02:27:25 I can remember three days in my life.

02:27:28 And I remember these days vividly

02:27:30 when I’ve gone through kind of radical transformations

02:27:36 to the way I think.

02:27:37 And what I would give, I would pay $100,000

02:27:40 if I could have one of these days tomorrow.

02:27:42 The days have been so impactful.

02:27:44 And one was first discovering Eliezer Yudkowsky

02:27:47 on the singularity and reading that stuff.

02:27:50 And like, you know, my mind was blown.

02:27:54 The next was discovering the Hutter Prize

02:27:57 and that AI is just compression.

02:27:59 Like finally understanding AIXI and what all of that was.

02:28:03 You know, I like read about it when I was 18, 19,

02:28:05 I didn’t understand it.

02:28:06 And then the fact that like lossless compression

02:28:08 implies intelligence, the day that I was shown that.
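
One standard way to state the compression-prediction link behind the Hutter Prize (an information-theory identity, not a quote from the conversation): an arithmetic coder driven by a predictive model p assigns a string x a code length of roughly its negative log-likelihood,

```latex
\[
  L_p(x) \;\approx\; -\log_2 p(x) \ \text{bits},
  \qquad
  \mathbb{E}\!\left[L_p(x)\right] \;\ge\; H(X),
\]
```

so minimizing compressed size over a corpus is the same objective as maximizing predictive likelihood, and the bound is tight only when the model matches the data; that is the sense in which better lossless compression demands a better model of the world.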

02:28:12 And then the third one is controversial.

02:28:14 The day I found a blog called Unqualified Reservations.

02:28:17 And read that and I was like.

02:28:20 Wait, which one is that?

02:28:21 That’s, what’s the guy’s name?

02:28:22 Curtis Yarvin.

02:28:24 Yeah.

02:28:25 So many people tell me I’m supposed to talk to him.

02:28:27 Yeah, the day.

02:28:28 He looks, he sounds insane.

02:28:30 Definitely. Or brilliant,

02:28:31 but insane or both, I don’t know.

02:28:33 The day I found that blog was another like,

02:28:35 this was during like Gamergate

02:28:37 and kind of the run up to the 2016 election.

02:28:39 And I’m like, wow, okay, the world makes sense now.

02:28:42 This is like, I had a framework now to interpret this.

02:28:45 Just like I got the framework for AI

02:28:47 and a framework to interpret technological progress.

02:28:49 Like those days when I discovered these new frameworks were.

02:28:52 Oh, interesting.

02:28:53 So it’s not about, but what was special about those days?

02:28:57 How did those days come to be?

02:28:58 Is it just, you got lucky?

02:28:59 Like, you just encountered the Hutter Prize

02:29:04 on Hacker News or something like that?

02:29:09 But you see, I don’t think it’s just,

02:29:11 see, I don’t think it’s just that like,

02:29:13 I could have gotten lucky at any point.

02:29:14 I think that in a way.

02:29:16 You were ready at that moment.

02:29:17 Yeah, exactly.

02:29:18 To receive the information.

02:29:21 But is there some magic to the day today

02:29:24 of like eating breakfast?

02:29:27 And it’s the mundane things.

02:29:29 Nah.

02:29:29 Nothing.

02:29:30 Nah, I drift through life.

02:29:32 Without structure.

02:29:34 I drift through life hoping and praying

02:29:36 that I will get another day like those days.

02:29:38 And there’s nothing in particular you do

02:29:40 to be a receptacle for another, for day number four.

02:29:46 No, I didn’t do anything to get the other ones.

02:29:48 So I don’t think I have to really do anything now.

02:29:51 I took a month long trip to New York

02:29:53 and the Ethereum thing was the highlight of it,

02:29:56 but the rest of it was pretty terrible.

02:29:57 I did a two week road trip

02:29:59 and I got, I had to turn around.

02:30:01 I had to turn around driving in Gunnison, Colorado.

02:30:06 I passed through Gunnison

02:30:08 and the snow starts coming down.

02:30:10 There’s a pass up there called Monarch Pass

02:30:12 in order to get through to Denver,

02:30:13 you gotta get over the Rockies.

02:30:14 And I had to turn my car around.

02:30:16 I couldn’t, I watched an F150 go off the road.

02:30:20 I’m like, I gotta go back.

02:30:21 And like that day was meaningful.

02:30:24 Cause like, it was real.

02:30:26 Like I actually had to turn my car around.

02:30:28 It’s rare that anything even real happens in my life.

02:30:31 Even as, you know, mundane as the fact that,

02:30:34 yeah, there was snow, I had to turn around,

02:30:36 stay in Gunnison and leave the next day.

02:30:37 Something about that moment felt real.

02:30:40 Okay, so actually it’s interesting to break apart

02:30:43 the three moments you mentioned, if it’s okay.

02:30:45 So I always have trouble pronouncing his name,

02:30:48 but Eliezer Yudkowsky.

02:30:53 So what, how did your worldview change

02:30:57 in starting to consider the exponential growth of AI

02:31:02 and AGI that he thinks about

02:31:05 and the threats of artificial intelligence

02:31:07 and all that kind of ideas?

02:31:09 Can you, is it just like, can you maybe break apart

02:31:12 like what exactly was so magical to you?

02:31:15 Is it transformational experience?

02:31:17 Today, everyone knows him for threats and AI safety.

02:31:20 This was pre that stuff.

02:31:22 There was, I don’t think a mention of AI safety on the page.

02:31:25 This is, this is old Yudkowsky stuff.

02:31:27 He’d probably denounce it all now.

02:31:29 He’d probably be like,

02:31:29 that’s exactly what I didn’t want to happen.

02:31:32 Sorry, man.

02:31:33 Is there something specific you can take from his work

02:31:37 that you can remember?

02:31:38 Yeah, it was this realization

02:31:40 that computers double in power every 18 months

02:31:45 and humans do not, and they haven’t crossed yet.

02:31:50 But if you have one thing that’s doubling every 18 months

02:31:52 and one thing that’s staying like this, you know,

02:31:55 here’s your log graph, here’s your line, you know,

02:31:58 calculate that.

02:31:59 And then that opened the door

02:32:03 to the exponential thinking, like thinking that like,

02:32:06 you know what, with technology,

02:32:07 we can actually transform the world.

02:32:11 It opened the door to human obsolescence.

02:32:13 It opened the door to realize that in my lifetime,

02:32:16 humans are going to be replaced.

02:32:20 And then the matching idea to that of artificial intelligence

02:32:23 with the Hutter prize, you know, I’m torn.

02:32:27 I go back and forth on what I think about it.

02:32:30 Yeah.

02:32:31 But the basic thesis is it’s a nice compelling notion

02:32:36 that we can reduce the task of creating

02:32:38 an intelligent system, a generally intelligent system

02:32:41 into the task of compression.

02:32:43 So you can think of all of intelligence in the universe,

02:32:46 in fact, as a kind of compression.

02:32:50 Do you find that, was that just at the time

02:32:52 you found that as a compelling idea

02:32:53 or do you still find that a compelling idea?

02:32:56 I still find that a compelling idea.

02:32:59 I think that it’s not that useful day to day,

02:33:02 but actually one of maybe my quests before that

02:33:06 was a search for the definition of the word intelligence.

02:33:09 And I never had one.

02:33:10 And I definitely have a definition of the word compression.

02:33:14 It’s a very simple, straightforward one.

02:33:18 And you know what compression is,

02:33:19 you know what lossless, it’s lossless compression,

02:33:21 not lossy, lossless compression.

02:33:22 And that that is equivalent to intelligence,

02:33:25 which I believe, I’m not sure how useful

02:33:27 that definition is day to day,

02:33:28 but like I now have a framework to understand what it is.

02:33:32 And he just 10Xed the prize for that competition

02:33:36 like recently a few months ago.

02:33:37 You ever thought of taking a crack at that?

02:33:39 Oh, I did.

02:33:41 Oh, I did.

02:33:41 I spent the next, after I found the prize,

02:33:44 I spent the next six months of my life trying it.

02:33:47 And well, that’s when I started learning everything about AI.

02:33:51 And then I worked at Vicarious for a bit

02:33:53 and then I read all the deep learning stuff.

02:33:55 And I’m like, okay, now I like I’m caught up to modern AI.

02:33:58 Wow.

02:33:59 And I had a really good framework to put it all in

02:34:01 from the compression stuff, right?

02:34:04 Like some of the first deep learning models I played with

02:34:07 were GPT basically, but before transformers,

02:34:12 before it was still RNNs to do character prediction.

02:34:17 But by the way, on the compression side,

02:34:19 I mean, especially with neural networks,

02:34:22 what do you make of the lossless requirement

02:34:25 with the Hutter prize?

02:34:26 So, you know, human intelligence and neural networks

02:34:31 can probably compress stuff pretty well,

02:34:33 but it would be lossy.

02:34:35 It’s imperfect.

02:34:36 You can turn a lossy compression

02:34:37 to a lossless compressor pretty easily

02:34:39 using an arithmetic encoder, right?

02:34:41 You can take an arithmetic encoder

02:34:42 and you can just encode the noise with maximum efficiency.

02:34:45 Right?

02:34:46 So even if you can’t predict exactly

02:34:48 what the next character is,

02:34:50 the better a probability distribution,

02:34:52 you can put over the next character.

02:34:54 You can then use an arithmetic encoder to, right?

02:34:57 You don’t have to know whether it’s an E or an I,

02:34:59 you just have to put good probabilities on them

02:35:01 and then, you know, code those.

02:35:03 And if you have, it’s a bits of entropy thing, right?
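
As a minimal sketch of that bits of entropy idea, with made-up toy probabilities rather than any real model: an ideal arithmetic coder spends about -log2(p) bits on a character the model assigned probability p, so sharper predictions mean fewer bits.

```python
import math

def ideal_code_length(text, prob_of_char):
    """Total bits an ideal arithmetic coder would spend, given a model that
    assigns prob_of_char(context, ch) as the probability of the true next character."""
    bits = 0.0
    for i, ch in enumerate(text):
        p = prob_of_char(text[:i], ch)  # model's probability of the character that actually occurs
        bits += -math.log2(p)           # ideal cost in bits is -log2(p)
    return bits

text = "hello world"
uniform = lambda ctx, ch: 1 / 256                        # knows nothing: 8 bits per character
toy_model = lambda ctx, ch: 0.15 if ch == " " else 0.05  # made-up numbers, not a real distribution

print(ideal_code_length(text, uniform))    # 88.0 bits
print(ideal_code_length(text, toy_model))  # fewer bits, because the assigned probabilities are higher
```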

02:35:06 So let me, on that topic,

02:35:07 it’d be interesting as a little side tour.

02:35:10 What are your thoughts in this year about GPT3

02:35:13 and these language models and these transformers?

02:35:16 Is there something interesting to you as an AI researcher,

02:35:20 or is there something interesting to you

02:35:22 as an autonomous vehicle developer?

02:35:24 Nah, I think it’s overhyped.

02:35:27 I mean, it’s not, like, it’s cool.

02:35:29 It’s cool for what it is, but no,

02:35:30 we’re not just gonna be able to scale up to GPT12

02:35:33 and get general purpose intelligence.

02:35:35 Like, your loss function is literally just,

02:35:38 you know, cross entropy loss on the character, right?

02:35:41 Like, that’s not the loss function of general intelligence.
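
For concreteness, here is a hedged sketch of the character-level cross entropy objective being described, in PyTorch; the vocabulary size and tensor shapes are illustrative stand-ins, not any particular model's.

```python
import torch
import torch.nn.functional as F

vocab_size = 256                       # e.g. treating bytes as "characters"
batch, seq_len = 4, 128

logits = torch.randn(batch, seq_len, vocab_size)           # stand-in for the model's output
targets = torch.randint(0, vocab_size, (batch, seq_len))   # the true next characters

# Plain cross entropy over characters: the entire training signal of a GPT-style model.
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
print(loss.item())   # average negative log-likelihood per character
```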

02:35:44 Is that obvious to you?

02:35:45 Yes.

02:35:47 Can you imagine that, like,

02:35:51 to play devil’s advocate on yourself,

02:35:53 is it possible that you can,

02:35:55 the GPT12 will achieve general intelligence

02:35:58 with something as dumb as this kind of loss function?

02:36:01 I guess it depends what you mean by general intelligence.

02:36:05 So there’s another problem with the GPTs,

02:36:07 and that’s that they don’t have a,

02:36:11 they don’t have longterm memory.

02:36:13 Right.

02:36:13 So, like, just GPT12,

02:36:18 a scaled up version of GPT2 or GPT3,

02:36:22 I find it hard to believe.

02:36:26 Well, you can scale it in,

02:36:28 so it’s a hard coded length,

02:36:32 but you can make it wider and wider and wider.

02:36:34 Yeah.

02:36:36 You’re gonna get cool things from those systems,

02:36:40 but I don’t think you’re ever gonna get something

02:36:44 that can, like, you know, build me a rocket ship.

02:36:47 What about solved driving?

02:36:49 So, you know, you can use Transformer with video,

02:36:53 for example.

02:36:54 You think, is there something in there?

02:36:57 No, because, I mean, look, we use a GRU.

02:37:01 We use a GRU.

02:37:02 We could change that GRU out to a Transformer.

02:37:05 I think driving is much more Markovian than language.

02:37:09 So, Markovian, you mean, like, the memory,

02:37:11 which aspect of Markovian?

02:37:13 I mean that, like, most of the information

02:37:16 in the state at T minus one is also in state T.

02:37:19 I see, yeah.

02:37:20 Right, and it kind of, like, drops off nicely like this,

02:37:22 whereas sometime with language,

02:37:23 you have to refer back to the third paragraph

02:37:25 on the second page.

02:37:27 I feel like.

02:37:28 There’s not many, like, you can say, like,

02:37:30 speed limit signs, but there’s really not many things

02:37:32 in autonomous driving that look like that.

02:37:33 But if you look at, to play devil’s advocate,

02:37:37 is the risk estimation thing that you’ve talked about

02:37:39 is kind of interesting.

02:37:41 Is, it feels like there might be some longer term

02:37:45 aggregation of context necessary to be able to figure out,

02:37:49 like, the context.

02:37:51 Yeah, I’m not even sure I’m believing my devil’s advocate.

02:37:55 We have a nice, like, vision model,

02:37:58 which outputs, like, a one or two,

02:38:00 four dimensional perception space.

02:38:03 Can I try Transformers on it?

02:38:04 Sure, I probably will.

02:38:06 At some point, we’ll try Transformers,

02:38:08 and then we’ll just see.
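
As a rough sketch, with toy dimensions and not openpilot's actual architecture, of what swapping a GRU for a Transformer encoder over per-frame perception features could look like:

```python
import torch
import torch.nn as nn

feat_dim, seq_len, batch = 512, 20, 8              # hypothetical sizes
features = torch.randn(batch, seq_len, feat_dim)   # per-frame vision features

# Option A: recurrent temporal model, roughly the GRU setup described here.
gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
gru_out, _ = gru(features)                         # (batch, seq_len, feat_dim)

# Option B: drop-in Transformer encoder over the same features.
layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
transformer = nn.TransformerEncoder(layer, num_layers=2)
tr_out = transformer(features)                     # same shape, but attends over all frames

print(gru_out.shape, tr_out.shape)
```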

02:38:09 Do they do better?

02:38:09 Sure, I’m.

02:38:10 But it might not be a game changer, you’re saying?

02:38:12 No, well, I’m not.

02:38:13 Like, might Transformers work better than GRUs

02:38:15 for autonomous driving?

02:38:16 Sure.

02:38:16 Might we switch?

02:38:17 Sure.

02:38:18 Is this some radical change?

02:38:19 No.

02:38:20 Okay, we use a slightly different,

02:38:21 you know, we switch from RNNs to GRUs.

02:38:23 Like, okay, maybe it’s GRUs to Transformers,

02:38:24 but no, it’s not.

02:38:26 Yeah.

02:38:27 Well, on the topic of general intelligence,

02:38:30 I don’t know how much I’ve talked to you about it.

02:38:32 Like, what, do you think we’ll actually build

02:38:36 an AGI?

02:38:38 Like, if you look at Ray Kurzweil with Singularity,

02:38:40 do you have like an intuition about,

02:38:43 you’re kind of saying driving is easy.

02:38:45 Yeah.

02:38:46 And I tend to personally believe that solving driving

02:38:52 will have really deep, important impacts

02:38:56 on our ability to solve general intelligence.

02:38:59 Like, I think driving doesn’t require general intelligence,

02:39:03 but I think they’re going to be neighbors

02:39:05 in a way that it’s like deeply tied.

02:39:08 Cause it’s so, like driving is so deeply connected

02:39:11 to the human experience that I think solving one

02:39:15 will help solve the other.

02:39:17 But, so I don’t see, I don’t see driving as like easy

02:39:20 and almost like separate than general intelligence,

02:39:23 but like, what’s your vision of a future with a Singularity?

02:39:26 Do you see there’ll be a single moment,

02:39:28 like a Singularity where it’ll be a phase shift?

02:39:30 Are we in the Singularity now?

02:39:32 Like what, do you have crazy ideas about the future

02:39:34 in terms of AGI?

02:39:35 We’re definitely in the Singularity now.

02:39:38 We are?

02:39:38 Of course, of course.

02:39:40 Look at the bandwidth between people.

02:39:41 The bandwidth between people goes up, right?

02:39:44 The Singularity is just, you know, when the bandwidth, but.

02:39:47 What do you mean by the bandwidth of people?

02:39:48 Communications, tools, the whole world is networked.

02:39:51 The whole world is networked

02:39:52 and we raise the speed of that network, right?

02:39:54 Oh, so you think the communication of information

02:39:57 in a distributed way is an empowering thing

02:40:00 for collective intelligence?

02:40:02 Oh, I didn’t say it’s necessarily a good thing,

02:40:03 but I think that’s like,

02:40:04 when I think of the definition of the Singularity,

02:40:06 yeah, it seems kind of right.

02:40:08 I see, like it’s a change in the world

02:40:12 beyond which like the world be transformed

02:40:14 in ways that we can’t possibly imagine.

02:40:16 No, I mean, I think we’re in the Singularity now

02:40:18 in the sense that there’s like, you know,

02:40:19 one world and a monoculture and it’s also linked.

02:40:22 Yeah, I mean, I kind of share the intuition

02:40:24 that the Singularity will originate

02:40:27 from the collective intelligence of us humans

02:40:31 versus the like some single system AGI type thing.

02:40:35 Oh, I totally agree with that.

02:40:37 Yeah, I don’t really believe in like a hard take off AGI

02:40:40 kind of thing.

02:40:45 Yeah, I don’t even think AI is all that different in kind

02:40:49 from what we’ve already been building.

02:40:52 With respect to driving,

02:40:53 I think driving is a subset of general intelligence

02:40:56 and I think it’s a pretty complete subset.

02:40:58 I think the tools we develop at Comma

02:41:00 will also be extremely helpful

02:41:02 to solving general intelligence

02:41:04 and that’s I think the real reason why I’m doing it.

02:41:06 I don’t care about self driving cars.

02:41:08 It’s a cool problem to beat people at.

02:41:10 But yeah, I mean, yeah, you’re kind of, you’re of two minds.

02:41:14 So one, you do have to have a mission

02:41:16 and you wanna focus and make sure you get there.

02:41:19 You can’t forget that but at the same time,

02:41:22 there is a thread that’s much bigger

02:41:26 than that connects the entirety of your effort.

02:41:28 That’s much bigger than just driving.

02:41:31 With AI and with general intelligence,

02:41:33 it is so easy to delude yourself

02:41:35 into thinking you’ve figured something out when you haven’t.

02:41:37 If we build a level five self driving car,

02:41:39 we have indisputably built something.

02:41:42 Yeah.

02:41:43 Is it general intelligence?

02:41:44 I’m not gonna debate that.

02:41:45 I will say we’ve built something

02:41:47 that provides huge financial value.

02:41:49 Yeah, beautifully put.

02:41:50 That’s the engineering credo.

02:41:51 Like just build the thing.

02:41:53 It’s like, that’s why I’m with Elon

02:41:57 on go to Mars.

02:41:58 Yeah, that’s a great one.

02:41:59 You can argue like who the hell cares about going to Mars.

02:42:03 But the reality is set that as a mission, get it done.

02:42:07 Yeah.

02:42:08 And then you’re going to crack some problem

02:42:09 that you’ve never even expected

02:42:11 in the process of doing that, yeah.

02:42:13 Yeah, I mean, no, I think if I had a choice

02:42:16 between humanity going to Mars

02:42:17 and solving self driving cars,

02:42:18 I think going to Mars is better, but I don’t know.

02:42:21 I’m more suited for self driving cars.

02:42:23 I’m an information guy.

02:42:24 I’m not a modernist, I’m a postmodernist.

02:42:26 Postmodernist, all right, beautifully put.

02:42:29 Let me drag you back to programming for a sec.

02:42:32 What three, maybe three to five programming languages

02:42:35 should people learn, do you think?

02:42:36 Like if you look at yourself,

02:42:38 what did you get the most out of from learning?

02:42:42 Well, so everybody should learn C and assembly.

02:42:45 We’ll start with those two, right?

02:42:47 Assembly?

02:43:48 Yeah, if you can’t code in assembly,

02:42:49 you don’t know what the computer’s doing.

02:42:51 You don’t understand like,

02:42:53 you don’t have to be great in assembly,

02:42:54 but you have to code in it.

02:42:56 And then like, you have to appreciate assembly

02:42:58 in order to appreciate all the great things C gets you.

02:43:02 And then you have to code in C

02:43:03 in order to appreciate all the great things Python gets you.

02:43:06 So I’ll just say assembly, C, and Python,

02:43:07 we’ll start with those three.

02:43:09 The memory allocation of C and the fact that,

02:43:14 so assembly gives you a sense

02:43:16 of just how many levels of abstraction

02:43:18 you get to work on in modern day programming.

02:43:20 Yeah, yeah, yeah, yeah, graph coloring for

02:43:22 register assignment in compilers.

02:43:24 Like, you know, you gotta do,

02:43:25 you know, the compiler,

02:43:26 the computer only has a certain number of registers,

02:43:28 yet you can have all the variables you want in a C function.

02:43:31 So you get to start to build intuition about compilation,

02:43:34 like what a compiler gets you.
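
A toy sketch of that graph coloring idea: variables that are live at the same time interfere and need different registers, and a greedy coloring hands out registers, spilling to memory when it runs out. The interference graph below is hypothetical, and real compilers do much more, but the core trick is this.

```python
def color_registers(interference, num_registers):
    """Greedy graph coloring: give each variable the lowest register not already
    used by a colored neighbor; None means it would have to be spilled to memory."""
    assignment = {}
    for var in sorted(interference, key=lambda v: -len(interference[v])):  # most-constrained first
        taken = {assignment[n] for n in interference[var] if n in assignment}
        free = [r for r in range(num_registers) if r not in taken]
        assignment[var] = free[0] if free else None
    return assignment

# Hypothetical interference graph for variables a..d in a small C function.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"c"},
}
print(color_registers(graph, num_registers=2))  # {'c': 0, 'a': 1, 'b': None, 'd': 1}, so b gets spilled
print(color_registers(graph, num_registers=3))  # with one more register, nothing spills
```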

02:43:37 What else?

02:43:38 Well, then there’s kind of a,

02:43:41 so those are all very imperative programming languages.

02:43:45 Then there’s two other paradigms for programming

02:43:47 that everybody should be familiar with.

02:43:49 And one of them is functional.

02:43:51 You should learn Haskell and take that all the way through,

02:43:54 learn a language with dependent types like Coq,

02:43:57 learn that whole space,

02:43:58 like the very PL theory heavy languages.

02:44:02 And Haskell is your favorite functional?

02:44:04 Is that the go to, you’d say?

02:44:06 Yeah, I’m not a great Haskell programmer.

02:44:08 I wrote a compiler in Haskell once.

02:44:10 There’s another paradigm,

02:44:11 and actually there’s one more paradigm

02:44:12 that I’ll even talk about after that,

02:44:14 that I never used to talk about

02:44:15 when I would think about this,

02:44:15 but the next paradigm is learn Verilog or VHDL.

02:44:20 Understand this idea of all of the instructions

02:44:22 execute at once. If I have a block in Verilog

02:44:26 and I write stuff in it, it’s not sequential.

02:44:29 They all execute at once.

02:44:33 And then think like that, that’s how hardware works.
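
A small Python analogy for that everything-executes-at-once behavior, not real Verilog: every register's next value is computed from the current state, and only then are all of them committed together, the way non-blocking assignments behave on a clock edge.

```python
def clock_tick(state):
    next_state = {
        "a": state["b"],               # a <= b;
        "b": state["a"],               # b <= a;  the swap works because both read the old values
        "count": state["count"] + 1,   # count <= count + 1;
    }
    return next_state                  # all updates commit simultaneously

state = {"a": 0, "b": 1, "count": 0}
for _ in range(3):
    state = clock_tick(state)
    print(state)   # a and b swap every tick, which sequential assignment would not do
```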

02:44:36 To be, so I guess assembly doesn’t quite get you that.

02:44:40 Assembly is more about compilation,

02:44:42 and Verilog is more about the hardware,

02:44:44 like giving a sense of what

02:44:46 the hardware is actually doing.

02:44:48 Assembly, C, Python are straight,

02:44:50 like they sit right on top of each other.

02:44:52 In fact, C is, well, C is kind of coded in C,

02:44:55 but you could imagine the first C was coded in assembly,

02:44:57 and Python is actually coded in C.

02:45:00 So you can straight up go on that.

02:45:03 Got it, and then Verilog gives you, that’s brilliant.

02:45:06 Okay.

02:45:07 And then I think there’s another one now.

02:45:09 Everyone, Karpathy calls it programming 2.0,

02:45:12 which is learn a, I’m not even gonna,

02:45:16 don’t learn TensorFlow, learn PyTorch.

02:45:18 So machine learning.

02:45:20 We’ve got to come up with a better term

02:45:21 than programming 2.0, or, but yeah.

02:45:26 It’s a programming language, learn it.

02:45:29 I wonder if it can be formalized a little bit better.

02:45:32 It feels like we’re in the early days

02:45:34 of what that actually entails.

02:45:37 Data driven programming?

02:45:39 Data driven programming, yeah.
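
As a toy illustration of data driven programming in PyTorch, assuming nothing beyond a made-up linear target: instead of writing the function by hand, you fit it from examples.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.linspace(-1, 1, 64).unsqueeze(1)   # training inputs
y = 3 * x + 2                                # the behavior we want the "program" to have

model = nn.Linear(1, 1)                      # the program is just parameters
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(500):
    loss = F.mse_loss(model(x), y)           # how far the learned program is from the data
    opt.zero_grad()
    loss.backward()
    opt.step()

print(model.weight.item(), model.bias.item())  # converges to roughly 3.0 and 2.0
```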

02:45:41 But it’s so fundamentally different

02:45:43 as a paradigm than the others.

02:45:44 Like it almost requires a different skillset.

02:45:50 But you think it’s still, yeah.

02:45:53 And PyTorch versus TensorFlow, PyTorch wins.

02:45:56 It’s the fourth paradigm.

02:45:57 It’s the fourth paradigm that I’ve kind of seen.

02:45:59 There’s like this, you know,

02:46:01 imperative functional hardware.

02:46:04 I don’t know a better word for it.

02:46:06 And then ML.

02:46:08 Do you have advice for people that wanna,

02:46:13 you know, get into programming, wanna learn programming?

02:46:16 You have a video,

02:46:19 what is programming noob lessons, exclamation point.

02:46:22 And I think the top comment is like,

02:46:24 warning, this is not for noobs.

02:46:27 Do you have a noob, like a TLDW for that video,

02:46:32 but also a noob friendly advice

02:46:38 on how to get into programming?

02:46:39 You’re never going to learn programming

02:46:41 by watching a video called Learn Programming.

02:46:44 The only way to learn programming, I think,

02:46:46 and I mean the only way, is how

02:46:48 everyone I’ve ever met who can program well

02:46:50 learned it, and they all learned it the same way.

02:46:51 They had something they wanted to do

02:46:54 and then they tried to do it.

02:46:56 And then they were like, oh, well, okay.

02:47:00 This is kind of, you know, it’d be nice

02:47:01 if the computer could kind of do this.

02:47:02 And then, you know, that’s how you learn.

02:47:04 You just keep pushing on a project.

02:47:09 So the only advice I have for learning programming

02:47:10 is go program.

02:47:12 Somebody wrote to me a question like,

02:47:14 we don’t really, they’re looking to learn

02:47:17 about recurrent neural networks.

02:47:19 And he’s saying, like, my company’s thinking

02:47:20 of using recurrent neural networks for time series data,

02:47:24 but we don’t really have an idea of where to use it yet.

02:47:27 We just want to, like, do you have any advice

02:47:28 on how to learn about, these are these kind of

02:47:31 general machine learning questions.

02:47:33 And I think the answer is, like,

02:47:36 actually have a problem that you’re trying to solve.

02:47:39 And just.

02:47:40 I see that stuff.

02:47:41 Oh my God, when people talk like that,

02:47:42 they’re like, I heard machine learning is important.

02:47:45 Could you help us integrate machine learning

02:47:47 with macaroni and cheese production?

02:47:51 You just, I don’t even, you can’t help these people.

02:47:54 Like, who lets you run anything?

02:47:55 Who lets that kind of person run anything?

02:47:58 I think we’re all, we’re all beginners at some point.

02:48:02 So.

02:48:03 It’s not like they’re a beginner.

02:48:04 It’s like, my problem is not that they don’t know

02:48:07 about machine learning.

02:48:08 My problem is that they think that machine learning

02:48:10 has something to say about macaroni and cheese production.

02:48:14 Or like, I heard about this new technology.

02:48:17 How can I use it for why?

02:48:19 Like, I don’t know what it is, but how can I use it for why?

02:48:23 That’s true.

02:48:24 You have to build up an intuition of how,

02:48:26 cause you might be able to figure out a way,

02:48:27 but like the prerequisites,

02:48:29 you should have a macaroni and cheese problem to solve first.

02:48:32 Exactly.

02:48:33 And then two, you should have more traditional,

02:48:36 like the learning process should involve

02:48:39 more traditionally applicable problems

02:48:41 in the space of whatever that is, machine learning,

02:48:44 and then see if it can be applied to mac and cheese.

02:48:47 At least start with, tell me about a problem.

02:48:49 Like if you have a problem, you’re like,

02:48:50 you know, some of my boxes aren’t getting

02:48:52 enough macaroni in them.

02:48:54 Can we use machine learning to solve this problem?

02:48:56 That’s much, much better than how do I apply

02:48:59 machine learning to macaroni and cheese?

02:49:01 One big thing, maybe this is me talking

02:49:05 to the audience a little bit, cause I get these days

02:49:07 so many messages asking for advice on how to, like, learn stuff, okay?

02:49:15 My, this is not me being mean.

02:49:18 I think this is quite profound actually,

02:49:20 is you should Google it.

02:49:22 Oh yeah.

02:49:23 Like one of the like skills that you should really acquire

02:49:29 as an engineer, as a researcher, as a thinker,

02:49:33 like one, there’s two complementary skills.

02:49:36 Like one is with a blank sheet of paper

02:49:39 with no internet to think deeply.

02:49:41 And then the other is to Google the crap

02:49:44 out of the questions you have.

02:49:45 Like that’s actually a skill people often talk about,

02:49:49 but like doing research, like pulling at the thread,

02:49:52 like looking up different words,

02:49:53 going into like GitHub repositories with two stars

02:49:58 and like looking how they did stuff,

02:49:59 like looking at the code or going on Twitter,

02:50:03 seeing like there’s little pockets of brilliant people

02:50:05 that are like having discussions.

02:50:07 Like if you’re a neuroscientist,

02:50:09 go into signal processing community.

02:50:11 If you’re an AI person going into the psychology community,

02:50:15 like switch communities.

02:50:18 I keep searching, searching, searching,

02:50:19 because it’s so much better to invest

02:50:23 in like finding somebody else who already solved your problem

02:50:27 than it is to try to solve the problem.

02:50:30 And because they’ve often invested years of their life,

02:50:34 like entire communities are probably already out there

02:50:37 who have tried to solve your problem.

02:50:39 I think they’re the same thing.

02:50:40 I think you go try to solve the problem.

02:50:44 And then in trying to solve the problem,

02:50:46 if you’re good at solving problems,

02:50:47 you’ll stumble upon the person who solved it already.

02:50:50 But the stumbling is really important.

02:50:52 I think that’s a skill that people should really put,

02:50:54 especially in undergrad, like search.

02:50:57 If you ask me a question,

02:50:58 how should I get started in deep learning, like especially?

02:51:04 Like that is just so Googleable.

02:51:07 Like the whole point is you Google that

02:51:10 and you get a million pages and just start looking at them.

02:51:13 Start pulling at the threads, start exploring,

02:51:16 start taking notes, start getting advice

02:51:19 from a million people that already like spent their life

02:51:22 answering that question, actually.

02:51:25 Oh, well, yeah, I mean, that’s definitely also, yeah,

02:51:26 when people like ask me things like that, I’m like, trust me,

02:51:28 the top answer on Google is much, much better

02:51:30 than anything I’m going to tell you, right?

02:51:32 Yeah.

02:51:34 People ask, it’s an interesting question.

02:51:38 Let me know if you have any recommendations.

02:51:39 What three books, technical or fiction or philosophical,

02:51:43 had an impact on your life or you would recommend perhaps?

02:51:49 Maybe we’ll start with the least controversial,

02:51:51 Infinite Jest, Infinite Jest is a…

02:51:57 David Foster Wallace.

02:51:58 Yeah, it’s a book about wireheading, really.

02:52:03 Very enjoyable to read, very well written.

02:52:07 You know, you will grow as a person reading this book,

02:52:11 it’s effort, and I’ll set that up for the second book,

02:52:14 which is pornography, it’s called Atlas Shrugged,

02:52:17 which…

02:52:21 Atlas Shrugged is pornography.

02:52:22 I mean, it is, I will not defend the,

02:52:25 I will not say Atlas Shrugged is a well written book.

02:52:28 It is entertaining to read, certainly, just like pornography.

02:52:31 The production value isn’t great.

02:52:33 You know, there’s a 60 page monologue in there

02:52:36 that Ayn Rand’s editor really wanted to take out.

02:52:38 And she paid, she paid out of her pocket

02:52:42 to keep that 60 page monologue in the book.

02:52:45 But it is a great book for a kind of framework

02:52:53 of human relations.

02:52:54 And I know a lot of people are like,

02:52:55 yeah, but it’s a terrible framework.

02:52:58 Yeah, but it’s a framework.

02:53:00 Just for context, in a couple of days,

02:53:02 I’m speaking for probably four plus hours

02:53:06 with Yaron Brook, who’s the main living,

02:53:10 remaining objectivist, objectivist.

02:53:13 Interesting.

02:53:14 So I’ve always found this philosophy quite interesting

02:53:19 on many levels.

02:53:20 One of how repulsive some percent of,

02:53:24 large percent of the population find it,

02:53:26 which is always, always funny to me

02:53:29 when people are like unable to even read a philosophy

02:53:32 because of some, I think that says more

02:53:36 about their psychological perspective on it.

02:53:40 But there is something about objectivism

02:53:45 and Ayn Rand’s philosophy that’s deeply connected

02:53:48 to this idea of capitalism,

02:53:50 of the ethical life is the productive life

02:53:56 that was always compelling to me.

02:54:00 It didn’t seem as, like I didn’t seem to interpret it

02:54:03 in the negative sense that some people do.

02:54:05 To be fair, I read that book when I was 19.

02:54:07 So you had an impact at that point, yeah.

02:54:09 Yeah, and the bad guys in the book have this slogan

02:54:13 from each according to their ability

02:54:15 to each according to their need.

02:54:17 And I’m looking at this and I’m like,

02:54:19 these are the most cart,

02:54:20 this is Team Rocket level cartoonishness, right?

02:54:22 No bad guy.

02:54:23 And then when I realized that was actually the slogan

02:54:25 of the communist party, I’m like, wait a second.

02:54:29 Wait, no, no, no, no, no.

02:54:31 You’re telling me this really happened?

02:54:34 Yeah, it’s interesting.

02:54:34 I mean, one of the criticisms of her work

02:54:36 is she has a cartoonish view of good and evil.

02:54:39 Like the reality, as Jordan Peterson says,

02:54:44 is that each of us have the capacity for good and evil

02:54:47 in us as opposed to like, there’s some characters

02:54:49 who are purely evil and some characters that are purely good.

02:54:52 And that’s in a way why it’s pornographic.

02:54:55 The production value, I love it.

02:54:57 Like evil is punished and there’s very clearly,

02:55:01 there’s no, just like porn doesn’t have character growth.

02:55:06 Well, you know, neither does Atlas Shrugged, like.

02:55:09 Really, well put.

02:55:10 But for 19 year old George Hotz, it was good enough.

02:55:14 Yeah, yeah, yeah, yeah.

02:55:15 What’s the third?

02:55:16 You have something?

02:55:18 I could give, these two I’ll just throw out.

02:55:21 They’re sci fi.

02:55:22 Permutation City.

02:55:24 Great thing to start thinking about copies of yourself.

02:55:26 And then the…

02:55:27 Who’s that by?

02:55:28 Sorry, I didn’t catch that.

02:55:29 That is Greg Egan.

02:55:31 He’s a, that might not be his real name.

02:55:33 Some Australian guy, might not be Australian.

02:55:35 I don’t know.

02:55:36 And then this one’s online.

02:55:38 It’s called The Metamorphosis of Prime Intellect.

02:55:43 It’s a story set in a post singularity world.

02:55:45 It’s interesting.

02:55:46 Is there, can you, either of the worlds,

02:55:49 do you find something philosophical interesting in them

02:55:51 that you can comment on?

02:55:53 I mean, it is clear to me that

02:55:57 Metamorphosis of Prime Intellect is like written by

02:56:00 an engineer, which is,

02:56:03 it’s very almost a pragmatic take on a utopia, in a way.

02:56:12 Positive or negative?

02:56:15 That’s up to you to decide reading the book.

02:56:17 And the ending of it is very interesting as well.

02:56:21 And I didn’t realize what it was.

02:56:23 I first read that when I was 15.

02:56:25 I’ve reread that book several times in my life.

02:56:27 And it’s short, it’s 50 pages.

02:56:29 Everyone should go read it.

02:56:30 What’s, sorry, it’s a little tangent.

02:56:33 I’ve been working through Foundation.

02:56:34 I’ve been, I haven’t read much sci fi my whole life

02:56:37 and I’m trying to fix that the last few months.

02:56:40 That’s been a little side project.

02:56:42 What’s to you as the greatest sci fi novel

02:56:46 that people should read?

02:56:47 Or is that?

02:56:49 I mean, I would, yeah, I would say like, yeah,

02:56:51 Permutation City, Metamorphosis of Prime Intellect.

02:56:53 I don’t know.

02:56:54 I didn’t like Foundation.

02:56:56 I thought it was way too modernist.

02:56:58 You like Dune and all of those.

02:57:00 I’ve never read Dune.

02:57:01 I’ve never read Dune.

02:57:02 I have to read it.

02:57:04 A Fire Upon the Deep is interesting.

02:57:09 Okay, I mean, look, everyone should read,

02:57:10 everyone should read Neuromancer.

02:57:11 Everyone should read Snow Crash.

02:57:12 If you haven’t read those, like start there.

02:57:15 Yeah, I haven’t read Snow Crash.

02:57:16 You haven’t read Snow Crash?

02:57:17 Oh, it’s, I mean, it’s very entertaining.

02:57:19 Gödel, Escher, Bach.

02:57:20 And if you want the controversial one,

02:57:22 Bronze Age Mindset.

02:57:25 All right, I’ll look into that one.

02:57:27 Those aren’t sci fi, but just to round out books.

02:57:30 So a bunch of people asked me on Twitter

02:57:34 and Reddit and so on for advice.

02:57:36 So what advice would you give a young person today

02:57:39 about life?

02:57:40 In other words, what, yeah, I mean, looking back,

02:57:47 especially when you were younger, you did,

02:57:50 and you continued it.

02:57:51 You’ve accomplished a lot of interesting things.

02:57:54 Is there some advice from those,

02:57:57 from that life of yours that you can pass on?

02:58:01 If college ever opens again,

02:58:03 I would love to give a graduation speech.

02:58:07 At that point, I will put a lot of somewhat satirical effort

02:58:11 into this question.

02:58:12 Yeah, at this, you haven’t written anything at this point.

02:58:15 Oh, you know what?

02:58:16 Always wear sunscreen.

02:58:18 This is water.

02:58:19 Pick your plagiarizing.

02:58:21 I mean, you know, but that’s the,

02:58:23 that’s the like clean your room.

02:58:26 You know, yeah, you can plagiarize from all of this stuff.

02:58:28 And it’s, there is no,

02:58:35 self help books aren’t designed to help you.

02:58:37 They’re designed to make you feel good.

02:58:40 Like whatever advice I could give, you already know.

02:58:44 Everyone already knows.

02:58:45 Sorry, it doesn’t feel good.

02:58:50 Right?

02:58:51 Like, you know, you know,

02:58:53 if I tell you that you should, you know,

02:58:56 eat well and read more and it’s not gonna do anything.

02:59:01 I think the whole like genre

02:59:03 of those kinds of questions is meaningless.

02:59:07 I don’t know.

02:59:08 If anything, it’s don’t worry so much about that stuff.

02:59:10 Don’t be so caught up in your head.

02:59:12 Right.

02:59:13 I mean, you’re, yeah.

02:59:14 In a sense that your whole life,

02:59:16 your whole existence is like a moving version of that advice.

02:59:20 I don’t know.

02:59:23 There’s something, I mean,

02:59:25 there’s something in you that resists

02:59:27 that kind of thinking and that in itself is,

02:59:30 it’s just illustrative of who you are.

02:59:34 And there’s something to learn from that.

02:59:36 I think you’re clearly not overthinking stuff.

02:59:41 Yeah.

02:59:42 And you know what?

02:59:42 There’s a gut thing.

02:59:43 Even when I talk about my advice,

02:59:45 I’m like, my advice is only relevant to me.

02:59:47 It’s not relevant to anybody else.

02:59:48 I’m not saying you should go out.

02:59:49 If you’re the kind of person who overthinks things

02:59:51 to stop overthinking things, it’s not bad.

02:59:54 It doesn’t work for me.

02:59:54 Maybe it works for you.

02:59:55 I don’t know.

02:59:57 Let me ask you about love.

02:59:59 Yeah.

03:00:02 I think last time we talked about the meaning of life

03:00:05 and it was kind of about winning.

03:00:08 Of course.

03:00:10 I don’t think I’ve talked to you about love much,

03:00:13 whether romantic or just love

03:00:15 for the common humanity amongst us all.

03:00:18 What role has love played in your life?

03:00:21 In this quest for winning, where does love fit in?

03:00:26 Well, the word love, I think means several different things.

03:00:29 There’s love in the sense of, maybe I could just say,

03:00:32 there’s like love in the sense of opiates

03:00:34 and love in the sense of oxytocin

03:00:37 and then love in the sense of,

03:00:43 maybe like a love for math.

03:00:44 I don’t think it fits into either

03:00:45 of those first two paradigms.

03:00:49 So each of those, have they given something to you

03:00:55 in your life?

03:00:56 I’m not that big of a fan of the first two.

03:01:00 Why?

03:01:03 The same reason I’m not a fan of,

03:01:06 the same reason I don’t do opiates and don’t take ecstasy.

03:01:09 And there were times, look, I’ve tried both.

03:01:14 I liked opiates way more than I liked ecstasy,

03:01:18 but they’re not, the ethical life is the productive life.

03:01:24 So maybe that’s my problem with those.

03:01:27 And then like, yeah, a sense of, I don’t know,

03:01:29 like abstract love for humanity.

03:01:32 I mean, the abstract love for humanity,

03:01:34 I’m like, yeah, I’ve always felt that.

03:01:36 And I guess it’s hard for me to imagine

03:01:39 not feeling it and maybe there’s people who don’t.

03:01:41 And I don’t know.

03:01:43 Yeah, that’s just like a background thing that’s there.

03:01:46 I mean, since we brought up drugs, let me ask you,

03:01:51 this is becoming more and more a part of my life

03:01:54 because I’m talking to a few researchers

03:01:55 that are working on psychedelics.

03:01:57 I’ve eaten shrooms a couple of times

03:02:00 and it was fascinating to me that like the mind can go,

03:02:04 like just fascinating the mind can go to places

03:02:08 I didn’t imagine it could go.

03:02:09 And it was very friendly and positive and exciting

03:02:12 and everything was kind of hilarious in the place.

03:02:16 Wherever my mind went, that’s where I went.

03:02:18 Is, what do you think about psychedelics?

03:02:20 Do you think they have, where do you think the mind goes?

03:02:24 Have you done psychedelics?

03:02:25 Where do you think the mind goes?

03:02:28 Is there something useful to learn about the places it goes

03:02:32 once you come back?

03:02:33 I find it interesting that this idea

03:02:38 that psychedelics have something to teach

03:02:40 is almost unique to psychedelics, right?

03:02:43 People don’t argue this about amphetamines.

03:02:46 And I’m not really sure why.

03:02:50 I think all of the drugs have lessons to teach.

03:02:53 I think there’s things to learn from opiates.

03:02:55 I think there’s things to learn from amphetamines.

03:02:56 I think there’s things to learn from psychedelics,

03:02:58 things to learn from marijuana.

03:03:02 But also at the same time recognize

03:03:05 that I don’t think you’re learning things about the world.

03:03:07 I think you’re learning things about yourself.

03:03:09 Yes.

03:03:10 And, you know, what’s the, even, it might’ve even been,

03:03:15 might’ve even been a Timothy Leary quote.

03:03:17 I don’t wanna misquote him,

03:03:18 but the idea is basically like, you know,

03:03:20 everybody should look behind the door,

03:03:21 but then once you’ve seen behind the door,

03:03:22 you don’t need to keep going back.

03:03:26 So, I mean, and that’s my thoughts on all real drug use too.

03:03:29 Except maybe for caffeine.

03:03:32 It’s a little experience that is good to have, but.

03:03:37 Oh yeah, no, I mean, yeah, I guess,

03:03:39 yes, psychedelics are definitely.

03:03:41 So you’re a fan of new experiences, I suppose.

03:03:43 Yes.

03:03:44 Because they all contain a little,

03:03:45 especially the first few times,

03:03:47 it contains some lessons that can be picked up.

03:03:49 Yeah, and I’ll revisit psychedelics maybe once a year.

03:03:55 Usually smaller doses.

03:03:58 Maybe they turn up the learning rate of your brain.

03:04:01 I’ve heard that, I like that.

03:04:03 Yeah, that’s cool.

03:04:04 Big learning rates have pros and cons.

03:04:07 Last question, and this is a little weird one,

03:04:09 but you’ve called yourself crazy in the past.

03:04:14 First of all, on a scale of one to 10,

03:04:16 how crazy would you say are you?

03:04:18 Oh, I mean, it depends how you, you know,

03:04:19 when you compare me to Elon Musk and Anthony Levandowski,

03:04:21 not so crazy.

03:04:23 So like a seven?

03:04:25 Let’s go with six.

03:04:27 Six, six, six.

03:04:29 What?

03:04:31 Well, I like seven, seven’s a good number.

03:04:32 Seven, all right, well, I’m sure day by day it changes,

03:04:36 right, so, but you’re in that area.

03:04:42 In thinking about that,

03:04:43 what do you think is the role of madness?

03:04:45 Is that a feature or a bug

03:04:48 if you were to dissect your brain?

03:04:51 So, okay, from like a mental health lens on crazy,

03:04:57 I’m not sure I really believe in that.

03:04:59 I’m not sure I really believe in like a lot of that stuff.

03:05:02 Right, this concept of, okay, you know,

03:05:05 when you get over to like hardcore bipolar and schizophrenia,

03:05:09 these things are clearly real, somewhat biological.

03:05:13 And then over here on the spectrum,

03:05:14 you have like ADD and oppositional defiant disorder

03:05:18 and these things that are like,

03:05:20 wait, this is normal spectrum human behavior.

03:05:22 Like this isn’t, you know, where’s the line here

03:05:28 and why is this like a problem?

03:05:31 So there’s this whole, you know,

03:05:33 the neurodiversity of humanity is huge.

03:05:35 Like people think I’m always on drugs.

03:05:37 People are saying this to me on my streams.

03:05:38 And I’m like, guys, you know,

03:05:39 like I’m real open with my drug use.

03:05:41 I’d tell you if I was on drugs and yeah,

03:05:44 I had like a cup of coffee this morning,

03:05:45 but other than that, this is just me.

03:05:47 You’re witnessing my brain in action.

03:05:51 So the word madness doesn’t even make sense

03:05:55 in the rich neurodiversity of humans.

03:05:59 I think it makes sense, but only for like

03:06:04 some insane extremes.

03:06:07 Like if you are actually like visibly hallucinating,

03:06:11 you know, that’s okay.

03:06:15 But there is the kind of spectrum on which you stand out.

03:06:17 Like that’s like, if I were to look, you know,

03:06:22 at decorations on a Christmas tree or something like that,

03:06:25 like if you were a decoration, that would catch my eye.

03:06:28 Like that thing is sparkly, whatever the hell that thing is.

03:06:35 There’s something to that.

03:06:37 Just like refusing to be boring

03:06:42 or maybe boring is the wrong word,

03:06:43 but to yeah, I mean, be willing to sparkle, you know?

03:06:52 It’s like somewhat constructed.

03:06:54 I mean, I am who I choose to be.

03:06:57 I’m gonna say things as true as I can see them.

03:07:01 I’m not gonna lie.

03:07:04 But that’s a really important feature in itself.

03:07:06 So like whatever the neurodiversity of your,

03:07:09 whatever your brain is, not putting constraints on it

03:07:13 that force it to fit into the mold of what society is like,

03:07:18 defines what you’re supposed to be.

03:07:20 So you’re one of the specimens

03:07:22 that doesn’t mind being yourself.

03:07:27 Being right is super important,

03:07:31 except at the expense of being wrong.

03:07:37 Without breaking that apart,

03:07:38 I think it’s a beautiful way to end it.

03:07:40 George, you’re one of the most special humans I know.

03:07:43 It’s truly an honor to talk to you.

03:07:44 Thanks so much for doing it.

03:07:45 Thank you for having me.

03:07:47 Thanks for listening to this conversation with George Hotz

03:07:50 and thank you to our sponsors,

03:07:52 Four Sigmatic, which is the maker

03:07:54 of delicious mushroom coffee,

03:07:57 Decoding Digital, which is a tech podcast

03:07:59 that I listen to and enjoy,

03:08:02 and ExpressVPN, which is the VPN I’ve used for many years.

03:08:07 Please check out these sponsors in the description

03:08:09 to get a discount and to support this podcast.

03:08:13 If you enjoy this thing, subscribe on YouTube,

03:08:15 review it with five stars on Apple Podcast,

03:08:17 follow on Spotify, support on Patreon,

03:08:20 or connect with me on Twitter at Lex Friedman.

03:08:24 And now, let me leave you with some words

03:08:27 from the great and powerful Linus Torvalds.

03:08:30 Talk is cheap, show me the code.

03:08:33 Thank you for listening and hope to see you next time.