Jim Keller: The Future of Computing, AI, Life, and Consciousness #162

Transcript

00:00:00 The following is a conversation with Jim Keller,

00:00:02 his second time on the podcast.

00:00:04 Jim is a legendary microprocessor architect

00:00:08 and is widely seen as one of the greatest

00:00:11 engineering minds of the computing age.

00:00:14 In a peculiar twist of space-time in our simulation,

00:00:18 Jim is also the brother-in-law of Jordan Peterson.

00:00:22 We talk about this and about computing,

00:00:25 artificial intelligence, consciousness, and life.

00:00:29 Quick mention of our sponsors.

00:00:31 Athletic Greens, all-in-one nutrition drink,

00:00:33 Brooklinen sheets, ExpressVPN,

00:00:36 and Belcampo Grass Fed Meat.

00:00:39 Click the sponsor links to get a discount

00:00:41 and to support this podcast.

00:00:43 As a side note, let me say that Jim is someone who,

00:00:46 on a personal level, inspired me to be myself.

00:00:50 There was something in his words, on and off the mic,

00:00:53 or perhaps that he even paid attention to me at all,

00:00:56 that almost told me, you’re all right, kid.

00:00:59 A kind of pat on the back that can make the difference

00:01:01 between a mind that flourishes

00:01:03 and a mind that is broken down

00:01:05 by the cynicism of the world.

00:01:08 So I guess that’s just my brief few words

00:01:10 of thank you to Jim, and in general,

00:01:12 gratitude for the people who have given me a chance

00:01:15 on this podcast, in my work, and in life.

00:01:19 If you enjoy this thing, subscribe on YouTube,

00:01:21 review on Apple Podcasts, follow on Spotify,

00:01:24 support on Patreon, or connect with me

00:01:26 on Twitter @lexfridman.

00:01:28 And now, here’s my conversation with Jim Keller.

00:01:33 What’s the value and effectiveness

00:01:35 of theory versus engineering, this dichotomy,

00:01:38 in building good software or hardware systems?

00:01:43 Well, good design is both.

00:01:46 I guess that’s pretty obvious.

00:01:48 By engineering, do you mean reduction to practice

00:01:51 of known methods?

00:01:53 And then science is the pursuit of discovering things

00:01:55 that people don’t understand.

00:01:57 Or solving unknown problems.

00:02:00 Definitions are interesting here,

00:02:01 but I was thinking more in theory,

00:02:04 constructing models that kind of generalize

00:02:06 about how things work.

00:02:08 And engineering is actually building stuff.

00:02:12 The pragmatic, like, okay, we have these nice models,

00:02:16 but how do we actually get things to work?

00:02:17 Maybe economics is a nice example.

00:02:20 Like, economists have all these models

00:02:22 of how the economy works,

00:02:23 and how different policies will have an effect,

00:02:26 but then there’s the actual, okay,

00:02:29 let’s call it engineering,

00:02:30 of like, actually deploying the policies.

00:02:33 So computer design is almost all engineering.

00:02:36 And reduction to practice of known methods.

00:02:38 Now, because of the complexity of the computers we built,

00:02:43 you know, you could think you’re,

00:02:44 well, we’ll just go write some code,

00:02:46 and then we’ll verify it, and then we’ll put it together,

00:02:49 and then you find out that the combination

00:02:50 of all that stuff is complicated.

00:02:53 And then you have to be inventive

00:02:54 to figure out how to do it, right?

00:02:56 So that definitely happens a lot.

00:02:59 And then, every so often, some big idea happens.

00:03:04 But it might be one person.

00:03:06 And that idea is in the space of engineering,

00:03:08 or is it in the space of…

00:03:10 Well, I’ll give you an example.

00:03:11 So one of the limits of computer performance

00:03:13 is branch prediction.

00:03:14 So, and there’s a whole bunch of ideas

00:03:17 about how well you could predict a branch.

00:03:19 And people said, there’s a limit to it,

00:03:21 it’s an asymptotic curve.

00:03:23 And somebody came up with a better way

00:03:24 to do branch prediction, it was a lot better.

00:03:28 And he published a paper on it,

00:03:29 and every computer in the world now uses it.

00:03:32 And it was one idea.

00:03:34 So the engineers who build branch prediction hardware

00:03:37 were happy to drop the one kind of training array

00:03:40 and put it in another one.

00:03:42 So it was a real idea.

00:03:44 And branch prediction is one of the key problems

00:03:48 underlying all of sort of the lowest level of software.

00:03:51 It boils down to branch prediction.

00:03:53 Boils down to uncertainty.

00:03:54 Computers are limited by…

00:03:56 Single thread computer is limited by two things.

00:03:58 The predictability of the path of the branches

00:04:01 and the predictability of the locality of data.

00:04:05 So we have predictors that now predict

00:04:07 both of those pretty well.
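To make the branch-predictability idea concrete, here is a sketch of the textbook two-bit saturating-counter predictor. This is the classic baseline scheme, not the advanced predictor from the paper Keller alludes to; the class name and table size are made up for the example.

```python
# Toy two-bit saturating-counter branch predictor.
# Each branch address indexes a small table of 2-bit counters that
# learn whether that branch is usually taken.
class TwoBitPredictor:
    def __init__(self, table_bits=10):
        self.mask = (1 << table_bits) - 1
        # Counters start at 1 ("weakly not-taken"):
        # 0-1 predict not-taken, 2-3 predict taken.
        self.counters = [1] * (1 << table_bits)

    def predict(self, pc):
        return self.counters[pc & self.mask] >= 2

    def update(self, pc, taken):
        i = pc & self.mask
        if taken:
            self.counters[i] = min(3, self.counters[i] + 1)
        else:
            self.counters[i] = max(0, self.counters[i] - 1)

# A loop branch taken 99 times and falling through once is
# mispredicted only on the first and last iteration.
pred = TwoBitPredictor()
hits = 0
for trial in range(100):
    outcome = trial < 99          # taken for the first 99 iterations
    hits += pred.predict(0x400) == outcome
    pred.update(0x400, outcome)
print(hits)                       # → 98 correct out of 100
```

Schemes like the one in the paper Keller mentions correlate across branch histories instead of tracking each branch in isolation, which is what pushed past the asymptote people assumed.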

00:04:09 So memory is a couple hundred cycles away,

00:04:11 local cache is a couple cycles away.

00:04:14 When you’re executing fast,

00:04:15 virtually all the data has to be in the local cache.

00:04:19 So a simple program says,

00:04:21 add one to every element in an array,

00:04:23 it’s really easy to see what the stream of data will be.

00:04:26 But you might have a more complicated program

00:04:28 that says, get an element of this array,

00:04:31 look at something, make a decision,

00:04:32 go get another element, it’s kind of random.

00:04:35 And you can think, that’s really unpredictable.

00:04:37 And then you make this big predictor

00:04:39 that looks at this kind of pattern and you realize,

00:04:41 well, if you get this data and this data,

00:04:43 then you probably want that one.

00:04:44 And if you get this one and this one and this one,

00:04:46 you probably want that one.
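The data-side predictability Keller describes can be illustrated with a minimal stride prefetcher. Real hardware prefetchers and the correlating predictors he is talking about are far more elaborate; the names and the two-confirmation rule here are simplifications for the sketch.

```python
class StridePrefetcher:
    """Guess the next address after seeing the same stride twice in a row."""
    def __init__(self):
        self.last_addr = None
        self.stride = None

    def observe(self, addr):
        guess = None
        if self.last_addr is not None:
            stride = addr - self.last_addr
            if stride == self.stride:   # stride confirmed, predict ahead
                guess = addr + stride
            self.stride = stride
        self.last_addr = addr
        return guess

def correct_guesses(addresses):
    """Count how many accesses were correctly predicted one step ahead."""
    pf = StridePrefetcher()
    hits, guess = 0, None
    for addr in addresses:
        hits += (guess == addr)
        guess = pf.observe(addr)
    return hits

# "Add one to every element of an array": constant 8-byte stride,
# so after a short warm-up every access is predicted.
print(correct_guesses([1000 + 8 * i for i in range(8)]))   # → 5
# Data-dependent walk with no repeating stride: nothing is predicted.
print(correct_guesses([10, 30, 40, 90, 95, 200]))          # → 0
```

The "big predictor that looks at this kind of pattern" in the conversation is essentially this idea generalized: instead of a single stride, it correlates sequences of past accesses with the access that tended to follow them.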

00:04:47 And is that theory or is that engineering?

00:04:49 Like the paper that was written,

00:04:51 was it asymptotic kind of discussion

00:04:54 or is it more like, here’s a hack that works well?

00:04:57 It’s a little bit of both.

00:04:59 Like there’s information theory in it, I think somewhere.

00:05:01 Okay, so it’s actually trying to prove some kind of stuff.

00:05:04 But once you know the method,

00:05:06 implementing it is an engineering problem.

00:05:09 Now there’s a flip side of this,

00:05:10 which is in a big design team,

00:05:13 what percentage of people think

00:05:14 their plan or their life’s work is engineering

00:05:20 versus inventing things?

00:05:23 So lots of companies will reward you for filing patents.

00:05:27 Some, many big companies get stuck

00:05:29 because to get promoted,

00:05:30 you have to come up with something new.

00:05:32 And then what happens is everybody’s trying

00:05:34 to do some random new thing,

00:05:36 99% of which doesn’t matter.

00:05:39 And the basics get neglected.

00:05:41 Or there’s a dichotomy, they think like the cell library

00:05:47 and the basic CAD tools or basic software validation methods,

00:05:53 that’s simple stuff.

00:05:54 They wanna work on the exciting stuff.

00:05:56 And then they spend lots of time

00:05:58 trying to figure out how to patent something.

00:06:00 And that’s mostly useless.

00:06:02 But the breakthrough is on simple stuff.

00:06:04 No, no, you have to do the simple stuff really well.

00:06:08 If you’re building a building out of bricks,

00:06:11 you want great bricks.

00:06:13 So you go to two places that sell bricks.

00:06:14 So one guy says, yeah, they’re over there in an ugly pile.

00:06:17 And the other guy is like lovingly tells you

00:06:19 about the 50 kinds of bricks and how hard they are

00:06:22 and how beautiful they are and how square they are.

00:06:26 Which one are you gonna buy bricks from?

00:06:28 Which is gonna make a better house?

00:06:30 So you’re talking about the craftsman,

00:06:32 the person who understands bricks,

00:06:33 who loves bricks, who loves the varieties.

00:06:35 That’s a good word.

00:06:36 Good engineering is great craftsmanship.

00:06:39 And when you start thinking engineering is about invention

00:06:44 and you set up a system that rewards invention,

00:06:47 the craftsmanship gets neglected.

00:06:50 Okay, so maybe one perspective is the theory,

00:06:53 the science overemphasizes invention

00:06:57 and engineering emphasizes craftsmanship.

00:07:00 And therefore, so it doesn’t matter what you do,

00:07:03 theory, engineering. Well, everybody does.

00:07:05 Like, the tech rags are always talking

00:07:06 about some breakthrough or innovation

00:07:09 and everybody thinks that’s the most important thing.

00:07:12 But the number of innovative ideas

00:07:13 is actually relatively low.

00:07:15 We need them, right?

00:07:17 And innovation creates a whole new opportunity.

00:07:19 Like when some guy invented the internet, right?

00:07:24 Like that was a big thing.

00:07:25 The million people that wrote software against that

00:07:28 were mostly doing engineering software writing.

00:07:31 So the elaboration of that idea was huge.

00:07:34 I don’t know if you know Brendan Eich,

00:07:35 he wrote JavaScript in 10 days.

00:07:38 That’s an interesting story.

00:07:39 It makes me wonder, and it was famously for many years

00:07:43 considered to be a pretty crappy programming language.

00:07:47 Still is perhaps.

00:07:48 It’s been improving sort of consistently.

00:07:51 But the interesting thing about that guy is,

00:07:55 you know, he doesn’t get any awards.

00:07:58 You don’t get a Nobel Prize or a Fields Medal or.

00:08:01 For inventing a crappy piece of, you know, software code.

00:08:06 That is currently the number one programming language

00:08:08 in the world and runs,

00:08:10 now is increasingly running the backend of the internet.

00:08:13 Well, does he know why everybody uses it?

00:08:17 Like that would be an interesting thing.

00:08:19 Was it the right thing at the right time?

00:08:22 Cause like when stuff like JavaScript came out,

00:08:24 like there was a move from, you know,

00:08:26 writing C programs and C++ to what they call

00:08:30 managed code frameworks,

00:08:32 where you write simple code, it might be interpreted,

00:08:35 it has lots of libraries, productivity is high,

00:08:37 and you don’t have to be an expert.

00:08:39 So, you know, Java was supposed to solve

00:08:41 all the world’s problems.

00:08:42 It was complicated.

00:08:43 JavaScript came out, you know,

00:08:45 after a bunch of other scripting languages.

00:08:47 I’m not an expert on it.

00:08:49 But was it the right thing at the right time?

00:08:51 Or was there something, you know, clever?

00:08:54 Cause he wasn’t the only one.

00:08:56 There’s a few elements.

00:08:57 And maybe if he figured out what it was,

00:08:59 then he’d get a prize.

00:09:02 Like that.

00:09:02 Yeah, you know, maybe his problem is he hasn’t defined this.

00:09:06 Or he just needs a good promoter.

00:09:09 Well, I think there was a bunch of blog posts

00:09:11 written about it, which is like,

00:09:13 worse is better, which is like doing the crappy thing fast.

00:09:19 Just like hacking together the thing

00:09:21 that answers some of the needs.

00:09:23 And then iterating over time, listening to developers.

00:09:26 Like listening to people who actually use the thing.

00:09:28 This is something you can do more in software.

00:09:31 But the right time, like you have to sense,

00:09:33 you have to have a good instinct

00:09:35 of when is the right time for the right tool.

00:09:37 And make it super simple.

00:09:40 And just get it out there.

00:09:42 The problem is, this is true with hardware.

00:09:45 This is less true with software.

00:09:46 Is there’s backward compatibility

00:09:48 that just drags behind you as, you know,

00:09:51 as you try to fix all the mistakes of the past.

00:09:53 But the timing.

00:09:55 It was good.

00:09:56 There’s something about that.

00:09:57 And it wasn’t accidental.

00:09:58 You have to like give yourself over to the,

00:10:02 you have to have this like broad sense

00:10:05 of what’s needed now.

00:10:07 Both scientifically and like the community.

00:10:10 And just like this, it was obvious that there was no,

00:10:15 the interesting thing about JavaScript

00:10:17 is everything that ran in the browser at the time,

00:10:20 like Java and I think other like Scheme,

00:10:24 other programming languages,

00:10:25 they were all in a separate external container.

00:10:30 And then JavaScript was literally

00:10:32 just injected into the webpage.

00:10:34 It was the dumbest possible thing

00:10:36 running in the same thread as everything else.

00:10:39 And like it was inserted as a comment.

00:10:43 So JavaScript code is inserted as a comment in the HTML code.

00:10:47 And it was, I mean, there’s,

00:10:50 it’s either genius or super dumb, but it’s like.

00:10:53 Right, so it had no apparatus for like a virtual machine

00:10:55 and container, it just executed in the framework

00:10:58 of the program that’s already running.

00:10:59 Yeah, that’s cool.

00:11:00 And then because something about that accessibility,

00:11:04 the ease of its use resulted in then developers innovating

00:11:10 of how to actually use it.

00:11:11 I mean, I don’t even know what to make of that,

00:11:13 but it does seem to echo across different software,

00:11:18 like stories of different software.

00:11:19 PHP has the same story, really crappy language.

00:11:22 They just took over the world.

00:11:25 I always have a joke that the random length instructions,

00:11:28 variable length instructions, that always wins,

00:11:30 even though they’re obviously worse.

00:11:33 Like nobody knows why.

00:11:34 x86 is arguably the worst architecture on the planet.

00:11:38 It’s one of the most popular ones.

00:11:40 Well, I mean, isn’t that also the story of RISC versus CISC,

00:11:43 I mean, is that simplicity?

00:11:46 There’s something about simplicity that us

00:11:49 in this evolutionary process is valued.

00:11:53 If it’s simple, it spreads faster, it seems like.

00:11:58 Or is that not always true?

00:11:59 Not always true.

00:12:01 Yeah, it could be simple is good, but too simple is bad.

00:12:04 So why did RISC win, you think, so far?

00:12:06 Did RISC win?

00:12:08 In the long arc of history.

00:12:10 We don’t know.

00:12:11 So who’s gonna win?

00:12:12 What’s RISC, what’s CISC, and who’s gonna win in that space

00:12:15 in these instruction sets?

00:12:17 AI software’s gonna win, but there’ll be little computers

00:12:21 that run little programs like normal all over the place.

00:12:24 But we’re going through another transformation, so.

00:12:28 But you think instruction sets underneath it all will change?

00:12:32 Yeah, they evolve slowly.

00:12:33 They don’t matter very much.

00:12:35 They don’t matter very much, okay.

00:12:36 I mean, the limits of performance are predictability

00:12:40 of instructions and data.

00:12:41 I mean, that’s the big thing.

00:12:43 And then the usability of it is some quality of design,

00:12:49 quality of tools, availability.

00:12:52 Like right now, x86 is proprietary with Intel and AMD,

00:12:56 but they can change it any way they want independently.

00:12:59 ARM is proprietary to ARM,

00:13:01 and they won’t let anybody else change it.

00:13:03 So it’s like a sole point.

00:13:05 And RISC-V is open source, so anybody can change it,

00:13:09 which is super cool.

00:13:10 But that also might mean it gets changed

00:13:12 too many random ways that there’s no common subset of it

00:13:16 that people can use.

00:13:17 Do you like open or do you like closed?

00:13:19 Like if you were to bet all your money on one

00:13:21 or the other, RISC-V versus it?

00:13:23 No idea.

00:13:24 It’s case dependent?

00:13:25 Well, x86, oddly enough, when Intel first started

00:13:27 developing it, they licensed like seven people.

00:13:30 So it was the open architecture.

00:13:33 And then they moved faster than others

00:13:35 and also bought one or two of them.

00:13:37 But there was seven different people making x86

00:13:40 because at the time there was 6502 and Z80s and 8086.

00:13:46 And you could argue everybody thought Z80

00:13:49 was the better instruction set,

00:13:50 but that was proprietary to one place.

00:13:54 Oh, and the 6800.

00:13:56 So there’s like four or five different microprocessors.

00:13:59 Intel went open, got the market share

00:14:02 because people felt like they had multiple sources from it,

00:14:04 and then over time it narrowed down to two players.

00:14:07 So why, you as a historian, why did Intel win for so long

00:14:14 with their processors?

00:14:17 I mean, I mean.

00:14:18 They were great.

00:14:18 Their process development was great.

00:14:21 Oh, so it’s just looking back to JavaScript

00:14:23 and what I like is Microsoft and Netscape

00:14:26 and all these internet browsers.

00:14:28 Microsoft won the browser game

00:14:31 because they aggressively stole other people’s ideas

00:14:35 like right after they did it.

00:14:37 You know, I don’t know

00:14:39 if Intel was stealing other people’s ideas.

00:14:41 They started making.

00:14:42 In a good way, stealing in a good way just to clarify.

00:14:43 They started making RAMs, random access memories.

00:14:48 And then at the time

00:14:50 when the Japanese manufacturers came up,

00:14:52 you know, they were getting out competed on that

00:14:54 and they pivoted the microprocessors

00:14:56 and they made the first, you know,

00:14:57 integrated microprocessor that ran programs.

00:14:59 It was the 4004 or something.

00:15:03 Who was behind that pivot?

00:15:04 That’s a hell of a pivot.

00:15:05 Andy Grove and he was great.

00:15:08 That’s a hell of a pivot.

00:15:10 And then they led semiconductor industry.

00:15:13 Like they were just a little company, IBM,

00:15:15 all kinds of big companies had boatloads of money

00:15:18 and they out innovated everybody.

00:15:21 Out innovated, okay.

00:15:22 Yeah, yeah.

00:15:23 So it’s not like marketing, it’s not any of that stuff.

00:15:26 Their processor designs were pretty good.

00:15:29 I think the, you know, Core 2 was probably the first one

00:15:34 I thought was great.

00:15:36 It was a really fast processor and then Haswell was great.

00:15:40 What makes a great processor in that?

00:15:42 Oh, if you just look at it,

00:15:43 it’s performance versus everybody else.

00:15:45 It’s, you know, the size of it, the usability of it.

00:15:49 So it’s not specific,

00:15:50 some kind of element that makes you beautiful.

00:15:52 It’s just like literally just raw performance.

00:15:55 Is that how you think about processors?

00:15:57 It’s just like raw performance?

00:15:59 Of course.

00:16:01 It’s like a horse race.

00:16:02 The fastest one wins.

00:16:04 Now.

00:16:05 You don’t care how.

00:16:05 Just as long as it wins.

00:16:08 Well, there’s the fastest in the environment.

00:16:10 Like, you know, for years you made the fastest one you could

00:16:13 and then people started to have power limits.

00:16:14 So then you made the fastest at the right power point.

00:16:17 And then when we started doing multi processors,

00:16:20 like if you could scale your processors

00:16:23 more than the other guy,

00:16:24 you could be 10% faster on like a single thread,

00:16:26 but you have more threads.

00:16:28 So there’s lots of variability.

00:16:30 And then ARM really explored,

00:16:34 like, you know, they have the A series

00:16:36 and the R series and the M series,

00:16:38 like a family of processors

00:16:40 for all these different design points

00:16:41 from like unbelievably small and simple.

00:16:44 And so then when you’re doing the design,

00:16:46 it’s sort of like this big palette of CPUs.

00:16:49 Like they’re the only ones with a credible,

00:16:51 you know, top to bottom palette.

00:16:54 What do you mean a credible top to bottom?

00:16:56 Well, there’s people who make microcontrollers

00:16:58 that are small, but they don’t have a fast one.

00:17:00 There’s people who make fast processors,

00:17:02 but don’t have a medium one or a small one.

00:17:04 Is that hard to do, that full palette?

00:17:07 That seems like a…

00:17:08 Yeah, it’s a lot of different.

00:17:09 So what’s the difference in the ARM folks and Intel

00:17:13 in terms of the way they’re approaching this problem?

00:17:15 Well, Intel, almost all their processor designs

00:17:19 were, you know, very custom high end,

00:17:21 you know, for the last 15, 20 years.

00:17:23 So the fastest horse possible.

00:17:24 Yeah.

00:17:25 In one horse race.

00:17:27 Yeah, and then architecturally they’re really good,

00:17:30 but the company itself was fairly insular

00:17:33 to what’s going on in the industry with CAD tools and stuff.

00:17:36 And there’s this debate about custom design

00:17:38 versus synthesis and how do you approach that?

00:17:41 I’d say Intel was slow on getting to synthesize processors.

00:17:45 ARM came in from the bottom and they generated IP,

00:17:49 which went to all kinds of customers.

00:17:50 So they had very little say

00:17:52 on how the customer implemented their IP.

00:17:54 So ARM is super friendly to the synthesis IP environment.

00:17:59 Whereas Intel said,

00:18:00 we’re gonna make this great client chip or server chip

00:18:03 with our own CAD tools, with our own process,

00:18:05 with our own, you know, other supporting IP

00:18:08 and everything only works with our stuff.

00:18:11 So is that, is ARM winning the mobile platform space

00:18:16 in terms of process?

00:18:17 Yeah.

00:18:18 And so in that, what you’re describing

00:18:21 is why they’re winning.

00:18:22 Well, they had lots of people doing lots

00:18:24 of different experiments.

00:18:26 So they controlled the processor architecture and IP,

00:18:29 but they let people put in lots of different chips.

00:18:32 And there was a lot of variability in what happened there.

00:18:35 Whereas Intel, when they made their mobile,

00:18:37 their foray into mobile,

00:18:38 they had one team doing one part, right?

00:18:41 So it wasn’t 10 experiments.

00:18:43 And then their mindset was PC mindset,

00:18:45 Microsoft software mindset.

00:18:48 And that brought a whole bunch of things along

00:18:49 that the mobile world and the embedded world don’t do.

00:18:52 Do you think it was possible for Intel to pivot hard

00:18:55 and win the mobile market?

00:18:58 That’s a hell of a difficult thing to do, right?

00:19:00 For a huge company to just pivot.

00:19:03 I mean, it’s so interesting to,

00:19:05 because we’ll talk about your current work.

00:19:07 It’s like, it’s clear that PCs were dominating

00:19:11 for several decades, like desktop computers.

00:19:14 And then mobile, it’s unclear.

00:19:17 It’s a leadership question.

00:19:19 Like Apple under Steve Jobs, when he came back,

00:19:23 they pivoted multiple times.

00:19:25 You know, they built iPads and iTunes and phones

00:19:28 and tablets and great Macs.

00:19:30 Like who knew computers should be made out of aluminum?

00:19:33 Nobody knew that.

00:19:35 But they’re great.

00:19:36 It’s super fun.

00:19:37 That was Steve?

00:19:38 Yeah, Steve Jobs.

00:19:38 Like they pivoted multiple times.

00:19:41 And you know, the old Intel, they did that multiple times.

00:19:45 They made DRAMs and processors and processes

00:19:48 and I gotta ask this,

00:19:50 what was it like working with Steve Jobs?

00:19:53 I didn’t work with him.

00:19:54 Did you interact with him?

00:19:55 Twice.

00:19:57 I said hi to him twice in the cafeteria.

00:19:59 What did he say?

00:20:01 Hi?

00:20:01 He said, hey fellas.

00:20:04 He was friendly.

00:20:05 He was wandering around and with somebody,

00:20:08 he couldn’t find a table because the cafeteria was packed

00:20:12 and I gave him my table.

00:20:13 But I worked for Mike Culbert, who talked to,

00:20:16 like Mike was the unofficial CTO of Apple

00:20:19 and a brilliant guy and he worked for Steve for 25 years,

00:20:22 maybe more and he talked to Steve multiple times a day

00:20:26 and he was one of the people who could put up with Steve’s,

00:20:29 let’s say, brilliance and intensity

00:20:31 and Steve really liked him and Steve trusted Mike

00:20:35 to translate the shit he thought up

00:20:39 into engineering products that work

00:20:40 and then Mike ran a group called Platform Architecture

00:20:43 and I was in that group.

00:20:44 So many times I’d be sitting with Mike

00:20:46 and the phone would ring and it’d be Steve

00:20:48 and Mike would hold the phone like this

00:20:50 because Steve would be yelling about something or other.

00:20:53 And then he would translate.

00:20:54 And he’d translate and then he would say,

00:20:55 Steve wants us to do this.

00:20:58 So.

00:20:59 Was Steve a good engineer or no?

00:21:01 I don’t know.

00:21:02 He was a great idea guy.

00:21:03 Idea person.

00:21:04 And he’s a really good selector for talent.

00:21:07 Yeah, that seems to be one of the key elements

00:21:09 of leadership, right?

00:21:10 And then he was a really good first principles guy.

00:21:12 Like somebody would say something couldn’t be done

00:21:15 and he would just think, that’s obviously wrong, right?

00:21:20 But you know, maybe it’s hard to do.

00:21:23 Maybe it’s expensive to do.

00:21:24 Maybe we need different people.

00:21:25 You know, there’s like a whole bunch of,

00:21:27 if you want to do something hard,

00:21:29 you know, maybe it takes time.

00:21:30 Maybe you have to iterate.

00:21:31 There’s a whole bunch of things you could think about

00:21:33 but saying it can’t be done is stupid.

00:21:36 How would you compare?

00:21:38 So it seems like Elon Musk is more engineering centric

00:21:42 but is also, I think he considers himself a designer too.

00:21:45 He has a design mind.

00:21:46 Steve Jobs feels like he’s much more idea space,

00:21:50 design space versus engineering.

00:21:52 Just make it happen.

00:21:53 Like the world should be this way.

00:21:55 Just figure it out.

00:21:57 But he used computers.

00:21:58 You know, he had computer people talk to him all the time.

00:22:01 Like Mike was a really good computer guy.

00:22:03 He knew what computers could do.

00:22:04 Computer meaning computer hardware?

00:22:06 Like hardware, software, all the pieces.

00:22:09 And then he would have an idea about

00:22:12 what could we do with this next.

00:22:14 That was grounded in reality.

00:22:16 It wasn’t like he was just finger painting on the wall

00:22:19 and wishing somebody would interpret it.

00:22:21 So he had this interesting connection

00:22:23 because he wasn’t a computer architect or designer

00:22:28 but he had an intuition from the computers we had

00:22:30 to what could happen.

00:22:31 And it’s interesting you say intuition

00:22:35 because it seems like he was pissing off a lot of engineers

00:22:39 in his intuition about what can and can’t be done.

00:22:43 Those, like the, what is all these stories

00:22:46 about like floppy disks and all that kind of stuff.

00:22:49 Yeah, so in Steve, the first round,

00:22:52 like he’d go into a lab and look at what’s going on

00:22:55 and hate it and fire people or ask somebody

00:22:59 in the elevator what they’re doing for Apple.

00:23:01 And not be happy.

00:23:03 When he came back, my impression was

00:23:06 is he surrounded himself

00:23:08 with a relatively small group of people

00:23:10 and didn’t really interact outside of that as much.

00:23:13 And then the joke was you’d see like somebody moving

00:23:16 a prototype through the quad with a black blanket over it.

00:23:20 And that was because it was secret, partly from Steve

00:23:24 because they didn’t want Steve to see it until it was ready.

00:23:26 Yeah, the dynamic with Jony Ive and Steve is interesting.

00:23:31 It’s like you don’t wanna,

00:23:34 he ruins as many ideas as he generates.

00:23:37 Yeah, yeah.

00:23:38 It’s a dangerous kind of line to walk.

00:23:42 If you have a lot of ideas,

00:23:43 like Gordon Bell was famous for ideas, right?

00:23:47 And it wasn’t that the percentage of good ideas

00:23:49 was way higher than anybody else.

00:23:51 It was, he had so many ideas

00:23:53 and he was also good at talking to people about it

00:23:55 and getting the filters right.

00:23:58 And seeing through stuff.

00:24:00 Whereas Elon was like, hey, I wanna build rockets.

00:24:03 So Steve would hire a bunch of rocket guys

00:24:05 and Elon would go read rocket manuals.

00:24:08 So Elon is a better engineer in a sense, like,

00:24:11 or like more like a love and passion for the manuals.

00:24:16 And the details.

00:24:17 The details, the craftsmanship too, right?

00:24:20 Well, I guess Steve had craftsmanship too,

00:24:22 but of a different kind.

00:24:24 What do you make of the,

00:24:26 just to stay in there for just a little longer,

00:24:27 what do you make of like the anger

00:24:29 and the passion and all of that?

00:24:30 The firing and the mood swings and the madness,

00:24:35 the being emotional and all of that, that’s Steve.

00:24:39 And I guess Elon too.

00:24:40 So what, is that a bug or a feature?

00:24:43 It’s a feature.

00:24:45 So there’s a graph, which is Y axis productivity,

00:24:50 X axis at zero is chaos,

00:24:52 and infinity is complete order, right?

00:24:56 So as you go from the origin,

00:25:00 as you improve order, you improve productivity.

00:25:04 And at some point, productivity peaks,

00:25:06 and then it goes back down again.

00:25:08 Too much order, nothing can happen.

00:25:09 Yes.

00:25:10 But the question is, how close to the chaos is that?

00:25:13 No, no, no, here’s the thing,

00:25:15 is once you start moving in the direction of order,

00:25:16 the force vector to drive you towards order is unstoppable.

00:25:21 Oh, so it’s a slippery slope.

00:25:22 And every organization will move to the place

00:25:24 where their productivity is stymied by order.

00:25:27 So you need a…

00:25:28 So the question is, who’s the counter force?

00:25:31 Because it also feels really good.

00:25:33 As you get more organized, the productivity goes up.

00:25:36 The organization feels it, they orient towards it, right?

00:25:39 They hired more people.

00:25:41 They got more guys who could run process,

00:25:42 you get bigger, right?

00:25:44 And then inevitably, the organization gets captured

00:25:49 by the bureaucracy that manages all the processes.

00:25:51 Yeah.

00:25:53 All right, and then humans really like that.

00:25:55 And so if you just walk into a room and say,

00:25:57 guys, love what you’re doing,

00:26:00 but I need you to have less order.

00:26:04 If you don’t have some force behind that,

00:26:06 nothing will happen.

00:26:09 I can’t tell you on how many levels that’s profound, so.

00:26:12 So that’s why I’d say it’s a feature.

00:26:14 Now, could you be nicer about it?

00:26:17 I don’t know, I don’t know any good examples

00:26:18 of being nicer about it.

00:26:20 Well, the funny thing is to get stuff done,

00:26:23 you need people who can manage stuff and manage people,

00:26:25 because humans are complicated.

00:26:26 They need lots of care and feeding that you need

00:26:28 to tell them they look nice and they’re doing good stuff

00:26:30 and pat them on the back, right?

00:26:33 I don’t know, you tell me, is that needed?

00:26:35 Oh yeah.

00:26:36 Do humans need that?

00:26:37 I had a friend, he started a magic group and he said,

00:26:39 I figured it out.

00:26:40 You have to praise them before they do anything.

00:26:43 I was waiting until they were done.

00:26:45 And they were always mad at me.

00:26:46 Now I tell them what a great job they’re doing

00:26:48 while they’re doing it.

00:26:49 But then you get stuck in that trap,

00:26:51 because then when they’re not doing something,

00:26:52 how do you confront these people?

00:26:54 I think a lot of people that had trauma

00:26:55 in their childhood would disagree with you,

00:26:57 successful people, that you need to first do the rough stuff

00:27:00 and then be nice later.

00:27:02 I don’t know.

00:27:03 Okay, but engineering companies are full of adults

00:27:05 who had all kinds of range of childhoods.

00:27:08 You know, most people had okay childhoods.

00:27:11 Well, I don’t know if…

00:27:12 Lots of people only work for praise, which is weird.

00:27:15 You mean like everybody.

00:27:16 I’m not that interested in it, but…

00:27:21 Well, you’re probably looking for somebody’s approval.

00:27:25 Even still.

00:27:27 Yeah, maybe.

00:27:28 I should think about that.

00:27:29 Maybe somebody who’s no longer with us kind of thing.

00:27:33 I don’t know.

00:27:34 I used to call up my dad and tell him what I was doing.

00:27:36 He was very excited about engineering and stuff.

00:27:38 You got his approval?

00:27:40 Uh, yeah, a lot.

00:27:42 I was lucky.

00:27:43 Like, he decided I was smart and unusual as a kid

00:27:47 and that was okay when I was really young.

00:27:50 So when I did poorly in school, I was dyslexic.

00:27:52 I didn’t read until I was third or fourth grade.

00:27:55 They didn’t care.

00:27:56 My parents were like, oh, he’ll be fine.

00:27:59 So I was lucky.

00:28:01 That was cool.

00:28:02 Is he still with us?

00:28:05 You miss him?

00:28:07 Sure, yeah.

00:28:08 He had Parkinson’s and then cancer.

00:28:10 His last 10 years were tough and it killed him.

00:28:15 Killing a man like that’s hard.

00:28:18 The mind?

00:28:19 Well, it’s pretty good.

00:28:21 Parkinson’s causes slow dementia

00:28:23 and the chemotherapy, I think, accelerated it.

00:28:29 But it was like hallucinogenic dementia.

00:28:31 So he was clever and funny and interesting

00:28:34 and it was pretty unusual.

00:28:37 Do you remember conversations?

00:28:39 From that time?

00:28:41 Like, do you have fond memories of the guy?

00:28:43 Yeah, oh yeah.

00:28:45 Anything come to mind?

00:28:48 A friend told me one time I could draw a computer

00:28:50 on the whiteboard faster than anybody he’d ever met.

00:28:52 I said, you should meet my dad.

00:28:54 Like, when I was a kid, he’d come home and say,

00:28:56 I was driving by this bridge and I was thinking about it

00:28:58 and he pulled out a piece of paper

00:28:59 and he’d draw the whole bridge.

00:29:01 He was a mechanical engineer.

00:29:03 And he would just draw the whole thing

00:29:05 and then he would tell me about it

00:29:06 and then tell me how he would have changed it.

00:29:08 And he had this idea that he could understand

00:29:11 and conceive anything.

00:29:13 And I just grew up with that, so that was natural.

00:29:16 So when I interview people, I ask them to draw a picture

00:29:19 of something they did on a whiteboard

00:29:21 and it’s really interesting.

00:29:22 Like, some people draw a little box

00:29:25 and then they’ll say, and then this talks to this

00:29:27 and I’ll be like, oh, this is frustrating.

00:29:30 I had this other guy come in one time, he says,

00:29:32 well, I designed a floating point in this chip

00:29:34 but I’d really like to tell you how the whole thing works

00:29:36 and then tell you how the floating point works inside of it.

00:29:38 Do you mind if I do that?

00:29:39 And he covered two whiteboards in like 30 minutes

00:29:42 and I hired him.

00:29:42 Like, he was great.

00:29:44 This is craftsman.

00:29:45 I mean, that’s the craftsmanship to that.

00:29:47 Yeah, but also the mental agility

00:29:49 to understand the whole thing,

00:29:51 put the pieces in context,

00:29:54 real view of the balance of how the design worked.

00:29:58 Because if you don’t understand it properly,

00:30:01 when you start to draw it,

00:30:02 you’ll fill up half the whiteboard

00:30:03 with like a little piece of it

00:30:05 and like your ability to lay it out in an understandable way

00:30:09 takes a lot of understanding, so.

00:30:11 And be able to, so zoom into the detail

00:30:13 and then zoom out to the big picture.

00:30:14 Zoom out really fast.

00:30:16 What about the impossible thing?

00:30:17 You see, your dad believed that you can do anything.

00:30:22 That’s a weird feature for a craftsman.

00:30:25 Yeah.

00:30:26 It seems that that echoes in your own behavior.

00:30:30 Like that’s the.

00:30:32 Well, it’s not that anybody can do anything right now, right?

00:30:36 It’s that if you work at it, you can get better at it

00:30:39 and there might not be a limit.

00:30:43 And they did funny things like,

00:30:44 like he always wanted to play piano.

00:30:46 So at the end of his life, he started playing the piano

00:30:48 when he had Parkinson’s and he was terrible.

00:30:51 But he thought if he really worked out in this life,

00:30:53 maybe the next life he’d be better at it.

00:30:56 He might be onto something.

00:30:57 Yeah, he enjoyed doing it.

00:31:00 Yeah.

00:31:01 It’s pretty funny.

00:31:02 Do you think the perfect is the enemy of the good

00:31:06 in hardware and software engineering?

00:31:08 It’s like we were talking about JavaScript a little bit

00:31:10 and the messiness of the 10-day building process.

00:31:14 Yeah, you know, creative tension, right?

00:31:19 So creative tension is you have two different ideas

00:31:21 that you can’t do both, right?

00:31:24 And, but the fact that you wanna do both

00:31:27 causes you to go try to solve that problem.

00:31:29 That’s the creative part.

00:31:32 So if you’re building computers,

00:31:35 like some people say we have the schedule

00:31:37 and anything that doesn’t fit in the schedule we can’t do.

00:31:40 Right?

00:31:41 And so they throw out the perfect

00:31:42 because they have a schedule.

00:31:44 I hate that.

00:31:46 Then there’s other people who say

00:31:48 we need to get this perfectly right.

00:31:50 And no matter what, you know, more people, more money,

00:31:53 right?

00:31:55 And there’s a really clear idea about what you want.

00:31:57 Some people are really good at articulating it, right?

00:32:00 So let’s call that the perfect, yeah.

00:32:02 Yeah.

00:32:03 All right, but that’s also terrible

00:32:04 because they never ship anything.

00:32:06 You never hit any goals.

00:32:07 So now you have your framework.

00:32:09 Yes.

00:32:10 You can’t throw out stuff

00:32:11 because you can’t get it done today

00:32:12 because maybe you’ll get it done tomorrow

00:32:14 or the next project, right?

00:32:15 You can’t, so you have to,

00:32:18 I work with a guy that I really like working with,

00:32:20 but he over filters his ideas.

00:32:23 Over filters?

00:32:24 He’d start thinking about something

00:32:26 and as soon as he figured out what was wrong with it,

00:32:28 he’d throw it out.

00:32:29 And then I start thinking about it

00:32:31 and you come up with an idea

00:32:32 and then you find out what’s wrong with it.

00:32:34 And then you give it a little time to set

00:32:36 because sometimes you figure out how to tweak it

00:32:39 or maybe that idea helps some other idea.

00:32:42 So idea generation is really funny.

00:32:45 So you have to give your ideas space.

00:32:46 Like spaciousness of mind is key.

00:32:49 But you also have to execute programs and get shit done.

00:32:53 And then it turns out computer engineering is fun

00:32:55 because it takes 100 people to build a computer,

00:32:58 200 or 300, whatever the number is.

00:33:00 And people are so variable about temperament

00:33:05 and skill sets and stuff.

00:33:07 That in a big organization,

00:33:09 you find the people who love the perfect ideas

00:33:11 and the people that want to get stuff done yesterday

00:33:13 and people like to come up with ideas

00:33:16 and people like to, let’s say shoot down ideas.

00:33:19 And it takes the whole, it takes a large group of people.

00:33:23 Some are good at generating ideas, some are good at filtering ideas.

00:33:25 And then all in that giant mess, you’re somehow,

00:33:30 I guess the goal is for that giant mess of people

00:33:33 to find the perfect path through the tension,

00:33:37 the creative tension.

00:33:38 But like, how do you know when you said

00:33:41 there’s some people good at articulating

00:33:42 what perfect looks like, what a good design is?

00:33:44 Like if you’re sitting in a room

00:33:48 and you have a set of ideas

00:33:51 about like how to design a better processor,

00:33:55 how do you know this is something special here?

00:33:58 This is a good idea, let’s try this.

00:34:00 Have you ever brainstormed an idea

00:34:02 with a couple of people that were really smart?

00:34:04 And you kind of go into it and you don’t quite understand it

00:34:07 and you’re working on it.

00:34:09 And then you start talking about it,

00:34:12 putting it on the whiteboard, maybe it takes days or weeks.

00:34:16 And then your brain starts to kind of synchronize.

00:34:18 It’s really weird.

00:34:19 Like you start to see what each other is thinking.

00:34:25 And it starts to work.

00:34:28 Like you can see work.

00:34:29 Like my talent in computer design

00:34:30 is I can see how computers work in my head, like really well.

00:34:35 And I know other people can do that too.

00:34:37 And when you’re working with people that can do that,

00:34:40 like it is kind of an amazing experience.

00:34:45 And then every once in a while you get to that place

00:34:48 and then you find the flaw, which is kind of funny

00:34:50 because you can fool yourself.

00:34:53 The two of you kind of drifted along

00:34:55 in the direction that was useless.

00:34:58 That happens too.

00:34:59 Like you have to, because the nice thing

00:35:03 about computer design is always reduction in practice.

00:35:05 Like you come up with your good ideas

00:35:08 and I know some architects who really love ideas

00:35:10 and then they work on them and they put it on the shelf

00:35:13 and they go work on the next idea and put it on the shelf

00:35:14 and they never reduce it to practice.

00:35:16 So they find out what’s good and bad.

00:35:18 Because almost every time I’ve done something really new,

00:35:22 by the time it’s done, like the good parts are good,

00:35:25 but I know all the flaws, like.

00:35:27 Yeah.

00:35:28 Would you say your career, just your own experience,

00:35:31 is your career defined mostly by flaws or by successes?

00:35:35 Like if…

00:35:36 Again, there’s great tension between those.

00:35:38 If you haven’t tried hard, right?

00:35:42 And done something new, right?

00:35:46 Then you’re not gonna be facing the challenges

00:35:48 when you build it.

00:35:49 Then you find out all the problems with it.

00:35:51 And…

00:35:52 But when you look back, do you see problems?

00:35:55 Okay.

00:35:56 Oh, when I look back?

00:35:58 What do you remember?

00:35:58 I think earlier in my career,

00:36:00 like EV5 was the second alpha chip.

00:36:04 I was so embarrassed about the mistakes,

00:36:06 I could barely talk about it.

00:36:08 And it was in the Guinness Book of World Records

00:36:10 and it was the fastest processor on the planet.

00:36:12 Yeah.

00:36:13 So it was, and at some point I realized

00:36:15 that was really a bad mental framework

00:36:18 to deal with doing something new.

00:36:20 We did a bunch of new things

00:36:21 and some worked out great and some were bad.

00:36:23 And we learned a lot from it.

00:36:24 And then the next one, we learned a lot.

00:36:28 That EV6 also had some really cool things in it.

00:36:31 I think the proportion of good stuff went up,

00:36:34 but it had a couple of fatal flaws in it that were painful.

00:36:39 And then, yeah.

00:36:41 You learned to channel the pain into like pride.

00:36:44 Not pride, really.

00:36:45 You know, just a realization about how the world works

00:36:50 or how that kind of idea set works.

00:36:52 Life is suffering.

00:36:53 That’s the reality.

00:36:55 No, it’s not.

00:36:57 Well, I know the Buddha said that

00:36:58 and a couple other people are stuck on it.

00:37:00 No, it’s, you know, there’s this kind of weird combination

00:37:03 of good and bad, you know, light and darkness

00:37:06 that you have to tolerate and, you know, deal with.

00:37:10 Yeah, there’s definitely lots of suffering in the world.

00:37:12 Depends on the perspective.

00:37:13 It seems like there’s way more darkness,

00:37:15 but that makes the light part really nice.

00:37:18 What computing hardware or just any kind,

00:37:24 even software design, do you find beautiful

00:37:28 from your own work, from other people’s work?

00:37:32 You’re just, we were just talking about the battleground

00:37:37 of flaws and mistakes and errors,

00:37:39 but things that were just beautifully done.

00:37:42 Is there something that pops to mind?

00:37:44 Well, when things are beautifully done,

00:37:47 usually there’s a well thought out set of abstraction layers.

00:37:53 So the whole thing works in unison nicely.

00:37:56 Yes.

00:37:57 And when I say abstraction layer,

00:37:59 that means two different components

00:38:01 when they work together, they work independently.

00:38:04 They don’t have to know what the other one is doing.

00:38:07 So that decoupling.

00:38:08 Yeah.

00:38:09 So the famous one was the network stack.

00:38:11 Like there’s a seven layer network stack,

00:38:13 you know, data transport and protocol and all the layers.

00:38:16 And the innovation was,

00:38:17 when they really got that right.

00:38:20 Cause networks before that didn’t define those very well.

00:38:22 The layers could innovate independently.

00:38:26 And occasionally the layer boundary would,

00:38:28 the interface would be upgraded.

00:38:30 And that let the design space breathe.

00:38:34 And you could do something new in layer seven

00:38:37 without having to worry about how layer four worked.
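
The layering idea Jim describes can be sketched as a toy in Python, assuming an invented two-layer stack (the names and framing are illustrative only, not any real protocol): the application layer only sees the transport's interface, so either side can change independently.

```python
# A minimal sketch of abstraction layers: each layer only knows the
# interface of the layer below it, in the spirit of the seven-layer
# network stack. Names here are invented for illustration.

class TransportLayer:
    """Layer 4-ish: frames the payload; knows nothing about layer 7."""
    def send(self, payload: bytes) -> bytes:
        return b"LEN:" + str(len(payload)).encode() + b"|" + payload

    def receive(self, frame: bytes) -> bytes:
        header, _, payload = frame.partition(b"|")
        assert header.startswith(b"LEN:")
        return payload

class ApplicationLayer:
    """Layer 7-ish: talks only to the transport interface, so its
    encoding can change without the transport ever knowing."""
    def __init__(self, transport: TransportLayer):
        self.transport = transport

    def send_text(self, text: str) -> bytes:
        return self.transport.send(text.encode("utf-8"))

    def receive_text(self, frame: bytes) -> str:
        return self.transport.receive(frame).decode("utf-8")

app = ApplicationLayer(TransportLayer())
frame = app.send_text("hello")
print(app.receive_text(frame))  # hello
```

The decoupling is the point: "layer seven" here could switch to a different text encoding, and the transport framing would never need to change.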

00:38:40 And so good design does that.

00:38:43 And you see it in processor designs.

00:38:45 When we did the Zen design at AMD,

00:38:48 we made several components very modular.

00:38:51 And, you know, my insistence at the top was

00:38:54 I wanted all the interfaces defined

00:38:56 before we wrote the RTL for the pieces.

00:38:59 One of the verification leads said,

00:39:01 if we do this right,

00:39:02 I can test the pieces so well independently

00:39:04 when we put it together,

00:39:06 we won’t find all these interaction bugs

00:39:08 cause the floating point knows how the cache works.

00:39:10 And I was a little skeptical,

00:39:12 but he was mostly right.

00:39:14 That the modularity of the design

00:39:16 greatly improved the quality.
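
The "define the interfaces before writing the pieces" discipline can be sketched in Python, assuming a hypothetical `CacheInterface` contract (not the actual Zen RTL interfaces): each module is written and tested against the contract alone, so integration finds fewer interaction bugs.

```python
# Sketch of interface-first modular design: agree on the abstract
# contract up front, then build and verify each piece independently.
# CacheInterface is a made-up stand-in, not a real hardware interface.
from abc import ABC, abstractmethod

class CacheInterface(ABC):
    @abstractmethod
    def read(self, addr: int) -> int: ...
    @abstractmethod
    def write(self, addr: int, value: int) -> None: ...

class SimpleCache(CacheInterface):
    def __init__(self):
        self._lines = {}
    def read(self, addr: int) -> int:
        return self._lines.get(addr, 0)
    def write(self, addr: int, value: int) -> None:
        self._lines[addr] = value

class FloatingPointUnit:
    """Depends only on CacheInterface, never on SimpleCache internals,
    so it can be tested against any conforming implementation."""
    def __init__(self, cache: CacheInterface):
        self.cache = cache
    def load_add(self, a_addr: int, b_addr: int) -> int:
        return self.cache.read(a_addr) + self.cache.read(b_addr)

cache = SimpleCache()
cache.write(0, 2)
cache.write(4, 3)
print(FloatingPointUnit(cache).load_add(0, 4))  # 5
```

Because the floating point unit never peeks inside the cache, swapping in a different cache implementation behind the same interface cannot break it.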

00:39:18 Is that universally true in general?

00:39:20 Would you say about good designs,

00:39:21 the modularity is like usually modular?

00:39:24 Well, we talked about this before.

00:39:25 Humans are only so smart.

00:39:26 Like, and we’re not getting any smarter, right?

00:39:29 But the complexity of things is going up.

00:39:32 So, you know, a beautiful design can’t be bigger

00:39:36 than the person doing it.

00:39:37 It’s just, you know, their piece of it.

00:39:40 Like the odds of you doing a really beautiful design

00:39:42 of something that’s way too hard for you is low, right?

00:39:46 If it’s way too simple for you,

00:39:48 it’s not that interesting.

00:39:49 It’s like, well, anybody could do that.

00:39:50 But when you get the right match of your expertise

00:39:54 and, you know, mental power to the right design size,

00:39:58 that’s cool, but that’s not big enough

00:40:00 to make a meaningful impact in the world.

00:40:02 So now you have to have some framework

00:40:04 to design the pieces so that the whole thing

00:40:08 is big and harmonious.

00:40:10 But, you know, when you put it together,

00:40:13 it’s, you know, sufficiently interesting to be used.

00:40:18 And, you know, so that’s what a beautiful design is.

00:40:23 Matching the limits of that human cognitive capacity

00:40:27 to the module that you can create

00:40:30 and creating a nice interface between those modules

00:40:33 and thereby, do you think there’s a limit

00:40:34 to the kind of beautiful complex systems

00:40:37 we can build with this kind of modular design?

00:40:40 It’s like, you know, if we build increasingly

00:40:45 more complicated, you can think of like the internet.

00:40:49 Okay, let’s scale it down.

00:40:50 Or you can think of like social network,

00:40:52 like Twitter as one computing system.

00:40:57 But those are little modules, right?

00:41:00 But it’s built on so many components

00:41:03 nobody at Twitter even understands.

00:41:05 Right.

00:41:06 So if an alien showed up and looked at Twitter,

00:41:09 he wouldn’t just see Twitter as a beautiful,

00:41:11 simple thing that everybody uses, which is really big.

00:41:14 You would see the network, it runs on the fiber optics,

00:41:18 the data is transported to the computers.

00:41:19 The whole thing is so bloody complicated,

00:41:22 nobody at Twitter understands it.

00:41:23 And so that’s what the alien would see.

00:41:25 So yeah, if an alien showed up and looked at Twitter

00:41:28 or looked at the various different network systems

00:41:32 that you could see on Earth.

00:41:33 So imagine they were really smart

00:41:34 and they could comprehend the whole thing.

00:41:36 And then they sort of evaluated the human

00:41:40 and thought, this is really interesting.

00:41:41 No human on this planet comprehends the system they built.

00:41:45 No individual, well, would they even see individual humans?

00:41:48 Like we humans are very human centric, entity centric.

00:41:52 And so we think of us as the central organism

00:41:56 and the networks as just the connection of organisms.

00:41:59 But from a perspective of an alien,

00:42:02 from an outside perspective, it seems like.

00:42:05 Yeah, I get it.

00:42:06 We’re the ants and they’d see the ant colony.

00:42:08 The ant colony, yeah.

00:42:10 Or the result of production of the ant colony,

00:42:12 which is like cities and it’s,

00:42:18 in that sense, humans are pretty impressive.

00:42:19 The modularity that we’re able to,

00:42:23 and how robust we are to noise and mutation

00:42:25 and all that kind of stuff.

00:42:26 Well, that’s because it’s stress tested all the time.

00:42:28 Yeah.

00:42:29 You know, you build all these cities with buildings

00:42:31 and you get earthquakes occasionally

00:42:32 and, you know, wars, earthquakes.

00:42:35 Viruses every once in a while.

00:42:37 You know, changes in business plans

00:42:39 or, you know, like shipping or something.

00:42:41 Like as long as it’s all stress tested,

00:42:44 then it keeps adapting to the situation.

00:42:48 So that’s a curious phenomenon.

00:42:52 Well, let’s go, let’s talk about Moore’s Law a little bit.

00:42:55 It’s at the broad view of Moore’s Law

00:43:00 was just exponential improvement of computing capability.

00:43:05 Like OpenAI, for example, recently published

00:43:08 this kind of papers looking at the exponential improvement

00:43:14 in the training efficiency of neural networks

00:43:17 for like ImageNet and all that kind of stuff.

00:43:18 We just got better on this purely software side,

00:43:22 just figuring out better tricks and algorithms

00:43:25 for training neural networks.

00:43:26 And that seems to be improving significantly faster

00:43:30 than the Moore’s Law prediction, you know.

00:43:33 So that’s in the software space.

00:43:35 What do you think if Moore’s Law continues

00:43:39 or if the general version of Moore’s Law continues,

00:43:42 do you think that comes mostly from the hardware,

00:43:45 from the software, some mix of the two,

00:43:47 some interesting, totally,

00:43:50 so not the reduction of the size of the transistor

00:43:52 kind of thing, but more in the,

00:43:54 in the totally interesting kinds of innovations

00:43:58 in the hardware space, all that kind of stuff.

00:44:01 Well, there’s like a half a dozen things

00:44:04 going on in that graph.

00:44:05 So one is there’s initial innovations

00:44:08 that had a lot of headroom to be exploited.

00:44:11 So, you know, the efficiency of the networks

00:44:13 has improved dramatically.

00:44:15 And then the decomposability of those and the use going,

00:44:19 you know, they started running on one computer,

00:44:21 then multiple computers, then multiple GPUs,

00:44:23 and then arrays of GPUs, and they’re up to thousands.

00:44:27 And at some point, so it’s sort of like

00:44:30 they were consumed, they were going from

00:44:32 like a single computer application

00:44:33 to a thousand computer application.

00:44:36 So that’s not really a Moore’s Law thing.

00:44:38 That’s an independent vector.

00:44:39 How many computers can I put on this problem?

00:44:42 Because the computers themselves are getting better

00:44:44 on like a Moore’s Law rate,

00:44:45 but their ability to go from one to 10

00:44:47 to 100 to a thousand, you know, was something.

00:44:51 And then multiplied by, you know, the amount of computes

00:44:54 it took to resolve like AlexNet to ResNet to transformers.

00:44:58 It’s been quite, you know, steady improvements.

00:45:01 But those are like S curves, aren’t they?

00:45:03 That’s the exactly kind of S curves

00:45:04 that are underlying Moore’s Law from the very beginning.

00:45:07 So what’s the biggest, what’s the most productive,

00:45:13 rich source of S curves in the future, do you think?

00:45:16 Is it hardware, is it software, or is it?

00:45:18 So hardware is going to move along relatively slowly.

00:45:23 Like, you know, double performance every two years.

00:45:26 There’s still…

00:45:28 I like how you call that slowly.

00:45:29 Yeah, that’s the slow version.

00:45:31 The snail’s pace of Moore’s Law.

00:45:33 Maybe we should trademark that one.

00:45:39 Whereas the scaling by number of computers, you know,

00:45:41 can go much faster, you know.

00:45:44 I’m sure at some point Google had a, you know,

00:45:46 their initial search engine was running on a laptop,

00:45:48 you know, like.

00:45:50 And at some point they really worked on scaling that.

00:45:52 And then they factored the indexer from, you know,

00:45:55 this piece and this piece and this piece,

00:45:57 and they spread the data on more and more things.

00:45:59 And, you know, they did a dozen innovations.

00:46:02 But as they scaled up the number of computers on that,

00:46:05 it kept breaking, finding new bottlenecks

00:46:07 in their software and their schedulers,

00:46:09 and made them rethink.

00:46:11 Like, it seems insane to do a scheduler

00:46:13 across 1,000 computers to schedule parts of it

00:46:16 and then send the results to one computer.

00:46:19 But if you want to schedule a million searches,

00:46:21 that makes perfect sense.
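
The scatter/gather shape Jim describes can be sketched in Python, with threads standing in for machines and tiny dicts standing in for per-machine indexes (all invented for illustration; real search schedulers are vastly more involved): fan the query out across shards, then merge the partial results on one node.

```python
# Toy scatter/gather: split a search across "machines" (threads here),
# then gather and merge the partial results in one place.
from concurrent.futures import ThreadPoolExecutor

SHARDS = [
    {"cat": 3, "dog": 1},
    {"cat": 1, "fish": 7},
    {"dog": 4, "cat": 2},
]  # stand-ins for index shards spread across machines

def search_shard(shard, term):
    return shard.get(term, 0)  # partial hit count from one shard

def search(term):
    with ThreadPoolExecutor(max_workers=len(SHARDS)) as pool:
        partials = pool.map(lambda s: search_shard(s, term), SHARDS)
    return sum(partials)  # the gather step back on "one computer"

print(search("cat"))  # 6
```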

00:46:23 So there’s the scaling by just quantity

00:46:26 is probably the richest thing.

00:46:28 But then as you scale quantity,

00:46:31 like a network that was great on 100 computers

00:46:34 may be completely the wrong one.

00:46:36 You may pick a network that’s 10 times slower

00:46:39 on 10,000 computers, like per computer.

00:46:42 But if you go from 100 to 10,000, it’s 100 times.

00:46:45 So that’s one of the things that happened

00:46:47 when we did internet scaling.

00:46:48 This efficiency went down, not up.
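
A back-of-envelope version of that trade-off, using the illustrative numbers from above: per-machine efficiency drops 10x, machine count grows 100x, so aggregate throughput still goes up 10x.

```python
# Illustrative arithmetic only: scaling can win even as per-machine
# efficiency falls, as long as quantity grows faster than the loss.
per_machine = 1.0                   # throughput of one machine, small cluster
small = 100 * per_machine           # 100 machines at full efficiency
big = 10_000 * per_machine * 0.1    # 10,000 machines, 10x slower each
print(small, big, big / small)      # 100.0 1000.0 10.0
```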

00:46:52 The future of computing is inefficiency, not efficiency.

00:46:55 But scale, inefficient scale.

00:46:57 It’s scaling faster than inefficiency bites you.

00:47:01 And as long as there’s, you know, dollar value there,

00:47:03 like scaling costs lots of money.

00:47:05 But Google showed, Facebook showed, everybody showed

00:47:08 that the scale was where the money was at.

00:47:10 It was, and so it was worth the financial.

00:47:13 Do you think, is it possible that like basically

00:47:17 the entirety of Earth will be like a computing surface?

00:47:21 Like this table will be doing computing.

00:47:24 This hedgehog will be doing computing.

00:47:26 Like everything really inefficient,

00:47:28 dumb computing will be leveraged.

00:47:29 The science fiction books, they call it computronium.

00:47:31 Computronium?

00:47:32 We turn everything into computing.

00:47:34 Well, most of the elements aren’t very good for anything.

00:47:37 Like you’re not gonna make a computer out of iron.

00:47:39 Like, you know, silicon and carbon have like nice structures.

00:47:45 You know, we’ll see what you can do with the rest of it.

00:47:48 Like people talk about, well, maybe we can turn the sun

00:47:50 into computer, but it’s hydrogen and a little bit of helium.

00:47:54 So.

00:47:55 What I mean is more like actually just adding computers

00:47:59 to everything.

00:47:59 Oh, okay.

00:48:00 So you’re just converting all the mass of the universe

00:48:03 into computer.

00:48:04 No, no, no.

00:48:05 So not using.

00:48:05 It’d be ironic from the simulation point of view.

00:48:07 It’s like the simulator builds mass that simulates.

00:48:12 Yeah, I mean, yeah.

00:48:12 So, I mean, ultimately this is all heading

00:48:14 towards a simulation.

00:48:15 Yeah, well, I think I might’ve told you this story.

00:48:18 At Tesla, they were deciding,

00:48:20 so they wanna measure the current coming out of the battery

00:48:22 and they decided between putting the resistor in there

00:48:25 and putting a computer with a sensor in there.

00:48:29 And the computer was faster than the computer

00:48:31 I worked on in 1982.

00:48:34 And we chose the computer

00:48:35 because it was cheaper than the resistor.

00:48:38 So, sure, this hedgehog costs $13

00:48:42 and we can put an AI that’s as smart as you

00:48:45 in there for five bucks.

00:48:46 It’ll have one.

00:48:48 So computers will be everywhere.

00:48:51 I was hoping it wouldn’t be smarter than me because.

00:48:54 Well, everything’s gonna be smarter than you.

00:48:56 But you were saying it’s inefficient.

00:48:58 I thought it was better to have a lot of dumb things.

00:49:00 Well, Moore’s law will slowly compact that stuff.

00:49:02 So even the dumb things will be smarter than us.

00:49:04 The dumb things are gonna be smart

00:49:06 or they’re gonna be smart enough to talk to something

00:49:08 that’s really smart.

00:49:10 You know, it’s like.

00:49:12 Well, just remember, like a big computer chip.

00:49:15 Yeah.

00:49:16 You know, it’s like an inch by an inch

00:49:17 and, you know, 40 microns thick.

00:49:20 It doesn’t take very much, very many atoms

00:49:23 to make a high power computer.

00:49:25 Yeah.

00:49:25 And 10,000 of them can fit in a shoebox.

00:49:29 But, you know, you have the cooling and power problems,

00:49:31 but, you know, people are working on that.

00:49:33 But they still can’t write compelling poetry or music

00:49:37 or understand what love is or have a fear of mortality.

00:49:41 So we’re still winning.

00:49:43 Neither can most of humanity, so.

00:49:46 Well, they can write books about it.

00:49:48 So, but speaking about this,

00:49:53 this walk along the path of innovation

00:49:56 towards the dumb things being smarter than humans,

00:50:00 you are now the CTO of Tenstorrent as of two months ago.

00:50:08 They build hardware for deep learning.

00:50:13 How do you build scalable and efficient deep learning?

00:50:16 This is such a fascinating space.

00:50:17 Yeah, yeah, so it’s interesting.

00:50:18 So up until recently,

00:50:20 I thought there was two kinds of computers.

00:50:22 There are serial computers that run like C programs,

00:50:25 and then there’s parallel computers.

00:50:27 So the way I think about it is, you know,

00:50:29 parallel computers have given parallelism.

00:50:31 Like, GPUs are great because you have a million pixels,

00:50:34 and modern GPUs run a program on every pixel.

00:50:37 They call it the shader program, right?

00:50:39 So, or like finite element analysis.

00:50:42 You build something, you know,

00:50:43 you make this into little tiny chunks,

00:50:45 you give each chunk to a computer,

00:50:47 so you’re given all these chunks,

00:50:48 you have parallelism like that.

00:50:50 But most C programs, you write this linear narrative,

00:50:53 and you have to make it go fast.

00:50:55 To make it go fast, you predict all the branches,

00:50:57 all the data fetches, and you run that.

00:50:59 More parallel, but that’s found parallelism.

00:51:04 AI is, I’m still trying to decide how fundamental this is.

00:51:08 It’s a given parallelism problem.

00:51:10 But the way people describe the neural networks,

00:51:14 and then how they write them in PyTorch, it makes graphs.

00:51:17 Yeah, that might be fundamentally different

00:51:19 than the GPU kind of.

00:51:21 Parallelism, yeah, it might be.

00:51:23 Because when you run the GPU program on all the pixels,

00:51:27 you’re running, you know, it depends,

00:51:29 this group of pixels say it’s background blue,

00:51:32 and it runs a really simple program.

00:51:34 This pixel is, you know, some patch of your face,

00:51:36 so you have some really interesting shader program

00:51:39 to give you the impression of translucency.

00:51:41 But the pixels themselves don’t talk to each other.

00:51:43 There’s no graph, right?

00:51:46 So you do the image, and then you do the next image,

00:51:49 and you do the next image,

00:51:51 and you run eight million pixels,

00:51:53 eight million programs every time,

00:51:55 and modern GPUs have like 6,000 thread engines in them.

00:51:59 So, you know, to get eight million pixels,

00:52:02 each one runs a program on, you know, 10 or 20 pixels.

00:52:06 And that’s how they work, but there’s no graph.
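
The "given parallelism" picture can be sketched in Python: the same tiny shader-like program runs on every pixel, and no pixel ever talks to another. (A toy, of course; real shaders run on thousands of GPU thread engines, not a list comprehension.)

```python
# Toy per-pixel "shader": one small program applied independently to
# every pixel. There is no graph, no communication between pixels.
def shade(pixel):
    r, g, b = pixel
    return (min(255, r + 20), g, b)  # e.g. brighten the red channel

image = [(10, 0, 0), (250, 0, 0), (100, 50, 50)]
shaded = [shade(p) for p in image]  # pure map: embarrassingly parallel
print(shaded)  # [(30, 0, 0), (255, 0, 0), (120, 50, 50)]
```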

00:52:09 But you think graph might be a totally new way

00:52:13 to think about hardware.

00:52:14 So Raja Koduri and I have been having this conversation

00:52:18 about given versus found parallelism.

00:52:20 And then the kind of walk,

00:52:22 because we got more transistors,

00:52:23 like, you know, computers way back when

00:52:25 did stuff on scalar data.

00:52:27 Now we did it on vector data, famous vector machines.

00:52:30 Now we’re making computers that operate on matrices, right?
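
That scalar-to-vector-to-matrix progression can be written out as the same multiply-add at each level, in plain Python (illustrative only; the hardware versions do these as wide parallel operations, not loops):

```python
# The same arithmetic at three granularities: one element, one row,
# one tile. Each step up is what vector and matrix hardware accelerates.
def scalar_mul(a, b):            # scalar machines: one element at a time
    return a * b

def vector_dot(xs, ys):          # vector machines: whole rows at once
    return sum(scalar_mul(x, y) for x, y in zip(xs, ys))

def matmul(A, B):                # matrix units: whole tiles at once
    cols = list(zip(*B))
    return [[vector_dot(row, col) for col in cols] for row in A]

print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```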

00:52:34 And then the category we said that was next was spatial.

00:52:38 Like, imagine you have so much data

00:52:40 that, you know, you want to do the compute on this data,

00:52:43 and then when it’s done, it says,

00:52:45 send the result to this pile of data on some software on that.

00:52:49 And it’s better to think about it spatially

00:52:53 than to move all the data to a central processor

00:52:56 and do all the work.

00:52:57 So spatially, you mean moving in the space of data

00:53:00 as opposed to moving the data.

00:53:02 Yeah, you have a petabyte data space

00:53:05 spread across some huge array of computers.

00:53:08 And when you do a computation somewhere,

00:53:10 you send the result of that computation

00:53:12 or maybe a pointer to the next program

00:53:14 to some other piece of data and do it.

00:53:16 But I think a better word might be graph.

00:53:18 And all the AI neural networks are graphs.

00:53:21 Do some computations, send the result here,

00:53:24 do another computation, do a data transformation,

00:53:26 do a merging, do a pooling, do another computation.
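The graph execution described here, do a computation, send the result along an edge, merge, pool, can be sketched as a tiny dataflow interpreter in Python. This is purely illustrative: the Node class and its firing rule are invented for the example, not Tenstorrent's or any framework's actual API.

```python
import numpy as np

# A toy dataflow node: it fires once all of its inputs have arrived,
# then forwards its result along every outgoing edge.
class Node:
    def __init__(self, fn, n_inputs):
        self.fn = fn
        self.n_inputs = n_inputs
        self.inbox = []
        self.consumers = []

    def feed(self, value):
        self.inbox.append(value)
        if len(self.inbox) == self.n_inputs:
            self.result = self.fn(*self.inbox)
            self.inbox = []
            for consumer in self.consumers:
                consumer.feed(self.result)

# Build a small graph: a matmul branch and a scaling branch,
# merged by elementwise addition.
a = np.ones((2, 2))
matmul = Node(lambda x: x @ x, 1)
scale = Node(lambda x: x * 2, 1)
merge = Node(lambda x, y: x + y, 2)
matmul.consumers = [merge]
scale.consumers = [merge]

matmul.feed(a)
scale.feed(a)
print(merge.result)  # every element is (1+1) + 2 = 4
```

Note that nothing here is a central loop over instructions: each node just reacts to data arriving, which is the dataflow flavor being contrasted with pixel programs.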

00:53:30 Is it possible to compress and say

00:53:32 how we make this thing efficient,

00:53:34 this whole process efficient, this different?

00:53:37 So first, the fundamental elements in the graphs

00:53:40 are things like matrix multiplies, convolutions,

00:53:43 data manipulations, and data movements.

00:53:46 So GPUs emulate those things with their little engines,

00:53:49 you know, each basically running a single threaded program.

00:53:53 And then there’s, you know, and NVIDIA calls it a warp

00:53:55 where they group a bunch of programs

00:53:56 that are similar together.

00:53:58 So for efficiency and instruction use.

00:54:01 And then at a higher level, you kind of,

00:54:04 you take this graph and you say this part of the graph

00:54:06 is a matrix multiply, which runs on these 32 threads.

00:54:09 But the model at the bottom was built

00:54:12 for running programs on pixels, not executing graphs.

00:54:17 So it’s emulation, ultimately.

00:54:19 So is it possible to build something

00:54:21 that natively runs graphs?

00:54:23 Yes, so that’s what 10storrent did.

00:54:26 So.

00:54:27 Where are we on that?

00:54:28 How, like, in the history of that effort,

00:54:30 are we in the early days?

00:54:32 Yeah, I think so.

00:54:33 Tenstorrent was started by a friend of mine,

00:54:35 Ljubisa Bajic, and I was his first investor.

00:54:39 So I’ve been, you know, kind of following him

00:54:41 and talking to him about it for years.

00:54:43 And in the fall when I was considering things to do,

00:54:47 I decided, you know, we held a conference last year

00:54:51 with a friend, organized it,

00:54:53 and we wanted to bring in thinkers.

00:54:56 And two of the people were Andrej Karpathy and Chris Lattner.

00:55:00 And Andrej gave this talk, it's on YouTube,

00:55:03 called Software 2.0, which I think is great.

00:55:06 Which is, we went from programmed computers,

00:55:10 where you write programs, to data program computers.

00:55:13 You know, like the future of software is data programs,

00:55:18 the networks.

00:55:19 And I think that’s true.

00:55:21 And then Chris has been working,

00:55:23 he worked on LLVM, the low level virtual machine,

00:55:26 which became the intermediate representation

00:55:29 for all compilers.

00:55:31 And now he’s working on another project called MLIR,

00:55:33 which is mid level intermediate representation,

00:55:36 which is essentially under the graph

00:55:39 about how do you represent that kind of computation

00:55:42 and then coordinate large numbers

00:55:44 of potentially heterogeneous computers.

00:55:47 And I would say technically, Tenstorrent stands on,

00:55:51 you know, two pillars, those two ideas:

00:55:54 Software 2.0 and mid level intermediate representation.

00:55:58 But it’s in service of executing graph programs.

00:56:01 The hardware is designed to do that.

00:56:03 So it’s including the hardware piece.

00:56:05 Yeah.

00:56:06 And then the other cool thing is,

00:56:08 for a relatively small amount of money,

00:56:10 they did a test chip and two production chips.

00:56:13 So it’s like a super effective team.

00:56:15 And unlike some AI startups,

00:56:18 where if you don’t build the hardware

00:56:20 to run the software that they really want to do,

00:56:22 then you have to fix it by writing lots more software.

00:56:26 So the hardware naturally does matrix multiply,

00:56:29 convolution, the data manipulations,

00:56:31 and the data movement between processing elements

00:56:35 that you can see in the graph,

00:56:37 which I think is all pretty clever.

00:56:40 And that’s what I’m working on now.

00:56:45 So the, I think it's called the Grayskull processor,

00:56:49 introduced last year.

00:56:51 It’s, you know, there’s a bunch of measures of performance.

00:56:53 We’re talking about horses.

00:56:55 It does 368 trillion operations per second.

00:56:59 It seems to outperform NVIDIA’s Tesla T4 system.

00:57:03 So these are just numbers.

00:57:04 What do they actually mean in real world performance?

00:57:07 Like what are the metrics for you

00:57:10 that you’re chasing in your horse race?

00:57:12 Like what do you care about?

00:57:13 Well, first, so the native language of,

00:57:17 you know, people who write AI network programs

00:57:20 is PyTorch now, PyTorch, TensorFlow.

00:57:22 There’s a couple others.

00:57:24 Do you think PyTorch has won over TensorFlow?

00:57:25 Or is it just?

00:57:26 I’m not an expert on that.

00:57:27 I know many people who have switched

00:57:29 from TensorFlow to PyTorch.

00:57:31 And there’s technical reasons for it.

00:57:33 I use both.

00:57:34 Both are still awesome.

00:57:35 Both are still awesome.

00:57:37 But the deepest love is for PyTorch currently.

00:57:39 Yeah, there’s more love for that.

00:57:41 And that may change.

00:57:42 So the first thing is when they write their programs,

00:57:46 can the hardware execute it pretty much as it was written?

00:57:50 Right, so PyTorch turns into a graph.

00:57:53 We have a graph compiler that makes that graph.

00:57:55 Then it breaks the graph down.

00:57:57 So if you have big matrix multiply,

00:57:58 we turn it into right size chunks

00:58:00 to run on the processing elements.

00:58:02 It hooks all the graph up.

00:58:03 It lays out all the data.

00:58:05 There’s a couple of mid level representations of it

00:58:08 that are also simulatable.

00:58:09 So that if you’re writing the code,

00:58:12 you can see how it’s gonna go through the machine,

00:58:15 which is pretty cool.

00:58:15 And then at the bottom, it schedules kernels,

00:58:17 like math, data manipulation, data movement kernels,

00:58:21 which do this stuff.

00:58:22 So we don’t have to write a little program

00:58:26 to do matrix multiply,

00:58:27 because we have a big matrix multiplier.

00:58:29 There’s no SIMD program for that.

00:58:31 But there is scheduling for that, right?
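As a rough illustration of what "turn a big matrix multiply into right sized chunks" means, here is a blocked matmul in NumPy. The tiling scheme and tile size are invented for the example; the real graph compiler's partitioning across processing elements is certainly more sophisticated.

```python
import numpy as np

def tiled_matmul(A, B, tile=2):
    """Compute A @ B by splitting the work into tile x tile blocks,
    the way a graph compiler might carve one big matmul into chunks
    sized for individual processing elements."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # Each block product is one unit of work that could be
                # scheduled onto a separate processing element.
                C[i:i + tile, j:j + tile] += (
                    A[i:i + tile, p:p + tile] @ B[p:p + tile, j:j + tile]
                )
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
print(np.allclose(tiled_matmul(A, B), A @ B))  # True
```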

00:58:36 So one of the goals is,

00:58:37 if you write a piece of PyTorch code

00:58:40 that looks pretty reasonable,

00:58:41 you should be able to compile it, run it on the hardware

00:58:43 without having to tweak it

00:58:44 and do all kinds of crazy things to get performance.

00:58:48 There’s not a lot of intermediate steps.

00:58:50 It’s running directly as written.

00:58:51 Like on a GPU, if you write a large matrix multiply naively,

00:58:54 you’ll get five to 10% of the peak performance of the GPU.

00:58:58 Right, and then there’s a bunch of people

00:59:00 who’ve published papers on this,

00:59:01 and I've read them, about what steps you have to do.

00:59:04 And it goes from pretty reasonable,

00:59:06 well, transpose one of the matrices.

00:59:08 So you do row ordered, not column ordered,

00:59:11 block it so that you can put a block of the matrix

00:59:14 on different SMs, groups of threads.

00:59:19 But some of it gets into little details,

00:59:21 like you have to schedule it just so,

00:59:23 so you don’t have register conflicts.

00:59:25 So they call them CUDA ninjas.

00:59:28 CUDA ninjas, I love it.

00:59:31 To get to the optimal point,

00:59:32 you either use a prewritten library,

00:59:36 which is a good strategy for some things,

00:59:37 or you have to be an expert

00:59:39 in micro architecture to program it.
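The first optimization step mentioned above, transposing one matrix so both operands are walked in row order, looks roughly like this in plain Python. It is a sketch of the idea, not CUDA; on a real GPU the win comes from memory coalescing, which this toy version only mimics with contiguous row access.

```python
import numpy as np

def matmul_transposed(A, B):
    """Naive matmul, but with B transposed up front so the inner loop
    walks both operands row by row instead of striding down columns."""
    Bt = B.T.copy()  # now row ordered, not column ordered
    n, k = A.shape
    m = Bt.shape[0]
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            # Both A[i, :] and Bt[j, :] are contiguous in memory.
            C[i, j] = np.dot(A[i, :], Bt[j, :])
    return C

A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
print(np.allclose(matmul_transposed(A, B), A @ B))  # True
```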

00:59:42 Right, so the optimization step

00:59:43 is way more complicated with the GPU.

00:59:44 So our goal is if you write PyTorch,

00:59:47 that’s good PyTorch, you can do it.

00:59:49 Now there’s, as the networks are evolving,

00:59:53 they’ve changed from convolutional to matrix multiply.

00:59:56 People are talking about conditional graphs,

00:59:58 they’re talking about very large matrices,

00:59:59 they’re talking about sparsity,

01:00:01 they’re talking about problems

01:00:03 that scale across many, many chips.

01:00:06 So the native data item is a packet.

01:00:11 So you send a packet to a processor, it gets processed,

01:00:14 it does a bunch of work,

01:00:15 and then it may send packets to other processors,

01:00:17 and they execute in like a data flow graph

01:00:20 kind of methodology.

01:00:22 Got it.

01:00:22 We have a big network on chip,

01:00:24 and then the second chip has 16 ethernet ports

01:00:27 to hook lots of them together,

01:00:29 and it’s the same graph compiler across multiple chips.

01:00:32 So that’s where the scale comes in.

01:00:33 So it’s built to scale naturally.

01:00:35 Now, my experience with scaling is as you scale,

01:00:38 you run into lots of interesting problems.

01:00:40 So scaling is the mountain to climb.

01:00:43 Yeah.

01:00:44 So the hardware is built to do this,

01:00:44 and then we’re in the process of.

01:00:47 Is there a software part to this

01:00:49 with ethernet and all that?

01:00:51 Well, the protocol at the bottom,

01:00:54 it's an ethernet PHY,

01:00:57 but the protocol basically says,

01:00:59 send the packet from here to there.

01:01:01 It’s all point to point.

01:01:03 The header bit says which processor to send it to,

01:01:05 and we basically take a packet off our on chip network,

01:01:09 put an ethernet header on it,

01:01:11 send it to the other end, strip the header off,

01:01:13 and send it to the local thing.

01:01:14 It’s pretty straightforward.

01:01:16 Human to human interaction is pretty straightforward too,

01:01:18 but when you get a million of us,

01:01:19 we could do some crazy stuff together.

01:01:21 Yeah, it’s gonna be fun.

01:01:23 So is that the goal is scale?

01:01:25 So like, for example, I’ve been recently

01:01:28 doing a bunch of robots at home

01:01:30 for my own personal pleasure.

01:01:32 Am I ever going to use a Tenstorrent, or is this more for?

01:01:35 There’s all kinds of problems.

01:01:37 Like, there’s small inference problems,

01:01:38 or small training problems, or big training problems.

01:01:41 What’s the big goal?

01:01:42 Is it the big training problems,

01:01:45 or the small training problems?

01:01:46 Well, one of the goals is to scale

01:01:48 from 100 milliwatts to a megawatt, you know?

01:01:51 So like, really have some range on the problems,

01:01:54 and the same kind of AI programs

01:01:57 work at all different levels.

01:01:59 So that’s the goal.

01:02:00 The natural, since the natural data item

01:02:02 is a packet that we can move around,

01:02:05 it’s built to scale, but so many people have small problems.

01:02:11 Right, right.

01:02:12 But the, you know.

01:02:13 Like, inside that phone is a small problem to solve.

01:02:16 So do you see Tenstorrent potentially being inside a phone?

01:02:19 Well, the power efficiency of local memory,

01:02:22 local computation, and the way we built it is pretty good.

01:02:26 And then there’s a lot of efficiency

01:02:28 on being able to do conditional graphs and sparsity.

01:02:31 I think it’s, for complicated networks

01:02:34 that wanna go in a small form factor, it's gonna be quite good.

01:02:38 But we have to prove that, that’s all.

01:02:40 It’s a fun problem.

01:02:41 And that’s the early days of the company, right?

01:02:42 It’s a couple years, you said?

01:02:44 But you think, you invested, you think they’re legit.

01:02:47 Yeah.

01:02:48 And so you joined.

01:02:49 Yeah, I joined.

01:02:50 Well, that’s.

01:02:50 That’s a really interesting place to be.

01:02:53 Like, the AI world is exploding, you know.

01:02:55 And I looked at some other opportunities

01:02:58 like build a faster processor, which people want.

01:03:01 But that’s more on an incremental path

01:03:03 than what’s gonna happen in AI in the next 10 years.

01:03:07 Yeah.

01:03:08 So this is kind of, you know,

01:03:10 an exciting place to be part of.

01:03:12 Yeah, the revolutions will be happening

01:03:14 in the very space that Tenstorrent is in.

01:03:15 And then lots of people are working on it,

01:03:16 but there’s lots of technical reasons why some of them,

01:03:18 you know, aren’t gonna work out that well.

01:03:20 And, you know, that’s interesting.

01:03:23 And there’s also the same problem

01:03:25 about getting the basics right.

01:03:27 Like, we’ve talked to customers about exciting features.

01:03:30 And at some point we realized that,

01:03:32 Ljubisa and I were realizing they want to hear first

01:03:34 about memory bandwidth, local bandwidth,

01:03:36 compute intensity, programmability.

01:03:39 They want to know the basics, power management,

01:03:42 how the network ports work, what are the basics,

01:03:44 do all the basics work.

01:03:46 Because it’s easy to say, we’ve got this great idea,

01:03:48 you know, it'll crack GPT-3, but the people we talked to

01:03:53 want to say, if I buy the, so we have a PCI Express card

01:03:57 with our chip on it, if you buy the card,

01:03:59 you plug it in your machine, you download the driver,

01:04:01 how long does it take me to get my network to run?

01:04:05 Right, right.

01:04:05 You know, that’s a real question.

01:04:06 It’s a very basic question.

01:04:08 So, yeah.

01:04:09 Is there an answer to that yet,

01:04:10 or is it trying to get to that?

01:04:11 Our goal is like an hour.

01:04:13 Okay.

01:04:14 When can I buy a Tenstorrent card?

01:04:16 Pretty soon.

01:04:17 Or, for my small case training.

01:04:19 Yeah, pretty soon.

01:04:21 Months.

01:04:21 Good.

01:04:22 I love the idea of you inside the room

01:04:24 with Karpathy, Andrej Karpathy, and Chris Lattner.

01:04:31 Very, very interesting, very brilliant people,

01:04:35 very out of the box thinkers,

01:04:37 but also like first principles thinkers.

01:04:39 Well, they both get stuff done.

01:05:42 They not only get their own projects done.

01:04:44 They talk about it clearly.

01:04:47 They educate large numbers of people,

01:04:48 and they’ve created platforms for other people

01:04:50 to go do their stuff on.

01:04:52 Yeah, the clear thinking that’s able to be communicated

01:04:55 is kind of impressive.

01:04:57 It’s kind of remarkable to, yeah, I’m a fan.

01:05:00 Well, let me ask,

01:05:02 because I talk to Chris actually a lot these days.

01:05:05 He’s been one of the, just to give him a shout out,

01:05:08 he’s been so supportive as a human being.

01:05:13 So everybody’s quite different.

01:05:16 Like great engineers are different,

01:05:17 but he’s been like sensitive to the human element

01:05:20 in a way that’s been fascinating.

01:05:22 Like he was one of the early people

01:05:23 on this stupid podcast that I do to say like,

01:05:27 don’t quit this thing,

01:05:29 and also talk to whoever the hell you want to talk to.

01:05:34 That kind of from a legit engineer to get like props

01:05:38 and be like, you can do this.

01:05:39 That was, I mean, that’s what a good leader does, right?

01:05:42 To just kind of let a little kid do his thing,

01:05:45 like go do it, let’s see what turns out.

01:05:48 That’s a pretty powerful thing.

01:05:50 But what do you, what’s your sense about,

01:05:54 he used to be at Google, no, I think he stepped away from Google, right?

01:05:58 He’s at SciFive, I think.

01:06:02 What’s really impressive to you

01:06:03 about the things that Chris has worked on?

01:06:05 Because we mentioned the optimization,

01:06:08 the compiler design stuff, the LLVM,

01:06:10 then there’s, he’s also at Google worked at the TPU stuff.

01:06:16 He’s obviously worked on Swift,

01:06:19 so the programming language side.

01:06:21 Talking about people that work in the entirety of the stack.

01:06:24 What, from your time interacting with Chris

01:06:27 and knowing the guy, what’s really impressive to you

01:06:30 that just inspires you?

01:06:32 Well, like LLVM became the de facto platform

01:06:37 for compilers.

01:06:42 It’s amazing.

01:06:43 And it was good code quality, good design choices.

01:06:46 He hit the right level of abstraction.

01:06:48 There’s a little bit of the right time, the right place.

01:06:52 And then he built a new programming language called Swift,

01:06:55 which after, let’s say some adoption resistance

01:06:59 became very successful.

01:07:01 I don’t know that much about his work at Google,

01:07:03 although I know that it was typical:

01:07:07 they started the TensorFlow stuff and it was new.

01:07:11 They wrote a lot of code and then at some point

01:07:13 it needed to be refactored,

01:07:17 because its development slowed down,

01:07:19 which is why PyTorch started a little later and then passed it.

01:07:22 So he did a lot of work on that.

01:07:23 And then his idea about MLIR,

01:07:25 which is what people started to realize

01:07:28 is the complexity of the software stack above

01:07:30 the low level IR was getting so high

01:07:33 that forcing the features of that into that level

01:07:36 was putting too much of a burden on it.

01:07:38 So he’s splitting that into multiple pieces.

01:07:41 And that was one of the inspirations for our software stack

01:07:43 where we have several intermediate representations

01:07:46 that are all executable and you can look at them

01:07:49 and do transformations on them before you lower the level.

01:07:53 So that was, I think we started before MLIR

01:07:58 really got far enough along to use,

01:08:01 but we’re interested in that.

01:08:02 He’s really excited about MLIR.

01:08:04 That’s his like little baby.

01:08:06 So he, and there seems to be some profound ideas on that

01:08:10 that are really useful.

01:08:11 So each one of those things has been,

01:08:14 as the world of software gets more and more complicated,

01:08:17 how do we create the right abstraction levels

01:08:20 to simplify it in a way that people can now work independently

01:08:23 on different levels of it?

01:08:25 So I would say all three of those projects,

01:08:27 LLVM, Swift, and MLIR did that successfully.

01:08:31 So I’m interested in what he’s gonna do next

01:08:33 in the same kind of way.

01:08:34 Yes.

01:08:36 On either the TPU or maybe the Nvidia GPU side,

01:08:41 how does Tenstorrent think, or the ideas underlying it,

01:08:45 does it have to be Tenstorrent?

01:08:47 Just this kind of graph focused,

01:08:51 graph centric hardware, deep learning centric hardware,

01:08:56 beat NVIDIA's, do you think it's possible

01:09:00 for it to basically overtake NVIDIA?

01:09:02 Sure.

01:09:03 What’s that process look like?

01:09:05 What’s that journey look like, you think?

01:09:08 Well, GPUs were built to run shader programs

01:09:11 on millions of pixels, not to run graphs.

01:09:13 Yes.

01:09:14 So there’s a hypothesis that says

01:09:17 the way the graphs are built,

01:09:20 it's going to be really inefficient

01:09:21 to compute them on that.

01:09:24 And then the primitive is not a SIMD program,

01:09:27 it's matrix multiply, convolution.

01:09:30 And then the data manipulations are fairly extensive about,

01:09:33 like, how do you do a fast transpose with a program?

01:09:36 I don’t know if you’ve ever written a transpose program.

01:09:38 They’re ugly and slow, but in hardware,

01:09:40 you can do really well.

01:09:42 Like, I’ll give you an example.

01:09:43 So when GPU accelerators first started doing triangles,

01:09:47 like, so you have a triangle

01:09:49 which maps on a set of pixels.

01:09:51 So you build, it’s very easy,

01:09:52 straightforward to build a hardware engine

01:09:54 that’ll find all those pixels.

01:09:55 And it’s kind of weird

01:09:56 because you walk along the triangle to get to the edge,

01:09:59 and then you have to go back down to the next row

01:10:01 and walk along, and then you have to decide on the edge

01:10:04 if the line of the triangle is like half on the pixel,

01:10:08 what’s the pixel color?

01:10:09 Because it’s half of this pixel and half the next one.

01:10:11 That’s called rasterization.

01:10:12 And you’re saying that could be done in hardware?

01:10:15 No, that’s an example of that operation

01:10:19 as a software program is really bad.

01:10:22 I’ve written a program that did rasterization.

01:10:24 The hardware that does it has actually less code

01:10:26 than the software program that does it,

01:10:28 and it’s way faster.

01:10:31 Right, so there are certain times

01:10:33 when the abstraction you have, rasterize a triangle,

01:10:37 you know, execute a graph, you know, components of a graph.

01:10:41 But the right thing to do in the hardware software boundary

01:10:43 is for the hardware to naturally do it.

01:10:45 And so the GPU is really optimized

01:10:47 for the rasterization of triangles.

01:10:50 Well, you know, that’s just, well, like in a modern,

01:10:52 you know, that’s a small piece of modern GPUs.

01:10:56 What they did is that they still rasterize triangles

01:10:59 when you’re running in a game, but for the most part,

01:11:02 most of the computation in the area of the GPU

01:11:04 is running shader programs.

01:11:05 But they’re single threaded programs on pixels, not graphs.

01:11:09 I have to be honest, I’d say I don’t actually know

01:11:11 the math behind shaders, shading and lighting

01:11:15 and all that kind of stuff.

01:11:16 I don’t know what.

01:11:17 They look like little simple floating point programs

01:11:20 or complicated ones.

01:11:21 You can have 8,000 instructions in a shader program.

01:11:23 But I don’t have a good intuition

01:11:25 why it could be parallelized so easily.

01:11:27 No, it’s because you have 8 million pixels in every single.

01:11:30 So when you have a light, right, that comes down,

01:11:34 the angle, you know, the amount of light,

01:11:36 like say this is a line of pixels across this table, right?

01:11:40 The amount of light on each pixel is subtly different.

01:11:43 And each pixel is responsible for figuring out what.

01:11:45 Figuring it out.

01:11:46 So that pixel says, I’m this pixel.

01:11:48 I know the angle of the light.

01:11:49 I know the occlusion.

01:11:50 I know the color I am.

01:11:52 Like every single pixel here is a different color.

01:11:54 Every single pixel gets a different amount of light.

01:11:57 Every single pixel has a subtly different translucency.

01:12:00 So to make it look realistic,

01:12:02 the solution was you run a separate program on every pixel.

01:12:05 See, but I thought there’s like reflection

01:12:06 from all over the place.

01:12:08 Every pixel. Yeah, but there is.

01:12:09 So you build a reflection map,

01:12:11 which also has some pixelated thing.

01:12:14 And then when the pixel is looking at the reflection map,

01:12:16 it has to calculate what the normal of the surface is.

01:12:19 And it does it per pixel.

01:12:20 By the way, there’s boatloads of hacks on that.

01:12:22 You know, like you may have a lower resolution light map,

01:12:25 your reflection map.

01:12:26 There’s all these, you know, tax they do.

01:12:29 But at the end of the day, it’s per pixel computation.
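Because each pixel's result depends only on its own inputs, the "separate program on every pixel" model parallelizes trivially. Here is a toy diffuse shading loop; the lighting model and all the numbers are invented for illustration.

```python
import math

def shade_row(normals, light_dir, base_color):
    """Run the same little program independently on each pixel:
    brightness = max(0, N . L), scaled into the pixel's base color."""
    lx, ly, lz = light_dir
    norm = math.sqrt(lx * lx + ly * ly + lz * lz)
    lx, ly, lz = lx / norm, ly / norm, lz / norm
    out = []
    for nx, ny, nz in normals:  # every pixel computes on its own
        brightness = max(0.0, nx * lx + ny * ly + nz * lz)
        out.append(tuple(round(c * brightness) for c in base_color))
    return out

# Two pixels: one facing the light, one perpendicular to it.
row = shade_row([(0, 0, 1), (0, 1, 0)],
                light_dir=(0, 0, 1),
                base_color=(200, 100, 50))
print(row)  # [(200, 100, 50), (0, 0, 0)]
```

Nothing inside the loop body reads a neighboring pixel, which is exactly what lets a GPU fan the same program out across millions of pixels.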

01:12:32 And it’s so happening that you can map

01:12:35 graph like computation onto this pixel central computation.

01:12:39 You can do floating point programs

01:12:41 on convolutions and the matrices.

01:12:43 And Nvidia invested for years in CUDA.

01:12:46 First for HPC, and then they got lucky with the AI trend.

01:12:50 But do you think they’re going to essentially

01:12:52 not be able to hardcore pivot out of their?

01:12:55 We’ll see.

01:12:57 That’s always interesting.

01:12:59 How often do big companies hardcore pivot?

01:13:01 Occasionally.

01:13:03 How much do you know about Nvidia, folks?

01:13:06 Some. Some?

01:13:08 Well, I’m curious as well.

01:13:10 Who’s ultimately, as a…

01:13:11 Well, they’ve innovated several times.

01:13:13 But they’ve also worked really hard on mobile.

01:13:15 They’ve worked really hard on radios.

01:13:17 You know, they’re fundamentally a GPU company.

01:13:20 Well, they tried to pivot.

01:13:21 There’s an interesting little game and play

01:13:26 in autonomous vehicles, right?

01:13:27 With, or semi autonomous, like playing with Tesla

01:13:30 and so on and seeing that’s dipping a toe

01:13:34 into that kind of pivot.

01:13:35 They came out with this platform,

01:13:37 which is interesting technically.

01:13:39 But it was like a 3000 watt, you know,

01:13:42 $3,000 GPU platform.

01:13:46 I don’t know if it’s interesting technically.

01:13:47 It’s interesting philosophically.

01:13:49 Technically, I don’t know if it’s the execution

01:13:51 of the craftsmanship is there.

01:13:53 I’m not sure.

01:13:54 But I didn’t get a sense.

01:13:55 I think they were repurposing GPUs

01:13:57 for an automotive solution.

01:13:59 Right, it’s not a real pivot.

01:14:00 They didn’t build a ground up solution.

01:14:03 Right.

01:14:03 Like the chips inside Tesla are pretty cheap.

01:14:06 Like Mobileye has been doing this.

01:14:08 They’re doing the classic work from the simplest thing.

01:14:10 Yeah.

01:14:11 I mean, 40 square millimeter chips.

01:14:14 And Nvidia, their solution had 800 millimeter chips

01:14:17 and two 200 millimeter chips.

01:14:19 And, you know, like boatloads are really expensive DRAMs.

01:14:22 And, you know, it’s a really different approach.

01:14:27 And Mobileye fit the, let’s say,

01:14:28 automotive cost and form factor.

01:14:31 And then they added features as it was economically viable.

01:14:34 And Nvidia said, take the biggest thing

01:14:36 and we’re gonna go make it work.

01:14:38 You know, and that’s also influenced like Waymo.

01:14:41 There’s a whole bunch of autonomous startups

01:14:43 where they have a 5,000 watt server in their trunk.

01:14:46 Right.

01:14:47 But that’s because they think, well, 5,000 watts

01:14:50 and, you know, $10,000 is okay

01:14:52 because it’s replacing a driver.

01:14:54 Elon’s approach was that port has to be cheap enough

01:14:58 to put it in every single Tesla,

01:14:59 whether they turn on autonomous driving or not.

01:15:02 Which, and Mobileye was like,

01:15:04 we need to fit in the BOM and, you know,

01:15:06 cost structure that car companies have.

01:15:09 So they may sell you a GPS for 1500 bucks,

01:15:12 but the BOM for that, it's like $25.

01:15:16 Well, and for Mobileye, it seems like neural networks

01:15:20 were not first class citizens, like the computation.

01:15:22 They didn’t start out as a…

01:15:24 Yeah, it was a CV problem.

01:15:26 Yeah.

01:15:27 And did classic CV and found stoplights and lines.

01:15:29 And they were really good at it.

01:15:31 Yeah, and they never, I mean,

01:15:33 I don’t know what’s happening now,

01:15:34 but they never fully pivoted.

01:15:35 I mean, it’s like, it’s the Nvidia thing.

01:15:37 And then as opposed to,

01:15:39 so if you look at the new Tesla work,

01:15:41 it’s like neural networks from the ground up, right?

01:15:45 Yeah, and even Tesla started with a lot of CV stuff in it

01:15:48 and Andrei’s basically been eliminating it.

01:15:51 Move everything into the network.

01:15:54 So without, this isn’t like confidential stuff,

01:15:57 but you sitting on a porch, looking over the world,

01:16:01 looking at the work that Andrej's doing,

01:16:03 that Elon’s doing with Tesla Autopilot,

01:16:06 do you like the trajectory of where things are going

01:16:08 on the hardware side?

01:16:09 Well, they’re making serious progress.

01:16:10 I like the videos of people driving the beta stuff.

01:16:14 I guess taking some pretty complicated intersections

01:16:16 and all that, but it’s still an intervention per drive.

01:16:20 I mean, I have autopilot, the current autopilot,

01:16:23 my Tesla, I use it every day.

01:16:24 Do you have full self driving beta or no?

01:16:26 No.

01:16:27 So you like where this is going?

01:16:28 They’re making progress.

01:16:29 It’s taking longer than anybody thought.

01:16:32 You know, my wonder is, you know, hardware three,

01:16:37 is it enough computing off by two, off by five,

01:16:40 off by 10, off by a hundred?

01:16:42 Yeah.

01:16:43 And I thought it probably wasn’t enough,

01:16:47 but they’re doing pretty well with it now.

01:16:49 Yeah.

01:16:50 And one thing is the data set gets bigger,

01:16:53 the training gets better.

01:16:55 And then there’s this interesting thing is you sort of train

01:16:58 and build an arbitrary size network that solves the problem.

01:17:01 And then you refactor the network down to the thing

01:17:03 that you can afford to ship, right?

01:17:06 So the goal isn’t to build a network that fits in the phone.

01:17:10 It’s to build something that actually works.

01:17:14 And then how do you make that most effective

01:17:17 on the hardware you have?

01:17:19 And they seem to be doing that much better

01:17:21 than a couple of years ago.
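One concrete version of "refactor the network down to the thing you can afford to ship" is magnitude pruning, sketched below. It is just one common shrinking technique among several (quantization and distillation are others), not a claim about what Tesla actually does.

```python
import numpy as np

def prune_by_magnitude(weights, keep_fraction=0.25):
    """Zero out the smallest magnitude weights, shrinking a big trained
    network toward something cheap enough to ship."""
    flat = np.abs(weights).ravel()
    k = max(1, int(len(flat) * keep_fraction))
    # Magnitude of the k-th largest weight becomes the cutoff.
    threshold = np.sort(flat)[-k]
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

w = np.array([[0.9, -0.01],
              [0.05, -0.8]])
pruned, mask = prune_by_magnitude(w, keep_fraction=0.5)
print(pruned)  # only the two largest magnitude weights survive
```

In practice the pruned network is then fine tuned again so accuracy recovers, which is part of why the iteration loop matters so much.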

01:17:23 Well, the one really important thing is also

01:17:25 what they’re doing well is how to iterate that quickly,

01:17:28 which means like it’s not just about one time deployment,

01:17:31 one building, it’s constantly iterating the network

01:17:34 and trying to automate as many steps as possible, right?

01:17:37 And that’s actually the principles of the Software 2.0,

01:17:41 like you mentioned with Andrej, is it's not just,

01:17:46 I mean, I don’t know what the actual,

01:17:48 his description of Software 2.0 is.

01:17:50 If it’s just high level philosophical or their specifics,

01:17:53 but the interesting thing about what that actually looks

01:17:57 in the real world is that what I think Andrej calls

01:18:01 the data engine, it’s like it’s the iterative improvement

01:18:05 of the thing.

01:18:06 You have a neural network that does stuff,

01:18:10 fails on a bunch of things and learns from it

01:18:12 over and over and over.

01:18:13 So you’re constantly discovering edge cases.

01:18:15 So it’s very much about like data engineering,

01:18:19 like figuring out, it’s kind of what you were talking about

01:18:23 with Tenstorrent is you have the data landscape.

01:18:25 And you have to walk along that data landscape

01:18:27 in a way that is constantly improving the neural network.

01:18:32 And that feels like that’s the central piece of it.

01:18:35 And there’s two pieces of it.

01:18:37 Like you find edge cases that don’t work

01:18:40 and then you define something that goes,

01:18:42 get your data for that.

01:18:44 But then the other constraint is whether you have

01:18:45 to label it or not.

01:18:46 Like the amazing thing about like the GPT3 stuff

01:18:49 is it’s unsupervised.

01:18:51 So there’s essentially infinite amount of data.

01:18:53 Now there’s obviously infinite amount of data available

01:18:56 from cars of people successfully driving.

01:18:59 But the current pipelines are mostly running

01:19:02 on labeled data, which is human limited.

01:19:04 So when that becomes unsupervised,

01:19:09 it’ll create unlimited amount of data,

01:19:12 which then they’ll scale.

01:19:14 Now the networks that may use that data

01:19:16 might be way too big for cars,

01:19:18 but then there’ll be the transformation from now

01:19:20 we have unlimited data, I know exactly what I want.

01:19:22 Now can I turn that into something that fits in the car?

01:19:25 And that process is gonna happen all over the place.

01:19:29 Every time you get to the place where you have

01:19:30 unlimited data, and that’s what software 2.0 is about,

01:19:34 unlimited data training networks to do stuff

01:19:37 without humans writing code to do it.

01:19:40 And ultimately also trying to discover,

01:19:42 like you’re saying, the self supervised formulation

01:19:46 of the problem.

01:19:47 So the unsupervised formulation of the problem.

01:19:49 Like in driving, there’s this really interesting thing,

01:19:53 which is you look at a scene that’s before you,

01:19:58 and you have data about what a successful human driver did

01:20:01 in that scene one second later.

01:20:04 It’s a little piece of data that you can use

01:20:06 just like with GPT3 as training.
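That self-supervised signal can be made concrete: pair each logged scene with what the human driver did one step later, so training pairs fall out of the log with no labeling. A toy sketch, with made-up scene and action names:

```python
# Hedged sketch of the self-supervised driving signal: every logged frame
# pairs "scene now" with "what the human did one step later", so the
# training set is harvested from logs for free. Data is illustrative.

def make_pairs(log, horizon=1):
    """Turn a driving log into (scene, future_action) training pairs."""
    return [
        (log[i]["scene"], log[i + horizon]["action"])
        for i in range(len(log) - horizon)
    ]

log = [
    {"scene": "clear_road", "action": "accelerate"},
    {"scene": "car_ahead", "action": "brake"},
    {"scene": "stopped", "action": "wait"},
]
pairs = make_pairs(log)   # supervision with zero human labeling
```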

01:20:09 Currently, even though Tesla says they’re using that,

01:20:12 it’s an open question to me, how far can you,

01:20:15 can you solve all of the driving

01:20:17 with just that self supervised piece of data?

01:20:20 And like, I think.

01:20:23 Well, that’s what Comma AI is doing.

01:20:25 That’s what Comma AI is doing,

01:20:26 but the question is how much data.

01:20:29 So what Comma AI doesn’t have is as good

01:20:33 of a data engine, for example, as Tesla does.

01:20:35 That’s where the, like the organization of the data.

01:20:39 I mean, as far as I know, I haven’t talked to George,

01:20:41 but they do have the data.

01:20:44 The question is how much data is needed,

01:20:47 because we say infinite very loosely here.

01:20:51 And then the other question, which you said,

01:20:54 I don’t know if you think it’s still an open question is,

01:20:57 are we on the right order of magnitude

01:20:59 for the compute necessary?

01:21:02 That is this, is it like what Elon said,

01:21:04 this chip that’s in there now is enough

01:21:07 to do full self driving,

01:21:08 or do we need another order of magnitude?

01:21:10 I think nobody actually knows the answer to that question.

01:21:13 I like the confidence that Elon has, but.

01:21:16 Yeah, we’ll see.

01:21:17 There’s another funny thing is you don’t learn to drive

01:21:20 with infinite amounts of data.

01:21:22 You learn to drive with an intellectual framework

01:21:24 that understands physics and color and horizontal surfaces

01:21:28 and laws and roads and all your experience

01:21:33 from manipulating your environment.

01:21:36 Like, look, there’s so many factors go into that.

01:21:39 So then when you learn to drive,

01:21:40 like driving is a subset of this conceptual framework

01:21:44 that you have, right?

01:21:46 And so with self driving cars right now,

01:21:48 we’re teaching them to drive with driving data.

01:21:51 You never teach a human to do that.

01:21:53 You teach a human all kinds of interesting things,

01:21:55 like language, like don’t do that, watch out.

01:21:59 There’s all kinds of stuff going on.

01:22:01 Well, this is where you, I think previous time

01:22:02 we talked about where you poetically disagreed

01:22:07 with my naive notion about humans.

01:22:10 I just think that humans will make

01:22:13 this whole driving thing really difficult.

01:22:15 Yeah, all right.

01:22:17 I said, humans don’t move that slow.

01:22:19 It’s a ballistics problem.

01:22:20 It’s a ballistics, humans are a ballistics problem,

01:22:22 which is like poetry to me.

01:22:24 It’s very possible that in driving

01:22:26 they’re indeed purely a ballistics problem.

01:22:28 And I think that’s probably the right way to think about it.

01:22:30 But I still, they still continue to surprise me,

01:22:34 those damn pedestrians, the cyclists,

01:22:36 other humans in other cars and.

01:22:39 Yeah, but it’s gonna be one of these compensating things.

01:22:41 So like when you’re driving,

01:22:43 you have an intuition about what humans are going to do,

01:22:46 but you don’t have 360 cameras and radars

01:22:49 and you have an attention problem.

01:22:51 So the self driving car comes in with no attention problem,

01:22:55 360 cameras right now, a bunch of other features.

01:22:58 So they’ll wipe out a whole class of accidents, right?

01:23:01 And emergency braking with radar

01:23:05 and especially as it gets AI enhanced

01:23:07 will eliminate collisions, right?

01:23:10 But then you have the other problems

01:23:12 of these unexpected things where

01:23:13 you think your human intuition is helping,

01:23:15 but then the cars also have a set of hardware features

01:23:19 that you’re not even close to.

01:23:21 And the key thing of course is if you wipe out

01:23:25 a huge number of kind of accidents,

01:23:27 then it might be just way safer than a human driver,

01:23:30 even though, even if humans are still a problem,

01:23:32 that’s hard to figure out.

01:23:34 Yeah, that’s probably what will happen.

01:23:36 Those autonomous cars will have a small number of accidents

01:23:38 humans would have avoided, but they’ll wipe,

01:23:41 they’ll get rid of the bulk of them.

01:23:43 What do you think about like Tesla’s Dojo efforts,

01:23:48 or it can be bigger than Tesla in general.

01:23:51 It’s kind of like Tenstorrent trying to innovate,

01:23:55 like this is the dichotomy, like should a company

01:23:58 try to from scratch build its own

01:24:00 neural network training hardware?

01:24:03 Well, first of all, I think it’s great.

01:24:04 So we need lots of experiments, right?

01:24:06 And there’s lots of startups working on this

01:24:09 and they’re pursuing different things.

01:24:11 I was there when we started Dojo and it was sort of like,

01:24:14 what’s the unconstrained computer solution

01:24:17 to go do very large training problems?

01:24:21 And then there’s fun stuff like, we said,

01:24:24 well, we have this 10,000 watt board to cool.

01:24:27 Well, you go talk to guys at SpaceX

01:24:29 and they think 10,000 watts is a really small number,

01:24:31 not a big number.

01:24:32 And there’s brilliant people working on it.

01:24:35 I’m curious to see how it’ll come out.

01:24:37 I couldn’t tell you, I know it pivoted

01:24:39 a few times since I left, so.

01:24:41 So the cooling does seem to be a big problem.

01:24:44 I do like what Elon said about it, which is like,

01:24:47 we don’t wanna do the thing unless it’s way better

01:24:50 than the alternative, whatever the alternative is.

01:24:52 So it has to be way better than like racks or GPUs.

01:24:57 Yeah, and the other thing is just like,

01:25:00 you know, the Tesla autonomous driving hardware,

01:25:03 it was only serving one software stack.

01:25:06 And the hardware team and the software team

01:25:08 were tightly coupled.

01:25:09 You know, if you’re building a general purpose AI solution,

01:25:12 then you know, there’s so many different customers

01:25:14 with so many different needs.

01:25:16 Now, something Andrej said is, I think this is amazing.

01:25:19 10 years ago, like vision, recommendation, language,

01:25:24 were completely different disciplines.

01:25:27 He said, the people literally couldn’t talk to each other.

01:25:29 And three years ago, it was all neural networks,

01:25:32 but the very different neural networks.

01:25:34 And recently, it’s converging on one set of networks.

01:25:37 They vary a lot in size, obviously, they vary in data,

01:25:40 vary in outputs, but the technology has converged

01:25:43 a good bit.

01:25:44 Yeah, these transformers behind GPT3,

01:25:47 it seems like they could be applied to video,

01:25:48 they could be applied to a lot of, and it’s like,

01:25:51 and they’re all really simple.

01:25:52 And it was like they literally replace letters with pixels.

01:25:56 It does vision, it’s amazing.
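The "replace letters with pixels" idea (roughly what OpenAI's Image GPT showed) amounts to flattening an image into a token sequence and reusing the same next-token objective as text; a toy sketch, with details that are illustrative rather than the actual model:

```python
# Hedged sketch of "letters replaced with pixels": raster-scan an image
# into one token stream, then form the same (context, next token)
# training pairs a text GPT uses. The 2x2 "image" is a toy.

def to_sequence(image):
    """Raster-scan a 2-D grid of pixel values into one token stream."""
    return [px for row in image for px in row]

def next_token_pairs(tokens):
    """The GPT objective: predict token i+1 from tokens up to i."""
    return [(tokens[:i + 1], tokens[i + 1]) for i in range(len(tokens) - 1)]

image = [[0, 1],
         [1, 0]]
tokens = to_sequence(image)          # [0, 1, 1, 0]
pairs = next_token_pairs(tokens)     # same shape of problem as language
```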

01:25:58 And then size actually improves the thing.

01:26:02 So the bigger it gets, the more compute you throw at it,

01:26:04 the better it gets.

01:26:05 And the more data you have, the better it gets.
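Empirical scaling-law work on language models reports exactly this shape: loss falling smoothly, roughly as a power law, with size and data. A toy curve with an illustrative exponent, not a measured one:

```python
# Hedged sketch of power-law scaling: loss shrinks smoothly as parameter
# count grows. The exponent is an illustrative assumption, not a
# measured value from any paper.

def loss(params, alpha=0.08):
    """Toy power-law loss in the number of parameters."""
    return params ** -alpha

small, big = loss(1e8), loss(1e10)   # bigger model, lower toy loss
```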

01:26:08 So then you start to wonder, well,

01:26:11 is that a fundamental thing?

01:26:12 Or is this just another step to some fundamental understanding

01:26:16 about this kind of computation?

01:26:18 Which is really interesting.

01:26:20 Us humans don’t want to believe that that kind of thing

01:26:22 will achieve conceptual understandings, you were saying,

01:26:24 like it’ll figure out physics, but maybe it will.

01:26:27 Maybe.

01:26:27 Maybe it will.

01:26:29 Well, it’s worse than that.

01:26:31 It’ll understand physics in ways that we can’t understand.

01:26:33 I like your Stephen Wolfram talk where he said,

01:26:36 you know, there’s three generations of physics.

01:26:38 There was physics by reasoning.

01:26:40 Well, big things should fall faster than small things,

01:26:42 right?

01:26:43 That’s reasoning.

01:26:44 And then there’s physics by equations.

01:26:46 Like, you know, but the number of programs in the world

01:26:49 that are solved with a single equation is relatively low.

01:26:51 Almost all programs have, you know,

01:26:53 more than one line of code, maybe 100 million lines of code.

01:26:56 So he said, now we’re going to physics by computation,

01:26:59 which is his project, which is cool.

01:27:02 I might point out there were two generations of physics

01:27:07 before reasoning happened.

01:27:10 Like all animals, you know, know things fall

01:27:12 and, you know, birds fly and, you know, predators know

01:27:15 how to, you know, solve a differential equation

01:27:17 to cut off an accelerating, you know, curving animal path.

01:27:22 And then there was, you know, the gods did it, right?

01:27:28 So, right.

01:27:29 So there was, you know, there’s five generations.

01:27:31 Now, software 2.0 says programming things

01:27:35 is not the last step.

01:27:38 Data.

01:27:39 So there’s going to be a physics past Stephen Wolfram’s computation.

01:27:44 That’s not explainable to us humans.

01:27:47 And actually there’s no reason that I can see

01:27:51 well that even that’s the limit.

01:27:53 Like, there’s something beyond that.

01:27:55 I mean, they’re usually, like, usually when you have

01:27:57 this hierarchy, it’s not like, well, if you have this step

01:27:59 and this step and this step and they’re all qualitatively

01:28:01 different and conceptually different, it’s not obvious why,

01:28:05 you know, six is the right number of hierarchy steps

01:28:07 and not seven or eight or.

01:28:09 Well, then it’s probably impossible for us to,

01:28:12 to comprehend something that’s beyond the thing

01:28:15 that’s not explainable.

01:28:18 Yeah.

01:28:19 But the thing that, you know, understands the thing

01:28:21 that’s not explainable to us will conceive the next one.

01:28:25 And like, I’m not sure why there’s a limit to it.

01:28:30 Click, your brain hurts.

01:28:31 That’s a sad story.

01:28:34 If we look at our own brain, which is an interesting

01:28:38 illustrative example, in your work with Tenstorrent

01:28:42 and trying to design deep learning architectures,

01:28:46 do you think about the brain at all?

01:28:50 Maybe from a hardware designer perspective,

01:28:53 if you could change something about the brain,

01:28:56 what would you change or do?

01:28:58 Funny question.

01:29:00 Like, how would you do it?

01:29:00 So your brain is really weird.

01:29:02 Like, you know, your cerebral cortex where we think

01:29:04 we do most of our thinking is what,

01:29:06 like six or seven neurons thick?

01:29:08 Yeah.

01:29:09 Like, that’s weird.

01:29:10 Like all the big networks are way bigger than that.

01:29:13 Like way deeper.

01:29:14 So that seems odd.

01:29:16 And then, you know, when you’re thinking if it’s,

01:29:19 if the input generates a result you can use,

01:29:21 it goes really fast.

01:29:22 But if it can’t, that generates an output

01:29:25 that’s interesting, which turns into an input

01:29:27 and then back into your brain, to the point where you mull things

01:29:29 over for days. And how many trips

01:29:31 through your brain is that, right?

01:29:33 Like it’s, you know, 300 milliseconds or something

01:29:36 to get through seven levels of neurons.

01:29:37 I forget the number exactly.

01:29:39 But then it does it over and over and over as it searches.

01:29:43 And the brain clearly looks like some kind of graph

01:29:46 because you have a neuron with connections

01:29:48 and it talks to other ones

01:29:49 and it’s locally very computationally intense,

01:29:52 but it also does sparse computations

01:29:55 across a pretty big area.

01:29:57 There’s a lot of messy biological type of things

01:30:00 and by that I mean, like, first of all,

01:30:03 there’s mechanical, chemical and electrical signals.

01:30:06 It’s all that’s going on.

01:30:07 Then there’s the asynchronicity of signals.

01:30:12 And there’s like, there’s just a lot of variability

01:30:14 that seems continuous and messy

01:30:16 and just the mess of biology.

01:30:18 And it’s unclear whether that’s a good thing

01:30:22 or a bad thing, because if it’s a good thing

01:30:26 that requires running the entirety of evolution,

01:30:29 well, we’re gonna have to start with basic bacteria

01:30:31 to create something.

01:30:32 So imagine we could control,

01:30:34 you could build a brain with 10 layers.

01:30:35 Would that be better or worse?

01:30:37 Or more connections or less connections,

01:30:39 or we don’t know to what level our brains are optimized.

01:30:44 But if I was changing things,

01:30:45 like you can only hold like seven numbers in your head.

01:30:49 Like why not a hundred or a million?

01:30:51 Never thought of that.

01:30:53 And why can’t we have like a floating point processor

01:30:56 that can compute anything we want

01:30:59 and see it all properly?

01:31:01 Like that would be kind of fun.

01:31:03 And why can’t we see in four or eight dimensions?

01:31:05 Because 3D is kind of a drag.

01:31:10 Like all the hard math transforms

01:31:11 are up in multiple dimensions.

01:31:13 So you could imagine a brain architecture

01:31:16 that you could enhance with a whole bunch of features

01:31:21 that would be really useful for thinking about things.

01:31:24 It’s possible that the limitations you’re describing

01:31:26 are actually essential for like the constraints

01:31:29 are essential for creating like the depth of intelligence.

01:31:34 Like that, the ability to reason.

01:31:38 It’s hard to say

01:31:39 because like your brain is clearly a parallel processor.

01:31:44 10 billion neurons talking to each other

01:31:46 at a relatively low clock rate.

01:31:48 But it produces something

01:31:50 that looks like a serial thought process.

01:31:52 It’s a serial narrative in your head.

01:31:54 That’s true.

01:31:55 But then there are people famously who are visual thinkers.

01:31:59 Like I think I’m a relatively visual thinker.

01:32:02 I can imagine any object and rotate it in my head

01:32:05 and look at it.

01:32:06 And there are people who say

01:32:07 they don’t think that way at all.

01:32:09 And recently I read an article about people

01:32:12 who say they don’t have a voice in their head.

01:32:16 They can talk.

01:32:18 But when they, you know, it’s like,

01:32:19 well, what are you thinking?

01:32:21 No, they’ll describe something that’s visual.

01:32:24 So that’s curious.

01:32:26 Now, if you’re saying,

01:32:31 if we dedicated more hardware to holding information,

01:32:34 like, you know, 10 numbers or a million numbers,

01:32:37 like would that distract us from our ability

01:32:41 to form this kind of singular identity?

01:32:44 Like it dissipates somehow.

01:32:46 But maybe, you know, future humans

01:32:49 will have many identities

01:32:50 that have some higher level organization

01:32:53 but can actually do lots more things in parallel.

01:32:55 Yeah, there’s no reason, if we’re thinking modularly,

01:32:57 there’s no reason we can’t have multiple consciousnesses

01:33:00 in one brain.

01:33:01 Yeah, and maybe there’s some way to make it faster

01:33:03 so that the, you know, the area of the computation

01:33:07 could still have a unified feel to it

01:33:13 while still having way more ability

01:33:15 to do parallel stuff at the same time.

01:33:17 Could definitely be improved.

01:33:19 Could be improved?

01:33:20 Yeah.

01:33:20 Okay, well, it’s pretty good right now.

01:33:22 Actually, people don’t give it enough credit.

01:33:24 The thing is pretty nice.

01:33:25 The, you know, the fact that the right ends

01:33:29 seem to give a nice, like,

01:33:32 spark of beauty to the whole experience.

01:33:37 I don’t know.

01:33:38 I don’t know if it can be improved easily.

01:33:40 It could be more beautiful.

01:33:42 I don’t know how, I, what?

01:33:44 What do you mean, what do you mean how?

01:33:46 All the ways you can’t imagine.

01:33:48 No, but that’s the whole point.

01:33:49 I wouldn’t be able to,

01:33:51 the fact that I can imagine ways

01:33:53 in which it could be more beautiful means.

01:33:55 So do you know, you know, Iain Banks’s stories?

01:33:59 So the super smart AIs there live,

01:34:03 mostly live in the world of what they call infinite fun

01:34:07 because they can create arbitrary worlds.

01:34:12 So they interact in, you know, the story has it.

01:34:14 They interact in the normal world and they’re very smart

01:34:16 and they can do all kinds of stuff.

01:34:18 And, you know, a given mind can, you know,

01:34:20 talk to a million humans at the same time

01:34:22 because we’re very slow and for reasons,

01:34:24 you know, artificial, the story,

01:34:26 they’re interested in people and doing stuff,

01:34:28 but they mostly live in this other land of thinking.

01:34:33 My inclination is to think that the ability

01:34:36 to create infinite fun will not be so fun.

01:34:41 That’s sad.

01:34:42 Well, there are so many things to do.

01:34:43 Imagine being able to make a star move planets around.

01:34:47 Yeah, yeah, but because we can imagine that

01:34:50 is why life is fun, if we actually were able to do it,

01:34:53 it would be a slippery slope

01:34:55 where fun wouldn’t even have a meaning

01:34:56 because we just consistently desensitize ourselves

01:35:00 by the infinite amounts of fun we’re having.

01:35:04 And the sadness, the dark stuff is what makes it fun.

01:35:07 I think that could be the Russian.

01:35:10 It could be the fun makes it fun

01:35:12 and the sadness makes it bittersweet.

01:35:16 Yeah, that’s true.

01:35:17 Fun could be the thing that makes it fun.

01:35:20 So what do you think about the expansion,

01:35:22 not through the biology side,

01:35:23 but through the BCI, the brain computer interfaces?

01:35:27 Yeah, you got a chance to check out the Neuralink stuff.

01:35:30 It’s super interesting.

01:35:31 Like humans like our thoughts to manifest as action.

01:35:37 You know, like as a kid, you know,

01:35:39 like shooting a rifle was super fun,

01:35:41 driving a mini bike, doing things.

01:35:44 And then computer games, I think,

01:35:46 for a lot of kids became the thing

01:35:47 where they can do what they want.

01:35:50 They can fly a plane, they can do this, they can do this.

01:35:53 But you have to have this physical interaction.

01:35:55 Now imagine, you could just imagine stuff and it happens.

01:36:03 Like really richly and interestingly.

01:36:06 Like we kind of do that when we dream.

01:36:08 Like dreams are funny because like if you have some control

01:36:12 or awareness in your dreams,

01:36:13 like it’s very realistic looking,

01:36:16 or not realistic looking, it depends on the dream.

01:36:19 But you can also manipulate that.

01:36:22 And you know, what’s possible there is odd.

01:36:26 And the fact that nobody understands it, it’s hilarious, but.

01:36:29 Do you think it’s possible to expand

01:36:31 that capability through computing?

01:36:34 Sure.

01:36:35 Is there some interesting,

01:36:36 so from a hardware designer perspective,

01:36:38 is there, do you think it’ll present totally new challenges

01:36:41 in the kind of hardware required that like,

01:36:44 so this hardware isn’t standalone computing.

01:36:47 Well, this is now working with the brain.

01:36:49 So today, computer games are rendered by GPUs.

01:36:52 Right.

01:36:53 Right, so, but you’ve seen the GAN stuff, right?

01:36:56 Where trained neural networks render realistic images,

01:37:00 but there’s no pixels, no triangles, no shaders,

01:37:03 no light maps, no nothing.

01:37:05 So the future of graphics is probably AI, right?

01:37:09 Yes.

01:37:10 AI is heavily trained by lots of real data, right?

01:37:14 So if you have an interface with an AI renderer, right?

01:37:20 So if you say render a cat, it won’t say,

01:37:23 well, how tall’s the cat and how big is it,

01:37:25 you know, it’ll render a cat.

01:37:26 And you might say, oh, a little bigger, a little smaller,

01:37:28 you know, make it a tabby, shorter hair.

01:37:31 You know, like you could tweak it.

01:37:32 Like the amount of data you’ll have to send

01:37:36 to interact with a very powerful AI renderer

01:37:40 could be low.
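The bandwidth point can be made concrete: a semantic command to a renderer is a few dozen bytes, while the raw frame it replaces is megabytes. The numbers below are assumptions chosen only to show the scale:

```python
# Hedged illustration of why a semantic interface to an AI renderer can
# be low bandwidth: send a short description, let the renderer supply
# the pixels. The command fields and frame size are assumptions.

import json

command = json.dumps({"object": "cat", "size": "bigger", "coat": "tabby"})
command_bytes = len(command.encode())

# A raw 1080p RGB frame the receiver would otherwise need sent over:
frame_bytes = 1920 * 1080 * 3

ratio = frame_bytes / command_bytes   # orders of magnitude saved
```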

01:37:41 But the question is brain computer interfaces

01:37:44 would need to render not onto a screen,

01:37:47 but render onto the brain and like directly

01:37:51 so that there’s a bandwidth.

01:37:52 Well, it could do it both ways.

01:37:53 I mean, our eyes are really good sensors.

01:37:56 They could render onto a screen

01:37:58 and we could feel like we’re participating in it.

01:38:01 You know, they’re gonna have, you know,

01:38:03 like the Oculus kind of stuff.

01:38:04 It’s gonna be so good that when it’s projected to your eyes,

01:38:07 you think it’s real.

01:38:08 You know, they’re slowly solving those problems.

01:38:12 And I suspect when the renderer of that information

01:38:17 into your head is also AI mediated,

01:38:19 they’ll be able to give you the cues that, you know,

01:38:23 you really want for depth and all kinds of stuff.

01:38:27 Like your brain is partly faking your visual field, right?

01:38:30 Like your eyes are twitching around,

01:38:32 but you don’t notice that.

01:38:33 Occasionally they blank, you don’t notice that.

01:38:36 You know, there’s all kinds of things.

01:38:37 Like you think you see over here,

01:38:39 but you don’t really see there.

01:38:40 It’s all fabricated.

01:38:42 Yeah, peripheral vision is fascinating.

01:38:45 So if you have an AI renderer that’s trained

01:38:48 to understand exactly how you see

01:38:51 and the kind of things that enhance the realism

01:38:54 of the experience, it could be super real actually.

01:39:01 So I don’t know what the limits to that are,

01:39:03 but obviously if we have a brain interface

01:39:06 that goes inside your visual cortex

01:39:10 in a better way than your eyes do, which is possible,

01:39:13 it’s a lot of neurons, maybe that’ll be even cooler.

01:39:19 Well, the really cool thing is that it has to do

01:39:21 with the infinite fun that you were referring to,

01:39:24 which is our brains seem to be very limited.

01:39:26 And like you said, computations.

01:39:28 It’s also very plastic.

01:39:29 Very plastic, yeah.

01:39:30 Yeah, so it’s a interesting combination.

01:39:33 The interesting open question is the limits

01:39:37 of that neuroplasticity, like how flexible is that thing?

01:39:42 Because we haven’t really tested it.

01:39:44 We know a bit about that from the experiments

01:39:46 where they put like a pressure pad on somebody’s head

01:39:49 and had a visual transducer pressurize it

01:39:51 and somebody slowly learned to see.

01:39:53 Yep.

01:39:55 Especially at a young age, if you throw a lot at it,

01:39:58 like what can it, so can you like arbitrarily expand it

01:40:05 with computing power?

01:40:06 So connected to the internet directly somehow?

01:40:09 Yeah, the answer’s probably yes.

01:40:11 So the problem with biology and ethics

01:40:13 is like there’s a mess there.

01:40:15 Like us humans are perhaps unwilling to take risks

01:40:21 into directions that are full of uncertainty.

01:40:25 So it’s like. No, no.

01:40:26 90% of the population’s unwilling to take risks.

01:40:28 The other 10% is rushing into the risks

01:40:31 unaided by any infrastructure whatsoever.

01:40:34 And that’s where all the fun happens in society.

01:40:38 There’s been huge transformations

01:40:41 in the last couple thousand years.

01:40:43 Yeah, it’s funny.

01:40:44 I got a chance to interact with this Matthew Johnson

01:40:48 from Johns Hopkins.

01:40:49 He’s doing this large scale study of psychedelics.

01:40:52 It’s becoming more and more,

01:40:54 I’ve gotten a chance to interact

01:40:55 with that community of scientists working on psychedelics.

01:40:57 But because of that, that opened the door to me

01:41:00 to all these, what do they call it?

01:41:02 Psychonauts, the people who, like you said,

01:41:05 the 10% who are like, I don’t care.

01:41:08 I don’t know if there’s a science behind this.

01:41:09 I’m taking this spaceship to,

01:41:12 if I’m being the first on Mars, I’ll be.

01:41:15 Psychedelics are interesting in the sense

01:41:17 that in another dimension, like you said,

01:41:21 it’s a way to explore the limits of the human mind.

01:41:25 Like, what is this thing capable of doing?

01:41:28 Because you kind of, like when you dream, you detach it.

01:41:31 I don’t know exactly the neuroscience of it,

01:41:33 but you detach your reality from what your mind,

01:41:39 the images your mind is able to conjure up

01:41:40 and your mind goes into weird places and entities appear.

01:41:44 Somehow Freudian type of trauma

01:41:48 is probably connected in there somehow,

01:41:50 but you start to have these weird, vivid worlds that like.

01:41:54 So do you actively dream?

01:41:56 Do you, why not?

01:41:59 I have like six hours of dreams a night.

01:42:01 It’s like really useful time.

01:42:03 I know, I haven’t, I don’t for some reason.

01:42:06 I just knock out and I have sometimes anxiety inducing

01:42:11 kind of like very pragmatic nightmare type of dreams,

01:42:16 but nothing fun, nothing.

01:42:18 Nothing fun?

01:42:19 Nothing fun.

01:42:20 I try. I unfortunately mostly have fun

01:42:24 in the waking world, which is very limited

01:42:27 in the amount of fun you can have.

01:42:30 It’s not that limited either.

01:42:31 Yeah, that’s why.

01:42:32 We’ll have to talk.

01:42:33 Yeah, I need instructions.

01:42:36 Yeah.

01:42:37 There’s like a manual for that.

01:42:38 You might wanna.

01:42:41 I’ll look it up.

01:42:41 I’ll ask Elon.

01:42:42 What would you dream?

01:42:44 You know, years ago when I read about, you know,

01:42:47 like, you know, a book about how to have, you know,

01:42:51 become aware of your dreams.

01:42:53 I worked on it for a while.

01:42:54 Like there’s this trick about, you know,

01:42:55 imagine you can see your hands and look out

01:42:58 and I got somewhat good at it.

01:43:00 Like, but my mostly, when I’m thinking about things

01:43:04 or working on problems, I prep myself before I go to sleep.

01:43:09 It’s like, I pull into my mind all the things

01:43:13 I wanna work on or think about.

01:43:15 And then that, let’s say, greatly improves the chances

01:43:19 that I’ll work on that while I’m sleeping.

01:43:23 And then I also, you know, basically ask to remember it.

01:43:30 And I often remember very detailed.

01:43:33 Within the dream.

01:43:34 Yeah.

01:43:34 Or outside the dream.

01:43:35 Well, to bring it up in my dreaming

01:43:37 and then to remember it when I wake up.

01:43:41 It’s just, it’s more of a meditative practice.

01:43:43 You say, you know, to prepare yourself to do that.

01:43:48 Like if you go to, you know, to sleep,

01:43:50 still gnashing your teeth about some random thing

01:43:52 that happened that you’re not that really interested in,

01:43:55 you’ll dream about it.

01:43:57 That’s really interesting.

01:43:58 Maybe.

01:43:59 But you can direct your dreams somewhat by prepping.

01:44:04 Yeah, I’m gonna have to try that.

01:44:05 It’s really interesting.

01:44:06 Like the most important, the interesting,

01:44:08 not like what did this guy send in an email

01:44:12 kind of like stupid worry stuff,

01:44:14 but like fundamental problems

01:44:15 you’re actually concerned about.

01:44:16 Yeah.

01:44:17 And interesting things you’re worried about.

01:44:18 Or books you’re reading or, you know,

01:44:20 some great conversation you had

01:44:21 or some adventure you want to have.

01:44:23 Like there’s a lot of space there.

01:44:28 And it seems to work that, you know,

01:44:32 my percentage of interesting dreams and memories went up.

01:44:36 Is there, is that the source of,

01:44:40 if you were able to deconstruct like

01:44:42 where some of your best ideas came from,

01:44:45 is there a process that’s at the core of that?

01:44:49 Like, so some people, you know, walk and think,

01:44:52 some people like in the shower, the best ideas hit them.

01:44:55 If you talk about like Newton, the apple hitting him on the head.

01:44:58 No, I found out a long time ago,

01:45:01 I process things somewhat slowly.

01:45:03 So like in college, I had friends who could study

01:45:05 at the last minute, get an A the next day.

01:45:07 I can’t do that at all.

01:45:09 So I always front loaded all the work.

01:45:10 Like I do all the problems early, you know,

01:45:14 for finals, like the last three days,

01:45:15 I wouldn’t look at a book because I want, you know,

01:45:18 cause like a new fact the day before finals may screw up

01:45:22 my understanding of what I thought I knew.

01:45:23 So my goal was to always get it in and give it time to soak.

01:45:29 And I used to, you know,

01:45:32 I remember when we were doing like 3D calculus,

01:45:33 I would have these amazing dreams of 3D surfaces

01:45:36 with normals, you know, calculating the gradient.

01:45:38 And it’s just like all come up.

01:45:40 So it was like really fun, like very visual.

01:45:43 And if I got cycles of that, that was useful.

01:45:48 And the other is, is don’t over filter your ideas.

01:45:50 Like I like that process of brainstorming

01:45:54 where lots of ideas can happen.

01:45:55 I like people who have lots of ideas.

01:45:57 But then there’s a, yeah, I’ll let them sit

01:46:00 and let it breathe a little bit

01:46:02 and then reduce it to practice.

01:46:04 Like at some point you really have to, does it really work?

01:46:09 Like, you know, is this real or not, right?

01:46:13 But you have to do both.

01:46:15 There’s creative tension there.

01:46:16 Like how do you be both open and, you know, precise?

01:46:20 Have you had ideas that you just,

01:46:22 that sit in your mind for like years before the?

01:46:26 Sure.

01:46:27 It’s an interesting way to just generate ideas

01:46:31 and just let them sit, let them sit there for a while.

01:46:35 I think I have a few of those ideas.

01:46:38 You know, that was so funny.

01:46:40 Yeah, I think that’s, you know,

01:46:42 creativity 101 or something.

01:46:45 For the slow thinkers in the room, I suppose.

01:46:49 As I, some people, like you said, are just like, like the.

01:46:53 Yeah, it’s really interesting.

01:46:54 There’s so much diversity in how people think.

01:46:57 You know, how fast or slow they are,

01:46:59 how well they remember or don’t.

01:47:01 Like, you know, I’m not super good at remembering facts,

01:47:04 but processes and methods.

01:47:06 Like in our engineering, I went to Penn State

01:47:08 and almost all our engineering tests were open book.

01:47:11 I could remember the page and not the formula.

01:47:14 But as soon as I saw the formula,

01:47:15 I could remember the whole method if I’d learned it.

01:47:19 Yeah.

01:47:20 So it’s just a funny, where some people could, you know,

01:47:23 I’d watch friends like flipping through the book,

01:47:25 trying to find the formula,

01:47:27 even knowing that they’d done just as much work.

01:47:30 And I would just open the book

01:47:31 and I was on page 27, about halfway down,

01:47:33 I could see the whole thing visually.

01:47:35 Yeah.

01:47:36 And, you know.

01:47:37 And you have to learn that about yourself

01:47:39 and figure out how to function optimally.

01:47:41 I had a friend who was always concerned

01:47:43 he didn’t know how he came up with ideas.

01:47:45 He had lots of ideas, but he said they just sort of popped up.

01:47:49 Like, you’d be working on something, you have this idea,

01:47:51 like, where does it come from?

01:47:53 But you can have more awareness of it.

01:47:54 Like, how your brain works is a little murky

01:47:59 as you go down from the voice in your head

01:48:01 or the obvious visualizations.

01:48:03 Like, when you visualize something, how does that happen?

01:48:06 Yeah, that’s right.

01:48:07 You know, if I say, you know, visualize a volcano,

01:48:09 it’s easy to do, right?

01:48:10 And what does it actually look like when you visualize it?

01:48:12 I can visualize to the point where I don’t see very much

01:48:14 out of my eyes and I see the colors

01:48:16 of the thing I’m visualizing.

01:48:18 Yeah, but there’s a shape, there’s a texture,

01:48:20 there’s a color, but there’s also conceptual visualization.

01:48:23 Like, what are you actually visualizing

01:48:25 when you’re visualizing a volcano?

01:48:27 Just like with peripheral vision,

01:48:28 you think you see the whole thing.

01:48:29 Yeah, yeah, yeah, that’s a good way to say it.

01:48:31 You know, you have this kind of almost peripheral vision

01:48:34 of your visualizations, they’re like these ghosts.

01:48:38 But if, you know, if you work on it,

01:48:40 you can get a pretty high level of detail.

01:48:42 And somehow you can walk along those visualizations

01:48:44 and come up with an idea, which is weird.

01:48:47 But when you’re thinking about solving problems,

01:48:50 like, you’re putting information in,

01:48:53 you’re exercising the stuff you do know,

01:48:55 you’re sort of teasing the area that you don’t understand

01:48:59 and don’t know, but you can almost, you know,

01:49:02 feel, you know, that process happening.

01:49:06 You know, that’s how I, like,

01:49:10 like, I know sometimes when I’m working really hard

01:49:12 on something, like, I get really hot when I’m sleeping.

01:49:14 And, you know, it’s like, we got the blanket throw,

01:49:17 I wake up, all the blankets are on the floor.

01:49:20 And, you know, every time it’s, well,

01:49:21 I wake up and think, wow, that was great.

01:49:24 You know?

01:49:25 Are you able to reverse engineer

01:49:27 what the hell happened there?

01:49:28 Well, sometimes it’s vivid dreams

01:49:30 and sometimes it’s just kind of, like you say,

01:49:32 like shadow thinking that you sort of have this feeling

01:49:35 you’re going through this stuff, but it’s not that obvious.

01:49:38 Isn’t that so amazing that the mind

01:49:40 just does all these little experiments?

01:49:42 I never, you know, I always thought it’s like a river

01:49:46 that you can’t, you’re just there for the ride,

01:49:48 but you’re right, if you prep it.

01:49:50 No, it’s all understandable.

01:49:52 Meditation really helps.

01:49:53 You gotta start figuring out,

01:49:55 you need to learn language of your own mind.

01:49:59 And there’s multiple levels of it, but.

01:50:02 The abstractions again, right?

01:50:04 It’s somewhat comprehensible and observable

01:50:06 and feelable or whatever the right word is.

01:50:11 You know, you’re not alone for the ride.

01:50:13 You are the ride.

01:50:15 I have to ask you, hardware engineer,

01:50:17 working on neural networks now, what’s consciousness?

01:50:21 What the hell is that thing?

01:50:22 Is that just some little weird quirk

01:50:25 of our particular computing device?

01:50:29 Or is it something fundamental

01:50:30 that we really need to crack open

01:50:32 if we’re to build good computers?

01:50:36 Do you ever think about consciousness?

01:50:37 Like why it feels like something to be?

01:50:39 I know, it’s really weird.

01:50:42 So.

01:50:43 Yeah.

01:50:45 I mean, everything about it’s weird.

01:50:48 First, it’s a half a second behind reality, right?

01:50:51 It’s a post hoc narrative about what happened.

01:50:53 You’ve already done stuff

01:50:56 by the time you’re conscious of it.

01:50:58 And your consciousness generally

01:51:00 is a single threaded thing,

01:51:01 but we know your brain is 10 billion neurons

01:51:03 running some crazy parallel thing.

01:51:07 And there’s a really big sorting thing going on there.

01:51:11 It also seems to be really reflective

01:51:13 in the sense that you create a space in your head.

01:51:18 Like we don’t really see anything, right?

01:51:19 Like photons hit your eyes,

01:51:21 it gets turned into signals,

01:51:22 it goes through multiple layers of neurons.

01:51:26 I’m so curious that that looks glassy

01:51:29 and that looks not glassy.

01:51:30 Like how the resolution of your vision is so high

01:51:33 you have to go through all this processing.

01:51:36 Where for most of it, it looks nothing like vision.

01:51:39 Like there’s no theater in your mind, right?

01:51:43 So we have a world in our heads.

01:51:46 We’re literally just isolated behind our sensors.

01:51:51 But we can look at it, speculate about it,

01:51:55 speculate about alternatives, problem solve, what if.

01:52:00 There’s so many things going on

01:52:02 and that process is lagging reality.

01:52:06 And it’s single threaded

01:52:07 even though the underlying thing is like massively parallel.

01:52:10 So it’s so curious.

01:52:12 So imagine you’re building an AI computer.

01:52:14 If you wanted to replicate humans,

01:52:16 well, you’d have huge arrays of neural networks

01:52:18 and apparently only six or seven deep, which is hilarious.

01:52:22 They don’t even remember seven numbers,

01:52:23 but I think we can upgrade that a lot, right?

01:52:26 And then somewhere in there,

01:52:28 you would train the network to create

01:52:30 basically the world that you live in, right?

01:52:32 So like tell stories to itself

01:52:34 about the world that it’s perceiving.

01:52:36 Well, create the world, tell stories in the world

01:52:40 and then have many dimensions of like side shows to it.

01:52:47 Like we have an emotional structure,

01:52:49 like we have a biological structure.

01:52:51 And that seems hierarchical too.

01:52:52 Like if you’re hungry, it dominates your thinking.

01:52:55 If you’re mad, it dominates your thinking.

01:52:59 And we don’t know if that’s important

01:53:00 to consciousness or not,

01:53:01 but it certainly disrupts, intrudes in the consciousness.

01:53:05 Like so there’s lots of structure to that.

01:53:08 And we like to dwell on the past.

01:53:09 We like to think about the future.

01:53:11 We like to imagine, we like to fantasize, right?

01:53:14 And the somewhat circular observation of that

01:53:18 is the thing we call consciousness.

01:53:21 Now, if you created a computer system

01:53:23 and did all things, create worldviews,

01:53:24 create the future alternate histories,

01:53:27 dwelled on past events, accurately or semi accurately.

01:53:33 Would consciousness just spring up, like, naturally?

01:53:35 Well, would that look and feel conscious to you?

01:53:38 Like you seem conscious to me, but I don’t know.

01:53:39 From the external observer’s perspective.

01:53:41 Do you think a thing that looks conscious is conscious?

01:53:44 Like do you, again, this is like an engineering

01:53:48 kind of question, I think, because like.

01:53:53 I don’t know.

01:53:54 If we want to engineer consciousness,

01:53:56 is it okay to engineer something

01:53:58 that just looks conscious?

01:54:00 Or is there a difference between something that is?

01:54:02 Well, we evolved consciousness

01:54:04 because it’s a super effective way to manage our affairs.

01:54:07 Yeah, this is a social element, yeah.

01:54:09 Well, it gives us a planning system.

01:54:11 We have a huge amount of stuff.

01:54:13 Like when we’re talking, like the reason

01:54:15 we can talk really fast is we’re modeling each other

01:54:17 at a really high level of detail.

01:54:19 And consciousness is required for that.

01:54:21 Well, all those components together

01:54:23 manifest consciousness, right?

01:54:26 So if we make intelligent beings

01:54:28 that we want to interact with that we’re like

01:54:30 wondering what they’re thinking,

01:54:32 looking forward to seeing them,

01:54:35 when they interact with them, they’re interesting,

01:54:37 surprising, you know, fascinating, you know,

01:54:41 they will probably feel conscious like we do

01:54:43 and we’ll perceive them as conscious.

01:54:47 I don’t know why not, but you never know.

01:54:49 Another fun question on this,

01:54:51 because from a computing perspective,

01:54:55 we’re trying to create something

01:54:55 that’s humanlike or superhumanlike.

01:54:59 Let me ask you about aliens.

01:55:01 Aliens.

01:55:02 Do you think there’s intelligent alien civilizations

01:55:08 out there and do you think their technology,

01:55:13 their computing, their AI bots,

01:55:16 their chips are of the same nature as ours?

01:55:21 Yeah, I’ve got no idea.

01:55:23 I mean, if there’s lots of aliens out there

01:55:25 that have been awfully quiet,

01:55:27 you know, there’s speculation about why.

01:55:29 There seems to be more than enough planets out there.

01:55:34 There’s a lot.

01:55:37 There’s intelligent life on this planet

01:55:38 that seems quite different, you know,

01:55:40 like dolphins seem like plausibly understandable,

01:55:44 octopuses don’t seem understandable at all.

01:55:47 If they lived longer than a year,

01:55:48 maybe they would be running the planet.

01:55:50 They seem really smart.

01:55:52 And their neural architecture

01:55:54 is completely different than ours.

01:55:56 Now, who knows how they perceive things.

01:55:58 I mean, that’s the question is for us intelligent beings,

01:56:01 we might not be able to perceive other kinds of intelligence

01:56:03 if they become sufficiently different than us.

01:56:05 Yeah, like we live in the current constrained world,

01:56:08 you know, it’s three dimensional geometry

01:56:10 and the geometry defines a certain amount of physics.

01:56:14 And, you know, there’s like how time works seems to work.

01:56:18 There’s so many things that seem like

01:56:21 a whole bunch of the input parameters to the, you know,

01:56:23 another conscious being are the same.

01:56:25 Yes, like if it’s biological,

01:56:28 biological things seem to be

01:56:30 in a relatively narrow temperature range, right?

01:56:32 Because, you know, organics aren’t stable,

01:56:35 too cold or too hot.

01:56:37 Now, so if you specify the list of things that input to that,

01:56:45 but as soon as we make really smart, you know, beings

01:56:49 and they go solve about how to think

01:56:51 about a billion numbers at the same time

01:56:52 and how to think in end dimensions.

01:56:56 There’s a funny science fiction book

01:56:57 where all the society had uploaded into this matrix.

01:57:01 And at some point, some of the beings in the matrix thought,

01:57:05 I wonder if there’s intelligent life out there.

01:57:07 So they had to do a whole bunch of work to figure out

01:57:09 like how to make a physical thing

01:57:12 because their matrix was self sustaining

01:57:15 and they made a little spaceship

01:57:16 and they traveled to another planet when they got there,

01:57:18 there was like life running around,

01:57:20 but there was no intelligent life.

01:57:22 And then they figured out that there was these huge,

01:57:26 you know, organic matrix all over the planet

01:57:28 inside there where intelligent beings

01:57:30 had uploaded themselves into that matrix.

01:57:34 So everywhere intelligent life was,

01:57:38 soon as it got smart, it upleveled itself

01:57:42 into something way more interesting than 3D geometry.

01:57:45 Yeah, it escaped whatever this,

01:57:47 not escaped, uplevel is better.

01:57:49 The essence of what we think of as an intelligent being,

01:57:53 I tend to like the thought experiment of the organism,

01:57:58 like humans aren’t the organisms.

01:58:00 I like the notion of like Richard Dawkins and memes

01:58:03 that ideas themselves are the organisms,

01:58:07 like that are just using our minds to evolve.

01:58:11 So like we’re just like meat receptacles

01:58:15 for ideas to breed and multiply and so on.

01:58:18 And maybe those are the aliens.

01:58:20 Yeah, so Jordan Peterson has a line that says,

01:58:26 you know, you think you have ideas, but ideas have you.

01:58:29 Yeah, good line.

01:58:30 Which, and then we know about the phenomenon of groupthink

01:58:34 and there’s so many things that constrain us.

01:58:37 But I think you can examine all that

01:58:39 and not be completely owned by the ideas

01:58:43 and completely sucked into groupthink.

01:58:46 And part of your responsibility as a human

01:58:49 is to escape that kind of phenomenon,

01:58:51 which isn’t, it’s one of the creative tension things again,

01:58:55 you’re constructed by it, but you can still observe it

01:58:59 and you can think about it and you can make choices

01:59:01 about to some level, how constrained you are by it.

01:59:06 And it’s useful to do that.

01:59:09 And, but at the same time, and it could be by doing that,

01:59:17 you know, the group and society you’re part of

01:59:21 becomes collectively even more interesting.

01:59:24 So, you know, so the outside observer will think,

01:59:27 wow, you know, all these Lexus running around

01:59:30 with all these really independent ideas

01:59:31 have created something even more interesting

01:59:33 in the aggregate.

01:59:35 So, I don’t know, those are lenses to look at the situation

01:59:41 that’ll give you some inspiration,

01:59:43 but I don’t think they’re constrained.

01:59:45 Right.

01:59:46 As a small little quirk of history,

01:59:49 it seems like you’re related to Jordan Peterson,

01:59:53 like you mentioned.

01:59:54 He’s going through some rough stuff now.

01:59:57 Is there some comment you can make

01:59:59 about the roughness of the human journey, the ups and downs?

02:00:04 Well, I became an expert in benzo withdrawal,

02:00:10 like, which is, you take benzodiazepines,

02:00:13 and at some point they interact with GABA circuits,

02:00:18 you know, to reduce anxiety and do a hundred other things.

02:00:21 Like there’s actually no known list of everything they do

02:00:25 because they interact with so many parts of your body.

02:00:28 And then once you’re on them, you habituate to them

02:00:30 and you have a dependency.

02:00:32 It’s not like a drug dependency

02:00:34 where you’re trying to get high.

02:00:35 It’s a metabolic dependency.

02:00:38 And then if you discontinue them,

02:00:42 there’s a funny thing called kindling,

02:00:45 which is if you stop them and then go,

02:00:47 you know, you’ll have a horrible withdrawal symptoms.

02:00:49 And if you go back on them at the same level,

02:00:51 you won’t be stable.

02:00:53 And that unfortunately happened to him.

02:00:55 Because it’s so deeply integrated

02:00:57 into all the kinds of systems in the body.

02:00:58 It literally changes the size and numbers

02:01:00 of neurotransmitter sites in your brain.

02:01:03 So there’s a process called the Ashton protocol

02:01:07 where you taper it down slowly over two years

02:01:10 and people who go through that go through unbelievable hell.

02:01:13 And what Jordan went through seemed to be worse

02:01:15 because on advice of doctors, you know,

02:01:18 we’ll stop taking these and take this.

02:01:20 It was a disaster.

02:01:21 And he got some, yeah, it was pretty tough.

02:01:26 He seems to be doing quite a bit better intellectually.

02:01:29 You can see his brain clicking back together.

02:01:32 I spent a lot of time with him.

02:01:32 I’ve never seen anybody suffer so much.

02:01:34 Well, his brain is also like this powerhouse, right?

02:01:37 So I wonder, does a brain that’s able to think deeply

02:01:42 about the world suffer more through these kinds

02:01:44 of withdrawals, like?

02:01:46 I don’t know.

02:01:47 I’ve watched videos of people going through withdrawal.

02:01:49 They all seem to suffer unbelievably.

02:01:54 And, you know, my heart goes out to everybody.

02:01:57 And there’s some funny math about this.

02:01:59 Some doctor said, as best he can tell, you know,

02:02:01 there’s the standard recommendations.

02:02:03 Don’t take them for more than a month

02:02:04 and then taper over a couple of weeks.

02:02:07 Many doctors prescribe them endlessly,

02:02:09 which is against the protocol, but it’s common, right?

02:02:13 And then something like 75% of people, when they taper,

02:02:17 it’s, you know, half the people have difficulty,

02:02:19 but 75% get off okay.

02:02:22 20% have severe difficulty

02:02:24 and 5% have life threatening difficulty.

02:02:27 And if you’re one of those, it’s really bad.

02:02:29 And the stories that people have on this

02:02:31 is heartbreaking and tough.

02:02:34 So you put some of the fault on the doctors.

02:02:36 Do they just not know what the hell they’re doing?

02:02:38 No, no, it’s hard to say.

02:02:40 It’s one of those commonly prescribed things.

02:02:43 Like one doctor said, what happens is,

02:02:46 if you’re prescribed them for a reason

02:02:47 and then you have a hard time getting off,

02:02:49 the protocol basically says you’re either crazy

02:02:52 or dependent and you get kind of pushed

02:02:55 into a different treatment regime.

02:02:58 You’re a drug addict or a psychiatric patient.

02:03:01 And so like one doctor said, you know,

02:03:04 I prescribed them for 10 years thinking

02:03:05 I was helping my patients

02:03:06 and I realized I was really harming them.

02:03:09 And you know, the awareness of that is slowly coming up.

02:03:14 The fact that they’re casually prescribed to people

02:03:18 is horrible and it’s bloody scary.

02:03:23 And some people are stable on them,

02:03:25 but they’re on them for life.

02:03:26 Like once you, you know, it’s another one of those drugs.

02:03:29 But benzos long term have real impacts on your personality.

02:03:32 People talk about the benzo bubble

02:03:34 where you get disassociated from reality

02:03:36 and your friends a little bit.

02:03:38 It’s really terrible.

02:03:40 The mind is terrifying.

02:03:41 We were talking about how the infinite possibility of fun,

02:03:45 but like it’s the infinite possibility of suffering too,

02:03:48 which is one of the dangers of like expansion

02:03:52 of the human mind.

02:03:53 It’s like, I wonder if all the possible experiences

02:03:58 that an intelligent computer can have,

02:04:01 is it mostly fun or is it mostly suffering?

02:04:05 So like if you brute force expand the set of possibilities,

02:04:11 like are you going to run into some trouble

02:04:13 in terms of like torture and suffering and so on?

02:04:16 Maybe our human brain is just protecting us

02:04:18 from much more possible pain and suffering.

02:04:22 Maybe the space of pain is like much larger

02:04:25 than we could possibly imagine.

02:04:27 And that.

02:04:28 The world’s in a balance.

02:04:30 You know, all the literature on religion and stuff is,

02:04:34 you know, the struggle between good and evil

02:04:36 is balanced, very finely tuned,

02:04:39 for reasons that are complicated.

02:04:41 But that’s a long philosophical conversation.

02:04:44 Speaking of balance that’s complicated,

02:04:46 I wonder because we’re living through

02:04:48 one of the more important moments in human history

02:04:51 with this particular virus.

02:04:53 It seems like pandemics have at least the ability

02:04:56 to kill off most of the human population at their worst.

02:05:03 And there’s just fascinating

02:05:04 because there’s so many viruses in this world.

02:05:06 There’s so many, I mean, viruses basically run the world

02:05:08 in the sense that they’ve been around very long time.

02:05:12 They’re everywhere.

02:05:13 They seem to be extremely powerful

02:05:15 in the distributed kind of way.

02:05:17 But at the same time, they’re not intelligent

02:05:19 and they’re not even living.

02:05:21 Do you have like high level thoughts about this virus

02:05:23 that like in terms of you being fascinated or terrified

02:05:28 or somewhere in between?

02:05:30 So I believe in frameworks, right?

02:05:32 So like one of them is evolution.

02:05:36 Like we’re evolved creatures, right?

02:05:37 Yes.

02:05:38 And one of the things about evolution

02:05:40 is it’s hyper competitive.

02:05:42 And it’s not competitive out of a sense of evil.

02:05:44 It’s competitive as a sense of there’s endless variation

02:05:47 and variations that work better win.

02:05:50 And then over time, there’s so many levels

02:05:52 of that competition.

02:05:55 Like multicellular life partly exists

02:05:57 because of the competition

02:06:01 between different kinds of life forms.

02:06:04 And we know sex partly exists to scramble our genes

02:06:06 so that we have genetic variation

02:06:09 against the invasion of the bacteria and the viruses.

02:06:14 And it’s endless.

02:06:16 Like I read some funny statistic,

02:06:18 like the density of viruses and bacteria in the ocean

02:06:20 is really high.

02:06:22 And one third of the bacteria die every day

02:06:23 because a virus is invading them.

02:06:26 Like one third of them.

02:06:27 Wow.

02:06:29 Like I don’t know if that number is true,

02:06:31 but it was like the amount of competition

02:06:34 and what’s going on is stunning.

02:06:37 And there’s a theory as we age,

02:06:38 we slowly accumulate bacterias and viruses

02:06:41 and as our immune system kind of goes down,

02:06:45 that’s what slowly kills us.

02:06:47 It just feels so peaceful from a human perspective

02:06:50 when we sit back and are able

02:06:51 to have a relaxed conversation.

02:06:54 And there’s wars going on out there.

02:06:56 Like right now, you’re harboring how many bacteria?

02:07:00 And the ones, many of them are parasites on you

02:07:04 and some of them are helpful

02:07:06 and some of them are modifying your behavior

02:07:07 and some of them are, it’s just really wild.

02:07:12 But this particular manifestation is unusual

02:07:16 in the demographic, how it hit

02:07:18 and the political response that it engendered

02:07:21 and the healthcare response it engendered

02:07:23 and the technology it engendered, it’s kind of wild.

02:07:27 Yeah, the communication on Twitter that it led to,

02:07:30 all that kind of stuff, at every single level, yeah.

02:07:32 But what usually kills life,

02:07:34 the big extinctions are caused by meteors and volcanoes.

02:07:39 That’s the one you’re worried about

02:07:40 as opposed to human created bombs that we launch.

02:07:44 Solar flares are another good one.

02:07:46 Occasionally, solar flares hit the planet.

02:07:48 So it’s nature.

02:07:51 Yeah, it’s all pretty wild.

02:07:53 On another historic moment, this is perhaps outside

02:07:57 but perhaps within your space of frameworks

02:08:02 that you think about that just happened,

02:08:04 I guess a couple of weeks ago is,

02:08:06 I don’t know if you’re paying attention at all,

02:08:08 is the GameStop and Wall Street bets.

02:08:12 It’s super fun.

02:08:14 So it’s really fascinating.

02:08:16 There’s kind of a theme to this conversation today

02:08:19 because it’s like neural networks,

02:08:21 it’s cool how there’s a large number of people

02:08:25 in a distributed way, almost having a kind of fun,

02:08:30 were able to take on the powerful elites,

02:08:34 elite hedge funds, centralized powers and overpower them.

02:08:39 Do you have thoughts on this whole saga?

02:08:43 I don’t know enough about finance,

02:08:45 but it was like the Elon, Robinhood guy when they talked.

02:08:49 Yeah, what’d you think about that?

02:08:51 Well, Robinhood guy didn’t know

02:08:52 how the finance system worked.

02:08:54 That was clear, right?

02:08:55 He was treating like the people

02:08:57 who settled the transactions as a black box.

02:09:00 And suddenly somebody called him up and say,

02:09:01 hey, black box calling you, your transaction volume

02:09:04 means you need to put out $3 billion right now.

02:09:06 And he’s like, I don’t have $3 billion.

02:09:08 Like I don’t even make any money on these trades.

02:09:10 Why do I owe $3 billion? Well, you’re sponsoring the trades.

02:09:13 So there was a set of abstractions

02:09:15 that I don’t think either side really understood.

02:09:19 Like this happens in chip design.

02:09:21 Like you buy wafers from TSMC or Samsung or Intel,

02:09:25 and they say it works like this

02:09:27 and you do your design based on that.

02:09:29 And then chip comes back and doesn’t work.

02:09:31 And then suddenly you started having to open the black boxes.

02:09:34 Do the transistors really work like they said?

02:09:36 What’s the real issue?

02:09:37 So there’s a whole set of things

02:09:43 that created this opportunity and somebody spotted it.

02:09:46 Now, people spot these kinds of opportunities all the time.

02:09:49 So there’s been flash crashes,

02:09:51 and short squeezes are fairly regular.

02:09:55 Every CEO I know hates the shorts

02:09:58 because they’re trying to manipulate their stock

02:10:01 in a way that they make money

02:10:03 and deprive value from both the company

02:10:07 and the investors.

02:10:08 So the fact that some of these stocks were so short,

02:10:13 it’s hilarious that this hasn’t happened before.

02:10:17 I don’t know why, and I don’t actually know why

02:10:19 some serious hedge funds didn’t do it to other hedge funds.

02:10:23 And some of the hedge funds

02:10:24 actually made a lot of money on this.

02:10:26 So my guess is we know 5% of what really happened

02:10:32 and that a lot of the players don’t know what happened.

02:10:34 And the people who probably made the most money

02:10:37 aren’t the people that they’re talking about.

02:10:39 That’s.

02:10:41 Do you think there was something,

02:10:42 I mean, this is the cool kind of Elon,

02:10:47 you’re the same kind of conversationalist,

02:10:50 which is like first principles questions of like,

02:10:53 what the hell happened?

02:10:56 Just very basic questions of like,

02:10:57 was there something shady going on?

02:11:00 What, who are the parties involved?

02:11:03 It’s the basic questions everybody wants to know about.

02:11:06 Yeah, so like we’re in a very hyper competitive world,

02:11:10 but transactions like buying and selling stock

02:11:12 is a trust event.

02:11:13 I trust the company, represented themselves properly.

02:11:16 I bought the stock because I think it’s gonna go up.

02:11:19 I trust that the regulations are solid.

02:11:22 Now, inside of that, there’s all kinds of places

02:11:26 where humans over trust and this exposed,

02:11:31 let’s say some weak points in the system.

02:11:34 I don’t know if it’s gonna get corrected.

02:11:37 I don’t know if we have close to the real story.

02:11:41 Yeah, my suspicion is we don’t.

02:11:44 And listening to that guy, he was like a little wide eyed

02:11:47 about it, and then he did this and then he did that.

02:11:49 And I was like, I think you should know more

02:11:51 about your business than that.

02:11:54 But again, there’s many businesses

02:11:56 when like this layer is really stable,

02:11:58 you stop paying attention to it.

02:12:00 You pay attention to the stuff that’s bugging you or new.

02:12:04 You don’t pay attention to the stuff

02:12:05 that just seems to work all the time.

02:12:07 You just, the sky’s blue every day in California.

02:12:11 And every once in a while it rains

02:12:12 and everybody’s like, what do we do?

02:12:15 Somebody go bring in the lawn furniture.

02:12:17 It’s getting wet.

02:12:18 You don’t know why it’s getting wet.

02:12:19 Yeah, it doesn’t always work.

02:12:20 It was blue for like a hundred days and now it’s, so.

02:12:24 But part of the problem here with Vlad,

02:12:27 the CEO of Robinhood is the scaling

02:12:29 that we’ve been talking about is there’s a lot

02:12:32 of unexpected things that happen with the scaling

02:12:36 and you have to be, I think the scaling forces you

02:12:39 to then return to the fundamentals.

02:12:41 Well, it’s interesting because when you buy and sell stocks,

02:12:44 the scaling is, the stocks don’t only move

02:12:46 in a certain range and if you buy a stock,

02:12:48 you can only lose that amount of money.

02:12:50 On the short market, you can lose a lot more

02:12:52 than you can benefit.

02:12:53 Like it has a weird cost function

02:12:57 or whatever the right word for that is.
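[Editor’s note: the asymmetric payoff Jim is gesturing at can be sketched in a few lines of Python. This is an illustrative toy, not anything from the conversation or financial advice; the function names are made up for the example.]

```python
# Toy sketch of the payoff asymmetry: a long position's loss is
# capped at what you paid, while a short position's loss grows
# without bound as the price rises.

def long_pnl(entry_price: float, current_price: float, shares: int = 1) -> float:
    """Profit/loss of buying shares at entry_price."""
    return (current_price - entry_price) * shares

def short_pnl(entry_price: float, current_price: float, shares: int = 1) -> float:
    """Profit/loss of shorting shares at entry_price."""
    return (entry_price - current_price) * shares

# Long: worst case is the stock going to zero, losing the entry price.
assert long_pnl(100, 0) == -100
# Short: the gain is capped at the entry price...
assert short_pnl(100, 0) == 100
# ...but the loss is unbounded as the price squeezes upward.
assert short_pnl(100, 500) == -400
```

The short seller’s unbounded downside is why being undercapitalized “outside a certain range,” as described next, is so dangerous.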

02:12:59 So he was trading in a market

02:13:01 where he wasn’t actually capitalized for the downside.

02:13:04 If it got outside a certain range.

02:13:07 Now, whether something nefarious has happened,

02:13:09 I have no idea, but at some point,

02:13:12 the financial risk to both him and his customers

02:13:16 was way outside of his financial capacity

02:13:19 and his understanding of how the system worked was clearly weak,

02:13:23 or he didn’t represent himself well.

02:13:25 I don’t know the person and when I listened to him,

02:13:28 it could have been the surprise question was like,

02:13:30 and then these guys called and it sounded like

02:13:34 he was treating stuff as a black box.

02:13:36 Maybe he shouldn’t have, but maybe he has a whole pile

02:13:38 of experts somewhere else and it was going on.

02:13:40 I don’t know.

02:13:41 Yeah, I mean, this is one of the qualities

02:13:45 of a good leader is under fire, you have to perform.

02:13:49 And that means to think clearly and to speak clearly.

02:13:53 And he dropped the ball on those things,

02:13:55 because you have to understand the problem quickly,

02:13:58 learn and understand the problem at this basic level.

02:14:03 What the hell happened?

02:14:05 And my guess is, at some level it was amateurs trading

02:14:09 against experts slash insiders slash people

02:14:12 with special information.

02:14:14 Outsiders versus insiders.

02:14:16 Yeah, and the insiders, my guess is the next time

02:14:20 this happens, we’ll make money on it.

02:14:22 The insiders always win?

02:14:25 Well, they have more tools and more incentive.

02:14:27 I mean, this always happens.

02:14:28 Like the outsiders are doing this for fun.

02:14:30 The insiders are doing this 24 seven.

02:14:33 But there’s numbers in the outsiders.

02:14:35 This is the interesting thing is it could be

02:14:37 a new chapter. There’s numbers

02:14:38 on the insiders too.

02:14:41 Different kind of numbers, yeah.

02:14:44 But this could be a new era because, I don’t know,

02:14:46 at least I didn’t expect that a bunch of Redditors could,

02:14:49 there’s millions of people who can get together.

02:14:51 It was a surprise attack.

02:14:52 The next one will be a surprise.

02:14:54 But don’t you think the crowd, the people are planning

02:14:57 the next attack?

02:14:59 We’ll see.

02:15:00 But it has to be a surprise.

02:15:01 It can’t be the same game.

02:15:04 And so the insiders.

02:15:05 It’s like, it could be there’s a very large number

02:15:07 of games to play and they can be agile about it.

02:15:10 I don’t know.

02:15:11 I’m not an expert.

02:15:12 Right, that’s a good question.

02:15:13 The space of games, how restricted is it?

02:15:18 Yeah, and the system is so complicated

02:15:20 it could be relatively unrestricted.

02:15:22 And also during the last couple of financial crashes,

02:15:27 what set it off was sets of derivative events

02:15:30 where Nassim Taleb’s thing is they’re trying

02:15:35 to lower volatility in the short run,

02:15:39 which creates tail events.

02:15:41 And the system’s always evolved towards that

02:15:43 and then they always crash.

02:15:45 The S curve is the start low, ramp, plateau, crash.

02:15:50 It’s 100% effective.

02:15:54 In the long run.

02:15:55 Let me ask you some advice to put on your profound hat.

02:16:01 There’s a bunch of young folks who listen to this thing

02:16:04 for no good reason whatsoever.

02:16:07 Undergraduate students, maybe high school students,

02:16:10 maybe just young folks, a young at heart

02:16:13 looking for the next steps to take in life.

02:16:16 What advice would you give to a young person today

02:16:19 about life, maybe career, but also life in general?

02:16:23 Get good at some stuff.

02:16:26 Well, get to know yourself, right?

02:16:28 Get good at something that you’re actually interested in.

02:16:30 You have to love what you’re doing to get good at it.

02:16:33 You really gotta find that.

02:16:34 Don’t waste all your time doing stuff

02:16:35 that’s just boring or bland or numbing, right?

02:16:40 Don’t let old people screw you.

02:16:42 Well, people get talked into doing all kinds of shit

02:16:46 and racking up huge student debts

02:16:49 and there’s so much crap going on.

02:16:52 And then it drains your time and drains your energy.

02:16:54 The Eric Weinstein thesis that the older generation

02:16:58 won’t let go and they’re trapping all the young people.

02:17:01 Do you think there’s some truth to that?

02:17:02 Yeah, sure.

02:17:04 Just because you’re old doesn’t mean you stop thinking.

02:17:06 I know lots of really original old people.

02:17:10 I’m an old person.

02:17:14 But you have to be conscious about it.

02:17:15 You can fall into the ruts and then do that.

02:17:18 I mean, when I hear young people spouting opinions

02:17:22 that sound like they come from Fox News or CNN,

02:17:24 I think they’ve been captured by groupthink and memes.

02:17:27 They’re supposed to think on their own.

02:17:29 So if you find yourself repeating

02:17:31 what everybody else is saying,

02:17:33 you’re not gonna have a good life.

02:17:36 Like, that’s not how the world works.

02:17:38 It seems safe, but it puts you at great jeopardy

02:17:41 for being boring or unhappy.

02:17:45 How long did it take you to find the thing

02:17:47 that you have fun with?

02:17:50 Oh, I don’t know.

02:17:52 I’ve been a fun person since I was pretty little.

02:17:54 So everything.

02:17:55 I’ve gone through a couple periods of depression in my life.

02:17:58 For a good reason or for a reason

02:18:00 that doesn’t make any sense?

02:18:02 Yeah, like some things are hard.

02:18:05 Like you go through mental transitions in high school.

02:18:08 I was really depressed for a year

02:18:10 and I think I had my first midlife crisis at 26.

02:18:15 I kind of thought, is this all there is?

02:18:16 Like I was working at a job that I loved,

02:18:20 but I was going to work and all my time was consumed.

02:18:23 What’s the escape out of that depression?

02:18:25 What’s the answer to is this all there is?

02:18:29 Well, a friend of mine, I asked him,

02:18:31 because he was working his ass off,

02:18:32 I said, what’s your work life balance?

02:18:34 Like there’s work, friends, family, personal time.

02:18:39 Are you balancing any of that?

02:18:41 And he said, work 80%, family 20%.

02:18:43 And I tried to find some time to sleep.

02:18:47 Like there’s no personal time.

02:18:49 There’s no passionate time.

02:18:51 Like the young people are often passionate about work.

02:18:54 So I was certainly like that.

02:18:56 But you need to have some space in your life

02:18:59 for different things.

02:19:01 And that creates, that makes you resistant

02:19:05 to the whole, the deep dips into depression kind of thing.

02:19:11 Yeah, well, you have to get to know yourself too.

02:19:13 Meditation helps.

02:19:14 Some physical, something physically intense helps.

02:19:18 Like the weird places your mind goes kind of thing.

02:19:21 Like, and why does it happen?

02:19:23 Why do you do what you do?

02:19:24 Like triggers, like the things that cause your mind

02:19:27 to go to different places kind of thing,

02:19:29 or like events like.

02:19:32 Your upbringing for better or worse,

02:19:33 whether your parents are great people or not,

02:19:35 you come into adulthood with all kinds of emotional burdens.

02:19:42 And you can see some people are so bloody stiff

02:19:45 and restrained, and they think the world’s

02:19:47 fundamentally negative, like you maybe.

02:19:50 You have unexplored territory.

02:19:53 Yeah.

02:19:53 Or you’re afraid of something.

02:19:56 Definitely afraid of quite a few things.

02:19:58 Then you gotta go face them.

02:20:00 Like what’s the worst thing that can happen?

02:20:03 You’re gonna die, right?

02:20:05 Like that’s inevitable.

02:20:06 You might as well get over that.

02:20:07 Like 100%, that’s right.

02:20:09 Like people are worried about the virus,

02:20:11 but you know, the human condition is pretty deadly.

02:20:14 There’s something about embarrassment

02:20:16 that’s, I’ve competed a lot in my life,

02:20:18 and I think the, if I’m to introspect it,

02:20:21 the thing I’m most afraid of is being like humiliated,

02:20:26 I think.

02:20:26 Yeah, nobody cares about that.

02:20:28 Like you’re the only person on the planet

02:20:29 that cares about you being humiliated.

02:20:31 Exactly.

02:20:32 It’s like a really useless thought.

02:20:34 It is.

02:20:35 It’s like, you’re all humiliated.

02:20:39 Something happened in a room full of people,

02:20:41 and they walk out, and they didn’t think about it

02:20:42 one more second.

02:20:43 Or maybe somebody told a funny story to somebody else.

02:20:45 And then it dissipates throughout, yeah.

02:20:48 Yeah.

02:20:49 No, I know it too.

02:20:50 I mean, I’ve been really embarrassed about shit

02:20:53 that nobody cared about but myself.

02:20:55 Yeah.

02:20:56 It’s a funny thing.

02:20:57 So the worst thing ultimately is just.

02:20:59 Yeah, but that’s a cage,

02:21:01 and then you have to get out of it.

02:21:02 Yeah.

02:21:02 Like once you, here’s the thing.

02:21:03 Once you find something like that,

02:21:05 you have to be determined to break it.

02:21:09 Because otherwise you’ll just,

02:21:10 so you accumulate that kind of junk,

02:21:11 and then you die as a mess.

02:21:15 So the goal, I guess it’s like a cage within a cage.

02:21:18 I guess the goal is to die in the biggest possible cage.

02:21:21 Well, ideally you’d have no cage.

02:21:25 People do get enlightened.

02:21:26 I’ve met a few.

02:21:27 It’s great.

02:21:28 You’ve found a few?

02:21:29 There’s a few out there?

02:21:30 I don’t know.

02:21:31 Of course there are.

02:21:32 I don’t know.

02:21:33 Either that or it’s a great sales pitch.

02:21:35 There’s enlightened people writing books

02:21:37 and doing all kinds of stuff.

02:21:38 It’s a good way to sell a book.

02:21:39 I’ll give you that.

02:21:40 You’ve never met somebody you just thought,

02:21:42 they just kill me.

02:21:43 Like they just, like mental clarity, humor.

02:21:47 No, 100%, but I just feel like

02:21:49 they’re living in a bigger cage.

02:21:50 They have their own.

02:21:52 You still think there’s a cage?

02:21:53 There’s still a cage.

02:21:54 You secretly suspect there’s always a cage.

02:21:57 There’s nothing outside the universe.

02:21:59 There’s nothing outside the cage.

02:22:02 You work in a bunch of companies,

02:22:10 you lead a lot of amazing teams.

02:22:15 I’m not sure if you’ve ever been

02:22:16 like in the early stages of a startup,

02:22:19 but do you have advice for somebody

02:22:24 that wants to do a startup or build a company,

02:22:28 like build a strong team of engineers that are passionate

02:22:31 and just want to solve a big problem?

02:22:35 Like, is there advice, more specifically, on that point?

02:22:39 Well, you have to be really good at stuff.

02:22:41 If you’re going to lead and build a team,

02:22:43 you better be really interested

02:22:44 in how people work and think.

02:22:46 The people or the solution to the problem.

02:22:49 So there’s two things, right?

02:22:50 One is how people work and the other is the…

02:22:52 Well, actually there’s quite a few successful startups.

02:22:55 It’s pretty clear the founders

02:22:56 don’t know anything about people.

02:22:58 Like the idea was so powerful that it propelled them.

02:23:01 But I suspect somewhere early,

02:23:03 they hired some people who understood people

02:23:06 because people really need a lot of care and feeding

02:23:08 to collaborate and work together

02:23:10 and feel engaged and work hard.

02:23:13 Like startups are all about out producing other people.

02:23:17 Like you’re nimble because you don’t have any legacy.

02:23:19 You don’t have a bunch of people

02:23:22 who are depressed about life just showing up.

02:23:24 So startups have a lot of advantages that way.

02:23:29 Do you like the, Steve Jobs talked about this idea

02:23:32 of A players and B players.

02:23:34 I don’t know if you know this formulation.

02:23:37 Yeah, no.

02:23:39 Organizations that get taken over by B player leaders

02:23:44 often really underperform; they hire C players.

02:23:48 That said, in big organizations,

02:23:50 there’s so much work to do.

02:23:52 And there’s so many people who are happy

02:23:54 to do what the leadership or the big idea people

02:23:57 would consider menial jobs.

02:24:00 And you need a place for them,

02:24:01 but you need an organization that both values and rewards

02:24:05 them but doesn’t let them take over the leadership of it.

02:24:08 Got it.

02:24:09 So you need to have an organization

02:24:11 that’s resistant to that.

02:24:11 But in the early days, the notion with Steve

02:24:16 was that like one B player in a room of A players

02:24:20 will be like destructive to the whole.

02:24:23 I’ve seen that happen.

02:24:24 I don’t know if it’s like always true.

02:24:28 You run into people who are clearly B players

02:24:30 but they think they’re A players

02:24:31 and so they have a loud voice at the table

02:24:33 and they make lots of demands for that.

02:24:35 But there’s other people who are like, I know who I am.

02:24:37 I just wanna work with cool people on cool shit

02:24:39 and just tell me what to do and I’ll go get it done.

02:24:42 So you have to, again, this is like people skills.

02:24:45 What kind of person is it?

02:24:47 I’ve met some really great people I love working with

02:24:51 that weren’t the biggest idea people or the most productive

02:24:53 ever but they show up, they get it done.

02:24:56 They create connection and community that people value.

02:24:59 It’s pretty diverse so I don’t think

02:25:02 there’s a recipe for that.

02:25:05 I gotta ask you about love.

02:25:07 I heard you’re into this now.

02:25:08 Into this love thing?

02:25:09 Yeah, is this, do you think this is your solution

02:25:11 to your depression?

02:25:13 No, I’m just trying to, like you said,

02:25:14 delighting people and occasionally trying to sell a book.

02:25:16 I’m writing a book about love.

02:25:18 You’re writing a book about love?

02:25:18 No, I’m not, I’m not.

02:25:21 I have a friend of mine, he’s gonna,

02:25:25 he said you should really write a book

02:25:27 about your management philosophy.

02:25:29 He said it’d be a short book.

02:25:35 Well, that one was thought out pretty well.

02:25:37 What role do you think love, family, friendship,

02:25:40 all that kind of human stuff play in a successful life?

02:25:44 You’ve been exceptionally successful in the space

02:25:46 of running teams, building cool shit in this world,

02:25:51 creating some amazing things.

02:25:53 What, did love get in the way?

02:25:54 Did love help the family get in the way?

02:25:57 Did family help friendship?

02:25:59 You want the engineer’s answer?

02:26:02 Please.

02:26:03 But first, love is functional, right?

02:26:05 It’s functional in what way?

02:26:07 So we habituate ourselves to the environment.

02:26:11 And actually, Jordan Peterson told me this line.

02:26:13 So you go through life and you just get used to everything,

02:26:16 except for the things you love.

02:26:17 They remain new.

02:26:20 Like, this is really useful for, you know,

02:26:22 like other people’s children and dogs and trees.

02:26:26 You just don’t pay that much attention to them.

02:26:27 Your own kids, you monitor them really closely.

02:26:31 Like, and if they go off a little bit,

02:26:32 because you love them, if you’re smart,

02:26:35 if you’re gonna be a successful parent,

02:26:37 you notice it right away.

02:26:38 You don’t habituate to just things you love.

02:26:44 And if you want to be successful at work,

02:26:46 if you don’t love it,

02:26:47 you’re not gonna put the time in; somebody else,

02:26:50 somebody else that loves it, will.

02:26:51 Like, because it’s new and interesting,

02:26:53 and that lets you go to the next level.

02:26:57 So it’s the thing, it’s just a function

02:26:59 that generates newness and novelty

02:27:01 and surprises, you know, all those kinds of things.

02:27:04 It’s really interesting.

02:27:06 There’s people who figured out lots of frameworks for this.

02:27:09 Like, humans seem to go,

02:27:11 in partnership, go through interests.

02:27:13 Like, suddenly somebody’s interesting,

02:27:16 and then you’re infatuated with them,

02:27:18 and then you’re in love with them.

02:27:20 And then you, you know, different people have ideas

02:27:22 about parental love or mature love.

02:27:24 Like, you go through a cycle of that,

02:27:26 which keeps us together,

02:27:27 and it’s super functional for creating families

02:27:30 and creating communities and making you support somebody

02:27:34 despite the fact that you don’t love them.

02:27:36 Like, and it can be really enriching.

02:27:44 You know, now, in the work life balance scheme,

02:27:47 if all you do is work,

02:27:49 you think you may be optimizing your work potential,

02:27:52 but if you don’t love your work

02:27:53 or you don’t have family and friends

02:27:56 and things you care about,

02:27:59 your brain isn’t well balanced.

02:28:02 Like, everybody knows the experience of,

02:28:03 you work on something all week.

02:28:04 You go home, take two days off, and come back in.

02:28:07 The odds of you, working on the thing,

02:28:09 picking up right where you left off are zero.

02:28:12 Your brain refactored it.

02:28:17 But being in love is great.

02:28:19 It’s like changes the color of the light in the room.

02:28:22 It creates a spaciousness that’s different.

02:28:25 It helps you think.

02:28:27 It makes you strong.

02:28:29 Bukowski had this line about love being a fog

02:28:32 that dissipates with the first light of reality

02:28:36 in the morning.

02:28:37 That’s depressing.

02:28:38 I think it’s the other way around.

02:28:39 It lasts.

02:28:40 Well, like you said, it’s a function.

02:28:42 It’s a thing that generates.

02:28:42 It can be the light that actually enlivens your world

02:28:45 and creates the interest and the power and the strength

02:28:49 to go do something.

02:28:51 Well, it’s like, that sounds like,

02:28:54 you know, there’s like physical love, emotional love,

02:28:56 intellectual love, spiritual love, right?

02:28:58 Isn’t it all the same thing, kind of?

02:28:59 Nope.

02:29:01 You should differentiate that.

02:29:02 Maybe that’s your problem.

02:29:04 In your book, you should refine that a little bit.

02:29:06 Is it different chapters?

02:29:07 Yeah, there’s different chapters.

02:29:08 What’s these, aren’t these just different layers

02:29:11 of the same thing, the stack of physical?

02:29:14 People, some people are addicted to physical love

02:29:17 and they have no idea about emotional or intellectual love.

02:29:21 I don’t know if they’re the same things.

02:29:22 I think they’re different.

02:29:23 That’s true.

02:29:24 They could be different.

02:29:25 I guess the ultimate goal is for it to be the same.

02:29:28 Well, if you want something to be bigger and interesting,

02:29:30 you should find all its components and differentiate them,

02:29:32 not clump it together.

02:29:34 Like, people do this all the time.

02:29:36 Yeah, the modularity.

02:29:38 Get your abstraction layers right

02:29:39 and then you have room to breathe.

02:29:41 Well, maybe you can write the foreword to my book

02:29:43 about love.

02:29:44 Or the afterword.

02:29:45 And the after.

02:29:46 You really tried.

02:29:49 I feel like Lex has made a lot of progress in this book.

02:29:53 Well, you have things in your life that you love.

02:29:55 Yeah, yeah.

02:29:57 And they are, you’re right, they’re modular.

02:29:59 It’s quality.

02:30:01 And you can have multiple things with the same person

02:30:04 or the same thing.

02:30:06 But, yeah.

02:30:08 Depending on the moment of the day.

02:30:09 Yeah, there’s, like what Bukowski described

02:30:13 is that moment when you go from being in love

02:30:15 to having a different kind of love.

02:30:17 Yeah.

02:30:18 And that’s a transition.

02:30:19 But when it happens, if you read the owner’s manual

02:30:21 and you believed it, you would have said,

02:30:23 oh, this happened.

02:30:25 It doesn’t mean it’s not love.

02:30:26 It’s a different kind of love.

02:30:27 But maybe there’s something better about that.

02:30:32 As you grow old, all you do is regret how you used to be.

02:30:36 It’s sad.

02:30:38 Right?

02:30:39 You should have learned a lot of things

02:30:40 because like who you can be in your future self

02:30:43 is actually more interesting and possibly delightful

02:30:46 than being a mad kid in love with the next person.

02:30:52 Like, that’s super fun when it happens.

02:30:54 But that’s, you know, 5% of the possibility.

02:30:59 Yeah, that’s right.

02:31:02 There’s a lot more fun to be had in the long lasting stuff.

02:31:05 Yeah, or meaning, you know, if that’s your thing.

02:31:07 Which is a kind of fun.

02:31:09 It’s a deeper kind of fun.

02:31:10 And it’s surprising.

02:31:11 You know, that’s, like the thing I like is surprises.

02:31:15 You know, and you just never know what’s gonna happen.

02:31:19 But you have to look carefully and you have to work at it

02:31:21 and you have to think about it and you know, it’s.

02:31:24 Yeah, you have to see the surprises when they happen, right?

02:31:26 You have to be looking for it.

02:31:28 From the branching perspective, you mentioned regrets.

02:31:33 Do you have regrets about your own trajectory?

02:31:36 Oh yeah, of course.

02:31:38 Yeah, some of it’s painful,

02:31:39 but you wanna hear the painful stuff?

02:31:41 No.

02:31:42 I would say, like in terms of working with people,

02:31:46 when people did stuff I didn’t like,

02:31:48 especially if it was a bit nefarious,

02:31:50 I took it personally and I also felt it was personal

02:31:54 about them.

02:31:56 But a lot of times, like humans are,

02:31:57 you know, most humans are a mess, right?

02:31:59 And then they act out and they do stuff.

02:32:02 And the psychologist I heard a long time ago said,

02:32:06 you tend to think somebody does something to you.

02:32:09 But really what they’re doing is they’re doing

02:32:10 what they’re doing while they’re in front of you.

02:32:13 It’s not that much about you, right?

02:32:16 And as I got more interested in,

02:32:20 you know, when I work with people,

02:32:21 I think about them and probably analyze them

02:32:25 and understand them a little bit.

02:32:26 And then when they do stuff, I’m way less surprised.

02:32:29 And if it’s bad, I’m way less hurt.

02:32:32 And I react way less.

02:32:34 Like I sort of expect everybody’s got their shit.

02:32:37 Yeah, and it’s not about you as much.

02:32:38 It’s not about me that much.

02:32:41 It’s like, you know, you do something

02:32:42 and you think you’re embarrassed, but nobody cares.

02:32:45 Like, and somebody’s really mad at you,

02:32:46 the odds of it being about you.

02:32:49 No, they’re getting mad the way they’re doing that

02:32:51 because of some pattern they learned.

02:32:53 And you know, and maybe you can help them

02:32:55 if you care enough about it.

02:32:56 But, or you could see it coming and step out of the way.

02:33:00 Like, I wish I was way better at that.

02:33:02 I’m a bit of a hothead.

02:33:04 And in support of that.

02:33:06 You said with Steve, that was a feature, not a bug.

02:33:08 Yeah, well, he was using it as the counter force

02:33:11 to orderliness that would crush his work.

02:33:13 Well, you were doing the same.

02:33:15 Yeah, maybe.

02:33:15 I don’t think I, I don’t think my vision was big enough.

02:33:18 It was more like I just got pissed off and did stuff.

02:33:22 I’m sure that’s the, yeah, you’re telling me.

02:33:27 I don’t know if it had the,

02:33:29 it didn’t have the amazing effect

02:33:30 of creating the trillion dollar company.

02:33:32 It was more like I just got pissed off and left

02:33:35 and, or made enemies that I shouldn’t have.

02:33:38 And yeah, it’s hard.

02:33:40 Like, I didn’t really understand politics

02:33:42 until I worked at Apple where, you know,

02:33:44 Steve was a master player of politics

02:33:46 and his staff had to be, or they wouldn’t survive him.

02:33:48 And it was definitely part of the culture.

02:33:51 And then I’ve been in companies where they say

02:33:52 it’s political, but it’s all, you know,

02:33:54 fun and games compared to Apple.

02:33:56 And it’s not that the people at Apple are bad people.

02:34:00 It’s just, they operate politically at a higher level.

02:34:04 You know, it’s not like, oh, somebody said something bad

02:34:06 about somebody, somebody else, which is most politics.

02:34:10 It’s, you know, they had strategies

02:34:13 about accomplishing their goals.

02:34:15 Sometimes, you know, over the dead bodies of their enemies.

02:34:19 You know, with sophistication, yeah,

02:34:23 more Game of Thrones than sophistication

02:34:25 and like a big time factor rather than a, you know.

02:34:29 Wow, that requires a lot of control over your emotions,

02:34:31 I think, to have a bigger strategy in the way you behave.

02:34:35 Yeah, and it’s effective in the sense

02:34:38 that coordinating thousands of people

02:34:40 to do really hard things where many of the people

02:34:44 in there don’t understand themselves,

02:34:45 much less how they’re participating,

02:34:47 creates all kinds of, you know, drama and problems

02:34:52 whose solution, you know, is political in nature.

02:34:55 Like how do you convince people?

02:34:57 How do you leverage them?

02:34:57 How do you motivate them?

02:34:59 How do you get rid of them?

02:35:00 How do you, you know, like there’s so many layers

02:35:02 of that that are interesting.

02:35:04 And even though some of it, let’s say, may be tough,

02:35:08 it’s not evil unless, you know, you use that skill

02:35:13 to evil purposes, which some people obviously do.

02:35:16 But it’s a skill set that operates, you know.

02:35:19 And I wish I’d, you know, I was interested in it,

02:35:22 but I, you know, it was sort of like,

02:35:24 I’m an engineer, I do my thing.

02:35:26 And, you know, there’s times

02:35:28 when I could have had a way bigger impact

02:35:31 if I, you know, knew how to,

02:35:33 if I paid more attention and knew more about that.

02:35:36 Yeah, about the human layer of the stack.

02:35:38 Yeah, that human political power, you know,

02:35:41 expression layer of the stack.

02:35:43 Just complicated.

02:35:44 And there’s lots to know about it.

02:35:45 I mean, people who are good at it are just amazing.

02:35:49 And when they’re good at it,

02:35:50 and let’s say, relatively kind and oriented

02:35:55 in a good direction, you can really feel,

02:35:58 you can get lots of stuff done and coordinate things

02:36:00 that you never thought possible.

02:36:03 But all people like that also have some pretty hard edges

02:36:06 because, you know, it’s a heavy lift.

02:36:09 And I wish I’d spent more time like that when I was younger.

02:36:13 But maybe I wasn’t ready.

02:36:14 You know, I was a wide eyed kid for 30 years.

02:36:17 Still a bit of a kid.

02:36:18 Yeah, I know.

02:36:19 What do you hope your legacy is

02:36:23 when there’s a book like Hitchhiker’s Guide to the Galaxy,

02:36:28 and this is like a one sentence entry about Jim Keller,

02:36:31 like, that guy lived at some point.

02:36:34 There’s not many, you know,

02:36:35 not many people would be remembered.

02:36:37 You’re one of the sparkling little human creatures

02:36:42 that had a big impact on the world.

02:36:44 How do you hope you’ll be remembered?

02:36:46 My daughter was trying to get,

02:36:48 she edited my Wikipedia page

02:36:49 to say that I was a legend and a guru.

02:36:53 But they took it out, so she put it back in.

02:36:55 She’s 15.

02:36:58 I think that was probably the best part of my legacy.

02:37:02 She got her sister, and they were all excited.

02:37:04 They were like trying to put it in the references

02:37:06 because there’s articles with that in the title.

02:37:09 So in the eyes of your kids, you’re a legend.

02:37:13 Well, they’re pretty skeptical

02:37:14 because they know better than that.

02:37:15 They’re like, Dad.

02:37:18 So yeah, that kind of stuff is super fun.

02:37:21 In terms of the big legends stuff, I don’t care.

02:37:24 You don’t care.

02:37:25 I don’t really care.

02:37:26 You’re just an engineer.

02:37:28 Yeah, I’ve been thinking about building a big pyramid.

02:37:32 So I had a debate with a friend

02:37:33 about whether pyramids or craters are cooler.

02:37:36 And he realized that there’s craters everywhere,

02:37:39 but they built a couple of pyramids 5,000 years ago.

02:37:42 And they remember you for a while.

02:37:43 We’re still talking about it.

02:37:45 So I think that would be cool.

02:37:47 Those aren’t easy to build.

02:37:48 Oh, I know.

02:37:50 And they don’t actually know how they built them,

02:37:51 which is great.

02:37:54 It’s either AGI or aliens could be involved.

02:37:58 So I think you’re gonna have to figure out

02:38:01 quite a few more things than just

02:38:03 the basics of civil engineering.

02:38:05 So I guess you hope your legacy is pyramids.

02:38:10 That would be cool.

02:38:12 And my Wikipedia page, you know,

02:38:13 getting updated by my daughter periodically.

02:38:16 Like those two things would pretty much make it.

02:38:18 Jim, it’s a huge honor talking to you again.

02:38:20 I hope we talk many more times in the future.

02:38:22 I can’t wait to see what you do with Tenstorrent.

02:38:26 I can’t wait to use it.

02:38:27 I can’t wait for you to revolutionize

02:38:30 yet another space in computing.

02:38:33 It’s a huge honor to talk to you.

02:38:34 Thanks for talking to me.

02:38:35 This was fun.

02:39:05 See you next time.