Peter Wang: Python and the Source Code of Humans, Computers, and Reality #250

Transcript

00:00:00 The following is a conversation with Peter Wang,

00:00:02 one of the most impactful leaders and developers

00:00:04 in the Python community.

00:00:06 Former physicist, current philosopher,

00:00:09 and someone who many people told me about

00:00:11 and praised as a truly special mind

00:00:14 that I absolutely should talk to.

00:00:16 Recommendations ranging from Travis Oliphant

00:00:19 to Eric Weinstein.

00:00:20 So, here we are.

00:00:23 This is the Lex Fridman podcast.

00:00:25 To support it, please check out our sponsors

00:00:27 in the description.

00:00:28 And now, here’s my conversation with Peter Wang.

00:00:33 You’re one of the most impactful humans

00:00:35 in the Python ecosystem.

00:00:38 So, you’re an engineer, leader of engineers,

00:00:40 but you’re also a philosopher.

00:00:42 So, let’s talk both in this conversation

00:00:45 about programming and philosophy.

00:00:47 First, programming.

00:00:49 What to you is the best

00:00:51 or maybe the most beautiful feature of Python?

00:00:54 Or maybe the thing that made you fall in love

00:00:56 or stay in love with Python?

00:00:59 Well, those are three different things.

00:01:00 What I think is the most beautiful,

00:01:01 what made me fall in love, what made me stay in love.

00:01:03 When I first started using it

00:01:05 was when I was a C++ computer graphics performance nerd.

00:01:10 In the 90s?

00:01:10 Yeah, in the late 90s.

00:01:12 And that was my first job out of college.

00:01:15 And we kept trying to do more and more abstract

00:01:18 and higher order programming in C++,

00:01:20 which at the time was quite difficult.

00:01:23 With templates, the compiler support wasn’t great, et cetera.

00:01:26 So, when I started playing around with Python,

00:01:28 that was my first time encountering

00:01:30 really first class support for types, for functions,

00:01:33 and things like that.

00:01:34 And it felt so incredibly expressive.

00:01:37 So, that was what kind of made me fall in love

00:01:39 with it a little bit.

00:01:39 And also, once you spend a lot of time

00:01:42 in a C++ dev environment,

00:01:44 the ability to just whip something together

00:01:46 that basically runs and works the first time is amazing.

00:01:49 So, really productive scripting language.

00:01:51 I mean, I knew Perl, I knew Bash, I was decent at both.

00:01:55 But Python just made everything,

00:01:57 it made the whole world accessible.

00:01:59 I could script this and that and the other,

00:02:01 network things, little hard drive utilities.

00:02:04 I could write all of these things

00:02:05 in the space of an afternoon.

00:02:06 And that was really, really cool.

00:02:07 So, that’s what made me fall in love.
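
For flavor, here is a minimal sketch of the kind of afternoon script being described, using only the standard library; the specific task (a "largest files on disk" report, one of those little hard drive utilities) is invented for illustration:

```python
# A hypothetical "little hard drive utility": report the largest files
# under a directory. The task and threshold are illustrative only.
import os
import sys

def largest_files(root, limit=10):
    sizes = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files we can't stat
    return sorted(sizes, reverse=True)[:limit]

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for size, path in largest_files(root):
        print(f"{size:>12}  {path}")
```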

00:02:08 Is there something specific you could put your finger on

00:02:11 that you’re not programming in Perl today?

00:02:14 Like, why Python for scripting?

00:02:17 I think there’s not a specific thing

00:02:19 as much as the design motif of both the creator

00:02:22 of the language and the core group of people

00:02:25 that built the standard library around him.

00:02:28 There was definitely, there was a taste to it.

00:02:32 I mean, Steve Jobs used that term

00:02:34 in somewhat of an arrogant way,

00:02:35 but I think it’s a real thing,

00:02:37 that it was designed to fit.

00:02:39 A friend of mine actually expressed this really well.

00:02:40 He said, Python just fits in my head.

00:02:42 And there’s nothing better to say than that.

00:02:45 Now, people might argue modern Python,

00:02:47 there’s a lot more complexity,

00:02:49 but certainly version 1.5.2,

00:02:51 which I think was my first version,

00:02:53 fit in my head very easily.

00:02:54 So, that’s what made me fall in love with it.

00:02:56 Okay, so the most beautiful feature of Python

00:03:01 that made you stay in love.

00:03:03 It’s like over the years, what has like,

00:03:06 you do a double take and you return to often

00:03:09 as a thing that just brings you a smile.

00:03:11 I really still like the ability to play with meta classes

00:03:17 and express higher order of things.

00:03:19 When I have to create some new object model

00:03:22 to model something, right?

00:03:23 It’s easy for me,

00:03:24 cause I’m pretty expert as a Python programmer.

00:03:27 I can easily put all sorts of lovely things together

00:03:29 and use properties and decorators and other kinds of things

00:03:32 and create something that feels very nice.
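
A rough sketch of the kind of object-model machinery being described, combining a metaclass, a property, and a decorator; all class and function names here are invented for illustration:

```python
# A toy object model: a metaclass that auto-registers subclasses,
# plus a read-only property and a method decorator.
class Registry(type):
    models = {}
    def __new__(mcls, name, bases, namespace):
        cls = super().__new__(mcls, name, bases, namespace)
        if bases:  # don't register the abstract base itself
            Registry.models[name] = cls
        return cls

def logged(fn):
    """Decorator that traces calls to a method."""
    def wrapper(*args, **kwargs):
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

class Model(metaclass=Registry):
    pass

class Sensor(Model):
    def __init__(self, raw):
        self._raw = raw

    @property
    def value(self):
        return self._raw * 0.5  # a derived, read-only view of the raw data

    @logged
    def reset(self):
        self._raw = 0

s = Sensor(42)
print(Registry.models)  # {'Sensor': <class '__main__.Sensor'>}
print(s.value)          # 21.0
s.reset()               # prints "calling reset"
```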

00:03:34 So, that to me, I would say that’s tied

00:03:37 with the NumPy and vectorization capabilities.

00:03:40 I love thinking in terms of the matrices and the vectors

00:03:43 and these kind of data structures.

00:03:46 So, I would say those two are kind of tied for me.

00:03:49 So, the elegance of the NumPy data structure,

00:03:52 like slicing through the different multi dimensional.

00:03:54 Yeah, there’s just enough things there.

00:03:56 It’s like a very, it’s a very simple, comfortable tool.

00:04:00 Just, it’s easy to reason about what it does

00:04:02 when you don’t stray too far afield.
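
A minimal sketch of the slicing and vectorized thinking being praised here; the array contents are arbitrary:

```python
import numpy as np

# Vectorized thinking: no explicit loops, just whole-array expressions.
a = np.arange(24).reshape(2, 3, 4)    # a small 3-D array

print(a[0, :, 1])                # slice across the middle axis -> [1 5 9]
print(a[..., ::2])               # every other element of the last axis
print((a * 2 + 1).sum(axis=-1))  # elementwise math, then a reduction
print(a[a % 3 == 0])             # boolean mask: elements divisible by 3
```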

00:04:05 Can you put your finger on how to design a language

00:04:09 such that it fits in your head?

00:04:11 Certain things like the colon

00:04:14 or the certain notation aspects of Python

00:04:17 that just kind of work.

00:04:18 Is it something you have to kind of write out on paper,

00:04:22 look and say, it’s just right?

00:04:24 Is it a taste thing or is there a systematic process?

00:04:27 What’s your sense?

00:04:28 I think it’s more of a taste thing.

00:04:31 But one thing that should be said

00:04:33 is that you have to pick your audience, right?

00:04:36 So, the better defined the user audience is

00:04:39 or the users are, the easier it is to build something

00:04:42 that fits in their minds because their needs

00:04:45 will be more compact and coherent.

00:04:47 It is possible to find a projection, right?

00:04:49 A compact projection for their needs.

00:04:50 The more diverse the user base, the harder that is.

00:04:54 And so, as Python has grown in popularity,

00:04:57 that’s also naturally created more complexity

00:05:00 as people try to design any given thing.

00:05:01 There’ll be multiple valid opinions

00:05:04 about a particular design approach.

00:05:06 And so, I do think that’s the downside of popularity.

00:05:10 It’s almost an intrinsic aspect

00:05:11 of the complexity of the problem.

00:05:13 Well, at the very beginning,

00:05:14 aren’t you an audience of one, isn’t ultimately,

00:05:17 aren’t all the greatest projects in history

00:05:19 were just solving a problem that you yourself had?

00:05:21 Well, so Clay Shirky in his book on crowdsourcing

00:05:25 or his kind of thoughts on crowdsourcing,

00:05:27 he identifies the first step of crowdsourcing

00:05:29 is me first collaboration.

00:05:31 You first have to make something

00:05:32 that works well for yourself.

00:05:34 It’s very telling that when you look at all of the impactful

00:05:37 big projects, well, they’re fundamental projects now

00:05:40 in the SciPy and PyData ecosystem.

00:05:42 They all started with the people in the domain

00:05:46 trying to scratch their own itch.

00:05:48 And the whole idea of scratching your own itch

00:05:49 is something that the open source

00:05:51 or the free software world has known for a long time.

00:05:53 But in the scientific computing areas,

00:05:56 these are assistant professors

00:05:58 or electrical engineering grad students.

00:06:00 They didn’t have really a lot of programming skill

00:06:03 necessarily, but Python was just good enough

00:06:05 for them to put something together

00:06:06 that fit in their domain, right?

00:06:09 So it’s almost like a,

00:06:11 it’s a necessity is the mother of invention aspect.

00:06:13 And also it was a really harsh filter

00:06:16 for utility and compactness and expressiveness.

00:06:20 Like if it was too hard to use,

00:06:22 then they wouldn’t have built it

00:06:23 because that was just too much trouble, right?

00:06:24 It was a side project for them.

00:06:26 And also necessity creates a kind of deadline.

00:06:28 It seems like a lot of these projects

00:06:29 are quickly thrown together in the first step.

00:06:32 And that, even though it’s flawed,

00:06:35 that just seems to work well for software projects.

00:06:38 Well, it does work well for software projects in general.

00:06:41 And in this particular space,

00:06:43 one of my colleagues, Stan Siebert identified this,

00:06:46 that all the projects in the SciPy ecosystem,

00:06:50 if we just rattle them off,

00:06:51 there’s NumPy, there’s SciPy

00:06:53 built by different collaborations of people.

00:06:55 Although Travis is the heart of both of them.

00:06:57 But NumPy coming from Numeric and Numarray,

00:06:59 these are different people.

00:07:00 And then you’ve got Pandas,

00:07:01 you’ve got Jupyter or IPython,

00:07:04 there’s Matplotlib,

00:07:06 there’s just so many others, I’m not gonna do justice

00:07:09 if I try to name them all.

00:07:10 But all of them are actually different people.

00:07:12 And as they rolled out their projects,

00:07:15 the fact that they had limited resources

00:07:17 meant that they were humble about scope.

00:07:21 A great famous hacker, Jamie Zawinski,

00:07:23 once said that every geek’s dream

00:07:26 is to build the ultimate middleware, right?

00:07:29 And the thing is with these scientists turned programmers,

00:07:32 they had no such dream.

00:07:33 They were just trying to write something

00:07:34 that was a little bit better for what they needed,

00:07:36 than MATLAB,

00:07:37 and they were gonna leverage what everyone else had built.

00:07:39 So naturally, almost in kind of this annealing process

00:07:42 or whatever, we built a very modular cover

00:07:46 of the basic needs of a scientific computing library.

00:07:50 If you look at the whole human story,

00:07:51 how much of a leap is it?

00:07:53 We’ve developed all kinds of languages,

00:07:55 all kinds of methodologies for communication.

00:07:57 It just kind of like grew this collective intelligence,

00:08:00 civilization grew, it expanded, wrote a bunch of books,

00:08:04 and now we tweet. How big of a leap is programming

00:08:08 if programming is yet another language?

00:08:10 Is it just a nice little trick

00:08:12 that’s temporary in our human history,

00:08:15 or is it like a big leap in the,

00:08:19 almost us becoming another organism

00:08:23 at a higher level of abstraction, something else?

00:08:26 I think the act of programming

00:08:28 or using grammatical constructions

00:08:32 of some underlying primitives,

00:08:34 that is something that humans do learn,

00:08:37 but every human learns this.

00:08:38 Anyone who can speak learns how to do this.

00:08:41 What makes programming different

00:08:42 has been that up to this point,

00:08:44 when we try to give instructions to computing systems,

00:08:49 all of our computers, well, actually this is not quite true,

00:08:51 but I’ll first say it,

00:08:53 and then I’ll tell you why it’s not true.

00:08:55 But for the most part,

00:08:56 we can think of computers as being these iterated systems.

00:08:58 So when we program,

00:08:59 we’re giving very precise instructions to iterated systems

00:09:04 that then run at incomprehensible speed

00:09:07 and run those instructions.

00:09:08 In my experience,

00:09:10 some people are just better equipped

00:09:12 to model systematic iterated systems,

00:09:16 well, whatever, iterated systems in their head.

00:09:20 Some people are really good at that,

00:09:21 and other people are not.

00:09:23 And so when you have like, for instance,

00:09:26 sometimes people have tried to build systems

00:09:27 that make programming easier by making it visual,

00:09:30 drag and drop.

00:09:31 And the issue is you can have a drag and drop thing,

00:09:33 but once you start having to iterate the system

00:09:35 with conditional logic,

00:09:36 handling case statements and branch statements

00:09:37 and all these other things,

00:09:39 the visual drag and drop part doesn’t save you anything.

00:09:42 You still have to reason about this giant iterated system

00:09:44 with all these different conditions around it.

00:09:46 That’s the hard part, right?

00:09:48 So handling iterated logic, that’s the hard part.

00:09:52 The languages we use then emerge

00:09:54 to give us the ability and capability over these things.

00:09:57 Now, the one exception to this rule, of course,

00:09:58 is the most popular programming system in the world,

00:10:00 which is Excel, which is a data flow

00:10:03 and a data driven, immediate mode,

00:10:05 data transformation oriented programming system.

00:10:08 And it’s actually not an accident

00:10:10 that that system is the most popular programming system

00:10:12 because it’s so accessible

00:10:14 to a much broader group of people.

00:10:16 I do think as we build future computing systems,

00:10:21 you’re actually already seeing this a little bit,

00:10:22 it’s much more about composition of modular blocks.

00:10:25 They themselves actually maintain all their internal state

00:10:29 and the interfaces between them

00:10:31 are well defined data schemas.

00:10:32 And so to stitch these things together using like IFTTT

00:10:35 or Zapier or any of these kind of,

00:10:38 I would say compositional scripting kinds of things,

00:10:42 I mean, HyperCard was also a little bit in this vein.

00:10:44 That’s much more accessible to most people.

00:10:47 It’s really that implicit state

00:10:49 that’s so hard for people to track.
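
One possible sketch of that compositional style, where blocks own their internal state and the interface between them is a well-defined data schema; the Event schema and both blocks are hypothetical:

```python
# Sketch: modular blocks joined by an explicit data schema.
from dataclasses import dataclass

@dataclass
class Event:
    source: str
    payload: str

class Mailbox:
    """A block that keeps its own internal state (the inbox)."""
    def __init__(self):
        self._inbox = []
    def receive(self, event: Event):
        self._inbox.append(event)
    def unread(self):
        return len(self._inbox)

def notify(event: Event) -> Event:
    """A stateless transformation block; consumes and emits the schema."""
    return Event(source="notifier", payload=f"FYI: {event.payload}")

# "Stitching" in the IFTTT/Zapier spirit: wire outputs to inputs.
box = Mailbox()
box.receive(notify(Event(source="sensor", payload="door opened")))
print(box.unread())  # 1
```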

00:10:52 Yeah, okay, so that’s modular stuff,

00:10:53 but there’s also an aspect

00:10:54 where you’re standing on the shoulders of giants.

00:10:55 So you’re building like higher and higher levels

00:10:58 of abstraction, but you do that a little bit with language.

00:11:02 So with language, you develop sort of ideas,

00:11:05 philosophies from Plato and so on.

00:11:07 And then you kind of leverage those philosophies

00:11:09 as you try to have deeper and deeper conversations.

00:11:12 But with programming,

00:11:13 it seems like you can build much more complicated systems.

00:11:17 Like without knowing how everything works,

00:11:18 you can build on top of the work of others.

00:11:21 And it seems like you’re developing

00:11:22 more and more sophisticated expressions,

00:11:27 ability to express ideas in a computational space.

00:11:31 I think it’s worth pondering the difference here

00:11:35 between complexity and complication.

00:11:40 Okay, right. Back to Excel.

00:11:42 Well, not quite back to Excel,

00:11:43 but the idea is when we have a human conversation,

00:11:47 all languages for humans emerged

00:11:51 to support human relational communications,

00:11:55 which is that the person we’re communicating with

00:11:57 is a person and they would communicate back to us.

00:12:01 And so we sort of hit a resonance point, right?

00:12:05 When we actually agree on some concepts.

00:12:07 So there’s a messiness to it and there’s a fluidity to it.

00:12:10 With computing systems,

00:12:11 when we express something to the computer and it’s wrong,

00:12:14 we just try again.

00:12:15 So we can basically live many virtual worlds

00:12:17 of having failed at expressing ourselves to the computer

00:12:20 until the one time we expressed ourselves right.

00:12:22 Then we kind of put in production

00:12:23 and then discover that it’s still wrong

00:12:25 a few days down the road.

00:12:27 So I think the sophistication of things

00:12:30 that we build with computing,

00:12:32 one has to really pay attention to the difference

00:12:35 between when an end user is expressing something

00:12:38 onto a system that exists

00:12:39 versus when they’re extending the system

00:12:42 to increase the system’s capability

00:12:45 for someone else to then interface with.

00:12:47 We happen to use the same language for both of those things

00:12:49 in most cases, but it doesn’t have to be that.

00:12:52 And Excel is actually a great example of this,

00:12:54 of kind of a counterpoint to that.

00:12:56 Okay, so what about the idea of, you said messiness.

00:13:01 Wouldn’t you put the software 2.0 idea,

00:13:06 this idea of machine learning

00:13:08 into the further and further steps

00:13:12 into the world of messiness.

00:13:14 The same kind of beautiful messiness of human communication.

00:13:17 Isn’t that what machine learning is?

00:13:19 Is building on levels of abstraction

00:13:23 that don’t have messiness in them,

00:13:25 that at the operating system level,

00:13:27 then there’s Python, the programming languages

00:13:29 that have more and more power.

00:13:30 But then finally, there’s neural networks

00:13:34 that ultimately work with data.

00:13:38 And so the programming is almost in the space of data

00:13:40 and the data is allowed to be messy.

00:13:42 Isn’t that a kind of program?

00:13:43 So the idea of software 2.0 is a lot of the programming

00:13:47 happens in the space of data, so back to Excel,

00:13:52 all roads lead back to Excel, in the space of data

00:13:55 and also the hyperparameters of the neural networks.

00:13:57 And all of those allow the same kind of messiness

00:14:02 that human communication allows.

00:14:04 It does, but my background is in physics.

00:14:07 I took like two CS courses in college.

00:14:09 So I don’t have, now I did cram a bunch of CS in prep

00:14:13 when I applied for grad school,

00:14:15 but still I don’t have a formal background

00:14:18 in computer science.

00:14:19 But what I have observed in studying programming languages

00:14:22 and programming systems and things like that

00:14:25 is that there seems to be this triangle.

00:14:27 It’s one of these beautiful little iron triangles

00:14:30 that you find in life sometimes.

00:14:32 And it’s the connection between the code correctness

00:14:35 and kind of expressiveness of code,

00:14:37 the semantics of the data,

00:14:39 and then the kind of correctness or parameters

00:14:42 of the underlying hardware compute system.

00:14:44 So there’s the algorithms that you wanna apply,

00:14:48 there’s what the bits that are stored on whatever media

00:14:52 actually represent, so the semantics of the data

00:14:55 within the representation,

00:14:56 and then there’s what the computer can actually do.

00:14:59 And every programming system, every information system

00:15:02 ultimately finds some spot in the middle

00:15:05 of this little triangle.

00:15:07 Sometimes some systems collapse them into just one edge.

00:15:11 Are we including humans as a system in this?

00:15:13 No, no, I’m just thinking about computing systems here.

00:15:15 And the reason I bring this up is because

00:15:17 I believe there’s no free lunch around this stuff.

00:15:20 So if we build machine learning systems

00:15:23 to sort of write the correct code

00:15:25 that is at a certain level of performance,

00:15:27 so it’ll sort of select with hyperparameters

00:15:30 we can tune kind of how we want the performance boundary

00:15:32 and SLA to look like for transforming some set of inputs

00:15:37 into certain kinds of outputs.

00:15:39 That training process itself is intrinsically sensitive

00:15:43 to the kinds of inputs we put into it.

00:15:45 It’s quite sensitive to the boundary conditions

00:15:47 we put around the performance.

00:15:49 So I think even as we move to using automated systems

00:15:52 to build this transformation,

00:15:53 as opposed to humans explicitly

00:15:55 from a top down perspective, figuring out,

00:15:57 well, this schema and this database and these columns

00:15:59 get selected for this algorithm,

00:16:01 and here we put a Fibonacci heap for some other thing.

00:16:04 Human design or computer design,

00:16:06 ultimately what we hit,

00:16:08 the boundaries that we hit with these information systems

00:16:10 is when the representation of the data hits the real world

00:16:14 is where there’s a lot of slop and a lot of interpretation.

00:16:17 And that’s where actually I think

00:16:18 a lot of the work will go in the future

00:16:20 is actually understanding kind of how to better

00:16:23 in the view of these live data systems,

00:16:26 how to better encode the semantics of the world

00:16:29 for those things.

00:16:30 There’ll be less of the details

00:16:31 of how we write a particular SQL query.

00:16:33 Okay, but given the semantics of the real world

00:16:35 and the messiness of that,

00:16:36 what does the word correctness mean

00:16:38 when you’re talking about code?

00:16:40 There’s a lot of dimensions to correctness.

00:16:42 Historically, and this is one of the reasons I say

00:16:45 that we’re coming to the end of the era of software,

00:16:47 because for the last 40 years or so,

00:16:49 software correctness was really defined

00:16:52 about functional correctness.

00:16:54 I write a function, it’s got some inputs,

00:16:56 does it produce the right outputs?

00:16:57 If so, then I can turn it on,

00:16:59 hook it up to the live database and it goes.

00:17:01 And more and more now we have,

00:17:03 I mean, in fact, I think the bright line in the sand

00:17:05 between machine learning systems

00:17:06 or modern data driven systems

00:17:08 versus classical software systems

00:17:10 is that the values of the input

00:17:14 actually have to be considered with the function together

00:17:17 to say this whole thing is correct or not.

00:17:19 And usually there’s a performance SLA as well.

00:17:21 Like did it actually finish making this?

00:17:23 What’s SLA?

00:17:24 Sorry, service level agreement.

00:17:25 So it has to return within some time.

00:17:27 You have a 10 millisecond time budget

00:17:29 to return a prediction of this level of accuracy, right?

00:17:32 So these are things that were not traditionally

00:17:35 in most business computing systems for the last 20 years

00:17:37 at all, people didn’t think about it.

00:17:39 But now we have value dependence on functional correctness.

00:17:42 So that question of correctness

00:17:44 is becoming a bigger and bigger question.
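
A minimal sketch of what value-dependent correctness plus a latency SLA might look like in code; the model stand-in, the valid input range, and the 10 millisecond budget are all illustrative assumptions:

```python
import time

def predict(x):
    # hypothetical stand-in for a real trained model
    return 2.0 * x

def predict_with_sla(x, budget_s=0.010):
    """Correct only if the input value is in range AND the call meets the SLA."""
    if not (0.0 <= x <= 1.0):
        # value-dependent correctness: refuse inputs the model wasn't built for
        raise ValueError(f"input {x} outside supported range [0, 1]")
    start = time.perf_counter()
    y = predict(x)
    elapsed = time.perf_counter() - start
    if elapsed > budget_s:
        # performance SLA: a late answer counts as a wrong answer
        raise TimeoutError(f"SLA violated: {elapsed * 1000:.1f} ms")
    return y

print(predict_with_sla(0.25))  # 0.5
```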

00:17:45 How does that map to the end of software?

00:17:48 We’ve thought about software as just this thing

00:17:50 that you can do in isolation with some test trial inputs

00:17:54 and in a very sort of sandboxed environment.

00:17:58 And we can quantify how does it scale?

00:18:00 How does it perform?

00:18:02 How many nodes do we need to allocate

00:18:03 if we wanna scale this many inputs?

00:18:05 When we start turning this stuff into prediction systems,

00:18:08 real cybernetic systems,

00:18:10 you’re going to find scenarios where you get inputs

00:18:12 that you’re gonna wanna spend

00:18:13 a little more time thinking about.

00:18:14 You’re gonna find inputs that are not,

00:18:15 it’s not clear what you should do, right?

00:18:17 So then the software has a varying amount of runtime

00:18:20 and correctness with regard to input.

00:18:22 And that is a different kind of system altogether.

00:18:24 Now it’s a full on cybernetic system.

00:18:25 It’s a next generation information system

00:18:27 that is not like traditional software systems.

00:18:30 Can you maybe describe what is a cybernetic system?

00:18:33 Do you include humans in that picture?

00:18:35 So is a human in the loop kind of complex mess

00:18:38 of the whole kind of interactivity of software

00:18:41 with the real world or is it something more concrete?

00:18:44 Well, when I say cybernetic,

00:18:45 I really do mean that the software itself

00:18:47 is closing the observe, orient, decide, act loop by itself.

00:18:51 So humans being out of the loop is

00:18:54 what, for me, makes it a cybernetic system.

00:18:58 And humans are out of that loop.

00:19:00 When humans are out of the loop,

00:19:01 when the machine is actually sort of deciding on its own

00:19:05 what it should do next to get more information,

00:19:07 that makes it a cybernetic system.
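
As a cartoon of a system closing the observe, orient, decide, act loop on its own, here is a minimal sketch; every function is a made-up stand-in:

```python
import random

# Toy cybernetic loop: the program observes, orients, decides, and acts
# by itself, with no human in the loop. All functions are stand-ins.
def observe():
    return random.uniform(15.0, 30.0)        # e.g. a temperature sensor

def orient(reading, setpoint=22.0):
    return reading - setpoint                # error relative to a goal

def decide(error):
    return "cool" if error > 0 else "heat"   # choose an action

def act(action):
    print(f"actuator: {action}")

for _ in range(3):                           # the closed loop, iterated
    act(decide(orient(observe())))
```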

00:19:09 So we’re just at the dawn of this, right?

00:19:11 I think everyone talking about ML and AI, it’s great.

00:19:15 But really the thing we should be talking about

00:19:16 is when we really enter the cybernetic era

00:19:20 and all of the questions of ethics and governance

00:21:22 and correctness and all these things,

00:19:24 they really are the most important questions.

00:19:27 Okay, can we just linger on this?

00:19:28 What does it mean for the human to be out of the loop

00:19:30 in a cybernetic system, because isn’t the cybernetic system

00:19:34 that’s ultimately accomplishing some kind of purpose

00:19:37 that at the bottom, the turtles all the way down,

00:19:41 at the bottom turtle is a human.

00:19:44 Well, the human may have set some criteria,

00:19:45 but the human wasn’t precise.

00:19:47 So for instance, I just read the other day

00:19:49 that earlier this year,

00:19:51 or maybe it was last year at some point,

00:19:52 the Libyan army, I think,

00:19:55 sent out some automated killer drones with explosives.

00:19:58 And there was no human in the loop at that point.

00:20:00 They basically put them in a geofenced area,

00:20:02 said find any moving target, like a truck or vehicle

00:20:04 that looks like this, and boom.

00:20:07 That’s not a human in the loop, right?

00:20:09 So increasingly, the less human there is in the loop,

00:20:12 the more concerned you are about these kinds of systems,

00:20:15 because there’s unintended consequences,

00:20:18 like less the original designer and engineer of the system

00:20:22 is able to predict, even one with good intent

00:20:25 is able to predict the consequences of such a system.

00:20:27 Is that it? That’s right.

00:20:28 There are some software systems, right,

00:20:30 that run without humans in the loop

00:20:31 that are quite complex.

00:20:32 And that’s like the electronic markets.

00:20:34 And we get flash crashes all the time.

00:20:35 We get in the heyday of high frequency trading,

00:20:40 there’s a lot of market microstructure,

00:20:41 people doing all sorts of weird stuff

00:20:43 that the market designers had never really thought about,

00:20:47 contemplated or intended.

00:20:48 So when we run these full on systems

00:20:50 with these automated trading bots,

00:20:52 now they become automated killer drones

00:20:55 and then all sorts of other stuff.

00:20:57 We are, that’s what I mean by we’re at the dawn

00:20:59 of the cybernetic era and the end of the era

00:21:01 of just pure software.

00:21:03 Are you more concerned,

00:21:06 if you’re thinking about cybernetic systems

00:21:08 or even like self replicating systems,

00:21:10 so systems that aren’t just doing a particular task,

00:21:13 but are able to sort of multiply and scale

00:21:15 in some dimension in the digital

00:21:18 or even the physical world.

00:21:20 Are you more concerned about like the lobster being boiled?

00:21:24 So a gradual with us not noticing,

00:21:29 collapse of civilization or a big explosion,

00:21:34 like oops, kind of a big thing where everyone notices,

00:21:38 but it’s too late.

00:21:40 I think that it will be a different experience

00:21:44 for different people.

00:21:46 I do share a common point of view

00:21:49 with some of the climate,

00:21:52 people who are concerned about climate change

00:21:53 and just the big existential risks that we have.

00:21:59 But unlike a lot of people who share my level of concern,

00:22:02 I think the collapse will not be quite so dramatic

00:22:06 as some of them think.

00:22:07 And what I mean is that,

00:22:09 I think that for certain tiers of let’s say economic class

00:22:12 or certain locations in the world,

00:22:14 people will experience dramatic collapse scenarios.

00:22:17 But for a lot of people, especially in the developed world,

00:22:20 the realities of collapse will be managed.

00:22:24 There’ll be narrative management around it

00:22:26 so that they essentially insulate,

00:22:29 the middle class will be used to insulate the upper class

00:22:31 from the pitchforks and the flaming torches and everything.

00:22:35 It’s interesting because,

00:22:37 so my specific question wasn’t as general.

00:22:39 My question was more about cybernetic systems or software.

00:22:42 Okay.

00:22:43 It’s interesting,

00:22:44 but it would nevertheless perhaps be about class.

00:22:46 So the effect of algorithms

00:22:48 might affect certain classes more than others.

00:22:50 Absolutely.

00:22:51 I was more thinking about

00:22:52 whether it’s social media algorithms or actual robots,

00:22:57 is there going to be a gradual effect on us

00:23:00 where we wake up one day

00:23:02 and don’t recognize the humans we are,

00:23:05 or is it something truly dramatic

00:23:07 where there’s like a meltdown of a nuclear reactor

00:23:11 kind of thing, Chernobyl, like catastrophic events

00:23:15 that are almost bugs in a program that scaled itself

00:23:20 too quickly?

00:23:21 Yeah, I’m not as concerned about the visible stuff.

00:23:26 And the reason is because the big visible explosions,

00:23:29 I mean, this is something I said about social media

00:23:31 is that at least with nuclear weapons,

00:23:33 when a nuke goes off, you can see it

00:23:34 and you’re like, well, that’s really,

00:23:36 wow, that’s kind of bad, right?

00:23:37 I mean, Oppenheimer was reciting the Bhagavad Gita, right?

00:23:40 When he saw one of those things go off.

00:23:42 So we can see nukes are really bad.

00:23:45 He’s not reciting anything about Twitter.

00:23:48 Well, but right, but then when you have social media,

00:23:51 when you have all these different things that conspire

00:23:54 to create a layer of virtual experience for people

00:23:57 that alienates them from reality and from each other,

00:24:00 that’s very pernicious, that’s impossible to see, right?

00:24:03 And it kind of slowly gets in there, so.

00:24:07 You’ve written about this idea of virtuality

00:24:09 on this topic, which you define as the subjective phenomenon

00:24:14 of knowingly engaging with virtual sensation and perception

00:24:17 and suspending or forgetting the context

00:24:19 that it’s a simulacrum.

00:24:22 So let me ask, what is real?

00:24:26 Is there a hard line between reality and virtuality?

00:24:30 Like perception drifts from some kind of physical reality.

00:24:33 We have to kind of have a sense of what is the line

00:24:36 that’s too, we’ve gone too far.

00:24:37 Right, right.

00:24:38 For me, it’s not about any hard line about physical reality

00:24:42 as much as a simple question of,

00:24:47 does the particular technology help people connect

00:24:51 in a more integral way with other people,

00:24:54 with their environment,

00:24:56 with all of the full spectrum of things around them?

00:24:58 So it’s less about, oh, this is a virtual thing

00:25:00 and this is a hard real thing,

00:25:03 more about when we create virtual representations

00:25:05 of the real things, always some things

00:25:09 are lost in translation.

00:25:10 Usually many, many dimensions are lost in translation.

00:25:14 We’re now coming to almost two years of COVID,

00:25:16 people on Zoom all the time.

00:25:17 You know it’s different when you meet somebody in person

00:25:19 than when you see them on,

00:25:20 I’ve seen you on YouTube lots, right?

00:25:22 But then seeing a person is very different.

00:25:24 And so I think when we engage in virtual experiences

00:25:29 all the time, and we only do that,

00:25:31 there is absolutely a level of embodiment.

00:25:34 There’s a level of embodied experience

00:25:36 and participatory interaction that is lost.

00:25:40 And it’s very hard to put your finger on exactly what it is.

00:25:42 It’s hard to say, oh, we’re gonna spend $100 million

00:25:44 building a new system that captures this 5% better,

00:25:49 higher fidelity human expression.

00:25:51 No one’s gonna pay for that, right?

00:25:52 So when we rush madly into a world of simulacrum

00:25:57 and virtuality, the things that are lost are,

00:26:02 it’s difficult.

00:26:04 Once everyone moves there, it can be hard to look back

00:26:06 and see what we’ve lost.

00:26:08 So is it irrecoverably lost?

00:26:10 Or rather, when you put it all on the table,

00:26:14 is it possible for more to be gained than is lost?

00:26:17 If you look at video games,

00:26:18 they create virtual experiences that are surreal

00:26:22 and can bring joy to a lot of people,

00:26:24 can connect a lot of people,

00:26:26 and can get people to talk a lot of trash.

00:26:29 So they can bring out the best and the worst in people.

00:26:32 So is it possible to have a future world

00:26:35 where the pros outweigh the cons?

00:26:38 It is.

00:26:39 I mean, it’s possible to have that in the current world.

00:26:41 But when literally trillions of dollars of capital

00:26:46 are tied to using those things

00:26:48 to groom the worst of our inclinations

00:26:52 and to attack our weaknesses in the limbic system

00:26:56 to create these things into id machines

00:26:57 versus connection machines,

00:26:59 then those good things don’t stand a chance.

00:27:03 Can you make a lot of money by building connection machines?

00:27:06 Is it possible, do you think,

00:27:09 to bring out the best in human nature

00:27:10 to create fulfilling connections and relationships

00:27:13 in the digital world and make a shit ton of money?

00:27:18 If I figure it out, I’ll let you know.

00:27:21 But what’s your intuition

00:27:22 without concretely knowing what’s the solution?

00:27:24 My intuition is that a lot of our digital technologies

00:27:27 give us the ability to have synthetic connections

00:27:30 or to experience virtuality.

00:27:33 They have co evolved with sort of the human expectations.

00:27:38 It’s sort of like sugary drinks.

00:27:40 As people have more sugary drinks,

00:27:42 they need more sugary drinks to get that same hit, right?

00:27:45 So with these virtual things and with TV and fast cuts

00:27:50 and TikToks and all these different kinds of things,

00:27:52 we’re co creating essentially humanity

00:27:55 that sort of asks and needs those things.

00:27:57 And now it becomes very difficult

00:27:58 to get people to slow down.

00:28:00 It gets difficult for people to hold their attention

00:28:03 on slow things and actually feel that embodied experience.

00:28:07 So mindfulness now more than ever is so important in schools

00:28:11 and as a therapy technique for people

00:28:13 because our environment has been accelerated.

00:28:15 And McLuhan actually talks about this

00:28:17 in the electric environment of the television.

00:28:19 And that was before TikTok and before front facing cameras.

00:28:22 So I think for me, the concern is that

00:28:25 it’s not like we can ever switch to doing something better,

00:28:28 but more of the humans and technology,

00:28:32 they’re not independent of each other.

00:28:33 The technology that we use kind of molds what we need

00:28:37 for the next generation of technology.

00:28:39 Yeah, but humans are intelligent and they’re introspective

00:28:43 and they can reflect on the experiences of their life.

00:28:45 So for example, there’s been many years in my life

00:28:47 where I ate an excessive amount of sugar.

00:28:50 And then a certain moment I woke up and said,

00:28:54 why do I keep doing this?

00:28:55 This doesn’t feel good.

00:28:57 Like longterm.

00:28:59 And I think, so going through the TikTok process

00:29:02 of realizing, okay, when I shorten my attention span,

00:29:06 actually that does not make me feel good longer term.

00:29:10 And realizing that and then going to platforms,

00:29:13 going to places that are away from the sugar.

00:29:18 So in so doing, you can create platforms

00:29:21 that can make a lot of money to help people wake up

00:29:24 to what actually makes them feel good longterm.

00:29:26 Develop, grow as human beings.

00:29:28 And it just feels like humans are more intelligent

00:29:31 than mice looking for cheese.

00:29:35 They’re able to sort of think, I mean,

00:29:36 we can contemplate our own mortality.

00:29:39 We can contemplate things like longterm love

00:29:43 and we can have a longterm fear

00:29:46 of certain things like mortality.

00:29:48 We can contemplate whether the experiences,

00:29:51 the sort of the drugs of daily life

00:29:53 that we’ve been partaking in is making us happier,

00:29:57 better people.

00:29:58 And then once we contemplate that,

00:30:00 we can make financial decisions in using services

00:30:03 and paying for services that are making us better people.

00:30:06 So it just seems that we’re in the very first stages

00:30:11 of social networks that just were able to make a lot of money

00:30:15 really quickly, but in bringing out sometimes

00:30:20 the bad parts of human nature, they didn’t destroy humans.

00:30:23 They just fed everybody a lot of sugar.

00:30:26 And now everyone’s gonna wake up and say,

00:30:28 hey, we’re gonna start having like sugar free social media.

00:30:31 Right, right.

00:30:33 Well, there’s a lot to unpack there.

00:30:34 I think some people certainly have the capacity for that.

00:30:37 And I certainly think, I mean, it’s very interesting

00:30:39 even the way you said it, you woke up one day

00:30:41 and you thought, well, this doesn’t feel very good.

00:30:44 Well, it’s still your limbic system saying

00:30:45 this doesn’t feel very good, right?

00:30:47 You have a cat brain’s worth of neurons around your gut,

00:30:50 right?

00:30:50 And so maybe that saturated and that was telling you,

00:30:53 hey, this isn’t good.

00:30:55 Humans are more than just mice looking for cheese

00:30:58 or monkeys looking for sex and power, right?

00:31:00 So.

00:31:01 Let’s slow down.

00:31:02 Now a lot of people would argue with you on that one,

00:31:05 but yes.

00:31:06 Well, we’re more than just that, but we’re at least that.

00:31:08 And we’re very, very seldom not that.

00:31:11 So I don’t actually disagree with you

00:31:15 that we could be better and that better platforms exist.

00:31:18 And people are voluntarily noping out of things

00:31:20 like Facebook and noping out.

00:31:21 That’s an awesome verb.

00:31:22 It’s a great term.

00:31:23 Yeah, I love it.

00:31:24 I use it all the time.

00:31:25 You’re welcome, Mike.

00:31:26 I’m gonna have to nope out of that.

00:31:27 I’m gonna have to nope out of that, right?

00:31:28 It’s gonna be a hard pass and that’s great.

00:31:32 But that’s again, to your point,

00:31:34 that’s the first generation of front facing cameras

00:31:37 of social pressures.

00:31:38 And you as a self starter, self aware adult

00:31:43 have the capacity to say, yeah, I’m not gonna do that.

00:31:46 I’m gonna go and spend time on long form reads.

00:31:48 I’m gonna spend time managing my attention.

00:31:50 I’m gonna do some yoga.

00:31:52 If you’re a 15 year old in high school

00:31:54 and your entire social environment

00:31:57 is everyone doing these things,

00:31:58 guess what you’re gonna do?

00:31:59 You’re gonna kind of have to do that

00:32:00 because your limbic system says,

00:32:01 hey, I need to get the guy or the girl or the whatever.

00:32:04 And that’s what I’m gonna do.

00:32:05 And so one of the things that we have to reason about here

00:32:07 is the social media systems or social media,

00:32:10 I think is our first encounter with a technological system

00:32:15 that runs a bit of a loop around our own cognition

00:32:20 and attention.

00:32:21 It’s not the last, it’s far from the last.

00:32:25 And it gets to the heart of some of the philosophical

00:32:28 Achilles heel of the Western philosophical system,

00:32:31 which is each person gets to make their own determination.

00:32:34 Each person is an individual that’s sacrosanct

00:32:37 in their agency and their sovereignty and all these things.

00:32:39 The problem with these systems is they come down

00:32:42 and they are able to make their own decisions.

00:32:44 They come down and they are able to manage everyone en masse.

00:32:48 And so every person is making their own decision,

00:32:50 but together the bigger system is causing them to act

00:32:53 with a group dynamic that’s very profitable for people.

00:32:58 So this is the issue that we have is that our philosophies

00:33:02 are actually not geared to understand

00:33:05 what is it for a person to have a high trust connection

00:33:10 as part of a collective and for that collective

00:33:12 to have its right to coherency and agency.

00:33:16 That’s something like when a social media app

00:33:19 causes a family to break apart,

00:33:21 it’s done harm to more than just individuals, right?

00:33:24 So that concept is not something we really talk about

00:33:27 or think about very much, but that’s actually the problem

00:33:30 is that we’re vaporizing molecules into atomic units

00:33:33 and then we’re hitting all the atoms with certain things.

00:33:35 That’s like, yeah, well, that person chose to look at my app.

00:33:38 So our understanding of human nature

00:33:40 at the individual level, it emphasizes the individual

00:33:43 too much because ultimately society operates

00:33:46 at the collective level.

00:33:47 And these apps do as well.

00:33:48 And the apps do as well.

00:33:49 So for us to understand the progression and the development

00:33:53 of this organism we call human civilization,

00:33:56 we have to think at the collective level too.

00:33:58 I would say multi tiered.

00:33:59 Multi tiered.

00:34:00 So individual as well.

00:34:01 Individuals, family units, social collectives

00:34:05 and all the way up.

00:34:06 Okay, so you’ve said that individual humans

00:34:09 are multi layered, susceptible to signals and waves
00:34:12 at multiple strata: the physical, the biological,

00:34:15 social, cultural, intellectual.

00:34:16 So sort of going along these lines,

00:34:19 can you describe the layers of the cake

00:34:22 that is a human being and maybe the human collective,

00:34:27 human society?

00:34:29 So I’m just stealing wholesale here from Robert Pirsig,

00:34:32 who is the author of Zen and the Art of Motorcycle

00:34:34 Maintenance, which has a sequel

00:34:40 called Lila.

00:34:40 He goes into this in a little more detail.

00:34:42 But it’s a crude approach to thinking about people.

00:34:47 But I think it’s still an advancement

00:34:48 over traditional subject object metaphysics,

00:34:51 where we look at people as a dualist would say,

00:34:53 well, is your mind, your consciousness,

00:34:57 is that just merely the matter that’s in your brain

00:35:01 or is there something kind of more beyond that?

00:35:03 And they would say, yes, there’s a soul,

00:35:05 sort of ineffable soul beyond just merely the physical body.

00:35:09 And I’m not one of those people.

00:35:11 I think that we don’t have to draw a line between whether things

00:35:15 are only this or only that.

00:35:16 Collectives of things can emerge structures and patterns

00:35:19 that are just as real as the underlying pieces.

00:35:22 But they’re transcendent, but they’re still

00:35:24 of the underlying pieces.

00:35:26 So your body is this way.

00:35:28 I mean, we just know physically you consist of atoms

00:35:31 and whatnot.

00:35:32 And then the atoms are arranged into molecules

00:35:34 which then arrange into certain kinds of structures

00:35:37 that seem to have a homeostasis to them.

00:35:39 We call them cells.

00:35:40 And those cells form sort of biological structures.

00:35:44 Those biological structures give your body

00:35:46 its physical ability and the biological ability

00:35:49 to consume energy and to maintain homeostasis.

00:35:51 But humans are social animals.

00:35:54 I mean, human by themselves is not very long for the world.

00:35:57 So part of our biology is wired to connect to other people.

00:36:02 From the mirror neurons to our language centers

00:36:04 and all these other things.

00:36:06 So we are intrinsically, there’s a layer,

00:36:09 there’s a part of us that wants to be part of a thing.

00:36:12 If we’re around other people, not saying a word,

00:36:14 but they’re just up and down jumping and dancing, laughing,

00:36:17 we’re going to feel better.

00:36:18 And there was no exchange of physical anything.

00:36:21 They didn’t give us like five atoms of happiness.

00:36:24 But there’s an induction in our own sense of self

00:36:27 that is at that social level.

00:36:29 And then beyond that, Pirsig puts the intellectual level

00:36:33 kind of one level higher than social.

00:36:35 I think they’re actually more intertwined than that.

00:36:37 But the intellectual level is the level of pure ideas.

00:36:41 That you are a vessel for memes.

00:36:42 You’re a vessel for philosophies.

00:36:45 You will conduct yourself in a particular way.

00:36:47 I mean, I think part of this is if we think about it

00:36:49 from a physics perspective, you’re not,

00:36:52 there’s the joke that physicists like to approximate things.

00:36:55 And we’ll say, well, approximate a spherical cow, right?

00:36:57 You’re not a spherical cow, you’re not a spherical human.

00:36:59 You’re a messy human.

00:37:00 And we can’t even say what the dynamics of your motion

00:37:04 will be unless we analyze all four of these layers, right?

00:37:08 If you’re Muslim at a certain time of day, guess what?

00:37:11 You’re going to be on the ground kneeling and praying, right?

00:37:14 And that has nothing to do with your biological need

00:37:15 to get on the ground or physics of gravity.

00:37:18 It is an intellectual drive that you have.

00:37:20 It’s a cultural phenomenon

00:37:22 and an intellectual belief that you carry.

00:37:23 So that’s what the four layered stack is all about.

00:37:28 It’s that a person is not only one of these things,

00:37:30 they’re all of these things at the same time.

00:37:31 It’s a superposition of dynamics that run through us

00:37:35 that make us who we are.

00:37:37 So no layer is special.

00:37:40 Not so much no layer is special,

00:37:41 each layer is just different.

00:37:44 But we are.

00:37:45 Each layer gets the participation trophy.

00:37:48 Yeah, each layer is a part of what you are.

00:37:50 You are a layer cake, right, of all these things.

00:37:52 And if we try to deny, right,

00:37:54 so many philosophies do try to deny

00:37:56 the reality of some of these things, right?

00:37:58 Some people will say, well, we’re only atoms.

00:38:01 Well, we’re not only atoms

00:38:02 because there’s a lot of other things that are only atoms.

00:38:04 I can reduce a human being to a bunch of soup

00:38:07 and they’re not the same thing,

00:38:08 even though it’s the same atoms.

00:38:09 So I think the order and the patterns

00:38:12 that emerge within humans to understand,

00:38:15 to really think about what a next generation philosophy

00:38:18 would look like, that would allow us to reason

00:38:20 about extending humans into the digital realm

00:38:22 or to interact with autonomous intelligences

00:38:25 that are not biological in nature.

00:38:27 We really need to appreciate these,

00:38:29 that human, what human beings actually are

00:38:32 is the superposition of these different layers.

00:38:34 You mentioned consciousness.

00:38:36 Are each of these layers of cake conscious?

00:38:39 Is consciousness a particular quality of one of the layers?

00:38:43 Is there like a spike if you have a consciousness detector

00:38:46 at these layers or is something that just permeates

00:38:49 all of these layers and just takes different form?

00:38:51 I believe what humans experience as consciousness

00:38:54 is something that sits on a gradient scale

00:38:57 of a general principle in the universe

00:39:00 that seems to look for order and reach for order

00:39:04 when there’s an excess of energy.

00:39:06 You know, it would be odd to say a proton is alive, right?

00:39:09 It’d be odd to say like this particular atom

00:39:12 or molecule of hydrogen gas is alive,

00:39:15 but there’s certainly something we can make assemblages

00:39:20 of these things that have autopoietic aspects to them

00:39:24 that will create structures that will, you know,

00:39:26 crystalline solids will form very interesting

00:39:28 and beautiful structures.

00:39:29 This gets kind of into weird mathematical territories.

00:39:33 You start thinking about Penrose and Game of Life stuff

00:39:35 about the generativity of math itself,

00:39:37 like the hyperreal numbers, things like that.

00:39:39 But without going down that rabbit hole,

00:39:42 I would say that there seems to be a tendency

00:39:45 in the world that when there is excess energy,

00:39:49 things will structure and pattern themselves.

00:39:51 And they will then actually furthermore try to create

00:39:53 an environment that furthers their continued stability.

00:39:58 It’s the concept of externalized extended phenotype

00:40:00 or niche construction.

00:40:02 So this is ultimately what leads to certain kinds

00:40:06 of amino acids forming certain kinds of structures

00:40:09 and so on and so forth until you get the ladder of life.

00:40:11 So what we experience as consciousness,

00:40:12 no, I don’t think cells are conscious at that level,

00:40:15 but is there something beyond mere equilibrium state biology

00:40:19 and chemistry and biochemistry

00:40:21 that drives what makes things work?

00:40:25 I think there is.

00:40:27 So Adrian Bejan has his constructal law.

00:40:29 There’s other things you can look at.

00:40:31 When you look at the life sciences

00:40:32 and you look at any kind of statistical physics

00:40:36 and statistical mechanics,

00:40:37 when you look at things far out of equilibrium,

00:40:40 when you have excess energy, what happens then?

00:40:43 Life doesn’t just make a hotter soup.

00:40:45 It starts making structure.

00:40:47 There’s something there.

00:40:48 The poetry of reaches for order

00:40:50 when there’s an excess of energy.

00:40:54 Because you brought up game of life.

00:40:57 You did it, not me.

00:40:59 I love cellular automata,

00:41:00 so I have to sort of linger on that for a little bit.

00:41:06 So cellular automata, I guess, or game of life

00:41:09 is a very simple example of reaching for order

00:41:11 when there’s an excess of energy.

00:41:14 Or reaching for order and somehow creating complexity.

00:41:17 Within this explosion of just turmoil,

00:41:22 somehow trying to construct structures.

00:41:25 And in so doing, create very elaborate

00:41:29 organism looking type things.

00:41:32 What intuition do you draw from this simple mechanism?

00:41:35 Well, I like to turn that around its head.

00:41:37 And look at it as what if every single one of the patterns

00:41:42 created life, or created, not life,

00:41:45 but created interesting patterns?

00:41:47 Because some of them don’t.

00:41:48 And sometimes you make cool gliders.

00:41:50 And other times, you start with certain things

00:41:52 and you make gliders and other things

00:41:54 that then construct like AND gates and NOT gates, right?

00:41:57 And you build computers on them.
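
For concreteness, a glider in Conway’s Game of Life takes only a few lines; this is a minimal sketch on a small wrap-around grid, with grid size and step count chosen arbitrarily:

```python
import numpy as np

def step(grid):
    """One Game of Life update: count the 8 neighbors of every cell."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    # survive with 2-3 neighbors, be born with exactly 3
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(int)

grid = np.zeros((8, 8), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a glider

for _ in range(4):  # after 4 steps the glider has moved diagonally by one cell
    grid = step(grid)
print(grid)
```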

00:41:59 All of these rules that create these patterns

00:42:00 that we can see, those are just the patterns we can see.

00:42:04 What if our subjectivity is actually limiting

00:42:06 our ability to perceive the order in all of it?

00:42:11 What if some of the things that we think are random

00:42:12 are actually not that random?

00:42:13 We’re simply not integrating at a fine enough level

00:42:16 across a broad enough time horizon.

00:42:18 And this is again, I said, we go down the rabbit holes

00:42:20 and the Penrose stuff or like Wolfram’s explorations

00:42:22 on these things.

00:42:24 There is something deep and beautiful

00:42:27 in the mathematics of all this.

00:42:28 That is hopefully one day I’ll have enough money

00:42:30 to quit work and retire and just ponder those questions.

00:42:33 But there’s something there.

00:42:34 But you’re saying there’s a ceiling to,

00:42:36 when you have enough money and you retire and you ponder,

00:42:38 there’s a ceiling to how much you can truly ponder

00:42:40 because there’s cognitive limitations

00:42:43 in what you’re able to perceive as a pattern.

00:42:46 Yeah.

00:42:47 And maybe mathematics extends your perception capabilities,

00:42:51 but it’s still finite.

00:42:53 It’s just like.

00:42:55 Yeah, the mathematics we use is the mathematics

00:42:57 that can fit in our head.

00:42:59 Yeah.

00:43:00 Did God really create the integers?

00:43:02 Or did God create all of it?

00:43:03 And we just happen at this point in time

00:43:05 to be able to perceive integers.

00:43:07 Well, he just did the positive integers.

00:43:09 She, I just said, did she create all of it?

00:43:11 And then we.

00:43:14 She just created the natural numbers

00:43:15 and then we screwed it all up with zero and then I guess.

00:43:17 Okay.

00:43:18 But we did, we created mathematical operations

00:43:21 so that we can have iterated steps

00:43:23 to approach bigger problems, right?

00:43:26 I mean, the entire point of the Arabic numeral system

00:43:29 is it’s a rubric for mapping a certain set of operations,

00:43:32 folding them into a simple little expression,

00:43:35 but that’s just the operations that we can fit in our heads.

00:43:38 There are many other operations besides, right?

00:43:41 The thing that worries me the most about aliens and humans

00:43:46 is that the aliens are all around us and we’re too dumb.

00:43:50 Yeah.

00:43:51 To see them.

00:43:52 Oh, certainly, yeah.

00:43:53 Or life, let’s say just life,

00:43:54 life of all kinds of forms or organisms.

00:43:58 You know what, just even the intelligence of organisms

00:44:01 is imperceptible to us

00:44:04 because we’re too dumb and self centered.

00:44:06 That worries me.

00:44:07 Well, we’re looking for a particular kind of thing.

00:44:09 Yeah.

00:44:10 When I was at Cornell,

00:44:11 I had a lovely professor of Asian religions,

00:44:13 Jane Marie Law,

00:44:14 and she would tell this story about a musician,

00:44:17 a Western musician who went to Japan

00:44:20 and he taught classical music

00:44:21 and could play all sorts of instruments.

00:44:24 He went to Japan and he would ask people,

00:44:27 he would basically be looking for things in the style of

00:44:30 a Western chromatic scale and these kinds of things.

00:44:34 And then finding none of it,

00:44:35 he would say, well, there’s really no music in Japan,

00:44:37 but they’re using a different scale.

00:44:38 They’re playing different kinds of instruments, right?

00:44:40 The same thing she was using as a sort of a metaphor

00:44:42 for religion as well.

00:44:43 In the West, we center a lot of religion,

00:44:45 certainly the religions of Abraham,

00:44:47 we center them around belief.

00:44:50 And in the East, it’s more about practice, right?

00:44:52 Spirituality and practice rather than belief.

00:44:54 So anyway, the point is here to your point,

00:44:57 life, we, I think so many people are so fixated

00:45:00 on certain aspects of self replication

00:45:03 or homeostasis or whatever.

00:45:06 But if we kind of broaden and generalize this thing

00:45:08 of things reaching for order,

00:45:10 under which conditions can they then create an environment

00:45:13 that sustains that order, that allows them,

00:45:17 the invention of death is an interesting thing.

00:45:20 There are some organisms on earth

00:45:21 that are thousands of years old.

00:45:23 And it’s not like they’re incredibly complex,

00:45:25 they’re actually simpler than the cells that comprise us,

00:45:28 but they never die.

00:45:29 So at some point, death was invented,

00:45:33 somewhere along the eukaryotic scale,

00:45:34 I mean, even the protists, right?

00:45:35 There’s death.

00:45:37 And why is that, along with sexual reproduction, right?

00:45:41 There is something about the renewal process,

00:45:45 something about the ability to respond

00:45:46 to a changing environment,

00:45:48 where it just becomes,

00:45:50 just killing off the old generation

00:45:51 and letting new generations try,

00:45:54 seems to be the best way to fit into the niche.

00:45:57 Human historians seem to write about wheels and fire,

00:46:00 the greatest inventions,

00:46:01 but it seems like death and sex are pretty good.

00:46:04 And they’re kind of essential inventions

00:46:06 at the very beginning.

00:46:07 At the very beginning, yeah.

00:46:08 Well, we didn’t invent them, right?

00:46:10 Well, broadly, you didn’t invent them.

00:46:13 I see us as one,

00:46:15 you particular Homo sapiens did not invent them,

00:46:17 but we together, it’s a team project,

00:46:21 just like you’re saying.

00:46:21 I think the greatest Homo sapiens invention

00:46:24 is collaboration.

00:46:25 So when you say collaboration,

00:46:29 Peter, where do ideas come from

00:46:32 and how do they take hold in society?

00:46:35 Is that the nature of collaboration?

00:46:36 Is that the basic atom of collaboration is ideas?

00:46:40 It’s not not ideas, but it’s not only ideas.

00:46:43 There’s a book I just started reading

00:46:44 called Death From A Distance.

00:46:45 Have you heard of this?

00:46:46 No.

00:46:47 It’s a really fascinating thesis,

00:46:49 which is that humans are the only conspecific,

00:46:53 the only species that can kill other members

00:46:55 of the species from range.

00:46:58 And maybe there’s a few exceptions,

00:46:59 but if you look in the animal world,

00:47:01 you see like pronghorns butting heads, right?

00:47:03 You see the alpha lion and the beta lion

00:47:05 and they take each other down.

00:47:07 Humans, we developed the ability

00:47:08 to chuck rocks at each other,

00:47:10 well, at prey, but also at each other.

00:47:11 And that means the beta male can chuck a rock

00:47:14 at the alpha male and take them down.

00:47:17 And he can throw a lot of rocks actually,

00:47:20 miss a bunch of times, but just hit once and be good.

00:47:22 So this ability to actually kill members

00:47:25 of our own species from range

00:47:27 without a threat of harm to ourselves

00:47:29 created essentially mutually assured destruction

00:47:32 where we had to evolve cooperation.

00:47:34 If we didn’t, then if we just continue to try to do,

00:47:37 like I’m the biggest monkey in the tribe

00:47:39 and I’m gonna own this tribe and you have to go,

00:47:43 if we do it that way, then those tribes basically failed.

00:47:46 And the tribes that persisted

00:47:48 and that have now given rise to the modern Homo sapiens

00:47:51 are the ones where respecting the fact

00:47:53 that we can kill each other from a range

00:47:56 without harm, like there’s an asymmetric ability

00:47:58 to snipe the leader from range.

00:48:00 That meant that we sort of had to learn

00:48:03 how to cooperate with each other, right?

00:48:05 Come back here, don’t throw that rock at me.

00:48:06 Let’s talk our differences out.

00:48:07 So violence is also part of collaboration.

00:48:10 The threat of violence, let’s say.

00:48:12 Well, the recognition, maybe the better way to put it

00:48:15 is the recognition that we have more to gain

00:48:17 by working together than the prisoner’s dilemma

00:48:21 of both of us defecting.
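
(Aside: the claim here is the standard prisoner’s dilemma ordering, in which mutual cooperation beats mutual defection. A minimal Python sketch using the conventional textbook payoffs, which are illustrative assumptions, not figures from the conversation:)

```python
# Classic prisoner's dilemma payoffs: (row player, column player).
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# Jointly, cooperating yields more than mutual defection,
# even though defecting is individually tempting.
both_cooperate = sum(PAYOFFS[("cooperate", "cooperate")])
both_defect = sum(PAYOFFS[("defect", "defect")])
assert both_cooperate > both_defect
```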

00:48:23 So mutually assured destruction in all its forms

00:48:26 is part of this idea of collaboration.

00:48:28 Well, and Eric Weinstein talks about our nuclear peace,

00:48:31 right, I mean, it kind of sucks

00:48:32 with thousands of warheads aimed at each other,

00:48:34 we mean Russia and the US, but it’s like,

00:48:36 on the other hand, we only fought proxy wars, right?

00:48:39 We did not have another World War III

00:48:41 of like hundreds of millions of people dying

00:48:43 to like machine gun fire and giant guided missiles.

00:48:47 So the original nuclear weapon is a rock

00:48:50 that we learned how to throw, essentially?

00:48:52 The original, yeah, well, the original scope of the world

00:48:54 for any human being was their little tribe.

00:48:58 I would say it still is for the most part.

00:49:00 Eric Weinstein speaks very highly of you,

00:49:05 which is very surprising to me at first

00:49:08 because I didn’t know there’s this depth to you

00:49:10 because I knew you as an amazing leader of engineers

00:49:15 and an engineer yourself and so on, so it’s fascinating.

00:49:18 Maybe just as a comment, a side tangent that we can take,

00:49:23 what’s your nature of your friendship with Eric Weinstein?

00:49:27 How did the two, how did such two interesting paths cross?

00:49:30 Is it your origins in physics?

00:49:32 Is it your interest in philosophy

00:49:35 and the ideas of how the world works?

00:49:37 What is it?

00:49:38 It’s very random, Eric found me.

00:49:40 He actually found Travis and I.

00:49:43 Travis Oliphant.

00:49:44 Oliphant, yeah, we were both working

00:49:45 at a company called Enthought back in the mid 2000s

00:49:48 and we were doing a lot of consulting

00:49:50 around scientific Python and we’d made some tools

00:49:54 and Eric was trying to use some of these Python tools

00:49:57 to visualize, he had a fiber bundle approach

00:50:00 to modeling certain aspects of economics.

00:50:03 He was doing this and that’s how he kind of got in touch

00:50:05 with us and so.

00:50:06 This was in the early.

00:50:08 This was mid 2000s, oh seven timeframe, oh six, oh seven.

00:50:13 Eric Weinstein trying to use Python.

00:50:16 Right, to visualize fiber bundles.

00:50:18 Using some of the tools that we had built

00:50:20 in the open source.

00:50:21 That’s somehow entertaining to me, the thought of that.

00:50:24 It’s very funny but then we met with him a couple times,

00:50:27 a really interesting guy and then in the wake

00:50:28 of the oh seven, oh eight kind of financial collapse,

00:50:31 he helped organize with Lee Smolin a symposium

00:50:35 at the Perimeter Institute about okay, well clearly,

00:50:39 big finance can’t be trusted, government’s in its pockets

00:50:42 with regulatory capture, what the F do we do?

00:50:45 And all sorts of people, Nassim Taleb was there

00:50:47 and Andy Lo from MIT was there and Bill Janeway,

00:50:51 I mean just a lot of top billing people were there

00:50:54 and he invited me and Travis and another one

00:50:58 of our coworkers, Robert Kern, who is anyone

00:51:01 in the SciPy, NumPy community knows Robert.

00:51:03 Really great guy.

00:51:04 So the three of us also got invited to go to this thing

00:51:06 and that’s where I met Brett Weinstein

00:51:07 for the first time as well.

00:51:09 Yeah, I knew him before he got all famous

00:51:11 for unfortunate reasons, I guess.

00:51:13 But anyway, so we met then and kind of had a friendship

00:51:19 throughout since then.

00:51:21 You have a depth of thinking that kind of runs

00:51:26 with Eric in terms of just thinking about the world deeply

00:51:28 and thinking philosophically and then there’s Eric’s

00:51:31 interest in programming.

00:51:33 I actually have never, you know, he’ll bring up programming

00:51:38 to me quite a bit as a metaphor for stuff.

00:51:41 But I never kind of pushed the point of like,

00:51:44 what’s the nature of your interest in programming?

00:51:46 I think he saw it probably as a tool.

00:51:48 Yeah, absolutely.

00:51:49 That you visualize, to explore mathematics

00:51:52 and explore physics and I was wondering like,

00:51:55 what’s his depth of interest and also his vision

00:51:59 for what programming would look like in the future.

00:52:05 Have you had interaction with him, like discussion

00:52:08 in the space of Python, programming?

00:52:09 Well, in the sense of sometimes he asks me,

00:52:11 why is this stuff still so hard?

00:52:13 Yeah, you know, everybody’s a critic.

00:52:18 But actually, no, Eric.

00:52:20 Programming, you mean, like in general?

00:52:21 Yes, yes, well, not programming in general,

00:52:23 but certain things in the Python ecosystem.

00:52:25 But he actually, I think what I find in listening

00:52:29 to some of his stuff is that he does use

00:52:31 programming metaphors a lot, right?

00:52:33 He’ll talk about APIs or object oriented

00:52:35 and things like that.

00:52:36 So I think that’s a useful set of frames

00:52:39 for him to draw upon for discourse.

00:52:42 I haven’t pair programmed with him in a very long time.

00:52:45 You’ve previously pair coded with Eric.

00:52:47 Well, I mean, I look at his code trying to help

00:52:49 like put together some of the visualizations

00:52:50 around these things.

00:52:51 But it’s been a very, not really pair programmed,

00:52:54 but like even looked at his code, right?

00:52:55 I mean.

00:52:56 How legendary would that be, a Git repo

00:53:01 with Peter Wang and Eric Weinstein?

00:53:02 Well, honestly, Robert Kern did all the heavy lifting.

00:53:05 So I have to give credit where credit is due.

00:53:06 Robert is the silent but incredibly deep, quiet,

00:53:10 not silent, but quiet, but incredibly deep individual

00:53:13 at the heart of a lot of those things

00:53:14 that Eric was trying to do.

00:53:16 But we did have, you know, as Travis and I

00:53:19 were starting our company in 2012 timeframe,

00:53:23 we went to New York.

00:53:24 Eric was still in New York at the time.

00:53:26 He hadn’t moved to, this is before he joined Thiel Capital.

00:53:29 We just had like a steak dinner somewhere.

00:53:31 Maybe it was Keens, I don’t know, somewhere in New York.

00:53:33 So it was me, Travis, Eric, and then Wes McKinney,

00:53:36 the creator of pandas, and then Wes’s then business partner,

00:53:39 Adam, the five of us sat around having this,

00:53:42 just a hilarious time, amazing dinner.

00:53:45 I forget what all we talked about,

00:53:46 but it was one of those conversations,

00:53:49 which I wish as soon as COVID is over,

00:53:51 maybe Eric and I can sit down.

00:53:53 Recreate.

00:53:53 Recreate it somewhere in LA, or maybe he comes here,

00:53:56 because a lot of cool people are here in Austin, right?

00:53:58 Exactly.

00:53:59 Yeah, we’re all here.

00:53:59 He should come here.

00:54:00 Come here.

00:54:01 Yeah.

00:54:02 So he uses the metaphor source code sometimes

00:54:04 to talk about physics.

00:54:05 We figure out our own source code.

00:54:07 So you with a physics background

00:54:10 and somebody who’s quite a bit of an expert in source code,

00:54:14 do you think we’ll ever figure out our own source code

00:54:17 in the way that Eric means?

00:54:19 Do you think we’ll figure out the nature of reality?

00:54:20 Well, I think we’re constantly working on that problem.

00:54:21 I mean, I think we’ll make more and more progress.

00:54:24 For me, there’s some things I don’t really doubt too much.

00:54:28 Like, I don’t really doubt that one day

00:54:29 we will create a synthetic, maybe not fully in silicon,

00:54:34 but a synthetic approach to

00:54:39 cognition that rivals the biological

00:54:42 20 watt computers in our heads.

00:54:44 What’s cognition here?

00:54:46 Cognition.

00:54:46 Which aspect?

00:54:47 Perception, attention, memory, recall,

00:54:49 asking better questions.

00:54:51 That for me is a measure of intelligence.

00:54:53 Doesn’t Roomba vacuum cleaner already do that?

00:54:55 Or do you mean, oh, it doesn’t ask questions.

00:54:57 I mean, no, it’s, I mean, I have a Roomba,

00:55:00 but it’s not even as smart as my cat, right?

00:55:03 Yeah, but it asks questions about what is this wall?

00:55:05 It now, new feature asks, is this poop or not, apparently.

00:55:08 Yes, a lot of our current cybernetic systems,

00:55:11 it’s a cybernetic system.

00:55:12 It will go and it will happily vacuum up some poop, right?

00:55:14 The older generations would.

00:55:16 The new one, just released, does not vacuum up the poop.

00:55:19 Okay.

00:55:20 This is a commercial for.

00:55:21 I wonder if it still gets stuck

00:55:21 under the first rung of my stairs.

00:55:23 In any case, these cybernetic systems we have,

00:55:27 they are molded, they’re designed to be sent off

00:55:32 into a relatively static environment.

00:55:34 And whatever dynamic things happen in the environment,

00:55:36 they have a very limited capacity to respond to.

00:55:38 A human baby, a human toddler of 18 months of age

00:55:43 has more capacity to manage its own attention

00:55:45 and its own capacity to make better sense of the world

00:55:49 than the most advanced robots today.

00:55:51 So again, my cat, I think can do a better job of my two

00:55:55 and they’re both pretty clever.

00:55:56 So I do think though, back to my kind of original point,

00:55:59 I think that it’s not, for me, it’s not a question at all

00:56:02 that we will be able to create synthetic systems

00:56:05 that are able to do this better than the human,

00:56:09 at an equal level or better than the human mind.

00:56:11 It’s also for me, not a question that we will be able

00:56:16 to put them alongside humans

00:56:20 so that they capture the full broad spectrum

00:56:23 of what we are seeing as well.

00:56:25 And also looking at our responses,

00:56:28 listening to our responses,

00:56:28 even maybe measuring certain vital signs about us.

00:56:32 So in this kind of sidecar mode,

00:56:34 a greater intelligence could use us

00:56:37 and our whatever 80 years of life to train itself up

00:56:42 and then be a very good simulacrum of us moving forward.

00:56:45 So who is in the sidecar

00:56:48 in that picture of the future exactly?

00:56:50 The baby version of our immortal selves.

00:56:52 Okay, so once the baby grows up,

00:56:56 is there any use for humans?

00:56:58 I think so.

00:56:59 I think that out of epistemic humility,

00:57:03 we need to keep humans around for a long time.

00:57:05 And I would hope that anyone making those systems

00:57:07 would believe that to be true.

00:57:10 Out of epistemic humility,

00:57:11 what’s the nature of the humility that?

00:57:13 That we don’t know what we don’t know.

00:57:16 So we don’t.

00:57:18 Right?

00:57:19 So we don’t know.

00:57:20 First we have to build systems

00:57:21 that help us do the things that we do know about

00:57:24 that can then probe the unknowns that we know about.

00:57:26 But the unknown unknowns, we don’t know.

00:57:28 We could always know.

00:57:30 Nature is the one thing

00:57:31 that is infinitely able to surprise us.

00:57:33 So we should keep biological humans around

00:57:35 for a very, very, very long time.

00:57:37 Even after our immortal selves have transcended

00:57:40 and have gone off to explore other worlds,

00:57:42 gone to go communicate with the lifeforms living in the sun

00:57:45 or whatever else.

00:57:46 So I think that’s,

00:57:49 for me, these seem like things that are going to happen.

00:57:53 Like I don’t really question that,

00:57:54 that they’re gonna happen.

00:57:55 Assuming we don’t completely destroy ourselves.

00:57:58 Is it possible to create an AI system

00:58:02 that you fall in love with and it falls in love with you

00:58:06 and you have a romantic relationship with it?

00:58:08 Or a deep friendship, let’s say.

00:58:10 I would hope that that is the design criteria

00:58:12 for any of these systems.

00:58:14 If we cannot have a meaningful relationship with it,

00:58:18 then it’s still just a chunk of silicon.

00:58:20 So then what is meaningful?

00:58:21 Because back to sugar.

00:58:23 Well, sugar doesn’t love you back, right?

00:58:25 So the computer has to love you back.

00:58:26 And what does love mean?

00:58:28 Well, in this context, for me, love,

00:58:30 I’m gonna take a page from Alain de Botton.

00:58:32 Love means that it wants to help us

00:58:34 become the best version of ourselves.

00:58:36 Yes, that’s beautiful.

00:58:39 That’s a beautiful definition of love.

00:58:40 So what role does love play in the human condition

00:58:45 at the individual level and at the group level?

00:58:48 Because you were kind of saying that humans,

00:58:51 we should really consider humans

00:58:52 both at the individual and the group and the societal level.

00:58:55 What’s the role of love in this whole thing?

00:58:56 We talked about sex, we talked about death,

00:58:59 thanks to the bacteria that invented it.

00:59:02 At which point did we invent love, by the way?

00:59:04 I mean, is that also?

00:59:05 No, I think love is the start of it all.

00:59:08 And the feelings of, and this gets sort of beyond

00:59:13 just romantic, sensual, whatever kind of things,

00:59:16 but actually genuine love as we have for another person.

00:59:19 Love as it would be used in a religious text, right?

00:59:22 I think that capacity to feel love

00:59:25 more than consciousness, that is the universal thing.

00:59:28 Our feeling of love is actually a sense

00:59:30 of that generativity.

00:59:31 When we can look at another person

00:59:33 and see that they can be something more than they are,

00:59:37 and more than just a pigeonhole we might stick them in.

00:59:42 I mean, I think there’s, in any religious text,

00:59:44 you’ll find voiced some concept of this,

00:59:47 that you should see the grace of God in the other person.

00:59:50 They’re made in the spirit of the love

00:59:54 that God feels for his creation or her creation.

00:59:57 And so I think this thing is actually the root of it.

01:00:00 So I would say, I don’t think molecules of water

01:00:04 feel consciousness, have consciousness,

01:00:06 but there is some proto micro quantum thing of love.

01:00:10 That’s the generativity when there’s more energy

01:00:14 than what they need to maintain equilibrium.

01:00:16 And that when you sum it all up is something that leads to,

01:00:19 I mean, I had my mind blown one day as an undergrad

01:00:23 at the physics computer lab.

01:00:24 I logged in and when you log into bash for a long time,

01:00:28 there was a little fortune that would come out.

01:00:29 And it said, man was created by water

01:00:32 to carry itself uphill.

01:00:33 And I was logging into work on some problem set

01:00:37 and I logged in and I saw that and I just said,

01:00:40 son of a bitch, I just, I logged out

01:00:43 and I went to the coffee shop and I got a coffee

01:00:45 and I sat there on the quad and I’m like,

01:00:47 you know, it’s not wrong and yet WTF, right?

01:00:53 So when you look at it that way,

01:00:55 it’s like, yeah, okay, non equilibrium physics is a thing.

01:00:59 And so when we think about love,

01:01:00 when we think about these kinds of things, I would say

01:01:05 that in the modern day human condition,

01:01:08 there’s a lot of talk about freedom and individual liberty

01:01:12 and rights and all these things,

01:01:14 but that’s very Hegelian, it’s very kind of following

01:01:18 from the Western philosophy of the individual as sacrosanct,

01:01:22 but it’s not really couched I think the right way

01:01:26 because it should be how do we maximize people’s ability

01:01:29 to love each other, to love themselves first,

01:01:32 to love each other, their responsibilities

01:01:34 to the previous generation, to the future generations.

01:01:37 Those are the kinds of things

01:01:39 that should be our design criteria, right?

01:01:41 Those should be what we start with to then come up

01:01:45 with the philosophies of self and of rights

01:01:48 and responsibilities, but that love being at the center

01:01:52 of it, I think when we design systems for cognition,

01:01:56 it should absolutely be built that way.

01:01:58 I think if we simply focus on efficiency and productivity,

01:02:02 these kind of very industrial era,

01:02:05 all the things that Marx had issues with, right?

01:02:08 Those, that’s a way to go and really I think go off

01:02:11 the deep end in the wrong way.

01:02:13 So one of the interesting consequences of thinking of life

01:02:19 in this hierarchical way of an individual human

01:02:22 and then there’s groups and there’s societies

01:02:25 is I believe that you believe that corporations are people.

01:02:31 So this is a kind of a politically dense idea,

01:02:36 all those kinds of things.

01:02:37 If we just throw politics aside,

01:02:39 if we throw all of that aside,

01:02:41 in which sense do you believe that corporations are people?

01:02:46 And how does love connect to that?

01:02:47 Right, so the belief is that groups of people

01:02:52 have some kind of higher level, I would say mesoscopic

01:02:55 claim to agency.

01:02:57 So where do I, let’s start with this.

01:03:00 Most people would say, okay, individuals have claims

01:03:03 to agency and sovereignty.

01:03:05 Nations, we certainly act as if nations,

01:03:07 so at a very large, large scale,

01:03:09 nations have rights to sovereignty and agency.

01:03:13 Like everyone plays the game of modernity

01:03:15 as if that’s true, right?

01:03:16 We believe France is a thing,

01:03:17 we believe the United States is a thing.

01:03:18 But to say that groups of people at a smaller level

01:03:23 than that, like a family unit is the thing.

01:03:26 Well, in our laws, we actually do encode this concept.

01:03:30 I believe that in a relationship and a marriage, right,

01:03:33 one partner can sue for loss of consortium, right?

01:03:37 If someone breaks up the marriage or whatever.

01:03:39 So these are concepts that even in law,

01:03:41 we do respect that there is something about the union

01:03:44 and about the family.

01:03:45 So for me, I don’t think it’s so weird to think

01:03:48 that groups of people have a right to,

01:03:51 a claim to rights and sovereignty of some degree.

01:03:54 I mean, we look at our clubs, we look at churches.

01:03:59 These are, we talk about these collectives of people

01:04:02 as if they have a real agency to them, and they do.

01:04:05 But I think if we take that one step further and say,

01:04:08 okay, they can accrue resources.

01:04:10 Well, yes, check, you know, and by law they can.

01:04:13 They can own land, they can engage in contracts,

01:04:17 they can do all these different kinds of things.

01:04:18 So we in legal terms support this idea

01:04:22 that groups of people have rights.

01:04:26 Where we go wrong on this stuff

01:04:28 is that the most popular version of this

01:04:31 is the for profit absentee owner corporation

01:04:35 that then is able to amass larger resources

01:04:38 than anyone else in the landscape, anything else,

01:04:40 any other entity of equivalent size.

01:04:42 And they’re able to essentially bully around individuals,

01:04:45 whether it’s laborers, whether it’s people

01:04:47 whose resources they want to capture.

01:04:48 They’re also able to bully around

01:04:50 our system of representation,

01:04:52 which is still tied to individuals, right?

01:04:55 So I don’t believe that’s correct.

01:04:58 I don’t think it’s good that they, you know,

01:05:01 they’re people, but they’re assholes.

01:05:02 I don’t think that corporations as people

01:05:03 acting like assholes is a good thing.

01:05:05 But the idea that collectives and collections of people

01:05:08 that we should treat them philosophically

01:05:10 as having some agency and some mass,

01:05:15 at a mesoscopic level, I think that’s an important thing

01:05:18 because one thing I do think we underappreciate sometimes

01:05:22 is the fact that relationships have relationships.

01:05:26 So it’s not just individuals

01:05:27 having relationships with each other.

01:05:29 But if you have eight people seated around a table, right?

01:05:32 Each person has a relationship with each of the others

01:05:34 and that’s obvious.

01:05:35 But then if it’s four couples,

01:05:37 each couple also has a relationship

01:05:39 with each of the other couples, right?

01:05:41 The dyads do.

01:05:42 And if it’s couples, but one is the, you know,

01:05:45 father and mother older, and then, you know,

01:05:48 one of their children and their spouse,

01:05:50 that family unit of four has a relationship

01:05:53 with the other family unit of four.
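
(Aside: the counting in this table example is simple combinatorics. A minimal Python sketch, assuming eight people forming four couples as described:)

```python
from math import comb

people, couples = 8, 4

# Person-to-person relationships around the table.
person_pairs = comb(people, 2)   # 28 pairwise relationships

# Couple-to-couple relationships: each pair of dyads is itself
# a relationship between relationships.
couple_pairs = comb(couples, 2)  # 6 dyad-to-dyad relationships

print(person_pairs, couple_pairs)  # 28 6
```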

01:05:55 So the idea that relationships have relationships

01:05:57 is something that we intuitively know

01:05:59 in navigating the social landscape,

01:06:01 but it’s not something I hear expressed like that.

01:06:04 It’s certainly not something that is,

01:06:06 I think, taken into account very well

01:06:08 when we design these kinds of things.

01:06:09 So I think the reason why I care a lot about this

01:06:14 is because I think the future of humanity

01:06:16 requires us to form better sense making,

01:06:19 collective sense making units at something, you know,

01:06:23 around Dunbar number, you know, half to five X Dunbar.

01:06:28 And that’s very different than right now

01:06:30 where we defer sense making

01:06:33 to massive aging zombie institutions.

01:06:37 Or we just do it ourselves.

01:06:38 Go it alone.

01:06:39 Go to the dark forest of the internet by ourselves.

01:06:41 So that’s really interesting.

01:06:42 So you’ve talked about agency,

01:06:45 I think maybe calling it a convenient fiction

01:06:47 at all these different levels.

01:06:49 So even at the human individual level,

01:06:52 it’s kind of a fiction.

01:06:53 We all believe, because we are, like you said,

01:06:55 made of cells and cells are made of atoms.

01:06:57 So that’s a useful fiction.

01:06:58 And then there’s nations that seems to be a useful fiction,

01:07:02 but it seems like some fictions are better than others.

01:07:06 You know, there’s a lot of people that argue

01:07:08 the fiction of nation is a bad idea.

01:07:11 One of them lives two doors down from me,

01:07:13 Michael Malice, he’s an anarchist.

01:07:16 You know, I’m sure there’s a lot of people

01:07:18 who are into meditation that believe the idea,

01:07:21 this useful fiction of agency of an individual

01:07:24 is troublesome as well.

01:07:26 We need to let go of that in order to truly,

01:07:29 like to transcend, I don’t know.

01:07:32 I don’t know what words you want to use,

01:07:33 but suffering or to elevate the experience of life.

01:07:38 So you’re kind of arguing that,

01:07:40 okay, so we have some of these useful fictions of agency.

01:07:44 We should add a stronger fiction that we tell ourselves

01:07:49 about the agency of groups in the hundreds

01:07:52 of half a Dunbar’s number to 5X Dunbar’s number.

01:07:57 Yeah, something on that order.

01:07:58 And we call them fictions,

01:07:59 but really they’re rules of the game, right?

01:08:01 Rules that we feel are fair or rules that we consent to.

01:08:05 Yeah, I always question the rules

01:08:07 when I lose, like in Monopoly.

01:08:08 That’s when I usually question the rules.

01:08:09 When I’m winning, I don’t question the rules.

01:08:11 We should play a game of Monopoly someday.

01:08:12 There’s a trippy version of it that we could do.

01:08:15 What kind?

01:08:16 Contract Monopoly was introduced to me by a friend of mine,

01:08:19 where you can write contracts on future earnings

01:08:23 or landing on various things.

01:08:24 And you can hand out like, you know,

01:08:26 you can say the first three times you land

01:08:28 on Park Place, it’s free or whatever.

01:08:30 And then you can start trading those contracts for money.

01:08:33 And then you create a human civilization

01:08:36 and somehow Bitcoin comes into it.

01:08:38 Okay, but some of these.

01:08:40 Actually, I bet if me and you and Eric sat down

01:08:43 to play a game of Monopoly and we were to make NFTs

01:08:45 out of the contracts we wrote, we could make a lot of money.

01:08:48 Now it’s a terrible idea.

01:08:49 I would never do it,

01:08:50 but I bet we could actually sell the NFTs around.

01:08:52 I have other ideas to make money that I could tell you

01:08:56 and they’re all terrible ideas.

01:08:58 Yeah, including cat videos on the internet.

01:09:02 Okay, but some of these rules of the game,

01:09:04 some of these fictions are,

01:09:06 it seems like they’re better than others.

01:09:09 They have worked this far to cohere human,

01:09:13 to organize human collective action.

01:09:14 But you’re saying something about,

01:09:16 especially this technological age

01:09:19 requires modified fictions, stories of agency.

01:09:23 Why the Dunbar number?

01:09:25 And also, you know, how do you select the group of people?

01:09:28 You know, Dunbar numbers, I think I have the sense

01:09:31 that it’s overused as a kind of law

01:09:36 that somehow we can have deep human connection at this scale.

01:09:41 Like some of it feels like an interface problem too.

01:09:45 It feels like if I have the right tools,

01:09:48 I can deeply connect with a larger number of people.

01:09:51 It just feels like there’s a huge value

01:09:55 to interacting just in person, getting to share

01:09:59 traumatic experiences together,

01:10:00 beautiful experiences together.

01:10:02 There’s other experiences like that in the digital space

01:10:06 that you can share.

01:10:07 It just feels like Dunbar’s number

01:10:09 could be expanded significantly,

01:10:10 perhaps not to the level of millions and billions,

01:10:15 but it feels like it could be expanded.

01:10:16 So how do we find the right interface, you think,

01:10:21 for having a little bit of a collective here

01:10:24 that has agency?

01:10:26 You’re right that there’s many different ways

01:10:28 that we can build trust with each other.

01:10:30 Yeah.

01:10:30 My friend Joe Edelman talks about a few different ways

01:10:33 that, you know, mutual appreciation, trustful conflict,

01:10:39 just experiencing something like, you know,

01:10:41 there’s a variety of different things that we can do,

01:10:43 but all those things take time and you have to be present.

01:10:48 The less present you are, I mean, there’s just, again,

01:10:50 a no free lunch principle here.

01:10:51 The less present you are, the more of them you can do,

01:10:54 but then the less connection you build.

01:10:56 So I think there is sort of a human capacity issue

01:10:59 around some of these things.

01:11:00 Now, that being said, if we can use certain technologies,

01:11:04 so for instance, if I write a little monograph

01:11:07 on my view of the world,

01:11:08 you read it asynchronously at some point,

01:11:10 and you’re like, wow, Peter, this is great.

01:11:11 Here’s mine.

01:11:12 I read it.

01:11:13 I’m like, wow, Lex, this is awesome.

01:11:15 We can be friends without having to spend 10 years,

01:11:18 you know, figuring all this stuff out together.

01:11:20 We just read each other’s thing and be like,

01:11:22 oh yeah, this guy’s exactly in my wheelhouse

01:11:24 and vice versa.

01:11:26 And we can then, you know, connect just a few times a year

01:11:30 and maintain a high trust relationship.

01:11:33 It can be expanded a little bit,

01:11:34 but it also requires,

01:11:35 these things are not all technological in nature.

01:11:37 It requires the individual themselves

01:11:39 to have a certain level of capacity,

01:11:41 to have a certain lack of neuroticism, right?

01:11:44 If you want to use like the OCEAN Big Five sort of model,

01:11:48 people have to be pretty centered.

01:11:49 The less centered you are,

01:11:50 the fewer authentic connections you can really build

01:11:52 for a particular unit of time.

01:11:54 It just takes more time.

01:11:55 Other people have to put up with your crap.

01:11:57 Like there’s just a lot of the stuff

01:11:58 that you have to deal with

01:12:00 if you are not so well balanced, right?

01:12:02 So yes, we can help people get better

01:12:04 to where they can develop more relationships faster,

01:12:06 and then you can maybe expand Dunbar number by quite a bit,

01:12:09 but you’re not going to do it.

01:12:10 I think it’s going to be hard to get it beyond 10X,

01:12:12 kind of the rough swag of what it is, you know?

01:12:16 Well, don’t you think that AI systems could be an addition

01:12:20 to the Dunbar’s number?

01:12:22 So like why?

01:12:23 Do you count as one system or multiple AI systems?

01:12:25 Multiple AI systems.

01:12:26 So I do believe that AI systems,

01:12:28 for them to integrate into human society as it is now,

01:12:31 have to have a sense of agency.

01:12:32 So there has to be an individual

01:12:35 because otherwise we wouldn’t relate to them.

01:12:37 We could engage certain kinds of individuals

01:12:40 to make sense of them for us and be almost like,

01:12:42 did you ever watch Star Trek?

01:12:44 Like Voyager, like there’s the Vorta,

01:12:46 who are like the interfaces,

01:12:47 the ambassadors for the Dominion.

01:12:50 We may have ambassadors that speak

01:12:53 on behalf of these systems.

01:12:54 They’re like the Mentats of Dune, maybe,

01:12:56 or something like this.

01:12:57 I mean, we already have this to some extent.

01:12:59 If you look at the biggest sort of,

01:13:01 I wouldn’t say AI system,

01:13:02 but the biggest cybernetic system in the world

01:13:04 is the financial markets.

01:13:05 It runs outside of any individual’s control,

01:13:08 and you have an entire stack of people on Wall Street,

01:13:09 from Wall Street analysts to CNBC reporters, whatever.

01:13:13 They’re all helping to communicate what does this mean?

01:13:16 You know, like Jim Cramer,

01:13:18 like coming around and yelling and stuff.

01:13:19 Like all of these people are part of that lowering

01:13:22 of the complexity there to make sense,

01:13:26 you know, to help do sense making for people

01:13:28 at whatever capacity they’re at.

01:13:29 And I don’t see this changing with AI systems.

01:13:31 I think you would have ringside commentators

01:13:33 talking about all this stuff

01:13:34 that this AI system is trying to do over here, over here,

01:13:36 because it’s actually a super intelligence.

01:13:39 So if you want to talk about humans interfacing,

01:13:40 making first contact with the super intelligence,

01:13:42 we’re already there.

01:13:43 We do it pretty poorly.

01:13:44 And if you look at the gradient of power and money,

01:13:47 what happens is the people closest to it

01:13:48 will absolutely exploit their distance

01:13:50 for personal financial gain.

01:13:54 So we should look at that and be like,

01:13:56 oh, well, that’s probably what the future

01:13:57 will look like as well.

01:13:58 But nonetheless, I mean,

01:14:00 we’re already doing this kind of thing.

01:14:01 So in the future, we can have AI systems,

01:14:03 but you’re still gonna have to trust people

01:14:05 to bridge the sense making gap to them.

01:14:08 See, I just feel like there could be

01:14:10 like millions of AI systems that have agency.

01:14:15 You have,

01:14:17 when you say one super intelligence,

01:14:19 super intelligence in that context means

01:14:22 it’s able to solve particular problems extremely well.

01:14:26 But there’s some aspect of human like intelligence

01:14:29 that’s necessary to be integrated into human society.

01:14:32 So not financial markets,

01:14:33 not sort of weather prediction systems,

01:14:36 or I don’t know, logistics optimization.

01:14:39 I’m more referring to things that you interact with

01:14:43 on the intellectual level.

01:14:45 And that I think requires,

01:14:47 there has to be a backstory.

01:14:48 There has to be a personality.

01:14:50 I believe it has to fear its own mortality in a genuine way.

01:14:53 Like there has to be all,

01:14:56 many of the elements that we humans experience

01:14:59 that are fundamental to the human condition,

01:15:01 because otherwise we would not have

01:15:03 a deep connection with it.

01:15:05 But I don’t think having a deep connection with it

01:15:07 is necessarily going to stop us from building a thing

01:15:10 that has quite an alien intelligence aspect here.

01:15:13 So the other kind of alien intelligence on this planet

01:15:16 is the octopuses or octopodes

01:15:18 or whatever you wanna call them.

01:15:19 Octopi. Octopi, yeah.

01:15:21 There’s a little controversy

01:15:22 as to what the plural is, I guess.

01:15:23 But an octopus. I look forward to your letters.

01:15:26 Yeah, an octopus,

01:15:30 it really acts as a collective intelligence

01:15:32 of eight intelligent arms, right?

01:15:34 Its arms have a tremendous amount of neural density to them.

01:15:37 And I see if we can build,

01:15:40 I mean, just let’s go with what you’re saying.

01:15:42 If we build a singular intelligence

01:15:44 that interfaces with humans that has a sense of agency

01:15:48 so it can run the cybernetic loop

01:15:49 and develop its own theory of mind

01:15:51 as well as its theory of action,

01:15:52 all these things, I agree with you

01:15:54 that that’s the necessary components

01:15:56 to build a real intelligence, right?

01:15:57 There’s gotta be something at stake.

01:15:58 It’s gotta make a decision.

01:16:00 It’s gotta then run the OODA loop.

01:16:01 Okay, so we build one of those.

01:16:03 Well, if we can build one of those,

01:16:03 we can probably build 5 million of them.

01:16:05 So we build 5 million of them.

01:16:07 And if their cognitive systems are already digitized

01:16:09 and already kind of there,

01:16:12 we stick an antenna on each of them,

01:16:13 bring it all back to a hive mind

01:16:15 that maybe doesn’t make all the individual decisions

01:16:17 for them, but treats each one

01:16:19 as almost like a neuronal input

01:16:21 of a much higher bandwidth and fidelity,

01:16:23 going back to a central system

01:16:25 that is then able to perceive much broader dynamics

01:16:30 that we can’t see.

01:16:31 In the same way that a phased array radar, right?

01:16:32 You think about how phased array radar works.

01:16:34 It’s just sensitivity.

01:16:36 It’s just radars, and then it’s hypersensitivity

01:16:39 and really great timing between all of them.

01:16:41 And with a flat array,

01:16:42 it’s as good as a curved radar dish, right?
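
(Aside: the mechanism gestured at here is delay-and-sum beamforming: many fixed elements with precise relative timing can steer sensitivity the way a curved dish shapes it. A minimal one-dimensional Python sketch; the element count, spacing, and frequency are illustrative assumptions:)

```python
import numpy as np

freq = 1000.0                # signal frequency in Hz, illustrative
c = 343.0                    # wave speed in m/s (e.g. sound), illustrative
wavelength = c / freq
n = 8                        # number of array elements
positions = np.arange(n) * wavelength / 2  # half-wavelength spacing

def array_gain(steer_deg, arrival_deg):
    """Normalized gain of the steered array for a plane wave."""
    k = 2 * np.pi / wavelength
    steer = k * positions * np.sin(np.radians(steer_deg))
    arrive = k * positions * np.sin(np.radians(arrival_deg))
    # Sum per-element phasors after applying steering delays.
    return abs(np.exp(1j * (arrive - steer)).sum()) / n

print(array_gain(30, 30))   # ~1.0: arrivals from 30 degrees add coherently
print(array_gain(30, -40))  # < 1.0: off-target arrivals are attenuated
```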

01:16:44 So with these things,

01:16:45 it’s a phased array of cybernetic systems

01:16:47 that’ll give the centralized intelligence

01:16:51 much, much better, a much higher fidelity understanding

01:16:55 of what’s actually happening in the environment.

01:16:56 But the more power,

01:16:57 the more understanding the central super intelligence has,

01:17:02 the dumber the individual like fingers

01:17:06 of this intelligence are, I think.

01:17:08 I think you…

01:17:08 Not necessarily.

01:17:09 In my sense…

01:17:10 I don’t see why that has to be.

01:17:11 This argument, there has to be,

01:17:13 the experience of the individual agent

01:17:15 has to have the full richness of the human like experience.

01:17:20 You have to be able to be driving the car in the rain,

01:17:23 listening to Bruce Springsteen,

01:17:25 and all of a sudden break out in tears

01:17:28 because remembering something that happened to you

01:17:30 in high school.

01:17:31 We can implant those memories

01:17:32 if that’s really needed.

01:17:33 But no, I’m…

01:17:33 No, but the central agency,

01:17:34 like I guess I’m saying for, in my view,

01:17:37 for intelligence to be born,

01:17:39 you have to have a decentralization.

01:17:43 Like each one has to struggle and reach.

01:18:47 So each one with excess energy has to reach for order

01:17:51 as opposed to a central place doing so.

01:17:54 Have you ever read like some sci fi

01:17:55 where there’s like hive minds?

01:17:58 Like Vernor Vinge, I think, has one of these.

01:18:01 And then some of the stuff from the Commonwealth Saga,

01:18:05 the idea that you’re an individual,

01:18:06 but you’re connected with like a few other individuals

01:18:09 telepathically as well.

01:18:10 And together you form a swarm.

01:18:12 So if you are, I ask you,

01:18:14 what do you think is the experience of if you are like,

01:18:18 well, a Borg, right?

01:18:18 If you are one, if you’re part of this hive mind,

01:18:22 outside of all the aesthetics, forget the aesthetics,

01:18:25 internally, what is your experience like?

01:18:28 Because I have a theory as to what that looks like.

01:18:30 The one question I have for you about that experience is

01:18:34 how much is there a feeling of freedom, of free will?

01:18:38 Because I obviously as a human, very unbiased,

01:18:43 but also somebody who values freedom and biased,

01:18:46 it feels like the experience of freedom is essential for

01:18:52 trying stuff out, to being creative

01:18:55 and doing something truly novel, which is at the core of.

01:18:59 Yeah, well, I don’t think you have to lose any freedom

01:19:00 when you’re in that mode.

01:19:02 Because I think what happens is we think,

01:19:04 we still think, I mean, you’re still thinking about this

01:19:06 in a sense of a top down command and control hierarchy,

01:19:09 which is not what it has to be at all.

01:19:12 I think the experience, so I’ll just show my cards here.

01:19:16 I think the experience of being a robot in that robot swarm,

01:19:19 a robot who has agency over their own local environment

01:19:22 that’s doing sense making

01:19:23 and reporting it back to the hive mind,

01:19:25 I think that robot’s experience would be one,

01:19:28 when the hive mind is working well,

01:19:31 it would be an experience of like talking to God, right?

01:19:34 That you essentially are reporting to,

01:19:37 you’re sort of saying, here’s what I see.

01:19:38 I think this is what’s gonna happen over here.

01:19:40 I’m gonna go do this thing.

01:19:41 Because I think if I’m gonna do this,

01:19:42 this will make this change happen in the environment.

01:19:45 And then God, she may tell you, that’s great.

01:19:50 And in fact, your brothers and sisters will join you

01:19:52 to help make this go better, right?

01:19:54 And then she can let your brothers and sisters know,

01:19:56 hey, Peter’s gonna go do this thing.

01:19:58 Would you like to help him?

01:19:59 Because we think that this will make this thing go better.

01:20:01 And they’ll say, yes, we’ll help him.

01:20:03 So the whole thing could be actually very emergent.

01:20:05 The sense of, what does it feel like to be a cell

01:20:09 in a network that is alive, that is generative.

01:20:11 And I think actually the feeling is serendipity.

01:20:16 That there’s random order, not random disorder or chaos,

01:20:20 but random order, just when you need to hear Bruce Springsteen,

01:20:24 you turn on the radio and bam, it’s Bruce Springsteen, right?

01:20:28 That feeling of serendipity, I feel like,

01:20:30 this is a bit of a flight of fancy,

01:20:31 but every cell in your body must have,

01:20:35 what does it feel like to be a cell in your body?

01:20:37 When it needs sugar, there’s sugar.

01:20:39 When it needs oxygen, there’s just oxygen.

01:20:41 Now, when it needs to go and do its work

01:20:43 and pull like as one of your muscle fibers, right?

01:20:46 It does its work and it’s great.

01:20:48 It contributes to the cause, right?

01:20:49 So this is all, again, a flight of fancy,

01:20:51 but I think as we extrapolate up,

01:20:53 what does it feel like to be an independent individual

01:20:56 with some bounded sense of freedom?

01:20:58 All sense of freedom is actually bounded,

01:20:59 but it was a bounded sense of freedom

01:21:01 that still lives within a network that has order to it.

01:21:04 And I feel like it has to be a feeling of serendipity.

01:21:06 So the cell, there’s a feeling of serendipity, even though.

01:21:10 It has no way of explaining why it’s getting oxygen

01:21:12 and sugar when it gets it.

01:21:13 So you have to, each individual component has to be too dumb

01:21:17 to understand the big picture.

01:21:19 No, the big picture is bigger than what it can understand.

01:21:22 But isn’t that an essential characteristic

01:21:24 of the individual is to be too dumb

01:21:27 to understand the bigger picture.

01:21:29 Like not dumb necessarily,

01:21:31 but limited in its capacity to understand.

01:21:33 Because the moment you understand,

01:21:36 I feel like that leads to, if you tell me now

01:21:41 that there are some bigger intelligence

01:21:43 controlling everything I do,

01:21:45 intelligence broadly defined, meaning like,

01:21:47 you know, even the Sam Harris thing, there’s no free will.

01:21:51 If I’m smart enough to truly understand that that’s the case,

01:21:56 that’s gonna, I don’t know if I.

01:21:58 We have philosophical breakdown, right?

01:22:00 Because we’re in the West and we’re pumped full of this stuff

01:22:03 of like, you are a golden, fully free individual

01:22:06 with all your freedoms and all your liberties

01:22:08 and go grab a gun and shoot whatever you want to.

01:22:10 No, it’s actually, you don’t actually have a lot of these,

01:22:14 you’re not unconstrained,

01:22:15 but the areas where you can manifest agency,

01:22:20 you’re free to do those things.

01:22:21 You can say whatever you want on this podcast.

01:22:23 You can create a podcast, right?

01:22:24 Yeah.

01:22:24 You’re not, I mean, you have a lot of this kind of freedom,

01:22:27 but even as you’re doing this, you are actually,

01:22:30 I guess where the denouement of this is that

01:22:33 we are already intelligent agents in such a system, right?

01:22:37 In that one of these like robots

01:22:39 of one of 5 million little swarm robots

01:22:42 or one of the Borg,

01:22:43 they’re just posting on internal bulletin board.

01:22:45 I mean, maybe the Borg cube

01:22:46 is just a giant Facebook machine floating in space

01:22:48 and everyone’s just posting on there.

01:22:50 They’re just posting really fast and like, oh yeah.

01:22:52 It’s called the metaverse now.

01:22:53 That’s called the metaverse, that’s right.

01:22:54 Here’s the Enterprise.

01:22:55 Maybe we should all go shoot it.

01:22:56 Yeah, everyone upvotes and they’re gonna go shoot it, right?

01:22:58 But we already are part of a human online

01:23:02 collaborative environment

01:23:03 and collaborative sensemaking system.

01:23:05 It’s not very good yet.

01:23:07 It’s got the overhangs of zombie sensemaking institutions

01:23:10 all over it, but as that washes away

01:23:13 and as we get better at this,

01:23:15 we are going to see humanity improving

01:23:18 at speeds that are unthinkable in the past.

01:23:21 And it’s not because anyone’s freedoms were limited.

01:23:23 In fact, the open source,

01:23:24 and we started this with open source software, right?

01:23:26 The collaboration, what the internet surfaced

01:23:29 was the ability for people all over the world

01:23:31 to collaborate and produce some of the most

01:23:32 foundational software that’s in use today, right?

01:23:35 That entire ecosystem was created

01:23:36 by collaborators all over the place.

01:23:38 So these online kind of swarm kind of things

01:23:42 are not novel.

01:23:44 It’s just, I’m just suggesting that future AI systems,

01:23:47 if you can build one smart system,

01:23:49 you have no reason not to build multiple.

01:23:51 If you build multiple,

01:23:52 there’s no reason not to integrate them all

01:23:53 into a collective sensemaking substrate.

01:23:57 And that thing will certainly have emergent intelligence

01:24:00 that none of the individuals

01:24:01 and probably not any of the human designers

01:24:03 will be able to really put a bow around and explain.

01:24:06 But in some sense, would that AI system

01:24:09 still be able to go to, like, rural Texas,

01:24:13 buy a ranch, go off the grid, go full survivalist?

01:24:16 Like, can you disconnect from the hive mind?

01:24:20 You may not want to.

01:24:25 So to be effective, to be intelligent.

01:24:27 You have access to way more intelligence capability

01:24:30 if you’re plugged into five million other

01:24:31 really, really smart cyborgs.

01:24:33 Why would you leave?

01:24:34 So like there’s a word control that comes to mind.

01:24:37 So it doesn’t feel like control,

01:24:39 like overbearing control.

01:24:43 It’s just knowledge.

01:24:44 I think systems, well, this is to your point.

01:24:46 I mean, look at how much,

01:24:47 how uncomfortable you are with this concept, right?

01:24:49 I think systems that feel like overbearing control

01:24:52 will not evolutionarily win out.

01:24:54 I think systems that give their individual elements

01:24:57 the feeling of serendipity and the feeling of agency

01:25:00 that that will, those systems will win.

01:25:04 But that’s not to say that there will not be

01:25:05 emergent higher level order on top of it.

01:25:09 And that’s the thing, that’s the philosophical breakdown

01:25:11 that we’re staring right at,

01:25:13 which is in the Western mind,

01:25:14 I think there’s a very sharp delineation

01:25:17 between explicit control,

01:25:21 Cartesian, like what is the vector?

01:25:23 Where is the position?

01:25:24 Where is it going?

01:25:25 It’s completely deterministic.

01:25:27 And kind of this idea that things emerge.

01:25:30 Everything we see is the emergent patterns

01:25:32 of other things.

01:25:33 And there is agency when there’s extra energy.

01:25:38 So you have spoken about a kind of meaning crisis

01:25:42 that we’re going through.

01:25:44 But it feels like since we invented sex and death,

01:25:50 we broadly speaking,

01:25:52 we’ve been searching for a kind of meaning.

01:25:54 So it feels like a human civilization

01:25:56 has been going through a meaning crisis

01:25:58 of different flavors throughout its history.

01:26:00 Why is, how is this particular meaning crisis different?

01:26:05 Or is it really a crisis and it wasn’t previously?

01:26:09 What’s your sense?

01:26:09 A lot of human history,

01:26:11 there wasn’t so much a meaning crisis.

01:26:13 There was just a like food

01:26:14 and not getting eaten by bears crisis, right?

01:26:16 Once you get to a point where you can make food,

01:26:18 there was the like not getting killed

01:26:20 by other humans crisis.

01:26:21 So sitting around wondering what is it all about,

01:26:24 it’s actually a relatively recent luxury.

01:26:26 And to some extent, the meaning crisis coming out of that

01:26:29 is precisely because, well, it’s not precisely because,

01:26:33 I believe that meaning is the consequence of

01:26:37 when we make consequential decisions,

01:26:40 it’s tied to agency, right?

01:26:42 When we make consequential decisions,

01:26:44 that generates meaning.

01:26:46 So if we make a lot of decisions,

01:26:47 but we don’t see the consequences of them,

01:26:50 then it feels like what was the point, right?

01:26:52 But if there’s all these big things

01:26:53 that we don’t see the consequences of,

01:26:55 right, but if there’s all these big things happening,

01:26:57 but we’re just along for the ride,

01:26:58 then it also does not feel very meaningful.

01:27:00 Meaning, as far as I can tell,

01:27:01 this is my working definition circa 2021,

01:27:04 is generally the result of a person

01:27:08 making a consequential decision,

01:27:09 acting on it and then seeing the consequences of it.

01:27:12 So historically, just when humans are in survival mode,

01:27:16 you’re making consequential decisions all the time.

01:27:19 So there’s not a lack of meaning

01:27:20 because like you either got eaten or you didn’t, right?

01:27:23 You got some food and that’s great, you feel good.

01:27:25 Like these are all consequential decisions.

01:27:27 Only in the post fossil fuel and industrial revolution

01:27:33 could we create a massive leisure class.

01:27:36 that could sit around not being threatened by bears,

01:27:39 not starving to death,

01:27:43 making decisions somewhat,

01:27:44 but a lot of times not seeing the consequences

01:27:47 of any decisions they make.

01:27:49 The general sort of sense of anomie,

01:27:51 I think that is the French term for it,

01:27:53 in the wake of the consumer society,

01:27:55 in the wake of mass media telling everyone,

01:27:58 hey, choosing between Hermes and Chanel

01:28:01 is a meaningful decision.

01:28:03 No, it’s not.

01:28:04 I don’t know what either of those mean.

01:28:05 Oh, they’re high end luxury purses and crap like that.

01:28:10 But the point is that we give people the idea

01:28:13 that consumption is meaning,

01:28:15 that making a choice of this team versus that team,

01:28:17 spectating has meaning.

01:28:20 So we produce all of these different things

01:28:22 that are as if meaning, right?

01:28:25 But really making a decision that has no consequences for us.

01:28:28 And so that creates the meaning crisis.

01:28:30 Well, you’re saying choosing between Chanel

01:28:33 and the other one has no consequence.

01:28:35 I mean, why is one more meaningful than the other?

01:28:38 It’s not that it’s more meaningful than the other.

01:28:39 It’s that you make a decision between these two brands

01:28:42 and you’re told this brand will make me look better

01:28:45 in front of other people.

01:28:45 If I buy this brand of car,

01:28:47 if I wear that brand of apparel, right?

01:28:50 Like a lot of decisions we make are around consumption,

01:28:54 but consumption by itself doesn’t actually yield meaning.

01:28:57 Gaining social status does provide meaning.

01:28:59 So that’s why in this era of abundant production,

01:29:05 so many things turn into status games.

01:29:07 The NFT kind of explosion is a similar kind of thing.

01:29:09 Everywhere there are status games

01:29:11 because we just have so much excess production.

01:29:16 But aren’t those status games a source of meaning?

01:29:18 Like why do the games we play have to be grounded

01:29:22 in physical reality like they are

01:29:24 when you’re trying to run away from lions?

01:29:26 Why can’t we, in this virtual world, on social media,

01:29:30 why can’t we play the games on social media,

01:29:32 even the dark ones?

01:29:33 Right, we can, we can.

01:29:35 But you’re saying that’s creating a meaning crisis.

01:29:37 Well, there’s a meaning crisis

01:29:39 in that there’s two aspects of it.

01:29:41 Number one, playing those kinds of status games

01:29:44 oftentimes requires destroying the planet

01:29:46 because it ties to consumption,

01:29:51 consuming the latest and greatest version of a thing,

01:29:54 buying the latest limited edition sneaker

01:29:56 and throwing out all the old ones.

01:29:58 Maybe they keep the old ones,

01:29:59 but the amount of sneakers we have to cut up

01:30:01 and destroy every year

01:30:02 to create artificial scarcity for the next generation, right?

01:30:05 This is kind of stuff that’s not great.

01:30:07 It’s not great at all.

01:30:09 So conspicuous consumption fueling status games

01:30:13 is really bad for the planet, not sustainable.

01:30:16 The second thing is you can play these kinds of status games,

01:30:19 but then what it does is it renders you captured

01:30:22 to the virtual environment.

01:30:24 The status games that really wealthy people are playing

01:30:26 are all around the hard resources

01:30:29 where they’re gonna build the factories,

01:30:31 they’re gonna have the fuel and the rare earths

01:30:31 to make the next generation of robots.

01:30:33 They’re then going to run game,

01:30:34 run circles around you and your children.

01:30:37 So that’s another reason not to play

01:30:38 those virtual status games.

01:30:40 So you’re saying ultimately the big picture game is won

01:30:44 by people who have access or control

01:30:46 over actual hard resources.

01:30:48 So you can’t, you don’t see a society

01:30:51 where most of the games are played in the virtual space.

01:30:55 They’ll be captured in the physical space.

01:30:57 It all builds.

01:30:57 It’s just like the stack of human being, right?

01:31:00 If you only play the game at the cultural

01:31:04 and then intellectual level,

01:31:05 then the people with the hard resources

01:31:07 and access to layer zero physical are going to own you.

01:31:10 But isn’t money not connected to,

01:31:13 or less and less connected to hard resources

01:31:15 and money still seems to work?

01:31:17 It’s a virtual technology.

01:31:18 There’s different kinds of money.

01:31:20 Part of the reason that some of the stuff is able

01:31:22 to go a little unhinged is because the big sovereignties

01:31:29 where one spends money and uses money

01:31:32 and plays money games and inflates money,

01:31:34 their ability to adjudicate the physical resources

01:31:38 and hard resources on land and things like that,

01:31:42 those have not been challenged in a very long time.

01:31:45 So, you know, we went off the gold standard.

01:31:47 Most money is not connected to physical resources.

01:31:51 It’s an idea.

01:31:53 And that idea is very closely connected to status.

01:31:59 But it’s also tied to like, it’s actually tied to law.

01:32:03 It is tied to some physical hard things

01:32:04 so you have to pay your taxes.

01:32:06 Yes, so it’s always at the end going to be connected

01:32:09 to the blockchain of physical reality.

01:32:12 So in the case of law and taxes, it’s connected to government

01:32:17 and government is what violence is the,

01:32:21 I’m playing with stacks of devil’s advocates here

01:32:27 and popping one devil off the stack at a time.

01:32:30 Isn’t ultimately, of course,

01:32:31 it’ll be connected to physical reality,

01:32:33 but just because people control the physical reality,

01:32:35 it doesn’t mean the status.

01:32:36 I guess LeBron James in theory could make more money

01:32:39 than the owners of the teams in theory.

01:32:43 And to me, that’s a virtual idea.

01:32:44 So somebody else constructed a game

01:32:47 and now you’re playing in the virtual space of the game.

01:32:51 So it just feels like there could be games where status,

01:32:55 we build realities that give us meaning in the virtual space.

01:33:00 I can imagine such things being possible.

01:33:02 Oh yeah, okay, so I see what you’re saying.

01:33:04 I think I see what you’re saying there

01:33:05 with the idea there, I mean, we’ll take the LeBron James side

01:33:08 and put in like some YouTube influencer.

01:33:10 Yes, sure.

01:33:11 So the YouTube influencer, it is status games,

01:33:15 but at a certain level, it precipitates into real dollars

01:33:18 and into like, well, you look at Mr. Beast, right?

01:33:21 He’s like sending off half a million dollars

01:33:23 worth of fireworks or something, right?

01:33:24 In a YouTube video.

01:33:25 And also like saving, like saving trees and so on.

01:33:28 Sure, right, trying to plant a million trees

01:33:29 with Mark Rober or whatever it was.

01:33:30 Yeah, like it’s not that those kinds of games

01:33:33 can’t lead to real consequences.

01:33:34 It’s that for the vast majority of people in consumer culture,

01:33:40 they are incented by the, I would say mostly,

01:33:44 I’m thinking about middle class consumers.

01:33:46 They’re incented by advertisements,

01:33:48 they’re scented by their memetic environment

01:33:50 to treat the purchasing of certain things,

01:33:54 the need to buy the latest model, whatever,

01:33:56 the need to appear, however,

01:33:58 the need to pursue status games as a driver of meaning.

01:34:02 And my point would be that it’s a very hollow

01:34:04 driver of meaning.

01:34:05 And that is what creates a meaning crisis.

01:34:08 Because at the end of the day,

01:34:10 it’s like eating a lot of empty calories, right?

01:34:12 Yeah, it tasted good going down, a lot of sugar,

01:34:13 but man, it did not, it was not enough protein

01:34:15 to help build your muscles.

01:34:17 And you kind of feel that in your gut.

01:34:18 And I think that’s, I mean, so all this stuff aside

01:34:21 and setting aside our discussion on currency,

01:34:22 which I hope we get back to,

01:34:24 that’s what I mean about the meaning crisis,

01:34:27 part of it being created by the fact that we don’t,

01:34:30 we’re not encouraged to have more and more

01:34:32 direct relationships.

01:34:34 We’re actually alienated from relating to,

01:34:37 even our family members sometimes, right?

01:34:40 We’re encouraged to relate to brands.

01:34:43 We’re encouraged to relate to these kinds of things

01:34:46 that then tell us to do things

01:34:49 that are really of low consequence.

01:34:51 And that’s where the meaning crisis comes from.

01:34:52 So the role of technology in this,

01:34:54 so there’s somebody you mentioned who’s Jacques,

01:34:57 his view of technology, he warns about the towering piles

01:35:01 of technique, which I guess is a broad idea of technology.

01:35:05 So I think, correct me if I'm wrong, for him,

01:35:08 technology is bad at moving away from human nature

01:35:12 and it’s ultimately is destructive.

01:35:14 My question, broadly speaking, this meaning crisis,

01:35:16 can technology, what are the pros and cons of technology?

01:35:19 Can it be a good?

01:35:21 Yeah, I think it can be.

01:35:22 I certainly think it can be a good thing.

01:35:27 I certainly draw on some of Ellul's ideas

01:35:29 and I think some of them are pretty good.

01:35:32 But the way he defines technique is,

01:35:36 well, also Simondon as well.

01:35:37 I mean, he speaks to the general mentality of efficiency,

01:35:41 homogenized processes, homogenized production,

01:35:43 homogenized labor to produce homogenized artifacts

01:35:47 that then are not actually,

01:35:50 they don’t sit well in the environment.

01:35:53 Essentially, you can think of it as the antonym of craft.

01:35:57 Whereas a craftsman will come to a problem,

01:36:02 maybe a piece of wood, and they make it into a chair.

01:36:04 It may be a site to build a house or build a stable

01:36:06 or build whatever.

01:36:08 And they will consider how to bring various things in

01:36:12 to build something well contextualized

01:36:15 that’s in right relationship with that environment.

01:36:20 But the way we have driven technology

01:36:22 over the last 100 and 150 years is not that at all.

01:36:25 It is how can we make sure the input materials

01:36:30 are homogenized, cut to the same size,

01:36:33 diluted and doped to exactly the right alloy concentrations.

01:36:36 How do we create machines that then consume exactly

01:36:38 the right kind of energy to be able to run

01:36:39 at this high speed to stamp out the same parts,

01:36:42 which then go out the door,

01:36:44 everyone gets the same Tickle Me Elmo.

01:36:45 And the reason why everyone wants it

01:36:46 is because we have broadcasts that tells everyone

01:36:49 this is the cool thing.

01:36:50 So we homogenize demand, right?

01:36:52 And we’re like Baudrillard and other critiques

01:36:55 of modernity coming from that direction,

01:36:57 the situation lists as well.

01:36:59 It’s that their point is that at this point in time,

01:37:02 consumption is the thing that drives

01:37:04 a lot of the economic stuff, not the need,

01:37:06 but the need to consume and build status games on top.

01:37:09 So we have homogenized, when we discovered,

01:37:12 I think this is really like Bernays and stuff, right?

01:37:14 In the early 20th century, we discovered we can create,

01:37:17 we can create demand, we can create desire

01:37:20 in a way that was not possible before

01:37:23 because of broadcast media.

01:37:25 And not only do we create desire,

01:37:27 we don’t create a desire for each person

01:37:29 to connect to some bespoke thing,

01:37:31 to build a relationship with their neighbor or their spouse.

01:37:33 We are telling them, you need to consume this brand,

01:37:36 you need to drive this vehicle,

01:37:37 you gotta listen to this music,

01:37:38 have you heard this, have you seen this movie, right?

01:37:40 So creating homogenized demand makes it really cheap

01:37:44 to create homogenized product.

01:37:46 And now you have economics of scale.

01:37:48 So we make the same Tickle Me Elmo,

01:37:50 give it to all the kids and all the kids are like,

01:37:52 hey, I got a Tickle Me Elmo, right?

01:37:54 So this is ultimately where this ties in then

01:37:58 to runaway hypercapitalism is that we then,

01:38:03 capitalism is always looking for growth.

01:38:04 It’s always looking for growth

01:38:05 and growth only happens at the margins.

01:38:07 So you have to squeeze more and more demand out.

01:38:09 You gotta make it cheaper and cheaper

01:38:11 to make the same thing,

01:38:12 but tell everyone they’re still getting meaning from it.

01:38:15 You’re still like, this is still your tickle me Elmo, right?

01:38:18 And we see little bits of critiques

01:38:21 of this dripping in popular culture.

01:38:22 You see it sometimes it’s when Buzz Lightyear

01:38:25 walks into the thing, he’s like,

01:38:27 oh my God, at the toy store, I’m just a toy.

01:38:30 Like there’s millions of other,

01:38:31 or there’s hundreds of other Buzz Lightyear’s

01:38:33 just like me, right?

01:38:34 That is, I think, a fun Pixar critique

01:38:38 on this homogenization dynamic.

01:38:40 I agree with you on most of the things you’re saying.

01:38:42 So I’m playing devil’s advocate here,

01:38:44 but this homogenized machine of capitalism

01:38:50 is also the thing that is able to fund,

01:38:54 if channeled correctly, innovation, invention,

01:38:59 and development of totally new things

01:39:00 that in the best possible world,

01:39:02 create all kinds of new experiences that can enrich lives,

01:39:06 the quality of lives for all kinds of people.

01:39:09 So isn’t this the machine

01:39:12 that actually enables the experiences

01:39:15 and more and more experiences that will then give meaning?

01:39:18 It has done that to some extent.

01:39:21 I mean, it’s not all good or bad in my perspective.

01:39:24 We can always look backwards

01:39:26 and offer a critique of the path we’ve taken

01:39:29 to get to this point in time.

01:39:31 But that’s a different, that’s somewhat different

01:39:33 and informs the discussion,

01:39:35 but it’s somewhat different than the question

01:39:37 of where do we go in the future, right?

01:39:40 Is this still the same rocket we need to ride

01:39:42 to get to the next point?

01:39:43 Will it even get us to the next point?

01:39:44 Well, how does this, so you’re predicting the future,

01:39:46 how does it go wrong in your view?

01:39:48 We have the mechanisms,

01:39:51 we have now explored enough technologies

01:39:53 to where we can actually, I think, sustainably produce

01:39:59 what most people in the world need to live.

01:40:03 We have also created the infrastructures

01:40:07 to allow continued research and development

01:40:10 of additional science and medicine

01:40:13 and various other kinds of things.

01:40:16 The organizing principles that we use

01:40:18 to govern all these things today have been,

01:40:21 a lot of them have been just inherited

01:40:25 from honestly medieval times.

01:40:28 Some of them have refactored a little bit

01:40:30 in the industrial era,

01:40:31 but a lot of these modes of organizing people

01:40:35 are deeply problematic.

01:40:38 And furthermore, they’re rooted in,

01:40:41 I think, a very industrial mode perspective on human labor.

01:40:46 And this is one of those things,

01:40:47 I’m gonna go back to the open source thing.

01:40:49 There was a point in time when,

01:40:51 well, let me ask you this.

01:40:53 If you look at the core SciPy sort of collection of libraries,

01:40:57 so SciPy, NumPy, Matplotlib, right?

01:40:59 There’s iPython Notebook, let’s throw pandas in there,

01:41:01 scikit learn, a few of these things.

01:41:03 How much value do you think, economic value,

01:41:07 would you say they drive in the world today?

01:41:10 That’s one of the fascinating things

01:41:12 about talking to you and Travis is like,

01:41:16 it’s a measure, it’s like a…

01:41:18 At least a billion dollars a day, maybe?

01:41:20 A billion dollars, sure.

01:41:21 I mean, it’s like, it’s similar question of like,

01:41:23 how much value does Wikipedia create?

01:41:26 Right.

01:41:26 It’s like, all of it, I don’t know.

01:41:33 Well, I mean, if you look at our systems,

01:41:34 when you do a Google search, right?

01:41:36 Now, some of that stuff runs through TensorFlow,

01:41:37 but when you look at Siri,

01:41:40 when you do credit card transaction fraud,

01:41:42 like just everything, right?

01:41:43 Every intelligence agency under the sun,

01:41:45 they’re using some aspect of these kinds of tools.

01:41:47 So I would say that these create billions of dollars

01:41:51 of value.

01:41:52 Oh, you mean like direct use of tools

01:41:53 that leverage this data?

01:41:54 Yes, direct, yeah.

01:41:55 Yeah, even that’s billions a day, yeah.

01:41:56 Yeah, right, easily, I think.

01:41:58 Like the things they could not do

01:41:59 if they didn’t have these tools, right?

01:42:01 Yes.

01:42:02 So that’s billions of dollars a day, great.

01:42:04 I think that’s about right.

01:42:05 Now, if we take, how many people did it take

01:42:07 to make that, right?

01:42:09 And there was a point in time, not anymore,

01:42:11 but there was a point in time when they could fit

01:42:12 in a van.

01:42:13 I could have fit them in my Mercedes Sprinter, right?

01:42:15 And so if you look at that, like, holy crap,

01:42:19 literally a van of maybe a dozen people

01:42:22 could create value to the tune of billions of dollars a day.

01:42:28 What lesson do you draw from that?

01:42:30 Well, here’s the thing.

01:42:31 What can we do to do more of that?

01:42:35 Like that’s open source.

01:42:36 The way I’ve talked about this in other environments is

01:42:39 when we use generative participatory crowdsourced

01:42:43 approaches, we unlock human potential

01:42:47 at a level that is better than what capitalism can do.

01:42:52 I would challenge anyone to go and try to hire

01:42:55 the right 12 people in the world

01:42:58 to build that entire stack

01:43:00 the way those 12 people did that, right?

01:43:02 They would be very, very hard pressed to do that.

01:43:04 If a hedge fund could just hire a dozen people

01:43:06 and create like something that is worth

01:43:08 billions of dollars a day,

01:43:10 every single one of them would be racing to do it, right?

01:43:12 But finding the right people,

01:43:13 fostering the right collaborations,

01:43:15 getting it adopted by the right other people

01:43:16 to then refine it,

01:43:18 that is a thing that was organic in nature.

01:43:21 That took crowdsourcing.

01:43:22 That took a lot of the open source ethos

01:43:24 and it took the right kinds of people, right?

01:43:26 Now those people who started that didn't say,

01:43:27 I need to have a part of a multi-billion-dollar-a-day

01:43:30 sort of enterprise.

01:43:32 They’re like, I’m doing this cool thing

01:43:33 to solve my problem for my friends, right?

01:43:35 So the point of telling the story

01:43:37 is to say that our way of thinking about value,

01:43:40 our way of thinking about allocation of resources,

01:43:42 our ways of thinking about property rights

01:43:44 and all these kinds of things,

01:43:46 they come from finite game, scarcity mentality,

01:43:50 medieval institutions.

01:43:52 As we are now entering,

01:43:54 to some extent we’re sort of in a post scarcity era,

01:43:57 although some people are hoarding a whole lot of stuff.

01:43:59 We are at a point where if not now soon,

01:44:02 we’ll be in a post scarcity era.

01:44:03 The question of how we allocate resources

01:44:06 has to be revisited at a fundamental level

01:44:08 because the kind of software these people built,

01:44:11 the modalities of those human ecologies

01:44:13 that built that software,

01:44:15 treat software as non-property.

01:44:17 Actually sharing creates value.

01:44:20 Restricting and forking reduces value.

01:44:23 So that’s different than any other physical resource

01:44:26 that we’ve ever dealt with.

01:44:27 It’s different than how most corporations

01:44:28 treat software IP, right?

01:44:31 So if treating software in this way

01:44:34 created this much value so efficiently, so cheaply,

01:44:37 because feeding a dozen people for 10 years

01:44:39 is really cheap, right?

01:44:41 That’s the reason I care about this right now

01:44:44 is because looking forward

01:44:46 when we can automate a lot of labor,

01:44:48 where we can, in fact, write

01:44:49 the programming for your robot

01:44:52 in your neck of the woods, in your part of the Amazon,

01:44:54 to build something sustainable for you

01:44:55 and your tribe to deliver the right medicines,

01:44:58 to take care of the kids,

01:45:00 that’s just software, that’s just code

01:45:02 that could be totally open sourced, right?

01:45:05 So we can actually get to a mode

01:45:07 where all of this additional generative things

01:45:10 that humans are doing,

01:45:12 they don’t have to be wrapped up in a container

01:45:16 with someone charging for all the exponential dynamics

01:45:18 coming out of it.

01:45:19 That’s what Facebook did.

01:45:20 That’s what modern social media did, right?

01:45:22 Because the old internet was connecting people just fine.

01:45:24 So Facebook came along and said,

01:45:25 well, anyone can post a picture,

01:45:26 anyone can post some text

01:45:28 and we’re gonna amplify the crap out of it to everyone else.

01:45:31 And it exploded this generative network

01:45:33 of human interaction.

01:45:34 And then I said, how do I make money off that?

01:45:36 Oh yeah, I’m gonna be a gatekeeper

01:45:38 on everybody’s attention.

01:45:39 And that’s how I’m gonna make money.

01:45:41 So how do we create more than one van?

01:45:45 How do we have millions of vans full of people

01:45:47 that create NumPy, SciPy, that create Python?

01:45:51 So the story of those people is often they have

01:45:55 some kind of job outside of this.

01:45:57 This is what they’re doing for fun.

01:45:58 Don’t you need to have a job?

01:46:00 Don’t you have to be connected,

01:46:02 plugged in to the capitalist system?

01:46:04 Isn’t that what,

01:46:07 isn’t this consumerism,

01:46:09 the engine that results in the individuals

01:46:13 that kind of take a break from it every once in a while

01:46:15 to create something magical?

01:46:17 Like at the edges is where the innovation happens.

01:46:19 There’s a surplus, right, this is the question.

01:46:21 Like if everyone were to go and run their own farm,

01:46:24 no one would have time to go and write NumPy, SciPy, right?

01:46:27 Maybe, but that’s what I’m talking about

01:46:29 when I say we’re maybe at a post scarcity point

01:46:32 for a lot of people.

01:46:34 The question that we’re never encouraged to ask

01:46:37 in a Super Bowl ad is how much do you need?

01:46:40 How much is enough?

01:46:41 Do you need to have a new car every two years, every five?

01:46:45 If you have a reliable car,

01:46:46 can you drive one for 10 years, is that all right?

01:46:48 I had a car for 10 years and it was fine.

01:46:50 Your iPhone, do you have to upgrade every two years?

01:46:52 I mean, it’s sort of, you’re using the same apps

01:46:54 you did four years ago, right?

01:46:56 This should be a Super Bowl ad.

01:46:58 This should be a Super Bowl ad, that’s great.

01:46:59 Maybe somebody. Do you really need a new iPhone?

01:47:01 Maybe one of our listeners will fund something like this

01:47:03 of like, no, but just actually bringing it back,

01:47:06 bringing it back to actually the question

01:47:09 of what do you need?

01:47:11 How do we create the infrastructure

01:47:13 for collectives of people to live on the basis

01:47:17 of providing what we need, meeting people’s needs

01:47:21 with a little bit of excess to handle emergencies,

01:47:23 things like that, pooling our resources together

01:47:26 to handle the really, really big emergencies,

01:47:28 somebody with a really rare form of cancer

01:47:30 or some massive fire sweeps through half the village

01:47:34 or whatever, but can we actually unscale things

01:47:38 and solve for people’s needs

01:47:41 and then give them the capacity to explore

01:47:45 how to be the best version of themselves?

01:47:47 And for Travis, that was throwing away his shot at tenure

01:47:51 in order to write NumPy.

01:47:52 For others, there is a saying in the SciPy community

01:47:56 that SciPy advances one failed postdoc at a time.

01:48:00 And that’s, we can do these things.

01:48:03 We can actually do this kind of collaboration

01:48:05 because code, software, information, organization,

01:48:08 that’s cheap.

01:48:09 Those bits are very cheap to fling across the oceans.

01:48:13 So you mentioned Travis.

01:48:14 We’ve been talking and we’ll continue to talk

01:48:16 about open source.

01:48:19 Maybe you can comment.

01:48:20 How did you meet Travis?

01:49:21 Who is Travis Oliphant?

01:48:24 What’s your relationship been like through the years?

01:48:28 Where did you work together?

01:48:30 How did you meet?

01:48:31 What’s the present and the future look like?

01:48:35 Yeah, so the first time I met Travis

01:48:36 was at a SciPy conference in Pasadena.

01:48:39 Do you remember the year?

01:48:40 2005.

01:48:42 I was working at, again, at Enthought,

01:48:44 working on scientific computing consulting.

01:48:47 And a couple of years later,

01:48:51 he joined us at Enthought, I think 2007.

01:48:55 And he came in as the president.

01:48:58 One of the founders of Enthought was the CEO, Eric Jones.

01:49:01 And we were all very excited that Travis was joining us

01:49:04 and that was great fun.

01:49:05 And so I worked with Travis

01:49:06 on a number of consulting projects

01:49:08 and we worked on some open source stuff.

01:49:12 I mean, it was just a really, it was a good time there.

01:49:15 And then…

01:49:15 It was primarily Python related?

01:49:17 Oh yeah, it was all Python, NumPy, SciPy consulting

01:49:19 kind of stuff.

01:49:21 Towards the end of that time,

01:49:23 we started getting called into more and more finance shops.

01:49:27 They were adopting Python pretty heavily.

01:49:29 I did some work at like a high frequency trading shop,

01:49:33 working on some stuff.

01:49:34 And then we worked together on some,

01:49:36 at a couple of investment banks in Manhattan.

01:49:39 And so we started seeing that there was a potential

01:49:42 to take Python in the direction of business computing,

01:49:45 more than just being this niche like MATLAB replacement

01:49:48 for big vector computing.

01:49:50 What we were seeing was, oh yeah,

01:49:51 you could actually use Python as a Swiss army knife

01:49:53 to do a lot of shadow data transformation kind of stuff.

01:49:56 So that’s when we realized the potential is much greater.

01:50:00 And so we started Anaconda,

01:50:03 I mean, it was called Continuum Analytics at the time,

01:50:05 but we started in January of 2012

01:50:07 with a vision of shoring up the parts of Python

01:50:10 that needed to get expanded to handle data at scale,

01:50:13 to do web visualization, application development, et cetera.

01:50:17 And that was that, yeah.

01:50:18 So he was CEO and I was president for the first five years.

01:50:23 And then we raised some money and then the board

01:50:27 sort of put in a new CEO.

01:50:28 They hired a kind of professional CEO.

01:50:31 And then Travis, you laugh at that.

01:50:34 I took over the CTO role.

01:50:35 Travis then left after a year to do his own thing,

01:50:37 to do Quansight, which was more oriented

01:50:41 around some of the bootstrap years that we did at Continuum

01:50:43 where it was open source and consulting.

01:50:46 It wasn’t sort of like gung ho product development.

01:50:48 And it wasn’t focused on,

01:50:50 we accidentally stumbled

01:50:51 into the package management problem at Anaconda,

01:50:55 but we had a lot of other visions of other technology

01:50:57 that we built in the open source.

01:50:58 And Travis was really trying to push,

01:51:02 again, the frontiers of numerical computing,

01:51:04 vector computing,

01:51:05 handling things like auto differentiation and stuff

01:51:07 intrinsically in the open ecosystem.

01:51:09 So I think that’s kind of the direction

01:51:14 he’s working on in some of his work.

01:51:18 We remain great friends and colleagues and collaborators,

01:51:22 even though he’s no longer day to day working at Anaconda,

01:51:25 but he gives me a lot of feedback

01:51:27 about this and that and the other.

01:51:29 What’s a big lesson you’ve learned from Travis

01:51:32 about life or about programming or about leadership?

01:51:35 Wow, there’s a lot.

01:51:36 There’s a lot.

01:51:37 Travis is a really, really good guy.

01:51:39 He really, his heart is really in it.

01:51:41 He cares a lot.

01:51:44 I’ve gotten that sense having to interact with him.

01:51:46 It’s so interesting.

01:51:47 Such a good human being.

01:51:48 He’s a really good dude.

01:51:49 And he and I, it’s so interesting.

01:51:51 We come from very different backgrounds.

01:51:53 We’re quite different as people,

01:51:56 but I think we can like not talk for a long time

01:52:00 and then be on a conversation

01:52:03 and be eye to eye on like 90% of things.

01:52:06 And so he’s someone who I believe

01:52:08 no matter how much fog settles in over the ocean,

01:52:10 his ship, my ship are pointed

01:52:12 sort of in the same direction of the same star.

01:52:14 Wow, that’s a beautiful way to phrase it.

01:52:16 No matter how much fog there is,

01:52:18 we’re pointed at the same star.

01:52:20 Yeah, and I hope he feels the same way.

01:52:21 I mean, I hope he knows that over the years now.

01:52:23 We both care a lot about the community.

01:52:27 For someone who cares so deeply,

01:52:28 I would say this about Travis that’s interesting.

01:52:29 For someone who cares so deeply about the nerd details

01:52:33 of like type system design and vector computing

01:52:36 and efficiency of expressing this and that and the other,

01:52:38 memory layouts and all that stuff,

01:52:40 he cares even more about the people

01:52:43 in the ecosystem, the community.

01:52:45 And I have a similar kind of alignment.

01:52:49 I care a lot about the tech, I really do.

01:52:53 But for me, the beauty of what this human ecology

01:52:58 has produced is I think a touchstone.

01:53:01 It’s an early version, we can look at it and say,

01:53:03 how do we replicate this for humanity at scale?

01:53:05 What this open source collaboration was able to produce?

01:53:08 How can we be generative in human collaboration

01:53:11 moving forward and create that

01:53:12 as a civilizational kind of dynamic?

01:53:15 Like, can we seize this moment to do that?

01:53:17 Because like a lot of the other open source movements,

01:53:19 it’s all nerds nerding out on code for nerds.

01:53:23 And this because it’s scientists,

01:53:25 because it’s people working on data,

01:53:27 that all of it faces real human problems.

01:53:31 I think we have an opportunity

01:53:32 to actually make a bigger impact.

01:53:34 Is there a way for this kind of open source vision

01:53:37 to make money?

01:53:39 Absolutely.

01:53:40 To fund the people involved?

01:53:41 Is that an essential part of it?

01:53:43 It’s hard, but we’re trying to do that

01:53:45 in our own way at Anaconda,

01:53:48 because we know that business users,

01:53:49 as they use more of the stuff, they have needs,

01:53:52 like business specific needs around security, provenance.

01:53:54 They really can’t tell their VPs and their investors,

01:53:59 hey, we’re having, our data scientists

01:54:01 are installing random packages from who knows where

01:54:03 and running on customer data.

01:54:04 So they have to have someone to talk to.

01:54:05 And that’s what Anaconda does.

01:54:07 So we are a governed source of packages for them,

01:54:10 and that’s great, that makes some money.

01:54:12 We take some of that and we just take that as a dividend.

01:54:16 We take a percentage of our revenues

01:54:17 and write that as a dividend for the open source community.

01:54:20 But beyond that, I really see the development

01:54:23 of a marketplace for people to create notebooks,

01:54:27 models, data sets, curation of these different kinds

01:54:30 of things, and to really have

01:54:33 a long tail marketplace dynamic with that.

01:54:37 Can you speak about this problem

01:54:38 that you stumbled into of package management,

01:54:41 Python package management?

01:54:43 What is that?

01:54:46 A lot of people speak very highly of Conda,

01:54:48 which is part of Anaconda, which is a package manager.

01:54:50 There’s a ton of packages.

01:54:52 So first, what are package managers?

01:54:55 And second, what was there before?

01:54:57 What is pip?

01:54:58 And why is Conda more awesome?

01:55:01 The package problem is this, which is that

01:55:04 in order to do numerical computing efficiently with Python,

01:55:11 there are a lot of low level libraries

01:55:14 that need to be compiled, compiled with a C compiler

01:55:17 or C++ compiler or Fortran compiler.

01:55:19 They need to not just be compiled,

01:55:21 but they need to be compiled with all of the right settings.

01:55:23 And oftentimes those settings are tuned

01:55:25 for specific chip architectures.

01:55:27 And when you add GPUs to the mix,

01:55:29 when you look at different operating systems,

01:55:32 you may be on the same chip,

01:55:33 but if you’re running Mac versus Linux versus Windows

01:55:37 on the same x86 chip, you compile and link differently.

01:55:40 All of this complexity is beyond the capability

01:55:44 of most data scientists to reason about.

01:55:46 And it’s also beyond what most of the package developers

01:55:50 want to deal with too.

01:55:51 Because if you’re a package developer,

01:55:52 you’re like, I code on Linux.

01:55:54 This works for me, I’m good.

01:55:55 It is not my problem to figure out how to build this

01:55:58 on an ancient version of Windows, right?

01:56:00 That’s just simply not my problem.

01:56:01 So what we end up with is we have a creator economy

01:56:05 or a very creative crowdsourced environment

01:56:08 where people want to use this stuff, but they can’t.

01:56:11 And so we ended up creating a new set of technologies

01:56:15 like a build recipe system, a build system

01:56:18 and an installer system that is able to,

01:56:22 well, to put it simply,

01:56:24 it’s able to build these packages correctly

01:56:27 on each of these different kinds of platforms

01:56:29 and operating systems,

01:56:30 and make it so when people want to install something,

01:56:33 they can, it’s just one command.

01:56:34 They don’t have to set up a big compiler system

01:56:36 and do all these things.

01:56:38 So when it works well, it works great.

01:56:40 Now, the difficulty is we have literally thousands

01:56:43 of people writing code in the ecosystem,

01:56:46 building all sorts of stuff and each person writing code,

01:56:48 they may take a dependence on something else.

01:56:50 And so you have all this web,

01:56:52 incredibly complex web of dependencies.

01:56:54 So installing the correct package

01:56:57 for any given set of packages you want,

01:57:00 getting that right subgraph is an incredibly hard problem.
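
To make the shape of that solve concrete, here is a minimal sketch of the subgraph problem over a toy package index. All package names, versions, and constraints below are hypothetical, and this brute-force search is exponential in the number of packages, which is part of why real solvers (conda's is SAT-based) are far more sophisticated:

    from itertools import product

    # Toy index: package -> available versions -> the versions of other
    # packages each one was built against. All names are hypothetical.
    INDEX = {
        "numpy":  {"1.19": {}, "1.21": {}},
        "pandas": {"1.1": {"numpy": {"1.19"}}, "1.3": {"numpy": {"1.21"}}},
        "geo":    {"0.9": {"numpy": {"1.19"}}},   # only built for old numpy
        "torch":  {"1.9": {"numpy": {"1.21"}}},
    }

    def solve(requested):
        """Brute-force search for one mutually consistent set of versions."""
        names = list(requested)
        for combo in product(*(INDEX[name] for name in names)):
            chosen = dict(zip(names, combo))
            if all(chosen.get(dep) in allowed
                   for name, version in chosen.items()
                   for dep, allowed in INDEX[name][version].items()):
                return chosen
        return None  # no consistent subgraph exists

    # geo forces the old numpy, which in turn forces the old pandas:
    print(solve(["pandas", "numpy", "geo"]))
    # geo and torch disagree about numpy, so there is no solution:
    print(solve(["torch", "numpy", "geo"]))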

01:57:04 And again, most data scientists

01:57:05 don’t want to think about this.

01:57:06 They’re like, I want to install NumPy and pandas.

01:57:09 I want this version of some like geospatial library.

01:57:11 I want this other thing.

01:57:13 Like, why is this hard?

01:57:14 These exist, right?

01:57:15 And it is hard because it’s, well,

01:57:17 you’re installing this on a version of Windows, right?

01:57:20 And half of these libraries are not built for Windows

01:57:23 or the latest version isn’t available,

01:57:25 but the old version was.

01:57:26 And if you go to the old version of this library,

01:57:27 that means you need to go to a different version

01:57:28 of that library.

01:57:30 And so the Python ecosystem,

01:57:32 by virtue of being crowdsourced,

01:57:34 we were able to fill a hundred thousand different niches.

01:57:38 But then we also suffer this problem

01:57:40 that because it’s crowdsourced and no one,

01:57:43 it’s like a tragedy of the commons, right?

01:57:44 No one really needs or wants to support

01:57:47 thousands of other dependencies.

01:57:49 So we end up sort of having to do a lot of this.

01:57:52 And of course the conda-forge community

01:57:53 also steps up as an open source community that,

01:57:55 you know, maintains some of these recipes.

01:57:57 That’s what conda does.

01:57:58 Now, pip is a tool that came along after conda,

01:58:01 to some extent, it came along as an easier way

01:58:04 for the Python developers writing Python code

01:58:09 that didn’t have as much compiled, you know, stuff.

01:58:12 They could then install different packages.

01:58:15 And what ended up happening in the Python ecosystem

01:58:17 was that a lot of the core Python and web Python developers,

01:58:20 they never ran into any of this compilation stuff at all.

01:58:24 So even we have, you know, on video,

01:58:27 we have Guido van Rossum saying,

01:58:29 you know what, the scientific community’s packaging problems

01:58:31 are just too exotic and different.

01:58:33 I mean, you’re talking about Fortran compilers, right?

01:58:35 Like you guys just need to build your own solution

01:58:37 perhaps, right?

01:58:38 So the Python core Python community went

01:58:41 and built its own sort of packaging technologies,

01:58:45 not really contemplating the complexity

01:58:47 of this stuff over here.

01:58:49 And so now we have the challenge where

01:58:51 you can pip install some things, some libraries,

01:58:53 if you just want to get started with them,

01:58:55 you can pip install TensorFlow and that works great.

01:58:57 The instant you want to also install some other packages

01:59:00 that use different versions of NumPy

01:59:02 or some like graphics library or some OpenCV thing

01:59:05 or some other thing, you now run into dependency hell

01:59:08 because you cannot, you know,

01:59:09 OpenCV can have a different version of libjpeg over here

01:59:12 than PyTorch over here.

01:59:14 Like they actually, they all have to use the,

01:59:15 if you want to use GPU acceleration,

01:59:17 they have to all use the same underlying drivers

01:59:18 and same GPU CUDA things.

01:59:20 So it’s, it gets to be very gnarly

01:59:22 and it’s a level of technology

01:59:24 that both the makers and the users

01:59:26 don’t really want to think too much about.

01:59:28 And that’s where you step in and try to solve this.

01:59:30 We try to solve it.

01:59:31 Subgraph problems.

01:59:32 How much is that?

01:59:33 I mean, you said that you don’t want to think,

01:59:34 they don’t want to think about it,

01:59:35 but how much is it a little bit on the developer

01:59:38 and providing them tools to be a little bit more clear

01:59:42 of that subgraph of dependency that’s necessary?

01:59:44 It is getting to a point where we do have to think about,

01:59:47 look, can we pull some of the most popular packages together

01:59:51 and get them to work on a coordinated release timeline,

01:59:53 get them to build against the same test matrix,

01:59:55 et cetera, et cetera, right?

01:59:57 And there is a little bit of dynamic around this,

01:59:58 but again, it is a volunteer community.

02:00:01 Yeah.

02:00:02 You know, people working on these different projects

02:00:04 have their own timelines

02:00:06 and their own things they’re trying to meet.

02:00:07 So we end up trying to pull these things together.

02:00:11 And then it’s this incredibly,

02:00:13 and I would recommend just as a business tip,

02:00:15 don’t ever go into business

02:00:16 where when your hard work works, you’re invisible.

02:00:19 And when it breaks because of someone else’s problem,

02:00:21 you get flagged for it.

02:00:23 Because that’s in our situation, right?

02:00:25 When something doesn’t condensate all properly,

02:00:27 usually it’s some upstream issue,

02:00:28 but it looks like condensate is broken.

02:00:30 It looks like, you know, Anaconda screwed something up.

02:00:32 When things do work though, it’s like, oh yeah, cool.

02:00:34 It’s worked.

02:00:35 Assuming naturally, of course,

02:00:36 it’s very easy to make that work, right?

02:00:38 So we end up in this kind of problematic scenario,

02:00:41 but it’s okay because I think we’re still,

02:00:45 you know, our heart’s in the right place.

02:00:46 We’re trying to move this forward

02:00:47 as a community sort of affair.

02:00:49 I think most of the people in the community

02:00:50 also appreciate the work we’ve done over the years

02:00:53 to try to move these things forward

02:00:54 in a collaborative fashion, so.

02:00:57 One of the subgraphs of dependencies

02:01:01 that became super complicated

02:01:03 is the move from Python 2 to Python 3.

02:01:05 So there’s all these ways to mess

02:01:07 with these kinds of ecosystems of packages and so on.

02:01:11 So I just want to ask you about that particular one.

02:01:13 What do you think about the move from Python 2 to 3?

02:01:18 Why did it take so long?

02:01:19 What were, from your perspective,

02:01:20 just seeing the packages all struggle

02:01:23 and the community all struggle through this process,

02:01:26 what lessons do you take away from it?

02:01:27 Why did it take so long?

02:01:29 Looking back, some people perhaps underestimated

02:01:33 how much adoption Python 2 had.

02:01:38 I think some people also underestimated how much,

02:01:41 or they overestimated how much value

02:01:44 some of the new features in Python 3 really provided.

02:01:47 Like the things they really loved about Python 3

02:01:49 just didn’t matter to some of these people in Python 2.

02:01:52 Because this change was happening as Python, SciPy,

02:01:56 was starting to really take off,

02:01:58 like a hockey stick of adoption

02:02:00 in the early data science era, in the early 2010s.

02:02:02 A lot of people were learning and onboarding

02:02:04 in whatever just worked.

02:02:06 And the teachers were like,

02:02:07 well, yeah, these libraries I need

02:02:09 are not supported in Python 3 yet,

02:02:10 I’m going to teach you Python 2.

02:02:12 Took a lot of advocacy to get people

02:02:13 to move over to Python 3.

02:02:15 So I think it wasn’t any particular single thing,

02:02:18 but it was one of those death by a dozen cuts,

02:02:21 which just really made it hard to move off of Python 2.

02:02:25 And also Python 3 itself,

02:02:27 as they were kind of breaking things

02:02:28 and changing things around

02:02:29 and reorganizing the standard library,

02:02:30 there’s a lot of stuff that was happening there

02:02:32 that kept giving people an excuse to say,

02:02:35 I’ll put off till the next version.

02:02:37 2 is working fine enough for me right now.

02:02:39 So I think that’s essentially what happened there.

02:02:41 And I will say this though,

02:02:43 the strength of the Python data science movement,

02:02:48 I think is what kept Python alive in that transition.

02:02:52 Because a lot of languages have died

02:02:54 and left their user bases behind.

02:02:56 If there wasn’t the use of Python for data,

02:02:58 there’s a good chunk of Python users

02:03:01 that during that transition,

02:03:02 would have just left for Go and Rust and stayed.

02:03:04 In fact, some people did.

02:03:06 They moved to Go and Rust and they just never looked back.

02:03:08 The fact that we were able to grow by millions of users,

02:03:13 the Python data community,

02:03:15 that is what kept the momentum for Python going.

02:03:18 And now the usage of Python for data is over 50%

02:03:21 of the overall Python user base.

02:03:24 So I’m happy to debate that on stage somewhere,

02:03:27 I don’t know if they really wanna take issue

02:03:29 with that statement, but from where I sit,

02:03:31 I think that’s true.

02:03:32 The statement there, the idea is that the switch

02:03:35 from Python 2 to Python 3 would have probably

02:03:39 destroyed Python if it didn’t also coincide with Python

02:03:43 for whatever reason,

02:03:45 just overtaking the data science community,

02:03:49 anything that processes data.

02:03:51 So like the timing was perfect that this maybe

02:03:55 imperfect decision was coupled with a great timing

02:03:59 on the value of data in our world.

02:04:02 I would say the troubled execution of a good decision.

02:04:04 It was a decision that was necessary.

02:04:07 It’s possible if we had more resources,

02:04:08 we could have done in a way that was a little bit smoother,

02:04:11 but ultimately, the arguments for Python 3,

02:04:15 I bought them at the time and I buy them now, right?

02:04:17 Having great text handling is like a nonnegotiable

02:04:20 table stakes thing you need to have in a language.
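
As a minimal illustration of what that text-handling argument was about: Python 3 separates text from bytes, where Python 2's single string type silently conflated them. The strings below are just examples:

    # Python 3 keeps text (str) and raw bytes (bytes) as distinct types,
    # so encoding has to be an explicit, visible step.
    text = "naïve café"              # str: a sequence of Unicode code points
    data = text.encode("utf-8")      # bytes: the encoded wire format
    assert data.decode("utf-8") == text

    # Mixing the two fails loudly in Python 3, where Python 2 would
    # guess an encoding and sometimes silently corrupt the data:
    try:
        "café" + b"..."
    except TypeError:
        print("str and bytes don't mix")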

02:04:23 So that’s great, but the execution,

02:04:29 Python is the, it’s volunteer driven.

02:04:33 It’s like now the most popular language on the planet,

02:04:34 but it’s all literally volunteers.

02:04:37 So the lack of resources meant that they had to really,

02:04:40 they had to do things in a very hamstrung way.

02:04:43 And I think to carry the Python momentum in the language

02:04:46 through that time, the data movement

02:04:48 was a critical part of that.

02:04:49 So some of it is carrot and stick, I actually have to

02:04:54 shamefully admit that it took me a very long time

02:04:57 to switch from Python 2 to Python 3

02:04:58 because I’m a machine learning person.

02:05:00 It was just for the longest time,

02:05:01 you could just do fine with Python 2.

02:05:03 Right.

02:05:04 But I think the moment where I switched everybody

02:05:09 I worked with and switched myself for small projects

02:05:13 and big is when finally, when NumPy announced

02:05:17 that they’re going to end support like in 2020

02:05:21 or something like that.

02:05:22 Right.

02:05:23 So like when I realized, oh, this isn’t going,

02:05:26 this is going to end.

02:05:27 Right.

02:05:28 So that’s the stick, that’s not a carrot.

02:05:29 That’s not, so for the longest time it was carrots.

02:05:31 It was like all of these packages were saying,

02:05:34 okay, we have Python 3 support now, come join us.

02:05:37 We have Python 2 and Python 3, but when NumPy,

02:05:40 one of the packages I sort of love and depend on

02:05:43 said like, nope, it’s over.

02:05:47 That’s when I decided to switch.

02:05:50 I wonder if you think it was possible much earlier

02:05:53 for somebody like NumPy or some major package

02:05:58 to step into the cold and say like we’re ending this.

02:06:03 Well, it’s a chicken and egg problem too, right?

02:06:05 You don’t want to cut off a lot of users

02:06:07 unless you see the user momentum going too.

02:06:09 So the decisions for the scientific community

02:06:12 for each of the different projects,

02:06:14 you know, there’s not a monolith.

02:06:15 Some projects are like, we’ll only be releasing

02:06:17 new features on Python 3.

02:06:18 And that was more of a sticky carrot, right?

02:06:21 A firm carrot, if you will, a firm carrot.

02:06:26 A stick shaped carrot.

02:06:27 But then for others, yeah, NumPy in particular,

02:06:30 cause it’s at the base of the dependency stack

02:06:32 for so many things, that was the final stick.

02:06:36 That was a stick shaped stick.

02:06:37 People were saying, look, if I have to keep maintaining

02:06:40 my releases for Python 2, that’s that much less energy

02:06:43 that I can put into making things better

02:06:45 for the Python 3 folks or in my new version,

02:06:48 which is of course going to be Python 3.

02:06:49 So people were also getting kind of pulled by this tension.

02:06:53 So the overall community sort of had a lot of input

02:06:56 into when the NumPy core folks decided

02:06:58 that they would end-of-life Python 2.

02:07:01 So, these numbers are a little bit loose,

02:07:04 but there are about 10 million Python programmers

02:07:06 in the world, you could argue that number,

02:07:08 but let’s say 10 million.

02:07:10 That’s actually where I was looking,

02:07:12 said 27 million total programmers, developers in the world.

02:07:17 You mentioned in a talk that changes need to be made

02:07:20 for there to be 100 million Python programmers.

02:07:24 So first of all, do you see a future

02:07:26 where there’s 100 million Python programmers?

02:07:28 And second, what kind of changes need to be made?

02:07:31 So Anaconda and Miniconda get downloaded

02:07:33 about a million times a week.

02:07:34 So I think the idea that there’s only

02:07:37 10 million Python programmers in the world

02:07:39 is a little bit undercounting.

02:07:41 There are a lot of people who escape traditional counting

02:07:44 that are using Python and data in their jobs.

02:07:48 I do believe that the future world for it to,

02:07:52 well, the world I would like to see

02:07:53 is one where people are data literate.

02:07:56 So they are able to use tools

02:07:58 that let them express their questions and ideas fluidly.

02:08:03 And the data variety and data complexity will not go down.

02:08:06 It will only keep increasing.

02:08:08 So I think some level of code or code like things

02:08:12 will continue to be relevant.

02:08:15 And so my hope is that we can build systems

02:08:19 that allow people to more seamlessly integrate

02:08:22 Python kinds of expressivity with data systems

02:08:26 and operationalization methods that are much more seamless.

02:08:31 And what I mean by that is, you know,

02:08:32 right now you can’t punch Python code into an Excel cell.

02:08:35 I mean, there’s some tools you can do to kind of do this.

02:08:37 We didn’t build a thing for doing this back in the day,

02:08:39 but I feel like the total addressable market

02:08:43 for Python users, if we do the things right,

02:08:46 is on the order of the Excel users,

02:08:49 which is, you know, a few hundred million.

02:08:51 So I think Python has to get better at being embedded,

02:08:57 you know, being a smaller thing that pulls in

02:08:59 just the right parts of the ecosystem

02:09:01 to run numerics and do data exploration,

02:09:05 meeting people where they’re already at

02:09:07 with their data and their data tools.

02:09:09 And then I think also it has to be easier

02:09:12 to take some of those things they’ve written

02:09:14 and flow those back into deployed systems

02:09:17 or little apps or visualizations.

02:09:19 I think if we don’t do those things,

02:09:20 then we will always be kept in a silo

02:09:23 as sort of an expert user’s tool

02:09:25 and not a tool for the masses.
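
As a hedged sketch of what meeting people where they are can look like today: pandas can already round-trip a spreadsheet, so an analyst's workbook can flow through Python and back. The file, sheet, and column names below are hypothetical:

    import pandas as pd

    # Pull a sheet an analyst already maintains, compute in Python,
    # and push the result back out as a spreadsheet they can open.
    sales = pd.read_excel("q3_sales.xlsx", sheet_name="raw")
    summary = (sales.groupby("region")["revenue"]
                    .sum()
                    .sort_values(ascending=False))
    summary.to_excel("q3_summary.xlsx")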

02:09:27 You know, I work with a bunch of folks

02:09:28 in the Adobe Creative Suite,

02:09:32 and I’m kind of forcing them or inspired them

02:09:35 to learn Python, to do a bunch of stuff that helps them.

02:09:38 And it’s interesting, because they probably

02:09:39 wouldn’t call themselves Python programmers,

02:09:41 but they’re all using Python.

02:09:43 I would love it if the tools like Photoshop and Premiere

02:09:46 and all those kinds of tools that are targeted

02:09:48 towards creative people, I guess that’s where Excel,

02:09:52 Excel is targeted towards a certain kind of audience

02:09:54 that works with data, financial people,

02:09:56 all that kind of stuff, if there would be easy ways

02:10:00 to leverage Python for quick scripting tasks.

02:10:03 And you know, there’s an exciting application

02:10:06 of artificial intelligence in this space

02:10:09 that I’m hopeful about, looking at open AI codecs

02:10:13 with generating programs.

02:10:16 So almost helping people bridge the gap

02:10:20 from kind of visual interface to generating programs,

02:10:25 to something formal, and then they can modify it and so on,

02:10:28 but kind of without having to read the manual,

02:10:32 without having to do a Google search and stack overflow,

02:10:34 which is essentially what a neural network does

02:10:36 when it’s doing code generation,

02:10:39 is actually generating code and allowing a human

02:10:42 to communicate with multiple programs,

02:10:44 and then maybe even programs to communicate

02:10:46 with each other via Python.

02:10:48 So that to me is a really exciting possibility,

02:10:51 because I think there’s a friction to kind of,

02:10:55 like how do I learn how to use Python in my life?

02:10:58 There’s oftentimes you kind of start a class,

02:11:03 you start learning about types, I don’t know, functions.

02:11:07 Like this is, you know, Python is the first language

02:11:09 with which you start to learn to program.

02:11:11 But I feel like that’s going to take a long time

02:11:16 for you to understand why it’s useful.

02:11:18 You almost want to start with a script.

02:11:20 Well, you do, in fact.

02:11:22 I think starting with the theory behind programming languages

02:11:24 and types and all that, I mean,

02:11:26 types are there to make the compiler writers' jobs easier.

02:11:30 Types are not, I mean, heck, do you have an ontology

02:11:32 of types for just the objects on this table?

02:11:34 No.

02:11:35 So types are there because compiler writers are human

02:11:39 and they’re limited in what they can do.

02:11:40 But I think that the beauty of scripting,

02:11:45 like there’s a Python book that’s called

02:11:47 Automate the Boring Stuff,

02:11:49 which is exactly the right mentality.
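
In that spirit, the unit of value is the afternoon script. Here is a minimal sketch of the kind of thing that mentality points at, with the folder and naming scheme made up for illustration:

    import pathlib

    # Rename a camera dump's photos into a tidy, sortable scheme.
    # The directory and filename prefix here are hypothetical.
    folder = pathlib.Path("~/Pictures/camera_dump").expanduser()
    for i, photo in enumerate(sorted(folder.glob("*.jpg")), start=1):
        photo.rename(photo.with_name(f"vacation_{i:03d}.jpg"))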

02:11:51 I grew up with computers in a time when I could,

02:11:56 when Steve Jobs was still pitching these things

02:11:58 as bicycles for the mind.

02:11:59 They were supposed to not be just media consumption devices,

02:12:03 but they were actually, you could write some code.

02:12:05 You could write basic, you could write some stuff

02:12:07 to do some things.

02:12:09 And that feeling of a computer as a thing

02:12:12 that we can use to extend ourselves

02:12:14 has all but evaporated for a lot of people.

02:12:17 So you see a little bit of it in parts

02:12:19 of the current generation of youth

02:12:21 around Minecraft or Roblox, right?

02:12:23 And I think Python, CircuitPython,

02:12:25 these things could be a renaissance of that,

02:12:28 of people actually shaping and using their computers

02:12:32 as computers, as an extension of their minds

02:12:35 and their curiosity, their creativity.

02:12:37 So you talk about scripting the Adobe Suite with Python

02:12:41 in the 3D graphics world.

02:12:42 Python is a scripting language

02:12:46 that some of these 3D graphics suites use.

02:12:48 And I think that’s great.

02:12:49 We should better support those kinds of things.

02:12:51 But ultimately the idea that I should be able

02:12:54 to have power over my computing environment.

02:12:56 If I want these things to happen repeatedly all the time,

02:12:59 I should be able to say that somehow to the computer, right?

02:13:02 Now, whether the operating systems get there faster

02:13:06 by having some Siri backed with OpenAI or whatever.

02:13:09 So you can just say, Siri, make this do this

02:13:10 and this and every other Friday, right?

02:13:12 We probably will get there somewhere.

02:13:14 And Apple’s always had these ideas.

02:13:15 There’s the Apple script in the menu that no one ever uses,

02:13:19 but you can do these kinds of things.

02:13:21 But when you start doing that kind of scripting,

02:13:23 the challenge isn’t learning the type system

02:13:25 or even the syntax of the language.

02:13:27 The challenge is all of the dictionaries

02:13:29 and all the objects of all their properties

02:13:31 and attributes and parameters.

02:13:32 Like who’s got time to learn all that stuff, right?

02:13:35 So that’s when then programming by prototype

02:13:38 or by example becomes the right way

02:13:40 to get the user to express their desire.

02:13:43 So there’s a lot of these different ways

02:13:45 that we can approach programming.

02:13:46 But I do think when, as you were talking

02:13:48 about the Adobe scripting thing,

02:13:49 I was thinking about, you know,

02:13:51 when we do use something like NumPy,

02:13:53 when we use things in the Python data

02:13:55 and scientific, let’s say, expression system,

02:14:00 there’s a reason we use that,

02:14:01 which is that it gives us mathematical precision.

02:14:04 It gives us actually quite a lot of precision

02:14:06 over precisely what we mean about this data set,

02:14:09 that data set, and it’s the fact

02:14:11 that we can have that precision

02:14:13 that lets Python be powerful as duct tape for data.

02:14:18 You know, you give me a TSV or a CSV,

02:14:21 and if you give me some massively expensive vendor tool

02:14:25 for data transformation,

02:14:26 I don’t know I’m gonna be able to solve your problem.

02:14:28 But if you give me a Python prompt,

02:14:30 you can throw whatever data you want at me.

02:14:32 I will be able to mash it into shape.

02:14:34 So that ability to take it as sort of this like,

02:14:38 you know, machete out into the data jungle

02:14:40 is really powerful.
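
A minimal example of that duct-tape-for-data idea, assuming a hypothetical tab-separated export with junk header rows, inconsistent column names, and mixed missing-value markers:

    import pandas as pd

    # Whatever shape the data arrives in, a few lines mash it into shape.
    # The file name, separator, and columns are all hypothetical.
    df = pd.read_csv("export.tsv", sep="\t", skiprows=2,
                     na_values=["N/A", "-", ""])
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["date"] = pd.to_datetime(df["date"], errors="coerce")
    clean = df.dropna(subset=["date"]).drop_duplicates()
    clean.to_csv("clean.csv", index=False)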

02:14:41 And I think that’s why at some level,

02:14:44 we’re not gonna get away from some of these expressions

02:14:47 and APIs and libraries in Python for data transformation.

02:14:53 You’ve been at the center of the Python community

02:14:57 for many years.

02:14:58 If you could change one thing about the community

02:15:03 to help it grow, to help it improve,

02:15:05 to help it flourish and prosper, what would it be?

02:15:09 I mean, you know, it doesn’t have to be one thing,

02:15:11 but what kind of comes to mind?

02:15:13 What are the challenges?

02:15:15 Humility is one of the values that we have

02:15:16 at Anaconda at the company,

02:15:17 but it’s also one of the values in the community.

02:15:21 That it’s been breached a little bit in the last few years,

02:15:24 but in general, people are quite decent

02:15:27 and reasonable and nice.

02:15:29 And that humility prevents them from seeing

02:15:34 the greatness that they could have.

02:15:36 I don’t know how many people in the core Python community

02:15:40 really understand that they stand perched at the edge

02:15:46 of an opportunity to transform how people use computers.

02:15:50 And actually, PyCon, I think it’s the last physical PyCon

02:15:52 I went to, Russell Keith-Magee gave a great keynote

02:15:56 about very much along the lines of the challenges I have,

02:16:00 which is, Python, for a language that can't actually

02:16:04 put an interface up

02:16:05 on the most popular computing devices,

02:16:09 it’s done really well as a language, hasn’t it?

02:16:11 You can’t write a web front end with Python, really.

02:16:13 I mean, everyone uses JavaScript.

02:16:15 You certainly can’t write native apps.

02:16:17 So for a language that you can’t actually write apps

02:16:20 in any of those front end runtime environments,

02:16:22 Python’s done exceedingly well.

02:16:26 And so that wasn’t to pat ourselves on the back.

02:16:28 That was to challenge ourselves as a community to say,

02:16:30 we, through our current volunteer dynamic,

02:16:32 have gotten to this point.

02:16:34 What comes next and how do we seize,

02:16:36 you know, we’ve caught the tiger by the tail.

02:16:38 How do we make sure we keep up with it as it goes forward?

02:16:40 So that’s one of the questions I have

02:16:42 about sort of open source communities,

02:16:44 is at its best, there’s a kind of humility.

02:16:48 Does that humility prevent you from having a vision

02:16:52 for creating something very new and powerful?

02:16:55 And you’ve brought us back to consciousness again.

02:16:57 The collaboration is a swarm emergent dynamic.

02:17:00 Humility lets these people work together

02:17:02 without anyone trouncing anyone else.

02:17:04 How do they, you know, in consciousness,

02:17:07 there’s the question of the binding problem.

02:17:08 How does a singular, our attention,

02:17:10 how does that emerge from billions of neurons?

02:17:13 So how can you have a swarm of people emerge a consensus

02:17:17 that has a singular vision to say, we will do this.

02:17:20 And most importantly, we’re not gonna do these things.

02:17:23 Emerging a coherent, pointed, focused leadership dynamic

02:17:29 from a collaboration, being able to do that kind of,

02:17:32 and then dissolve it so people can still do

02:17:34 the swarm thing, that’s a problem, that’s a question.

02:17:37 So do you have to have a charismatic leader?

02:17:40 For some reason, Linus Torvalds comes to mind,

02:17:42 but there’s people who criticize.

02:17:44 He rules with an iron fist, man.

02:17:46 But there’s still charisma to it.

02:17:48 There is charisma, right?

02:17:49 There’s a charisma to that iron fist.

02:17:51 There’s, every leader’s different, I would say,

02:17:55 in their success.

02:17:56 So he doesn’t, I don’t even know if you can say

02:17:59 he doesn’t have humility, there’s such a meritocracy

02:18:04 of ideas that like, this is a good idea

02:18:09 and this is a bad idea.

02:18:10 There’s a step function to it.

02:18:11 Once you clear a threshold, he’s open.

02:18:13 Once you clear the bozo threshold,

02:18:15 he’s open to your ideas, I think, right?

02:18:17 But see, the interesting thing is obviously

02:18:20 that will not stand in an open source community

02:18:23 if that threshold that is defined

02:18:25 by that one particular person is not actually that good.

02:18:30 So you actually have to be really excellent at what you do.

02:18:33 So he’s very good at what he does.

02:18:37 And so there’s some aspect of leadership

02:18:39 where you can get thrown out, people can just leave.

02:18:42 That’s how it works with open source, the fork.

02:18:45 But at the same time, you want to sometimes be a leader

02:18:49 like with a strong opinion, because people,

02:18:52 I mean, there’s some kind of balance here

02:18:54 for this like hive mind to get like behind.

02:18:57 Leadership is a big topic.

02:18:58 And I didn’t, I’m not one of these guys

02:18:59 that went to MBA school and said,

02:19:01 I’m gonna be an entrepreneur and I’m gonna be a leader.

02:19:03 And I’m gonna read all these Harvard Business Review

02:19:05 articles on leadership and all this other stuff.

02:19:07 Like I was a physicist turned into a software nerd

02:19:10 who then really like nerded out on Python.

02:19:13 And now I am entrepreneurial, right?

02:19:14 I saw a business opportunity around the use

02:19:16 of Python for data.

02:19:17 But for me, what has been interesting over this journey

02:19:20 with the last 10 years is how much I started really

02:19:25 enjoying the understanding, thinking deeper

02:19:28 about organizational dynamics and leadership.

02:19:31 And leadership does come down to a few core things.

02:19:35 Number one, a leader has to create belief

02:19:40 or at least has to dispel disbelief.

02:19:44 Leadership also, you have to have vision,

02:19:46 loyalty and experience.

02:19:49 So can you say belief in a singular vision?

02:19:52 Like what does belief mean?

02:19:53 Yeah, belief means a few things.

02:19:55 Belief means here’s what we need to do

02:19:57 and this is a valid thing to do and we can do it.

02:20:01 That you have to be able to drive that belief.

02:20:06 And every step of leadership along the way

02:20:08 has to help you amplify that belief to more people.

02:20:12 I mean, I think at a fundamental level, that’s what it is.

02:20:15 You have to have a vision.

02:20:17 You have to be able to show people that,

02:20:20 or you have to convince people to believe in the vision

02:20:23 and to get behind you.

02:20:25 And that’s where the loyalty part comes in

02:20:26 and the experience part comes in.

02:20:28 There’s all different flavors of leadership.

02:20:30 So if we talk about Linus, we could talk about Elon Musk

02:20:34 and Steve Jobs, there’s Sundar Pichai.

02:20:38 There’s people that kind of put themselves at the center

02:20:40 and are strongly opinionated.

02:20:42 And some people are more like consensus builders.

02:20:45 What works well for open source?

02:20:47 What works well in the space of programmers?

02:20:49 So you’ve been a programmer, you’ve led many programmers

02:20:53 that are now sort of at the center of this ecosystem.

02:20:55 What works well in the programming world would you say?

02:20:58 It really depends on the people.

02:21:01 What style of leadership is best?

02:21:02 And it depends on the programming community.

02:21:04 I think for the Python community,

02:21:06 servant leadership is one of the values.

02:21:08 At the end of the day, the leader has to also be

02:21:11 the high priest of values, right?

02:21:14 So any collection of people has values of their living.

02:21:19 And if you want to maintain certain values

02:21:23 and those values help you as an organization

02:21:26 become more powerful,

02:21:27 then the leader has to live those values unequivocally

02:21:30 and has to hold the values.

02:21:33 So in our case, in this collaborative community

02:21:36 around Python, I think that the humility

02:21:41 is one of those values.

02:21:42 Servant leadership, you actually have to kind of do the stuff.

02:21:45 You have to walk the walk, not just talk the talk.

02:21:49 I don’t feel like the Python community really demands

02:21:52 that much from a vision standpoint.

02:21:53 And they should.

02:21:54 And I think they should.

02:21:56 This is the interesting thing is like so many people

02:22:00 use Python, from where comes the vision?

02:22:04 You know, like you have an Elon Musk type character

02:22:07 who makes bold statements about the vision

02:22:12 for particular companies he’s involved with.

02:22:14 And it’s like, I think a lot of people that work

02:22:18 at those companies kind of can only last

02:22:22 if they believe that vision.

02:22:24 And some of it is super bold.

02:22:26 So my question is, and by the way,

02:22:28 those companies often use Python.

02:22:32 How do you establish a vision?

02:22:33 Like get to 100 million users, right?

02:22:37 Get to where, you know, Python is at the center

02:22:42 of the data science,

02:22:46 machine learning, deep learning,

02:22:48 artificial intelligence revolution, right?

02:22:51 Like in many ways, perhaps the Python community

02:22:54 is not thinking of it that way,

02:22:55 but it’s leading the way on this.

02:22:58 Like the tooling is like essential.

02:23:01 Right, well, you know, for a while,

02:23:03 PyCon people in the scientific Python

02:23:05 and the PyData community, they would submit talks.

02:23:09 That was early 2010s, mid 2010s.

02:23:12 They would submit talks for PyCon

02:23:14 and the talks would all be rejected

02:23:15 because there was the separate sort of PyData conferences.

02:23:18 And like, well, these probably belong more to PyData.

02:23:21 And instead there’d be yet another talk about, you know,

02:23:23 threads and, you know, whatever, some web framework.

02:23:26 And it’s like, that was an interesting dynamic to see

02:23:29 that there was, I mean, at the time it was a little annoying

02:23:32 because we wanted to try to get more users

02:23:34 and get more people talking about these things.

02:23:35 And PyCon is a huge venue, right?

02:23:37 It’s thousands of Python programmers.

02:23:40 But then also came to appreciate that, you know,

02:23:42 parallel, having an ecosystem that allows parallel

02:23:45 innovation is not bad, right?

02:23:47 There are people doing embedded Python stuff.

02:23:49 There’s people doing web programming,

02:23:50 people doing scripting, there’s cyber uses of Python.

02:23:53 I think the, ultimately at some point,

02:23:55 if your slime mold covers so much stuff,

02:23:58 you have to respect that different things are growing

02:24:00 in different areas and different niches.

02:24:02 Now, at some point that has to come together

02:24:04 and the central body has to provide resources.

02:24:07 The principle here is subsidiarity.

02:24:09 Give resources to the various groups

02:24:11 to then allocate as they see fit in their niches.

02:24:15 That would be a really helpful dynamic.

02:24:16 But again, it’s a volunteer community.

02:24:17 It’s not like they had that many resources to start with.

02:24:21 What was or is your favorite programming setup?

02:24:23 What operating system, what keyboard,

02:24:25 how many screens?

02:24:29 What time of day? Are you drinking coffee, tea?

02:24:32 Tea, sometimes coffee, depending on how well I slept.

02:24:36 I used to have.

02:24:37 How much sleep do you get?

02:24:38 Are you a night owl?

02:24:39 I remember somebody asked you somewhere,

02:24:41 a question about work life balance.

02:24:44 Not just work life balance, but like a family,

02:24:47 you lead a company and your answer was basically like,

02:24:52 I still haven’t figured it out.

02:24:54 Yeah, I think I’ve gotten to a little bit better balance.

02:24:56 I have a really great leadership team now supporting me

02:24:58 and so that takes a lot of the day to day stuff

02:25:01 off my plate and my kids are getting a little older.

02:25:04 So that helps.

02:25:05 So, and of course I have a wonderful wife

02:25:07 who takes care of a lot of the things

02:25:09 that I’m not able to take care of and she’s great.

02:25:11 I try to get to sleep earlier now

02:25:13 because I have to get up every morning at six

02:25:15 to take my kid down to the bus stop.

02:25:17 So there’s a hard thing.

02:25:19 For a while I was doing polyphasic sleep,

02:25:21 which is really interesting.

02:25:22 Like I go to bed at nine, wake up at like 2 a.m.,

02:25:24 work till five, sleep three hours, wake up at eight.

02:25:27 Like that was actually, it was interesting.

02:25:29 It wasn’t too bad.

02:25:30 How did it feel?

02:25:31 It was good.

02:25:32 I didn’t keep it up for years, but once I have travel,

02:25:34 then it just, everything goes out the window, right?

02:25:37 Because then you’re like time zones and all these things.

02:25:39 Socially, was it acceptable? Like, were you able to live

02:25:42 in normal society,

02:25:43 outside of how you felt?

02:25:45 Oh yeah, because like on the nights

02:25:47 that I wasn’t out hanging out with people or whatever,

02:25:48 going to bed at nine, no one cares.

02:25:50 I wake up at two, I’m still responding to their slacks,

02:25:52 emails, whatever, and you know, shitposting on Twitter

02:25:56 or whatever at two in the morning is great, right?

02:25:59 And then you go to bed for a few hours and you wake up,

02:26:02 it’s like you had an extra day in the middle.

02:26:04 And I’d read somewhere that humans naturally

02:26:06 have biphasic sleep or something, I don’t know.

02:26:09 I read basically everything somewhere.

02:26:11 So every option of everything.

02:26:13 Every option of everything.

02:26:14 I will say that that worked out for me for a while,

02:26:16 although I don’t do it anymore.

02:26:18 In terms of programming setup,

02:26:19 I had a 27 inch high DPI setup that I really liked,

02:26:24 but then I moved to a curved monitor

02:26:26 just because when I moved to the new house,

02:26:28 I wanted to have a bit more screen for Zoom plus communications

02:26:32 plus various kinds of things.

02:26:33 So it’s like one large monitor.

02:26:35 One large curved monitor.

02:26:38 What operating system?

02:26:39 Mac.

02:26:40 Okay. Yeah.

02:26:41 Is that what happens when you become important,

02:26:43 is you stop using Linux and Windows?

02:26:46 No, I actually have a Windows box as well

02:26:48 on the next table over, but I have three desks, right?

02:26:54 So the main one is the standing desk so that I can,

02:26:57 whatever, when I’m like, I have a teleprompter set up

02:26:59 and everything else.

02:27:00 And then I’ve got my iMac and then eGPU and then Windows PC.

02:27:06 The reason I moved to Mac was it’s got a Linux prompt

02:27:10 or no, sorry, it’s got a Unix prompt

02:27:13 so I can do all my stuff, but then I don’t have to worry.

02:27:18 Like when I’m presenting for clients

02:27:19 or investors or whatever, like it,

02:27:21 I don’t have to worry about any like ACPI related

02:27:25 fsck things in the middle of a presentation,

02:27:27 like none of that.

02:27:28 It just, it will always wake from sleep

02:27:30 and it won’t kernel panic on me.

02:27:32 And this is not a dig against Linux,

02:27:34 except that I just, I feel really bad.

02:27:38 I feel like a traitor to my community saying this, right?

02:27:40 But in 2012, I was just like, okay, start my own company.

02:27:43 What do I get?

02:27:44 And Linux laptops were just not quite there.

02:27:47 And so I’ve just stuck with Macs.

02:27:48 Can I just defend something that nobody respectable

02:27:51 seems to do, which is, so I dual boot Linux and Windows,

02:27:55 but in Windows, I have the Windows Subsystem

02:27:59 for Linux or whatever, WSL.

02:28:02 And I find myself being able to handle everything I need,

02:28:06 or almost everything I need, in Linux

02:28:08 for basic sort of tasks, scripting tasks within WSL

02:28:11 and it creates a really nice environment.
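For concreteness, a small sketch of the kind of basic scripting task that works well inside WSL: Linux-side Python reaching Windows-side files through the /mnt/c mount. The path below is a hypothetical example.

```python
# A sketch of a one-off utility script run inside WSL: Linux-side Python
# reading Windows-side files through the /mnt/c mount. Path is hypothetical.
from pathlib import Path

downloads = Path("/mnt/c/Users/you/Downloads")  # a Windows folder, seen from WSL

# Tally file sizes by extension and print the biggest categories first.
totals = {}
for f in downloads.iterdir():
    if f.is_file():
        key = f.suffix or "(no extension)"
        totals[key] = totals.get(key, 0) + f.stat().st_size

for ext, size in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{ext:16s} {size / 1e6:8.1f} MB")
```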

02:28:12 So I’ve been, but like whenever I hang out with like,

02:28:15 especially important people,

02:28:17 like they’re all on iPhone and a Mac

02:28:20 and it’s like, yeah, like what,

02:28:23 there is a messiness to Windows and a messiness to Linux

02:28:27 that makes me feel like you’re still in it.

02:28:31 Well, the Linux stuff, Windows Subsystem for Linux,

02:28:34 is very tempting, but there’s still the Windows

02:28:37 on the outside where I don’t know where,

02:28:40 and I’ve been, okay, I’ve used DOS since version 1.11

02:28:44 or 1.21 or something.

02:28:45 So I’ve been a long time Microsoft user.

02:28:48 And I will say that like, it’s really hard

02:28:52 for me to know where anything is,

02:28:53 how to get to the details behind something

02:28:55 when something screws up, as it invariably does,

02:28:57 and just things like changing group permissions

02:28:59 on some shared folders and stuff,

02:29:01 just everything seems a little bit more awkward,

02:29:03 more clicks than it needs to be.

02:29:06 Not to say that there aren’t weird things

02:29:07 like hidden attributes and all this other happy stuff

02:29:09 on Mac, but for the most part,

02:29:14 and well, actually, especially now

02:29:15 with the new hardware coming out on Mac,

02:29:16 it’ll be very interesting with the new M1.

02:29:20 There were some dark years in the last few years

02:29:21 when I was like, I think maybe I have to move off of Mac

02:29:24 as a platform, but this, I mean,

02:29:27 like my keyboard was just not working.

02:29:29 Like literally my keyboard just wasn’t working, right?

02:29:31 I had this touch bar, didn’t have a physical escape button

02:29:33 like I needed to because I used Vim,

02:29:35 and now I think we’re back, so.

02:29:37 So you use Vim and you have a, what kind of keyboard?

02:29:40 So I use a RealForce 87U, it’s a mechanical,

02:29:44 it’s a Topre keyswitch.

02:29:45 Like, is it a weird shape? No, it’s a normal shape. Okay.

02:29:48 Well, no, because I say that because I use a Kinesis,

02:29:51 and you said some dark, you said you had dark moments.

02:29:55 I recently had a dark moment,

02:29:57 I was like, what am I doing with my life?

02:29:58 So I remember sort of flying in a very kind of tight space,

02:30:03 and as I’m working, this is what I do on an airplane.

02:30:06 I pull out a laptop, and on top of the laptop,

02:30:09 I’ll put a Kinesis keyboard.

02:30:11 That’s hardcore, man.

02:30:12 I was thinking, is this who I am?

02:30:13 Is this what I’m becoming?

02:30:15 Will I be this person?

02:30:16 Because I’m on Emacs with this Kinesis keyboard,

02:30:18 sitting like with everybody around.

02:30:21 Emacs on Windows.

02:30:23 On WSL, yeah.

02:30:25 Yeah, Emacs on Linux on Windows.

02:30:27 Yeah, on Windows.

02:30:28 And like everybody around me is using their iPhone

02:30:32 to look at TikTok.

02:30:33 So I’m like in this land, and I thought, you know what?

02:30:36 Maybe I need to become an adult and put the 90s behind me,

02:30:40 and use like a normal keyboard.

02:30:43 And then I did some soul searching,

02:30:45 and I decided like this is who I am.

02:30:46 This is me like coming out of the closet

02:30:48 to saying I’m Kinesis keyboard all the way.

02:30:50 I’m going to use Emacs.

02:30:52 You know who else is a Kinesis fan?

02:30:55 Wes McKinney, the creator of Pandas.

02:30:56 Oh.

02:30:57 He banged out Pandas on a Kinesis keyboard, I believe.

02:31:00 I don’t know if he’s still using one, maybe,

02:31:01 but certainly 10 years ago, like he was.

02:31:04 If anyone’s out there,

02:31:05 maybe we need to have a Kinesis support group.

02:31:07 Please reach out.

02:31:08 Isn’t there already one?

02:31:09 Is there one?

02:31:10 I don’t know.

02:31:11 There’s gotta be an IRC channel, man.

02:31:12 Oh no, and you access it through Emacs.

02:31:16 Okay.

02:31:18 Do you still program these days?

02:31:19 I do a little bit.

02:31:21 Honestly, the last thing I did was I had written,

02:31:25 I was working with my son to script some Minecraft stuff.

02:31:28 So I was doing a little bit of that.

02:31:29 That was the last, literally the last code I wrote.

02:31:33 Oh, you know what?

02:31:33 Also, I wrote some code to do some cap table evaluation,

02:31:36 waterfall modeling kind of stuff.
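For flavor, a toy sketch of what cap table waterfall modeling can look like in Python; the share classes, preferences, and numbers are entirely hypothetical, and this ignores real-world wrinkles like participating preferred, conversion decisions, and option pools.

```python
# Toy exit-waterfall model. All holders and numbers are hypothetical, and this
# ignores participating preferred, conversion decisions, option pools, etc.

def waterfall(exit_value, preferences, common_shares):
    """preferences: list of (holder, liquidation_preference), most senior first.
    common_shares: dict mapping holder -> number of common shares."""
    payouts = {}
    remaining = exit_value
    # 1. Pay liquidation preferences in seniority order.
    for holder, pref in preferences:
        paid = min(pref, remaining)
        payouts[holder] = paid
        remaining -= paid
    # 2. Distribute whatever is left pro rata across common shares.
    total = sum(common_shares.values())
    for holder, shares in common_shares.items():
        payouts[holder] = payouts.get(holder, 0.0) + remaining * shares / total
    return payouts

print(waterfall(
    exit_value=50_000_000,
    preferences=[("Series B", 10_000_000), ("Series A", 5_000_000)],
    common_shares={"Founders": 6_000_000, "Employees": 1_000_000},
))
```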

02:31:39 What advice would you give to a young person,

02:31:41 you said your son, today, in high school,

02:31:44 maybe even college, about career, about life?

02:31:48 This may be where I get into trouble a little bit.

02:31:51 We are coming to the end.

02:31:53 We’re rapidly entering a time between worlds.

02:31:56 So we have a world now that’s starting to really crumble

02:31:59 under the weight of aging institutions

02:32:01 that no longer even pretend to serve the purposes

02:32:04 they were created for.

02:32:05 We are creating technologies that are hurtling billions

02:32:09 of people headlong into philosophical crises

02:32:11 who don’t even know the philosophical operating systems

02:32:14 in their firmware.

02:32:15 And they’re heading into a time when that gets vaporized.

02:32:17 So for people in high school,

02:32:20 and certainly I tell my son this as well,

02:32:21 he’s in middle school, people in college,

02:32:24 you are going to have to find your own way.

02:32:29 You’re going to have to have a pioneer spirit,

02:32:31 even if you live in the middle

02:32:34 of the most dense urban environment.

02:32:36 All of human reality around you

02:32:40 is the result of the last few generations of humans

02:32:44 agreeing to play certain kinds of games.

02:32:47 A lot of those games no longer operate

02:32:51 according to the rules they used to.

02:32:55 Collapse is nonlinear, but it will be managed.

02:32:58 And so if you are in a particular social caste

02:33:02 or economic caste,

02:33:03 and I think it’s not kosher to say that about America,

02:33:10 but America is a very stratified and classist society.

02:33:14 There’s some mobility, but it’s really quite classist.

02:33:17 And in America, unless you’re in the upper middle class,

02:33:20 you are headed into very choppy waters.

02:33:23 So it is really, really good to think

02:33:26 and understand the fundamentals of what you need

02:33:29 to build a meaningful life for you, your loved ones,

02:33:32 with your family.

02:33:35 And almost all of the technology being created

02:33:38 that’s consumer facing is designed to own people,

02:33:41 to take the full stack of people, to delaminate them,

02:33:47 and to own certain portions of that stack.

02:33:50 And so if you want to be an integral human being,

02:33:52 if you want to have your agency

02:33:54 and you want to find your own way in the world,

02:33:57 when you’re young would be a great time to spend time

02:34:00 looking at some of the classics

02:34:02 around what it means to live a good life,

02:34:05 what it means to build connection with people.

02:34:08 And so much of the status game, so much of the stuff,

02:34:13 one of the things that I sort of talk about

02:34:14 as we create more and more technology,

02:34:18 there’s a gradient of technology,

02:34:19 and a gradient of technology

02:34:20 always leads to a gradient of power.

02:34:22 And this is Jacques Ellul’s point to some extent as well.

02:34:25 That gradient of power is not going to go away.

02:34:27 The technologies are going so fast

02:34:29 that even people like me who helped create

02:34:32 some of the stuff, I’m being left behind.

02:34:33 Some of the cutting edge research,

02:34:34 I don’t know what’s going on in it today.

02:34:36 You know, I go read some proceedings.

02:34:38 So as the world gets more and more technological,

02:34:42 it will create more and more gradients

02:34:44 where people will seize power, economic fortunes.

02:34:48 And the way they make the people who are left behind

02:34:51 okay with their lot in life is they create lottery systems.

02:34:54 They make you take part in the narrative

02:35:00 of your own entrapment in your own economic sort of zone.

02:35:04 So avoiding those kinds of things is really important.

02:35:07 Knowing when someone is running game on you basically.

02:35:10 So these are the things I would tell young people.

02:35:12 It’s a dark message, but it’s realism.

02:35:14 I mean, that’s what I see.

02:35:15 So after you gave some realism, you sit back.

02:35:18 You sit back with your son.

02:35:19 You’re looking out at the sunset.

02:35:21 What to him can you give as words of hope and to you

02:35:27 from where do you derive hope for the future of our world?

02:35:32 So you said at the individual level,

02:35:33 you have to have a pioneer mindset

02:35:36 to go back to the classics,

02:35:38 to understand where in human nature you can find meaning.

02:35:41 But at the societal level, what trajectory,

02:35:44 when you look up possible trajectories, what gives you hope?

02:35:47 What gives me hope is that we have little tremors now

02:35:52 shaking people out of the reverie

02:35:54 of the fiction of modernity that they’ve been living in,

02:35:57 kind of a late 20th century style modernity.

02:36:00 That’s good, I think.

02:36:02 Because, and also to your point earlier,

02:36:06 people are burning out on some of the social media stuff.

02:36:08 They’re sort of seeing the ugly side,

02:36:09 especially the latest news with Facebook

02:36:11 and the whistleblower, right?

02:36:12 It’s quite clear these things are not

02:36:15 all they’re cracked up to be.

02:36:16 Do you believe, I believe better social media can be built

02:36:20 because they are burning out

02:36:21 and it’ll incentivize other competitors to be built.

02:36:25 Do you think that’s possible?

02:36:26 Well, the thing about it is that

02:36:29 when you have extractive, return-seeking

02:36:33 capital coming in and saying,

02:36:35 look, you own a network,

02:36:36 give me some exponential dynamics out of this network.

02:36:39 What are you gonna do?

02:36:39 You’re gonna just basically put a toll keeper

02:36:41 at every single node and every single graph edge,

02:36:45 every node, every vertex, every edge.

02:36:48 But if you don’t have that need for it,

02:36:49 if no one’s sitting there saying,

02:36:51 hey, Wikipedia, monetize every character,

02:36:53 every byte, every phrase,

02:36:54 then generative human dynamics will naturally sort of arise,

02:36:58 assuming we respect a few principles

02:37:01 around online communications.

02:37:03 So the greatest and biggest social network in the world

02:37:05 is still like email, SMS, right?

02:37:08 So we’re fine there.

02:37:10 The issue with the social media, as we call it now,

02:37:13 is they’re actually just new amplification systems, right?

02:37:16 Now it’s benefited certain people like yourself

02:37:18 who have interesting content to be amplified.

02:37:23 So it’s created a creator economy, and that’s cool.

02:37:25 There’s a lot of great content out there.

02:37:26 But giving everyone a shot at the fame lottery,

02:37:29 saying, hey, you could also have your,

02:37:31 if you wiggle your butt the right way on TikTok,

02:37:33 you can have your 15 seconds of micro fame.

02:37:36 That’s not healthy for society at large.

02:37:38 So I think if we can create tools that help people

02:37:41 be conscientious about their attention,

02:37:45 spend time looking at the past,

02:37:46 and really retrieving memory,

02:37:49 processing and thinking about that,

02:37:53 I think that’s certainly possible,

02:37:55 and hopefully that’s what we get.

02:37:57 So the bigger question that you’re asking

02:38:01 about what gives me hope

02:38:02 is that these early shocks of COVID lockdowns

02:38:08 and remote work and all these different kinds of things,

02:38:11 I think it’s getting people to a point

02:38:13 where they’re sort of no longer in the reverie.

02:38:19 As my friend Jim Rutt says,

02:38:21 there’s more people with ears to hear now.

02:38:23 With the pandemic and education,

02:38:26 everyone’s like, wait, wait,

02:38:27 what have you guys been doing with my kids?

02:38:28 How are you teaching them?

02:38:30 What is this crap you’re giving them as homework?

02:38:32 So I think these are the kinds of things

02:38:33 that are getting, and the supply chain disruptions,

02:38:36 getting more people to think about,

02:38:38 how do we actually just make stuff?

02:38:40 This is all good, but the concern is that

02:38:44 it’s still gonna take a while for these things,

02:38:48 for people to learn how to be agentic again,

02:38:51 and to be in right relationship with each other

02:38:53 and with the world.

02:38:55 So the message of hope is still people are resilient,

02:38:58 and we are building some really amazing technology.

02:39:01 And I also, to me, I derive a lot of hope

02:39:03 from individuals in that vein.

02:39:08 The power of a single individual to transform the world,

02:39:11 to do positive things for the world is quite incredible.

02:39:14 Now you’ve been talking about,

02:39:16 it’s nice to have as many of those individuals as possible,

02:39:18 but even the power of one, it’s kind of magical.

02:39:21 It is, it is.

02:39:22 We’re in a mode now where we can do that.

02:39:24 I think also, part of what I try to do

02:39:26 is in coming to podcasts like yours,

02:39:29 and then spamming you with all this philosophical stuff

02:39:31 that I’ve got going on,

02:39:33 there are a lot of good people out there

02:39:34 trying to put words around the current technological,

02:39:40 social, economic crises that we’re facing.

02:39:43 And in the space of a few short years,

02:39:44 I think there has been a lot of great content

02:39:46 produced around this stuff.

02:39:47 For people who wanna see, wanna find out more,

02:39:50 or think more about this,

02:39:52 we’re popularizing certain kinds of philosophical ideas

02:39:54 that move people beyond just the,

02:39:56 oh, you’re communist, oh, you’re capitalist kind of stuff.

02:39:58 Like it’s sort of, we’re way past that now.

02:40:01 So that also gives me hope,

02:40:03 that I feel like I myself am getting a handle

02:40:05 on how to think about these things.

02:40:08 It makes me feel like I can,

02:40:09 hopefully effect change for the better.

02:40:12 We’ve been sneaking up on this question all over the place.

02:40:15 Let me ask the big, ridiculous question.

02:40:17 What is the meaning of life?

02:40:20 Wow.

02:40:23 The meaning of life.

02:40:28 Yeah, I don’t know.

02:40:29 I mean, I’ve never really understood that question.

02:40:32 When you say meaning crisis,

02:40:34 you’re saying that there is a search

02:40:39 for a kind of experience

02:40:42 that could be described as fulfillment,

02:40:45 as like the aha moment of just like joy,

02:40:50 and maybe when you see something beautiful,

02:40:53 or maybe you have created something beautiful,

02:40:55 that experience that you get,

02:40:57 it feels like it all makes sense.

02:41:02 So some of that is just chemicals coming together

02:41:04 in your mind and all those kinds of things.

02:41:06 But it seems like we’re building

02:41:08 a sophisticated collective intelligence

02:41:12 that’s providing meaning in all kinds of ways

02:41:15 to its members.

02:41:17 And there’s a theme to that meaning.

02:41:20 So for a lot of history,

02:41:22 I think faith played an important role.

02:41:26 Faith in God, sort of religion.

02:41:29 I think technology in the modern era

02:41:32 is kind of serving a little bit

02:41:34 of a source of meaning for people,

02:41:36 like innovation of different kinds.

02:41:39 I think the old school things of love

02:41:43 and the basics of just being good at stuff.

02:41:47 But you were a physicist,

02:41:50 so there’s a desire to say, okay, yeah,

02:41:52 but these seem to be like symptoms of something deeper.

02:41:56 Right.

02:41:57 Like why?

02:41:57 That’s little m meaning; what’s capital M Meaning?

02:41:59 Yeah, what’s capital M meaning?

02:42:00 Why are we reaching for order

02:42:03 when there is excess of energy?

02:42:06 I don’t know if I can answer the why.

02:42:09 Any why that I come up with, I think, is gonna be,

02:42:13 I’d have to think about that a little more,

02:42:15 maybe get back to you on that.

02:42:17 But I will say this.

02:42:19 We do look at the world through a traditional,

02:42:22 I think most people look at the world through

02:42:24 what I would say is a subject-object

02:42:25 kind of metaphysical lens,

02:42:27 that we have our own subjectivity,

02:42:29 and then there’s all of these object things that are not us.

02:42:34 So I’m me, and these things are not me, right?

02:42:37 And I’m interacting with them, I’m doing things to them.

02:42:39 But a different view of the world

02:42:41 that looks at it as much more connected,

02:42:44 that realizes, oh, I’m really quite embedded

02:42:49 in a soup of other things,

02:42:50 and I’m simply almost like a standing wave pattern

02:42:54 of different things, right?

02:42:55 So when you look at the world

02:42:58 in that kind of connected sense,

02:42:59 I’ve recently taken a shine

02:43:02 to this particular thought experiment,

02:43:04 which is what if it was the case

02:43:08 that everything that we touch with our hands,

02:43:12 that we pay attention to,

02:43:13 that we actually give intimacy to,

02:43:16 what if there’s actually all the mumbo jumbo,

02:43:21 like people with the magnetic healing crystals

02:43:25 and all this other kind of stuff and quantum energy stuff,

02:43:28 what if that was a thing?

02:43:30 What if literally when your hand touches an object,

02:43:34 when you really look at something

02:43:35 and you concentrate and you focus on it

02:43:36 and you really give it attention,

02:43:39 you actually give it,

02:43:40 there is some physical residue of something,

02:43:44 a part of you, a bit of your life force that goes into it.

02:43:48 Okay, now this is of course completely mumbo jumbo stuff.

02:43:51 This is not like, I don’t actually think this is real,

02:43:53 but let’s do the thought experiment.

02:43:55 What if it was?

02:43:57 What if there actually was some quantum magnetic crystal

02:44:01 and energy field thing that just by touching this can,

02:44:05 this can has changed a little bit somehow.

02:44:08 And it’s not much unless you put a lot into it

02:44:11 and you touch it all the time, like your phone, right?

02:44:15 These things, they gain meaning to you a little bit,

02:44:19 but what if there’s something that,

02:44:23 technical objects, the phone is a technical object,

02:44:25 it does not really receive attention or intimacy

02:44:29 and then allow itself to be transformed by it.

02:44:31 But if it’s a piece of wood,

02:44:33 if it’s the handle of a knife that your mother used

02:44:36 for 20 years to make dinner for you, right?

02:44:40 What if it’s the keyboard that you banged out

02:44:43 your world-transforming software library on?

02:44:46 These are technical objects

02:44:47 and these are physical objects,

02:44:48 but somehow there’s something to them.

02:44:51 We feel an attraction to these objects

02:44:53 as if we have imbued them with life energy, right?

02:44:56 So if you walk that thought experiment through,

02:44:58 what happens when we touch another person,

02:45:00 when we hug them, when we hold them?

02:45:03 And the reason this ties into my answer for your question

02:45:07 is that if there is such a thing,

02:45:12 if there is such a thing,

02:45:13 if we were to hypothesize, you know,

02:45:15 hypothesize that there is such a thing,

02:45:18 it could be that the purpose of our lives

02:45:23 is to imbue as many things with that love as possible.

02:45:30 That’s a beautiful answer

02:45:32 and a beautiful way to end it, Peter.

02:45:35 You’re an incredible person.

02:45:36 Thank you.

02:45:37 Spanning so much in the space of engineering

02:45:41 and in the space of philosophy.

02:45:44 I’m really proud to be living in the same city as you

02:45:49 and I’m really grateful

02:45:51 that you would spend your valuable time with me today.

02:45:53 Thank you so much.

02:45:53 Well, thank you.

02:45:54 I appreciate the opportunity to speak with you.

02:45:56 Thanks for listening to this conversation with Peter Wang.

02:45:59 To support this podcast,

02:46:00 please check out our sponsors in the description.

02:46:03 And now let me leave you with some words

02:46:05 from Peter Wang himself.

02:46:07 We tend to think of people

02:46:09 as either malicious or incompetent,

02:46:12 but in a world filled with corruptible

02:46:15 and unchecked institutions,

02:46:17 there exists a third thing, malicious incompetence.

02:46:21 It’s a social cancer

02:46:22 and it only appears once human organizations scale

02:46:26 beyond personal accountability.

02:46:27 Thank you for listening and hope to see you next time.