Jaron Lanier: Virtual Reality, Social Media & the Future of Humans and AI #218

Transcript

00:00:00 The following is a conversation with Jaron Lanier,

00:00:03 a computer scientist, visual artist, philosopher,

00:00:06 writer, futurist, musician,

00:00:08 and the founder of the field of virtual reality.

00:00:11 To support this podcast,

00:00:12 please check out our sponsors in the description.

00:00:15 As a side note,

00:00:16 you may know that Jaron is a staunch critic

00:00:19 of social media platforms.

00:00:20 He and I agree on many aspects of this,

00:00:23 except perhaps I am more optimistic

00:00:26 about it being possible to build better platforms.

00:00:29 And better artificial intelligence systems

00:00:32 that put long-term interests

00:00:33 and happiness of human beings first.

00:00:36 Let me also say a general comment about these conversations.

00:00:40 I try to make sure I prepare well,

00:00:42 remove my ego from the picture,

00:00:44 and focus on making the other person shine

00:00:47 as we try to explore the most beautiful

00:00:49 and insightful ideas in their mind.

00:00:51 This can be challenging

00:00:53 when the ideas that are close to my heart

00:00:55 are being criticized.

00:00:57 In those cases, I do offer a little pushback,

00:00:59 but respectfully, and then move on,

00:01:02 trying to have the other person come out

00:01:04 looking wiser in the exchange.

00:01:06 I think there’s no such thing as winning in conversations,

00:01:09 nor in life.

00:01:11 My goal is to learn and to have fun.

00:01:14 I ask that you don’t see my approach

00:01:15 to these conversations as weakness.

00:01:17 It is not.

00:01:19 It is my attempt at showing respect

00:01:21 and love for the other person.

00:01:24 That said, I also often just do a bad job of talking,

00:01:28 but you probably already knew that.

00:01:30 So please give me a pass on that as well.

00:01:33 This is the Lex Fridman Podcast,

00:01:35 and here is my conversation with Jaron Lanier.

00:01:39 You’re considered the founding father of virtual reality.

00:01:44 Do you think we will one day spend most

00:01:47 or all of our lives in virtual reality worlds?

00:01:51 I have always found the very most valuable moment

00:01:56 in virtual reality to be the moment

00:01:58 when you take off the headset and your senses are refreshed

00:02:01 and you perceive physicality afresh,

00:02:05 as if you were a newborn baby,

00:02:07 but with a little more experience.

00:02:09 So you can really notice just how incredibly strange

00:02:13 and delicate and peculiar and impossible the real world is.

00:03:18 So the magic is, and perhaps forever will be,

00:02:22 in the physical world.

00:02:23 Well, that’s my take on it.

00:02:25 That’s just me.

00:02:25 I mean, I think I don’t get to tell everybody else

00:02:29 how to think or how to experience virtual reality.

00:02:31 And at this point, there have been multiple generations

00:02:35 of younger people who’ve come along and liberated me

00:02:39 from having to worry about these things.

00:02:42 But I should say also even in what some,

00:02:45 well, I called it mixed reality,

00:02:47 back in the day, and these days it’s called

00:02:49 augmented reality, but with something like a HoloLens,

00:02:53 even then, like one of my favorite things

00:02:56 is to augment a forest, not because I think the forest

00:02:58 needs augmentation, but when you look at the augmentation

00:03:02 next to a real tree, the real tree just pops out

00:03:05 as being astounding, it’s interactive,

00:03:09 it’s changing slightly all the time if you pay attention,

00:03:12 and it’s hard to pay attention to that,

00:03:14 but when you compare it to virtual reality,

00:03:16 all of a sudden you do.

00:03:18 And even in practical applications,

00:03:20 my favorite early application of virtual reality,

00:03:24 which we prototyped going back to the 80s

00:03:27 when I was working with Dr. Joe Rosen at Stanford Med

00:03:30 near where we are now, we made the first surgical simulator.

00:03:34 And to go from the fake anatomy of the simulation,

00:03:39 which is incredibly valuable for many things,

00:03:42 for designing procedures, for training,

00:03:43 for all kinds of things, to then go to the real world,

00:03:45 to the real person,

00:03:47 boy, it's really something. Surgeons

00:03:51 really get woken up by that transition. It's very cool.

00:03:54 So I think the transition is actually more valuable

00:03:56 than the simulation.

00:03:58 That’s fascinating, I never really thought about that.

00:04:01 It’s almost, it’s like traveling elsewhere

00:04:05 in the physical space can help you appreciate

00:04:08 how much you value your home once you return.

00:04:11 Well, that’s how I take it.

00:04:13 I mean, once again, people have different attitudes

00:04:16 towards it, all are welcome.

00:04:18 What do you think is the difference

00:04:20 between the virtual world and the physical meatspace world

00:04:23 such that you, personally,

00:04:26 are still drawn to the physical world?

00:04:28 Like there clearly then is a distinction.

00:04:31 Is there some fundamental distinction

00:04:33 or is it the peculiarities of the current set of technology?

00:04:37 In terms of the kind of virtual reality that we have now,

00:04:41 it’s made of software and software is terrible stuff.

00:04:46 Software is always the slave of its own history,

00:04:50 its own legacy.

00:04:52 It’s always infinitely messy and arbitrary.

00:04:57 Working with it brings out a certain kind

00:05:00 of nerdy personality in people, or at least in me,

00:05:03 which I’m not that fond of.

00:05:05 And there are all kinds of things about software

00:05:07 I don’t like.

00:05:09 And so that’s different from the physical world.

00:05:11 It’s not something we understand, as you just pointed out.

00:05:15 On the other hand, I’m a little mystified

00:05:17 when people ask me, well,

00:05:18 do you think the universe is a computer?

00:05:21 And I have to say, well, I mean,

00:05:24 what on earth could you possibly mean

00:05:26 if you say it isn’t a computer?

00:05:27 If it isn’t a computer,

00:05:30 it wouldn’t follow principles consistently

00:05:33 and it wouldn’t be intelligible

00:05:35 because what else is a computer ultimately?

00:05:38 I mean, we have physics, we have technology,

00:05:41 so we can do technology, so we can program it.

00:05:43 So, I mean, of course it’s some kind of computer,

00:05:45 but I think trying to understand it as a Turing machine

00:05:49 is probably a foolish approach.

00:05:52 Right, that’s the question: whether this computer

00:05:56 we call the universe

00:05:58 performs the kind of computation that can be modeled

00:06:01 as a universal Turing machine,

00:06:03 or is it something much more fancy,

00:06:07 so fancy, in fact, that it may be

00:06:09 beyond our cognitive capabilities to understand?

00:06:12 Turing machines are kind of,

00:06:16 I call them teases in a way,

00:06:18 because if you have an infinitely smart programmer

00:06:23 with an infinite amount of time,

00:06:24 an infinite amount of memory,

00:06:25 and an infinite clock speed, then they’re universal,

00:06:29 but that cannot exist.

00:06:31 So they’re not universal in practice.

00:06:33 And they actually are, in practice,

00:06:36 a very particular sort of machine within the constraints,

00:06:40 within the conservation principles of any reality

00:06:44 that’s worth being in, probably.

00:06:46 And so I think universality of a particular model

00:06:55 is probably a deceptive way to think,

00:06:58 even though, of course,

00:07:00 something like that’s gotta be true

00:07:05 at some sort of high enough limit,

00:07:07 but it’s just not accessible to us, so what’s the point?

00:07:10 Well, to me, the question of whether we’re living

00:07:12 inside a computer or a simulation

00:07:15 is interesting in the following way.

00:07:18 There’s a technical question that’s here.

00:07:20 How difficult is it to build a machine,

00:07:25 not that simulates the universe,

00:07:28 but that makes it sufficiently realistic

00:07:31 that we wouldn’t know the difference,

00:07:33 or better yet, sufficiently realistic

00:07:36 that we would kinda know the difference,

00:07:37 but we would prefer to stay in the virtual world anyway?

00:07:41 I wanna give you a few different answers.

00:07:42 I wanna give you the one that I think

00:07:43 has the most practical importance

00:07:45 to human beings right now,

00:07:47 which is that there’s a kind of an assertion

00:07:51 sort of built into the way the question is usually asked

00:07:54 that I think is false, which is a suggestion

00:07:57 that people have a fixed level of ability

00:08:00 to perceive reality in a given way.

00:08:03 And actually, people are always learning,

00:08:07 evolving, forming themselves.

00:08:09 We’re fluid, too.

00:08:10 We’re also programmable, self-programmable,

00:08:13 changing, adapting.

00:08:15 And so my favorite way to get at this

00:08:18 is to talk about the history of other media.

00:08:21 So for instance, there was a peer-reviewed paper

00:08:23 that showed that an early wire recorder

00:08:26 playing back an opera singer behind a curtain

00:08:28 was indistinguishable from a real opera singer.

00:08:31 And so now, of course, to us,

00:08:32 it would not only be distinguishable,

00:08:34 but it would be very blatant

00:08:35 because the recording would be horrible.

00:08:37 But to the people at the time,

00:08:39 without the experience of it, it seemed plausible.

00:08:43 There was an early demonstration

00:08:46 of extremely crude video teleconferencing

00:08:49 between New York and DC in the 30s, I think,

00:08:54 that people viewed as being absolutely realistic

00:08:56 and indistinguishable, which to us would be horrible.

00:08:59 And there are many other examples.

00:09:01 Another one, one of my favorite ones,

00:09:02 is in the Civil War era,

00:09:04 there were itinerant photographers

00:09:06 who collected photographs of people

00:09:07 who just looked kind of like a few archetypes.

00:09:10 So you could buy a photo of somebody

00:09:12 who looked kind of like your loved one

00:09:14 to remind you of that person

00:09:17 because actually photographing them was inconceivable

00:09:20 and hiring a painter was too expensive

00:09:22 and you didn’t have any way for the painter

00:09:23 to represent them remotely anyway.

00:09:25 How would they even know what they looked like?

00:09:27 So these are all great examples

00:09:29 of how in the early days of different media,

00:09:32 we perceived the media as being really great,

00:09:34 but then we evolved through the experience of the media.

00:09:37 This gets back to what I was saying.

00:09:38 Maybe the greatest gift of photography

00:09:40 is that we can see the flaws in a photograph

00:09:42 and appreciate reality more.

00:09:44 Maybe the greatest gift of audio recording

00:09:46 is that we can distinguish that opera singer now

00:09:49 from that recording of the opera singer

00:09:52 on the horrible wire recorder.

00:09:53 So we shouldn’t limit ourselves

00:09:57 by some assumption of stasis that’s incorrect.

00:10:01 So that’s the first thing, that’s my first answer,

00:10:03 which is I think the most important one.

00:10:05 Now, of course, somebody might come back and say,

00:10:07 oh, but you know, technology can go so far.

00:10:09 There must be some point at which it would surpass.

00:10:11 That’s a different question.

00:10:12 I think that’s also an interesting question,

00:10:14 but I think the answer I just gave you

00:10:16 is actually the more important answer

00:10:17 to the more important question.

00:10:18 That’s profound, yeah.

00:10:20 But can you, the second question,

00:10:23 which you’re now making me realize is way different.

00:10:26 Is it possible to create worlds

00:10:28 in which people would want to stay

00:10:31 instead of the real world?

00:10:32 Well.

00:10:33 Like, en masse, like large numbers of people.

00:10:38 What I hope is, you know, as I said before,

00:10:41 I hope that the experience of virtual worlds

00:10:44 helps people appreciate this physical world we have

00:10:49 and feel tender towards it

00:10:51 and keep it from getting too fucked up.

00:10:54 That’s my hope.

00:10:57 Do you see all technology in that way?

00:10:58 So basically technology helps us appreciate

00:11:02 the more sort of technology free aspect of life.

00:11:08 Well, media technology.

00:11:10 You know, I mean, you can stretch that.

00:11:13 I mean, you can, let me say,

00:11:15 I could definitely play McLuhan

00:11:17 and turn this into a general theory.

00:11:19 It’s totally doable.

00:11:20 The program you just described is totally doable.

00:11:23 In fact, I will psychically predict

00:11:25 that if you did the research,

00:11:26 you could find 20 PhD theses that do that already.

00:11:29 I don’t know, but they might exist.

00:11:31 But I don’t know how much value there is

00:11:34 in pushing a particular idea that far.

00:11:38 Claiming that reality isn’t a computer in some sense

00:11:41 seems incoherent to me because we can program it.

00:11:44 We have technology.

00:11:46 It seems to obey physical laws.

00:11:48 What more do you want from it to be a computer?

00:11:50 I mean, it’s a computer of some kind.

00:11:52 We don’t know exactly what kind.

00:11:53 We might not know how to think about it.

00:11:54 We’re working on it, but.

00:11:57 Sorry to interrupt, but you’re absolutely right.

00:11:59 Like, that’s my fascination with AI as well.

00:12:01 In the case of AI,

00:12:05 I see it as a set of techniques

00:12:07 that help us understand ourselves, understand us humans.

00:12:10 In the same way, virtual reality,

00:12:12 and you’re putting it brilliantly,

00:12:14 is a way to help us understand reality,

00:12:17 to appreciate and open our eyes more richly to reality.

00:12:23 That’s certainly how I see it.

00:12:26 And I wish people who become incredibly fascinated,

00:12:29 who go down the rabbit hole of the different fascinations

00:12:33 with whether we’re in a simulation or not,

00:12:35 or, you know, there’s a whole world of variations on that.

00:12:40 I wish they’d step back

00:12:41 and think about their own motivations

00:12:42 and exactly what they mean, you know?

00:12:45 And I think the danger with these things is,

00:12:52 so if you say, is the universe

00:12:54 some kind of computer broadly,

00:12:56 it has to be because it’s not coherent to say that it isn’t.

00:12:59 On the other hand, to say that that means

00:13:02 you know anything about what kind of computer,

00:13:05 that’s something very different.

00:13:06 And the same thing is true for the brain.

00:13:07 The same thing is true for anything

00:13:10 where you might use computational metaphors.

00:13:12 Like, we have to have a bit of modesty about where we stand.

00:13:14 And the problem I have with these framings of computation

00:13:19 as these ultimate cosmic questions

00:13:21 is that it has a way of getting people

00:13:23 to pretend they know more than they do.

00:13:25 Can you maybe, this is a therapy session,

00:13:28 psychoanalyze me for a second?

00:13:30 I really liked the Elder Scrolls series.

00:13:32 It’s a role playing game, Skyrim, for example.

00:13:36 Why do I enjoy so deeply just walking around that world?

00:13:41 And then there’s people and you could talk to

00:13:45 and you can just like, it’s an escape.

00:13:48 But you know, my life is awesome.

00:13:49 I’m truly happy, but I also am happy

00:13:52 with the music that’s playing in the mountains

00:13:56 and carrying around a sword and just that.

00:14:00 I don’t know what that is.

00:14:02 It’s very pleasant though to go there.

00:14:04 And I miss it sometimes.

00:14:06 I think it’s wonderful to love artistic creations.

00:14:12 It’s wonderful to love contact with other people.

00:14:15 It’s wonderful to love play and ongoing evolving

00:14:21 meaning and patterns with other people.

00:14:24 I think it’s a good thing.

00:14:30 I’m not, like, anti-tech

00:14:31 and I’m certainly not anti-digital tech.

00:14:34 I’m anti, as everybody knows by now,

00:14:37 I think the manipulative economy of social media

00:14:41 is making everybody nuts and all that.

00:14:42 So I’m anti that stuff.

00:14:43 But the core of it, of course, I worked for many, many years

00:14:47 on trying to make that stuff happen

00:14:49 because I think it can be beautiful.

00:14:51 Like I don’t like, why not?

00:14:55 And by the way, there’s a thing about humans,

00:14:59 which is we’re problematic.

00:15:03 Any kind of social interaction with other people

00:15:07 is gonna have its problems.

00:15:10 People are political and tricky.

00:15:14 And like, I love classical music,

00:15:16 but when you actually go to a classical music thing

00:15:18 and it turns out, oh, actually,

00:15:19 this is like a backroom power deal kind of place

00:15:22 and a big status ritual as well.

00:15:24 And that’s kind of not as fun.

00:15:27 That’s part of the package.

00:15:29 And the thing is, it’s always going to be,

00:15:30 there’s always gonna be a mix of things.

00:15:34 I don’t think the search for purity

00:15:38 is gonna get you anywhere.

00:15:40 So I’m not worried about that.

00:15:42 I worry about the really bad cases

00:15:44 where we’re making ourselves crazy or cruel enough

00:15:48 that we might not survive.

00:15:49 And I think the social media criticism rises to that level,

00:15:53 but I’m glad you enjoy it.

00:15:54 I think it’s great.

00:15:57 And I like that you basically say

00:15:59 that every experience has both beauty and darkness,

00:16:02 as in with classical music.

00:16:03 I also play classical piano, so I appreciate it very much.

00:16:07 But it’s interesting.

00:16:08 I mean, every, and even the darkness,

00:16:10 it’s Man’s Search for Meaning

00:16:11 with Viktor Frankl in the concentration camps.

00:16:15 Even there, there’s opportunity to discover beauty.

00:16:20 And so that’s the interesting thing about humans,

00:16:25 is the capacity to discover beauty

00:16:27 in the darkest of moments,

00:16:29 but there’s always the dark parts too.

00:16:31 Well, I mean, our situation is structurally difficult.

00:16:37 We are, no, it is, it’s true.

00:16:42 We perceive socially, we depend on each other

00:16:44 for our sense of place and perception of the world.

00:16:50 I mean, we’re dependent on each other.

00:16:52 And yet there’s also a degree to which, inevitably,

00:16:58 we let each other down.

00:17:01 We are set up to be competitive as well as supportive.

00:17:05 I mean, it’s just our fundamental situation

00:17:08 is complicated and challenging,

00:17:10 and I wouldn’t have it any other way.

00:17:13 Okay, let’s talk about one of the most challenging things.

00:17:17 One of the things I, unfortunately, am very afraid of,

00:17:20 being human, allegedly.

00:17:23 You wrote an essay on death and consciousness

00:17:26 in which you write, and I quote:

00:17:28 Certainly the fear of death

00:17:29 has been one of the greatest driving forces

00:17:31 in the history of thought

00:17:33 and in the formation of the character of civilization.

00:17:37 And yet it is underacknowledged.

00:17:39 The great book on the subject,

00:17:41 The Denial of Death by Ernest Becker,

00:17:43 deserves a reconsideration.

00:17:47 I’m Russian, so I have to ask you about this.

00:17:49 What’s the role of death in life?

00:17:51 See, you would have enjoyed coming to our house

00:17:54 because my wife is Russian and we also have,

00:17:58 we have a piano of such spectacular quality,

00:18:01 you would have freaked out.

00:18:04 But anyway, we’ll let all that go.

00:18:07 So the context in which,

00:18:09 I remember that essay sort of,

00:18:12 this was from maybe the 90s or something.

00:18:15 And I used to publish in a journal

00:18:18 called the Journal of Consciousness Studies

00:18:20 because I was interested in these endless debates

00:18:24 about consciousness and science,

00:18:27 which certainly continue today.

00:18:31 And I was interested in how the fear of death

00:18:38 and the denial of death played into

00:18:41 different philosophical approaches to consciousness.

00:18:44 Because I think on the one hand,

00:18:53 the sort of sentimental school of dualism,

00:18:58 meaning the feeling that there’s something

00:19:00 apart from the physical brain,

00:19:02 some kind of soul or something else,

00:19:05 is obviously motivated in a sense

00:19:07 by a hope that whatever that is

00:19:09 will survive death and continue.

00:19:11 And that’s a very core aspect of a lot of the world religions,

00:19:15 not all of them, not really, but most of them.

00:19:21 The thing I noticed is that the opposite of those,

00:19:26 which might be the sort of hardcore,

00:19:28 no, the brain’s a computer and that’s it,

00:19:31 is, in a sense, motivated in the same way

00:19:36 with a remarkably similar chain of arguments,

00:19:40 which is no, the brain’s a computer

00:19:43 and I’m gonna figure it out in my lifetime

00:19:45 and upload myself and I’ll live forever.

00:19:48 That’s interesting.

00:19:50 Yeah, that’s like the implied thought, right?

00:19:53 Yeah, and so it’s kind of this,

00:19:55 in a funny way, it’s the same thing.

00:20:02 It’s peculiar to notice that these people

00:20:06 who would appear to be opposites in character

00:20:09 and cultural references and in their ideas

00:20:14 actually are remarkably similar.

00:20:16 And to an incredible degree,

00:20:20 this sort of hardcore computationalist idea

00:20:24 about the brain has turned into medieval Christianity.

00:20:28 Like, there are people who are afraid

00:20:31 that if you have the wrong thought,

00:20:32 you’ll piss off the super AIs of the future

00:20:34 who will come back and zap you and all that stuff.

00:20:38 It’s really turned into medieval Christianity

00:20:41 all over again.

00:20:43 This is Ernest Becker’s idea that death,

00:20:46 the fear of death, is the worm at the core,

00:20:49 which is like, that’s the core motivator

00:20:53 of everything we see humans have created.

00:20:56 The question is if that fear of mortality is somehow core,

00:21:00 is like a prerequisite to consciousness.

00:21:03 You just moved across this vast cultural chasm

00:21:10 that separates me from most of my colleagues in a way.

00:21:13 And I can’t answer what you just said on the level

00:21:15 without this huge deconstruction.

00:21:18 Should I do it?

00:21:19 Yes, what’s the chasm?

00:21:20 Okay.

00:21:21 Let us travel across this vast chasm.

00:21:23 Okay, I don’t believe in AI.

00:21:25 I don’t think there’s any AI.

00:21:26 There’s just algorithms, we make them, we control them.

00:21:28 Now, they’re tools, they’re not creatures.

00:21:30 Now, this is something that rubs a lot of people

00:21:33 the wrong way, and don’t I know it.

00:21:36 When I was young, my main mentor was Marvin Minsky,

00:21:39 who’s the principal author of the computer-as-creature

00:21:43 rhetoric that we still use.

00:21:47 He was the first person to have the idea at all,

00:21:48 but he certainly populated AI culture

00:21:52 with most of its tropes, I would say,

00:21:55 because a lot of the people will say,

00:21:57 oh, did you hear this new idea about AI?

00:21:58 And I’m like, yeah, I heard it in 1978.

00:22:00 Sure, yeah, I remember that.

00:22:01 So Marvin was really the person.

00:22:03 And Marvin and I used to argue all the time about this stuff

00:22:08 because I always rejected it.

00:22:10 And of all of his,

00:22:14 I wasn’t formally his student,

00:22:17 but I worked for him as a researcher,

00:22:19 but of all of his students and student-like people,

00:22:23 his young adoptees,

00:22:26 I think I was the one who argued with him

00:22:28 about this stuff in particular, and he loved it.

00:22:31 Yeah, I would have loved to hear that conversation.

00:22:33 It was fun.

00:22:34 Did you ever converge to a place?

00:22:36 Oh, no, no.

00:22:37 So the very last time I saw him, he was quite frail.

00:22:40 And I was in Boston, and I was going to the old house

00:22:45 in Brookline, his amazing house.

00:22:47 And one of our mutual friends said,

00:22:49 hey, listen, Marvin’s so frail.

00:22:52 Don’t do the argument with him.

00:22:54 Don’t argue about AI, you know?

00:22:56 And so I said, but Marvin loves that.

00:22:58 And so I showed up, and he’s like, he was frail.

00:23:01 He looked up and he said, are you ready to argue?

00:23:04 He’s such an amazing person for that.

00:23:10 So it’s hard to summarize this

00:23:13 because it’s decades of stuff.

00:23:16 The first thing to say is that nobody can claim

00:23:19 absolute knowledge about whether somebody

00:23:23 or something else is conscious or not.

00:23:25 This is all a matter of faith.

00:23:27 And in fact, I think the whole idea of faith

00:23:31 needs to be updated.

00:23:32 So it’s not about God,

00:23:34 but it’s just about stuff in the universe.

00:23:36 We have faith in each other, being conscious.

00:23:39 And then I used to frame this

00:23:42 as a thing called the circle of empathy in my old papers.

00:23:45 And then it turned into a thing

00:23:47 for the animal rights movement too.

00:23:49 I noticed Peter Singer using it.

00:23:50 I don’t know if it was coincident or,

00:23:52 but anyway, there’s this idea

00:23:54 that you draw a circle around yourself

00:23:56 and the stuff inside is more like you,

00:23:58 might be conscious, might be deserving of your empathy,

00:24:00 of your consideration,

00:24:02 and the stuff outside the circle isn’t.

00:24:04 And outside the circle might be a rock or,

00:24:10 I don’t know.

00:24:12 And that circle is fundamentally based on faith.

00:24:15 Well, if you don’t know it.

00:24:16 Your faith in what is and what isn’t.

00:24:17 The thing about this circle is it can’t be pure faith.

00:24:21 It’s also a pragmatic decision

00:24:23 and this is where things get complicated.

00:24:26 If you try to make it too big,

00:24:27 you suffer from incompetence.

00:24:29 If you say, I don’t wanna kill a bacteria,

00:24:33 I will not brush my teeth.

00:24:34 I don’t know, like, what do you do?

00:24:35 Like, there’s a competence question

00:24:39 where you do have to draw the line.

00:24:41 People who make it too small become cruel.

00:24:44 People are so clannish and political

00:24:46 and so worried about themselves ending up

00:24:48 on the bottom of society

00:24:51 that they are always ready to gang up

00:24:52 on some designated group.

00:24:54 And so there’s always these people who are being,

00:24:56 we’re always trying to shove somebody out of the circle.

00:24:58 And so.

00:24:59 So aren’t you shoving AI outside the circle?

00:25:01 Well, give me a second.

00:25:02 All right.

00:25:03 So there’s a pragmatic consideration here.

00:25:05 And so, and the biggest questions

00:25:09 are probably fetuses and animals lately,

00:25:11 but AI is getting there.

00:25:13 Now with AI, I think,

00:25:19 and I’ve had this discussion so many times.

00:25:21 People say, but aren’t you afraid if you exclude AI,

00:25:23 you’d be cruel to some consciousness?

00:25:26 And then I would say, well, if you include AI,

00:25:29 you make yourself, you exclude yourself

00:25:32 from being able to be a good engineer or designer.

00:25:35 And so you’re facing incompetence immediately.

00:25:38 So like, I really think we need to subordinate algorithms

00:25:41 and be much more skeptical of them.

00:25:43 Your intuition, you speak about this brilliantly

00:25:45 with social media, how things can go wrong.

00:25:48 Isn’t it possible to design systems

00:25:54 that show compassion, not to manipulate you,

00:25:57 but give you control and make your life better

00:26:01 if you so choose to, like, grow together with systems,

00:26:04 the way we grow with dogs and cats, with pets,

00:26:07 with significant others, and in that way

00:26:09 they grow to become better people?

00:26:11 I don’t understand why that’s fundamentally not possible.

00:26:14 You’re saying oftentimes you get into trouble

00:26:18 by thinking you know what’s good for people.

00:26:20 Well, look, there’s this question

00:26:23 of what framework we’re speaking in.

00:26:25 Do you know who Alan Watts was?

00:26:27 So Alan Watts once said morality is like gravity,

00:26:32 in that in some absolute cosmic sense, there can’t be morality

00:26:35 because at some point it all becomes relative

00:26:37 and who are we anyway?

00:26:39 Like morality is relative to us tiny creatures.

00:26:42 But here on earth, we’re with each other,

00:26:45 this is our frame and morality is a very real thing.

00:26:47 Same thing with gravity.

00:26:48 At some point, you get into interstellar space

00:26:52 and you might not feel much of it, but here we are on earth.

00:26:55 And I think in the same sense,

00:26:58 I think this identification with a frame that’s quite remote

00:27:04 cannot be separated from a feeling of wanting to feel

00:27:08 sort of separate from and superior to other people

00:27:11 or something like that.

00:27:12 There’s an impulse behind it that I really have to reject.

00:27:16 And we’re just not competent yet

00:27:18 to talk about these kinds of absolutes.

00:27:21 Okay, so I agree with you that a lot of technologists

00:27:24 sort of lack this basic respect, understanding

00:27:27 and love for humanity.

00:27:29 There’s a separation there.

00:27:30 The thing I’d like to push back against,

00:27:32 it’s not that you disagree,

00:27:33 but I believe you can create technologies

00:27:36 and you can create a new kind of technologist engineer

00:27:41 that does build systems that respect humanity,

00:27:44 not just respect, but admire humanity,

00:27:46 that have empathy for common humans, have compassion.

00:27:51 I mean, no, no, no.

00:27:52 I think, yeah, I mean, I think musical instruments

00:27:57 are a great example of that.

00:27:58 Musical instruments are technologies

00:28:00 that help people connect in fantastic ways.

00:28:02 And that’s a great example.

00:28:08 My invention or design during the pandemic period

00:28:11 was this thing called together mode

00:28:12 where people see themselves seated sort of

00:28:14 in a classroom or a theater instead of in squares.

00:28:20 And it allows them to semi-consciously perform to each other

00:28:26 as if they have proper eye contact,

00:28:29 as if they’re paying attention to each other nonverbally

00:28:31 and weirdly that turns out to work.

00:28:34 And so it promotes empathy so far as I can tell.

00:28:36 I hope it is of some use to somebody.

00:28:39 The AI idea isn’t really new.

00:28:41 I would say it was born with Adam Smith’s invisible hand

00:28:45 with this idea that we build this algorithmic thing

00:28:48 and it gets a bit beyond us

00:28:50 and then we think it must be smarter than us.

00:28:52 And the thing about the invisible hand

00:28:54 is absolutely everybody has some line they draw

00:28:57 where they say, no, no, no,

00:28:58 we’re gonna take control of this thing.

00:29:00 They might have different lines,

00:29:01 they might care about different things,

00:29:03 but everybody ultimately became a Keynesian

00:29:05 because it just didn’t work.

00:29:06 It really wasn’t that smart.

00:29:08 It was sometimes smart and sometimes it failed, you know?

00:29:10 And so if you really, you know,

00:29:13 people who really, really, really wanna believe

00:29:16 in the invisible hand is infinitely smart,

00:29:19 screw up their economies terribly.

00:29:21 You have to recognize the economy as a subservient tool.

00:29:26 Everybody does when it’s to their advantage.

00:29:28 They might not when it’s not to their advantage.

00:29:30 That’s kind of an interesting game that happens.

00:29:33 But the thing is, it’s just like that with our algorithms,

00:29:35 you know, like, you can have a sort of a Chicago,

00:29:42 you know, economic philosophy about your computer.

00:29:44 Say, no, no, no, my thing’s come alive,

00:29:46 it’s smarter than anything.

00:29:48 I think that there is a deep loneliness within all of us.

00:29:51 This is what we seek, we seek love from each other.

00:29:55 I think AI can help us connect deeper.

00:29:58 Like this is what you criticize social media for.

00:30:01 I think there’s much better ways of doing social media

00:30:03 than doing social media that doesn’t lead to manipulation,

00:30:06 but instead leads to deeper connection between humans,

00:30:09 leads to you becoming a better human being.

00:30:12 And what that requires is some agency on the part of AI

00:30:15 to be almost like a therapist, I mean, a companion.

00:30:18 It’s not telling you what’s right.

00:30:22 It’s not guiding you as if it’s an all knowing thing.

00:30:25 It’s just another companion that you can leave at any time.

00:30:28 You have complete transparency control over.

00:30:32 There’s a lot of mechanisms that you can have

00:30:34 that are counter to how current social media operates

00:30:38 that I think is subservient to humans,

00:30:41 or no, deeply respects human beings

00:30:46 and is empathetic to their experience

00:30:47 and all those kinds of things.

00:30:48 I think it’s possible to create AI systems like that.

00:30:51 And I think they, I mean, that’s a technical discussion

00:30:54 of whether they need to have

00:30:58 something that looks more like AI versus algorithms,

00:31:03 something that has an identity,

00:31:05 something that has a personality, all those kinds of things.

00:31:09 AI systems, and you’ve spoken extensively

00:31:11 how AI systems manipulate you within social networks.

00:31:17 And that’s the biggest problem,

00:31:19 isn’t necessarily that there’s advertisement

00:31:24 that social networks present you with advertisements

00:31:29 that then get you to buy stuff.

00:31:31 That’s not the biggest problem.

00:31:32 The biggest problem is they then manipulate you.

00:31:36 They alter your human nature to get you to buy stuff

00:31:41 or to get you to do whatever the advertiser wants.

00:31:46 Or maybe you can correct me.

00:31:47 Yeah, I don’t see it quite that way,

00:31:49 but we can work with that as an approximation.

00:31:52 Sure, so my…

00:31:53 I think the actual thing is even sort of more ridiculous

00:31:55 and stupider than that, but that’s okay, let’s…

00:31:58 So my question is, let’s not use the word AI,

00:32:02 but how do we fix it?

00:32:05 Oh, fixing social media,

00:32:07 that diverts us into this whole other field in my view,

00:32:11 which is economics,

00:32:12 which I always thought was really boring,

00:32:14 but we have no choice but to turn into economists

00:32:16 if we wanna fix this problem,

00:32:17 because it’s all about incentives.

00:32:19 But I’ve been around this thing since it started,

00:32:24 and I’ve been in the meetings

00:32:27 where the social media companies sell themselves

00:32:31 to the people who put the most money into them,

00:32:33 which are usually the big advertising holding companies

00:32:36 and whatnot.

00:32:37 And there’s this idea that I think is kind of a fiction,

00:32:41 and maybe it’s even been recognized as that by everybody,

00:32:45 that the algorithm will get really good

00:32:48 at getting people to buy something.

00:32:49 Because I think people have looked at their returns

00:32:52 and looked at what happens,

00:32:53 and everybody recognizes it’s not exactly right.

00:32:56 It’s more like a cognitive access blackmail payment

00:33:02 at this point.

00:33:03 Like just to be connected, you’re paying the money.

00:33:06 It’s not so much that the persuasion algorithms…

00:33:08 So Stanford renamed its program,

00:33:10 but it used to be called Engage Persuade.

00:33:12 The engage part works, the persuade part is iffy,

00:33:15 but the thing is that once people are engaged,

00:33:18 in order for you to exist as a business,

00:33:20 in order for you to be known at all,

00:33:21 you have to put money into the…

00:33:23 Oh, that’s dark.

00:33:24 Oh, no, that’s not…

00:33:25 It doesn’t work, but they have to…

00:33:27 But they’re still…

00:33:28 It’s a giant cognitive access blackmail scheme

00:33:31 at this point.

00:33:32 So because the science behind the persuade part,

00:33:35 it’s not entirely a failure,

00:33:39 but it’s not what…

00:33:42 We play make-believe that it works more than it does.

00:33:46 The damage doesn’t come…

00:33:48 Honestly, as I’ve said in my books,

00:33:51 I’m not anti-advertising.

00:33:53 I actually think advertising can be demeaning

00:33:57 and annoying and banal and ridiculous

00:34:01 and take up a lot of our time with stupid stuff.

00:34:04 Like there’s a lot of ways to criticize advertising

00:34:06 that’s accurate and it can also lie and all kinds of things.

00:34:11 However, if I look at the biggest picture,

00:34:13 I think advertising, at least as it was understood

00:34:17 before social media, helped bring people into modernity

00:34:20 in a way that actually did benefit people overall.

00:34:24 And you might say, am I contradicting myself

00:34:27 because I was saying you shouldn’t manipulate people?

00:34:29 Yeah, I am, probably here.

00:34:30 I mean, I’m not pretending to have

00:34:31 this perfect airtight worldview without some contradictions.

00:34:35 I think there’s a bit of a contradiction there, so.

00:34:37 Well, looking at the long arc of history,

00:34:39 advertisement has, in some parts, benefited society

00:34:43 because it funded some efforts that perhaps…

00:34:46 Yeah, I mean, I think like there’s a thing

00:34:50 where sometimes I think it’s actually been of some use.

00:34:53 Now, where the damage comes is a different thing though.

00:34:59 Social media, algorithms on social media

00:35:03 have to work on feedback loops

00:35:04 where they present you with stimulus

00:35:06 and they have to see if you respond to the stimulus.

00:35:09 Now, the problem is that the measurement mechanism

00:35:12 for telling if you respond in the engagement feedback loop

00:35:16 is very, very crude.

00:35:17 It’s things like whether you click more

00:35:19 or occasionally if you’re staring at the screen more

00:35:21 if there’s a forward-facing camera that’s activated,

00:35:23 but typically there isn’t.

00:35:25 So you have this incredibly crude back channel of information

00:35:29 and so it’s crude enough that it only catches

00:35:32 sort of the more dramatic responses from you

00:35:35 and those are the fight or flight responses.

00:35:37 Those are the things where you get scared or pissed off

00:35:40 or aggressive or horny.

00:35:43 These are these ancient,

00:35:44 the sort of what are sometimes called the lizard brain

00:35:46 circuits or whatever, these fast response,

00:35:51 old, old, old evolutionary business circuits that we have

00:35:55 that are helpful in survival once in a while

00:35:58 but are not us at our best.

00:36:00 They’re not who we wanna be.

00:36:01 They’re not how we relate to each other.

00:36:03 They’re this old business.

00:36:05 So then, just when you’re engaged using those, intrinsically,

00:36:08 totally aside from whatever the topic is,

00:36:11 you start to get incrementally just a little bit

00:36:13 more paranoid, xenophobic, aggressive.

00:36:17 You get a little stupid and you become a jerk

00:36:20 and it happens slowly.

00:36:22 It’s not like everybody’s instantly transformed,

00:36:26 but it does kind of happen progressively

00:36:28 where people who get hooked kind of get drawn

00:36:30 more and more into this pattern of being at their worst.
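
To make the crude-feedback-loop point concrete, here is a minimal sketch, in Python, of a greedy engagement-optimizing recommender. It is purely illustrative: the item names, click probabilities, and scoring rule are assumptions for the sketch, not any platform's actual code. It only shows how a system whose sole back channel is "did the user click?" ends up favoring whatever triggers the fastest reflexive response.

```python
import random
from dataclasses import dataclass

# Illustrative only: a toy engagement loop in which the recommender's sole
# back channel is a click (yes/no), as described in the conversation above.

@dataclass
class Item:
    name: str
    clicks: int = 0
    impressions: int = 0

    @property
    def click_rate(self) -> float:
        # Optimistic prior so untried items still get shown at first.
        return (self.clicks + 1) / (self.impressions + 2)

def recommend(items, epsilon=0.05):
    # Mostly greedy: show whatever currently looks most "engaging".
    if random.random() < epsilon:
        return random.choice(items)
    return max(items, key=lambda it: it.click_rate)

def record(item, clicked):
    # The only measurement the system ever gets back.
    item.impressions += 1
    item.clicks += int(clicked)

# Assumed toy user: fast, reflexive ("lizard brain") reactions fire more
# often for provocative material than for calm material.
TRUE_CLICK_PROB = {"calm explainer": 0.05, "outrage bait": 0.30}

def simulated_click(item) -> bool:
    return random.random() < TRUE_CLICK_PROB[item.name]

items = [Item("calm explainer"), Item("outrage bait")]
for _ in range(10_000):
    choice = recommend(items)
    record(choice, simulated_click(choice))

for it in items:
    print(f"{it.name}: shown {it.impressions} times")
# The provocative item ends up dominating the feed, not because anyone
# chose that outcome, but because clicks are the only signal measured.
```

Nothing in this sketch knows or cares what the items are about; the drift toward the inflammatory item falls out of the crude measurement alone.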

00:36:33 Would you say that people are able to,

00:36:35 when they get hooked in this way,

00:36:37 look back at themselves from 30 days ago

00:36:40 and say, I am less happy with who I am now

00:36:45 or I’m not happy with who I am now

00:36:47 versus who I was 30 days ago.

00:36:48 Are they able to self reflect

00:36:50 when you take yourself outside of the lizard brain?

00:36:52 Sometimes.

00:36:54 I wrote a book suggesting people take a break

00:36:57 from their social media to see what happens

00:36:58 and maybe even, actually the title of the book

00:37:01 was just the arguments to delete your account.

00:37:04 Yeah, 10 arguments.

00:37:05 10 arguments.

00:37:06 Although I always said, I don’t know that you should.

00:37:08 I can give you the arguments.

00:37:09 It’s up to you.

00:37:10 I’m always very clear about that.

00:37:11 But you know, I get like,

00:37:13 I don’t have a social media account obviously

00:37:15 and it’s not that easy for people to reach me.

00:37:18 They have to search out an old fashioned email address

00:37:21 on a super crappy antiquated website.

00:37:23 Like it’s actually a bit, I don’t make it easy.

00:37:26 And even with that, I get this huge flood of mail

00:37:28 from people who say, oh, I quit my social media.

00:37:30 I’m doing so much better.

00:37:31 I can’t believe how bad it was.

00:37:33 But the thing is, what’s for me a huge flood of mail

00:37:36 would be an imperceptible trickle

00:37:37 from the perspective of Facebook, right?

00:37:39 And so I think it’s rare for somebody

00:37:43 to look at themselves and say,

00:37:44 oh boy, I sure screwed myself over.

00:37:46 It’s a really hard thing to ask of somebody.

00:37:49 None of us find that easy, right?

00:37:51 It’s just hard.

00:37:52 The reason I asked this is,

00:37:54 is it possible to design social media systems

00:37:58 that optimize for some longer term metrics

00:38:01 of you being happy with yourself?

00:38:04 Well see, I don’t think you should try

00:38:06 to engineer personal growth or happiness.

00:38:08 I think what you should do is design a system

00:38:10 that’s just respectful of the people

00:38:12 and subordinates itself to the people

00:38:14 and doesn’t have perverse incentives.

00:38:16 And then at least there’s a chance

00:38:18 of something decent happening.

00:38:19 You have to recommend stuff, right?

00:38:22 So you’re saying like, be respectful.

00:38:24 What does that actually mean engineering wise?

00:38:26 Yeah, curation.

00:38:27 People have to be responsible.

00:38:30 Algorithms shouldn’t be recommending.

00:38:31 Algorithms don’t understand enough to recommend.

00:38:33 Algorithms are crap in this era.

00:38:35 I mean, I’m sorry, they are.

00:38:37 And I’m not saying this as somebody

00:38:38 as a critic from the outside.

00:38:39 I’m in the middle of it.

00:38:40 I know what they can do.

00:38:41 I know the math.

00:38:41 I know what the corpora are.

00:38:45 I know the best ones.

00:38:46 Our office is funding GPT-3 and all these things

00:38:49 that are at the edge of what’s possible.

00:38:53 And they do not have it yet.

00:38:57 I mean, it still is statistical emergent pseudo-semantics.

00:39:02 It doesn’t actually have deep representation

00:39:04 of anything emerging.

00:39:05 It’s just not there. I mean, I’m speaking the truth here

00:39:07 and you know it.

00:39:08 Well, let me push back on this.

00:39:11 This, there’s several truths here.

00:39:13 So one, you’re speaking to the way

00:39:15 certain companies operate currently.

00:39:17 I don’t think it’s outside the realm

00:39:18 of what’s technically feasible to do.

00:39:21 There’s just not incentive,

00:39:22 like companies are not, why fix this thing?

00:39:26 I am aware that, for example, the YouTube search

00:39:29 and discovery has been very helpful to me.

00:39:32 And there’s a huge number of, there’s so many videos

00:39:36 that it’s nice to have a little bit of help.

00:39:39 But I’m still in control.

00:39:40 Let me ask you something.

00:39:41 Have you done the experiment of letting YouTube

00:39:44 recommend videos to you either starting

00:39:46 from a absolutely anonymous random place

00:39:49 where it doesn’t know who you are

00:39:50 or from knowing who you or somebody else is

00:39:52 and then going 15 or 20 hops?

00:39:54 Have you ever done that and just let it go

00:39:56 top video recommend and then just go 20 hops?

00:39:59 No, I’ve not.

00:40:00 I’ve done that many times now.

00:40:02 I have, because of how large YouTube is

00:40:05 and how widely it’s used,

00:40:06 it’s very hard to get to enough scale

00:40:10 to get a statistically solid result on this.

00:40:13 I’ve done it with high school kids,

00:40:15 with dozens of kids doing it at a time.

00:40:17 Every time I’ve done an experiment,

00:40:19 the majority of times after about 17 or 18 hops,

00:40:22 you end up in really weird, paranoid, bizarre territory.

00:40:26 Because ultimately, that is the stuff

00:40:28 the algorithm rewards the most

00:40:30 because of the feedback crudeness I was just talking about.

00:40:33 So I’m not saying that the algorithm

00:40:36 never recommends something cool.

00:40:37 I’m saying that its fundamental core

00:40:40 is one that promotes a paranoid style

00:40:43 that promotes increasing irritability,

00:40:45 that promotes xenophobia, promotes fear, anger,

00:40:49 promotes selfishness, promotes separation between people.

00:40:53 And the thing is, it’s very hard to do this work solidly.

00:40:57 Many have repeated this experiment

00:40:59 and yet it still is kind of anecdotal.

00:41:01 I’d like to do a large citizen science thing sometime

00:41:05 and do it, but then I think the problem with that

00:41:06 is YouTube would detect it and then change it.
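
For readers who want to try the "hop" experiment themselves, here is a hedged sketch of the protocol as described in the conversation. It is not a working YouTube client: `top_recommendation()` is a placeholder you would have to supply yourself (for example, a fresh anonymous browser profile and manual clicking), and the hop count of roughly 20 comes straight from the discussion above.

```python
# Sketch of the hop experiment described above, not a working YouTube client.

def top_recommendation(video_id: str) -> str:
    """Placeholder (assumption): return the ID of the first recommended video.

    How you collect this, e.g. a fresh anonymous browser profile versus a
    logged-in account, is itself the experimental condition.
    """
    raise NotImplementedError("supply your own collection method")

def run_hops(seed_video: str, hops: int = 20) -> list[str]:
    """Follow the top recommendation 'hops' times and return the trail."""
    trail = [seed_video]
    for _ in range(hops):
        trail.append(top_recommendation(trail[-1]))
    return trail

# Repeat from many seeds and many profiles, then have human raters annotate
# where each trail ends up; one run is an anecdote, many runs are data,
# which is what a larger citizen-science version would add.
```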

00:41:09 Yes, I love that kind of stuff on Twitter.

00:41:11 So Jack Dorsey has spoken about doing healthy conversations

00:41:15 on Twitter or optimizing for healthy conversations.

00:41:18 What that requires within Twitter

00:41:20 are most likely citizen experiments

00:41:23 of what healthy conversation actually looks like

00:41:26 and how you incentivize those healthy conversations.

00:41:29 You’re describing what often happens

00:41:32 and what is currently happening.

00:41:33 What I’d like to argue is it’s possible

00:41:36 to strive for healthy conversations,

00:41:39 not in a dogmatic way of saying,

00:41:42 I know what healthy conversations are and I will tell you.

00:41:44 I think one way to do this is to try to look around

00:41:47 at social, maybe not things that are officially social media,

00:41:51 but things where people are together online

00:41:53 and see which ones have more healthy conversations,

00:41:56 even if it’s hard to be completely objective

00:42:00 in that measurement, you can kind of, at least crudely.

00:42:02 You could do subjective annotation

00:42:05 like have a large crowd source annotation.

00:42:07 One that I’ve been really interested in is GitHub

00:42:10 because it could change.

00:42:14 I’m not saying it’ll always be, but for the most part,

00:42:17 GitHub has had a relatively quite low poison quotient.

00:42:21 And I think there’s a few things about GitHub

00:42:24 that are interesting.

00:42:26 One thing about it is that people have a stake in it.

00:42:29 It’s not just empty status games.

00:42:31 There’s actual code or there’s actual stuff being done.

00:42:35 And I think as soon as you have a real world stake

00:42:37 in something, you have a motivation

00:42:40 to not screw up that thing.

00:42:42 And I think that that’s often missing

00:42:45 that there’s no incentive for the person

00:42:48 to really preserve something.

00:42:49 If they get a little bit of attention

00:42:51 from dumping on somebody’s TikTok or something,

00:42:55 they don’t pay any price for it.

00:42:56 But you have to kind of get decent with people

00:43:00 when you have a shared stake, a little secret.

00:43:03 So GitHub does a bit of that.

00:43:06 GitHub is wonderful, yes.

00:43:08 But I’m tempted to play the Jaron Lanier back at you,

00:43:13 which is that, so GitHub currently is amazing.

00:43:16 But the thing is, if you have a stake,

00:43:18 then if it’s a social media platform,

00:43:20 they can use the fact that you have a stake

00:43:22 to manipulate you because you want to preserve the stake.

00:43:25 So like, so like.

00:43:26 Right, well, this is why,

00:43:27 all right, this gets us into the economics.

00:43:29 So there’s this thing called data dignity

00:43:30 that I’ve been studying for a long time.

00:43:32 I wrote a book about an earlier version of it

00:43:34 called Who Owns the Future?

00:43:36 And the basic idea of it is that,

00:43:41 once again, this is a 30 year conversation.

00:43:43 It’s a fascinating topic.

00:43:44 Let me do the fastest version of this I can do.

00:43:46 The fastest way I know how to do this

00:43:48 is to compare two futures, all right?

00:43:51 So future one is then the normative one,

00:43:55 the one we’re building right now.

00:43:56 And future two is gonna be data dignity, okay?

00:44:00 And I’m gonna use a particular population.

00:44:03 I live on the hill in Berkeley.

00:44:05 And one of the features about the hill

00:44:06 is that as the climate changes,

00:44:08 we might burn down and we’ll lose our houses

00:44:10 or die or something.

00:44:11 Like it’s dangerous, you know, and it didn’t used to be.

00:44:14 And so who keeps us alive?

00:44:16 Well, the city does.

00:44:18 The city does some things.

00:44:19 The electric company kind of sort of,

00:44:21 maybe hopefully better.

00:44:23 Individual people who own property

00:44:25 take care of their property.

00:44:26 That’s all nice.

00:44:27 But there’s this other middle layer,

00:44:29 which is fascinating to me,

00:44:30 which is that the groundskeepers

00:44:33 who work up and down that hill,

00:44:35 many of whom are not legally here,

00:44:38 many of whom don’t speak English,

00:44:40 cooperate with each other

00:44:42 to make sure trees don’t touch

00:44:44 and transfer fire easily from lot to lot.

00:44:46 They have this whole little web

00:44:48 that’s keeping us safe.

00:44:49 I didn’t know about this at first.

00:44:50 I just started talking to them

00:44:52 because they were out there during the pandemic.

00:44:54 And so I try to just see who are these people?

00:44:56 Who are these people who are keeping us alive?

00:44:59 Now, I want to talk about the two different phases

00:45:01 for those people in your future one and future two.

00:45:04 Future one, some weird like kindergarten paint job van

00:45:10 with all these like cameras and weird things,

00:45:11 drives up, observes what the gardeners

00:45:13 and groundskeepers are doing.

00:45:15 A few years later, some amazing robots

00:45:18 that can shimmy up trees and all this show up.

00:45:20 All those people are out of work

00:45:21 and there are these robots doing the thing

00:45:23 and the robots are good.

00:45:23 And they can scale to more land

00:45:26 and they’re actually good.

00:45:28 But then there are all these people out of work

00:45:29 and these people have lost dignity.

00:45:31 They don’t know what they’re going to do.

00:45:32 And then somebody will say,

00:45:34 well, they go on basic income, whatever.

00:45:35 They become wards of the state.

00:45:38 My problem with that solution is every time in history

00:45:42 that you’ve had some centralized thing

00:45:44 that’s doling out the benefits,

00:45:45 that things get seized by people

00:45:47 because it’s too centralized and it gets seized.

00:45:49 This happened to every communist experiment I can find.

00:45:53 So I think that turns into a poor future

00:45:55 that will be unstable.

00:45:57 I don’t think people will feel good in it.

00:45:59 I think it’ll be a political disaster

00:46:01 with a sequence of people seizing this central source

00:46:04 of the basic income.

00:46:06 And you’ll say, oh no, an algorithm can do it.

00:46:08 Then people will seize the algorithm.

00:46:09 They’ll seize control.

00:46:11 Unless the algorithm is decentralized

00:46:13 and it’s impossible to seize the control.

00:46:15 Yeah, but 60 something people

00:46:20 own a quarter of all the Bitcoin.

00:46:21 Like the things that we think are decentralized

00:46:24 are not decentralized.

00:46:25 So let’s go to future two.

00:46:27 Future two, the gardeners see that van with all the cameras

00:46:32 and the kindergarten paint job,

00:46:33 and they say, the groundskeepers,

00:46:35 and they say, hey, the robots are coming.

00:46:37 We’re going to form a data union.

00:46:38 And amazingly, California has a little baby data union law

00:46:43 emerging in the books.

00:46:44 Yes.

00:46:45 And so they say, we’re going to form a data union

00:46:52 and we’re going to,

00:46:53 not only are we going to sell our data to this place,

00:46:56 but we’re going to make it better than it would have been

00:46:57 if they were just grabbing it without our cooperation.

00:47:00 And we’re going to improve it.

00:47:01 We’re going to make the robots more effective.

00:47:03 We’re going to make them better

00:47:04 and we’re going to be proud of it.

00:47:05 We’re going to become a new class of experts

00:47:08 that are respected.

00:47:09 And then here’s the interesting,

00:47:11 there’s two things that are different about that world

00:47:14 from future one.

00:47:15 One thing, of course, the people have more pride.

00:47:17 They have more sense of ownership, of agency,

00:47:23 but what the robots do changes.

00:47:27 Instead of just like this functional,

00:47:29 like we’ll figure out how to keep the neighborhood

00:47:31 from burning down,

00:47:33 you have this whole creative community

00:47:35 that wasn’t there before thinking,

00:47:36 well, how can we make these robots better

00:47:38 so we can keep on earning money?

00:47:39 There’ll be waves of creative groundskeeping

00:47:44 with spiral pumpkin patches

00:47:46 and waves of cultural things.

00:47:47 There’ll be new ideas like,

00:47:49 wow, I wonder if we could do something

00:47:51 about climate change mitigation with how we do this.

00:47:54 What about, what about fresh water?

00:47:56 Can we, what about, can we make the food healthier?

00:47:59 What about, what about all of a sudden

00:48:00 there’ll be this whole creative community on the case?

00:48:03 And isn’t it nicer to have a high tech future

00:48:06 with more creative classes

00:48:07 than one with more dependent classes?

00:48:09 Isn’t that a better future?

00:48:10 But, but, but, but, future one and future two

00:48:14 have the same robots and the same algorithms.

00:48:17 There’s no technological difference.

00:48:19 There’s only a human difference.

00:48:21 And that second future two, that’s data dignity.

00:48:25 The economy that you’re, I mean,

00:48:27 the game theory here is on the humans

00:48:29 and then the technology is just the tools

00:48:31 that enable both possibilities.

00:48:33 I mean, I think you can believe in AI

00:48:36 and be in future two.

00:48:37 I just think it’s a little harder.

00:48:38 You have to do more contortions, it’s possible.

00:48:43 So in the case of social media,

00:48:46 what does data dignity look like?

00:48:49 Is it people getting paid for their data?

00:48:51 Yeah, I think what should happen is in the future

00:48:55 there should be massive data unions

00:48:59 for people putting content into the system

00:49:03 and those data unions should smooth out

00:49:05 the results a little bit.

00:49:06 So it’s not winner take all, but at the same time,

00:49:10 and people have to pay for it too.

00:49:11 They have to pay for Facebook

00:49:13 the way they pay for Netflix

00:49:14 with an allowance for the poor.

00:49:17 There has to be a way out too.

00:49:20 But the thing is people do pay for Netflix.

00:49:22 It’s a going concern.

00:49:24 People pay for Xbox and PlayStation.

00:49:26 Like people, there’s enough people

00:49:27 to pay for stuff they want.

00:49:28 This could happen too.

00:49:29 It’s just that this precedent started

00:49:31 that moved it in the wrong direction.

00:49:33 And then what has to happen,

00:49:34 the economy is a measuring device.

00:49:36 If it’s an honest measuring device,

00:49:39 the outcomes for people form a normal distribution,

00:49:42 a bell curve.

00:49:43 And then, so there should be a few people

00:49:45 who do really well, a lot of people who do okay.

00:49:47 And then we should have an expanding economy

00:49:49 reflecting more and more creativity and expertise

00:49:52 flowing through the network.

00:49:54 And that expanding economy moves the result

00:49:57 just a bit forward.

00:49:57 So more people are getting money out of it

00:50:00 than are putting money into it.

00:50:01 So it gradually expands the economy

00:50:03 and lifts all boats.

00:50:04 And the society has to support the lower wing

00:50:08 of the bell curve too, but not universal basic income.

00:50:10 It has to be for them,

00:50:12 because if it’s an honest economy,

00:50:15 there will be that lower wing

00:50:17 and we have to support those people.

00:50:19 There has to be a safety net.

00:50:21 But see what I believe, I’m not gonna talk about AI,

00:50:24 but I will say that I think there’ll be more

00:50:27 and more algorithms that are useful.

00:50:29 And so I don’t think everybody’s gonna be supplying data

00:50:33 to grounds keeping robots,

00:50:34 nor do I think everybody’s gonna make their living

00:50:36 with TikTok videos.

00:50:37 I think in both cases,

00:50:38 there’ll be a rather small contingent

00:50:41 that do well enough at either of those things.

00:50:43 But I think there might be many, many, many,

00:50:46 many of those niches that start to evolve

00:50:48 as there are more and more algorithms,

00:50:49 more and more robots.

00:50:50 And it’s that large number that will create

00:50:54 the economic potential for a very large part of society

00:50:57 to become members of new creative classes.
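
As a minimal sketch of the smoothing idea described here, this is one hypothetical way a data union could pool a fraction of member payouts so a single viral winner doesn’t take everything; the function name, the pooling fraction, and the numbers are illustrative assumptions only, not anything specified in the conversation.

```python
# Hypothetical sketch of payout smoothing inside a data union: each member
# keeps most of their own earnings, and a pooled fraction is shared evenly,
# so the outcome is not winner take all. All names and numbers are
# illustrative assumptions.

def smooth_payouts(raw_payouts, pool_fraction=0.3):
    """raw_payouts: dict mapping member -> raw earnings from the network.
    Each member keeps (1 - pool_fraction) of their own earnings plus an
    equal share of the pooled remainder."""
    pooled = sum(amount * pool_fraction for amount in raw_payouts.values())
    share = pooled / len(raw_payouts)
    return {member: amount * (1 - pool_fraction) + share
            for member, amount in raw_payouts.items()}

# One viral hit, many modest contributors.
print(smooth_payouts({"viral_hit": 1000.0, "steady": 40.0, "newcomer": 10.0}))
```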

00:51:00 Do you think it’s possible to create a social network

00:51:05 that competes with Twitter and Facebook

00:51:06 that’s large and centralized in this way?

00:51:08 Not centralized, sort of large, large.

00:51:10 How to get, all right, so I gotta tell you

00:51:13 how to get from where we are

00:51:16 to anything kind of in the zone

00:51:18 of what I’m talking about is challenging.

00:51:22 I know some of the people who run,

00:51:24 like I know Jack Dorsey,

00:51:26 and I view Jack as somebody who’s actually,

00:51:34 I think he’s really striving and searching

00:51:36 and trying to find a way to make it better,

00:51:40 but is kind of like,

00:51:42 it’s very hard to do it while in flight

00:51:44 and he’s under enormous business pressure too.

00:51:46 So Jack Dorsey to me is a fascinating study

00:51:49 because I think his mind is in a lot of good places.

00:51:52 He’s a good human being,

00:51:54 but there’s a big Titanic ship

00:51:56 that’s already moving in one direction.

00:51:57 It’s hard to know what to do with it.

00:51:59 I think that’s the story of Twitter.

00:52:00 I think that’s the story of Twitter.

00:52:02 One of the things that I observed is that

00:52:04 if you just wanna look at the human side,

00:52:06 meaning like how are people being changed?

00:52:08 How do they feel?

00:52:09 What’s the culture like?

00:52:11 Almost all of the social media platforms that get big

00:52:15 have an initial sort of honeymoon period

00:52:17 where they’re actually kind of sweet and cute.

00:52:19 Yeah.

00:52:20 Like if you look at the early years of Twitter,

00:52:22 it was really sweet and cute,

00:52:23 but also look at Snap, TikTok.

00:52:27 And then what happens is as they scale

00:52:30 and the algorithms become more influential

00:52:32 instead of just the early people,

00:52:33 when it gets big enough that it’s the algorithm running it,

00:52:36 then you start to see the rise of the paranoid style

00:52:39 and then they start to get dark.

00:52:40 And we’ve seen that shift in TikTok rather recently.

00:52:43 But I feel like that scaling reveals the flaws

00:52:48 within the incentives.

00:52:51 I feel like I’m torturing you.

00:52:52 I’m sorry.

00:52:53 It’s not torture.

00:52:54 No, because I have hope for the world with humans

00:53:00 and I have hope for a lot of things that humans create,

00:53:02 including technology.

00:53:04 And I just, I feel it is possible to create

00:53:07 social media platforms that incentivize

00:53:11 different things than the current.

00:53:13 I think the current incentivization is around

00:53:16 like the dumbest possible thing that was invented

00:53:19 like 20 years ago, however long.

00:53:21 And it just works and so nobody’s changing it.

00:53:24 I just think that there could be a lot of innovation

00:53:26 for more, see, you kind of pushed back with this idea

00:53:29 that we can’t know what longterm growth or happiness is.

00:53:33 If you give control to people to define

00:53:36 what their longterm happiness and goals are,

00:53:39 then that optimization can happen

00:53:42 for each of those individual people.

00:53:43 Well, I mean, imagine a future where

00:53:53 probably a lot of people would love to make their living

00:53:57 doing TikTok dance videos, but people recognize generally

00:54:01 that’s kind of hard to get into.

00:54:03 Nonetheless, dance crews have an experience

00:54:07 that’s very similar to programmers working together on GitHub.

00:54:10 So the future is like a cross between TikTok and GitHub

00:54:14 and they get together and they have rights.

00:54:18 They’re negotiating for returns.

00:54:21 They join different artists societies

00:54:23 in order to soften the blow of the randomness

00:54:26 of who gets the network effect benefit

00:54:29 because nobody can know that.

00:54:31 And I think an individual person

00:54:35 might join a thousand different data unions

00:54:37 in the course of their lives, or maybe even 10,000.

00:54:40 I don’t know, but the point is that we’ll have

00:54:42 like these very hedged, distributed portfolios

00:54:45 of different data unions we’re part of.

00:54:47 And some of them might just trickle in a little money

00:54:50 for nonsense stuff where we’re contributing

00:54:52 to health studies or something.

00:54:54 But I think people will find their way.

00:54:56 They’ll find their way to the right GitHub like community

00:55:00 in which they find their value in the context

00:55:03 of supplying inputs and data and taste

00:55:07 and correctives and all of this into the algorithms

00:55:11 and the robots of the future.

00:55:14 And that is a way to resist

00:55:18 the lizard brain based funding system mechanisms.

00:55:22 It’s an alternate economic system

00:55:25 that rewards productivity, creativity,

00:55:28 value as perceived by others.

00:55:30 It’s a genuine market.

00:55:31 It’s not doled out from a center.

00:55:32 There’s not some communist person deciding who’s valuable.

00:55:36 It’s an actual market.

00:55:38 And the money is made by supporting that

00:55:43 instead of just grabbing people’s attention

00:55:46 in the cheapest possible way,

00:55:47 which is definitely how you get the lizard brain.

00:55:49 Yeah, okay.

00:55:51 So we’re finally in agreement.

00:55:55 But I just think that…

00:55:59 So yeah, I’ll tell you how I think to fix social media.

00:56:03 There’s a few things.

00:56:05 So one, I think people should have complete control

00:56:07 over their data and transparency of what that data is

00:56:11 and how it’s being used if they do hand over the control.

00:56:14 Another thing they should be able to delete,

00:56:16 walk away with their data at any moment, easy.

00:56:19 Like with a single click of a button, maybe two buttons,

00:56:22 I don’t know, just easily walk away with their data.

00:56:26 The other is control of the algorithm,

00:56:28 individualized control of the algorithm for them.

00:56:31 So each one has their own algorithm.

00:56:33 Each person has their own algorithm.

00:56:34 They get to be the decider of what they see in this world.

00:56:38 And to me, that’s, I guess, fundamentally decentralized

00:56:43 in terms of the key decisions being made.

00:56:45 But if that’s made transparent,

00:56:47 I feel like people will choose that system

00:56:50 over Twitter of today, over Facebook of today,

00:56:53 when they have the ability to walk away,

00:56:55 to control their data

00:56:56 and to control the kinds of things they see.

00:56:58 Now, let’s walk away from the term AI.

00:57:01 You’re right.

00:57:02 In this case, you have full control

00:57:05 of the algorithms that help you

00:57:07 if you want to use their help.

00:57:09 But you can also say no to those algorithms

00:57:12 and just consume the raw, beautiful waterfall

00:57:17 of the internet.
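
To make that concrete, here is a minimal sketch of a per-user, user-controlled feed along the lines described above; the post fields, the preference format, and the function name are illustrative assumptions, not any existing platform’s API.

```python
# Illustrative sketch only: each user either supplies their own ranking
# weights or opts out and gets the raw, newest-first "waterfall".
# Post fields and the preference format are assumptions, not a real API.

from datetime import datetime

def build_feed(posts, user_prefs=None):
    """posts: list of dicts with 'topic' and 'timestamp'.
    user_prefs: optional dict of topic -> weight chosen by the user.
    With no prefs, return the unranked chronological stream."""
    if not user_prefs:
        return sorted(posts, key=lambda p: p["timestamp"], reverse=True)
    return sorted(posts, key=lambda p: user_prefs.get(p["topic"], 0.0),
                  reverse=True)

posts = [
    {"topic": "music", "timestamp": datetime(2021, 6, 1)},
    {"topic": "outrage", "timestamp": datetime(2021, 6, 2)},
]
# This user boosts music and down-weights outrage; another user could pass
# user_prefs=None and just get the raw waterfall.
print(build_feed(posts, user_prefs={"music": 1.0, "outrage": -1.0}))
```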

00:57:18 I think that, to me, not only fixes social media,

00:57:22 but I think it would make a lot more money.

00:57:24 So I would like to challenge the idea.

00:57:26 I know you’re not presenting that,

00:57:27 but that the only way to make a ton of money

00:57:30 is to operate the way Facebook does.

00:57:32 I think you can make more money by giving people control.

00:57:36 Yeah, I mean, I certainly believe that.

00:57:38 We’re definitely in the territory

00:57:40 of a wholehearted agreement here.

00:57:44 I do want to caution against one thing,

00:57:47 which is making a future that benefits programmers

00:57:50 versus this idea that people are in control of their data.

00:57:53 So years ago, I cofounded an advisory board for the EU

00:57:58 with a guy named

00:57:59 Giovanni Buttarelli, who passed away.

00:58:00 It’s one of the reasons I wanted to mention it.

00:58:02 A remarkable guy who’d been,

00:58:04 he was originally a prosecutor

00:58:06 who was throwing mafioso in jail in Sicily.

00:58:10 So he was like this intense guy who was like,

00:58:13 I’ve dealt with death threats.

00:58:16 Mark Zuckerberg doesn’t scare me or whatever.

00:58:17 So we worked on this path of saying,

00:58:21 let’s make it all about transparency and consent.

00:58:23 And it was one of the feeders that led to this huge data

00:58:26 privacy and protection framework in Europe called the GDPR.

00:58:30 And so therefore we’ve been able to have empirical feedback

00:58:35 on how that goes.

00:58:36 And the problem is that most people actually get stymied

00:58:41 by the complexity of that kind of management.

00:58:44 They have trouble and reasonably so.

00:58:46 I don’t, I’m like a techie.

00:58:48 I can go in and I can figure out what’s going on.

00:58:51 But most people really do.

00:58:54 And so there’s a problem that it differentially benefits

00:59:00 those who kind of have a technical mindset

00:59:02 and can go in and sort of have a feeling

00:59:04 for how this stuff works.

00:59:05 I kind of still want to come back to incentives.

00:59:08 And so if the incentive for whoever is,

00:59:11 if the commercial incentive is to help the creative people

00:59:14 of the future make more money,

00:59:15 because you get a cut of it,

00:59:17 that’s how you grow an economy.

00:59:19 Not the programmers.

00:59:20 Well, some of them will be programmers.

00:59:24 It’s not anti programmer.

00:59:26 I’m just saying that it’s not only programmers, you know?

00:59:30 So, yeah, you have to make sure the incentives are right.

00:59:35 I mean, I think control is an interface problem

00:59:40 to where you have to create something that’s compelling

00:59:43 to everybody, to the creatives, to the public.

00:59:47 I mean, there’s, I don’t know, Creative Commons,

00:59:51 like the licensing, there’s a bunch of legal speak

00:59:57 just in general, the whole legal profession.

01:00:00 It’s nice when it can be simplified

01:00:01 in a way that you can truly, simply understand.

01:00:03 Everybody can simply understand the basics.

01:00:07 In the same way, it should be very simple to understand

01:00:12 how the data is being used

01:00:14 and what data is being used for people.

01:00:17 But then you’re arguing that in order for that to happen,

01:00:20 you have to have the incentives aligned.

01:00:22 I mean, a lot of the reason that money works

01:00:26 is actually information hiding and information loss.

01:00:30 Like one of the things about money

01:00:32 is a particular dollar you get

01:00:34 might have passed through your enemy’s hands

01:00:36 and you don’t know it.

01:00:37 But also, I mean, this is what Adam Smith was saying,

01:00:40 if you wanna give the most charitable interpretation possible

01:00:43 to the invisible hand,

01:00:45 is that like there’s this whole complicated thing

01:00:48 and not only do you not need to know about it,

01:00:50 the truth is you’d never be able to follow it if you tried

01:00:52 and just like let the economic incentives

01:00:55 solve for this whole thing.

01:00:58 And that in a sense, every transaction

01:01:00 is like a neuron in a neural net.

01:01:02 If he’d had that metaphor, he would have used it

01:01:05 and let the whole thing settle to a solution

01:01:07 and don’t worry about it.

01:01:10 I think this idea of having incentives

01:01:13 that reduce complexity for people

01:01:15 can be made to work.

01:01:17 And that’s an example of an algorithm

01:01:19 that could be manipulative or not,

01:01:20 going back to your question before

01:01:21 about can you do it in a way that’s not manipulative?

01:01:24 And I would say a GitHub like,

01:01:28 if you just have this vision,

01:01:29 GitHub plus TikTok combined, is it possible?

01:01:33 I think it is.

01:01:34 I really think it is.

01:01:34 I’m not gonna be able to unsee that idea

01:01:38 of creatives on TikTok collaborating

01:01:40 in the same way that people on GitHub collaborate.

01:01:42 Why not?

01:01:42 I like that kind of version.

01:01:44 Why not?

01:01:45 I like it, I love it.

01:01:46 I just like, right now when people use,

01:01:48 by the way, father of teenage daughter.

01:01:50 It’s all about TikTok, right?

01:01:53 So, when people use TikTok,

01:01:55 there’s a lot of, it’s kind of funny,

01:01:58 I was gonna say cattiness,

01:01:59 but I was just using the cat

01:02:01 as this exemplar of what we’re talking about.

01:02:04 I contradict myself.

01:02:05 But anyway, there’s all this cattiness

01:02:07 where people are like,

01:02:07 ee, this person’s ee.

01:02:09 And I just, what about people getting together

01:02:13 and kind of saying,

01:02:14 okay, we’re gonna work on this move.

01:02:16 We’re gonna get a better,

01:02:17 can we get a better musician?

01:02:18 Like, and they do that,

01:02:20 but that’s the part

01:02:21 that’s kind of off the books right now.

01:02:25 That should be like right there.

01:02:26 That should be the center.

01:02:27 That’s where the, that’s the really best part.

01:02:29 Well, that’s where the invention of Git, period,

01:02:31 the versioning is brilliant.

01:02:33 And so some of the things

01:02:35 you’re talking about,

01:02:36 technology, algorithms, tools can empower.

01:02:40 And that’s the thing for humans to connect,

01:02:43 to collaborate and so on.

01:02:44 Can we upset more people a little bit?

01:02:49 Maybe we’d have to try.

01:02:50 No, no.

01:02:51 Can we, can I ask you to elaborate?

01:02:53 Cause I, my intuition was that

01:02:55 you would be a supporter of something

01:02:57 like cryptocurrency and Bitcoin

01:02:59 because it fundamentally emphasizes decentralization.

01:03:02 What do you, so can you elaborate?

01:03:05 Yeah.

01:03:06 Okay, look.

01:03:06 Your thoughts on Bitcoin.

01:03:07 It’s kind of funny.

01:03:10 I’ve been advocating

01:03:14 some kind of digital currency for a long time.

01:03:17 And when Bitcoin came out

01:03:22 and the original paper on blockchain,

01:03:26 my heart kind of sank because I thought,

01:03:29 Oh my God, we’re applying all of this fancy thought

01:03:32 and all these very careful distributed security

01:03:35 measures to recreate the gold standard.

01:03:38 Like it’s just so retro.

01:03:40 It’s so dysfunctional.

01:03:42 It’s so useless from an economic point of view.

01:03:44 So there’s that, and then the other thing

01:03:46 is using computational inefficiency

01:03:48 at a boundless scale as your form of security

01:03:51 is a crime against the atmosphere.

01:03:53 Obviously a lot of people know that now,

01:03:55 but we knew that at the start.

01:03:57 Like the thing is when the first paper came out,

01:03:59 I remember a lot of people saying,

01:04:00 Oh my God, if this thing scales,

01:04:02 it’s a carbon disaster, you know?
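
As a toy illustration of that point, and not Bitcoin’s actual code, proof-of-work security amounts to grinding through hashes until one falls below a difficulty target, so the expected work, and the energy behind it, roughly doubles with every added bit of difficulty:

```python
# Toy proof-of-work sketch, not Bitcoin's implementation: grind through
# nonces until a SHA-256 hash falls below a difficulty target. Expected
# work is about 2**difficulty_bits hashes, which is the point: the
# security comes from the sheer cost of the computation.

import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Small difficulty so this toy finishes quickly; real networks tune the
# target so the global hash rate, and the energy behind it, stays enormous.
print(mine(b"example block", difficulty_bits=16))
```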

01:04:04 And I just, I’m just mystified,

01:04:09 but that’s a different question than what you asked,

01:04:11 can you have a cryptographic currency

01:04:15 or at least some kind of digital currency

01:04:17 that’s of a benefit?

01:04:18 And absolutely.

01:04:19 Like I’m, and there are people who are trying

01:04:21 to be thoughtful about this.

01:04:22 You should, if you haven’t,

01:04:23 you should interview Vitalik Buterin sometime.

01:04:25 Yeah, I’ve interviewed him twice.

01:04:27 Okay.

01:04:28 So like there are people in the community

01:04:29 who are trying to be thoughtful

01:04:30 and trying to figure out how to do this better.

01:04:32 It has nice properties though, right?

01:04:34 So the, one of the nice properties is that

01:04:36 it’s not government centralized, so it’s hard to control.

01:04:38 Uh, and then the other one to fix some of the issues

01:04:40 that you’re referring to,

01:04:41 I’m sort of playing devil’s advocate here is,

01:04:43 you know, there’s the Lightning Network.

01:04:44 There are ideas for how you build stuff

01:04:48 on top of Bitcoin, similar with gold,

01:04:50 that allow you to have this kind of vibrant economy

01:04:53 that operates not on the blockchain,

01:04:55 but outside the blockchain.

01:04:56 And you use Bitcoin for, like,

01:05:00 checking the security of those transactions.

01:05:02 So Bitcoin’s not new.

01:05:03 It’s been around for a while.

01:05:05 I’ve been watching it closely.

01:05:07 I’ve not, I’ve not seen one example of it

01:05:11 creating economic growth.

01:05:12 There was this obsession with the idea

01:05:14 that government was the problem,

01:05:16 that idea that government’s the problem.

01:05:18 Let’s say government earned that wrath, honestly,

01:05:22 because if you look at some of the things

01:05:25 that governments have done in recent decades,

01:05:27 it’s not a pretty story.

01:05:28 Like, after a very small number

01:05:32 of people in the US government decided to bomb

01:05:35 and landmine Southeast Asia, it’s hard to come back

01:05:40 and say, oh, government’s this great thing.

01:05:41 But then the problem is that this resistance

01:05:47 to government is basically resistance to politics.

01:05:50 It’s a way of saying, if I can get rich,

01:05:53 nobody should bother me.

01:05:54 It’s a way of not having obligations to others.

01:05:56 And that ultimately is a very suspect motivation.

01:06:00 But does that mean that the impulse that the government, um,

01:06:06 should not overreach its power is flawed?

01:06:09 Well, I mean, what I want to ask you to do

01:06:12 is to replace the word government with politics.

01:06:15 Like our politics is people having to deal with each other.

01:06:20 My theory about freedom is that the only authentic form

01:06:23 of freedom is perpetual annoyance.

01:06:26 All right.

01:06:27 So annoyance means you’re actually dealing with people

01:06:30 because people are annoying.

01:06:31 Perpetual means that that annoyance is survivable

01:06:34 so it doesn’t destroy us all.

01:06:36 So if you have perpetual annoyance,

01:06:37 then you have freedom.

01:06:38 And that’s politics.

01:06:39 That’s politics.

01:06:40 If you don’t have perpetual annoyance,

01:06:42 something’s gone very wrong and you’ve suppressed those people

01:06:45 but it’s only temporary.

01:06:46 It’s going to come back and be horrible.

01:06:48 You should seek perpetual annoyance.

01:06:50 I’ll invite you to a Berkeley city council meeting

01:06:52 so you can know what that feels like.

01:06:53 What perpetual annoyance feels like.

01:06:57 But anyway, so freedom is being…

01:06:59 The test of freedom is that you’re annoyed by other people.

01:07:02 If you’re not, you’re not free.

01:07:03 If you’re not, you’re trapped in some temporary illusion

01:07:06 that’s going to fall apart.

01:07:07 Now, this quest to avoid government

01:07:10 is really a quest to avoid that political feeling,

01:07:12 but you have to have it.

01:07:14 You have to deal with it.

01:07:16 And it sucks, but that’s the human situation.

01:07:19 That’s the human condition.

01:07:20 And this idea that we’re going to have this abstract thing

01:07:22 that protects us from having to deal with each other

01:07:25 is always an illusion.

01:07:26 The idea, and I apologize,

01:07:28 I overstretched the use of the word government.

01:07:32 The idea is there should be some punishment from the people

01:07:37 when a bureaucracy, when a set of people

01:07:40 or a particular leader, like in an authoritarian regime,

01:07:44 which more than half the world currently lives under,

01:07:47 if they stop representing the people,

01:07:53 it stops being like a Berkeley meeting

01:07:56 and starts being more like a dictatorial kind of situation.

01:08:01 And so the point is, it’s nice to give people,

01:08:05 the populace in a decentralized way,

01:08:08 power to resist that kind of government becoming over authoritarian.

01:08:15 Yeah, but see, this idea that the problem

01:08:18 is always the government being powerful is false.

01:08:21 The problem can also be criminal gangs.

01:08:23 The problem can also be weird cults.

01:08:25 The problem can be abusive clergy.

01:08:30 The problem can be infrastructure that fails.

01:08:35 The problem can be poisoned water.

01:08:37 The problem can be failed electric grids.

01:08:39 The problem can be a crappy education system

01:08:45 that makes the whole society less and less able to create value.

01:08:51 There are all these other problems

01:08:52 that are different from an overbearing government.

01:08:54 Like you have to keep some sense of perspective

01:08:56 and not be obsessed with only one kind of problem

01:08:59 because then the others will pop up.

01:09:01 But empirically speaking, some problems are bigger than others.

01:09:05 So like some groups of people,

01:09:08 like governments or gangs or companies lead to problems.

01:09:12 Are you a US citizen?

01:09:13 Yes.

01:09:14 Has the government ever really been a problem for you?

01:09:16 Well, okay.

01:09:17 So first of all, I grew up in the Soviet Union.

01:09:20 Yeah, my wife did too.

01:09:22 So I have seen, and has the government bothered me?

01:09:28 I would say that that’s a really complicated question,

01:09:32 especially because the United States is such,

01:09:34 it’s a special place like a lot of other countries.

01:09:39 My wife’s family were refuseniks.

01:09:41 And so we have like a very,

01:09:43 and her dad was sent to the Gulag.

01:09:46 For what it’s worth on my father’s side,

01:09:49 all but a few were killed by a pogrom,

01:09:51 a post Soviet pogrom in Ukraine.

01:09:57 So I would say, because you did a little

01:10:00 eloquent trick of language,

01:10:02 you switched to the United States

01:10:04 to talk about government.

01:10:06 So I believe unlike my friend,

01:10:09 Michael Malice, who’s an anarchist,

01:10:11 I believe government can do a lot of good in the world.

01:10:15 That is exactly what you’re saying,

01:10:16 which is it’s politics.

01:10:19 The thing that Bitcoin folks and cryptocurrency folks argue

01:10:22 is that one of the big ways that government

01:10:25 can control the populace is centralized bank,

01:10:27 like control the money.

01:10:29 That was the case in the Soviet Union too.

01:10:32 Inflation can really make poor people suffer.

01:10:38 And so what they argue is this is one way to go around

01:10:43 that power that government has

01:10:46 of controlling the monetary system.

01:10:48 So that’s a way to resist.

01:10:50 That’s not actually saying government bad.

01:10:53 That’s saying some of the ways

01:10:55 that central banks get into trouble

01:10:59 can be resisted through decentralization.

01:11:01 So let me ask you on balance today in the real world

01:11:04 in terms of actual facts,

01:11:07 do you think cryptocurrencies are doing more

01:11:10 to prop up corrupt, murderous, horrible regimes

01:11:13 or to resist those regimes?

01:11:15 Where do you think the balance is right now?

01:11:17 I know exactly having talked to a lot of cryptocurrency folks

01:11:21 what they would tell me, right?

01:11:22 I, it’s hard, it’s, I don’t, no, no.

01:11:27 I’m asking it as a real question.

01:11:29 There’s no way to know the answer perfectly.

01:11:30 There’s no way to know the answer perfectly.

01:11:32 However, I gotta say, if you look at people

01:11:36 who’ve been able to decode blockchains

01:11:39 and they do leak a lot of data.

01:11:41 They’re not as secure as is widely thought.

01:11:43 There are a lot of unknown Bitcoin whales

01:11:47 from pretty early and they’re huge.

01:11:49 And if you ask, who are these people?

01:11:54 There’s evidence that a lot of them are

01:11:57 not quite the people you’d wanna support, let’s say.

01:12:00 And I just don’t, like, I think empirically

01:12:03 this idea that there’s some intrinsic way

01:12:07 that bad governments will be disempowered

01:12:13 and people will be able to resist them more

01:12:16 than new villains or even villainous governments

01:12:18 will be empowered.

01:12:19 There’s no basis for that assertion.

01:12:21 It just is kind of circumstantial.

01:12:23 And I think in general, Bitcoin ownership is one thing,

01:12:30 but Bitcoin transactions have tended

01:12:32 to support criminality more than productivity.

01:12:36 Of course, they would argue that was the story

01:12:38 of its early days, that now more and more Bitcoin

01:12:42 is being used for legitimate transactions, but…

01:12:46 That’s the difference.

01:12:47 I didn’t say for legitimate transactions.

01:12:48 I said for economic growth, for creativity.

01:12:51 Like, I think what’s happening is people are using it

01:12:56 a little bit for buying, I don’t know,

01:12:58 maybe some of these companies make it available

01:13:02 for this and that, they buy a Tesla with it or something.

01:13:04 Investing in a startup? Hard. It might’ve happened

01:13:10 a little bit, but it’s not an engine of productivity,

01:13:13 creativity, and economic growth,

01:13:15 whereas old fashioned currency still is.

01:13:17 And anyway, look, I think something…

01:13:24 I’m pro the idea of digital currencies.

01:13:28 I am anti the idea of economics wiping out politics

01:13:36 as a result.

01:13:37 I think they have to exist in some balance

01:13:39 to avoid the worst dysfunctions of each.

01:13:42 In some ways, there are parallels to our discussion

01:13:44 of algorithms: with cryptocurrency, you’re pro the idea,

01:13:50 but it can be used to manipulate,

01:13:54 it can be used poorly by the aforementioned humans.

01:13:59 Well, I think that you can make better designs

01:14:02 and worse designs.

01:14:04 And the thing about cryptocurrency that’s so interesting

01:14:07 is how many of us are responsible for the poor designs

01:14:12 because we’re all so hooked on that Horatio Alger story

01:14:16 on like, I’m gonna be the one who gets the viral benefit.

01:14:20 Way back when all this stuff was starting,

01:14:22 I remember it would have been in the 80s,

01:14:24 somebody had the idea of using viral

01:14:26 as a metaphor for network effect.

01:14:29 And the whole point was to talk about

01:14:32 how bad network effect was,

01:14:33 that it always created distortions

01:14:35 that ruined the usefulness of economic incentives

01:14:39 that created dangerous distortions.

01:14:42 Like, but then somehow, even after the pandemic,

01:14:45 we think of viral as this good thing

01:14:46 because we imagine ourselves as the virus, right?

01:14:49 We wanna be on the beneficiary side of it.

01:14:52 But of course, you’re not likely to be.

01:14:54 There is a sense because money is involved,

01:14:56 people are not reasoning clearly always

01:15:01 because they want to be part of that first viral wave

01:15:06 that makes them rich.

01:15:07 And that blinds people from their basic morality.

01:15:11 I had an interesting conversation.

01:15:14 I sort of feel like I should respect some people’s privacy,

01:15:16 but some of the initial people who started Bitcoin,

01:15:20 I remember having an argument about like,

01:15:24 it’s intrinsically a Ponzi scheme,

01:15:26 like the early people have more than the later people.

01:15:29 And the further down the chain you get,

01:15:31 the more you’re subject to gambling like dynamics

01:15:34 where it’s more and more random

01:15:36 and more and more subject to weird network effects

01:15:37 and whatnot unless you’re a very small player perhaps

01:15:41 and you’re just buying something,

01:15:42 but even then you’ll be subject to fluctuations

01:15:45 because the whole thing is just kind of,

01:15:48 as it fluctuates,

01:15:48 it’s gonna wave around the little people more.

01:15:51 And I remember the conversation turned to gambling

01:15:55 because gambling is a pretty large economic sector.

01:15:58 And it’s always struck me as being nonproductive.

01:16:01 Like somebody goes to Las Vegas and they lose money.

01:16:03 And so one argument is, well, they got entertainment.

01:16:06 They paid for entertainment as they lost money.

01:16:08 So that’s fine.

01:16:10 And Las Vegas does up the losing of money

01:16:13 in an entertaining way.

01:16:14 So why not?

01:16:14 It’s like going to a show.

01:16:15 So that’s one argument.

01:16:17 The argument that was made to me was different from that.

01:16:19 It’s that, no, what they’re doing

01:16:21 is they’re getting a chance to experience hope.

01:16:23 And a lot of people don’t get that chance.

01:16:25 And so that’s really worth it.

01:16:26 Even if they’re gonna lose,

01:16:27 they have that moment of hope

01:16:28 and they need to be able to experience that.

01:16:31 And it was a very interesting argument.

01:16:33 That’s so heartbreaking, but I’ve seen that.

01:16:39 I have a little bit of that sense.

01:16:41 I’ve talked to some young people

01:16:43 who invest in cryptocurrency.

01:16:45 And what I see is this hope.

01:16:48 This is the first thing that gave them hope.

01:16:50 And that’s so heartbreaking to me

01:16:52 that you’ve gotten hope from that.

01:16:55 So much is invested.

01:16:56 It’s like hope from somehow becoming rich

01:16:59 as opposed to something else, to me.

01:17:01 I apologize, but money is in the longterm

01:17:04 not going to be a source of that deep meaning.

01:17:07 It’s good to have enough money,

01:17:09 but it should not be the source of hope.

01:17:11 And it’s heartbreaking to me

01:17:13 for how many people it is the source of hope.

01:17:16 Yeah, you’ve just described the psychology of virality

01:17:21 or the psychology of trying to base a civilization

01:17:25 on semi random occurrences of network effect peaks.

01:17:28 Yeah, and it doesn’t really work.

01:17:31 I mean, I think we need to get away from that.

01:17:33 We need to soften those peaks

01:17:37 except Microsoft, which deserves every penny,

01:17:39 but in every other case.

01:17:41 Well, you mentioned GitHub.

01:17:43 I think what Microsoft did with GitHub was brilliant.

01:17:45 I was very happy.

01:17:47 Okay, if I can give a, not a criticism,

01:17:50 but a comment on Microsoft, because they recently purchased Bethesda.

01:17:56 So Elder Scrolls is in their hands.

01:17:58 I’m watching you, Microsoft,

01:18:01 do not screw up my favorite game.

01:18:03 Yeah, well, look, I’m not speaking for Microsoft.

01:18:06 I have an explicit arrangement with them

01:18:08 where I don’t speak for them, obviously,

01:18:11 like that should be very clear.

01:18:12 I do not speak for them.

01:18:14 I am not saying I like them.

01:18:17 I think Satya is amazing.

01:18:20 The term data dignity was coined by Satya.

01:18:23 Like, so, you know, we have, it’s kind of extraordinary,

01:18:27 but, you know, Microsoft’s this giant thing.

01:18:29 It’s going to screw up this or that.

01:18:30 You know, it’s not, I don’t know.

01:18:33 It’s kind of interesting.

01:18:34 I’ve had a few occasions in my life

01:18:36 to see how things work from the inside of some big thing.

01:18:39 And, you know, it’s always just people kind of,

01:18:44 I don’t know, there’s always like coordination problems.

01:18:48 There’s always human problems.

01:18:50 Oh God, there’s some good people.

01:18:51 There’s some bad people.

01:18:52 It’s always, I hope Microsoft doesn’t screw up your game.

01:18:55 And I hope they bring Clippy back.

01:18:57 You should never kill Clippy.

01:18:59 Bring Clippy back.

01:19:00 Oh, Clippy.

01:19:01 But Clippy promotes the myth of AI.

01:19:03 Well, that’s why, this is why I think you’re wrong.

01:19:06 How about if we, all right.

01:19:07 Could we bring back Bob instead of Clippy?

01:19:10 Which one was Bob?

01:19:11 Oh, Bob was another thing.

01:19:13 Bob was this other screen character

01:19:15 who was supposed to be the voice of AI.

01:19:16 Cortana?

01:19:17 Cortana?

01:19:17 Would Cortana do it for you?

01:19:19 Cortana is too corporate.

01:19:20 I like it, Cortana’s fine.

01:19:23 There’s a woman in Seattle who’s like the model for Cortana,

01:19:27 did Cortana’s voice.

01:19:28 The voice?

01:19:29 There was like,

01:19:29 No, the voice is great.

01:19:31 We had her as a, she used to walk around

01:19:34 if you were wearing a HoloLens for a bit.

01:19:36 I don’t think that’s happening anymore.

01:19:38 I think, I don’t think you should turn software

01:19:40 into a creature.

01:19:41 Well, you and I,

01:19:42 Get a cat, just get a cat.

01:19:43 You and I, you and I.

01:19:44 Well, get a dog.

01:19:45 Get a dog.

01:19:46 Or a dog, yeah.

01:19:47 Yeah.

01:19:48 Or a hedgehog.

01:19:49 A hedgehog.

01:19:50 Yeah.

01:19:51 You coauthored a paper, you mentioned Lee Smolin,

01:19:56 titled The Autodidactic Universe,

01:20:00 which describes our universe as one that learns its own physical laws.

01:20:06 That’s a trippy and beautiful and powerful idea.

01:20:09 What are, what would you say are the key ideas in this paper?

01:20:12 Ah, okay.

01:20:13 Well, I should say that paper reflected work from last year

01:20:18 and the project, the program has moved quite a lot.

01:20:21 So it’s a little, there’s a lot of stuff that’s not published

01:20:24 that I’m quite excited about.

01:20:25 So I have to kind of keep my frame in that,

01:20:28 in that last year’s thing.

01:20:30 So I have to try to be a little careful about that.

01:20:33 We can think about it in a few different ways.

01:20:37 The core of the paper, the technical core of it

01:20:40 is a triple correspondence.

01:20:43 One part of it was already established

01:20:46 and then another part is in the process.

01:20:49 The part that was established was, of course,

01:20:53 understanding different theories of physics as matrix models.

01:20:57 The part that was fresher is understanding those

01:21:01 as machine learning systems so that we could move fluidly

01:21:04 between these different ways of describing systems.

01:21:07 And the reason to want to do that is to just have more tools

01:21:11 and more options because, well,

01:21:15 theoretical physics is really hard

01:21:17 and a lot of programs have kind of run into a state

01:21:23 where they feel a little stalled, I guess.

01:21:25 I want to be delicate about this

01:21:26 because I’m not a physicist,

01:21:27 I’m the computer scientist collaborating.

01:21:29 So I don’t mean to diss anybody.

01:21:32 So this is almost like gives a framework

01:21:34 for generating new ideas in physics.

01:21:37 As we start to publish more about where it’s gone,

01:21:40 I think you’ll start to see there’s tools

01:21:42 and ways of thinking about theories

01:21:45 that I think open up some new paths

01:21:49 that will be of interest.

01:21:52 There’s the technical core of it,

01:21:54 which is this idea of a correspondence

01:21:56 to give you more facility.

01:21:58 But then there’s also the storytelling part of it.

01:22:00 And this is something Lee loves stories and I do.

01:22:05 And the idea here is that a typical way

01:22:13 of thinking about physics is that there’s some kind

01:22:17 of starting condition and then there’s some principle

01:22:19 by which the starting condition evolves.

01:22:23 And the question is like, why the starting condition?

01:22:28 The starting condition has to be fine tuned

01:22:32 and all these things about it have to be kind of perfect.

01:22:35 And so we were thinking, well, look,

01:22:37 what if we could push the storytelling

01:22:40 about where the universe comes from much further back

01:22:42 by starting with really simple things that evolve

01:22:46 and then through that evolution,

01:22:47 explain how things got to be how they are

01:22:48 through very simple principles, right?

01:22:51 And so we’ve been exploring a variety of ways

01:22:55 to push the start of the storytelling

01:22:57 further and further back,

01:23:00 and it’s really kind of interesting

01:23:03 because, for all of that,

01:23:07 Lee is sometimes considered

01:23:11 to have a radical quality in the physics world.

01:23:13 But he still is like, no, this is gonna be like,

01:23:18 the kind of time we’re talking about

01:23:19 in which evolution happens is the same time we’re in now

01:23:22 and we’re talking about something that starts and continues.

01:23:25 And I’m like, well, what if there’s some other kind

01:23:27 of time that’s time like, and it sounds like metaphysics,

01:23:31 but there’s an ambiguity, you know, like,

01:23:34 it has to start from something

01:23:36 and it’s kind of interesting.

01:23:37 So there’s this, a lot of the math

01:23:41 can be thought of either way, which is kind of interesting.

01:23:44 So push this so far back that basically

01:23:46 all the things that we take for granted in physics

01:23:47 start becoming emergent, it’s emergent.

01:23:50 I really wanna emphasize this is all super baby steps.

01:23:53 I don’t wanna over claim.

01:23:54 It’s like, I think a lot of the things we’re doing,

01:23:57 we’re approaching some old problems

01:23:59 in a pretty fresh way, informed.

01:24:02 There’s been a zillion papers about how you can think

01:24:04 of the universe as a big neural net

01:24:06 or how you can think of different ideas in physics

01:24:09 as being quite similar to, or even equivalent

01:24:11 to some of the ideas in machine learning.

01:24:15 And that actually works out crazy well.

01:24:18 Like, I mean, that is actually kind of eerie

01:24:21 when you look at it, like there’s probably

01:24:24 two or three dozen papers that have this quality

01:24:26 and some of them are just crazy good.

01:24:28 And it’s very interesting.

01:24:30 What we’re trying to do is take those kinds

01:24:33 of observations and turn them into an actionable framework

01:24:35 where you can then start to do things

01:24:38 with landscapes or theories that you couldn’t do before

01:24:40 and that sort of thing.

01:24:42 So in that context, or maybe beyond,

01:24:46 how do you explain us humans?

01:24:47 How unlikely are we, this intelligent civilization

01:24:50 or is there a lot of others or are we alone in this universe?

01:24:54 Yeah.

01:24:57 You seem to appreciate humans very much.

01:25:03 I’ve grown fond of us.

01:25:06 We’re okay.

01:25:09 We have our nice qualities.

01:25:12 I like that.

01:25:14 I mean, we’re kind of weird.

01:25:16 We sprout this hair on our heads and then we’re,

01:25:18 I don’t know, we’re sort of weird animals.

01:25:20 That’s the feature, not a bug, I think.

01:25:22 The weirdness.

01:25:23 I hope so.

01:25:24 I hope so.

01:25:30 I think if I’m just going to answer you in terms of truth,

01:25:35 the first thing I’d say is we’re not in a privileged enough

01:25:39 position, at least as yet, to really know much about who we

01:25:44 are, how we are, what we’re really like in the context

01:25:48 of something larger, what that context is,

01:25:50 like all that stuff.

01:25:51 We might learn more in the future.

01:25:52 Our descendants might learn more, but we don’t really know

01:25:55 very much, which you can either view as frustrating or charming

01:25:59 like that first year of TikTok or something.

01:26:03 All roads lead back to TikTok.

01:26:04 I like it.

01:26:05 Well, lately.

01:26:07 But in terms of, there’s another level at which I can think

01:26:10 about it where I sometimes think that if you are just quiet

01:26:19 and you do something that gets you in touch with the way

01:26:22 reality happens, and for me it’s playing music, sometimes it

01:26:27 seems like you can feel a bit of how the universe is.

01:26:30 And it feels like there’s a lot more going on in it and there

01:26:34 is a lot more life and a lot more stuff happening and a lot

01:26:38 more stuff flowing through it.

01:26:39 I’m not speaking as a scientist now.

01:26:40 This is kind of more my artist side talking and I feel like

01:26:46 I’m suddenly in multiple personalities with you.

01:26:50 Jack Kerouac said that music is the only truth.

01:26:57 It sounds like you might agree, at least in part.

01:27:01 There’s a passage in Kerouac’s book, Doctor

01:27:04 Sax, where somebody tries to just explain the whole

01:27:07 situation with reality and people in like a paragraph.

01:27:10 And I couldn’t reproduce it for you here, but it’s like, yeah,

01:27:13 like there are these bulbous things that walk around and

01:27:15 they make these sounds, you can sort of understand them, but

01:27:17 only kind of, and then there’s like this, and it’s just like

01:27:19 this amazing, like just really quick, like if some spirit

01:27:24 being or something was going to show up in our reality and

01:27:26 knew nothing about it, it’s like a little basic intro

01:27:29 of like, okay, here’s what’s going on here.

01:27:30 It’s an incredible passage.

01:27:32 Yeah.

01:27:32 Yeah.

01:27:33 It’s like a one or two sentence summary in The

01:27:36 Hitchhiker’s Guide to the Galaxy, right?

01:27:38 Of what this…

01:27:40 Mostly harmless.

01:27:41 Mostly harmless.

01:27:42 Do you think there’s truth to that, that music somehow

01:27:45 connects to something that words cannot?

01:27:48 Yeah.

01:27:49 Music is something that just towers above me.

01:27:52 I don’t feel like I have an overview of it.

01:27:57 It’s just the reverse.

01:27:58 I don’t fully understand it because on one level it’s simple.

01:28:02 Like you can say, oh, it’s a thing people evolved to

01:28:06 coordinate our brains on a pattern level or something like that.

01:28:11 There’s all these things you can say about music, which are,

01:28:14 you know, some of that’s probably true.

01:28:16 It’s also, there’s kind of like this, this is the mystery of

01:28:25 meaning.

01:28:26 Like there’s a way that just instead of just being pure

01:28:30 abstraction, music can have like this kind of substantiality

01:28:34 to it that is philosophically impossible.

01:28:39 I don’t know what to do with it.

01:28:41 Yeah.

01:28:41 The amount of understanding I feel I have when I hear the

01:28:45 right song at the right time is not comparable to anything I

01:28:51 can read on Wikipedia.

01:28:53 Anything I can understand, read through in language.

01:28:57 The music does connect us to something.

01:28:59 There’s this thing there.

01:29:00 Yeah, there’s some kind of a thing in it.

01:29:04 And I’ve never ever, I’ve run across a lot of explanations

01:29:09 from all kinds of interesting people like that it’s some kind

01:29:13 of a flow language between people or between people and how

01:29:18 they perceive and that kind of thing.

01:29:20 And that sort of explanation is fine, but it’s not quite it

01:29:26 either.

01:29:26 Yeah.

01:29:27 There’s something about music that makes me believe that

01:29:31 panpsychism could possibly be true, which is that everything

01:29:35 in the universe is conscious.

01:29:36 It makes me think, makes me be humble in how much or how

01:29:43 little I understand about the functions of our universe that

01:29:48 everything might be conscious.

01:29:50 Most people interested in theoretical physics eventually

01:29:54 land in panpsychism, but I’m not one of them.

01:30:00 I still think there’s this pragmatic imperative to treat

01:30:08 people as special.

01:30:09 So I will proudly be a dualist, with people and cats.

01:30:14 Yeah, I’m not quite sure where to draw the line or why the

01:30:19 line’s there or anything like that.

01:30:21 But I don’t think I should be required to. All the same

01:30:23 questions are equally mysterious with no line.

01:30:25 So I don’t feel disadvantaged by that.

01:30:28 So I shall remain a dualist.

01:30:30 But if you listen to anyone trying to explain where

01:30:36 consciousness is in a dualistic sense, either believing in

01:30:39 souls or some special thing in the brain or something, you

01:30:42 pretty much say, screw this.

01:30:44 I’m going to be a panpsychist.

01:30:51 Fair enough.

01:30:52 Well put.

01:30:53 Are there moments in your life that happened that were

01:30:56 defining, in the way that you hope for others, for your daughter?

01:31:00 Well, listen, I got to say the moments that defined me were

01:31:04 not the good ones.

01:31:06 The moments that defined me were often horrible.

01:31:12 I’ve had successes, you know, but if you ask what defined

01:31:16 me, my mother’s death, being under the World Trade Center

01:31:24 and the attack, the things that have had the most effect on me

01:31:30 were sort of real world, terrible things,

01:31:35 which I don’t wish on young people at all.

01:31:38 And this is the thing that’s hard about giving advice to

01:31:42 young people that they have to learn their own lessons.

01:31:48 And lessons don’t come easily.

01:31:52 And a world which avoids hard lessons will be a stupid

01:31:56 world, you know, and I don’t know what to do with it.

01:31:59 That’s a little bundle of truth that has a bit of a fatalistic

01:32:03 quality to it, but I don’t—this is like when I’m saying

01:32:07 that, you know, freedom equals eternal annoyance.

01:32:08 Like, you can’t—like, there’s a degree to which honest

01:32:14 advice is not that pleasant to give.

01:32:19 And I don’t want young people to have to know about

01:32:24 everything.

01:32:25 You don’t want to wish hardship on them.

01:32:27 Yeah, I think they deserve to have a little grace period

01:32:33 of naiveté that’s pleasant.

01:32:34 I mean, I do, you know, if it’s possible, if it’s—these

01:32:40 things are—this is like—this is tricky stuff.

01:32:42 I mean, if you—okay, so let me try a little bit on this

01:32:50 advice thing.

01:32:50 I think one thing—and any serious, broad advice will

01:32:55 have been given a thousand times before for a thousand

01:32:57 years, so I’m not going to claim originality, but I think

01:33:04 trying to find a way to really pay attention to what you’re

01:33:11 feeling fundamentally, what your sense of the world is, what

01:33:14 your intuition is, if you feel like an intuitive person, what

01:33:17 you’re—like, to try to escape the constant sway of social

01:33:27 perception or manipulation, whatever you wish—not to

01:33:30 escape it entirely, that would be horrible, but to find cover

01:33:35 from it once in a while, to find a sense of being anchored

01:33:39 in that, to believe in experience as a real thing.

01:33:44 Believing in experience as a real thing is very dualistic.

01:33:47 That goes with my philosophy of dualism.

01:33:50 I believe there’s something magical, and instead of squirting

01:33:53 the magic dust on the programs, I think experience is something

01:33:57 real and something apart, something mystical and something—

01:34:00 Your own personal experience that you just have, and then

01:34:04 you’re saying silence the rest of the world enough to hear

01:34:07 that—like, whatever that magic dust is in that experience.

01:34:11 Find what is there, and I think that’s one thing.

01:34:18 Another thing is to recognize that kindness requires genius,

01:34:24 that it’s actually really hard, that facile kindness is not

01:34:29 kindness, and that it’ll take you a while to have the skills.

01:34:33 Kind impulses, wanting to be kind, you can have right

01:34:36 away. To be effectively kind is hard.

01:34:39 To be effectively kind, yeah.

01:34:41 It takes skill. It takes hard lessons.

01:34:50 You’ll never be perfect at it. To the degree you get anywhere

01:34:55 with it, it’s the most rewarding thing ever.

01:35:01 Let’s see, what else would I say?

01:35:02 I would say when you’re young, you can be very overwhelmed

01:35:12 by social and interpersonal emotions. You’ll have broken hearts and

01:35:19 jealousies. You’ll feel socially down the ladder instead of up the

01:35:24 ladder. It feels horrible when that happens. All of these things.

01:35:28 And you have to remember what a fragile crust all that stuff is,

01:35:35 and it’s hard because right when it’s happening, it’s just so intense.

01:35:46 If I was actually giving this advice to my daughter, she’d already

01:35:48 be out of the room. This is for some hypothetical teenager that

01:35:55 doesn’t really exist that really wants to sit and listen to my

01:35:58 voice. For your daughter 10 years from now. Maybe.

01:36:03 Can I ask you a difficult question?

01:36:06 Yeah, sure.

01:36:07 You talked about losing your mom.

01:36:10 Yeah.

01:36:11 Do you miss her?

01:36:14 Yeah, I mean, I still connect to her through music. She was

01:36:18 a young prodigy piano player in Vienna, and she survived the

01:36:26 concentration camp and then died in a car accident here in the US.

01:36:32 What music makes you think of her? Is there a song that connects?

01:36:38 Well, she was in Vienna, so she had the whole Viennese music thing

01:36:46 going, which is this incredible school of absolute skill and

01:36:54 romance bundled together and wonderful on the piano, especially.

01:36:58 I learned to play some of the Beethoven sonatas for her, and I

01:37:01 played them in this exaggerated, drippy way I remember when I was

01:37:05 a kid.

01:37:06 Exaggerated meaning too full of emotion?

01:37:09 Yeah, just like…

01:37:11 Isn’t that the only way to play Beethoven? I mean, I didn’t know

01:37:14 there’s any other way.

01:37:14 That’s a reasonable question. I mean, the fashion these days is to

01:37:17 be slightly Apollonian even with Beethoven, but one imagines that

01:37:23 actual Beethoven playing might have been different. I don’t

01:37:26 know. I’ve gotten to play a few instruments he played and tried

01:37:31 to see if I could feel anything about how it might have been for

01:37:33 him. I don’t know, really.

01:37:34 I was always against the clinical precision of classical music.

01:37:38 I thought a great piano player should be, like, in pain, like,

01:37:47 you know, emotionally, like, truly feel the music and make it

01:37:55 messy, sort of maybe play classical music the way, I don’t

01:38:00 know, a blues pianist plays blues.

01:38:02 It seems like they actually got happier, and I’m not sure if

01:38:05 Beethoven got happier. I think it’s a different kind of concept

01:38:10 of the place of music. I think the blues, the whole African

01:38:17 American tradition was initially surviving awful, awful

01:38:21 circumstances. So you could say, you know, there was some of

01:38:23 that in the concentration camps and all that too. And it’s not

01:38:29 that Beethoven’s circumstances were brilliant, but he kind of

01:38:32 also, I don’t know, this is hard. Like, I mean, it would

01:38:38 seem his misery was somewhat self imposed, maybe

01:38:41 through, I don’t know. It’s kind of interesting, like, I’ve

01:38:44 known some people who loathed Beethoven, like the composer,

01:38:47 late composer, Pauline Oliveros, this wonderful modernist

01:38:50 composer. I played in her band for a while, and she was like,

01:38:54 oh, Beethoven, like, that’s the worst music ever. It’s like,

01:38:56 all ego. It completely, it turns information, I mean, it

01:39:02 turns emotion into your enemy. And it’s ultimately all about

01:39:08 your own self importance, which has to be at the expense of

01:39:11 others. What else could it be? And blah, blah, blah. So she

01:39:15 had, I shouldn’t say, I don’t mean to be dismissive, but I’m

01:39:17 just saying, like, her position on Beethoven was very negative

01:39:21 and very unimpressed, which is really interesting because

01:39:24 of the manner of the music. I think, I don’t know. I mean,

01:39:27 she’s not here to speak for herself. So it’s a little hard

01:39:29 for me to answer that question. But it was interesting because

01:39:32 I’d always thought of Beethoven as like, whoa, you know, this

01:39:34 is like Beethoven is like really the dude, you know, and it’s

01:39:38 just like, Beethoven, Schmadovan, you know, it’s like

01:39:42 not really happening. Yeah, I still, even though it’s cliche,

01:39:44 I like playing personally, just for myself, Moonlight Sonata.

01:39:47 I mean, I just, Moonlight’s amazing. I mean, it’s like,

01:39:52 Moonlight’s amazing. You know, I, you know, you’re talking

01:39:59 about comparing the blues and that sensibility from Europe

01:40:02 is so different in so many ways. One of the musicians I

01:40:06 play with is Jon Batiste, who has the band on the Colbert show,

01:40:09 and he’ll sit there playing jazz and suddenly go into

01:40:12 Moonlight. He loves Moonlight. And what’s kind of interesting

01:40:16 is he’s found a way to do Beethoven. And he, by the way,

01:40:22 he can really do Beethoven. Like, he went through Juilliard

01:40:25 and one time he was at my house, he’s saying, hey, do you

01:40:28 have the book of Beethoven’s Sonatas? I say, yeah, I want to

01:40:30 find one I haven’t played. And then he sight read through the

01:40:32 whole damn thing perfectly. And I’m like, oh, God, I just

01:40:35 get out of here. I can’t even deal with this. But anyway,

01:40:41 but anyway, the thing is he has this way of with the same

01:40:45 persona and the same philosophy moving from the blues into

01:40:48 Beethoven that’s really, really fascinating to me. It’s like,

01:40:53 I don’t want to say he plays it as if it were jazz, but he

01:40:56 kind of does. It’s kind of really, and he talks, while he

01:41:00 was sight reading, he talks like Beethoven’s talking to him.

01:41:03 Like he’s like, oh yeah, here, he’s doing this. I can’t do

01:41:05 Jon, but you know, it’s like, it’s really interesting. Like

01:41:09 it’s very different. Like for me, I was introduced to

01:41:11 Beethoven as like almost like this godlike figure, and I

01:41:14 presume Pauline was too, and that was really kind of oppressive

01:41:17 for an artist to deal with. And for him, it’s just like the

01:41:20 conversation. He’s playing James P. Johnson or something. It’s

01:41:23 like another musician who did something and they’re talking

01:41:25 and it’s very cool to be around. It’s very kind of freeing

01:41:30 to see someone have that relationship. I would love to

01:41:35 hear him play Beethoven. That sounds amazing. He’s great. We

01:41:39 talked about Ernest Becker and how much value he puts on our

01:41:45 mortality and our denial of our mortality. Do you think about

01:41:50 your mortality? Do you think about your own death? You know

01:41:53 what’s funny is I used to not be able to, but as you get older,

01:41:57 you just know people who die, and all these things

01:41:59 just become familiar and more ordinary,

01:42:04 which is what it is. But are you afraid? Sure, although less

01:42:11 so. And it’s not like I didn’t have some kind of insight or

01:42:18 revelation to become less afraid. I think I just, like I

01:42:22 say, it’s kind of familiarity. It’s just knowing people who’ve

01:42:27 died. And I really believe in the future. I have this optimism

01:42:34 that people or this whole thing of life on Earth, this whole

01:42:37 thing we’re part of, I don’t know where to draw that circle,

01:42:39 but this thing is going somewhere and has some kind of

01:42:47 value. And you can't both believe in the future and want

01:42:51 to live forever. You have to make room for it. You know,

01:42:54 that optimism has to also come with its own

01:42:58 humility. You have to make yourself small to believe in

01:43:01 the future, and so in a funny way it actually comforts me.

01:43:06 Wow, that’s powerful. And optimism requires you to kind

01:43:13 of step down after a time. Yeah, I mean, that said, life

01:43:18 seems kind of short, but you know, whatever. Do you think...

01:43:22 I've tried to find, I can't find, the complaint

01:43:24 department. You know, I really want to bring this

01:43:26 up, but the customer service number never answers, and

01:43:29 the email bounces. So, yeah, do you think there's

01:43:32 meaning to it, to life? We'll see. Meaning is a funny word.

01:43:38 We say all these things as if we know what they mean, but

01:43:40 meaning... we don't know what we mean when we say meaning.

01:43:43 We obviously do not, and it's a funny little

01:43:47 mystical thing. I think it ultimately connects to that

01:43:50 sense of experience that dualists tend to believe in.

01:43:56 I guess there's the why. Like, if you look up at the stars and

01:43:58 you experience that awe-inspiring joy, or whatever it is,

01:44:04 when you look up at the stars... I don't know why, but for me

01:44:07 that kind of makes me feel joyful, maybe a little bit

01:44:11 melancholy, just some weird soup of feelings. And ultimately

01:44:15 the question is, why are we here in this vast universe?

01:44:22 That question: why?

01:44:25 Have you been able in some way, maybe through music, to answer it

01:44:30 for yourself?

01:44:37 My impulse is to feel like it's not quite the right question

01:44:42 to ask, but I feel like going down that path is just too

01:44:46 tedious for the moment and I don't want to do it, but...

01:44:51 The wrong question? Well, just because, you know, I don't know

01:44:56 what meaning is. And I think I do know that sense of awe. I

01:45:01 grew up in southern New Mexico and the stars were so vivid.

01:45:08 I’ve had some weird misfortunes, but I’ve had some

01:45:13 weird luck also. One of our near neighbors was the head of

01:45:19 optics research at White Sands and when he was young he

01:45:22 discovered Pluto. His name was Clyde Tombaugh and he taught me

01:45:26 how to make telescopes, grinding mirrors and stuff. My dad

01:45:29 had also made telescopes when he was a kid, but Clyde had

01:45:33 backyard telescopes that would put to shame a lot of, like...

01:45:37 I mean, he really did his telescopes, you know. And so

01:45:40 I remember he'd let me go and play with him, just, like, looking at a

01:45:45 globular cluster, and you're seeing the actual photons. And with a good

01:45:48 telescope it's really like this object, like you can really tell

01:45:51 this isn't coming through some intervening information structure; this

01:45:55 is like the actual photons, and it's really a three-dimensional object,

01:45:59 and you have even a feeling for the vastness of it.

01:46:02 I don't know. I definitely was

01:46:08 very, very fortunate to have a connection to the sky that way

01:46:13 when I was a kid. To have had that experience.

01:46:17 Again, the emphasis on experience.

01:46:22 It’s kind of funny like I feel like sometimes

01:46:25 like I’ve taken when she was younger I took my daughter and her friends to

01:46:30 to like a telescope there are a few around here that are

01:46:33 kids can go and use and they would like look at Jupiter’s moons or something

01:46:37 I think like Galilean moons and I don’t know if they quite

01:46:41 had that because it’s like too

01:46:44 it’s been just too normalized and I think maybe

01:46:49 when I was growing up screens weren’t that common yet and maybe it’s like too

01:46:53 confusable with the screen I don’t know you know somebody uh

01:46:57 brought up in conversation to me somewhere, I don't remember who,

01:47:02 but they kind of posited this idea that

01:47:05 if early humans weren't able to see the stars, like if

01:47:09 Earth's atmosphere was such that it was cloudy,

01:47:12 we would not have developed human civilization. There's something about

01:47:15 being able to look up and see a vast universe that's, like,

01:47:20 fundamental to the development of human civilization.

01:47:23 I thought that was a curious kind of thought. That reminds me of that

01:47:28 old Isaac Asimov story where, you know, there's this planet where they

01:47:32 finally get to see what's in the sky once in a while, and it turns out they're in

01:47:35 the middle of a globular cluster and there are all these stars, and

01:47:38 I forget what happens exactly. God, that's from when I was the same age as a

01:47:41 kid, I don't really remember. But, yeah, I don't know,

01:47:46 it might be right. I'm just thinking of all the

01:47:49 civilizations that grew up under clouds. I mean, like,

01:47:52 the Vikings needed a special diffracting piece of mica to navigate,

01:47:58 because they could never see the sun. They had this thing called a sunstone

01:48:01 that they found from this one cave, you know about that?

01:48:03 So they were trying to navigate

01:48:07 boats, you know, in the North Atlantic without being able to see the sun,

01:48:11 because it was cloudy, and so they used

01:48:17 a chunk of mica to diffract it in order to be able to align where the sun really

01:48:22 was, because they couldn't tell by eye, and navigate. So

01:48:25 I'm just saying there are a lot of civilizations that are pretty impressive

01:48:27 that had to deal with a lot of clouds.

01:48:31 The Amazonians invented our agriculture, and they were probably under

01:48:35 clouds a lot. I don't know. To me personally, the question of the

01:48:39 meaning of life becomes most

01:48:44 vibrant, most apparent, when you look up at the stars,

01:48:47 because it makes me feel very small... that we're not small,

01:48:54 but then you ask it, it still feels that we're special, and then the natural

01:49:00 question is, well, if we are special, as I think we

01:49:04 are, why the heck are we here in this vast

01:49:08 universe? That, ultimately, is the question of...

01:49:12 Right, well, the meaning of life. I mean, look,

01:49:15 there’s a confusion sometimes in trying to use uh

01:49:22 to set up a question or a thought experiment or something

01:49:26 that’s defined in terms of a context to explain something

01:49:30 where there is no larger context and that’s a category error

01:49:34 um if we want to do it in physics um or well or in computer science um

01:49:41 it’s hard to talk about the universe as a Turing machine because a Turing

01:49:44 machine has an external clock and an observer and a

01:49:47 input and output there’s a larger context implied in order for it to be

01:49:51 defined at all and so if you’re talking about the

01:49:53 universe you can’t talk about it coherently as a Turing machine uh

01:49:57 quantum mechanics is like that quantum mechanics has an external clock and has

01:50:01 some kind of external context depending on your interpretation

01:50:04 um that’s either you know the observer or whatever

01:50:08 uh and there’s a they’re they’re similar that way so maybe

01:50:12 maybe Turing machines and quantum mechanics can be

01:50:16 better friends or something because they have a similar setup but the thing is if

01:50:19 you have something that’s defined in terms of an outer context you can’t

01:50:24 talk about ultimates with it because obviously it doesn’t

01:50:27 it’s not suited for that so there’s some ideas that

01:50:30 are their own context general relativity is its own context

01:50:34 it’s different that’s why it’s hard to unify and

01:50:37 um i think the same thing is true when we talk about

01:50:42 these types of questions like uh meaning is in a context and

01:50:49 to talk about ultimate meaning is therefore a category error it’s not

01:50:53 it’s not a um it’s not a resolvable way of thinking

01:50:59 it might be a way of thinking that is experientially

01:51:06 um or aesthetically valuable because it is awesome in the sense of

01:51:13 you know awe inspiring um but to try to treat it analytically is not

01:51:18 sensible maybe that’s what music can poetry for

01:51:22 yeah maybe i think so i think music actually does

01:51:25 escape any particular context that’s how it feels to me but i’m not sure about

01:51:28 that that’s once again crazy artist talking not scientist

01:51:33 Well, you did... you do both masterfully, Jaron. Like I said, I'm a big fan

01:51:38 of everything you've done, of you as a human being.

01:51:41 I appreciate the fun argument we had today that will, I'm sure,

01:51:47 continue for 30 years, as it did with Marvin Minsky. Honestly,

01:51:52 I deeply appreciate that you spent your really valuable time with me today.

01:51:55 It was a really great conversation. Thank you so much.

01:51:58 Thanks for listening to this conversation with Jaron Lanier.

01:52:01 To support this podcast, please check out our sponsors in the description.

01:52:06 And now, let me leave you with some words from Jaron Lanier himself:

01:52:10 a real friendship ought to introduce each person

01:52:13 to unexpected weirdness in the other. Thank you for listening, and I hope to see

01:52:19 you next time.