Ray Kurzweil: Singularity, Superintelligence, and Immortality #321

Transcript

00:00:00 By the time he gets to 2045,

00:00:02 we’ll be able to multiply our intelligence

00:00:05 many millions fold.

00:00:07 And it’s just very hard to imagine what that will be like.

00:00:13 The following is a conversation with Ray Kurzweil,

00:00:16 author, inventor, and futurist,

00:00:19 who has an optimistic view of our future

00:00:22 as a human civilization,

00:00:24 predicting that exponentially improving technologies

00:00:27 will take us to a point of a singularity

00:00:29 beyond which superintelligent artificial intelligence

00:00:33 will transform our world in nearly unimaginable ways.

00:00:38 18 years ago, in the book The Singularity Is Near,

00:00:41 he predicted that the onset of the singularity

00:00:44 will happen in the year 2045.

00:00:47 He still holds to this prediction and estimate.

00:00:50 In fact, he’s working on a new book on this topic

00:00:53 that will hopefully be out next year.

00:00:56 This is the Lex Fridman Podcast.

00:00:58 To support it, please check out our sponsors

00:01:00 in the description.

00:01:01 And now, dear friends, here’s Ray Kurzweil.

00:01:06 In your 2005 book titled The Singularity Is Near,

00:01:10 you predicted that the singularity will happen in 2045.

00:01:15 So now, 18 years later, do you still estimate

00:01:18 that the singularity will happen in 2045?

00:01:22 And maybe first, what is the singularity,

00:01:24 the technological singularity, and when will it happen?

00:01:27 Singularity is where computers really change our view

00:01:31 of what’s important and change who we are.

00:01:35 But we’re getting close to some salient things

00:01:39 that will change who we are.

00:01:42 A key thing is 2029,

00:01:45 when computers will pass the Turing test.

00:01:50 And there’s also some controversy

00:01:51 whether the Turing test is valid.

00:01:53 I believe it is.

00:01:55 Most people do believe that,

00:01:57 but there’s some controversy about that.

00:01:59 But Stanford got very alarmed at my prediction about 2029.

00:02:06 I made this in 1999 in my book,

00:02:10 The Age of Spiritual Machines.

00:02:12 Right.

00:02:12 And then you repeated the prediction in 2005.

00:02:15 In 2005.

00:02:16 Yeah.

00:02:17 So they held an international conference,

00:02:19 you might have been aware of it,

00:02:20 of AI experts in 1999 to assess this view.

00:02:26 So people gave different predictions,

00:02:29 and they took a poll.

00:02:30 It was really the first time that AI experts worldwide

00:02:34 were polled on this prediction.

00:02:37 And the average answer was 100 years.

00:02:41 20% believed it would never happen.

00:02:44 And that was the view in 1999.

00:02:48 80% believed it would happen,

00:02:50 but not within their lifetimes.

00:02:53 There’s been so many advances in AI

00:02:56 that the poll of AI experts has come down over the years.

00:03:01 So a year ago, something called Metaculus,

00:03:05 which you may be aware of,

00:03:07 assesses different types of experts on the future.

00:03:11 They again assessed what AI experts then felt.

00:03:16 And they were saying 2042.

00:03:18 For the Turing test.

00:03:20 For the Turing test.

00:03:22 So it’s coming down.

00:03:23 And I was still saying 2029.

00:03:26 A few weeks ago, they again did another poll,

00:03:30 and it was 2030.

00:03:32 So AI experts now basically agree with me.

00:03:37 I haven’t changed at all, I’ve stayed with 2029.

00:03:42 And AI experts now agree with me,

00:03:44 but they didn’t agree at first.

00:03:46 So Alan Turing formulated the Turing test,

00:03:50 and…

00:03:50 Right. Now, he actually said very little about it.

00:03:54 I mean, the 1950 paper

00:03:55 where he had articulated the Turing test,

00:03:59 there’s like a few lines that talk about the Turing test.

00:04:06 And it really wasn’t very clear how to administer it.

00:04:12 And he said if they did it in like 15 minutes,

00:04:16 that would be sufficient,

00:04:17 which I don’t really think is the case.

00:04:20 These large language models now,

00:04:22 some people are convinced by it already.

00:04:25 I mean, you can talk to it and have a conversation with it.

00:04:28 You can actually talk to it for hours.

00:04:31 So it requires a little more depth.

00:04:35 There’s some problems with large language models

00:04:38 which we can talk about.

00:04:41 But some people are convinced by the Turing test.

00:04:46 Now, if somebody passes the Turing test,

00:04:50 what are the implications of that?

00:04:52 Does that mean that they’re sentient,

00:04:53 that they’re conscious or not?

00:04:56 It’s not necessarily clear what the implications are.

00:05:00 Anyway, I believe 2029, that’s six, seven years from now,

00:05:07 we’ll have something that passes the Turing test

00:05:10 and a valid Turing test,

00:05:12 meaning it goes for hours, not just a few minutes.

00:05:15 Can you speak to that a little bit?

00:05:16 What is your formulation of the Turing test?

00:05:21 You’ve proposed a very difficult version

00:05:23 of the Turing test, so what does that look like?

00:05:25 Basically, it’s just to assess it over several hours

00:05:30 and also have a human judge that’s fairly sophisticated

00:05:36 on what computers can do and can’t do.

00:05:40 If you take somebody who’s not that sophisticated

00:05:43 or even an average engineer,

00:05:48 they may not really assess various aspects of it.

00:05:52 So you really want the human to challenge the system.

00:05:55 Exactly, exactly.

00:05:57 On its ability to do things

00:05:58 like common sense reasoning, perhaps.

00:06:00 That’s actually a key problem with large language models.

00:06:04 They don’t do well on these kinds of tests

00:06:08 that involve assessing chains of reasoning;

00:06:17 you can lose track of that.

00:06:18 If you talk to them,

00:06:20 they actually can talk to you pretty well

00:06:22 and you can be convinced by it,

00:06:24 but the test is whether it would really convince you

00:06:27 that it’s a human, whatever that takes.

00:06:32 Maybe it would take days or weeks,

00:06:34 but it would really convince you that it’s human.

00:06:40 Large language models can appear that way.

00:06:45 You can read conversations and they appear pretty good.

00:06:49 There are some problems with it.

00:06:52 It doesn’t do math very well.

00:06:55 You can ask how many legs do 10 elephants have

00:06:58 and they’ll tell you, well, okay,

00:07:00 each elephant has four legs

00:07:01 and it’s 10 elephants, so it’s 40 legs.

00:07:03 And you go, okay, that’s pretty good.

00:07:05 How many legs do 11 elephants have?

00:07:07 And they don’t seem to understand the question.

00:07:11 Do all humans understand that question?

00:07:14 No, that’s the key thing.

00:07:15 I mean, how advanced a human do you want it to be?

00:07:19 But we do expect a human

00:07:21 to be able to do multi-chain reasoning,

00:07:24 to be able to take a few facts

00:07:26 and put them together, not perfectly.

00:07:29 And we see that in a lot of polls

00:07:32 that people don’t do that perfectly at all.

00:07:39 So it’s not very well defined,

00:07:42 but it’s something where it really would convince you

00:07:44 that it’s a human.

00:07:45 Is your intuition that large language models

00:07:48 will not be solely the kind of system

00:07:52 that passes the Turing test in 2029?

00:07:55 Do we need something else?

00:07:56 No, I think it will be a large language model,

00:07:58 but they have to go beyond what they’re doing now.

00:08:02 I think we’re getting there.

00:08:05 And another key issue is if somebody

00:08:09 actually passes the Turing test validly,

00:08:12 I would believe they’re conscious.

00:08:13 And then not everybody would say that.

00:08:15 It’s okay, we can pass the Turing test,

00:08:17 but we don’t really believe that it’s conscious.

00:08:20 That’s a whole nother issue.

00:08:23 But if it really passes the Turing test,

00:08:24 I would believe that it’s conscious.

00:08:26 But I don’t believe that of large language models today.

00:08:32 If it appears to be conscious,

00:08:35 that’s as good as being conscious, at least for you,

00:08:38 in some sense.

00:08:40 I mean, consciousness is not something that’s scientific.

00:08:46 I mean, I believe you’re conscious,

00:08:49 but it’s really just a belief,

00:08:51 and we believe that about other humans

00:08:52 that at least appear to be conscious.

00:08:57 When you go outside of shared human assumptions,

00:09:01 like are animals conscious?

00:09:04 Some people believe they’re not conscious.

00:09:06 Some people believe they are conscious.

00:09:08 And would a machine that acts just like a human be conscious?

00:09:14 I mean, I believe it would be.

00:09:17 But that’s really a philosophical belief.

00:09:20 You can’t prove it.

00:09:22 I can’t take an entity and prove that it’s conscious.

00:09:25 There’s nothing that you can do

00:09:27 that would indicate that.

00:09:30 It’s like saying a piece of art is beautiful.

00:09:32 You can say it.

00:09:35 Multiple people can experience a piece of art as beautiful,

00:09:39 but you can’t prove it.

00:09:41 But it’s also an extremely important issue.

00:09:44 I mean, imagine if you had something

00:09:47 where nobody’s conscious.

00:09:49 The world may as well not exist.

00:09:55 And so some people, like say Marvin Minsky,

00:10:02 said, well, consciousness is not logical,

00:10:05 it’s not scientific, and therefore we should dismiss it,

00:10:08 and any talk about consciousness is just not to be believed.

00:10:15 But when he actually engaged with somebody

00:10:18 who was conscious, he actually acted

00:10:20 as if they were conscious.

00:10:22 He didn’t ignore that.

00:10:24 He acted as if consciousness does matter.

00:10:26 Exactly.

00:10:28 Whereas he said it didn’t matter.

00:10:30 Well, that’s Marvin Minsky.

00:10:31 Yeah.

00:10:32 He’s full of contradictions.

00:10:34 But that’s true of a lot of people as well.

00:10:37 But to you, consciousness matters.

00:10:39 But to me, it’s very important.

00:10:42 But I would say it’s not a scientific issue.

00:10:45 It’s a philosophical issue.

00:10:49 And people have different views.

00:10:50 Some people believe that anything

00:10:52 that makes a decision is conscious.

00:10:54 So your light switch is conscious.

00:10:56 Its level of consciousness is low,

00:10:59 not very interesting, but that’s a consciousness.

00:11:05 So a computer that makes a more interesting decision

00:11:09 is still not at human levels,

00:11:10 but it’s also conscious and at a higher level

00:11:12 than your light switch.

00:11:13 So that’s one view.

00:11:17 There’s many different views of what consciousness is.

00:11:20 So if a system passes the Turing test,

00:11:24 it’s not scientific, but in issues of philosophy,

00:11:30 things like ethics start to enter the picture.

00:11:32 Do you think there would be,

00:11:35 we would start contending as a human species

00:11:39 about the ethics of turning off such a machine?

00:11:42 Yeah, I mean, that’s definitely come up.

00:11:47 Hasn’t come up in reality yet.

00:11:49 Yet.

00:11:50 But I’m talking about 2029.

00:11:52 It’s not that many years from now.

00:11:56 So what are our obligations to it?

00:11:59 I mean, a computer that’s conscious

00:12:03 has a little bit different connotations than a human.

00:12:08 We have a continuous consciousness.

00:12:15 We’re in an entity that does not last forever.

00:12:22 Now, actually, a significant portion of humans still exist

00:12:27 and are therefore still conscious.

00:12:31 But anybody who is over a certain age

00:12:34 doesn’t exist anymore.

00:12:37 That wouldn’t be true of a computer program.

00:12:40 You could completely turn it off

00:12:42 and a copy of it could be stored and you could recreate it.

00:12:46 And so it has a different type of validity.

00:12:51 You could actually take it back in time.

00:12:52 You could eliminate its memory and have it go over again.

00:12:55 I mean, it has a different kind of connotation

00:12:59 than humans do.

00:13:01 Well, perhaps it can do the same thing with humans.

00:13:04 It’s just that we don’t know how to do that yet.

00:13:06 It’s possible that we figure out all of these things

00:13:09 on the machine first.

00:13:12 But that doesn’t mean the machine isn’t conscious.

00:13:15 I mean, if you look at the way people react,

00:13:17 say, C-3PO or other machines that are conscious in movies,

00:13:25 they don’t actually present how it’s conscious,

00:13:26 but we see that they are a machine

00:13:30 and people will believe that they are conscious

00:13:33 and they’ll actually worry about it

00:13:34 if they get into trouble and so on.

00:13:37 So 2029 is going to be the first year

00:13:40 when a major thing happens.

00:13:43 Right.

00:13:44 And that will shake our civilization

00:13:46 to start to consider the role of AI in this world.

00:13:50 Yes and no.

00:13:51 I mean, this one guy at Google claimed

00:13:54 that the machine was conscious.

00:13:58 But that’s just one person.

00:14:00 Right.

00:14:01 When it starts to happen at scale.

00:14:03 Well, that’s exactly right because most people

00:14:06 have not taken that position.

00:14:07 I don’t take that position.

00:14:08 I mean, I’ve used different things like this

00:14:17 and they don’t appear to me to be conscious.

00:14:20 As we eliminate various problems

00:14:22 of these large language models,

00:14:26 more and more people will accept that they’re conscious.

00:14:30 So when we get to 2029, I think a large fraction

00:14:35 of people will believe that they’re conscious.

00:14:39 So it’s not gonna happen all at once.

00:14:42 I believe it will actually happen gradually

00:14:44 and it’s already started to happen.

00:14:47 And so that takes us one step closer to the singularity.

00:14:52 Another step then is in the 2030s

00:14:55 when we can actually connect our neocortex,

00:14:59 which is where we do our thinking, to computers.

00:15:04 And I mean, just as this actually gains a lot

00:15:09 from being connected to computers

00:15:12 that will amplify its abilities,

00:15:15 I mean, if this did not have any connection,

00:15:17 it would be pretty stupid.

00:15:19 It could not answer any of your questions.

00:15:21 If you’re just listening to this, by the way,

00:15:24 Ray’s holding up the all-powerful smartphone.

00:15:29 So we’re gonna do that directly from our brains.

00:15:33 I mean, these are pretty good.

00:15:35 These already have amplified our intelligence.

00:15:37 I’m already much smarter than I would otherwise be

00:15:40 if I didn’t have this.

00:15:42 Because I remember my first book,

00:15:44 The Age of Intelligent Machines,

00:15:49 there was no way to get information from computers.

00:15:52 I actually would go to a library, find a book,

00:15:55 find the page that had the information I wanted,

00:15:58 and I’d go to the copier,

00:15:59 and my most significant information tool

00:16:04 was a roll of quarters where I could feed the copier.

00:16:08 So we’re already greatly advanced

00:16:11 that we have these things.

00:16:13 There’s a few problems with it.

00:16:15 First of all, I constantly put it down,

00:16:17 and I don’t remember where I put it.

00:16:19 I’ve actually never lost it.

00:16:21 But you have to find it, and then you have to turn it on.

00:16:26 So there’s a certain amount of steps.

00:16:28 It would actually be quite useful

00:16:30 if someone would just listen to your conversation

00:16:33 and say, oh, that’s so-and-so actress,

00:16:38 and tell you what you’re talking about.

00:16:41 So going from active to passive,

00:16:43 where it just permeates your whole life.

00:16:46 Yeah, exactly.

00:16:47 The way your brain does when you’re awake.

00:16:49 Your brain is always there.

00:16:51 Right.

00:16:52 That’s something that could actually

00:16:53 just about be done today,

00:16:55 where we’d listen to your conversation,

00:16:57 understand what you’re saying,

00:16:58 understand what you’re missing,

00:17:01 and give you that information.

00:17:04 But another step is to actually go inside your brain.

00:17:09 And there are some prototypes

00:17:12 where you can connect your brain.

00:17:15 They actually don’t have the amount

00:17:17 of bandwidth that we need.

00:17:19 They can work, but they work fairly slowly.

00:17:21 So it actually would connect to your neocortex.

00:17:26 And the neocortex, which I describe

00:17:30 in How to Create a Mind,

00:17:33 actually has different levels,

00:17:38 and as you go up the levels,

00:17:39 it’s kind of like a pyramid.

00:17:41 The top level is fairly small,

00:17:44 and that’s the level where you wanna connect

00:17:47 these brain extenders.

00:17:50 And so I believe that will happen in the 2030s.

00:17:58 So just the way this is greatly amplified

00:18:01 by being connected to the cloud,

00:18:04 we can connect our own brain to the cloud,

00:18:07 and just do what we can do by using this machine.

00:18:14 Do you think it would look like

00:18:15 the brain computer interface of like Neuralink?

00:18:18 So would it be?

00:18:19 Well, Neuralink, it’s an attempt to do that.

00:18:22 It doesn’t have the bandwidth that we need.

00:18:26 Yet, right?

00:18:27 Right, but I think,

00:18:30 I mean, they’re gonna get permission for this

00:18:31 because there are a lot of people

00:18:33 who absolutely need it because they can’t communicate.

00:18:36 I know a couple people like that

00:18:38 who have ideas and they cannot,

00:18:42 they cannot move their muscles and so on.

00:18:44 They can’t communicate.

00:18:45 And so for them, this would be very valuable,

00:18:52 but we could all use it.

00:18:54 Basically, it’d be,

00:18:59 turn us into something that would be like we have a phone,

00:19:02 but it would be in our minds.

00:19:05 It would be kind of instantaneous.

00:19:07 And maybe communication between two people

00:19:09 would not require this low bandwidth mechanism of language.

00:19:14 Yes, exactly.

00:19:15 We don’t know what that would be,

00:19:17 although we do know that computers can share information

00:19:22 like language instantly.

00:19:24 They can share many, many books in a second.

00:19:28 So we could do that as well.

00:19:31 If you look at what our brain does,

00:19:34 it actually can manipulate different parameters.

00:19:39 So we talk about these large language models.

00:19:46 I mean, I had written that

00:19:51 it requires a certain amount of information

00:19:55 in order to be effective

00:19:58 and that we would not see AI really being effective

00:20:01 until it got to that level.

00:20:04 And we had large language models

00:20:06 that were like 10 billion bytes, didn’t work very well.

00:20:09 They finally got to a hundred billion bytes

00:20:11 and now they work fairly well.

00:20:13 And now we’re going to a trillion bytes.

00:20:16 If you say LaMDA has a hundred billion bytes,

00:20:22 what does that mean?

00:20:23 Well, what if you had something that had one byte,

00:20:27 one parameter, maybe you wanna tell

00:20:30 whether or not something’s an elephant or not.

00:20:33 And so you put in something that would detect its trunk.

00:20:37 If it has a trunk, it’s an elephant.

00:20:39 If it doesn’t have a trunk, it’s not an elephant.

00:20:41 That would work fairly well.

00:20:44 There’s a few problems with it.

00:20:47 And it really wouldn’t be able to tell what a trunk is,

00:20:49 but anyway.

00:20:50 And maybe other things other than elephants have trunks,

00:20:54 you might get really confused.

00:20:55 Yeah, exactly.

00:20:56 I’m not sure which animals have trunks,

00:20:58 but how do you define a trunk?

00:21:02 But yeah, that’s one parameter.

00:21:04 You can do okay.

00:21:06 So these things have a hundred billion parameters.

00:21:08 So they’re able to deal with very complex issues.

00:21:12 All kinds of trunks.

00:21:14 Human beings actually have a little bit more than that,

00:21:16 but they’re getting to the point

00:21:17 where they can emulate humans.

00:21:22 If we were able to connect this to our neocortex,

00:21:27 we would basically add more of these abilities

00:21:33 to make distinctions,

00:21:35 and it could ultimately be much smarter

00:21:37 and also be attached to information

00:21:39 that we feel is reliable.

00:21:43 So that’s where we’re headed.

00:21:45 So you think that there will be a merger in the 30s,

00:21:49 an increasing amount of merging

00:21:50 between the human brain and the AI brain?

00:21:55 Exactly.

00:21:57 And the AI brain is really an emulation of human beings.

00:22:02 I mean, that’s why we’re creating them,

00:22:04 because human beings act the same way,

00:22:07 and this is basically to amplify them.

00:22:09 I mean, this amplifies our brain.

00:22:13 It’s a little bit clumsy to interact with,

00:22:15 but it definitely is way beyond what we had 15 years ago.

00:22:21 But the implementation becomes different,

00:22:23 just like a bird versus the airplane,

00:22:26 even though the AI brain is an emulation,

00:22:30 it starts adding features we might not otherwise have,

00:22:34 like ability to consume a huge amount

00:22:36 of information quickly,

00:22:38 like look up thousands of Wikipedia articles in one take.

00:22:43 Exactly.

00:22:44 I mean, we can get, for example,

00:22:46 issues like simulated biology,

00:22:48 where it can simulate many different things at once.

00:22:56 We already had one example of simulated biology,

00:22:59 which is the Moderna vaccine.

00:23:04 And that’s gonna be now

00:23:06 the way in which we create medications.

00:23:11 But they were able to simulate

00:23:13 what each mRNA sequence would do to a human being,

00:23:17 and they were able to simulate that quite reliably.

00:23:21 And they actually simulated billions

00:23:23 of different mRNA sequences,

00:23:27 and they found the ones that were the best,

00:23:29 and they created the vaccine.

00:23:31 And talk about doing that quickly:

00:23:34 they did that in two days.

00:23:36 Now, how long would a human being take

00:23:37 to simulate billions of different mRNA sequences?

00:23:41 I don’t know that we could do it at all,

00:23:42 but it would take many years.

00:23:45 They did it in two days, and one of the reasons

00:23:50 that people didn’t like vaccines

00:23:53 is because it was done too quickly.

00:23:58 And they actually included the time it took to test it out,

00:24:01 which was 10 months, so they figured,

00:24:03 okay, it took 10 months to create this.

00:24:06 Actually, it took us two days.

00:24:09 And we also will be able to ultimately do the tests

00:24:11 in a few days as well.

00:24:14 Oh, because we can simulate how the body will respond to it.

00:24:16 Yeah, that’s a little bit more complicated

00:24:19 because the body has a lot of different elements,

00:24:22 and we have to simulate all of that,

00:24:25 but that’s coming as well.

00:24:27 So ultimately, we could create it in a few days

00:24:30 and then test it in a few days, and it would be done.

00:24:34 And we can do that with every type

00:24:35 of medical insufficiency that we have.

00:24:40 So curing all diseases, improving certain functions

00:24:46 of the body, supplements, drugs for recreation,

00:24:53 for health, for performance, for productivity,

00:24:56 all that kind of stuff.

00:24:56 Well, that’s where we’re headed,

00:24:58 because I mean, right now we have a very inefficient way

00:25:00 of creating these new medications.

00:25:04 But we’ve already shown it, and the Moderna vaccine

00:25:07 is actually the best of the vaccines we’ve had,

00:25:12 and it literally took two days to create.

00:25:16 And we’ll get to the point

00:25:17 where we can test it out also quickly.

00:25:20 Are you impressed by AlphaFold

00:25:22 and the solution to the protein folding,

00:25:25 which essentially is simulating, modeling

00:25:30 this primitive building block of life,

00:25:33 which is a protein, and its 3D shape?

00:25:36 It’s pretty remarkable that they can actually predict

00:25:39 what the 3D shape of these things are,

00:25:42 but they did it with the same type of neural net

00:25:45 that won, for example, at Go.

00:25:51 So it’s all the same.

00:25:52 It’s all the same.

00:25:53 All the same approaches.

00:25:54 They took that same thing and just changed the rules

00:25:57 to chess, and within a couple of days,

00:26:01 it played chess at a master level

00:26:03 greater than any human being.

00:26:09 And the same thing then worked for AlphaFold,

00:26:13 which no human had done.

00:26:14 I mean, human beings could do,

00:26:16 the best humans could maybe do 15, 20%

00:26:22 of figuring out what the shape would be.

00:26:25 And after a few takes, it ultimately did just about 100%.

00:26:30 100%.

00:26:32 Do you still think the singularity will happen in 2045?

00:26:37 And what does that look like?

00:26:40 Once we can amplify our brain with computers directly,

00:26:46 which will happen in the 2030s,

00:26:48 that’s gonna keep growing.

00:26:49 That’s another whole theme,

00:26:51 which is the exponential growth of computing power.

00:26:54 Yeah, so looking at price performance of computation

00:26:57 from 1939 to 2021.

00:26:59 Right, so that starts with the very first computer

00:27:02 actually created by a German during World War II.

00:27:06 You might have thought that that might be significant,

00:27:09 but actually the Germans didn’t think computers

00:27:12 were significant, and they completely rejected it.

00:27:16 The second one is also the Zuse 2.

00:27:20 And by the way, we’re looking at a plot

00:27:22 with the x-axis being the year from 1935 to 2025.

00:27:27 And on the y-axis in log scale

00:27:30 is computation per second per constant dollar.

00:27:34 So dollars, normalized for inflation.

00:27:37 And it’s growing linearly on the log scale,

00:27:40 which means it’s growing exponentially.

00:27:41 The third one was the British computer,

00:27:44 which the Allies did take very seriously.

00:27:47 And it cracked the German code

00:27:51 and enabled the British to win the Battle of Britain,

00:27:55 which otherwise absolutely would not have happened

00:27:57 if they hadn’t cracked the code using that computer.

00:28:02 But that’s an exponential graph.

00:28:03 So a straight line on that graph is exponential growth.

00:28:07 And you see 80 years of exponential growth.

00:28:11 And I would say about every five years,

00:28:15 and this happened shortly before the pandemic,

00:28:18 people say it’s ending. They call it Moore’s law,

00:28:20 which is not correct, because it’s not all Intel.

00:28:25 In fact, this started decades before Intel was even created.

00:28:29 It didn’t start with transistors formed into a grid.

00:28:34 So it’s not just transistor count or transistor size.

00:28:37 Right, it started with relays, then went to vacuum tubes,

00:28:43 then went to individual transistors,

00:28:46 and then to integrated circuits.

00:28:51 And integrated circuits actually starts

00:28:54 like in the middle of this graph.

00:28:56 And it has nothing to do with Intel.

00:28:58 Intel actually was a key part of this.

00:29:02 But a few years ago, they stopped making the fastest chips.

00:29:08 But if you take the fastest chip of any technology

00:29:12 in that year, you get this kind of graph.

00:29:16 And it’s definitely continuing for 80 years.

00:29:19 So you don’t think Moore’s law, broadly defined, is dead.

00:29:24 It’s been declared dead multiple times throughout this process.

00:29:29 I don’t like the term Moore’s law,

00:29:31 because it has nothing to do with Moore or with Intel.

00:29:34 But yes, the exponential growth of computing is continuing.

00:29:41 It has never stopped.

00:29:42 From various sources.

00:29:43 I mean, it went through World War II,

00:29:45 it went through global recessions.

00:29:49 It’s just continuing.

00:29:53 And if you continue that out, along with software gains,

00:29:58 which is a whole nother issue,

00:30:01 and they really multiply,

00:30:02 whatever you get from software gains,

00:30:04 you multiply by the computer gains,

00:30:07 you get faster and faster speed.

00:30:10 This actually shows the fastest computer models

00:30:14 that have been created.

00:30:15 And that actually doubles roughly twice a year.

00:30:19 Like, every six months it doubles.

00:30:22 So we’re looking at a plot from 2010 to 2022.

00:30:28 On the x-axis is the publication date of the model,

00:30:31 and perhaps sometimes the actual paper associated with it.

00:30:34 And on the y-axis is training compute in FLOPs.

00:30:40 And so basically this is looking at the increase

00:30:43 in the, not transistors,

00:30:46 but the computational power of neural networks.

00:30:51 Yes, the computational power that created these models.

00:30:55 And that’s doubled every six months.

00:30:57 Which is even faster than transistor doubling.

00:31:00 Yeah.

00:31:02 Now actually, since it grows faster than costs come down,

00:31:06 this has actually become a greater investment

00:31:10 to create these.

00:31:12 But at any rate, by the time we get to 2045,

00:31:16 we’ll be able to multiply our intelligence

00:31:19 many millions fold.

00:31:21 And it’s just very hard to imagine what that will be like.

00:31:25 And that’s the singularity where we can’t even imagine.

00:31:28 Right, that’s why we call it the singularity.

00:31:30 Because the singularity in physics,

00:31:32 something gets sucked into its singularity

00:31:35 and you can’t tell what’s going on in there

00:31:37 because no information can get out of it.

00:31:40 There’s various problems with that,

00:31:42 but that’s the idea.

00:31:44 It’s too much beyond what we can imagine.

00:31:48 Do you think it’s possible we don’t notice

00:31:52 that what the singularity actually feels like

00:31:56 is we just live through it

00:31:59 with exponentially increasing cognitive capabilities

00:32:05 and we almost, because everything’s moving so quickly,

00:32:09 aren’t really able to introspect

00:32:11 that our life has changed.

00:32:13 Yeah, but I mean, we will have that much greater capacity

00:32:17 to understand things, so we should be able to look back.

00:32:20 Looking at history, understand history.

00:32:23 But we will need people, basically like you and me,

00:32:26 to actually think about these things.

00:32:29 But we might be distracted

00:32:30 by all the other sources of entertainment and fun

00:32:34 because the exponential power of intellect is growing,

00:32:39 but also there’ll be a lot of fun.

00:32:41 The amount of ways you can have fun, you know.

00:32:46 I mean, we already have a lot of fun with computer games

00:32:48 and so on that are really quite remarkable.

00:32:51 What do you think about the digital world,

00:32:54 the metaverse, virtual reality?

00:32:57 Will that have a component in this

00:32:59 or will most of our advancement be in physical reality?

00:33:01 Well, that’s a little bit like Second Life,

00:33:04 although Second Life actually didn’t work very well

00:33:06 because it couldn’t handle very many people.

00:33:09 And I don’t think the metaverse has come into being.

00:33:14 I think there will be something like that.

00:33:16 It won’t necessarily be from that one company.

00:33:21 I mean, there’s gonna be competitors.

00:33:23 But yes, we’re gonna live increasingly online,

00:33:26 and particularly if our brains are online.

00:33:28 I mean, how could we not be online?

00:33:31 Do you think it’s possible that given this merger with AI,

00:33:34 most of our meaningful interactions

00:33:39 will be in this virtual world most of our life?

00:33:43 We fall in love, we make friends,

00:33:46 we come up with ideas, we do collaborations, we have fun.

00:33:49 I actually know somebody who’s marrying somebody

00:33:51 that they never met.

00:33:54 I think they just met her briefly before the wedding,

00:33:57 but she actually fell in love with this other person,

00:34:01 never having met them.

00:34:06 And I think the love is real, so.

00:34:10 That’s a beautiful story,

00:34:11 but do you think that story is one that might be experienced

00:34:15 not just by hundreds of thousands of people,

00:34:18 but instead by hundreds of millions of people?

00:34:22 I mean, it really gives you appreciation

00:34:23 for these virtual ways of communicating.

00:34:28 And if anybody can do it,

00:34:30 then it’s really not such a freak story.

00:34:34 So I think more and more people will do that.

00:34:37 But that’s turning our back

00:34:38 on our entire history of evolution.

00:34:41 The old days, we used to fall in love by holding hands

00:34:45 and sitting by the fire, that kind of stuff.

00:34:49 Here, you’re playing.

00:34:50 Actually, I have five patents on where you can hold hands,

00:34:54 even if you’re separated.

00:34:57 Great.

00:34:58 So the touch, the sense, it’s all just senses.

00:35:01 It’s all just replicated.

00:35:03 Yeah, I mean, touch is,

00:35:04 it’s not just that you’re touching someone or not.

00:35:07 There’s a whole way of doing it, and it’s very subtle.

00:35:11 But ultimately, we can emulate all of that.

00:35:17 Are you excited by that future?

00:35:19 Do you worry about that future?

00:35:23 I have certain worries about the future,

00:35:25 but not virtual touch.

00:35:27 Well, I agree with you.

00:35:31 You described six stages

00:35:33 in the evolution of information processing in the universe,

00:35:36 as you started to describe.

00:35:39 Can you maybe talk through some of those stages

00:35:42 from the physics and chemistry to DNA and brains,

00:35:46 and then to the very end,

00:35:48 to the very beautiful end of this process?

00:35:52 It actually gets more rapid.

00:35:54 So physics and chemistry, that’s how we started.

00:35:59 So the very beginning of the universe.

00:36:02 We had lots of electrons and various things traveling around.

00:36:07 And that took actually many billions of years,

00:36:11 kind of jumping ahead here to kind of

00:36:14 some of the last stages where we have things

00:36:16 like love and creativity.

00:36:19 It’s really quite remarkable that that happens.

00:36:21 But finally, physics and chemistry created biology and DNA.

00:36:29 And now you had actually one type of molecule

00:36:33 that described the cutting edge of this process.

00:36:38 And we go from physics and chemistry to biology.

00:36:44 And finally, biology created brains.

00:36:48 I mean, not everything that’s created by biology

00:36:51 has a brain, but eventually brains came along.

00:36:56 And all of this is happening faster and faster.

00:36:58 Yeah.

00:37:00 It created increasingly complex organisms.

00:37:04 Another key thing is actually not just brains,

00:37:08 but our thumb.

00:37:12 Because there’s a lot of animals

00:37:15 with brains even bigger than humans.

00:37:18 I mean, elephants have a bigger brain.

00:37:21 Whales have a bigger brain.

00:37:24 But they’ve not created technology

00:37:27 because they don’t have a thumb.

00:37:29 So that’s one of the really key elements

00:37:32 in the evolution of humans.

00:37:34 This physical manipulator device

00:37:37 that’s useful for puzzle solving in the physical reality.

00:37:41 So I could think, I could look at a tree and go,

00:37:43 oh, I could actually trim that branch down

00:37:46 and eliminate the leaves and carve a tip on it

00:37:49 and I would create technology.

00:37:53 And you can’t do that if you don’t have a thumb.

00:37:56 Yeah.

00:37:59 So thumbs then created technology

00:38:04 and technology also had a memory.

00:38:08 And now those memories are competing

00:38:10 with the scale and scope of human beings.

00:38:15 And ultimately we’ll go beyond it.

00:38:18 And then we’re gonna merge human technology

00:38:22 with human intelligence

00:38:27 and understand how human intelligence works,

00:38:30 which I think we already do.

00:38:33 And we’re putting that into our human technology.

00:38:39 So create the technology inspired by our own intelligence

00:38:43 and then that technology supersedes us

00:38:45 in terms of its capabilities.

00:38:47 And we ride along.

00:38:48 Or do you ultimately see it as…

00:38:50 And we ride along, but a lot of people don’t see that.

00:38:52 They say, well, you’ve got humans and you’ve got machines

00:38:56 and there’s no way we can ultimately compete with humans.

00:39:00 And you can already see that.

00:39:02 Lee Sedol, who’s like the best Go player in the world,

00:39:07 says he’s not gonna play Go anymore.

00:39:10 Because playing Go for a human,

00:39:12 that was like the ultimate in intelligence

00:39:14 because no one else could do that.

00:39:18 But now a machine can actually go way beyond him.

00:39:22 And so he says, well, there’s no point playing it anymore.

00:39:25 That may be more true for games than it is for life.

00:39:30 I think there’s a lot of benefit

00:39:31 to working together with AI in regular life.

00:39:34 So if you were to put a probability on it,

00:39:37 is it more likely that we merge with AI

00:39:41 or AI replaces us?

00:39:43 A lot of people just think computers come along

00:39:47 and they compete with them.

00:39:48 We can’t really compete and that’s the end of it.

00:39:52 As opposed to them increasing our abilities.

00:39:57 And if you look at most technology,

00:39:59 it increases our abilities.

00:40:04 I mean, look at the history of work.

00:40:07 Look at what people did 100 years ago.

00:40:11 Does any of that exist anymore?

00:40:13 People, I mean, if you were to predict

00:40:16 that all of these jobs would go away

00:40:19 and would be done by machines,

00:40:21 people would say, well, there’s gonna be,

00:40:22 no one’s gonna have jobs

00:40:24 and it’s gonna be massive unemployment.

00:40:29 But I show in this book that’s coming out

00:40:34 the amount of people that are working,

00:40:36 even as a percentage of the population has gone way up.

00:40:41 We’re looking at the x-axis, year, from 1774 to 2024,

00:40:46 and on the y-axis, personal income per capita

00:40:49 in constant dollars and it’s growing super linearly.

00:40:52 I mean, it’s 2021 constant dollars and it’s gone way up.

00:40:58 That’s not what you would predict

00:41:00 given that we would predict

00:41:01 that all these jobs would go away.

00:41:03 But the reason it’s gone up is because

00:41:07 we’ve basically enhanced our own capabilities

00:41:09 by using these machines

00:41:11 as opposed to them just competing with us.

00:41:14 That’s a key way in which we’re gonna be able

00:41:16 to become far smarter than we are now

00:41:18 by increasing the number of different parameters

00:41:23 we can consider in making a decision.

00:41:26 I was very fortunate, I am very fortunate

00:41:28 to be able to get a glimpse preview

00:43:31 of your upcoming book, The Singularity Is Nearer.

00:41:37 And one of the themes outside of just discussing

00:41:41 the increasing exponential growth of technology,

00:41:44 one of the themes is that things are getting better

00:41:48 in all aspects of life.

00:41:50 And you talked just about this.

00:41:53 So one of the things you’re saying is with jobs.

00:41:55 So let me just ask about that.

00:41:57 There is a big concern that automation,

00:42:01 especially powerful AI, will get rid of jobs.

00:42:06 There are people who lose jobs.

00:42:07 And as you were saying, the sense is

00:42:10 throughout the history of the 20th century,

00:42:14 automation did not do that ultimately.

00:42:16 And so the question is, will this time be different?

00:42:20 Right, that is the question.

00:42:22 Will this time be different?

00:42:24 And it really has to do with how quickly

00:42:26 we can merge with this type of intelligence.

00:42:29 Whether LaMDA or GPT-3 is out there,

00:42:34 and maybe it’s overcome some of its key problems,

00:42:40 and we really haven’t enhanced human intelligence,

00:42:43 that might be a negative scenario.

00:42:49 But I mean, that’s why we create technologies,

00:42:53 to enhance ourselves.

00:42:56 And I believe we will be enhanced

00:42:58 when I’m just going to sit here with

00:43:03 300 million modules in our neocortex.

00:43:09 We’re going to be able to go beyond that.

00:43:14 Because that’s useful, but we can multiply that by 10,

00:43:19 100, 1,000, a million.

00:43:22 And you might think, well, what’s the point of doing that?

00:43:30 It’s like asking somebody that’s never heard music,

00:43:33 well, what’s the value of music?

00:43:36 I mean, you can’t appreciate it until you’ve created it.

00:43:41 There’s some worry that there’ll be a wealth disparity.

00:43:46 Class or wealth disparity, only the rich people

00:43:50 will be, basically, the rich people

00:43:53 will first have access to this kind of thing,

00:43:55 and then because of this kind of thing,

00:43:58 because the ability to merge

00:43:59 will get richer exponentially faster.

00:44:02 And I say that’s just like cell phones.

00:44:06 I mean, there’s like four billion cell phones

00:44:08 in the world today.

00:44:10 In fact, when cell phones first came out,

00:44:13 you had to be fairly wealthy.

00:44:14 They were not inexpensive.

00:44:17 So you had to have some wealth in order to afford them.

00:44:20 Yeah, there were these big, sexy phones.

00:44:22 And they didn’t work very well.

00:44:24 They did almost nothing.

00:44:26 So you can only afford these things if you’re wealthy

00:44:31 at a point where they really don’t work very well.

00:44:35 So achieving scale and making it inexpensive

00:44:39 is part of making the thing work well.

00:44:42 Exactly.

00:44:43 So these are not totally cheap, but they’re pretty cheap.

00:44:46 I mean, you can get them for a few hundred dollars.

00:44:52 Especially given the kind of things it provides for you.

00:44:55 There’s a lot of people in the third world

00:44:57 that have very little, but they have a smartphone.

00:45:00 Yeah, absolutely.

00:45:01 And the same will be true with AI.

00:45:03 I mean, I see homeless people have their own cell phones.

00:45:07 Yeah, so your sense is any kind of advanced technology

00:45:12 will take the same trajectory.

00:45:13 Right, it ultimately becomes cheap and will be affordable.

00:45:19 I probably would not be the first person

00:45:21 to put something in my brain to connect to computers

00:45:28 because I think it will have limitations.

00:45:30 But once it’s really perfected,

00:45:34 and at that point it’ll be pretty inexpensive,

00:45:36 I think it’ll be pretty affordable.

00:45:39 So in which other ways, as you outline your book,

00:45:43 is life getting better?

00:45:44 Because I think…

00:45:45 Well, I mean, I have 50 charts in there

00:45:49 where everything is getting better.

00:45:51 I think there’s a kind of cynicism about,

00:45:55 like even if you look at extreme poverty, for example.

00:45:58 For example, this is actually a poll

00:46:00 taken on extreme poverty, and people were asked,

00:46:05 has poverty gotten better or worse?

00:46:08 And the options are increased by 50%,

00:46:11 increased by 25%, remain the same,

00:46:13 decreased by 25%, decreased by 50%.

00:46:16 If you’re watching this or listening to this,

00:46:18 try to vote for yourself.

00:46:21 70% thought it had gotten worse,

00:46:24 and that’s the general impression.

00:46:27 88% thought it had gotten worse or remained the same.

00:46:32 Only 1% thought it decreased by 50%,

00:46:35 and that is the answer.

00:46:37 It actually decreased by 50%.

00:46:39 So only 1% of people got the right optimistic estimate

00:46:43 of how poverty is.

00:46:45 Right, and this is the reality,

00:46:47 and it’s true of almost everything you look at.

00:46:51 You don’t wanna go back 100 years or 50 years.

00:46:54 Things were quite miserable then,

00:46:56 but we tend not to remember that.

00:47:01 So literacy rate increasing over the past few centuries

00:47:05 across all the different nations,

00:47:07 nearly to 100% across many of the nations in the world.

00:47:11 It’s gone way up.

00:47:12 Average years of education have gone way up.

00:47:15 Life expectancy is also increasing.

00:47:18 Life expectancy was 48 in 1900.

00:47:24 And it’s over 80 now.

00:47:26 And it’s gonna continue to go up,

00:47:28 particularly as we get into more advanced stages

00:47:30 of simulated biology.

00:47:33 For life expectancy, these trends are the same

00:47:35 for at birth, age one, age five, age 10,

00:47:37 so it’s not just the infant mortality.

00:47:40 And I have 50 more graphs in the book

00:47:42 about all kinds of things.

00:47:46 Even spread of democracy,

00:47:48 which might bring up some sort of controversial issues,

00:47:52 it still has gone way up.

00:47:55 Well, that one has gone way up,

00:47:57 but that one is a bumpy road, right?

00:47:59 Exactly, and somebody might represent democracy

00:48:03 and go backwards, but we basically had no democracies

00:48:08 before the creation of the United States,

00:48:10 which was a little over two centuries ago,

00:48:13 which in the scale of human history isn’t that long.

00:48:17 Do you think superintelligence systems will help

00:48:21 with democracy?

00:48:23 So what is democracy?

00:48:25 Democracy is giving a voice to the populace

00:48:29 and having their ideas, having their beliefs,

00:48:33 having their views represented.

00:48:38 Well, I hope so.

00:48:41 I mean, we’ve seen social networks

00:48:44 can spread conspiracy theories,

00:48:49 which have been quite negative,

00:48:51 being, for example, being against any kind of stuff

00:48:55 that would help your health.

00:48:58 So those kinds of ideas have,

00:49:03 on social media, what you notice is they increase

00:49:06 engagement, so dramatic division increases engagement.

00:49:10 Do you worry about AI systems that will learn

00:49:13 to maximize that division?

00:49:17 I mean, I do have some concerns about this,

00:49:22 and I have a chapter in the book about the perils

00:49:25 of advanced AI, spreading misinformation

00:49:32 on social networks is one of them,

00:49:34 but there are many others.

00:49:36 What’s the one that worries you the most

00:49:40 that we should think about to try to avoid?

00:49:47 Well, it’s hard to choose.

00:49:50 We do have the nuclear power that evolved

00:49:55 when I was a child, I remember,

00:49:57 and we would actually do these drills against a nuclear war.

00:50:03 We’d get under our desks and put our hands behind our heads

00:50:07 to protect us from a nuclear war.

00:50:11 Seems to work, we’re still around, so.

00:50:15 You’re protected.

00:50:17 But that’s still a concern.

00:50:20 And there are key dangerous situations

00:50:22 that can take place in biology.

00:50:27 Someone could create a virus that’s very,

00:50:33 I mean, we have viruses that are hard to spread,

00:50:40 and they can be very dangerous,

00:50:42 and we have viruses that are easy to spread,

00:50:46 but they’re not so dangerous.

00:50:47 Somebody could create something

00:50:51 that would be very easy to spread and very dangerous,

00:50:55 and be very hard to stop.

00:50:58 It could be something that would spread

00:51:02 without people noticing, because people could get it,

00:51:04 they’d have no symptoms, and then everybody would get it,

00:51:08 and then symptoms would occur maybe a month later.

00:51:11 So I mean, and that actually doesn’t occur normally,

00:51:18 because if we were to have a problem with that,

00:51:24 we wouldn’t exist.

00:51:26 So the fact that humans exist means that we don’t have

00:51:30 viruses that can spread easily and kill us,

00:51:35 because otherwise we wouldn’t exist.

00:51:37 Yeah, viruses don’t wanna do that.

00:51:39 They want to spread and keep the host alive somewhat.

00:51:44 So you can describe various dangers with biology.

00:51:48 Also nanotechnology, which we actually haven’t experienced

00:51:53 yet, but there are people that are creating nanotechnology,

00:51:56 and I describe that in the book.

00:51:57 Now you’re excited by the possibilities of nanotechnology,

00:52:00 of nanobots, of being able to do things inside our body,

00:52:04 inside our mind, that’s going to help.

00:52:07 What’s exciting, what’s terrifying about nanobots?

00:52:10 What’s exciting is that that’s a way to communicate

00:52:13 with our neocortex, because each neocortex is pretty small

00:52:19 and you need a small entity that can actually get in there

00:52:22 and establish a communication channel.

00:52:25 And that’s gonna really be necessary to connect our brains

00:52:30 to AI within ourselves, because otherwise it would be hard

00:52:35 for us to compete with it.

00:52:38 In a high bandwidth way.

00:52:40 Yeah, yeah.

00:52:41 And that’s key, actually, because a lot of the things

00:52:45 like Neuralink are really not high bandwidth yet.

00:52:49 So nanobots is the way you achieve high bandwidth.

00:52:52 How much intelligence would those nanobots have?

00:52:55 Yeah, they don’t need a lot, just enough to basically

00:53:00 establish a communication channel to one nanobot.

00:53:04 So it’s primarily about communication.

00:53:06 Yeah.

00:53:07 Between external computing devices

00:53:09 and our biological thinking machine.

00:53:15 What worries you about nanobots?

00:53:17 Is it similar to with the viruses?

00:53:19 Well, I mean, it’s the gray goo challenge.

00:53:22 Yes.

00:53:24 If you had a nanobot that wanted to create

00:53:29 any kind of entity and repeat itself,

00:53:37 and was able to operate in a natural environment,

00:53:41 it could turn everything into that entity

00:53:45 and basically destroy all biological life.

00:53:52 So you mentioned nuclear weapons.

00:53:54 Yeah.

00:53:55 I’d love to hear your opinion about the 21st century

00:54:01 and whether you think we might destroy ourselves.

00:54:05 And maybe your opinion, if it has changed

00:54:08 by looking at what’s going on in Ukraine,

00:54:11 that we could have a hot war with nuclear powers involved

00:54:18 and the tensions building and the seeming forgetting

00:54:23 of how terrifying and destructive nuclear weapons are.

00:54:29 Do you think humans might destroy ourselves

00:54:32 in the 21st century, and if we do, how?

00:54:36 And how do we avoid it?

00:54:38 I don’t think that’s gonna happen

00:54:41 despite the terrors of that war.

00:54:45 It is a possibility, but I mean, I don’t.

00:54:50 It’s unlikely in your mind.

00:54:52 Yeah, even with the tensions we’ve had

00:54:55 with this one nuclear power plant that’s been taken over,

00:55:02 it’s very tense, but I don’t actually see a lot of people

00:55:07 worrying that that’s gonna happen.

00:55:10 I think we’ll avoid that.

00:55:11 We had two nuclear bombs go off in ’45,

00:55:15 so now we’re 77 years later.

00:55:20 Yeah, we’re doing pretty good.

00:55:22 We’ve never had another one go off through anger.

00:55:27 People forget the lessons of history.

00:55:31 Well, yeah, I mean, I am worried about it.

00:55:33 I mean, that is definitely a challenge.

00:55:37 But you believe that we’ll make it out

00:55:40 and ultimately superintelligent AI will help us make it out

00:55:44 as opposed to destroy us.

00:55:47 I think so, but we do have to be mindful of these dangers.

00:55:52 And there are other dangers besides nuclear weapons, so.

00:55:56 So to get back to merging with AI,

00:56:01 will we be able to upload our mind in a computer

00:56:06 in a way where we might even transcend

00:56:09 the constraints of our bodies?

00:56:11 So copy our mind into a computer and leave the body behind?

00:56:15 Let me describe one thing I’ve already done with my father.

00:56:21 That’s a great story.

00:56:23 So we created a technology, this is public,

00:56:26 came out, I think, six years ago,

00:56:30 where you could ask any question

00:56:33 and the released product,

00:56:35 which I think is still on the market,

00:56:37 it would read 200,000 books.

00:56:40 And then find the one sentence in 200,000 books

00:56:46 that best answered your question.

00:56:49 And it’s actually quite interesting.

00:56:51 You can ask all kinds of questions

00:56:52 and you get the best answer in 200,000 books.

00:56:57 But I was also able to take it

00:56:59 and not go through 200,000 books,

00:57:03 but go through a book that I put together,

00:57:07 which is basically everything my father had written.

00:57:10 So everything he had written, I had gathered,

00:57:14 and we created a book,

00:57:17 everything that Fredric Kurzweil had written.

00:57:20 Now, I didn’t think this actually would work that well

00:57:23 because stuff he had written was stuff about how to lay out.

00:57:30 I mean, he directed choral groups

00:57:35 and music groups,

00:57:39 and he would be laying out how the people should,

00:57:44 where they should sit and how to fund this

00:57:49 and all kinds of things

00:57:52 that really didn’t seem that interesting.

00:57:57 And yet, when you ask a question,

00:57:59 it would go through it

00:58:00 and it would actually give you a very good answer.

00:58:04 So I said, well, who’s the most interesting composer?

00:58:07 And he said, well, definitely Brahms.

00:58:09 And he would go on about how Brahms was fabulous

00:58:13 and talk about the importance of music education.

00:58:18 So you could have essentially a question and answer,

00:58:21 a conversation with him.

00:58:21 You could have a conversation with him,

00:58:23 which was actually more interesting than talking to him

00:58:25 because if you talked to him,

00:58:27 he’d be concerned about how they’re gonna lay out

00:58:30 this property to give a choral group.

00:58:34 He’d be concerned about the day to day

00:58:36 versus the big questions.

00:58:37 Exactly, yeah.

00:58:39 And you did ask about the meaning of life

00:58:41 and he answered, love.

00:58:43 Yeah.

00:58:46 Do you miss him?

00:58:49 Yes, I do.

00:58:52 Yeah, you get used to missing somebody after 52 years,

00:58:58 and I didn’t really have intelligent conversations with him

00:59:02 until later in life.

00:59:06 In the last few years, he was sick,

00:59:08 which meant he was home a lot

00:59:10 and I was actually able to talk to him

00:59:11 about different things like music and other things.

00:59:15 And so I miss that very much.

00:59:19 What did you learn about life from your father?

00:59:25 What part of him is with you now?

00:59:29 He was devoted to music.

00:59:31 And when he would create something to music,

00:59:33 it put him in a different world.

00:59:37 Otherwise, he was very shy.

00:59:42 And if people got together,

00:59:43 he tended not to interact with people

00:59:47 just because of his shyness.

00:59:49 But when he created music, he was like a different person.

00:59:55 Do you have that in you?

00:59:56 That kind of light that shines?

00:59:59 I mean, I got involved with technology at like age five.

01:00:06 And you fell in love with it

01:00:07 in the same way he did with music?

01:00:09 Yeah, yeah.

01:00:11 I remember this actually happened with my grandmother.

01:00:16 She had a manual typewriter

01:00:20 and she wrote a book, One Life Is Not Enough,

01:00:23 which actually a good title for a book I might write,

01:00:26 but it was about a school she had created.

01:00:30 Well, actually her mother created it.

01:00:33 So my mother’s mother’s mother created the school in 1868.

01:00:38 And it was the first school in Europe

01:00:40 that provided higher education for girls.

01:00:42 It went through 14th grade.

01:00:45 If you were a girl and you were lucky enough

01:00:48 to get an education at all,

01:00:50 it would go through like ninth grade.

01:00:52 And many people didn’t have any education as a girl.

01:00:56 This went through 14th grade.

01:01:00 Her mother created it, she took it over,

01:01:04 and the book was about the history of the school

01:01:09 and her involvement with it.

01:01:12 When she presented it to me,

01:01:14 I was not so interested in the story of the school,

01:01:19 but I was totally amazed with this manual typewriter.

01:01:25 I mean, here is something you could put a blank piece

01:01:27 of paper into and you could turn it into something

01:01:31 that looked like it came from a book.

01:01:33 And you can actually type on it

01:01:34 and it looked like it came from a book.

01:01:36 It was just amazing to me.

01:01:39 And I could see actually how it worked.

01:01:42 And I was also interested in magic.

01:01:44 But in magic, if somebody actually knows how it works,

01:01:50 the magic goes away.

01:01:52 The magic doesn’t stay there

01:01:53 if you actually understand how it works.

01:01:56 But here was technology.

01:01:57 I didn’t have that word when I was five or six.

01:02:01 And the magic was still there for you?

01:02:02 The magic was still there, even if you knew how it worked.

01:02:06 So I became totally interested in this

01:02:08 and then went around, collected little pieces

01:02:12 of mechanical objects from bicycles, from broken radios.

01:02:17 I would go through the neighborhood.

01:02:20 This was an era where you would allow five or six year olds

01:02:23 to run through the neighborhood and do this.

01:02:26 We don’t do that anymore.

01:02:27 But I didn’t know how to put them together.

01:02:30 I said, if I could just figure out

01:02:32 how to put these things together, I could solve any problem.

01:02:37 And I actually remember talking to these very old girls.

01:02:41 I think they were 10.

01:02:45 And telling them, if I could just figure this out,

01:02:48 we could fly, we could do anything.

01:02:50 And they said, well, you have quite an imagination.

01:02:56 And then when I was in third grade,

01:03:00 so I was like eight,

01:03:02 created like a virtual reality theater

01:03:05 where people could come on stage

01:03:07 and they could move their arms.

01:03:09 And all of it was controlled through one control box.

01:03:13 It was all done with mechanical technology.

01:03:16 And it was a big hit in my third grade class.

01:03:21 And then I went on to do things

01:03:22 in junior high school science fairs

01:03:24 and high school science fairs.

01:03:27 I won the Westinghouse Science Talent Search.

01:03:30 So I mean, I became committed to technology

01:03:33 when I was five or six years old.

01:03:37 You’ve talked about how you use lucid dreaming to think,

01:03:43 to come up with ideas as a source of creativity.

01:03:45 Because you maybe talk through that,

01:03:49 maybe the process of how to,

01:03:52 you’ve invented a lot of things.

01:03:54 You’ve came up and thought through

01:03:55 some very interesting ideas.

01:03:58 What advice would you give,

01:03:59 or can you speak to the process of thinking,

01:04:03 of how to think, how to think creatively?

01:04:07 Well, I mean, sometimes I will think through in a dream

01:04:10 and try to interpret that.

01:04:12 But I think the key issue that I would tell younger people

01:04:22 is to put yourself in the position

01:04:25 that what you’re trying to create already exists.

01:04:30 And then you’re explaining, like…

01:04:34 How it works.

01:04:35 Exactly.

01:04:38 That’s really interesting.

01:04:39 You paint a world that you would like to exist,

01:04:42 you think it exists, and reverse engineer that.

01:04:45 And then you actually imagine you’re giving a speech

01:04:47 about how you created this.

01:04:50 Well, you’d have to then work backwards

01:04:51 as to how you would create it in order to make it work.

01:04:57 That’s brilliant.

01:04:58 And that requires some imagination too,

01:05:01 some first principles thinking.

01:05:03 You have to visualize that world.

01:05:06 That’s really interesting.

01:05:07 And generally, when I talk about things

01:05:10 we’re trying to invent, I would use the present tense

01:05:13 as if it already exists.

01:05:15 Not just to give myself that confidence,

01:05:18 but everybody else who’s working on it.

01:05:21 We just have to kind of do all the steps

01:05:26 in order to make it actual.

01:05:31 How much of a good idea is about timing?

01:05:35 How much is it about your genius

01:05:37 versus that its time has come?

01:05:41 Timing’s very important.

01:05:42 I mean, that’s really why I got into futurism.

01:05:46 I didn’t, I wasn’t inherently a futurist.

01:05:50 That was not really my goal.

01:05:54 It’s really to figure out when things are feasible.

01:05:57 We see that now with large scale models.

00:56:01 The very large scale models like GPT-3,

01:06:06 it started two years ago.

01:06:09 Four years ago, it wasn’t feasible.

00:56:11 In fact, they did create GPT-2, which didn’t work.

01:06:18 So it required a certain amount of timing

01:06:22 having to do with this exponential growth

01:06:24 of computing power.

01:06:27 So futurism in some sense is a study of timing,

01:06:31 trying to understand how the world will evolve

01:06:34 and when will the capacity for certain ideas emerge.

01:06:38 And that’s become a thing in itself

01:06:40 and to try to time things in the future.

01:06:43 But really its original purpose was to time my products.

01:06:48 I mean, I did OCR in the 1970s

01:06:55 because OCR doesn’t require a lot of computation.

01:07:01 Optical character recognition.

01:07:02 Yeah, so we were able to do that in the 70s

01:07:06 and I waited till the 80s to address speech recognition

01:07:11 since that requires more computation.

01:07:14 So you were thinking through timing

01:07:16 when you’re developing those things.

01:07:17 Yeah.

01:07:18 Time come.

01:07:19 Yeah.

01:07:21 And that’s how you’ve developed that brain power

01:07:24 to start to think in a futurist sense

01:07:26 when how will the world look like in 2045

01:07:31 and work backwards and how it gets there.

01:07:33 But that has to become a thing in itself

01:07:35 because looking at what things will be like in the future

01:07:40 and the future reflects such dramatic changes in how humans will live

01:07:48 that was worth communicating also.

01:07:51 So you developed that muscle of predicting the future

01:07:56 and then applied broadly

01:07:58 and started to discuss how it changes the world of technology,

01:08:02 how it changes the world of human life on earth.

01:08:06 In Danielle, one of your books,

01:08:09 you write about someone who has the courage

01:08:11 to question assumptions that limit human imagination

01:08:15 to solve problems.

01:08:16 And you also give advice

01:08:18 on how each of us can have this kind of courage.

01:08:22 Well, it’s good that you picked that quote

01:08:24 because I think that does symbolize what Danielle is about.

01:08:27 Courage.

01:08:28 So how can each of us have that courage

01:08:30 to question assumptions?

01:08:33 I mean, we see that when people can go beyond

01:08:38 the current realm and create something that’s new.

01:08:43 I mean, take Uber, for example.

01:08:45 Before that existed, you never thought

01:08:48 that that would be feasible

01:08:49 and it did require changes in the way people work.

01:08:54 Is there practical advice that you give in the book

01:08:57 about what each of us can do to be a Danielle?

01:09:04 Well, she looks at the situation

01:09:06 and tries to imagine how she can overcome various obstacles

01:09:15 and then she goes for it.

01:09:17 And she’s a very good communicator

01:09:19 so she can communicate these ideas to other people.

01:09:25 And there’s practical advice of learning to program

01:09:27 and recording your life and things of this nature.

01:09:32 Become a physicist.

01:09:33 So you list a bunch of different suggestions

01:09:36 of how to throw yourself into this world.

01:09:39 Yeah, I mean, it’s kind of an idea

01:09:42 how young people can actually change the world

01:09:46 by learning all of these different skills.

01:09:52 And at the core of that is the belief

01:09:54 that you can change the world.

01:09:57 That your mind, your body can change the world.

01:10:00 Yeah, that’s right.

01:10:02 And not letting anyone else tell you otherwise.

01:10:06 That’s really good, exactly.

01:10:08 When we upload the story you told about your dad

01:10:13 and having a conversation with him,

01:10:16 we’re talking about uploading your mind to the computer.

01:10:21 Do you think we’ll have a future

01:10:23 with something you call afterlife?

01:10:25 We’ll have avatars that mimic increasingly better and better

01:10:29 our behavior, our appearance, all that kind of stuff.

01:10:33 Even those that are perhaps no longer with us.

01:10:36 Yes, I mean, we need some information about them.

01:10:42 I mean, think about my father.

01:10:45 I have what he wrote.

01:10:48 Now, he didn’t have a word processor,

01:10:50 so he didn’t actually write that much.

01:10:53 And our memories of him aren’t perfect.

01:10:56 So how do you even know if you’ve created something

01:10:59 that’s satisfactory?

01:11:00 Now, you could do a Frederick Kurzweil Turing test.

01:11:04 It seems like Frederick Kurzweil to me.

01:11:07 But the people who remember him, like me,

01:11:11 don’t have a perfect memory.

01:11:14 Is there such a thing as a perfect memory?

01:11:16 Maybe the whole point is for him to make you feel

01:11:24 a certain way.

01:11:25 Yeah, well, I think that would be the goal.

01:11:28 And that’s the connection we have with loved ones.

01:11:30 It’s not really based on very strict definition of truth.

01:11:35 It’s more about the experiences we share.

01:11:37 And they get morphed through memory.

01:11:39 But ultimately, they make us smile.

01:11:41 I think we definitely can do that.

01:11:44 And that would be very worthwhile.

01:11:46 So do you think we’ll have a world of replicants?

01:11:49 Of copies?

01:11:51 There’ll be a bunch of Ray Kurzweils.

01:11:53 Like, I could hang out with one.

01:11:55 I can download it for five bucks

01:11:58 and have a best friend, Ray.

01:12:01 And you, the original copy, wouldn’t even know about it.

01:12:07 Is that, do you think that world is,

01:12:11 first of all, do you think that world is feasible?

01:12:13 And do you think there’s ethical challenges there?

01:12:16 Like, how would you feel about me hanging out

01:12:18 with Ray Kurzweil and you not knowing about it?

01:12:20 It doesn’t strike me as a problem.

01:12:28 Which you, the original?

01:12:30 Would it strike you, would that cause a problem for you?

01:12:34 No, I would really very much enjoy it.

01:12:37 No, not just hang out with me,

01:12:38 but if somebody hangs out with you, a replicant of you.

01:12:43 Well, I think I would start, it sounds exciting,

01:12:46 but then what if they start doing better than me

01:12:51 and take over my friend group?

01:12:55 And then, because they may be an imperfect copy

01:13:02 or there may be more social, all these kinds of things,

01:13:05 and then I become like the old version

01:13:07 that’s not nearly as exciting.

01:13:10 Maybe they’re a copy of the best version of me

01:13:12 on a good day.

01:13:13 Yeah, but if you hang out with a replicant of me

01:13:18 and that turned out to be successful,

01:13:20 I’d feel proud of that person because it was based on me.

01:13:24 So it’s, but it is a kind of death of this version of you.

01:13:32 Well, not necessarily.

01:13:33 I mean, you can still be alive, right?

01:13:36 But, and you would be proud, okay,

01:13:38 so it’s like having kids and you’re proud

01:13:40 that they’ve done even more than you were able to do.

01:13:42 Yeah, exactly.

01:13:48 It does bring up new issues,

01:13:50 but it seems like an opportunity.

01:13:55 Well, that replicant should probably have the same rights

01:13:57 as you do.

01:13:59 Well, that gets into a whole issue

01:14:05 because when a replicant occurs,

01:14:07 they’re not necessarily gonna have your rights.

01:14:10 And if a replicant occurs,

01:14:11 if it’s somebody who’s already dead,

01:14:14 do they have all the obligations

01:14:17 that the original person had?

01:14:21 Do they have all the agreements that they had?

01:14:25 I think you’re gonna have to have laws that say yes.

01:14:30 There has to be, if you wanna create a replicant,

01:14:33 they have to have all the same rights as human rights.

01:14:35 Well, you don’t know.

01:14:37 Someone can create a replicant and say,

01:14:38 well, it’s a replicant,

01:14:39 but I didn’t bother getting their rights.

01:14:40 And so.

01:14:41 Yeah, but that would be illegal, I mean.

01:14:43 Like if you do that, you have to do that in the black market.

01:14:47 If you wanna get an official replicant.

01:14:49 Okay, it’s not so easy.

01:14:51 Suppose you create multiple replicants.

01:14:55 The original rights

01:14:59 were maybe for one person and not for a whole group of people.

01:15:04 Sure.

01:15:08 So there has to be at least one.

01:15:10 And then all the other ones kinda share the rights.

01:15:14 Yeah, I just don’t think that,

01:15:16 that’s very difficult to conceive for us humans,

01:15:18 the idea that this can occur.

01:15:20 You create a replicant that has certain,

01:15:24 I mean, I’ve talked to people about this,

01:15:26 including my wife, who would like to get back her father.

01:15:32 And she doesn’t worry about who has rights to what.

01:15:38 She would have somebody that she could visit with

01:15:40 and might give her some satisfaction.

01:15:44 And she wouldn’t care about any of these other rights.

01:15:49 What does your wife think about multiple Ray Kurzweils?

01:15:53 Have you had that discussion?

01:15:54 I haven’t addressed that with her.

01:15:58 I think ultimately that’s an important question,

01:16:00 how loved ones feel about it.

01:16:03 There’s something about love.

01:16:05 Well, that’s the key thing, right?

01:16:06 If the loved one’s rejected,

01:16:07 it’s not gonna work very well, so.

01:16:12 So the loved ones really are the key determinant,

01:16:15 whether or not this works or not.

01:16:19 But there’s also ethical rules.

01:16:22 We have to contend with the idea,

01:16:24 and we have to contend with that idea with AI.

01:16:27 But what’s gonna motivate it is,

01:16:30 I mean, I talk to people who really miss people who are gone

01:16:34 and they would love to get something back,

01:16:37 even if it isn’t perfect.

01:16:40 And that’s what’s gonna motivate this.

01:16:47 And that person lives on in some form.

01:16:51 And the more data we have,

01:16:52 the more we’re able to reconstruct that person

01:16:56 and allow them to live on.

01:16:59 And eventually as we go forward,

01:17:01 we’re gonna have more and more of this data

01:17:03 because we’re gonna have nanobots

01:17:06 that are inside our neocortex

01:17:08 and we’re gonna collect a lot of data.

01:17:11 In fact, anything that’s data is always collected.

01:17:15 There is something a little bit sad,

01:17:18 which is becoming, or maybe it’s hopeful,

01:17:23 which is more and more common these days,

01:17:26 which when a person passes away,

01:17:28 you have their Twitter account,

01:17:31 and you have the last tweet they tweeted,

01:17:34 like something they needed.

01:17:35 And you can recreate them now

01:17:36 with large language models and so on.

01:17:38 I mean, you can create somebody that’s just like them

01:17:40 and can actually continue to communicate.

01:17:45 I think that’s really exciting

01:17:46 because I think in some sense,

01:17:49 like if I were to die today,

01:17:51 in some sense I would continue on if I continued tweeting.

01:17:56 I tweet, therefore I am.

01:17:58 Yeah, well, I mean, that’s one of the advantages

01:18:02 of a replicant, they can recreate the communications

01:18:06 of that person.

01:18:10 Do you hope, do you think, do you hope

01:18:14 humans will become a multi planetary species?

01:18:17 You’ve talked about the phases, the six epochs,

01:18:20 and one of them is reaching out into the stars in part.

01:18:23 Yes, but the kind of attempts we’re making now

01:18:28 to go to other planetary objects

01:18:34 doesn’t excite me that much

01:18:36 because it’s not really advancing anything.

01:18:38 It’s not efficient enough?

01:18:41 Yeah, and we’re also sending out human beings,

01:18:48 which is a very inefficient way

01:18:50 to explore these other objects.

01:18:52 What I’m really talking about in the sixth epoch,

01:18:57 the universe wakes up.

01:19:00 It’s where we can spread our super intelligence

01:19:03 throughout the universe.

01:19:05 And that doesn’t mean sending very soft,

01:19:08 squishy creatures like humans.

01:19:10 Yeah, the universe wakes up.

01:19:13 I mean, we would send intelligent masses of nanobots

01:19:18 which can then go out and colonize

01:19:24 these other parts of the universe.

01:19:29 Do you think there’s intelligent alien civilizations

01:19:31 out there that our bots might meet?

01:19:35 My hunch is no.

01:19:38 Most people say yes, absolutely.

01:19:40 I mean, and the universe is too big.

01:19:43 And they’ll cite the Drake equation.

01:19:46 And I think in Singularity is Near,

01:19:52 I have two analyses of the Drake equation,

01:19:56 both with very reasonable assumptions.

01:20:00 And one gives you thousands of advanced civilizations

01:20:04 in each galaxy.

01:20:07 And another one gives you one civilization.

01:20:11 And we know of one.
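
The two Drake-equation analyses mentioned here can be sketched numerically. The parameter values below are purely illustrative assumptions chosen to show how equally "reasonable" inputs swing the answer from thousands of civilizations per galaxy to just one; they are not the figures from The Singularity Is Near:

```python
# Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# Two runs with different (entirely hypothetical) assumptions show how
# sensitive the estimate is to the inputs.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Estimated number of communicating civilizations in a galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

# Optimistic: life, intelligence, and communication arise readily.
optimistic = drake(R_star=10, f_p=0.5, n_e=2,
                   f_l=0.5, f_i=0.5, f_c=0.5, L=10_000)

# Pessimistic: each transition is rare, civilizations shorter-lived.
pessimistic = drake(R_star=10, f_p=0.5, n_e=0.5,
                    f_l=0.1, f_i=0.05, f_c=0.1, L=800)

print(f"optimistic:  {optimistic:.0f}")   # thousands per galaxy
print(f"pessimistic: {pessimistic:.0f}")  # on the order of one
```

Because the factors multiply, an order-of-magnitude change in any one of them shifts the final count by the same order, which is why both conclusions can follow from defensible assumptions.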

01:20:13 A lot of the analyses are forgetting

01:20:16 the exponential growth of computation.

01:20:21 Because we’ve gone from where the fastest way

01:20:24 I could send a message to somebody was with a pony,

01:20:30 which was what, like a century and a half ago?

01:20:34 To the advanced civilization we have today.

01:20:37 And if you accept what I’ve said,

01:20:40 go forward a few decades,

01:20:42 you can have an absolutely fantastic amount of civilization

01:20:46 compared to a pony, and that’s in a couple hundred years.

01:20:50 Yeah, the speed and the scale of information transfer

01:20:53 is growing exponentially in a blink of an eye.

01:20:58 Now think about these other civilizations.

01:21:01 They’re gonna be spread out across cosmic time.

01:21:06 So if something is like ahead of us or behind us,

01:21:10 it could be ahead of us or behind us by maybe millions

01:21:14 of years, which isn’t that much.

01:21:16 I mean, the universe is billions of years old,

01:21:21 14 billion or something.

01:21:23 So even a thousand years, if two or three hundred years is enough

01:21:29 to go from a pony to a fantastic amount of civilization,

01:21:33 we would see that.

01:21:35 So of other civilizations that have occurred,

01:21:39 okay, some might be behind us, but some might be ahead of us.

01:21:43 If they’re ahead of us, they’re ahead of us

01:21:45 by thousands, millions of years,

01:21:49 and they would be so far beyond us,

01:21:51 they would be doing galaxy wide engineering.

01:21:56 But we don’t see anything doing galaxy wide engineering.

01:22:00 So either they don’t exist, or this very universe

01:22:05 is a construction of an alien species.

01:22:08 We’re living inside a video game.

01:22:11 Well, that’s another explanation that yes,

01:22:14 you’ve got some teenage kids in another civilization.

01:22:19 Do you find compelling the simulation hypothesis

01:22:22 as a thought experiment that we’re living in a simulation?

01:22:25 The universe is computational.

01:22:29 So we are an example in a computational world.

01:22:34 Therefore, it is a simulation.

01:22:39 It doesn’t necessarily mean an experiment

01:22:41 by some high school kid in another world,

01:22:44 but it nonetheless is taking place

01:22:47 in a computational world.

01:22:50 And everything that’s going on

01:22:51 is basically a form of computation.

01:22:58 So you really have to define what you mean

01:23:00 by this whole world being a simulation.

01:23:06 Well, then it’s the teenager that makes the video game.

01:23:12 Us humans with our current limited cognitive capability

01:23:16 have strived to understand ourselves

01:23:20 and we have created religions.

01:23:23 We think of God.

01:23:25 Whatever that is, do you think God exists?

01:23:32 And if so, who is God?

01:23:35 I alluded to this before.

01:23:37 We started out with lots of particles going around

01:23:42 and there’s nothing that represents love and creativity.

01:23:53 And somehow we’ve gotten into a world

01:23:55 where love actually exists

01:23:57 and that has to do actually with consciousness

01:23:59 because you can’t have love without consciousness.

01:24:03 So to me, that’s God, the fact that we have something

01:24:06 where love, where you can be devoted to someone else

01:24:11 and really feel the love, that’s God.

01:24:19 And if you look at the Old Testament,

01:24:21 it was actually created by

01:24:26 several different authors.

01:24:29 And I think they’ve identified three of them.

01:24:34 One of them dealt with God as a person

01:24:39 that you can make deals with and he gets angry

01:24:42 and he wreaks vengeance on various people.

01:24:48 But two of them actually talk about God

01:24:50 as a symbol of love and peace and harmony and so forth.

01:24:58 That’s how they describe God.

01:25:01 So that’s my view of God, not as a person in the sky

01:25:06 that you can make deals with.

01:25:09 It’s whatever the magic that goes from basic elements

01:25:13 to things like consciousness and love.

01:25:15 One of the things I find

01:25:19 extremely beautiful and powerful is cellular automata,

01:25:22 which you also touch on.

01:25:24 Do you think whatever the heck happens in cellular automata

01:25:27 where interesting, complicated objects emerge,

01:25:31 God is in there too?

01:25:33 The emergence of love in this seemingly primitive universe?

01:25:38 Well, that’s the goal of creating a replicant

01:25:42 is that they would love you and you would love them.

01:25:47 There wouldn’t be much point of doing it

01:25:50 if that didn’t happen.

01:25:52 But all of it, I guess what I’m saying

01:25:54 about cellular automata is it’s primitive building blocks

01:25:59 and they somehow create beautiful things.

01:26:03 Is there some deep truth to that

01:26:06 about how our universe works?

01:26:07 Is the emergence from simple rules,

01:26:11 beautiful, complex objects can emerge?

01:26:14 Is that the thing that made us?

01:26:16 Yeah, well. As we went through

01:26:18 all the six phases of reality.

01:26:21 That’s a good way to look at it.

01:26:23 It does make some point to the whole value

01:26:27 of having a universe.
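
The point about primitive building blocks creating beautiful things can be made concrete with an elementary cellular automaton. The sketch below uses Wolfram's Rule 30 as one example of such a rule; the grid size and step count are arbitrary choices:

```python
# Wolfram's Rule 30: each cell's next state is read from an 8-entry
# lookup table indexed by its three-cell neighborhood. The rule fits
# in a single byte, yet the triangle it grows is famously irregular.

RULE = 30  # binary 00011110: the output bit for each neighborhood 0..7

def step(cells):
    """Advance one generation (wrapping at the edges)."""
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1  # start from a single live cell
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Everything in the output is determined by one byte of rule plus one live cell, which is the sense in which complex, even beautiful, structure emerges from primitive building blocks.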

01:26:31 Do you think about your own mortality?

01:26:34 Are you afraid of it?

01:26:36 Yes, but I keep going back to my idea

01:26:41 of being able to expand human life quickly enough

01:26:48 in advance of our getting there, longevity escape velocity,

01:26:55 which we’re not quite at yet,

01:26:58 but I think we’re actually pretty close,

01:27:01 particularly with, for example, doing simulated biology.

01:27:06 I think we can probably get there within,

01:27:08 say, by the end of this decade, and that’s my goal.

01:27:12 Do you hope to achieve the longevity escape velocity?

01:27:16 Do you hope to achieve immortality?

01:27:20 Well, immortality is hard to say.

01:27:22 I can’t really come on your program saying I’ve done it.

01:27:26 I’ve achieved immortality because it’s never forever.

01:27:32 A long time, a long time of living well.

01:27:35 But we’d like to actually advance

01:27:37 human life expectancy, advance my life expectancy

01:27:41 more than a year every year,

01:27:44 and I think we can get there within,

01:27:45 by the end of this decade.
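
Longevity escape velocity has a simple arithmetic core: if remaining life expectancy grows by more than one year per calendar year, it never runs out. A toy model, with made-up numbers chosen only for illustration:

```python
# Toy model: each calendar year costs one year of remaining life
# expectancy, and medical progress adds `gain_per_year` back. Above
# a gain of 1.0 -- "escape velocity" -- expectancy grows instead of
# shrinking. All numbers here are illustrative, not predictions.

def years_remaining(start_expectancy, gain_per_year, horizon):
    """Remaining life expectancy tracked over `horizon` calendar years."""
    remaining = start_expectancy
    history = []
    for _ in range(horizon):
        remaining += gain_per_year - 1  # a year passes, science adds some back
        history.append(remaining)
    return history

below = years_remaining(10, 0.5, 20)  # below escape velocity: runs out
above = years_remaining(10, 1.2, 20)  # above escape velocity: keeps growing

print(f"after 20 years, gain 0.5/yr: {below[-1]:.1f} years left")
print(f"after 20 years, gain 1.2/yr: {above[-1]:.1f} years left")
```

The threshold behavior is the whole argument: below one year of gain per year the clock still wins eventually, while anything above it compounds in your favor.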

01:27:47 How do you think we’d do it?

01:27:49 So there’s practical things in Transcend,

01:27:53 the nine steps to living well forever, your book.

01:27:56 You describe just that.

01:27:58 There’s practical things like health,

01:28:00 exercise, all those things.

01:28:02 Yeah, I mean, we live in a body

01:28:03 that doesn’t last forever.

01:28:08 There’s no reason why it can’t, though,

01:28:11 and we’re discovering things, I think, that will extend it.

01:28:17 But you do have to deal with,

01:28:19 I mean, I’ve got various issues.

01:28:23 Went to Mexico 40 years ago, developed salmonella.

01:28:28 It created pancreatitis, which gave me

01:28:33 a strange form of diabetes.

01:28:37 It’s not type one diabetes, because that’s an autoimmune

01:28:42 disorder that destroys your pancreas.

01:28:44 I don’t have that.

01:28:46 But it’s also not type two diabetes,

01:28:48 because type two diabetes is where your pancreas works fine,

01:28:51 but your cells don’t absorb the insulin well.

01:28:55 I don’t have that either.

01:28:58 The pancreatitis I had partially damaged my pancreas,

01:29:04 but it was a one time thing.

01:29:06 It didn’t continue, and I’ve learned now how to control it.

01:29:11 But so that’s just something that I had to do

01:29:15 in order to continue to exist.

01:29:18 Since your particular biological system,

01:29:20 you had to figure out a few hacks,

01:29:22 and the idea is that science would be able

01:29:24 to do that much better, actually.

01:29:26 Yeah, so I mean, I do spend a lot of time

01:29:29 just tinkering with my own body to keep it going.

01:29:34 So I do think I’ll last till the end of this decade,

01:29:37 and I think we’ll achieve longevity escape velocity.

01:29:41 I think that we’ll start with people

01:29:43 who are very diligent about this.

01:29:46 Eventually, it’ll become sort of routine

01:29:48 that people will be able to do it.

01:29:51 So if you’re talking about kids today,

01:29:54 or even people in their 20s or 30s,

01:29:56 that’s really not a very serious problem.

01:30:01 I have had some discussions with relatives

01:30:05 who are like almost 100, and saying,

01:30:10 well, we’re working on it as quickly as possible.

01:30:13 I don’t know if that’s gonna work.

01:30:16 Is there a case, this is a difficult question,

01:30:18 but is there a case to be made against living forever

01:30:23 that a finite life, that mortality is a feature, not a bug,

01:30:29 that living a shorter life, that dying, makes ice cream

01:30:36 taste delicious, makes life intensely beautiful

01:30:40 more than it otherwise might be?

01:30:42 Most people believe that way, except if you present

01:30:46 a death of anybody they care about or love,

01:30:51 they find that extremely depressing.

01:30:55 And I know people who feel that way

01:30:58 20, 30, 40 years later, they still want them back.

01:31:06 So I mean, death is not something to celebrate,

01:31:11 but we’ve lived in a world where people just accept this.

01:31:16 Life is short, you see it all the time on TV,

01:31:18 oh, life’s short, you have to take advantage of it

01:31:21 and nobody accepts the fact that you could actually

01:31:23 go beyond normal lifetimes.

01:31:27 But anytime we talk about death or a death of a person,

01:31:31 even one death is a terrible tragedy.

01:31:35 If you have somebody that lives to 100 years old,

01:31:39 we still love them in return.

01:31:43 And there’s no limitation to that.

01:31:47 In fact, these kinds of trends are gonna provide

01:31:52 greater and greater opportunity for everybody,

01:31:54 even if we have more people.

01:31:57 So let me ask about an alien species

01:32:00 or a super intelligent AI 500 years from now

01:32:03 that will look back and remember Ray Kurzweil version zero.

01:32:11 Before the replicants spread,

01:32:13 how do you hope they remember you

01:32:17 in a Hitchhiker’s Guide to the Galaxy summary of Ray Kurzweil?

01:32:21 What do you hope your legacy is?

01:32:24 Well, I mean, I do hope to be around, so that’s.

01:32:26 Some version of you, yes.

01:32:27 So.

01:32:29 Do you think you’ll be the same person around?

01:32:32 I mean, am I the same person I was when I was 20 or 10?

01:32:37 You would be the same person in that same way,

01:32:39 but yes, we’re different, we’re different.

01:32:44 All we have of that, all you have of that person

01:32:46 is your memories, which are probably distorted in some way.

01:32:53 Maybe you just remember the good parts,

01:32:55 depending on your psyche.

01:32:57 You might focus on the bad parts,

01:32:59 might focus on the good parts.

01:33:02 Right, but I mean, I still have a relationship

01:33:06 to the way I was when I was earlier, when I was younger.

01:32:11 How will you and the other super intelligent AIs

01:32:14 remember the you of today, 500 years from now?

01:33:18 What do you hope to be remembered by this version of you

01:33:22 before the singularity?

01:33:25 Well, I think it’s expressed well in my books,

01:33:28 trying to create some new realities that people will accept.

01:33:32 I mean, that’s something that gives me great pleasure,

01:33:40 and greater insight into what makes humans valuable.

01:33:49 I’m not the only person who’s tempted to comment on that.

01:33:57 And the optimism that permeates your work,

01:34:00 optimism about the future, because ultimately that optimism

01:34:04 paves the way for building a better future.

01:34:06 Yeah, I agree with that.

01:34:10 So you asked your dad about the meaning of life,

01:34:15 and he said, love, let me ask you the same question.

01:34:19 What’s the meaning of life?

01:34:21 Why are we here?

01:34:22 This beautiful journey that we’re on in phase four,

01:34:26 reaching for phase five of this evolution

01:34:32 of information processing, why?

01:34:35 Well, I think I’d give the same answers as my father.

01:34:42 Because if there were no love,

01:34:43 and we didn’t care about anybody,

01:34:46 there’d be no point existing.

01:34:49 Love is the meaning of life.

01:34:51 The AI version of your dad had a good point.

01:34:54 Well, I think that’s a beautiful way to end it.

01:34:57 Ray, thank you for your work.

01:34:59 Thank you for being who you are.

01:35:01 Thank you for dreaming about a beautiful future

01:35:03 and creating it along the way.

01:35:06 And thank you so much for spending

01:35:09 your really valuable time with me today.

01:35:10 This was awesome.

01:35:12 It was my pleasure, and you have some great insights,

01:35:16 both into me and into humanity as well, so I appreciate that.

01:35:21 Thanks for listening to this conversation

01:35:22 with Ray Kurzweil.

01:35:24 To support this podcast,

01:35:25 please check out our sponsors in the description.

01:35:28 And now, let me leave you with some words

01:35:30 from Isaac Asimov.

01:35:32 It is change, continuous change, inevitable change

01:35:37 that is the dominant factor in society today.

01:35:41 No sensible decision can be made any longer

01:35:43 without taking into account not only the world as it is,

01:35:47 but the world as it will be.

01:35:49 This, in turn, means that our statesmen,

01:35:52 our businessmen, our everyman,

01:35:55 must take on a science fictional way of thinking.

01:35:58 Thank you for listening, and hope to see you next time.