Richard Dawkins: Evolution, Intelligence, Simulation, and Memes #87

Transcript

00:00:00 The following is a conversation with Richard Dawkins,

00:00:03 an evolutionary biologist and author of The Selfish Gene,

00:00:07 The Blind Watchmaker, The God Delusion, The Magic of Reality,

00:00:11 and The Greatest Show on Earth, and his latest, Outgrowing God.

00:00:15 He is the originator and popularizer of a lot of fascinating ideas in evolutionary biology

00:00:21 and science in general, including, funny enough, the introduction of the word

00:00:26 meme in his 1976 book, The Selfish Gene, which, in the context of a gene centered view of evolution,

00:00:32 is an exceptionally powerful idea. He’s outspoken, bold, and often fearless in the

00:00:39 defense of science and reason, and in this way, is one of the most influential thinkers of our time.

00:00:46 This conversation was recorded before the outbreak of the pandemic.

00:00:50 For everyone feeling the medical, psychological, and financial burden of this crisis,

00:00:54 I’m sending love your way. Stay strong. We’re in this together. We’ll beat this thing.

00:01:00 This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,

00:01:05 review it with 5 stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter

00:01:10 at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I’ll do a few minutes of ads now,

00:01:16 and never any ads in the middle that can break the flow of the conversation.

00:01:20 I hope that works for you and doesn’t hurt the listening experience.

00:01:25 This show is presented by Cash App, the number one finance app in the App Store.

00:01:29 When you get it, use code LEX PODCAST. Cash App lets you send money to friends,

00:01:34 buy bitcoin, and invest in the stock market with as little as one dollar.

00:01:39 Since Cash App allows you to send and receive money digitally, peer to peer,

00:01:43 security in all digital transactions is very important. Let me mention the PCI

00:01:48 data security standard that Cash App is compliant with. I’m a big fan of standards for safety and

00:01:53 security. PCI DSS is a good example of that, where a bunch of competitors got together and agreed

00:02:00 that there needs to be a global standard around the security of transactions.

00:02:04 Now we just need to do the same for autonomous vehicles and artificial intelligence systems in

00:02:08 general. So again, if you get Cash App from the App Store or Google Play and use the code LEX

00:02:14 PODCAST, you get ten dollars and Cash App will also donate ten dollars to FIRST,

00:02:19 an organization that is helping to advance robotics and STEM education for young people

00:02:23 around the world. And now, here’s my conversation with Richard Dawkins.

00:02:30 Do you think there’s intelligent life out there in the universe?

00:02:34 Well, if we accept that there’s intelligent life here and we accept that the number of planets in

00:02:40 the universe is gigantic, I mean, 10 to the 22 stars has been estimated, it seems to me highly

00:02:45 likely that there is not only life in the universe elsewhere, but also intelligent life. If you deny

00:02:51 that, then you’re committed to the view that the things that happened on this planet are

00:02:55 staggeringly improbable, I mean, ludicrously off the charts improbable. And I don’t think it’s that

00:03:02 improbable. Certainly the origin of life itself, there are really two steps, the origin of life,

00:03:06 which is probably fairly improbable, and then the subsequent evolution to intelligent life,

00:03:11 which is also fairly improbable. So the juxtaposition of those two, you could say,

00:03:15 is pretty improbable, but not 10 to the 22 improbable. It’s an interesting question,

00:03:20 maybe you’re coming on to it, how we would recognize intelligence from outer space if we

00:03:25 encountered it. The most likely way we would come across them would be by radio. It’s highly

00:03:30 unlikely they’d ever visit us. But it’s not that unlikely that we would pick up radio signals,

00:03:38 and then we would have to have some means of deciding that it was intelligent.

00:03:44 People involved in the SETI program discuss how they would do it, and things like prime numbers

00:03:50 would be an obvious way for them to broadcast, to say, we are intelligent, we are here.

00:03:56 I suspect it probably would be obvious, actually.

00:03:59 Well, that’s interesting, prime numbers, so the mathematical patterns, it’s an open question

00:04:03 whether mathematics is the same for us as it would be for aliens. I suppose we could assume

00:04:10 that ultimately, if we’re governed by the same laws of physics, then we should be governed by

00:04:15 the same laws of mathematics.

00:04:17 I think so. I suspect that they will have Pythagoras’ theorem, etc. I don’t think their

00:04:22 mathematics will be that different.

00:04:23 Do you think evolution would also be a force on the alien planets as well?

00:04:27 I stuck my neck out and said that if ever we do discover life elsewhere, it will be Darwinian

00:04:33 life, in the sense that it will work by some kind of natural selection, the nonrandom survival of

00:04:41 randomly generated codes. It would have to have some kind of

00:04:47 genetics, but it doesn’t have to be DNA genetics; probably wouldn’t be, actually.

00:04:51 But I think it would have to be Darwinian, yes.

00:04:53 So some kind of selection process.

00:04:56 Yes, in the general sense, it would be Darwinian.

00:05:00 So let me ask kind of an artificial intelligence engineering question. So you’ve been an

00:05:05 outspoken critic of, I guess, what could be called intelligent design, which is an attempt

00:05:11 by some religious folks to describe the creation of the human mind and body.

00:05:16 So broadly speaking, evolution is, as far as I know, again, you can correct me,

00:05:23 is the only scientific theory we have for the development of intelligent life. Like there’s no

00:05:27 alternative theory, as far as I understand.

00:05:30 None has ever been suggested, and I suspect it never will be.

00:05:35 Well, of course, whenever somebody says that, a hundred years later.

00:05:39 I know. It’s a risk.

00:05:42 It’s a risk.

00:05:43 It’s a risk. But what a bet. I mean, I’m pretty confident.

00:05:48 But it would look, sorry, yes, it would probably look very similar, but it’s almost like Einstein’s

00:05:53 general relativity versus Newtonian physics. It’ll be maybe an alteration of the theory or

00:05:59 something like that, but it won’t be fundamentally different. But okay.

00:06:06 So now, for the past 70 years, the AI community has been trying to engineer

00:06:11 intelligence, in a sense, to do what intelligent design says, you know, was done here on earth.

00:06:18 What’s your intuition? Do you think it’s possible to build intelligence, to build computers that

00:06:26 are intelligent, or do we need to do something like the evolutionary process? Like there’s

00:06:31 no shortcuts here.

00:06:33 That’s an interesting question. I’m committed to the belief that it is ultimately possible

00:06:38 because I think there’s nothing nonphysical in our brains. I think our brains work by

00:06:44 the laws of physics. And so, in principle, it must be possible to replicate that.

00:06:49 In practice, though, it might be very difficult. And as you suggest, it may be the only way

00:06:54 to do it is by something like an evolutionary process. I’d be surprised. I suspect that

00:06:59 it will come, but it’s certainly been slower in coming than some of the early pioneers

00:07:05 thought it would be.

00:07:06 Yeah. But in your sense, is the evolutionary process efficient? So you can see it as exceptionally

00:07:12 wasteful from one perspective, but at the same time, maybe that is the only path.

00:07:17 It’s a paradox, isn’t it? I mean, on the one hand, it is deplorably wasteful. It’s

00:07:22 fundamentally based on waste. On the other hand, it does produce magnificent results.

00:07:26 I mean, the design of a soaring bird, an albatross, a vulture, an eagle, is superb. An engineer

00:07:38 would be proud to have done it. On the other hand, an engineer would not be proud to have

00:07:41 done some of the other things that evolution has served up. Some of the sort of botched

00:07:46 jobs that you can easily understand because of their historical origins, but they don’t

00:07:51 look well designed.

00:07:52 Do you have examples of bad design?

00:07:55 My favorite example is the recurrent laryngeal nerve. I’ve used this many times. This is

00:07:59 a nerve. It’s one of the cranial nerves, which goes from the brain, and the end organ

00:08:04 that it supplies is the voice box, the larynx. But it doesn’t go straight to the larynx.

00:08:10 It goes right down into the chest and then loops around an artery in the chest and then

00:08:15 comes straight back up again to the larynx. And I’ve assisted in the dissection of a

00:08:21 giraffe’s neck, which happened to have died in a zoo. And we saw the recurrent laryngeal

00:08:27 nerve whizzing straight past the larynx, within an inch of the larynx, down into the chest,

00:08:32 and then back up again, which is a detour of many feet. Very, very inefficient.

00:08:41 The reason is historical. Our ancestors were fish; the ancestors of all mammals

00:08:46 are fish. The most direct pathway of the equivalent of that nerve, and there wasn’t

00:08:54 a larynx in those days, but it innervated part of the gills, the most direct pathway

00:08:59 was behind that artery. And then when the mammals, when the tetrapods, when the land

00:09:06 vertebrates started evolving, and the neck started to stretch, the marginal cost of changing

00:09:12 the embryological design to jump that nerve over the artery was too great. Or rather,

00:09:19 each step of the way was a very small cost, but the cost of actually jumping it over would have

00:09:24 been very large. As the neck lengthened, it was a negligible change to just increase the length of

00:09:31 the detour a tiny bit, a tiny bit, a tiny bit, each millimeter at a time, didn’t make any difference.

00:09:35 But finally, when you get to a giraffe, it’s a huge detour and no doubt is very inefficient.

00:09:40 Now that’s bad design. Any engineer would reject that piece of design. It’s ridiculous.

00:09:47 And there are quite a number of examples, as you’d expect. It’s not surprising that we find

00:09:52 examples of that sort. In a way, what’s surprising is there aren’t more of them; what’s

00:09:55 surprising is that the design of living things is so good. So natural selection manages to achieve

00:10:01 excellent results, partly by tinkering, partly by coming along and cleaning up initial mistakes and,

00:10:11 as it were, making the best of a bad job. That’s really interesting. I mean, it is surprising and

00:10:17 beautiful and it’s a mystery from an engineering perspective that so many things are well designed.

00:10:22 I suppose the thing we’re forgetting is how many generations have to die for that.

00:10:30 That’s the inefficiency of it. Yes, that’s the horrible wastefulness of it.

00:10:33 So yeah, we marvel at the final product, but yeah, the process is painful.

00:10:39 Elon Musk describes human beings as potentially what he calls the biological bootloader for

00:10:45 artificial intelligence, or artificial general intelligence, as the term is used. It’s kind of

00:10:50 like super intelligence. Do you see superhuman level intelligence as potentially the next step

00:10:57 in the evolutionary process? Yes, I think that if superhuman intelligence is to be found,

00:11:02 it will be artificial. I don’t have any hope that we ourselves, our brains will go on

00:11:09 getting larger in ordinary biological evolution. I think that’s probably come to an end. It is

00:11:16 the dominant trend or one of the dominant trends in our fossil history for the last two or three

00:11:22 million years. Brain size? Brain size, yes. So it’s been swelling rather dramatically over the last

00:11:28 two or three million years. That is unlikely to continue. The only way that happens is if

00:11:35 natural selection favors those individuals with the biggest brains, and that’s not happening anymore.

00:11:41 Right. So in general, in humans, the selection pressures are not, I mean, are they active in

00:11:48 any form? Well, in order for them to be active, it would be necessary that the most, let’s call it

00:11:56 intelligence. Not that intelligence is simply correlated with brain size, but let’s talk about

00:12:02 intelligence. In order for that to evolve, it’s necessary that the most intelligent

00:12:08 individuals have the most children. And so intelligence may buy you money, it may buy you

00:12:17 worldly success, it may buy you a nice house and a nice car and things like that if you have a

00:12:22 successful career. It may buy you the admiration of your fellow people, but it doesn’t increase the

00:12:29 number of offspring that you have. It doesn’t increase your genetic legacy to the next generation.

00:12:35 On the other hand, artificial intelligence, I mean, computers and technology generally, is

00:12:42 evolving by nongenetic means, by leaps and bounds, of course. And so what do you think,

00:12:48 I don’t know if you’re familiar, there’s a company called Neuralink, but there’s a general effort of

00:12:52 brain computer interfaces, which is to try to build a connection between the computer and the brain

00:12:59 to send signals both directions. And the long term dream there is to do exactly that, which is expand,

00:13:05 I guess, expand the size of the brain, expand the capabilities of the brain. Do you see this as

00:13:12 interesting? Do you see this as a promising possible technology? Or is the interface between

00:13:18 the computer and the brain, like the brain is this wet, messy thing that’s just impossible to

00:13:22 interface with? Well, of course, it’s interesting, whether it’s promising, I’m really not qualified

00:13:27 to say. What I do find puzzling is that the brain being as small as it is compared to a computer and

00:13:34 the individual components being as slow as they are compared to our electronic components,

00:13:40 it is astonishing what it can do. I mean, imagine building a computer that fits into the size of a

00:13:47 human skull. And with the equivalent of transistors or integrated circuits, which work as slowly as

00:13:57 neurons do. There’s something mysterious about that; something must be going on that we

00:14:04 don’t understand. So I have just talked to Roger Penrose, I’m not sure you’re familiar with his

00:14:11 work. And he also describes this kind of mystery in the mind, in the brain. He sees himself as a

00:14:20 materialist, so there’s no sort of mystical thing going on. But there’s so much about the material

00:14:27 of the brain that we don’t understand. That might be quantum mechanical in nature and so on. So

00:14:32 there the idea is about consciousness. Do you ever

00:14:37 think about ideas of consciousness, or a little more broadly about the mystery of intelligence and

00:14:42 consciousness that seems to pop up just like you’re saying from our brain? I agree with Roger

00:14:48 Penrose that there is a mystery there. I mean, he’s one of the world’s greatest physicists. I

00:14:55 can’t possibly argue with his… But nobody knows anything about consciousness. And in fact,

00:15:02 if we talk about religion and so on, the mystery of consciousness is so awe inspiring and we know

00:15:10 so little about it that the leap to sort of religious or mystical explanations is too easy

00:15:16 to make. I think that it’s just an act of cowardice to leap to religious explanations and

00:15:21 Roger doesn’t do that, of course. But I accept that there may be something that we don’t understand

00:15:28 about it. So correct me if I’m wrong, but in your book, The Selfish Gene, the gene centered view of

00:15:34 evolution allows us to think of the physical organisms as just the medium through which the

00:15:40 software of our genetics and the ideas sort of propagate. So maybe can we start just with the

00:15:49 basics? What in this context does the word meme mean? It would mean the cultural equivalent of a

00:15:57 gene, cultural equivalent in the sense of that which plays the same role as the gene in the

00:16:02 transmission of culture and the transmission of ideas in the broadest sense. And it’s a

00:16:08 useful word if there’s something Darwinian going on. Obviously, culture is transmitted,

00:16:14 but is there anything Darwinian going on? And if there is, that means there has to be something

00:16:18 like a gene, which becomes more numerous or less numerous in the population.

00:16:25 So it can replicate?

00:16:27 It can replicate. Well, it clearly does replicate. There’s no question about that.

00:16:31 The question is, does it replicate in a sort of differential way in a Darwinian fashion? Could you

00:16:36 say that certain ideas propagate because they’re successful in the meme pool? In a sort of trivial

00:16:43 sense, you can. Would you wish to say, though, that in the same way as an animal body is modified,

00:16:52 adapted to serve as a machine for propagating genes, is it also a machine for propagating memes?

00:16:59 Could you actually say that something about the way a human is, is modified,

00:17:05 adapted for the function of meme propagation?

00:17:12 That’s such a fascinating possibility, if that’s true. That it’s not just about the genes which

00:17:18 seem somehow more comprehensible as these things of biology. The idea that culture or maybe ideas,

00:17:28 you can really broadly define it, operates under these mechanisms.

00:17:33 Even morphology, even anatomy does evolve by memetic means. I mean, things like hairstyles,

00:17:42 styles of makeup, circumcision, these things are actual changes in the body form which are

00:17:49 nongenetic and which get passed on from generation to generation or sideways like a virus in a

00:17:57 quasi genetic way.

00:17:59 But the moment you start drifting away from the physical, it becomes interesting because

00:18:05 the space of ideas, ideologies, political systems.

00:18:09 Of course, yes.

00:18:10 So what’s your sense? Are memes more of a metaphor, or is there really

00:18:20 something fundamental, an almost physical presence, to memes?

00:18:24 Well, I think they’re a bit more than a metaphor. And I mentioned the physical

00:18:30 bodily characteristics which are a bit trivial in a way, but when things like the propagation

00:18:35 of religious ideas, both longitudinally down generations and transversely as in a sort of

00:18:42 epidemiology of ideas, when a charismatic preacher converts people, that resembles viral

00:18:54 transmission. Whereas the longitudinal transmission from grandparent to parent to child,

00:19:01 et cetera, is more like conventional genetic transmission.

00:19:06 That’s such a beautiful idea, especially in the modern day. Do you think about this

00:19:12 implication in social networks, where the propagation of ideas, the viral propagation of ideas,

00:19:17 has led to the new use of the word meme to describe them?

00:19:21 Well, the internet, of course, provides an extremely rapid method of transmission.

00:19:27 Before, when I first coined the word, the internet didn’t exist. And so I was thinking

00:19:32 that in terms of books, newspapers, broadcast radio, television, that kind of thing.

00:19:38 Now an idea can just leap around the world in all directions instantly. And so the internet

00:19:47 provides a step change in the facility of propagation of memes.

00:19:54 How does that make you feel? Isn’t it fascinating that with ideas, it’s like you had Galapagos

00:20:00 Islands or something in the 70s, and the internet allowed all these species to just

00:20:05 globalize. And in a matter of seconds, you can spread the message to millions of

00:20:11 people. And these ideas, these memes can breed, can evolve, can mutate. And there’s a selection,

00:20:21 and there are, I guess, different groups; there’s a dynamic that’s

00:20:26 fascinating here. Basically, do you think your work in this direction,

00:20:31 while fundamentally focused on life on Earth, should continue

00:20:37 to be taken further?

00:20:38 Well, I do think it would probably be a good idea to think in a Darwinian way about this

00:20:43 sort of thing. We conventionally think of the transmission of ideas from an evolutionary

00:20:49 context as being limited to, in our ancestors, people living in villages, living in small

00:20:58 bands where everybody knew each other, and ideas could propagate within the village,

00:21:03 and they might hop to a neighboring village, occasionally, and maybe even to a neighboring

00:21:08 continent eventually. And that was a slow process. Nowadays, villages are international.

00:21:15 I mean, you have what’s been called echo chambers, where people are in a sort

00:21:22 of internet village, where the other members of the village may be geographically distributed

00:21:28 all over the world, but they just happen to be interested in the same things, use the

00:21:32 same terminology, the same jargon, have the same enthusiasm. So, people like the Flat

00:21:38 Earth Society, they don’t all live in one place, they find each other, and they talk

00:21:44 the same language to each other, they talk the same nonsense to each other. And

00:21:48 so this is a kind of distributed version of the primitive idea of people living in villages

00:21:56 and propagating their ideas in a local way.

00:21:58 Is there a Darwinian parallel here? So, is there an evolutionary purpose of villages, or is that

00:22:06 just a…

00:22:07 I wouldn’t use a word like evolutionary purpose in that case, but villages would be something

00:22:12 that just emerged, that’s the way people happen to live.

00:22:16 And in just the same kind of way, the Flat Earth Society, societies of ideas emerge in

00:22:23 the same kind of way in this digital space.

00:22:26 Yes, yes.

00:22:27 Is there something interesting to say, I guess, from the perspective of Darwin:

00:22:35 could we fully interpret the dynamics of social interaction in these social networks? Or is

00:22:43 there some much more complicated theory that needs to be developed? Like, what’s your sense?

00:22:49 Well, a Darwinian selection idea would involve investigating which ideas spread and which

00:22:55 don’t. So, some ideas don’t have the ability to spread. I mean, Flat Earthism,

00:23:03 there are a few people who believe in it, but it’s not going to spread because it’s

00:23:07 obvious nonsense. But other ideas, even if they are wrong, can spread because they are

00:23:14 attractive in some sense.

00:23:16 So for the spreading and the selection in the Darwinian context, it just has to be attractive

00:23:24 in some sense. We don’t have to define it; it doesn’t have to be attractive in the

00:23:27 way that animals attract each other. It could be attractive in some other way.

00:23:32 Yes. All that matters is, all that is needed is that it should spread. And it doesn’t have

00:23:38 to be true to spread. Truth is one criterion which might help an idea to spread.

00:23:43 But there are other criteria which might help it to spread. As you say, attraction in animals

00:23:49 is not necessarily valuable for survival. The famous peacock’s tail doesn’t help the

00:23:56 peacock to survive. It helps it to pass on its genes. Similarly, an idea which is actually

00:24:02 rubbish, but which people don’t know is rubbish and think is very attractive will spread in

00:24:08 the same way as a peacock’s genes spread.

00:24:10 As a small sidestep, I remember reading somewhere, I think recently, the idea that in some

00:24:16 species of birds beauty may have its own purpose, and that

00:24:22 for some birds, I’m being ineloquent here, there are some aspects of their feathers and

00:24:31 so on that serve no evolutionary purpose whatsoever. Somebody is making an argument that there

00:24:37 are some things about beauty that animals do that may be their own purpose. Does that

00:24:44 ring a bell for you? Does that sound ridiculous?

00:24:46 I think it’s a rather distorted bell. Darwin, when he coined the phrase sexual selection,

00:24:56 didn’t feel the need to suggest that what was attractive to females, and usually it is males

00:25:04 attracting females, that what females found attractive had to be useful. He said it didn’t

00:25:08 have to be useful. It was enough that females found it attractive. And so it could be completely

00:25:13 useless, probably was completely useless in the conventional sense, but was not at all

00:25:18 useless in the sense of passing on, Darwin didn’t call them genes, but in the sense of

00:25:24 reproducing. Others, starting with Wallace, the co-discoverer of natural selection, didn’t

00:25:30 like that idea and they wanted sexually selected characteristics like peacock’s tails to be

00:25:37 in some sense useful. It’s a bit of a stretch to think of a peacock’s tail as being useful

00:25:41 in the sense of survival, but others have run with that idea and have brought it up

00:25:47 to date. And so there are two schools of thought on sexual selection, which are still active

00:25:53 and about equally supported now. Those who follow Darwin in thinking that it’s just enough

00:25:58 to say it’s attractive and those who follow Wallace and say that it has to be in some

00:26:06 sense useful.

00:26:08 Do you fall into one category or the other?

00:26:10 No, I’m open minded. I think they both could be correct in different cases. I mean, they’ve

00:26:16 both been made sophisticated in a mathematical sense, more so than when Darwin and Wallace

00:26:20 first started talking about it.

00:26:22 I’m Russian, I romanticize things, so I prefer the former, where the beauty in itself is

00:26:30 a powerful attraction, is a powerful force in evolution. On religion, do you think there

00:26:40 will ever be a time in our future where almost nobody believes in God, or God is not a part

00:26:47 of the moral fabric of our society?

00:26:49 Yes, I do. I think it may happen after a very long time. It may take a long time for that

00:26:55 to happen.

00:26:56 So do you think, ultimately, for everybody on Earth, other forms of doctrines,

00:27:03 of ideas, could do a better job than what religion does?

00:27:07 Yes. I mean, following truth, reason.

00:27:12 Well, truth is a funny, funny word. And reason too. It’s a difficult idea

00:27:23 now with truth on the internet, right, and fake news and so on. I suppose when you say

00:27:29 reason, you mean the very basic sort of inarguable conclusions of science versus which political

00:27:37 system is better.

00:27:38 Yes, yes. I mean, truth about the real world, which is ascertainable not just by the

00:27:46 more rigorous methods of science, but by just ordinary sensory observation.

00:27:51 So do you think there will ever be a time when we move past it? Like, I guess another

00:27:58 way to ask it, are we hopelessly, fundamentally tied to religion in the way our society functions?

00:28:08 Well, clearly all individuals are not hopelessly tied to it because many individuals don’t

00:28:14 believe. You could mean something like society needs religion in order to function properly,

00:28:21 something like that. And some people have suggested that.

00:28:24 What’s your intuition on that?

00:28:26 Well, I’ve read books on it and they’re persuasive. I don’t think they’re that persuasive though.

00:28:33 I mean, some people suggested that society needs a sort of figurehead, which can be a

00:28:41 non-existent figurehead in order to function properly. I think there’s something rather

00:28:45 patronising about the idea that, well, you and I are intelligent enough not to believe

00:28:51 in God, but the plebs need it sort of thing. And I think that’s patronising. And I’d like

00:28:57 to think that that was not the right way to proceed.

00:29:01 But at the individual level, do you think there’s some value of spirituality? Sort of,

00:29:10 if I think sort of as a scientist, the amount of things we actually know about our universe

00:29:15 is a tiny, tiny, tiny percentage of what we could possibly know. So just from everything,

00:29:21 even the certainty we have about the laws of physics, it seems to be that there’s yet

00:29:25 a huge amount to discover. And therefore we’re sitting where 99.99% of things are just still

00:29:32 shrouded in mystery. Do you think there’s a role in a kind of spiritual view of that,

00:29:38 sort of a humbled spiritual view?

00:29:39 I think it’s right to be humble. I think it’s right to admit that there’s a lot we don’t

00:29:43 know, a lot we don’t understand, a lot that we still need to work on. We’re working on

00:29:48 it. What I don’t think is that it helps to invoke supernatural explanations. If our current

00:29:57 scientific explanations aren’t adequate to do the job, then we need better ones. We need

00:30:01 to work more. And of course, the history of science shows just that, that as science goes

00:30:06 on, problems get solved one after another, and the science advances as science gets better.

00:30:13 But to invoke a nonscientific, nonphysical explanation is simply to lie down in a cowardly

00:30:21 way and say, we can’t solve it, so we’re going to invoke magic. Don’t let’s do that. Let’s

00:30:25 say we need better science. We need more science. It may be that the science will never do it.

00:30:30 It may be that we will never actually understand everything. And that’s okay, but let’s keep

00:30:36 working on it.

00:30:39 A challenging question there is, do you think science can lead us astray in terms of the

00:30:43 humbleness? So there’s some aspect of science, maybe it’s the aspect of scientists and not

00:30:50 science, but a sort of mix of ego and confidence that can lead us astray in terms of discovering,

00:30:59 you know, some of the big open questions about the universe.

00:31:05 I think that’s right. I mean, there are arrogant people in any walk of life and

00:31:09 scientists are no exception to that. And so there are arrogant scientists who think we’ve

00:31:13 solved everything. Of course we haven’t. So humility is a proper stance for a scientist.

00:31:18 I mean, it’s a proper working stance because it encourages further work. But in a way to

00:31:25 resort to a supernatural explanation is a kind of arrogance because it’s saying, well,

00:31:30 we don’t understand it scientifically, therefore the nonscientific, religious, supernatural

00:31:38 explanation must be the right one. That’s arrogant. What is humble is to say

00:31:42 we don’t know and we need to work further on it.

00:31:46 So maybe if I could psychoanalyze you for a second, you have at times been just slightly

00:31:53 frustrated with people who have, you know, a supernatural worldview. Has that changed

00:32:00 over the years? How do you see people that kind of seek supernatural

00:32:06 explanations, how do you see those people as human beings? Do you see them

00:32:12 as dishonest? Do you see them as sort of ignorant? I don’t

00:32:21 know, how do you think of them? Certainly not dishonest. And I mean,

00:32:26 obviously many of them are perfectly nice people. So I don’t, I don’t sort of despise

00:32:30 them in that sense. Um, I think it’s often a misunderstanding that, that, um, people

00:32:38 will jump from the admission that we don’t understand something. They will jump straight

00:32:44 to what they think of as an alternative explanation, which is the supernatural one, which is not

00:32:49 an alternative. It’s a non explanation. Um, instead of jumping to the conclusion that

00:32:55 science needs more work, that we need to actually get, do some better, better science. So, um,

00:33:02 I don’t have, I mean, personal antipathy towards such people. I just think they’re, they’re

00:33:09 misguided.

00:33:10 So what about this really interesting space that I have trouble with? Religion I have

00:33:15 a better grasp on, but there are large communities, like the Flat Earth community,

00:33:21 that I’ve recently, because I’ve made a few jokes about it,

00:33:27 noticed that there are people who take quite seriously. So there’s this bigger

00:33:33 world of conspiracy theorists, which has elements that

00:33:40 are religious, but also elements that see themselves as scientific. The basic credo of

00:33:48 a conspiracy theorist is to question everything, which is also the credo of a good scientist,

00:33:56 I would say. So what do you make of this?

00:33:59 I mean, I think it’s probably too easy to say that by labeling something conspiracy,

00:34:07 you therefore dismiss it. I mean, occasionally conspiracies are right. And so we shouldn’t

00:34:11 dismiss conspiracy theories out of hand. We should examine them on their own merits. Flat

00:34:17 Earthism is obvious nonsense. We don’t have to examine that much further. But

00:34:22 there may be other conspiracy theories which are actually right.

00:34:27 So I grew up in the Soviet Union, and the space race

00:34:31 was very influential for me on both sides of the coin. There’s a conspiracy

00:34:37 theory that we never went to the moon. Right. And I cannot understand

00:34:45 it, and it’s very difficult to rigorously, scientifically show one way or the other. You just

00:34:50 have to use some human intuition about who would have to lie, who would have to work

00:34:54 together, and it’s clear that it’s very unlikely. Behind that is my general intuition

00:35:01 that most people in this world are good. You know, in order to really put together some

00:35:06 conspiracy theories, there has to be a large number of people working together and essentially

00:35:12 being dishonest.

00:35:13 Yes, which is improbable. The sheer number who would have to be in on this conspiracy

00:35:18 and the sheer detail, the attention to detail they’d have had to have had and so on. I’d

00:35:23 also worry about the motive. Why would anyone want to suggest that it didn’t happen?

00:35:29 Why is it so hard to believe? I mean, the physics of it,

00:35:35 the mathematics of it, the idea of computing orbits and trajectories and things,

00:35:40 it all works mathematically. Why wouldn’t you believe it?

00:35:44 It’s a psychology question, because there’s something really pleasant about

00:35:50 pointing out that the emperor has no clothes, about thinking

00:35:55 outside the box and coming up with the true answer where everybody else is deluded. There’s

00:36:00 something, I mean, I have that for science, right? You want to prove the entire scientific

00:36:04 community wrong. That’s the whole...

00:36:06 That’s right. And of course, historically, lone geniuses have come

00:36:11 out right sometimes, but people who think they’re a lone genius much more

00:36:15 often turn out not to be. So you have to judge each case on its merits. The mere fact

00:36:20 that you’re a maverick, the mere fact that you’re going against the current tide,

00:36:25 doesn’t make you right. You’ve got to show you’re right by looking at the evidence.

00:36:29 So because you focus so much on religion and have disassembled a lot of ideas there,

00:36:35 I was wondering if you have ideas about conspiracy theory groups, because they’re

00:36:41 so prevalent, even reaching into presidential politics and so on. It seems

00:36:46 like there are very large communities that believe different kinds of conspiracy theories. Is

00:36:50 there some connection there to your thinking on religion?

00:36:56 It is curious. It’s an obviously difficult thing. I don’t understand why people believe things that

00:37:03 are clearly nonsense, like flat earthism, and also the conspiracy about not landing

00:37:07 on the moon, or that the United States engineered 9/11, that kind

00:37:15 of thing. So it’s not clearly nonsense, it’s extremely unlikely. Okay, it’s extremely

00:37:21 unlikely. Religion is a bit different, because it’s passed down from generation to

00:37:27 generation. So many of the people who are religious got it from their parents, who

00:37:31 got it from their parents, who got it from their parents, and childhood indoctrination

00:37:35 is a very powerful force. But things like the 9/11 conspiracy theory, the

00:37:45 Kennedy assassination conspiracy theory, the man on the moon conspiracy theory, these are

00:37:50 not childhood indoctrination. These are presumably dreamed up by somebody who then

00:37:57 tells somebody else, who then wants to believe it. And I don’t know why people are so eager

00:38:04 to fall in line with just some person that they happen to read or meet who spins

00:38:10 some yarn. I can kind of understand why they believe what their parents and teachers told

00:38:16 them when they were very tiny and not capable of critical thinking for themselves. So I

00:38:21 sort of get why the great religions of the world like Catholicism and Islam go on persisting.

00:38:28 It’s because of childhood indoctrination. But that’s not true of flat earthism, and sure

00:38:34 enough, flat earthism is a very minority cult. Way larger than I ever realized. Well, yes,

00:38:40 I know. But that’s a really clean idea, and you’ve articulated it in your new book,

00:38:43 Outgrowing God, and in The God Delusion: the early indoctrination. That’s

00:38:49 really interesting, that you can get away with a lot of out there ideas in religious

00:38:54 texts if the age at which you first convey those ideas is a young age. So indoctrination

00:39:04 is sort of an essential element of the propagation of religion. So let me ask on the morality

00:39:11 side. In the books that I mentioned, The God Delusion and Outgrowing God, you describe that human

00:39:16 beings don’t need religion to be moral. So from an engineering perspective, we want to

00:39:21 engineer morality into AI systems. So in general, where do you think morals come from in humans?

00:39:32 A very complicated and interesting question. It’s clear to me that the moral standards,

00:39:40 the moral values of our civilization changes as the decades go by, certainly as the centuries

00:39:50 go by, even as the decades go by. And we in the 21st century are quite clearly labeled

00:39:59 21st century people in terms of our moral values. There’s a spread. I mean, some of

00:40:05 us are a little bit more ruthless, some of us more conservative, some of us more liberal

00:40:10 and so on. But we all subscribe to pretty much the same views when you compare us with

00:40:18 say 18th century, 17th century people, even 19th century, 20th century people. So we’re

00:40:26 much less racist, we’re much less sexist and so on than we used to be. Some people are

00:40:31 still racist and some are still sexist, but the spread has shifted. The Gaussian distribution

00:40:37 has moved and moves steadily as the centuries go by. And that is the most powerful influence

00:40:47 I can see on our moral values. And that doesn’t have anything to do with religion. I mean,

00:40:54 the morals of the Old Testament are Bronze Age morals. They’re deplorable

00:41:03 and they are to be understood in terms of the people in the desert who made them up

00:41:09 at the time. And so human sacrifice, an eye for an eye, a tooth for a tooth, petty revenge,

00:41:17 killing people for breaking the Sabbath, all that kind of thing, inconceivable now.

00:41:23 So at some point religious texts may have in part reflected that Gaussian distribution

00:41:29 at that time.

00:41:30 I’m sure they did. I’m sure they always reflect that, yes.

00:41:32 And now the memes, as you describe them, the ideas, move

00:41:39 much faster than religious texts do, than new religions.

00:41:42 Yes. So basing your morals on religious texts, which were written millennia ago, is not a

00:41:49 great way to proceed. I think that’s pretty clear. So not only should we not get our morals

00:41:56 from such texts, but we don’t. We quite clearly don’t. If we did, then we’d be discriminating

00:42:03 against women and we’d be racist, we’d be killing homosexuals and so on. So we don’t

00:42:12 and we shouldn’t. Now, of course, it’s possible to use your 21st century standards of morality

00:42:20 and you can look at the Bible and you can cherry pick particular verses which conform

00:42:25 to our modern morality, and you’ll find that Jesus says some pretty nice things, which

00:42:30 is great. But you’re using your 21st century morality to decide which verses to pick, which

00:42:38 verses to reject. And so why not cut out the middleman of the Bible and go straight to

00:42:44 the 21st century morality, which is where that comes from. It’s a much more complicated

00:42:51 question. Why is it that morality, moral values change as the centuries go by? They undoubtedly

00:42:57 do. And it’s a very interesting question to ask why. It’s another example of cultural

00:43:02 evolution, just as technology progresses, so moral values progress for probably very

00:43:09 different reasons.

00:43:10 But it’s interesting whether the direction in which that progress is happening has some evolutionary

00:43:15 value, or whether it’s merely a drift that can go in any direction.

00:43:18 I’m not sure it’s in any direction, and I’m not sure it’s evolutionarily valuable. What it

00:43:22 is is progressive in the sense that each step is a step in the same direction as the previous

00:43:29 step. So it becomes more gentle, more decent by modern standards, more liberal, less violent.

00:43:37 But more decent, I think you’re using terms and interpreting everything in the context

00:43:42 of the 21st century, because Genghis Khan would probably say that this is not more decent,

00:43:48 because now there are a lot of weak members of society that we’re not

00:43:52 murdering.

00:43:53 Yes. I was careful to say by the standards of the 21st century, by our standards, if

00:43:58 we with hindsight look back at history, what we see is a trend in the direction towards

00:44:03 us, towards our present, our present value system.

00:44:06 For us, we see progress, but it’s an open question whether that will continue. I don’t

00:44:13 see why we could never return to Genghis Khan times.

00:44:17 We could. I suspect we won’t. But if you look at the history of moral values over the centuries,

00:44:26 it is progressive. I use the word progressive not in a value judgment sense, but in

00:44:31 a transitive sense: each step is in the same direction as the previous step.

00:44:37 So, things like, we don’t derive entertainment from torturing cats. We don’t derive entertainment

00:44:47 from watching suffering the way the Romans did in the Colosseum.

00:44:53 Or rather, we suppress the desire to take pleasure in it. It’s probably in us somewhere.

00:45:00 So there’s a bunch of parts of our brain, one, probably the limbic system,

00:45:05 that wants certain pleasures. I don’t know, I wouldn’t have said that,

00:45:10 but you’re at liberty to think that if you like. Well, no, in Dan Carlin’s

00:45:16 Hardcore History there’s a really nice explanation of how we’ve enjoyed watching

00:45:20 the torture of people, the fighting of people, just the suffering of people, throughout

00:45:25 history as entertainment, until quite recently. And now with sports, we’re

00:45:32 kind of channeling that feeling into something else. I mean, there are some dark aspects

00:45:38 of human nature underneath everything, and I do hope this higher level software

00:45:44 we’ve built will keep them at bay. I’m also Jewish and have history with the Soviet Union

00:45:52 and the Holocaust, and I clearly remember that some of the darker aspects of human nature

00:45:58 crept up there.

00:45:59 They do. There have been steps backwards, admittedly, and the Holocaust

00:46:04 is an obvious one. But if you take a broad view of history, it’s the same direction.

00:46:11 So Pamela McCorduck in Machines Who Think has written that AI began with an ancient

00:46:16 wish to forge the gods. Do you see, it’s a poetic description I suppose, but do you see

00:46:24 a connection between our civilizations, historic desire to create gods, to create religions

00:46:30 and our modern desire to create technology and intelligent technology?

00:46:35 I suppose there’s a link between an ancient desire to explain away mystery and science,

00:46:46 but intelligence, artificial intelligence, creating gods, creating new gods... I forget,

00:46:53 I read somewhere a somewhat facetious paper which said that we have a new god, it’s called

00:46:59 Google, and we pray to it and we worship it and we ask its advice like an oracle and so

00:47:05 on. That’s fun.

00:47:08 You see that as a fun statement, a facetious statement. You don’t

00:47:12 see it as a kind of truth of us creating things that are more powerful than ourselves,

00:47:17 and in a way natural?

00:47:18 It has a kind of poetic resonance to it, which I get, but I wouldn’t, I wouldn’t have bothered

00:47:26 to make the point myself, put it that way.

00:47:28 All right. So you don’t think AI will become our new god, a new religion, a new god, like

00:47:34 Google?

00:47:35 Well, yes. I mean, I can see that the future of intelligent machines or indeed intelligent

00:47:42 aliens from outer space might yield beings that we would regard as gods in the sense

00:47:48 that they are so superior to us that we might as well worship them. That’s highly plausible,

00:47:55 I think. But I see a very fundamental distinction between a god who is simply defined as something

00:48:03 very, very powerful and intelligent on the one hand and a god who doesn’t need explaining

00:48:09 by a progressive step by step process like evolution or like engineering design. So suppose

00:48:20 we did meet an alien from outer space who was marvelously, magnificently more intelligent

00:48:27 than us and we would sort of worship it for that reason. Nevertheless, it would not be

00:48:31 a god in the very important sense that it did not just happen to be there like god is

00:48:39 supposed to. It must have come about by a gradual step by step incremental progressive

00:48:46 process, presumably like Darwinian evolution. There’s all the difference in the world between

00:48:52 those two. Intelligence, design, comes into the universe late, as a product of a progressive

00:49:01 evolutionary process or a progressive engineering design process.

00:49:06 So most of the work is done through this slow moving progress.

00:49:11 Exactly.

00:49:12 Yeah. Yeah. But there’s still this desire to get answers to the why question that if

00:49:23 the world is a simulation, if we’re living in a simulation, that there’s a programmer

00:49:27 like creature that we can ask questions of.

00:49:30 Well, let’s pursue the idea that we’re living in a simulation, which is not totally ridiculous,

00:49:35 by the way.

00:49:36 There we go.

00:49:39 Then you still need to explain the programmer. The programmer had to come into existence

00:49:46 by some… Even if we’re in a simulation, the programmer must have evolved. Or if he’s

00:49:53 in a sort of…

00:49:54 Or she.

00:49:55 If she’s in a meta simulation, then the meta programmer must have evolved by a gradual process.

00:50:03 You can’t escape that. Fundamentally, you’ve got to come back to a gradual incremental

00:50:09 process of explanation to start with.

00:50:13 There’s no shortcuts in this world.

00:50:15 No, exactly.

00:50:17 But maybe to linger on that point about the simulation, do you think it’s an interesting

00:50:22 thing? Everybody has probably bored the heck out of you asking this question,

00:50:28 but, on whether we live in a simulation: first, do you think we live in a

00:50:33 simulation? Second, do you think it’s an interesting thought experiment?

00:50:37 It’s certainly an interesting thought experiment. I first met it in a science fiction novel

00:50:42 by Daniel Galouye called Counterfeit World, in which our heroes

00:50:53 are running a gigantic computer which simulates the world, and something goes wrong, and so

00:51:00 one of them has to go down into the simulated world in order to fix it. And then the denouement

00:51:05 of the thing, the climax of the novel, is that they discover that they themselves are

00:51:10 in another simulation at a higher level. So I was intrigued by this, and I love others

00:51:15 of Daniel Galouye’s science fiction novels. Then it was revived seriously by Nick Bostrom...

00:51:23 Bostrom. I’m talking to him in an hour.

00:51:27 And he goes further, not just treat it as a science fiction speculation, he actually

00:51:32 thinks it’s positively likely. I mean, he thinks it’s very likely, actually.

00:51:37 He makes a probabilistic argument, which you can use to come up with very interesting conclusions

00:51:42 about the nature of this universe.

00:51:44 I mean, he thinks that we’re in a simulation done by, so to speak, our descendants of the

00:51:50 future. But it’s still a product of evolution. It’s still ultimately going to be a product

00:51:56 of evolution, even though the super intelligent people of the future have created our world,

00:52:05 and you and I are just a simulation, and this table is a simulation and so on. I don’t actually

00:52:11 in my heart of hearts believe it, but I like his argument.

00:52:15 Well, so the interesting thing is, I agree with you, but the interesting thing to me is,

00:52:21 if we’re living in a simulation, then in that simulation, to make it work,

00:52:26 you still have to do everything gradually, just like you said. Even though it’s

00:52:31 programmed, I don’t think there could be miracles.

00:52:33 Well, no, I mean, the programmer, the higher, the upper ones have to have evolved gradually.

00:52:39 However, the simulation they create could be instantaneous. I mean, they could be switched

00:52:44 on and we come into the world with fabricated memories.

00:52:47 True, but what I’m trying to convey is, you’re making the broader statement, but I’m saying,

00:52:53 from an engineering perspective, both the programmer has to be slowly evolved and the

00:52:59 simulation itself has to be built gradually.

00:53:03 Oh yeah, it takes a long time to write a program.

00:53:05 No, like just, I don’t think you can create the universe in a snap. I think you have to

00:53:11 grow it.

00:53:12 Okay. Well, that’s a good point. That’s an arguable point. By the way, I have thought

00:53:20 about using the Nick Bostrom idea to solve the riddle of what we were

00:53:26 talking about earlier, why the human brain can achieve so much. I thought of this when

00:53:33 my then 100 year old mother was marveling at what I could do with a smartphone. I

00:53:39 could look up anything in the encyclopedia, I could play her music that she liked, and

00:53:44 so on. She said, but it’s all in that tiny little phone. No, it’s out there, it’s in

00:53:48 the cloud. And maybe most of what we do is in a cloud. So maybe, if we are a simulation,

00:53:56 all the power that we think is in our skull may actually be like the power

00:54:01 we think is in the iPhone: actually out there, with us as an interface to something else.

00:54:07 I mean, that’s what some people think, including Roger Penrose with panpsychism, that consciousness is somehow

00:54:14 a fundamental part of physics, that it doesn’t have to all reside inside. But Roger

00:54:19 thinks it does reside in the skull, whereas I’m suggesting that it doesn’t, that there’s

00:54:26 a cloud.

00:54:27 That’d be a fascinating notion. On a small tangent, are you familiar with the work of

00:54:35 Donald Hoffman? Maybe I’m not saying his name correctly, but forget the name,

00:54:43 the idea is that there’s a difference between reality and perception. We biological

00:54:51 organisms perceive the world in a way that lets the natural selection process

00:54:55 work, lets us survive and so on. But that doesn’t mean that our perception actually reflects the fundamental

00:55:01 reality, the physical reality underneath.

00:55:03 Well, I do think that although it reflects the fundamental reality, I do believe there

00:55:10 is a fundamental reality, I do think that our perception is constructive in the sense

00:55:18 that we construct in our minds a model of what we’re seeing. And so this is really the

00:55:26 view of people who work on visual illusions, like Richard Gregory, who point out that things

00:55:32 like a Necker cube, which flip from a two dimensional picture of a cube on a sheet of

00:55:40 paper, we see it as a three dimensional cube, and it flips from one orientation to another

00:55:46 at regular intervals. What’s going on is that the brain is constructing a cube, but the

00:55:53 sense data are compatible with two alternative cubes. And so rather than stick with one of

00:55:58 them, it alternates between them. I think that’s just a model for what we do all the

00:56:04 time when we see a table, when we see a person, when we see anything, we’re using the sense

00:56:10 data to construct, or make use of, a perhaps previously constructed model. I notice this

00:56:18 when I meet somebody who actually is, say, a friend of mine: until I kind of realize

00:56:26 that it is him, he looks different. And then, when I finally clock that it’s him, his features

00:56:33 switch, like a Necker cube, into the familiar form. As it were, I’ve taken his face out

00:56:39 of the filing cabinet inside and grafted it onto or used the sense data to invoke it.

00:56:48 Yeah, we do some kind of miraculous compression on this whole thing to be able to filter out

00:56:53 most of the sense data and make sense of it. That’s just a magical thing that we do. So

00:56:58 you’ve written many amazing books, but let me ask, what books, technical or fiction

00:57:08 or philosophical, had a big impact on your own life? What books would you recommend people

00:57:15 consider reading in their own intellectual journey?

00:57:19 Darwin, of course. The original. I’m actually ashamed to say I’ve never read Darwin. He’s

00:57:29 astonishingly prescient, considering he was writing in the middle of the 19th century.

00:57:35 Michael Ghiselin said he was working 100 years ahead of his time. Everything except genetics

00:57:49 the updatings that have happened since his time as well. I mean, he would be astonished

00:57:55 by, well, let alone Watson and Crick, of course, but he’d be astonished by Mendelian genetics

00:58:03 as well.

00:58:04 Yeah, it’d be fascinating to see what he thought about DNA, what he would think about DNA.

00:58:08 I mean, yes, it would. Because in many ways, it clears up what appeared in his time to

00:58:15 be a riddle. The digital nature of genetics clears up what was a problem, what was a big

00:58:23 problem. Gosh, there’s so much that I could think of. I can’t really…

00:58:28 Is there something outside that, more like fiction? When you were young, were there books

00:58:34 that, outside the realm of science or religion, just kind of sparked

00:58:39 your journey?

00:58:40 Yes. Well, actually, I suppose I could say that I’ve learned some science from science

00:58:47 fiction. I mentioned Daniel Galouye, and that’s one example, but another of his novels, called

00:58:57 Dark Universe, which is not terribly well known, is a very, very nice science

00:59:01 fiction story. It’s about a world of perpetual darkness. And we’re not told at the beginning

00:59:07 of the book why these people are in darkness. They stumble around in some kind of underground

00:59:12 world of caverns and passages, using echolocation like bats and whales to get around. And they’ve

00:59:21 adapted, presumably by Darwinian means, to survive in perpetual total darkness. But what’s

00:59:28 interesting is that their mythology, their religion has echoes of Christianity, but it’s

00:59:36 based on light. And so there’s been a fall from a paradise world that once existed, where

00:59:44 light reigned supreme. And because of the sin of mankind, light banished them. So they no

00:59:52 longer are in light’s presence, but light survives in the form of mythology and in the

00:59:58 form of sayings like, Great Light Almighty, and, for light’s sake, don’t do that,

01:00:04 And I hear what you mean rather than I see what you mean.

01:00:08 So some of the same religious elements are present in this other totally kind of absurd

01:00:12 different form.

01:00:13 Yes. And so it’s a wonderful, I wouldn’t call it satire, because it’s too good natured

01:00:17 for that. I mean, a wonderful parable about Christianity and the doctrine, the theological

01:00:24 doctrine of the fall. So I find that kind of science fiction immensely stimulating.

01:00:31 Fred Hoyle’s The Black Cloud. Oh, by the way, anything by Arthur C. Clarke I find very wonderful

01:00:36 too. Fred Hoyle’s The Black Cloud, his first science fiction novel; well, I learned

01:00:46 a lot of science from that. It suffers from an obnoxious hero, unfortunately, but apart

01:00:52 from that, you learn a lot of science from it. Another of his novels, A for Andromeda,

01:00:59 which by the way, the theme of that is taken up by Carl Sagan’s science fiction novel,

01:01:05 another wonderful writer, Carl Sagan, Contact, where the idea is, again, we will not be visited

01:01:15 from outer space by physical bodies. We will be visited possibly, we might be visited by

01:01:21 radio, but the radio signals could manipulate us and actually have a concrete influence

01:01:28 on the world if they make us or persuade us to build a computer, which runs their software.

01:01:37 So that they can then transmit their software by radio, and then the computer takes over

01:01:43 the world. And this is the same theme in both Hoyle’s book and Sagan’s book, I presume.

01:01:50 I don’t know whether Sagan knew about Hoyle’s book, probably did. But it’s a clever idea

01:01:56 that we will never be invaded by physical bodies. The War of the Worlds of H.G. Wells

01:02:04 will never happen. But we could be invaded by radio signals, coded information,

01:02:11 which is sort of like DNA. And, as I call us, we are survival machines of our DNA.

01:02:20 So it has great resonance for me, because I think of us, I think of bodies, physical

01:02:26 bodies, biological bodies, as being manipulated by coded information, DNA, which has come

01:02:34 down through generations.

01:02:35 And in the space of memes, it doesn’t have to be physical, it can be transmitted through

01:02:40 the space of information. That’s a fascinating possibility, that from outer space we can

01:02:47 be infiltrated by other memes, by other ideas, and thereby controlled in that way. Let me

01:02:54 ask the last, the silliest, or maybe the most important question. What is the meaning of

01:03:00 life? What gives your life fulfillment, purpose, happiness, meaning?

01:03:06 From a scientific point of view, the meaning of life is the propagation of DNA, but that’s

01:03:10 not what I feel. That’s not the meaning of my life. So the meaning of my life is something

01:03:16 which is probably different from yours and different from other people’s, but we each

01:03:19 make our own meaning. So we set up goals, we want to achieve, we want to write a book,

01:03:27 we want to do whatever it is we do, write a quartet, we want to win a football match.

01:03:36 And these are short term goals, well, maybe even quite long term goals, which are set

01:03:41 up by our brains, which have goal seeking machinery built into them. But what we feel,

01:03:46 we don’t feel motivated by the desire to pass on our DNA, mostly. We have other goals which

01:03:54 can be very moving, very important. They could even be called spiritual in some

01:04:01 cases. We want to understand the riddle of the universe, we want to understand consciousness,

01:04:07 we want to understand how the brain works. These are all noble goals. Some of them can

01:04:13 be noble goals anyway. And they are a far cry from the fundamental biological goal,

01:04:20 which is the propagation of DNA. But the machinery that enables us to set up these higher level

01:04:26 goals is originally programmed into us by natural selection of DNA.

01:04:34 The propagation of DNA. But what do you make of this unfortunate fact that we are mortal?

01:04:41 Do you ponder your mortality? Does it make you sad?

01:04:47 I ponder it. It makes me sad that I shall have to leave and not see what’s going

01:04:53 to happen next. If there’s something frightening about mortality, apart from, as I said,

01:05:02 sort of missing out, something more deeply, darkly frightening, it’s the idea of eternity. But eternity is

01:05:10 only frightening if you’re there. There was an eternity before we were born, billions of years before

01:05:15 we were born, and we were effectively dead before we were born. As I think it was Mark

01:05:20 Twain who said, I was dead for billions of years before I was born and never suffered the smallest

01:05:25 inconvenience. That’s how it’s going to be after we leave. So I think of it as, really,

01:05:31 eternity is a frightening prospect, and so the best way to spend it is under a general

01:05:36 anesthetic, which is what it’ll be.

01:05:39 Beautifully put. Richard, it is a huge honor to meet you, to talk to you. Thank you so

01:05:44 much for your time.

01:05:45 Thank you very much.

01:05:46 Thanks for listening to this conversation with Richard Dawkins. And thank you to our

01:05:50 presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash

01:05:55 App and using code LEXPodcast. If you enjoy this podcast, subscribe on YouTube, review

01:06:01 with 5 stars on Apple Podcast, support on Patreon, or simply connect with me on Twitter

01:06:06 at Lex Fridman.

01:06:08 And now let me leave you with some words of wisdom from Richard Dawkins.

01:06:13 We are going to die. And that makes us the lucky ones. Most people are never going to

01:06:18 die because they are never going to be born. The potential people who could have been here

01:06:24 in my place but who will in fact never see the light of day outnumber the sand grains

01:06:29 of Arabia. Certainly, those unborn ghosts include greater poets than Keats, scientists

01:06:36 greater than Newton. We know this because the set of possible people allowed by our

01:06:42 DNA so massively exceeds the set of actual people. In the teeth of these stupefying odds,

01:06:49 it is you and I, in our ordinariness, that are here. We privileged few who won the lottery

01:06:57 of birth against all odds. How dare we whine at our inevitable return to that prior state

01:07:04 from which the vast majority have never stirred?

01:07:08 Thank you for listening and hope to see you next time.