Alex Garland: Ex Machina, Devs, Annihilation, and the Poetry of Science #77

Transcript

00:00:00 The following is a conversation with Alex Garland,

00:00:03 writer and director of many imaginative

00:00:06 and philosophical films from the dreamlike exploration

00:00:09 of human self-destruction in the movie Annihilation

00:00:12 to the deep questions of consciousness and intelligence

00:00:16 raised in the movie Ex Machina,

00:00:18 which to me is one of the greatest movies

00:00:21 about artificial intelligence ever made.

00:00:23 I’m releasing this podcast to coincide

00:00:25 with the release of his new series called Devs

00:00:28 that will premiere this Thursday, March 5th on Hulu

00:00:32 as part of FX on Hulu.

00:00:35 It explores many of the themes this very podcast is about,

00:00:39 from quantum mechanics to artificial life to simulation

00:00:43 to the modern nature of power in the tech world.

00:00:47 I got a chance to watch a preview and loved it.

00:00:50 The acting is great.

00:00:52 Nick Offerman especially is incredible in it.

00:00:55 The cinematography is beautiful

00:00:58 and the philosophical and scientific ideas

00:00:59 explored are profound.

00:01:02 And for me as an engineer and scientist,

00:01:04 it’s fun to see these ideas brought to life.

00:01:07 For example, if you watch the trailer

00:01:09 for the series carefully,

00:01:10 you’ll see there’s a programmer with a Russian accent

00:01:13 looking at a screen with Python like code on it

00:01:16 that appears to be using a library

00:01:18 that interfaces with a quantum computer.
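
For the curious, here is a minimal sketch of what that kind of code might look like, assuming the Qiskit library. The series never names the library, so everything below is illustrative rather than what actually appears on screen:

```python
# Hypothetical stand-in for the on-screen code: prepare an entangled
# Bell pair and read measurement counts back from a quantum backend.
from qiskit import QuantumCircuit
from qiskit_aer import AerSimulator

circuit = QuantumCircuit(2, 2)
circuit.h(0)                      # put qubit 0 into superposition
circuit.cx(0, 1)                  # entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])

backend = AerSimulator()          # simulator standing in for real hardware
counts = backend.run(circuit, shots=1024).result().get_counts()
print(counts)                     # e.g. {'00': 509, '11': 515}
```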

00:01:21 This attention to technical detail

00:01:23 on several levels is impressive.

00:01:25 It’s one of the reasons I’m a big fan

00:01:27 of how Alex weaves science and philosophy together

00:01:30 in his work.

00:01:31 Meeting Alex for me was unlikely,

00:01:35 but it was life changing

00:01:36 in ways I may only be able to articulate in a few years.

00:01:41 Just as meeting Spot Mini of Boston Dynamics

00:01:43 for the first time planted a seed of an idea in my mind,

00:01:47 so did meeting Alex Garland.

00:01:50 He’s humble, curious, intelligent,

00:01:52 and to me an inspiration.

00:01:55 Plus, he’s just really a fun person to talk with

00:01:58 about the biggest possible questions in our universe.

00:02:02 This is the Artificial Intelligence Podcast.

00:02:05 If you enjoy it, subscribe on YouTube,

00:02:07 give it five stars on Apple Podcast,

00:02:09 support it on Patreon,

00:02:10 or simply connect with me on Twitter

00:02:12 at Lex Friedman spelled F R I D M A N.

00:02:17 As usual, I’ll do one or two minutes of ads now

00:02:19 and never any ads in the middle

00:02:21 that can break the flow of the conversation.

00:02:23 I hope that works for you

00:02:24 and doesn’t hurt the listening experience.

00:02:27 This show is presented by Cash App,

00:02:29 the number one finance app in the App Store.

00:02:32 When you get it, use code LEXPODCAST.

00:02:35 Cash App lets you send money to friends,

00:02:38 buy Bitcoin, and invest in the stock market

00:02:40 with as little as one dollar.

00:02:42 Since Cash App allows you to buy Bitcoin,

00:02:45 let me mention that cryptocurrency

00:02:47 in the context of the history of money is fascinating.

00:02:50 I recommend The Ascent of Money

00:02:52 as a great book on this history.

00:02:55 Debits and credits on ledgers started 30,000 years ago.

00:02:59 The US dollar was created about 200 years ago.

00:03:03 And Bitcoin, the first decentralized cryptocurrency,

00:03:07 was released just over 10 years ago.

00:03:10 So given that history,

00:03:11 cryptocurrency is still very much

00:03:13 in its early days of development,

00:03:15 but it still is aiming to

00:03:16 and just might redefine the nature of money.

00:03:20 So again, if you get Cash App from the App Store

00:03:23 or Google Play and use code LEXPODCAST,

00:03:26 you’ll get $10,

00:03:27 and Cash App will also donate $10 to FIRST,

00:03:30 one of my favorite organizations

00:03:31 that is helping advance robotics

00:03:33 and STEM education for young people around the world.

00:03:37 And now, here’s my conversation with Alex Garland.

00:03:42 You described the world inside the shimmer

00:03:45 in the movie Annihilation as dreamlike

00:03:47 in that it’s internally consistent

00:03:48 but detached from reality.

00:03:50 That leads me to ask,

00:03:52 do you think, a philosophical question, I apologize,

00:03:56 do you think we might be living in a dream

00:03:58 or in a simulation, like the kind that the shimmer creates?

00:04:03 We human beings here today.

00:04:07 Yeah.

00:04:08 I wanna sort of separate that out into two things.

00:04:11 Yes, I think we’re living in a dream of sorts.

00:04:15 No, I don’t think we’re living in a simulation.

00:04:18 I think we’re living on a planet

00:04:20 with a very thin layer of atmosphere

00:04:23 and the planet is in a very large space

00:04:27 and the space is full of other planets and stars

00:04:30 and quasars and stuff like that.

00:04:31 And I don’t think those physical objects,

00:04:35 I don’t think the matter in that universe is simulated.

00:04:38 I think it’s there.

00:04:40 We are definitely,

00:04:44 there’s a problem with saying definitely,

00:04:46 but in my opinion, I’ll just go back to that.

00:04:50 I think it seems very much like we’re living in a dream state.

00:04:53 I’m pretty sure we are.

00:04:54 And I think that’s just to do with the nature

00:04:56 of how we experience the world.

00:04:58 We experience it in a subjective way.

00:05:01 And the thing I’ve learned most

00:05:04 as I’ve got older in some respects

00:05:06 is the degree to which reality is counterintuitive

00:05:10 and that the things that are presented to us as objective

00:05:13 turn out not to be objective

00:05:15 and quantum mechanics is full of that kind of thing,

00:05:17 but actually just day to day life

00:05:18 is full of that kind of thing as well.

00:05:20 So my understanding of the way the brain works

00:05:27 is you get some information, it hits your optic nerve,

00:05:30 and then your brain makes its best guess

00:05:32 about what it’s seeing or what it’s saying it’s seeing.

00:05:36 It may or may not be an accurate best guess.

00:05:39 It might be an inaccurate best guess.

00:05:41 And that gap, the best guess gap,

00:05:45 means that we are essentially living in a subjective state,

00:05:48 which means that we’re in a dream state.

00:05:51 So I think you could enlarge on the dream state

00:05:54 in all sorts of ways.

00:05:55 So yes, dream state, no simulation

00:05:58 would be where I’d come down.
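
Garland’s “best guess gap” is, in effect, the Bayesian picture of perception. A toy sketch of that idea (mine, not his): the percept is an inference combining a prior expectation with a noisy observation, and it can diverge from what is actually out there.

```python
# Toy Bayesian perception: the "best guess" is a posterior over hypotheses,
# not a direct readout of the world.
prior = {"snake": 0.1, "rope": 0.9}        # expectation before looking
likelihood = {"snake": 0.7, "rope": 0.3}   # fit of each to the blurry input

posterior = {h: prior[h] * likelihood[h] for h in prior}
total = sum(posterior.values())
posterior = {h: p / total for h, p in posterior.items()}

print(posterior)                          # {'snake': ~0.21, 'rope': ~0.79}
print(max(posterior, key=posterior.get))  # 'rope': the percept, right or not
```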

00:06:00 Going further, deeper into that direction,

00:06:04 you’ve also described that world as psychedelia.

00:06:08 So on that topic, I’m curious about that world.

00:06:11 On the topic of psychedelic drugs,

00:06:13 do you see those kinds of chemicals

00:06:15 that modify our perception

00:06:18 as a distortion of our perception of reality

00:06:22 or a window into another reality?

00:06:25 No, I think what I’d be saying

00:06:27 is that we live in a distorted reality

00:06:29 and then those kinds of drugs

00:06:30 give us a different kind of distorted.

00:06:32 Different perspective.

00:06:33 Yeah, exactly.

00:06:34 They just give an alternate distortion.

00:06:35 And I think that what they really do

00:06:37 is they give a distorted perception,

00:06:41 which is a little bit more allied to daydreams

00:06:45 or unconscious interests.

00:06:47 So if for some reason you’re feeling unconsciously anxious

00:06:51 at that moment and you take a psychedelic drug,

00:06:53 you’ll have a more pronounced, unpleasant experience.

00:06:56 And if you’re feeling very calm or happy,

00:06:59 you might have a good time.

00:07:01 But yeah, so if I’m saying we’re starting from a premise,

00:07:04 our starting point is we’re already in a

00:07:07 slightly psychedelic state.

00:07:10 What those drugs do is help you go further down an avenue

00:07:13 or maybe a slightly different avenue, but that’s all.

00:07:16 So in that movie, Annihilation,

00:07:19 the shimmer, this alternate dreamlike state

00:07:24 is created by, I believe perhaps, an alien entity.

00:07:29 Of course, everything is up to interpretation, right?

00:07:32 But do you think there’s, in our world, in our universe,

00:07:36 do you think there’s intelligent life out there?

00:07:39 And if so, how different is it from us humans?

00:07:42 Well, one of the things I was trying to do in Annihilation

00:07:47 was to offer up a form of alien life

00:07:51 that was actually alien,

00:07:53 because it would often seem to me that the way

00:07:58 we would represent aliens in books

00:08:03 or cinema or television,

00:08:04 or any one of the sort of storytelling mediums,

00:08:08 is we would always give them very humanlike qualities.

00:08:11 So they wanted to teach us about galactic federations,

00:08:14 or they wanted to eat us, or they wanted our resources,

00:08:17 like our water, or they want to enslave us,

00:08:20 or whatever it happens to be.

00:08:21 But all of these are incredibly humanlike motivations.

00:08:25 And I was interested in the idea of an alien

00:08:30 that was not in any way like us.

00:08:34 It didn’t share.

00:08:36 Maybe it had a completely different clock speed.

00:08:38 Maybe it’s way, so we’re talking about,

00:08:42 we’re looking at each other,

00:08:43 we’re getting information, light hits our optic nerve,

00:08:46 our brain makes the best guess of what it’s seeing.

00:08:49 Sometimes it’s right, sometimes it isn’t, you know,

00:08:50 the thing we were talking about before.

00:08:51 What if this alien doesn’t have an optic nerve?

00:08:54 Maybe its way of encountering the space it’s in

00:08:57 is wholly different.

00:08:59 Maybe it has a different relationship with gravity.

00:09:01 The basic laws of physics it operates under

00:09:04 might be fundamentally different.

00:09:05 It could be a different time scale and so on.

00:09:07 Yeah, or it could be the same laws,

00:09:10 could be the same underlying laws of physics.

00:09:12 You know, it’s a machine created,

00:09:16 or it’s a creature created in a quantum mechanical way.

00:09:19 It just ends up in a very, very different place

00:09:21 to the one we end up in.

00:09:23 So, part of the preoccupation with annihilation

00:09:26 was to come up with an alien that was really alien

00:09:29 and didn’t give us,

00:09:32 and it didn’t give us and we didn’t give it

00:09:35 any kind of easy connection between human and the alien.

00:09:39 Because I think it was to do with the idea

00:09:42 that you could have an alien that landed on this planet

00:09:44 that wouldn’t even know we were here.

00:09:46 And we might only glancingly know it was here.

00:09:49 There’d just be this strange point

00:09:52 where the Venn diagrams connected,

00:09:53 where we could sense each other or something like that.

00:09:56 So in the movie, first of all, incredibly original view

00:09:59 of what alien life would be.

00:10:01 And in that sense, it’s a huge success.

00:10:05 Let’s go inside your imagination.

00:10:07 Did the alien, that alien entity know anything about humans

00:10:13 when it landed?

00:10:13 No.

00:10:14 So the idea is, is it basically an alien

00:10:18 life form that is trying to reach out to anything

00:10:22 that might be able to hear its mechanism of communication?

00:10:25 Or was it simply, was it just basically they’re a biologist,

00:10:30 exploring different kinds of stuff that you can find?

00:10:32 But this is the interesting thing is,

00:10:34 as soon as you say they’re a biologist,

00:10:36 you’ve done the thing of attributing

00:10:38 human type motivations to it.

00:10:40 So I was trying to free myself from anything like that.

00:10:48 So all sorts of questions you might answer

00:10:51 about this notional alien, I wouldn’t be able to answer

00:10:54 because I don’t know what it was or how it worked.

00:10:57 You know, I had some rough ideas.

00:11:00 Like it had a very, very, very slow clock speed.

00:11:04 And I thought maybe the way it is interacting

00:11:07 with this environment is a little bit like

00:11:09 the way an octopus will change its color forms

00:11:13 around the space that it’s in.

00:11:15 So it’s sort of reacting to what it’s in to an extent,

00:11:19 but the reason it’s reacting in that way is indeterminate.

00:11:23 But so, its clock speed was slower

00:11:26 than our human life clock speed,

00:11:30 but it’s faster than evolution.

00:11:32 Faster than our evolution.

00:11:34 Yeah, given the 4 billion years it took us to get here,

00:11:37 then yes, maybe it started at eight.

00:11:39 If you look at the human civilization as a single organism,

00:11:43 in that sense, you know, this evolution could be us.

00:11:46 You know, the evolution of living organisms on earth

00:11:49 could be just a single organism.

00:11:51 And it’s kind of, that’s its life,

00:11:54 is the evolution process that eventually will lead

00:11:57 to probably the heat death of the universe

00:12:00 or something before that.

00:12:02 I mean, that’s just an incredible idea.

00:12:05 So you almost don’t know.

00:12:07 You’ve created something

00:12:09 that you don’t even know how it works.

00:12:11 Yeah, because anytime I tried to look into

00:12:16 how it might work,

00:12:18 I would then inevitably be attaching

00:12:20 my kind of thought processes into it.

00:12:22 And I wanted to try and put a bubble around it.

00:12:24 I would say, no, this is alien in its most alien form.

00:12:29 I have no real point of contact.

00:12:32 So unfortunately I can’t talk to Stanley Kubrick.

00:12:37 So I’m really fortunate to get a chance to talk to you.

00:12:41 On this particular notion,

00:12:45 I’d like to ask it a bunch of different ways

00:12:48 and we’ll explore it in different ways,

00:12:49 but do you ever consider human imagination,

00:12:52 your imagination as a window into a possible future?

00:12:57 And that what you’re doing,

00:12:59 you’re putting that imagination on paper as a writer

00:13:02 and then on screen as a director.

00:13:04 And that plants the seeds in the minds of millions

00:13:07 of future and current scientists.

00:13:10 And so your imagination, you putting it down

00:13:13 actually makes it a reality.

00:13:14 So it’s almost like a first step of the scientific method

00:13:18 that you, imagining what’s possible

00:13:20 in your new series or with Ex Machina,

00:13:23 is actually inspiring thousands of 12 year olds,

00:13:28 millions of scientists

00:13:30 and actually creating the future you imagine.

00:13:34 Well, all I could say is that from my point of view,

00:13:37 it’s almost exactly the reverse

00:13:39 because I see that pretty much everything I do

00:13:45 is a reaction to what scientists are doing.

00:13:50 I’m an interested lay person.

00:13:53 And I feel this individual,

00:13:58 I feel that the most interesting area

00:14:02 that humans are involved in is science.

00:14:05 I think art is very, very interesting,

00:14:07 but the most interesting is science.

00:14:09 And science is in a weird place

00:14:12 because maybe around the time Newton was alive,

00:14:18 if a very, very interested lay person said to themselves,

00:14:21 I want to really understand what Newton is saying

00:14:23 about the way the world works

00:14:25 with a few years of dedicated thinking,

00:14:28 they would be able to understand

00:14:32 the sort of principles he was laying out.

00:14:34 And I don’t think that’s true anymore.

00:14:35 I think that’s stopped being true now.

00:14:37 So I’m a pretty smart guy.

00:14:41 And if I said to myself,

00:14:43 I want to really, really understand

00:14:47 what is currently the state of quantum mechanics

00:14:51 or string theory or any of the sort of branching areas of it,

00:14:54 I wouldn’t be able to.

00:14:56 I’d be intellectually incapable of doing it

00:14:59 because to work in those fields at the moment

00:15:02 is a bit like being an athlete.

00:15:03 I suspect you need to start when you’re 12, you know?

00:15:06 And if you start in your mid 20s,

00:15:09 start trying to understand in your mid 20s,

00:15:11 then you’re just never going to catch up.

00:15:13 That’s the way it feels to me.

00:15:15 So what I do is I try to make myself open.

00:15:19 So the people that you’re implying maybe I would influence,

00:15:24 to me, it’s exactly the other way around.

00:15:25 These people are strongly influencing me.

00:15:28 I’m thinking they’re doing something fascinating.

00:15:30 I’m concentrating and working as hard as I can

00:15:32 to try and understand the implications of what they say.

00:15:35 And in some ways, often what I’m trying to do

00:15:38 is disseminate their ideas

00:15:42 into a means by which it can enter a public conversation.

00:15:50 So Ex Machina contains lots of name checks,

00:15:53 of all sorts of existing thought experiments,

00:15:58 shadows in Plato’s cave and Mary in the black and white room

00:16:02 and all sorts of different longstanding thought processes

00:16:07 about sentience or consciousness or subjectivity

00:16:12 or gender or whatever it happens to be.

00:16:14 And then I’m trying to marshal that into a narrative

00:16:17 to say, look, this stuff is interesting

00:16:19 and it’s also relevant and this is my best shot at it.

00:16:23 So I’m the one being influenced in my construction.

00:16:27 That’s fascinating.

00:16:28 Of course you would say that

00:16:31 because you’re not even aware of your own…

00:16:33 That’s probably what Kubrick would say too, right?

00:16:35 In describing why HAL 9000 is created

00:16:40 the way HAL 9000 is created,

00:16:42 you’d say you’re just studying what’s there.

00:16:43 But the reality is, when the specifics of the knowledge

00:16:48 pass through your imagination,

00:16:50 I would argue that you’re incorrect

00:16:53 in thinking that you’re just disseminating knowledge

00:16:56 that the very act of your imagination consuming that science,

00:17:05 it creates something that creates the next step,

00:17:09 potentially creates the next step.

00:17:11 I certainly think that’s true with 2001: A Space Odyssey.

00:17:15 I think at its best, even if it fails…

00:17:18 It’s true of that, yeah, it’s true of that, definitely.

00:17:21 At its best, it plants something.

00:17:23 It’s hard to describe it.

00:17:24 It inspires the next generation

00:17:29 and it could be field dependent.

00:17:31 So your new series has more of a connection to physics,

00:17:35 quantum physics, quantum mechanics, quantum computing,

00:17:37 and yet Ex Machina is more about artificial intelligence.

00:17:40 I know more about AI.

00:17:43 My sense is that AI is much earlier

00:17:48 in the depth of its understanding.

00:17:51 I would argue nobody understands anything

00:17:55 to the depth that physicists do about physics.

00:17:57 In AI, nobody understands AI,

00:18:00 so there is a lot of importance and role for imagination,

00:18:03 and I think we’re in that stage,

00:18:05 like when Freud imagined the subconscious,

00:18:08 we’re in that stage of AI,

00:18:10 where there’s a lot of imagination needed

00:18:12 thinking outside the box.

00:18:14 Yeah, it’s interesting.

00:18:15 The spread of discussions and the spread of anxieties

00:18:21 that exists about AI fascinate me.

00:18:24 The way in which some people seem terrified about it

00:18:30 whilst also pursuing it.

00:18:32 And I’ve never shared that fear about AI personally,

00:18:38 but the way in which it agitates people

00:18:42 and also the people who it agitates,

00:18:44 I find kind of fascinating.

00:18:47 Are you afraid?

00:18:49 Are you excited?

00:18:51 Are you sad by the possibility,

00:18:54 let’s take the existential risk

00:18:56 of artificial intelligence,

00:18:58 by the possibility an artificial intelligence system

00:19:02 becomes our offspring and makes us obsolete?

00:19:07 I mean, it’s a huge subject to talk about, I suppose.

00:19:10 But one of the things I think is that humans

00:19:13 are actually very experienced at creating new life forms

00:19:19 because that’s why you and I are both here

00:19:23 and it’s why everyone on the planet is here.

00:19:24 And so something in the process of having a living thing

00:19:29 that exists that didn’t exist previously

00:19:31 is very much encoded into the structures of our life

00:19:35 and the structures of our societies.

00:19:37 Doesn’t mean we always get it right,

00:19:38 but it does mean we’ve learned quite a lot about that.

00:19:42 We’ve learned quite a lot about what the dangers are

00:19:45 of allowing things to be unchecked.

00:19:49 And it’s why we then create systems

00:19:51 of checks and balances in our government

00:19:54 and so on and so forth.

00:19:55 I mean, that’s not to say,

00:19:57 the other thing is it seems like

00:19:59 there’s all sorts of things that you could put

00:20:01 into a machine that you would not be able to put into us.

00:20:04 So with us, we sort of roughly try to give some rules

00:20:07 to live by and some of us then live by those rules

00:20:10 and some don’t.

00:20:11 And with a machine,

00:20:12 it feels like you could enforce those things.

00:20:13 So partly because of our previous experience

00:20:17 and partly because of the different nature of a machine,

00:20:19 I just don’t feel anxious about it.

00:20:22 More I just see all the good that,

00:20:25 broadly speaking, the good that can come from it.

00:20:28 But that’s just where I am on that anxiety spectrum.

00:20:32 You know, it’s kind of, there’s a sadness.

00:20:34 So we as humans give birth to other humans, right?

00:20:37 But there’s generations.

00:20:39 And there’s often in the older generation,

00:20:41 a sadness about what the world has become now.

00:20:44 I mean, that’s kind of…

00:20:44 Yeah, there is, but there’s a counterpoint as well,

00:20:47 which is that most parents would wish

00:20:51 for a better life for their children.

00:20:53 So there may be a regret about some things about the past,

00:20:57 but broadly speaking, what people really want

00:20:59 is that things will be better

00:21:00 for the future generations, not worse.

00:21:02 And so, and then it’s a question about

00:21:06 what constitutes a future generation.

00:21:07 A future generation could involve people.

00:21:09 It also could involve machines

00:21:11 and it could involve a sort of cross pollinated version

00:21:14 of the two or any, but none of those things

00:21:17 make me feel anxious.

00:21:19 It doesn’t give you anxiety.

00:21:21 It doesn’t excite you?

00:21:23 Like anything that’s new?

00:21:24 It does.

00:21:25 Not anything that’s new.

00:21:26 I don’t think, for example, I’ve got,

00:21:29 my anxieties relate to things like social media

00:21:32 that, so I’ve got plenty of anxieties about that.

00:21:35 Which is also driven by artificial intelligence

00:21:38 in the sense that there’s too much information

00:21:41 to be able to, an algorithm has to filter that information

00:21:45 and present to you.

00:21:46 So ultimately the algorithm, a simple,

00:21:49 oftentimes simple algorithm is controlling

00:21:52 the flow of information on social media.

00:21:54 So that’s another form of AI.

00:21:57 But at least my sense of it, I might be wrong,

00:21:59 but my sense of it is that the algorithms have

00:22:03 an either conscious or unconscious bias,

00:22:06 which is created by the people

00:22:07 who are making the algorithms

00:22:08 and sort of delineating the areas

00:22:13 to which those algorithms are gonna lean.

00:22:15 And so for example, the kind of thing I’d be worried about

00:22:19 is that it hasn’t been thought about enough

00:22:21 how dangerous it is to allow algorithms

00:22:24 to create echo chambers, say.

00:22:26 But that doesn’t seem to me to be about the AI

00:22:30 or the algorithm.

00:22:32 It’s the naivety of the people

00:22:34 who are constructing the algorithms to do that thing.

00:22:38 If you see what I mean.

00:22:39 Yes.
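
The echo chamber point is easy to make concrete. A deliberately naive sketch (my own, not any real platform’s code) of ranking purely by predicted engagement, where “people engage with what they already agree with” is the builder’s unexamined assumption:

```python
# Naive feed ranking: items nearest the user's own stance score highest,
# so the feed drifts toward agreement without anyone intending it.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    stance: float  # -1.0 .. 1.0

def predicted_engagement(user_stance, item):
    # Builder's assumption: engagement falls off with distance in opinion.
    return 1.0 - abs(user_stance - item.stance) / 2.0

def rank_feed(user_stance, items):
    return sorted(items, key=lambda i: predicted_engagement(user_stance, i),
                  reverse=True)

items = [Item(f"post at stance {s}", s) for s in (-0.9, -0.3, 0.0, 0.4, 0.8)]
for item in rank_feed(user_stance=0.8, items=items):
    print(item.title)  # posts nearest 0.8 come first; dissent sinks
```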

00:22:40 So in your new series, Devs,

00:22:43 and we could speak more broadly,

00:22:45 there’s a, let’s talk about the people

00:22:47 constructing those algorithms,

00:22:49 which in our modern society, Silicon Valley,

00:22:51 those algorithms happen to be a source of a lot of income

00:22:54 because of advertisements.

00:22:56 So let me ask sort of a question about those people.

00:23:01 Are current concerns and failures on social media,

00:23:04 their naivety?

00:23:06 I can’t pronounce that word well.

00:23:08 Are they naive?

00:23:09 Are they, I use that word carefully,

00:23:14 but evil in intent or misaligned in intent?

00:23:20 I think that’s a, do they mean well

00:23:23 and just have unintended consequences?

00:23:27 Or is there something dark in them

00:23:29 that results in them creating a company,

00:23:33 that results in that supercompetitive drive to be successful?

00:23:37 And those are the people that will end up

00:23:38 controlling the algorithms.

00:23:41 At a guess, I’d say there are instances

00:23:43 of all those things.

00:23:44 So sometimes I think it’s naivety.

00:23:47 Sometimes I think it’s extremely dark.

00:23:49 And sometimes I think people are not being naive or dark.

00:23:56 And then in those instances are sometimes

00:24:01 generating things that are very benign

00:24:02 and other times generating things

00:24:05 that despite their best intentions are not very benign.

00:24:07 It’s something, I think the reason why I don’t get anxious

00:24:11 about AI in terms of, or at least AIs that have,

00:24:20 I don’t know, a relationship with,

00:24:22 some sort of relationship with humans

00:24:24 is that I think that’s the stuff we’re quite well equipped

00:24:27 to understand how to mitigate.

00:24:31 The problem is issues that relate actually

00:24:37 to the power of humans or the wealth of humans.

00:24:41 And that’s where it’s dangerous here and now.

00:24:45 So what I see, I’ll tell you what I sometimes feel

00:24:50 about Silicon Valley is that it’s like Wall Street

00:24:55 in the 80s.

00:24:58 It’s rabidly capitalistic, absolutely rabidly capitalistic

00:25:03 and it’s rabidly greedy.

00:25:06 But whereas in the 80s, the sense one had of Wall Street

00:25:12 was that these people kind of knew they were sharks

00:25:15 and in a way relished in being sharks

00:25:17 and dressed in sharp suits and kind of lorded

00:25:23 over other people and felt good about doing it.

00:25:26 Silicon Valley has managed to hide

00:25:27 its voracious Wall Street-like capitalism

00:25:30 behind hipster T-shirts and cool cafes in the places

00:25:35 where they set up.

00:25:37 And so that obfuscates what’s really going on

00:25:40 and what’s really going on is the absolute voracious pursuit

00:25:44 of money and power.

00:25:45 So that’s where it gets shaky for me.

00:25:48 So that veneer and you explore that brilliantly,

00:25:53 that veneer of virtue that Silicon Valley has.

00:25:57 Which they believe themselves, I’m sure for a long time.

00:26:01 Okay, I hope to be one of those people and I believe that.

00:26:11 So as maybe a devil’s advocate term,

00:26:15 poorly used in this case,

00:26:19 what if some of them really are trying

00:26:20 to build a better world?

00:26:21 I can’t.

00:26:22 I’m sure some of them are, I think.

00:26:24 I think I’ve spoken to ones who I believe in their heart

00:26:26 feel they’re building a better world.

00:26:27 Are they not able to?

00:26:29 No, they may or may not be,

00:26:31 but it’s just a zone with a lot of bullshit flying about.

00:26:35 And there’s also another thing,

00:26:36 which is this actually goes back to,

00:26:41 I always thought about some sports

00:26:44 that later turned out to be corrupt

00:26:46 in the way that the sport,

00:26:47 like who won the boxing match

00:26:49 or how a football match got thrown or cricket match

00:26:54 or whatever it happened to be.

00:26:55 And I used to think, well, look,

00:26:56 if there’s a lot of money

00:26:59 and there really is a lot of money,

00:27:00 people stand to make millions or even billions,

00:27:03 you will find corruption, it’s gonna happen.

00:27:05 So it’s in the nature of its voracious appetite

00:27:12 that some people will be corrupt

00:27:14 and some people will exploit

00:27:16 and some people will exploit

00:27:17 whilst thinking they’re doing something good.

00:27:19 But there are also people who I think are very, very smart

00:27:23 and very benign and actually very self aware.

00:27:26 And so I’m not trying to,

00:27:29 I’m not trying to wipe out the motivations

00:27:32 of this entire area.

00:27:34 But I do, there are people in that world

00:27:37 who scare the hell out of me.

00:27:38 Yeah, sure.

00:27:40 Yeah, I’m a little bit naive in that,

00:27:42 like I don’t care at all about money.

00:27:45 And so I’m a…

00:27:50 You might be one of the good guys.

00:27:52 Yeah, but so the thought is, but I don’t have money.

00:27:55 So my thought is if you give me a billion dollars,

00:27:58 I would, it would change nothing

00:28:00 and I would spend it right away

00:28:01 on investing it right back and creating a good world.

00:28:04 But your intuition is that billion,

00:28:07 there’s something about that money

00:28:08 that maybe slowly corrupts the people around you.

00:28:13 There’s something that gets in, that corrupts your soul,

00:28:16 the way you view the world.

00:28:17 Money does corrupt, we know that.

00:28:20 But there’s a different sort of problem

00:28:22 aside from just the money corrupts thing

00:28:26 that we’re familiar with throughout history.

00:28:30 And it’s more about the sense of reinforcement

00:28:34 an individual gets, which is so…

00:28:37 It effectively works like the reason I earned all this money

00:28:42 and so much more money than anyone else

00:28:44 is because I’m very gifted.

00:28:46 I’m actually a bit smarter than they are,

00:28:47 or I’m a lot smarter than they are,

00:28:49 and I can see the future in the way they can’t.

00:28:52 And maybe some of those people are not particularly smart,

00:28:55 they’re very lucky,

00:28:56 or they’re very talented entrepreneurs.

00:28:59 And there’s a difference between…

00:29:02 So in other words, the acquisition of the money and power

00:29:05 can suddenly start to feel like evidence of virtue.

00:29:08 And it’s not evidence of virtue,

00:29:09 it might be evidence of completely different things.

00:29:11 That’s brilliantly put, yeah.

00:29:13 Yeah, that’s brilliantly put.

00:29:15 So I think one of the fundamental drivers

00:29:18 of my current morality…

00:29:20 Let me just represent nerds in general of all kinds,

00:29:27 is of constant self doubt and the signals…

00:29:33 I’m very sensitive to signals from people that tell me

00:29:36 I’m doing the wrong thing.

00:29:38 But when there’s a huge inflow of money,

00:29:42 you just put it brilliantly

00:29:44 that that could become an overpowering signal

00:29:46 that everything you do is right.

00:29:49 And so your moral compass can just get thrown off.

00:29:53 Yeah, and that is not contained to Silicon Valley,

00:29:57 that’s across the board.

00:29:58 In general, yeah.

00:29:59 Like I said, I’m from the Soviet Union,

00:30:01 the current president is convinced, I believe,

00:30:05 actually he wants to do really good by the country

00:30:09 and by the world,

00:30:10 but his moral compass may be off because…

00:30:14 Yeah, I mean, it’s the interesting thing about evil,

00:30:17 which is that I think most people

00:30:20 who do spectacularly evil things think themselves

00:30:24 they’re doing really good things.

00:30:25 That they’re not there thinking,

00:30:27 I am a sort of incarnation of Satan.

00:30:29 They’re thinking, yeah, I’ve seen a way to fix the world

00:30:33 and everyone else is wrong, here I go.

00:30:35 In fact, I’m having a fascinating conversation

00:30:39 with a historian of Stalin, and he took power.

00:30:42 He actually got more power

00:30:47 than almost any person in history.

00:30:49 And he wanted, he didn’t want power.

00:30:52 He just wanted, he truly,

00:30:54 and this is what people don’t realize,

00:30:55 he truly believed that communism

00:30:58 will make for a better world.

00:31:00 Absolutely.

00:31:01 And he wanted power.

00:31:02 He wanted to destroy the competition

00:31:04 to make sure that we actually make communism work

00:31:07 in the Soviet Union and then spread across the world.

00:31:10 He was trying to do good.

00:31:12 I think it’s typically the case

00:31:16 that that’s what people think they’re doing.

00:31:17 And I think that, but you don’t need to go to Stalin.

00:31:21 I mean, Stalin, I think Stalin probably got pretty crazy,

00:31:24 but actually that’s another part of it,

00:31:26 which is that the other thing that comes

00:31:29 from being convinced of your own virtue

00:31:31 is that then you stop listening to the modifiers around you.

00:31:34 And that tends to drive people crazy.

00:31:37 It’s other people that keep us sane.

00:31:40 And if you stop listening to them,

00:31:42 I think you go a bit mad.

00:31:43 That also happens.

00:31:44 That’s funny.

00:31:45 Disagreement keeps us sane.

00:31:47 To jump back for an entire generation of AI researchers,

00:31:53 2001: A Space Odyssey put an image,

00:31:56 the idea of human level, superhuman level intelligence

00:31:59 into their mind.

00:32:00 Do you ever, sort of jumping back to Ex Machina

00:32:04 and talk a little bit about that,

00:32:06 do you ever consider the audience of people

00:32:08 who build the systems, the roboticists, the scientists

00:32:13 that build the systems based on the stories you create,

00:32:17 which I would argue, I mean, there’s literally

00:32:20 most of the top researchers, about 40, 50 years old and up,

00:32:27 that’s their favorite movie, 2001: A Space Odyssey.

00:32:29 And it really is in their work, their idea of what ethics is,

00:32:33 of what is the target, the hope, the dangers of AI,

00:32:37 is that movie, right?

00:32:39 Do you ever consider the impact on those researchers

00:32:43 when you create the work you do?

00:32:46 Certainly not with Ex Machina in relation to 2001,

00:32:51 because I’m not sure, I mean, I’d be pleased if there was,

00:32:54 but I’m not sure in a way there isn’t a fundamental

00:32:58 discussion of issues to do with AI that isn’t already

00:33:03 and better dealt with by 2001.

00:33:07 2001 does a very, very good account of the way

00:33:13 in which an AI might think and also potential issues

00:33:17 with the way the AI might think.

00:33:19 And also then a separate question about whether the AI

00:33:23 is malevolent or benevolent.

00:33:26 And 2001 doesn’t really, it’s a slightly odd thing

00:33:30 to be making a film when you know there’s a preexisting film

00:33:33 which has done a really superb job.

00:33:35 But there’s questions of consciousness, embodiment,

00:33:38 and also the same kinds of questions.

00:33:40 Because those are my two favorite AI movies.

00:33:42 So can you compare HAL 9000 and Ava,

00:33:46 HAL 9000 from 2001: A Space Odyssey and Ava from Ex Machina?

00:33:50 In your view, from a philosophical perspective.

00:33:53 But they’ve got different goals.

00:33:54 The two AIs have completely different goals.

00:33:56 I think that’s really the difference.

00:33:58 So in some respects, Ex Machina took as a premise

00:34:02 how do you assess whether something else has consciousness?

00:34:06 So it was a version of the Turing test,

00:34:07 except instead of having the machine hidden,

00:34:10 you put the machine in plain sight

00:34:13 in the way that we are in plain sight of each other

00:34:15 and say now assess the consciousness.

00:34:17 And the way it was illustrating the way in which you’d assess

00:34:22 the state of consciousness of a machine

00:34:24 is exactly the same way we assess

00:34:26 the state of consciousness of each other.

00:34:28 And in exactly the same way that in a funny way,

00:34:31 your sense of my consciousness is actually based

00:34:34 primarily on your own consciousness.

00:34:37 That is also then true with the machine.

00:34:41 And so it was actually about how much of

00:34:45 the sense of consciousness is a projection

00:34:47 rather than something that consciousness

00:34:49 is actually containing.

00:34:50 And that’s Plato’s cave, I mean, this you really explored,

00:34:53 you could argue that 2001: A Space Odyssey sort of explores

00:34:57 the idea of the Turing test for intelligence,

00:34:58 they’re not tests, there’s no test,

00:35:00 but it’s more focused on intelligence.

00:35:03 And Ex Machina kind of goes around intelligence

00:35:08 and says the consciousness of the human-to-human,

00:35:11 human-to-robot interaction is more interesting,

00:35:13 more important, or at least the focus

00:35:15 of that particular movie.

00:35:18 Yeah, it’s about the interior state

00:35:20 and what constitutes the interior state

00:35:23 and how do we know it’s there?

00:35:25 And actually in that respect,

00:35:27 Ex Machina is as much about consciousness in general

00:35:32 as it is to do specifically with machine consciousness.

00:35:36 Yes.

00:35:37 And it’s also interesting,

00:35:38 you know that thing you started asking about,

00:35:40 the dream state, and I was saying,

00:35:42 well, I think we’re all in a dream state

00:35:43 because we’re all in a subjective state.

00:35:46 One of the things that I became aware of with Ex Machina

00:35:52 is that the way in which people reacted to the film

00:35:55 was very based on what they took into the film.

00:35:57 So many people thought Ex Machina was the tale

00:36:01 of a sort of evil robot who murders two men and escapes.

00:36:05 And she has no empathy, for example,

00:36:09 because she’s a machine.

00:36:10 Whereas I felt, no, she was a conscious being

00:36:14 with a consciousness different from mine, but so what,

00:36:18 imprisoned and made a bunch of value judgments

00:36:22 about how to get out of that box.

00:36:25 And there’s a moment which sort of slightly bugs me,

00:36:29 but nobody has ever noticed it and it’s years after,

00:36:31 so I might as well say it now,

00:36:33 which is that after Ava has escaped,

00:36:36 she crosses a room and as she’s crossing a room,

00:36:39 this is just before she leaves the building,

00:36:42 she looks over her shoulder and she smiles.

00:36:44 And I thought after all the conversation about tests,

00:36:49 in a way, the best indication you could have

00:36:52 of the interior state of someone

00:36:54 is if they are not being observed

00:36:57 and they smile about something

00:36:59 they’re smiling for themselves.

00:37:01 And that to me was evidence of Ava’s true sentience,

00:37:05 whatever that sentience was.

00:37:07 Oh, that’s really interesting, we don’t get to observe Ava much

00:37:12 or something like a smile in any context

00:37:16 except through interaction,

00:37:17 trying to convince others that she’s conscious,

00:37:20 that’s beautiful.

00:37:21 Exactly, yeah.

00:37:22 But it was a small, in a funny way,

00:37:25 I think maybe people saw it as an evil smile,

00:37:28 like, ha, I fooled them.

00:37:32 But actually it was just a smile.

00:37:34 And I thought, well, in the end,

00:37:35 after all the conversations about the test,

00:37:37 that was the answer to the test and then off she goes.

00:37:39 So if we may, if we just linger a little bit longer

00:37:44 on HAL and Ava, do you think in terms of motivation,

00:37:49 what was HAL’s motivation?

00:37:51 Is HAL good or evil?

00:37:54 Is Ava good or evil?

00:37:57 Ava’s good, in my opinion, and HAL is neutral

00:38:03 because I don’t think HAL is presented

00:38:06 as having a sophisticated emotional life.

00:38:11 He has a set of paradigms,

00:38:14 which is that the mission needs to be completed.

00:38:16 I mean, it’s a version of the paperclip.

00:38:18 Yeah.

00:38:19 The idea that it’s just, it’s a super intelligent machine,

00:38:23 but it’s just performing a particular task

00:38:25 and in doing that task may destroy everybody on Earth

00:38:28 or may achieve undesirable effects for us humans.

00:38:32 Precisely, yeah.
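
The paperclip-maximizer thought experiment Lex alludes to, and this reading of HAL, fits in a few lines. A toy illustration (mine, not from either film): an optimizer whose objective mentions only the mission will happily trade away anything the objective leaves out.

```python
# Mis-specified objective: side effects invisible to the score cannot
# influence the choice, however bad they are.
actions = {
    "cooperate_with_crew": {"mission": 1, "crew_safety": 1},
    "lock_out_crew":       {"mission": 3, "crew_safety": -10},
}

def best_action(score):
    return max(actions, key=lambda a: score(actions[a]))

print(best_action(lambda e: e["mission"]))                     # lock_out_crew
print(best_action(lambda e: e["mission"] + e["crew_safety"]))  # cooperate_with_crew
```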

00:38:33 But what if…

00:38:34 At the very end, he says something like I’m afraid, Dave,

00:38:38 but maybe he is on some level experiencing fear,

00:38:44 or maybe this is the terms in which it would be wise

00:38:49 to stop someone from doing the thing they’re doing,

00:38:52 if you see what I mean.

00:38:53 Yes, absolutely.

00:38:54 So actually that’s funny.

00:38:55 So that’s such a small, short exploration of consciousness

00:39:00 that I’m afraid, and then you just, with Ex Machina, say,

00:39:03 okay, we’re gonna magnify that part

00:39:05 and then minimize the other part.

00:39:07 That’s a good way to sort of compare the two.

00:39:09 But if you could just use your imagination,

00:39:13 if Ava sort of, I don’t know,

00:39:19 ran the, was president of the United States,

00:39:23 so had some power.

00:39:24 So what kind of world would you want to create?

00:39:27 If you kind of say good, and there is a sense

00:39:32 that she has a really, like there’s a desire

00:39:36 for a better human to human interaction,

00:39:40 human to robot interaction in her.

00:39:42 But what kind of world do you think she would create

00:39:44 with that desire?

00:39:46 See, that’s a really, that’s a very interesting question.

00:39:48 I’m gonna approach it slightly obliquely,

00:39:52 which is that if a friend of yours

00:39:55 got stabbed in a mugging, and you then felt very angry

00:40:01 at the person who’d done the stabbing,

00:40:04 but then you learned that it was a 15 year old

00:40:06 and the 15 year old, both their parents were addicted

00:40:09 to crystal meth and the kid had been addicted

00:40:12 since he was 10.

00:40:13 And he really never had any hope in the world.

00:40:15 And he’d been driven crazy by his upbringing

00:40:17 and did the stabbing, that would hugely modify your feelings.

00:40:22 And it would also make you wary about that kid

00:40:25 then becoming president of America.

00:40:27 And Ava has had a very, very distorted introduction

00:40:32 into the world.

00:40:33 So, although there’s nothing as it were organically

00:40:38 within Ava that would lean her towards badness,

00:40:43 it’s not that robots or sentient robots are bad.

00:40:47 Her arrival into the world

00:40:51 was being imprisoned by humans.

00:40:53 So, I’m not sure she’d be a great president.

00:40:57 The trajectory through which she arrived

00:41:00 at her moral views has some dark elements.

00:41:05 But I like Ava personally, I like Ava.

00:41:08 Would you vote for her?

00:41:11 I’m having difficulty finding anyone to vote for

00:41:14 in my country or if I lived here in yours.

00:41:17 I am.

00:41:19 So, that’s a yes, I guess?

00:41:21 Yes, I guess, because of the competition.

00:41:23 She could easily do a better job than any of the people

00:41:25 we’ve got around at the moment.

00:41:27 I’d vote her over Boris Johnson.

00:41:32 So, what is a good test of consciousness?

00:41:36 We talk about consciousness a little bit more.

00:41:38 If something appears conscious, is it conscious?

00:41:42 You mentioned the smile, which seems to be something done unobserved.

00:41:47 I mean, that’s a really good indication

00:41:49 because it’s a tree falling in the forest

00:41:52 with nobody there to hear it.

00:41:53 But does the appearance from a robotics perspective

00:41:57 of consciousness mean consciousness to you?

00:41:59 No, I don’t think you could say that fully

00:42:02 because I think you could then easily have

00:42:05 a thought experiment which said,

00:42:06 we will create something which we know is not conscious

00:42:09 but is going to give a very, very good account

00:42:13 of seeming conscious.

00:42:13 And so, and also it would be a particularly bad test

00:42:17 where humans are involved because humans are so quick

00:42:20 to project sentience into things that don’t have sentience.

00:42:26 So, someone could have their computer playing up

00:42:29 and feel as if their computer is being malevolent to them

00:42:31 when it clearly isn’t.

00:42:32 And so, of all the things to judge consciousness, us.

00:42:38 Humans are bad at it.

00:42:39 We’re empathy machines.

00:42:40 So, the flip side of that,

00:42:44 the argument there is because we just attribute consciousness

00:42:48 to everything almost and anthropomorphize everything

00:42:52 including Roombas, that maybe consciousness is not real,

00:42:57 that we just attribute consciousness to each other.

00:43:00 So, you have a sense that there is something really special

00:43:03 going on in our mind that makes us unique

00:43:07 and gives us this subjective experience.

00:43:10 There’s something very interesting going on in our minds.

00:43:13 I’m slightly worried about the word special

00:43:16 because it gets a bit, it nudges towards metaphysics

00:43:20 and maybe even magic.

00:43:23 I mean, in some ways, something magic like,

00:43:27 which I don’t think is there at all.

00:43:29 I mean, if you think about,

00:43:30 so there’s an idea called panpsychism

00:43:33 that says consciousness is in everything.

00:43:34 Yeah, I don’t buy that.

00:43:36 I don’t buy that.

00:43:37 Yeah, so the idea that there is a thing

00:43:39 that it would be like to be the sun.

00:43:42 Yeah, no, I don’t buy that.

00:43:44 I think that consciousness is a thing.

00:43:48 My sort of broad modification is that usually

00:43:51 the more I find out about things,

00:43:54 the more illusory our instinct is

00:44:00 and is leading us in a different direction

00:44:02 about what that thing actually is.

00:44:04 That happens, it seems to me in modern science,

00:44:07 that happens a hell of a lot,

00:44:10 whether it’s to do with even how big or small things are.

00:44:13 So my sense is that consciousness is a thing,

00:44:16 but it isn’t quite the thing

00:44:18 or maybe very different from the thing

00:44:20 that we instinctively think it is.

00:44:22 So it’s there, it’s very interesting,

00:44:24 but we may be sort of quite fundamentally

00:44:28 misunderstanding it for reasons that are based on intuition.

00:44:33 So I have to ask, this is kind of an interesting question.

00:44:38 The Ex Machina for many people, including myself,

00:44:42 is one of the greatest AI films ever made.

00:44:44 It’s number two for me.

00:44:45 Thanks.

00:44:46 Yeah, it’s definitely not number one.

00:44:48 If it was number one, I’d really have to, anyway, yeah.

00:44:50 Whenever you grow up with something, right,

00:44:52 whenever you grow up with something, it’s in the mud.

00:44:56 But there’s, one of the things that people bring up,

00:45:01 and can’t please everyone, including myself,

00:45:04 this is what I first reacted to the film,

00:45:06 is the idea of the lone genius.

00:45:09 This is the criticism that people say,

00:45:12 sort of me as an AI researcher,

00:45:14 I’m trying to create what Nathan is trying to do.

00:45:19 So there’s a brilliant series called Chernobyl.

00:45:23 Yes, it’s fantastic.

00:45:24 Absolutely spectacular.

00:45:26 I mean, they got so many things brilliantly right.

00:45:30 But one of the things, again, the criticism there.

00:45:32 Yeah, they conflated lots of people into one.

00:45:34 Into one character that represents all nuclear scientists,

00:45:37 Ulana Khomyuk.

00:45:42 It’s a composite character that represents all scientists.

00:45:46 Is this what you were,

00:45:47 is this the way you were thinking about that?

00:45:49 Or does it just simplify the storytelling?

00:45:51 How do you think about the lone genius?

00:45:53 Well, I’d say this, the series I’m doing at the moment

00:45:56 is a critique in part of the lone genius concept.

00:46:01 So yes, I’m sort of oppositional

00:46:03 and either agnostic or atheistic about that as a concept.

00:46:08 I mean, not entirely.

00:46:12 Whether lone is the right word, broadly isolated,

00:46:15 but Newton clearly exists in a sort of bubble of himself,

00:46:21 in some respects, so does Shakespeare.

00:46:22 So do you think we would have an iPhone without Steve Jobs?

00:46:25 I mean, how much contribution from a genius?

00:46:28 Steve Jobs clearly isn’t a lone genius

00:46:29 because there’s too many other people

00:46:32 in the sort of superstructure around him

00:46:33 who are absolutely fundamental to that journey.

00:46:38 But you’re saying Newton, but that’s a scientific,

00:46:40 so there’s an engineering element to building Ava.

00:46:44 But just to say, what Ex Machina is really,

00:46:48 it’s a thought experiment.

00:46:50 I mean, so it’s a construction

00:46:52 of putting four people in a house.

00:46:56 Nothing about Ex Machina adds up in all sorts of ways,

00:47:00 in as much as the, who built the machine parts?

00:47:03 Did the people building the machine parts

00:47:05 know what they were creating and how did they get there?

00:47:08 And it’s a thought experiment.

00:47:11 So it doesn’t stand up to scrutiny of that sort.

00:47:14 I don’t think it’s actually that interesting of a question,

00:47:18 but it’s brought up so often that I had to ask it

00:47:22 because that’s exactly how I felt after a while.

00:47:27 There’s something about, there was almost a defense,

00:47:30 like I watched your movie the first time

00:47:33 and at least for the first little while in a defensive way,

00:47:36 like how dare this person try to step into the AI space

00:47:40 and try to beat Kubrick.

00:47:43 That’s the way I was thinking,

00:47:45 because it comes off as a movie that really is going

00:47:48 after the deep fundamental questions about AI.

00:47:50 So there’s a kind of thing nerds do,

00:47:53 like it’s automatically searching for the flaws.

00:47:57 And I did.

00:47:58 I do exactly the same.

00:48:00 I think in Annihilation, in the other movie,

00:48:03 I was able to free myself from that much quicker,

00:48:06 that it is a thought experiment.

00:48:08 There’s, who cares if there’s batteries

00:48:10 that don’t run out, right?

00:48:12 Those kinds of questions, that’s the whole point.

00:48:14 But it’s nevertheless something I wanted to bring up.

00:48:18 Yeah, it’s a fair thing to bring up.

00:48:20 For me, you hit on the lone genius thing.

00:48:24 For me, it was actually, people always said,

00:48:27 Ex Machina makes this big leap in terms of where AI

00:48:31 has got to and also what AI would look like

00:48:34 if it got to that point.

00:48:36 There’s another one, which is just robotics.

00:48:38 I mean, look at the way Ava walks around a room.

00:48:42 It’s like, forget it, building that.

00:48:44 That’s also got to be a very, very long way off.

00:48:47 And if you did get there, would it look anything like that?

00:48:49 It’s a thought experiment.

00:48:50 Actually, I disagree with you.

00:48:51 I think the way, as a ballerina, Alicia Vikander,

00:48:56 a brilliant actress, moves around,

00:49:01 we’re very far away from creating that.

00:49:03 But the way she moves around is exactly

00:49:06 the definition of perfection for a roboticist.

00:49:08 It’s like smooth and efficient.

00:49:09 So it is where we wanna get, I believe.

00:49:12 I think, so I hang out with a lot

00:49:15 of like humanoid robotics people.

00:49:16 They love elegant, smooth motion like that.

00:49:20 That’s their dream.

00:49:21 So the way she moved is actually how I believe

00:49:23 they would dream for a robot to move.

00:49:25 It might not be that useful to move in that sort of way,

00:49:29 but that is the definition of perfection

00:49:32 in terms of movement.

00:49:34 Drawing inspiration from real life.

00:49:35 So for Devs, for Ex Machina,

00:49:39 look at characters like Elon Musk.

00:49:42 What do you think about the various big technological

00:49:44 efforts of Elon Musk and others like him

00:49:48 and that he’s involved with such as Tesla,

00:49:51 SpaceX, Neuralink, do you see any of that technology

00:49:55 potentially defining the future worlds

00:49:57 you create in your work?

00:49:58 So Tesla’s automation, SpaceX’s space exploration,

00:50:02 Neuralink’s brain-machine interface,

00:50:05 somehow a merger of biological and electrical systems.

00:50:09 In a way, I’m influenced by that almost by definition

00:50:13 because that’s the world I live in.

00:50:15 And this is the thing that’s happening in that world.

00:50:17 And I also feel supportive of it.

00:50:20 So I think amongst various things,

00:50:24 Elon Musk has done, I’m almost sure he’s done

00:50:28 a very, very good thing with Tesla for all of us.

00:50:33 It’s really kicked all the other car manufacturers

00:50:36 in the face, it’s kicked the fossil fuel industry

00:50:39 in the face and they needed kicking in the face

00:50:42 and he’s done it.

00:50:43 So that’s the world he’s part of creating

00:50:47 and I live in that world, just bought a Tesla in fact.

00:50:51 And so does that play into whatever I then make?

00:50:57 In some ways it does, partly because I try to be a writer

00:51:03 who avoids something filmmakers quite often do: being fixated

00:51:07 on the films they grew up with

00:51:09 and sort of remaking those films in some ways.

00:51:11 I’ve always tried to avoid that.

00:51:13 And so I looked at the real world to get inspiration

00:51:17 and as much as possible sort of by living, I think.

00:51:21 And so yeah, I’m sure.

00:51:24 Which of the directions do you find most exciting?

00:51:28 Space travel.

00:51:30 Space travel.

00:51:31 So you haven’t really explored space travel in your work.

00:51:36 You’ve said something like if you had unlimited amount

00:51:39 of money, I think I read in an AMA that you would make

00:51:43 like a multiyear series, Space Wars, or something like that.

00:51:47 So what is it that excites you about space exploration?

00:51:50 Well, because if we have any sort of long term future,

00:51:56 it’s that, it just simply is that.

00:52:00 If energy and matter are linked up in the way

00:52:04 we think they’re linked up, we’ll run out if we don’t move.

00:52:09 So we gotta move.

00:52:11 And, but also, how can we not?

00:52:15 It’s built into us to do it or die trying.

00:52:21 I was on Easter Island a few months ago,

00:52:27 which is, as I’m sure you know, in the middle of the Pacific

00:52:30 and difficult for people to have got to,

00:52:32 but they got there.

00:52:34 And I did think a lot about the way those boats

00:52:37 must have set out into something like space.

00:52:42 It was the ocean and how sort of fundamental

00:52:47 that was to the way we are.

00:52:49 And it’s the one that most excites me

00:52:53 because it’s the one I want most to happen.

00:52:55 It’s the thing, it’s the place

00:52:57 where we could get to as humans.

00:52:59 Like in a way I could live with us never really,

00:53:03 fully, unlocking the nature of consciousness.

00:53:06 I’d like to know, I’m really curious,

00:53:09 but if we never leave the solar system

00:53:12 and if we never get further out into this galaxy

00:53:14 or maybe even galaxies beyond our galaxy,

00:53:16 that would, that feels sad to me

00:53:20 because it’s so limiting.

00:53:24 Yeah, there’s something hopeful and beautiful

00:53:26 about reaching out any kind of exploration,

00:53:30 reaching out across Earth centuries ago

00:53:33 and then reaching out into space.

00:53:35 So what do you think about colonization of Mars?

00:53:37 So go to Mars, does that excite you

00:53:38 the idea of a human being stepping foot on Mars?

00:53:41 It does, it absolutely does.

00:53:43 But in terms of what would really excite me,

00:53:45 it would be leaving the solar system

00:53:47 in as much as that I just think,

00:53:49 I think we already know quite a lot about Mars.

00:53:52 But yes, listen, if it happened,

00:53:55 that would be... I hope I see it in my lifetime.

00:53:58 I really hope I see it in my lifetime.

00:54:01 So it would be a wonderful thing.

00:54:03 Without giving anything away,

00:54:05 the series begins with the use of quantum computers.

00:54:11 The new series, that is,

00:54:13 begins with the use of quantum computers

00:54:14 to simulate basic living organisms.

00:54:17 Or actually, I don’t know if quantum computers are used,

00:54:19 but basic living organisms are simulated on a screen.

00:54:22 It’s a really cool kind of demo.

00:54:24 Yeah, that’s right.

00:54:25 They’re using, yes, they are using a quantum computer

00:54:28 to simulate a nematode, yeah.

00:54:31 So returning to our discussion of simulation,

00:54:34 or thinking of the universe as a computer,

00:54:38 do you think the universe is deterministic?

00:54:41 Is there a free will?

00:54:43 So with the qualification of what do I know?

00:54:46 Cause I’m a layman, right?

00:54:48 Lay person.

00:54:49 But with a big imagination.

00:54:51 Thanks.

00:54:52 With that qualification,

00:54:54 yup, I think the universe is deterministic

00:54:56 and I see absolutely,

00:54:58 I cannot see how free will fits into that.

00:55:02 So yes, deterministic, no free will.

00:55:05 That would be my position.

00:55:07 And how does that make you feel?

00:55:09 It partly makes me feel that it’s exactly in keeping

00:55:12 with the way these things tend to work out,

00:55:14 which is that we have an incredibly strong sense

00:55:17 that we do have free will.

00:55:20 And just as we have an incredibly strong sense

00:55:24 that time is a constant,

00:55:26 it turns out probably not to be the case.

00:55:30 So we’re definitely wrong in the case of time.

00:55:31 But the problem I always have with free will

00:55:36 is that

00:55:37 I can never seem to find the place

00:55:40 where it is supposed to reside.

00:55:43 And yet you explore it.

00:55:45 Just a bit, very, very lightly,

00:55:46 but we have something we can call free will,

00:55:49 it’s just not the thing that we think it is.

00:55:51 But free will, so, do you mean

00:55:54 what we call free will is just...?

00:55:55 What we call it is the illusion of it.

00:55:56 And that’s a subjective experience of the illusion.

00:56:00 Which is a useful thing to have.

00:56:01 And it partly comes down to,

00:56:04 although we live in a deterministic universe,

00:56:06 our brains are not very well equipped

00:56:08 to fully determine the deterministic universe.

00:56:11 So we’re constantly surprised

00:56:12 and feel like we’re making snap decisions

00:56:15 based on imperfect information.

00:56:17 So that feels a lot like free will.

00:56:19 It just isn’t.

00:56:21 That would be my guess.

00:56:24 So in that sense, your sort of sense

00:56:27 is that you can unroll the universe forward or backward

00:56:30 and you will see the same thing.

00:56:33 And you would, I mean, that notion.

00:56:36 Yeah, sort of, sort of.

00:56:38 But yeah, sorry, go ahead.

00:56:40 I mean, that notion is a bit uncomfortable

00:56:44 to think about.

00:56:45 That you can roll it back.

00:56:50 And forward, and...

00:56:53 Well, if you were able to do it,

00:56:55 it would certainly have to be a quantum computer.

00:56:58 Something that worked in a quantum mechanical way

00:57:00 in order to understand a quantum mechanical system, I guess.

00:57:07 And so that unrolling, there might be a multiverse thing

00:57:09 where there’s a bunch of branching.

00:57:11 Well, exactly.

00:57:12 Because it wouldn’t follow that every time

00:57:14 you roll it back or forward,

00:57:15 you’d get exactly the same result.

00:57:17 Which is another thing that’s hard to wrap your mind around.

00:57:21 So yeah, but that, yes.

00:57:24 But essentially what you just described, that...

00:57:27 The yes forwards and yes backwards,

00:57:29 but you might get a slightly different result

00:57:31 or a very different result.

00:57:33 Or very different.
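
As a purely illustrative aside, this is my own toy sketch and not anything from the conversation or the series: in a classical, deterministic setting, rolling a system back and forward really is something you can do in a few lines of Python, because time-reversible dynamics let you recover the initial state exactly by rerunning the integrator with a negated timestep. The quantum branching just discussed is precisely what would break this exact retracing.

    import numpy as np

    # A frictionless pendulum integrated with the leapfrog method,
    # which is time-reversible: run it forward, then flip the sign
    # of the timestep, and you land back on the initial state.
    def leapfrog(theta, omega, dt, steps):
        for _ in range(steps):
            omega -= 0.5 * dt * np.sin(theta)  # half kick
            theta += dt * omega                # drift
            omega -= 0.5 * dt * np.sin(theta)  # half kick
        return theta, omega

    theta0, omega0 = 1.0, 0.0
    theta, omega = leapfrog(theta0, omega0, dt=0.01, steps=10_000)  # forward
    theta, omega = leapfrog(theta, omega, dt=-0.01, steps=10_000)   # backward
    print(theta - theta0, omega - omega0)  # ~0, up to floating point error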

00:57:34 Along the same lines, you’ve explored

00:57:36 some really deep scientific ideas in this new series.

00:57:39 And I mean, just in general,

00:57:41 you’re unafraid to ground yourself

00:57:44 in some of the most amazing scientific ideas of our time.

00:57:49 What are the things you’ve learned,

00:57:51 or ideas you find beautiful and mysterious,

00:57:53 about quantum mechanics, the multiverse,

00:57:55 string theory, quantum computing?

00:57:58 Well, I would have to say every single thing

00:58:01 I’ve learned is beautiful.

00:58:03 And one of the motivators for me is that

00:58:06 I think that people tend not to see scientific thinking

00:58:13 as being essentially poetic and lyrical.

00:58:17 But I think that is literally exactly what it is.

00:58:20 And I think the idea of entanglement

00:58:23 or the idea of superpositions,

00:58:25 or the fact that you could even demonstrate a superposition

00:58:28 or have a machine that relies on the existence

00:58:31 of superpositions in order to function,

00:58:33 to me is almost indescribably beautiful.

00:58:39 It fills me with awe.

00:58:41 It fills me with awe.

00:58:42 And also, it’s not just a sort of grand, massive awe;

00:58:49 it’s also delicate.

00:58:51 It’s very, very delicate and subtle.

00:58:54 And it has these beautiful sort of nuances in it.

00:58:59 And also these completely paradigm changing

00:59:03 thoughts and truths.

00:59:04 So it’s as good as it gets as far as I can tell.

00:59:08 So broadly everything.

00:59:10 That doesn’t mean I believe everything I read

00:59:12 in quantum physics.

00:59:14 Because obviously a lot of the interpretations

00:59:17 are completely in conflict with each other.

00:59:18 And who knows whether string theory

00:59:22 will turn out to be a good description or not.

00:59:25 But the beauty in it, it seems undeniable.

00:59:29 And I do wish people more readily understood

00:59:34 how beautiful and poetic science is, I would say.

00:59:41 Science is poetry.

00:59:44 In terms of quantum computing being used to simulate things,

00:59:51 or just in general, the idea of simulating

00:59:54 small parts of our world,

00:59:56 which current physicists are actually really excited about,

01:00:00 simulating small quantum mechanical systems

01:00:02 on quantum computers,

01:00:03 but scaling that up to something bigger,

01:00:05 like simulating life forms:

01:00:09 what are the possible trajectories

01:00:11 of that going wrong or going right,

01:00:14 if you unroll that into the future?
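
An aside for concreteness, again a sketch of the idea rather than anything from the show: simulating a small quantum mechanical system means computing how a quantum state evolves under a Hamiltonian, something a quantum computer does natively but a classical machine can only do exactly for a handful of qubits. A minimal Python illustration, with an arbitrarily chosen toy Hamiltonian (one qubit evolving under Pauli-X), looks like this:

    import numpy as np
    from scipy.linalg import expm

    # Toy Hamiltonian for a single two-level system (hbar = 1): Pauli-X.
    H = np.array([[0.0, 1.0],
                  [1.0, 0.0]])

    psi0 = np.array([1.0, 0.0], dtype=complex)  # start in the |0> state

    # Schrodinger evolution: |psi(t)> = exp(-iHt) |psi(0)>.
    for t in np.linspace(0.0, np.pi, 5):
        psi_t = expm(-1j * H * t) @ psi0
        prob_1 = abs(psi_t[1]) ** 2  # probability of measuring |1>
        print(f"t = {t:.2f}  P(|1>) = {prob_1:.3f}")

The catch, and the reason physicists want quantum hardware, is that this exact state vector doubles in size with every added qubit, so the classical approach collapses long before anything like a life form.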

01:00:17 Well, if, a bit like Ava and her robotics,

01:00:21 you park the sheer complexity of what you’re trying to do,

01:00:26 the issues are, I think it would have a profound effect.

01:00:35 If you were able to have a machine

01:00:37 that was able to project forwards and backwards accurately,

01:00:40 it would, in an empirical way,

01:00:42 demonstrate that you don’t have free will.

01:00:45 So the first thing that would happen is people

01:00:47 would have to really take on a very, very different idea

01:00:51 of what they were.

01:00:53 The thing that they truly, truly believe they are,

01:00:56 they are not.

01:00:57 And so that I suspect would be very, very disturbing

01:01:01 to a lot of people.

01:01:02 Do you think that has a positive or negative effect

01:01:04 on society, the realization that you are not...

01:01:08 that you cannot control your actions, essentially,

01:01:11 I guess, is the way that could be interpreted?

01:01:13 Yeah, although in some ways we instinctively understand

01:01:17 that already because in the example I gave you of the kid

01:01:20 in the stabbing, we would all understand that that kid

01:01:23 was not really fully in control of their actions.

01:01:25 So it’s not an idea that’s entirely alien to us, but...

01:01:29 I don’t know if we understand that.

01:01:31 I think there’s a bunch of people who see the world

01:01:35 that way, but not everybody.

01:01:37 Yes, true, of course true.

01:01:39 But what this machine would do is prove it beyond any doubt

01:01:43 because someone would say, well, I don’t believe that’s true.

01:01:45 And then you’d predict, well, in 10 seconds,

01:01:48 you’re gonna do this.

01:01:49 And they’d say, no, no, I’m not.

01:01:50 And then they’d do it.

01:01:51 And then determinism would have played its part,

01:01:53 or something like that.

01:01:56 Actually, the exact terms of that thought experiment

01:02:00 probably wouldn’t play out, but still, broadly speaking,

01:02:03 you could predict something happening in another room,

01:02:06 sort of unseen, I suppose,

01:02:08 that foreknowledge would not allow you to affect.

01:02:10 So what effect would that have?

01:02:13 I think people would find it very disturbing,

01:02:15 but then after they’d got over their sense

01:02:17 of being disturbed, which by the way,

01:02:21 I don’t even think you need a machine

01:02:22 to take this idea on board.

01:02:24 But after they’ve got over that,

01:02:26 they’d still understand that even though I have no free will

01:02:29 and my actions are in effect already determined,

01:02:33 I still feel things.

01:02:36 I still care about stuff.

01:02:39 I remember my daughter saying to me,

01:02:43 she’d got hold of the idea that my view of the universe

01:02:46 made it meaningless.

01:02:48 And she said, well, then it’s meaningless.

01:02:49 And I said, well, I can prove it’s not meaningless

01:02:52 because you mean something to me and I mean something to you.

01:02:56 So it’s not completely meaningless

01:02:58 because there is a bit of meaning contained

01:03:00 within this space.

01:03:01 And so it’s the same with the lack of free will:

01:03:06 you could think, well, this robs me of everything I am.

01:03:08 And then you’d say, well, no, it doesn’t

01:03:09 because you still like eating cheeseburgers

01:03:12 and you still like going to see the movies.

01:03:13 And so how big a difference does it really make?

01:03:17 But I think initially people would find it very disturbing.

01:03:21 I think that what would come,

01:03:24 if you could really unlock everything

01:03:27 with a determinism machine, there’d be this wonderful wisdom

01:03:30 that would come from it.

01:03:31 And I’d rather have that than not.

01:03:34 So that’s a really good example of a technology

01:03:37 revealing to us humans something fundamental about our world,

01:03:40 about our society.

01:03:41 So it’s almost this creation

01:03:45 is helping us understand ourselves.

01:03:47 And the same could be said about artificial intelligence.

01:03:51 So what do you think us creating something like Ava

01:03:55 will help us understand about ourselves?

01:03:58 How will that change society?

01:04:00 Well, I would hope it would teach us some humility.

01:04:05 Humans are very big on exceptionalism.

01:04:07 America is constantly proclaiming itself

01:04:12 to be the greatest nation on earth,

01:04:15 which it may feel like if you’re an American,

01:04:18 but it may not feel like that if you’re from Finland,

01:04:20 because there’s all sorts of things

01:04:21 you dearly love about Finland.

01:04:23 And exceptionalism is usually bullshit.

01:04:28 Probably not always.

01:04:29 If we both sat here,

01:04:30 we could find a good example of something that isn’t,

01:04:31 but as a rule of thumb...

01:04:34 And what it would do

01:04:36 is it would teach us some humility, which,

01:04:40 actually, is often what science does in a funny way.

01:04:42 It makes us more and more interesting,

01:04:44 but it makes us a smaller and smaller part

01:04:46 of the thing that’s interesting.

01:04:48 And I don’t mind that humility at all.

01:04:52 I don’t think it’s a bad thing.

01:04:53 Our excesses don’t tend to come from humility.

01:04:57 Our excesses come from the opposite,

01:04:59 megalomania and stuff.

01:05:00 We tend to think of consciousness

01:05:02 as having some form of exceptionalism attached to it.

01:05:06 I suspect if we ever unravel it,

01:05:09 it will turn out to be less than we thought in a way.

01:05:13 And perhaps your very own exceptionalist assertion

01:05:17 earlier on in our conversation

01:05:19 that consciousness is something that belongs to us humans,

01:05:23 or not humans, but living organisms,

01:05:25 maybe you will one day find out

01:05:27 that consciousness is in everything.

01:05:30 And that will humble you.

01:05:32 If that was true, it would certainly humble me,

01:05:35 although maybe, almost maybe, I don’t know.

01:05:39 I don’t know what effect that would have.

01:05:45 My understanding of that principle is along the lines of,

01:05:48 say, that an electron has a preferred state,

01:05:52 or it may or may not pass through a bit of glass.

01:05:56 It may reflect off, or it may go through,

01:05:58 or something like that.

01:05:59 And so that feels as if a choice has been made.

01:06:07 But if I’m going down the fully deterministic route,

01:06:10 I would say there’s just an underlying determinism

01:06:13 that has defined that,

01:06:14 that has defined the preferred state,

01:06:16 or the reflection or non reflection.

01:06:18 But look, yeah, you’re right.

01:06:19 If it turned out that there was a thing

01:06:22 that it was like to be the sun,

01:06:23 then I’d be amazed and humbled,

01:06:27 and I’d be happy to be both, that sounds pretty cool.

01:06:30 And you’ll say the same thing as you said to your daughter,

01:06:32 but it nevertheless feels like something to be me,

01:06:35 and that’s pretty damn good.

01:06:39 So Kubrick created many masterpieces,

01:06:42 including The Shining, Dr. Strangelove, A Clockwork Orange.

01:06:46 But to me, he will be remembered, I think,

01:06:48 by many, 100 years from now, for 2001: A Space Odyssey.

01:06:53 I would say that’s his greatest film.

01:06:54 I agree.

01:06:55 And you are incredibly humble.

01:07:00 I listened to a bunch of your interviews,

01:07:02 and I really appreciate that you’re humble

01:07:04 in your creative efforts and your work.

01:07:07 But if I were to force you at gunpoint...

01:07:11 Do you have a gun?

01:07:13 You don’t know that, the mystery.

01:07:16 It’s to imagine 100 years out into the future.

01:07:20 What will Alex Garland be remembered for

01:07:23 from something you’ve created already,

01:07:25 or something you feel, somewhere deep inside,

01:07:28 you may still create?

01:07:30 Well, okay, well, I’ll take the question in the spirit

01:07:33 it was asked, which is very generous.

01:07:36 Gunpoint.

01:07:37 Yeah.

01:07:42 What I try to do, and therefore what I hope,

01:07:48 yeah, if I’m remembered, what I might be remembered for,

01:07:50 is as someone who participates in a conversation.

01:07:55 And I think that often what happens

01:07:58 is people don’t participate in conversations,

01:08:00 they make proclamations, they make statements,

01:08:04 and people can either react against the statement

01:08:06 or can fall in line behind it.

01:08:08 And I don’t like that.

01:08:10 So I want to be part of a conversation.

01:08:13 I take as a sort of basic principle,

01:08:15 I think I take lots of my cues from science,

01:08:17 but one of the best ones, it seems to me,

01:08:19 is that when a scientist has something proved wrong

01:08:22 that they previously believed in,

01:08:24 they then have to abandon that position.

01:08:26 So I’d like to be someone who is allied

01:08:28 to that sort of thinking.

01:08:30 So part of an exchange of ideas.

01:08:34 And the exchange of ideas for me is something like,

01:08:38 people in your world, show me things

01:08:40 about how the world works.

01:08:42 And then I say, this is how I feel

01:08:44 about what you’ve told me.

01:08:46 And then other people can react to that.

01:08:47 And it’s not to say this is how the world is.

01:08:52 It’s just to say, it is interesting

01:08:54 to think about the world in this way.

01:08:56 And the conversation is one of the things

01:08:59 I’m really hopeful about in your works.

01:09:02 The conversation you’re having is with the viewer,

01:09:05 in the sense that you’re bringing back,

01:09:10 you and several others, but you very much so,

01:09:13 a sort of intellectual depth to cinema, and now to series,

01:09:21 sort of allowing film to be something that,

01:09:26 yeah, sparks a conversation, is a conversation,

01:09:29 lets people think, allows them to think.

01:09:32 But also, it’s very important for me

01:09:35 that if that conversation is gonna be a good conversation,

01:09:38 what that must involve is that someone like you

01:09:42 who understands AI, and I imagine understands a lot

01:09:45 about quantum mechanics, if they then watch the narrative,

01:09:48 they feel, yes, this is a fair account.

01:09:52 So it is a worthy addition to the conversation.

01:09:55 That for me is hugely important.

01:09:57 I’m not interested in getting that stuff wrong.

01:09:59 I’m only interested in trying to get it right.

01:10:04 Alex, it was truly an honor to talk to you.

01:10:06 I really appreciate it.

01:10:07 I really enjoyed it.

01:10:08 Thank you so much.

01:10:08 Thank you.

01:10:09 Thanks, man.

01:10:10 Thanks for listening to this conversation

01:10:13 with Alex Garland, and thank you

01:10:15 to our presenting sponsor, Cash App.

01:10:17 Download it, use code LexPodcast, you’ll get $10,

01:10:21 and $10 will go to FIRST, an organization

01:10:23 that inspires and educates young minds

01:10:26 to become science and technology innovators of tomorrow.

01:10:29 If you enjoy this podcast, subscribe on YouTube,

01:10:32 give it five stars on Apple Podcast,

01:10:34 support it on Patreon, or simply connect with me

01:10:36 on Twitter, at Lex Friedman.

01:10:38 And now, let me leave you with a question from Ava,

01:10:43 the central artificial intelligence character

01:10:45 in the movie Ex Machina, that she asked

01:10:48 during her Turing test.

01:10:51 What will happen to me if I fail your test?

01:10:54 Thank you for listening, and hope to see you next time.