Transcript
00:00:00 The following is a conversation with David Chalmers.
00:00:02 He’s a philosopher and cognitive scientist
00:00:05 specializing in the areas of philosophy of mind,
00:00:08 philosophy of language, and consciousness.
00:00:11 He’s perhaps best known for formulating
00:00:13 the hard problem of consciousness,
00:00:15 which could be stated as why does the feeling
00:00:17 which accompanies awareness of sensory information
00:00:20 exist at all?
00:00:22 Consciousness is almost entirely a mystery.
00:00:25 Many people who worry about AI safety and ethics
00:00:28 believe that, in some form, consciousness can
00:00:31 and should be engineered into AI systems of the future.
00:00:35 So while there’s much mystery, disagreement,
00:00:38 and discoveries yet to be made about consciousness,
00:00:40 these conversations, while fundamentally philosophical
00:00:44 in nature, may nevertheless be very important
00:00:47 for engineers of modern AI systems to engage in.
00:00:51 This is the Artificial Intelligence Podcast.
00:00:53 If you enjoy it, subscribe on YouTube,
00:00:56 give it five stars on Apple Podcasts,
00:00:58 support it on Patreon, or simply connect with me
00:01:00 on Twitter at Lex Fridman, spelled F R I D M A N.
00:01:05 As usual, I’ll do one or two minutes of ads now
00:01:08 and never any ads in the middle
00:01:09 that can break the flow of the conversation.
00:01:11 I hope that works for you
00:01:13 and doesn’t hurt the listening experience.
00:01:15 This show is presented by Cash App,
00:01:17 the number one finance app in the App Store.
00:01:19 When you get it, use code LEXPODCAST.
00:01:23 Cash App lets you send money to friends,
00:01:25 buy Bitcoin, and invest in the stock market
00:01:27 with as little as one dollar.
00:01:29 Brokerage services are provided by Cash App Investing,
00:01:32 a subsidiary of Square and a member of SIPC.
00:01:36 Since Cash App does fractional share trading,
00:01:38 let me mention that the order execution algorithm
00:01:40 that works behind the scenes to create the abstraction
00:01:43 of fractional orders is an algorithmic marvel.
00:01:46 So big props to the Cash App engineers
00:01:49 for solving a hard problem that, in the end,
00:01:51 provides an easy interface that takes a step up
00:01:54 to the next layer of abstraction over the stock market,
00:01:57 making trading more accessible for new investors
00:01:59 and diversification much easier.
00:02:02 If you get Cash App from the App Store or Google Play
00:02:05 and use the code LEXPODCAST, you’ll get $10,
00:02:08 and Cash App will also donate $10 to FIRST,
00:02:11 one of my favorite organizations
00:02:13 that is helping to advance robotics and STEM education
00:02:16 for young people around the world.
00:02:18 And now, here’s my conversation with David Chalmers.
00:02:23 Do you think we’re living in a simulation?
00:02:25 I don’t rule it out.
00:02:27 There’s probably gonna be a lot of simulations
00:02:29 in the history of the cosmos.
00:02:32 If the simulation is designed well enough,
00:02:34 it’ll be indistinguishable from a non-simulated reality.
00:02:39 And although we could keep searching for evidence
00:02:43 that we’re not in a simulation,
00:02:46 any of that evidence in principle could be simulated.
00:02:48 So I think it’s a possibility.
00:02:50 But do you think the thought experiment is interesting
00:02:53 or useful to calibrate how we think
00:02:56 about the nature of reality?
00:02:58 Yeah, I definitely think it’s interesting and useful.
00:03:01 In fact, I’m actually writing a book about this right now,
00:03:03 all about the simulation idea,
00:03:05 using it to shed light
00:03:07 on a whole bunch of philosophical questions.
00:03:10 So the big one is how do we know anything
00:03:13 about the external world?
00:03:15 Descartes said, maybe you’re being fooled by an evil demon
00:03:19 who’s stimulating your brain into thinking
00:03:21 all this stuff is real when in fact it’s all made up.
00:03:25 Well, the modern version of that is,
00:03:28 how do you know you’re not in a simulation?
00:03:30 Then the thought is, if you’re in a simulation,
00:03:33 none of this is real.
00:03:34 So that’s teaching you something about knowledge.
00:03:37 How do you know about the external world?
00:03:39 I think there’s also really interesting questions
00:03:41 about the nature of reality right here.
00:03:43 If we are in a simulation, is all this real?
00:03:46 Is there really a table here?
00:03:48 Is it really a microphone?
00:03:49 Do I really have a body?
00:03:50 The standard view would be, no, we don’t.
00:03:54 None of this would be real.
00:03:55 My view is actually that’s wrong.
00:03:56 And even if we are in a simulation, all of this is real.
00:03:59 That’s why I called this reality 2.0.
00:04:01 New version of reality, different version of reality,
00:04:04 still reality.
00:04:05 So what’s the difference between quote unquote,
00:04:08 real world and the world that we perceive?
00:04:12 So we interact with the world by perceiving it.
00:04:17 It only really exists through the window
00:04:22 of our perception system and in our mind.
00:04:25 So what’s the difference between something
00:04:27 that’s quote unquote real, that exists perhaps
00:04:30 without us being there, and the world as you perceive it?
00:04:36 Well the world as we perceive it is a very simplified
00:04:39 and distorted version of what’s going on underneath.
00:04:42 We already know that from just thinking about science.
00:04:45 You don’t see too many obviously quantum mechanical effects
00:04:48 in what we perceive, but we still know quantum mechanics
00:04:51 is going on under all things.
00:04:53 So I like to think the world we perceive
00:04:55 is this very kind of simplified picture of colors
00:05:00 and shapes existing in space and so on.
00:05:04 That’s what the philosopher
00:05:07 Wilfrid Sellars called the manifest image.
00:05:09 The world as it seems to us, we already know
00:05:11 underneath all that is a very different scientific image
00:05:14 with atoms or quantum wave functions or super strings
00:05:19 or whatever the latest thing is.
00:05:22 And that’s the ultimate scientific reality.
00:05:24 So I think of the simulation idea as basically
00:05:28 another hypothesis about what the ultimate
00:05:31 say quasi scientific or metaphysical reality
00:05:34 is going on underneath the world of the manifest image.
00:05:37 The world of the manifest image is this very simple thing
00:05:41 that we interact with that’s neutral
00:05:43 on the underlying stuff of reality.
00:05:46 Science can help tell us about that.
00:05:48 Maybe philosophy can help tell us about that too.
00:05:51 And if we eventually take the red pill
00:05:53 and find out we’re in a simulation,
00:05:54 my view is that’s just another view
00:05:56 about what reality is made of.
00:05:58 The philosopher Immanuel Kant said,
00:06:00 what is the nature of the thing in itself?
00:06:02 I’ve got a glass here and it’s got all these,
00:06:05 it appears to me a certain way, a certain shape,
00:06:07 it’s liquid, it’s clear.
00:06:10 And he said, what is the nature of the thing
00:06:13 in itself?
00:06:14 Well, I think of the simulation idea,
00:06:15 it’s a hypothesis about the nature of the thing in itself.
00:06:18 It turns out if we’re in a simulation,
00:06:20 the thing in itself nature of this glass,
00:06:22 it’s okay, it’s actually a bunch of data structures
00:06:25 running on a computer in the next universe up.
00:06:28 Yeah, that’s what people tend to do
00:06:30 when they think about simulation.
00:06:31 They think about our modern computers
00:06:34 and somehow trivially crudely just scaled up in some sense.
00:06:39 But do you think the simulation,
00:06:44 I mean, in order to actually simulate
00:06:47 something as complicated as our universe
00:06:50 that’s made up of molecules and atoms
00:06:53 and particles and quarks and maybe even strings,
00:06:57 all of that would require something
00:06:59 just infinitely many orders of magnitude more
00:07:03 of scale and complexity.
00:07:06 Do you think we’re even able to even like conceptualize
00:07:12 what it would take to simulate our universe?
00:07:16 Or does it just slip into this idea
00:07:18 that you basically have to build a universe,
00:07:21 something so big to simulate it?
00:07:24 Does it get into this fuzzy area
00:07:26 that’s not useful at all?
00:07:28 Yeah, well, I mean, our universe
00:07:30 is obviously incredibly complicated.
00:07:33 And for us within our universe to build a simulation
00:07:37 of a universe as complicated as ours
00:07:40 is gonna have obvious problems here.
00:07:42 If the universe is finite,
00:07:44 there’s just no way that’s gonna work.
00:07:45 Maybe there’s some cute way to make it work
00:07:48 if the universe is infinite,
00:07:51 maybe an infinite universe could somehow simulate
00:07:53 a copy of itself, but that’s gonna be hard.
00:07:57 Nonetheless, just that we are in a simulation,
00:07:59 I think there’s no particular reason
00:08:01 why we have to think the simulating universe
00:08:04 has to be anything like ours.
00:08:06 You’ve said before that it might be,
00:08:09 so you could think of it in turtles all the way down.
00:08:12 You could think of the simulating universe
00:08:15 different than ours, but we ourselves
00:08:17 could also create another simulating universe.
00:08:20 So you said that there could be these
00:08:21 kind of levels of universes.
00:08:24 And you’ve also mentioned this hilarious idea,
00:08:27 maybe tongue in cheek, maybe not,
00:08:29 that there may be simulations within simulations,
00:08:31 arbitrarily stacked levels,
00:08:33 and that there may be, that we may be in level 42.
00:08:37 Oh yeah.
00:08:38 Along those stacks, referencing The Hitchhiker’s Guide
00:08:40 to the Galaxy.
00:08:41 If we’re indeed in a simulation within a simulation
00:08:45 at level 42, what do you think level zero looks like?
00:08:51 The originating universe.
00:08:52 I would expect that level zero is truly enormous.
00:08:55 I mean, not just, if it’s finite,
00:08:57 at some extraordinarily large finite capacity,
00:09:01 much more likely it’s infinite.
00:09:03 Maybe it’s got some very high cardinality
00:09:06 that enables it to support just any number of simulations.
00:09:11 So high degree of infinity at level zero,
00:09:14 slightly smaller degree of infinity at level one.
00:09:18 So by the time you get down to us at level 42,
00:09:21 maybe there’s plenty of room for lots of simulations
00:09:25 of finite capacity.
00:09:29 If the top universe is only a small finite capacity,
00:09:34 then obviously that’s gonna put very, very serious limits
00:09:36 on how many simulations you’re gonna be able to get running.
00:09:40 So I think we can certainly confidently say
00:09:42 that if we’re at level 42,
00:09:44 then the top level’s pretty damn big.
00:09:47 So it gets more and more constrained
00:09:49 as we get down levels, more and more simplified
00:09:52 and constrained and limited in resources.
00:09:54 Yeah, we still have plenty of capacity here.
00:09:56 What was it Feynman said?
00:09:58 He said there’s plenty of room at the bottom.
00:10:01 We’re still a number of levels above the degree
00:10:04 where there’s room for fundamental computing,
00:10:06 physical computing capacity,
00:10:08 quantum computing capacity at the bottom level.
00:10:11 So we’ve got plenty of room to play with
00:10:13 and we probably have plenty of room
00:10:15 for simulations of pretty sophisticated universes,
00:10:19 perhaps none as complicated as our universe,
00:10:22 unless our universe is infinite,
00:10:25 but still at the very least
00:10:27 for pretty serious finite universes,
00:10:29 but maybe universes somewhat simpler than ours,
00:10:31 unless of course we’re prepared to take certain shortcuts
00:10:35 in the simulation,
00:10:36 which might then increase the capacity significantly.
00:10:38 Do you think the human mind, us people,
00:10:42 in terms of the complexity of simulation
00:10:44 is at the height of what the simulation
00:10:47 might be able to achieve?
00:10:48 Like if you look at incredible entities
00:10:51 that could be created in this universe of ours,
00:10:54 do you have an intuition about
00:10:56 how incredible human beings are on that scale?
00:11:00 I think we’re pretty impressive,
00:11:02 but we’re not that impressive.
00:11:03 Are we above average?
00:11:06 I mean, I think human beings are at a certain point
00:11:09 in the scale of intelligence,
00:11:11 which made many things possible.
00:11:14 You get through evolution, through single cell organisms,
00:11:19 through fish and mammals and primates,
00:11:22 and something happens.
00:11:24 Once you get to human beings,
00:11:25 we’ve just reached that level
00:11:27 where we get to develop language,
00:11:29 we get to develop certain kinds of culture,
00:11:31 and we get to develop certain kinds of collective thinking
00:11:34 that has enabled all this amazing stuff to happen,
00:11:38 science and literature and engineering
00:11:40 and culture and so on.
00:11:43 So we’re just at the beginning of that
00:11:46 on the evolutionary threshold,
00:11:47 it’s kind of like we just got there,
00:11:49 who knows, a few thousand or tens of thousands of years ago.
00:11:54 So we’re probably just at the very beginning
00:11:56 for what’s possible there.
00:11:57 So I’m inclined to think among the scale
00:12:01 of intelligent beings,
00:12:02 we’re somewhere very near the bottom.
00:12:05 I would expect that, for example,
00:12:06 if we’re in a simulation,
00:12:08 then the simulators who created us
00:12:10 have got the capacity to be far more sophisticated.
00:12:14 If we’re at level 42,
00:12:15 who knows what the ones at level zero are like.
00:12:19 It’s also possible that this is the epitome
00:12:22 of what is possible to achieve.
00:12:24 So we as human beings see ourselves maybe as flawed,
00:12:27 see all the constraints, all the limitations,
00:12:29 but maybe that’s the magical, the beautiful thing.
00:12:32 Maybe those limitations are the essential elements
00:12:36 for an interesting sort of that edge of chaos,
00:12:39 that interesting existence,
00:12:41 that if you make us much more intelligent,
00:12:43 if you make us much more powerful
00:12:46 in any kind of dimension of performance,
00:12:50 maybe you lose something fundamental
00:12:52 that makes life worth living.
00:12:55 So you kind of have this optimistic view
00:12:57 that we’re this little baby,
00:13:00 that then there’s so much growth and potential,
00:13:03 but this could also be it.
00:13:05 This is the most amazing thing is us.
00:13:09 Maybe what you’re saying is consistent
00:13:11 with what I’m saying.
00:13:12 I mean, we could still have levels of intelligence
00:13:14 far beyond us,
00:13:15 but maybe those levels of intelligence on your view
00:13:17 would be kind of boring.
00:13:19 And we kind of get so good at everything
00:13:21 that life suddenly becomes unidimensional.
00:13:24 So we’re just inhabiting this one spot
00:13:26 of like maximal romanticism in the history of evolution.
00:13:30 You get to humans and it’s like, yeah,
00:13:32 and then years to come, our super intelligent descendants
00:13:34 are gonna look back at us and say,
00:13:37 those were the days when they just hit
00:13:39 the point of inflection and life was interesting.
00:13:42 I am an optimist.
00:13:43 So I’d like to think that if there is super intelligent
00:13:47 somewhere in the future,
00:13:49 they’ll figure out how to make life super interesting
00:13:51 and super romantic.
00:13:52 Well, you know what they’re gonna do.
00:13:54 So what they’re gonna do is they realize
00:13:56 how boring life is when you’re super intelligent.
00:13:58 So they create a new level of simulation
00:14:02 and sort of live through the things they’ve created
00:14:05 by watching them stumble about
00:14:09 in their flawed ways.
00:14:10 So maybe that’s, so you create a new level of simulation
00:14:13 every time you get really bored with how smart and.
00:14:17 This would be kind of sad though,
00:14:19 because it would show the peak of their existence
00:14:20 would be like watching simulations for entertainment.
00:14:23 It’s like saying the peak of our existence now is Netflix.
00:14:26 No, it’s all right.
00:14:27 A flip side of that could be the peak of our existence
00:14:31 for many people having children and watching them grow.
00:14:34 That becomes very meaningful.
00:14:35 Okay, you create a simulation that’s like creating a family.
00:14:38 Creating like, well, any kind of creation
00:14:40 is kind of a powerful act.
00:14:43 Do you think it’s easier to simulate the mind
00:14:46 or the universe?
00:14:47 So I’ve heard several people, including Nick Bostrom,
00:14:51 think about ideas of maybe you don’t need
00:14:54 to simulate the universe,
00:14:55 you can just simulate the human mind.
00:14:57 Or in general, just the distinction
00:15:00 between simulating the entirety of it,
00:15:02 the entirety of the physical world,
00:15:04 or just simulating the mind.
00:15:06 Which one do you see as more challenging?
00:15:09 Well, I think in some sense, the answer is obvious.
00:15:12 It has to be simpler to simulate the mind
00:15:15 than to simulate the universe,
00:15:16 because the mind is part of the universe.
00:15:18 And in order to fully simulate the universe,
00:15:20 you’re gonna have to simulate the mind.
00:15:22 So unless we’re talking about partial simulations.
00:15:25 And I guess the question is which comes first?
00:15:27 Does the mind come before the universe
00:15:29 or does the universe come before the mind?
00:15:32 So the mind could just be an emergent phenomena
00:15:36 in this universe.
00:15:37 So simulation is an interesting thing
00:15:42 that it’s not like creating a simulation perhaps
00:15:47 requires you to program every single thing
00:15:50 that happens in it.
00:15:51 It’s just defining a set of initial conditions
00:15:54 and rules based on which it behaves.
00:15:59 Simulating the mind requires you
00:16:01 to have a little bit more,
00:16:05 we’re now in a little bit of a crazy land,
00:16:07 but it requires you to understand
00:16:10 the fundamentals of cognition,
00:16:11 perhaps of consciousness,
00:16:13 of perception of everything like that,
00:16:16 that’s not created through some kind of emergence
00:16:23 from basic physics laws,
00:16:25 but more requires you to actually understand
00:16:27 the fundamentals of the mind.
00:16:29 How about if we said to simulate the brain?
00:16:31 The brain.
00:16:32 Rather than the mind.
00:16:33 So the brain is just a big physical system.
00:16:36 The universe is a giant physical system.
00:16:38 To simulate the universe at the very least,
00:16:40 you’re gonna have to simulate the brains
00:16:42 as well as all the other physical systems within it.
00:16:46 And it’s not obvious that the problems are any worse
00:16:50 for the brain than for other physical systems;
00:16:53 it’s a particularly complex physical system.
00:16:56 But if we can simulate arbitrary physical systems,
00:16:58 we can simulate brains.
00:16:59 There is this further question of whether,
00:17:02 when you simulate a brain,
00:17:03 will that bring along all the features of the mind with it?
00:17:07 Like will you get consciousness?
00:17:08 Will you get thinking?
00:17:09 Will you get free will?
00:17:11 And so on.
00:17:12 And that’s something philosophers have argued over
00:17:16 for years.
00:17:17 My own view is if you simulate the brain well enough,
00:17:20 that will also simulate the mind.
00:17:22 But yeah, there’s plenty of people who would say no.
00:17:24 You’d merely get like a zombie system,
00:17:27 a simulation of a brain without any true consciousness.
00:17:31 But for you, you put together a brain,
00:17:33 the consciousness comes with it, arises from it.
00:17:36 Yeah, I don’t think it’s obvious.
00:17:38 That’s your intuition.
00:17:39 My view is roughly that yeah,
00:17:41 what is responsible for consciousness,
00:17:43 it’s in the patterns of information processing and so on
00:17:46 rather than say the biology that it’s made of.
00:17:50 There’s certainly plenty of people out there
00:17:51 who think consciousness has to be say biological.
00:17:54 So if you merely replicate the patterns of information
00:17:57 processing in a nonbiological substrate,
00:17:59 you’ll miss what’s crucial for consciousness.
00:18:02 I mean, I just don’t think there’s any particular reason
00:18:04 to think that biology is special here.
00:18:07 You can imagine substituting the biology
00:18:09 for nonbiological systems, say silicon circuits
00:18:13 that play the same role.
00:18:15 The behavior will continue to be the same.
00:18:17 And I think just thinking about what is the true,
00:18:21 when I think about the connection,
00:18:22 the isomorphisms between consciousness and the brain,
00:18:25 the deepest connections to me seem to connect consciousness
00:18:28 to patterns of information processing,
00:18:30 not to specific biology.
00:18:32 So I at least adopted as my working hypothesis
00:18:35 that basically it’s the computation and the information
00:18:38 that matters for consciousness.
00:18:39 Same time, we don’t understand consciousness,
00:18:41 so all this could be wrong.
00:18:43 So the computation, the flow, the processing,
00:18:48 manipulation of information,
00:18:49 the process is where the consciousness,
00:18:54 the software is where the consciousness comes from,
00:18:56 not the hardware.
00:18:57 Roughly the software, yeah.
00:18:59 The patterns of information processing at least
00:19:01 in the hardware, which we could view as software.
00:19:05 It may not be something you can just like program
00:19:07 and load and erase and so on in the way we can
00:19:11 with ordinary software, but it’s something at the level
00:19:14 of information processing rather than at the level
00:19:16 of implementation.
00:19:17 So on that, what do you think of the experience of self,
00:19:22 just the experience of the world in a virtual world,
00:19:26 in virtual reality?
00:19:27 Is it possible that we can create sort of
00:19:33 offsprings of our consciousness by existing
00:19:36 in a virtual world long enough?
00:19:38 So yeah, can we be conscious in the same kind
00:19:44 of deep way that we are in this real world
00:19:47 by hanging out in a virtual world?
00:19:51 Yeah, well, the kind of virtual worlds we have now
00:19:54 are interesting but limited in certain ways.
00:19:58 In particular, they rely on us having a brain and so on,
00:20:01 which is outside the virtual world.
00:20:03 Maybe I’ll strap on my VR headset or just hang out
00:20:07 in a virtual world on a screen, but my brain
00:20:12 and then my physical environment might be simulated
00:20:16 if I’m in a virtual world, but right now,
00:20:18 there’s no attempt to simulate my brain.
00:20:21 There might be some non player characters
00:20:24 in these virtual worlds that have simulated
00:20:27 cognitive systems of certain kinds
00:20:29 that dictate their behavior, but mostly,
00:20:31 they’re pretty simple right now.
00:20:33 I mean, some people are trying to combine,
00:20:34 put a bit of AI in their non player characters
00:20:36 to make them smarter, but for now,
00:20:41 inside virtual world, the actual thinking
00:20:43 is interestingly distinct from the physics
00:20:46 of those virtual worlds.
00:20:47 In a way, actually, I like to think this is kind of
00:20:48 reminiscent of the way that Descartes
00:20:50 thought our physical world was.
00:20:52 There’s physics, and there’s the mind,
00:20:54 and they’re separate.
00:20:55 Now we think the mind is somehow connected
00:20:58 to physics pretty deeply, but in these virtual worlds,
00:21:01 there’s a physics of a virtual world,
00:21:03 and then there’s this brain which is totally
00:21:04 outside the virtual world that controls it
00:21:06 and interacts with it. When anyone exercises agency
00:21:10 in a video game, that’s actually somebody
00:21:12 outside the virtual world moving a controller,
00:21:14 controlling the interaction of things
00:21:16 inside the virtual world.
00:21:18 So right now, in virtual worlds,
00:21:20 the mind is somehow outside the world,
00:21:22 but you could imagine in the future,
00:21:25 once we have developed serious AI,
00:21:29 artificial general intelligence, and so on,
00:21:31 then we could come to virtual worlds
00:21:34 which have enough sophistication,
00:21:35 you could actually simulate a brain
00:21:38 or have a genuine AGI, which would then presumably
00:21:42 be able to act in equally sophisticated ways,
00:21:45 maybe even more sophisticated ways,
00:21:47 inside the virtual world to how it might
00:21:50 in the physical world, and then the question’s
00:21:52 gonna come along, that would be kind of a VR,
00:21:56 virtual world internal intelligence,
00:21:59 and then the question is could they have consciousness,
00:22:01 experience, intelligence, free will,
00:22:04 all the things that we have, and again,
00:22:06 my view is I don’t see why not.
00:22:08 To linger on it a little bit, I find virtual reality really
00:22:13 incredibly powerful, just even the crude virtual reality
00:22:15 we have now. Perhaps there are psychological effects
00:22:21 that make some people more amenable
00:22:23 to virtual worlds than others, but I find myself
00:22:26 wanting to stay in virtual worlds for the most part.
00:22:28 You do?
00:22:29 Yes.
00:22:30 With a headset or on a desktop?
00:22:32 No, with a headset.
00:22:33 Really interesting, because I am totally addicted
00:22:35 to using the internet and things on a desktop,
00:22:40 but when it comes to VR, with a headset,
00:22:43 I don’t typically use it for more than 10 or 20 minutes.
00:22:46 There’s something just slightly aversive about it, I find,
00:22:48 so I don’t, right now, even though I have Oculus Rift
00:22:52 and Oculus Quest and HTC Vive and Samsung, this and that.
00:22:55 You just don’t wanna stay in that world for long.
00:22:57 Not for extended periods.
00:22:58 You actually find yourself hanging out in that.
00:23:01 Something about, it’s both a combination
00:23:03 of just imagination and considering the possibilities
00:23:08 of where this goes in the future.
00:23:10 It feels like I want to almost prepare my brain for it.
00:23:17 I wanna explore sort of Disneyland
00:23:19 when it’s first being built in the early days,
00:23:23 and it feels like I’m walking around
00:23:27 almost imagining the possibilities,
00:23:31 and something through that process allows my mind
00:23:33 to really enter into that world,
00:23:36 but you say that the brain is external to that virtual world.
00:23:41 It is, strictly speaking, true, but…
00:23:46 If you’re in VR and you do brain surgery on an avatar,
00:23:50 and you’re gonna open up that skull,
00:23:51 what are you gonna find?
00:23:53 Sorry, nothing there.
00:23:53 Nothing.
00:23:54 The brain is elsewhere.
00:23:55 You don’t think it’s possible to kind of separate them,
00:23:59 and I don’t mean in a sense like Descartes,
00:24:02 like a hard separation, but basically,
00:24:06 do you think it’s possible with the brain outside
00:24:09 of the virtual realm, when you’re wearing a headset,
00:24:14 create a new consciousness for prolonged periods of time?
00:24:19 Really feel, like really, like forget
00:24:24 that your brain is outside.
00:24:26 So this is, okay, this is gonna be the case
00:24:27 where the brain is still outside.
00:24:29 It’s still outside.
00:24:30 But could living in the VR, I mean,
00:24:32 we already find this, right, with video games.
00:24:35 Exactly.
00:24:35 They’re completely immersive, and you get taken up
00:24:39 by living in those worlds,
00:24:40 and it becomes your reality for a while.
00:24:43 So they’re not completely immersive,
00:24:44 they’re just very immersive.
00:24:46 Completely immersive.
00:24:46 You don’t forget the external world, no.
00:24:48 Exactly, so that’s what I’m asking.
00:24:50 Do you think it’s almost possible
00:24:52 to really forget the external world?
00:24:55 Really, really immerse yourself.
00:24:58 To forget completely?
00:24:59 Why would we forget?
00:25:00 We got pretty good memories.
00:25:02 Maybe you can stop paying attention to the external world,
00:25:06 but this already happens a lot.
00:25:07 I go to work, and maybe I’m not paying attention
00:25:10 to my home life.
00:25:11 I go to a movie, and I’m immersed in that.
00:25:14 So that degree of immersion, absolutely.
00:25:17 But we still have the capacity to remember it,
00:25:19 to completely forget the external world.
00:25:21 I’m thinking that would probably take some,
00:25:23 I don’t know, some pretty serious drugs or something
00:25:25 to make your brain do that.
00:25:27 Is that possible?
00:25:28 So, I mean, I guess what I’m getting at
00:25:31 is consciousness truly a property
00:25:35 that’s tied to the physical brain?
00:25:41 Or can you create sort of different offspring,
00:25:45 copies of consciousnesses based on the worlds
00:25:47 that you enter?
00:25:49 Well, the way we’re doing it now,
00:25:51 at least with a standard VR, there’s just one brain.
00:25:54 Interacts with the physical world.
00:25:56 Plays a video game, puts on a video headset,
00:25:59 interacts with this virtual world.
00:26:01 And I think we’d typically say there’s one consciousness here
00:26:04 that nonetheless undergoes different environments,
00:26:07 takes on different characters in different environments.
00:26:11 This is already something that happens
00:26:13 in the nonvirtual world.
00:26:14 I might interact one way in my home life,
00:26:17 my work life, my social life, and so on.
00:26:21 So at the very least, that will happen
00:26:23 in a virtual world very naturally.
00:26:25 People sometimes adopt the character of avatars
00:26:30 very different from themselves,
00:26:32 maybe even a different gender, different race,
00:26:34 different social background.
00:26:37 So that much is certainly possible.
00:26:38 I would see that as a single consciousness
00:26:41 is taking on different personas.
00:26:43 If you want literal splitting of consciousness
00:26:46 into multiple copies,
00:26:47 I think it’s gonna take something more radical than that.
00:26:50 Like maybe you can run different simulations of your brain
00:26:54 in different realities
00:26:56 and then expose them to different histories.
00:26:57 And then you’d split yourself
00:27:00 into 10 different simulated copies,
00:27:01 which then undergo different environments
00:27:04 and then ultimately do become 10
00:27:05 very different consciousnesses.
00:27:07 Maybe that could happen,
00:27:08 but now we’re not talking about something
00:27:10 that’s possible in the near term.
00:27:12 We’re gonna have to have brain simulations
00:27:14 and AGI for that to happen.
00:27:17 Got it.
00:27:18 So before any of that happens,
00:27:20 it’s fundamentally you see it as a singular consciousness,
00:27:23 even though it’s experiencing different environments,
00:27:26 virtual or not,
00:27:27 it’s still connected to same set of memories,
00:27:30 same set of experiences and therefore,
00:27:32 one sort of joint conscious system.
00:27:38 Yeah, or at least no more multiple
00:27:40 than the kind of multiple consciousness
00:27:42 that we get from inhabiting different environments
00:27:45 in a non virtual world.
00:27:46 So you said as a child,
00:27:48 you were a music color synesthete.
00:27:53 So where songs had colors for you.
00:27:56 So what songs had what colors?
00:27:59 You know, this is funny.
00:28:00 I didn’t pay much attention to this at the time,
00:28:04 but I’d listen to a piece of music
00:28:05 and I’d get some kind of imagery
00:28:07 of a kind of color.
00:28:11 The weird thing is mostly they were kind of murky,
00:28:16 dark greens and olive browns
00:28:18 and the colors weren’t all that interesting.
00:28:21 I don’t know what the reason is.
00:28:22 I mean, my theory is that maybe it’s like different chords
00:28:25 and tones provided different colors
00:28:27 and they all tended to get mixed together
00:28:29 into these somewhat uninteresting browns and greens.
00:28:33 But every now and then there’d be something
00:28:35 that had a really pure color.
00:28:37 So there’s just a few that I remember.
00:28:39 There was a Here, There and Everywhere by the Beatles
00:28:42 was bright red and has this very distinctive tonality
00:28:46 and chordal structure at the beginning.
00:28:49 So that was bright red.
00:28:50 There was this song by the Alan Parsons Project
00:28:53 called Ammonia Avenue that was kind of a pure blue.
00:28:59 Anyway, I’ve got no idea how this happened.
00:29:02 I didn’t even pay that much attention
00:29:03 until it went away when I was about 20.
00:29:05 This synesthesia often goes away.
00:29:07 So is it purely just the perception of a particular color
00:29:10 or was there a positive or negative experience?
00:29:14 Like was blue associated with a positive
00:29:16 and red with a negative?
00:29:17 Or is it simply the perception of color
00:29:20 associated with some characteristic of the song?
00:29:23 For me, I don’t remember a lot of association
00:29:25 with emotion or with value.
00:29:28 It was just this kind of weird and interesting fact.
00:29:30 I mean, at the beginning, I thought this was something
00:29:32 that happened to everyone, that songs have colors.
00:29:35 Maybe I mentioned it once or twice and people said, nope.
00:29:40 I thought it was kind of cool when there was one
00:29:42 that had one of these especially pure colors,
00:29:44 but it was only much later, once I became a grad student
00:29:48 thinking about the mind, that I read about this phenomenon
00:29:50 called synesthesia and I was like, hey, that’s what I had.
00:29:53 And now I occasionally talk about it in my classes,
00:29:56 in my intro class, and it still happens sometimes.
00:29:58 A student comes up and says, hey, I have that.
00:30:01 I never knew about that.
00:30:01 I never knew it had a name.
00:30:04 You said that it went away at age 20 or so.
00:30:08 And that you have a journal entry from around then saying,
00:30:13 songs don’t have colors anymore.
00:30:15 What happened?
00:30:16 What happened?
00:30:16 Yeah, it was definitely sad that it was gone.
00:30:18 In retrospect, it was like, hey, that’s cool.
00:30:20 The colors have gone.
00:30:21 Yeah, can you think about that for a little bit?
00:30:25 Do you miss those experiences?
00:30:27 Because it’s a fundamentally different set of experiences
00:30:31 that you no longer have.
00:30:35 Or is it just a nice thing to have had?
00:30:38 You don’t see them as that fundamentally different
00:30:40 than you visiting a new country and experiencing
00:30:43 new environments.
00:30:44 I guess for me, when I had these experiences,
00:30:47 they were somewhat marginal.
00:30:48 They were like a little bonus kind of experience.
00:30:51 I know there are people who have much more serious forms
00:30:55 of synesthesia than this for whom it’s absolutely central
00:30:58 to their lives.
00:30:59 I know people who, when they experience new people,
00:31:01 they have colors, maybe they have tastes and so on.
00:31:04 Every time they see writing, it has colors.
00:31:08 Some people, whenever they hear music,
00:31:09 it’s got a certain really rich color pattern.
00:31:15 For some synesthetes, it’s absolutely central.
00:31:17 I think if they lost it, they’d be devastated.
00:31:20 Again, for me, it was a very, very mild form
00:31:23 of synesthesia, and it’s like, yeah,
00:31:25 it’s like those interesting experiences
00:31:29 you might get under different altered states
00:31:31 of consciousness and so on.
00:31:33 It’s kind of cool, but not necessarily
00:31:36 the single most important experiences in your life.
00:31:39 Got it.
00:31:40 So let’s try to go to the very simplest question
00:31:43 that you’ve answered many a time,
00:31:45 but perhaps the simplest things can help us reveal,
00:31:48 even in time, some new ideas.
00:31:51 So what, in your view, is consciousness?
00:31:55 What is qualia?
00:31:56 What is the hard problem of consciousness?
00:32:00 Consciousness, I mean, the word is used many ways,
00:32:03 but the kind of consciousness that I’m interested in
00:32:06 is basically subjective experience,
00:32:10 what it feels like from the inside to be a human being
00:32:14 or any other conscious being.
00:32:16 I mean, there’s something it’s like to be me right now.
00:32:19 I have visual images that I’m experiencing.
00:32:23 I’m hearing my voice.
00:32:25 I’ve got maybe some emotional tone.
00:32:29 I’ve got a stream of thoughts running through my head.
00:32:31 These are all things that I experience
00:32:33 from the first person point of view.
00:32:36 I’ve sometimes called this the inner movie in the mind.
00:32:39 It’s not a perfect metaphor.
00:32:41 It’s not like a movie in every way,
00:32:44 and it’s very rich.
00:32:45 But yeah, it’s just direct, subjective experience.
00:32:49 And I call that consciousness,
00:32:51 or sometimes philosophers use the word qualia,
00:32:54 which you suggested.
00:32:55 People tend to use the word qualia
00:32:57 for things like the qualities of things like colors,
00:33:00 redness, the experience of redness
00:33:02 versus the experience of greenness,
00:33:04 the experience of one taste or one smell versus another,
00:33:08 the experience of the quality of pain.
00:33:10 And yeah, a lot of consciousness
00:33:12 is the experience of those qualities.
00:33:17 Well, consciousness is bigger,
00:33:18 the entirety of any kind of experience.
00:33:21 Consciousness of thinking is not obviously qualia.
00:33:23 It’s not like specific qualities like redness or greenness,
00:33:26 but still I’m thinking about my hometown.
00:33:29 I’m thinking about what I’m gonna do later on.
00:33:31 Maybe there’s still something running through my head,
00:33:34 which is subjective experience.
00:33:36 Maybe it goes beyond those qualities or qualia.
00:33:39 Philosophers sometimes use the word phenomenal consciousness
00:33:43 for consciousness in this sense.
00:33:44 I mean, people also talk about access consciousness,
00:33:47 being able to access information in your mind,
00:33:50 reflective consciousness,
00:33:52 being able to think about yourself.
00:33:53 But it looks like the really mysterious one,
00:33:55 the one that really gets people going
00:33:57 is phenomenal consciousness.
00:33:58 The fact that there’s subjective experience
00:34:02 and all this feels like something at all.
00:34:05 And then the hard problem is how is it that,
00:34:08 why is it that there is phenomenal consciousness at all?
00:34:11 And how is it that physical processes in a brain
00:34:15 could give you subjective experience?
00:34:19 It looks like on the face of it,
00:34:21 you’d have all this big complicated physical system
00:34:23 in a brain running without giving you
00:34:27 subjective experience at all.
00:34:28 And yet we do have subjective experience.
00:34:30 So the hard problem is just explain that.
00:34:34 Explain how that comes about.
00:34:35 We haven’t been able to build machines
00:34:37 where a red light goes on that says it’s now conscious.
00:34:41 So how do we actually create that?
00:34:45 Or how do humans do it?
00:34:47 And how do we ourselves do it?
00:34:49 We do every now and then create machines that can do this.
00:34:51 We create babies that are conscious.
00:34:55 They’ve got these brains.
00:34:56 That brain does produce consciousness.
00:34:58 But even though we can create it,
00:35:00 we still don’t understand why it happens.
00:35:02 Maybe eventually we’ll be able to create machines,
00:35:05 AI machines,
00:35:07 which as a matter of fact are conscious.
00:35:10 But that won’t necessarily make the hard problem go away
00:35:13 any more than it does with babies.
00:35:15 Cause we still wanna know how and why is it
00:35:17 that these processes give you consciousness?
00:35:19 You just made me realize for a second,
00:35:22 maybe it’s a totally dumb realization, but nevertheless,
00:35:28 that as a useful way to think about
00:35:31 the creation of consciousness is looking at a baby.
00:35:35 So that there’s a certain point
00:35:38 at which that baby is not conscious.
00:35:44 The baby starts from maybe, I don’t know,
00:35:47 from a few cells, right?
00:35:49 There’s a certain point at which consciousness
00:35:52 arrives, and it’s conscious.
00:35:54 Of course, we can’t know exactly that line,
00:35:56 but that’s a useful idea that we do create consciousness.
00:36:02 Again, a really dumb thing for me to say,
00:36:04 but not until now did I realize
00:36:07 we do engineer consciousness.
00:36:09 We get to watch the process happen.
00:36:12 We don’t know which point it happens or where it is,
00:36:16 but we do see the birth of consciousness.
00:36:19 Yeah, I mean, there’s a question, of course,
00:36:21 is whether babies are conscious when they’re born.
00:36:25 And it used to be, it seems,
00:36:26 at least some people thought they weren’t,
00:36:28 which is why they didn’t give anesthetics
00:36:30 to newborn babies when they circumcised them.
00:36:33 And so now people think, oh, that would be incredibly cruel.
00:36:36 Of course, babies feel pain.
00:36:38 And now the dominant view is that the babies can feel pain.
00:36:42 Actually, my partner Claudia works on this whole issue
00:36:45 of whether there’s consciousness in babies
00:36:48 and of what kind.
00:36:49 And she certainly thinks that newborn babies
00:36:53 come into the world with some degree of consciousness.
00:36:55 Of course, then you can just extend the question backwards
00:36:57 to fetuses and suddenly you’re into
00:36:59 politically controversial territory.
00:37:02 But the question also arises in the animal kingdom.
00:37:06 Where does consciousness start or stop?
00:37:08 Is there a line in the animal kingdom
00:37:11 where the first conscious organisms are?
00:37:15 It’s interesting, over time,
00:37:16 people are becoming more and more liberal
00:37:18 about ascribing consciousness to animals.
00:37:21 People used to think maybe only mammals could be conscious.
00:37:24 Now most people seem to think, sure, fish are conscious.
00:37:27 They can feel pain.
00:37:28 And now we’re arguing over insects.
00:37:31 You’ll find people out there who say plants
00:37:33 have some degree of consciousness.
00:37:35 So, you know, who knows where it’s gonna end.
00:37:37 The far end of this chain is the view
00:37:39 that every physical system has some degree of consciousness.
00:37:43 Philosophers call that panpsychism.
00:37:45 You know, I take that view.
00:37:48 I mean, that’s a fascinating way to view reality.
00:37:50 So if you could talk about,
00:37:52 if you can linger on panpsychism for a little bit,
00:37:56 what does it mean?
00:37:58 So it’s not just plants are conscious.
00:38:00 I mean, it’s that consciousness
00:38:02 is a fundamental fabric of reality.
00:38:05 What does that mean to you?
00:38:07 How are we supposed to think about that?
00:38:09 Well, we’re used to the idea that some things in the world
00:38:12 are fundamental, right, in physics.
00:38:15 Like what?
00:38:16 We take things like space or time or space time,
00:38:18 mass, charges, fundamental properties of the universe.
00:38:23 You don’t reduce them to something simpler.
00:38:25 You take those for granted.
00:38:26 You’ve got some laws that connect them.
00:38:30 Here is how mass and space and time evolve.
00:38:33 Theories like relativity or quantum mechanics
00:38:36 or some future theory that will unify them both.
00:38:39 But everyone says you gotta take some things as fundamental.
00:38:42 And if you can’t explain one thing,
00:38:44 in terms of the previous fundamental things,
00:38:47 you have to expand.
00:38:49 Maybe something like this happened with Maxwell.
00:38:52 He ended up with fundamental principles
00:38:54 of electromagnetism and took charge as fundamental
00:38:57 because it turned out that was the best way to explain it.
00:39:00 So I at least take seriously the possibility
00:39:02 something like that could happen with consciousness.
00:39:06 Take it as a fundamental property,
00:39:07 like space, time, and mass.
00:39:10 And instead of trying to explain consciousness wholly
00:39:13 in terms of the evolution of space, time, and mass,
00:39:17 and so on, take it as a primitive
00:39:20 and then connect it to everything else
00:39:23 by some fundamental laws.
00:39:25 Because there’s this basic problem
00:39:27 that the physics we have now looks great
00:39:29 for solving the easy problems of consciousness,
00:39:31 which are all about behavior.
00:39:35 They give us a complicated structure and dynamics.
00:39:37 They tell us how things are gonna behave,
00:39:39 what kind of observable behavior they’ll produce,
00:39:43 which is great for the problems of explaining how we walk
00:39:46 and how we talk and so on.
00:39:48 Those are the easy problems of consciousness.
00:39:50 But the hard problem was this problem
00:39:52 about subjective experience just doesn’t look
00:39:55 like that kind of problem about structure,
00:39:57 dynamics, how things behave.
00:39:58 So it’s hard to see how existing physics
00:40:01 is gonna give you a full explanation of that.
00:40:04 Certainly trying to get a physics view of consciousness,
00:40:08 yes, there has to be a connecting point
00:40:10 and it could be axiomatic,
00:40:12 at the very beginning level.
00:40:14 But first of all, there’s a crazy idea
00:40:21 that sort of everything has properties of consciousness.
00:40:27 At that point, the word consciousness
00:40:30 is already beyond the reach of our current understanding.
00:40:33 Because it’s so far,
00:40:35 at least for me, maybe you can correct me,
00:40:38 from the experiences that I have as a human being.
00:40:45 To say that everything is conscious,
00:40:47 that means that basically another way to put that,
00:40:52 if that’s true, then we understand almost nothing
00:40:56 about that fundamental aspect of the world.
00:41:00 How do you feel about saying an ant is conscious?
00:41:02 Do you get the same reaction to that
00:41:04 or is that something you can understand?
00:41:05 I can understand an ant,
00:41:06 I can understand an atom, a particle.
00:41:10 Plants?
00:41:12 Plant, so I’m comfortable with living things on Earth
00:41:16 being conscious because there’s some kind of agency
00:41:22 and they’re a similar size to me
00:41:26 and they can be born and they can die.
00:41:30 And that is understandable intuitively.
00:41:34 Of course, you anthropomorphize,
00:41:36 you put yourself in the place of the plant,
00:41:41 but I can understand it.
00:41:43 I mean, I’m not like, I don’t believe actually
00:41:47 that plants are conscious or that plants suffer,
00:41:49 but I can understand that kind of belief, that kind of idea.
00:41:52 How do you feel about robots?
00:41:54 Like the kind of robots we have now?
00:41:56 If I told you like that a Roomba
00:41:58 had some degree of consciousness
00:42:02 or some deep neural network.
00:42:06 I could understand that a Roomba has consciousness.
00:42:08 I had just spent all day at iRobot.
00:42:12 And I mean, I personally love robots
00:42:15 and I have a deep connection with robots.
00:42:16 So I can, I also probably anthropomorphize them.
00:42:20 There’s something about the physical object.
00:42:23 So it’s different than a neural network,
00:42:26 a neural network running as software.
00:42:28 To me, the physical object,
00:42:31 something about the human experience
00:42:32 allows me to really see that physical object as an entity.
00:42:36 And if it moves in a way
00:42:40 that I didn’t program,
00:42:44 where it feels that it’s acting based on its own perception.
00:42:49 And yes, self awareness and consciousness,
00:42:53 even if it’s a Roomba,
00:42:55 then you start to assign it some agency, some consciousness.
00:43:00 But to say, with panpsychism,
00:43:03 that consciousness is a fundamental property of reality
00:43:08 is a much bigger statement.
00:43:11 It’s like turtles all the way down.
00:43:13 It doesn’t end.
00:43:18 I know it’s full of mystery,
00:43:21 but if you can linger on it,
00:43:23 how do you think about reality
00:43:27 if consciousness is a fundamental part of its fabric?
00:43:31 The way you get there is from thinking,
00:43:33 can we explain consciousness given the existing fundamentals?
00:43:36 And then if you can’t, as at least right now, it looks like,
00:43:41 then you’ve got to add something.
00:43:42 It doesn’t follow that you have to add consciousness.
00:43:44 Here’s another interesting possibility is,
00:43:47 well, we’ll add something else.
00:43:48 Let’s call it proto consciousness or X.
00:43:51 And then it turns out space, time, mass plus X
00:43:56 will somehow collectively give you the possibility
00:43:58 for consciousness.
00:44:00 So I don’t rule out that view either.
00:44:01 I call that panprotopsychism,
00:44:04 because maybe there’s some other property,
00:44:06 proto consciousness at the bottom level.
00:44:08 And if you can’t imagine there’s actually
00:44:10 genuine consciousness at the bottom level,
00:44:12 I think we should be open to the idea
00:44:14 there’s this other thing X.
00:44:16 that maybe we can’t imagine, that somehow gives you consciousness.
00:44:19 But if we are playing along with the idea
00:44:22 that there really is genuine consciousness
00:44:24 at the bottom level, of course,
00:44:25 this is going to be way out and speculative,
00:44:28 but at least, say, if it was classical physics,
00:44:32 then you’d end up saying,
00:44:33 well, with a bunch of particles in space time,
00:44:37 each of these particles has some kind of consciousness
00:44:41 whose structure mirrors maybe their physical properties,
00:44:44 like its mass, its charge, its velocity, and so on.
00:44:49 The structure of its consciousness
00:44:50 would roughly correspond to that.
00:44:52 And the physical interactions between particles,
00:44:55 I mean, there’s this old worry about physics.
00:44:58 I mentioned this before in this issue
00:44:59 about the manifest image.
00:45:01 We don’t really find out
00:45:02 about the intrinsic nature of things.
00:45:04 Physics tells us about how a particle relates
00:45:07 to other particles and interacts.
00:45:09 It doesn’t tell us about what the particle is in itself.
00:45:12 That was Kant’s thing in itself.
00:45:14 So here’s a view.
00:45:17 The nature in itself of a particle is something mental.
00:45:20 A particle is actually a conscious,
00:45:22 a little conscious subject
00:45:24 with properties of its consciousness
00:45:27 that correspond to its physical properties.
00:45:29 The laws of physics are actually ultimately relating
00:45:32 these properties of conscious subjects.
00:45:34 So in this view, a Newtonian world
00:45:36 actually would be a vast collection
00:45:38 of little conscious subjects at the bottom level,
00:45:41 way, way simpler than we are without free will
00:45:44 or rationality or anything like that.
00:45:47 But that’s what the universe would be like.
00:45:48 Now, of course, that’s a vastly speculative view.
00:45:51 No particular reason to think it’s correct.
00:45:53 Furthermore, with non-Newtonian physics,
00:45:56 say a quantum mechanical wave function,
00:45:58 suddenly it starts to look different.
00:46:00 It’s not a vast collection of conscious subjects.
00:46:02 Maybe there’s ultimately one big wave function
00:46:05 for the whole universe.
00:46:06 Corresponding to that might be something more
00:46:08 like a single conscious mind
00:46:12 whose structure corresponds
00:46:13 to the structure of the wave function.
00:46:16 People sometimes call this cosmo psychism.
00:46:19 And now, of course, we’re in the realm
00:46:20 of extremely speculative philosophy.
00:46:23 There’s no direct evidence for this,
00:46:25 but yeah, but if you want a picture
00:46:27 of what that universe would be like,
00:46:29 think, yeah, giant cosmic mind
00:46:31 with enough richness and structure among it
00:46:33 to replicate all the structure of physics.
00:46:36 I think therefore I am at the level of particles
00:46:39 and with quantum mechanics
00:46:40 at the level of the wave function.
00:46:42 It’s kind of an exciting, beautiful possibility,
00:46:49 of course, way out of reach of physics currently.
00:46:51 It is interesting that some neuroscientists
00:46:55 are beginning to take panpsychism seriously,
00:46:58 that you find consciousness even in very simple systems.
00:47:02 So for example, the integrated information theory
00:47:05 of consciousness, a lot of neuroscientists
00:47:07 are taking seriously.
00:47:08 Actually, this new book
00:47:09 by Christof Koch just came in,
00:47:11 The Feeling of Life Itself:
00:47:13 Why Consciousness Is Widespread but Can’t Be Computed.
00:47:17 He basically endorses a panpsychist view
00:47:20 where you get consciousness
00:47:22 with the degree of information processing
00:47:24 or integrated information processing
00:47:26 in a system, and even very, very simple systems,
00:47:29 like a couple of particles, will have some degree of this.
00:47:32 So he ends up with some degree of consciousness
00:47:35 in all matter.
00:47:36 And the claim is that this theory
00:47:38 can actually explain a bunch of stuff
00:47:40 about the connection between the brain and consciousness.
00:47:43 Now, that’s very controversial.
00:47:45 I think it’s very, very early days
00:47:46 in the science of consciousness.
00:47:48 It’s interesting that it’s not just philosophy
00:47:50 that might lead you in this direction,
00:47:52 but there are ways of thinking quasi scientifically
00:47:55 that lead you there too.
00:47:57 But maybe it’s different than panpsychism.
00:48:01 What do you think?
00:48:02 So Alan Watts has this quote that I’d like to ask you about.
00:48:06 The quote is, through our eyes,
00:48:10 the universe is perceiving itself.
00:48:12 Through our ears, the universe is listening
00:48:14 to its harmonies.
00:48:16 We are the witnesses through which the universe
00:48:18 becomes conscious of its glory, of its magnificence.
00:48:22 So that’s not panpsychism.
00:48:24 Do you think that we are essentially the tools,
00:48:30 the senses the universe created to be conscious of itself?
00:48:35 It’s an interesting idea.
00:48:37 Of course, if you went for the giant cosmic mind view,
00:48:40 then the universe was conscious all along.
00:48:43 It didn’t need us.
00:48:44 We’re just little components of the universal consciousness.
00:48:48 Likewise, if you believe in panpsychism,
00:48:50 then there was some little degree of consciousness
00:48:52 at the bottom level all along.
00:48:54 And we were just a more complex form of consciousness.
00:48:58 So I think maybe the quote you mentioned works better.
00:49:02 If you’re not a panpsychist, you’re not a cosmo psychist,
00:49:05 you think consciousness just exists
00:49:07 at this intermediate level.
00:49:09 And of course, that’s the Orthodox view.
00:49:12 That you would say is the common view?
00:49:14 So is your own view with panpsychism a rare view?
00:49:19 I think it’s generally regarded certainly
00:49:22 as a speculative view held by a fairly small minority
00:49:26 of at least theorists, most philosophers
00:49:30 and most scientists who think about consciousness
00:49:33 are not panpsychists.
00:49:34 There’s been a bit of a movement in that direction
00:49:36 the last 10 years or so.
00:49:37 It seems to be quite popular,
00:49:38 especially among the younger generation,
00:49:41 but it’s still very definitely a minority view.
00:49:43 Many people think it’s totally batshit crazy
00:49:47 to use the technical term.
00:49:48 But the philosophical term.
00:49:51 So the Orthodox view, I think is still consciousness
00:49:53 is something that humans have
00:49:55 and some good number of nonhuman animals have,
00:49:59 and maybe AIs might have one day, but it’s restricted.
00:50:02 On that view, then there was no consciousness
00:50:04 at the start of the universe.
00:50:05 There may be none at the end,
00:50:07 but it is this thing which happened at some point
00:50:09 in the history of the universe, consciousness developed.
00:50:13 And yes, that’s a very amazing event on this view
00:50:17 because many people are inclined to think consciousness
00:50:20 is what somehow gives meaning to our lives.
00:50:23 Without consciousness, there’d be no meaning,
00:50:25 no true value, no good versus bad and so on.
00:50:29 So with the advent of consciousness,
00:50:32 suddenly the universe went from meaningless
00:50:36 to somehow meaningful.
00:50:38 Why did this happen?
00:50:39 I guess the quote you mentioned suggests
00:50:42 this was somehow destined to happen
00:50:44 because the universe needed to have consciousness
00:50:47 within it to have value and have meaning.
00:50:49 And maybe you could combine that with a theistic view
00:50:52 or a teleological view.
00:50:54 The universe was inexorably evolving towards consciousness.
00:50:58 Actually, my colleague here at NYU, Tom Nagel,
00:51:01 wrote a book called Mind and Cosmos a few years ago
00:51:04 where he argued for this teleological view
00:51:06 of evolution toward consciousness,
00:51:09 saying this poses problems for Darwinism.
00:51:12 This is very, very controversial.
00:51:15 Most people didn’t agree.
00:51:16 I don’t myself agree with this teleological view,
00:51:20 but it is at least a beautiful speculative view
00:51:24 of the cosmos.
00:51:26 What do you think people experience?
00:51:30 What do they seek when they believe in God
00:51:32 from this kind of perspective?
00:51:36 I’m not an expert on thinking about God and religion.
00:51:41 I’m not myself religious at all.
00:51:43 When people sort of pray, communicate with God,
00:51:46 which whatever form,
00:51:48 I’m not speaking to sort of the practices
00:51:51 and the rituals of religion.
00:51:53 I mean the actual experience that people
00:51:56 really have a deep connection with God in some cases.
00:52:00 What do you think that experience is?
00:52:06 It’s so common, at least throughout the history
00:52:08 of civilization, that it seems like we seek that.
00:52:16 At the very least, it is an interesting
00:52:17 conscious experience that people have
00:52:19 when they experience religious awe or prayer and so on.
00:52:24 Neuroscientists have tried to examine
00:52:27 what bits of the brain are active and so on.
00:52:30 But yeah, there’s this deeper question
00:52:32 of what are people looking for when they’re doing this?
00:52:34 And like I said, I’ve got no real expertise on this,
00:52:38 but it does seem that one thing people are after
00:52:40 is a sense of meaning and value,
00:52:43 a sense of connection to something greater than themselves
00:52:48 that will give their lives meaning and value.
00:52:50 And maybe the thought is if there is a God,
00:52:52 then God somehow is a universal consciousness
00:52:56 who has invested this universe with meaning
00:53:01 and somehow connection to God might give your life meaning.
00:53:05 I guess I can kind of see the attractions of that,
00:53:09 but it still makes me wonder why is it exactly
00:53:13 that a universal consciousness, God,
00:53:15 would be needed to give the world meaning?
00:53:18 If universal consciousness can give the world meaning,
00:53:21 why can’t local consciousness give the world meaning too?
00:53:25 So I think my consciousness gives my world meaning.
00:53:28 It’s the origin of meaning for your world.
00:53:31 Yeah, I experience things as good or bad,
00:53:33 happy, sad, interesting, important.
00:53:37 So my consciousness invests this world with meaning.
00:53:40 Without any consciousness,
00:53:42 maybe it would be a bleak, meaningless universe.
00:53:45 But I don’t see why I need someone else’s consciousness
00:53:47 or even God’s consciousness to give this universe meaning.
00:53:51 Here we are, local creatures
00:53:53 with our own subjective experiences.
00:53:55 I think we can give the universe meaning ourselves.
00:53:58 I mean, maybe to some people that feels inadequate.
00:54:02 Our own local consciousness is somehow too puny
00:54:04 and insignificant to invest any of this
00:54:07 with cosmic significance.
00:54:09 And maybe God gives you a sense of cosmic significance,
00:54:13 but I’m just speculating here.
00:54:15 So it’s a really interesting idea
00:54:19 that consciousness is the thing that makes life meaningful.
00:54:24 If you could maybe just briefly explore that for a second.
00:54:30 So I suspect just from listening to you now,
00:54:33 you mean in an almost trivial sense,
00:54:37 just the day to day experiences of life,
00:54:42 because you attach identity to them,
00:54:46 they become meaningful. I guess I wanna ask something
00:54:54 I’ve always wanted to ask
00:54:57 a legit world-renowned philosopher.
00:55:01 What is the meaning of life?
00:55:05 So I suspect you don’t mean consciousness gives
00:55:08 any kind of greater meaning to it all.
00:55:11 And more the day to day.
00:55:13 But is there a greater meaning to it all?
00:55:16 I think life has meaning for us because we are conscious.
00:55:20 So without consciousness, no meaning,
00:55:24 consciousness invests our life with meaning.
00:55:27 So consciousness is the source of the meaning of life,
00:55:30 but I wouldn’t say consciousness itself
00:55:33 is the meaning of life.
00:55:34 I’d say what’s meaningful in life
00:55:36 is basically what we find meaningful,
00:55:40 what we experience as meaningful.
00:55:42 So if you find meaning and fulfillment and value
00:55:46 in say, intellectual work, like understanding,
00:55:49 then that’s a very significant part
00:55:51 of the meaning of life for you.
00:55:53 If you find that in social connections
00:55:55 or in raising a family,
00:55:57 then that’s the meaning of life for you.
00:55:58 The meaning kind of comes from what you value
00:56:02 as a conscious creature.
00:56:04 So I think there’s no, on this view,
00:56:05 there’s no universal solution.
00:56:08 No universal answer to the question,
00:56:10 what is the meaning of life?
00:56:11 The meaning of life is where you find it
00:56:13 as a conscious creature,
00:56:14 but it’s consciousness that somehow makes value possible.
00:56:18 Experiencing some things as good or as bad
00:56:21 or as meaningful,
00:56:22 somehow comes from within consciousness.
00:56:24 So you think consciousness is a crucial component,
00:56:28 ingredient of assigning value to things?
00:56:33 I mean, it’s kind of a fairly strong intuition
00:56:36 that without consciousness,
00:56:37 there wouldn’t really be any value
00:56:39 if we just had a universe of purely unconscious creatures.
00:56:44 Would anything be better or worse than anything else?
00:56:47 Certainly when it comes to ethical dilemmas,
00:56:50 you know about the old trolley problem.
00:56:53 Do you kill one person
00:56:56 or do you switch to the other track to kill five?
00:56:59 Well, I’ve got a variant on this,
00:57:01 the zombie trolley problem,
00:57:03 where there’s a one conscious being on one track
00:57:06 and five humanoid zombies.
00:57:09 Let’s make them robots who are not conscious
00:57:12 on the other track.
00:57:15 Do you, given that choice,
00:57:16 do you kill the one conscious being
00:57:17 or the five unconscious robots?
00:57:21 Most people have a fairly clear intuition here.
00:57:23 Kill the unconscious beings
00:57:25 because they basically, they don’t have a meaningful life.
00:57:28 They’re not really persons, conscious beings at all.
00:57:33 We don’t have good intuition
00:57:36 about something like an unconscious being.
00:57:42 So in philosophical terms, it’s referred to as a zombie.
00:57:46 It’s a useful thought experiment construction
00:57:51 in philosophical terms, but we don’t yet have them.
00:57:55 So that’s kind of what we may be able to create with robots.
00:58:00 And I don’t necessarily know what that even means.
00:58:05 Yeah, they’re merely hypothetical.
00:58:07 For now, they’re just a thought experiment.
00:58:09 They may never be possible.
00:58:11 I mean, the extreme case of a zombie
00:58:13 is a being which is physically, functionally,
00:58:16 behaviorally identical to me, but not conscious.
00:58:19 That’s a mere,
00:58:20 I don’t think that could ever be built in this universe.
00:58:23 The question is just could we,
00:58:24 does that hypothetically make sense?
00:58:27 That’s kind of a useful contrast class
00:58:29 to raise questions like, why aren’t we zombies?
00:58:31 How does it come about that we’re conscious?
00:58:33 And we’re not like that.
00:58:34 But there are less extreme versions of this like robots,
00:58:38 which are maybe not physically identical to us,
00:58:41 maybe not even functionally identical to us.
00:58:43 Maybe they’ve got a different architecture,
00:58:45 but they can do a lot of sophisticated things,
00:58:47 maybe carry on a conversation, but they’re not conscious.
00:58:51 And that’s not so far out.
00:58:52 We’ve got simple computer systems,
00:58:54 at least tending in that direction now.
00:58:57 And presumably this is gonna get more and more sophisticated
00:59:01 over years to come where we may have some pretty,
00:59:05 it’s at least quite straightforward to conceive
00:59:07 of some pretty sophisticated robot systems
00:59:11 that can use language and be fairly high functioning
00:59:14 without consciousness at all.
00:59:16 Let’s just stipulate that.
00:59:17 I mean, of course, there’s this tricky question
00:59:21 of how you would know whether they’re conscious.
00:59:23 But let’s say we’ve somehow solved that.
00:59:25 And we know that these high functioning robots
00:59:27 aren’t conscious.
00:59:27 Then the question is, do they have moral status?
00:59:30 Does it matter how we treat them?
00:59:33 What does moral status mean?
00:59:35 Basically it’s that question.
00:59:37 Can they suffer?
00:59:38 Does it matter how we treat them?
00:59:41 For example, if I mistreat this glass, this cup
00:59:46 by shattering it, then that’s bad.
00:59:49 Why is it bad though?
00:59:50 It’s gonna make a mess.
00:59:51 It’s gonna be annoying for me and my partner.
00:59:53 And so it’s not bad for the cup.
00:59:55 No one would say the cup itself has moral status.
00:59:59 Hey, you hurt the cup and that’s doing it a moral harm.
01:00:07 Likewise, plants, well, again, if they’re not conscious,
01:00:09 most people think by uprooting a plant,
01:00:11 you’re not harming it.
01:00:13 But if a being is conscious on the other hand,
01:00:16 then you are harming it.
01:00:17 So Siri, or I dare not say the name of Alexa.
01:00:24 Anyway, so we don’t think we’re morally harming Alexa
01:00:28 by turning her off or disconnecting her
01:00:30 or even destroying her, whether it’s the system
01:00:34 or the underlying software system,
01:00:36 because we don’t really think she’s conscious.
01:00:39 On the other hand, you move to like the disembodied being
01:00:42 in the movie Her, Samantha,
01:00:45 I guess she was kind of presented as conscious.
01:00:47 And then if you destroyed her,
01:00:49 you’d certainly be committing a serious harm.
01:00:51 So I think our strong sense is if a being is conscious
01:00:55 and can undergo subjective experiences,
01:00:57 then it matters morally how we treat them.
01:01:00 So if a robot is conscious, it matters,
01:01:03 but if a robot is not conscious,
01:01:05 then they’re basically just meat or a machine
01:01:07 and it doesn’t matter.
01:01:10 So I think at least maybe how we think about this stuff
01:01:13 is fundamentally wrong,
01:01:13 but I think a lot of people
01:01:15 who think about this stuff seriously,
01:01:17 including people who think about,
01:01:18 say the moral treatment of animals and so on,
01:01:20 come to the view that consciousness
01:01:23 is ultimately kind of the line between systems
01:01:25 that where we have to take them into account
01:01:29 and thinking morally about how we act
01:01:32 and systems for which we don’t.
01:01:34 And I think I’ve seen you write and talk about
01:01:38 the demonstration of consciousness from a system like that,
01:01:41 from a system like Alexa or a conversational agent
01:01:48 that what you would be looking for
01:01:51 is kind of at the very basic level
01:01:54 for the system to have an awareness
01:01:58 that I’m just a program
01:02:00 and yet, why do I experience this?
01:02:03 Or not to have that experience,
01:02:06 but to communicate that to you.
01:02:08 So that’s what us humans would sound like.
01:02:10 If you all of a sudden woke up one day,
01:02:13 like Kafka, right, in a body of a bug or something,
01:02:15 but in a computer, you all of a sudden realized
01:02:18 you don’t have a body
01:02:19 and yet you were feeling what you were feeling,
01:02:22 you would probably say those kinds of things.
01:02:25 So do you think a system essentially becomes conscious
01:02:29 by convincing us that it’s conscious
01:02:34 through the words that I just mentioned?
01:02:36 So by being confused about the question
01:02:40 of why am I having these experiences?
01:02:45 So basically.
01:02:45 I don’t think this is what makes you conscious,
01:02:48 but I do think being puzzled about consciousness
01:02:50 is a very good sign that a system is conscious.
01:02:53 So if I encountered a robot
01:02:55 that actually seemed to be genuinely puzzled
01:02:58 by its own mental states
01:03:01 and saying, yeah, I have all these weird experiences
01:03:04 and I don’t see how to explain them.
01:03:06 I know I’m just a set of silicon circuits,
01:03:08 but I don’t see how that would give you my consciousness.
01:03:11 I would at least take that as some evidence
01:03:13 that there’s some consciousness going on there.
01:03:16 I don’t think a system needs to be puzzled
01:03:19 about consciousness to be conscious.
01:03:21 Many people aren’t puzzled by their consciousness.
01:03:24 Animals don’t seem to be puzzled at all.
01:03:26 I still think they’re conscious.
01:03:28 So I don’t think that’s a requirement on consciousness,
01:03:30 but I do think if we’re looking for signs
01:03:33 for consciousness, say in AI systems,
01:03:37 one of the things that will help convince me
01:03:39 that an AI system is conscious is if it shows signs of,
01:03:44 if it shows signs of introspectively recognizing something
01:03:47 like consciousness and finding this philosophically puzzling
01:03:51 in the way that we do.
01:03:54 It’s such an interesting thought, though,
01:03:55 because a lot of people sort of would,
01:03:57 at the Shao level, criticize the Turing test for language.
01:04:02 It’s essentially what I heard Dan Dennett
01:04:07 criticize it in this kind of way,
01:04:09 which is it really puts a lot of emphasis on lying.
01:04:13 Yeah, and being able to imitate
01:04:17 human beings, yeah, there’s this cartoon
01:04:20 of the AI system studying for the Turing test.
01:04:23 It’s gotta read this book called Talk Like a Human.
01:04:26 It’s like, man, why do I have to waste my time
01:04:28 learning how to imitate humans?
01:04:30 Maybe the AI system is gonna be way beyond
01:04:32 the hard problem of consciousness,
01:04:33 and it’s gonna be just like,
01:04:34 why do I need to waste my time pretending
01:04:36 that I recognize the hard problem of consciousness
01:04:40 in order for people to recognize me as conscious?
01:04:42 Yeah, it just feels like, I guess the question is,
01:04:45 do you think we can ever really create
01:04:48 a test for consciousness?
01:04:49 Because it feels like we’re very human centric,
01:04:53 and so the only way we would be convinced
01:04:57 that something is conscious is basically
01:05:00 that the thing demonstrates the illusion of consciousness,
01:05:06 that we can never really know whether it’s conscious or not,
01:05:10 and in fact, that almost feels like it doesn’t matter then,
01:05:14 or does it still matter to you that something is conscious
01:05:18 or it demonstrates consciousness?
01:05:20 You still see that fundamental distinction.
01:05:22 I think to a lot of people,
01:05:24 whether a system is conscious or not
01:05:27 matters hugely for many things,
01:05:28 like how we treat it, can it suffer, and so on,
01:05:33 but still, that leaves open the question,
01:05:35 how can we ever know?
01:05:36 And it’s true that it’s awfully hard
01:05:38 to see how we can know for sure
01:05:40 whether a system is conscious.
01:05:42 I suspect that sociologically,
01:05:44 the thing that’s gonna convince us
01:05:46 that a system is conscious is, in part,
01:05:50 things like social interaction, conversation, and so on,
01:05:53 where they seem to be conscious,
01:05:56 they talk about their conscious states
01:05:57 or just talk about being happy or sad
01:06:00 or finding things meaningful or being in pain.
01:06:02 That will tend to convince us.
01:06:06 If a system genuinely seems to be conscious
01:06:08 and we don’t treat it as such,
01:06:10 eventually it’s gonna seem like a strange form
01:06:11 of racism or speciesism, somehow,
01:06:14 not to acknowledge them as conscious.
01:06:16 I truly believe that, by the way.
01:06:17 I believe that there is going to be
01:06:21 something akin to the Civil Rights Movement,
01:06:23 but for robots.
01:06:25 I think the moment you have a Roomba say,
01:06:30 please don’t kick me, that hurts, just say it.
01:06:32 Yeah.
01:06:33 I think that will fundamentally change
01:06:37 the fabric of our society.
01:06:40 I think you’re probably right,
01:06:41 although it’s gonna be very tricky
01:06:42 because, just say we’ve got the technology
01:06:44 where these conscious beings can just be created
01:06:47 and multiplied by the thousands by flicking a switch.
01:06:54 The legal status is gonna be different,
01:06:55 but ultimately their moral status ought to be the same,
01:06:58 and yeah, the civil rights issue is gonna be a huge mess.
01:07:03 So if one day somebody clones you,
01:07:06 another very real possibility.
01:07:10 In fact, I find the conversation between
01:07:13 two copies of David Chalmers quite interesting.
01:07:21 Very thought.
01:07:22 Who is this idiot?
01:07:25 He’s not making any sense.
01:07:26 So what, do you think he would be conscious?
01:07:32 I do think he would be conscious.
01:07:34 I do think in some sense,
01:07:35 I’m not sure it would be me,
01:07:37 there would be two different beings at this point.
01:07:39 I think they’d both be conscious
01:07:41 and they both have many of the same mental properties.
01:07:45 I think they both in a way have the same moral status.
01:07:49 It’d be wrong to hurt either of them
01:07:51 or to kill them and so on.
01:07:54 Still, there’s some sense in which probably
01:07:55 their legal status would have to be different.
01:07:58 If I’m the original and that one’s just a clone,
01:08:01 then creating a clone of me,
01:08:03 presumably the clone doesn’t, for example,
01:08:05 automatically own the stuff that I own
01:08:08 or I’ve got certain connections,
01:08:14 the people I interact with,
01:08:16 my family, my partner and so on,
01:08:19 I’m gonna somehow be connected to them
01:08:21 in a way in which the clone isn’t, so.
01:08:24 Because you came slightly first?
01:08:26 Yeah.
01:08:27 Because a clone would argue that they have
01:08:31 really as much of a connection.
01:08:33 They have all the memories of that connection.
01:08:35 In a way you might say it’s kind of unfair
01:08:37 to discriminate against them,
01:08:38 but say you’ve got an apartment
01:08:40 that only one person can live in
01:08:41 or a partner who only one person can be with.
01:08:44 But why should it be you, the original?
01:08:47 It’s an interesting philosophical question,
01:08:49 but you might say because I actually have this history,
01:08:53 if I am the same person as the one that came before
01:08:56 and the clone is not,
01:08:58 then I have this history that the clone doesn’t.
01:09:01 Of course, there’s also the question,
01:09:03 isn’t the clone the same person too?
01:09:05 This is a question about personal identity.
01:09:07 If I continue and I create a clone over there,
01:09:10 I wanna say this one is me and this one is someone else.
01:09:14 But you could take the view that a clone is equally me.
01:09:17 Of course, in a movie like Star Trek
01:09:20 where they have a teletransporter
01:09:21 basically creates clones all the time.
01:09:23 They treat the clones as if they’re the original person.
01:09:25 Of course, they destroy the original body in Star Trek.
01:09:29 So there’s only one left around
01:09:31 and only very occasionally do things go wrong
01:09:32 and you get two copies of Captain Kirk.
01:09:35 But somehow our legal system at the very least
01:09:37 is gonna have to sort out some of these issues
01:09:40 and maybe what’s moral
01:09:42 and what’s legally acceptable are gonna come apart.
01:09:47 What question would you ask a clone of yourself?
01:09:52 Is there something useful you can find out from him
01:09:56 about the fundamentals of consciousness even?
01:10:00 I mean, kind of in principle,
01:10:03 I know that if it’s a perfect clone,
01:10:06 it’s gonna behave just like me.
01:10:09 So I’m not sure I’m gonna be able to,
01:10:11 I can discover whether it’s a perfect clone
01:10:13 by seeing whether it answers like me.
01:10:15 But otherwise I know what I’m gonna find is a being
01:10:18 which is just like me,
01:10:19 except that it’s just undergone this great shock
01:10:21 of discovering that it’s a clone.
01:10:24 So just say you woke me up tomorrow and said,
01:10:26 hey Dave, sorry to tell you this,
01:10:29 but you’re actually the clone
01:10:31 and you provided me really convincing evidence,
01:10:34 showed me the film of my being cloned
01:10:36 and then of how I wound up here, being here and waking up.
01:10:41 So you proved to me I’m a clone,
01:10:42 well, yeah, okay, I would find that shocking
01:10:44 and who knows how I would react to this.
01:10:46 So maybe by talking to the clone,
01:10:48 I’d find something about my own psychology
01:10:50 that I can’t find out so easily,
01:10:52 like how I’d react upon discovering that I’m a clone.
01:10:55 I could certainly ask the clone if it’s conscious
01:10:57 and what his consciousness is like and so on,
01:10:59 but I guess I kind of know if it’s a perfect clone,
01:11:02 it’s gonna behave roughly like me.
01:11:04 Of course, at the beginning,
01:11:06 there’ll be a question
01:11:07 about whether a perfect clone is possible.
01:11:08 So I may wanna ask it lots of questions
01:11:11 to see if its consciousness,
01:11:12 the way it talks about its consciousness,
01:11:14 and the way it reacts to things in general are like mine.
01:11:17 And that will occupy us for a while.
01:11:22 So basic unit testing on the early models.
01:11:25 So if it’s a perfect clone,
01:11:28 you say that it’s gonna behave exactly like you.
01:11:30 So that takes us to free will.
01:11:35 Is there free will?
01:11:37 Are we able to make decisions that are not predetermined
01:11:41 from the initial conditions of the universe?
01:11:44 You know, philosophers do this annoying thing
01:11:46 of saying it depends what you mean.
01:11:48 So in this case, yeah, it really depends on what you mean,
01:11:52 by free will.
01:11:54 If you mean something which was not determined in advance,
01:11:58 could never have been determined,
01:12:00 then I don’t know that we have free will.
01:12:02 I mean, there’s quantum mechanics
01:12:03 and who’s to say if that opens up some room,
01:12:06 but I’m not sure we have free will in that sense.
01:12:09 But I’m also not sure that’s the kind of free will
01:12:12 that really matters.
01:12:13 You know, what matters to us
01:12:15 is being able to do what we want
01:12:17 and to create our own futures.
01:12:19 We’ve got this distinction between having our lives
01:12:21 be under our control and under someone else’s control.
01:12:26 We’ve got the sense of actions that we are responsible for
01:12:29 versus ones that we’re not.
01:12:31 I think you can make those distinctions
01:12:33 even in a deterministic universe.
01:12:36 And this is what people call the compatibilist view
01:12:38 of free will, where it’s compatible with determinism.
01:12:41 So I think for many purposes,
01:12:42 the kind of free will that matters
01:12:45 is something we can have in a deterministic universe.
01:12:48 And I can’t see any reason in principle
01:12:50 why an AI system couldn’t have free will of that kind.
01:12:54 If you mean super duper free will,
01:12:55 the ability to violate the laws of physics
01:12:57 and to do things that in principle could not be predicted.
01:13:01 I don’t know, maybe no one has that kind of free will.
01:13:04 What’s the connection between the reality of free will
01:13:10 and the experience of it,
01:13:11 the subjective experience in your view?
01:13:15 So how does consciousness connect
01:13:17 to the reality and the experience of free will?
01:13:22 It’s certainly true that when we make decisions
01:13:24 and when we choose and so on,
01:13:26 we feel like we have an open future.
01:13:28 Feel like I could do this, I could go into philosophy
01:13:32 or I could go into math, I could go to a movie tonight,
01:13:36 I could go to a restaurant.
01:13:39 So we experience these things as if the future is open.
01:13:42 And maybe we experience ourselves
01:13:44 as exerting a kind of effect on the future,
01:13:50 somehow picking out one path
01:13:51 from the many paths that were previously open.
01:13:54 And you might think that actually
01:13:56 if we’re in a deterministic universe,
01:13:58 there’s a sense in which objectively
01:13:59 those paths weren’t really open all along,
01:14:03 but subjectively they were open.
01:14:05 And that’s, I think that’s what really matters
01:14:07 in making decisions: our experience
01:14:09 of making a decision is of choosing a path for ourselves.
01:14:14 I mean, in general, our introspective models of the mind,
01:14:18 I think are generally very distorted representations
01:14:20 of the mind.
01:14:21 So it may well be that our experience of ourself
01:14:24 in making a decision, our experience of what’s going on
01:14:27 doesn’t terribly well mirror what’s going on.
01:14:31 I mean, maybe there are antecedents in the brain
01:14:33 way before anything came into consciousness
01:14:37 and so on.
01:14:39 Those aren’t represented in our introspective model.
01:14:41 So in general, our experience of perception,
01:14:46 so I experience a perceptual image of the external world.
01:14:50 It’s not a terribly good model of what’s actually going on
01:14:53 in my visual cortex and so on,
01:14:55 which has all these layers and so on.
01:14:57 It’s just one little snapshot of one bit of that.
01:14:59 So in general, introspective models
01:15:02 are very oversimplified.
01:15:05 And it wouldn’t be surprising
01:15:07 if that was true of free will as well.
01:15:09 This also incidentally can be applied to consciousness itself.
01:15:12 There is this very interesting view
01:15:13 that consciousness itself is an introspective illusion.
01:15:17 In fact, we’re not conscious,
01:15:19 but the brain just has these introspective models of itself
01:15:24 or oversimplifies everything and represents itself
01:15:27 as having these special properties of consciousness.
01:15:31 It’s a really simple way to kind of keep track of itself
01:15:33 and so on.
01:15:34 And then on the illusionist view,
01:15:36 yeah, that’s just an illusion.
01:15:39 While I find this view implausible,
01:15:42 I do find it very attractive in some ways,
01:15:44 because it’s easy to tell some story
01:15:46 about how the brain would create introspective models
01:15:50 of its own consciousness, of its own free will
01:15:53 as a way of simplifying itself.
01:15:55 I mean, it’s a similar way when we perceive
01:15:57 the external world, we perceive it as having these colors
01:16:00 that maybe it doesn’t really have,
01:16:02 but of course that’s a really useful way
01:16:04 of keeping track.
01:16:06 Did you say that you find it not very plausible?
01:16:08 Because I find it both plausible
01:16:11 and attractive in some sense,
01:16:14 because I mean, that kind of view
01:16:18 is one that has the minimum amount of mystery around it.
01:16:25 You can kind of understand that kind of view.
01:16:28 Everything else says we don’t understand
01:16:31 so much of this picture.
01:16:33 No, it is very attractive, I recently wrote an article
01:16:36 about this kind of issue called
01:16:38 the meta problem of consciousness.
01:16:41 The hard problem is how does a brain
01:16:43 give you consciousness?
01:16:44 The meta problem is why are we puzzled
01:16:46 by the hard problem of consciousness?
01:16:49 Because being puzzled by it,
01:16:50 that’s ultimately a bit of behavior.
01:16:53 We might be able to explain that bit of behavior
01:16:54 as one of the easy problems of consciousness.
01:16:57 So maybe there’ll be some computational model
01:17:00 that explains why we’re puzzled by consciousness.
01:17:03 The meta problem is to come up with that model.
01:17:05 And I’ve been thinking about that a lot lately.
01:17:07 There’s some interesting stories you can tell
01:17:09 about why the right kind of computational system
01:17:13 might develop these introspective models of itself
01:17:17 that attribute to itself these special properties.
01:17:21 So that meta problem is a research program for everyone.
01:17:25 And then if you’ve got attraction
01:17:27 to sort of simple views, desert landscapes and so on,
01:17:31 then you can go all the way
01:17:32 with what people call illusionism
01:17:34 and say, in fact, consciousness itself is not real.
01:17:37 What is real is just these introspective models
01:17:42 we have that tell us that we’re conscious.
01:17:46 So the view is very simple, very attractive, very powerful.
01:17:49 The trouble is, of course, it has to say
01:17:51 that deep down, consciousness is not real.
01:17:55 We’re not actually experiencing right now.
01:17:57 And it looks like it’s just contradicting
01:17:59 a fundamental datum of our existence.
01:18:02 And this is why most people find this view crazy.
01:18:06 Just as they find panpsychism crazy in one way,
01:18:08 people find illusionism crazy in another way.
01:18:13 But I mean, so yes, it has to deny
01:18:18 this fundamental datum of our existence.
01:18:20 Now, that makes the view sort of frankly unbelievable
01:18:24 for most people.
01:18:25 On the other hand, the view developed right
01:18:28 might be able to explain why we find it unbelievable.
01:18:31 Because these models are so deeply hardwired into our head.
01:18:34 And they’re all integrated.
01:18:36 You can’t escape the illusion.
01:18:38 And it’s a crazy possibility.
01:18:40 Is it possible that the entirety of the universe,
01:18:43 our planet, all the people in New York,
01:18:46 all the organisms on our planet,
01:18:49 including me here today, are not real in that sense?
01:18:54 They’re all part of an illusion inside of Dave Chalmers’s head.
01:18:59 I think all this could be a simulation.
01:19:02 No, but not just a simulation.
01:19:04 Because the simulation kind of is outside of you.
01:19:09 A dream?
01:19:10 What if it’s all an illusion?
01:19:12 Yes, a dream that you’re experiencing.
01:19:14 That’s, it’s all in your mind, right?
01:19:18 Is that, can you take illusionism that far?
01:19:23 Well, there’s illusionism about the external world
01:19:26 and illusionism about consciousness.
01:19:28 And these might go in different directions.
01:19:30 Illusionism about the external world
01:19:31 kind of takes you back to Descartes.
01:19:34 And yeah, could all this be produced by an evil demon?
01:19:37 Descartes himself also had the dream argument.
01:19:39 He said, how do you know you’re not dreaming right now?
01:19:42 How do you know this is not an amazing dream?
01:19:43 And it’s at least a possibility that yeah,
01:19:46 this could be some super duper complex dream
01:19:49 in the next universe up.
01:19:51 I guess though, my attitude is that just as,
01:19:57 when Descartes thought that if the evil demon was doing it,
01:20:00 it’s not real.
01:20:01 A lot of people these days say if a simulation is doing it,
01:20:04 it’s not real.
01:20:05 As I was saying before, I think even if it’s a simulation,
01:20:08 that doesn’t stop this from being real.
01:20:09 It just tells us what the world is made of.
01:20:11 Likewise, if it’s a dream,
01:20:12 it could turn out that all this is like my dream
01:20:15 created by my brain in the next universe up.
01:20:19 My own view is that wouldn’t stop this physical world
01:20:21 from being real.
01:20:22 It would turn out this cup at the most fundamental level
01:20:26 was made of a bit of say my consciousness
01:20:28 in the dreaming mind at the next level up.
01:20:31 Maybe that would give you a weird kind of panpsychism
01:20:35 about reality, but it wouldn’t show that the cup isn’t real.
01:20:39 It would just tell us it’s ultimately made of processes
01:20:42 in my dreaming mind.
01:20:43 So I’d resist the idea that if the physical world is a dream,
01:20:48 then it’s an illusion.
01:20:50 That’s right.
01:20:52 By the way, perhaps you have an interesting thought
01:20:54 about it.
01:20:55 Why is Descartes’ demon or genius considered evil?
01:21:02 Why couldn’t have been a benevolent one
01:21:04 that had the same powers?
01:21:05 Yeah, I mean, Descartes called it the malign genie,
01:21:08 the evil genie or evil genius.
01:21:12 Malign, I guess was the word.
01:21:14 But yeah, it’s an interesting question.
01:21:15 I mean, a later philosopher, Berkeley, said,
01:21:20 no, in fact, all this is done by God.
01:21:25 God actually supplies you all of these perceptions
01:21:30 and ideas and that’s how physical reality is sustained.
01:21:33 And interestingly, Berkeley’s God is doing something
01:21:36 that doesn’t look so different
01:21:38 from what Descartes’ evil demon was doing.
01:21:41 It’s just that Descartes thought it was deception
01:21:43 and Berkeley thought it was not.
01:21:46 And I’m actually more sympathetic to Berkeley here.
01:21:51 Yeah, this evil demon may be trying to deceive you,
01:21:54 but I think, okay, well, the evil demon
01:21:56 may just be working under a false philosophical theory.
01:22:01 It thinks it’s deceiving you, it’s wrong.
01:22:02 It’s like the machines in The Matrix.
01:22:04 They thought they were deceiving you
01:22:06 into thinking that all this stuff is real.
01:22:07 I think, no, if we’re in a matrix, it’s all still real.
01:22:11 Yeah, the philosopher O.K. Bouwsma had a nice story
01:22:15 about this, about 50 years ago, about Descartes’ evil demon,
01:22:19 where he said this demon spends all its time
01:22:21 trying to fool people, but fails
01:22:24 because somehow all the demon ends up doing
01:22:26 is constructing realities for people.
01:22:30 So yeah, I think that maybe it’s a very natural
01:22:33 to take this view that if we’re in a simulation
01:22:35 or evil demon scenario or something,
01:22:38 then none of this is real.
01:22:40 But I think it may be ultimately a philosophical mistake,
01:22:43 especially if you take on board sort of the view of reality
01:22:46 where what matters to reality is really its structure,
01:22:50 something like its mathematical structure and so on,
01:22:52 which seems to be the view that a lot of people take
01:22:54 from contemporary physics.
01:22:56 And it looks like you can find
01:22:57 all that mathematical structure in a simulation,
01:23:01 maybe even in a dream and so on.
01:23:03 So as long as that structure is real,
01:23:05 I would say that’s enough for the physical world to be real.
01:23:08 Yeah, the physical world may turn out
01:23:10 to be somewhat more intangible than we had thought
01:23:13 and have a surprising nature.
01:23:15 We’ve already gotten very used to that from modern science.
01:23:19 See, you’ve kind of alluded
01:23:21 that you don’t have to have consciousness
01:23:23 for high levels of intelligence,
01:23:25 but to create truly general intelligence systems,
01:23:29 AGI systems at human level intelligence
01:23:32 and perhaps super human level intelligence,
01:23:34 you’ve talked about that you feel like
01:23:37 that kind of thing might be very far away,
01:23:38 but nevertheless, when we reached that point,
01:23:43 do you think consciousness
01:23:46 from an engineering perspective is needed
01:23:49 or at least highly beneficial for creating an AGI system?
01:23:54 Yeah, no one knows what consciousness is for functionally.
01:23:57 So right now there’s no specific thing we can point to
01:24:00 and say, you need consciousness for that.
01:24:05 So my inclination is to believe
01:24:06 that in principle AGI is possible.
01:24:09 The very least I don’t see why
01:24:11 someone couldn’t simulate a brain,
01:24:13 ultimately have a computational system
01:24:16 that produces all of our behavior.
01:24:18 And if that’s possible,
01:24:19 I’m sure vastly many other computational systems
01:24:22 of equal or greater sophistication are possible
01:24:27 with all of our cognitive functions and more.
01:24:29 My inclination is to think that
01:24:32 once you’ve got all these cognitive functions,
01:24:35 perception, attention, reasoning,
01:24:39 introspection, language, emotion, and so on,
01:24:44 it’s very likely you’ll have consciousness as well.
01:24:49 So at least it’s very hard for me to see
01:24:50 how you’d have a system that had all those things
01:24:52 while somehow bypassing consciousness.
01:24:55 So just naturally it’s integrated quite naturally.
01:25:00 There’s a lot of overlap in the kinds of function
01:25:02 required to achieve each of those things,
01:25:04 so you can’t disentangle them
01:25:07 even when you’re recreating them.
01:25:08 It seems so, at least in us.
01:25:09 But we don’t know what the causal role of consciousness
01:25:13 in the physical world is, what it does.
01:25:14 I mean, just say it turns out
01:25:15 consciousness does something very specific
01:25:17 in the physical world like collapsing wave functions
01:25:20 as on one common interpretation of quantum mechanics.
01:25:24 Then ultimately we might find some place
01:25:25 where it actually makes a difference
01:25:27 and we could say, ah,
01:25:28 here is where in collapsing wave functions
01:25:30 it’s driving the behavior of a system.
01:25:32 And maybe it could even turn out that for AGI,
01:25:37 you’d need something playing that role.
01:25:39 I mean, if you wanted to connect this to free will,
01:25:41 some people think consciousness collapsing wave functions,
01:25:43 that would be how the conscious mind exerts effect
01:25:47 on the physical world and exerts its free will.
01:25:50 And maybe it could turn out that any AGI
01:25:53 that didn’t utilize that mechanism would be limited
01:25:56 in the kinds of functionality that it had.
01:25:59 I don’t myself find that plausible.
01:26:02 I think probably that functionality could be simulated.
01:26:05 But you can imagine once we had a very specific idea
01:26:07 about the role of consciousness in the physical world,
01:26:10 this would have some impact on the capacity of AGIs.
01:26:14 And if it was a role that could not be duplicated elsewhere,
01:26:17 then we’d have to find some way to either
01:26:22 get consciousness in the system to play that role
01:26:24 or to simulate it.
01:26:25 If we can isolate a particular role for consciousness,
01:26:29 of course, it seems like an incredibly difficult thing.
01:26:35 Do you have worries about existential threats
01:26:39 of conscious intelligent beings that are not us?
01:26:46 So certainly, I’m sure you’re worried about us
01:26:50 from an existential threat perspective,
01:26:52 but outside of us, AI systems.
01:26:55 There’s a couple of different kinds
01:26:56 of existential threats here.
01:26:58 One is an existential threat to consciousness generally.
01:27:01 I mean, yes, I care about humans
01:27:04 and the survival of humans and so on,
01:27:05 but just say it turns out that eventually we’re replaced
01:27:10 by some artificial beings that aren’t humans,
01:27:12 but are somehow our successors.
01:27:15 They still have good lives.
01:27:16 They still do interesting and wonderful things
01:27:19 with the universe.
01:29:20 I don’t think that’s so bad.
01:27:23 That’s just our successors.
01:27:24 We were one stage in evolution.
01:27:26 Something different, maybe better came next.
01:27:29 If on the other hand, all of consciousness was wiped out,
01:27:33 that would be a very serious moral disaster.
01:27:36 One way that could happen is by all intelligent life
01:27:40 being wiped out.
01:27:42 And many people think that, yeah,
01:27:43 once you get to humans and AIs of amazing sophistication
01:27:47 where everyone has got the ability to create weapons
01:27:51 that can destroy the whole universe just by pressing a button,
01:27:55 then maybe it’s inevitable all intelligent life will die out.
01:28:00 That would certainly be a disaster.
01:28:03 And we’ve got to think very hard about how to avoid that.
01:28:06 But yeah, another interesting kind of disaster
01:28:08 is that maybe intelligent life is not wiped out,
01:28:12 but all consciousness is wiped out.
01:28:14 So just say you thought,
01:28:16 unlike what I was saying a moment ago,
01:28:18 that there are two different kinds of intelligent systems,
01:28:21 some which are conscious and some which are not.
01:28:25 And just say it turns out that we create AGI
01:28:28 with a high degree of intelligence,
01:28:30 meaning a high degree of sophistication in its behavior,
01:28:34 but with no consciousness at all.
01:28:37 That AGI could take over the world maybe,
01:28:39 but then there’d be no consciousness in this world.
01:28:42 This would be a world of zombies.
01:28:44 Some people have called this the zombie apocalypse
01:28:48 because it’s an apocalypse for consciousness.
01:28:50 Consciousness is gone.
01:28:51 You’ve merely got these superintelligent,
01:28:53 nonconscious robots.
01:28:54 And I would say that’s a moral disaster in the same way,
01:28:58 in almost the same way that the world
01:28:59 with no intelligent life is a moral disaster.
01:29:02 All value and meaning may be gone from that world.
01:29:06 So these are both threats to watch out for.
01:29:09 Now, my own view is if you get super intelligence,
01:29:11 you’re almost certainly gonna bring consciousness with it.
01:29:13 So I hope that’s not gonna happen.
01:29:15 But of course, I don’t understand consciousness.
01:29:18 No one understands consciousness.
01:29:20 This is one reason for,
01:29:21 this is one reason at least among many
01:29:23 for thinking very seriously about consciousness
01:29:25 and thinking about the kind of future
01:29:27 we want to create in a world with humans and or AIs.
01:29:33 How do you feel about the possibility
01:29:35 if consciousness so naturally does come with AGI systems
01:29:39 that we are just a step in the evolution?
01:29:42 That we will be just something, a blip on the record
01:29:47 that’ll be studied in books
01:29:49 by the AGI systems centuries from now?
01:29:51 I mean, I think I’d probably be okay with that,
01:29:55 especially if somehow humans are continuous with AGI.
01:29:58 I mean, I think something like this is inevitable.
01:30:01 At the very least, humans are gonna be transformed.
01:30:03 We’re gonna be augmented by technology.
01:30:06 It’s already happening in all kinds of ways.
01:30:08 We’re gonna be transformed by technology
01:30:11 where our brains are gonna be uploaded
01:30:13 and computationally enhanced.
01:30:15 And eventually that line between what’s a human
01:30:18 and what’s an AI may be kind of hard to draw.
01:30:23 How much does it matter, for example,
01:30:24 that some future being a thousand years from now
01:30:28 that somehow descended from us actually still has biology?
01:30:32 I think it would be nice if you could kind of point
01:30:34 to its cognitive system, point to some parts
01:30:36 that had some roots in us and trace a continuous line there.
01:30:40 That would be selfishly nice for me to think that,
01:30:43 okay, I’m connected to this thread line
01:30:46 through the future of the world,
01:30:48 but if it turns out, okay, there’s a jump there.
01:30:50 They found a better way to design cognitive systems.
01:30:53 They designed a wholly new kind of thing.
01:30:55 And the only line is some causal chain of design,
01:30:59 systems that design better systems.
01:31:03 Is that so much worse?
01:31:04 I don’t know.
01:31:05 We’re still at least part of a causal chain of design.
01:31:08 And yes, they’re not humans,
01:31:09 but still they’re our successors.
01:31:11 So, I mean, ultimately I think it’s probably inevitable
01:31:14 that something like that will happen.
01:31:16 And at least we were part of the process.
01:31:19 It’d be nice if they still cared enough about us
01:31:23 to maybe engage with our arguments.
01:31:28 I’m really hoping that the AGIs are gonna solve
01:31:30 all the problems of philosophy.
01:31:31 They’ll come back and read all this crappy work
01:31:35 from the 20th and 21st centuries
01:31:36 on the hard problem of consciousness
01:31:37 and say, here is why they got it wrong.
01:31:39 And if that happened,
01:31:41 then I’d really feel like I was part of
01:31:42 at least an intellectual process over centuries.
01:31:44 And that would be kind of cool.
01:31:45 Well, I’m pretty sure they would clone
01:31:48 or they would recreate David Chalmers
01:31:50 and for the fun of it,
01:31:52 sort of bring back other philosophers.
01:31:55 Yeah, bring back Descartes.
01:31:56 Descartes and just put them in a room and just watch.
01:31:58 It’ll be a Netflix of the future show
01:32:02 where you bring philosophers,
01:32:04 100% human philosophers from previous generations,
01:32:07 put them in a room, and watch them.
01:32:09 I am totally up for that.
01:32:11 Simulators, AGIs of the future,
01:32:14 if you’re watching this podcast, do that.
01:32:17 I would like to be recreated and hang out with Descartes.
01:32:20 So Descartes would be the first?
01:32:22 If you could hang out, as part of such a TV show,
01:32:26 with a philosopher who’s no longer with us from long ago,
01:32:30 who would you choose?
01:32:33 Descartes would have to be right up there.
01:32:34 Oh, actually a couple of months ago,
01:32:36 I got to have a conversation with Descartes.
01:32:39 an actor who’s actually a philosopher
01:32:40 came out on stage playing Descartes.
01:32:42 I didn’t know this was gonna happen.
01:32:43 This was just after I gave a talk,
01:32:46 and he told me that my ideas were crap
01:32:50 and all derived from him.
01:32:51 And so we had a long argument.
01:32:53 This was great.
01:32:54 I would love to see what Descartes would think about AI,
01:32:57 for example, and the modern neuroscience.
01:32:59 And so I suspect not too much would surprise him,
01:33:01 but yeah, William James,
01:33:07 As a psychologist of consciousness,
01:33:08 I think James was probably the richest.
01:33:14 But, oh, there’s Immanuel Kant.
01:33:17 I never really understood what he was up to;
01:33:19 maybe I would if I got to actually talk to him about some of this.
01:33:22 Hey, there was Princess Elizabeth who talked with Descartes
01:33:25 and who really got at the problems
01:33:28 of how Descartes ideas of a nonphysical mind
01:33:32 interacting with the physical body couldn’t really work.
01:33:37 Most philosophers
01:33:39 think she’s been proved right.
01:33:40 So maybe put me in a room with Descartes
01:33:42 and Princess Elizabeth and we can all argue it out.
01:33:47 What kind of future?
01:33:49 So we talked about zombies, a concerning future,
01:33:53 but what kind of future excites you?
01:33:56 What do you think if we look forward sort of,
01:34:00 we’re at the very early stages
01:34:02 of understanding consciousness.
01:34:04 And we’re now at the early stages
01:34:05 of being able to engineer complex, interesting systems
01:34:10 that have degrees of intelligence.
01:34:11 And maybe one day we’ll have degrees of consciousness,
01:34:14 maybe be able to upload brains,
01:34:17 all those possibilities, virtual reality.
01:34:20 Is there a particular aspect to this future world
01:34:22 that just excites you?
01:34:24 Well, I think there are lots of different aspects.
01:34:26 I mean, frankly, I want it to hurry up and happen.
01:34:29 It’s like, yeah, we’ve had some progress lately in AI and VR,
01:34:33 but in the grand scheme of things, it’s still kind of slow.
01:34:35 The changes are not yet transformative.
01:34:38 And I’m in my fifties, I’ve only got so long left.
01:34:42 I’d like to see really serious AI in my lifetime
01:34:45 and really serious virtual worlds.
01:34:48 ’Cause yeah,
01:34:49 I would like to be able to hang out in a virtual reality
01:34:52 which is richer than this reality,
01:34:56 to really get to inhabit fundamentally different kinds
01:35:00 of spaces.
01:35:02 Well, I would very much like to be able to upload
01:35:05 my mind onto a computer.
01:35:07 So maybe I don’t have to die.
01:35:11 Maybe gradually replace my neurons
01:35:14 with silicon chips and inhabit a computer.
01:35:17 Selfishly, that would be wonderful.
01:35:19 I suspect I’m not gonna quite get there in my lifetime,
01:35:24 but once that’s possible,
01:35:26 then you’ve got the possibility of transforming
01:35:28 your consciousness in remarkable ways,
01:35:30 augmenting it, enhancing it.
01:35:33 So let me ask then,
01:35:34 if such a system is a possibility within your lifetime
01:35:39 and you were given the opportunity to become immortal
01:35:44 in this kind of way, would you choose to be immortal?
01:35:50 Yes, I totally would.
01:35:52 I know some people say they couldn’t,
01:35:54 it’d be awful to be immortal, be so boring or something.
01:35:59 I don’t see, I really don’t see why this might be.
01:36:04 I mean, even if it’s just ordinary life that continues,
01:36:07 ordinary life is not so bad.
01:36:09 But furthermore, I kind of suspect that,
01:36:12 if the universe is gonna go on forever or indefinitely,
01:36:16 it’s gonna continue to be interesting.
01:36:19 I don’t take the view that we just have
01:36:22 this one romantic point of interest now
01:36:24 and afterwards it’s all gonna be boring,
01:36:26 superintelligent stasis.
01:36:28 I guess my vision is more like,
01:36:30 no, it’s gonna continue to be infinitely interesting.
01:36:32 Something like as you go up the set theoretic hierarchy,
01:36:36 you go from the finite cardinals to Aleph zero
01:36:42 and then from there to Aleph one and Aleph two
01:36:46 and maybe the continuum and you keep taking power sets
01:36:49 and in set theory, they’ve got these results
01:36:51 that actually all this is fundamentally unpredictable.
01:36:54 It doesn’t follow any simple computational patterns.
01:36:57 There’s new levels of creativity
01:36:58 as the set theoretic universe expands and expands.
01:37:01 I guess that’s my future.
01:37:03 That’s my vision of the future.
01:37:04 That’s my optimistic vision
01:37:06 of the future of super intelligence.
01:37:08 It will keep expanding and keep growing,
01:37:09 but still being fundamentally unpredictable at many points.
01:37:12 I mean, yes, this creates all kinds of worries
01:37:15 like couldn’t it all be fragile and be destroyed at any point.
01:37:18 So we’re gonna need a solution to that problem.
01:37:21 But if we get to stipulate that I’m immortal,
01:37:23 well, I hope that I’m not just immortal and stuck
01:37:25 in a single world forever,
01:37:27 but I’m immortal and get to take part in this process
01:37:30 of going through infinitely rich, created futures.
01:37:34 Rich, unpredictable, exciting.
01:37:36 Well, I think I speak for a lot of people in saying,
01:37:39 I hope you do become immortal and there’ll be
01:37:41 that Netflix show of the future,
01:37:43 where you get to argue with Descartes,
01:37:47 perhaps for all eternity.
01:37:49 So David, it was an honor.
01:37:51 Thank you so much for talking today.
01:37:52 Thanks, it was a pleasure.
01:37:55 Thanks for listening to this conversation
01:37:57 and thank you to our presenting sponsor, Cash App.
01:38:00 Download it, use code LexPodcast,
01:38:02 you’ll get $10 and $10 will go to FIRST,
01:38:05 an organization that inspires and educates young minds
01:38:08 to become science and technology innovators of tomorrow.
01:38:12 If you enjoy this podcast, subscribe on YouTube,
01:38:14 give it five stars on Apple Podcast,
01:38:16 follow on Spotify, support it on Patreon,
01:38:19 or simply connect with me on Twitter at Lex Fridman.
01:38:23 And now let me leave you with some words
01:38:24 from David Chalmers.
01:38:26 Materialism is a beautiful and compelling view of the world,
01:38:30 but to account for consciousness,
01:38:32 we have to go beyond the resources it provides.
01:38:35 Thank you for listening and hope to see you next time.