Scott Aaronson: Computational Complexity and Consciousness #130

Transcript

00:00:00 The following is a conversation with Scott Aaronson, his second time on the podcast.

00:00:04 He is a professor at UT Austin, director of the Quantum Information Center,

00:00:10 and previously a professor at MIT. Last time we talked about quantum computing. This time

00:00:17 we talk about computational complexity, consciousness, and theories of everything.

00:00:23 I’m recording this intro, as you may be able to tell, in a very strange room in the middle of the

00:00:31 night. I’m not really sure how I got here or how I’m going to get out, but Hunter S. Thompson

00:00:39 saying I think applies to today and the last few days and actually the last couple of weeks.

00:00:46 Life should not be a journey to the grave with the intention of arriving safely in a pretty and well

00:00:51 preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally

00:00:59 worn out, and loudly proclaiming, wow, what a ride. So I figured whatever I’m up to here,

00:01:08 and yes, lots of wine is involved, I’m going to have to improvise, have to improvise,

00:01:14 have to improvise, hence this recording. Okay, quick mention of each sponsor,

00:01:20 followed by some thoughts related to the episode. First sponsor is SimpliSafe, a home security

00:01:25 company I use to monitor and protect my apartment, though of course I’m always prepared with a fall

00:01:32 back plan, as a man in this world must always be. Second sponsor is Eight Sleep, a mattress that cools

00:01:43 itself, measures heart rate variability, has an app, and has given me yet another reason to look

00:01:50 forward to sleep, including the all important power nap. Third sponsor is ExpressVPN, the VPN

00:01:57 I’ve used for many years to protect my privacy on the internet. Finally, the fourth sponsor is Better

00:02:05 Help, online therapy when you want to face your demons with a licensed professional, not just

00:02:11 by doing David Goggins-like physical challenges like I seem to do on occasion. Please check out

00:02:17 these sponsors in the description to get a discount and to support the podcast.

00:02:22 As a side note, let me say that this is the second time I’ve recorded a conversation outdoors.

00:02:28 The first one was with Stephen Wolfram when it was actually sunny out; in this case it was raining,

00:02:34 which is why I found a covered outdoor patio. But I learned a valuable lesson, which is that

00:02:40 raindrops can be quite loud on the hard metal surface of a patio cover. I did my best with

00:02:47 the audio, I hope it still sounds okay to you. I’m learning, always improving. In fact, as Scott says,

00:02:55 if you always win, then you’re probably doing something wrong. To be honest, I get pretty upset

00:03:00 with myself when I fail, small or big, but I’ve learned that this feeling is priceless. It can be

00:03:08 fuel, when channeled into concrete plans of how to improve. So if you enjoy this thing, subscribe

00:03:16 on YouTube, review it with five stars on Apple Podcasts, follow on Spotify, support on Patreon,

00:03:22 or connect with me on Twitter at Lex Fridman. And now, here’s my conversation with Scott Aaronson.

00:03:30 Let’s start with the most absurd question, but I’ve read you write some fascinating stuff about

00:03:34 it, so let’s go there. Are we living in a simulation? What difference does it make,

00:03:40 Lex? I mean, I’m serious. What difference? Because if we are living in a simulation,

00:03:46 it raises the question, how real does something have to be in a simulation for it to be sufficiently

00:03:52 immersive for us humans? But I mean, even in principle, how could we ever know if we were in

00:03:57 one, right? A perfect simulation, by definition, is something that’s indistinguishable from the

00:04:02 real thing. Well, we didn’t say anything about perfect. No, no, that’s right. Well, if it was

00:04:07 an imperfect simulation, if we could hack it, find a bug in it, then that would be one thing,

00:04:13 right? If this was like The Matrix and there was a way for me to do flying kung fu moves or

00:04:19 something by hacking the simulation, well then we would have to cross that bridge when we came to

00:04:24 it, wouldn’t we? At that point, it’s hard to see the difference between that and just what people

00:04:33 would ordinarily refer to as a world with miracles. What about from a different perspective, thinking

00:04:39 about the universe as a computation, like a program running on a computer? That’s kind of

00:04:44 a neighboring concept. It is. It is an interesting and reasonably well defined question to ask,

00:04:50 is the world computable? Does the world satisfy what we would call in CS the Church-Turing

00:04:57 thesis? That is, could we take any physical system and simulate it to any desired precision by a

00:05:07 Turing machine, given the appropriate input data, right? And so far, I think the indications are

00:05:13 pretty strong that our world does seem to satisfy the Church-Turing thesis. At least if it doesn’t,

00:05:20 then we haven’t yet discovered why not. But now, does that mean that our universe is a simulation?

00:05:27 Well, that word seems to suggest that there is some other larger universe in which it is running.

00:05:34 And the problem there is that if the simulation is perfect, then we’re never going to be able to get

00:05:40 any direct evidence about that other universe. We will only be able to see the effects of the

00:05:47 computation that is running in this universe. Well, let’s imagine an analogy. Let’s imagine

00:05:53 a PC, a personal computer, a computer. Is it possible with the advent of artificial intelligence

00:06:01 for the computer to look outside of itself to see, to understand its creator? I mean,

00:06:08 that’s a simple, is that a ridiculous analogy? Well, I mean, with the computers that we actually

00:06:14 have, I mean, first of all, we all know that humans have done an imperfect job of enforcing

00:06:23 the abstraction boundaries of computers, right? Like you may try to confine some program to a

00:06:29 playpen, but as soon as there’s one memory allocation error in the C program, then the

00:06:37 program has gotten out of that playpen and it can do whatever it wants, right? This is how most hacks

00:06:43 work, you know, viruses and worms and exploits. And, you know, you would have to imagine that an

00:06:49 AI would be able to discover something like that. Now, you know, of course, if we could actually

00:06:55 discover some exploit of reality itself, then, you know, in some

00:07:02 sense we wouldn’t have to philosophize about this, right? This would no longer be a metaphysical

00:07:08 conversation. But the question is, what would that hack look like? Yeah, well, I have no idea. I mean,

00:07:18 Peter Shor, you know, the very famous person in quantum computing, of course, has joked that

00:07:25 maybe the reason why we haven’t yet, you know, integrated general relativity and quantum mechanics

00:07:31 is that, you know, the part of the universe that depends on both of them was actually left

00:07:36 unspecified. And if we ever tried to do an experiment involving the singularity of a black

00:07:42 hole or something like that, then, you know, the universe would just generate an overflow error or

00:07:47 something, right? Yeah, we would just crash the universe. Now, the universe

00:07:55 has seemed to hold up pretty well for, you know, 14 billion years, right? So my

00:08:03 Occam’s razor kind of guess has to be that it will continue to hold up, you know,

00:08:09 that the fact that we don’t know the laws of physics governing some phenomenon is not a strong

00:08:15 sign that probing that phenomenon is going to crash the universe, right? But, you know, of course,

00:08:21 I could be wrong. But do you think on the physics side of things, you know, there’s been recently a

00:08:28 few folks, Eric Weinstein and Stephen Wolfram, who came out with theories of everything. I think

00:08:33 there’s a history of physicists dreaming and working on the unification of all the laws of

00:08:39 physics. Do you think it’s possible that once we understand more physics, not necessarily the

00:08:46 unification of the laws, but just understand physics more deeply at the fundamental level,

00:08:50 we’ll be able to start, you know, I mean, part of this is humorous, but looking to see if there’s

00:08:58 any bugs in the universe that could be exploited for, you know, traveling at not just the speed of

00:09:05 light, but just traveling faster than our current spaceships can travel, all that kind of stuff.

00:09:10 Well, I mean, to travel faster than our current spaceships could travel, you wouldn’t need to

00:09:15 find any bug in the universe, right? The known laws of physics, you know, let us go much faster

00:09:20 up to the speed of light, right? And, you know, when people want to go faster than the speed of

00:09:25 light, well, we actually know something about what that would entail, namely that, you know,

00:09:30 according to relativity, that seems to entail communication backwards in time. Okay, so then

00:09:36 you have to worry about closed timelike curves and all of that stuff. So, you know, in some sense,

00:09:41 we sort of know the price that you have to pay for these things, right?

00:09:45 But under the current understanding of physics.

00:09:48 That’s right. That’s right. We can’t, you know, say that they’re impossible, but we, you know,

00:09:53 we know that sort of a lot else in physics breaks, right? So, now regarding Eric Weinstein

00:10:01 and Stephen Wolfram, like, I wouldn’t say that either of them has a theory of everything. I

00:10:06 would say that they have ideas that they hope, you know, could someday lead to a theory of everything.

00:10:11 Is that a worthy pursuit?

00:10:13 Well, I mean, certainly, let’s say by theory of everything, you know, we don’t literally mean a

00:10:18 theory of cats and of baseball and, you know, but we just mean it in the more limited sense of

00:10:24 everything, a fundamental theory of physics, right? Of all of the fundamental interactions of

00:10:31 physics, of course, such a theory, even after we had it, you know, would leave the entire question

00:10:38 of all the emergent behavior, right? You know, to be explored. So, it’s only everything for a

00:10:45 specific definition of everything. Okay, but in that sense, I would say, of course, that’s worth

00:10:50 pursuing. I mean, that is the entire program of fundamental physics, right? All of my friends who

00:10:56 do quantum gravity, who do string theory, who do anything like that, that is what’s motivating them.

00:11:02 Yeah, it’s funny, though, but, I mean, Eric Weinstein talks about this. It is, I don’t know

00:11:06 much about the physics world, but I know about the AI world, and it is a little, it is a little bit

00:11:11 taboo to talk about AGI, for example, on the AI side. So, really, to talk about the big dream of

00:11:22 the community, I would say, because it seems so far away, it’s almost taboo to bring it up, because,

00:11:29 you know, it’s seen as the kind of people that dream about creating a truly superhuman level

00:11:34 intelligence. That’s really far out there, people, because we’re not even close to that. And it feels

00:11:40 like the same thing is true for the physics community. I mean, Stephen Hawking certainly

00:11:45 talked constantly about theory of everything, right? You know, I mean, people, you know,

00:11:51 use those terms who were, you know, some of the most respected people in the whole world of

00:11:57 physics, right? But, I mean, I think that the distinction that I would make is that people

00:12:03 might react badly if you use the term in a way that suggests that you, you know, thinking about

00:12:09 it for five minutes, have come up with this major new insight about it, right? It’s difficult. Stephen

00:12:16 Hawking is not a great example, because I think you can do whatever the heck you want when you

00:12:23 get to that level. And I certainly see, like, with senior faculty, at that

00:12:29 point, one of the nice things about getting older is you stop giving a damn. But the

00:12:35 community as a whole tends to roll its eyes very quickly at stuff that’s outside the

00:12:40 quote unquote mainstream. Well, let me put it this way. I mean, if you asked, you know,

00:12:44 Ed Witten, let’s say, who is, you know, you might consider the leader of the string community,

00:12:49 and thus, you know, very, very mainstream, in a certain sense, but he would have no hesitation

00:12:54 in saying, you know, of course, you know, they’re looking for a, you know, you know, a unified

00:13:01 description of nature of, you know, of general relativity of quantum mechanics of all the

00:13:07 fundamental interactions of nature, right? Now, you know, whether people would call that a theory

00:13:13 of everything, whether they would use that term, that might vary. You know, Lenny Susskind would

00:13:18 definitely have no problem telling you that that’s what we want, right?

00:13:21 For me, who loves human beings and psychology,

00:13:25 it’s kind of ridiculous to say a theory that unifies the laws of physics gets you to understand

00:13:33 everything. I would say you’re not even close to understanding everything.

00:13:36 Yeah, right. I mean, the word everything is a little ambiguous here. And then people will get

00:13:43 into debates about, you know, reductionism versus emergentism and blah, blah, blah. And so in not

00:13:50 wanting to say theory of everything, people might just be trying to short circuit that debate and

00:13:55 say, you know, look, you know, yes, we want a fundamental theory of, you know, the particles

00:14:01 and interactions of nature.

00:14:02 Let me bring up the next topic that people don’t want to mention, although they’re getting

00:14:05 more comfortable with it, is consciousness. You mentioned that you have a talk on consciousness

00:14:10 that I watched five minutes of, but the internet connection was really bad.

00:14:13 Was this my talk about, you know, refuting the integrated information theory?

00:14:18 Yes.

00:14:18 Which was a particular account of consciousness that, yeah, I think one can just show it doesn’t

00:14:22 work. Much harder to say what does work.

00:14:25 Let me ask, maybe it’d be nice to comment on, you talk also about, like, the semi-hard problem

00:14:34 of consciousness, or the almost-hard problem, or the kind-of-hard…

00:14:36 Pretty hard problem, I think I call it.

00:14:38 So maybe can you talk about that, their idea of the approach to modeling consciousness and

00:14:47 why you don’t find it convincing? What is it, first of all?

00:14:49 Okay, well, so what I called the pretty hard problem of consciousness, this is my term,

00:14:55 although many other people have said something equivalent to this, okay? But it’s just, you know,

00:15:02 the problem of, you know, giving an account of just which physical systems are conscious and

00:15:09 which are not. Or, you know, if there are degrees of consciousness, then quantifying how conscious

00:15:15 a given system is.

00:15:16 Oh, awesome. So that’s the pretty hard problem.

00:15:19 TK Yeah, that’s what I mean.

00:15:20 TK That’s it. I’m adopting it. I love it. That’s a good ring to it.

00:15:23 And so, you know, the infamous hard problem of consciousness is to explain how something

00:15:29 like consciousness could arise at all, you know, in a material universe, right? Or, you know,

00:15:34 why does it ever feel like anything to experience anything, right? And, you know, so I’m trying to

00:15:40 distinguish from that problem, right? And say, you know, no, okay, I would merely settle for an

00:15:46 account that could say, you know, is a fetus conscious? You know, if so, at which trimester?

00:15:52 You know, is a dog conscious? You know, what about a frog, right?

00:15:58 Or even, as a precondition, you take it that both these things are conscious,

00:16:02 tell me which is more conscious.

00:16:03 Yeah, for example, yes. I mean, if consciousness is some multidimensional vector,

00:16:09 well, just tell me in which respects these things are conscious and in which respect they aren’t,

00:16:14 right? And, you know, and have some principled way to do it where you’re not, you know,

00:16:19 carving out exceptions for things that you like or don’t like, but could somehow take a description

00:16:24 of an arbitrary physical system, and then just based on the physical properties of that system,

00:16:32 or the informational properties, or how it’s connected, or something like that,

00:16:36 just in principle, calculate, you know, its degree of consciousness, right? I mean, this,

00:16:42 this would be the kind of thing that we would need, you know, if we wanted to address questions,

00:16:47 like, you know, what does it take for a machine to be conscious, right? Or when are, you know,

00:16:52 when should we regard AIs as being conscious? So now this IIT, this integrated information theory,

00:17:01 which has been put forward by Giulio Tononi and a bunch of his

00:17:09 collaborators over the last decade or two, this is noteworthy, I guess, as a direct attempt to

00:17:17 answer that question, to address the pretty hard problem,

00:17:22 right? And they give a criterion that’s just based on how a system is connected. So

00:17:29 it’s up to you to sort of abstract the system, like a brain or a microchip, as a collection of

00:17:36 components that are connected to each other by some pattern of connections, you know, and,

00:17:41 and to specify how the components can influence each other, you know, like where the inputs go,

00:17:48 you know, where they affect the outputs. But then once you’ve specified that,

00:17:51 then they give this quantity that they call phi, you know, the Greek letter phi.

00:17:56 And the definition of phi has actually changed over time. It changes from one paper to another,

00:18:02 but in all of the variations, it involves something about what we in computer science

00:18:08 would call graph expansion. So basically what this means is that they want, in order to get a

00:18:14 large value of phi, it should not be possible to take your system and partition it into two

00:18:22 components that are only weakly connected to each other. Okay. So whenever we take our system and

00:18:28 sort of try to split it up into two, then there should be lots and lots of connections going

00:18:33 between the two components. Okay. Well, I understand what that means on a graph.
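
As a minimal illustration of the graph-expansion idea just described, here is a toy score in Python. It is only a sketch of the “no weak bipartition” property, not Tononi’s actual phi, whose definition is far more involved and has changed between papers; the function and example graphs are hypothetical.

```python
from itertools import combinations

def toy_integration(nodes, edges):
    # Over every way of splitting the nodes into two non-empty parts,
    # find the split with the fewest crossing edges, normalized by the
    # size of the smaller side. A high score means there is no weak cut
    # anywhere, i.e. the graph is expander-like.
    node_list = sorted(nodes)
    n = len(node_list)
    best = float("inf")
    for k in range(1, n // 2 + 1):
        for side in combinations(node_list, k):
            a = set(side)
            crossing = sum(1 for u, v in edges if (u in a) != (v in a))
            best = min(best, crossing / k)
    return best

ring = [(i, (i + 1) % 8) for i in range(8)]
print(toy_integration(range(8), ring))    # 0.5: cutting the ring in half severs only 2 edges
clique = [(i, j) for i in range(8) for j in range(i + 1, 8)]
print(toy_integration(range(8), clique))  # 4.0: every split crosses many edges
```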

00:18:37 Do they formalize how to construct such a graph, or data structure, whatever?

00:18:44 Or is this one of the criticisms I’ve heard you make, that a lot of the very interesting

00:18:50 specifics are usually communicated through, like, natural language, through words?

00:18:56 So it’s like the details aren’t always clear. Well, it’s true. I mean, they have nothing even

00:19:02 resembling a derivation of this phi. Okay. So what they do is they state a whole bunch of postulates,

00:19:09 you know, axioms that they think that consciousness should satisfy. And then there’s some verbal

00:19:15 discussion. And then at some point, phi appears. Right. And this was the first

00:19:20 thing that really made the hair on my neck stand up, to be honest, because they are acting as if there

00:19:26 is a derivation. They’re acting as if you’re supposed to think that this is a derivation,

00:19:31 and there’s nothing even remotely resembling a derivation. They just pull the phi out of a hat

00:19:36 completely. Is one of the key criticisms, to you, that details are missing, or is there something

00:19:41 more fundamental? That’s not even the key criticism. That’s just a side point.

00:19:45 Okay. The core of it is that they want to say that a system

00:19:50 is more conscious the larger its value of phi. And I think that that is obvious nonsense. Okay. As

00:19:57 soon as you think about it for like a minute, as soon as you think about it in terms of, could I

00:20:02 construct a system that had an enormous value of phi, like, you know, even larger than the brain

00:20:08 has, but that is just implementing an error correcting code, you know, doing nothing that we

00:20:13 would associate with, you know, intelligence or consciousness or any of it. The answer is yes,

00:20:20 it is easy to do that. And so I wrote blog posts just making this

00:20:25 point: yeah, it’s easy to do that.
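
The shape of that counterexample can be sketched with the toy score from earlier. This is an illustration only, not the precise construction from the blog post, which used actual error-correcting codes: a bank of XOR parity gates with dense random wiring has no weak bipartition, so the crude integration score comes out high, even though the circuit computes nothing but parities.

```python
import random

def xor_grid(n_bits=8, n_checks=8, fanin=4, seed=1):
    # Dependency graph of a bank of parity gates: check node c_j is the
    # XOR of `fanin` randomly chosen data bits. Densely connected, yet
    # it only computes parities.
    rng = random.Random(seed)
    nodes = [f"b{i}" for i in range(n_bits)] + [f"c{j}" for j in range(n_checks)]
    edges = [(f"b{i}", f"c{j}")
             for j in range(n_checks)
             for i in rng.sample(range(n_bits), fanin)]
    return nodes, edges

nodes, edges = xor_grid()
print(toy_integration(nodes, edges))  # typically several times the ring's 0.5
```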

00:20:31 Now, Tononi’s response to that was actually kind of incredible. I admired it in a way, because

00:20:36 instead of disputing any of it, he just bit the bullet; it was one of the most audacious

00:20:42 bullet-bitings I’ve ever seen in my career. Okay. He said: okay, then fine. This system that

00:20:49 just applies this error-correcting code is conscious, and if it has a much larger

00:20:54 value of phi than you or me, it’s much more conscious than you and me. We just

00:20:59 have to accept what the theory says, because science is not about confirming

00:21:04 our intuitions, it’s about challenging them. This is what my theory predicts: that

00:21:10 this thing is conscious, or super-duper conscious. And how are you going to prove

00:21:15 me wrong? So the way I would argue against your blog posts is I would say, yes, sure. You’re

00:21:21 right in general, but for naturally arising systems developed through the process of evolution on

00:21:28 earth, this rule of larger phi being associated with more consciousness

00:21:33 is correct. Yeah. So that’s not what he said at all. Right. Because he wants this to be

00:21:38 completely general, so we can apply it to even computers. Yeah. I mean, the whole

00:21:43 interest of the theory is the hope that it could be completely general: apply to aliens,

00:21:48 to computers, to animals, to coma patients, to any of it. Right. And so he just said, well,

00:21:59 you know, Scott is relying on his intuition, but, you know, I’m relying on this theory and,

00:22:04 you know, to me it was almost like, are we being serious here? Like,

00:22:10 okay, yes, in science we try to learn highly nonintuitive things,

00:22:16 but what we do is we first test the theory on cases where we already know the answer. Right.

00:22:22 Like if someone had a new theory of temperature, right, then, you know, maybe we

00:22:27 could check that it says that boiling water is hotter than ice. And then if it says that the sun

00:22:33 is hotter than anything, you know, you’ve ever experienced, then maybe we, we trust that

00:22:38 extrapolation. Right. But this theory is now saying that a

00:22:46 gigantic regular grid of exclusive-OR gates can be way more conscious than

00:22:53 a person, or than any animal can be, even if it

00:22:59 is so uniform that it might as well just be a blank wall. Right. And so now the

00:23:06 point is, if this theory is getting wrong the question, is a blank wall

00:23:11 more conscious than a person, then I would say, what is there for it to get right?

00:23:15 So your sense is a blank wall is not more conscious than a human being.

00:23:22 Yeah. I mean, you could say that I am taking that as one of my axioms.

00:23:27 I’m saying that if a theory of consciousness is getting that wrong,

00:23:33 then whatever it is talking about at that point, I’m not going to call it consciousness.

00:23:39 I’m going to use a different word.

00:23:40 You have to use a different word. I mean, it’s also possible, just like with intelligence,

00:23:45 that we humans conveniently define these very difficult-to-understand concepts

00:23:49 in a very human-centric way. Just like the Turing test really seems to define intelligence as a

00:23:55 thing that’s human like. Right. But I would say that with any, uh, concept, you know, there’s,

00:24:01 uh, uh, uh, you know, like we, we, we, we first need to define it. Right. And a definition is

00:24:07 only a good definition if it matches what we thought we were talking about prior to having

00:24:12 a definition. Right. And I would say that, you know, uh, fee as a definition of consciousness

00:24:19 fails that test. That is my argument. So, okay. So let’s take a further step. So you mentioned

00:24:26 that the universe might be a Turing machine, so it might be a computation, or simulatable

00:24:31 by one anyway. So what’s your sense about consciousness? Do you think

00:24:38 consciousness is computation, that we don’t need to go to any place outside of the computable universe

00:24:46 to understand consciousness, to build consciousness, to measure consciousness,

00:24:52 all those kinds of things? I don’t know. These are what have been called

00:24:57 the vertiginous questions, right? The questions where

00:25:02 you get a feeling of vertigo in thinking about them. Right. I mean, I certainly feel like

00:25:08 I am conscious in a way that is not reducible to computation, but why should you believe me?

00:25:14 Right. And if you said the same to me, then why should I believe you?

00:25:19 But as a computer scientist, I feel like a computer could achieve human-level intelligence.

00:25:27 But that’s actually a feeling and a hope, not a scientific belief. It’s just,

00:25:33 we’ve built up enough intuition, the same kind of intuition you use in your blog.

00:25:37 You know, that’s what scientists do. They, I mean, some of it is a scientific method,

00:25:41 but some of it is just damn good intuition. I don’t have a good intuition about consciousness.

00:25:45 Yeah. I’m not sure that anyone does or has in the, you know,

00:25:49 2,500 years that these things have been discussed, Lex.

00:25:53 But do you think we will? Like, I’ve gotten a chance to attend,

00:25:57 and I can’t wait to hear your opinion on this, the Neuralink event.

00:26:01 And one of the dreams there is to, you know, basically push neuroscience forward.

00:26:07 And the hope with neuroscience is that we can inspect the machinery from which all this

00:26:14 fun stuff emerges and see if we’re going to notice something special, some special sauce from which

00:26:19 something like consciousness or cognition emerges. Yeah. Well, it’s clear that we’ve learned an

00:26:24 enormous amount about neuroscience. We’ve learned an enormous amount about computation, you know,

00:26:30 about machine learning, about AI, how to get it to work. We’ve learned, uh, an enormous amount about

00:26:36 the underpinnings of the physical world, you know, and, you know, from one point of view,

00:26:42 that’s like, uh, an enormous distance that we’ve traveled along the road to understanding

00:26:47 consciousness. From another point of view, you know, the distance still to be traveled on the

00:26:52 road, you know, maybe seems no shorter than it was at the beginning. Right? So it’s very hard to say.

00:26:58 I mean, you know, these are questions where, in sort of trying to have a theory

00:27:03 of consciousness, there’s sort of a problem: it feels like it’s not just that we don’t know

00:27:08 how to make progress, it’s that it’s hard to specify what could even count as progress,

00:27:13 right? Because no matter what scientific theory someone proposed, someone else could come along

00:27:18 and say, well, you’ve just talked about the mechanism. You haven’t said anything about

00:27:22 what breathes fire into the mechanism, what really makes there something that it’s like to be it.

00:27:27 Right. And that seems like an objection that you could always raise no matter,

00:27:32 you know, how much someone elucidated the details of how the brain works.

00:27:35 Okay. Let’s go to the Turing test and the Lobner Prize. I have this intuition, call me crazy,

00:27:40 but we, that a machine to pass the Turing test and it’s full, whatever the spirit of it is,

00:27:48 we can talk about how to formulate the perfect Turing test, that that machine has to be conscious.

00:27:55 We at least have to, I have a very low bar of what consciousness is. I tend to, I tend to think that

00:28:03 the emulation of consciousness is as good as consciousness. So the consciousness is just a

00:28:08 dance, a social, a social, a shortcut, like a nice, useful tool, but I tend to connect intelligence

00:28:16 consciousness together. So by, by that, do you, maybe just to ask what, what role does consciousness

00:28:25 play? Do you think it passed in the Turing test? Well, look, I mean, it’s almost tautologically

00:28:29 true that if we had a machine that passed the Turing test, then it would be emulating consciousness.

00:28:35 Right? So if your position is that, you know, emulation of consciousness is consciousness,

00:28:40 then so, you know, by, by definition, any machine that passed the Turing test would be conscious.

00:28:45 But it’s, but I mean, we know that you could say that, you know, that, that is just a way to

00:28:50 rephrase the original question, you know, is an emulation of consciousness, you know, necessarily

00:28:55 conscious. Right. And you can, can, you know, I hear, I’m not saying anything new that hasn’t been

00:29:01 debated ad nauseum in the literature. Okay. But, you know, you could imagine some very hard cases,

00:29:07 like imagine a machine that passed the Turing test, but that did so just by an enormous

00:29:13 cosmological-sized lookup table that just cached every possible conversation that could be had.

00:29:19 The old Chinese room.

00:29:21 Well, yeah. But the Chinese room actually would be doing

00:29:26 some computation, at least in Searle’s version. Right. Here, I’m just talking about a table lookup.

00:29:31 Okay. Now it’s true that for conversations of a reasonable length, this, you know, lookup table

00:29:37 would be so enormous that it wouldn’t even fit in the observable universe. Okay. But supposing that

00:29:42 you could build a big enough lookup table and then just, you know, pass the Turing test just

00:29:48 by looking up what the person said. Right. Are you going to regard that as conscious?
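
The thought experiment, rendered as a sketch: a chatbot that is nothing but a table from conversation-so-far to reply. The entries below are made-up placeholders; the real table, as Aaronson says, would not fit in the observable universe. The point is only that the mechanism is a single retrieval.

```python
# A hypothetically complete table mapping every conversation prefix to a reply.
TABLE = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "Is Everest bigger than a shoebox?"): "Yes, much bigger.",
    # ... one entry per possible conversation prefix ...
}

def reply(history):
    return TABLE[tuple(history)]  # no computation to speak of, just lookup
```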

00:29:52 Okay. Let me try to make this formal and then you can shut it down. I think that the emulation of

00:30:00 something is that something, if there exists in that system, a black box that’s full of mystery.

00:30:07 So like, full of mystery to whom?

00:30:11 To human spectators.

00:30:13 So does that mean that consciousness is relative to the observer? Like,

00:30:17 could something be conscious for us, but not conscious for an alien that understood better

00:30:22 what was happening inside the black box? Yes. So that if inside the black box is just a lookup

00:30:27 table, the alien that saw that would say this is not conscious. To us, another way to phrase the

00:30:33 black box is layers of abstraction, which make it very difficult to see the actual underlying

00:30:38 functionality of the system. And then we observe just the abstraction. And so it looks like magic

00:30:44 to us. But once we understand the inner machinery, it stops being magic. And so, like, the

00:30:51 prerequisite is that you can’t know how it works, or some part of it, because then there has to be,

00:30:57 in our human mind, an entry point for the magic. So that’s a formal definition of the system.

00:31:05 Yeah, well, look, I mean, I explored a view in this essay I wrote called The Ghost in the Quantum

00:31:10 Turing Machine seven years ago that is related to that, except that I did not want to have

00:31:17 consciousness be relative to the observer, right? Because I think that if consciousness means

00:31:22 anything, it is something that is experienced by the entity that is conscious, right? Like,

00:31:27 I don’t need you to tell me that I’m conscious, nor do you need me to tell you that you are,

00:31:35 right? But basically, what I explored there is are there aspects of a system like a brain that just

00:31:47 could not be predicted even with arbitrarily advanced future technologies, because of

00:31:52 chaos combined with quantum mechanical uncertainty and things like that? I mean, that actually could

00:31:59 be a property of the brain, you know, if true, that would distinguish it in a principled way,

00:32:06 at least from any currently existing computer. Not from any possible computer, but yeah, yeah.

00:32:11 This is a thought experiment. So if I gave you information about the entire history of your life,

00:32:20 basically explained away free will with a lookup table, said that this was all predetermined,

00:32:26 that everything you experienced had already been predetermined, wouldn’t that take away

00:32:29 your consciousness? Wouldn’t the experience of the world change for

00:32:34 you in a way that you can’t take back? Well, let me put it this way. If you could

00:32:39 do like in a Greek tragedy where, you know, you would just write down a prediction for what I’m

00:32:44 going to do and then maybe you put the prediction in a sealed box and maybe, you know, you open it

00:32:52 later and you show that you knew everything I was going to do or, you know, of course,

00:32:56 the even creepier version would be you tell me the prediction and then I try to falsify it,

00:33:01 my very effort to falsify it makes it come true, right? Let’s even forget that, you know,

00:33:07 that version as convenient as it is for fiction writers, right? Let’s just do the version where

00:33:13 you put the prediction into a sealed envelope, okay? But if you could reliably predict everything

00:33:19 that I was going to do, I’m not sure that that would destroy my sense of being conscious,

00:33:24 but I think it really would destroy my sense of having free will, you know, and much, much more

00:33:30 than any philosophical conversation could possibly do that, right? And so I think it becomes extremely

00:33:37 interesting to ask, you know, could such predictions be done, you know, even in principle,

00:33:43 is it consistent with the laws of physics to make such predictions, to get enough data about someone

00:33:49 that you could actually generate such predictions without having to kill them in the process to,

00:33:53 you know, slice their brain up into little slivers or something.

00:33:57 I mean, it’s theoretically possible, right?

00:33:59 Well, I don’t know. I mean, it might be possible, but only at the cost of destroying the person,

00:34:04 right? I mean, it depends on how low you have to go in sort of the substrate. Like if there was

00:34:11 a nice digital abstraction layer, if you could think of each neuron as a kind of transistor

00:34:16 computing a digital function, then you could imagine some nanorobots that would go in and

00:34:22 would just scan the state of each transistor, you know, of each neuron and then, you know, make a

00:34:28 good enough copy, right? But if it was actually important to get down to the molecular or the

00:34:34 atomic level, then, you know, eventually you would be up against quantum effects.

00:34:38 You would be up against the unclonability of quantum states. So I think it’s a question of

00:34:43 how good of a replica, how good does the replica have to be before you’re going to count it as

00:34:49 actually a copy of you or as being able to predict your actions.

00:34:54 That’s a totally open question.

00:34:55 Yeah, yeah, yeah. And especially once we say that, well, look, maybe there’s no way to,

00:35:02 you know, to make a deterministic prediction because, you know, we know that there’s noise

00:35:07 buffeting the brain around, presumably even quantum mechanical uncertainty,

00:35:12 you know, affecting the sodium ion channels, for example, whether they open or they close.

00:35:18 You know, there’s no reason why over a certain time scale that shouldn’t be amplified, just like

00:35:24 we imagine happens with the weather or with any other, you know, chaotic system. So if that stuff

00:35:33 is important, right, then we would say, well, you’re never going to

00:35:43 be able to make an accurate enough copy. But now the hard part is, well, what if someone can make

00:35:48 a copy that sort of no one else can tell apart from you, right? It says the same kinds of things

00:35:54 that you would have said, maybe not exactly the same things because we agree that there’s noise,

00:35:59 but it says the same kinds of things. And maybe you alone would say, no, I know that that’s not

00:36:04 me, you know, it’s, it doesn’t share my, I haven’t felt my consciousness leap over to that other

00:36:10 thing. I still feel it localized in this version, right? And then why should anyone else believe

00:36:15 you? What are your thoughts? I’d be curious, and you’re a really good person to ask, about

00:36:20 Roger Penrose’s work on consciousness, saying that,

00:36:26 with axons and so on, there might be some biological places where quantum mechanics

00:36:32 can come into play and through that create consciousness somehow.

00:36:35 Yeah. Okay. Well, of course, you know, I read Penrose’s books as a teenager. They had

00:36:42 a huge impact on me. Five or six years ago, I had the privilege to actually talk these

00:36:47 things over with Penrose, at some length, at a conference in Minnesota. And, you know,

00:36:53 he is an amazing personality. I admire the fact that he was

00:36:58 even raising such audacious questions at all. But, you know, to answer your

00:37:04 question, I think the first thing we need to get clear on is that he is not merely saying that

00:37:09 quantum mechanics is relevant to consciousness, right? That would

00:37:15 be tame compared to what he is saying, right? He is saying that even quantum mechanics

00:37:20 is not good enough. Because supposing, for example, that the brain were a

00:37:25 quantum computer, you know, that’s still a computer; in fact, a quantum computer can be

00:37:30 simulated by an ordinary computer. It might merely need exponentially more time in order to do so,

00:37:36 right? So that’s simply not good enough for him. Okay. So what he wants is for the brain to be a

00:37:42 quantum gravitational computer; or he wants the brain to be exploiting as-yet-unknown

00:37:50 laws of quantum gravity. Okay. Which would be uncomputable. That’s the key point. Okay.

00:37:57 Yes. Yes. That would be literally uncomputable. And I’ve asked him to clarify this:

00:38:02 uncomputable even if you had an oracle for the halting problem, or

00:38:09 as high up as you want to go in the usual hierarchy of uncomputability;

00:38:15 he wants to go beyond all of that. Okay. So, just to be clear,

00:38:20 if we’re keeping count of how many speculations, you know, there’s probably like at least five or

00:38:26 six of them, right? There’s first of all, that there is some quantum gravity theory that would

00:38:30 involve this kind of uncomputability, right? Most people who study quantum gravity would not agree

00:38:36 with that. They would say that what we’ve learned, you know, what little we know about quantum

00:38:41 gravity from the AdS/CFT correspondence, for example, has been very much consistent with

00:38:48 the broad idea of nature being computable, right? But all right, supposing that he’s

00:38:55 right about that, then, you know, what most physicists would say is that whatever new

00:39:01 phenomena there are in quantum gravity, you know, they might be relevant at the singularities of

00:39:07 black holes. They might be relevant at the big bang. They are plainly not relevant to something

00:39:15 like the brain, you know, that is operating at ordinary temperatures, with ordinary

00:39:21 chemistry. The fundamental physics underlying the brain, they would say,

00:39:28 is something that we’ve pretty much

00:39:32 completely known for generations now, right? Because, you know, quantum field theory lets us

00:39:39 sort of parameterize our ignorance, right? I mean, Sean Carroll has made this case,

00:39:44 you know, in great detail, right? That sort of whatever new effects are coming from quantum

00:39:49 gravity, you know, they are sort of screened off by quantum field theory, right? And this is,

00:39:55 this brings us to the whole idea of effective theories, right?

00:39:59 Like in the Standard Model of elementary particles, right?

00:40:04 We have a quantum field theory that seems totally adequate for all of the terrestrial phenomena,

00:40:12 right? The only things that it doesn’t explain are, well, first of all,

00:40:16 the details of gravity, if you were to probe it at extremes of

00:40:23 curvature or at incredibly small distances; it doesn’t explain dark matter; it doesn’t explain

00:40:29 black hole singularities, right? But these are all very exotic things, very, you know, far removed

00:40:35 from our life on earth, right? So for Penrose to be right, he needs, you know, these phenomena to

00:40:41 somehow affect the brain. He needs the brain to contain antennae that are sensitive to this as

00:40:49 yet unknown physics, right? And then he needs a modification of quantum mechanics, okay? So he

00:41:02 needs quantum mechanics to actually be wrong, okay? What he wants is what he calls

00:41:02 an objective reduction mechanism or an objective collapse. So this is the idea that once quantum

00:41:09 states get large enough, then they somehow spontaneously collapse, right?

00:41:17 And this is an idea that lots of people have explored. You know, there’s something called the

00:41:23 GRW proposal that tries to, you know, say something along those lines, you know, and these are

00:41:29 theories that actually make testable predictions, right? Which is a nice feature that they have.

00:41:34 But, you know, the very fact that they’re testable may mean that in the, you know, in the coming

00:41:39 decades, we may well be able to test these theories and show that they’re wrong, right? You know, we

00:41:45 may be able to test some of Penrose’s ideas. If not, not his ideas about consciousness, but at

00:41:50 least his ideas about an objective collapse of quantum states, right? And people have actually,

00:41:56 like Dirk Bouwmeester, have actually been working to try to do these experiments. They haven’t been

00:42:01 able to do it yet to test Penrose’s proposal, okay? But Penrose would need more than just

00:42:07 an objective collapse of quantum states, which would already be the biggest development in

00:42:11 physics for a century since quantum mechanics itself, okay? He would need for consciousness

00:42:18 to somehow be able to influence the direction of the collapse so that it wouldn’t be completely

00:42:24 random, but that, you know, your dispositions would somehow influence the quantum state

00:42:29 to collapse more likely this way or that way, okay? Finally, Penrose, you know, says that all

00:42:36 of this has to be true because of an argument that he makes based on Gödel’s incompleteness theorem,

00:42:42 okay? Now, like the overwhelming majority of computer scientists and mathematicians

00:42:49 who have thought about this, I don’t think that Gödel’s incompleteness theorem can do what he

00:42:53 needs it to do here, right? I don’t think that that argument is sound, okay? But that is, you know,

00:43:00 that is sort of the tower that you have to ascend to if you’re going to go where Penrose goes.

00:43:04 And the intuition he uses with the incompleteness theorem is that basically

00:43:09 that there’s important stuff that’s not computable? Is that where he takes it?

00:43:13 It’s not just that because, I mean, everyone agrees that there are problems that are uncomputable,

00:43:18 right? That’s a mathematical theorem, right? But what Penrose wants to say is that, you know,

00:43:26 for example, there are statements, you know, given any formal system, you know, for doing math,

00:43:33 right? There will be true statements of arithmetic that that formal system, you know,

00:43:39 if it’s adequate for math at all, if it’s consistent and so on, will not be able to prove.

00:43:44 A famous example being the statement that that system itself is consistent,

00:43:49 right? No good formal system can actually prove its own consistency.

00:43:55 That can only be done from a stronger formal system, which then can’t prove its own consistency,

00:44:00 and so on forever, okay? That’s Gödel’s theorem.
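
In symbols, the theorem being paraphrased (Gödel’s second incompleteness theorem) says:

```latex
% For any consistent, recursively axiomatizable theory T that contains
% enough arithmetic (e.g. extends Peano Arithmetic):
\[
  T \nvdash \mathrm{Con}(T)
\]
% where Con(T) is the arithmetic sentence asserting T's consistency.
% A stronger theory T' can prove Con(T), but then T' cannot prove
% Con(T'), and so on forever.
```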

00:44:08 But now, why is that relevant to consciousness? Well, the idea that it might have something to do with consciousness

00:44:13 is an old one. Gödel himself apparently thought that it did. The philosopher John Lucas thought so, I think,

00:44:22 in the 60s. And Penrose is really just sort of updating what they and others had said.

00:44:29 I mean, you know, the idea that Gödel’s theorem could have something to do with consciousness was,

00:44:34 you know, in 1950, when Alan Turing wrote his article about the Turing test, he already, you

00:44:40 know, was writing about that as like an old and well known idea and as a wrong one that he wanted

00:44:47 to dispense with. Okay, but the basic problem with this idea is that Penrose,

00:44:54 and all of his predecessors here, want to say that even though

00:45:00 this given formal system cannot prove its own consistency, we as humans, sort of looking at it

00:45:07 from the outside, can just somehow see its consistency, right? And the rejoinder

00:45:15 to that, from the very beginning, has been: well, can we really? I mean, maybe

00:45:21 Penrose can, but can the rest of us, right? And, you know, I would note

00:45:28 that it is perfectly plausible to imagine a computer that

00:45:36 would not be limited to working within a single formal system, right? It could say,

00:45:41 I am now going to adopt the hypothesis that my formal system is consistent, right? And I’m now

00:45:47 going to see what can be done from that stronger vantage point and so on. And, you know, and I’m

00:45:52 going to add new axioms to my system. Totally plausible. Gödel’s theorem absolutely

00:45:58 has nothing to say against an AI that could repeatedly add new axioms. All it says is that

00:46:05 there is no absolute guarantee that when the AI adds new axioms that it will always be right.

00:46:12 Okay. And, you know, and that’s, of course, the point that Penrose pounces on,

00:46:15 but the reply is obvious. And, you know, it’s one that Alan Turing made 70 years ago. Namely,

00:46:21 we don’t have an absolute guarantee that we’re right when we add a new axiom. We never have,

00:46:26 and plausibly we never will. So on Alan Turing, you took part in the Loebner Prize?

00:46:32 Not really. No, I didn’t. I mean, there was this kind of ridiculous claim that was made

00:46:39 almost a decade ago about a chatbot called Eugene Goostman.

00:46:46 I guess you didn’t participate as a judge in the Lubna Prize.

00:46:48 I didn’t.

00:46:49 But you participated as a judge in that, I guess it was an exhibition event or something like that,

00:46:54 or with Eugene…

00:46:56 Eugene Goostman, that was just me writing a blog post because some journalist called me to ask

00:47:01 about it.

00:47:01 Did you ever chat with him? I thought that…

00:47:03 I did chat with Eugene Goostman. I mean, it was available on the web.

00:47:06 Oh, interesting. I didn’t know that.

00:47:07 So yeah. So all that happened was that a bunch of journalists started writing breathless articles

00:47:14 about a first chat bot that passes the Turing test. And it was this thing called Eugene Goostman

00:47:21 that was supposed to simulate a 13 year old boy. And apparently someone had done some test where

00:47:29 people were less than perfect, let’s say, at distinguishing it from a human. And they said,

00:47:36 well, if you look at Turing’s paper and you look at the percentages that he talked about,

00:47:42 then it seemed like we’re past that threshold.

00:47:45 And I had a different way to look at it instead of the legalistic way, like let’s just try the

00:47:53 actual thing out and let’s see what it can do with questions like, is Mount Everest bigger

00:47:59 than a shoebox? Or just like the most obvious questions. And the answer is, well, it just kind

00:48:08 of parries you because it doesn’t know what you’re talking about.

00:48:10 So just to clarify exactly in which way they’re obvious. They’re obvious in the sense that

00:48:17 you convert the sentences into the meaning of the objects they represent and then do some basic

00:48:23 obvious common sense reasoning with the objects that the sentences represent.

00:48:29 Right. It was not able to answer or even intelligently respond to basic common sense

00:48:35 questions. But let me say something stronger than that. There was a famous chatbot in the 60s

00:48:39 called Eliza that managed to actually fool a lot of people. People would pour their hearts out

00:48:48 into this Eliza because it simulated a therapist. And most of what it would do is it would just

00:48:54 throw back at you whatever you said. And this turned out to be incredibly effective.

00:49:00 Maybe therapists know this. This is one of their tricks. But it really had some people convinced.

00:49:10 But this thing was just like, I think it was literally just a few hundred lines of Lisp code.
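
The central trick, reflecting the user’s own words back as a question, fits in a few lines; this is a sketch in the spirit of Eliza, not Weizenbaum’s actual script, which had ranked keywords and scripted patterns.

```python
import re

# Swap first- and second-person words, then echo the input as a question.
REFLECT = {"i": "you", "am": "are", "my": "your", "me": "you",
           "you": "I", "your": "my"}

def eliza_reply(text):
    words = [REFLECT.get(w, w) for w in re.findall(r"[a-z']+", text.lower())]
    return "Why do you say " + " ".join(words) + "?"

print(eliza_reply("I am sad about my job"))
# -> Why do you say you are sad about your job?
```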

00:49:17 Not only was it not intelligent, it wasn’t especially sophisticated. It was

00:49:22 like a simple little hobbyist program. And Eugene Goostman, from what I could see,

00:49:27 was not a significant advance compared to Eliza. And that was really the point I was making.

00:49:38 In some sense, you didn’t need a computer science professor to sort of say this. Anyone who was

00:49:45 looking at it and who just had an ounce of sense could have said the same thing.

00:49:50 But because these journalists were calling me, the first thing I said was,

00:49:58 well, I’m a quantum computing person. I’m not an AI person. You shouldn’t ask me. Then they said,

00:50:04 look, you can go here and you can try it out. I said, all right. All right. So I’ll try it out.

00:50:10 This whole discussion, it got a whole lot more interesting in just the last few months.

00:50:15 Yeah. I’d love to hear your thoughts about GPT3. In the last few months, the world has now seen

00:50:24 a chat engine, or a text engine I should say, called GPT-3. I think it still does not pass

00:50:33 a Turing test. There are no real claims that it passes the Turing test. This comes out of the

00:50:40 group at OpenAI, and they’ve been relatively careful in what they’ve claimed about the system.

00:50:47 But I think as clearly as Eugene Goostman was not an advance over Eliza, it is equally clear that

00:50:56 this is a major advance over Eliza or really over anything that the world has seen before.

00:51:03 This is a text engine that can come up with kind of on topic, reasonable sounding completions to

00:51:12 just about anything that you ask. You can ask it to write a poem about topic X in the style of poet

00:51:20 Y and it will have a go at that. And it will do not a great job, not an amazing job, but a passable

00:51:29 job. Definitely as good as, in many cases, I would say better than I would have done.

00:51:37 You can ask it to write an essay, like a student essay, about pretty much any topic and it will

00:51:43 get something that I am pretty sure would get at least a B-minus in most high school or

00:51:50 even college classes. And in some sense, the way that it did this, the way that it achieves this,

00:51:56 Scott Alexander of the much mourned blog, Slate Star Codex, had a wonderful way of putting it.

00:52:03 He said that they basically just ground up the entire internet into a slurry.

00:52:10 And to tell you the truth, I had wondered for a while why nobody had tried that. Why not write

00:52:16 a chat bot by just doing deep learning over a corpus consisting of the entire web? And so

00:52:24 now they finally have done that. And the results are very impressive. People

00:52:35 can argue about whether this is truly a step toward general AI or not, but this is an amazing

00:52:41 capability that we didn’t have a few years ago. A few years ago, if you had told me that we would

00:52:50 have it now, that would have surprised me. And I think that anyone who denies that is just not

00:52:55 engaging with what’s there. So their model, it takes a large part of the internet and compresses

00:53:02 it into a small number of parameters, relative to the size of the internet, and is able to, without

00:53:10 fine tuning, do a basic kind of a querying mechanism, just like you described where you

00:53:16 specify a kind of poet and then you want to write a poem. And it somehow is able to do basically a

00:53:21 lookup on the internet of relevant things. How else do you explain it?

00:53:27 Well, okay. The training involved massive amounts of data from the internet and actually took

00:53:34 lots and lots of computer power, lots of electricity. There are some very prosaic

00:53:40 reasons why this wasn’t done earlier. But it costs some tens of millions of dollars, I think.

00:53:46 Less, but approximately like a few million dollars.

00:53:49 Oh, okay. Oh, really? Okay.

00:53:51 It’s more like four or five.

00:53:53 Oh, all right. All right. Thank you. I mean, as they scale it up, it will…

00:53:57 It’ll cost, but then the hope is cost comes down and all that kind of stuff.

00:54:02 But basically, it is a neural net or what’s now called a deep net,

00:54:09 but they’re basically the same thing. So it’s a form of algorithm that people

00:54:15 have known about for decades. But it is constantly trying to solve the problem,

00:54:21 predict the next word. So it’s just trying to predict what comes next. It’s not trying to

00:54:30 decide what it should say, what ought to be true. It’s trying to predict what someone who had said

00:54:37 all of the words up to the preceding one would say next.
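
The loop being described can be sketched as follows; `model` and `tokenizer` are stand-ins for any trained language model and its vocabulary encoder, assumed here rather than OpenAI’s actual interface.

```python
import torch

def complete(model, tokenizer, prompt, max_new_tokens=50):
    # Autoregressive decoding: at each step the model scores every token
    # in the vocabulary; we turn the scores into probabilities and sample
    # the next token. The model never decides what ought to be true; it
    # only predicts what plausibly comes next, then conditions on its own
    # output and repeats.
    ids = tokenizer.encode(prompt)                   # text -> token ids
    for _ in range(max_new_tokens):
        logits = model(torch.tensor([ids]))[0, -1]   # scores for next token
        probs = torch.softmax(logits, dim=-1)
        ids.append(torch.multinomial(probs, 1).item())
    return tokenizer.decode(ids)
```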

00:54:40 Although to push back on that, that’s how it’s trained.

00:54:43 That’s right. No, of course.

00:54:45 It’s arguable that our very cognition could be a mechanism as that simple.

00:54:50 Oh, of course. Of course. I never said that it wasn’t.

00:54:52 Right. But…

00:54:54 Yeah. I mean, and sometimes that is… If there is a deep philosophical question that’s

00:55:00 raised by GPT-3, then that is it, right? Are we doing anything other than this predictive

00:55:06 processing, just constantly trying to fill in a blank of what would come next

00:55:12 after what we just said up to this point? Is that what I’m doing right now?

00:55:16 It’s impossible to know. So the intuition that a lot of people have is, well, look,

00:55:20 this thing is not going to be able to reason, like the Mount Everest question.

00:55:24 Do you think it’s possible that GPT5, 6, and 7 would be able to, with this exact same process,

00:55:31 begin to do something that looks like, or is indistinguishable to us humans from, reasoning?

00:55:38 I mean, the truth is that we don’t really know what the limits are, right?

00:55:42 Right. Exactly.

00:55:44 Because what we’ve seen so far is that GPT3 was basically the same thing as GPT2,

00:55:51 but just with a much larger network, more training time, bigger training corpus,

00:55:59 right? And it was very noticeably better than its immediate predecessor.

00:56:05 So we don’t know where you hit the ceiling here, right? I mean, that’s the amazing part and maybe

00:56:12 also the scary part, right? Now, my guess would be that at some point, there has to be diminishing

00:56:19 returns. It can’t be that simple, can it? Right? But I wish that I had more to base that guess on.

00:56:27 Right. Yeah. I mean, some people say that there will be a limitation on the…

00:56:31 We’re going to hit a limit on the amount of data that’s on the internet.

00:56:34 Yes. Yeah. So sure. So there’s certainly that limit. I mean, there’s also…

00:56:41 If you are looking for questions that will stump GPT3, you can come up with some without much trouble.

00:56:48 Even getting it to balance parentheses, right? It doesn’t do such a great job,

00:56:55 right? And its failures are ironic, right? Like basic arithmetic, right?

00:57:04 And you think, isn’t that what computers are supposed to be best at? Isn’t that where

00:57:08 computers already had us beat a century ago? Right? And yet that’s where GPT3 struggles,

00:57:14 right? But it’s amazing that it’s almost like a young child in that way, right? But somehow,

00:57:23 because it is just trying to predict what comes next, it doesn’t know when it should stop doing

00:57:30 that and start doing something very different, like some more exact logical reasoning, right?

00:57:36 And so one is naturally led to guess that our brain sort of has some element of predictive

00:57:45 processing, but that it’s coupled to other mechanisms, right? That it’s coupled to,

00:57:50 first of all, visual reasoning, which GPT3 also doesn’t have any of, right?

00:57:55 Although there’s some demonstration that there’s a lot of promise there using…

00:57:58 Oh yeah, it can complete images. That’s right.

00:58:00 And using the exact same kind of transformer mechanisms to, like, watch videos on YouTube.

00:58:06 And so with the same self-supervised mechanism able to look at those,

00:58:11 it’d be fascinating to think what kind of completions you could do.

00:58:14 Oh yeah, no, absolutely. Although like if we ask it to like, you know,

00:58:17 a word problem that involves reasoning about the locations of things in space,

00:58:22 I don’t think it does such a great job on those, right? To take an example. And so

00:58:26 the guess would be, well, you know, humans have a lot of predictive processing,

00:58:31 a lot of just filling in the blanks, but we also have these other mechanisms that we can

00:58:35 couple to, or that we can sort of call as subroutines when we need to.

00:58:39 And maybe, you know, to go further, one would want to integrate other forms of reasoning.

00:58:46 Let me go on another topic that is amazing, which is complexity.

00:58:52 And then start with the most absurdly romantic question of what’s the most beautiful idea in

00:59:00 computer science or theoretical computer science to you? Like what just early on in your life,

00:59:05 or in general, has captivated you and just grabbed you?

00:59:08 I think I’m going to have to go with the idea of universality. You know,

00:59:13 if you’re really asking for the most beautiful. I mean, so universality is the idea that, you know,

00:59:20 you put together a few simple operations, like in the case of Boolean logic, that might be the AND

00:59:27 gate, the OR gate, the NOT gate, right? And then your first guess is, okay, this is a good start,

00:59:33 but obviously, as I want to do more complicated things, I’m going to need more complicated building

00:59:38 blocks to express that, right? And that was actually my guess when I first learned what

00:59:44 programming was. I mean, when I was, you know, an adolescent and someone showed me Apple BASIC,

00:59:50 and then, you know, GW-BASIC, if anyone listening remembers that. Okay. But, you know,

00:59:57 I thought, okay, well, now, you know, I mean, I felt like this is a revelation. You know,

01:00:03 it’s like finding out where babies come from. It’s like that level of, you know, why didn’t

01:00:08 anyone tell me this before, right? But I thought, okay, this is just the beginning. Now I know how

01:00:12 to write a BASIC program, but, you know, to really write an interesting program, like, you know,

01:00:18 a video game, which had always been my dream as a kid to, you know, create my own Nintendo games,

01:00:24 right? You know, but, you know, obviously I’m going to need to learn some way more complicated

01:00:29 form of programming than that. Okay. But, you know, eventually I learned this incredible idea

01:00:35 of universality. And that says that, no, you throw in a few rules and then you already have

01:00:42 enough to express everything. Okay. So for example, the AND, the OR, and the NOT gates,

01:00:48 or in fact, even just the AND and the NOT gate, or even just the NAND gate, for example,

01:00:55 is already enough to express any Boolean function on any number of bits. You just have to string

01:01:00 together enough of them. You can build a universe with NAND gates. You can build the universe out of

01:01:04 NAND gates. Yeah. You know, the simple instructions of BASIC are already enough, at least in principle,

01:01:12 you know, if we ignore details like how much memory can be accessed and stuff like that,

01:01:17 that is enough to express what could be expressed by any programming language whatsoever.
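
As a tiny, hedged illustration of the NAND claim from a moment ago, here is a Python sketch (Boolean values standing in for wires) that recovers NOT, AND, and OR from NAND alone:

```python
def nand(a, b):
    # The single primitive gate: true unless both inputs are true.
    return not (a and b)

# Every other gate built only out of NAND:
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

# Sanity check against Python's own Boolean operators.
for a in (False, True):
    for b in (False, True):
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND alone recovers NOT, AND, and OR")
```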

01:01:22 And the way to prove that is very simple. We simply need to show that in BASIC or whatever,

01:01:28 we could write an interpreter or a compiler for whatever other programming language we care about,

01:01:35 like C or Java or whatever. And as soon as we had done that, then ipso facto, anything that’s

01:01:41 expressible in C or Java is also expressible in BASIC. Okay. And so this idea of universality,

01:01:49 you know, goes back at least to Alan Turing in the 1930s when, you know, he

01:01:54 wrote down this incredibly simple pared down model of a computer, the Turing machine, right,

01:02:01 which, you know, he pared down the instruction set to just read a symbol, you know, write a symbol,

01:02:08 move to the left, move to the right, halt, change your internal state, right? That’s it. Okay.

01:02:15 And he proved that, you know, this could simulate all kinds of other things, you know,

01:02:22 and so in fact, today we would say, well, we would call it a Turing universal model of computation

01:02:28 that is, you know, it has just the same expressive power that BASIC or Java or C++ or any

01:02:37 of those other languages have because anything in those other languages could be compiled down

01:02:43 to a Turing machine. Now, Turing also proved a different, related thing, which is that there is

01:02:48 a single Turing machine that can simulate any other Turing machine if you just describe that

01:02:57 other machine on its tape, right? And likewise, there is a single Turing machine that will run

01:03:03 any C program, you know, if you just put it on its tape. That’s a second meaning of universality.
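
A minimal sketch of the pared-down instruction set being described: a Turing machine simulator in a few lines of Python. The machine below (a bit-flipper) is an illustrative toy, not anything from Turing’s paper.

```python
def run_turing_machine(transitions, tape, state="start", max_steps=1000):
    """Simulate a one-tape Turing machine.
    transitions: (state, symbol) -> (new_state, written_symbol, move),
    where move is -1 (left), +1 (right), or 0; state 'halt' stops."""
    cells = dict(enumerate(tape))  # sparse tape; blank is '_'
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy machine: flip every bit, halting at the first blank.
flipper = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flipper, "10110"))  # 01001_
```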

01:03:08 First of all, that he could visualize that, and that was in the 30s.

01:03:12 Yeah, the 30s. That’s right.

01:03:13 That’s before computers, really. I mean, I wonder what that felt like,

01:03:21 you know, like learning that there’s no Santa Claus or something. Because I don’t know if that’s

01:03:27 empowering or paralyzing, because it’s like you can’t write a software

01:03:34 engineering book and make that the first chapter and say we’re done.

01:03:38 Well, I mean, right. I mean, in one sense, it was this enormous flattening of the universe.

01:03:44 Yes.

01:03:44 I had imagined that there was going to be some infinite hierarchy of more and more powerful

01:03:50 programming languages, you know, and then I kicked myself for having such a stupid idea.

01:03:55 But apparently, Gödel had had the same conjecture in the 30s.

01:03:58 Oh, good. You’re in good company.

01:04:00 Yeah. And then Gödel read Turing’s paper and he kicked himself and he said, yeah, I was completely

01:04:10 wrong about that. But I had thought that maybe where I could contribute would be to invent a new

01:04:17 more powerful programming language that lets you express things that could never be expressed in

01:04:22 BASIC. And how would you do that? Obviously, you couldn’t do it itself in BASIC. But there

01:04:30 is this incredible flattening that happens once you learn what universality is. But then it’s also

01:04:39 an opportunity because it means once you know these rules, then the sky is the limit, right?

01:04:44 Then you have kind of the same weapons at your disposal that the world’s greatest programmer has.

01:04:51 It’s now all just a question of how you wield them.

01:04:54 Right. Exactly. So every problem is solvable, but some problems are harder than others.

01:05:00 Well, yeah, there’s the question of how much time, you know, of how hard is it to write a program?

01:05:06 And then there’s also the questions of what resources does the program need? You know,

01:05:11 how much time, how much memory? Those are much more complicated questions. Of course,

01:05:15 ones that we’re still struggling with today.

01:05:17 Exactly. So you’ve, I don’t know if you created Complexity Zoo or…

01:05:21 I did create the Complexity Zoo.

01:05:23 What is it? What’s complexity?

01:05:24 Oh, all right, all right, all right. Complexity theory is the study of sort of the

01:05:29 inherent resources needed to solve computational problems, okay? So it’s easiest to give an example.

01:05:38 Like, let’s say we want to add two numbers, right? If I want to add them, you know, if the numbers

01:05:47 are twice as long, then it will take me twice as long to add them, but only twice as long,

01:05:52 right? It’s no worse than that.

01:05:54 Or a computer.

01:05:55 For a computer or for a person using pencil and paper, for that matter.

01:05:59 If you have a good algorithm.

01:06:00 Yeah, that’s right. I mean, even if you just use the elementary school algorithm of just carrying,

01:06:05 you know, then it takes time that is linear in the length of the numbers, right? Now,

01:06:10 multiplication, if you use the elementary school algorithm, is harder because you have to multiply

01:06:17 each digit of the first number by each digit of the second one. And then deal with all the

01:06:22 carries. So that’s what we call a quadratic time algorithm, right? If the numbers become twice as

01:06:28 long, now you need four times as much time, okay? So now, as it turns out, people discovered much

01:06:38 faster ways to multiply numbers using computers. And today we know how to multiply two numbers

01:06:45 that are n digits long using a number of steps that’s nearly linear in n. These are questions you can ask.
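
A small sketch of the two grade-school algorithms just described, instrumented to count single-digit operations. The counts are an illustrative model, not a formal one: roughly linear in the length for addition, quadratic for multiplication.

```python
def school_add(x, y):
    # Grade-school addition with carries; returns (sum, digit ops used).
    a, b = str(x)[::-1], str(y)[::-1]
    ops, carry, digits = 0, 0, []
    for i in range(max(len(a), len(b))):
        da = int(a[i]) if i < len(a) else 0
        db = int(b[i]) if i < len(b) else 0
        s = da + db + carry
        carry, ops = s // 10, ops + 1
        digits.append(str(s % 10))
    if carry:
        digits.append(str(carry))
    return int("".join(reversed(digits))), ops

def school_multiply_ops(x, y):
    # Grade-school multiplication needs one digit-by-digit product per
    # pair of digits (ignoring the final additions, for simplicity).
    return len(str(x)) * len(str(y))

n = 10 ** 99 + 7                   # a 100-digit number
print(school_add(n, n)[1])         # 100 ops: linear in the length
print(school_multiply_ops(n, n))   # 10000 ops: quadratic in the length
```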

01:06:50 But now, let’s think about a different thing that people, you know, have encountered

01:06:56 in elementary school: factoring a number. Okay? Take a number and find its prime factors, right?

01:07:03 And here, you know, if I give you a number with ten digits, I ask you for its prime factors.

01:07:08 Well, maybe it’s even, so you know that two is a factor. You know, maybe it ends in zero,

01:07:13 so you know that ten is a factor, right? But, you know, other than a few obvious things like that,

01:07:18 you know, if the prime factors are all very large, then it’s not clear how you even get started,

01:07:24 right? You know, it seems like you have to do an exhaustive search among an enormous number of

01:07:29 factors. Now, as many people might know, for better or worse, the security, you know,

01:07:39 of most of the encryption that we currently use to protect the internet is based on the belief,

01:07:45 and this is not a theorem, it’s a belief, that factoring is an inherently hard problem

01:07:52 for our computers. We do know algorithms that are better than just trial division, than just trying

01:07:58 all the possible divisors, but they are still basically exponential. And exponential is hard.
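
To see concretely why brute-force factoring is exponential in the number of digits, here is a hedged sketch of plain trial division; for a d-digit input the loop can run on the order of 10^(d/2) times.

```python
def trial_division(n):
    """Factor n by trying every candidate divisor up to sqrt(n).
    For a d-digit input this can take ~10**(d/2) steps, i.e.
    exponential in the length of the number being factored."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(trial_division(8051))  # [83, 97]: fine for small numbers
# For a 1,000-bit product of two random primes, this same loop would
# need roughly 2**500 iterations, which is utterly hopeless.
```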

01:08:05 Yeah, exactly. So the fastest algorithms that anyone has discovered, at least publicly

01:08:11 discovered, you know, I’m assuming that the NSA doesn’t know something better,

01:08:15 okay? But they take time that basically grows exponentially with the cube root of the size of

01:08:21 the number that you’re factoring, right? So that cube root, that’s the part that takes all the

01:08:26 cleverness, okay? But there’s still an exponential. There’s still an exponentiality there. But what

01:08:31 that means is that, like, when people use a thousand bit keys for their cryptography,

01:08:37 that can probably be broken using the resources of the NSA or the world’s other intelligence

01:08:42 agencies. You know, people have done analyses that say, you know, with a few hundred million

01:08:47 dollars of computer power, they could totally do this. And if you look at the documents that Snowden

01:08:53 released, you know, it looks a lot like they are doing that or something like that. It would kind

01:08:59 of be surprising if they weren’t, okay? But, you know, if that’s true, then in some ways that’s

01:09:05 reassuring. Because if that’s the best that they can do, then that would say that they can’t break

01:09:10 2,000 bit numbers, right? Then 2,000 bit numbers would be beyond what even they could do.

01:09:16 They haven’t found an efficient algorithm. That’s where all the worries and the concerns of quantum

01:09:21 computing came in, that there could be some kind of shortcut around that.

01:09:24 Right. So complexity theory is a huge part of, let’s say, the theoretical core of computer

01:09:31 science. You know, it started in the 60s and 70s as, you know, sort of an autonomous field. So it

01:09:39 was, you know, already well developed even by the time that

01:09:45 I was born, okay? But in 2002, I made a website called the Complexity Zoo, to answer your question,

01:09:54 where I just tried to catalog the different complexity classes, which are classes of problems

01:10:01 that are solvable with different kinds of resources, okay? So these are kind of, you know,

01:10:06 you could think of complexity classes as like being almost to theoretical computer science,

01:10:13 like what the elements are to chemistry, right? They’re sort of, you know, our most

01:10:18 basic objects in a certain way. I feel like the elements

01:10:25 have a characteristic to them where you can’t just add an infinite number.

01:10:29 Well, you could, but beyond a certain point, they become unstable, right? Right. So it’s like,

01:10:34 you know, in theory, you can have atoms with, you know, and look, I mean,

01:10:39 a neutron star, you know, is a nucleus with, you know, untold billions of neutrons in it,

01:10:48 of hadrons in it, okay? But, you know, for sort of normal atoms, right, probably you can’t get

01:10:56 much above an atomic weight of a hundred, 150 or so, or, sorry, I mean, beyond 150 or so protons

01:11:04 without it, you know, very quickly fissioning. With complexity classes, well, yeah, you can have

01:11:10 an infinity of complexity classes, but, you know, maybe there’s only a finite number of them that

01:11:16 are particularly interesting, right? Just like with anything else, you know, you care about

01:11:21 some more than about others. So what kind of interesting classes are there? I mean,

01:11:25 maybe just say, if you take any kind of computer science class,

01:11:31 what are the classes you learn? Good. Let me tell you sort of the biggest ones,

01:11:36 the ones that you would learn first. So, you know, first of all, there is P, that’s what it’s called,

01:11:41 okay? It stands for polynomial time. And this is just the class of all of the problems that you

01:11:47 could solve with a conventional computer, like your iPhone or your laptop, you know,

01:11:54 by a completely deterministic algorithm, right? Using a number of steps that grows only like the

01:12:01 size of the input raised to some fixed power, okay? So, if your algorithm is linear time,

01:12:09 like, you know, for adding numbers, okay, that problem is in P. If you have an algorithm that’s

01:12:14 quadratic time, like the elementary school algorithm for multiplying two numbers, that’s also

01:12:20 in P, even if it was the size of the input to the 10th power or to the 50th power, well, that wouldn’t

01:12:26 be very good in practice. But, you know, formally, we would still count that, that would still be in

01:12:32 P, okay? But if your algorithm takes exponential time, meaning like if every time I add one more

01:12:41 data point to your input, if the time needed by the algorithm doubles, if you need time like two

01:12:48 to the power of the amount of input data, then that we call an exponential time algorithm, okay?
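
A quick illustrative comparison, plain arithmetic and nothing more, of how a quadratic-time step count and an exponential-time step count pull apart as the input grows:

```python
# Step counts for a quadratic-time algorithm vs. an exponential-time
# one as the input size n grows.
for n in (10, 20, 40, 80):
    print(f"n={n:2d}  n^2={n**2:5d}  2^n={2**n}")
# Doubling n multiplies n^2 by four, but it *squares* 2^n:
# already at n=80 the exponential count is about 1.2e24 steps.
```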

01:12:56 Exponential time is not polynomial, okay? So, P is all of the problems that have some polynomial time

01:13:03 algorithm, okay? So, that includes most of what we do with our computers on a day to day basis,

01:13:09 you know, all the, you know, sorting, basic arithmetic, you know, whatever is going on in

01:13:14 your email reader or in Angry Birds, okay? It’s all in P. Then the next super important class

01:13:21 is called NP. That stands for non deterministic polynomial, okay? It does not stand for not

01:13:28 polynomial, which is a common confusion. But NP is basically all of the problems

01:13:35 where if there is a solution, then it is easy to check the solution if someone shows it to you,

01:13:41 okay? So, actually a perfect example of a problem in NP is factoring, the one I told you about

01:13:48 before. Like if I gave you a number with thousands of digits and, you know, I asked

01:13:56 you, does this have at least three non trivial divisors, right? That might be a super hard problem

01:14:05 to solve, right? It might take you millions of years using any algorithm that’s known, at least

01:14:09 running on our existing computers, okay? But if I simply showed you the divisors, I said,

01:14:16 here are three divisors of this number, then it would be very easy for you to ask your computer

01:14:22 to just check each one and see if it works. Just divide it in, see if there’s any remainder,

01:14:27 right? And if they all go in, then you’ve checked that, yes, there were, right? So any problem

01:14:35 where, you know, whenever there’s a solution, there is a short witness,

01:14:40 like a polynomial size witness, that can be checked in polynomial time, that we call an NP problem,

01:14:48 okay? And yeah, so every problem that’s in P is also in NP, right? Because, you know, you could

01:14:55 always just ignore the witness and just, you know, if a problem is in P, you can just solve it

01:14:59 yourself, okay? But now, in some sense, the central, you know, mystery of theoretical computer science

01:15:07 is: is every NP problem in P? So if you can easily check the answer to a computational problem,

01:15:15 does that mean that you can also easily find the answer?
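
A hedged sketch of that asymmetry, using the factoring example from a moment ago: checking a claimed witness takes a few cheap divisions, even when finding it would have required an enormous search.

```python
def verify_divisor_witness(n, witnesses):
    """Polynomial-time NP verification: confirm that each claimed
    divisor is nontrivial and actually divides n. Cheap to run, even
    if *finding* the divisors would take an exhaustive search."""
    return all(1 < d < n and n % d == 0 for d in witnesses)

n = 3 * 5 * 7 * 1009
print(verify_divisor_witness(n, [3, 5, 1009]))  # True: easy to check
print(verify_divisor_witness(n, [3, 5, 1010]))  # False: bogus witness
```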

01:15:18 Even though there’s all these problems that appear to be very difficult to find the answer,

01:15:23 it’s still an open question whether a fast algorithm exists.

01:15:26 Because no one has proven that there’s no way to do it.

01:15:29 It’s arguably the most, I don’t know, the most famous, the most maybe interesting,

01:15:36 maybe you disagree with that, problem in theoretical computer science. So what’s your

01:15:40 The most famous, for sure.

01:15:41 P equals NP. If you were to bet all your money, where do you put your money?

01:15:45 That’s an easy one. P is not equal to NP. I like to say that if we were physicists,

01:15:49 we would have just declared that to be a law of nature, you know, just like thermodynamics.

01:15:54 That’s hilarious.

01:15:55 Given ourselves Nobel Prizes for its discovery. Yeah, you know, and look, if later it turned out

01:16:01 that we were wrong, we’d just give ourselves more Nobel Prizes.

01:16:04 So harsh, but so true.

01:16:09 I mean, no, I mean, I mean, it’s really just because we are mathematicians or descended

01:16:14 from mathematicians, you know, we have to call things conjectures that other people

01:16:19 would just call empirical facts or discoveries, right?

01:16:23 But one shouldn’t read more into that difference in language, you know,

01:16:26 about the underlying truth.

01:16:28 So, okay, so you’re a good investor and good spender of money. So then let me ask another

01:16:33 way. Is it possible at all? And what would that look like if P indeed equals NP?

01:16:41 Well, I do think that it’s possible. I mean, in fact, you know, when people really pressed

01:16:45 me on my blog for what odds would I put, I put, you know, two or three percent odds.

01:16:50 Wow, that’s pretty good.

01:16:51 That P equals NP. Yeah. Well, because, you know, I mean, you really have to think

01:16:57 about, like, if there were 50, you know, mysteries like P versus NP, and if I made a guess about

01:17:04 every single one of them, would I expect to be right 50 times? Right? And the truthful

01:17:09 answer is no. Okay.

01:17:10 Yeah.

01:17:11 So, you know, and that’s what you really mean in saying that, you know, you have, you know,

01:17:16 better than 98% odds for something. Okay. But so, yeah, you know, I mean, there could

01:17:22 certainly be surprises. And look, if P equals NP, well, then there would be the further

01:17:27 question of, you know, is the algorithm actually efficient in practice? Right? I mean, Don

01:17:33 Knuth, who I know that you’ve interviewed as well, right, he likes to conjecture that

01:17:39 P equals NP, but that the algorithm is so inefficient that it doesn’t matter anyway.

01:17:44 Right?

01:17:45 No, I don’t know. I’ve listened to him say that. I don’t know whether he says that just

01:17:50 because he has an actual reason for thinking it’s true or just because it sounds cool.

01:17:54 Yeah.

01:17:54 Okay. But, you know, that’s a logical possibility, right, that the algorithm could be n to the

01:18:00 10,000 time, or it could even just be n squared time, but with a huge leading constant; it could

01:18:06 be a googol times n squared or something like that. And in that case, the fact that P equals

01:18:12 NP, well, it would ravage the whole theory of complexity. We would have to rebuild from

01:18:19 the ground up. But in practical terms, it might mean very little, right, if the algorithm

01:18:25 was too inefficient to run. If the algorithm could actually be run in practice, like if

01:18:31 it had small enough constants, or if you could improve it to where it had small enough constants

01:18:38 that was efficient in practice, then that would change the world. Okay?

01:18:42 You think it would have, like, what kind of impact would it have?

01:18:44 Well, okay, I mean, here’s an example. I mean, you could, well, okay, just for starters,

01:18:49 you could break basically all of the encryption that people use to protect the internet.

01:18:53 That’s just for starters.

01:18:54 You could break Bitcoin and every other cryptocurrency, or, you know,

01:18:58 mine as much Bitcoin as you wanted, right? You know, become a super duper billionaire,

01:19:06 right? And then plot your next move.

01:19:09 Right. That’s just for starters. That’s a good point.

01:19:11 Now, your next move might be something like, you know, you now have, like, a theoretically

01:19:16 optimal way to train any neural network, to find parameters for any neural network, right?

01:19:22 So you could now say, like, is there any small neural network that generates the entire content

01:19:27 of Wikipedia, right? And now the question is not, can you find it? The

01:19:33 question has been reduced to, does that exist or not? If it does exist, then the answer would be,

01:19:39 yes, you can find it, okay? If you had this algorithm in your hands, okay?

01:19:44 You could ask your computer, you know, I mean, P versus NP is one of these seven problems that

01:19:50 carries this million dollar prize from the Clay Foundation. You know, if you solve it,

01:19:54 you know, and others are the Riemann hypothesis, the Poincare conjecture, which was solved,

01:20:00 although the solver turned down the prize, right, and four others. But what I like to say,

01:20:06 the way that we can see that P versus NP is the biggest of all of these questions

01:20:11 is that if you had this fast algorithm, then you could solve all seven of them,

01:20:15 okay? You just ask your computer, you know, is there a short proof of the Riemann hypothesis,

01:20:20 right? You know, in a language where a machine could verify it,

01:20:25 and provided that such a proof exists, then your computer finds it

01:20:28 in a short amount of time without having to do a brute force search, okay? So, I mean,

01:20:33 those are the stakes of what we’re talking about. But I hope that also helps to give your listeners

01:20:38 some intuition of why I and most of my colleagues would put our money on P not equaling NP.

01:20:46 Is it possible, I apologize, this is a really dumb question, but is it possible

01:20:50 that a proof will come out that P equals NP, but an algorithm that makes P equals NP

01:20:59 is impossible to find? Is that like crazy? Okay, well, if P equals NP, it would mean

01:21:05 that there is such an algorithm. That it exists, yeah.

01:21:09 But, you know, it would mean that it exists. Now, you know, in practice, normally the way that we

01:21:17 would prove anything like that would be by finding the algorithm. But there is such a thing as a

01:21:23 nonconstructive proof that an algorithm exists. You know, this has really only reared its head,

01:21:28 I think, a few times in the history of our field, right? But, you know, it is theoretically possible

01:21:35 that such a thing could happen. But, you know, there are, even here, there are some amusing

01:21:40 observations that one could make. So there is this famous observation of Leonid Levin, who was,

01:21:47 you know, one of the original discoverers of NP completeness, right? And he said,

01:21:51 consider the following algorithm that I guarantee will solve the NP problems efficiently,

01:21:58 provided that P equals NP, okay? Here is what it does. It just runs, you know,

01:22:05 it enumerates every possible algorithm in a gigantic infinite list, right? Like in

01:22:11 like alphabetical order, right? You know, and many of them maybe won’t even compile,

01:22:15 so we just ignore those, okay? But now, we just, you know, run the first algorithm,

01:22:20 then we run the second algorithm, we run the first one a little bit more,

01:22:24 then we run the first three algorithms for a while, we run the first four for a while.

01:22:28 This is called dovetailing, by the way. This is a known trick in theoretical computer science,

01:22:35 okay? But we do it in such a way that, you know, whatever algorithm out there in our list

01:22:42 solves, you know, the NP problems efficiently, we will eventually hit that one,

01:22:48 right? And now, the key is that whenever we hit that one, you know, by assumption,

01:22:54 it has to solve the problem, it has to find the solution, and once it claims to find a solution,

01:22:59 then we can check that ourselves, right? Because these are NP problems, then we can check it.
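
Here is a minimal Python sketch of the dovetailing trick, with generators standing in for “algorithms.” The enumeration of all possible programs is faked with a small hard-coded list, since that enumeration is exactly the impractical part discussed next.

```python
def dovetail(algorithms, check):
    """Interleave the algorithms: start one more each round, and give
    every started machine k more steps on round k. Each 'algorithm' is
    a generator that may yield candidate answers; 'check' verifies a
    candidate, which is the easy NP direction."""
    started, k = [], 0
    while True:
        k += 1
        if k <= len(algorithms):
            started.append(algorithms[k - 1]())  # start one more machine
        for gen in started:
            for _ in range(k):                   # k more steps each
                candidate = next(gen, None)
                if candidate is not None and check(candidate):
                    return candidate

# Toy stand-ins for "all possible algorithms", hunting for a
# nontrivial divisor of 8633: one useless machine, one that searches.
def useless():
    while True:
        yield None

def count_up():
    d = 2
    while True:
        yield d
        d += 1

print(dovetail([useless, count_up], check=lambda d: 8633 % d == 0))  # 89
```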

01:23:04 Now, this is utterly impractical, right? You know, you’d have to do this enormous exhaustive search

01:23:11 among all the algorithms, but from a certain theoretical standpoint, that is merely a constant

01:23:16 prefactor, right? That’s merely a multiplier of your running time. So, there are tricks like that

01:23:22 one can do to say that, in some sense, the algorithm would have to be constructive. But,

01:23:27 you know, in the human sense, you know, it’s conceivable

01:23:33 that one could prove such a thing via a nonconstructive method. Is that likely? I don’t

01:23:38 think so. Not personally. So, that’s P and NP, but the Complexity Zoo is full of wonderful

01:23:46 creatures. Well, it’s got about 500 of them. 500. So, yeah, how do you get more?

01:23:56 I mean, just for starters, there is everything that we could do with a conventional computer

01:24:02 with a polynomial amount of memory, okay, but possibly an exponential amount of time,

01:24:08 because we get to reuse the same memory over and over again. Okay, that is called PSPACE,

01:24:13 okay? And that’s actually, we think, an even larger class than NP. Okay, well, P is contained

01:24:21 in NP, which is contained in PSPACE. And we think that those containments are strict.

01:24:26 And the constraint there is on the memory. The memory has to grow

01:24:31 polynomially with the size of the problem. That’s right. That’s right. But in PSPACE,

01:24:35 we now have interesting things that were not in NP, like as a famous example, you know,

01:24:41 from a given position in chess, you know, does white or black have the win? Let’s say,

01:24:46 provided that the game lasts only for a reasonable number of moves, okay? Or likewise,

01:24:53 for go, okay? And, you know, even for the generalizations of these games to arbitrary

01:24:57 size boards, because with an eight by eight board, you could say that’s just a constant

01:25:01 size problem. You just, you know, in principle, you just solve it in O of one time, right?

01:25:06 But so we really mean the generalizations of, you know, games to arbitrary size boards here.

01:25:14 Or another thing in P space would be, like, I give you some really hard constraint satisfaction

01:25:21 problem, like, you know, a traveling salesperson or, you know, packing boxes into the trunk of

01:25:28 your car or something like that. And I ask, not just is there a solution, which would be an NP

01:25:33 problem, but I ask how many solutions are there, okay? That, you know, count the number of valid

01:25:41 solutions. Those problems lie in a complexity class called sharp P, or, like,

01:25:49 it looks like hashtag P, okay, which sits between NP and PSPACE.

01:25:55 Then there’s all the problems that you can do in exponential time, okay? That’s called EXP. So,

01:26:01 and by the way, it was proven in the 60s that EXP is larger than P, okay? So we know that much.

01:26:09 We know that there are problems that are solvable in exponential time that are not solvable in

01:26:14 polynomial time, okay? In fact, we even know, we know that there are problems that are solvable in

01:26:20 n cubed time that are not solvable in n squared time. And those don’t help us with the

01:26:26 controversy between P and NP at all. Unfortunately, it seems not, or certainly not yet, right?

01:26:31 The techniques that we use to establish those things, they’re very, very related to how Turing

01:26:37 proved the unsolvability of the halting problem, but they seem to break down when we’re comparing

01:26:42 two different resources, like time versus space, or like, you know, P versus NP, okay? But, you know,

01:26:50 I mean, there’s what you can do with a randomized algorithm, right? An algorithm

01:26:55 that can sometimes, you know, have some probability of making a mistake.

01:27:01 That’s called BPP, bounded error probabilistic polynomial time. And then, of course, there’s

01:27:07 one that’s very close to my own heart, what you can efficiently do in polynomial time using a

01:27:13 quantum computer, okay? And that’s called BQP, right? And so, you know, what’s understood about

01:27:20 it? Okay, so P is contained in BPP, which is contained in BQP, which is contained in PSPACE,

01:27:27 okay? In fact, BQP is contained in something very similar to sharp P. BQP is basically,

01:27:35 you know, well, it’s contained in, like, P with the magic power to solve sharp P problems, okay?

01:27:41 Why is BQP contained in PSPACE?

01:27:44 Oh, that’s an excellent question. So there is, well, I mean, one has to prove that, okay? But

01:27:53 the proof, you could think of it as using Richard Feynman’s picture of quantum mechanics,

01:28:00 which is that you can always, you know, we haven’t really talked about quantum mechanics in this

01:28:06 conversation. We did in our previous one.

01:28:08 Yeah, we did last time.

01:28:09 But yeah, we did last time, okay? But basically, you could always think of a quantum computation

01:28:16 as like a branching tree of possibilities where each possible path that you could take

01:28:24 through, you know, the space has a complex number attached to it called an amplitude, okay? And now

01:28:30 the rule is, you know, when you make a measurement at the end, well, you see a random answer,

01:28:36 okay? But quantum mechanics is all about calculating the probability that you’re

01:28:40 going to see one potential answer versus another one, right? And the rule for calculating the

01:28:47 probability that you’ll see some answer is that you have to add up the amplitudes for all of the

01:28:53 paths that could have led to that answer. And then, you know, that’s a complex number, so that,

01:28:58 you know, how could that be a probability? Then you take the squared absolute value of the result.

01:29:04 That gives you a number between zero and one, okay? So yeah, I just summarized quantum mechanics

01:29:10 in like 30 seconds, okay? But now, you know, what this already tells us is that anything I can do

01:29:17 with a quantum computer, I could simulate with a classical computer if I only have exponentially

01:29:23 more time, okay? And why is that? Because if I have exponential time, I could just write down this

01:29:30 entire branching tree and just explicitly calculate each of these amplitudes, right? You know, that

01:29:36 will be very inefficient, but it will work, right? It’s enough to show that quantum computers could

01:29:42 not solve the halting problem or, you know, they could never do anything that is literally

01:29:47 uncomputable in Turing’s sense, okay? But now, as I said, there’s even a stronger result which says

01:29:54 that BQP is contained in PSPACE. The way that we prove that is that we say, if all I want is to

01:30:02 calculate the probability of some particular output happening, you know, which is all I need to

01:30:08 simulate a quantum computer, really, then I don’t need to write down the entire quantum state,

01:30:13 which is an exponentially large object. All I need to do is just calculate what is the amplitude for

01:30:20 that final state. And to do that, I just have to sum up all the amplitudes that lead to that state.

01:30:27 Okay, so that’s an exponentially large sum, but I can calculate it just reusing the same memory over

01:30:34 and over for each term in the sum. And hence the “space” in PSPACE? Hence the PSPACE. Yeah.
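
A hedged sketch of that path-sum argument in Python: the amplitude of one final outcome is computed recursively, so the memory used is just the recursion stack (polynomial), while the time grows exponentially with the number of gates. The toy circuit uses single-qubit gates only, purely for illustration.

```python
import math

def amplitude(gates, x, y):
    """<y|C|x> for a circuit C of single-qubit gates, computed by
    summing over Feynman paths. Memory: only the recursion stack
    (polynomial). Time: up to 2**len(gates) paths (exponential)."""
    if not gates:
        return 1.0 if x == y else 0.0
    (U, q), rest = gates[0], gates[1:]
    bit = (x >> q) & 1                  # qubit q's value on this path
    total = 0j
    for b in (0, 1):                    # branch: qubit q becomes b
        x_next = (x & ~(1 << q)) | (b << q)
        total += U[b][bit] * amplitude(rest, x_next, y)
    return total

h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                   # Hadamard gate

# Two Hadamards in a row act as the identity: the two paths through
# the intermediate state interfere, giving probability ~1 for |0>.
circuit = [(H, 0), (H, 0)]
print(abs(amplitude(circuit, 0, 0)) ** 2)  # ~1.0
```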

01:30:39 So what, out of that whole Complexity Zoo, and it could be BQP, what do you find is the most,

01:30:46 the class that captured your heart the most, the most beautiful class that’s just, yeah.

01:30:53 I used, as my email address, bqpqpoly at gmail.com. Yes, because BQP slash Qpoly,

01:31:03 well, you know, amazingly no one had taken it.

01:31:06 Amazing, amazing.

01:31:07 But, you know, this is a class that I was involved in sort of defining,

01:31:12 proving the first theorems about in 2003 or so. So it was kind of close to my heart.

01:31:18 But this is like, if we extended BQP, which is the class of everything we can do efficiently

01:31:24 with a quantum computer, to allow quantum advice, which means imagine that you had some

01:31:31 special initial state, okay, that could somehow help you do computation. And maybe

01:31:36 such a state would be exponentially hard to prepare, okay, but maybe somehow these states

01:31:43 were formed in the Big Bang or something, and they’ve just been sitting around ever since,

01:31:46 right? If you found one, this state could be, like, ultra powerful; there are no limits on how

01:31:53 powerful it could be, except that this state doesn’t know in advance which input you’ve got,

01:31:58 right? It only knows the size of your input. You know, and that’s BQP slash Qpoly. So that’s

01:32:05 one that I just personally happen to love, okay? But, you know, if you’re asking like what’s the,

01:32:11 you know, there’s a class that I think is way more beautiful or fundamental than a lot of people

01:32:18 even within this field realize. That class is called SZK, or Statistical Zero Knowledge.

01:32:28 And, you know, there’s a very, very easy way to define this class, which is to say, suppose that

01:32:32 I have two algorithms that each sample from probability distributions, right? So each one

01:32:39 just outputs random samples according to, you know, possibly different distributions. And now

01:32:45 the question I ask is, you know, let’s say distributions over strings of n bits, you know,

01:32:50 so over an exponentially large space. Now I ask, are these two distributions

01:32:57 close or far as probability distributions? Okay. Any problem that can be reduced to that,

01:33:04 you know, that can be put into that form is an SZK problem. And the way that this class was

01:33:10 originally discovered was completely different from that and was kind of more complicated. It

01:33:15 was discovered as the class of all of the problems that have a certain kind of what’s called zero

01:33:21 knowledge proof. Zero knowledge proofs are one of the central ideas in cryptography. You know,

01:33:27 Shafi Goldwasser and Silvio Micali won the Turing Award for, you know, inventing them.

01:33:33 And they’re at the core of even some cryptocurrencies that, you know, people use

01:33:38 nowadays. But zero knowledge proofs are ways of proving to someone that something is true,

01:33:45 like, you know, that there is a solution to this, you know, optimization problem or that these two

01:33:53 graphs are isomorphic to each other or something, but without revealing why it’s true, without

01:33:59 revealing anything about why it’s true. Okay. SZK is all of the problems for which there is such a

01:34:06 proof that doesn’t rely on any cryptography. Okay. And if you wonder, like, how could such a thing

01:34:13 possibly exist, right? Well, like, imagine that I had two graphs and I wanted to convince you

01:34:20 that these two graphs are not isomorphic, meaning, you know, I cannot permute one of them so that

01:34:26 it’s the same as the other one, right? You know, that might be a very hard statement to prove,

01:34:30 right? I might need, you know, you might have to do a very exhaustive enumeration of, you know,

01:34:35 all the different permutations before you were convinced that it was true. But what if there were

01:34:40 some all knowing wizard that said to you, look, I’ll tell you what, just pick one of the graphs

01:34:45 randomly, then randomly permute it, then send it to me and I will tell you which graph you started

01:34:52 with. Okay. And I will do that every single time. Right. And let’s say that that wizard did that a

01:35:02 hundred times and it was right every time. Yeah. Right. Now, if the graphs were isomorphic, then,

01:35:08 you know, it would have been flipping a coin each time, right? It would have had only a one in two

01:35:13 to the 100th power chance of, you know, guessing right each time. But, you know, so if it’s

01:35:18 right every time, then now you’re statistically convinced that these graphs are not isomorphic,

01:35:24 even though you’ve learned nothing new about why they aren’t. So fascinating.
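
A hedged Python sketch of that wizard protocol for graph non-isomorphism. The “wizard” is simulated here by brute force over all relabelings, which is exactly the unlimited power a realistic verifier lacks.

```python
import itertools, random

def permute(edges, perm):
    # Relabel the vertices of a graph given as a set of undirected edges.
    return frozenset(frozenset({perm[u], perm[v]}) for u, v in edges)

def wizard_guess(G0, challenge, n):
    # The all-powerful prover: brute-force every relabeling to see
    # whether the challenge came from G0; otherwise answer G1.
    for perm in itertools.permutations(range(n)):
        if permute(G0, perm) == challenge:
            return 0
    return 1

def protocol(G0, G1, n, rounds=100):
    for _ in range(rounds):
        i = random.randrange(2)                  # verifier's secret coin
        perm = list(range(n)); random.shuffle(perm)
        challenge = permute((G0, G1)[i], perm)   # random relabeling
        if wizard_guess(G0, challenge, n) != i:
            return False  # wizard failed: the graphs may be isomorphic
    return True  # right every round: convinced they are not isomorphic

# A 4-vertex path vs. a triangle plus an isolated vertex.
path     = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (2, 3)])
triangle = frozenset(frozenset(e) for e in [(0, 1), (1, 2), (0, 2)])
print(protocol(path, triangle, 4))  # True, yet no isomorphism info leaks
```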

01:35:28 So yeah, so SZK is all of the problems that have protocols like that one, but it has this beautiful other

01:35:35 characterization. It’s shown up again and again in my own work and, you know, a lot of other

01:35:40 people’s work. And I think that it really is one of the most fundamental classes. It’s just that

01:35:45 people didn’t realize that when it was first discovered. So we’re living in the middle of

01:35:49 a pandemic currently. Yeah. How has your life been changed, or, better to ask, like, how has your

01:35:56 perspective of the world changed with this world changing event of a pandemic overtaking the entire

01:36:03 world? Yeah. Well, I mean, all of our lives have changed, you know, like, I guess,

01:36:08 as with no other event since I was born; you know, you would have to go back to World War II

01:36:13 for something, I think, of this magnitude in its effect on, you know, the way that we live our lives.

01:36:19 As for how it has changed my worldview: I think that the failure of institutions,

01:36:26 you know, like the CDC, like, you know, other institutions that we sort of thought

01:36:32 were trustworthy, like a lot of the media, was staggering, was absolutely breathtaking.

01:36:40 It is something that I would not have predicted. Right. I wrote on my blog that, you

01:36:46 know, it’s fascinating to, like, rewatch the movie Contagion from a decade

01:36:53 ago, right, which correctly foresaw so many aspects of, you know, what was going on: you know, an

01:37:00 airborne, you know, virus originates in China, spreads to, you know, much of the world, you know,

01:37:06 shuts everything down until a vaccine can be developed. You know, everyone has to stay at home,

01:37:12 you know, it gets an enormous number of things right. Okay. But the one thing

01:37:18 that they could not imagine, you know, is that like in this movie, everyone from the government

01:37:23 is, like, hyper competent, hyper dedicated to the public good, right. And, you

01:37:30 know, yeah, they’re the best of the best. And, you know,

01:37:33 there are these conspiracy theorists, right, who think, you know, this is all fake news.

01:37:39 There’s no, there’s not really a pandemic. And those are some random people on the internet who

01:37:44 the hyper competent government people have to, you know, oppose, right. In trying

01:37:49 to envision the worst thing that could happen, you know, there was a failure of

01:37:55 imagination. The movie makers did not imagine that the conspiracy theorists and the, you know,

01:38:01 and the incompetence and the nutcases would have captured our institutions and be the ones actually

01:38:07 running things. So you had a certain… I love competence in all walks of life. I get

01:38:13 so much energy, I’m so excited, from people who do an amazing job. And like you, or maybe you can

01:38:19 clarify, but I had, maybe not an intuition, but a hope that government at its best could be ultra

01:38:24 competent. So, first of all, two questions: like, how do you explain the lack of competence,

01:38:31 and the other, maybe on the positive side, how can we build a more competent government?

01:38:36 Well, there’s an election in two months. I mean, you have faith that the election…

01:38:41 I, you know, it’s not going to fix everything, but you know, it’s like,

01:38:45 I feel like there is a ship that is sinking and you could at least stop the sinking.

01:38:49 But, you know, I think that there are much, much deeper problems. I mean, I think that,

01:38:56 you know, it is plausible to me that, you know, a lot of the failures, you know, with the CDC,

01:39:03 with some of the other health agencies, even, you know, predate Trump, you know, predate the,

01:39:09 you know, right wing populism that has sort of taken over much of the world now. And, you know,

01:39:16 I think that, you know, I’ve actually been

01:39:23 strongly in favor of, you know, rushing vaccines. You know, I thought that we could have done

01:39:31 human challenge trials, you know, which were not done, right? We could have, you know,

01:39:36 like, had volunteers, you know, to actually get vaccines,

01:39:44 and get, you know, exposed to COVID. So innovative ways of accelerating what we’ve done previously

01:39:49 over a long time. I thought that, you know, each month that a vaccine is closer is like trillions

01:39:56 of dollars, and of course lives, you know, at least, you know, hundreds

01:40:01 of thousands of lives. Are you surprised that it’s taking this long? We still don’t have a plan.

01:40:05 There’s still not a feeling like anyone is actually doing anything in terms of alleviating it,

01:40:11 like any kind of plan. So there’s a bunch of stuff: there’s the vaccine, but you could also do

01:40:16 a testing infrastructure where everybody’s tested nonstop, with contact tracing, all that kind of thing.

01:40:21 Well, I mean, I’m as surprised as almost everyone else. I mean, this is a historic failure. It is

01:40:27 one of the biggest failures in the 240 year history of the United States, right? And we should

01:40:33 be, you know, crystal clear about that. And, you know, one thing that I think has been missing,

01:40:38 you know, even from the more competent side is like, you know, is sort of the World War II

01:40:45 mentality, right? The mentality of, you know,

01:40:52 if we can, by breaking a whole bunch of rules, you know, get a vaccine in, you know, even

01:40:59 half the amount of time as we thought, then let’s just do that because, you know, like we have to

01:41:07 weigh all of the moral qualms that we have about doing that against the moral qualms of not doing it.

01:41:13 And one key little aspect to that that’s deeply important to me, and we’ll go into that topic

01:41:18 next, is the World War II mentality wasn’t just about, you know, breaking all the rules to get

01:41:24 the job done. There was a togetherness to it. So, if I were president right now, it seems

01:41:31 quite elementary to unite the country because we’re facing a crisis. It’s easy to make the

01:41:39 virus the enemy. And it’s very surprising to me that the division has increased as opposed to

01:41:46 decrease. That’s heartbreaking. Yeah. Well, look, I mean, it’s been said by others that this is the

01:41:51 first time in the country’s history that we have a president who does not even pretend to, you know,

01:41:57 want to unite the country. I mean, Lincoln, who fought a civil war, said he wanted to unite the

01:42:06 country. And I do worry enormously about what happens if the results of this election are

01:42:15 contested. And will there be violence as a result of that? And will we have a clear path of succession?

01:42:22 And, you know, look, I mean, you know, we’re going to find out the answers to this in

01:42:27 two months. And if none of that happens, maybe I’ll look foolish. But I am willing to go on the

01:42:31 record and say, I am terrified about that. Yeah, I’ve been reading The Rise and Fall of the Third

01:42:37 Reich. So if I can, this is like one little voice just to put out there that I think November will

01:42:46 be a really critical month for people to breathe and put love out there. Because, you know, anger, in

01:42:55 that context, no matter who wins, no matter what is said,

01:43:01 may destroy our country, may destroy the world, because of the power of the country. So it’s

01:43:05 really important to be patient, loving, empathetic. Like one of the things that troubles me is that

01:43:11 even people on the left are unable to have a love and respect for people who voted for Trump. They

01:43:17 can’t imagine that there are good people who could vote for the opposite side. Oh, I know there are

01:43:23 because I know some of them, right? I mean, you know, it’s still, you know, maybe it baffles me,

01:43:29 but, you know, I know such people. Let me ask you this. It’s also heartbreaking to me

01:43:34 on the topic of cancel culture. So in the machine learning community, I’ve seen it a little bit

01:43:39 that there’s aggressive attacking of people who are trying to have a nuanced conversation about

01:43:46 things. And it’s troubling because it feels like nuanced conversation is the only way to talk about

01:43:55 difficult topics. And when there’s a thought police and speech police on any nuanced conversation

01:44:09 where everybody has to, like in an Animal Farm chant, say that racism is bad and sexism is bad, which are

01:44:15 things that everybody believes, and then they can’t possibly say anything nuanced. It feels like it

01:44:15 goes against any kind of progress from my kind of shallow perspective. But you’ve written a little

01:44:20 bit about cancel culture. Do you have thoughts there? Well, I mean, to say that I am opposed to,

01:44:28 you know, this trend of cancellations or of shouting people down rather than engaging them,

01:44:35 that would be a massive understatement, right? And I feel like, you know, I have put my money

01:44:40 where my mouth is, you know, not as much as some people have, but, you know, I’ve tried to do

01:44:46 something. I mean, I have defended, you know, some unpopular people and unpopular, you know, ideas

01:44:52 on my blog. I’ve, you know, tried to defend, you know, norms of open discourse, of, you know,

01:45:02 reasoning with our opponents, even when I’ve been shouted down for that on social media,

01:45:07 you know, called a racist, called a sexist, all of those things. And which, by the way,

01:45:11 I should say, you know, I would be perfectly happy, if we had time, to say, you know,

01:45:17 10,000 times, you know, my hatred of racism, of sexism, of homophobia, right?

01:45:25 But what I don’t want to do is to cede to some particular political faction the right to define

01:45:33 exactly what is meant by those terms to say, well, then you have to agree with all of these other

01:45:39 extremely contentious positions or else you are a misogynist or else you are a racist, right?

01:45:46 I say that, well, no, you know, don’t I or, you know, don’t people like me also get a say in the

01:45:54 discussion about, you know, what is racism, about what is going to be the most effective to combat

01:46:00 racism, right? And, you know, this cancellation mentality, I think, is spectacularly ineffective

01:46:08 at its own professed goal of, you know, combating racism and sexism.

01:46:13 What’s a positive way out? So, I don’t know if you see what I do on Twitter,

01:46:19 but on Twitter, and in my whole life, it’s who I am to the core:

01:46:25 I really focus on the positive and I try to put love out there in the world. And still,

01:46:32 I get attacked. And I look at that and I wonder like,

01:46:36 You too? I didn’t know.

01:46:38 Like, I haven’t actually said anything difficult and nuanced. You talk about somebody like

01:46:43 Steven Pinker, and I actually don’t know the full range of things that he’s attacked for,

01:46:50 but he tries to say difficult things. He tries to be thoughtful about difficult topics.

01:46:55 He does.

01:46:55 And obviously he just gets slaughtered by.

01:46:59 Well, I mean, yes, but it’s also amazing how well Steve has withstood it. I mean,

01:47:06 he just survived that attempt to cancel him just a couple of months ago, right?

01:47:10 Psychologically, he survives it too, which worries me because I don’t think I can.

01:47:15 Yeah, I’ve gotten to know Steve a bit. He is incredibly unperturbed by this stuff.

01:47:20 And I admire that and I envy it. I wish that I could be like that. I mean, my impulse when I’m

01:47:26 getting attacked is I just want to engage every single like anonymous person on Twitter and Reddit

01:47:32 who is saying mean stuff about me. And I want to just say, well, look, can we just talk this over

01:47:37 for an hour? And then you’ll see that I’m not that bad. And sometimes that even works. The

01:47:43 problem is then there’s the 20,000 other ones.

01:47:48 That’s not, but psychologically, does that wear on you?

01:47:51 It does. It does. But yeah, I mean, in terms of what is the solution, I mean, I wish I knew,

01:47:56 right? And so in a certain way, these problems are maybe harder than P versus NP, right?

01:48:02 I mean, but I think that part of it has to be that there’s a lot of sort of silent

01:48:10 support for what I’ll call the open discourse side, the reasonable enlightenment side.

01:48:17 And I think that that support has to become less silent, right? I think that a lot of people

01:48:23 just sort of agree that a lot of these cancellations and attacks are ridiculous,

01:48:30 but are just afraid to say so, right? Or else they’ll get shouted down as well, right? That’s

01:48:36 just the standard witch hunt dynamic, which, of course, this faction understands and exploits to

01:48:42 its great advantage. But if more people just said, we’re not going to stand for this, right? This

01:48:52 is, guess what? We’re against racism too. But what you’re doing is ridiculous, right? And the

01:49:01 hard part is it takes a lot of mental energy. It takes a lot of time. Even if you feel like

01:49:07 you’re not going to be canceled or you’re staying on the safe side, it takes a lot of time to

01:49:13 phrase things in exactly the right way and to respond to everything people say.

01:49:19 So, but I think that the more people speak up from all political persuasions, from all walks

01:49:29 of life, then the easier it is to move forward. Since we’ve been talking about love:

01:49:37 last time, I talked to you about the meaning of life a little bit, and it’s a weird question

01:49:43 to ask a computer scientist, but has love for other human beings, for things, for the world

01:49:50 around you played an important role in your life? It’s easy for a world class

01:49:59 computer scientist, you could even call yourself like a physicist, to be lost in the

01:50:06 books. Has the connection to other humans, love for other humans, played an important role?

01:50:11 I love my kids. I love my wife. I love my parents. I’m probably not different from most people in

01:50:24 loving their families and in that being very important in my life. Now, I should remind you

01:50:32 that I am a theoretical computer scientist. If you’re looking for deep insight about the nature

01:50:38 of love, you’re probably looking in the wrong place to ask me, but sure, it’s been important.

01:50:45 But is there something from a computer science perspective to be said about love? Is that even

01:50:53 beyond, into the realm of consciousness? There was this great cartoon, I think it

01:50:59 was one of the classic XKCDs where it shows a heart and it’s squaring the heart, taking the

01:51:07 Fourier transform of the heart, integrating the heart, each thing, and then it says, my normal

01:51:15 approach is useless here. I’m so glad I asked this question. I think there’s no better way to

01:51:22 end this. I hope we get a chance to talk again. This has been an amazing, cool experiment to do

01:51:26 it outside. I’m really glad you made it out. Yeah. Well, I appreciate it a lot. It’s been a

01:51:31 pleasure and I’m glad you were able to come out to Austin. Thanks. Thanks for listening to this

01:51:36 conversation with Scott Aaronson. And thank you to our sponsors, 8sleep, SimpliSafe, ExpressVPN,

01:51:44 and BetterHelp. Please check out these sponsors in the description to get a discount and to

01:51:50 support this podcast. If you enjoy this thing, subscribe on YouTube, review it with five stars

01:51:56 on Apple Podcast, follow on Spotify, support on Patreon, or connect with me on Twitter

01:52:01 at Lex Friedman. And now let me leave you with some words from Scott Aaronson that I also gave

01:52:07 to you in the introduction, which is, if you always win, then you’re probably doing something

01:52:14 wrong. Thank you for listening and for putting up with the intro and outro in this strange room in

01:52:21 the middle of nowhere. And I very much hope to see you next time in many more ways than one.