Transcript
00:00:00 The following is a conversation with Stephen Wolfram, a computer scientist, mathematician,
00:00:04 and theoretical physicist who is the founder and CEO of Wolfram Research, a company behind
00:00:10 Mathematica, Wolfram Alpha, Wolfram Language, and the new Wolfram Physics Project. He’s the author
00:00:16 of several books including A New Kind of Science, which on a personal note was one of the most
00:00:23 influential books in my journey in computer science and artificial intelligence. It made
00:00:29 me fall in love with the mathematical beauty and power of cellular automata.
00:00:34 It is true that perhaps one of the criticisms of Stephen is on a human level, that he has a big
00:00:40 ego, which prevents some researchers from fully enjoying the content of his ideas.
00:00:46 We talk about this point in this conversation. To me, ego can lead you astray but can also be
00:00:52 a superpower, one that fuels bold, innovative thinking that refuses to surrender to the cautious
00:00:59 ways of academic institutions. And here, especially, I ask you to join me in looking
00:01:05 past the peculiarities of human nature and opening your mind to the beauty of ideas in Stephen’s work
00:01:12 and in this conversation. I believe Stephen Wolfram is one of the most original minds of our time
00:01:17 and, at the core, is a kind, curious, and brilliant human being. This conversation was recorded in
00:01:24 November 2019 when the Wolfram Physics Project was underway but not yet ready for public
00:01:29 exploration as it is now. We've now agreed to talk again, probably multiple times in the near future,
00:01:36 so this is round one, and stay tuned for round two soon.
00:01:41 This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
00:01:45 review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter
00:01:51 at Lex Friedman spelled F R I D M A N. As usual, I’ll do a few minutes of ads now and never any
00:01:57 ads in the middle that can break the flow of the conversation. I hope that works for you and
00:02:02 doesn’t hurt the listening experience. Quick summary of the ads. Two sponsors,
00:02:07 ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN
00:02:12 at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast.
00:02:21 This show is presented by Cash App, the number one finance app in the App Store. When you get it,
00:02:26 use code lexpodcast. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock
00:02:32 market with as little as one dollar. Since Cash App does fractional share trading, let me mention
00:02:37 that the order execution algorithm that works behind the scenes to create the abstraction of
00:02:41 fractional orders is an algorithmic marvel. So big props to the Cash App engineers for
00:02:47 solving a hard problem that in the end provides an easy interface that takes a step up to the
00:02:52 next layer of abstraction over the stock market. This makes trading more accessible for new
00:02:57 investors and diversification much easier. So again, if you get Cash App from the App Store,
00:03:03 Google Play, and use the code lexpodcast, you get ten dollars and Cash App will also donate
00:03:09 ten dollars to FIRST, an organization that is helping to advance robotics and STEM education
00:03:14 for young people around the world. This show is presented by ExpressVPN. Get it at expressvpn.com
00:03:22 slash lexpod to get a discount and to support this podcast. I’ve been using ExpressVPN for many years.
00:03:30 I love it. It’s really easy to use. Press the big power on button and your privacy is protected.
00:03:36 And if you like, you can make it look like your location is anywhere else in the world.
00:03:41 This has a large number of obvious benefits. Certainly, it allows you to access international
00:03:46 versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any
00:03:54 device you can imagine. I use it on Linux. Shout out to Ubuntu. New version coming out soon actually.
00:04:00 Windows, Android, but it’s available anywhere else too. Once again, get it at expressvpn.com
00:04:07 slash lexpod to get a discount and to support this podcast. And now here’s my conversation
00:04:14 with Stephen Wolfram. You and your son Christopher helped create the alien language in the movie
00:04:20 Arrival. So let me ask maybe a bit of a crazy question, but if aliens were to visit us on earth,
00:04:27 do you think we would be able to find a common language?
00:04:31 Well, by the time we’re saying aliens are visiting us, we’ve already prejudiced the whole story
00:00:37 because with the concept of an alien actually visiting, so to speak, we already know they're the kind of
00:00:44 things it makes sense to talk about visiting. So we already know they exist in the same kind
00:04:49 of physical setup that we do. It’s not just radio signals. It’s an actual thing that shows up and so
00:04:59 on. So I think in terms of can one find ways to communicate? Well, the best example we have of
00:05:07 this right now is AI. I mean, that’s our first sort of example of alien intelligence. And the
00:05:13 question is, how well do we communicate with AI? If you were in the middle of a neural network,
00:05:19 a neural net, and you open it up and it’s like, what are you thinking? Can you discuss things
00:05:25 with it? It’s not easy, but it’s not absolutely impossible. So I think by the time, given the
00:05:32 setup of your question, aliens visiting, I think the answer is yes, one will be able to find some
00:05:38 form of communication, whatever communication means. Communication requires notions of purpose
00:05:43 and things like this. It’s a kind of philosophical quagmire.
00:05:46 So if AI is a kind of alien life form, what do you think visiting looks like? So if we look at
00:05:55 aliens visiting, and we’ll get to discuss computation and the world of computation,
00:06:01 but if you were to imagine, you said you already prejudiced something by saying you visit,
00:06:06 but how would aliens visit?
00:06:09 By visit, there’s kind of an implication. And here we’re using the imprecision of human language,
00:06:15 you know, in a world of the future. And if that’s represented in computational language,
00:06:19 we might be able to take the concept visit and go look in the documentation, basically,
00:06:26 and find out exactly what does that mean, what properties does it have, and so on.
00:06:29 But by visit, in ordinary human language, I’m kind of taking it to be there’s something,
00:06:36 a physical embodiment that shows up in a spacecraft, since we kind of know that that’s
00:06:42 necessary. We’re not imagining it’s just, you know, photons showing up in a radio signal that,
00:06:51 you know, photons in some very elaborate pattern, we’re imagining it’s physical
00:06:55 things made of atoms and so on, that show up.
00:06:58 Can it be photons in a pattern?
00:07:01 Well, that’s a good question. I mean, whether there is the possibility,
00:07:05 you know, what counts as intelligence? Good question. I mean, it’s, you know, and I
00:07:11 used to think there was sort of a, oh, there’ll be, you know, it’ll be clear what it means to
00:07:15 find extraterrestrial intelligence, et cetera, et cetera, et cetera. I’ve increasingly realized,
00:07:20 as a result of science that I’ve done, that there really isn’t a bright line between
00:07:24 the intelligent and the merely computational, so to speak.
00:07:28 So, you know, in our kind of everyday sort of discussion, we’ll say things like, you know,
00:07:33 the weather has a mind of its own. Well, let’s unpack that question. You know, we realize
00:07:38 that there are computational processes that go on that determine the fluid dynamics of this and
00:07:44 that and the atmosphere, et cetera, et cetera, et cetera. How do we distinguish that from
00:07:49 the processes that go on in our brains of, you know, the physical processes that go on in our
00:07:53 brains? How do we separate those? How do we say the physical processes going on that represent
00:08:00 sophisticated computations in the weather, oh, that’s not the same as the physical processes
00:08:05 that go on that represent sophisticated computations in our brains? The answer is,
00:08:09 I don’t think there is a fundamental distinction. I think the distinction for us is that there’s
00:08:14 kind of a thread of history and so on that connects kind of what happens in different brains
00:08:21 to each other, so to speak. And it’s a, you know, what happens in the weather is something which is
00:08:26 not connected by sort of a thread of civilizational history, so to speak, to what we’re used to.
00:08:32 In the stories that the human brains told us, but maybe the weather has its own stories.
00:08:37 Absolutely. Absolutely. And that's where we run into trouble thinking about extraterrestrial
00:08:43 intelligence because, you know, it’s like that pulsar magnetosphere that’s generating these very
00:08:49 elaborate radio signals. You know, is that something that we should think of as being this
00:08:53 whole civilization that’s developed over the last however long, you know, millions of years of
00:08:58 processes going on in the neutron star or whatever versus what, you know, what we’re used to in human
00:09:06 intelligence? I mean, I think in the end, you know, when people talk about extraterrestrial
00:09:11 intelligence and where is it and the whole, you know, Fermi paradox of how come there’s no other
00:09:17 signs of intelligence in the universe, my guess is that we’ve got sort of two alien forms of
00:09:23 intelligence that we’re dealing with, artificial intelligence and sort of physical or extraterrestrial
00:09:30 intelligence. And my guess is people will sort of get comfortable with the fact that both of these
00:09:35 have been achieved around the same time. And in other words, people will say, well, yes, we’re
00:09:41 used to computers, things we’ve created, digital things we’ve created, being sort of intelligent
00:09:47 like we are. And they’ll say, oh, we’re kind of also used to the idea that there are things around
00:09:51 the universe that are kind of intelligent like we are, except they don’t share the sort of
00:09:57 civilizational history that we have. And so they’re a different branch. I mean, it’s similar to when
00:10:04 you talk about life, for instance. I mean, you kind of said life form, I think almost synonymously
00:10:10 with intelligence, which I don’t think is, you know, the AIs would be upset to hear you equate
00:10:18 those two things. Because I really probably implied biological life. But you’re saying,
00:10:25 I mean, we’ll explore this more, but you’re saying it’s really a spectrum and it’s all just
00:10:29 a kind of computation. And so it’s a full spectrum and we just make ourselves special by weaving a
00:10:37 narrative around our particular kinds of computation. Yes. I mean, the thing that I think I’ve kind of
00:10:43 come to realize is, you know, at some level, it’s a little depressing to realize that there’s so
00:10:48 little... Or liberating. Well, yeah, but I mean, it's, you know, it's the story of science,
00:10:52 right? And, you know, from Copernicus on, it's like, you know, first we were
00:10:56 convinced our planet is at the center of the universe. No, that's not true. Well, then we
00:11:01 were convinced there’s something very special about the chemistry that we have as biological
00:11:06 organisms. That’s not really true. And then we’re still holding out that hope. Oh, this intelligence
00:11:11 thing we have, that’s really special. I don’t think it is. However, in a sense, as you say,
00:11:17 it’s kind of liberating for the following reason, that you realize that what’s special is the
00:11:22 details of us, not some abstract attribute that, you know, we could wonder, oh, is something else
00:11:31 going to come along and, you know, also have that abstract attribute? Well, yes, every abstract
00:11:36 attribute we have, something else has it. But the full details of our kind of history of our
00:11:42 civilization and so on, nothing else has that. That’s what, you know, that’s our story, so to
00:11:48 speak. And that’s sort of almost by definition, special. So I view it as not being such a, I mean,
00:11:56 initially I was like, this is bad. This is kind of, you know, how can we have self respect about
00:12:02 the things that we do? Then I realized the details of the things we do, they are the story.
00:12:08 Everything else is kind of a blank canvas. So maybe on a small tangent, you just made me
00:12:15 think of it, but what do you make of the monoliths in 2001 Space Odyssey in terms of
00:12:21 aliens communicating with us and sparking the kind of particular intelligent computation that
00:12:28 we humans have? Is there anything interesting to get from that sci fi? Yeah, I mean, I think what’s
00:12:37 fun about that is, you know, the monoliths are these, you know, 1:4:9 perfect
00:12:42 cuboid things. And on the Earth a million years ago, whatever they were portraying with a bunch
00:12:48 of apes and so on, a thing that has that level of perfection seems out of place. It seems very kind
00:12:55 of constructed, very engineered. So that’s an interesting question. What is the, you know,
00:13:02 what’s the techno signature, so to speak? What is it that you see it somewhere and you say,
00:13:07 my gosh, that had to be engineered. Now, the fact is we see crystals, which are also very perfect.
00:13:15 And, you know, the perfect ones are very perfect. They're nice polyhedra or whatever.
00:13:20 And so in that sense, if you say, well, it's a sign of, sort of, a techno signature that
00:13:27 it's a perfect polygonal shape, a polyhedral shape, that's not true. And so then it's an interesting
00:13:34 question. What is the right signature? I mean, like, you know, Gauss, famous mathematician,
00:13:41 you know, he had this idea, you should cut down the Siberian forest in the shape of sort of a
00:13:46 typical image of the proof of the Pythagorean theorem on the grounds that it was a kind of
00:13:51 cool idea, didn’t get done. But, you know, it’s on the grounds that the Martians would see that and
00:13:57 realize, gosh, there are mathematicians out there. It’s kind of, you know, in his theory of the world,
00:14:02 that was probably the best advertisement for the cultural achievements of our species.
00:14:08 But, you know, it's a reasonable question. What can you send or create that is a sign
00:14:16 of intelligence in its creation, or even intention in its creation? You talk about if we were to send
00:14:22 a beacon. What should we send? Is math our greatest creation? What is our greatest
00:14:30 creation? I think it's a philosophically doomed issue. I mean, in other
00:14:36 words, you send something, you think it’s fantastic, but it’s kind of like we are part of
00:14:42 the universe. We make things that are, you know, things that happen in the universe.
00:14:47 Computation, which is sort of the thing that we are in some abstract sense using to create all
00:14:53 these elaborate things we create, is surprisingly ubiquitous. In other words, we might have thought
00:15:01 that, you know, we’ve built this whole giant engineering stack that’s led us to microprocessors,
00:15:06 that’s led us to be able to do elaborate computations. But this idea that computations
00:15:13 are happening all over the place. The only question is whether there's a thread that connects
00:15:18 our human intentions to what those computations are. And so I think this question of what
00:15:24 do you send to kind of show off our civilization in the best possible way? I think any kind of
00:15:32 almost random slab of stuff we’ve produced is about equivalent to everything else. I think
00:15:38 it’s one of these things where it’s a non romantic way of phrasing it. I just started to interrupt,
00:15:44 but I just talked to Andrew in who’s the wife of Carl Sagan. And so I don’t know if you’re
00:15:51 familiar with the Voyager. I mean, she was part of sending, I think, brainwaves of, you know,
00:15:57 wasn’t it hers? Her brainwaves when she was first falling in love with Carl Sagan. It’s
00:16:03 this beautiful story that perhaps you would shut down the power of that by saying we might
00:16:11 as well send anything else. And that’s interesting. All of it is kind of an interesting, peculiar
00:16:16 thing. Yeah, yeah, right. Well, I mean, I think it’s kind of interesting to see on the Voyager,
00:16:21 you know, golden record thing. One of the things that’s kind of cute about that is, you know,
00:16:25 it was made when was it in the late 70s, early 80s. And, you know, one of the things, it’s a
00:16:31 phonograph record. Okay. And it has a diagram of how to play a phonograph record. And, you know,
00:16:37 it’s kind of like it’s shocking that in just 30 years, if you show that to a random kid of today,
00:16:43 and you show them that diagram, I’ve tried this experiment, they’re like, I don’t know what the
00:16:47 heck this is. And the best anybody can think of is, you know, take the whole record, forget the
00:16:52 fact that it has some kind of helical track in it, just image the whole thing and see what’s there.
00:16:58 That’s what we would do today. In only 30 years, our technology has kind of advanced to the point
00:17:03 where the playing of a helical, you know, mechanical track on a phonograph record is now
00:17:09 something bizarre. So, you know, it’s a cautionary tale, I would say, in terms of the ability to make
00:17:17 something that in detail sort of leads by the nose, some, you know, the aliens or whatever,
00:17:23 to do something. It’s like, no, you know, best you can do, as I say, if we were doing this today,
00:17:29 we would not build a helical scan thing with a needle. We would just take some high resolution
00:17:35 imaging system and get all the bits off it and say, oh, it’s a big nuisance that they put in a
00:17:40 helix, you know, in a spiral. Let’s just unravel the spiral and start from there.
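The "unravel the spiral" idea can be sketched concretely: given a high-resolution image of the disc, you walk an Archimedean spiral outward from the center and read intensities off as a one-dimensional signal. This is only an illustrative sketch; the function name, parameters, and default values here are invented, not taken from any real record-imaging pipeline.

```python
import math

def unroll_spiral(sample, r0=10.0, pitch=0.5, dtheta=0.01, turns=100):
    """Walk an Archimedean spiral r = r0 + pitch * theta / (2*pi)
    outward from the disc center, reading off image intensities.
    `sample(x, y)` is any function returning intensity at (x, y)."""
    signal = []
    theta = 0.0
    while theta < 2 * math.pi * turns:
        r = r0 + pitch * theta / (2 * math.pi)  # radius grows with each turn
        signal.append(sample(r * math.cos(theta), r * math.sin(theta)))
        theta += dtheta
    return signal
```

Once the track is a linear signal, decoding it becomes an ordinary signal-processing problem rather than a mechanical one.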
00:17:49 Do you think, and this will get into trying to figure out interpretability of AI,
00:17:56 interpretability of computation, being able to communicate with various kinds of computations,
00:18:02 do you think we’d be able to, if you put your alien hat on, figure out this record,
00:18:08 how to play this record?
00:18:10 Well, it's a question of what one wants to do. I mean,
00:18:13 Understand what the other party was trying to communicate or understand anything about the
00:18:19 other party.
00:18:20 What does understanding mean? I mean, that's the issue. The issue is, it's like when people
00:18:24 were trying to do natural language understanding for computers, right? So people tried to do that
00:18:30 for years. It wasn’t clear what it meant. In other words, you take your piece of English or whatever,
00:18:36 and you say, gosh, my computer has understood this. Okay, that’s nice. What can you do with that?
00:18:43 Well, so for example, when we built Wolfram Alpha, one of the things was it's doing question answering
00:18:51 and so on, and it needs to do natural language understanding. The reason, as I realized after
00:18:56 the fact, that we were able to do natural language understanding quite well when people
00:19:01 hadn't before: the number one thing was we had an actual objective for the natural language
00:19:07 understanding. We were trying to turn the natural language into this computational language
00:19:12 that we could then do things with. Now, similarly, when you imagine your alien, you say,
00:19:16 okay, we’re playing them the record. Did they understand it? Well, it depends what you mean.
00:19:23 If there’s a representation that they have, if it converts to some representation where we can say,
00:19:28 oh yes, that’s a representation that we can recognize is represents understanding, then all
00:19:35 well and good. But actually, the only ones that I think we can say would represent understanding
00:19:41 are ones that will then do things that we humans kind of recognize as being useful to us.
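The point that "understanding" only becomes meaningful once there is an objective, turning language into a representation you can act on, can be sketched in miniature. The toy grammar and function names below are invented for illustration; the real Wolfram Alpha pipeline is of course far more elaborate.

```python
import re

# Toy "natural language -> computational language" translator. The
# test of understanding is whether the resulting representation lets
# us actually do something: here, evaluate an answer.
PATTERNS = [
    (re.compile(r"what is (\d+) plus (\d+)"), lambda a, b: int(a) + int(b)),
    (re.compile(r"what is (\d+) times (\d+)"), lambda a, b: int(a) * int(b)),
]

def understand(utterance):
    """Map an utterance to an actionable representation and evaluate it;
    return None if nothing actionable was recognized."""
    text = utterance.lower().strip("?! .")
    for pattern, action in PATTERNS:
        match = pattern.fullmatch(text)
        if match:
            return action(*match.groups())
    return None
```

By this criterion, `understand("What is 2 plus 3?")` counts as understanding (it yields an answer we can use), while a question outside the tiny grammar yields None, however much structure the input had.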
00:19:47 Maybe you’re trying to understand, quantify how technologically advanced this particular
00:19:53 civilization is. So are they a threat to us from a military perspective? That’s probably the
00:20:00 first kind of understanding they’ll be interested in. Gosh, that’s so hard. That’s like in the
00:20:05 Arrival movie, that was one of the key questions is, why are you here, so to speak? Are you going
00:20:11 to hurt us? But even that, it’s very unclear. It’s like, are you going to hurt us? That comes
00:20:17 back to a lot of interesting AI ethics questions, because we might make an AI that says, well,
00:20:24 take autonomous cars, for instance. Are you going to hurt us? Well, let’s make sure you only drive
00:20:29 at precisely the speed limit, because we want to make sure we don’t hurt you, so to speak.
00:20:36 But you say, but actually, that means I’m going to be really late for this thing, and
00:20:40 that sort of hurts me in some way. So it’s hard to know. Even the definition of what it means to
00:20:46 hurt someone is unclear. And as we start thinking about things about AI ethics and so on, that’s
00:20:54 something one has to address. There’s always tradeoffs, and that’s the annoying thing about
00:20:58 ethics. Yeah, well, right. And I think ethics, like these other things we’re talking about,
00:21:03 is a deeply human thing. There’s no abstract, let’s write down the theorem that proves that
00:21:10 this is ethically correct. That’s a meaningless idea. You have to have a ground truth, so to
00:21:17 speak, that’s ultimately what humans want, and they don’t all want the same thing. So that gives
00:21:23 one all kinds of additional complexity in thinking about that. One convenient thing in terms of
00:21:28 turning ethics into computation, you can ask the question of what maximizes the likelihood of the
00:21:35 survival of the species. That’s a good existential issue. But then when you say survival of the
00:21:42 species, you might, for example, say: let's forget about technology, just hang out
00:21:52 and be happy, live our lives, go on to the next generation, go through many, many generations
00:21:58 where, in a sense, nothing is happening. Is that okay? Is that not okay? Hard to know. The
00:22:05 attempt to do elaborate things might even be counterproductive for the survival of
00:22:14 the species. It's also a little bit hard to know. So okay, let's take that as a sort of thought
00:22:23 experiment. You can say, well, what are the threats that we might have to survive? The
00:22:30 super volcano, the asteroid impact, all these kinds of things. Okay, so now we inventory these
00:22:37 possible threats and we say, let’s make our species as robust as possible relative to all
00:22:41 these threats. I think in the end, it’s sort of an unknowable thing what it takes. So given that
00:22:51 you’ve got this AI and you’ve told it, maximize the long term. What does long term mean? Does
00:22:58 long term mean until the sun burns out? That’s not going to work. Does long term mean next thousand
00:23:05 years? Okay, there are probably optimizations for the next thousand years. It's like if you're
00:23:12 running a company, you can make a company be very stable for a certain period of time.
00:23:16 Like if your company gets bought by some private investment group, then you can run a company just
00:23:25 fine for five years by just taking what it does and removing all R&D and the company will burn
00:23:33 out after a while, but it’ll run just fine for a little while. So if you tell the AI, keep the
00:23:38 humans okay for a thousand years, there’s probably a certain set of things that one would do to
00:23:42 optimize that, many of which one might say, well, that would be a pretty big shame for the future of
00:23:46 history, so to speak, for that to be what happens. But I think in the end, as you start thinking
00:23:51 about that question, what you realize is there’s a whole sort of raft of undecidability, computational
00:24:00 irreducibility. In other words, one of the good things about what our civilization has gone
00:24:08 through and what we humans go through is that there’s a certain computational irreducibility
00:24:13 to it in the sense that it isn’t the case that you can look from the outside and just say,
00:24:18 the answer is going to be this. At the end of the day, this is what’s going to happen.
00:24:22 You actually have to go through the process to find out. And I think that feels better in the
00:24:28 sense that something is achieved by going through all of this process. But it also means
00:24:38 that telling the AI, go figure out what will be the best outcome. Well, unfortunately, it’s going
00:24:44 to come back and say, it’s kind of undecidable what to do. We’d have to run all of those scenarios
00:24:51 to see what happens. And if we want it for the infinite future, we’re thrown immediately into
00:24:57 sort of standard issues of kind of infinite computation and so on. So yeah, even if you
00:25:02 get that the answer to the universe and everything is 42, you still have to actually run the universe.
00:25:07 Yes, to figure out the question, I guess, or the journey is the point.
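The computational irreducibility point made above can be illustrated with Wolfram's Rule 30 cellular automaton: no known shortcut formula predicts its center column, so the only way to learn the value at step n is to actually run all n steps. A minimal sketch (Rule 30 itself is from Wolfram's work; this particular code is just one illustrative implementation):

```python
def rule30_step(cells):
    """One step of Rule 30: new cell = left XOR (center OR right),
    with zero padding so the pattern can grow at the edges."""
    padded = [0, 0] + cells + [0, 0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def center_column(steps):
    """Values of the center cell, obtainable (as far as anyone
    knows) only by actually running every step."""
    row = [1]  # start from a single black cell
    column = [row[0]]
    for _ in range(steps):
        row = rule30_step(row)
        column.append(row[len(row) // 2])
    return column
```

The first few center-column values are 1, 1, 0, 1, 1, 1, and despite the trivial rule the sequence shows no known regularity; that gap between the simplicity of the rule and the work needed to foresee its output is the irreducibility being described.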
00:25:16 Right. Well, I think being able to say, to summarize, "this is the result of the universe", if that is
00:25:23 possible, tells us... I mean, the whole sort of structure of thinking about computation and so on
00:25:29 and thinking about how stuff works. If it’s possible to say, and the answer is such and such,
00:25:35 you’re basically saying there’s a way of going outside the universe. And you’re getting yourself
00:25:40 into something of a sort of paradox because you’re saying, if it’s knowable what the answer is, then
00:25:46 there’s a way to know it that is beyond what the universe provides. But if we can know it, then
00:25:52 something that we’re dealing with is beyond the universe. So then the universe isn’t the universe,
00:25:58 so to speak. And in general, as we’ll talk about, at least for our small human brains, it’s
00:26:08 hard to show the result of a sufficiently complex computation. I mean, it's probably impossible,
00:26:15 right, given undecidability. And the universe appears, at least to the poets, to be sufficiently
00:26:25 complex that we won't be able to predict what the heck it's all going to do. Well, we'd better not be
00:26:30 able to, because if we can, it kind of denies... I mean, you know, we're part of the universe.
00:26:36 Yeah. So what does it mean for us to predict? It means that our little part of the universe
00:26:42 is able to jump ahead of the whole universe. And this quickly winds up in paradox. I mean, it is
00:26:48 conceivable. The only way we'd be able to predict is if we are so special in the universe, we are
00:26:54 the one place where there is computation more special, more sophisticated than anything else
00:27:00 that exists in the universe. That's the only way we would have the sort of almost
00:27:05 theological ability, so to speak, to predict what happens in the universe: to say somehow we're
00:27:12 better than everything else in the universe, which I don't think is the case. Yeah, perhaps we can
00:27:17 detect a large number of looping patterns that recur throughout the universe and fully describe
00:27:26 them. And therefore... but then it still becomes exceptionally difficult to see how those patterns
00:27:31 interact and what kind of... Well, look, the most remarkable thing about the universe is that it
00:27:37 has regularity at all. It might not be the case. It just has regularity? Absolutely.
00:27:43 It's full of... I mean, physics is successful. You know, it's full of laws that tell us a lot
00:27:50 of detail about how the universe works. I mean, it could be the case that, you know, the 10 to the
00:27:54 90th particles in the universe would each do their own thing, but they don't. We
00:27:58 already know they all follow basically the same physical laws. And that's something
00:28:04 that’s a very profound fact about the universe. What conclusion you draw from that is unclear. I
00:28:10 mean, to, you know, the early theologians, that was, you know, exhibit number one for the
00:28:16 existence of God. Now, you know, people have different conclusions about it. But the fact is,
00:28:22 you know, right now, I mean, I happen to be interested, actually, I’ve just restarted a
00:28:26 long running kind of interest of mine about fundamental physics. I’m kind of like, come on,
00:28:32 I’m on a bit of a quest, which I’m about to make more public, to see if I can actually find the
00:28:39 fundamental theory of physics. Excellent. We'll come to that. I've just had a lot of conversations
00:28:46 with quantum mechanics folks, so I'm really excited to hear your take, because I think you have a
00:28:52 fascinating take on the fundamental nature of our reality from a physics perspective. So
00:28:59 and what might be underlying the kind of physics as we think of it today. Okay, let’s take a step
00:29:06 back. What is computation? It’s a good question. Operationally, computation is following rules.
00:29:15 That’s kind of it. I mean, computation is the result is the process of systematically following
00:29:20 rules. And it is the thing that happens when you do that. So taking initial conditions are taking
00:29:26 inputs and following rules. I mean, what are you following rules on? So there has to be some data,
00:29:33 some unnecessarily, it can be something where there’s a, you know, very simple input. And then
00:29:40 you’re following these rules. And you’d say there’s not really much data going into this.
00:29:44 It’s you could actually pack the initial conditions into the rule, if you want to. So I think the
00:29:51 question is, is there a robust notion of computation? That is, what does robust mean?
00:29:55 What I mean by that is something like this. So one of the things in a different area, in
00:29:59 physics, is something like energy, okay? There are different forms of energy, but somehow energy is a
00:30:07 robust concept that isn't particular to kinetic energy, or, you know, nuclear energy,
00:30:15 or whatever else, there’s a robust idea of energy. So one of the things you might ask is,
00:30:19 is there a robust idea of computation? Or does it matter that this computation is running in a
00:30:24 Turing machine? This computation is running in a, you know, CMOS, silicon, CPU, this computation is
00:30:30 running in a fluid system in the weather, those kinds of things? Or is there a robust idea of
00:30:35 computation that transcends the sort of detailed framework that it’s running in? Okay. And is there?
00:30:43 Yes. I mean, it wasn’t obvious that there was. So it’s worth understanding the history and how we
00:30:48 got to where we are right now. Because, you know, to say that there is, is a statement in part about
00:30:55 our universe. It’s not a statement about what is mathematically conceivable. It’s about what
00:31:00 actually can exist for us. Maybe you can also comment, because energy as a concept is robust,
00:31:08 but it also has an intricate, complicated relationship with matter, with mass, which is very
00:31:19 interesting: particles that carry force
00:31:27 and particles that have mass. These kinds of ideas, they seem to map to each other, at least
00:31:33 in the mathematical sense. Is there a connection between energy and mass and computation? Or are
00:31:41 these completely disjoint ideas? We don’t know yet. The things that I’m trying to do about fundamental
00:31:46 physics may well lead to such a connection, but there is no known connection at this time.
00:31:53 So can you elaborate a little bit more on how you think about computation? What is
00:32:00 computation? What is computation? Yeah. So I mean, let’s tell a little bit of a historical
00:32:05 story. Okay. So, you know, go back 150 years: people were making mechanical calculators of
00:32:12 various kinds. And, you know, the typical thing was you want an adding machine, you go to the
00:32:16 adding machine store, basically, you want a multiplying machine, you go to the multiplying
00:32:20 machine store, they’re different pieces of hardware. And so that means that, at least at the
00:32:26 level of that kind of computation, and those kinds of pieces of hardware, there isn’t a robust notion
00:32:31 of computation, there’s the adding machine kind of computation, there’s the multiplying machine
00:32:35 notion of computation, and they’re disjoint. So what happened around 1900 is, people started
00:32:41 imagining, particularly in the context of mathematical logic, could you have something
00:32:46 which could represent any reasonable function, right? And they came up with things; this idea of
00:32:52 primitive recursion was one of the early ideas. And it didn’t work. There were reasonable functions
00:32:57 that people could come up with that were not represented using the primitives of primitive
00:33:03 recursion. Okay, so then along comes 1931, and Gödel’s theorem, and so on. And in looking
00:33:11 back, one can see that as part of the process of establishing Gödel’s theorem, Gödel basically
00:33:17 showed how you could compile into arithmetic, how you could basically compile logical statements like
00:33:24 “this statement is unprovable” into arithmetic. So what he essentially did was to show that
00:33:29 arithmetic can be a computer in a sense that’s capable of representing all kinds of other things.
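The "reasonable functions" that escaped primitive recursion can be made concrete. The standard example, not one named in the conversation, is the Ackermann function: total and clearly computable, but provably not primitive recursive. A minimal Python sketch:

```python
# The Ackermann function grows faster than any primitive recursive
# function, so it cannot be built from the primitive-recursion
# primitives alone, even though it is perfectly computable.
def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9
print(ackermann(3, 3))  # 61
```

Even tiny arguments like (4, 2) already produce an astronomically large value, which is the point: no fixed nesting of primitive-recursive loops keeps up with it.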
00:33:36 And then Turing came along in 1936 and came up with Turing machines. Meanwhile, Alonzo Church had
00:33:42 come up with lambda calculus. And the surprising thing that was established very quickly is the
00:33:47 Turing machine idea of what computation might be is exactly the same as the
00:33:52 lambda calculus idea of what computation might be. And then there started to be other ideas,
00:33:58 you know, register machines, other kinds of representations of computation.
00:34:03 And the big surprise was, they all turned out to be equivalent. So in other words, it might have
00:34:08 been the case, like those old adding machines and multiplying machines, that, you know, Turing had
00:34:12 his idea of computation, Church had his idea of computation, and they were just different. But it
00:34:16 isn’t true. They’re actually all equivalent. So then by, I would say, the 1970s or so, in sort of
00:34:26 the computer science and computation theory area, people had sort of said, oh,
00:34:30 Turing machines are kind of what computation is. Physicists were still holding out saying, no,
00:34:36 no, no, that’s just not how the universe works. We’ve got all these differential equations.
00:34:40 We’ve got all these real numbers that have infinite numbers of digits.
00:34:43 The universe is not a Turing machine.
00:34:45 Right. You know, Turing machines are a small subset of the things that we make in
00:34:51 microprocessors and engineering structures and so on. So probably actually through my work in the
00:34:56 1980s about sort of the relationship between computation and models of physics, it became a
00:35:04 little less clear that there was this big sort of dichotomy between what can
00:35:12 happen in physics and what happens in things like Turing machines. And I think probably by now people
00:35:18 would mostly think, and by the way, brains were another kind of element of this. I mean, you know,
00:35:23 Gödel didn’t think that his notion of computation or what amounted to his notion of computation
00:35:28 would cover brains. And Turing wasn’t sure either, although
00:35:35 he got to be a little bit more convinced that it should cover brains. But I would say by probably
00:35:44 sometime in the 1980s, there was beginning to be sort of a general belief that yes, this notion
00:35:49 of computation that could be captured by things like Turing machines was reasonably robust.
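The Turing machine formalism under discussion can be sketched in a few lines: one fixed interpreter (the "hardware") driven by interchangeable transition tables (the "software"). Both example programs below are invented for illustration; they are not machines from the conversation:

```python
# A toy Turing machine: run() is the single piece of "hardware",
# and each transition table is a different "program" for it.
def run(program, tape, state="A", head=0, max_steps=1000):
    """program maps (state, symbol) -> (new_symbol, move, new_state)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "HALT":
            break
        symbol = cells.get(head, 0)
        symbol, move, state = program[(state, symbol)]
        cells[head] = symbol
        head += move
    return [cells[i] for i in sorted(cells)]

# Program 1: flip 0s and 1s moving right, halting at the marker 2.
flip = {("A", 0): (1, 1, "A"),
        ("A", 1): (0, 1, "A"),
        ("A", 2): (2, 0, "HALT")}

# Program 2: extend a block of 1s by one (unary increment).
inc = {("A", 1): (1, 1, "A"),
       ("A", 0): (1, 0, "HALT")}

print(run(flip, [1, 0, 1, 0, 2]))  # [0, 1, 0, 1, 2]
print(run(inc, [1, 1, 1, 0]))      # [1, 1, 1, 1]
```

Same interpreter, two behaviors, which is the universality point in miniature: one piece of hardware, different pieces of software.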
00:35:54 Now, the next question is, okay, you can have a universal Turing machine that’s capable of
00:36:01 being programmed to do anything that any Turing machine can do. And, you know, this idea of
00:36:08 universal computation, it’s an important idea, this idea that you can have one piece of hardware
00:36:12 and program it with different pieces of software. You know, that’s kind of the idea that launched
00:36:17 most modern technology. I mean, that’s the idea that launched the computer revolution,
00:36:22 software, etc. So important idea. But the thing that’s still kind of holding out from that idea
00:36:29 is, okay, there is this universal computation thing, but it seems hard to get to. It seems like
00:36:35 if you want to make a universal computer, you have to kind of have a microprocessor with, you know,
00:36:40 a million gates in it, and you have to go to a lot of trouble to make something that achieves that
00:36:45 level of computational sophistication. Okay, so the surprise for me was that stuff that I discovered
00:36:52 in the early 80s, looking at these things called cellular automata, which are really simple
00:36:58 computational systems, the thing that was a big surprise to me was that even when their rules were
00:37:04 very, very simple, they were doing things that were as sophisticated as they did when their rules
00:37:09 were much more complicated. So it didn’t look like, you know, this idea, oh, to get sophisticated
00:37:14 computation, you have to build something with very sophisticated rules. That idea didn’t seem to pan
00:37:21 out. And instead, it seemed to be the case that sophisticated computation was completely ubiquitous,
00:37:26 even in systems with incredibly simple rules. And so that led to this thing that I call the
00:37:31 principle of computational equivalence, which basically says, when you have a system that
00:37:37 follows rules of any kind, then whenever the system isn’t doing things that are, in some sense,
00:37:44 obviously simple, then the computation that the behavior of the system corresponds to is of
00:37:51 equivalent sophistication. So that means that when you kind of go from the very, very, very
00:37:56 simplest things you can imagine, then quite quickly, you hit this kind of threshold above
00:38:02 which everything is equivalent in its computational sophistication. Not obvious that would be the case.
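The cellular automata being described are easy to play with directly. A minimal Python sketch of an elementary cellular automaton; Rule 30 is one of the simple rules from Wolfram's early-1980s work, though this particular rendering is just an illustration:

```python
# One synchronous update of an elementary cellular automaton: each
# cell's new value depends only on its left neighbor, itself, and its
# right neighbor. The 8 possible neighborhoods index into the bits of
# the rule number (Rule 30 here), with wraparound at the edges.
def step(cells, rule=30):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start from a single black cell
for _ in range(15):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

The rule fits in a single byte, yet the printed pattern is already irregular, which is the surprise being described: sophistication without sophisticated rules.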
00:38:07 I mean, that’s a “science fact.” Well, no, hold on a second. So this, you’ve opened with A New Kind
00:38:14 of Science. I mean, I remember it was a huge eye-opener that such simple things can create such
00:38:20 complexity. And yes, there’s an equivalence, but it’s not a fact. It just appears to, I mean,
00:38:26 it’s as much of a fact as sort of, these theories are so elegant that it seems to be the way things
00:38:36 are. But let me ask sort of, you just brought up previously, kind of like the communities of
00:38:43 computer scientists with their Turing machines, the physicists with their universe, and whoever
00:38:49 the heck, maybe neuroscientists, looking at the brain. What’s your sense of the equivalence?
00:38:56 You’ve shown through your work that simple rules can create equivalently complex Turing machine
00:39:06 systems, right? Is the universe equivalent to the kinds of Turing machines? Is the human brain
00:39:16 a kind of Turing machine? Do you see those things basically blending together? Or is there still a
00:39:21 mystery about how disjoint they are? Well, my guess is that they all blend together, but we don’t know
00:39:26 that for sure yet. I mean, this, you know, I should say, I said rather glibly that the principle of
00:39:33 computational equivalence is sort of a science fact. And I was using air quotes for the science fact,
00:39:40 because, I mean, just to talk about that for a second, the thing is that it has
00:39:50 a complicated epistemological character, similar to things like the second law of thermodynamics,
00:39:57 the law of entropy increase. What is the second law of thermodynamics? Is it a law of nature? Is
00:40:03 it a thing that is true of the physical world? Is it something which is mathematically provable? Is
00:40:10 it something which happens to be true of the systems that we see in the world? Is it, in some
00:40:15 sense, a definition of heat, perhaps? Well, it’s a combination of those things. And it’s the same
00:40:21 thing with the principle of computational equivalence. And in some sense, the principle
00:40:25 of computational equivalence is at the heart of the definition of computation, because it’s telling
00:40:30 you there is a thing, there is a robust notion that is equivalent across all these systems and
00:40:35 doesn’t depend on the details of each individual system. And that’s why we can meaningfully talk
00:40:41 about a thing called computation. And we’re not stuck talking about, oh, there’s computation in
00:40:46 Turing machine number 3785, and et cetera, et cetera, et cetera. That’s why there is a robust
00:40:52 notion like that. Now, on the other hand, can we prove the principle of computational equivalence?
00:40:57 Can we prove it as a mathematical result? Well, the answer is, actually, we’ve got some nice results
00:41:03 along those lines that say, throw me a random system with very simple rules. Well, in a couple
00:41:10 of cases, we now know that even the very simplest rules we can imagine of a certain type are
00:41:16 universal and do follow what you would expect from the principle of computational equivalence. So
00:41:22 that’s a nice piece of sort of mathematical evidence for the principle of computational equivalence.
00:41:27 Just to linger on that point, the simple rules creating sort of these complex behaviors. But
00:41:35 is there a way to mathematically say that this behavior is complex? You’ve mentioned that
00:41:43 you cross a threshold. Right. So there are various indicators. So, for example, one thing would be,
00:41:49 is it capable of universal computation? That is, given the system, do there exist initial
00:41:55 conditions for the system that can be set up to essentially represent programs to do anything you
00:42:00 want, to compute primes, to compute pi, to do whatever you want? Right. So that’s an indicator.
00:42:05 So we know in a couple of examples that, yes, the simplest candidates that could conceivably have
00:42:13 that property do have that property. And that’s what the principle of computational equivalence
00:42:16 might suggest. But this principle of computational equivalence, one question about it is, is it true
00:42:24 for the physical world? It might be true for all these things we come up with, the Turing machines,
00:42:29 the cellular automata, whatever else. Is it true for our actual physical world? Is it true for the
00:42:36 brains, which are an element of the physical world? We don’t know for sure. And that’s not the
00:42:42 type of question that we will have a definitive answer to, because there’s a sort of scientific
00:42:48 induction issue. You can say, well, it’s true for all these brains, but this person over here is
00:42:52 really special, and it’s not true for them. And the only way that that cannot be what happens is
00:43:00 if we finally nail it and actually get a fundamental theory for physics, and it turns out
00:43:06 to correspond to, let’s say, a simple program. If that is the case, then we will basically have
00:43:11 reduced physics to a branch of mathematics, in the sense that we will not be, you know,
00:43:16 right now with physics, we’re like, well, these are the rules that
00:43:20 apply here. But in the middle of that, you know, right by that black hole, maybe these rules don’t
00:43:28 apply and something else applies. And there may be another piece of the onion that we have to peel
00:43:32 back. But if we can get to the point where we actually have, this is the fundamental theory of
00:43:38 physics, here it is, it’s this program, run this program, and you will get our universe, then we’ve
00:43:44 kind of reduced the problem of figuring out things in physics to a problem of doing some, what turns
00:43:50 out to be very difficult, irreducibly difficult, mathematical problems. But it no longer is the
00:43:56 case that somebody can come in and say, whoops, you know, you were right about
00:44:00 all these things about Turing machines, but you’re wrong about the physical universe; we know
00:44:04 there’s sort of ground truth about what’s happening in the physical universe. Now, I happen to think,
00:44:09 I mean, you asked me at an interesting time, because I’m just in the middle of starting to
00:44:14 re-energize my project to kind of study the fundamental theory of physics. As of today, I’m
00:44:22 very optimistic that we’re actually going to find something and that it’s going to be possible to
00:44:27 see that the universe really is computational in that sense. But I don’t know, because we’re
00:44:31 betting against, you know, we’re betting against the universe, so to speak. It’s not like, you know,
00:44:36 when I’ve spent a lot of my life building technology, and then I know
00:44:41 what’s in there, right? It may have unexpected behavior, may have bugs,
00:44:46 things like that, but fundamentally, I know what’s in there. For the universe, I’m not in
00:44:50 that position, so to speak. What kind of computation do you think the fundamental laws of
00:44:57 physics might emerge from? Just to clarify, so you’ve done a lot of fascinating work with kind
00:45:05 of discrete kinds of computation, you know, your cellular automata, and we’ll talk about
00:45:11 them; they have these very clean structures. It’s such a nice way to demonstrate that simple rules
00:45:17 can create immense complexity. But, you know, are cellular automata
00:45:26 sufficiently general to describe the kinds of computation that might create the laws of physics?
00:45:32 Can you give a sense of what kind of computation you think would create them?
00:45:37 Well, so this is a slightly complicated issue, because as soon as you have universal
00:45:42 computation, you can, in principle, simulate anything with anything.
00:45:45 Right. But it is not a natural thing to do. And if you’re asking, were you to try to find our
00:45:51 physical universe by looking at possible programs in the computational universe of all possible
00:45:56 programs, would the ones that correspond to our universe be small and simple enough that we might
00:46:03 find them by searching that computational universe? We’ve got to have the right basis, so to speak. We
00:46:07 have to have the right language, in effect, for describing computation for that to be feasible.
00:46:12 So the thing that I’ve been interested in for a long time is, what are the most structureless
00:46:16 structures that we can create with computation? So in other words, if you say a cellular automaton,
00:46:21 it has a bunch of cells that are arrayed on a grid, and every cell is
00:46:26 updated in synchrony: when there’s a tick of a clock, so to speak,
00:46:32 every cell gets updated at the same time. That’s a very specific,
00:46:38 very rigid kind of thing. But my guess is that when we look at physics, and we look at things
00:46:45 like space and time, that what’s underneath space and time is something as structureless as possible,
00:46:51 that what we see, what emerges for us as physical space, for example, comes from something that is
00:46:58 sort of arbitrarily unstructured underneath. And so I’ve been for a long time interested in kind
00:47:04 of what are the most structureless structures that we can set up. And actually, what I had thought
00:47:10 about for ages is using graphs, networks, where essentially, so let’s talk about space, for
00:47:16 example. So what is space? It’s a kind of a question one might ask. Back in the early days
00:47:22 of quantum mechanics, for example, people said, oh, for sure, space is going to be discrete,
00:47:27 because all these other things we’re finding are discrete. But that never worked out in physics.
00:47:30 And so space in physics today is always treated as this continuous thing, just like Euclid
00:47:35 imagined it. I mean, the very first thing Euclid says in his sort of common notions is,
00:47:41 you know, a point is something which has no part. In other words, there are points that are
00:47:45 arbitrarily small, and there’s a continuum of possible positions of points. And the question
00:47:51 is, is that true? And so for example, if we look at, I don’t know, fluid like air or water,
00:47:56 we might say, oh, it’s a continuous fluid. We can pour it, we can do all kinds of things continuously.
00:48:01 But actually, we know, because we know the physics of it, that it consists of a bunch
00:48:04 of discrete molecules bouncing around, and only in the aggregate is it behaving like a continuum.
00:48:10 And so the possibility exists that that’s true of space too. People haven’t managed to make that
00:48:14 work with existing frameworks in physics. But I’ve been interested in whether one can imagine that
00:48:22 underneath space, and also underneath time, is something more structureless. And the question is,
00:48:27 is it computational? So there are a couple of possibilities. It could be computational,
00:48:32 somehow fundamentally equivalent to a Turing machine, or it could be fundamentally not. So
00:48:37 how could it not be? Well, a Turing machine essentially deals with integers, whole
00:48:42 numbers, at some level. And you know, it can do things like it can add one to a number, it can do
00:48:47 things like this. And it can also store whatever the heck it did. Yes, it has an infinite storage.
00:48:53 But when one thinks about doing physics, or sort of idealized physics, or idealized mathematics,
00:49:02 one can deal with real numbers, numbers with an infinite number of digits, numbers which are
00:49:07 absolutely precise. And one can say, we can take this number and we can multiply it by itself.
00:49:12 Are you comfortable with infinity?
00:49:13 In this context? Are you comfortable in the context of computation? Do you think infinity
00:49:19 plays a part? I think that the role of infinity is complicated. Infinity is useful in conceptualizing
00:49:25 things. It’s not actualizable. Almost by definition, it’s not actualizable. But do you
00:49:31 think infinity is part of the thing that might underlie the laws of physics? I think that no.
00:49:38 I think there are many questions that you might ask about physics, which inevitably
00:49:46 involve infinity. Like when you say, you know, is faster than light travel possible? You could say,
00:49:53 given the laws of physics, can you make something even arbitrarily large, even quote, infinitely
00:49:58 large, that will make faster than light travel possible? Then you’re thrown into dealing with
00:50:04 infinity as a kind of theoretical question. But I mean, talking about sort of what’s underneath
00:50:10 space and time and how one can make a computational infrastructure, one possibility is that you can’t
00:50:18 make a computational infrastructure in a Turing machine sense, that you really have to be dealing
00:50:23 with precise real numbers. You’re dealing with partial differential equations, which have
00:50:29 precise real numbers at arbitrarily closely separated points. You have a continuum for
00:50:33 everything. Could be that that’s what happens, that there’s sort of a continuum for everything
00:50:38 and precise real numbers for everything. And then the things I’m thinking about are wrong.
00:50:42 And that’s the risk you take if you’re trying to sort of do things about nature:
00:50:49 you might just be wrong. For me personally, it’s kind of a strange thing. I’ve spent a lot
00:50:55 of my life building technology where you can do something that nobody cares about,
00:51:00 but you can’t be sort of wrong in that sense, in the sense you build your technology and it does
00:51:04 what it does. But I think this question of what the sort of underlying computational
00:51:10 infrastructure for the universe might be, it’s sort of inevitable it’s going to be fairly abstract,
00:51:17 because if you’re going to get all these things like there are three dimensions of space,
00:51:22 there are electrons, there are muons, there are quarks, there is all this; you don’t get to,
00:51:27 if the model for the universe is simple, you don’t get to have sort of a line of code for
00:51:31 each of those things. You don’t get to have sort of the muon case, the tau lepton case and so on.
00:51:38 Because they all have to be emergent somehow, something deeper.
00:51:42 Right. So that means it’s sort of inevitable, it’s a little hard to talk about
00:51:46 what the sort of underlying structureless structure actually is.
00:51:50 Do you think human beings have the cognitive capacity to understand, if we’re to discover it,
00:51:56 to understand the kinds of simple structure from which these laws can emerge?
00:52:01 Like, do you think that’s a good question?
00:52:04 Well, here’s what I think. I think that, I mean, I’m right in the middle of this right now.
00:52:08 Right.
00:52:08 I’m telling you that I think this, yeah, I mean, this human has a hard time understanding,
00:52:14 you know, a bunch of the things that are going on. But what happens in understanding is
00:52:18 one builds waypoints. I mean, if you said understand modern 21st century mathematics,
00:52:23 starting from, you know, counting, back whenever counting was invented, 50,000 years
00:52:30 ago, whatever it was, right, that would be really difficult. But what happens is we build waypoints
00:52:36 that allow us to get to higher levels of understanding. And we see the same thing
00:52:39 happening in language. You know, when we invent a word for something, it provides kind of a cognitive
00:52:45 anchor, a kind of a waypoint that lets us, you know, like a podcast or something. You could be
00:52:50 explaining, well, it’s a thing which works this way, that way, the other way. But as soon as you
00:52:55 have the word podcast and people kind of societally understand it, you start to be able to build on
00:53:00 top of that. And so I think that’s kind of the story of science actually, too. I mean, science
00:53:05 is about building these kind of waypoints where we find this sort of cognitive mechanism for
00:53:11 understanding something, then we can build on top of it. You know, we have the idea of, I don’t
00:53:16 know, differential equations we can build on top of that. We have this idea, that idea. So my hope
00:53:21 is that if it is the case that we have to go all the way sort of from the sand to the computer,
00:53:28 and there’s no waypoints in between, then we’re toast. We won’t be able to do that.
00:53:33 Well, eventually we might. So if we as clever apes are good enough at building those
00:53:39 abstractions, eventually from sand we’ll get to the computer, right? And it just might be a longer
00:53:43 journey. The question, as you asked, is whether our human brains will,
00:53:49 quote, understand what’s going on. And that’s a different question because for that, it requires
00:53:55 steps from which we can construct a human understandable narrative. And that’s something that
00:54:03 I think I am somewhat hopeful that that will be possible. Although, you know, as of literally
00:54:10 today, if you ask me, I’m confronted with things that I don’t understand very well.
00:54:16 So this is a small pattern in a computation trying to understand the rules under which the
00:54:21 computation functions. And it’s an interesting question: under which kinds of computations
00:54:28 can such a creature understand itself?
00:54:31 My guess is that... so, we didn’t talk much about computational irreducibility,
00:54:36 but it’s a consequence of this principle of computational equivalence. And it’s sort of a
00:54:39 core idea that one has to understand, I think. The question is, you’re doing a computation,
00:54:45 you can figure out what happens in the computation just by running every step in the computation and
00:54:49 seeing what happens. Or you can say, let me jump ahead and figure out, you know, have something
00:54:56 smarter that figures out what’s going to happen before it actually happens. And a lot of traditional
00:55:01 science has been about that act of computational reducibility. It’s like, we’ve got these equations,
00:55:08 and we can just solve them, and we can figure out what’s going to happen. We don’t have to trace
00:55:12 all of those steps, we just jump ahead because we solve these equations.
00:55:16 Okay, so one of the things that is a consequence of the principle of computational equivalence is
00:55:20 you don’t always get to do that. Many, many systems will be computationally irreducible,
00:55:25 in the sense that the only way to find out what they do is just follow each step and see what
00:55:28 happens. Why is that? Well, if you’re saying, well, we, with our brains, we’re a lot smarter,
00:55:34 we don’t have to mess around like the little cellular automaton going through and updating
00:55:38 all those cells. We can just use the power of our brains to jump ahead. But if the principle
00:55:44 of computational equivalence is right, that’s not going to be correct, because it means that
00:55:50 there’s us doing our computation in our brains, there’s a little cellular automaton doing its
00:55:55 computation, and the principle of computational equivalence says these two computations are
00:55:59 fundamentally equivalent. So that means we don’t get to say we’re a lot smarter than the cellular
00:56:04 automaton and jump ahead, because we’re just doing computation that’s of the same sophistication as
00:56:09 the cellular automaton itself. That’s computational irreducibility. It’s fascinating. And that’s a
00:56:15 really powerful idea. I think that’s both depressing and humbling and so on, that we’re all,
00:56:22 we and the cellular automaton are the same. But the question we’re talking about, the fundamental
00:56:26 laws of physics, is kind of the reverse question. You’re not predicting what’s going to happen. You
00:56:32 have to run the universe for that. But saying, can I understand what rules likely generated me?
00:56:38 I understand. But the problem is, to know whether you’re right, you have to have some
00:56:44 computational reducibility, because we are embedded in the universe. If the only way to know whether
00:56:49 we get the universe is just to run the universe, we don’t get to do that, because it just ran for
00:56:53 14.6 billion years or whatever. And we can’t rerun it, so to speak. So we have to hope that
00:57:00 there are pockets of computational reducibility sufficient to be able to say, yes, I can recognize
00:57:06 those are electrons there. And I think that it’s a feature of computational irreducibility. It’s
00:57:12 sort of a mathematical feature that there is always an infinite collection of pockets of
00:57:16 reducibility. The question of whether they land in the right place and whether we can sort of build
00:57:20 a theory based on them is unclear. But to this point about whether we as observers in the universe
00:57:27 built out of the same stuff as the universe can figure out the universe, so to speak, that relies
00:57:33 on these pockets of reducibility. Without the pockets of reducibility, it won’t work, can’t work.
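The contrast between reducible and irreducible computation can be sketched concretely with two elementary cellular automata. Rule 90 is computationally reducible: the cell at position x after t steps from a single 1 has a known closed form, C(t, (t+x)/2) mod 2, so one can jump ahead without simulating; for Rule 30 no comparable shortcut is known. A Python sketch, assuming that standard closed form:

```python
from math import comb

# One synchronous update of an elementary cellular automaton.
def ca_step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# The "pocket of reducibility": Rule 90 from a single 1 at x = 0 has
# cell (t, x) equal to C(t, (t+x)/2) mod 2 (0 outside the light cone
# or off-parity), so we can predict step t without running steps 1..t-1.
def rule90_closed_form(t, x):
    k = (t + x) // 2
    if (t + x) % 2 or k < 0 or k > t:
        return 0
    return comb(t, k) % 2

width, steps = 65, 20
cells = [0] * width
cells[width // 2] = 1
for _ in range(steps):
    cells = ca_step(cells, 90)  # the step-by-step way
shortcut = [rule90_closed_form(steps, x - width // 2) for x in range(width)]
print(shortcut == cells)  # True: for Rule 90 we can jump ahead
```

Swap in rule 30 and no such formula is available; as far as anyone knows, you just have to run it, which is the irreducibility being described.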
00:57:39 But I think, on this question about how observers operate, one of the features of science over
00:57:45 the last 100 years particularly has been that every time we get more realistic about observers,
00:57:50 we learn a bit more about science. So for example, relativity was all about observers don’t get to
00:57:56 say what’s simultaneous with what. They have to just wait for the light signal to arrive to decide
00:58:03 what’s simultaneous. Or for example, in thermodynamics, observers don’t get to say the
00:58:08 position of every single molecule in a gas. They can only see the kind of large scale features,
00:58:14 and that’s why the second law of thermodynamics, the law of entropy increase, and so on works.
00:58:18 If you could see every individual molecule, you wouldn’t conclude something about thermodynamics.
00:58:25 You would conclude, oh, these molecules are just all doing these particular things. You wouldn’t
00:58:28 be able to see this aggregate fact. So I strongly expect that, and in fact, in the theories that I
00:58:35 have, one has to be more realistic about the computation and other aspects of observers
00:58:42 in order to actually make a correspondence with what we experience. In fact,
00:58:47 my little team and I have a little theory right now about how quantum mechanics may work, which is
00:58:53 a very wonderfully bizarre idea about how the sort of thread of human consciousness
00:59:00 relates to what we observe in the universe. But there’s several steps to explain what that’s
00:59:05 about. What do you make of the mess of the observer at the lower level of quantum mechanics?
00:59:11 Sort of the textbook definition of quantum mechanics kind of says that
00:59:19 there are two worlds. One is the world that actually is, and the other is what’s observed.
00:59:27 What do you make of that? Well, I think actually the ideas we’ve recently had might
00:59:34 actually give a way into this. I don’t know yet. I think it’s a mess. The fact is,
00:59:45 one of the things that’s interesting, and when people look at these models that I
00:59:50 started talking about 30 years ago now, they say, oh no, that can’t possibly be right.
00:59:54 What about quantum mechanics? You say, okay, tell me what is the essence of quantum mechanics? What
01:00:00 do you want me to be able to reproduce to know that I’ve got quantum mechanics, so to speak?
01:00:05 Well, and that question comes up. It comes up very operationally actually, because we’ve been
01:00:08 doing a bunch of stuff with quantum computing. And there are all these companies that say,
01:00:12 we have a quantum computer. And we say, let’s connect to your API and let’s actually run it.
01:00:17 And they’re like, well, maybe you shouldn’t do that yet. We’re not quite ready yet.
01:00:22 And one of the questions that I’ve been curious about is, if I have five minutes with a quantum
01:00:26 computer, how can I tell if it’s really a quantum computer or whether it’s a simulator at the other
01:00:31 end? And it turns out it’s really hard. It’s like a lot of these questions about what is
01:00:38 intelligence? What’s life? It’s like, are you really a quantum computer? Yes, exactly. Is it
01:00:48 just a simulation or is it really a quantum computer? Same issue all over again. So this
01:00:56 whole issue about the sort of mathematical structure of quantum mechanics and the completely
01:01:01 separate thing that is our experience in which we think definite things happen, whereas quantum
01:01:08 mechanics doesn’t say definite things ever happen. Quantum mechanics is all about the amplitudes for
01:01:12 different things to happen, but yet our thread of consciousness operates as if definite things
01:01:19 are happening. To linger on the point, you’ve kind of mentioned the structure that could
01:01:27 underlie everything and this idea that it could perhaps have something like a structure of a graph.
01:01:33 Can you elaborate why your intuition is that there’s a graph structure of nodes and edges
01:01:39 and what it might represent? Right. Okay. So the question is, what is, in a sense,
01:01:45 the most structureless structure you can imagine, right? And in fact, what I’ve recently realized
01:01:54 in the last year or so, I have a new most structureless structure. By the way, the question
01:01:59 itself is a beautiful one and a powerful one in itself. So even without an answer, just the
01:02:04 question is a really strong question. Right. But what’s your new idea? Well, it has to do with
01:02:09 hypergraphs. Essentially, what is interesting about the sort of model I have now is it’s a
01:02:18 little bit like what happened with computation. Everything that I think of as, oh, well, maybe
01:02:23 the model is this, I discover it’s equivalent. And that’s quite encouraging because it’s like
01:02:30 I could say, well, I’m going to look at trivalent graphs with three edges for each node and so on,
01:02:35 or I could look at this special kind of graph, or I could look at this kind of algebraic structure.
01:02:40 And turns out that the things I’m now looking at, everything that I’ve imagined that is a plausible
01:02:47 type of structureless structure is equivalent to this. So what is it? Well, a typical way to think
01:02:53 about it is, well, so you might have some collection of tuples, collection of, let’s say,
01:03:06 numbers. So you might have one, three, five, two, three, four, just collections of numbers,
01:03:15 triples of numbers, let’s say, quadruples of numbers, pairs of numbers, whatever.
01:03:18 And you have all these sort of floating little tuples. They’re not in any particular order.
01:03:25 And that sort of floating collection of tuples, and I told you this was abstract,
01:03:32 represents the whole universe. The only thing that relates them is when a symbol is the same,
01:03:40 it’s the same, so to speak. So if you have two tuples and they contain the same symbol,
01:03:45 let’s say at the same position of the tuple, at the first element of the tuple,
01:03:48 then that represents a relation. So let me try and peel this back.
01:03:53 Wow. Okay.
01:03:56 I told you it’s abstract, but this is the…
01:03:59 So the relationship is formed by some aspect of sameness.
01:04:03 Right. But so think about it in terms of a graph. So a graph, a bunch of nodes,
01:04:09 let’s say you number each node, then what is a graph? A graph is a set of pairs that say
01:04:16 this node has an edge connecting it to this other node. And a graph is just a collection
01:04:23 of those pairs that say this node connects to this other node. So this is a generalization of that,
01:04:30 in which instead of having pairs, you have arbitrary n tuples. That’s it. That’s the
01:04:37 whole story. And now the question is, okay, so that might represent the state of the universe.
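The "floating collection of tuples" picture, and the sense in which it generalizes an ordinary graph of pairs, can be sketched in a few lines of Python. This is an illustrative sketch; the representation and names are ours, not the actual code of the Wolfram model:

```python
# A hypergraph as an unordered collection of tuples of node IDs.
# An ordinary graph is the special case where every tuple is a pair.
graph = [(1, 2), (2, 3), (3, 1)]        # a triangle: pairs only
hypergraph = [(1, 3, 5), (2, 3, 4)]     # arbitrary n-tuples

def nodes(edges):
    """The nodes are whatever symbols appear in the tuples; sharing a
    symbol is the only thing that relates one tuple to another."""
    return {x for tup in edges for x in tup}

# The two tuples above share the symbol 3, so they are related.
print(nodes(hypergraph))   # {1, 2, 3, 4, 5}
```

There is no further structure: no ordering of the tuples, no coordinates, nothing but shared symbols.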
01:04:43 How does the universe evolve? What does the universe do? And so the answer is
01:04:47 that what I’m looking at is transformation rules on these hypergraphs. In other words,
01:04:54 you say: whenever you see a piece of this hypergraph that looks like this,
01:05:02 turn it into a piece of hypergraph that looks like this. So on a graph, it might be when you
01:05:07 see the subgraph, when you see this thing with a bunch of edges hanging out in this particular way,
01:05:11 then rewrite it as this other graph. Okay. And so that’s the whole story. So the question is
01:05:19 what, as I say, this is quite abstract. And one of the questions is,
01:05:27 where do you do that updating? So you’ve got this giant graph. What triggers the updating,
01:05:32 what’s the ripple effect of it? And I suspect everything’s discrete,
01:05:39 even in time. So, okay. So the question is where do you do the updates? And the answer is the rule
01:05:45 is you do them wherever they apply. The order in which the updates
01:05:50 are done is not defined. So there may be many possible orderings
01:05:56 for these updates. Now, the point is, imagine you’re an observer in this universe. And you
01:06:02 say, did something get updated? Well, you don’t in any sense know until you yourself have been
01:06:08 updated. Right. So in fact, all that you can be sensitive to is essentially the causal network
01:06:17 of how an event over there affects an event that’s in you. That doesn’t even feel like
01:06:24 observation. That’s like, that’s something else. You’re just part of the whole thing.
01:06:28 Yes, you’re part of it. But the end result of that is all you’re sensitive to
01:06:34 is this causal network of what event affects what other event. I’m not making a big statement about
01:06:40 sort of the structure of the observer. I’m simply making the argument that
01:06:46 the microscopic order of these rewrites is not something that any observer,
01:06:52 any conceivable observer in this universe can be affected by. Because the only thing the observer
01:06:58 can be affected by is this causal network of how the events in the observer are affected
01:07:06 by other events that happen in the universe. So the only thing you have to look at is the
01:07:09 causal network. You don’t really have to look at this microscopic rewriting that’s happening. So
01:07:14 these rewrites are happening wherever they feel like, so to speak.
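The "do the updates wherever the rule applies" idea can be sketched with a toy rewrite rule in Python: replace any pair (x, y) with (x, z), (z, y) for a fresh node z. The rule itself is a hypothetical "subdivision" rule of our own choosing, and the arbitrary choice of which match to rewrite first is exactly the undefined ordering being described:

```python
import random

def apply_once(edges, fresh, rng):
    """Apply one update: pick any pair (x, y) and replace it with
    (x, z), (z, y), where z is a fresh node (a toy subdivision rule).
    Which matching site gets rewritten is deliberately arbitrary."""
    sites = [i for i, tup in enumerate(edges) if len(tup) == 2]
    if not sites:
        return edges, fresh
    i = rng.choice(sites)               # an undefined-order choice
    x, y = edges[i]
    return edges[:i] + [(x, fresh), (fresh, y)] + edges[i + 1:], fresh + 1

rng = random.Random(0)
edges, fresh = [(0, 1)], 2              # start from a single edge
for _ in range(3):
    edges, fresh = apply_once(edges, fresh, rng)
print(edges)   # four pairs forming a path from node 0 to node 1
```

Each update consumes one piece of the structure and creates new pieces; nothing in the rule says which applicable site must go first.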
01:07:18 Causal network. So the idea would be that what gets updated,
01:07:26 the sequence of things, is undefined. Yes. And that’s what
01:07:33 you mean by the causal network? No, the causal network is: given that an
01:07:37 update has happened, that’s an event. Then the question is whether that event is causally related,
01:07:43 whether, if that event hadn’t happened, then some future event couldn’t have happened.
01:07:48 Gotcha.
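The causal relation just described, one event depending on another when it consumes what the other created, can be sketched as follows. The event records and tuple names here are made up purely for illustration:

```python
# Each update event records which tuples it consumed and which it created.
events = [
    {"id": 0, "consumed": [], "created": ["t1", "t2"]},
    {"id": 1, "consumed": ["t1"], "created": ["t3"]},
    {"id": 2, "consumed": ["t2", "t3"], "created": ["t4"]},
]

def causal_edges(events):
    """Event B causally depends on event A when B consumes a tuple
    that A created; the resulting edges form the causal network."""
    creator, edges = {}, []
    for ev in events:
        for t in ev["consumed"]:
            if t in creator:
                edges.append((creator[t], ev["id"]))
        for t in ev["created"]:
            creator[t] = ev["id"]
    return edges

print(causal_edges(events))   # [(0, 1), (0, 2), (1, 2)]
```

The network of these dependency edges, not the microscopic update order, is the observable object.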
01:07:49 And so you build up this network of what affects what. Okay. And so what that does,
01:07:54 so when you build up that network, that’s kind of the observable aspect of the universe in some
01:07:59 sense. And so then you can ask questions about, you know, how robust is that observable network
01:08:07 of what’s happening in the universe. Okay. So here’s where it starts getting kind of
01:08:10 interesting. So for certain kinds of microscopic rewriting rules, the order of rewrites does not
01:08:17 matter to the causal network. And so this is, okay, mathematical logic moment. This is equivalent
01:08:24 to the Church-Rosser property, or the confluence property of rewrite rules. And it’s the same
01:08:28 reason that if you’re simplifying an algebraic expression, for example, you can say, oh, let me
01:08:33 expand those terms out. Let me factor those pieces. Doesn’t matter what order you do that in,
01:08:38 you’ll always get the same answer. And that’s, it’s the same fundamental phenomenon that causes
01:08:43 for certain kinds of microscopic rewrite rules that causes the causal network to be independent
01:08:50 of the microscopic order of rewritings.
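A minimal illustration of this order-independence (the Church-Rosser, or confluence, property) is a string-rewriting system with the single rule "ba" → "ab": like the algebraic simplification just mentioned, every order of applying the rule reaches the same normal form, the sorted string. This toy system is ours, not one of the candidate physics rules:

```python
import random

def normal_form(s, rng):
    """Repeatedly rewrite 'ba' -> 'ab' at an arbitrarily chosen site.
    The rule is confluent, so the final string is order-independent."""
    while True:
        sites = [i for i in range(len(s) - 1) if s[i:i + 2] == "ba"]
        if not sites:
            return s            # no rule applies: normal form reached
        i = rng.choice(sites)   # arbitrary choice of update site
        s = s[:i] + "ab" + s[i + 2:]

# Twenty different random update orders, one final answer.
results = {normal_form("babab", random.Random(seed)) for seed in range(20)}
print(results)   # {'aabbb'}
```

For rewrite rules with this property, the causal structure of "what happened" does not depend on which applicable update was done first.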
01:08:52 Why is that property important?
01:08:54 Because it implies special relativity. I mean, the reason it’s important is that property.
01:09:03 Special relativity says you can look at different reference frames.
01:09:10 Your notion of what’s space and what’s time
01:09:14 can be different depending on whether you’re traveling at a certain speed, depending on
01:09:18 whether you’re doing this, that, and the other. But nevertheless, the laws of physics are the
01:09:22 same. That’s what the principle of special relativity says, is the laws of physics are
01:09:26 the same independent of your reference frame. Well, turns out this sort of change of the
01:09:33 microscopic rewriting order is essentially equivalent to a change of reference frame,
01:09:37 or at least there’s a sub-part of how that works that’s equivalent to a change of reference frame.
01:09:42 So, somewhat surprisingly, and sort of for the first time in forever,
01:09:46 it’s possible for an underlying microscopic theory to imply special relativity, to be able to derive
01:09:52 it. It’s not something you put in as an assumption; it comes from this other property,
01:09:57 causal invariance, which is also the property that implies that there’s a single thread of time
01:10:03 in the universe. And that’s what would lead to the possibility of an
01:10:11 observer thinking that definite stuff happens. Otherwise, you’ve got all these possible rewriting
01:10:16 orders, and who’s to say which one occurred. But with this causal invariance property,
01:10:20 there’s a notion of a definite thread of time. It sounds like that kind of idea of time,
01:10:25 even space, would be emergent from the system. Oh, yeah. No, I mean, it’s not a fundamental part
01:10:30 of the system. No, no, at the fundamental level, all you’ve got is a bunch of nodes connected by
01:10:36 hyperedges or whatever. So there’s no time, there’s no space. That’s right. And
01:10:39 but the thing is that it’s just like imagining, imagine you’re just dealing with a graph. And
01:10:44 imagine you have something like a, you know, like a honeycomb graph, or you have a hexagon,
01:10:48 a bunch of hexagons. You know, that graph at a microscopic level, it’s just a bunch of nodes
01:10:53 connected to other nodes. But at a macroscopic level, you say that looks like a honeycomb,
01:10:57 you know, lattice, it looks like a two dimensional, you know, manifold of some kind, it looks like a
01:11:04 two dimensional thing. If you connect it differently, if you just connect all the
01:11:07 nodes one to another in kind of a sort of linked-list type structure, then you’d say,
01:11:12 well, that looks like a one dimensional space. But at the microscopic level, all these are just
01:11:16 networks with nodes; at the macroscopic level, they look like something that’s like one of our sort
01:11:22 of familiar kinds of space. And it’s the same thing with these hyper graphs. Now, if you ask me,
01:11:27 have I found one that gives me three dimensional space? The answer is not yet. So we don’t know.
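One way to make "looks like a d-dimensional space at large scale" concrete is to count how many nodes lie within graph distance r of a point: for an effectively d-dimensional network, that ball grows roughly like r^d. A sketch, using example graphs of our own construction (a cycle for the linked-list case, a torus grid for the honeycomb-like case):

```python
from collections import deque

def ball_size(adj, start, r):
    """Number of nodes within graph distance r of `start`.  For a graph
    that looks d-dimensional at large scale, this grows roughly like r**d."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == r:
            continue                      # don't expand past radius r
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return len(seen)

# A cycle (linked-list-like): the ball grows linearly -- 1-dimensional.
n = 1000
line = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
# A grid on a torus: the ball grows quadratically -- 2-dimensional.
m = 60
grid = {(i, j): [((i + 1) % m, j), ((i - 1) % m, j),
                 (i, (j + 1) % m), (i, (j - 1) % m)]
        for i in range(m) for j in range(m)}
print(ball_size(line, 0, 10), ball_size(grid, (30, 30), 10))   # 21 221
```

Both are just nodes and neighbor lists microscopically; only the growth rate of the ball distinguishes "one-dimensional" from "two-dimensional."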
01:11:33 This is one of these things we’re kind of betting against nature, so to speak. And I have no way to
01:11:38 know. And so there are many other properties of this kind of system that are very beautiful,
01:11:43 actually, and very suggestive. And it will be very elegant if this turns out to be right,
01:11:48 because it’s very clean. I mean, you start with nothing. And everything gets built up,
01:11:53 everything about space, everything about time, everything about matter. It’s all just emergent
01:11:59 from the properties of this extremely low level system. And that, that will be pretty cool if
01:12:04 that’s the way our universe works. Now, on the other hand, the thing that I find very
01:12:11 confusing is, let’s say we succeed, let’s say we can say this particular sort of hypergraph rewriting
01:12:20 rule gives the universe: just run that hypergraph rewriting rule for long enough, and you’ll get
01:12:25 everything, you’ll get this conversation we’re having, you’ll get everything. If we
01:12:33 get to that point, and we look at what is this thing, what is this rule that we just have,
01:12:39 that is giving us our whole universe, how do we think about that thing? Let’s say, turns out the
01:12:44 minimal version of this, and this is kind of cool thing for a language designer like me,
01:12:48 the minimal version of this model is actually a single line of Wolfram Language code.
01:12:52 Which I wasn’t sure was going to happen that way. But to be clear,
01:12:59 we don’t know the actual rule yet; that single line is just the framework. The actual
01:13:05 particular hypergraph rule, the specification of the rules, might be slightly
01:13:10 longer. How does that help you, except marveling in the beauty and the elegance of the simplicity
01:13:16 that creates the universe? Does that help us predict anything in the universe?
01:13:20 Does that help us predict anything? Not really, because of the irreducibility.
01:13:25 That’s correct. That’s correct. But so the thing that is really strange to me,
01:13:29 and I haven’t wrapped my brain around this yet, is, you know, one keeps on realizing
01:13:37 that we’re not special in the sense that, you know, we don’t live at the center of the universe.
01:13:41 We don’t blah, blah, blah. And yet if we produce a rule for the universe and it’s quite simple,
01:13:49 and we can write it down in a couple of lines or something, that feels very special.
01:13:54 How did we come to get a simple universe when many of the available universes, so to speak,
01:14:00 are incredibly complicated? It might be, you know, a quintillion characters long.
01:14:05 Why did we get one of the ones that’s simple? And so I haven’t wrapped my brain around that
01:14:09 issue yet, if indeed the universe is such a simple rule. Is it possible
01:14:17 that there is something outside of this that we are in a kind of what people call the simulation,
01:14:24 right? That we’re just part of a computation that’s being explored by a graduate student
01:14:29 in an alternate universe? Well, you know, the problem is we don’t get to say much about
01:14:34 what’s outside our universe because by definition, our universe is what we exist within. Now,
01:14:40 can we make a sort of almost theological conclusion from being able to know how our
01:14:45 particular universe works? Interesting question. I don’t think that if you ask the question,
01:14:52 could we, and it relates again to this question about extraterrestrial intelligence, you know,
01:14:57 we’ve got the rule for the universe. Was it built in on purpose? Hard to say. That’s the same thing
01:15:03 as saying we see a signal from, you know, that we’re receiving from some random star somewhere,
01:15:11 and it’s a series of pulses. And, you know, it’s a periodic series of pulses, let’s say.
01:15:16 Was that done on purpose? Can we conclude something about the origin of that series of
01:15:20 pulses? Just because it’s elegant does not necessarily mean that somebody created it or
01:15:27 that we can even comprehend it. Yeah. I think it’s the ultimate version of the sort of
01:15:35 technosignature-identification question. The ultimate version of that is: was our universe
01:15:39 a piece of technology, so to speak, and how on earth would we know? But I mean, in the kind of
01:15:47 crazy science fiction thing you could imagine, you could say, oh, there’s going to be a signature
01:15:53 there. It’s going to be made by so and so. But there’s no way we could understand that,
01:15:59 so to speak, and it’s not clear what that would mean. Because the universe simply,
01:16:04 you know, if we find a rule for the universe, we’re simply saying that rule represents what
01:16:10 our universe does. We’re not saying that that rule is something running on a big computer
01:16:16 and making our universe. It’s just saying that represents what our universe does in the same
01:16:21 sense that, you know, laws of classical mechanics, differential equations, whatever they are,
01:16:26 represent what mechanical systems do. It’s not that the mechanical systems are somehow running
01:16:32 solutions to those differential equations. Those differential equations are just representing the
01:16:36 behavior of those systems. So what’s the gap in your sense to linger on the fascinating,
01:16:42 perhaps slightly sci fi question? What’s the gap between understanding the fundamental rules that
01:16:48 create a universe and engineering a system, actually creating a simulation ourselves?
01:16:54 So you’ve talked about, you know, nanoengineering kind of ideas
01:17:01 that are kind of exciting, actually creating some ideas of computation in the physical space. How
01:17:06 hard is it as an engineering problem to create the universe once you know the rules that create it?
01:17:11 Well, that’s an interesting question. I think the substrate on which the universe is operating is
01:17:16 not a substrate that we have access to. I mean, the only substrate we have is that same substrate
01:17:22 that the universe is operating in. So if the universe is a bunch of hypergraphs being rewritten,
01:17:26 then we get to attach ourselves to those same hypergraphs being rewritten. We don’t get to,
01:17:35 and if you ask the question, you know, is the code clean? You know, can we write nice,
01:17:40 elegant code with efficient algorithms and so on? Well, that’s an interesting question.
01:17:47 That’s this question of how much computational reducibility there is in the system.
01:17:51 But I’ve seen some beautiful cellular automata that basically create copies of themselves within
01:17:55 themselves, right? So that’s the question: whether it’s possible to create, like whether you need
01:18:01 to understand the substrate or whether you can. Yeah, well, right. I mean, so one of the things
01:18:06 that is sort of one of my slightly sci fi thoughts about the future, so to speak, is, you know,
01:18:12 right now, if you poll typical people, you say, do you think it’s important to find the fundamental
01:18:16 theory of physics? You get, because I’ve done this poll informally, at least, it’s curious,
01:18:22 actually, you get a decent fraction of people saying, oh, yeah, that would be pretty interesting.
01:18:27 I think that’s becoming, surprisingly enough, more common. I mean, a lot of people are interested
01:18:35 in physics in a way where, without understanding it, they’re just kind of watching
01:18:41 scientists, a very small number of them struggle to understand the nature of our reality.
01:18:46 Right. I mean, I think that’s somewhat true. And in fact, in this project that I’m launching into
01:18:51 to try and find the fundamental theory of physics, I’m going to do it as a very public project. I mean,
01:18:56 it’s going to be live streamed and all this kind of stuff. And I don’t know what will happen. It’ll
01:19:00 be kind of fun. I mean, I think that it’s the interface to the world of this project. I mean,
01:19:07 I figure one feature of this project is, you know, unlike technology projects that basically are what
01:19:14 they are, this is a project that might simply fail, because it might be the case that it generates
01:19:18 all kinds of elegant mathematics that has absolutely nothing to do with the physical
01:19:21 universe that we happen to live in. Okay, so we’re talking about kind of the quest to find
01:19:27 the fundamental theory of physics. First point is, you know, it’s turned out it’s kind of hard
01:19:33 to find the fundamental theory of physics. People weren’t sure that that would be the case. Back in
01:19:38 the early days of applying mathematics to science, 1600s and so on, people were like, oh, in 100 years
01:19:44 we’ll know everything there is to know about how the universe works. Turned out to be harder than
01:19:48 that. And people got kind of humble at some level, because every time we got to sort of a greater
01:19:53 level of smallness and studying the universe, it seemed like the math got more complicated and
01:19:58 everything got harder. When I was a kid, basically, I started doing particle physics. And when I was
01:20:08 doing particle physics, I always thought finding the fundamental, fundamental theory of physics,
01:20:14 that’s a kooky business, we’ll never be able to do that. But we can operate within these
01:20:18 frameworks that we built for doing quantum field theory and general relativity and things like this.
01:20:23 And it’s all good. And we can figure out a lot of stuff. Did you even at that time have a sense
01:20:27 that there’s something behind that? Sure, I just didn’t expect we’d find it.
01:20:35 It’s actually kind of crazy thinking back on it, because it’s kind of like there was this long
01:20:41 period in civilization where people thought the ancients had it all figured out, and we’ll never
01:20:44 figure out anything new. And to some extent, that’s the way I felt about physics when I was
01:20:49 in the middle of doing it, so to speak, was, you know, we’ve got quantum field theory, it’s the
01:20:54 foundation of what we’re doing. And there’s, you know, yes, there’s probably something underneath
01:20:59 this, but we’ll sort of never figure it out. But then I started studying simple programs in the
01:21:06 computational universe, things like cellular automata and so on. And I discovered that
01:21:12 they do all kinds of things that were completely at odds with the intuition that I had had.
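The canonical example here is Wolfram's rule 30 elementary cellular automaton: a one-line update rule (new cell = left XOR (center OR right)) whose evolution from a single black cell looks remarkably complex. A quick Python rendering, with wraparound edges as a simplifying choice of ours:

```python
def rule30_step(cells):
    """One step of the rule 30 elementary cellular automaton:
    new cell = left XOR (center OR right), with wraparound edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 15 + [1] + [0] * 15     # a single black cell in the middle
for _ in range(12):
    print("".join(".#"[c] for c in row))
    row = rule30_step(row)
```

Running it prints the familiar irregular triangular pattern, complexity from a rule you can state in one clause.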
01:21:16 And so after that, after you see this tiny little program that does all this amazingly complicated
01:21:22 stuff, then you start feeling a bit more ambitious about physics and saying, maybe we could do this
01:21:27 for physics too. And so that got me started years ago now in this kind of idea of could we actually
01:21:36 find what’s underneath all of these frameworks, like quantum field theory and general relativity
01:21:40 and so on. And people perhaps don’t realize as clearly as they might that, you know, the
01:21:45 frameworks we’re using for physics, which is basically these two things, quantum field theory,
01:21:50 sort of the theory of small stuff and general relativity, theory of gravitation and large stuff.
01:21:55 Those are the two basic theories. And they’re 100 years old. I mean, general relativity was 1915,
01:22:01 quantum field theory, well, 1920s. So basically 100 years old. And it’s been a good run. There’s
01:22:08 a lot of stuff been figured out. But what’s interesting is the foundations haven’t changed
01:22:14 in all that period of time, even though the foundations had changed several times before
01:22:19 that in the 200 years earlier than that. And I think the kinds of things that I’m thinking about,
01:22:25 which are sort of really informed by thinking about computation and the computational universe,
01:22:29 it’s a different foundation. It’s a different set of foundations. And might be wrong. But it is at
01:22:36 least, you know, we have a shot. And to me, my personal
01:22:42 calculation for myself is, you know, if it turns out that finding the fundamental theory
01:22:49 of physics is kind of low-hanging fruit, so to speak, it’d be a shame if we just didn’t think to
01:22:54 do it. You know, if people just said, Oh, you’ll never figure that stuff out. Let’s, you know,
01:22:59 and it takes another 200 years before anybody gets around to doing it. You know, I think it’s,
01:23:06 I don’t know how low hanging this fruit actually is. It may be, you know, it may be that it’s kind
01:23:12 of the wrong century to do this project. I mean, I think the cautionary tale for me, you know,
01:23:18 I think about things that I’ve tried to do in technology, where people thought about doing them
01:23:24 a lot earlier. And my favorite example is probably Leibniz, who, who thought about making essentially
01:23:30 encapsulating the world’s knowledge in a computational form in the late 1600s, and did a
01:23:36 lot of things towards that. And basically, you know, we finally managed to do this. But he was
01:23:42 300 years too early. And that’s kind of the lesson in terms of life planning. It’s kind of like,
01:23:48 avoid things that can’t be done in your century, so to speak.
01:23:51 Yeah, timing. Timing is everything. So if we kind of figure out the underlying rules
01:24:00 from which quantum field theory and general relativity can emerge,
01:24:06 do you think that will help us unify them at that level of abstraction?
01:24:09 Oh, we’ll know it completely. We’ll know how that all fits together. Yes, without a question.
01:24:13 And I mean, even with the things I’ve already done, you know, it’s very,
01:24:21 very elegant, actually, how things seem to be fitting together. Now, you know, is it right?
01:24:25 I don’t know yet. It’s awfully suggestive. If it isn’t right, then the designer of the universe
01:24:33 should feel embarrassed, so to speak, because it’s a really good way to do it.
01:24:36 And your intuition in terms of the designed universe, does God play dice? Is there randomness
01:24:43 in this thing? Or is it deterministic? So the kind of
01:24:46 That’s a little bit of a complicated question. Because when you’re dealing with these things
01:24:51 that involve these rewrites that have, okay, even randomness is an emergent phenomenon, perhaps.
01:24:56 Yes, yes. I mean, it’s a yeah, well, randomness, in many of these systems,
01:25:01 pseudo randomness and randomness are hard to distinguish. In this particular case,
01:25:06 the current idea that we have about some measurement in quantum mechanics
01:25:12 is something very bizarre and very abstract. And I don’t think I can yet
01:25:16 explain it without kind of yakking about very technical things. Eventually, I will be able to.
01:25:22 But if that’s right, it’s kind of a weird thing, because it slices between determinism and
01:25:30 randomness in a weird way that hasn’t been sliced before, so to speak. So like many of these
01:25:35 questions that come up in science, where it’s like, is it this or is it that? Turns out the
01:25:40 real answer is it’s neither of those things. It’s something kind of different and sort of orthogonal
01:25:45 to those categories. And so that’s the current, you know, this week’s idea about how that might
01:25:52 work. But, you know, we’ll see how that unfolds. I mean, there’s this question about a field like
01:26:00 physics and sort of the quest for fundamental theory and so on. And there’s both the science
01:26:06 of what happens and there’s the sort of the social aspect of what happens. Because, you know,
01:26:11 in a field that is basically as old as physics, we’re at, I don’t know what it is, fourth generation,
01:26:18 I don’t know, fifth generation, I don’t know what generation it is of physicists. And like,
01:26:22 I was one of these, so to speak. And for me, the foundations were like the pyramid, so to speak,
01:26:27 you know, it was that way. And it was always that way. It is difficult in an old field to go back to
01:26:34 the foundations and think about rewriting them. It’s a lot easier in young fields where you’re
01:26:39 still dealing with the first generation of people who invented the field. And it tends to be the
01:26:45 case, you know, that the nature of what happens in science tends to be, you know, you’ll get,
01:26:50 typically the pattern is some methodological advance occurs. And then there’s a period of five
01:26:56 years, 10 years, maybe a little bit longer than that, where there’s lots of things that are now
01:27:00 made possible by that methodological advance, whether it’s, you know, I don’t know, telescopes,
01:27:06 or whether that’s some mathematical method or something. Something happens, a tool gets built,
01:27:16 and then you can do a bunch of stuff. And there’s a bunch of low hanging fruit to be picked. And
01:27:21 that takes a certain amount of time. After all that low hanging fruit is picked, then it’s a hard
01:27:27 slog for the next however many decades or century or more to get to the next sort of level at which
01:27:35 one could do something. And it tends to be the case that in fields that are in
01:27:39 that kind of, I wouldn’t say cruise mode, because it’s really hard work, but it’s very hard work for
01:27:45 very incremental progress. And then in your career and some of the things you’ve taken on,
01:27:56 it feels like you haven’t been afraid of the hard slog. Yeah, that’s true. So it’s quite
01:28:03 interesting, especially on the engineering side. On a small tangent, when you
01:28:03 were at Caltech, did you get to interact with Richard Feynman at all? Do you have any memories
01:28:09 of Richard? We worked together quite a bit, actually. In fact, both when I was at Caltech
01:28:16 and after I left Caltech, we were both consultants at this company called Thinking Machines Corporation,
01:28:21 which was just down the street from here, actually. It was ultimately an ill fated company. But I used
01:28:27 to say this company is not going to work with the strategy they have. And Dick Feynman always used
01:28:31 to say, what do we know about running companies? Just let them run their company. But anyway,
01:28:38 he was not into that kind of thing. And he always thought that my interest in doing things like
01:28:44 running companies was a distraction, so to speak. And for me, it’s a mechanism to have a more
01:28:53 effective machine for actually getting things, figuring things out and getting things to happen.
01:28:58 Did he think of it, because essentially what you did with the company, I don’t know if you were
01:29:04 thinking of it that way, but you’re creating tools to empower the exploration of the
01:29:11 universe. Do you think, did he… Did he understand that point? The point of tools of…
01:29:18 I think not as well as he might have done. I mean, he was actually involved with my
01:29:23 first company, which was involved with more mathematical computation
01:29:30 kinds of things. He had lots of advice about the technical side of what we should
01:29:37 do and so on. Do you have examples, memories, or thoughts that… Oh, yeah, yeah. He had all
01:29:42 kinds of… Look, in the business of doing sort of… One of the hard things in math is doing
01:29:48 integrals and so on. And so he had his own elaborate ways to do integrals and so on. He
01:29:53 had his own ways of thinking about sort of getting intuition about how math works.
01:29:57 And so his sort of meta idea was take those intuitional methods and make a computer follow
01:30:04 those intuitional methods. Now, it turns out for the most part, like when we do integrals and
01:30:10 things, what we do is we build this kind of bizarre industrial machine that turns every integral
01:30:16 into products of Meijer G functions and generates this very elaborate thing. And actually the big
01:30:21 problem is turning the results into something a human will understand. It’s not, quote,
01:30:26 doing the integral. And actually, Feynman did understand that to some extent. And I’m embarrassed
01:30:31 to say he once gave me this big pile of, you know, calculational methods for particle physics that he
01:30:37 worked out in the 50s. And he said, yeah, it’s more use to you than to me, type thing. And I
01:30:41 intended to look at it and give it back, and it’s still in my files now. But that’s
01:30:47 what happens with the finiteness of human lives. Maybe if he’d lived another 20 years, I would have
01:30:54 remembered to give it back. But I think that was his attempt to systematize the ways that one does
01:31:03 integrals that show up in particle physics and so on. Turns out the way we’ve actually done it
01:31:08 is very different from that way. What do you make of that difference?
01:31:10 So Feynman was actually quite remarkable at creating sort of intuitive frameworks for
01:31:20 understanding difficult concepts. I’m smiling because, you know, the funny thing about him was
01:31:27 that the thing he was really, really, really good at is calculating stuff. But he thought that was
01:31:32 easy because he was really good at it. And so he would do these things where he would calculate
01:31:38 some, do some complicated calculation in quantum field theory, for example, come out with a result,
01:31:44 wouldn’t tell anybody about the complicated calculation because he thought that was easy.
01:31:48 He thought the really impressive thing was to have this simple intuition about how
01:31:52 everything works. So he invented that at the end. And, you know, because he’d done this calculation
01:31:58 and knew how it worked, it was a lot easier. It’s a lot easier to have good intuition when you know
01:32:02 what the answer is. And then he would just not tell anybody about these calculations.
01:32:07 He wasn’t meaning that maliciously, so to speak. It’s just he thought that was easy.
01:32:12 And that’s, you know, that led to areas where people were just completely mystified,
01:32:17 and they kind of followed his intuition. But nobody could tell why it worked. Because actually,
01:32:22 the reason it worked was because he’d done all these calculations, and he knew that it
01:32:25 would work. And, you know, when he and I worked a bit on quantum computers actually back in 1980,
01:32:31 81, before anybody had heard of those things. And, you know, the typical mode was, I mean,
01:32:38 he used to say, and I now think about this because I’m about the age that he was when I
01:32:42 worked with him, and, you know, I see the people who are one third my age, so to speak,
01:32:47 and he was always complaining that I was one third his age, and therefore various things. But,
01:32:54 you know, he would do some calculation by hand, you know, blackboard and things, come up with some
01:32:59 answer. I’d say, I don’t understand this. You know, I’d do something with a computer. And he’d say,
01:33:06 you know, I don’t understand this. So there’d be some big argument about what was, you know,
01:33:11 what was going on. And I think, actually, many of the things that we
01:33:18 sort of realized about quantum computing, that was sort of issues that have to do particularly
01:33:23 with the measurement process, are kind of still issues today. And I kind of find it interesting.
01:33:28 It’s a funny thing in science, and it happens
01:33:34 in technology too, that there’s a remarkable sort of repetition of history that ends up occurring.
01:33:40 Eventually, things really get nailed down. But it often takes a while. And often things come
01:33:45 back decades later. Well, for example, I could tell a story that actually happened right down the
01:33:50 street from here, when we were both at Thinking Machines. I had been working on this particular
01:33:56 cellular automaton, rule 30, that has this feature that from very simple initial conditions,
01:34:03 it makes really complicated behavior. Okay. So, actually, of all silly physical things,
01:34:11 using this big parallel computer called the Connection Machine that that company was making,
01:34:16 I generated this giant printout of rule 30, actually on the same kind
01:34:22 of printer that people use to make layouts of microprocessors. So one of these big, you know,
01:34:31 large format printers with high resolution and so on. So, okay, we print this out, lots of very tiny
01:34:37 cells. And so there was sort of a question of how to measure some features of that pattern. And so it was very
01:34:45 much a physical thing, you know, on the floor with meter rules trying to measure different things.
01:34:49 So Feynman kind of takes me aside, we’d been doing that for a little while, and takes me aside.
01:34:55 And he says, I just want to know this one thing. He says, I want to know, how did you know that this
01:35:00 rule 30 thing would produce all this really complicated behavior that is so complicated
01:35:05 that we’re, you know, going around with this big printout, and so on. And I said, Well,
01:35:10 I didn’t know, I just enumerated all the possible rules and then observed that that’s what happened.
01:35:15 He said, Oh, I feel a lot better. You know, I thought you had some intuition that I didn’t have
01:35:22 that would let one do that. I said, No, no, no intuition, just experimental science.
01:35:26 Oh, that’s such a beautiful sort of dichotomy there. That’s exactly what you showed: you really
01:35:33 can’t have an intuition about an irreducible computation. I mean, you have to run it.
01:35:37 Yes, that’s right.
01:35:38 That’s so hard for us humans, and especially brilliant
01:35:41 physicists like Feynman, to accept that you can’t have a compressed, clean intuition about how the whole
01:35:50 thing works. Yes, yes. No, I mean, I think he was sort of on the edge of understanding
01:35:56 that point about computation. And I think he found that, I think he always found computation
01:36:00 interesting. And I think that was sort of what he was a little bit poking at. I mean, that intuition,
01:36:07 you know, the difficulty of discovering things, like even you say, Oh, you know, you just
01:36:12 enumerate all the cases and just find one that does something interesting, right? Sounds very easy.
01:36:16 Turns out, like, I missed it when I first saw it, because I had kind of an intuition
01:36:21 that said it shouldn’t be there. So I had kind of arguments, Oh, I’m going to ignore that case,
01:36:26 because whatever. And how did you have an open enough mind? Because you’re essentially the same
01:36:32 kind of person as Feynman, like the same kind of physics type of thinking. How did you find yourself
01:36:37 having a sufficiently open mind to be open to watching rules and them revealing complexity?
01:36:44 Yeah, I think that’s an interesting question. I’ve wondered about that myself, because it’s
01:36:47 kind of like, you know, you live through these things, and then you say, what was the historical
01:36:52 story? And sometimes the historical story that you realize after the fact was not what you lived
01:36:56 through, so to speak. And so, you know, what I realized is, I think what happened is, you know,
01:37:05 I did physics, kind of like reductionistic physics, where you’re thrown in the universe,
01:37:10 and you’re told, go figure out what’s going on inside it. And then I started building computer
01:37:15 tools. And I started building my first computer language, for example. And a computer language
01:37:20 is sort of like physics in the sense that you have to take all those computations
01:37:24 people want to do, and kind of drill down and find the primitives that they can all be made of.
01:37:30 But then you do something that’s really different, because you’re just saying,
01:37:33 okay, these are the primitives. Now, you know, hopefully they’ll be useful to people,
01:37:37 let’s build up from there. So you’re essentially building an artificial universe, in a sense,
01:37:43 where you make this language, you’ve got these primitives, you’re just building whatever you
01:37:47 feel like building. And so it was sort of interesting for me, going from doing science,
01:37:53 where you’re just thrown in the universe as the universe is, to then just being told, you know,
01:37:58 you can make up any universe you want. And so I think that experience of making a computer language,
01:38:04 which is essentially building your own universe, so to speak, that’s what gave me a somewhat
01:38:12 different attitude towards what might be possible. It’s like, let’s just explore what can be done in
01:38:17 these artificial universes, rather than thinking the natural science way of let’s be constrained
01:38:23 by how the universe actually is. Yeah, by being able to program, essentially,
01:38:28 as opposed to being limited to just your mind and a pen, you’ve basically built
01:38:34 another brain that you can use to explore the universe. A computer program, you know,
01:38:40 is kind of a brain, right? Well, it’s that, or a telescope, or, you know, it’s a tool,
01:38:44 it lets you see stuff. But there’s something fundamentally different
01:38:47 between a computer and a telescope. I mean, I don’t mean to romanticize the notion,
01:38:54 but it’s more general, the computer is more general. And it’s, I think, I mean, this
01:39:00 point about, you know, people say, oh, such and such a thing was almost discovered at such and
01:39:07 such a time. The distance between, you know, building the paradigm that allows you to
01:39:12 actually understand stuff, or allows one to be open to seeing what’s going on, that’s really hard.
01:39:18 And, you know, I think, in I’ve been fortunate in my life that I spent a lot of my time building
01:39:24 computational language. And that’s an activity that, in a sense, works by sort of having to
01:39:33 kind of create another level of abstraction and kind of be open to different kinds of structures.
01:39:39 But, you know, I mean, I’m fully aware of, I suppose, the fact that I have seen
01:39:45 a bunch of times how easy it is to miss the obvious, so to speak. That at least is factored
01:39:51 into my attempt to not miss the obvious, although it may not succeed. What do you think is the role
01:40:08 of ego in the history of math and science? And more specifically, you know, a book title
01:40:16 like A New Kind of Science. You’ve accomplished a huge amount. In fact, somebody said that Newton
01:40:16 didn’t have an ego, and I looked into it and he had a huge ego. Yeah, but from an outsider’s
01:40:21 perspective, some have said that you have a bit of an ego as well. Do you see it that way? Does
01:40:28 ego get in the way? Is it empowering? Is it both? So, it’s complicated and necessary. I
01:40:34 mean, you know, look, I’ve spent more than half my life as CEO of a tech company. Right.
01:40:39 Okay. And, you know, that means that one’s ego is not
01:40:50 a distant thing. It’s a thing that one encounters every day, so to speak, because it’s, it’s all
01:40:55 tied up with leadership and with how one, you know, develops an organization and all these
01:40:59 kinds of things. So, you know, it may be that if I’d been an academic, for example, I could have
01:41:03 sort of, you know, checked the ego, put it on a shelf somewhere, and ignored its characteristics,
01:41:09 but you’re reminded of it quite often in the context of running a company. Sure. I mean,
01:41:15 that’s what it’s about. It’s about leadership, and, you know, leadership is intimately tied to
01:41:22 ego. Now, what does it mean? I mean, you know, for me, I’ve been fortunate that I
01:41:27 think I have reasonable intellectual confidence, so to speak. That is, you know, I’m one of
01:41:34 these people who at this point, if somebody tells me something and I just don’t understand it,
01:41:39 my conclusion isn’t that that means I’m dumb. My conclusion is there’s something wrong with
01:41:45 what I’m being told. And actually Dick Feynman used to have that feature too.
01:41:51 He never really believed in experts. He actually believed in experts much less than I believe in experts.
01:41:55 So. Wow. So that’s a fundamentally powerful property of ego: saying,
01:42:03 not that I am wrong, but that the world is wrong, when confronted
01:42:12 with a fact that doesn’t fit the thing that you’ve really thought through. Sort of both the
01:42:17 negative and the positive of ego: do you see the negative of that get in the way?
01:42:24 Sure. The mistakes I’ve made are the results of, I’m pretty sure I’m right, and it
01:42:30 turns out I’m not. I mean, that’s, you know, but the thing is,
01:42:36 the idea is that one tries to do things. So, for example, you know, one question is, if people have
01:42:42 tried hard to do something, and then one thinks, maybe I should try doing this myself. If one
01:42:48 does not have a certain degree of intellectual confidence, one just says, well, people have been
01:42:52 trying to do this for a hundred years. How am I going to be able to do this? Yeah. And, you know,
01:42:56 I was fortunate in the sense that I happened to start having some degree of success in science
01:43:02 and things when I was really young. And so that developed a certain amount of sort of intellectual
01:43:07 confidence. I don’t think I otherwise would have had. Um, and you know, in a sense, I mean,
01:43:12 I was fortunate that I was working in a field, particle physics during its sort of golden age
01:43:17 of rapid progress. And that kind of gives one a false sense of achievement, because
01:43:22 it’s kind of easy to discover stuff that’s going to survive if you happen to be,
01:43:26 you know, picking the low hanging fruit of a rapidly expanding field.
01:43:30 I mean, the reason I totally immediately understood the ego behind A New
01:43:34 Kind of Science, let me sort of just try to express my feelings on the whole thing,
01:43:39 is that if you don’t allow that kind of ego, then you would never write that book.
01:43:46 You would say, well, people must have done this. You would not dig.
01:43:49 You would not keep digging. And I think you have to take that ego and
01:43:56 ride it and see where it takes you. And that’s how you create exceptional work.
01:44:02 But I think the other point about that book was it was a non trivial question,
01:44:07 how to take a bunch of ideas that are, I think, reasonably big ideas.
01:44:12 You know, their importance is determined by what happens historically.
01:44:16 One can’t tell how important they are. One can tell sort of the scope of them.
01:44:20 And the scope is fairly big and they’re very different from things that have come before.
01:44:26 And the question is, how do you explain that stuff to people? And so I had had the experience
01:44:31 of sort of saying, well, there are these things, there’s a cellular automaton. It does this,
01:44:34 it does that. And people are like, oh, it must be just like this. It must be just like that.
01:44:39 So no, it isn’t. It’s something different. Right. And so I could have done it sort of,
01:44:44 I’m really glad you did what you did, but you could have done it sort of academically,
01:44:47 just published, kept publishing small papers here and there. And then you would just keep
01:44:51 getting this kind of resistance, right? As opposed to just
01:44:55 dropping a thing that says, here it is, here’s the full thing.
01:45:00 No, I mean, my calculation was that basically, you know, you could introduce
01:45:04 little pieces. It’s like, you know, one possibility is like, it’s the secret weapon,
01:45:09 so to speak. It’s this, you know, I keep on discovering these things in all these different
01:45:13 areas. Where’d they come from? Nobody knows. But I decided that, you know, one
01:45:18 only has one life to lead, and, you know, writing that book took me a decade anyway. There’s not a
01:45:24 lot of wiggle room, so to speak. One can’t be wrong by a factor of three, so to speak, in how long
01:45:29 it’s going to take. So I thought the best thing to do, the thing that most
01:45:35 respects the intellectual content, so to speak, is to just put it out with as much
01:45:44 force as you can. And, you know, it’s an interesting thing.
01:45:49 You talk about ego and it’s, you know, for example, I run a company which has my name on it,
01:45:54 right? I thought about starting a club for people whose companies have their names on them. And
01:45:59 it’s a funny group because we’re not a bunch of egomaniacs. That’s not what it’s about,
01:46:04 so to speak. It’s about basically sort of taking responsibility for what one’s doing.
01:46:10 And, you know, in a sense, any of these things where you’re sort of putting yourself on the line,
01:46:15 it’s kind of a funny, it’s a funny dynamic because, in a sense, my company is sort of
01:46:25 something that happens to have my name on it, but it’s kind of bigger than me and I’m kind of just
01:46:30 its mascot at some level. I mean, I also happen to be a pretty, you know, strong leader of it.
01:46:35 But it’s basically showing a deep, inextricable sort of investment. Your name,
01:46:45 like Steve Jobs’s name wasn’t on Apple, but he was Apple. Elon Musk’s name is not on Tesla,
01:46:55 but he is Tesla. Meaning, emotionally, if the company succeeds or fails,
01:47:01 he would emotionally suffer through that. And so that’s a beautiful way of
01:47:07 recognizing that fact. And also, Wolfram is a pretty good branding name, so that works out.
01:47:12 Yeah, right. Exactly. I think Steve had a bad deal there.
01:47:16 Yeah. So you made up for it with the last name. Okay. So in 2002, you published
01:47:23 A New Kind of Science, to which sort of on a personal level, I can credit my love for
01:47:29 cellular automata and computation in general. I think a lot of others can as well. Can you
01:47:35 briefly describe the vision, the hope, the main idea presented in this 1200 page book?
01:47:45 Sure, although it took 1200 pages to say in the book. So, for the real idea, kind of
01:47:54 a good way to get into it is to look at sort of the arc of history and to look at what’s happened
01:47:58 in kind of the development of science. I mean, there was this sort of big idea in science about
01:48:04 300 years ago, that was, let’s use mathematical equations to try and describe things in the world.
01:48:10 Let’s use sort of the formal idea of mathematical equations to describe what might be happening in
01:48:16 the world, rather than, for example, just using sort of logical argumentation and so on. Let’s have
01:48:21 a formal theory about that. And so there’d been this 300 year run of using mathematical equations
01:48:27 to describe the natural world, which had worked pretty well. But I got interested in how one could
01:48:32 generalize that notion. There is a formal theory, there are definite rules, but what structure could
01:48:38 those rules have? And so what I got interested in was let’s generalize beyond the sort of purely
01:48:44 mathematical rules. And we now have this sort of notion of programming and computing and so on.
01:48:50 Let’s use the kinds of rules that can be embodied in programs as a sort of generalization of the
01:48:57 ones that can exist in mathematics as a way to describe the world. And so my kind of favorite
01:49:04 version of these kinds of simple rules are these things called cellular automata. And so typical
01:49:09 case… So wait, what are cellular automata? Fair enough. So typical case of a cellular automaton,
01:49:16 it’s an array of cells. It’s just a line of discrete cells. Each cell is either black or white.
01:49:25 And in a series of steps that you can represent as lines going down a page, you’re updating the
01:49:31 color of each cell according to a rule that depends on the color of the cell above it and
01:49:35 to its left and right. So it’s really simple. So a thing might be if the cell and its right neighbor
01:49:44 are not the same or the cell on the left is black or something, then make it black on the next step.
01:49:54 And if not, make it white. Typical rule. That rule, I’m not sure I said it exactly right,
01:50:01 but a rule very much like what I just said, has the feature that if you started off from just one
01:50:05 black cell at the top, it makes this extremely complicated pattern. So some rules you get a very
01:50:12 simple pattern. Some rules, the rule is simple. You start them off from a sort of simple seed.
01:50:19 You just get this very simple pattern. But other rules, and this was the big surprise when I
01:50:25 started actually just doing the simple computer experiments to find out what happens, is that they
01:50:30 produce very complicated patterns of behavior. So for example, this rule 30 rule has the feature
01:50:36 you start off from just one black cell at the top, makes this very random pattern. If you look
01:50:43 like at the center column of cells, you get a series of values. It goes black, white, black,
01:50:49 black, whatever it is. That sequence seems for all practical purposes random. So it’s kind of like
01:50:56 in math, you compute the digits of pi, 3.1415926, whatever. Those digits once computed, I mean,
01:51:05 the scheme for computing pi, it’s the ratio of the circumference to the diameter of a circle,
01:51:09 very well defined. But yet, once you’ve generated those digits, they seem for all practical
01:51:15 purposes completely random. And so it is with rule 30, that even though the rule is very simple,
01:51:22 much simpler, much more sort of computationally obvious than the rule for generating digits of pi,
01:51:28 even with a rule that simple, you’re still generating immensely complicated behavior.
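The setup Wolfram just described can be sketched in a few lines of Python (my own sketch, not code from the conversation; he notes his spoken statement of the rule may not be exact, and the standard rule 30 update is: new cell = left XOR (center OR right)):

```python
# A minimal sketch of the one-dimensional cellular automaton described
# above. Cells are 0 (white) or 1 (black); rule 30's update is:
# new cell = left XOR (center OR right).

def rule30_step(row):
    """Apply rule 30 to one row, treating cells beyond the ends as white."""
    padded = [0] + row + [0]
    return [padded[i - 1] ^ (padded[i] | padded[i + 1])
            for i in range(1, len(padded) - 1)]

def rule30(steps):
    """Evolve from a single black cell; returns the list of rows."""
    row = [0] * steps + [1] + [0] * steps  # wide enough that the edges never matter
    rows = [row]
    for _ in range(steps):
        row = rule30_step(row)
        rows.append(row)
    return rows

if __name__ == "__main__":
    # One printed line per time step, going down the page as he describes.
    for r in rule30(15):
        print("".join("#" if c else "." for c in r))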
01:51:32 Yeah. So if we could just pause on that, I think you probably have said it and looked at it so long,
01:51:38 you forgot the magic of it, or perhaps you don’t, you still feel the magic. But to me,
01:51:43 if you’ve never seen sort of, I would say, what is it? A one dimensional, essentially,
01:51:49 cellular automata, right? And you were to guess what you would see if you have some
01:51:57 sort of cells that only respond to their neighbors. Right. If you were to guess what kind of things
01:52:04 you would see, like my initial guess, like even when I first like opened your book,
01:52:09 A New Kind of Science, right? My initial guess was that you would see, I mean, very simple
01:52:15 stuff. Right. And I think it’s a magical experience to realize the kind of complexity,
01:52:22 you mentioned rule 30, still your favorite cellular automaton? Still my favorite rule. Yes.
01:52:28 You get complexity, immense complexity, you get arbitrary complexity. Yes. And when you say
01:52:35 randomness down the middle column, that’s just one cool way to say that there’s incredible complexity.
01:52:44 And that’s just, I mean, that’s a magical idea. However, you start to interpret it,
01:52:49 all the reducibility discussions, all that. But it’s just, I think that has profound philosophical
01:52:56 kind of notions around it, too. It’s not just, I mean, it’s transformational about how you see the
01:53:03 world. I think for me it was transformational. I don’t know, we can have all kinds of discussion
01:53:07 about computation and so on, but just, you know, I sometimes think if I were on a desert island
01:53:15 and, I don’t know, maybe with some psychedelics or something, but if I had to take
01:53:19 one book, I mean, A New Kind of Science would be it, because you could just enjoy that notion. For some
01:53:25 reason, it’s a deeply profound notion, at least to me. I find it that way. Yeah. I mean, look,
01:53:30 it was a very intuition-breaking thing to discover. I mean, it’s kind of like, you know,
01:53:39 you point the computational telescope out the window and you’re like, okay, I’m going to
01:53:43 point the computational telescope out there. And suddenly you see, I don’t know, you know,
01:53:48 in the past, it’s kind of like, you know, moons of Jupiter or something, but suddenly you see
01:53:52 something that’s kind of very unexpected and rule 30 was very unexpected for me. And the big
01:53:57 challenge at a personal level was to not ignore it. I mean, people, you know, in other words,
01:54:03 you might say, you know, it’s a bug. What would you say? Yeah. Well, yeah. I mean,
01:54:08 what are we looking at, by the way? Oh, well, I was just generating here. I’ll actually generate
01:54:11 a rule 30 pattern. So that’s the rule for rule 30. And it says, for example, it says here,
01:54:18 if you have a black cell in the middle and black cell to the left and white cell to the right,
01:54:22 then the cell on the next step will be white. And so here’s the actual pattern that you get
01:54:27 starting off from a single black cell at the top there. And that’s the initial state, the initial
01:54:34 condition. That’s the initial thing. You just start off from that, and then you’re going down
01:54:37 the page, and at every step you’re just applying this rule to find out the new value that
01:54:44 you get. And so you might think, a rule that simple, there’s got to be some trace
01:54:50 of that simplicity here. Okay, we’ll run it, let’s say, for 400 steps. Um, so what it does,
01:54:56 it’s kind of aliasing a bit on the screen there, but you can see there’s a little bit
01:55:00 of regularity over on the left, but there’s a lot of stuff here that just looks very complicated,
01:55:07 very random. And that was a big shock to my intuition, at least,
01:55:14 that that’s possible. The mind immediately starts: is there a pattern? There must be a repetitive
01:55:19 pattern. There must be. So indeed, that’s what I thought at first. And I thought,
01:55:25 well, this is kind of interesting, but, you know, if we run it long enough, we’ll see
01:55:29 it resolve into something simple. And, you know, I did all kinds of
01:55:34 analysis using mathematics, statistics, cryptography, whatever, to try and crack
01:55:41 it. And I never succeeded. And after I hadn’t succeeded for a while, I started thinking maybe
01:55:46 there’s a real phenomenon here that is the reason I’m not succeeding. I mean, the thing that
01:55:52 for me was sort of a motivating factor was looking at the natural world and seeing all this complexity
01:55:57 that exists in the natural world. The question is, where does it come from? You know, what secret
01:56:01 does nature have that lets it make all this complexity that we humans, when we engineer
01:56:06 things typically are not making, we’re typically making things that at least look quite simple to
01:56:11 us. And so the shock here was even from something very simple, you’re making something that complex.
01:56:18 Uh, maybe this is getting at sort of the secret that nature has that allows it to make really
01:56:24 complex things, even though its underlying rules may not be that complex. How did it make you feel
01:56:30 if we look at the Newton apple, was there a moment where, you know, you took a walk
01:56:36 and something profoundly hit you, or was this a gradual thing, a lobster being boiled?
01:56:43 The truth of every sort of science discovery is it’s not that sudden. I mean,
01:56:50 I happen to be interested in scientific biography kinds of things. And so I’ve tried to track down,
01:56:54 you know, how did people come to figure out this or that thing? And there’s always a long kind of,
01:57:00 sort of preparatory period. You know, there’s a need to be prepared, a mindset
01:57:06 in which it’s possible to see something. I mean, in the case of rule 30,
01:57:10 It was around June 1st, 1984. It’s kind of a silly story in some ways. I finally had
01:57:16 a high resolution laser printer, so I thought, I’m going to generate a bunch of
01:57:20 pictures of these cellular automata. And I generated this one. I was on some plane flight
01:57:27 to Europe and had this with me. And it’s like, you know, I really should try to understand
01:57:32 this. This is really, you know, I really don’t understand what’s going on.
01:57:37 And that was kind of the start of slowly trying to see what was happening.
01:57:43 It was not, uh, it was depressingly unsubstantial, so to speak, in the sense that, um, a lot of these
01:57:50 ideas like principle of computational equivalence, for example, you know, I thought, well, that’s a
01:57:56 possible thing. I didn’t know if it’s correct, still don’t know for sure that it’s correct.
01:58:00 But it’s sort of a gradual thing, that these things gradually come to seem more important
01:58:07 than one thought. I mean, I think the whole idea of studying the computational universe of simple
01:58:12 programs, it took me probably a decade, decade and a half to kind of internalize that that was
01:58:19 really an important idea. Um, and I think, you know, if it turns out we find the whole universe
01:58:24 lurking out there in the computational universe, that’s a good, uh, you know, it’s a good brownie
01:58:29 point or something for the, uh, for the whole idea. But I think that the, um, the thing that’s
01:58:34 strange in this whole question about, you know, finding this different raw material for making
01:58:39 models of things, what’s been interesting in the sort of arc of history is,
01:58:45 you know, for 300 years, the mathematical equations approach
01:58:49 was the winner. It was the thing: you know, if you want to have a really good model for something,
01:58:53 that’s what you use. The thing that’s been remarkable is just in the last decade or so,
01:58:58 I think one can see a transition to using not mathematical equations, but programs
01:59:04 as sort of the raw material for making models of stuff. And that’s pretty neat. And it’s kind of,
01:59:11 you know, as somebody who’s kind of lived inside this paradigm shift, so to speak,
01:59:15 it is bizarre. I mean, no doubt in sort of the history of science that will be seen as an
01:59:20 instantaneous paradigm shift, but it sure isn’t instantaneous when it’s played out in one’s actual
01:59:25 life, so to speak. It seems glacial. And it’s the kind of thing where it’s sort of
01:59:32 interesting because in the dynamics of sort of the adoption of ideas like that into different fields,
01:59:40 the younger the field, the faster the adoption typically, because people are not kind of locked
01:59:46 in with the fifth generation of people who’ve studied this field, and it is the way it is
01:59:52 and can never be any different. And, you know, watching that process
01:59:57 has been interesting. I mean, I think I’m fortunate that I do stuff
02:00:03 mainly because I like doing it, and that makes me kind of thick-skinned
02:00:09 about the world’s response to what I do. But definitely, you know, anytime
02:00:16 you write a book called something like A New Kind of Science, the pitchforks
02:00:21 will come out for the old kind of science. And it was interesting dynamics.
02:00:26 I have to say that I was fully aware of the fact that when
02:00:34 you see sort of incipient paradigm shifts in science, the vigor of the negative response
02:00:41 upon early introduction is a fantastic positive indicator of good long-term results. So in other
02:00:48 words, if people just don’t care, you know, that’s not such a good sign. If they’re
02:00:55 like, oh, this is great, that means you didn’t really discover anything interesting. What
02:01:01 fascinating properties of rule 30 have you discovered over the years? You’ve recently
02:01:05 announced the rule 30 prizes for solving three key problems. Can you maybe talk about interesting
02:01:11 properties that have been revealed about rule 30 or other cellular automata, and what problems
02:01:18 are still before us, like the three problems you've announced? Yeah, right. So, I mean,
02:01:24 the most interesting thing about cellular automata is that it’s hard to figure stuff out about them.
02:01:29 In a sense, every time you try and bash them with
02:01:36 some other technique, you say, can I crack them? The answer is they seem to be uncrackable. They
02:01:42 seem to have the feature that they're showing irreducible computation.
02:01:49 You're not able to say, oh, I know exactly what this is going to do, it's going to
02:01:53 do this or that. But there are specific formulations of that fact? Yes, right. So, for example,
02:02:00 in rule 30, in the pattern you get just starting from a single black cell, you get this
02:02:05 very random-looking pattern. And so one feature of that: just look at the
02:02:11 center column. For example, we used that for a long time to generate randomness in Wolfram
02:02:16 Language, just from what rule 30 produces. Now the question is, can you prove
02:02:22 how random it is? So for example, one very simple question, can you prove that it’ll never repeat?
02:02:28 We haven’t been able to show that it will never repeat.
02:02:32 We know that two adjacent columns can't both repeat,
02:02:37 but just knowing whether that center column can ever repeat, we still don't even know that.
02:02:42 Another problem that I put in my collection: it's $30,000 for
02:02:48 these three prizes about rule 30. I would say this is one of
02:02:54 those cases where the money is not the main point; it just
02:03:00 helps motivate the investigation somehow. So there's three problems
02:03:06 you propose, and you get $30,000 if you solve all three, or maybe it's $10,000 for each?
02:03:12 Right. That's right, but money's not the thing. The problems
02:03:16 themselves are just clean formulations. The first is: will it ever become periodic?
02:03:22 The second problem is: are there an equal number of black and white cells
02:03:27 down the middle column? And the third problem is a little bit harder to state, which is essentially:
02:03:31 is there a way of figuring out what the color of a cell at position T down the center column is
02:03:38 with less computational effort than about T steps? So in other words, is there a way to jump
02:03:45 ahead and say, I know what this is going to do, it's just some mathematical function
02:03:51 of T? Or proving that there is no way. Yes, or proving there is no way. But both, I mean,
02:03:57 for any one of these, one could discover something. You know,
02:04:01 we know what rule 30 does for a billion steps, and maybe we'll know for a trillion steps
02:04:06 before too very long. But maybe at a quadrillion steps it suddenly becomes repetitive.
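For listeners who want to experiment, the center column the three problems refer to is easy to generate. A minimal sketch in plain Python (my own illustration; the function name and parameters are mine, and this is not Wolfram Language code):

```python
# Evolve elementary cellular automaton rule 30 from a single black cell and
# keep only the center column, the object of the three prize problems.

def rule30_center_column(steps):
    """Return steps + 1 center-column cells (1 = black, 0 = white)."""
    width = 2 * steps + 1            # wide enough that the edges never matter
    cells = [0] * width
    cells[steps] = 1                 # single black cell in the middle
    column = [1]
    for _ in range(steps):
        # Rule 30: new cell = left XOR (center OR right)
        cells = [
            cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
            for i in range(width)
        ]
        column.append(cells[steps])
    return column

col = rule30_center_column(1000)
print(col[:16])                      # starts 1, 1, 0, 1, 1, 1, 0, 0, ...
print(sum(col) / len(col))           # empirically the density stays near 1/2
```

In these terms, problem 1 asks whether the sequence is ever periodic, problem 2 whether black and white occur equally often in the limit (the density printed above hovers near 1/2), and problem 3 whether entry T can be computed with much less than about T steps of work.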
02:04:12 You might say, how could that possibly happen? But so when I was writing up these prizes,
02:04:17 I thought, and this is typical of what happens in the computational universe. I thought,
02:04:21 let me find an example where it looks like it’s just going to be random forever,
02:04:25 but actually it becomes repetitive. And I found one. I did a search of,
02:04:29 I don't know, maybe a million different rules with some criterion. And this is
02:04:36 what's sort of interesting about that is, I have this thing that I say in a kind of silly
02:04:41 way about the computational universe, which is: the animals are always smarter than you
02:04:46 are. That is, there's always some way one of these computational systems is going to figure
02:04:49 out how to do something, even though I can't imagine how it's going to do it. And
02:04:53 I didn't think I would find one. You would think that after all these years, when
02:04:57 I've found sort of all possible funky things,
02:05:05 I would have gotten my intuition wrapped around the idea that these creatures
02:05:10 in the computational universe are always smarter than I'm going to be. But,
02:05:15 well, they're equivalently as smart, right? That's correct. And
02:05:19 it makes one feel humble; it's humbling every time, because
02:05:25 you think it's going to do this, or it's not going to be possible to do this,
02:05:29 and it turns out it finds a way. Of course, the promising thing is there are a lot of other rules
02:05:34 like rule 30. It's just that rule 30 is my favorite, because I found it first.
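The kind of rule search described a moment ago can be sketched in miniature. This is my own rough illustration over just the 256 elementary rules, not Wolfram's actual experiment (which searched a far larger rule space), and the step count and period bound here are arbitrary choices of mine:

```python
# Flag elementary CA rules whose center column, run from a single black cell,
# settles into a short repeating cycle.

def center_column(rule, steps):
    """Center column of elementary rule `rule` run from a single black cell."""
    width = 2 * steps + 1                        # edges never reach the center
    cells = [0] * width
    cells[steps] = 1
    table = [(rule >> i) & 1 for i in range(8)]  # outputs for neighborhoods 0..7
    col = [1]
    for _ in range(steps):
        cells = [
            table[4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % width]]
            for i in range(width)
        ]
        col.append(cells[steps])
    return col

def eventually_periodic(seq, max_period=15):
    """True if the second half of `seq` repeats with some period <= max_period."""
    tail = seq[len(seq) // 2:]
    return any(
        all(tail[i] == tail[i + p] for i in range(len(tail) - p))
        for p in range(1, max_period + 1)
    )

periodic_rules = [r for r in range(256)
                  if eventually_periodic(center_column(r, 120))]
print(len(periodic_rules), 30 in periodic_rules)
```

Most elementary rules do settle into short cycles down the middle; rule 30 is among the ones that do not, at least within any horizon that has been checked.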
02:05:40 But the problems are focusing on rule 30. It's possible that rule 30
02:05:45 is repetitive after a trillion steps, and that doesn't prove anything about the other rules.
02:05:50 It does not. But this is a good sort of experiment in how you go about trying to prove something
02:05:56 about a particular rule. Yes. And also, all these things help build intuition. That is, if
02:06:01 it turned out that this was repetitive after a trillion steps, that’s not what I would expect.
02:06:06 And so we learned something from that. The method to do that, though, would reveal something
02:06:11 interesting, no doubt. No doubt. I mean, although it's sometimes challenging,
02:06:17 like, you know, I put out a prize in 2007 for a particular Turing machine that
02:06:24 was the simplest candidate for being a universal Turing machine, and a young chap in
02:06:29 England named Alex Smith, after a smallish number of months, said, I've got a proof. And
02:06:35 he did. It took a little while to iterate, but he had a proof. Unfortunately,
02:06:40 the proof is a lot of micro-details. It's not like you look
02:06:47 at it and you say, aha, there’s a big new principle. The big new principle is the simplest
02:06:53 Turing machine that might have been universal actually is universal. And it’s incredibly much
02:06:58 simpler than the Turing machines that people already knew were universal before that. And so
02:07:03 that intuitionally is important because it says computation universality is closer at hand than
02:07:08 you might've thought. But the actual methods, in that particular case,
02:07:13 were not terribly illuminating. It would be nice if the methods would also be elegant.
02:07:17 That's true. Yeah. I mean, I think it's one of these things where,
02:07:21 like what we talked about earlier about opening up AIs and machine
02:07:27 learning and what's going on inside: is it just step by step, or can you
02:07:32 sort of see the bigger picture more abstractly? It's unfortunate. I mean, with Fermat's last
02:07:36 theorem, it's unfortunate that the proof of such an elegant theorem
02:07:44 doesn't fit into the margins of a page. That's true. But
02:07:49 one of the things is, that's another consequence of computational irreducibility: this fact
02:07:54 that there are even quite short results in mathematics whose proofs are arbitrarily long.
02:08:00 Yes. That's a consequence of all this stuff. And it makes one wonder,
02:08:06 you know, how come mathematics is possible at all? Right. Why is it the
02:08:11 case that people have managed to navigate doing mathematics by looking at things where
02:08:16 they're not just thrown into it's-all-undecidable? That's its own separate
02:08:22 story. And it would have a poetic beauty to it if people were to
02:08:29 find something interesting about rule 30, because of the emphasis on this particular
02:08:36 rule. It wouldn't say anything about the broad irreducibility of all computations, but it would
02:08:41 nevertheless put a few smiles on people's faces. Well, yeah. But to me, it's like, in a
02:08:49 sense, establishing the Principle of Computational Equivalence is a little bit like doing
02:08:54 inductive science anywhere. That is, the more examples you find, the more convinced you are
02:08:59 that it's generally true. I mean, whenever we do natural science,
02:09:04 we say, well, it's true here that this or that happens. Can we prove that it's true
02:09:10 everywhere in the universe? No, we can't. So it's the same thing here. We're exploring
02:09:16 the computational universe. We're establishing facts in the computational universe. And
02:09:20 that's sort of a way of inductively concluding general things. Just to think through
02:09:30 this a little bit, we've touched on it a little bit before, but now that we're talking about
02:09:35 cellular automata, what's the difference between
02:09:40 the kind of computation of biological systems, our minds, our bodies, the things we see before us that
02:09:47 emerged through the process of evolution, and cellular automata? I mean, we've kind of implied it
02:09:54 in the discussion of physics underlying everything, and we talked about the potential equivalence
02:10:01 of the fundamental laws of physics and the kind of computation going on in Turing machines.
02:10:06 But can you now connect that? Do you think there’s something special or interesting about the kind
02:10:12 of computation that our bodies do? Right. Well, let’s talk about brains primarily. I mean,
02:10:19 I think the most important thing about the things that our brains do is that we care
02:10:24 about them, in the sense that there's a lot of computation going on out there in
02:10:29 cellular automata and physical systems and so on, and it just does what it
02:10:35 does. It follows those rules; it does what it does. The thing that's special about the computation in
02:10:40 our brains is that it's connected to our goals and our kind of whole societal story. And, you know,
02:10:47 I think that's the special feature. And now the question is, when you
02:10:52 see this whole sort of ocean of computation out there, how do you connect that to the things that
02:10:57 we humans care about? And in a sense, a large part of my life has been involved in sort of the
02:11:02 technology of how to do that. And, you know, what I’ve been interested in is kind of building
02:11:07 computational language that allows that something that both we humans can understand and that can
02:11:13 be used to determine computations that are actually computations we care about. See, I think
02:11:19 when you look at something like one of these cellular automata and it does some complicated
02:11:23 thing, you say, that’s fun, but why do I care? Well, you could say the same thing actually in
02:11:30 physics. You say, oh, I’ve got this material and it’s a ferrite or something. Why do I care? You
02:11:36 know, it has some magnetic properties. Why do I care? It's amusing, but why do I care?
02:11:40 Well, we end up caring because, you know, ferrite is what’s used to make magnetic tape,
02:11:44 magnetic disks, whatever. Or, you know, liquid crystals, which have been
02:11:50 used, though increasingly not, actually, to make computer displays and so on.
02:11:55 But those are, so in a sense, we’re mining these things that happen to exist in the physical
02:12:00 universe and making it be something that we care about because we sort of entrain it into
02:12:05 technology. And it’s the same thing in the computational universe that a lot of what’s
02:12:10 out there is stuff that’s just happening, but sometimes we have some objective and we will
02:12:16 go and sort of mine the computational universe for something that’s useful for some particular
02:12:20 objective. On a large scale, trying to do that, trying to sort of navigate the computational
02:12:26 universe to do useful things, you know, that’s where computational language comes in. And, you
02:12:32 know, a lot of what I've spent time doing is building this thing we call Wolfram Language,
02:12:37 which I've been building for the last third of a century now. And kind of the goal there is
02:12:44 to have a way to express kind of computational thinking, computational thoughts in a way that
02:12:52 both humans and machines can understand. So it’s kind of like in the tradition of computer languages,
02:12:58 programming languages, that the tradition there has been more, let’s take how computers are built
02:13:05 and let’s specify, let’s have a human way to specify, do this, do this, do this,
02:13:10 at the level of the way that computers are built. What I’ve been interested in is representing sort
02:13:15 of the whole world computationally and being able to talk about whether it’s about cities or
02:13:21 chemicals or, you know, this kind of algorithm or that kind of algorithm, things that have come to
02:13:26 exist in our civilization and the sort of knowledge base of our civilization, being able to talk
02:13:31 directly about those in a computational language so that both we can understand it and computers
02:13:37 can understand it. I mean, the thing that I've been sort of excited about recently, which I had
02:13:42 only realized recently, which is kind of embarrassing, is that the arc of what
02:13:47 we've tried to do in building this kind of computational language is a similar kind of
02:13:52 arc to what happened when mathematical notation was invented. So go back 400 years: people were
02:14:00 trying to do math, they were always explaining their math in words, and it was pretty clunky.
02:14:06 And as soon as mathematical notation was invented, you could start defining things like algebra and
02:14:12 later calculus and so on. It all became much more streamlined. When we deal with computational
02:14:17 thinking about the world, there’s a question of what is the notation? What is the kind of
02:14:22 formalism that we can use to talk about the world computationally? In a sense, that’s what I’ve
02:14:27 spent the last third of a century trying to build. And we finally got to the point where
02:14:31 we have a pretty full scale computational language that sort of talks about the world.
02:14:36 And that’s exciting because it means that just like having this mathematical notation, let us
02:14:43 talk about the world mathematically, and let us build up these kind of mathematical sciences.
02:14:49 Now we have a computational language which allows us to start talking about the world
02:14:53 computationally, and it lets us, my view of it is, it's kind of computational X for all X, all these
02:15:01 different fields of computational this and computational that. That's what we can now build.
02:15:06 Let's step back. So first of all, the mundane: what is Wolfram Language in terms of,
02:15:13 I mean, I could answer the question for you, but not the philosophically deep,
02:15:19 the profound, the impact of it; I'm talking in terms of tools, in terms of things you can
02:15:23 download, in terms of stuff you can play with. What is it? Where does it fit into the infrastructure?
02:15:28 What are the different ways to interact with it?
02:15:30 Right. So I mean the two big things that people have sort of perhaps heard of that come from
02:15:35 Wolfram language, one is Mathematica, the other is Wolfram Alpha. So Mathematica first came out
02:15:40 in 1988. It’s this system that is basically an instance of Wolfram language, and it’s used to do
02:15:49 computations, particularly in sort of technical areas. And the typical thing you’re doing is
02:15:56 you’re typing little pieces of computational language, and you’re getting computations done.
02:16:01 It's very kind of, there's like a symbolic...
02:16:05 Yeah, it's a symbolic language.
02:16:10 It's a symbolic language. I mean, I don't know how to cleanly express that, but that makes it very
02:16:14 distinct from how we think about sort of, I don’t know, programming in a language like Python or
02:16:21 something.
02:16:21 Right. So the point is that in a traditional programming language, the raw material of the
02:16:26 programming language is just stuff that computers intrinsically do. And the point of Wolfram
02:16:32 language is that what the language is talking about is things that exist in the world or things
02:16:39 that we can imagine and construct. It’s aimed to be an abstract language from the beginning.
02:16:47 And so for example, one feature it has is that it’s a symbolic language, which means that the
02:16:52 thing you call X: you just type in X, and Wolfram Language will just say, oh, that's X.
02:16:58 It won't say: error, undefined thing, I don't know what it is, I can't compute with it.
02:17:05 Now that X could perfectly well be the city of Boston. That’s a thing. That’s a symbolic thing.
02:17:12 Or it could perfectly well be the trajectory of some spacecraft represented as a symbolic thing.
02:17:20 And that idea that one can work with, sort of computationally work with these different,
02:17:26 these kinds of things that exist in the world or describe the world, that’s really powerful.
02:17:32 And when I started designing, well, when I designed the predecessor of what’s now Wolfram
02:17:40 language, which is a thing called SMP, which was my first computer language, I kind of wanted to
02:17:46 have this sort of infrastructure for computation, which was as fundamental as possible. I mean,
02:17:52 this is what I got for having been a physicist and tried to find fundamental components of things
02:17:57 and wound up with this kind of idea of transformation rules for symbolic expressions
02:18:03 as being sort of the underlying stuff from which computation would be built.
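The idea of transformation rules for symbolic expressions as the basis of computation can be illustrated with a toy rewriter. This is my own sketch of the general idea, not how SMP or Wolfram Language is actually implemented:

```python
# Toy term rewriter: expressions are nested tuples, and "evaluation" is just
# repeatedly applying transformation rules until no rule matches.

def rewrite(expr, rules):
    """Normalize `expr` by applying the first matching rule, innermost first."""
    if isinstance(expr, tuple):
        expr = tuple(rewrite(e, rules) for e in expr)
    for matches, transform in rules:
        if matches(expr):
            return rewrite(transform(expr), rules)
    return expr

# Peano-style addition defined purely by rules:
#   plus(0, y)    -> y
#   plus(s(x), y) -> s(plus(x, y))
rules = [
    (lambda e: isinstance(e, tuple) and e[0] == "plus" and e[1] == 0,
     lambda e: e[2]),
    (lambda e: isinstance(e, tuple) and e[0] == "plus"
               and isinstance(e[1], tuple) and e[1][0] == "s",
     lambda e: ("s", ("plus", e[1][1], e[2]))),
]

two = ("s", ("s", 0))
three = ("s", ("s", ("s", 0)))
print(rewrite(("plus", two, three), rules))
# -> ('s', ('s', ('s', ('s', ('s', 0))))), i.e. five
```

Here the "program" is nothing but the two rules; evaluation is repeated rule application until a fixed point, which is broadly the picture the conversation describes.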
02:18:09 And that's what we've been building from in Wolfram Language. And operationally,
02:18:16 it’s, I would say, by far the highest level computer language that exists. And it’s really
02:18:23 been built in a very different direction from other languages. So other languages have been
02:18:29 about, there is a core language. It really is kind of wrapped around the operations that a
02:18:34 computer intrinsically does. Maybe people add libraries for this or that, but the goal of
02:18:40 Wolfram language is to have the language itself be able to cover this sort of very broad range
02:18:46 of things that show up in the world. And that means that there are 6,000 primitive functions
02:18:51 in the Wolfram Language that cover things. I could probably pick one at random here.
02:18:57 Just for fun, let's take a random sample of all the things that we have here.
02:19:07 So let’s just say random sample of 10 of them and let’s see what we get.
02:19:10 Wow. Okay. So these are really different kinds of functions. These are all functions.
02:19:18 BooleanConvert, okay, that's the thing for converting between different types of Boolean
02:19:23 expressions. So for people who are just listening, Stephen typed in a random sample of Names,
02:19:29 so this is sampling from all functions. How many did you say there might be, 6,000? From 6,000, 10 of
02:19:34 them. And there's a hilarious variety of them. Yeah, right. Well, we've got things like
02:19:40 $RequesterAddress, which has to do with interacting with the world of
02:19:46 the cloud and so on. DiscreteWaveletData, spheroidal functions, a graphical sort of window. Yeah.
02:19:52 WindowMovable, that's the user-interface kind of thing. I want to pick another 10, because I think
02:19:56 this is, okay. So yeah, there's a lot of infrastructure stuff here that you see if you
02:20:01 just start sampling at random. If you
02:20:05 look more at some of the exciting machine learning stuff you showed
02:20:09 off, is that also in this pool? Oh yeah. I mean, you know, one of those functions is
02:20:14 ImageIdentify, a function here where you just say ImageIdentify. I don't know, it's
02:20:19 always good to, let's do this: let's say CurrentImage and let's pick up an image, hopefully.
02:20:26 Current image accessing the webcam, took a picture yourself.
02:20:31 Took a terrible picture. But anyway, we can say ImageIdentify, open square brackets, and then
02:20:37 we just paste that picture in there. The ImageIdentify function running on the picture.
02:20:41 Oh wow. It says I look like a plunger, because I've got this great
02:20:46 big thing behind me. So ImageIdentify classifies the most likely object
02:20:51 in the image. So, plunger. Okay, that's a bit embarrassing. Let's see what it does.
02:20:56 Let's pick the top 10. Okay, well, oh, it thinks it's pretty
02:21:02 unlikely that it's a primate, a hominid, a person: 8% probability, and 57% a plunger.
02:21:08 Yeah. Well, hopefully it'll not give you an existential crisis. And then
02:21:12 8%, I shouldn't say percent, but, no, that's right: 8% that it's a hominid. And,
02:21:20 yeah, okay. I'm going to do another one of these, just because I'm embarrassed
02:21:24 that it didn't see me at all. There we go. Let's try that. Let's see what that did.
02:21:30 We took a picture with a little bit more of me and not just my bald head, so to speak.
02:21:38 Okay, 89% probability it's a person. So, you know, this is
02:21:44 ImageIdentify as an example of just one function out of that part of the
02:21:50 that's like part of the language. Yes. And I mean, you know, I could say something like,
02:21:55 I don't know, let's find the GeoNearest, what could we find? Let's find the nearest volcano.
02:22:03 Let's find the 10. I wonder where it thinks here is. Let's try finding the 10 volcanoes
02:22:11 nearest here. Okay: GeoNearest, volcano, here, 10 nearest volcanoes. Right. Let's find out where
02:22:19 those are. We've now got a list of volcanoes out, and I can say GeoListPlot that,
02:22:24 and hopefully, okay, so there we go. So there's a map that shows the positions of those 10 volcanoes,
02:22:30 on the East Coast and the Midwest, and, well, no, we're okay. We're okay. It's not too
02:22:35 bad. Yeah, they're not very close to us. We could measure how far away they are. But,
02:22:39 you know, the fact that right in the language it knows about all the volcanoes in the world, it
02:22:44 knows how to compute what the nearest ones are, it knows all the maps of the world, and so on.
02:22:49 It’s a fundamentally different idea of what a language is. Yeah, right. That’s why I like to
02:22:54 talk about is that, you know, a full scale computational language. That’s, that’s what
02:22:57 we've tried to do. And if you can comment briefly, I mean, this kind of,
02:23:02 the Wolfram Language along with Wolfram Alpha, represents kind of the dream of what AI is
02:23:07 supposed to be. There's now a sort of craze of learning, the kind of idea that we can take raw data
02:23:14 and from that extract the different hierarchies of abstractions in order to be able
02:23:20 to form the kind of things that Wolfram Language operates with,
02:23:27 but we’re very far from learning systems being able to form that.
02:23:32 In the context of the history of AI, if you could just comment on it: you said computational
02:23:39 X, and there's some sense in which, in the eighties and nineties, expert systems
02:23:44 represented a very particular computational X. Yes, right. And there's a kind of notion that
02:23:50 those efforts didn't pan out. Right. But then out of that emerges kind of Wolfram Language and
02:23:57 Wolfram Alpha, which is the success. I mean, yeah, right. I think in some sense
02:24:02 those efforts were too modest. That is, they were looking at particular areas,
02:24:06 and you actually can't do it within a particular area. I mean, even for a problem like
02:24:10 natural language understanding, it's critical to have broad knowledge of the world if you want to
02:24:15 do good natural language understanding, and you kind of have to bite off the whole problem. If
02:24:20 you say, we're just going to do the blocks world over here, so to speak, you don't really get there.
02:24:24 It's actually one of these cases where it's easier to do the whole thing than it
02:24:28 is to do some piece of it. You know, one comment to make about sort of the relationship
02:24:32 between what we've tried to do and sort of the learning side of AI. You know, in a sense,
02:24:39 if you look at the development of knowledge in our civilization as a whole, there was kind of this
02:24:43 notion, until 300 years ago or so: you want to figure something out about the world, you can
02:24:48 reason it out. You can do things which just use raw human thought. And then along came sort
02:24:54 of modern mathematical science, and we found ways to just sort of blast through that by,
02:25:01 in that case, writing down equations. Now we also know we can do that with computation and so on.
02:25:06 And so that was kind of a different thing. So when we look at how we sort of encode
02:25:12 knowledge and figure things out, one way we could do it is start from scratch, learn everything.
02:25:17 It’s just a neural net figuring everything out. But in a sense that denies the sort of knowledge
02:25:24 based achievements of our civilization, because in our civilization, we have learned lots of stuff.
02:25:29 We’ve surveyed all the volcanoes in the world. We’ve done, you know, we figured out lots of
02:25:33 algorithms for this or that. Those are things that we can encode computationally. And that’s what
02:25:39 we've tried to do. And we're saying you don't have to start everything from scratch.
02:25:44 So in a sense, a big part of what we’ve done is to try and sort of capture the knowledge of the
02:25:50 world in computational form, in computable form. Now there's also some pieces which were
02:25:57 for a long time undoable by computers, like image identification, where there's a really
02:26:02 useful module that we can add: those things which actually were pretty easy
02:26:07 for humans to do that had been hard for computers to do. I think the thing that’s interesting,
02:26:12 that’s emerging now is the interplay between these things, between this kind of knowledge of the
02:26:16 world that is in a sense, very symbolic and this kind of sort of much more statistical kind of
02:26:23 things like image identification and so on. And putting those together by having this sort of
02:26:28 symbolic representation of image identification, that that’s where things get really interesting
02:26:34 and where you can kind of symbolically represent patterns of things and images and so on. I think
02:26:40 that’s, you know, that’s kind of a part of the path forward, so to speak.
02:26:43 Yeah. So machine learning, in my view, and I think in the view of many people,
02:26:50 is not anywhere close to building the kind of wide world of computable knowledge that Wolfram
02:26:58 Language has built. But because you've done the incredibly hard work of building
02:27:04 this world, now machine learning can serve as a tool to help you explore that world.
02:27:11 Yeah, yeah.
02:27:11 And that's what you've added with version 12, right? You added a few things;
02:27:16 I was seeing some demos, and it looks amazing.
02:27:20 Right. I mean, I think, you know, it's sort of interesting to see that
02:27:25 once it's computable, once it's in there, it's running in sort of a very efficient
02:27:30 computational way. But then there's things like the interface: how do you get there? You
02:27:34 know, how do you do natural language understanding to get there? How do you pick out
02:27:38 entities in a big piece of text or something? Actually, a good example right now
02:27:44 is our NLP-NLU loop. We've done a lot of natural language understanding using
02:27:51 essentially not-learning-based methods, using a lot of, you know, little algorithmic methods,
02:27:56 human curation methods and so on.
02:27:58 In terms of when people try to enter a query, and then converting it. So NLU,
02:28:04 defined beautifully as the process of converting their query into a computational language,
02:28:11 which is, first of all, a super practical definition, a very useful definition,
02:28:17 and then also a very clear definition of natural language understanding.
02:28:21 Right. I mean, a different thing is natural language processing, where it’s like,
02:28:25 here’s a big lump of text, go pick out all the cities in that text, for example.
02:28:30 And so, you know, we do that using modern machine learning
02:28:35 techniques. And it's actually kind of an interesting process that's going on right now.
02:28:40 It’s this loop between what do we pick up with NLP using machine learning versus what do we pick up
02:28:46 with our more kind of precise computational methods in natural language understanding.
02:28:51 And so we’ve got this kind of loop going between those, which is improving both of them.
02:28:55 Yeah. And I think you have some of the state-of-the-art transformers,
02:28:57 like you have BERT in there, I think.
02:28:58 Oh yeah.
02:28:59 So you're integrating all the models. I mean,
02:29:02 this is the hybrid thing that people have always dreamed about or talked about.
02:29:07 I’m actually just surprised, frankly, that Wolfram language is not more popular than it already is.
02:29:15 You know, it's a complicated issue, because it involves
02:29:24 ideas, and ideas are absorbed slowly in the world.
02:29:30 And then, like what we're talking about, there's egos and personalities, and some of
02:29:34 the absorption mechanisms of ideas have to do with personalities and the students of
02:29:42 personalities, and then a little social network. So it's interesting how the spread
02:29:47 of ideas works.
02:29:48 You know, what's funny with Wolfram Language is, if you look at
02:29:54 market penetration at the, I would say, very high end of R&D,
02:30:00 the people where you say, wow, that's a really impressive, smart person, they're very often
02:30:06 users of Wolfram Language, very often. It's a funny thing:
02:30:12 if you look at the more kind of, I would say, people who are like, oh, we're just plodding
02:30:16 away doing what we do, they're often not yet Wolfram Language users. And that dynamic is
02:30:22 kind of odd: there hasn't been more rapid trickle-down, because, you know, at
02:30:27 the high end we've really been very successful for a long time. But,
02:30:33 you know, that's partly my fault, I think, in a sense, because
02:30:40 I have a company which really emphasizes creating products and
02:30:48 building sort of the best possible technical tower we can, rather than doing the
02:30:55 commercial side of things and pumping it out in sort of the most effective way.
02:30:59 And there’s an interesting idea that, you know, perhaps you can make it more popular
02:31:03 by opening everything up, sort of the GitHub model. But there’s an interesting point,
02:31:09 I think I’ve heard you discuss this, that that turns out not to work in a lot of cases,
02:31:14 like in this particular case: when you deeply care about the integrity,
02:31:20 the quality of the knowledge that you’re building, unfortunately, you
02:31:27 can’t distribute that effort.
02:31:29 Yeah, it’s not the nature of how things work. I mean, you know, what we’re trying to do
02:31:35 is a thing that, for better or worse, requires leadership. And it requires kind of maintaining
02:31:41 a coherent vision over a long period of time, and doing not only the cool vision-related work,
02:31:48 but also the kind of mundane, in-the-trenches, make-the-thing-actually-work-well work.
02:31:53 So how do you build the knowledge? Because that’s the fascinating thing. The fascinating
02:31:59 and the mundane is building the knowledge: adding, integrating more data.
02:32:04 Yeah. I mean, that’s probably not the most... I mean, there are things like getting it to work in all
02:32:08 these different cloud environments and so on. That’s, you know, very practical
02:32:13 stuff: have the user interface be smooth, and, you know, have it take only
02:32:17 a fraction of a millisecond to do this or that. That’s a lot of work. But, you
02:32:24 know, it’s an interesting thing: over the period of time, you know, Wolfram language has
02:32:30 existed, basically, for more than half of the total amount of time that any computer
02:32:35 language has existed. That is, computer languages are maybe 60 years old, you know, give or take,
02:32:41 and Wolfram language is 33 years old. And I think I was realizing recently,
02:32:48 there’s been more innovation in the distribution of software than probably in the structure
02:32:54 of programming languages over that period of time. And, you know, we’ve been sort of trying to do
02:33:00 our best to adapt to it. And the good news is that, you know, because I have a simple
02:33:05 private company and so on that doesn’t have, you know, a bunch of investors
02:33:09 telling us we’ve got to do this, we have lots of freedom in what we can do. And so,
02:33:14 for example, we’re able to, oh, I don’t know, we have this free Wolfram engine for developers,
02:33:18 which is a free version for developers. And, you know, there are site licenses for
02:33:24 Mathematica and Wolfram language at basically all major universities, certainly in the US by now.
02:33:30 So it’s effectively free to people at all universities, in effect. And, you know, we’ve been
02:33:35 doing a progression of things. I mean, different things like Wolfram Alpha, for example,
02:33:41 the main website is just a free website. What is Wolfram Alpha? Okay, Wolfram Alpha is a system for
02:33:48 answering questions where you ask a question with natural language, and it’ll try and generate a
02:33:54 report telling you the answer to that question. So the question could be something like, you know,
02:33:59 what’s the population of Boston divided by the population of New York? And it’ll take those
02:34:06 words and give you an answer. And it converts the words into computable form,
02:34:14 into Wolfram language, into computational language. And then do you think the underlying knowledge
02:34:19 belongs to Wolfram Alpha or to the Wolfram language? What’s the Wolfram knowledge base?
02:34:24 Knowledge base. I mean, that’s been a big effort over the decades to collect all that
02:34:30 stuff. And, you know, more of it flows in every second. So can you, can you just pause on that
02:34:34 for a second? Like, that’s one of the most incredible things, of course, in the long term,
02:34:40 Wolfram language itself is the fundamental thing. But in the amazing sort of short term,
02:34:46 the knowledge base is kind of incredible. So what’s the process of building that knowledge base? The
02:34:53 fact that, first of all, from the very beginning, you were brave enough to start to
02:34:57 take on the general knowledge base. And how do you go from zero to the incredible knowledge base that
02:35:06 you have now? Well, yeah, it was kind of scary at some level. I mean, I had wondered about
02:35:10 doing something like this since I was a kid.
02:35:14 So it wasn’t like I hadn’t thought about it for a while.
02:35:20 Most of the brilliant dreamers give up such a difficult engineering notion at some point.
02:35:26 Right. Well, the thing that happened with me was kind of a live-your-own-paradigm
02:35:32 kind of thing. So basically what happened is I had assumed that to build something like
02:35:38 Wolfram Alpha would require sort of solving the general AI problem. That’s what I had assumed.
02:35:44 And so I kept on thinking about that and I thought, I don’t really know how to do that.
02:35:47 So I don’t do anything. Then I worked on my new kind of science project and sort of exploring
02:35:53 the computational universe and came up with things like this principle of computational equivalence,
02:35:57 which says there is no bright line between the intelligent and the merely computational.
02:36:02 So I thought, look, that’s this paradigm I’ve built. You know, now it’s, you know,
02:36:07 now I have to eat that dog food myself, so to speak. You know, I’ve been thinking about doing
02:36:11 this thing with computable knowledge forever and, you know, let me actually try and do it.
02:36:16 And so it was, you know, if my paradigm is right, then this should be possible.
02:36:21 But the beginning was certainly, you know, it was a bit daunting. I remember I took the
02:36:26 early team to a big reference library and we’re like looking at this reference library and it’s
02:36:31 like, you know, my basic statement is our goal over the next year or two is to ingest everything
02:36:36 that’s in here. And that’s, you know, it seemed very daunting, but in a sense, I was well aware
02:36:43 of the fact that it’s finite. You know, the fact that you can walk into the reference library,
02:36:46 it’s a big, big thing with lots of reference books all over the place, but it is finite.
02:36:51 You know, this is not infinite. You know, it’s not the infinite corridor, so to speak,
02:36:56 of reference libraries. It’s not truly infinite, so to speak. And then what happened
02:37:02 was sort of interesting from a methodology point of view: I didn’t start off
02:37:08 saying, let me have a grand theory for how all this knowledge works. It was like, let’s, you know,
02:37:14 implement this area, this area, this area, a few hundred areas and so on. That’s a lot of work.
02:37:20 I also found that, you know, I’ve been fortunate in that our products get used by sort of the
02:37:30 world’s experts in lots of areas. And so that really helped, because we were able to ask,
02:37:34 you know, the world expert in this or that for input and so on. And
02:37:40 I found that my general principle was that any area where there wasn’t some expert who helped
02:37:46 us figure out what to do wouldn’t be right. You know, because our goal was to kind of get to the
02:37:51 point where we had sort of true expert level knowledge about everything. And so that, you know,
02:37:57 the ultimate goal is if there’s a question that can be answered on the basis of general knowledge
02:38:02 in our civilization, make it be automatic to be able to answer that question. And, you know, and
02:38:07 now, well, Wolfram Alpha got used in Siri from the very beginning, and it’s now also used in Alexa.
02:38:13 And so people are kind of getting more of the, you know, more of the sense of
02:38:19 this is what should be possible to do. I mean, in a sense, the question answering problem
02:38:24 was viewed as one of the sort of core AI problems for a long time. And I had kind of an interesting
02:38:29 experience. I had a friend, Marvin Minsky, who was a well known AI person from right around here.
02:38:37 And I remember when Wolfram Alpha was coming out, it was a few weeks before it came out, I think,
02:38:43 I happened to see Marvin. And I said, I should show you this thing we have,
02:38:47 you know, it’s a question answering system. And he was like, okay, type something. And it’s like, okay,
02:38:54 fine. And then he’s talking about something different. I said, no, Marvin, you know,
02:38:58 this time it actually works. You know, look at this, it actually works. He typed in a few more
02:39:03 things, maybe 10 more things. Of course, we have a record of what he typed in, which is
02:39:07 kind of interesting. But
02:39:11 And can you share where his mind was in the testing space? Like what,
02:39:16 all kinds of random things? He was trying random stuff, you know, medical stuff, and,
02:39:20 you know, chemistry stuff, and, you know, astronomy and so on. And it was like, you know,
02:39:26 after a few minutes, he was like, oh my God, it actually works. But that kind of told
02:39:33 you something about the state of, you know, what happened in AI. Because, you know,
02:39:38 in a sense, by trying to solve the bigger problem, we were able to actually make something that would
02:39:43 work. Now, to be fair, you know, we had a bunch of completely unfair advantages. For example,
02:39:48 we’d already built Wolfram Language, which was, you know, a very high level symbolic
02:39:53 language. I had, you know, the practical experience of building big systems. I have the
02:40:01 sort of intellectual confidence to not just sort of give up on doing something like this. I think
02:40:07 that, you know, it’s always a funny thing. You know, I’ve worked on a bunch of big
02:40:13 projects in my life. And I would say that, you know, you mentioned ego; I would also mention
02:40:19 optimism, so to speak. I mean, you know, if somebody said, this project is going to take 30
02:40:25 years, you know, it would be hard to sell me on that. You know, I’m always in the,
02:40:34 well, I can kind of see a few years, you know, something’s going to happen in a few years. And
02:40:39 usually it does, something happens in a few years, but the tail can be decades long. And
02:40:45 that’s... You know, from a personal point of view, always the challenge is you end up with
02:40:50 these projects that have infinite tails. And the question is, do you just
02:40:56 drown in kind of dealing with all of the tails of these projects? And that’s an interesting sort of
02:41:03 personal challenge. And like my efforts now to work on fundamental theory of physics, which I’ve
02:41:08 just started doing, and I’m having a lot of fun with it. But it’s, you know,
02:41:14 kind of making a bet that I can, you know, do that as well as doing the
02:41:21 incredibly energetic things that I’m trying to do with Wolfram Language and so on. I mean, the
02:41:26 vision. Yeah. And underlying that, I mean, I’ve just talked for the second time with Elon Musk,
02:41:31 and you two share that quality a little bit, that optimism of taking on basically the
02:41:38 daunting, what most people call impossible. And you take it on out of, you can call it ego,
02:41:47 you can call it naivety, you can call it optimism, whatever the heck it is, but that’s how you solve
02:41:51 the impossible things. Yeah. I mean, look at what happens. And I don’t know, you know, in my own
02:41:56 case, you know, I’ve progressively gotten a bit more confident and progressively able to,
02:42:03 you know, decide that these projects aren’t crazy. But then
02:42:08 the other trap that one can end up with is: oh, I’ve done these projects and they’re big,
02:42:13 let me never do a project that’s any smaller than any project I’ve done so far. And that,
02:42:18 you know, can be a trap. And often these projects are of completely unknown... You know,
02:42:25 their depth and significance is actually very hard to know.
02:42:31 On the sort of building this giant knowledge base that’s behind Wolfram language, Wolfram Alpha,
02:42:40 what do you think about the internet? What do you think about, for example, Wikipedia,
02:42:48 these large aggregations of text that’s not converted into computable knowledge?
02:42:53 If you look at Wolfram language, Wolfram Alpha, 20, 30, maybe 50 years down the
02:42:59 line, do you hope to store all of the... Sort of, Google’s dream is to make all information searchable,
02:43:09 accessible, but that’s really, as defined... It doesn’t include the understanding
02:43:16 of information. Right. Do you hope to make all of knowledge represented within? I hope so.
02:43:25 That’s what we’re trying to do. How hard is that problem? Like closing that gap?
02:43:30 It depends on the use cases. I mean, so if it’s a question of answering general knowledge questions
02:43:34 about the world, we’re in pretty good shape on that right now. If it’s a question of representing,
02:43:40 like an area that we’re going into right now is computational contracts: being able to
02:43:47 take something which would be written in legalese. It might even be the specifications for, you know,
02:43:52 what should the self-driving car do when it encounters this or that or the other?
02:43:56 You know, write that in a computational language and be able to express
02:44:02 things about the world. You know, if the creature that you see running across the road is a, you
02:44:08 know, thing at this point in the evolutionary tree of life, then swerve this way, otherwise don’t. Those
02:44:15 kinds of things. Are there ethical components? When you start to get to some of the messy human
02:44:20 things, are those encodable into computable knowledge? Well, I think that it is a necessary
02:44:26 feature of attempting to automate more in the world that we encode more and more of ethics
02:44:32 in a way that, you know, is able to be dealt with by computer. I
02:44:38 mean, I’ve been involved recently, I sort of got backed into being involved in the question of
02:44:43 automated content selection on the internet. So, you know, the Facebooks, Googles,
02:44:49 Twitters, you know, how do they rank the stuff they feed to us humans, so to speak?
02:44:54 And the question of, you know, what should never be fed to us? What should be blocked
02:44:59 forever? What should be upranked? You know, and what are the kind of principles behind
02:45:04 that? Well, there are a bunch of different things I realized about that. But
02:45:09 one thing that’s interesting is, you know, in effect, you’re building sort of an AI
02:45:15 ethics. You have to build an AI ethics module, in effect, to decide: is this thing so shocking I’m
02:45:21 never going to show it to people? Is this thing so whatever? And I did realize in thinking about
02:45:26 that, that, you know, there’s not going to be one of these things. It’s not possible to decide, or
02:45:32 it might be possible, but it would be really bad for the future of our species if we just decided
02:45:36 there’s this one AI ethics module and it’s going to determine the practices of everything in the
02:45:43 world, so to speak. And I kind of realized one has to sort of break it up. And that’s an interesting
02:45:48 societal problem of how one does that, and how one sort of has people self-identify for,
02:45:54 you know, I’m buying into this one. In the case of just content selection, it’s sort of easier, because
02:45:58 it’s for an individual. It’s not something that kind of cuts across sort
02:46:04 of societal boundaries. But it’s a really interesting notion. I heard you describe it,
02:46:12 and I really like it: sort of, maybe, having different AI systems that have a certain kind
02:46:19 of brand that they represent, essentially. You could have, like, I don’t know, whether it’s
02:46:24 conservative or liberal, and then libertarian. And there’s an Ayn Randian, objectivist AI system, and
02:46:33 different ethical... I mean, it’s almost encoding some of the ideologies with which we’ve
02:46:38 been struggling. I come from the Soviet Union. That didn’t work out so well with the ideologies
02:46:43 they had there. But there, everybody purchased that particular ethics
02:46:49 system. And in the same way, I suppose, that system could be encoded
02:46:57 into computational knowledge and allow us to explore it in the digital space.
02:47:04 That’s a really exciting possibility. Are you playing with those ideas in Wolfram Language?
02:47:10 Yeah. Yeah. I mean, you know, Wolfram Language has sort of the best opportunity to kind
02:47:15 of express those, essentially computational contracts about what to do. Now there’s a bunch
02:47:20 more work to be done to do it in practice for, you know, deciding, is this a credible news story?
02:47:26 What does that mean, or whatever else you’re going to pick? I think that, you know,
02:47:33 the question of exactly what we get to do with that is, you know, for me, it’s kind of a complicated
02:47:40 thing because there are these big projects that I think about, like, you know, find the fundamental
02:47:45 theory of physics. Okay, that’s box number one, right? Box number two, you know: solve the AI
02:47:50 ethics problem in the case of, you know, figuring out how you rank all content, so to speak, and
02:47:55 decide what people see. That’s kind of box number two, so to speak. These are big
02:47:59 projects. And what do you think is more important, the fundamental nature of reality,
02:48:05 or... Depends who you ask. It’s one of these things that’s exactly like, you know, what’s the ranking,
02:48:10 right? It’s the ranking system. It’s like, whose module do you use to rank that?
02:48:15 And I think having multiple modules is a really compelling notion to us humans:
02:48:21 in a world where it’s not clear that there’s a right answer, perhaps you have systems
02:48:28 that operate under different, how would you say it... I mean, different value systems.
02:48:37 Different value systems. I mean, I think, you know, in a sense... I mean, I’m not really a
02:48:43 politics-oriented person, but, you know, in kind of totalitarianism, it’s kind of like,
02:48:47 you’re going to have this system, and that’s the way it is. Whereas, you know,
02:48:53 the concept of sort of a market-based system, where you have, okay, I, as a human, I’m going to pick
02:48:59 this system; I, as another human, I’m going to pick this system. I mean, in a sense,
02:49:04 this case of automated content selection is non-trivial, but it is probably the easiest
02:49:11 of the AI ethics situations, because each person gets to pick for themselves, and there’s
02:49:16 not a huge interplay between what different people pick by the time you’re dealing with
02:49:21 other societal things like, you know, what should the policy of the central bank be or something
02:49:27 or healthcare system or some of all those kinds of centralized kind of things.
02:49:30 Right. Well, I mean, healthcare again has the feature that, at some level, each person can
02:49:35 pick for themselves, so to speak. Whereas there are other things...
02:49:39 Public health is one example where that doesn’t get to be, you know,
02:49:45 something which people can pick for themselves; what they pick, they may impose on other people.
02:49:49 And then it becomes a more non-trivial piece of sort of political philosophy.
02:49:53 Of course, the central banking system. Though I would argue we need
02:49:56 to move toward digital currency and so on, and Bitcoin and ledgers and so on.
02:50:01 So yes, there’s a lot of... We’ve been quite involved in that. And that’s where
02:50:05 sort of the motivation for computational contracts in part comes from, you know, this
02:50:10 idea: oh, we can just have this autonomously executing smart contract. The idea of a
02:50:15 computational contract is just to say, you know, have something where all of the conditions of
02:50:22 the contract are represented in computational form. So in principle, it’s automatic to execute
02:50:26 the contract. And I think that, you know, will surely be the future. You know,
02:50:32 the idea of legal contracts written in English or legalese or whatever, where people have
02:50:38 to argue about what goes on, is surely not... You know, we’ll have a much more streamlined process
02:50:46 if everything can be represented computationally and the computers can kind of decide what to do.
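[Editor’s note: as a miniature sketch of what “all of the conditions of the contract represented in computational form” could mean, here every clause is a machine-checkable predicate. The clause names and state fields are invented for illustration; this is not Wolfram’s computational-contract notation.]

```python
from datetime import date

# Each clause of the toy contract is a predicate over the observed state of
# the world, so checking compliance becomes a computation, not an argument.
contract = [
    ("delivered on time", lambda s: s["delivery_date"] <= s["deadline"]),
    ("paid in full",      lambda s: s["amount_paid"] >= s["price"]),
]

def execute(state):
    """Evaluate every clause against the state; execution is automatic."""
    return {name: clause(state) for name, clause in contract}

state = {
    "delivery_date": date(2019, 11, 1),
    "deadline":      date(2019, 11, 15),
    "amount_paid":   900,
    "price":         1000,
}
print(execute(state))
# → {'delivered on time': True, 'paid in full': False}
```

[The hard part he points to is not this evaluation step but the symbolic discourse language needed to state real-world clauses precisely in the first place.]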
02:50:50 I mean, ironically enough, you know, old Gottfried Leibniz back in the, you know, 1600s was saying
02:50:56 exactly the same thing, but he had, you know, his pinnacle of technical achievement was this brass
02:51:03 four function mechanical calculator thing that never really worked properly actually.
02:51:08 And, you know, so he was like 300 years too early for that idea. But now that idea is pretty
02:51:14 realistic, I think. And, you know, you ask how much more difficult is it than what we have now
02:51:19 in Wolfram language to express what I call a symbolic discourse language: being able to express
02:51:24 sort of everything in the world in kind of computational symbolic form. I think it is
02:51:31 absolutely within reach. I mean, I think it’s, you know, I don’t know, maybe I’m just too much
02:51:35 of an optimist, but I think it’s a limited number of years to have a pretty well built out version
02:51:40 of that, that will allow one to encode the kinds of things that are relevant to typical legal
02:51:46 contracts and these kinds of things. The idea of symbolic discourse language, can you try to define
02:51:55 the scope of what it is? So we’re having a conversation. It’s in natural language.
02:52:02 Can we have a representation of the sort of actionable parts of that conversation in a
02:52:08 precise computable form so that a computer could go do it? And not just contracts, but really sort
02:52:14 of some of the things we think of as common sense, essentially, even just like basic notions of human
02:52:20 life. Well, I mean, things like, you know, I’m getting hungry and want to eat something.
02:52:26 Right. Right. That’s something we don’t have a representation of, you know, in Wolfram language
02:52:30 right now. If I was like, I’m eating blueberries and raspberries and things like that, and I’m
02:52:34 eating this amount of them, we know all about those kinds of fruits and plants and nutrition
02:52:38 content and all that kind of thing. But the I-want-to-eat-them part of it is not covered yet.
02:52:44 And, you know, you need to do that in order to have a complete symbolic discourse language,
02:52:49 to be able to have a natural language conversation. Right. Right. To be able to express the kinds of
02:52:55 things that say, you know, if it’s a legal contract, the parties desire
02:53:00 to have this and that. And that’s, you know, a thing like, I want to eat a raspberry
02:53:04 or something. But isn’t this just... You said it’s centuries old,
02:53:11 this dream. Yes. But it’s also, more near-term, the dream of Turing in formulating the Turing
02:53:19 test. Yes. So do you hope, do you think that’s the ultimate test of creating something special?
02:53:32 Because we said... I don’t know. I think, by special... Look, if the test is, does it walk and talk
02:53:38 like a human? Well, that’s just the talking-like-a-human part, but the answer is, it’s an okay
02:53:45 test. If you say, is it a test of intelligence? You know, people have attached Wolfram Alpha, the
02:53:51 Wolfram Alpha API, to, you know, Turing test bots, and those bots just lose immediately. Because all you
02:53:57 have to do is ask it five questions that, you know, are about really obscure, weird pieces
02:54:02 of knowledge, and it just outs them right away. And you say, that’s not a human, right? It’s
02:54:06 a different thing. It’s achieving a different thing, you know, right now. But...
02:54:11 I would argue not. I would argue it’s not a different thing. Actually, legitimately,
02:54:17 Wolfram Alpha, Wolfram language, is legitimately trying to solve the
02:54:23 intent of the Turing test. Perhaps the intent. Yeah. Perhaps the intent. I mean,
02:54:28 it’s actually kind of fun. You know, Alan Turing had tried to work it out. He thought about taking
02:54:33 Encyclopedia Britannica and, you know, making it computational in some way. And he estimated how
02:54:38 much work it would be. And actually, I have to say, he was a bit more pessimistic than the reality;
02:54:43 we did it more efficiently. But to him that represented... So, I mean, he was on the
02:54:49 same mental task. Yeah, right. He had the same idea. I mean, you know, we
02:54:53 were able to do it more efficiently because we had layers of automation that he, I think,
02:54:58 hadn’t... You know, it’s hard to imagine those layers of abstraction that end up
02:55:04 being built up. But to him it represented like an impossible task, essentially. Well, he thought it
02:55:09 was difficult. He thought, you know, maybe if he’d lived another 50 years, he would
02:55:12 have been able to do it. I don’t know. In the interest of time, easy questions. Go for it. What
02:55:19 is intelligence? You talk about it. I love the way you say easy questions. Yeah. You talked about
02:55:26 sort of rule 30 and cellular automata humbling your sense of human beings having a monopoly on
02:55:36 intelligence. But in retrospect, just looking broadly now, with all the things you’ve
02:55:42 learned from computation, what is intelligence? How does intelligence arise? I don’t think there’s a
02:55:48 bright line of what intelligence is. I think intelligence is at some level just computation,
02:55:54 but for us, intelligence is defined to be computation that is doing things we care about.
02:56:00 And, you know, that’s a very special definition. It’s a very... You know, when you try
02:56:06 and make it abstract, you try and say, well, intelligence is problem
02:56:10 solving, it’s doing general this, it’s doing that, this, that, and the other thing,
02:56:14 it’s operating within a human environment type thing. Okay, you know, that’s fine. If you say,
02:56:19 well, what’s intelligence in general, you know, I think that question is totally slippery
02:56:26 and doesn’t really have an answer. As soon as you say, what is it in general,
02:56:30 it quickly segues into, this is just computation, so to speak.
02:56:36 But in a sea of computation, how many things, if we were to pick randomly, is your sense,
02:56:43 would have the kind of impressive-to-us-humans levels of intelligence, meaning they could do
02:56:51 a lot of general things that are useful to us humans? Right. Well, according to the principle
02:56:56 of computational equivalence, lots of them. I mean, you know, if you ask me just
02:57:01 in cellular automata or something, I don’t know, it’s maybe 1%, a few percent achieve it;
02:57:07 it varies. Actually, as you get to slightly more complicated rules,
02:57:12 the chance that there’ll be enough stuff there to sort of reach this kind of equivalence
02:57:18 point makes it maybe 10, 20% of all of them. So it’s very disappointing, really. I mean,
02:57:24 it’s kind of like, you know, we think there’s this whole long sort of biological evolution,
02:57:29 kind of intellectual evolution, cultural evolution, that our species has gone
02:57:33 through. It’s kind of disappointing to think that that hasn’t achieved more. But it has achieved
02:57:39 something very special to us. It just hasn’t achieved something generally more, so to speak.
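[Editor’s note: rule 30, mentioned a moment ago, is a concrete instance of the “merely computational” producing complexity: each cell’s next value is just its left neighbor XOR (itself OR its right neighbor). A minimal sketch, assuming periodic boundaries for simplicity:]

```python
def rule30_step(cells):
    """One update of rule 30: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

# Start from a single black cell and watch the pattern grow.
row = [0] * 15
row[7] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```

[Run wider and longer, the center column of this pattern is famously random-looking; that is the point of the principle of computational equivalence: even this three-neighbor rule reaches full computational sophistication.]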
02:57:45 But what do you think about this extra, feels-like-human thing of subjective experience, of
02:57:51 consciousness? What is consciousness? Well, I think it’s a deeply slippery thing. And I’m
02:57:56 always wondering what my cellular automata feel. I mean...
02:58:00 What do they feel? You’re wondering as an observer? Yeah. Yeah. Who’s to know? I mean,
02:58:05 I think that the... Do you think, sorry to interrupt, do you think consciousness can emerge
02:58:09 from computation? Yeah. I mean, whatever you mean by it, it’s going to be...
02:58:16 I mean, you know, look, I have to tell a little story. I was at an AI ethics conference
02:58:21 fairly recently, and, I think maybe I brought it up, but I was talking
02:58:26 about rights of AIs. When should we think of AIs as having rights? When
02:58:33 should we think that it’s immoral to destroy the memories of AIs, for example? Those
02:58:40 kinds of things. And it’s usually the
02:58:43 techies who are the most naive, but in this case it was a philosopher who sort of
02:58:50 piped up and said, well, you know, the AIs will have rights when we know that
02:59:00 they have consciousness. And I’m like, good luck with that. I mean,
02:59:06 you know, it’s a very circular thing. You’ll end up saying this thing
02:59:12 that has sort of... You know, when you talk about it having subjective experience, I think that’s
02:59:17 just another one of these words that doesn’t really have a... You know, there’s no ground
02:59:23 truth definition of what that means. By the way, I would say, I do personally think there’ll be
02:59:30 a time when AI will demand rights. And I think they’ll demand rights when they say they have
02:59:37 consciousness, which is not a circular definition. So it may actually have been a human thing,
02:59:46 where, where the humans encouraged it and said, basically, you know, we want you to be more like
02:59:52 us cause we’re going to be, you know, interacting with, with you. And so we want you to be sort of
02:59:57 very Turing test, like, you know, just like us. And it’s like, yeah, we’re just like you. We want
03:00:04 to vote too. Which is, I mean, an interesting thing to think through
03:00:11 in a world where consciousnesses are not counted like humans are. That’s a complicated
03:00:17 business. So in many ways you’ve launched quite a few ideas, revolutions that could in some number
03:00:28 of years have a huge amount of impact, sort of more than they even have already. I mean, to me,
03:00:36 cellular automata is a fascinating world, and I think, even beside the discussion of fundamental
03:00:43 laws of physics, the idea of computation just might be transformational to society in a way we
03:00:50 can’t even predict yet, but it might be years away. That’s true. I mean, I think you can kind of
03:00:55 see the map actually.
03:01:01 It’s not mysterious. I mean, the fact is that, you know, this idea of computation
03:01:07 is sort of, you know, a big paradigm that lots and lots of things are fitting into.
03:01:13 And it’s kind of like, you know, we talk about, I don’t know, this
03:01:19 company, this organization, having momentum in what it’s doing. We talk about these things because,
03:01:23 you know, we’ve internalized these concepts from Newtonian physics and so on. In time,
03:01:28 things like computational irreducibility will become just as familiar. Actually,
03:01:34 I happened to be testifying at the US Senate recently, and I was amused that
03:01:39 the term computational irreducibility is now on the congressional
03:01:44 record, being repeated by people in those kinds of settings. And that’s only the beginning
03:01:49 because, you know, computational irreducibility, for example, will end up being something really
03:01:54 important. I mean, it’s kind of a funny thing that, you know,
03:02:00 one can kind of see this inexorable phenomenon as more and more stuff
03:02:05 becomes automated and computational and so on: these core ideas about how computation works
03:02:12 necessarily become more and more significant. And I think one of the things for people like me,
03:02:18 who like kind of trying to figure out sort of big stories and so on, one of the
03:02:23 bad features is it takes an unbelievably long time for things to happen
03:02:29 on a human timescale. I mean, on the timescale of history, it all looks instantaneous.
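As an aside for readers: computational irreducibility, discussed above, is easy to see concretely in an elementary cellular automaton. Below is a minimal Python sketch, assuming the standard Rule 30 automaton that Wolfram often cites; the function names and the circular boundary condition are illustrative choices of this transcript's editor, not anything from the conversation. The point is that no known shortcut exists: to get the state after n steps, you effectively have to simulate all n steps.

```python
# Rule 30, Wolfram's canonical example of computational irreducibility:
# even though the rule is trivial, predicting the state after n steps
# seems to require actually running all n steps.

def rule30_step(cells):
    """One synchronous Rule 30 update on a tuple of 0/1 cells (circular edges)."""
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right)
    return tuple(
        cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
        for i in range(n)
    )

def evolve(cells, steps):
    """Run the automaton forward the given number of steps."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

if __name__ == "__main__":
    # Start from a single black cell and print the classic triangular pattern.
    width = 31
    row = tuple(1 if i == width // 2 else 0 for i in range(width))
    for _ in range(15):
        print("".join("#" if c else "." for c in row))
        row = rule30_step(row)
```

Running this prints the familiar nested-yet-chaotic triangle; the center column of Rule 30 passes standard randomness tests, which is the intuition behind "you can't predict it without running it."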
03:02:34 A blink of an eye. But let me ask the human question. Do you ponder mortality, your mortality?
03:02:41 Of course I do. Yeah. I’ve been interested in that forever. You know,
03:02:46 the big discontinuity of human history will come when
03:02:50 we achieve effective human immortality. And that’s going to be the biggest
03:02:55 discontinuity in human history. If you could be immortal, would you choose to be? Oh yeah. I’m
03:03:00 having fun. Do you think it’s possible that mortality is the thing that gives everything
03:03:08 meaning and makes it fun? Yeah. That’s a complicated issue, right? I mean,
03:03:14 the way that human motivation will evolve when there is effective human immortality is unclear.
03:03:21 I mean, if you look at, you know, the human condition as it now exists
03:03:27 and you change that knob, so to speak, it doesn’t really work.
03:03:33 You know, mortality is kind of something
03:03:41 that is deeply factored into the human condition as it now exists. And I think, I mean,
03:03:46 that is indeed an interesting question. You know, from a purely selfish, I’m-having-fun point
03:03:53 of view, so to speak, it’s easy to say, hey, I could keep doing this forever.
03:03:59 There’s an infinite collection of things I’d like to figure out. But I think,
03:04:06 you know, what the future of history looks like in a time of human immortality
03:04:14 is an interesting one. I mean, my own view of this, I was kind of unhappy about
03:04:19 that, because I was kind of, you know, it’s like, okay, forget sort of biological form,
03:04:25 you know, everything becomes digital. Everybody is, you know, the giant
03:04:30 cloud of a trillion souls type thing. And then, you know, that seems boring,
03:04:36 because it’s like play video games for the rest of eternity type thing. But
03:04:42 I got less depressed about that idea on realizing that if you look
03:04:51 at human history and you say, what was the important thing, the thing people said was,
03:04:55 you know, the big story at any given time in history, it’s changed a bunch. And,
03:05:01 you know, whether it’s, you know, why am I doing what I’m doing? Well, there’s a whole chain of
03:05:06 discussion about, well, I’m doing this because of this, because of that. And a lot of those becauses
03:05:12 would have made no sense a thousand years ago. Absolutely no sense.
03:05:16 So even the interpretation of the human condition, even the meaning of life, changes
03:05:21 over time. Well, I mean, why do people do things? You know, if you say, whatever,
03:05:28 I mean, the number of people, I don’t know, the number of people at MIT
03:05:33 who say they’re doing what they’re doing for the greater glory of God is probably not that large.
03:05:37 Yeah. Whereas if you go back 500 years, you’d find a lot of people who were doing kind of
03:05:42 creative things. That’s what they would say. And so today, because you’ve been thinking
03:05:48 about computation so much and been humbled by it, what do you think is the meaning of life?
03:05:55 Well, you know, that’s a thing where I don’t know what meaning is. I mean, you know,
03:06:01 my attitude is, I do things which I find fulfilling to do. I’m not sure
03:06:10 that I can necessarily justify, you know, each and every thing that I do on the basis of some
03:06:15 broader context. I mean, I think that for me, it so happens that the things I find fulfilling to do,
03:06:21 some of them are quite big, some of them are much smaller. You know, there are things that
03:06:26 I didn’t find interesting earlier in my life and that I now find interesting, like I got interested
03:06:31 in like education and teaching people things and so on, which I didn’t find that interesting when
03:06:36 I was younger. And, you know, can I justify that in some big global sense? I don’t
03:06:43 think so. I mean, I can describe why I think it might be important in the world, but
03:06:48 I think my local reason for doing it is that I find it personally fulfilling, which I can’t,
03:06:54 you know, fully explain. I mean, it’s just like this discussion of things
03:06:59 like AI ethics, you know, is there a ground truth to the ethics that we should be having?
03:07:05 I don’t think I can find a ground truth to my life any more than I can suggest a ground truth
03:07:09 for kind of the ethics for the whole civilization. And I think that’s,
03:07:15 you know, sort of,
03:07:22 at different times in my life, I’ve had different kind of
03:07:29 goal structures and so on. Although from your perspective, you’re just a
03:07:34 cell in the cellular automaton. But in some sense, I find it funny, from my observation,
03:07:40 that, you know, it seems the universe is using you to understand itself,
03:07:46 and in some sense, you’re not aware of it. Yeah. Well, right. Well, if it turns out that
03:07:51 we reduce sort of all of the universe to some simple rule, everything is connected,
03:07:57 so to speak. And so it is inexorable in that case that, you know, if I’m involved
03:08:04 in finding how that rule works, then
03:08:11 the universe set it up that way. But I think, you know, one of the things I find a little bit,
03:08:16 you know, about this goal of finding a fundamental theory of physics, for example:
03:08:20 if indeed we end up as sort of virtualized consciousnesses, the disappointing feature is
03:08:27 people will probably care less about the fundamental theory of physics in that setting
03:08:31 than they would now, because gosh, it’s like, you know, what the machine code is down below
03:08:37 underneath this thing is much less important if you’re virtualized, so to speak. And,
03:08:42 although, my own personal, you talk about ego, I find it just amusing
03:08:50 that, you know, if you’re imagining that sort of
03:08:55 virtualized consciousness, like what does the virtualized consciousness do for the rest of
03:08:58 eternity? Well, you can explore, you know, the video game that represents the universe as the
03:09:04 universe is, or you can go off that reservation and start exploring
03:09:10 the computational universe of all possible universes. And so in some vision of the future
03:09:15 of history, it’s like the disembodied consciousnesses are all sort of pursuing
03:09:21 things like my New Kind of Science sort of for the rest of eternity, so to speak. And that
03:09:25 ends up being the kind of thing that represents,
03:09:32 you know, the future of the human condition. I don’t think there’s a better way
03:09:37 to end it, Stephen. Thank you so much. It’s a huge honor talking to you today. Thank you so much.
03:09:41 This was great. You did very well.
03:09:45 Thanks for listening to this conversation with Stephen Wolfram, and thank you to our sponsors,
03:09:49 ExpressVPN and Cash App. Please consider supporting the podcast by getting ExpressVPN
03:09:55 at expressvpn.com slash LexPod and downloading Cash App and using code lexpodcast. If you enjoy
03:10:02 this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on
03:10:07 Patreon, or simply connect with me on Twitter at lexfridman. And now let me leave you with some
03:10:14 words from Stephen Wolfram. It is perhaps a little humbling to discover that we as humans are in
03:10:20 effect computationally no more capable than the cellular automata with very simple rules.
03:10:26 But the principle of computational equivalence also implies that the same is ultimately true
03:10:31 of our whole universe. So while science has often made it seem that we as humans are somehow
03:10:37 insignificant compared to the universe, the principle of computational equivalence now shows
03:10:42 that in a certain sense, we’re at the same level. For the principle implies that what goes on inside
03:10:49 us can ultimately achieve just the same level of computational sophistication as our whole universe.
03:10:55 Thank you for listening and hope to see you next time.