Transcript
00:00:00 The following is a conversation with Guido van Rossum, creator of Python, one of the most popular
00:00:05 programming languages in the world, used in almost any application that involves computers
00:00:11 from web back end development to psychology, neuroscience, computer vision, robotics, deep
00:00:17 learning, natural language processing, and almost any subfield of AI. This conversation is part of
00:00:24 MIT course on artificial general intelligence and the artificial intelligence podcast.
00:00:29 If you enjoy it, subscribe on YouTube, iTunes, or your podcast provider of choice, or simply connect
00:00:36 with me on Twitter at Lex Fridman, spelled F R I D. And now, here’s my conversation with Guido van
00:00:44 Rossum. You were born in the Netherlands in 1956. Your parents and the world around you were deeply,
00:00:53 deeply impacted by World War Two, as was my family from the Soviet Union. So with that context,
00:01:02 what is your view of human nature? Are some humans inherently good,
00:01:07 and some inherently evil? Or do we all have both good and evil within us?
00:01:12 Guido van Rossum Ouch, I did not expect such a deep one. I guess we all have good and evil
00:01:24 potential in us. And a lot of it depends on circumstances and context.
00:01:31 Lex Fridman Out of that world, at least on the Soviet Union side in Europe, sort of out of
00:01:38 suffering, out of challenge, out of that kind of set of traumatic events, often emerges beautiful
00:01:46 art, music, literature. In an interview I read or heard, you said you enjoyed Dutch literature
00:01:54 when you were a child. Can you tell me about the books that had an influence on you in your
00:01:59 childhood? Guido van Rossum
00:02:01 Well, as a teenager, my favorite Dutch author was a guy named Willem
00:02:09 Frederik Hermans, whose writing, certainly his early novels, were all about sort of
00:02:19 ambiguous things that happened during World War Two. I think he was a young adult during that time.
00:02:31 And he wrote about it a lot, and very interesting, very good books, I thought, I think.
00:02:40 Lex Fridman In a nonfiction way?
00:02:42 Guido van Rossum No, it was all fiction, but it was
00:02:46 very much set in the ambiguous world of resistance against the Germans,
00:02:54 where often you couldn’t tell whether someone was truly in the resistance or really a spy for the
00:03:03 Germans. And some of the characters in his novels sort of crossed that line, and you never really
00:03:11 find out what exactly happened.
00:03:13 Lex Fridman And in his novels, is there always a
00:03:16 good guy and a bad guy, the nature of good and evil? Is it clear there’s a hero?
00:03:22 Guido van Rossum No, his heroes are often more,
00:03:25 his main characters are often anti-heroes. And so they’re not very heroic. They’re often,
00:03:36 they fail at some level to accomplish their lofty goals.
00:03:40 Lex Fridman And looking at the trajectory
00:03:43 through the rest of your life, has literature, Dutch or English or in translation, had an impact
00:03:50 outside the technical world that you existed in?
00:03:54 Guido van Rossum I still read novels.
00:04:00 I don’t think that it impacts me that much directly.
00:04:05 Lex Fridman It doesn’t impact your work.
00:04:07 Guido van Rossum It’s a separate world.
00:04:10 My work is highly technical and sort of the world of art and literature doesn’t really
00:04:17 directly have any bearing on it.
00:04:19 Lex Fridman You don’t think there’s a creative element
00:04:22 to the design? You know, some would say design of a language is art.
00:04:26 Guido van Rossum I’m not disagreeing with that.
00:04:32 I’m just saying that sort of I don’t feel direct influences from more traditional art
00:04:39 on my own creativity.
00:04:40 Lex Fridman Right. Of course, just because you don’t feel it doesn’t mean
00:04:43 it’s not somehow deeply there in your subconscious.
00:04:46 Guido van Rossum Who knows?
00:04:48 Lex Fridman Who knows? So let’s go back to your early
00:04:51 teens. Your hobbies were building electronic circuits, building mechanical models.
00:04:57 If you can just put yourself back in the mind of that young Guido, 12, 13, 14, was
00:05:06 that grounded in a desire to create a system, to create something? Or was it more just
00:05:12 tinkering, just the joy of puzzle solving?
00:05:14 Guido van Rossum I think it was more the latter, actually.
00:05:18 Maybe towards the end of my high school period, I felt confident enough that
00:05:29 I designed my own circuits that were somewhat interesting. But a lot of that
00:05:39 time, I literally just took a model kit and followed the instructions, putting the things
00:05:46 together. I mean, I think the first few years that I built electronics kits, I really did
00:05:51 not have enough understanding of sort of electronics to really understand what I was doing. I mean,
00:05:59 I could debug it, and I could sort of follow the instructions very carefully, which has
00:06:06 always stayed with me. But I had a very naive model of how to build a circuit,
00:06:14 of how a transistor works. And I don’t think that in those days, I had any understanding
00:06:22 of coils and capacitors, which actually sort of was a major problem when I started to build
00:06:32 more complex digital circuits, because I was unaware of sort of the analog part of
00:06:39 how they actually work. And I would have things where the schematic looked fine –
00:06:50 everything looked fine – and it didn’t work. And what I didn’t realize was that
00:06:57 there was some megahertz level oscillation that was throwing the circuit off, because
00:07:02 I had a sort of – two wires were too close, or the switches were kind of poorly built.
00:07:13 But through that time, I think it’s really interesting and instructive to think about,
00:07:19 because echoes of it are in this time now. So in the 1970s, the personal computer was
00:07:24 being born. So did you sense, in tinkering with these circuits, did you sense the encroaching
00:07:33 revolution in personal computing? So if at that point, we would sit you down and ask
00:07:39 you to predict the 80s and the 90s, do you think you would be able to do so successfully
00:07:46 to unroll the process that’s happening? No, I had no clue. I remember, I think, in
00:07:55 the summer after my senior year – or maybe it was the summer after my junior year – well,
00:08:03 at some point, I think, when I was 18, I went on a trip to the Math Olympiad in Eastern
00:08:11 Europe, and there was like – I was part of the Dutch team, and there were other nerdy
00:08:16 kids that sort of had different experiences, and one of them told me about this amazing
00:08:23 thing called a computer. And I had never heard that word. My own explorations in electronics
00:08:31 were sort of about very simple digital circuits, and I had sort of – I had the idea that
00:08:40 I somewhat understood how a digital calculator worked. And so there is maybe some echoes
00:08:49 of computers there, but I never made that connection. I didn’t know that when my parents
00:08:56 were paying for magazine subscriptions using punched cards, that there was something called
00:09:03 a computer that was involved that read those cards and transferred the money between accounts.
00:09:08 I was also not really interested in those things. It was only when I went to university
00:09:15 to study math that I found out that they had a computer, and students were allowed to use
00:09:23 it.
00:09:24 And there were some – you’re supposed to talk to that computer by programming it.
00:09:27 What did that feel like, finding –
00:09:29 Yeah, that was the only thing you could do with it. The computer wasn’t really connected
00:09:35 to the real world. The only thing you could do was sort of – you typed your program
00:09:41 on a bunch of punched cards. You gave the punched cards to the operator, and an hour
00:09:47 later the operator gave you back your printout. And so all you could do was write a program
00:09:55 that did something very abstract. And I don’t even remember what my first forays into programming
00:10:04 were, but they were sort of doing simple math exercises and just to learn how a programming
00:10:13 language worked.
00:10:15 Did you sense, okay, first year of college, you see this computer, you’re able to have
00:10:21 a program and it generates some output. Did you start seeing the possibility of this,
00:10:29 or was it a continuation of the tinkering with circuits? Did you start to imagine
00:10:34 the personal computer? Did you see it as something that is a tool, like a word
00:10:42 processing tool, maybe for gaming or something? Or did you start to imagine that it could
00:10:47 go into the world of robotics, like the Frankenstein picture, that you could create
00:10:53 an artificial being, like another entity in front of you? Or did you not see the
00:10:59 computer that way?
00:11:00 I don’t think I really saw it that way. I was really more interested in the tinkering.
00:11:05 It’s maybe not a sort of a complete coincidence that I ended up sort of creating a programming
00:11:14 language which is a tool for other programmers. I’ve always been very focused on the sort
00:11:20 of activity of programming itself and not so much what happens with the program you
00:11:28 write.
00:11:29 Right.
00:11:30 I do remember, and I don’t remember, maybe in my second or third year, probably my second
00:11:37 actually, someone pointed out to me that there was this thing called Conway’s Game of Life.
00:11:46 You’re probably familiar with it. I think –
00:11:50 In the 70s, I think is when they came up with it.
00:11:53 So there was a Scientific American column by someone who did a monthly column about
00:12:00 mathematical diversions. I’m also blanking out on the guy’s name. It was very famous
00:12:06 at the time and I think up to the 90s or so. And one of his columns was about Conway’s
00:12:12 Game of Life and he had some illustrations and he wrote down all the rules and sort of
00:12:18 there was the suggestion that this was philosophically interesting, that that was why Conway had
00:12:23 called it that. And all I had was like the two-page photocopy of that article. I don’t
00:12:31 even remember where I got it. But it spoke to me and I remember implementing a version
00:12:40 of that game for the batch computer we were using where I had a whole Pascal program that
00:12:49 sort of read an initial situation from input and read some numbers that said do so many
00:12:56 generations and print every so many generations and then out would come pages and pages of
00:13:05 sort of things.
00:13:08 I remember much later I’ve done a similar thing using Python but that original version
00:13:18 I wrote at the time I found interesting because I combined it with some trick I had learned
00:13:27 during my electronics hobbyist times. Essentially, first on paper, I designed a simple circuit
00:13:36 built out of logic gates that took nine bits of input, which is sort of the cell and its
00:13:45 neighbors, and produced a new value for that cell, and it’s like a combination of a half
00:13:54 adder and some other circuitry. It’s actually a full adder. And so I had worked that out
00:14:01 and then I translated that into a series of Boolean operations on Pascal integers where
00:14:10 you could use the integers as bitwise values. And so I could basically generate 60 bits
00:14:21 of a generation in like eight instructions or so.
00:14:28 Nice.
00:14:29 So I was proud of that.
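[Editor’s note: a rough, hedged sketch, in Python rather than the original Pascal, of the trick Guido describes: pack each row of the board into an integer and update every cell in parallel with bit-sliced adders built from Boolean operations. The board size and glider below are only illustrative.]

```python
def life_step(rows, width):
    """One Game of Life generation. rows[y] is an int whose bit x is the
    cell at column x; edges do not wrap."""
    mask = (1 << width) - 1
    out = []
    for y, here in enumerate(rows):
        above = rows[y - 1] if y > 0 else 0
        below = rows[y + 1] if y + 1 < len(rows) else 0
        neighbours = [
            (above << 1) & mask, above, above >> 1,
            (here << 1) & mask,         here >> 1,
            (below << 1) & mask, below, below >> 1,
        ]
        # Bit-sliced neighbour count: ones/twos/fours hold the 1s, 2s and 4s
        # digits of every cell's count at once (chained half adders).
        ones = twos = fours = 0
        for n in neighbours:
            c1 = ones & n          # carry out of the 1s digit
            ones ^= n
            c2 = twos & c1         # carry out of the 2s digit
            twos ^= c1
            fours ^= c2            # a carry out of "fours" (count 8) is dropped; such cells die anyway
        # Alive next generation iff count == 3, or count == 2 and alive now.
        three = ones & twos & ~fours
        two = ~ones & twos & ~fours
        out.append((three | (two & here)) & mask)
    return out

# A small glider on a 10-wide board, stepped a few generations.
board = [0b00100, 0b00010, 0b01110, 0, 0, 0, 0, 0]
for _ in range(4):
    board = life_step(board, 10)
print([format(r, "010b") for r in board])
```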
00:14:32 It’s funny that you mentioned, so for people who don’t know Conway’s Game of Life, it’s
00:14:38 a cellular automaton where there are single compute units that kind of look at their neighbors
00:14:44 and figure out what they look like in the next generation based on the state of their
00:14:50 neighbors, and this is a deeply distributed system in concept at least. And then there’s simple
00:14:57 rules that all of them follow and somehow out of this simple rule when you step back
00:15:04 and look at what occurs, it’s beautiful. There’s an emergent complexity. Even though the underlying
00:15:13 rules are simple, there’s an emergent complexity. Now the funny thing is you’ve implemented
00:15:17 this and the thing you’re commenting on is you’re proud of a hack you did to make it
00:15:23 run efficiently. What you’re not commenting on is that it’s a beautiful implementation; you’re
00:15:30 not commenting on the fact that there’s an emergent complexity, that you’ve coded a simple
00:15:36 program and when you step back and you print out generation after generation,
00:15:42 stuff that you may have not predicted would happen is happening.
00:15:48 And is that magic? I mean, that’s the magic that all of us feel when we program. When
00:15:53 you create a program and then you run it and whether it’s Hello World or it shows something
00:15:59 on screen, if there’s a graphical component, are you seeing the magic in the mechanism
00:16:03 of creating that?
00:16:05 I think I went back and forth. As a student, we had an incredibly small budget of computer
00:16:14 time that we could use. It was actually measured. I once got in trouble with one of my professors
00:16:20 because I had overspent the department’s budget. It’s a different story.
00:16:29 I actually wanted the efficient implementation because I also wanted to explore what would
00:16:36 happen with a larger number of generations and a larger size of the board. Once the implementation
00:16:48 was flawless, I would feed it different patterns and then I think maybe there was a follow
00:16:57 up article where there were patterns that were like gliders, patterns that repeated
00:17:03 themselves after a number of generations but translated one or two positions to the right
00:17:13 or up or something like that. I remember things like glider guns. Well, you can Google Conway’s
00:17:21 Game of Life. People still go aww and ooh over it.
00:17:27 For a reason because it’s not really well understood why. I mean, this is what Stephen
00:17:32 Wolfram is obsessed about. We don’t have the mathematical tools to describe the kind of
00:17:40 complexity that emerges in these kinds of systems. The only way you can do is to run
00:17:45 it.
00:17:47 I’m not convinced that it’s sort of a problem that lends itself to classic mathematical
00:17:55 analysis.
00:17:59 One theory of how you create an artificial intelligence or artificial being is you kind
00:18:05 of have to, same with the Game of Life, you kind of have to create a universe and let
00:18:10 it run. That creating it from scratch in a design way, coding up a Python program that
00:18:17 creates a fully intelligent system may be quite challenging. You might need to create
00:18:22 a universe just like the Game of Life.
00:18:27 You might have to experiment with a lot of different universes before there is a set
00:18:33 of rules that doesn’t essentially always just end up repeating itself in a trivial
00:18:41 way.
00:18:42 Yeah, and Stephen Wolfram works with these simple rules, says that it’s kind of surprising
00:18:49 how quickly you find rules that create interesting things. You shouldn’t be able to, but somehow
00:18:55 you do. And so maybe our universe is laden with rules that will create interesting things
00:19:02 that might not look like humans, but emergent phenomena that’s interesting may not be as
00:19:07 difficult to create as we think.
00:19:09 Sure.
00:19:10 But let me sort of ask, at that time, some of the world, at least in popular press, was
00:19:17 kind of captivated, perhaps at least in America, by the idea of artificial intelligence, that
00:19:25 these computers would be able to think pretty soon. And did that touch you at all? In science
00:19:33 fiction or in reality in any way?
00:19:37 I didn’t really start reading science fiction until much, much later. I think as a teenager
00:19:49 I read maybe one bundle of science fiction stories.
00:19:54 Was it in the background somewhere, like in your thoughts?
00:19:57 That sort of the using computers to build something intelligent always felt to me, because
00:20:04 I felt I had so much understanding of what actually goes on inside a computer. I knew
00:20:12 how many bits of memory it had and how difficult it was to program. And sort of, I didn’t believe
00:20:22 at all that you could just build something intelligent out of that, that would really
00:20:30 sort of satisfy my definition of intelligence. I think the most influential thing that I
00:20:40 read in my early twenties was Gödel, Escher, Bach. That was about consciousness, and that was
00:20:48 a big eye opener in some sense.
00:20:54 In what sense? So, on your own brain, did you at the time or do you now see your own
00:21:00 brain as a computer? Or is there a total separation of the two? So yeah, you very pragmatically,
00:21:07 practically know the limits of memory, the limits of this sequential computing or weakly
00:21:14 parallelized computing, and you just know what we have now, and it’s hard to see how it creates.
00:21:21 But it’s also easy to see, as it was in the 40s, 50s, 60s, and now, at least similarities between
00:21:29 the brain and our computers.
00:21:31 Oh yeah, I mean, I totally believe that brains are computers in some sense. I mean, the rules
00:21:43 they use to play by are pretty different from the rules we can sort of implement in our
00:21:51 current hardware, but I don’t believe in, like, a separate thing that infuses us with
00:22:02 intelligence or consciousness or any of that. There’s no soul, I’ve been an atheist
00:22:10 probably from when I was 10 years old, just by thinking a bit about math and the universe,
00:22:18 and well, my parents were atheists. Now, I know that you could be an atheist and still
00:22:26 believe that there is something sort of about intelligence or consciousness that cannot
00:22:34 possibly emerge from a fixed set of rules. I am not in that camp. I totally see that,
00:22:44 sort of, given how many millions of years evolution took its time, DNA is a particular
00:22:53 machine that sort of encodes information and an unlimited amount of information in chemical
00:23:07 form and has figured out a way to replicate itself.
00:23:12 I thought that that was, maybe it’s 300 million years ago, but I thought it was closer
00:23:16 to half a billion years ago, that that’s sort of originated and it hasn’t really changed,
00:23:25 that the sort of the structure of DNA hasn’t changed ever since. That is like our binary
00:23:32 code that we have in hardware. I mean…
00:23:35 The basic programming language hasn’t changed, but maybe the programming itself…
00:23:39 Obviously, it did sort of, it happened to be a set of rules that was good enough to
00:23:48 sort of develop endless variability and sort of the idea of self replicating molecules
00:23:59 competing with each other for resources and one type eventually sort of always taking
00:24:05 over. That happened before there were any fossils, so we don’t know how that exactly
00:24:12 happened, but I believe it’s clear that that did happen.
00:24:17 Can you comment on consciousness and how you see it? Because I think we’ll talk about
00:24:25 programming quite a bit. We’ll talk about, you know, intelligence connecting to programming
00:24:30 fundamentally, but consciousness is this whole other thing. Do you think about it often as
00:24:38 a developer of a programming language and as a human?
00:24:45 Those are pretty sort of separate topics. Sort of my line of work working with programming
00:24:55 does not involve anything that goes in the direction of developing intelligence or consciousness,
00:25:02 but sort of privately as an avid reader of popular science writing, I have some thoughts
00:25:13 which is mostly that I don’t actually believe that consciousness is an all or nothing thing.
00:25:25 I have a feeling that, and I forget what I read that influenced this, but I feel that
00:25:35 if you look at a cat or a dog or a mouse, they have some form of intelligence. If you
00:25:41 look at a fish, it has some form of intelligence, and that evolution just took a long time,
00:25:54 but I feel that the sort of evolution of more and more intelligence that led to sort of
00:26:01 the human form of intelligence followed the evolution of the senses, especially the visual
00:26:12 sense. I mean, there is an enormous amount of processing that’s needed to interpret
00:26:20 a scene, and humans are still better at that than computers are.
00:26:28 And I have a feeling that there is a sort of, the reason that like mammals in particular
00:26:39 developed the levels of consciousness that they have and that eventually sort of going
00:26:47 from intelligence to self awareness and consciousness has to do with sort of being a robot that
00:26:55 has very highly developed senses.
00:26:58 Has a lot of rich sensory information coming in, so that’s a really interesting thought
00:27:04 that whatever that basic mechanism of DNA, whatever that basic building blocks of programming,
00:27:14 if you just add more abilities, more high resolution sensors, more sensors, you just
00:27:21 keep stacking those things on top, and this basic programming, in trying to survive, develops
00:27:26 very interesting things that start to appear to us humans like intelligence and consciousness.
00:27:35 As far as robots go, I think that the self driving cars have that sort of the greatest
00:27:42 opportunity of developing something like that, because when I drive myself, I don’t just
00:27:50 pay attention to the rules of the road.
00:27:53 I also look around and I get clues from that, oh, this is a shopping district, oh, here’s
00:28:01 an old lady crossing the street, oh, here is someone carrying a pile of mail, there’s
00:28:08 a mailbox, I bet you they’re going to cross the street to reach that mailbox.
00:28:14 And I slow down, and I don’t even think about that.
00:28:17 And so, there is so much where you turn your observations into an understanding of what
00:28:25 other consciousnesses are going to do, or what other systems in the world are going
00:28:32 to be, oh, that tree is going to fall.
00:28:37 I see sort of, I see much more of, I expect somehow that if anything is going to become
00:28:46 conscious, it’s going to be the self driving car and not the network of a bazillion computers
00:28:55 in a Google or Amazon data center that are all networked together to do whatever they
00:29:03 do.
00:29:04 So, in that sense, so you actually highlight, because that’s what I work on, autonomous vehicles,
00:29:09 you highlight the big gap between what we currently can’t do and what we truly need
00:29:15 to be able to do to solve the problem.
00:29:18 Under that formulation, then consciousness and intelligence is something that basically
00:29:24 a system should have in order to interact with us humans, as opposed to some kind of
00:29:30 abstract notion of a consciousness.
00:29:35 Consciousness is something that you need to have to be able to empathize, to be able to
00:29:39 fear, understand what the fear of death is, all these aspects that are important for interacting
00:29:47 with pedestrians, you need to be able to do basic computation based on our human desires
00:29:56 and thoughts.
00:29:57 And if you sort of, yeah, if you look at the dog, the dog clearly knows, I mean, I’m
00:30:02 not the dog owner, but I have friends who have dogs, the dogs clearly know what the
00:30:07 humans around them are going to do, or at least they have a model of what those humans
00:30:11 are going to do and they learn.
00:30:14 Some dogs know when you’re going out and they want to go out with you, they’re sad when
00:30:19 you leave them alone, they cry, they’re afraid because they were mistreated when they were
00:30:26 younger.
00:30:31 We don’t assign sort of consciousness to dogs, or at least not all that much, but I also
00:30:39 don’t think they have none of that.
00:30:42 So I think consciousness and intelligence are not all or nothing.
00:30:50 The spectrum is really interesting.
00:30:52 But in returning to programming languages and the way we think about building these
00:30:58 kinds of things, about building intelligence, building consciousness, building artificial
00:31:03 beings.
00:31:04 So I think one of the exciting ideas came in the 17th century with Leibniz, Hobbes,
00:31:10 Descartes, where there’s this feeling that you can convert all thought, all reasoning,
00:31:18 all the thing that we find very special in our brains, you can convert all of that into
00:31:24 logic.
00:31:25 So you can formalize it, formal reasoning, and then once you formalize everything, all
00:31:30 of knowledge, then you can just calculate and that’s what we’re doing with our brains
00:31:34 is we’re calculating.
00:31:35 So there’s this whole idea that this is possible, that this we can actually program.
00:31:40 But they weren’t aware of the concept of pattern matching in the sense that we are aware of
00:31:46 it now.
00:31:47 They sort of thought they had discovered incredible bits of mathematics like Newton’s calculus
00:31:57 and their sort of idealism, their sort of extension of what they could do with logic
00:32:06 and math sort of went along those lines and they thought there’s like, yeah, logic.
00:32:18 There’s like a bunch of rules and a bunch of input.
00:32:22 They didn’t realize that how you recognize a face is not just a bunch of rules but is
00:32:28 a shit ton of data plus a circuit that sort of interprets the visual clues and the context
00:32:39 and everything else and somehow can massively parallel pattern match against stored rules.
00:32:49 I mean, if I see you tomorrow here in front of the Dropbox office, I might recognize you.
00:32:56 Even if I’m wearing a different shirt, yeah, but if I see you tomorrow in a coffee shop
00:33:01 in Belmont, I might have no idea that it was you or on the beach or whatever.
00:33:06 I make those kind of mistakes myself all the time.
00:33:10 I see someone that I only know as like, oh, this person is a colleague of my wife’s and
00:33:16 then I see them at the movies and I didn’t recognize them.
00:33:20 But do you see those, you call it pattern matching, do you see that rules are unable
00:33:29 to encode that?
00:33:32 Everything you see, all the pieces of information you look around this room, I’m wearing a black
00:33:36 shirt, I have a certain height, I’m a human, all these, there’s probably tens of thousands
00:33:41 of facts you pick up moment by moment about this scene.
00:33:45 You take them for granted and you aggregate them together to understand the scene.
00:33:50 You don’t think all of that could be encoded to where at the end of the day, you can just
00:33:53 put it all on the table and calculate?
00:33:57 I don’t know what that means.
00:33:58 I mean, yes, in the sense that there is no actual magic there, but there are enough layers
00:34:08 of abstraction from the facts as they enter my eyes and my ears to the understanding of
00:34:17 the scene that I don’t think that AI has really covered enough of that distance.
00:34:29 It’s like if you take a human body and you realize it’s built out of atoms, well, that
00:34:37 is a uselessly reductionist view, right?
00:34:41 The body is built out of organs, the organs are built out of cells, the cells are built
00:34:46 out of proteins, the proteins are built out of amino acids, the amino acids are built
00:34:53 out of atoms and then you get to quantum mechanics.
00:34:58 So that’s a very pragmatic view.
00:34:59 I mean, obviously as an engineer, I agree with that kind of view, but you also have
00:35:03 to consider the Sam Harris view of, well, intelligence is just information processing.
00:35:13 Like you said, you take in sensory information, you do some stuff with it and you come up
00:35:17 with actions that are intelligent.
00:35:20 That makes it sound so easy.
00:35:22 I don’t know who Sam Harris is.
00:35:24 Oh, well, he’s a philosopher.
00:35:26 So like this is how philosophers often think, right?
00:35:29 And essentially that’s what Descartes was, is wait a minute, if there is, like you said,
00:35:33 no magic, so he basically says it doesn’t appear like there’s any magic, but we know
00:35:39 so little about it that it might as well be magic.
00:35:44 So just because we know that we’re made of atoms, just because we know we’re made
00:35:47 of organs, the fact that we know very little how to get from the atoms to organs in a way
00:35:53 that’s recreatable means that you shouldn’t get too excited just yet about the fact that
00:36:00 you figured out that we’re made of atoms.
00:36:02 Right, and the same about taking facts as our sensory organs take them in and turning
00:36:11 that into reasons and actions, that sort of, there are a lot of abstractions that we haven’t
00:36:19 quite figured out how to deal with those.
00:36:23 I mean, sometimes, I don’t know if I can go on a tangent or not, so if I take a simple
00:36:37 program that parses, say I have a compiler that parses a program, in a sense the input
00:36:45 routine of that compiler, of that parser, is a sensing organ, and it builds up a mighty
00:36:55 complicated internal representation of the program it just saw, it doesn’t just have
00:37:01 a linear sequence of bytes representing the text of the program anymore, it has an abstract
00:37:08 syntax tree, and I don’t know how many of your viewers or listeners are familiar with
00:37:15 compiler technology, but there’s…
00:37:18 Fewer and fewer these days, right?
00:37:21 That’s also true, probably.
00:37:24 People want to take a shortcut, but there’s sort of, this abstraction is a data structure
00:37:30 that the compiler then uses to produce outputs that is relevant, like a translation of that
00:37:37 program to machine code that can be executed by hardware, and then that data structure
00:37:47 gets thrown away.
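[Editor’s note: to make the abstract syntax tree idea concrete, here is a tiny illustration using Python’s standard-library ast module; the sample source line is arbitrary.]

```python
import ast

# The parser acts as the "sensing organ": a flat string of source text goes in,
# and a structured internal representation (an abstract syntax tree) comes out.
source = "total = sum(x * x for x in range(10))"
tree = ast.parse(source)
print(ast.dump(tree, indent=2))  # the tree a compiler would walk to emit code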
00:37:50 When a fish or a fly sees, sort of gets visual impulses, I’m sure it also builds up some
00:38:02 data structure, and for the fly that may be very minimal, a fly may have only a few, I
00:38:10 mean, in the case of a fly’s brain, I could imagine that there are few enough layers of
00:38:17 abstraction that it’s not much more than when it’s darker here than it is here, well
00:38:24 it can sense motion, because a fly sort of responds when you move your arm towards it,
00:38:29 so clearly its visual processing is intelligent, well, not intelligent, but it has an abstraction
00:38:39 for motion, and we still have similar things in, but much more complicated in our brains,
00:38:46 I mean, otherwise you couldn’t drive a car if you couldn’t, if you didn’t have an
00:38:50 incredibly good abstraction for motion.
00:38:53 Yeah, in some sense, the same abstraction for motion is probably one of the primary
00:38:59 sources of information for us; we just know what to do, I think we know what
00:39:05 to do with that, we’ve built up other abstractions on top.
00:39:08 We build much more complicated data structures based on that, and we build more persistent
00:39:14 data structures, sort of after some processing, some information sort of gets stored in our
00:39:20 memory pretty much permanently, and is available on recall, I mean, there are some things that
00:39:27 you sort of, you’re conscious that you’re remembering it, like, you give me your phone
00:39:34 number, I, well, at my age I have to write it down, but I could imagine, I could remember
00:39:39 those seven numbers, or ten digits, and reproduce them in a while, if I sort of repeat them
00:39:46 to myself a few times, so that’s a fairly conscious form of memorization.
00:39:53 On the other hand, how do I recognize your face, I have no idea.
00:39:57 My brain has a whole bunch of specialized hardware that knows how to recognize faces,
00:40:04 I don’t know how much of that is sort of coded in our DNA, and how much of that is
00:40:10 trained over and over between the ages of zero and three, but somehow our brains know
00:40:17 how to do lots of things like that, that are useful in our interactions with other humans,
00:40:26 without really being conscious of how it’s done anymore.
00:40:29 Right, so our actual day to day lives, we’re operating at the very highest level of abstraction,
00:40:36 we’re just not even conscious of all the little details underlying it.
00:40:39 There’s compilers on top of, it’s like turtles on top of turtles, or turtles all the way
00:40:43 down, there’s compilers all the way down, but that’s essentially, you say that there’s
00:40:48 no magic, that’s what I was trying to get at, I think, is that Descartes started
00:40:54 this whole train of saying that there’s no magic, I mean, there was all this beforehand.
00:40:59 Well didn’t Descartes also have the notion though that the soul and the body were fundamentally
00:41:06 separate?
00:41:07 Separate, yeah, I think he had to write in God in there for political reasons, so I don’t
00:41:11 know actually, I’m not a historian, but there’s notions in there that all of reasoning, all
00:41:17 of human thought can be formalized.
00:41:20 I think that continued in the 20th century with Russell and with Gödel’s incompleteness
00:41:28 theorem, this debate of what are the limits of the things that could be formalized, that’s
00:41:33 where the Turing machine came along, and this exciting idea, I mean, underlying a lot of
00:41:37 computing that you can do quite a lot with a computer.
00:41:43 You can encode a lot of the stuff we’re talking about in terms of recognizing faces and so
00:41:47 on, theoretically, in an algorithm that can then run on a computer.
00:41:53 And in that context, I’d like to ask about programming in a philosophical way, what does it mean
00:42:05 to program a computer?
00:42:06 So you said you write a Python program or compile a C++ program that compiles to some
00:42:13 byte code, it’s forming layers, you’re programming a layer of abstraction that’s higher, how
00:42:21 do you see programming in that context?
00:42:24 Can it keep getting higher and higher levels of abstraction?
00:42:29 I think at some point the higher levels of abstraction will not be called programming
00:42:35 and they will not resemble what we call programming at the moment.
00:42:44 There will not be source code, I mean, there will still be source code sort of at a lower
00:42:52 level of the machine, just like there are still molecules and electrons and sort of
00:42:59 proteins in our brains, but, and so there’s still programming and system administration
00:43:09 and who knows what, to keep the machine running, but what the machine does is a different level
00:43:15 of abstraction in a sense, and as far as I understand the way that for the last decade
00:43:23 or more people have made progress with things like facial recognition or the self driving
00:43:28 cars is all by endless, endless amounts of training data where at least as a lay person,
00:43:38 and I feel myself totally as a lay person in that field, it looks like the researchers
00:43:47 who publish the results don’t necessarily know exactly how their algorithms work, and
00:43:57 I often get upset when I sort of read a sort of a fluff piece about Facebook in the newspaper
00:44:04 or social networks and they say, well, algorithms, and that’s like a totally different interpretation
00:44:12 of the word algorithm, because for me, the way I was trained or what I learned when I
00:44:19 was eight or ten years old, an algorithm is a set of rules that you completely understand
00:44:25 that can be mathematically analyzed and you can prove things.
00:44:30 You can like prove that the Sieve of Eratosthenes produces all prime numbers and only prime
00:44:37 numbers.
00:44:38 Yeah.
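[Editor’s note: for reference, a minimal sketch of the sieve Guido means, the Sieve of Eratosthenes, an algorithm in the classic sense: a small set of rules you can fully inspect and prove correct.]

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return every prime <= n."""
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]                 # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False         # cross out every multiple of p
    return [i for i, flag in enumerate(is_prime) if flag]

print(primes_up_to(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```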
00:44:39 So I don’t know if you know who Andrej Karpathy is. I’m afraid not.
00:44:44 So he’s a head of AI at Tesla now, but he was at Stanford before and he has this cheeky
00:44:51 way of calling this concept software 2.0.
00:44:56 So let me disentangle that for a second.
00:45:00 So kind of what you’re referring to is the traditional, the algorithm, the concept of
00:45:06 an algorithm, something that’s there, it’s clear, you can read it, you understand it,
00:45:09 you can prove it’s functioning as kind of software 1.0.
00:45:14 And what software 2.0 is, is exactly what you described, which is you have neural networks,
00:45:21 which is a type of machine learning that you feed a bunch of data and that neural network
00:45:26 learns to do a function.
00:45:30 All you specify is the inputs and the outputs you want and you can’t look inside.
00:45:35 You can’t analyze it.
00:45:37 All you can do is train this function to map the inputs to the outputs by giving a lot
00:45:41 of data.
00:45:42 And so programming becomes getting a lot of data.
00:45:47 That’s what programming is.
00:45:48 Well, that would be programming 2.0.
00:45:52 To programming 2.0.
00:45:53 I wouldn’t call that programming.
00:45:55 It’s just a different activity.
00:45:57 Just like building organs out of cells is not called chemistry.
00:46:02 Well, so let’s just step back and think sort of more generally, of course.
00:46:09 But you know, it’s like as a parent teaching your kids, things can be called programming.
00:46:18 In that same sense, that’s how programming is being used.
00:46:22 You’re providing them data, examples, use cases.
00:46:27 So imagine writing a function not by, not with for loops and clearly readable text,
00:46:36 but more saying, well, here’s a lot of examples of what this function should take.
00:46:42 And here’s a lot of examples of, when it takes those inputs, it should do this.
00:46:47 And then figure out the rest.
00:46:50 So that’s the 2.0 concept.
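[Editor’s note: a minimal, hedged sketch of that “programming by examples” idea in plain Python: instead of writing the rule y = 2x + 1, we hand an optimizer input/output pairs and let it find parameters that reproduce the mapping. This is only a toy stand-in for training a neural network.]

```python
# We never write down the rule y = 2x + 1; we only supply input/output examples.
examples = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0), (4.0, 9.0)]

w, b = 0.0, 0.0           # the "program" is nothing but these two parameters
lr = 0.01
for _ in range(5000):     # gradient descent on the mean squared error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in examples) / len(examples)
    grad_b = sum(2 * (w * x + b - y) for x, y in examples) / len(examples)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))   # roughly 2.0 and 1.0, recovered from data alone
```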
00:46:52 And so the question I have for you is like, it’s a very fuzzy way.
00:46:58 This is the reality of a lot of these pattern recognition systems and so on.
00:47:01 It’s a fuzzy way of quote unquote programming.
00:47:05 What do you think about this kind of world?
00:47:09 Should it be called something totally different than programming?
00:47:13 If you’re a software engineer, does that mean you’re designing systems that can
00:47:21 be systematically tested, evaluated, that have a very specific specification, and then this
00:47:28 other fuzzy software 2.0 world, machine learning world, that’s something else totally?
00:47:33 Or is there some intermixing that’s possible?
00:47:41 Well the question is probably only being asked because we don’t quite know what that software
00:47:48 2.0 actually is.
00:47:51 And I think there is a truism that every task that AI has tackled in the past, at some point
00:48:02 we realized how it was done and then it was no longer considered part of artificial intelligence
00:48:09 because it was no longer necessary to use that term.
00:48:15 It was just, oh now we know how to do this.
00:48:21 And a new field of science or engineering has been developed and I don’t know if sort
00:48:30 of every form of learning or sort of controlling computer systems should always be called programming.
00:48:39 So I don’t know, maybe I’m focused too much on the terminology.
00:48:43 But I expect that there just will be different concepts where people with sort of different
00:48:56 education and a different model of what they’re trying to do will develop those concepts.
00:49:07 I guess if you could comment on another way to put this concept is, I think the kind of
00:49:17 functions that neural networks provide is different: as opposed to being able to upfront
00:49:23 prove that this should work for all cases you throw at it,
00:49:28 it’s worst case analysis versus average case analysis.
00:49:32 All you’re able to say is it seems on everything we’ve tested to work 99.9% of the time, but
00:49:39 we can’t guarantee it and it fails in unexpected ways.
00:49:44 We can’t even give you examples of how it fails in unexpected ways, but it’s like really
00:49:48 good most of the time.
00:49:50 Is there no room for that in current ways we think about programming?
00:50:00 Programming 1.0 is actually sort of getting to that point too, where sort of the ideal
00:50:11 of a bug free program has been abandoned long ago by most software developers.
00:50:21 We only care about bugs that manifest themselves often enough to be annoying.
00:50:30 And we’re willing to take the occasional crash or outage or incorrect result for granted
00:50:40 because we can’t possibly, we don’t have enough programmers to make all the code bug free
00:50:47 and it would be an incredibly tedious business.
00:50:50 And if you try to throw formal methods at it, it becomes even more tedious.
00:50:56 So every once in a while the user clicks on a link and somehow they get an error and the
00:51:05 average user doesn’t panic.
00:51:07 They just click again and see if it works better the second time, which often magically
00:51:14 it does, or they go up and they try some other way of performing their tasks.
00:51:21 So that’s sort of an end to end recovery mechanism and inside systems there is all
00:51:29 sorts of retries and timeouts and fallbacks and I imagine that that sort of biological
00:51:39 systems are even more full of that because otherwise they wouldn’t survive.
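[Editor’s note: a generic illustration of the retry-and-backoff mechanisms mentioned here, not anything specific to Dropbox’s systems; the flaky_operation function is purely hypothetical.]

```python
import random
import time

def call_with_retries(fn, attempts=5, base_delay=0.1):
    """Run fn(), retrying with exponential backoff when it raises."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                               # out of retries: surface the error
            time.sleep(base_delay * 2 ** attempt)   # wait a little longer each time

def flaky_operation():
    # Hypothetical stand-in for a network call that fails transiently about half the time.
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_retries(flaky_operation))
```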
00:51:46 Do you think programming should be taught and thought of as exactly what you just said?
00:51:54 I come from this kind of world where you’re always denying that fact.
00:52:01 In sort of basic programming education, the sort of the programs you’re having students
00:52:12 write are so small and simple that if there is a bug you can always find it and fix it.
00:52:23 Because the sort of programming as it’s being taught in some, even elementary, middle schools,
00:52:29 in high school, introduction to programming classes in college typically, it’s programming
00:52:36 in the small.
00:52:38 Very few classes sort of actually teach software engineering, building large systems.
00:52:47 Every summer here at Dropbox we have a large number of interns.
00:52:51 Every tech company on the West Coast has the same thing.
00:52:56 These interns are always amazed because this is the first time in their life that they
00:53:02 see what goes on in a really large software development environment.
00:53:12 Everything they’ve learned in college was almost always about a much smaller scale and
00:53:20 somehow that difference in scale makes a qualitative difference in how you do things and how you
00:53:27 think about it.
00:53:29 If you then take a few steps back into decades, 70s and 80s, when you were first thinking
00:53:36 about Python or just that world of programming languages, did you ever think that there would
00:53:41 be systems as large as underlying Google, Facebook, and Dropbox?
00:53:46 Did you, when you were thinking about Python?
00:53:51 I was actually always caught by surprise by sort of this, yeah, pretty much every stage
00:53:57 of computing.
00:53:59 So maybe just because you’ve spoken in other interviews, but I think the evolution of programming
00:54:07 languages is fascinating and it’s especially because it leads from my perspective towards
00:54:13 greater and greater degrees of intelligence.
00:54:15 I learned, the first programming language I played with in Russia was the turtle
00:54:21 Logo.
00:54:22 Logo, yeah.
00:54:24 And if you look, I just have a list of programming languages, all of which I’ve now played with
00:54:29 a little bit.
00:54:30 I mean, they’re all beautiful in different ways, from Fortran, COBOL, Lisp, Algol 60,
00:54:36 Basic, Logo again, C, and a few of the object-oriented ones that came along in the 60s, Simula, Pascal,
00:54:46 Smalltalk.
00:54:47 All of that leads.
00:54:48 They’re all the classics.
00:54:49 The classics.
00:54:50 Yeah.
00:54:51 The classic hits, right?
00:54:52 Scheme, that’s built on top of Lisp.
00:54:58 On the database side, SQL, C++, and all of that leads up to Python, Pascal too, and that’s
00:55:05 before Python, MATLAB, these kind of different communities, different languages.
00:55:10 So can you talk about that world?
00:55:13 I know that sort of Python came out of ABC, which I actually never knew as a language.
00:55:18 I just, having researched for this conversation, went back to ABC, and it looks remarkable –
00:55:24 it has a lot of annoying qualities, like all caps and so on, but underneath
00:55:31 those, there are elements of Python that are quite clearly already there.
00:55:35 That’s where I got all the good stuff.
00:55:37 All the good stuff.
00:55:38 So, but in that world, you’re swimming in these programming languages, were you focused on
00:55:41 just the good stuff in your specific circle, or did you have a sense of what is everyone
00:55:48 chasing?
00:55:49 You said that every programming language is built to scratch an itch.
00:55:57 Were you aware of all the itches in the community?
00:55:59 And if not, or if yes, I mean, what itch were you trying to scratch with Python?
00:56:05 Well, I’m glad I wasn’t aware of all the itches because I would probably not have been able
00:56:12 to do anything.
00:56:14 I mean, if you’re trying to solve every problem at once, you’ll solve nothing.
00:56:19 Well, yeah, it’s too overwhelming.
00:56:23 And so I had a very, very focused problem.
00:56:28 I wanted a programming language that sat somewhere in between shell scripting and C. And now,
00:56:41 arguably, there is like, one is higher level, one is lower level.
00:56:48 And Python is sort of a language of an intermediate level, although it’s still pretty much at
00:56:56 the high level end.
00:57:00 I was thinking about much more about, I want a tool that I can use to be more productive
00:57:11 as a programmer in a very specific environment.
00:57:16 And I also had given myself a time budget for the development of the tool.
00:57:22 And that was sort of about three months for both the design, like thinking through what
00:57:29 are all the features of the language syntactically and semantically, and how do I implement the
00:57:38 whole pipeline from parsing the source code to executing it.
00:57:43 So I think both with the timeline and the goals, it seems like productivity was at the
00:57:51 core of it as a goal.
00:57:54 So like, for me in the 90s, and the first decade of the 21st century, I was always doing
00:58:01 machine learning, AI programming, and for my research it was always in C++.
00:58:07 And then the other people who are a little more mechanical engineering, electrical engineering,
00:58:14 are MATLABby.
00:58:15 They’re a little bit more MATLAB focused.
00:58:18 Those are the world, and maybe a little bit Java too.
00:58:21 But people who are more interested in emphasizing the object oriented nature of things.
00:58:29 So within the last 10 years or so, especially with the oncoming of neural networks and these
00:58:34 packages that are built on Python to interface with neural networks, I switched to Python
00:58:41 and it’s just, I’ve noticed a significant boost that I can’t exactly, because I don’t
00:58:47 think about it, but I can’t exactly put into words why I’m just much, much more productive.
00:58:52 Just being able to get the job done much, much faster.
00:58:56 So how do you think, whatever that qualitative difference is, I don’t know if it’s quantitative,
00:59:01 it could be just a feeling, I don’t know if I’m actually more productive, but how
00:59:07 do you think about…
00:59:08 You probably are.
00:59:09 Yeah.
00:59:10 Well, that’s right.
00:59:11 I think there are elements, let me just speak to one aspect that I think was affecting
00:59:15 my productivity: with C++, I really enjoyed creating performant code and creating a beautiful
00:59:26 structure where everything that, you know, this kind of going into this, especially with
00:59:31 the newer and newer standards of templated programming of just really creating this beautiful
00:59:37 formal structure that I found myself spending most of my time doing that as opposed to getting
00:59:42 it done, parsing a file and extracting a few keywords or whatever the task I was trying to do.
00:59:47 So what is it about Python?
00:59:49 How do you think of productivity in general as you were designing it now, sort of through
00:59:54 the decades, last three decades, what do you think it means to be a productive programmer?
01:00:00 And how did you try to design it into the language?
01:00:03 There are different tasks and as a programmer, it’s useful to have different tools available
01:00:10 that sort of are suitable for different tasks.
01:00:13 So I still write C code, I still write shell code, but I write most of my things in Python.
01:00:25 Why do I still use those other languages? Because sometimes the task just demands it.
01:00:33 And well, I would say most of the time the task actually demands a certain language because
01:00:39 the task is not write a program that solves problem X from scratch, but it’s more like
01:00:45 fix a bug in existing program X or add a small feature to an existing large program.
01:00:56 But even if you’re not constrained in your choice of language by context like that, there
01:01:10 is still the fact that if you write it in a certain language, then you have this balance
01:01:21 between how long does it take you to write the code and how long does the code run?
01:01:31 And when you’re in the phase of exploring solutions, you often spend much more time
01:01:42 writing the code than running it because every time you’ve run it, you see that the output
01:01:50 is not quite what you wanted and you spend some more time coding.
01:01:58 And a language like Python just makes that iteration much faster because there are fewer
01:02:06 details that you have to get right before your program compiles and runs.
01:02:19 There are libraries that do all sorts of stuff for you, so you can sort of very quickly take
01:02:26 a bunch of existing components, put them together, and get your prototype application running.
01:02:36 Just like when I was building electronics, I was using a breadboard most of the time,
01:02:42 so I had this sprawled-out circuit that if you shook it, it would stop working because it
01:02:51 was not put together very well, but it functioned and all I wanted was to see that it worked
01:02:58 and then move on to the next schematic or design or add something to it.
01:03:05 Once you’ve sort of figured out, oh, this is the perfect design for my radio or light
01:03:10 sensor or whatever, then you can say, okay, how do we design a PCB for this?
01:03:15 How do we solder the components in a small space?
01:03:19 How do we make it so that it is robust against, say, voltage fluctuations or mechanical disruption?
01:03:32 I know nothing about that when it comes to designing electronics, but I know a lot about
01:03:37 that when it comes to writing code.
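[Editor’s note: a hedged example of the “breadboard-style” throwaway script described above, gluing standard-library pieces together; the file name events.jsonl is hypothetical.]

```python
import json
from collections import Counter
from pathlib import Path

# Glue a few standard-library pieces together to answer a one-off question,
# with no attention yet to robustness, structure, or performance.
# ("events.jsonl" is a hypothetical file containing one JSON record per line.)
records = [json.loads(line) for line in Path("events.jsonl").read_text().splitlines()]
by_user = Counter(r.get("user", "unknown") for r in records)
print(by_user.most_common(3))
```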
01:03:40 So the initial steps are efficient, fast, and there’s not much stuff that gets in the
01:03:46 way, but you’re kind of describing, like Darwin described the evolution of species, right?
01:03:56 You’re observing what is true about Python.
01:04:00 Now if you take a step back, if the act of creating languages is art and you had three
01:04:07 months to do it, initial steps, so you just specified a bunch of goals, sort of things
01:04:15 that you observe about Python, perhaps you had those goals, but how do you create the
01:04:19 rules, the syntactic structure, the features that result in those?
01:04:25 So I have in the beginning and I have follow up questions about through the evolution of
01:04:29 Python too, but in the very beginning when you were sitting there creating the lexical
01:04:35 analyzer or whatever.
01:04:37 Python was still a big part of it because I sort of, I said to myself, I don’t want
01:04:47 to have to design everything from scratch, I’m going to borrow features from other languages
01:04:53 that I like.
01:04:54 Oh, interesting.
01:04:55 So you basically, exactly, you first observe what you like.
01:04:58 Yeah, and so that’s why if you’re 17 years old and you want to sort of create a programming
01:05:05 language, you’re not going to be very successful at it because you have no experience with
01:05:11 other languages, whereas I was in my, let’s say mid 30s, I had written parsers before,
01:05:24 so I had worked on the implementation of ABC, I had spent years debating the design of ABC
01:05:30 with its authors, with its designers, I had nothing to do with the design, it was designed
01:05:37 fully as it ended up being implemented when I joined the team.
01:05:42 But so you borrow ideas and concepts and very concrete sort of local rules from different
01:05:51 languages like the indentation and certain other syntactic features from ABC, but I chose
01:05:58 to borrow string literals and how numbers work from C and various other things.
01:06:07 So then, if you take that further, you’ve had this funny sounding, but I think
01:06:13 surprisingly accurate and at least practical title of benevolent dictator for life for
01:06:21 quite a while, you know, for the last three decades or whatever – or no, not the actual title,
01:06:25 but functionally speaking.
01:06:27 So you had to make decisions, design decisions.
01:06:34 Can you maybe, let’s take Python 2, so releasing Python 3 as an example.
01:06:41 It’s not backward compatible with Python 2 in ways that a lot of people know.
01:06:47 So what was that deliberation, discussion, decision like?
01:06:50 Yeah.
01:06:51 What was the psychology of that experience?
01:06:58 Do you regret any aspects of how that experience went?
01:06:58 Well, yeah, so it was a group process really.
01:07:03 At that point, even though I was BDFL in name and certainly everybody sort of respected
01:07:11 my position as the creator and the current sort of owner of the language design, I was
01:07:22 looking at everyone else for feedback.
01:07:26 Sort of Python 3.0 in some sense was sparked by other people in the community pointing
01:07:35 out, oh, well, there are a few issues that sort of bite users over and over.
01:07:46 Can we do something about that?
01:07:48 And for Python 3, we took a number of those Python warts, as they were called at the time,
01:07:56 and we said, can we try to sort of make small changes to the language that address those
01:08:04 warts?
01:08:06 And we had sort of in the past, we had always taken backwards compatibility very seriously.
01:08:15 And so many Python warts in earlier versions had already been resolved because they could
01:08:20 be resolved while maintaining backwards compatibility or sort of using a very gradual path of evolution
01:08:29 of the language in a certain area.
01:08:31 And so we were stuck with a number of warts that were widely recognized as problems, not
01:08:39 like roadblocks, but nevertheless sort of things that some people trip over and you know that
01:08:47 that’s always the same thing that people trip over when they trip.
01:08:52 And we could not think of a backwards compatible way of resolving those issues.
01:08:58 But it’s still an option to not resolve the issues, right?
01:09:01 And so yes, for a long time, we had sort of resigned ourselves to, well, okay, the language
01:09:07 is not going to be perfect in this way and that way and that way.
01:09:13 And we sort of, certain of these, I mean, there are still plenty of things where you
01:09:19 can say, well, that particular detail is better in Java or in R or in Visual Basic or whatever.
01:09:32 And we’re okay with that because, well, we can’t easily change it.
01:09:37 It’s not too bad.
01:09:38 We can do a little bit with user education or we can have a static analyzer or warnings
01:09:47 in the parser or something.
01:09:49 But there were things where we thought, well, these are really problems that are not going
01:09:54 away.
01:09:55 They are getting worse in the future.
01:10:00 We should do something about that.
01:10:03 But ultimately there is a decision to be made, right?
01:10:05 So was that the toughest decision in the history of Python you had to make as the benevolent
01:10:13 dictator for life?
01:10:15 Or if not, maybe even on a smaller scale, what was the decision
01:10:20 that you were really torn up about?
01:10:22 Well, the toughest decision was probably to resign.
01:10:25 All right, let’s go there.
01:10:28 Hold on a second then.
01:10:29 Let me just, because in the interest of time too, because I have a few cool questions for
01:10:33 you and let’s touch a really important one because it was quite dramatic and beautiful
01:10:38 in certain kinds of ways.
01:10:40 In July this year, three months ago, you wrote, now that PEP 572 is done, I don’t ever want
01:10:47 to have to fight so hard for a PEP and find that so many people despise my decisions.
01:10:52 I would like to remove myself entirely from the decision process.
01:10:56 I’ll still be there for a while as an ordinary core developer and I’ll still be available
01:11:01 to mentor people, possibly more available.
01:11:05 But I’m basically giving myself a permanent vacation from being BDFL, benevolent dictator
01:11:11 for life.
01:11:12 And you all will be on your own.
01:11:14 First of all, it’s almost Shakespearean.
01:11:19 I’m not going to appoint a successor.
01:11:22 So what are you all going to do?
01:11:24 Create a democracy, anarchy, a dictatorship, a federation?
01:11:29 So that was a very dramatic and beautiful set of statements.
01:11:34 Its open ended nature called on the community to create a future for Python.
01:11:40 There’s just kind of a beautiful aspect to it.
01:11:43 So, and it was dramatic, you know, what was making that decision like?
01:11:48 What was on your heart, on your mind, stepping back now a few months later?
01:11:54 I’m glad you liked the writing because it was actually written pretty quickly.
01:12:02 It was literally something like after months and months of going around in circles, I had
01:12:14 finally approved PEP 572, whose design I had a big hand in, although I didn’t initiate
01:12:26 it originally.
01:12:27 I sort of gave it a bunch of nudges in a direction that would be better for the language.
01:12:36 So sorry, just to ask, is async IO, that’s the one or no?
01:12:40 PEP 572 was actually a small feature, which is assignment expressions.
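For reference, a minimal sketch of what PEP 572's assignment expressions (the ":=" or "walrus" operator) look like in practice; the variable names and data here are made up purely for illustration:

```python
import re

lines = ["id=17", "name=guido", "comment"]

# Bind and test in a single expression instead of calling re.search twice
# or moving the assignment onto its own line before the if.
matches = [m.group(1) for line in lines if (m := re.search(r"=(\w+)", line))]
print(matches)  # ['17', 'guido']

# The same idea in a while loop: keep consuming values until a falsy one appears.
values = iter([3, 1, 0, 7])
while (v := next(values)):
    print(v)  # prints 3, then 1, then stops at 0
```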
01:12:49 That had been, there was just a lot of debate where a lot of people claimed that they knew
01:12:58 what was Pythonic and what was not Pythonic, and they knew that this was going to destroy
01:13:04 the language.
01:13:06 This was like a violation of Python’s most fundamental design philosophy, and I thought
01:13:11 that was all bullshit because I was in favor of it, and I would think I know something
01:13:17 about Python’s design philosophy.
01:13:19 So I was really tired and also stressed by that whole thing, and literally after sort of announcing
01:13:26 I was going to accept it, a certain Wednesday evening I had finally sent the email, it’s
01:13:34 accepted.
01:13:35 I can just go implement it.
01:13:38 So I went to bed feeling really relieved, that’s behind me.
01:13:44 And I wake up Thursday morning, 7 a.m., and I think, well, that was the last one that’s
01:13:54 going to be such a terrible debate, and that’s the last time that I let myself be so stressed
01:14:03 out about a PEP decision.
01:14:06 I should just resign.
01:14:07 I’ve been sort of thinking about retirement for half a decade, I’ve been joking and sort
01:14:15 of mentioning retirement, sort of telling the community at some point in the future
01:14:22 I’m going to retire, don’t take that FL part of my title too literally.
01:14:29 And I thought, okay, this is it.
01:14:32 I’m done, I had the day off, I wanted to have a good time with my wife, we were going to
01:14:39 a little beach town nearby, and in I think maybe 15, 20 minutes I wrote that thing that
01:14:48 you just called Shakespearean.
01:14:51 The funny thing is I didn’t even realize what a monumental decision it was, because
01:15:01 five minutes later I read a link back to my message on Twitter, where people were
01:15:09 already discussing it: Guido resigned as the BDFL.
01:15:15 And I had posted it on an internal forum that I thought was only read by core developers,
01:15:22 so I thought I would at least have one day before the news would sort of get out.
01:15:28 The "on your own" aspect also had quite a powerful element
01:15:36 of uncertainty about what lies ahead, but can you also just briefly talk about, for example,
01:15:43 I play guitar as a hobby for fun, and whenever I play people are super positive, super friendly,
01:15:49 they’re like, this is awesome, this is great.
01:15:52 But sometimes, as an outside observer, I enter the programming community and there
01:15:57 sometimes seem to be camps on whatever the topic, and the two, or more,
01:16:05 camps are often pretty harsh at criticizing the opposing camps.
01:16:11 As an onlooker, I may be totally wrong on this, but what do you think of this?
01:16:14 Yeah, holy wars are sort of a favorite activity in the programming community.
01:16:19 And what is the psychology behind that?
01:16:22 Is that okay for a healthy community to have?
01:16:25 Is that a productive force ultimately for the evolution of a language?
01:16:29 Well, if everybody is patting each other on the back and never telling the truth, it would
01:16:39 not be a good thing.
01:16:40 I think there is a middle ground where sort of being nasty to each other is not okay,
01:16:52 but there is a middle ground where there is healthy ongoing criticism and feedback that
01:17:01 is very productive.
01:17:04 And I mean, at every level you see that.
01:17:07 I mean, someone proposes to fix a very small issue in a code base, chances are that some
01:17:17 reviewer will sort of respond by saying, well, actually, you can do it better the other way.
01:17:27 When it comes to deciding on the future of the Python core developer community, we now
01:17:34 have, I think, five or six competing proposals for a constitution.
01:17:41 So that future, do you have a fear of that future, do you have a hope for that future?
01:17:48 I’m very confident about that future.
01:17:51 By and large, I think that the debate has been very healthy and productive.
01:17:58 And I actually, when I wrote that resignation email, I knew that Python was in a very good
01:18:07 spot and that the Python core developer community, the group of 50 or 100 people who sort of
01:18:16 write or review most of the code that goes into Python, those people get along very well
01:18:24 most of the time.
01:18:27 A large number of different areas of expertise are represented, different levels of experience
01:18:40 in the Python core dev community, different levels of experience completely outside it
01:18:45 in software development in general, large systems, small systems, embedded systems.
01:18:53 So I felt okay resigning because I knew that the community can really take care of itself.
01:19:03 And out of a grab bag of future feature developments, let me ask if you can comment, maybe on all
01:19:12 very quickly, concurrent programming, parallel computing, async IO.
01:19:19 These are things that people have expressed hope, complained about, whatever, have discussed
01:19:24 on Reddit.
01:19:25 Async IO, so the parallelization in general, packaging, I was totally clueless on this.
01:19:32 I just used pip to install stuff, but apparently there’s pipenv, poetry, there’s these dependency
01:19:38 packaging systems that manage dependencies and so on.
01:19:41 They’re emerging and there’s a lot of confusion about what’s the right thing to use.
01:19:45 Then also functional programming, are we going to get more functional programming or not,
01:19:56 this kind of idea.
01:19:59 And of course the GIL connected to the parallelization, I suppose, the global interpreter lock problem.
01:20:08 Can you just comment on whichever you want to comment on?
01:20:12 Well, let’s take the GIL and parallelization and async IO as one topic.
01:20:25 I’m not that hopeful that Python will develop into a sort of high concurrency, high parallelism
01:20:35 language.
01:20:37 That’s sort of the way the language is designed, the way most users use the language, the way
01:20:44 the language is implemented, all make that a pretty unlikely future.
01:20:50 So you think it might not even need to be, really, the way people use it, it might not be
01:20:56 something that should be of great concern.
01:20:58 I think async IO is a special case because it sort of allows overlapping IO and only
01:21:05 IO and that is a sort of best practice of supporting very high throughput IO, many connections
01:21:18 per second.
01:21:21 I’m not worried about that.
01:21:22 I think async IO will evolve.
01:21:25 There are a couple of competing packages.
01:21:27 We have some very smart people who are sort of pushing us to make async IO better.
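A minimal sketch of the overlapping-I/O idea described here, in the form of a toy asyncio echo server; the host, port, and handler are made-up examples, not code discussed in the interview:

```python
import asyncio

async def handle(reader, writer):
    # While this coroutine awaits slow network I/O, the event loop keeps
    # servicing all the other open connections on the same thread.
    data = await reader.readline()
    writer.write(data.upper())
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    server = await asyncio.start_server(handle, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # uncomment to actually run the server
```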
01:21:36 Parallel computing, I think that Python is not the language for that.
01:21:43 There are ways to work around it, but you can’t expect to write an algorithm in Python
01:21:53 and have a compiler automatically parallelize that.
01:21:57 What you can do is use a package like NumPy and there are a bunch of other very powerful
01:22:03 packages that sort of use all the CPUs available because you tell the package, here’s the data,
01:22:12 here’s the abstract operation to apply over it, go at it, and then we’re back in the C++
01:22:19 world.
01:22:20 Those packages are themselves implemented usually in C++.
01:22:24 That’s where TensorFlow and all these packages come in, where they parallelize across GPUs,
01:22:28 for example, they take care of that for you.
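A small sketch of the pattern described above, with made-up array sizes: you hand the package the data plus one abstract operation, and the heavy lifting happens outside the Python interpreter:

```python
import numpy as np

a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)

# One call hands the whole operation to NumPy; the work runs in compiled
# code (typically an optimized BLAS library that can use multiple CPU cores),
# not in millions of Python-level loop iterations.
c = a @ b
print(c.sum())
```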
01:22:30 In terms of packaging, can you comment on the future of packaging in Python?
01:22:36 Packaging has always been my least favorite topic.
01:22:42 It’s a really tough problem because the OS and the platform want to own packaging, but
01:22:55 their packaging solution is not specific to a language.
01:23:01 If you take Linux, there are two competing packaging solutions for Linux or for Unix
01:23:07 in general, but they all work across all languages.
01:23:15 Several languages like Node, JavaScript, Ruby, and Python all have their own packaging solutions
01:23:24 that only work within the ecosystem of that language.
01:23:29 What should you use?
01:23:31 That is a tough problem.
01:23:34 My own approach is I use the system packaging system to install Python, and I use the Python
01:23:43 packaging system then to install third party Python packages.
01:23:49 That’s what most people do.
01:23:51 Ten years ago, Python packaging was really a terrible situation.
01:23:56 Nowadays, pip is the future, there is a separate ecosystem for numerical and scientific Python
01:24:05 based on Anaconda.
01:24:08 Those two can live together.
01:24:09 I don’t think there is a need for more than that.
01:24:13 That’s packaging.
01:24:14 Well, at least for me, that’s where I’ve been extremely happy.
01:24:18 I didn’t even know this was an issue until it was brought up.
01:24:22 In the interest of time, let me sort of skip through a million other questions I have.
01:24:27 So I watched the five and a half hour oral history that you’ve done with the Computer
01:24:32 History Museum, and the nice thing about it, it gave this, because of the linear progression
01:24:37 of the interview, it gave this feeling of a life, you know, a life well lived with interesting
01:24:44 things in it, sort of, I would say, a good way to spend this little existence we have
01:24:52 on Earth.
01:24:53 So, outside of your family, looking back, what about this journey are you really proud
01:24:59 of?
01:25:00 Are there moments that stand out, accomplishments, ideas?
01:25:07 Is it the creation of Python itself that stands out as a thing that you look back and say,
01:25:14 damn, I did pretty good there?
01:25:16 Well, I would say that Python is definitely the best thing I’ve ever done, and I wouldn’t
01:25:25 sort of say just the creation of Python, but the way I sort of raised Python, like a baby.
01:25:36 I didn’t just conceive a child, but I raised a child, and now I’m setting the child free
01:25:42 in the world, and I’ve set up the child to sort of be able to take care of himself, and
01:25:50 I’m very proud of that.
01:25:52 And as the announcer of Monty Python’s Flying Circus used to say, and now for something
01:25:56 completely different, do you have a favorite Monty Python moment, or a moment in Hitchhiker’s
01:26:02 Guide, or any other literature show or movie that cracks you up when you think about it?
01:26:07 You can always play me the dead parrot sketch.
01:26:11 Oh, that’s brilliant.
01:26:13 That’s my favorite as well.
01:26:14 It’s pushing up the daisies.
01:26:20 Okay, Guido, thank you so much for talking with me today.
01:26:20 Lex, this has been a great conversation.