Transcript
00:00:00 The following is a conversation with Brian Kernighan,
00:00:03 a professor of computer science at Princeton University.
00:00:07 He was a key figure in the computer science community
00:00:10 in the early Unix days, alongside Unix creators,
00:00:13 Ken Thompson and Dennis Ritchie.
00:00:16 He co-authored The C Programming Language with Dennis Ritchie,
00:00:20 the creator of C, and has written a lot of books
00:00:23 on programming, computers, and life,
00:00:26 including The Practice of Programming,
00:00:28 The Go Programming Language, and his latest,
00:00:31 Unix: A History and a Memoir.
00:00:34 He co-created AWK, the text processing language
00:00:36 used by Linux folks like myself.
00:00:39 He co-designed AMPL, an algebraic modeling language
00:00:43 that I personally love and have used a lot in my life
00:00:47 for large scale optimization.
00:00:49 I think I can keep going for a long time
00:00:51 with his creations and accomplishments,
00:00:54 which is funny because given all that,
00:00:56 he’s one of the most humble and kind people
00:00:59 I’ve spoken to on this podcast.
00:01:01 Quick summary of the ads, two new sponsors,
00:01:04 the amazing self cooling 8sleep mattress
00:01:10 and Raycon earbuds.
00:01:13 Please consider supporting the podcast
00:01:15 by going to 8sleep.com slash Lex
00:01:19 and going to buyraycon.com slash Lex.
00:01:23 Click the links, buy the stuff.
00:01:25 It really is the best way to support this podcast
00:01:27 and the journey I’m on.
00:01:29 If you enjoy this thing, subscribe on YouTube,
00:01:32 review it with five stars on Apple Podcasts,
00:01:34 support it on Patreon,
00:01:35 or connect with me on Twitter at Lex Fridman.
00:01:39 As usual, I’ll do a few minutes of ads now
00:01:41 and never any ads in the middle
00:01:43 that could break the flow of the conversation.
00:01:45 This show is sponsored by 8sleep
00:01:49 and its incredible pod pro mattress
00:01:51 that you can check out at 8sleep.com slash Lex
00:01:54 to get $200 off.
00:01:57 The mattress controls temperature with an app
00:02:00 and can cool down to as low as 55 degrees.
00:02:03 Research shows that temperature has a big impact
00:02:07 on the quality of our sleep.
00:02:09 Anecdotally, it’s been a game changer for me.
00:02:11 I love it.
00:02:12 The pod pro is packed with sensors
00:02:14 that track heart rate, heart rate variability,
00:02:17 and respiratory rate,
00:02:18 showing it all on their app once you wake up.
00:02:21 Plus, if you have a partner,
00:02:23 you can control the temperature of each side of the bed.
00:02:26 I don’t happen to have one,
00:02:28 but the 8sleep app reminds me
00:02:30 that I should probably get on that.
00:02:32 So ladies, if a temperature controlled mattress
00:02:34 isn’t a good reason to apply,
00:02:36 I don’t know what is.
00:02:38 The app’s health metrics are amazing,
00:02:41 but the cooling alone is honestly worth the money.
00:02:44 As some of you know, I don’t always sleep,
00:02:47 but when I do, I choose the 8sleep pod pro mattress.
00:02:51 Check it out at 8sleep.com slash Lex
00:02:54 to get $200 off.
00:02:57 This show is also sponsored by Raycon earbuds.
00:03:01 Get them at buyraycon.com slash lex.
00:03:06 They’ve quickly become my main method
00:03:07 of listening to podcasts, audio books,
00:03:09 and music when I run,
00:03:11 do the pushups and pullups
00:03:13 that I’ve begun to hate at this point,
00:03:15 or just living life.
00:03:17 In fact, I often listen to brown noise with these
00:03:20 when I’m thinking deeply about something.
00:03:22 It helps me focus the mind.
00:03:24 They’re super comfortable, pair easily,
00:03:26 great sound, great bass, six hours of playtime.
00:03:30 In fact, for fun, I have one of the earbuds in now
00:03:33 and I’m listening to Europa by Santana,
00:03:36 probably one of my favorite guitar songs.
00:03:39 It kind of makes me feel like I’m in a music video.
00:03:41 So they told me to say that a bunch of celebrities
00:03:44 use these like Snoop Dogg, Melissa Etheridge, and Cardi B.
00:03:50 I don’t even know who Cardi B is,
00:03:52 but her earbud game is on point.
00:03:55 To mention celebrities I actually care about,
00:03:58 I’m sure if Richard Feynman was still with us,
00:04:01 he’d be listening to the Joe Rogan Experience
00:04:03 with Raycon earbuds.
00:04:06 Get them at buyraycon.com slash lex.
00:04:09 It’s how they know I sent you
00:04:11 and increases the chance that they’ll support
00:04:12 this podcast in the future.
00:04:14 So for all of the sponsors, click all of the links.
00:04:17 It really helps this podcast.
00:04:19 And now, here’s my conversation with Brian Kernighan.
00:04:25 Unix started being developed 50 years ago.
00:04:28 It’d be more than 50 years ago.
00:04:30 Can you tell the story like you describe in your new book
00:04:33 of how Unix was created?
00:04:35 Ha, if I can remember that far back,
00:04:38 it was some while ago.
00:04:40 So I think the gist of it is that at Bell Labs,
00:04:44 in 1969, there were a group of people
00:04:46 who had just finished working on the Multics project,
00:04:49 which was itself a follow on to CTSS.
00:04:54 So we can go back sort of an infinite regress in time,
00:04:57 but the CTSS was a very, very, very nice time sharing system.
00:05:01 It was very nice to use.
00:05:02 I actually used it that summer I spent in Cambridge in 1966.
00:05:06 What was the hardware there?
00:05:08 So what’s the operating system, what’s the hardware there?
00:05:10 What’s the CTSS look like?
00:05:12 So CTSS looked like kind of like
00:05:14 a standard time sharing system.
00:05:17 Certainly at the time, it was the only time sharing.
00:05:19 Let’s go back to the basics.
00:05:20 What’s a time sharing system?
00:05:22 Okay, in the beginning was the word
00:05:23 and the word was the system.
00:05:24 And then there was time sharing systems.
00:05:27 Yeah, if we go back into, let’s call it the 1950s
00:05:29 and early 1960s, most computing was done on very big
00:05:34 computers, physically big, although not terribly powerful
00:05:36 by today’s standards, that were maintained
00:05:39 in very large rooms and you use things like punch cards
00:05:45 to write your programs on and talk to them.
00:05:47 So you would take a deck of cards,
00:05:49 write your program on it, send it over a counter,
00:05:51 hand it to an operator and some while later
00:05:54 back would come something that said,
00:05:55 oh, you made a mistake and then you’d recycle.
00:05:58 And so it was very, very slow.
00:05:59 So the idea of time sharing was that you take
00:06:02 basically that same computer, but connect to it
00:06:06 with something that looked like an electric typewriter.
00:06:09 They could be a long distance away, it could be close,
00:06:11 but fundamentally what the operating system did
00:06:14 was to give each person who was connected to it
00:06:18 and wanting to do something a small slice of time
00:06:23 to do a particular job.
00:06:24 So I might be editing a file, so I would be typing
00:06:28 and every time I hit a keystroke,
00:06:29 the operating system would wake up and say,
00:06:31 oh, he typed a character, let me remember that.
00:06:33 Then it’d go back to doing something else.
00:06:35 So it’d be going around and around a group of people
00:06:38 who were trying to get something done, giving each
00:06:40 a small slice of time and giving them each the illusion
00:06:45 that they pretty much had the whole machine to themselves
00:06:47 and hence time sharing, that is sharing the computing time
00:06:51 resource of the computer among a number of people
00:06:54 who were doing it.
00:06:55 Without the individual people being aware
00:06:56 that there’s others in a sense, the illusion,
00:06:59 the feelings that the machine is your own.
00:07:02 Pretty much that was the idea.
00:07:04 Yes, if it were well done and if it were fast enough
00:07:08 and other people weren’t doing too much,
00:07:09 you did have the illusion that you had the whole machine
00:07:12 to yourself and it was very much better
00:07:14 than the punch card model.
00:07:16 And so CTSS, the compatible time sharing system
00:07:19 was I think arguably the first of these.
00:07:22 It was done I guess technically in 64 or something like that.
00:07:26 It ran on an IBM 7094, slightly modified
00:07:30 to have twice as much memory as the norm.
00:07:32 It had two banks of 32K words instead of one.
00:07:37 So.
00:07:38 32K words, yeah.
00:07:40 Each word was 36 bits, so call it
00:07:42 about 150 kilobytes times two.
00:07:46 So by today’s standards, that’s down in the noise.
00:07:49 But at the time, that was a lot of memory
00:07:51 and memory was expensive.
00:07:53 So CTSS was just a wonderful environment to work on.
00:07:56 It was done by the people at MIT,
00:07:58 led by Fernando Corbato, Corby who died just earlier
00:08:03 this year, and a bunch of other folks.
00:08:06 So I spent the summer of 66 working on that,
00:08:09 had a great time, met a lot of really nice people
00:08:12 and indirectly knew of people at Bell Labs
00:08:17 who were also working on a follow on to CTSS
00:08:22 that was called Multics.
00:08:24 So Multics was meant to be the system
00:08:26 that would do everything that CTSS did
00:08:27 but do it better for a larger population.
00:08:30 All the usual stuff.
00:08:31 Now the actual time sharing, the scheduling,
00:08:36 what’s the algorithm that performs the scheduling?
00:08:39 What’s that look like?
00:08:39 How much magic is there?
00:08:40 What are the metrics?
00:08:42 How does it all work in the beginning?
00:08:44 So the answer is I don’t have a clue.
00:08:46 I think the basic idea was nothing more
00:08:48 than who all wants to get something done.
00:08:50 Suppose that things are very quiet
00:08:52 in the middle of the night,
00:08:53 then I get all the time that I want.
00:08:55 Suppose that you and I are contending at high noon
00:08:58 for something like that,
00:08:59 then probably the simplest algorithm is a round robin one
00:09:02 that gives you a bit of time, gives me a bit of time.
00:09:05 And then we could adapt to that.
00:09:07 Like what are you trying to do?
00:09:08 Are you text editing or are you compiling or something?
00:09:12 And then we might adjust the scheduler
00:09:13 according to things like that.
00:09:15 So okay, so Multics was trying to just do some of the,
00:09:19 clean it up a little bit.
00:09:20 Well, it was meant to be much more than that.
00:09:22 So Multics was the multiplexed information
00:09:24 and computing service and it was meant to be
00:09:27 a very large thing that would provide computing utility.
00:09:29 Something that where you could actually think of it
00:09:32 as just a plug in the wall service.
00:09:35 Sort of like cloud computing today.
00:09:37 Same idea, but 50 odd years earlier.
00:09:40 And so what Multics offered
00:09:43 was a richer operating system environment,
00:09:46 a piece of hardware that was better designed
00:09:48 for doing the kind of sharing of resources.
00:09:53 And presumably lots of other things.
00:09:55 Do you think people at that time had the dream
00:09:58 of what cloud computing is starting to become now,
00:10:01 which is computing is everywhere.
00:10:03 That you can just plug in almost,
00:10:06 and you never know how the magic works.
00:10:09 You just kind of plug in, add your little computation
00:10:11 that you need to perform and it does it.
00:10:13 Was that the dream?
00:10:14 I don’t know whether that was the dream.
00:10:16 I wasn’t part of it at that point.
00:10:17 I remember I was an intern for summer.
00:10:19 But my sense is given that it was over 50 years ago,
00:10:23 yeah, they had that idea that it was an information utility.
00:10:26 That it was something where if you had a computing task to do,
00:10:29 you could just go and do it.
00:10:31 Now I’m betting that they didn’t have the same view
00:10:35 of computing for the masses, let’s call it.
00:10:38 The idea that your grandmother would be shopping on Amazon.
00:10:43 I don’t think that was part of it.
00:10:45 But if your grandmother were a programmer,
00:10:47 it might be very easy for her to go and use
00:10:49 this kind of utility.
00:10:51 What was your dream of computers at that time?
00:10:53 What did you see as the future of computers?
00:10:55 Because you have predicted what computers are today.
00:10:59 Oh, short answer, absolutely not.
00:11:01 I have no clue.
00:11:02 I’m not sure I had a dream.
00:11:03 It was a dream job in the sense that I really enjoyed
00:11:06 what I was doing.
00:11:07 I was surrounded by really, really nice people.
00:11:10 Cambridge is a very fine city to live in in the summer,
00:11:12 less so in the winter when it snows.
00:11:14 But in the summer, it was a delightful time.
00:11:16 And so I really enjoyed all of that stuff.
00:11:19 And I learned things.
00:11:20 And I think the good fortune of being there for summer
00:11:25 led me then to get a summer job at Bell Labs
00:11:27 the following summer.
00:11:28 And that was quite useful for the future.
00:11:31 So Bell Labs is this magical, legendary place.
00:11:35 So first of all, where is Bell Labs?
00:11:39 And can you start talking about that journey
00:11:44 towards Unix at Bell Labs?
00:11:46 Yeah, so Bell Labs is physically scattered around,
00:11:50 at the time, scattered around New Jersey.
00:11:52 The primary location is in a town called Murray Hill,
00:11:54 or a location called Murray Hill is actually
00:11:57 across the boundary between two small towns in New Jersey
00:12:00 called New Providence and Berkeley Heights.
00:12:03 Think of it as about 15, 20 miles straight west
00:12:05 of New York City, and therefore about an hour north
00:12:08 of here in Princeton.
00:12:11 And at that time, it had, make up a number,
00:12:15 three or four thousand people there, many of whom had PhDs
00:12:18 and mostly doing physical sciences,
00:12:20 chemistry, physics, materials kinds of things,
00:12:24 but very strong math and rapidly growing interest
00:12:29 in computing as people realized you could do things
00:12:31 with computers that you might not have been able
00:12:34 to do before.
00:12:35 You could replace labs with computers
00:12:37 that had worked on models of what was going on.
00:12:41 So that was the essence of Bell Labs.
00:12:44 And again, I wasn’t a permanent employee there.
00:12:46 That was another internship.
00:12:47 I got lucky in internships.
00:12:50 I mean, if you could just linger on it a little bit,
00:12:52 what was the, what was in the air there?
00:12:55 Because some of the, the number of Nobel Prizes,
00:12:57 the number of Turing Awards and just legendary
00:13:00 computer scientists that come from their inventions,
00:13:03 including developments, including Unix,
00:13:05 it’s just, it’s unbelievable.
00:13:07 So was there something special about that place?
00:13:11 Oh, I think there was very definitely something special.
00:13:14 I mentioned the number of people,
00:13:15 it’s a very large number of people, very highly skilled
00:13:19 and working in an environment
00:13:20 where there was always something interesting to work on
00:13:23 because the goal of Bell Labs,
00:13:25 which was a small part of AT&T,
00:13:27 which provided basically the country’s phone service.
00:13:30 The goal of AT&T was to provide service for everybody.
00:13:33 And the goal of Bell Labs was to try and make that service
00:13:36 keep getting better, so improving service.
00:13:39 And that meant doing research on a lot of different things,
00:13:43 physical devices, like the transistor
00:13:46 or fiber optical cables or microwave systems,
00:13:50 all of these things the labs worked on.
00:13:53 And it was kind of just the beginning of real boom times
00:13:56 in computing as well.
00:13:58 Because when I was there, I went there first in 66.
00:14:01 So computing was at that point fairly young.
00:14:04 And so people were discovering
00:14:06 that you could do lots of things with computers.
00:14:08 So how was Unix born?
00:14:10 So Multics, in spite of having an enormous number
00:14:14 of really good ideas and lots of good people working on it,
00:14:16 fundamentally didn’t live up, at least in the short run,
00:14:20 and I think ultimately really ever,
00:14:22 to its goal of being this information utility.
00:14:25 It was too expensive and certainly what was promised
00:14:29 was delivered much too late.
00:14:31 And so in roughly the beginning of 1969,
00:14:34 Bell Labs pulled out of the project.
00:14:37 The project at that point had included MIT, Bell Labs,
00:14:42 and General Electric, General Electric made computers.
00:14:45 So General Electric was the hardware operation.
00:14:48 So Bell Labs, realizing this wasn’t going anywhere
00:14:50 on a timescale they cared about, pulled out of the project.
00:14:54 And this left several people with an acquired taste
00:14:59 for really, really nice computing environments,
00:15:01 but no computing environment.
00:15:03 And so they started thinking about what could you do
00:15:06 if you were going to design a new operating system
00:15:09 that would provide the same kind of comfortable computing
00:15:12 as CTSS had, but also the facilities of something
00:15:16 like Multics sort of brought forward.
00:15:19 And so they did a lot of paper design stuff.
00:15:21 And at the same time, Ken Thompson found
00:15:23 what is characterized as a little used PDP 7,
00:15:27 where he started to do experiments with file systems,
00:15:31 just how do you store information on a computer
00:15:33 in a efficient way, and then this famous story
00:15:36 that his wife went away to California for three weeks,
00:15:39 taking their one year old son, and in three weeks
00:15:43 he sat down and wrote an operating system,
00:15:45 which ultimately became Unix.
00:15:47 So software productivity was good in those days.
00:15:50 So PDP, what’s a PDP 7?
00:15:52 So it’s a piece of hardware.
00:15:53 Yeah, it’s a piece of hardware.
00:15:54 It was one of early machines made
00:15:56 by Digital Equipment Corporation, DEC,
00:15:59 and it was a mini computer, so called.
00:16:03 It had, I would have to look up the numbers exactly,
00:16:07 but it had a very small amount of memory,
00:16:09 maybe 16K, 16 bit words, or something like that,
00:16:13 relatively slow, probably not super expensive.
00:16:17 Maybe, again, making this up, I’d have to look it up,
00:16:19 $100,000 or something like that.
00:16:21 Which is not super expensive in those days, right?
00:16:24 It was expensive.
00:16:25 It was enough that you and I probably
00:16:26 wouldn’t be able to buy one,
00:16:27 but a modest group of people could get together.
00:16:30 But in any case, it came out, if I recall, in 1964.
00:16:34 So by 1969, it was getting a little obsolete,
00:16:38 and that’s why it was little used.
00:16:41 If you can sort of comment,
00:16:42 what do you think it’s like
00:16:43 to write an operating system like that?
00:16:45 So that process that Ken went through in three weeks,
00:16:49 because you were, I mean, you’re a part of that process.
00:16:52 You contributed a lot to Unix’s early development.
00:16:57 So what do you think it takes to do that first step,
00:17:01 that first kind of, from design to reality on the PDP?
00:17:05 Well, let me correct one thing.
00:17:07 I had nothing to do with it.
00:17:08 So I did not write it.
00:17:10 I have never written operating system code.
00:17:13 And so I don’t know.
00:17:16 Now an operating system is simply code.
00:17:18 And this first one wasn’t very big,
00:17:21 but it’s something that lets you run processes,
00:17:24 lets you execute some kind of code that has been written.
00:17:27 It lets you store information for periods of time
00:17:30 so that it doesn’t go away when you turn the power off
00:17:33 or reboot or something like that.
00:17:36 And there’s kind of a core set of tools
00:17:38 that are technically not part of an operating system,
00:17:40 but you probably need them.
00:17:42 In this case, Ken wrote an assembler
00:17:44 for the PDP 7 that worked.
00:17:46 He needed a text editor
00:17:47 so that he could actually create text.
00:17:49 He had the file system stuff that he had been working on,
00:17:52 and then the rest of it was just a way
00:17:53 to load things, executable code from the file system
00:17:57 into the memory, give it control,
00:18:00 and then recover control when it was finished
00:18:02 or in some other way quit.
00:18:04 What was the code written in,
00:18:06 primarily the programming language?
00:18:08 Was it in assembly?
00:18:09 Yeah, PDP 7 assembler that Ken created.
00:18:13 These things were assembly language
00:18:15 until probably the, call it 1973 or 74, something like that.
00:18:21 Forgive me if it’s a dumb question,
00:18:23 but it feels like a daunting task
00:18:25 to write any kind of complex system in assembly.
00:18:28 Absolutely.
00:18:31 It feels like impossible to do any kind
00:18:32 of what we think of as software engineering with assembly,
00:18:36 because to work on a big picture sort of.
00:18:40 I think it’s hard.
00:18:41 It’s been a long time since I wrote assembly language.
00:18:43 It is absolutely true that in assembly language,
00:18:45 if you make a mistake, nobody tells you.
00:18:47 There are no training wheels whatsoever.
00:18:49 And so stuff doesn’t work.
00:18:51 Now what?
00:18:51 There’s no debuggers.
00:18:53 Well, there could be debuggers,
00:18:54 but that’s the same problem, right?
00:18:56 How do you actually get something
00:18:58 that will help you debug it?
00:19:00 So part of it is an ability to see the big picture.
00:19:05 Now these systems were not big in the sense
00:19:07 that today’s pictures are.
00:19:08 So the big picture was in some sense more manageable.
00:19:11 I mean, then realistically,
00:19:13 there’s an enormous variation
00:19:15 in the capabilities of programmers.
00:19:17 And Ken Thompson, who did that first one,
00:19:20 is kind of the singularity, in my experience, of programmers.
00:19:25 With no disrespect to you or even to me,
00:19:27 he’s in several leagues removed.
00:19:31 I know there’s levels.
00:19:33 It’s a fascinating thing that there are unique stars
00:19:37 in particular in the programming space
00:19:39 and at a particular time.
00:19:40 You know, the time matters too,
00:19:42 the timing of when that person comes along.
00:19:44 And a wife does have to leave.
00:19:47 There’s this weird timing that happens
00:19:49 and then all of a sudden something beautiful is created.
00:19:52 I mean, how does it make you feel
00:19:53 that there’s a system that was created in three weeks
00:19:58 or maybe you can even say on a whim,
00:20:01 but not really, but of course, quickly,
00:20:03 that is now, you could think of most of the computers
00:20:07 in the world run on a Unix like system?
00:20:10 Right.
00:20:12 How do you interpret, like,
00:20:14 if you kind of zoom from the alien perspective,
00:20:16 if you were just observing Earth,
00:20:18 and all of a sudden these computers took over the world
00:20:20 and they started from this little initial seed of Unix,
00:20:24 how does that make you feel?
00:20:26 It’s quite surprising.
00:20:27 And you asked earlier about prediction.
00:20:30 The answer is no.
00:20:31 There’s no way you could predict that kind of evolution.
00:20:33 And I don’t know whether it was inevitable
00:20:37 or just a whole sequence of blind luck.
00:20:39 I suspect more of the latter.
00:20:40 And so I look at it and think, gee, that’s kind of neat.
00:20:45 I think the real question is what does Ken think about that?
00:20:49 Because he’s the guy arguably from whom it really came.
00:20:53 You know, tremendous contributions from Dennis Ritchie
00:20:55 and then others around in that Bell Labs environment.
00:20:58 But, you know, if you had to pick a single person,
00:21:01 that would be Ken.
00:21:02 So you’ve written a new book,
00:21:04 Unix, a history and a memoir.
00:21:06 Are there some memorable human stories,
00:21:08 funny or profound from that time
00:21:10 that just kind of stand out?
00:21:12 Oh, there’s a lot of them in his book.
00:21:14 Oh, there’s a lot of them in a sense.
00:21:15 And again, it’s a question of can you resurrect them
00:21:18 in real time?
00:21:19 Never.
00:21:19 The memory fails.
00:21:21 But I think part of it was that Bell Labs at the time
00:21:25 was a very special kind of place to work
00:21:27 because there were a lot of interesting people
00:21:28 and the environment was very, very open and free.
00:21:31 It was a very cooperative environment,
00:21:33 very friendly environment.
00:21:34 And so if you had an interesting problem,
00:21:35 you go and talk to somebody
00:21:37 and they might help you with the solution.
00:21:40 And it was a kind of a fun environment too,
00:21:43 in which people did strange things
00:21:46 and often tweaking the bureaucracy in one way or another.
00:21:52 So rebellious in certain kinds of ways.
00:21:54 In some ways, yeah, absolutely.
00:21:56 I think most people didn’t take too kindly
00:21:58 to the bureaucracy and I’m sure the bureaucracy
00:22:01 put up with an enormous amount
00:22:03 that they didn’t really want to.
00:22:06 So maybe to linger on it a little bit,
00:22:09 do you have a sense of what the philosophy
00:22:11 that characterizes Unix is, the design?
00:22:15 Not just the initial, but just carry through the years,
00:22:18 just being there, being around it.
00:22:20 What’s the fundamental philosophy behind the system?
00:22:23 I think one aspect of fundamental philosophy
00:22:25 was to provide an environment that made it easy to write
00:22:29 or easier and more productive to write programs.
00:22:31 So it was meant as a programmer environment.
00:22:33 It wasn’t meant specifically as something
00:22:36 to do some other kind of job.
00:22:38 For example, it was used extensively for word processing,
00:22:41 but it wasn’t designed as a word processing system.
00:22:43 It was used extensively for lab control,
00:22:45 but it wasn’t designed for that.
00:22:47 It was used extensively as a front end
00:22:49 for big other systems, big dumb systems,
00:22:52 but it wasn’t designed for that.
00:22:53 It was meant to be an environment
00:22:55 where it was really easy to write programs.
00:22:57 So the programmers could be highly productive.
00:23:00 And part of that was to be a community.
00:23:03 And there’s some observation from Dennis Ritchie,
00:23:05 I think at the end of the book,
00:23:06 that says that from his standpoint,
00:23:09 the real goal was to create a community
00:23:11 where people could work as programmers on a system.
00:23:17 And I think in that sense, certainly for many, many years,
00:23:19 it succeeded quite well at that.
00:23:22 And part of that is the technical aspects
00:23:25 of because it made it really easy to write programs,
00:23:27 people did write interesting programs.
00:23:29 Those programs tended to be used by other programmers.
00:23:32 And so it was kind of a virtuous circle
00:23:34 of more and more stuff coming up
00:23:36 that was really good for programmers.
00:23:39 And you were part of that community of programmers.
00:23:41 So what was it like writing programs in that early Unix?
00:23:45 It was a blast.
00:23:46 It really was.
00:23:50 You know, I like to program.
00:23:51 I’m not a terribly good programmer,
00:23:52 but it was a lot of fun to write code.
00:23:55 And in the early days, there was an enormous amount
00:23:57 of what you would today, I suppose,
00:23:59 called low hanging fruit.
00:24:00 People hadn’t done things before.
00:24:02 And this was this new environment
00:24:04 and the whole combination of nice tools
00:24:07 and very responsive system and tremendous colleagues
00:24:11 made it possible to write code.
00:24:13 You could have an idea in the morning.
00:24:16 You could do an experiment with it.
00:24:19 You could have something limping along that night
00:24:21 or the next day and people would react to it.
00:24:23 And they would say, oh, that’s wonderful,
00:24:25 but you’re really screwed up here.
00:24:27 And the feedback loop was then very, very short and tight.
00:24:31 And so a lot of things got developed fairly quickly
00:24:34 that in many cases still exist today.
00:24:39 And I think that was part of what made it fun
00:24:43 because programming itself is fun.
00:24:44 It’s puzzle solving in a variety of ways,
00:24:46 but I think it’s even more fun when you do something
00:24:50 that somebody else then uses.
00:24:52 Even if they whine about it not working,
00:24:54 the fact that they used it is part of the reward mechanism.
00:24:58 And what was the method of interaction,
00:25:00 the communication, that feedback loop?
00:25:03 I mean, this is before the internet.
00:25:05 Certainly before the internet.
00:25:07 It was mostly physical right there.
00:25:11 Somebody would come into your office and say something.
00:25:13 So these places are all close by,
00:25:15 like offices are nearby, so really lively interaction.
00:25:18 Yeah, yeah.
00:25:19 Bell Labs was fundamentally one giant building
00:25:22 and most of the people who were involved in this Unix stuff
00:25:24 were in two or three corridors, and there was a room.
00:25:27 Oh, how big was it?
00:25:29 Probably call it 50 feet by 50 feet.
00:25:33 Make up a number like that, which had some access
00:25:37 to computers there as well as in offices
00:25:39 and people hung out there and it had a coffee machine.
00:25:42 And so it was mostly very physical.
00:25:46 We did use email, of course.
00:25:49 But it was fundamentally, for a long time,
00:25:52 all on one machine.
00:25:54 So there was no need for internet.
00:25:56 It’s fascinating to think about what computing
00:25:58 would be today without Bell Labs.
00:26:00 It seems so many, the people being in the vicinity
00:26:05 of each other, sort of getting that quick feedback,
00:26:08 working together, so many brilliant people.
00:26:11 I don’t know where else that could have existed
00:26:13 in the world given how that came together.
00:26:18 Yeah, how does that make you feel
00:26:19 that little element of history?
00:26:23 Well, I think that’s very nice,
00:26:24 but in a sense it’s survivor bias
00:26:26 and if it hadn’t happened at Bell Labs,
00:26:29 there were other places that were doing
00:26:31 really interesting work as well.
00:26:32 Xerox PARC is perhaps the most obvious one.
00:26:35 Xerox PARC contributed an enormous amount
00:26:36 of good material and many of the things
00:26:39 we take for granted today in the same way
00:26:41 came from Xerox PARC experience.
00:26:43 I don’t think they capitalized in the long run as much.
00:26:46 Their parent company was perhaps not as lucky
00:26:49 in capitalizing on this, who knows?
00:26:52 But that’s certainly another place
00:26:55 where there was a tremendous amount of influence.
00:26:58 There were a lot of good university activities.
00:27:00 MIT was obviously no slouch in this kind of thing
00:27:03 and others as well.
00:27:07 So Unix turned out to be open source
00:27:10 because of the various ways that AT&T operated
00:27:13 and sort of it had to, the focus was on telephones.
00:27:19 I think that’s a mischaracterization in a sense.
00:27:21 It absolutely was not open source.
00:27:23 It was very definitely proprietary, licensed,
00:27:27 but it was licensed freely to universities
00:27:30 in source code form for many years.
00:27:33 And because of that, generations of university students
00:27:37 and their faculty people grew up knowing about Unix
00:27:41 and there was enough expertise in the community
00:27:44 that it then became possible for people
00:27:46 to kind of go off in their own direction
00:27:48 and build something that looked Unix like.
00:27:51 The Berkeley version of Unix started with that licensed code
00:27:56 and gradually picked up enough of its own code contributions,
00:28:01 notably from people like Bill Joy,
00:28:04 that eventually it was able to become completely free
00:28:08 of any AT&T code.
00:28:10 Now, there was an enormous amount of legal jockeying
00:28:13 around this in the late, early to late 80s, early 90s,
00:28:17 something like that.
00:28:19 And then, I guess the open source movement
00:28:24 might’ve started when Richard Stallman started
00:28:27 to think about this in the late 80s.
00:28:29 And by 1991, when Torvalds decided he was going
00:28:33 to do a Unix like operating system,
00:28:37 there was enough expertise in the community
00:28:40 that first he had a target, he could see what to do
00:28:44 because the kind of the Unix system call interface
00:28:47 and the tools and so on were there.
00:28:50 And so he was able to build an operating system
00:28:53 that at this point, when you say Unix,
00:28:56 in many cases, what you’re really thinking is Linux.
00:28:58 Linux, yeah.
00:28:59 But it’s funny that from my distant perception,
00:29:02 I felt that Unix was open source
00:29:05 without actually knowing it.
00:29:07 But what you’re really saying, it was just freely licensed.
00:29:11 It was freely licensed.
00:29:12 So it felt open source in a sense
00:29:14 because universities are not trying to make money,
00:29:16 so it felt open source in a sense
00:29:19 that you can get access if you wanted.
00:29:20 Right, and a very, very, very large number of universities
00:29:23 had the license and they were able to talk
00:29:25 to all the other universities who had the license.
00:29:27 And so technically not open,
00:29:30 technically belonging to AT&T, pragmatically pretty open.
00:29:34 And so there’s a ripple effect
00:29:36 that all the faculty and the students then all grew up
00:29:39 and then they went throughout the world
00:29:41 and permeated in that kind of way.
00:29:45 So what kind of features do you think make
00:29:49 for a good operating system?
00:29:52 If you take the lessons of Unix,
00:29:54 you said make it easy for programmers.
00:29:59 That seems to be an important one.
00:30:03 But also Unix turned out to be exceptionally robust
00:30:07 and efficient.
00:30:08 Right.
00:30:09 So is that an accident when you focus on the programmer
00:30:12 or is that a natural outcome?
00:30:14 I think part of the reason for efficiency
00:30:17 was that it began on extremely modest hardware,
00:30:21 very, very, very tiny.
00:30:22 And so you couldn’t get carried away.
00:30:24 You couldn’t do a lot of complicated things
00:30:27 because you just didn’t have the resources,
00:30:30 either processor speed or memory.
00:30:32 And so that enforced a certain minimality of mechanisms
00:30:37 and maybe a search for generalizations
00:30:40 so that you would find one mechanism
00:30:41 that served for a lot of different things
00:30:43 rather than having lots of different special cases.
00:30:45 I think the file system in Unix is a good example
00:30:48 of that file system interface in its fundamental form
00:30:51 is extremely straightforward.
00:30:53 And that means that you can write code
00:30:56 very, very effectively for the file system.
00:30:58 And then one of those ideas, one of those generalizations
00:31:02 is that gee, that file system interface works
00:31:04 for all kinds of other things as well.
00:31:06 And so in particular, the idea of reading
00:31:09 and writing to devices is the same as reading
00:31:11 and writing to a disc that has a file system.
00:31:14 And then that gets carried further in other parts
00:31:17 of the world.
00:31:18 Processes become, in effect, files in a file system.
00:31:24 And the Plan 9 operating system, which came along,
00:31:26 I guess, in the late 80s or something like that,
00:31:29 took a lot of those ideas from the original Unix
00:31:31 and tried to push the generalization even further
00:31:34 so that in Plan 9, a lot of different resources
00:31:37 are file systems.
00:31:38 They all share that interface.
00:31:40 So that would be one example where finding the right model
00:31:44 of how to do something means that an awful lot of things
00:31:46 become simpler, and it means, therefore,
00:31:48 that more people can do useful, interesting things
00:31:51 with them without having to think as hard about it.
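A small illustration of the uniformity described here, assuming a Unix like shell: the same AWK invocation reads an ordinary disk file or a device file through the identical interface, because both are presented as files. The file names are placeholders for the example.

    awk '{ print NR, $0 }' notes.txt    # number the lines of an ordinary file
    awk '{ print NR, $0 }' /dev/tty     # same program reading lines typed at the terminal device (end input with Ctrl D)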
00:31:54 So you said you’re not a very good programmer.
00:31:56 That’s true.
00:31:58 You’re the most modest human being, okay,
00:32:01 but you’ll continue saying that.
00:32:03 I understand how this works.
00:32:04 But you do radiate a sort of love for programming.
00:32:07 So let me ask, do you think programming
00:32:10 is more an art or a science?
00:32:13 Is it creativity or kind of rigor?
00:32:16 I think it’s some of each.
00:32:18 It’s some combination.
00:32:20 Some of the art is figuring out what it is
00:32:22 that you really want to do.
00:32:23 What should that program be?
00:32:25 What would make a good program?
00:32:27 And that’s some understanding of what the task is,
00:32:30 what the people who might use this program want.
00:32:33 And I think that’s art in many respects.
00:32:37 The science part is trying to figure out how to do it well.
00:32:40 And some of that is real computer sciencey stuff,
00:32:45 like what algorithm should we use at some point?
00:32:48 Mostly in the sense of being careful to use algorithms
00:32:52 that will actually work properly, scale properly,
00:32:56 avoiding quadratic algorithms
00:32:58 when a linear algorithm should be the right thing,
00:33:01 that kind of more formal view of it.
00:33:04 Same thing for data structures.
00:33:06 But also it’s, I think, an engineering field as well.
00:33:10 And engineering is not quite the same as science
00:33:12 because engineering, you’re working with constraints.
00:33:15 You have to figure out not only what
00:33:19 is a good algorithm for this kind of thing,
00:33:21 but what’s the most appropriate algorithm given
00:33:23 the amount of time we have to compute,
00:33:26 the amount of time we have to program,
00:33:28 what’s likely to happen in the future with maintenance,
00:33:30 who’s going to pick this up in the future, all
00:33:33 of those kind of things that if you’re an engineer,
00:33:35 you get to worry about.
00:33:37 Whereas if you think of yourself as a scientist,
00:33:39 well, you can maybe push them over the horizon in a way.
00:33:42 And if you’re an artist, what’s that?
00:33:45 So just on your own personal level,
00:33:48 what’s your process like of writing a program?
00:33:50 Say, a small and large sort of tinkering with stuff.
00:33:55 Do you just start coding right away
00:33:58 and just kind of evolve iteratively with a loose notion?
00:34:03 Or do you plan on a sheet of paper first
00:34:05 and then kind of design in what they teach you
00:34:09 in the kind of software engineering courses
00:34:12 in undergrad or something like that?
00:34:13 What’s your process like?
00:34:14 It’s certainly much more the informal incremental.
00:34:17 First, I don’t write big programs at this point.
00:34:19 It’s been a long time since I wrote a program that
00:34:21 was more than I call it a few hundred or more lines,
00:34:25 something like that.
00:34:26 Many of the programs I write are experiments
00:34:29 for either something I’m curious about
00:34:31 or often for something that I want to talk about in a class.
00:34:34 So those necessarily tend to be relatively small.
00:34:38 A lot of the kind of code I write these days
00:34:41 tends to be for sort of exploratory data analysis
00:34:44 where I’ve got some collection of data
00:34:46 and I want to try and figure out what on earth is going on in it.
00:34:49 And for that, those programs tend to be very small.
00:34:52 Sometimes you’re not even programming.
00:34:53 You’re just using existing tools like counting things.
00:34:57 Or sometimes you’re writing AWK scripts
00:35:00 because two or three lines will tell you
00:35:02 something about a piece of data.
00:35:03 And then when it gets bigger, well, then I
00:35:05 will probably write something in Python
00:35:08 because that scales better up to call it a few hundred lines
00:35:13 or something like that.
00:35:14 And it’s been a long time since I wrote programs
00:35:16 that were much more than that.
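A minimal sketch of the kind of two or three line AWK script described here for exploratory data analysis: it counts how often each value of the second field appears. The file name data.txt and the choice of field are hypothetical placeholders.

    awk '{ count[$2]++ }                               # tally each distinct value in field 2
         END { for (v in count) print v, count[v] }' data.txt

Run from the shell, this prints each distinct value alongside its count, which is often enough to see what is going on in a data set.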
00:35:18 Speaking of data exploration and AWK, first, what is AWK?
00:35:23 So AWK is a scripting language that
00:35:25 was done by myself, Al Aho, and Peter Weinberger.
00:35:30 We did that originally in the late 70s.
00:35:32 It was a language that was meant to make it really easy
00:35:34 to do quick and dirty tasks like counting things
00:35:39 or selecting interesting information from basically
00:35:43 all text files, rearranging it in some way or summarizing it.
00:35:47 It runs a command on each line of a file.
00:35:51 I mean, it’s still exceptionally widely used today.
00:35:55 Oh, absolutely.
00:35:56 Yeah.
00:35:56 It’s so simple and elegant, sort of the way to explore data.
00:36:01 Turns out you can just write a script that
00:36:03 does something seemingly trivial in a single line,
00:36:07 and giving you that slice of the data
00:36:09 somehow reveals something fundamental about the data.
00:36:13 And that seems to work still.
00:36:17 Yeah, it’s very good for that kind of thing.
00:36:19 That’s sort of what it was meant for.
00:36:21 I think what we didn’t appreciate
00:36:22 was that the model was actually quite good for a lot of data
00:36:26 processing kinds of tasks and that it’s
00:36:29 kept going as long as it has because at this point,
00:36:31 it’s over 40 years old, and it’s still, I think, a useful tool.
00:36:35 And well, this is paternal interest, I guess.
00:36:38 But I think in terms of programming languages,
00:36:40 you get the most bang for the buck by learning AWK.
00:36:44 And it doesn’t scale to big programs,
00:36:46 but it does pretty darn well on these little things
00:36:49 where you just want to see all the somethings in something.
00:36:53 So yeah, I probably write more AWK than anything else
00:36:56 at this point.
00:36:58 So what kind of stuff do you love about AWK?
00:37:01 Is there, if you can comment on sort of things
00:37:06 that give you joy when you can, in a simple program,
00:37:10 reveal something about the data.
00:37:11 Is there something that stands out from particular features?
00:37:14 I think it’s mostly the selection of default behaviors.
00:37:19 You sort of hinted at it a moment ago.
00:37:21 What AWK does is to read through a set of files,
00:37:24 and then within each file, it reads
00:37:26 through each of the lines.
00:37:28 And then on each of the lines, it has a set of patterns
00:37:31 that it looks for.
00:37:33 That’s your AWK program.
00:37:34 And if one of the patterns matches,
00:37:36 there is a corresponding action that you might perform.
00:37:40 And so it’s kind of a quadruply nested loop or something
00:37:43 like that.
00:37:45 And that’s all completely automatic.
00:37:46 You don’t have to say anything about it.
00:37:48 You just write the pattern and the action,
00:37:49 and then run the data by it.
00:37:52 And so that paradigm for programming
00:37:54 is a very natural and effective one.
00:37:56 And I think we captured that reasonably well in AWK.
00:38:00 And it does other things for free as well.
00:38:01 It splits the data into fields so that on each line,
00:38:05 there are fields separated by white space or something.
00:38:07 And so it does that for free.
00:38:08 You don’t have to say anything about it.
00:38:11 And it collects information as it goes along,
00:38:13 like what line are we on?
00:38:15 How many fields are there on this line?
00:38:18 So lots of things that just make it so that a program which
00:38:21 in another language, let’s say Python,
00:38:24 would be five, 10, 20 lines, in AWK is one or two lines.
00:38:28 And so because it’s one or two lines,
00:38:29 you can do it on the shell.
00:38:31 You don’t have to open up another whole thing.
00:38:33 You can just do it right there in the interaction
00:38:35 with the operating system directly.
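As a concrete sketch of that pattern action paradigm: NR (the current line number) and NF (the number of fields on the line) are maintained by AWK automatically, and the pattern runs against every input line with no explicit loop. The file name and the condition on field three are made up for the example.

    awk '$3 > 100 { print NR, NF, $1 }' logfile.txt    # for lines whose third field exceeds 100, print line number, field count, first field

The nested loops over files, lines, patterns, and actions are all implicit; the program is just the pattern and the action.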
00:38:38 Is there other shell commands that you love over the years
00:38:44 like you really enjoy using?
00:38:46 Oh, grep.
00:38:47 Grep?
00:38:47 Grep’s the only one.
00:38:49 Yeah, grep does everything.
00:38:51 So grep is a simpler version of AWK, I would say?
00:38:55 In some sense, yeah, right.
00:38:58 What is grep?
00:38:58 So grep basically searches the input
00:39:01 for particular patterns, regular expressions,
00:39:04 technically, of a certain class.
00:39:06 And it has that same paradigm that AWK does.
00:39:08 It’s a pattern action thing.
00:39:10 It reads through all the files and then
00:39:12 all the lines in each file.
00:39:13 But it has a single pattern, which
00:39:15 is the regular expression you’re looking for,
00:39:17 and a single action printed if it matches.
00:39:20 So in that sense, it’s a much simpler version.
00:39:22 And you could write grep in AWK as a one liner.
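That one liner might look something like this, a sketch rather than a full grep replacement: in AWK, a pattern with no action defaults to printing the matching line, which is exactly grep's pattern action pair. The regular expression and file name are placeholders.

    awk '/error/' logfile.txt          # roughly equivalent to: grep error logfile.txt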
00:39:26 And I use grep probably more than anything else
00:39:30 at this point just because it’s so convenient and natural.
00:39:35 Why do you think it’s such a powerful tool, grep and AWK?
00:39:38 Why do you think operating systems like Windows,
00:39:41 for example, don’t have it?
00:39:45 You can, of course, I use, which is amazing now,
00:39:48 there’s Windows for Linux,
00:39:50 with which you can basically use all the fun stuff
00:39:54 like AWK and grep inside of Windows.
00:39:57 But Windows naturally, as part of the graphical interface,
00:40:00 the simplicity of grep, searching
00:40:03 through a bunch of files and just popping up naturally.
00:40:06 Why do you think that’s unique to the Linux environment?
00:40:11 I don’t know.
00:40:12 It’s not strictly unique, but it’s certainly focused there.
00:40:16 And I think some of it’s the weight of history
00:40:19 that Windows came from MS DOS.
00:40:22 MS DOS was a pretty pathetic operating system,
00:40:24 although common on an unboundedly large number
00:40:27 of machines.
00:40:28 But somewhere in roughly the 90s,
00:40:32 Windows became a graphical system.
00:40:34 And I think Microsoft spent a lot of their energy
00:40:37 on making that graphical interface what it is.
00:40:41 And that’s a different model of computing.
00:40:44 It’s a model of computing where you point and click
00:40:47 and sort of experiment with menus.
00:40:49 It’s a model of computing that works rather well for people
00:40:53 who are not programmers and just want to get something done,
00:40:56 whereas teaching something like the command line
00:40:59 to nonprogrammers turns out to sometimes be
00:41:01 an uphill struggle.
00:41:02 And so I think Microsoft probably
00:41:04 was right in what they did.
00:41:06 Now you mentioned Whistle or whatever
00:41:07 it’s called, the Windows Linux thing.
00:41:09 Whistle.
00:41:10 I wonder how it’s pronounced.
00:41:11 WSL is what it is, I’ve never actually pronounced it.
00:41:13 Whistle, I like it.
00:41:13 I have no idea.
00:41:15 But there have been things like that for the longest time.
00:41:17 Cygwin, for example, which is a wonderful collection of take
00:41:21 all your favorite tools from Unix and Linux
00:41:23 and just make them work perfectly on Windows.
00:41:25 And so that’s something that’s been going on
00:41:27 for at least 20 years, if not longer.
00:41:29 And I use that on my one remaining Windows machine
00:41:34 routinely because if you’re doing something that
00:41:37 is batch computing, suitable for command line,
00:41:41 that’s the right way to do it.
00:41:42 Because the Windows equivalents are, if nothing else,
00:41:45 not familiar to me.
00:41:47 But I would definitely recommend to people
00:41:50 if they don’t use Cygwin to try Whistle.
00:41:52 Yes.
00:41:54 I’ve been so excited that I could write scripts quickly
00:41:59 in Windows.
00:42:00 It’s changed my life.
00:42:03 OK, what’s your perfect programming setup?
00:42:06 What computer, what operating system, what keyboard,
00:42:09 what editor?
00:42:10 Yeah, perfect is too strong a word.
00:42:13 It’s way too strong a word.
00:42:15 What I use by default, I have, at this point,
00:42:18 a 13 inch MacBook Air, which I use
00:42:22 because it’s kind of a reasonable balance
00:42:24 of the various things I need.
00:42:25 I can carry it around.
00:42:26 It’s got enough computing, horsepower, screen’s
00:42:28 big enough, keyboard’s OK.
00:42:31 And so I basically do most of my computing on that.
00:42:34 I have a big iMac in my office that I use from time to time
00:42:38 as well, especially when I need a big screen,
00:42:41 but otherwise, it tends not to be used that much.
00:42:47 Editor.
00:42:48 I use mostly SAM, which is an editor that Rob Pike wrote
00:42:52 long ago at Bell Labs.
00:42:56 Sorry to interrupt.
00:42:56 Does that precede VI?
00:42:58 Does that precede Emacs?
00:43:00 It postdates both VI and Emacs.
00:43:04 It is derived from Rob’s experience with ED and VI.
00:43:11 What’s ED?
00:43:12 That’s the original Unix editor.
00:43:14 Oh, wow.
00:43:16 Dated probably before you were born.
00:43:19 So actually, what’s the history of editors?
00:43:23 Can you briefly, because it’s such a fact.
00:43:26 I use Emacs, I’m sorry to say.
00:43:28 Sorry to come out with that.
00:43:30 But what’s the kind of interplay there?
00:43:33 So in ancient times, call it the first time sharing systems,
00:43:39 going back to what we were talking about.
00:43:41 There was an editor on CTSS that I don’t even
00:43:44 remember what it was called.
00:43:45 It might have been edit, where you could type text, program
00:43:50 text, and it would do something, or document text.
00:43:53 You could save the text.
00:43:54 And save it.
00:43:55 You could edit it.
00:43:57 The usual thing that you would get in an editor.
00:44:00 And Ken Thompson wrote an editor called QED, which
00:44:04 was very, very powerful.
00:44:05 But these were all totally A, command based.
00:44:08 They were not mouse or cursor based,
00:44:10 because it was before mice and even before cursors,
00:44:13 because they were running on terminals that printed on paper.
00:44:17 No CRT type displays, let alone LEDs.
00:44:21 And so then when Unix came along, Ken took QED
00:44:26 and stripped it way, way, way, way down.
00:44:28 And that became an editor that he called ED.
00:44:30 And it was very simple.
00:44:31 But it was a line oriented editor.
00:44:33 And so you could load a file.
00:44:36 And then you could talk about the lines one
00:44:38 through the last line.
00:44:39 And you could print ranges of lines.
00:44:41 You could add text.
00:44:43 You could delete text.
00:44:44 You could change text.
00:44:44 Or you could do a substitute command
00:44:46 that would change things within a line or within groups
00:44:48 of lines.
00:44:49 So you can work on parts of a file, essentially.
00:44:51 Yeah.
00:44:51 You can work on any part of it, the whole thing or whatever.
00:44:54 But it was entirely command line based.
00:44:57 And it was entirely on paper.
00:45:00 Paper.
00:45:01 And that meant that you changed it.
00:45:02 Yeah, right.
00:45:03 Real paper.
00:45:04 And so if you changed a line, you
00:45:06 had to print that line using up another line of paper
00:45:09 to see what the change caused.
00:45:12 So when CRT displays came along, then you
00:45:18 could start to use cursor control.
00:45:19 And you could sort of move where you were on the screen.
00:45:24 Without reprinting every time.
00:45:26 Without reprinting.
00:45:27 And there were a number of editors there.
00:45:29 The one that I was most familiar with and still use
00:45:32 is VI, which was done by Bill Joy.
00:45:35 And so that dates from probably the late 70s, I guess.
00:45:40 And it took full advantage of the cursor controls.
00:45:45 I suspect that Emacs was roughly at the same time.
00:45:48 But I don’t know.
00:45:49 I’ve never internalized Emacs.
00:45:51 So at this point, I stopped using ED, although I still can.
00:45:56 I use VI sometimes, and I use SAM when I can.
00:46:00 And SAM is available on most systems?
00:46:02 It is available.
00:46:04 You have to download it yourself from, typically,
00:46:06 the Plan 9 operating system distribution.
00:46:08 It’s been maintained by people there.
00:46:11 And so I’ll get home tonight.
00:46:13 I’ll try it.
00:46:14 It’s cool.
00:46:14 It sounds fascinating.
00:46:17 Although my love is with Lisp and Emacs,
00:46:20 I went into that hippie world of.
00:46:25 I think it’s a lot of things.
00:46:26 Religion, where you’re brought up with.
00:46:27 Yeah, that’s true.
00:46:28 That’s true.
00:46:29 Most of the actual programming I do is C, C++, and Python.
00:46:34 But my weird sort of, yeah, my religious upbringing is in Lisp.
00:46:38 So can you take on the impossible task
00:46:41 and give a brief history of programming languages
00:46:44 from your perspective?
00:46:46 So I guess you could say programming languages started
00:46:48 probably in, what, the late 40s or something like that.
00:46:52 People used to program computers by basically putting
00:46:55 in zeros and ones.
00:46:56 Using something like switches on a console.
00:46:59 And then, or maybe holes in paper tapes.
00:47:03 Something like that.
00:47:04 So extremely tedious, awful, whatever.
00:47:08 And so I think the first programming languages
00:47:10 were relatively crude assembly languages,
00:47:14 where people would basically write
00:47:17 a program that would convert mnemonics like add ADD
00:47:22 into whatever the bit pattern was
00:47:24 that corresponded to an ADD instruction.
00:47:26 And they would do the clerical work of figuring out
00:47:28 where things were.
00:47:30 So you could put a name on a location in a program,
00:47:32 and the assembler would figure out
00:47:34 where that corresponded to when the thing was all put together
00:47:37 and dropped into memory.
00:47:40 And early on, and this would be the late 40s and very early
00:47:46 50s, there were assemblers written for the various machines
00:47:50 that people used.
00:47:51 You may have seen in the paper just a couple of days ago,
00:47:53 Tony Brooker died.
00:47:54 He did this thing in Manchester called AutoCode, a language
00:47:58 which I knew only by name.
00:48:01 But it sounds like it was a flavor of assembly language,
00:48:04 sort of a little higher in some ways.
00:48:06 And it replaced a language that Alan Turing wrote,
00:48:09 which you put in zeros and ones.
00:48:10 But you put it in backwards order,
00:48:12 because that was a hardware word.
00:48:14 Very strange.
00:48:14 That’s right.
00:48:15 Yeah, yeah, that’s right.
00:48:16 Backwards.
00:48:17 So assembly languages, let’s call that the early 1950s.
00:48:22 And so every different flavor of computer
00:48:24 has its own assembly language.
00:48:25 So the EDSAC had its, and the Manchester had its,
00:48:28 and the IBM, whatever, 7090 or 704, or whatever had its,
00:48:33 and so on.
00:48:34 So everybody had their own assembly language.
00:48:36 And assembly languages have a few commands, additions,
00:48:38 subtraction, then branching of some kind,
00:48:41 if then type of situation.
00:48:42 Right, they have exactly, in their simplest form at least,
00:48:46 one instruction per, or one assembly language instruction
00:48:50 per instruction in the machine’s repertoire.
00:48:52 And so you have to know the machine intimately
00:48:54 to be able to write programs in it.
00:48:56 And if you write an assembly language program
00:48:58 for one kind of machine, and then you say,
00:49:00 gee, it’s nice, I’d like a different machine, start over.
00:49:03 OK, so very bad.
00:49:06 And so what happened in the late 50s
00:49:08 was people realized you could play this game again,
00:49:10 and you could move up a level in writing or creating languages
00:49:15 that were closer to the way that real people might think
00:49:18 about how to write code.
00:49:20 And there were, I guess, arguably three or four
00:49:24 at that time period.
00:49:25 There was FORTRAN, which came from IBM,
00:49:28 which was formula translation, meant
00:49:29 to make it easy to do scientific and engineering
00:49:32 computations.
00:49:32 I didn’t know that, formula translation, that’s wow.
00:49:34 That’s what it stood for.
00:49:35 There was COBOL, which is the Common Business Oriented
00:49:37 Language that Grace Hopper and others worked on,
00:49:40 which was aimed at business kinds of tasks.
00:49:44 There was ALGOL, which was mostly
00:49:45 meant to describe algorithmic computations.
00:49:49 I guess you could argue BASIC was in there somewhere.
00:49:51 I think it’s just a little later.
00:49:54 And so all of those moved the level up,
00:49:56 and so they were closer to what you and I might think of
00:49:59 as we were trying to write a program.
00:50:02 And they were focused on different domains, FORTRAN
00:50:06 for formula translation, engineering computations,
00:50:09 let’s say COBOL for business, that kind of thing.
00:50:11 And still used today, at least FORTRAN probably.
00:50:14 Oh, yeah, COBOL, too.
00:50:16 But the deal was that once you moved up that level,
00:50:19 then you, let’s call it FORTRAN, you
00:50:21 had a language that was not tied to a particular kind
00:50:24 of hardware, because a different compiler would compile
00:50:26 for a different kind of hardware.
00:50:28 And that meant two things.
00:50:30 It meant you only had to write the program once, which
00:50:32 is very important.
00:50:33 And it meant that you could, in fact,
00:50:35 if you were a random engineer, physicist, whatever,
00:50:38 you could write that program yourself.
00:50:39 You didn’t have to hire a programmer to do it for you.
00:50:42 It might not be as good as you’d get with a real programmer,
00:50:44 but it was pretty good.
00:50:45 And so it democratized and made much more broadly available
00:50:49 the ability to write code.
00:50:51 So it puts the power of programming
00:50:53 into the hands of people like you.
00:50:54 Yeah, anybody who is willing to invest some time in learning
00:50:58 a programming language and is not then tied
00:51:00 to a particular kind of computer.
00:51:03 And then in the 70s, you get system programming languages,
00:51:06 of which C is the survivor.
00:51:08 And what does system programming language mean?
00:51:11 Programs that, programming languages
00:51:14 that would take on the kinds of things
00:51:16 that were necessary to write so called system programs.
00:51:19 Things like text editors, or assemblers, or compilers,
00:51:22 or operating systems themselves.
00:51:24 Those kinds of things.
00:51:26 And Fortran.
00:51:28 They have to be feature rich.
00:51:29 They have to be able to do a lot of stuff.
00:51:30 A lot of memory management, access to processes,
00:51:33 and all that kind of stuff.
00:51:35 It’s a different flavor of what they’re doing.
00:51:37 They’re much more in touch with the actual machine,
00:51:41 but in a positive way.
00:51:42 That is, you can talk about memory in a more controlled
00:51:44 way.
00:51:45 You can talk about the different data types
00:51:48 that the machine supports, and more ways
00:51:52 to structure and organize data.
00:51:54 And so the system programming languages,
00:51:57 there was a lot of effort in that in the,
00:51:59 call it the late 60s, early 70s.
00:52:02 C is, I think, the only real survivor of that.
00:52:06 And then what happens after that?
00:52:09 You get things like object oriented programming languages.
00:52:12 Because as you write programs in a language like C,
00:52:14 at some point scale gets to you.
00:52:16 And it’s too hard to keep track of the pieces.
00:52:18 And there’s no guardrails, or training wheels,
00:52:21 or something like that to prevent you
00:52:22 from doing bad things.
00:52:24 So C++ comes out of that tradition.
00:52:28 And then it took off from there.
00:52:29 I mean, there’s also a parallel, slightly parallel track
00:52:32 with a little bit of functional stuff with Lisp and so on.
00:52:35 But I guess from that point, it’s
00:52:37 just an explosion of languages.
00:52:38 There’s the Java story.
00:52:40 There’s the JavaScript.
00:52:41 There’s all the stuff that the cool kids these days
00:52:44 are doing with Rust and all that.
00:52:48 So what’s to you?
00:52:50 You wrote a book, C Programming Language.
00:52:53 And C is probably one of the most important languages
00:52:56 in the history of programming languages,
00:52:58 if you kind of look at impact.
00:53:01 What do you think is the most elegant or powerful part of C?
00:53:06 Why did it survive?
00:53:07 Why did it have such a long lasting impact?
00:53:11 I think it found a sweet spot of expressiveness,
00:53:16 so that you could rewrite things in a pretty natural way,
00:53:19 and efficiency, which was particularly important when
00:53:22 computers were not nearly as powerful as they are today.
00:53:25 You’ve got to put yourself back 50 years,
00:53:28 almost, in terms of what computers could do.
00:53:31 And that’s roughly four or five generations,
00:53:35 decades of Moore’s law, right?
00:53:37 So expressiveness and efficiency and, I don’t know,
00:53:42 perhaps the environment that it came with as well,
00:53:45 which was Unix.
00:53:46 So it meant if you wrote a program,
00:53:47 it could be used on all those computers that ran Unix.
00:53:50 And that was all of those computers,
00:53:51 because they were all written in C.
00:53:53 And that was Unix, the operating system itself,
00:53:56 was portable, as were all the tools.
00:53:58 So it all worked together, again,
00:54:00 in one of these things where things
00:54:02 fit on each other in a positive cycle.
00:54:05 What did it take to write sort of a definitive book,
00:54:10 probably the definitive book on a programming language,
00:54:11 like it’s more definitive to a particular language
00:54:14 than any other book on any other language,
00:54:16 and did two really powerful things,
00:54:19 which is popularized the language,
00:54:22 at least from my perspective, maybe you can correct me.
00:54:24 And second is created a standard of how, you know,
00:54:29 how this language is supposed to be used and applied.
00:54:33 So what did it take?
00:54:34 Did you have those kinds of ambitions in mind
00:54:37 when working on that?
00:54:38 Is this some kind of joke?
00:54:39 No, of course not.
00:54:42 So it’s an accident of timing, skill, and just luck?
00:54:48 A lot of it is, clearly.
00:54:50 Timing was good.
00:54:51 Now, Dennis and I wrote the book in 1977.
00:54:54 Dennis Ritchie.
00:54:54 Yeah, right.
00:54:56 And at that point, Unix was starting to spread.
00:54:58 I don’t know how many there were,
00:55:00 but it would be dozens to hundreds of Unix systems.
00:55:03 And C was also available on other kinds of computers
00:55:06 that had nothing to do with Unix.
00:55:08 And so the language had some potential.
00:55:13 And there were no other books on C,
00:55:17 and Bell Labs was really the only source for it.
00:55:20 And Dennis, of course, was authoritative
00:55:22 because it was his language.
00:55:23 And he had written the reference manual,
00:55:26 which is a marvelous example
00:55:28 of how to write a reference manual.
00:55:29 Really, really very, very well done.
00:55:31 So I twisted his arm until he agreed to write a book,
00:55:34 and then we wrote a book.
00:55:35 And the virtue or advantage, at least,
00:55:38 I guess, of going first is that then other people
00:55:40 have to follow you if they’re gonna do anything.
00:55:44 And I think it worked well because Dennis
00:55:49 was a superb writer.
00:55:50 I mean, he really, really did.
00:55:51 And the reference manual in that book is his, period.
00:55:55 I had nothing to do with that at all.
00:55:58 So just crystal clear prose and very, very well expressed.
00:56:02 And then he and I, I wrote most of the expository material.
00:56:07 And then he and I sort of did the usual ping ponging
00:56:10 back and forth, refining it.
00:56:13 But I spent a lot of time trying to find examples
00:56:15 that would sort of hang together
00:56:16 and that would tell people what they might need
00:56:18 to know at about the right time
00:56:20 that they should be thinking about needing it.
00:56:22 And I’m not sure it completely succeeded,
00:56:25 but it mostly worked out fairly well.
00:56:28 What do you think is the power of example?
00:56:30 I mean, you’re the creator, at least one of the first people
00:56:35 to do the Hello World program, which is like the example.
00:56:40 If aliens discover our civilization hundreds of years
00:56:43 from now, it’ll probably be Hello World programs,
00:56:46 just like a half broken robot communicating with them
00:56:49 with the Hello World.
00:56:50 So what, and that’s a representative example.
00:56:53 So what do you find powerful about examples?
00:56:57 I think a good example will tell you how to do something
00:57:01 and it will be representative of,
00:57:03 you might not want to do exactly that,
00:57:05 but you will want to do something that’s at least
00:57:07 in that same general vein.
00:57:10 And so a lot of the examples in the C book were picked
00:57:14 for these very, very simple, straightforward
00:57:16 text processing problems that were typical of Unix.
00:57:19 I want to read input and write it out again.
00:57:23 There’s a copy command.
00:57:24 I want to read input and do something to it
00:57:27 and write it out again.
00:57:28 There’s a grep.
00:57:28 And so that kind of find things that are representative
00:57:33 of what people want to do and spell those out
00:57:36 so that they can then take those and see the core parts
00:57:42 and modify them to their taste.
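To make that concrete, here is a sketch of those two canonical shapes in Go rather than the C of the book; the pattern string is arbitrary, and a real grep would of course take it from the command line.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	pattern := "hello" // arbitrary; a real grep would take this as an argument
	in := bufio.NewScanner(os.Stdin)
	for in.Scan() {
		line := in.Text()
		// "Read input and write it out again": drop the condition below
		// and this loop is the copy program.
		if strings.Contains(line, pattern) {
			fmt.Println(line)
		}
	}
}
```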
00:57:45 And I think that a lot of programming books that,
00:57:48 I don’t look at programming books
00:57:51 a tremendous amount these days, but when I do,
00:57:52 a lot of them don’t do that.
00:57:54 They don’t give you examples that are both realistic
00:57:59 and something you might want to do.
00:58:01 Some of them are pure syntax.
00:58:03 Here’s how you add three numbers.
00:58:05 Well, come on, I could figure that out.
00:58:07 Tell me how I would get those three numbers
00:58:09 into the computer and how we would do something useful
00:58:11 with them and then how I put them back out again,
00:58:14 neatly formatted.
00:58:15 And especially if you follow that example,
00:58:17 there is something magical of doing something
00:58:19 that feels useful.
00:58:21 Yeah, right.
00:58:21 And I think it’s the attempt,
00:58:23 and it’s absolutely not perfect,
00:58:26 but the attempt in all cases was to get something
00:58:28 that was going to be either directly useful
00:58:31 or would be very representative of useful things
00:58:35 that a programmer might want to do.
00:58:37 But within that vein of fundamentally text processing,
00:58:41 reading text, doing something, writing text.
00:58:43 So you’ve also written a book on Go language.
00:58:47 I have to admit, so I worked at Google for a while
00:58:50 and I’ve never used Go.
00:58:53 Well, you missed something.
00:58:54 Well, I know I missed something for sure.
00:58:56 I mean, so Go and Rust are two languages
00:58:59 that I hear very, spoken very highly of
00:59:04 and I wish I would like to, well, there’s a lot of them.
00:59:06 There’s Julia, there’s all these incredible modern languages.
00:59:10 But if you can comment before,
00:59:12 or maybe comment on what do you find,
00:59:16 where does Go sit in this broad spectrum of languages?
00:59:19 And also, how do you yourself feel
00:59:22 about this wide range of powerful, interesting languages
00:59:26 that you may never even get to try to explore
00:59:30 because of time?
00:59:31 So I think, so Go first comes from that same
00:59:36 Bell Labs tradition in part, not exclusively,
00:59:39 but two of the three creators, Ken Thompson and Rob Pike.
00:59:42 So literally, the people.
00:59:44 Yeah, the people.
00:59:45 And then with this very, very useful influence
00:59:49 from the European school in particular,
00:59:51 the Niklaus Wirth influence through Robert Griesemer,
00:59:55 who was, I guess, a second generation down student at ETH.
01:00:01 And so that’s an interesting combination of things.
01:00:03 And so some ways, Go captures the good parts of C,
01:00:08 it looks sort of like C, it’s sometimes characterized as C
01:00:11 for the 21st century.
01:00:14 On the surface, it looks very, very much like C.
01:00:17 But at the same time, it has some interesting
01:00:20 data structuring capabilities.
01:00:21 And then I think the part that I would say
01:00:25 is particularly useful, and again, I’m not a Go expert.
01:00:29 In spite of coauthoring the book,
01:00:31 about 90% of the work was done by Alan Donovan,
01:00:34 my coauthor, who is a Go expert.
01:00:36 But Go provides a very nice model of concurrency.
01:00:40 It’s basically the cooperating,
01:00:42 communicating sequential processes that Tony Hoare
01:00:46 set forth, jeez, I don’t know, 40 plus years ago.
01:00:50 And Go routines are, to my mind, a very natural way
01:00:53 to talk about parallel computation.
01:00:57 And in the few experiments I’ve done with them,
01:00:59 they’re easy to write, and typically it’s gonna work,
01:01:02 and very efficient as well.
01:01:05 So I think that’s one place where Go stands out,
01:01:07 that that model of parallel computation
01:01:10 is very, very easy and nice to work with.
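A minimal sketch of that model in Go, with the worker count and the squaring work chosen purely for illustration: goroutines communicating over channels, in the communicating-sequential-processes style being described.

```go
package main

import "fmt"

// worker reads numbers from jobs, does some work, and sends results back.
// Goroutines plus channels, with no shared state, just messages.
func worker(jobs <-chan int, results chan<- int) {
	for n := range jobs {
		results <- n * n
	}
}

func main() {
	jobs := make(chan int)
	results := make(chan int)

	for i := 0; i < 4; i++ { // four concurrent workers, an arbitrary choice
		go worker(jobs, results)
	}

	// Feed the workers from another goroutine, then close the channel
	// so they know there is nothing more to do.
	go func() {
		for n := 1; n <= 10; n++ {
			jobs <- n
		}
		close(jobs)
	}()

	for i := 0; i < 10; i++ {
		fmt.Println(<-results)
	}
}
```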
01:01:14 Just to comment on that, do you think C foresaw,
01:01:17 or the early Unix days foresaw threads
01:01:20 and massively parallel computation?
01:01:23 I would guess not really.
01:01:25 I mean, maybe it was seen, but not at the level
01:01:28 where it was something you had to do anything about.
01:01:31 For a long time, processors got faster,
01:01:35 and then processors stopped getting faster
01:01:38 because of things like power consumption
01:01:40 and heat generation.
01:01:43 And so what happened instead was that instead
01:01:46 of processors getting faster,
01:01:47 there started to be more of them.
01:01:49 And that’s where that parallel thread stuff comes in.
01:01:53 So if you can comment on all the other languages,
01:01:58 is it break your heart that you’ll never get to explore them?
01:02:01 How do you feel about the full variety?
01:02:04 It’s not break my heart,
01:02:05 but I would love to be able to try more of these languages.
01:02:10 The closest I’ve come is in a class
01:02:11 that I often teach in the spring here.
01:02:14 It’s a programming class, and I often give,
01:02:18 I have one sort of small example that I will write
01:02:21 in as many languages as I possibly can.
01:02:24 I’ve got it in 20 languages.
01:02:26 At this point, and that’s so I do a minimal experiment
01:02:31 with a language just to say, okay,
01:02:33 I have this trivial task, which I understand the task,
01:02:35 and it takes 15 lines in awk,
01:02:38 and not much more in a variety of other languages.
01:02:41 So how big is it?
01:02:42 How fast does it run?
01:02:43 And what pain did I go through to learn how to do it?
01:02:47 And that’s like anecdotal, right?
01:02:52 It’s very, very narrowly focused.
01:02:57 I think data, I like that term.
01:02:59 So yeah, but still, it’s a little sample,
01:03:01 because you get to, I think the hardest step
01:03:04 of the programming language is probably the first step,
01:03:06 right, so there you’re taking the first step.
01:03:08 Yeah, and so my experience with some languages
01:03:13 is very positive, like Lua,
01:03:14 a scripting language I had never used,
01:03:17 and I took my little program.
01:03:19 The program is a trivial formatter.
01:03:21 It just takes in lines of text of varying lengths,
01:03:24 and it puts them out in lines
01:03:26 that have no more than 60 characters on each line.
01:03:28 So think of it as just kind of the flow of prose
01:03:31 in a browser or something.
01:03:34 So it’s a very short program.
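A sketch of that little formatter in Go, under the same assumptions just described: read words from standard input and refill them into output lines of at most 60 characters.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

const maxLine = 60 // maximum output line length, as in the exercise

func main() {
	in := bufio.NewScanner(os.Stdin)
	in.Split(bufio.ScanWords) // read whitespace-separated words

	line := ""
	for in.Scan() {
		word := in.Text()
		if line == "" {
			line = word
		} else if len(line)+1+len(word) <= maxLine {
			line += " " + word // the word still fits on this line
		} else {
			fmt.Println(line) // flush the full line, start a new one
			line = word
		}
	}
	if line != "" {
		fmt.Println(line)
	}
}
```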
01:03:36 And in Lua, I downloaded Lua,
01:03:39 and in an hour, I had it working,
01:03:41 never having written Lua in my life,
01:03:43 just going with online documentation.
01:03:44 I did the same thing in Scala,
01:03:46 which you can think of as a flavor of Java, equally trivial.
01:03:51 I did it in Haskell.
01:03:52 It took me several weeks.
01:03:53 But it did run like a turtle.
01:03:57 And I did it in Fortran 90, and it was painful,
01:04:05 but it worked, and I tried it in Rust,
01:04:07 and it took me several days to get it working
01:04:10 because the model of memory management
01:04:12 was just a little unfamiliar to me.
01:04:13 And the problem I had with Rust,
01:04:15 and it’s back to what we were just talking about,
01:04:18 I couldn’t find good, consistent documentation on Rust.
01:04:21 Now, this was several years ago,
01:04:22 and I’m sure things have stabilized,
01:04:24 but at the time, everything in the Rust world
01:04:26 seemed to be changing rapidly,
01:04:27 and so you would find what looked like a working example,
01:04:30 and it wouldn’t work with the version
01:04:32 of the language that I had.
01:04:34 So it took longer than it should have.
01:04:37 Rust is a language I would like to get back to,
01:04:39 but probably won’t.
01:04:41 I think one of the issues,
01:04:42 you have to have something you want to do.
01:04:44 If you don’t have something that is the right combination,
01:04:47 if I want to do it, and yet I have enough disposable time,
01:04:51 whatever, to make it worth learning a new language
01:04:55 at the same time, it’s never gonna happen.
01:04:58 So what do you think about another language of JavaScript?
01:05:02 That’s this…
01:05:04 Well, let me just sort of comment on what I said.
01:05:06 When I was brought up, sort of JavaScript was seen as
01:05:12 probably like the ugliest language possible,
01:05:15 and yet it’s quite arguably, quite possibly taking over,
01:05:18 not just the front end and the back end of the internet,
01:05:21 but possibly in the future taking over everything,
01:05:24 because they’ve now learned to make it very efficient.
01:05:27 And so what do you think about this?
01:05:29 Yeah, well, I think you’ve captured it in a lot of ways.
01:05:32 When it first came out,
01:05:32 JavaScript was deemed to be fairly irregular
01:05:35 and an ugly language, and certainly in the academy,
01:05:37 if you said you were working on JavaScript,
01:05:39 people would ridicule you.
01:05:40 It was just not fit for academics to work on.
01:05:43 I think a lot of that has evolved.
01:05:45 The language itself has evolved,
01:05:47 and certainly the technology of compiling it
01:05:50 is fantastically better than it was.
01:05:53 And so in that sense,
01:05:54 it’s absolutely a viable solution on back ends,
01:05:58 as well as the front ends.
01:06:01 Used well, I think it’s a pretty good language.
01:06:03 I’ve written a modest amount of it,
01:06:06 and I’ve played with JavaScript translators
01:06:09 and things like that.
01:06:10 I’m not a real expert,
01:06:12 and it’s hard to keep up even there
01:06:13 with the new things that come along with it.
01:06:15 So I don’t know whether it will ever take over the world.
01:06:19 I think not, but it’s certainly an important language,
01:06:24 and worth knowing more about.
01:06:27 There’s, maybe to get your comment on something,
01:06:30 which JavaScript, and actually most languages,
01:06:33 sort of Python, such a big part of the experience
01:06:37 of programming with those languages includes libraries,
01:06:40 sort of using, building on top of the code
01:06:42 that other people have built.
01:06:43 I think that’s probably different from the experience
01:06:45 that we just talked about from Unix and C days,
01:06:49 when you’re building stuff from scratch.
01:06:51 What do you think about this world
01:06:53 of essentially leveraging, building up libraries
01:06:55 on top of each other and leveraging them?
01:06:57 Yeah, no, that’s a very perceptive kind of question.
01:07:01 One of the reasons programming was fun in the old days
01:07:04 was that you were really building it all yourself.
01:07:06 The number of libraries you had to deal with
01:07:08 was quite small.
01:07:09 Maybe it was printf, or the standard library,
01:07:11 or something like that, and that is not the case today.
01:07:15 And if you want to do something in,
01:07:18 you mentioned Python and JavaScript,
01:07:20 and those are the two fine examples,
01:07:22 you have to typically download a boatload of other stuff,
01:07:25 and you have no idea what you’re getting,
01:07:27 absolutely nothing.
01:07:29 I’ve been doing some playing with machine learning
01:07:31 over the last couple of days,
01:07:33 and geez, something doesn’t work.
01:07:36 Well, you pip install this, okay,
01:07:38 and down comes another one,
01:07:40 okay, and down comes another gazillion megabytes of something
01:07:44 and you have no idea what it was.
01:07:46 And if you’re lucky, it works.
01:07:47 And if it doesn’t work, you have no recourse.
01:07:51 There’s absolutely no way you could figure out
01:07:52 which of these thousand different packages.
01:07:55 And I think it’s worse in the NPM environment
01:07:59 for JavaScript.
01:08:00 I think there’s less discipline, less control there.
01:08:02 And there’s aspects of not just not understanding
01:08:06 how it works, but there’s security issues,
01:08:07 there’s robustness issues,
01:08:09 so you don’t wanna run a nuclear power plant
01:08:11 using JavaScript, essentially.
01:08:14 Probably not.
01:08:16 So speaking to the variety of languages,
01:08:18 do you think that variety is good,
01:08:20 or do you hope, think that over time,
01:08:23 we should converge towards one, two, three
01:08:25 programming languages?
01:08:28 You mentioned the Bell Labs days
01:08:29 when people could sort of, the community of it,
01:08:32 and the more languages you have,
01:08:34 the more you separate the communities.
01:08:36 There’s the Ruby community,
01:08:38 there’s the Python community,
01:08:40 there’s C++ community.
01:08:42 Do you hope that they’ll unite one day
01:08:45 to just one or two languages?
01:08:47 I certainly don’t hope it.
01:08:48 I’m not sure that that’s right,
01:08:49 because I honestly don’t think there is one language
01:08:51 that will suffice for all the programming needs of the world.
01:08:55 Are there too many at this point?
01:08:56 Well, arguably.
01:08:58 But I think if you look at the sort of the distribution
01:09:01 of how they are used,
01:09:03 there’s something like a dozen languages
01:09:06 that probably account for 95% of all programming
01:09:10 at this point, and that doesn’t seem unreasonable.
01:09:13 And then there’s another, well, 2,000 languages
01:09:17 that are still around that nobody uses,
01:09:19 or at least doesn’t use in any quantity.
01:09:23 But I think new languages are a good idea in many respects,
01:09:25 because they’re often a chance to explore an idea
01:09:30 of how language might help.
01:09:32 I think that’s one of the positive things
01:09:35 about functional languages, for example.
01:09:36 They’re a particularly good place
01:09:38 where people have explored ideas
01:09:42 that at the time didn’t seem feasible,
01:09:45 but ultimately have wound up
01:09:47 as part of mainstream languages as well.
01:09:50 I mean, just go back as early as recursion and Lisp,
01:09:52 and then follow forward: functions as first class citizens
01:09:57 and pattern based languages,
01:09:59 and gee, I don’t know, closures,
01:10:02 and just on and on and on.
01:10:04 Lambdas, interesting ideas that showed up first
01:10:07 in, let’s call it broadly,
01:10:08 the functional programming community,
01:10:10 and then find their way into mainstream languages.
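A small illustration of that migration, in Go: a function returned as a first-class value that closes over its own state, the kind of idea that started in the functional world and now reads as ordinary mainstream code.

```go
package main

import "fmt"

// adder returns a closure: the returned function captures and keeps
// updating sum, a first-class-function idea borrowed from the
// functional-language world.
func adder() func(int) int {
	sum := 0
	return func(x int) int {
		sum += x
		return sum
	}
}

func main() {
	runningTotal := adder()
	for _, x := range []int{1, 2, 3, 4} {
		fmt.Println(runningTotal(x)) // prints 1, 3, 6, 10
	}
}
```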
01:10:13 Yeah, it’s a playground for rebels.
01:10:15 Yeah, exactly, and so I think the languages
01:10:19 in the playground themselves are probably not going
01:10:22 to be the mainstream, at least for some while,
01:10:25 but the ideas that come from there are invaluable.
01:10:29 So let’s go to something that, when I found out recently,
01:10:33 so I’ve known that you’ve done a million things,
01:10:36 but one of the things I wasn’t aware of,
01:10:37 that you had a role in AMPL,
01:10:39 and before you interrupt me by minimizing your role in it.
01:10:44 AMPL is for minimizing functions.
01:10:46 Yeah, minimizing functions, right, exactly.
01:10:51 Can I just say that the elegance and abstraction power
01:10:53 of AMPL is incredible,
01:10:57 when I first came to it about 10 years ago or so.
01:11:01 Can you describe what is the AMPL language?
01:11:04 Sure, so AMPL is a language for mathematical programming,
01:11:08 technical term, think of it as linear programming,
01:11:10 that is setting up systems of linear equations
01:11:14 that are of some sort of system of constraints,
01:11:18 so that you have a bunch of things
01:11:20 that have to be less than this, greater than that,
01:11:22 whatever, and you’re trying to find a set of values
01:11:25 for some decision variables that will maximize
01:11:29 or minimize some objective function,
01:11:32 so it’s a way of solving a particular kind
01:11:35 of optimization problem,
01:11:38 a very formal sort of optimization problem,
01:11:40 but one that’s exceptionally useful.
01:11:42 And it specifies, so there’s objective function constraints
01:11:45 and variables that become separate
01:11:48 from the data it operates on.
01:11:50 Right.
01:11:50 So that kind of separation allows you to,
01:11:56 put on different hats,
01:11:58 one put the hat of an optimization person
01:12:00 and then put another hat of a data person
01:12:03 and dance back and forth,
01:12:04 and also separate the actual solvers,
01:12:08 the optimization systems that do the solving.
01:12:11 Then you can have other people come to the table
01:12:14 and then build their solvers,
01:12:15 whether it’s linear or nonlinear,
01:12:19 convex, nonconvex, that kind of stuff.
01:12:21 So what is the,
01:12:25 to you as, maybe you can comment
01:12:28 how you got into that world
01:12:30 and what is the beautiful or interesting idea to you
01:12:33 from the world of optimization?
01:12:35 Sure.
01:12:36 So I preface it by saying I’m absolutely not an expert
01:12:39 on this and most of the important work in AMPL
01:12:42 comes from my two partners in crime on that,
01:12:45 Bob Fourer, who was a professor
01:12:48 in the Industrial Engineering
01:12:50 and Management Science Department at Northwestern,
01:12:52 and my colleague at Bell Labs, Dave Gay,
01:12:54 who was a numerical analyst and optimization person.
01:12:59 So the deal is linear programming.
01:13:02 Preface this by saying I don’t.
01:13:03 Let’s stay with linear programming.
01:13:05 Yeah, linear programming is the simplest example of this.
01:13:07 So linear programming, as it’s taught in school,
01:13:09 is that you have a big matrix,
01:13:11 which is always called A,
01:13:12 and you say AX is less than or equal to B.
01:13:14 So B is a set of constraints,
01:13:16 X is the decision variables,
01:13:18 and A is how the decision variables are combined
01:13:22 to set up the various constraints.
01:13:24 So A is a matrix and X and B are vectors.
01:13:28 And then there’s an objective function,
01:13:30 which is just a sum of a bunch of Xs
01:13:32 and some coefficients on them,
01:13:33 and that’s the thing you want to optimize.
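Written out, the textbook form being described here is the following, using c for the vector of objective coefficients and adding the usual nonnegativity condition on x; both are standard notation rather than anything specific to AMPL.

```latex
\begin{aligned}
\text{maximize or minimize} \quad & c^{\mathsf{T}} x \\
\text{subject to} \quad & A x \le b, \\
& x \ge 0 .
\end{aligned}
```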
01:13:37 The problem is that in the real world,
01:13:40 that matrix A is a very, very, very intricate,
01:13:43 very large and very sparse matrix
01:13:45 where the various components of the model
01:13:47 are distributed among the coefficients
01:13:50 in a way that is totally unobvious to anybody.
01:13:54 And so what you need is some way
01:13:57 to express the original model,
01:13:59 which you and I would write,
01:14:01 you know, we’d write mathematics on the board,
01:14:03 and the sum of this is greater
01:14:04 than the sum of that kind of thing.
01:14:06 So you need a language to write those kinds of constraints.
01:14:10 And Bob Fourer, for a long time,
01:14:12 had been interested in modeling languages,
01:14:14 languages that made it possible to do this.
01:14:16 There was a modeling language around called GAMS,
01:14:19 the General Algebraic Modeling System,
01:14:21 but it looked very much like Fortran.
01:14:22 It was kind of clunky.
01:14:24 And so Bob spent a sabbatical year at Bell Labs in 1984,
01:14:29 and he was in the office across from me,
01:14:32 and it’s always geography,
01:14:35 and he and Dave Gay and I started talking
01:14:38 about this kind of thing,
01:14:39 and he wanted to design a language that would make it
01:14:43 so that you could take these algebraic specifications,
01:14:46 you know, summation signs over sets,
01:14:48 and that you would write on the board
01:14:51 and convert them into basically this A matrix,
01:14:55 and then pass that off to a solver,
01:14:58 which is an entirely separate thing.
01:15:01 And so we talked about the design of the language.
01:15:05 I don’t remember any of the details of this now,
01:15:07 but it’s kind of an obvious thing.
01:15:08 You’re just writing out mathematical expressions
01:15:11 in a Fortran like, sorry,
01:15:13 an algebraic but textual like language.
01:15:15 And I wrote the first version of this AMPL program,
01:15:22 my first C++ program, and.
01:15:26 It’s written in C++?
01:15:27 Yeah.
01:15:28 And so I did that fairly quickly.
01:15:30 We wrote, it was, you know, 3,000 lines or something,
01:15:33 so it wasn’t very big,
01:15:34 but it sort of showed the feasibility of it
01:15:36 that you could actually do something that was easy
01:15:38 for people to specify models
01:15:41 and convert it into something that a solver could work with.
01:15:44 At the same time, as you say,
01:15:45 the model and the data are separate things.
01:15:47 So one model would then work with all kinds
01:15:50 of different data in the same way
01:15:51 that lots of programs do the same thing,
01:15:53 but with different data.
01:15:54 So one of the really nice things
01:15:55 is that the specification of the models,
01:15:58 just kind of like, as you say, is human readable.
01:16:01 Like I literally, I remember on stuff I worked,
01:16:04 I would send it to colleagues
01:16:07 that I’m pretty sure never programmed in their life,
01:16:10 just to understand what the optimization problem is.
01:16:15 I think, how hard is it to convert that?
01:16:18 You said there’s a first prototype in C++
01:16:20 to convert that into something
01:16:22 that could actually be used by the solver.
01:16:24 It’s not too bad,
01:16:25 because most of the solvers have some mechanism
01:16:27 that lets them import a model in a form.
01:16:30 It might be as simple as the matrix itself
01:16:32 in just some representation,
01:16:35 or if you’re doing things that are not linear programming,
01:16:38 then there may be some mechanism
01:16:39 that lets you provide things like functions to be called,
01:16:43 or other constraints on the model.
01:16:47 So all AMPL does is to generate that kind of thing,
01:16:51 and then solver deals with all the hard work,
01:16:54 and then when the solver comes back with numbers,
01:16:57 AMPL converts those back into your original form,
01:17:00 so you know how much of each thing you should be buying,
01:17:03 or making, or shipping, or whatever.
01:17:05 So we did that in 84, and I haven’t had a lot to do
01:17:11 with it since, except that we wrote a couple of versions
01:17:13 of a book on it.
01:17:14 Which is one of the greatest books ever written.
01:17:16 I love that book.
01:17:18 I don’t know why.
01:17:19 It’s an excellent book.
01:17:20 Bob Fourer wrote most of it,
01:17:22 and so it’s really, really well done.
01:17:23 He must have been a dynamite teacher.
01:17:25 And typeset in LaTeX.
01:17:27 No, no, no, are you kidding?
01:17:29 I remember liking the typography, so I don’t know.
01:17:32 We did it with troff.
01:17:34 I don’t even know what that is.
01:17:35 Yeah, exactly.
01:17:36 You’re too young.
01:17:37 Uh oh, oh boy.
01:17:38 I think of troff as a predecessor
01:17:42 to the TeX family of things.
01:17:44 It’s a formatter that was done at Bell Labs
01:17:46 in this same period of the very early 70s
01:17:49 that predates TeX and things like that
01:17:52 by five to 10 years.
01:17:54 But it was nevertheless, I’m going by memories.
01:17:58 I remember it being beautiful.
01:18:00 Yeah, it was nicely done.
01:18:01 Outside of Unix, C, AWK, AMPL, Go,
01:18:03 all the things we talked about.
01:18:05 All the amazing work you’ve done.
01:18:07 You’ve also done work in graph theory.
01:18:12 Let me ask this crazy out there question.
01:18:16 If you had to make a bet,
01:18:17 and I had to force you to make a bet,
01:18:19 do you think P equals NP?
01:18:23 The answer is no,
01:18:24 although I’m told that somebody asked Jeff Dean
01:18:27 if that was, under what conditions P would equal NP,
01:18:30 and he said either P is zero or N is one.
01:18:33 Or vice versa, I’ve forgotten.
01:18:35 This is why Jeff Dean is a lot smarter than I am.
01:18:38 Yeah.
01:18:40 So, but your intuition is, uh.
01:18:42 I have no, I have no intuition,
01:18:44 but I’ve got a lot of colleagues who’ve got intuition
01:18:46 and their betting is no.
01:18:48 That’s the popular, that’s the popular bet.
01:18:51 Okay, so what is computational complexity theory?
01:18:55 And do you think these kinds of complexity classes,
01:18:58 especially as you’ve taught in this modern world,
01:19:01 are still a useful way to understand
01:19:04 the hardness of problems?
01:19:06 I don’t do that stuff.
01:19:07 The last time I touched anything to do with that
01:19:09 was before. Many, many years ago.
01:19:10 Was before it was invented.
01:19:12 Because I, it’s literally true.
01:19:14 I did my PhD thesis on graph.
01:19:17 Before Big O notation.
01:19:18 Oh, absolutely.
01:19:19 Before, I did this in 1968,
01:19:24 and I worked on graph partitioning,
01:19:25 which is this question.
01:19:26 You’ve got a graph that is a nodes and edges kind of graph,
01:19:30 and the edges have weights,
01:19:31 and you just want to divide the nodes into two piles
01:19:34 of equal size so that the number of edges
01:19:36 that goes from one side to the other
01:19:38 is as small as possible.
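In symbols, using standard notation rather than anything from the thesis itself: given a graph with vertex set V, edge set E, and edge weights w, the problem is

```latex
\begin{aligned}
\min_{S \subseteq V} \quad & \sum_{\{u,v\} \in E,\; u \in S,\; v \notin S} w_{uv} \\
\text{subject to} \quad & |S| = |V|/2 ,
\end{aligned}
```

that is, split the vertices into two equal halves so that the total weight of the edges crossing between the halves is as small as possible.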
01:19:40 And we.
01:19:41 You developed, so that problem is hard.
01:19:45 Well, as it turns out,
01:19:47 I worked with Shen Lin at Bell Labs on this,
01:19:49 and we were never able to come up with anything
01:19:52 that was guaranteed to give the right answer.
01:19:54 We came up with heuristics that worked pretty darn well,
01:19:57 and I peeled off some special cases for my thesis,
01:20:01 but it was just hard.
01:20:02 And that was just about the time that Steve Cook
01:20:04 was showing that there were classes of problems
01:20:06 that appeared to be really hard,
01:20:08 of which graph partitioning was one.
01:20:10 But this, my expertise, such as it was,
01:20:13 totally predates that development.
01:20:16 Oh, interesting.
01:19:17 So the heuristic, which now
01:20:20 carries the two of your names
01:20:21 for the traveling salesman problem,
01:20:23 and then for the graph partitioning.
01:20:25 That was, like, how did you,
01:20:27 you weren’t even thinking in terms of classes.
01:20:29 You were just trying to find.
01:20:30 There was no such idea.
01:20:31 A heuristic that kinda does the job pretty well.
01:20:34 You were trying to find something that did the job,
01:20:36 and there was nothing that you would call,
01:20:38 let’s say, a closed form or algorithmic thing
01:20:41 that would give you a guaranteed right answer.
01:20:44 I mean, compare graph partitioning to max flow min cut,
01:20:48 or something like that.
01:20:50 That’s the same problem,
01:20:51 except there’s no constraint on the number of nodes
01:20:53 on one side or the other of the cut.
01:20:56 And that means it’s an easy problem,
01:20:58 at least as I understand it.
01:21:00 Whereas the constraint that says
01:21:01 the two have to be constrained in size
01:21:03 makes it a hard problem.
01:21:05 Yeah, so Robert Frost has that poem
01:21:07 where you have to choose between two paths.
01:21:09 So why did you,
01:21:12 is there another alternate universe
01:21:13 in which you pursued the Don Knuth path
01:21:16 of algorithm design, sort of?
01:21:19 Not smart enough.
01:21:21 Not smart enough.
01:21:25 You’re infinitely modest,
01:21:27 but so you pursued your kind of love of programming.
01:21:31 I mean, when you look back to those,
01:21:33 I mean, just looking into that world,
01:21:35 does that just seem like a distant world
01:21:37 of theoretical computer science?
01:21:40 Then is it fundamentally different
01:21:42 from the world of programming?
01:21:44 I don’t know.
01:21:45 I mean, certainly, in all seriousness,
01:21:47 I just didn’t have the talent for it.
01:21:49 When I got here as a grad student to Princeton
01:21:51 and I started to think about research
01:21:53 at the end of my, I don’t know,
01:21:55 first year or something like that,
01:21:56 I worked briefly with John Hopcroft,
01:21:59 who is absolutely, you know,
01:22:02 you mentioned, Turing Award winner, et cetera,
01:22:02 a great guy, and it became crystal clear
01:22:05 I was not cut out for this stuff, period, okay.
01:22:09 And so I moved into things
01:22:11 where I was more cut out for it,
01:22:13 and that tended to be things like writing programs
01:22:16 and then ultimately writing books.
01:22:20 You said that in Toronto as an undergrad,
01:22:22 you did a senior thesis or a literature survey
01:22:26 on artificial intelligence.
01:22:28 This was 1964.
01:22:30 Correct.
01:22:32 What was the AI landscape, ideas, dreams at that time?
01:22:37 I think that was one of the,
01:22:39 well, you’ve heard of AI winters.
01:22:40 This is whatever the opposite was,
01:22:41 AI summer or something.
01:22:43 It was one of these things where people thought
01:22:46 that, boy, we could do anything with computers,
01:22:49 that all these hard problems, we could,
01:22:51 computers will solve them.
01:22:52 They will do machine translation.
01:22:54 They will play games like chess.
01:22:57 They will do, you know, prove theorems in geometry.
01:23:02 There are all kinds of examples like that
01:23:04 where people thought, boy,
01:23:06 we could really do those sorts of things.
01:23:09 And, you know, I drank the Kool Aid in some sense.
01:23:14 There’s a wonderful collection of papers
01:23:16 called Computers and Thought that was published
01:23:18 in about that era and people were very optimistic.
01:23:22 And then of course it turned out that
01:23:24 what people thought was just a few years down the pike
01:23:28 was more than a few years down the pike.
01:23:31 And some parts of that are more or less now
01:23:34 sort of under control.
01:23:36 We finally do play games like Go and chess
01:23:38 and so on better than people do,
01:23:41 but there are others and machine translation
01:23:43 is a lot better than it used to be,
01:23:45 but that’s, you know, 50, close to 60 years of progress
01:23:49 and a lot of evolution in hardware
01:23:51 and a tremendous amount more data up on which
01:23:53 you can build systems that actually can learn
01:23:57 from some of that data.
01:23:58 And the infrastructure to support developers
01:24:02 working together, like an open source movement,
01:24:05 the internet, period, is also empowering.
01:24:08 But what lessons do you draw from that,
01:24:11 the opposite of winter, that optimism?
01:24:14 Well, I guess the lesson is that in the short run
01:24:19 it’s pretty easy to be too pessimistic
01:24:23 or maybe too optimistic and in the long run
01:24:25 you probably shouldn’t be too pessimistic.
01:24:27 I’m not saying that very well.
01:24:28 It reminds me of this remark from Arthur Clarke,
01:24:32 a science fiction author, who says, you know,
01:24:34 when some distinguished but elderly person
01:24:36 says that something is possible, he’s probably right.
01:24:41 And if he says it’s impossible, he’s almost surely wrong.
01:24:44 But you don’t know what the time scale is.
01:24:45 The time scale is critical, right.
01:24:48 So what are your thoughts on this new summer of AI
01:24:52 now in the work with machine learning and neural networks?
01:24:55 You’ve kind of mentioned that you started to try to explore
01:24:57 and look into this world that seems fundamentally different
01:25:01 from the world of heuristics and algorithms like search,
01:25:06 that it’s now purely sort of trying to take
01:25:08 huge amounts of data and learn from that data, right,
01:25:12 programs from the data.
01:25:14 Yeah, look, I think it’s very interesting.
01:25:17 I am incredibly far from an expert.
01:25:19 Most of what I know I’ve learned from my students
01:25:21 and they’re probably disappointed
01:25:24 in how little I’ve learned from them.
01:25:26 But I think it has tremendous potential
01:25:29 for certain kinds of things.
01:25:30 I mean, games is one where it obviously has had an effect
01:25:34 on some of the others as well.
01:25:36 I think there’s, and this is speaking from
01:25:39 definitely not expertise,
01:25:40 I think there are serious problems
01:25:42 in certain kinds of machine learning at least
01:25:45 because what they’re learning from
01:25:47 is the data that we give them.
01:25:49 And if the data we give them has something wrong with it,
01:25:52 then what they learn from it is probably wrong too.
01:25:54 And the obvious thing is some kind of bias in the data.
01:25:59 That the data has stuff in it like, I don’t know,
01:26:02 women aren’t as good as men at something, okay.
01:26:05 That’s just flat wrong.
01:26:07 But if it’s in the data because of historical treatment,
01:26:11 then that machine learning stuff will propagate that.
01:26:15 And that is a serious worry.
01:26:18 The positive part of that is what machine learning does
01:26:22 is reveal the bias in the data
01:26:24 and puts a mirror to our own society.
01:26:27 And in so doing helps us remove the bias,
01:26:30 you know, helps us work on ourselves.
01:26:33 Puts a mirror to ourselves.
01:26:35 Yeah, that’s an optimistic point of view.
01:26:37 And if it works that way, that would be absolutely great.
01:26:40 And what I don’t know is whether it does work that way
01:26:42 or whether the AI mechanisms
01:26:46 or machine learning mechanisms reinforce
01:26:49 and amplify things that have been wrong in the past.
01:26:52 And I don’t know, but I think that’s a serious thing
01:26:56 that we have to be concerned about.
01:26:58 Let me ask you an out there question, okay.
01:27:01 I know nobody knows, but what do you think it takes
01:27:03 to build a system of human level intelligence?
01:27:07 That’s been the dream from the 60s.
01:27:09 We talk about games, about language,
01:27:12 about image recognition, but really the dream
01:27:16 is to create human level or superhuman level intelligence.
01:27:19 What do you think it takes to do that?
01:27:21 And are we close?
01:27:23 I haven’t a clue and I don’t know, roughly speaking.
01:27:26 I mean, this was Turing.
01:27:27 I was trying to trick you into a hypothesis.
01:27:30 Yeah, I mean, Turing talked about this
01:27:31 in his paper on machine intelligence back in, geez,
01:27:34 I don’t know, early 50s or something like that.
01:27:36 And he had the idea of the Turing test.
01:27:38 And I don’t know whether the Turing test
01:27:41 is a good test of intelligence.
01:27:41 I don’t know.
01:27:42 It’s an interesting test.
01:27:43 At least it’s in some vague sense objective,
01:27:45 whether you can read anything into the conclusions
01:27:48 is a different story.
01:27:50 Do you have worries, concerns, excitement
01:27:55 about the future of artificial intelligence?
01:27:57 So there’s a lot of people who are worried
01:27:58 and you can speak broadly
01:28:00 than just artificial intelligence.
01:28:01 It’s basically computing taking over the world
01:28:05 in various forms.
01:28:06 Are you excited by this future,
01:28:09 this possibility of computing being everywhere
01:28:12 or are you worried?
01:28:14 It’s some combination of those.
01:28:16 I think almost all technologies over the long run
01:28:21 are for good, but there’s plenty of examples
01:28:24 where they haven’t been good either over a long run
01:28:27 for some people or over a short run.
01:28:30 And computing is one of those.
01:28:33 And AI within it is gonna be one of those as well,
01:28:36 but computing broadly.
01:28:37 I mean, for just a today example, there is privacy,
01:28:41 that the use of things like social media and so on,
01:28:46 and the commercial surveillance,
01:28:49 means that there’s an enormous amount more known about us
01:28:52 by people, other businesses, government, whatever,
01:28:56 than perhaps one ought to feel comfortable with.
01:28:59 So that’s an example.
01:29:04 So that’s an example of a possible negative effect
01:29:07 of computing being everywhere.
01:29:09 It’s an interesting one
01:29:11 because it could also be a positive, if leveraged correctly.
01:29:16 There’s a big if there.
01:29:18 So I have a deep interest in human psychology
01:29:22 and humans seem to be very paranoid about this data thing
01:29:27 that varies depending on age group.
01:29:31 It seems like the younger folks.
01:29:32 So it’s exciting to me to see what society looks like
01:29:35 50 years from now, that the concerns about privacy
01:29:39 might be flipped on their head
01:29:40 based purely on human psychology
01:29:42 versus actual concerns or not.
01:29:47 What do you think about Moore’s Law?
01:29:49 Well, you said a lot of stuff we’ve talked,
01:29:52 you talked about programming languages in their design,
01:29:55 in their ideas that come from the constraints
01:29:58 in the systems they operate in.
01:30:00 Do you think Moore’s Law,
01:30:04 the exponential improvement of systems
01:30:07 will continue indefinitely?
01:30:08 There’s a mix of opinions on that currently,
01:30:12 or do you think there’ll be a plateau?
01:30:19 Well, the frivolous answer is that no exponential
01:30:21 can go on forever.
01:30:24 You run out of something.
01:30:26 Just as we said, timescale matters.
01:30:27 So if it goes on long enough, that might be all we need.
01:30:30 Yeah, right, won’t matter to us.
01:30:33 So I don’t know, we’ve seen places
01:30:34 where Moore’s Law has changed.
01:30:35 For example, mentioned earlier,
01:30:37 processors don’t get faster anymore,
01:30:41 but you use that same growth of the ability
01:30:46 to put more things in a given area
01:30:48 to grow them horizontally instead of vertically as it were
01:30:51 so you can get more and more processors
01:30:52 or memory or whatever on the same chip.
01:30:55 Is that gonna run into a limitation?
01:30:57 Presumably, because at some point
01:31:00 you get down to the individual atoms.
01:31:03 And so you gotta find some way around that.
01:31:05 Will we find some way around that?
01:31:07 I don’t know, I just said that if I say it won’t,
01:31:10 I’ll be wrong, so perhaps we will.
01:31:12 So I just talked to Jim Keller and he says,
01:31:15 so he actually describes, he argues
01:31:16 that the Moore’s Law will continue for a long, long time
01:31:19 because you mentioned the atom.
01:31:21 We actually have, I think, a thousandfold
01:31:25 decrease in size still possible
01:31:30 before we get to the quantum level.
01:31:32 So there’s still a lot of possibilities.
01:31:34 He thinks he’ll continue indefinitely,
01:31:36 which is an interesting optimistic viewpoint.
01:31:40 But how do you think the programming languages
01:31:43 will change with this increase?
01:31:45 Whether we hit a wall or not,
01:31:47 what do you think, do you think there’ll be
01:31:50 a fundamental change in the way
01:31:51 programming languages are designed?
01:31:54 I don’t know about that.
01:31:55 I think what will happen is continuation
01:31:58 of what we see in some areas, at least,
01:32:02 which is that more programming will be done
01:32:05 by programs than by people, and that more will be done
01:32:11 by sort of declarative rather than procedural mechanisms
01:32:14 where I’ll say, I want this to happen.
01:32:17 You figure out how.
01:32:19 And that is, in many cases, at this point,
01:32:24 domain of specialized languages for narrow domains,
01:32:28 but you can imagine that broadening out.
01:32:31 And so I don’t have to say so much, in so much detail,
01:32:35 some collection of software, let’s call it languages
01:32:39 or programs or something, will figure out
01:32:42 how to do what I want to do.
01:32:44 Interesting, so increased levels of abstraction.
01:32:47 Yeah.
01:32:48 And one day getting to the human level,
01:32:51 where we can just use natural language.
01:32:52 Could be possible.
01:32:54 So you taught, so teach a course,
01:32:56 Computers in Our World, here at Princeton,
01:32:59 that introduces computing and programming to nonmajors.
01:33:03 What, just from that experience,
01:33:06 what advice do you have for people
01:33:08 who don’t know anything about programming
01:33:10 but are kind of curious about this world,
01:33:12 or programming seems to become more and more
01:33:14 of a fundamental skill that people need to be
01:33:17 at least aware of?
01:33:18 Yeah, well, I could recommend a good book.
01:33:20 What’s that?
01:33:22 The book I wrote for the course.
01:33:24 I think this is one of these questions of,
01:33:26 should everybody know how to program?
01:33:28 And I think the answer is probably not,
01:33:31 but I think everybody should at least understand
01:33:33 sort of what it is, so that if you say to somebody,
01:33:35 I’m a programmer, they have a notion of what that might be,
01:33:38 or if you say this is a program,
01:33:40 or this was decided by a computer running a program,
01:33:43 that they have some vague intuitive understanding
01:33:47 and accurate understanding of what that might imply.
01:33:52 So part of what I’m doing in this course,
01:33:55 which is very definitely for nontechnical people,
01:33:57 and a typical person in it is a history or English major,
01:34:01 try and explain how computers work,
01:34:03 how they do their thing, what programming is,
01:34:06 how you write a program,
01:34:08 and how computers talk to each other,
01:34:11 and what do they do when they’re talking to each other.
01:34:14 And then I would say nobody, or very rarely,
01:34:19 does anybody in that course go on
01:34:21 to become a real serious programmer,
01:34:24 but at least they’ve got a somewhat better idea
01:34:27 of what all this stuff is about, not just the programming,
01:34:29 but the technology behind computers and communications.
01:34:32 Do they try and write a program themselves?
01:34:35 Oh yeah, yeah, a very small amount.
01:34:38 I introduce them to how machines work at a level below
01:34:42 high level languages, so we have kind of a toy machine
01:34:45 that has a very small repertoire, a dozen instructions,
01:34:47 and they write trivial assembly language programs for that.
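For flavor, here is a minimal sketch in Go of that kind of toy machine; the instruction names, the accumulator design, and the little add-two-numbers program are invented for illustration and are not the actual machine used in the course.

```go
package main

import "fmt"

// A hypothetical toy machine: one accumulator, a small memory, and a
// handful of instructions. The repertoire here is invented for
// illustration only.
type instr struct {
	op   string
	addr int
}

func run(prog []instr, mem []int) {
	acc := 0
	for pc := 0; pc < len(prog); pc++ {
		switch in := prog[pc]; in.op {
		case "LOAD":
			acc = mem[in.addr]
		case "ADD":
			acc += mem[in.addr]
		case "STORE":
			mem[in.addr] = acc
		case "PRINT":
			fmt.Println(acc)
		case "STOP":
			return
		}
	}
}

func main() {
	mem := []int{3, 4, 0} // two inputs and a result cell
	prog := []instr{
		{"LOAD", 0},  // acc = mem[0]
		{"ADD", 1},   // acc += mem[1]
		{"STORE", 2}, // mem[2] = acc
		{"PRINT", 0}, // print the accumulator
		{"STOP", 0},
	}
	run(prog, mem)
}
```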
01:34:51 Wow, that’s interesting.
01:34:52 So can you just, if you were to give a flavor
01:34:55 to people of the programming world,
01:34:57 of the computing world,
01:34:59 what are the examples they should go with?
01:35:01 So a little bit of assembly to get a sense
01:35:04 at the lowest level of what the program is really doing.
01:35:08 Yeah, I mean, in some sense,
01:35:10 there’s no such thing as the lowest level
01:35:12 because you can keep going down,
01:35:13 but that’s the place where I drew the line.
01:35:15 So the idea that computers have a fairly small repertoire
01:35:19 of very simple instructions that they can do,
01:35:21 like add and subtract and branch and so on,
01:35:25 as you mentioned earlier,
01:35:27 and that you can write code at that level
01:35:31 and it will get things done,
01:35:33 and then you have the levels of abstraction
01:35:35 that we get with higher level languages,
01:35:37 like Fortran or C or whatever,
01:35:39 and that makes it easier to write the code
01:35:42 and less dependent on particular architectures.
01:35:45 And then we talk about a lot of the different kinds
01:35:48 of programs that they use all the time
01:35:50 that they don’t probably realize are programs,
01:35:52 like they’re running Mac OS on their computers
01:35:57 or maybe Windows, and they’re downloading apps
01:36:00 on their phones, and all of those things are programs
01:36:03 that are just what we just talked about,
01:36:05 except at a grand scale.
01:36:08 And it’s easy to forget that they’re actual programs
01:36:10 that people program.
01:36:11 There’s engineers that wrote those things.
01:36:14 Yeah, right.
01:36:14 And so in a way, I’m expecting them
01:36:18 to make an enormous conceptual leap
01:36:20 from their five or 10 line toy assembly language thing
01:36:24 that adds two or three numbers to something
01:36:28 that is a browser on their phone or whatever,
01:36:31 but it’s really the same thing.
01:36:34 So if you look in broad strokes at history,
01:36:38 what do you think the world,
01:36:39 how do you think the world changed because of computers?
01:36:42 It’s hard to sometimes see the big picture
01:36:45 when you’re in it, but I guess I’m asking
01:36:48 if there’s something you’ve noticed over the years
01:36:51 that, like you were mentioning,
01:36:54 the students are more distracted looking at their,
01:36:56 now there’s a device to look at.
01:36:58 Right.
01:36:59 I think computing has changed a tremendous amount,
01:37:01 obviously, but I think one aspect of that
01:37:03 is the way that people interact with each other,
01:37:06 both locally and far away.
01:37:08 And when I was the age of those kids,
01:37:12 making a phone call to somewhere was a big deal
01:37:15 because it costs serious money.
01:37:17 And this was in the 60s, right?
01:37:20 And today people don’t make phone calls,
01:37:22 they send texts or something like that.
01:37:25 So there’s an up and down in what people do.
01:37:29 People think nothing of having correspondence,
01:37:34 regular meetings, video, whatever,
01:37:36 with friends or family or whatever
01:37:38 in any other part of the world,
01:37:40 and they don’t think about that at all.
01:37:43 And so that’s just the communication aspect of it.
01:37:49 Do you think that brings us closer together
01:37:51 or does it make us,
01:37:53 does it take us away from the closeness
01:37:57 of human to human contact?
01:37:59 I think it depends a lot on all kinds of things.
01:38:02 So I trade mail with my brother and sister in Canada
01:38:05 much more often than I used to talk to them on the phone.
01:38:08 So probably every two or three days,
01:38:10 I get something or send something to them.
01:38:14 Whereas 20 years ago,
01:38:16 I probably wouldn’t have talked to them
01:38:19 on the phone nearly as much.
01:38:20 So in that sense, that’s brought my brother and sister
01:38:23 and I closer together.
01:38:24 That’s a good thing.
01:38:25 I watch the kids on campus
01:38:28 and they’re mostly walking around with their heads down,
01:38:30 fooling with their phones
01:38:32 to the point where I have to duck them.
01:38:34 I don’t know that that has brought them closer together
01:38:39 in some ways.
01:38:40 There’s sociological research that says people are,
01:38:43 in fact, not as close together as they used to be.
01:38:46 I don’t know whether that’s really true,
01:38:47 but I can see potential downsides
01:38:50 and kids where you think,
01:38:53 come on, wake up and smell the coffee or whatever.
01:38:56 That’s right.
01:38:57 But if you look at, again, nobody can predict the future,
01:39:00 but are you excited?
01:39:02 Kind of touched this a little bit with AI,
01:39:04 but are you excited by the future in the next 10, 20 years
01:39:08 that computing will bring?
01:39:11 You were there when there was no computers really.
01:39:15 And now computers are everywhere all over the world
01:39:19 and Africa and Asia and just every person,
01:39:23 almost every person in the world has a device.
01:39:25 So are you hopeful, optimistic about that future?
01:39:30 It’s mixed, if the truth be told.
01:39:32 I mean, I think there are some things about that
01:39:34 that are good.
01:39:34 I think there’s the potential for people
01:39:36 to improve their lives all over the place
01:39:39 and that’s obviously good.
01:39:40 And at the same time, at least in the short run,
01:39:44 you can see lots and lots of bad
01:39:45 as people become more tribalistic or parochial
01:39:49 in their interests and it’s an enormous amount
01:39:51 more us than them and people are using computers
01:39:54 in all kinds of ways to mislead or misrepresent
01:39:58 or flat out lie about what’s going on
01:39:59 and that is affecting politics locally
01:40:02 and I think everywhere in the world.
01:40:05 Yeah, the long term effect on political systems
01:40:08 and so on is who knows.
01:40:10 Who knows indeed.
01:40:11 The people now have a voice which is a powerful thing.
01:40:18 People who are oppressed have a voice
01:40:21 but also everybody has a voice
01:40:24 and the chaos that emerges from that
01:40:25 is fascinating to watch.
01:40:26 Yeah, yeah, it’s kind of scary.
01:40:30 If you can go back and relive a moment in your life,
01:40:33 one that made you truly happy outside of family
01:40:37 or was profoundly transformative,
01:40:40 is there a moment or moments that jump out at you
01:40:44 from memory?
01:40:46 I don’t think specific moments.
01:40:48 I think there were lots and lots and lots of good times
01:40:50 at Bell Labs where you would build something
01:40:52 and it worked.
01:40:55 Did you say it worked?
01:40:56 So the moment it worked.
01:40:57 Yeah, and somebody used it and they said,
01:41:00 gee, that’s neat.
01:41:01 Those kinds of things happened quite often
01:41:04 in that sort of golden era in the 70s when Unix was young
01:41:09 and there was all this low hanging fruit
01:41:11 and interesting things to work on
01:41:13 and a group of people who kind of,
01:41:16 we were all together in this and if you did something,
01:41:18 they would try it out for you.
01:41:20 And I think that was in some sense,
01:41:22 a really, really good time.
01:41:24 And AWK was, was AWK an example of that?
01:41:27 That when you built it and people used it?
01:41:29 Yeah, absolutely.
01:41:30 And now millions of people use it.
01:41:32 And all your stupid mistakes are right there
01:41:34 for them to look at, right?
01:41:36 So it’s mixed.
01:41:37 Yeah, it’s terrifying, vulnerable
01:41:39 but it’s beautiful because it does have a positive impact
01:41:42 on so, so many people.
01:41:43 So I think there’s no better way to end it.
01:41:47 Brian, thank you so much for talking to us, it was an honor.
01:41:49 Okay, my pleasure.
01:41:51 Good fun.
01:41:52 Thank you for listening to this conversation
01:41:55 with Brian Kernighan and thank you to our sponsors,
01:41:58 8 Sleep Mattress and Raycon Earbuds.
01:42:02 Please consider supporting this podcast
01:42:05 by going to 8sleep.com slash Lex and to buyraycon.com
01:42:10 slash Lex, click the links, buy the stuff.
01:42:14 These both are amazing products.
01:42:16 It really is the best way to support this podcast
01:42:19 and the journey I’m on.
01:42:21 It’s how they know I sent you and increases the chance
01:42:24 that they’ll actually support this podcast in the future.
01:42:27 If you enjoy this thing, subscribe on YouTube,
01:42:30 review it with five stars on Apple Podcasts,
01:42:32 support it on Patreon or connect with me on Twitter
01:42:35 at Lex Friedman, spelled somehow miraculously
01:42:40 without the letter E, just F R I D M A N
01:42:44 because when we immigrated to this country,
01:42:46 we were not so good at spelling.
01:42:49 And now let me leave you with some words
01:42:51 from Brian Kernighan, don’t comment bad code, rewrite it.
01:42:56 Thank you for listening and hope to see you next time.