Transcript
00:00:00 The following is a conversation with Richard Karp,
00:00:02 a professor at Berkeley and one of the most important figures
00:00:06 in the history of theoretical computer science.
00:00:09 In 1985, he received the Turing Award
00:00:12 for his research in the theory of algorithms,
00:00:15 including the development of the Edmonds-Karp algorithm
00:00:18 for solving the max flow problem on networks,
00:00:21 the Hopcroft-Karp algorithm for finding maximum cardinality
00:00:25 matchings in bipartite graphs,
00:00:27 and his landmark paper in complexity theory
00:00:30 called Reducibility Among Combinatorial Problems,
00:00:35 in which he proved 21 problems to be NP-complete.
00:00:38 This paper was probably the most important catalyst
00:00:41 in the explosion of interest in the study of NP-completeness
00:00:45 and the P versus NP problem in general.
00:00:48 Quick summary of the ads.
00:00:50 Two sponsors, 8sleep mattress and Cash App.
00:00:53 Please consider supporting this podcast
00:00:55 by going to 8sleep.com slash Lex
00:00:58 and downloading Cash App and using code LexPodcast.
00:01:03 Click the links, buy the stuff.
00:01:04 It really is the best way to support this podcast.
00:01:08 If you enjoy this thing, subscribe on YouTube,
00:01:10 review it with five stars on Apple Podcast,
00:01:12 support it on Patreon,
00:01:13 or connect with me on Twitter at Lex Fridman.
00:01:16 As usual, I’ll do a few minutes of ads now
00:01:18 and never any ads in the middle
00:01:20 that can break the flow of the conversation.
00:01:22 This show is sponsored by 8sleep and its Pod Pro mattress
00:01:27 that you can check out at 8sleep.com slash Lex
00:01:30 to get $200 off.
00:01:32 It controls temperature with an app.
00:01:35 It can cool down to as low as 55 degrees
00:01:38 on each side of the bed separately.
00:01:41 Research shows that temperature has a big impact
00:01:43 on the quality of our sleep.
00:01:45 Anecdotally, it’s been a game changer for me.
00:01:47 I love it.
00:01:48 It’s been a couple of weeks now.
00:01:50 I’ve just been really enjoying it,
00:01:52 both in the fact that I’m getting better sleep
00:01:54 and that it’s a smart mattress, essentially.
00:01:58 I kind of imagine this being the early days
00:02:00 of artificial intelligence being a part
00:02:02 of every aspect of our lives.
00:02:04 And certainly infusing AI in one of the most important
00:02:07 aspects of life, which is sleep,
00:02:09 I think has a lot of potential for being beneficial.
00:02:13 The Pod Pro is packed with sensors that track heart rate,
00:02:16 heart rate variability, and respiratory rate,
00:02:19 showing it all in their app.
00:02:21 The app’s health metrics are amazing,
00:02:24 but the cooling alone is honestly worth the money.
00:02:27 I don’t always sleep, but when I do,
00:02:29 I choose the 8sleep Pod Pro mattress.
00:02:31 Check it out at 8sleep.com slash Lex to get $200 off.
00:02:37 And remember, just visiting the site
00:02:39 and considering the purchase helps convince the folks
00:02:42 at 8sleep that this silly old podcast
00:02:44 is worth sponsoring in the future.
00:02:47 This show is also presented by the great
00:02:50 and powerful Cash App,
00:02:53 the number one finance app in the App Store.
00:02:55 When you get it, use code LEXPODCAST.
00:02:58 Cash App lets you send money to friends,
00:03:00 buy Bitcoin, and invest in the stock market
00:03:02 with as little as $1.
00:03:04 It’s one of the best designed interfaces
00:03:06 of an app that I’ve ever used.
00:03:08 To me, good design is when everything is easy and natural.
00:03:12 Bad design is when the app gets in the way,
00:03:15 either because it’s buggy,
00:03:16 or because it tries too hard to be helpful.
00:03:19 I’m looking at you, Clippy, from Microsoft,
00:03:21 even though I love you.
00:03:23 Anyway, there’s a big part of my brain and heart
00:03:25 that loves to design things
00:03:27 and also to appreciate great design by others.
00:03:30 So again, if you get Cash App from the App Store
00:03:32 or Google Play and use the code LEXPODCAST,
00:03:35 you get $10, and Cash App will also donate $10 to FIRST,
00:03:39 an organization that is helping to advance
00:03:41 robotics and STEM education
00:03:43 for young people around the world.
00:03:45 And now, here’s my conversation with Richard Karp.
00:03:50 You wrote that at the age of 13,
00:03:52 you were first exposed to plane geometry
00:03:55 and were wonderstruck by the power and elegance
00:03:58 of formal proofs.
00:04:00 Are there problems, proofs, properties, ideas
00:04:02 in plane geometry that from that time
00:04:04 that you remember being mesmerized by
00:04:07 or just enjoyed going through to prove various aspects?
00:04:12 So Michael Rabin told me this story
00:04:16 about an experience he had when he was a young student
00:04:20 who was tossed out of his classroom for bad behavior
00:04:25 and was wandering through the corridors of his school
00:04:29 and came upon two older students
00:04:32 who were studying the problem of finding
00:04:35 the shortest distance between two nonoverlapping circles.
00:04:42 And Michael thought about it and said,
00:04:49 you take the straight line between the two centers
00:04:52 and the segment between the two circles is the shortest
00:04:56 because a straight line is the shortest distance
00:04:58 between the two centers.
00:05:00 And any other line connecting the circles
00:05:03 would lie along a longer path.
00:05:07 And I thought, and he thought, and I agreed
00:05:10 that this was just elegant, that pure reasoning
00:05:14 could come up with such a result.
00:05:17 Certainly the shortest distance
00:05:21 from the two centers of the circles is a straight line.
00:05:25 Could you once again say what’s the next step in that proof?
00:05:29 Well, any segment joining the two circles,
00:05:36 if you extend it by taking the radius on each side,
00:05:41 you get a path with three edges
00:05:46 which connects the two centers.
00:05:49 And this has to be at least as long as the shortest path,
00:05:52 which is the straight line.
00:05:53 The straight line, yeah.
00:05:54 Wow, yeah, that’s quite simple.
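Rabin's argument can be checked numerically: the shortest distance between two non-overlapping circles is the center-to-center distance minus the two radii, and no pair of boundary points is ever closer. A minimal sketch, with circle coordinates and radii chosen only for illustration:

```python
import math

def circle_gap(c1, r1, c2, r2):
    """Shortest distance between two non-overlapping circles:
    center-to-center distance minus the two radii."""
    return math.dist(c1, c2) - r1 - r2

# Hypothetical circles chosen only for illustration.
c1, r1 = (0.0, 0.0), 1.0
c2, r2 = (5.0, 0.0), 2.0
gap = circle_gap(c1, r1, c2, r2)
print(gap)  # 2.0

# Sanity check mirroring the three-edge-path argument: sampled pairs of
# boundary points are never closer than the gap along the center line.
best = min(
    math.dist(
        (c1[0] + r1 * math.cos(a), c1[1] + r1 * math.sin(a)),
        (c2[0] + r2 * math.cos(b), c2[1] + r2 * math.sin(b)),
    )
    for a in (i * 0.05 for i in range(126))
    for b in (i * 0.05 for i in range(126))
)
```

The sampled minimum `best` always comes out at or above `gap`, as the proof predicts.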
00:05:58 So what is it about that elegance
00:06:00 that you just find compelling?
00:06:04 Well, just that you could establish a fact
00:06:09 about geometry beyond dispute by pure reasoning.
00:06:18 I also enjoy the challenge of solving puzzles
00:06:22 in plane geometry.
00:06:23 It was much more fun than the earlier mathematics courses
00:06:27 which were mostly about arithmetic operations
00:06:31 and manipulating them.
00:06:32 Was there something about geometry itself,
00:06:35 the slightly visual component of it?
00:06:38 Oh, yes, absolutely,
00:06:40 although I lacked three dimensional vision.
00:06:44 I wasn’t very good at three dimensional vision.
00:06:47 You mean being able to visualize three dimensional objects?
00:06:49 Three dimensional objects or surfaces,
00:06:54 hyperplanes and so on.
00:06:57 So there I didn’t have an intuition.
00:07:01 But for example, the fact that the sum of the angles
00:07:06 of a triangle is 180 degrees is proved convincingly.
00:07:16 And it comes as a surprise that that can be done.
00:07:21 Why is that surprising?
00:07:23 Well, it is a surprising idea, I suppose.
00:07:30 Why is that proved difficult?
00:07:32 It’s not, that’s the point.
00:07:34 It’s so easy and yet it’s so convincing.
00:07:37 Do you remember what is the proof that it adds up to 180?
00:07:42 You start at a corner and draw a line
00:07:47 parallel to the opposite side.
00:07:56 And that line sort of trisects the angle
00:08:02 between the other two sides.
00:08:05 And you get a half plane which has to add up to 180 degrees.
00:08:10 It has to add up to 180 degrees and it consists
00:08:15 of the three angles by the equality of alternate angles.
00:08:21 What’s it called?
00:08:24 You get a correspondence between the angles
00:08:27 created along the side of the triangle
00:08:31 and the three angles of the triangle.
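The angle-sum fact Karp describes is also easy to verify numerically: compute each interior angle from the side vectors and add them up. A minimal sketch, with arbitrary example vertices (not from the conversation):

```python
import math

def interior_angles(A, B, C):
    """Interior angles of triangle ABC, in degrees."""
    def angle_at(P, Q, R):
        # Angle at vertex P between the rays P->Q and P->R.
        v1 = (Q[0] - P[0], Q[1] - P[1])
        v2 = (R[0] - P[0], R[1] - P[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))
    return angle_at(A, B, C), angle_at(B, C, A), angle_at(C, A, B)

angles = interior_angles((0, 0), (4, 0), (1, 3))
print(sum(angles))  # 180, up to floating-point error
```

Any non-degenerate triangle gives the same sum, which is exactly what the parallel-line proof guarantees.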
00:08:34 Has geometry had an impact on when you look into the future
00:08:38 of your work with combinatorial algorithms?
00:08:41 Has it had some kind of impact in terms of, yeah,
00:08:45 being able, the puzzles, the visual aspects
00:08:48 that were first so compelling to you?
00:08:51 Not Euclidean geometry particularly.
00:08:54 I think I use tools like linear programming
00:09:00 and integer programming a lot.
00:09:03 But those require high dimensional visualization
00:09:08 and so I tend to go by the algebraic properties.
00:09:13 Right, you go by the linear algebra
00:09:16 and not by the visualization.
00:09:19 Well, the interpretation in terms of, for example,
00:09:23 finding the highest point on a polyhedron
00:09:26 as in linear programming is motivating.
00:09:32 But again, I don’t have the high dimensional intuition
00:09:37 that would particularly inform me
00:09:40 so I sort of lean on the algebra.
00:09:44 So to linger on that point,
00:09:46 what kind of visualization do you do
00:09:50 when you’re trying to think about,
00:09:53 we’ll get to combinatorial algorithms,
00:09:55 but just algorithms in general.
00:09:57 Yeah.
00:09:58 What’s inside your mind
00:09:59 when you’re thinking about designing algorithms?
00:10:02 Or even just tackling any mathematical problem?
00:10:09 Well, I think that usually an algorithm
00:10:12 involves a repetition of some inner loop
00:10:19 and so I can sort of visualize the distance
00:10:24 from the desired solution as iteratively reducing
00:10:29 until you finally hit the exact solution.
00:10:33 And try to take steps that get you closer to the.
00:10:35 Try to take steps that get closer
00:10:38 and having the certainty of converging.
00:10:41 So it’s basically the mechanics of the algorithm
00:10:46 is often very simple,
00:10:49 but especially when you’re trying something out
00:10:52 on the computer.
00:10:53 So for example, I did some work
00:10:57 on the traveling salesman problem
00:10:59 and I could see there was a particular function
00:11:03 that had to be minimized
00:11:04 and it was fascinating to see the successive approaches
00:11:08 to the minimum, to the optimum.
00:11:11 You mean, so first of all,
00:11:13 the traveling salesman problem is where you have to visit
00:11:16 every city, visiting each only once.
00:11:21 Yeah, that’s right.
00:11:22 Find the shortest path through a set of cities.
00:11:25 Yeah, which is sort of a canonical standard,
00:11:28 a really nice problem that’s really hard.
00:11:30 Right, exactly, yes.
00:11:32 So can you say again what was nice
00:11:34 about being able to think about the objective function there
00:11:38 and maximizing it or minimizing it?
00:11:41 Well, just that as the algorithm proceeded,
00:11:47 you were making progress, continual progress,
00:11:49 and eventually getting to the optimum point.
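The monotone progress Karp describes can be illustrated with a toy exhaustive search. This is not his method, just a sketch that records the objective each time a better tour is found, on hypothetical city coordinates chosen for illustration:

```python
import itertools
import math

# Hypothetical city coordinates, chosen only for illustration.
cities = [(0, 0), (3, 1), (1, 4), (5, 2), (2, 2)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

# Exhaustive search: record the objective every time it improves,
# so you can watch the gap to the optimum shrink step by step.
best, trace = math.inf, []
for perm in itertools.permutations(range(len(cities))):
    length = tour_length(perm)
    if length < best:
        best = length
        trace.append(round(length, 3))
print(trace)  # successive improvements, ending at the optimum
```

Brute force only works for a handful of cities, of course; the point is the shrinking sequence of objective values, which is the picture of "continual progress" described above.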
00:11:54 So there’s two parts, maybe.
00:11:57 Maybe you can correct me.
00:11:58 First is like getting an intuition
00:12:00 about what the solution would look like
00:12:02 and or even maybe coming up with a solution
00:12:05 and two is proving that this thing
00:12:07 is actually going to be pretty good.
00:12:10 What part is harder for you?
00:12:13 Where’s the magic happen?
00:12:14 Is it in the first sets of intuitions
00:12:17 or is it in the messy details of actually showing
00:12:22 that it is going to get to the exact solution
00:12:25 and it’s gonna run at a certain complexity?
00:12:32 Well, the magic is just the fact
00:12:34 that the gap from the optimum decreases monotonically
00:12:42 and you can see it happening
00:12:44 and various metrics of what’s going on
00:12:48 are improving all along until finally you hit the optimum.
00:12:53 Perhaps later we’ll talk about the assignment problem
00:12:56 and I can illustrate.
00:12:58 It illustrates a little better.
00:13:00 Now zooming out again, as you write,
00:13:03 Don Knuth has called attention to a breed of people
00:13:06 who derive great aesthetic pleasure
00:13:10 from contemplating the structure of computational processes.
00:13:13 So Don calls these folks geeks
00:13:16 and you write that you remember the moment
00:13:18 you realized you were such a person,
00:13:20 you were shown the Hungarian algorithm
00:13:23 to solve the assignment problem.
00:13:25 So perhaps you can explain what the assignment problem is
00:13:29 and what the Hungarian algorithm is.
00:13:33 So in the assignment problem,
00:13:35 you have n boys and n girls
00:13:40 and you are given the desirability of,
00:13:45 or the cost of matching the ith boy
00:13:50 with the jth girl for all i and j.
00:13:52 You’re given a matrix of numbers
00:13:55 and you want to find the one to one matching
00:14:02 of the boys with the girls
00:14:04 such that the sum of the associated costs will be minimized.
00:14:08 So the best way to match the boys with the girls
00:14:13 or men with jobs or any two sets.
00:14:16 Any possible matching is possible or?
00:14:21 Yeah, all one to one correspondences are permissible.
00:14:26 If there is a connection that is not allowed,
00:14:29 then you can think of it as having an infinite cost.
00:14:32 I see, yeah.
00:14:34 So what you do depends on the observation
00:14:39 that the identity of the optimal assignment
00:14:46 or as we call it, the optimal permutation
00:14:50 is not changed if you subtract a constant
00:14:57 from any row or column of the matrix.
00:15:01 You can see that the comparison
00:15:03 between the different assignments is not changed by that.
00:15:06 Because if you decrease a particular row,
00:15:11 all the elements of a row by some constant,
00:15:14 all solutions decrease by an amount equal to that constant.
00:15:21 So the idea of the algorithm is to start with a matrix
00:15:24 of non negative numbers and keep subtracting
00:15:31 from rows or from columns.
00:15:34 Subtracting from rows or entire columns
00:15:41 in such a way that you subtract the same constant
00:15:44 from all the elements of that row or column
00:15:48 while maintaining the property
00:15:50 that all the elements are non negative.
00:15:58 Simple.
00:15:59 Yeah, and so what you have to do
00:16:04 is find small moves which will decrease the total cost
00:16:13 while subtracting constants from rows or columns.
00:16:17 And there’s a particular way of doing that
00:16:19 by computing the kind of shortest path
00:16:22 through the elements in the matrix.
00:16:24 And you just keep going in this way
00:16:28 until you finally get a full permutation of zeros
00:16:33 while the matrix is non negative
00:16:34 and then you know that that has to be the cheapest.
00:16:38 Is that as simple as it sounds?
00:16:42 So the shortest path through the matrix part.
00:16:45 Yeah, the simplicity lies in how you find,
00:16:48 I oversimplified slightly what you,
00:16:53 you will end up subtracting a constant
00:16:56 from some rows or columns
00:16:59 and adding the same constant back to other rows and columns.
00:17:04 So as not to reduce any of the zero elements,
00:17:09 you leave them unchanged.
00:17:11 But each individual step modifies several rows and columns
00:17:22 by the same amount but overall decreases the cost.
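The key observation here, that subtracting a constant from a row or column leaves the identity of the optimal assignment unchanged, can be checked with a brute-force solver. This sketch is not the Hungarian algorithm itself, just a verification of the invariance on a small hypothetical cost matrix:

```python
import itertools

def best_assignment(cost):
    """Brute-force minimum-cost assignment: try every permutation."""
    n = len(cost)
    return min(
        itertools.permutations(range(n)),
        key=lambda p: sum(cost[i][p[i]] for i in range(n)),
    )

# A small cost matrix, numbers chosen only for illustration.
cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]

before = best_assignment(cost)

# Subtract a constant from one row: every assignment's total drops by
# the same amount, so the identity of the optimum cannot change.
reduced = [row[:] for row in cost]
reduced[0] = [x - min(reduced[0]) for x in reduced[0]]

after = best_assignment(reduced)
print(before == after)  # True
```

The Hungarian algorithm exploits exactly this: it keeps reducing rows and columns while preserving non-negativity, until a zero-cost permutation appears and is therefore optimal.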
00:17:26 So there’s something about that elegance
00:17:29 that made you go aha, this is a beautiful,
00:17:32 like it’s amazing that something like this,
00:17:35 something so simple can solve a problem like this.
00:17:38 Yeah, it’s really cool.
00:17:39 If I had mechanical ability,
00:17:42 I would probably like to do woodworking
00:17:44 or other activities where you sort of shape something
00:17:51 into something beautiful and orderly
00:17:55 and there’s something about the orderly systematic nature
00:18:00 of that iterative algorithm that is pleasing to me.
00:18:05 So what do you think about this idea of geeks
00:18:09 as Don Knuth calls them?
00:18:11 What do you think, is it something specific to a mindset
00:18:16 that allows you to discover the elegance
00:18:19 in computational processes or is this all of us,
00:18:23 can all of us discover this beauty?
00:18:25 Were you born this way?
00:18:28 I think so.
00:18:29 I always like to play with numbers.
00:18:30 I used to amuse myself by multiplying
00:18:35 four digit decimal numbers in my head
00:18:40 and putting myself to sleep by starting with one
00:18:44 and doubling the number as long as I could go
00:18:48 and testing my memory, my ability to retain the information.
00:18:52 And I also read somewhere that you wrote
00:18:55 that you enjoyed showing off to your friends
00:18:59 by I believe multiplying four digit numbers.
00:19:04 Right.
00:19:05 Four digit numbers.
00:19:07 Yeah, I had a summer job at a beach resort
00:19:10 outside of Boston and the other employee,
00:19:16 I was the barker at a skee ball game.
00:19:20 Yeah.
00:19:21 I used to sit at a microphone saying come one,
00:19:26 come all, come in and play skee ball,
00:19:28 five cents to play, a nickel to win and so on.
00:19:31 That’s what a barker, I wasn’t sure if I should know
00:19:33 but barker, that’s, so you’re the charming,
00:19:38 outgoing person that’s getting people to come in.
00:19:41 Yeah, well I wasn’t particularly charming
00:19:43 but I could be very repetitious and loud.
00:19:47 And the other employees were sort of juvenile delinquents
00:19:53 who had no academic bent but somehow I found
00:19:58 that I could impress them by performing
00:20:03 this mental arithmetic.
00:20:06 Yeah, there’s something to that.
00:20:10 Some of the most popular videos on the internet
00:20:13 are on a YouTube channel called Numberphile
00:20:18 that shows off different mathematical ideas.
00:20:20 I see.
00:20:21 There’s still something really profoundly interesting
00:20:23 to people about math, the beauty of it.
00:20:27 Something, even if they don’t understand
00:20:31 the basic concept even being discussed,
00:20:33 there’s something compelling to it.
00:20:35 What do you think that is?
00:20:37 Any lessons you drew from your early teen years
00:20:40 when you were showing off to your friends with the numbers?
00:20:45 Like what is it that attracts us
00:20:47 to the beauty of mathematics do you think?
00:20:51 The general population, not just the computer scientists
00:20:54 and mathematicians.
00:20:56 I think that you can do amazing things.
00:20:59 You can test whether large numbers are prime.
00:21:04 You can solve little puzzles
00:21:09 about cannibals and missionaries.
00:21:13 And that’s a kind of achievement, it’s puzzle solving.
00:21:19 And at a higher level, the fact that you can do
00:21:22 this reasoning that you can prove
00:21:24 in an absolutely ironclad way that the sum of the angles
00:21:28 of a triangle is 180 degrees.
00:21:32 Yeah, it’s a nice escape from the messiness
00:21:35 of the real world where nothing can be proved.
00:21:38 So, and we’ll talk about it, but sometimes the ability
00:21:41 to map the real world into such problems
00:21:43 where you can prove things is a powerful step.
00:21:47 Yeah.
00:21:47 It’s amazing that we can do it.
00:21:48 Of course, another attribute of geeks
00:21:50 is they’re not necessarily endowed
00:21:53 with emotional intelligence, so they can live
00:21:57 in a world of abstractions without having
00:21:59 to master the complexities of dealing with people.
00:22:06 So just to linger on the historical note,
00:22:09 as a PhD student in 1955, you joined the computational lab
00:22:13 at Harvard where Howard Aiken had built the Mark I
00:22:17 and the Mark IV computers.
00:22:19 Just to take a step back into that history,
00:22:22 what were those computers like?
00:22:26 The Mark IV filled a large room,
00:22:31 much bigger than this large office
00:22:33 that we were talking in now.
00:22:36 And you could walk around inside it.
00:22:39 There were rows of relays.
00:22:43 You could just walk around the interior
00:22:45 and the machine would sometimes fail because of bugs,
00:22:53 which literally meant flying creatures
00:22:56 landing on the switches.
00:22:59 So I never used that machine for any practical purpose.
00:23:06 The lab eventually acquired one of the earlier
00:23:12 commercial computers.
00:23:14 And this was already in the 60s?
00:23:16 No, in the mid 50s, or late 50s.
00:23:19 There was already commercial computers in the…
00:23:21 Yeah, we had a Univac, a Univac with 2,000 words of storage.
00:23:27 And so you had to work hard to allocate the memory properly.
00:23:31 Also, the access time from one word to another
00:23:36 depended on the number of the particular word.
00:23:41 And so there was an art to sort of arranging
00:23:44 the storage allocation to make fetching data rapid.
00:23:51 Were you attracted to this actual physical world
00:23:54 implementation of mathematics?
00:23:58 So it’s a mathematical machine that’s actually doing
00:24:01 the math physically?
00:24:03 No, not at all.
00:24:04 I think I was attracted to the underlying algorithms.
00:24:09 But did you draw any inspiration?
00:24:12 So could you have imagined, like what did you imagine
00:24:17 was the future of these giant computers?
00:24:20 Could you have imagined that 60 years later
00:24:22 we’d have billions of these computers all over the world?
00:24:25 I couldn’t imagine that, but there was a sense
00:24:30 in the laboratory that this was the wave of the future.
00:24:36 In fact, my mother influenced me.
00:24:38 She told me that data processing was gonna be really big
00:24:42 and I should get into it.
00:24:43 You’re a smart woman.
00:24:47 Yeah, she was a smart woman.
00:24:49 And there was just a feeling that this was going
00:24:53 to change the world, but I didn’t think of it
00:24:55 in terms of personal computing.
00:24:57 I had no anticipation that we would be walking around
00:25:02 with computers in our pockets or anything like that.
00:25:06 Did you see computers as tools, as mathematical mechanisms
00:25:12 to analyze sort of the theoretical computer science,
00:25:16 or as the AI folks, which is an entire other community
00:25:21 of dreamers, as something that could one day
00:25:24 have human level intelligence?
00:25:26 Well, AI wasn’t very much on my radar.
00:25:29 I did read Turing’s paper about the…
00:25:33 The Turing Test, Computing Machinery and Intelligence.
00:25:38 Yeah, the Turing test.
00:25:40 What’d you think about that paper?
00:25:41 Was that just like science fiction?
00:25:45 I thought that it wasn’t a very good test
00:25:48 because it was too subjective.
00:25:50 So I didn’t feel that the Turing test
00:25:55 was really the right way to calibrate
00:25:58 how intelligent an algorithm could be.
00:26:01 But to linger on that, do you think it’s,
00:26:03 because you’ve come up with some incredible tests later on,
00:26:07 tests on algorithms, right, that are like strong,
00:26:12 reliable, robust across a bunch of different classes
00:26:15 of algorithms, but returning to this emotional mess
00:26:20 that is intelligence, do you think it’s possible
00:26:22 to come up with a test that’s as ironclad
00:26:26 as some of the computational complexity work?
00:26:31 Well, I think the greater question
00:26:32 is whether it’s possible to achieve human level intelligence.
00:26:38 Right, so first of all, let me, at the philosophical level,
00:26:42 do you think it’s possible to create algorithms
00:26:46 that reason and would seem to us
00:26:51 to have the same kind of intelligence as human beings?
00:26:54 It’s an open question.
00:26:58 It seems to me that most of the achievements
00:27:04 operate within a very limited set of ground rules
00:27:11 and for a very limited, precise task,
00:27:15 which is a quite different situation
00:27:17 from the processes that go on in the minds of humans,
00:27:25 where they have to sort of function
00:27:25 in changing environments, they have emotions,
00:27:30 they have physical attributes for exploring
00:27:37 their environment, they have intuition,
00:27:41 they have desires, emotions, and I don’t see anything
00:27:46 in the current achievements of what’s called AI
00:27:54 that comes close to that capability.
00:27:56 I don’t think there’s any computer program
00:28:02 which surpasses a six month old child
00:28:05 in terms of comprehension of the world.
00:28:10 Do you think this complexity of human intelligence,
00:28:15 all the cognitive abilities we have,
00:28:17 all the emotion, do you think that could be reduced one day
00:28:21 or just fundamentally can it be reduced
00:28:23 to a set of algorithms or an algorithm?
00:28:27 So can a Turing machine achieve human level intelligence?
00:28:33 I am doubtful about that.
00:28:35 I guess the argument in favor of it
00:28:38 is that the human brain seems to achieve what we call
00:28:47 intelligence, cognitive abilities of different kinds.
00:28:51 And if you buy the premise that the human brain
00:28:54 is just an enormous interconnected set of switches,
00:28:58 so to speak, then in principle, you should be able
00:29:03 to diagnose what that interconnection structure is like,
00:29:07 characterize the individual switches,
00:29:09 and build a simulation outside.
00:29:14 But while that may be true in principle,
00:29:17 that cannot be the way we’re eventually
00:29:19 gonna tackle this problem.
00:29:25 That does not seem like a feasible way to go about it.
00:29:29 So there is, however, an existence proof that
00:29:32 if you believe that the brain is just a network of neurons
00:29:42 operating by rules, I guess you could say
00:29:45 that that’s an existence proof of the capabilities
00:29:50 of a mechanism, but it would be almost impossible
00:29:55 to acquire the information unless we got enough insight
00:30:00 into the operation of the brain.
00:30:02 But there’s so much mystery there.
00:30:04 Do you think, what do you make of consciousness,
00:30:06 for example, as an example of something
00:30:11 we completely have no clue about?
00:30:13 The fact that we have this subjective experience.
00:30:15 Is it possible that this network of,
00:30:19 this circuit of switches is able to create
00:30:22 something like consciousness?
00:30:24 To know its own identity.
00:30:26 Yeah, to know the algorithm, to know itself.
00:30:30 To know itself.
00:30:32 I think if you try to define that rigorously,
00:30:35 you’d have a lot of trouble.
00:30:36 Yeah, that seems to be.
00:30:39 So I know that there are many who believe
00:30:47 that general intelligence can be achieved,
00:30:52 and there are even some who feel certain
00:30:55 that the singularity will come
00:30:59 and we will be surpassed by the machines
00:31:01 which will then learn more and more about themselves
00:31:04 and reduce humans to an inferior breed.
00:31:08 I am doubtful that this will ever be achieved.
00:31:12 Just for the fun of it, could you linger on why,
00:31:17 what’s your intuition, why you’re doubtful?
00:31:19 So there are quite a few people that are extremely worried
00:31:23 about this existential threat of artificial intelligence,
00:31:26 of us being left behind
00:31:29 by this super intelligent new species.
00:31:32 What’s your intuition why that’s not quite likely?
00:31:37 Just because none of the achievements in speech
00:31:43 or robotics or natural language processing
00:31:48 or creation of flexible computer assistants
00:31:52 or any of that comes anywhere near close
00:31:56 to that level of cognition.
00:32:00 What do you think about ideas of sort of,
00:32:02 if we look at Moore's Law and exponential improvement,
00:32:06 could that allow something that would surprise us?
00:32:09 Sort of our intuition fall apart
00:32:11 with exponential improvement because, I mean,
00:32:14 we’re not able to kind of,
00:32:16 we kind of think in linear improvement.
00:32:18 We’re not able to imagine a world
00:32:20 that goes from the Mark I computer to an iPhone X.
00:32:26 Yeah.
00:32:27 So do you think we could be really surprised
00:32:31 by the exponential growth?
00:32:33 Or on the flip side, is it possible
00:32:37 that also intelligence is actually way, way, way, way harder,
00:32:42 even with exponential improvement to be able to crack?
00:32:47 I don’t think any constant factor improvement
00:32:50 could change things.
00:32:53 I mean, given our current comprehension
00:32:58 of what cognition requires,
00:33:04 it seems to me that multiplying the speed of the switches
00:33:08 by a factor of a thousand or a million
00:33:13 will not be useful until we really understand
00:33:16 the organizational principle behind the network of switches.
00:33:23 Well, let’s jump into the network of switches
00:33:25 and talk about combinatorial algorithms if we could.
00:33:29 Let’s step back with the very basics.
00:33:31 What are combinatorial algorithms?
00:33:33 And what are some major examples
00:33:35 of problems they aim to solve?
00:33:38 A combinatorial algorithm is one which deals
00:33:43 with a system of discrete objects
00:33:48 that can occupy various states
00:33:52 or take on various values from a discrete set of values
00:33:59 and need to be arranged or selected
00:34:06 in such a way as to achieve some,
00:34:10 to minimize some cost function.
00:34:13 Or to prove the existence
00:34:16 of some combinatorial configuration.
00:34:19 So an example would be coloring the vertices of a graph.
00:34:25 What’s a graph?
00:34:27 Let’s step back.
00:34:28 So it’s fun to ask one of the greatest
00:34:34 computer scientists of all time
00:34:36 the most basic questions in the beginning of most books.
00:34:39 But for people who might not know,
00:34:41 but in general how you think about it, what is a graph?
00:34:44 A graph, that’s simple.
00:34:47 It’s a set of points, certain pairs of which
00:34:51 are joined by lines called edges.
00:34:55 And they sort of represent the,
00:34:58 in different applications represent the interconnections
00:35:02 between discrete objects.
00:35:05 So they could be the interactions,
00:35:07 interconnections between switches in a digital circuit
00:35:12 or interconnections indicating the communication patterns
00:35:16 of a human community.
00:35:19 And they could be directed or undirected
00:35:21 and then as you’ve mentioned before, might have costs.
00:35:25 Right, they can be directed or undirected.
00:35:27 They can be, you can think of them as,
00:35:30 if you think, if a graph were representing
00:35:34 a communication network, then the edge could be undirected
00:35:38 meaning that information could flow along it
00:35:41 in both directions or it could be directed
00:35:44 with only one way communication.
00:35:46 A road system is another example of a graph
00:35:49 with weights on the edges.
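The directed and undirected flavors Karp describes can be sketched as adjacency dictionaries. The node names and weights below are hypothetical, chosen only to show the difference:

```python
# Undirected graph: information (or traffic) can flow both ways,
# so each edge is recorded at both endpoints.
undirected = {}
def add_undirected(u, v, weight):
    undirected.setdefault(u, {})[v] = weight
    undirected.setdefault(v, {})[u] = weight

# Directed graph: one-way links, e.g. one-way roads.
directed = {}
def add_directed(u, v, weight):
    directed.setdefault(u, {})[v] = weight
    directed.setdefault(v, {})  # ensure the endpoint exists

add_undirected("a", "b", 5)
add_directed("a", "b", 5)
print("a" in undirected["b"], "a" in directed["b"])  # True False
```

The same structure carries weights for road networks, capacities for flow networks, or costs for assignment graphs, which is why combinatorial algorithms on graphs apply so widely.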
00:35:52 And then a lot of problems of optimizing the efficiency
00:35:57 of such networks or learning about the performance
00:36:04 of such networks are the object of combinatorial algorithms.
00:36:11 So it could be scheduling classes at a school
00:36:15 where the vertices, the nodes of the network
00:36:20 are the individual classes and the edges indicate
00:36:25 the constraints which say that certain classes
00:36:28 cannot take place at the same time
00:36:30 or certain teachers are available only for certain classes,
00:36:34 et cetera.
00:36:36 Or I talked earlier about the assignment problem
00:36:41 of matching the boys with the girls
00:36:43 where you have there a graph with an edge
00:36:48 from each boy to each girl with a weight
00:36:51 indicating the cost.
00:36:53 Or in logical design of computers,
00:36:58 you might want to find a set of so-called gates,
00:37:04 switches that perform logical functions
00:37:10 which can be interconnected to each other
00:37:13 to realize some function.
00:37:15 So you might ask how many gates do you need
00:37:22 in order for a circuit to give a yes output
00:37:33 if at least a given number of its inputs are ones
00:37:38 and no if fewer are present.
00:37:42 My favorite’s probably all the work with network flow.
00:37:46 So anytime you have, I don’t know why it’s so compelling
00:37:50 but there’s something just beautiful about it.
00:37:52 It seems like there’s so many applications
00:37:54 and communication networks and traffic flow
00:38:00 that you can map into these and then you could think
00:38:03 of pipes and water going through pipes
00:38:05 and you could optimize it in different ways.
00:38:07 There’s something always visually and intellectually
00:38:11 compelling to me about it.
00:38:12 And of course you’ve done work there.
00:38:15 Yeah, so there the edges represent channels
00:38:21 along which some commodity can flow.
00:38:24 It might be gas, it might be water, it might be information.
00:38:29 Maybe supply chain as well like products being.
00:38:33 Products flowing from one operation to another.
00:38:37 And the edges have a capacity which is the rate
00:38:41 at which the commodity can flow.
00:38:43 And a central problem is to determine
00:38:49 given a network of these channels.
00:38:51 In this case the edges are communication channels.
00:38:56 The challenge is to find the maximum rate
00:39:01 at which the information can flow along these channels
00:39:05 to get from a source to a destination.
00:39:09 And that’s a fundamental combinatorial problem
00:39:12 that I’ve worked on jointly with the scientist Jack Edmonds.
00:39:19 I think we’re the first to give a formal proof
00:39:22 that this maximum flow problem through a network
00:39:27 can be solved in polynomial time.
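The shortest-augmenting-path idea behind the Edmonds-Karp algorithm can be sketched in a few lines of Python. This is an illustrative sketch, not Karp and Edmonds' original presentation; the dict-of-dicts capacity format and the function name are choices made here:

```python
from collections import deque

def edmonds_karp(capacity, source, sink):
    # capacity[u][v] = rate at which the commodity can flow from u to v
    nodes = set(capacity) | {v for u in capacity for v in capacity[u]}
    # residual capacities, with reverse edges initialized to zero
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: the flow is maximum
        # walk back from the sink to find the bottleneck, then augment
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck
```

Because breadth-first search always augments along a shortest path, the number of augmentations is polynomially bounded, which is what makes the overall running time polynomial.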
00:39:30 Which I remember the first time I learned that.
00:39:33 Just learning that in maybe even grad school.
00:39:39 I don’t think it was even undergrad.
00:39:41 No, algorithm, yeah.
00:39:43 Do network flows get taught in basic algorithms courses?
00:39:50 Yes, probably.
00:39:51 Okay, so yeah, I remember being very surprised
00:39:53 that max flow is a polynomial time algorithm.
00:39:56 That there’s a nice fast algorithm that solves max flow.
00:39:59 So there is an algorithm named after you and Edmonds,
00:40:06 the Edmonds-Karp algorithm for max flow.
00:40:08 So what was it like tackling that problem
00:40:12 and trying to arrive at a polynomial time solution?
00:40:15 And maybe you can describe the algorithm.
00:40:17 Maybe you can describe what’s the running time complexity
00:40:19 that you showed.
00:40:20 Yeah, well, first of all,
00:40:23 what is a polynomial time algorithm?
00:40:25 Perhaps we could discuss that.
00:40:28 So yeah, let’s actually just even, yeah.
00:40:31 What is algorithmic complexity?
00:40:34 What are the major classes of algorithm complexity?
00:40:38 So in a problem like the assignment problem
00:40:41 or scheduling schools or any of these applications,
00:40:48 you have a set of input data which might, for example,
00:40:53 be a set of vertices connected by edges
00:41:01 with you’re given for each edge the capacity of the edge.
00:41:07 And you have algorithms which are,
00:41:12 think of them as computer programs with operations
00:41:16 such as addition, subtraction, multiplication, division,
00:41:19 comparison of numbers, and so on.
00:41:22 And you’re trying to construct an algorithm
00:41:29 based on those operations, which will determine
00:41:33 in a minimum number of computational steps
00:41:37 the answer to the problem.
00:41:38 In this case, the computational step
00:41:41 is one of those operations.
00:41:43 And the answer to the problem is let’s say
00:41:46 the configuration of the network
00:41:50 that carries the maximum amount of flow.
00:41:54 And an algorithm is said to run in polynomial time
00:41:59 if as a function of the size of the input,
00:42:04 the number of vertices, the number of edges, and so on,
00:42:09 the number of basic computational steps grows
00:42:12 only as some fixed power of that size.
00:42:15 A linear algorithm would execute a number of steps
00:42:21 linearly proportional to the size.
00:42:24 Quadratic algorithm would be steps proportional
00:42:27 to the square of the size, and so on.
00:42:30 And algorithms whose running time is bounded
00:42:34 by some fixed power of the size
00:42:37 are called polynomial algorithms.
00:42:39 Yeah.
00:42:40 And that’s supposed to be relatively fast class
00:42:44 of algorithms.
00:42:44 That’s right.
00:42:45 Theoreticians take that to be the definition
00:42:49 of an algorithm being efficient.
00:42:54 And we’re interested in which problems can be solved
00:42:57 by such efficient algorithms.
00:43:02 One can argue whether that’s the right definition
00:43:04 of efficient because you could have an algorithm
00:43:08 whose running time is the 10,000th power
00:43:11 of the size of the input,
00:43:12 and that wouldn’t be really efficient.
00:43:15 And in practice, it’s oftentimes reducing
00:43:19 from an N squared algorithm to an N log N
00:43:22 or a linear time is practically the jump
00:43:26 that you wanna make to allow a real world system
00:43:30 to solve a problem.
00:43:31 Yeah, that’s also true because especially
00:43:33 as we get very large networks,
00:43:35 the size can be in the millions,
00:43:38 and then anything above N log N
00:43:43 where N is the size would be too much
00:43:46 for a practical solution.
00:43:49 Okay, so that’s polynomial time algorithms.
00:43:52 What other classes of algorithms are there?
00:43:56 What’s, so usually they designate polynomial time
00:44:01 by the letter P.
00:44:02 Yeah.
00:44:03 There’s also NP, NP complete and NP hard.
00:44:06 Yeah.
00:44:07 So can you try to disentangle those
00:44:10 by trying to define them simply?
00:44:14 Right, so a polynomial time algorithm
00:44:16 is one whose running time is bounded
00:44:20 by a polynomial in the size of the input.
00:44:24 Then the class of such algorithms is called P.
00:44:29 In the worst case, by the way, we should say, right?
00:44:31 Yeah.
00:44:32 So for every case of the problem.
00:44:33 Yes, that’s right, and that’s very important
00:44:34 that in this theory, when we measure the complexity
00:44:39 of an algorithm, we really measure the number of steps,
00:44:45 the growth of the number of steps in the worst case.
00:44:48 So you may have an algorithm that runs very rapidly
00:44:53 in most cases, but if there’s any case
00:44:56 where it gets into a very long computation,
00:45:00 that would increase the computational complexity
00:45:03 by this measure.
00:45:05 And that’s a very important issue
00:45:07 because there are, as we may discuss later,
00:45:10 there are some very important algorithms
00:45:13 which don’t have a good standing
00:45:15 from the point of view of their worst case performance
00:45:18 and yet are very effective.
00:45:20 So theoreticians are interested in P,
00:45:24 the class of problem solvable in polynomial time.
00:45:27 Then there’s NP, which is the class of problems
00:45:34 which may be hard to solve, but when confronted
00:45:42 with a solution, you can check it in polynomial time.
00:45:46 Let me give you an example there.
00:45:49 So if we look at the assignment problem,
00:45:52 so you have N boys, you have N girls,
00:45:55 the number of numbers that you need to write down
00:45:58 to specify the problem instance is N squared.
00:46:03 And the question is how many steps are needed to solve it?
00:46:11 And Jack Edmonds and I were the first to show
00:46:15 that it could be done in time N cubed.
00:46:20 Earlier algorithms required N to the fourth.
00:46:23 So as a polynomial function of the size of the input,
00:46:27 this is a fast algorithm.
00:46:30 Now to illustrate the class NP, the question is
00:46:34 how long would it take to verify
00:46:38 that a solution is optimal?
00:46:42 So for example, if the input was a graph,
00:46:48 we might want to find the largest clique in the graph
00:46:53 or a clique is a set of vertices such that any vertex,
00:46:58 each vertex in the set is adjacent to each of the others.
00:47:03 So the clique is a complete subgraph.
00:47:08 Yeah, so if it’s a Facebook social network,
00:47:11 everybody’s friends with everybody else, close clique.
00:47:15 No, that would be what’s called a complete graph.
00:47:17 It would be.
00:47:18 No, I mean within that clique.
00:47:20 Within that clique, yeah.
00:47:21 Yeah, they’re all friends.
00:47:25 So a complete graph is when?
00:47:27 Everybody is friendly.
00:47:28 As everybody is friends with everybody, yeah.
00:47:31 So the problem might be to determine whether
00:47:36 in a given graph there exists a clique of a certain size.
00:47:43 Now that turns out to be a very hard problem,
00:47:45 but if somebody hands you a clique and asks you to check
00:47:50 whether it is, hands you a set of vertices
00:47:55 and asks you to check whether it’s a clique,
00:47:58 you could do that simply by exhaustively looking
00:48:01 at all of the edges between the vertices and the clique
00:48:05 and verifying that they’re all there.
00:48:08 And that’s a polynomial time algorithm.
00:48:10 That’s a polynomial.
00:48:11 So the problem of finding the clique
00:48:17 appears to be extremely hard,
00:48:19 but the problem of verifying a clique
00:48:22 to see if it reaches a target number of vertices
00:48:28 is easy to verify.
00:48:31 So finding the clique is hard, checking it is easy.
00:48:35 Problems of that nature are called
00:48:39 nondeterministic polynomial time algorithms,
00:48:42 and that’s the class NP.
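Karp's point that checking a clique is easy can be made concrete. In this minimal sketch (the adjacency-set representation is an assumption made here), verifying a candidate set of k vertices takes O(k²) adjacency lookups, polynomial in the input size, even though finding a largest clique appears to require exponential search:

```python
from itertools import combinations

def is_clique(adj, vertices):
    # adj maps each vertex to the set of its neighbors;
    # a clique requires every pair of vertices to be adjacent
    return all(v in adj[u] for u, v in combinations(vertices, 2))
```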
00:48:45 And what about NP complete and NP hard?
00:48:48 Okay, let’s talk about problems
00:48:50 where you’re getting a yes or no answer
00:48:53 rather than a numerical value.
00:48:55 So either there is a perfect matching
00:48:58 of the boys with the girls or there isn’t.
00:49:04 It’s clear that every problem in P is also in NP.
00:49:10 If you can solve the problem exactly,
00:49:12 then you can certainly verify the solution.
00:49:17 On the other hand, there are problems in the class NP.
00:49:23 This is the class of problems that are easy to check,
00:49:27 although they may be hard to solve.
00:49:29 It’s not at all clear that problems in NP lie in P.
00:49:33 So for example, if we’re looking
00:49:36 at scheduling classes at a school,
00:49:39 the fact that you can verify when handed a schedule
00:49:44 for the school, whether it meets all the requirements,
00:49:47 that doesn’t mean that you can find the schedule rapidly.
00:49:51 So intuitively, NP, nondeterministic polynomial,
00:49:55 checking rather than finding,
00:50:03 is going to include more.
00:50:06 Checking is easier, and therefore the class of problems
00:50:08 that can be checked appears to be much larger
00:50:11 than the class of problems that can be solved.
00:50:14 And then you keep adding appears to,
00:50:17 and sort of these additional words that designate
00:50:22 that we don’t know for sure yet.
00:50:24 We don’t know for sure.
00:50:25 So the theoretical question, which is considered
00:50:28 to be the most central problem
00:50:30 in theoretical computer science,
00:50:32 or at least computational complexity theory,
00:50:36 combinatorial algorithm theory,
00:50:38 the question is whether P is equal to NP.
00:50:43 If P were equal to NP, it would be amazing.
00:50:46 It would mean that every problem
00:50:54 where a solution can be rapidly checked
00:50:58 can actually be solved in polynomial time.
00:51:00 We don’t really believe that’s true.
00:51:03 If you’re scheduling classes at a school,
00:51:05 we expect that if somebody hands you a satisfying schedule,
00:51:13 you can verify that it works.
00:51:15 That doesn’t mean that you should be able
00:51:17 to find such a schedule.
00:51:18 So intuitively, NP encompasses a lot more problems than P.
00:51:24 So can we take a small tangent
00:51:28 and break apart that intuition?
00:51:30 So do you, first of all, think that the biggest
00:51:34 sort of open problem in computer science,
00:51:36 maybe mathematics, is whether P equals NP?
00:51:40 Do you think P equals NP,
00:51:43 or do you think P is not equal to NP?
00:51:46 If you had to bet all your money on it.
00:51:48 I would bet that P is unequal to NP,
00:51:52 simply because there are problems
00:51:54 that have been around for centuries
00:51:55 and have been studied intensively in mathematics,
00:51:58 and even more so in the last 50 years
00:52:02 since the P versus NP was stated.
00:52:05 And no polynomial time algorithms have been found
00:52:10 for these easy to check problems.
00:52:13 So one example is a problem that goes back
00:52:17 to the mathematician Gauss,
00:52:19 who was interested in factoring large numbers.
00:52:24 So we say a number is prime
00:52:27 if it cannot be written as the product
00:52:31 of two or more numbers unequal to one.
00:52:36 So if we can factor a number like 91, it’s seven times 13.
00:52:43 But if I give you 20 digit or 30 digit numbers,
00:52:50 you’re probably gonna be at a loss
00:52:52 to have any idea whether they can be factored.
00:52:56 So the problem of factoring very large numbers
00:53:00 does not appear to have an efficient solution.
00:53:06 But once you have found the factors,
00:53:11 expressed the number as a product of two smaller numbers,
00:53:16 you can quickly verify that they are factors of the number.
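The same finding-versus-checking gap can be written down directly for factoring. A hedged sketch (function name and list format are choices made here): verifying a proposed factorization is just multiplication plus a triviality check, polynomial in the number of digits, whereas no polynomial-time classical factoring algorithm is known:

```python
def is_factorization(n, factors):
    # checking a claimed factorization is fast: multiply the
    # factors back together and rule out the trivial factors 1 and n
    prod = 1
    for f in factors:
        prod *= f
    return prod == n and all(1 < f < n for f in factors)
```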
00:53:19 And your intuition is a lot of people finding,
00:53:23 a lot of brilliant people have tried to find algorithms
00:53:25 for this one particular problem.
00:53:26 There’s many others like it that are really well studied
00:53:30 and it would be great to find an efficient algorithm for.
00:53:33 Right, and in fact, we have some results
00:53:40 that I was instrumental in obtaining following up on work
00:53:44 by the mathematician Stephen Cook
00:53:48 to show that within the class NP
00:53:53 of easy to check problems, there’s a huge number
00:53:57 that are equivalent in the sense that either all of them
00:54:00 or none of them lie in P.
00:54:03 And all of them lie in P only if P is equal to NP.
00:54:06 So if P is unequal to NP, we would also know
00:54:11 that virtually all the standard combinatorial problems
00:54:20 cannot be solved
00:54:23 in polynomial time.
00:54:25 Can you explain how that’s possible
00:54:28 to tie together so many problems in a nice bunch
00:54:32 that if one is proven to be efficient, then all are?
00:54:36 The first and most important stage of progress
00:54:40 was a result by Stephen Cook who showed that a certain problem
00:54:49 called the satisfiability problem of propositional logic
00:54:55 is as hard as any problem in the class NP.
00:55:00 So the propositional logic problem is expressed
00:55:05 in terms of expressions involving the logical operations
00:55:11 and, or, and not operating on variables
00:55:16 that can be either true or false.
00:55:19 So an instance of the problem would be some formula
00:55:24 involving and, or, and not.
00:55:28 And the question would be whether there is an assignment
00:55:31 of truth values to the variables in the problem
00:55:34 that would make the formula true.
00:55:37 So for example, if I take the formula A or B
00:55:42 and A or not B and not A or B and not A or not B
00:55:49 and take the conjunction of all four
00:55:51 of those so called expressions,
00:55:54 you can determine that no assignment of truth values
00:55:59 to the variables A and B will allow that conjunction
00:56:03 of what are called clauses to be true.
00:56:09 So that’s an example of a formula in propositional logic
00:56:16 involving expressions based on the operations and, or, and not.
00:56:23 That’s an example of a problem which is not satisfiable.
00:56:27 There is no solution that satisfies
00:56:29 all of those constraints.
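The four-clause example Karp gives can be checked mechanically. A small sketch for this two-variable case (encoding the clauses as Python lambdas is an illustrative choice, not anything from the conversation): trying all four truth assignments shows that none satisfies every clause.

```python
from itertools import product

# the four clauses described above: (A or B), (A or not B),
# (not A or B), (not A or not B)
clauses = [
    lambda a, b: a or b,
    lambda a, b: a or not b,
    lambda a, b: not a or b,
    lambda a, b: not a or not b,
]

def satisfiable(clauses):
    # brute force over all 2^2 truth assignments of A and B
    return any(all(c(a, b) for c in clauses)
               for a, b in product([True, False], repeat=2))
```

Brute force like this takes exponential time in the number of variables, which is exactly why the satisfiability problem is hard in general.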
00:56:31 I mean that’s like one of the cleanest and fundamental
00:56:33 problems in computer science.
00:56:35 It’s like a nice statement of a really hard problem.
00:56:37 It’s a nice statement of a really hard problem
00:56:39 and what Cook showed is that every problem in NP
00:56:50 can be reexpressed as an instance
00:56:53 of the satisfiability problem.
00:56:56 So to do that, he used the observation
00:57:02 that a very simple abstract machine
00:57:04 called the Turing machine can be used
00:57:08 to describe any algorithm.
00:57:14 An algorithm for any realistic computer
00:57:17 can be translated into an equivalent algorithm
00:57:22 on one of these Turing machines
00:57:25 which are extremely simple.
00:57:27 So a Turing machine, there’s a tape and you can
00:57:30 Yeah, you have data on a tape
00:57:33 and you have basic instructions,
00:57:35 a finite list of instructions which say,
00:57:39 if you’re reading a particular symbol on the tape
00:57:43 and you’re in a particular state,
00:57:45 then you can move to a different state
00:57:49 and change the state of the number
00:57:51 or the element that you were looking at,
00:57:53 the cell of the tape that you were looking at.
00:57:55 And that was like a metaphor and a mathematical construct
00:57:58 that Turing put together
00:57:59 to represent all possible computation.
00:58:02 All possible computation.
00:58:03 Now, one of these so called Turing machines
00:58:06 is too simple to be useful in practice,
00:58:09 but for theoretical purposes,
00:58:11 we can depend on the fact that an algorithm
00:58:15 for any computer can be translated
00:58:18 into one that would run on a Turing machine.
00:58:21 And then using that fact,
00:58:26 he could sort of describe
00:58:31 any possible non deterministic polynomial time algorithm.
00:58:35 Any algorithm for a problem in NP
00:58:40 could be expressed as a sequence of moves
00:58:44 of the Turing machine described
00:58:47 in terms of reading a symbol on the tape
00:58:54 while you’re in a given state
00:58:55 and moving to a new state and leaving behind a new symbol.
00:59:00 And given that fact
00:59:03 that any non deterministic polynomial time algorithm
00:59:07 can be described by a list of such instructions,
00:59:12 you could translate the problem
00:59:15 into the language of the satisfiability problem.
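The machine model being described, read a symbol, consult the current state, write a new symbol, move, change state, is simple enough to write down as a tiny interpreter. This is a sketch, with the rule-table format an assumption made here:

```python
def run_turing_machine(rules, tape, state, steps=1000):
    # rules maps (state, symbol) -> (new_state, new_symbol, move),
    # where move is -1 (left) or +1 (right); '_' is the blank symbol.
    # The machine halts when no rule applies to the current pair.
    tape = dict(enumerate(tape))  # sparse tape, blanks by default
    head = 0
    for _ in range(steps):
        key = (state, tape.get(head, '_'))
        if key not in rules:
            break
        state, tape[head], move = rules[key]
        head += move
    return state, ''.join(tape[i] for i in sorted(tape))
```

For instance, a two-rule machine that flips every bit and halts at the first blank. Cook's insight is that the step relation of any such machine, run for polynomially many steps, can itself be encoded as clauses of a propositional formula.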
00:59:19 Is that amazing to you, by the way,
00:59:20 if you take yourself back
00:59:21 when you were first thinking about the space of problems?
00:59:24 How amazing is that?
00:59:26 It’s astonishing.
00:59:27 When you look at Cook’s proof,
00:59:30 it’s not too difficult to sort of figure out
00:59:34 why this is so,
00:59:38 but the implications are staggering.
00:59:40 It tells us that this, of all the problems in NP,
00:59:46 all the problems where solutions are easy to check,
00:59:49 they can all be rewritten
00:59:53 in terms of the satisfiability problem.
00:59:59 Yeah, it’s adding so much more weight
01:00:04 to the P equals NP question
01:00:06 because all it takes is to find a polynomial algorithm
01:00:10 for one problem in this class.
01:00:11 So the P versus NP question can be reexpressed
01:00:13 as simply asking whether the satisfiability problem
01:00:17 of propositional logic is solvable in polynomial time.
01:00:23 But there’s more.
01:00:28 I encountered Cook’s paper
01:00:30 when he published it in a conference in 1971.
01:00:34 Yeah, so when I saw Cook’s paper
01:00:37 and saw this reduction of each of the problems in NP
01:00:44 by a uniform method to the satisfiability problem
01:00:49 of propositional logic,
01:00:52 that meant that the satisfiability problem
01:00:54 was a universal combinatorial problem.
01:00:59 And it occurred to me through experience I had had
01:01:04 in trying to solve other combinatorial problems
01:01:07 that there were many other problems
01:01:10 which seemed to have that universal structure.
01:01:16 And so I began looking for reductions
01:01:21 from the satisfiability to other problems.
01:01:26 And one of the other problems
01:01:31 would be the so called integer programming problem
01:01:35 of determining whether there’s a solution
01:01:40 to a set of linear inequalities involving integer variables.
01:01:48 Just like linear programming,
01:01:49 but there’s a constraint that the variables
01:01:51 must remain integers.
01:01:53 In fact, must be zero or one,
01:01:56 can only take on those values.
01:01:58 And that makes the problem much harder.
01:02:00 Yes, that makes the problem much harder.
01:02:03 And it was not difficult to show
01:02:07 that the satisfiability problem can be restated
01:02:11 as an integer programming problem.
01:02:13 So can you pause on that?
01:02:15 Was that one of the first mappings that you tried to do?
01:02:19 And how hard is that mapping?
01:02:20 You said it wasn’t hard to show,
01:02:21 but that’s a big leap.
01:02:27 It is a big leap, yeah.
01:02:29 Well, let me give you another example.
01:02:32 Another problem in NP
01:02:35 is whether a graph contains a clique of a given size.
01:02:42 And now the question is,
01:02:47 can we reduce the propositional logic problem
01:02:51 to the problem of whether there’s a clique
01:02:55 of a certain size?
01:02:58 Well, if you look at the propositional logic problem,
01:03:01 it can be expressed as a number of clauses,
01:03:05 each of which is
01:03:11 of the form A or B or C,
01:03:15 where A is either one of the variables in the problem
01:03:18 or the negation of one of the variables.
01:03:22 And an instance of the propositional logic problem
01:03:30 can be rewritten using operations of Boolean logic,
01:03:37 can be rewritten as the conjunction of a set of clauses,
01:03:41 the AND of a set of ORs,
01:03:43 where each clause is a disjunction, an OR of variables
01:03:49 or negated variables.
01:03:53 So the question in the satisfiability problem
01:04:01 is whether those clauses can be simultaneously satisfied.
01:04:07 Now, to satisfy all those clauses,
01:04:09 you have to find one of the terms in each clause,
01:04:13 which is going to be true in your truth assignment,
01:04:20 but you can’t make the same variable both true and false.
01:04:24 So if you have the variable A in one clause
01:04:29 and you want to satisfy that clause by making A true,
01:04:34 you can’t also make the complement of A true
01:04:38 in some other clause.
01:04:39 And so the goal is to make every single clause true
01:04:43 if it’s possible to satisfy this,
01:04:45 and the way you make it true is at least…
01:04:48 One term in the clause must be true.
01:04:52 Got it.
01:04:53 So now we want to convert this problem
01:04:58 to something called the independent set problem,
01:05:01 where you’re just sort of asking for a set of vertices
01:05:06 in a graph such that no two of them are adjacent,
01:05:08 sort of the opposite of the clique problem.
01:05:14 So we’ve seen that we can now express that as
01:05:24 finding a set of terms, one in each clause,
01:05:29 without picking both the variable
01:05:33 and the negation of that variable,
01:05:36 because if the variable is assigned the truth value,
01:05:40 the negated variable has to have the opposite truth value.
01:05:44 And so we can construct a graph where the vertices
01:05:49 are the terms in all of the clauses,
01:05:54 and you have an edge between two occurrences of terms
01:06:14 either if they’re both in the same clause,
01:06:16 because you’re only picking one element from each clause,
01:06:20 or if they represent
01:06:23 opposite values of the same variable,
01:06:26 because you can’t make a variable both true and false.
01:06:29 And so you get a graph where you have
01:06:31 all of these occurrences of variables,
01:06:34 you have edges, which mean that you’re not allowed
01:06:37 to choose both ends of the edge,
01:06:41 either because they’re in the same clause
01:06:43 or they’re negations of one another.
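The construction just described can be written out directly. In this sketch (the 'A' / '-A' literal encoding and the brute-force independent-set check are illustrative choices), the formula is satisfiable exactly when the graph has an independent set with one vertex per clause:

```python
from itertools import combinations

def clauses_to_graph(clauses):
    # one vertex per literal occurrence, tagged by its clause index;
    # edges join two literals in the same clause, or a literal and
    # its negation anywhere
    vertices = [(i, lit) for i, clause in enumerate(clauses)
                for lit in clause]
    def negated(x, y):
        return x == '-' + y or y == '-' + x
    edges = set()
    for u in vertices:
        for v in vertices:
            if u < v and (u[0] == v[0] or negated(u[1], v[1])):
                edges.add((u, v))
    return vertices, edges

def has_independent_set(vertices, edges, k):
    # exhaustive search over k-subsets: exponential, only for tiny examples
    return any(all((u, v) not in edges and (v, u) not in edges
                   for u, v in combinations(subset, 2))
               for subset in combinations(vertices, k))
```

With k equal to the number of clauses, any independent set of that size must pick exactly one literal per clause and never a variable together with its negation, which is precisely a satisfying truth assignment.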
01:06:46 All right, and that’s a, first of all, sort of to zoom out,
01:06:50 that’s a really powerful idea that you can take a graph
01:06:55 and connect it to a logic equation somehow,
01:07:00 and do that mapping for all possible formulations
01:07:04 of a particular problem on a graph.
01:07:06 Yeah.
01:07:07 I mean, that still is hard for me to believe.
01:07:12 Yeah, it’s hard for me to believe.
01:07:14 It’s hard for me to believe that that’s possible.
01:07:17 That they’re, like, what do you make of that,
01:07:20 that there’s such a union of,
01:07:24 there’s such a friendship among all these problems across
01:07:28 that somehow are akin to combinatorial algorithms,
01:07:33 that they’re all somehow related?
01:07:35 I know it can be proven, but what do you make of it,
01:07:39 that that’s true?
01:07:41 Well, that they just have the same expressive power.
01:07:46 You can take any one of them
01:07:49 and translate it into the terms of the other.
01:07:53 The fact that they have the same expressive power
01:07:55 also somehow means that they can be translatable.
01:07:59 Right, and what I did in the 1971 paper
01:08:03 was to take 21 fundamental problems,
01:08:08 the commonly occurring problems of packing,
01:08:12 covering, matching, and so forth,
01:08:15 lying in the class NP,
01:08:19 and show that the satisfiability problem
01:08:21 can be reexpressed as any of those,
01:08:24 that any of those have the same expressive power.
01:08:30 And that was like throwing down the gauntlet
01:08:32 of saying there’s probably many more problems like this.
01:08:35 Right.
01:08:36 Saying that, look, that they’re all the same.
01:08:39 They’re all the same, but not exactly.
01:08:43 They’re all the same in terms of whether they are
01:08:49 rich enough to express any of the others.
01:08:53 But that doesn’t mean that they have
01:08:55 the same computational complexity.
01:08:57 But what we can say is that either all of these problems
01:09:02 or none of them are solvable in polynomial time.
01:09:05 Yeah, so what is NP completeness
01:09:08 and NP hard as classes?
01:09:11 Oh, that’s just a small technicality.
01:09:14 So when we’re talking about decision problems,
01:09:17 that means that the answer is just yes or no.
01:09:21 There is a clique of size 15
01:09:23 or there’s not a clique of size 15.
01:09:26 On the other hand, an optimization problem
01:09:28 would be asking find the largest clique.
01:09:33 The answer would not be yes or no.
01:09:35 It would be 15.
01:09:39 So when you’re asking for the,
01:09:42 when you’re putting a valuation on the different solutions
01:09:46 and you’re asking for the one with the highest valuation,
01:09:49 that’s an optimization problem.
01:09:51 And there’s a very close affinity
01:09:52 between the two kinds of problems.
01:09:55 But the counterpart of being the hardest decision problem,
01:10:02 the hardest yes, no problem,
01:10:04 the counterpart of that is to minimize
01:10:09 or maximize an objective function.
01:10:13 And so a problem that’s hardest in the class
01:10:17 when viewed in terms of optimization,
01:10:19 those are called NP hard rather than NP complete.
01:10:24 And NP complete is for decision problems.
01:10:26 And NP complete is for decision problems.
01:10:28 So if somebody shows that P equals NP,
01:10:35 what do you think that proof will look like
01:10:39 if you were to put on yourself,
01:10:41 if it’s possible to show that as a proof
01:10:45 or to demonstrate an algorithm?
01:10:49 All I can say is that it will involve concepts
01:10:52 that we do not now have and approaches that we don’t have.
01:10:56 Do you think those concepts are out there
01:10:58 in terms of inside complexity theory,
01:11:02 inside of computational analysis of algorithms?
01:11:04 Do you think there’s concepts
01:11:05 that are totally outside of the box
01:11:07 that we haven’t considered yet?
01:11:09 I think that if there is a proof that P is equal to NP
01:11:13 or that P is unequal to NP,
01:11:17 it’ll depend on concepts that are now outside the box.
01:11:22 Now, if that’s shown either way, P equals NP or P not,
01:11:25 well, actually P equals NP,
01:11:28 what impact, you kind of mentioned a little bit,
01:11:32 but can you linger on it?
01:11:34 What kind of impact would it have
01:11:36 on theoretical computer science
01:11:38 and perhaps software based systems in general?
01:11:42 Well, I think it would have enormous impact on the world
01:11:46 in either way case.
01:11:49 If P is unequal to NP, which is what we expect,
01:11:53 then we know that for the great majority
01:11:56 of the combinatorial problems that come up,
01:11:59 since they’re known to be NP complete,
01:12:02 we’re not going to be able to solve them
01:12:05 by efficient algorithms.
01:12:07 However, there’s a little bit of hope
01:12:11 in that it may be that we can solve most instances.
01:12:16 All we know is that if a problem is NP complete,
01:12:19 then it can’t be solved efficiently on all instances.
01:12:22 But basically, if we find that P is unequal to NP,
01:12:32 it will mean that we can’t expect always
01:12:35 to get the optimal solutions to these problems.
01:12:38 And we have to depend on heuristics
01:12:41 that perhaps work most of the time
01:12:43 or give us good approximate solutions, but not always the optimum.
01:12:47 So we would turn our eye towards the heuristics
01:12:51 with a little bit more acceptance and comfort on our hearts.
01:12:56 Exactly.
01:12:57 Okay, so let me ask a romanticized question.
01:13:02 What to you is one of the most
01:13:04 or the most beautiful combinatorial algorithm
01:13:08 in your own life or just in general in the field
01:13:10 that you’ve ever come across or have developed yourself?
01:13:14 Oh, I like the stable matching problem
01:13:17 or the stable marriage problem very much.
01:13:22 What’s the stable matching problem?
01:13:25 Yeah.
01:13:26 Imagine that you want to marry off N boys with N girls.
01:13:37 And each boy has an ordered list
01:13:39 of his preferences among the girls.
01:13:42 His first choice, his second choice,
01:13:44 through his Nth choice.
01:13:47 And each girl also has an ordering of the boys,
01:13:55 her first choice, second choice, and so on.
01:13:58 And we’ll say that a matching,
01:14:03 a one to one matching of the boys with the girls is stable
01:14:07 if there are no two couples in the matching
01:14:15 such that the boy in the first couple
01:14:18 prefers the girl in the second couple to her mate
01:14:23 and she prefers the boy to her current mate.
01:14:27 In other words, if the matching is stable
01:14:31 if there is no pair who want to run away with each other
01:14:35 leaving their partners behind.
01:14:38 Gosh, yeah.
01:14:39 Yeah.
01:14:44 Actually, this is relevant to matching residents
01:14:49 with hospitals and some other real life problems,
01:14:52 although not quite in the form that I described.
01:14:56 So it turns out that there is,
01:15:00 for any set of preferences, a stable matching exists.
01:15:06 And moreover, it can be computed
01:15:11 by a simple algorithm
01:15:14 in which each boy starts making proposals to girls.
01:15:21 And if the girl receives the proposal,
01:15:23 she accepts it tentatively,
01:15:25 but she can drop it later
01:15:32 if she gets a better proposal from her point of view.
01:15:36 And the boys start going down their lists
01:15:39 proposing to their first, second, third choices
01:15:41 until stopping when a proposal is accepted.
01:15:50 But the girls meanwhile are watching the proposals
01:15:53 that are coming into them.
01:15:55 And the girl will drop her current partner
01:16:01 if she gets a better proposal.
01:16:03 And the boys never go back through the list?
01:16:06 They never go back, yeah.
01:16:07 So once they’ve been denied.
01:16:11 They don’t try again.
01:16:12 They don’t try again
01:16:14 because the girls are always improving their status
01:16:19 as they receive better and better proposals.
01:16:22 The boys are going down their lists starting
01:16:25 with their top preferences.
01:16:28 And one can prove that the process will come to an end
01:16:39 where everybody will get matched with somebody
01:16:43 and you won’t have any pair
01:16:46 that want to abscond from each other.
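The proposal process described here is the Gale-Shapley deferred-acceptance algorithm, and it can be sketched in a few lines. This is an illustrative Python rendering; the dictionary layout and names are my own assumptions, not from the conversation:

```python
def stable_matching(boy_prefs, girl_prefs):
    """boy_prefs[b] is b's ordered list of girls; girl_prefs[g] likewise for boys."""
    # rank[g][b] = position of boy b in girl g's list (lower = more preferred)
    rank = {g: {b: i for i, b in enumerate(prefs)}
            for g, prefs in girl_prefs.items()}
    next_choice = {b: 0 for b in boy_prefs}  # index of the next girl each boy proposes to
    engaged_to = {}                          # girl -> boy, accepted only tentatively
    free_boys = list(boy_prefs)
    while free_boys:
        b = free_boys.pop()
        g = boy_prefs[b][next_choice[b]]     # boys go down their list, never back up
        next_choice[b] += 1
        if g not in engaged_to:
            engaged_to[g] = b                # girl accepts tentatively
        elif rank[g][b] < rank[g][engaged_to[g]]:
            free_boys.append(engaged_to[g])  # girl trades up, drops her current partner
            engaged_to[g] = b
        else:
            free_boys.append(b)              # rejected; b will try his next choice
    return {b: g for g, b in engaged_to.items()}
```

Each girl's tentative partner only ever improves, and each boy moves monotonically down his list, so the process must terminate in a stable matching.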
01:16:50 Do you find the proof or the algorithm itself beautiful?
01:16:54 Or is it the fact that with the simplicity
01:16:56 of just the two sides matching,
01:16:59 I mean the simplicity of the underlying rule
01:17:01 of the algorithm, is that the beautiful part?
01:17:04 Both I would say.
01:17:07 And you also have the observation that you might ask
01:17:11 who is better off, the boys who are doing the proposing
01:17:14 or the girls who are reacting to proposals.
01:17:17 And it turns out that it’s the boys
01:17:20 who are doing the best.
01:17:22 That is, each boy is doing at least as well
01:17:25 as he could do in any other stable matching.
01:17:30 So there’s a sort of lesson for the boys
01:17:33 that you should go out and be proactive
01:17:36 and make those proposals.
01:17:38 Go for broke.
01:17:41 I don’t know if this is directly mappable philosophically
01:17:44 to our society, but certainly seems
01:17:46 like a compelling notion.
01:17:48 And like you said, there’s probably a lot
01:17:51 of actual real world problems that this could be mapped to.
01:17:54 Yeah, well you get complications.
01:17:58 For example, what happens when a husband and wife
01:18:01 want to be assigned to the same hospital?
01:18:03 So you have to take those constraints into account.
01:18:10 And then the problem becomes NP hard.
01:18:15 Why is it a problem for the husband and wife
01:18:18 to be assigned to the same hospital?
01:18:20 No, it’s desirable.
01:18:22 Or at least go to the same city.
01:18:24 So you can’t, if you’re assigning residents to hospitals.
01:18:29 And then you have some preferences
01:18:32 for the husband and the wife or for the hospitals.
01:18:34 The residents have their own preferences.
01:18:39 Residents both male and female have their own preferences.
01:18:44 The hospitals have their preferences.
01:18:47 But if resident A, the boy, is going to Philadelphia,
01:18:55 then you’d like his wife also to be assigned
01:19:01 to a hospital in Philadelphia.
01:19:04 Which step makes it a NP hard problem that you mentioned?
01:19:08 The fact that you have this additional constraint.
01:19:11 That it’s not just the preferences of individuals,
01:19:15 but the fact that the two partners to a marriage
01:19:19 have to be assigned to the same place.
01:19:22 I’m being a little dense.
01:19:29 The perfect matching, no, not the perfect,
01:19:31 stable matching is what you referred to.
01:19:33 That’s when two partners are trying to.
01:19:36 Okay, what’s confusing you is that in the first
01:19:39 interpretation of the problem,
01:19:40 I had boys matching with girls.
01:19:42 Yes.
01:19:44 In the second interpretation,
01:19:46 you have humans matching with institutions.
01:19:49 With institutions.
01:19:51 I, and there’s a coupling between within the,
01:19:54 gotcha, within the humans.
01:19:56 Yeah.
01:19:56 Any added little constraint will make it an NP hard problem.
01:20:00 Well, yeah.
01:20:03 Okay.
01:20:05 By the way, the algorithm you mentioned
01:20:06 wasn’t one of yours or no?
01:20:07 No, no, that was due to Gale and Shapley
01:20:11 and my friend David Gale passed away
01:20:15 before he could get part of a Nobel Prize,
01:20:18 but his partner Shapley shared in a Nobel Prize
01:20:22 with somebody else for.
01:20:24 Economics?
01:20:25 For economics.
01:20:28 For ideas stemming from the stable matching idea.
01:20:32 So you’ve also have developed yourself
01:20:35 some elegant, beautiful algorithms.
01:20:38 Again, picking your children,
01:20:39 so the Rabin Karp algorithm for string searching,
01:20:43 pattern matching, Edmonds Karp algorithm for max flows
01:20:46 we mentioned, Hopcroft Karp algorithm for finding
01:20:49 maximum cardinality matchings in bipartite graphs.
01:20:52 Is there ones that stand out to you,
01:20:55 ones you’re most proud of or just
01:20:59 whether it’s beauty, elegance,
01:21:01 or just being the right discovery development
01:21:06 in your life that you’re especially proud of?
01:21:10 I like the Rabin Karp algorithm
01:21:12 because it illustrates the power of randomization.
01:21:17 So the problem there
01:21:23 is to decide whether a given long string of symbols
01:21:34 from some alphabet contains a given word,
01:21:39 whether a particular word occurs
01:21:41 within some very much longer word.
01:21:45 And so the idea of the algorithm
01:21:52 is to associate with the word that we’re looking for,
01:21:57 a fingerprint, some number,
01:22:02 or some combinatorial object that describes that word,
01:22:10 and then to look for an occurrence of that same fingerprint
01:22:13 as you slide along the longer word.
01:22:18 And what we do is we associate with each word a number.
01:22:23 So first of all, we think of the letters
01:22:26 that occur in a word as the digits of, let’s say,
01:22:30 decimal or whatever base here,
01:22:36 whatever number of different symbols there are.
01:22:40 That’s the base of the numbers, yeah.
01:22:42 Right, so every word can then be thought of as a number
01:22:46 with the letters being the digits of that number.
01:22:50 And then we pick a random prime number in a certain range,
01:22:55 and we take that word viewed as a number,
01:23:00 and take the remainder on dividing that number by the prime.
01:23:09 So coming up with a nice hash function.
01:23:11 It’s a kind of hash function.
01:23:13 Yeah, it gives you a little shortcut
01:23:17 for that particular word.
01:23:22 Yeah, so that’s the…
01:23:26 It’s very different than other algorithms of its kind
01:23:31 that we’re trying to do search, string matching.
01:23:35 Yeah, which usually are combinatorial
01:23:38 and don’t involve the idea of taking a random fingerprint.
01:23:42 Yes.
01:23:43 And doing the fingerprinting has two advantages.
01:23:48 One is that as we slide along the long word,
01:23:51 digit by digit, we keep a window of a certain size,
01:23:57 the size of the word we’re looking for,
01:24:00 and we compute the fingerprint
01:24:03 of every stretch of that length.
01:24:07 And it turns out that just a couple of arithmetic operations
01:24:11 will take you from the fingerprint of one part
01:24:15 to what you get when you slide over by one position.
01:24:19 So the computation of all the fingerprints is simple.
01:24:26 And secondly, it’s unlikely if the prime is chosen randomly
01:24:32 from a certain range that you will get two of the segments
01:24:37 in question having the same fingerprint.
01:24:39 Right.
01:24:41 And so there’s a small probability of error
01:24:43 which can be checked after the fact,
01:24:46 and also the ease of doing the computation
01:24:48 because you’re working with these fingerprints
01:24:51 which are remainders modulo some big prime.
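The fingerprinting and window-sliding just described can be sketched as follows. The base and the prime here are fixed illustrative values; in the algorithm proper the prime is drawn at random from a suitable range, and a fingerprint match is verified character by character because of the small probability of collision:

```python
def rabin_karp(text, pattern, prime=1_000_003, base=256):
    """Return the first index where pattern occurs in text, or -1."""
    m, n = len(pattern), len(text)
    if m > n:
        return -1
    high = pow(base, m - 1, prime)      # weight of the window's leading "digit"
    p_fp = t_fp = 0
    for i in range(m):                  # fingerprints of the pattern and first window
        p_fp = (p_fp * base + ord(pattern[i])) % prime
        t_fp = (t_fp * base + ord(text[i])) % prime
    for i in range(n - m + 1):
        # a fingerprint match may be a false positive, so verify the characters
        if p_fp == t_fp and text[i:i + m] == pattern:
            return i
        if i < n - m:
            # slide by one position with a couple of arithmetic operations:
            # drop the leading digit, shift, append the next character
            t_fp = ((t_fp - ord(text[i]) * high) * base
                    + ord(text[i + m])) % prime
    return -1
```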
01:24:55 So that’s the magical thing about randomized algorithms
01:24:58 is that if you add a little bit of randomness,
01:25:02 it somehow allows you to take a pretty naive approach,
01:25:05 a simple looking approach, and allow it to run extremely well.
01:25:10 So can you maybe take a step back and say
01:25:14 what is a randomized algorithm, this category of algorithms?
01:25:18 Well, it’s just the ability to draw a random number
01:25:22 from such, from some range
01:25:27 or to associate a random number with some object
01:25:32 or to draw that random from some set.
01:25:35 So another example is very simple
01:25:41 if we’re conducting a presidential election
01:25:46 and we would like to pick the winner.
01:25:52 In principle, we could draw a random sample
01:25:57 of all of the voters in the country.
01:25:59 And if it was of substantial size, say a few thousand,
01:26:05 then the most popular candidate in that group
01:26:08 would be very likely to be the correct choice
01:26:12 that would come out of counting all the millions of votes.
01:26:15 And of course we can’t do this because first of all,
01:26:18 everybody has to feel that his or her vote counted.
01:26:21 And secondly, we can’t really do a purely random sample
01:26:25 from that population.
01:26:28 And I guess thirdly, there could be a tie
01:26:30 in which case we wouldn’t have a significant difference
01:26:34 between two candidates.
01:26:36 But those things aside,
01:26:37 if you didn’t have all that messiness of human beings,
01:26:40 you could prove that that kind of random picking
01:26:43 would come up with the right answer.
01:26:43 You just said random picking would solve the problem
01:26:48 with a very low probability of error.
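Setting the messiness aside, the sampling argument is easy to simulate. The electorate size and vote shares below are invented numbers, purely for illustration; with a comfortable margin, a sample of a couple of thousand voters almost always names the true winner:

```python
import random

random.seed(0)
# hypothetical electorate: candidate A leads 60% to 40%
votes = ['A'] * 600_000 + ['B'] * 400_000

# draw a random sample of a few thousand voters
sample = random.sample(votes, 2000)
sample_winner = max(set(sample), key=sample.count)
```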
01:26:51 Another example is testing whether a number is prime.
01:26:55 So if I wanna test whether 17 is prime,
01:27:01 I could pick any number between one and 16,
01:27:08 raise it to the 16th power modulo 17,
01:27:12 and you should get back one.
01:27:15 That’s a famous formula due to Fermat about,
01:27:19 it’s called Fermat’s Little Theorem,
01:27:21 that if you take any number a in the range
01:27:29 one through n minus one,
01:27:32 and raise it to the n minus 1th power modulo n,
01:27:38 you'll get back one if n is prime.
01:27:43 So if you don't get back one,
01:27:45 that's a proof that the number is not prime.
01:27:48 And you can show that, suitably defined,
01:27:57 the probability that you will get a violation
01:28:09 of Fermat's result is very high when the number is not prime.
01:28:14 And so this gives you a way of rapidly proving
01:28:18 that a number is not prime.
01:28:21 It’s a little more complicated than that
01:28:22 because there are certain values of n
01:28:26 where something a little more elaborate has to be done,
01:28:28 but that’s the basic idea.
01:28:32 Taking an identity that holds for primes,
01:28:34 and therefore, if it ever fails on any instance,
01:28:39 you know that the number is not prime.
01:28:43 It’s a quick choice, a fast choice,
01:28:45 fast proof that a number is not prime.
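A sketch of the basic test just described, assuming Fermat's little theorem in the form a^(n−1) ≡ 1 (mod n) for prime n and 1 ≤ a ≤ n−1. As noted in the conversation, certain composites (the Carmichael numbers) need a more elaborate test such as Miller-Rabin; this is only the basic idea:

```python
import random

def fermat_witness(n, trials=20):
    """Return True if some random a proves n composite; False means 'probably prime'."""
    for _ in range(trials):
        a = random.randrange(1, n)
        if pow(a, n - 1, n) != 1:   # Fermat's little theorem violated
            return True             # conclusive: n is not prime
    return False                    # no violation found; n is probably prime
```

A True result is a proof of compositeness; a False result is only probabilistic evidence of primality.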
01:28:48 Can you maybe elaborate a little bit more
01:28:50 what’s your intuition why randomness works so well
01:28:54 and results in such simple algorithms?
01:28:57 Well, the example of conducting an election
01:29:00 where you could take, in theory, you could take a sample
01:29:04 and depend on the validity of the sample
01:29:07 to really represent the whole
01:29:09 is just the basic fact of statistics,
01:29:12 which gives a lot of opportunities.
01:29:17 And I actually exploited that sort of random sampling idea
01:29:23 in designing an algorithm
01:29:25 for counting the number of solutions
01:29:30 that satisfy a particular formula
01:30:33 in propositional logic.
01:29:37 A particular, so some version of the satisfiability problem?
01:29:44 A version of the satisfiability problem.
01:29:47 Is there some interesting insight
01:29:49 that you wanna elaborate on,
01:29:50 like what some aspect of that algorithm
01:29:53 that might be useful to describe?
01:29:57 So you have a collection of formulas
01:30:02 and you want to count the number of solutions
01:30:14 that satisfy at least one of the formulas.
01:30:20 And you can count the number of solutions
01:30:23 that satisfy any particular one of the formulas,
01:30:27 but you have to account for the fact
01:30:29 that that solution might be counted many times
01:30:33 if it solves more than one of the formulas.
01:30:40 And so what you do is you sample from the formulas
01:30:46 according to the number of solutions
01:30:49 that satisfy each individual one.
01:30:53 In that way, you draw a random solution,
01:30:55 but then you correct by looking at
01:30:59 the number of formulas that satisfy that random solution
01:31:04 and don’t double count.
01:31:08 So you can think of it this way.
01:31:11 So you have a matrix of zeros and ones
01:31:16 and you wanna know how many columns of that matrix
01:31:18 contain at least one one.
01:31:22 And you can count in each row how many ones there are.
01:31:26 So what you can do is draw from the rows
01:31:29 according to the number of ones.
01:31:31 If a row has more ones, it gets drawn more frequently.
01:31:35 But then if you draw from that row,
01:31:39 you have to go up the column
01:31:41 and look at where that same one is repeated
01:31:44 in different rows and only count it as a success or a hit
01:31:51 if it’s the earliest row that contains the one.
01:31:54 And that gives you a robust statistical estimate
01:32:00 of the total number of columns
01:32:02 that contain at least one of the ones.
01:32:04 So that is an example of the same principle
01:32:09 that was used in studying random sampling.
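The matrix picture just described can be turned into a short estimator. This is an illustrative sketch, with the sample count and seed chosen arbitrarily: drawing a uniform random 1 from the matrix is the same as picking a row with probability proportional to its number of ones and then a uniform 1 within it, and counting only the earliest row's 1 in each column corrects the double counting:

```python
import random

def estimate_nonzero_columns(matrix, samples=20_000, seed=1):
    """Estimate how many columns of a 0/1 matrix contain at least one 1."""
    rng = random.Random(seed)
    # all (row, column) positions of ones
    ones = [(i, j) for i, row in enumerate(matrix)
            for j, v in enumerate(row) if v]
    earliest = {}                       # earliest row containing a 1, per column
    for i, j in ones:
        earliest[j] = min(earliest.get(j, i), i)
    hits = 0
    for _ in range(samples):
        i, j = rng.choice(ones)         # row-weighted draw of a random 1
        if i == earliest[j]:            # count each column only once
            hits += 1
    # hit fraction rescaled by the total number of ones estimates the column count
    return hits / samples * len(ones)
```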
01:32:13 Another viewpoint is that if you have a phenomenon
01:32:18 that occurs almost all the time,
01:32:21 then if you sample one of the occasions where it occurs,
01:32:28 you’re most likely to,
01:32:30 and you’re looking for an occurrence,
01:32:32 a random occurrence is likely to work.
01:32:34 So that comes up in solving identities,
01:32:39 solving algebraic identities.
01:32:42 You get two formulas that may look very different.
01:32:46 You wanna know if they’re really identical.
01:32:49 What you can do is just pick a random value
01:32:52 and evaluate the formulas at that value
01:32:56 and see if they agree.
01:32:58 And you depend on the fact
01:33:02 that if the formulas are distinct,
01:33:04 then they’re gonna disagree a lot.
01:33:06 And so therefore, a random choice
01:33:08 will exhibit the disagreement.
01:33:12 If there are many ways for the two to disagree
01:33:16 and you only need to find one disagreement,
01:33:18 then random choice is likely to yield it.
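A minimal sketch of that random-evaluation idea, assuming for illustration that the two formulas are given as Python functions of one integer variable and that distinct polynomials disagree at almost every point in a large range:

```python
import random

def probably_identical(f, g, trials=10, seed=3):
    """Compare two formulas at random points; one disagreement is conclusive."""
    rng = random.Random(seed)
    return all(f(x) == g(x)
               for x in (rng.randint(-10**6, 10**6) for _ in range(trials)))
```

A False answer is a proof the formulas differ; a True answer only says no disagreement was found at the sampled points.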
01:33:22 And in general, so we’ve just talked
01:33:24 about randomized algorithms,
01:33:26 but we can look at the probabilistic analysis of algorithms.
01:33:29 And that gives us an opportunity to step back
01:33:32 and as you said, everything we’ve been talking about
01:33:35 is worst case analysis.
01:33:38 Could you maybe comment on the usefulness
01:33:43 and the power of worst case analysis
01:33:45 versus best case analysis, average case, probabilistic?
01:33:51 How do we think about the future
01:33:52 of theoretical computer science, computer science
01:33:56 in the kind of analysis we do of algorithms?
01:33:59 Does worst case analysis still have a place,
01:34:01 an important place?
01:34:02 Or do we want to try to move forward
01:34:04 towards kind of average case analysis?
01:34:07 And what are the challenges there?
01:34:09 So if worst case analysis shows
01:34:11 that an algorithm is always good,
01:34:15 that’s fine.
01:34:17 If worst case analysis is used to show that the problem,
01:34:25 that the solution is not always good,
01:34:29 then you have to step back and do something else
01:34:32 to ask how often will you get a good solution?
01:34:36 Just to pause on that for a second,
01:34:38 that’s so beautifully put
01:34:40 because I think we tend to judge algorithms.
01:34:43 We throw them in the trash
01:34:45 the moment their worst case is shown to be bad.
01:34:48 Right, and that’s unfortunate.
01:34:50 I think a good example is going back
01:34:56 to the satisfiability problem.
01:35:00 There are very powerful programs called SAT solvers
01:35:04 which in practice fairly reliably solve instances
01:35:09 with many millions of variables
01:35:11 that arise in digital design
01:35:14 or in proving programs correct in other applications.
01:35:20 And so in many application areas,
01:35:24 even though satisfiability as we’ve already discussed
01:35:27 is NP complete, the SAT solvers will work so well
01:35:34 that the people in that discipline
01:35:37 tend to think of satisfiability as an easy problem.
01:35:40 So in other words, just for some reason
01:35:45 that we don’t entirely understand,
01:35:47 the instances that people formulate
01:35:50 in designing digital circuits or other applications
01:35:54 are such that satisfiability is not hard to check
01:36:04 and even searching for a satisfying solution
01:36:07 can be done efficiently in practice.
01:36:11 And there are many examples.
01:36:13 For example, we talked about the traveling salesman problem.
01:36:18 So just to refresh our memories,
01:36:21 the problem is you’ve got a set of cities,
01:36:23 you have pairwise distances between cities
01:36:28 and you want to find a tour through all the cities
01:36:31 that minimizes the total cost of all the edges traversed,
01:36:36 all the trips between cities.
01:36:38 The problem is NP hard,
01:36:41 but people using integer programming codes
01:36:46 together with some other mathematical tricks
01:36:51 can solve geometric instances of the problem
01:36:57 where the cities are, let’s say points in the plane
01:37:01 and get optimal solutions to problems
01:37:03 with tens of thousands of cities.
01:37:05 Actually, it’ll take a few computer months
01:37:08 to solve a problem of that size,
01:37:10 but for problems of size a thousand or two,
01:37:13 it’ll rapidly get optimal solutions,
01:37:16 provably optimal solutions,
01:37:19 even though again, we know that it’s unlikely
01:37:23 that the traveling salesman problem
01:37:25 can be solved in polynomial time.
01:37:28 Are there methodologies like rigorous systematic methodologies
01:37:33 for, you said in practice.
01:37:38 In practice, this algorithm’s pretty good.
01:37:40 Are there systematic ways of saying
01:37:42 in practice, this algorithm’s pretty good?
01:37:43 So in other words, average case analysis.
01:37:46 Or you’ve also mentioned that average case
01:37:49 kind of requires you to understand what the typical case is,
01:37:52 typical instances, and that might be really difficult.
01:37:55 That’s very difficult.
01:37:56 So after I did my original work
01:37:59 on showing all these problems to be NP complete,
01:38:06 I looked around for a way to shed some positive light
01:38:11 on combinatorial algorithms.
01:38:13 And what I tried to do was to study problems,
01:38:19 behavior on the average or with high probability.
01:38:24 But I had to make some assumptions
01:38:26 about what’s the probability space?
01:38:29 What’s the sample space?
01:38:30 What do we mean by typical problems?
01:38:33 That’s very hard to say.
01:38:35 So I took the easy way out
01:38:37 and made some very simplistic assumptions.
01:38:40 So I assumed, for example,
01:38:42 that if we were generating a graph
01:38:44 with a certain number of vertices and edges,
01:38:47 then we would generate the graph
01:38:48 by simply choosing one edge at a time at random
01:38:53 until we got the right number of edges.
01:38:56 That’s a particular model of random graphs
01:38:59 that has been studied mathematically a lot.
01:39:02 And within that model,
01:39:05 I could prove all kinds of wonderful things,
01:39:07 I and others who also worked on this.
01:39:10 So we could show that we know exactly
01:39:15 how many edges there have to be
01:39:16 in order for there to be a so called Hamiltonian circuit.
01:39:24 That’s a cycle that visits each vertex exactly once.
01:39:31 We know that if the number of edges
01:39:35 is a little bit more than n log n,
01:39:37 where n is the number of vertices,
01:39:39 then such a cycle is very likely to exist.
01:39:44 And we can give a heuristic
01:39:45 that will find it with high probability.
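The edge-at-a-time model described here is the classical G(n, m) random graph. A small sketch, using sampling without replacement, which is equivalent to choosing distinct edges one at a time at random until the right number is reached:

```python
import itertools
import random

def random_graph(n, m, seed=4):
    """Choose m distinct edges uniformly at random on n vertices (G(n, m) model)."""
    rng = random.Random(seed)
    all_edges = list(itertools.combinations(range(n), 2))  # every possible edge
    return set(rng.sample(all_edges, m))                   # m distinct edges, uniform
```

In this model, once m is a bit more than n log n, a Hamiltonian circuit is very likely to exist, which is the threshold quoted above.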
01:39:48 And the community in which I was working
01:39:54 got a lot of results along these lines.
01:39:58 But the field tended to be rather lukewarm
01:40:04 about accepting these results as meaningful
01:40:07 because we were making such a simplistic assumption
01:40:09 about the kinds of graphs that we would be dealing with.
01:40:13 So we could show all kinds of wonderful things,
01:40:16 it was a great playground, I enjoyed doing it.
01:40:18 But after a while, I concluded that
01:40:27 it didn’t have a lot of bite
01:40:29 in terms of the practical application.
01:40:31 Oh, okay, so it strayed too much
01:40:33 into the world of toy problems.
01:40:35 Yeah.
01:40:36 That can, okay.
01:40:36 But all right, is there a way to find
01:40:41 nice representative real world impactful instances
01:40:45 of a problem on which to demonstrate that an algorithm is good?
01:40:48 So this is kind of like the machine learning world,
01:40:51 that’s kind of what they at his best tries to do
01:40:54 is find a data set from like the real world
01:40:57 and show the performance, all the conferences
01:41:02 are all focused on beating the performance
01:41:04 on that real world data set.
01:41:07 Is there an equivalent in complexity analysis?
01:41:11 Not really, Don Knuth started to collect examples
01:41:19 of graphs coming from various places.
01:41:21 So he would have a whole zoo of different graphs
01:41:26 that he could choose from and he could study
01:41:28 the performance of algorithms on different types of graphs.
01:41:31 But there it’s really important and compelling
01:41:37 to be able to define a class of graphs.
01:41:41 The actual act of defining a class of graphs
01:41:44 that you’re interested in, it seems to be
01:41:46 a non trivial step if we’re talking about instances
01:41:49 that we should care about in the real world.
01:41:51 Yeah, there’s nothing available there
01:41:55 that would be analogous to the training set
01:41:58 for supervised learning where you sort of assume
01:42:02 that the world has given you a bunch
01:42:05 of examples to work with.
01:42:10 We don’t really have that for problems,
01:42:14 for combinatorial problems on graphs and networks.
01:42:18 You know, there’s been a huge growth,
01:42:21 a big growth of data sets available.
01:42:23 Do you think some aspect of theoretical computer science
01:42:28 might be contradicting my own question while saying it,
01:42:30 but will there be some aspect,
01:42:33 an empirical aspect of theoretical computer science
01:42:36 which will allow the fact that these data sets are huge,
01:42:41 we’ll start using them for analysis.
01:42:44 Sort of, you know, if you want to say something
01:42:46 about a graph algorithm, you might take
01:42:50 a social network like Facebook and looking at subgraphs
01:42:55 of that and prove something about the Facebook graph
01:42:58 and, at the same time,
01:43:01 be respected in the theoretical computer science community.
01:43:03 That hasn’t been achieved yet, I’m afraid.
01:43:06 Is that P equals NP, is that impossible?
01:43:10 Is it impossible to publish a successful paper
01:43:14 in the theoretical computer science community
01:43:17 that shows some performance on a real world data set?
01:43:22 Or is that really just those are two different worlds?
01:43:25 They haven’t really come together.
01:43:27 I would say that there is a field
01:43:31 of experimental algorithmics where people,
01:43:34 sometimes they’re given some family of examples.
01:43:39 Sometimes they just generate them at random
01:43:41 and they report on performance,
01:43:45 but there’s no convincing evidence
01:43:50 that the sample is representative of anything at all.
01:43:57 So let me ask, in terms of breakthroughs
01:44:00 and open problems, what are the most compelling
01:44:04 open problems to you and what possible breakthroughs
01:44:07 do you see in the near term
01:44:08 in terms of theoretical computer science?
01:44:13 Well, there are all kinds of relationships
01:44:15 among complexity classes that can be studied,
01:44:18 just to mention one thing, I wrote a paper
01:44:22 with Richard Lipton in 1979,
01:44:28 where we asked the following question.
01:44:34 If you take a combinatorial problem in NP, let’s say,
01:44:39 and you choose, and you pick the size of the problem,
01:44:49 say it’s a traveling salesman problem, but of size 52,
01:44:55 and you ask, could you get an efficient,
01:45:00 a small Boolean circuit tailored for that size, 52,
01:45:05 where you could feed the edges of the graph
01:45:08 in as Boolean inputs and get, as an output,
01:45:12 the question of whether or not
01:45:13 there’s a tour of a certain length.
01:45:16 And that would, in other words, briefly,
01:45:19 what you would say in that case
01:45:21 is that the problem has small circuits,
01:45:24 polynomial size circuits.
01:45:28 Now, we know that if P is equal to NP,
01:45:31 then, in fact, these problems will have small circuits,
01:45:35 but what about the converse?
01:45:37 Could a problem have small circuits,
01:45:39 meaning that an algorithm tailored to any particular size
01:45:43 could work well, and yet not be a polynomial time algorithm?
01:45:48 That is, you couldn’t write it as a single,
01:45:50 uniform algorithm, good for all sizes.
01:45:52 Just to clarify, by small circuits
01:45:55 for a problem of a particular size,
01:46:02 or even further constraint,
01:46:04 a small circuit for a particular…
01:46:07 No, for all the inputs of that size.
01:46:10 Is that a trivial problem for a particular instance?
01:46:13 So, coming up, an automated way
01:46:15 of coming up with a circuit.
01:46:17 I guess that’s just an answer.
01:46:19 That would be hard, yeah.
01:46:22 But there’s the existential question.
01:46:25 Everybody talks nowadays about existential questions.
01:46:29 Existential challenges.
01:46:35 You could ask the question,
01:46:38 does the Hamiltonian circuit problem
01:46:43 have a small circuit for every size,
01:46:49 for each size, a different small circuit?
01:46:51 In other words, could you tailor solutions
01:46:55 depending on the size, and get polynomial size?
01:47:00 Even if P is not equal to NP.
01:47:02 Right.
01:47:06 That would be fascinating if that’s true.
01:47:08 Yeah, what we proved is that if that were possible,
01:47:14 then something strange would happen in complexity theory.
01:47:18 Some high level class which I could briefly describe,
01:47:26 something strange would happen.
01:47:28 So, I’ll take a stab at describing what I mean.
01:47:31 Sure, let’s go there.
01:47:33 So, we have to define this hierarchy
01:47:37 in which the first level of the hierarchy is P,
01:47:41 and the second level is NP.
01:47:44 And what is NP?
01:47:45 NP involves statements of the form
01:47:48 there exists a something such that something holds.
01:47:53 So, for example, there exists the coloring
01:47:59 such that a graph can be colored
01:48:01 with only that number of colors.
01:48:06 Or there exists a Hamiltonian circuit.
01:48:09 That’s a statement about this graph.
01:48:10 Yeah, so the NP deals with statements of that kind,
01:48:22 that there exists a solution.
01:48:26 Now, you could imagine a more complicated expression
01:48:32 which says for all x there exists a y
01:48:38 such that some proposition holds involving both x and y.
01:48:47 So, that would say, for example, in game theory,
01:48:50 for all strategies for the first player,
01:48:54 there exists a strategy for the second player
01:48:57 such that the first player wins.
01:48:59 That would be at the second level of the hierarchy.
01:49:03 The third level would be there exists an A
01:49:06 such that for all B there exists a C,
01:49:08 that something holds.
01:49:09 And you could imagine going higher and higher
01:49:11 in the hierarchy.
01:49:12 And you’d expect that the complexity classes
01:49:17 that correspond to those different cases
01:49:22 would get bigger and bigger.
01:49:27 What do you mean by bigger and bigger?
01:49:28 Sorry, sorry.
01:49:29 They’d get harder and harder to solve.
01:49:30 Harder and harder, right.
01:49:32 Harder and harder to solve.
01:49:35 And what Lipton and I showed was
01:49:37 that if NP had small circuits,
01:49:41 then this hierarchy would collapse down
01:49:44 to the second level.
01:49:46 In other words, you wouldn’t get any more mileage
01:49:48 by complicating your expressions with three quantifiers
01:49:51 or four quantifiers or any number.
01:49:55 I’m not sure what to make of that exactly.
01:49:57 Well, I think it would be evidence
01:49:59 that NP doesn’t have small circuits
01:50:02 because something so bizarre would happen.
01:50:07 But again, it’s only evidence, not proof.
01:50:09 Well, yeah, that’s not even evidence
01:50:12 because you’re saying P is not equal to NP
01:50:16 because something bizarre has to happen.
01:50:19 I mean, that’s proof by the lack of bizarreness
01:50:25 in our science.
01:50:26 But it seems like just the very notion
01:50:31 of P equals NP would be bizarre.
01:50:33 So any way you arrive at, there’s no way.
01:50:36 You have to fight the dragon at some point.
01:50:38 Yeah, okay.
01:50:39 Well, anyway, for whatever it’s worth,
01:50:41 that’s what we proved.
01:50:43 Awesome.
01:50:45 So that’s a potential space of interesting problems.
01:50:49 Yeah.
01:50:50 Let me ask you about this other world
01:50:54 that of machine learning, of deep learning.
01:50:57 What’s your thoughts on the history
01:50:59 and the current progress of machine learning field
01:51:02 that’s often progressed sort of separately
01:51:05 as a space of ideas and space of people
01:51:08 than the theoretical computer science
01:51:10 or just even computer science world?
01:51:12 Yeah, it’s really very different
01:51:15 from the theoretical computer science world
01:51:17 because the results about it,
01:51:22 algorithmic performance tend to be empirical.
01:51:25 It’s more akin to the world of SAT solvers
01:51:28 where we observe that for formulas arising in practice,
01:51:33 the solver does well.
01:51:35 So it’s of that type.
01:51:38 We’re moving into the empirical evaluation of algorithms.
01:51:45 Now, it’s clear that there’ve been huge successes
01:51:47 in image processing, robotics,
01:51:52 natural language processing, a little less so,
01:51:55 but across the spectrum of game playing is another one.
01:52:00 There’ve been great successes and one of those effects
01:52:07 is that it’s not too hard to become a millionaire
01:52:10 if you can get a reputation in machine learning
01:52:12 and there’ll be all kinds of companies
01:52:13 that will be willing to offer you the moon
01:52:16 because they think that if they have AI at their disposal,
01:52:23 then they can solve all kinds of problems.
01:52:25 But there are limitations.
01:52:30 One is that the solutions that you get
01:52:38 to supervise learning problems
01:52:44 through convolutional neural networks
01:52:50 seem to perform amazingly well
01:52:55 even for inputs that are outside the training set.
01:53:03 But we don’t have any theoretical understanding
01:53:06 of why that’s true.
01:53:09 Secondly, the solutions, the networks that you get
01:53:14 are very hard to understand
01:53:16 and so very little insight comes out.
01:53:19 So yeah, yeah, they may seem to work on your training set
01:53:23 and you may be able to discover whether your photos occur
01:53:28 in a different sample of inputs or not,
01:53:35 but we don’t really know what’s going on.
01:53:37 We don’t know the features that distinguish the photographs
01:53:41 or the objects are not easy to characterize.
01:53:49 Well, it’s interesting because you mentioned
01:53:51 coming up with a small circuit to solve
01:53:54 a particular size problem.
01:53:56 It seems that neural networks are kind of small circuits.
01:53:59 In a way, yeah.
01:54:01 But they’re not programs.
01:54:02 Sort of like the things you’ve designed
01:54:04 are algorithms, programs, algorithms.
01:54:08 Neural networks aren’t able to develop algorithms
01:54:12 to solve a problem.
01:54:14 Well, they are algorithms.
01:54:16 It’s just that they’re…
01:54:18 But sort of, yeah, it could be a semantic question,
01:54:25 but there’s not an algorithmic style manipulation
01:54:31 of the input.
01:54:33 Perhaps you could argue there is.
01:54:35 Yeah, well.
01:54:37 It feels a lot more like a function of the input.
01:54:40 Yeah, it’s a function.
01:54:41 It’s a computable function.
01:54:43 Once you have the network,
01:54:46 you can simulate it on a given input
01:54:49 and figure out the output.
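[Editor's note: to make Karp's point concrete, a trained network really is just a computable function that can be simulated on any input. A minimal sketch in Python, with made-up weights rather than weights from any real trained model:]

```python
# Simulate a tiny two-layer feedforward network on a given input.
# The weights below are hypothetical, chosen only for illustration.

def relu(v):
    """Elementwise rectified linear unit."""
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Multiply a weight matrix (list of rows) by an input vector."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

# Hypothetical weights: 2 inputs -> 2 hidden units -> 1 output.
W1 = [[1.0, -1.0], [0.5, 0.5]]
W2 = [[1.0, 2.0]]

def simulate(x):
    """Once the network (its weights) is fixed, the output is a
    deterministic, computable function of the input."""
    hidden = relu(matvec(W1, x))
    return matvec(W2, hidden)[0]

print(simulate([3.0, 1.0]))  # → 6.0
```

The point of the sketch is exactly the one made above: computing the output is easy; saying *why* the weights produce sensible outputs is where the theoretical understanding is missing.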
01:54:51 But if you’re trying to recognize images,
01:54:58 then you don’t know what features of the image
01:55:00 are really determining what the circuit is doing.
01:55:09 The circuit is sort of very intricate
01:55:14 and it’s not clear that the simple characteristics
01:55:21 that you’re looking for,
01:55:22 the edges of the objects or whatever they may be,
01:55:26 are emerging from the structure of the circuit.
01:55:29 Well, it’s not clear to us humans,
01:55:31 but it’s clear to the circuit.
01:55:33 Yeah, well, right.
01:55:34 I mean, it’s not clear to sort of the elephant
01:55:39 how the human brain works,
01:55:44 but it’s clear to us humans,
01:55:46 we can explain to each other our reasoning
01:55:49 and that’s why the cognitive science
01:55:50 and psychology field exists.
01:55:52 Maybe the whole thing of being explainable to humans
01:55:56 is a little bit overrated.
01:55:57 Oh, maybe, yeah.
01:55:59 I guess you can say the same thing about our brain
01:56:02 that when we perform acts of cognition,
01:56:06 we have no idea how we do it really.
01:56:08 We do though, I mean, at least for the visual system,
01:56:13 the auditory system and so on,
01:56:15 we do get some understanding of the principles
01:56:19 that they operate under,
01:56:20 but for many deeper cognitive tasks, we don’t have that.
01:56:25 That’s right.
01:56:26 Let me ask, you’ve also been doing work on bioinformatics.
01:56:33 Does it amaze you that the fundamental building blocks,
01:56:36 so if we take a step back and look at us humans,
01:56:39 the building blocks used by evolution
01:56:41 to build us intelligent human beings,
01:56:44 are all contained there in our DNA?
01:56:48 It’s amazing and what’s really amazing
01:56:51 is that we are beginning to learn how to edit DNA,
01:57:00 which is very, very, very fascinating.
01:57:05 This ability to take a sequence,
01:57:15 find it in the genome and do something to it.
01:57:18 I mean, that’s really taking our biological systems
01:57:21 towards the world of algorithms.
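[Editor's note: "find a sequence in the genome" is, at its core, substring search. As a hedged illustration on toy strings rather than real genomic data, here is the Rabin-Karp rolling-hash algorithm, which Karp himself co-invented, locating a short target sequence:]

```python
# Rabin-Karp substring search over a DNA alphabet.
# Toy genome and pattern; real pipelines work on far larger data.

def rabin_karp(text, pattern, base=4, mod=10**9 + 7):
    """Return the first index where pattern occurs in text, or -1."""
    n, m = len(text), len(pattern)
    if m > n:
        return -1
    code = {"A": 0, "C": 1, "G": 2, "T": 3}
    high = pow(base, m - 1, mod)  # weight of the window's leading character
    p_hash = t_hash = 0
    for i in range(m):  # hash the pattern and the first window
        p_hash = (p_hash * base + code[pattern[i]]) % mod
        t_hash = (t_hash * base + code[text[i]]) % mod
    for i in range(n - m + 1):
        # On a hash match, confirm with a direct character comparison.
        if p_hash == t_hash and text[i:i + m] == pattern:
            return i
        if i < n - m:  # roll the window one position to the right
            t_hash = ((t_hash - code[text[i]] * high) * base
                      + code[text[i + m]]) % mod
    return -1

genome = "GATTACACGTTAGC"  # invented example sequence
print(rabin_karp(genome, "ACGT"))  # → 6
```

The rolling hash lets each window be checked in constant expected time, which is what makes scanning genome-scale text feasible.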
01:57:24 Yeah, but it raises a lot of questions.
01:57:30 You have to distinguish between doing it on an individual
01:57:33 or doing it on somebody’s germline,
01:57:35 which means that all of their descendants will be affected.
01:57:40 So that’s like an ethical.
01:57:42 Yeah, so it raises very severe ethical questions.
01:57:50 And even doing it on individuals,
01:57:56 there’s a lot of hubris involved
01:57:59 that you can assume that knocking out a particular gene
01:58:04 is gonna be beneficial
01:58:05 because you don’t know what the side effects
01:58:07 are going to be.
01:58:08 So we have this wonderful new world of gene editing,
01:58:20 which is very, very impressive
01:58:23 and it could be used in agriculture,
01:58:27 it could be used in medicine in various ways.
01:58:32 But very serious ethical problems arise.
01:58:37 What are to you the most interesting places
01:58:39 where algorithms can help? Sort of, the ethical side
01:58:44 is an exceptionally challenging thing
01:58:46 that I think we’re going to have to tackle
01:58:48 with all of genetic engineering.
01:58:51 But on the algorithmic side,
01:58:53 there’s a lot of benefit that’s possible.
01:58:55 So is there areas where you see exciting possibilities
01:59:00 for algorithms to help model, optimize,
01:59:03 study biological systems?
01:59:06 Yeah, I mean, we can certainly analyze genomic data
01:59:12 to figure out which genes are operative in the cell
01:59:17 and under what conditions
01:59:18 and which proteins affect one another,
01:59:21 which proteins physically interact.
01:59:27 We can sequence proteins and modify them.
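[Editor's note: as a rough illustration of the kind of analysis described here, one simple way to flag genes that may act together is to correlate their expression levels across conditions. The expression values below are invented for illustration; real analyses use far more careful statistics:]

```python
# Pearson correlation of gene expression profiles across conditions.
# Highly correlated profiles hint (weakly!) at co-regulation.

import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical expression levels of three genes across five conditions.
expression = {
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],
    "geneB": [2.1, 3.9, 6.2, 8.0, 9.8],  # tracks geneA: candidate partner
    "geneC": [5.0, 1.0, 4.0, 2.0, 3.0],  # unrelated pattern
}

print(pearson(expression["geneA"], expression["geneB"]))
```

Correlation alone cannot establish physical interaction, which is why the experimental methods mentioned above remain essential.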
01:59:32 Is there some aspect of that
01:59:33 that’s a computer science problem
01:59:35 or is that still fundamentally a biology problem?
01:59:39 Well, it’s a big data,
01:59:41 it’s a statistical big data problem for sure.
01:59:45 So the biological data sets are increasing,
01:59:49 our ability to study our ancestry,
01:59:55 to study the tendencies towards disease,
01:59:59 to personalize treatment according to what’s in our genomes
02:00:06 and what tendencies for disease we have,
02:00:10 to be able to predict what troubles might come upon us
02:00:13 in the future and anticipate them,
02:00:16 to understand, say,
02:00:24 for a woman, whether her proclivity for breast cancer
02:00:31 is strong enough that she would want
02:00:33 to take action to avoid it.
02:00:37 You dedicate your 1985 Turing Award lecture
02:00:41 to the memory of your father.
02:00:42 What’s your fondest memory of your dad?
02:00:53 Seeing him standing in front of a class at the blackboard,
02:00:57 drawing perfect circles by hand
02:01:03 and showing his ability to attract the interest
02:01:08 of the motley collection of eighth grade students
02:01:14 that he was teaching.
02:01:19 When did you get a chance to see him
02:01:21 draw the perfect circles?
02:01:24 On rare occasions, I would get a chance
02:01:27 to sneak into his classroom and observe him.
02:01:33 And I think he was at his best in the classroom.
02:01:36 I think he really came to life and had fun,
02:01:42 not only teaching, but engaging in chit chat
02:01:49 with the students and ingratiating himself
02:01:52 with the students.
02:01:53 And what I inherited from that is the great desire
02:02:00 to be a teacher.
02:02:01 I retired recently and a lot of my former students came,
02:02:08 students with whom I had done research
02:02:11 or who had read my papers or who had been in my classes.
02:02:15 And when they talked about me,
02:02:22 they talked not about my 1979 paper or 1992 paper,
02:02:27 but about what they took away from my classes.
02:02:33 And not just the details, but just the approach
02:02:36 and the manner of teaching.
02:02:40 And so I sort of take pride in that,
02:02:43 at least in my early years as a faculty member at Berkeley,
02:02:47 I was exemplary in preparing my lectures
02:02:51 and I always came in prepared to the teeth,
02:02:56 and able therefore to deviate according
02:02:59 to what happened in the class,
02:03:01 and to really provide a model for the students.
02:03:08 So is there advice you can give out for others
02:03:14 on how to be a good teacher?
02:03:16 So preparation is one thing you’ve mentioned,
02:03:19 being exceptionally well prepared,
02:03:20 but are there other things,
02:03:21 pieces of advice that you can impart?
02:03:24 Well, the top three would be preparation, preparation,
02:03:27 and preparation.
02:03:29 Why is preparation so important, I guess?
02:03:32 It’s because it gives you the ease
02:03:34 to deal with any situation that comes up in the classroom.
02:03:38 And if you discover that you’re not getting through one way,
02:03:44 you can do it another way.
02:03:45 If the students have questions,
02:03:47 you can handle the questions.
02:03:49 Ultimately, you’re also feeling out the crowd,
02:03:54 the students, what they’re struggling with,
02:03:57 what they’re picking up,
02:03:57 not just through the questions,
02:03:59 but even just through their eyes.
02:04:01 Yeah, that’s right.
02:04:02 And because of the preparation, you can dance.
02:04:05 You can dance, you can say it another way,
02:04:09 or give it another angle.
02:04:11 Are there, in particular, ideas and algorithms
02:04:14 of computer science that you find
02:04:17 were big aha moments for students,
02:04:19 where they, for some reason, once they got it,
02:04:22 it clicked for them and they fell in love
02:04:24 with computer science?
02:04:26 Or is it individual, is it different for everybody?
02:04:29 It’s different for everybody.
02:04:30 You have to work differently with students.
02:04:32 Some of them just don’t need much influence.
02:04:40 They’re just running with what they’re doing
02:04:42 and they just need an ear now and then.
02:04:44 Others need a little prodding.
02:04:47 Others need to be persuaded to collaborate among themselves
02:04:50 rather than working alone.
02:04:53 They have their personal ups and downs,
02:04:57 so you have to deal with each student as a human being
02:05:03 and bring out the best.
02:05:06 Humans are complicated.
02:05:08 Yeah.
02:05:09 Perhaps a silly question.
02:05:11 If you could relive a moment in your life outside of family
02:05:15 because it made you truly happy,
02:05:17 or perhaps because it changed the direction of your life
02:05:19 in a profound way, what moment would you pick?
02:05:24 I was kind of a lazy student as an undergraduate,
02:05:28 and even in my first year in graduate school.
02:05:33 And I think it was when I started doing research,
02:05:37 I had a couple of summer jobs where I was able to contribute
02:05:41 and I had an idea.
02:05:45 And then there was one particular course
02:05:47 on mathematical methods in operations research
02:05:51 where I just gobbled up the material
02:05:53 and I scored 20 points higher than anybody else in the class
02:05:57 and came to the attention of the faculty.
02:06:00 And it made me realize that I had some ability
02:06:04 that I was going somewhere.
02:06:09 You realize you’re pretty good at this thing.
02:06:12 I don’t think there’s a better way to end it, Richard.
02:06:14 It was a huge honor.
02:06:15 Thank you for decades of incredible work.
02:06:18 Thank you for talking to me.
02:06:19 Thank you, it’s been a great pleasure.
02:06:21 You’re a superb interviewer.
02:06:23 I’ll stop it.
02:06:26 Thanks for listening to this conversation with Richard Karp.
02:06:29 And thank you to our sponsors, 8sleep and Cash App.
02:06:34 Please consider supporting this podcast
02:06:35 by going to 8sleep.com slash Lex
02:06:39 to check out their awesome mattress
02:06:41 and downloading Cash App and using code LexPodcast.
02:06:46 Click the links, buy the stuff;
02:06:48 even just visiting the site
02:06:49 and considering the purchase
02:06:51 helps them know that this podcast
02:06:54 is worth supporting in the future.
02:06:55 It really is the best way to support this journey I’m on.
02:06:59 If you enjoy this thing, subscribe on YouTube,
02:07:02 review it with five stars on Apple Podcast,
02:07:04 support it on Patreon, connect with me on Twitter
02:07:07 at Lex Friedman if you can figure out how to spell that.
02:07:11 And now let me leave you with some words from Isaac Asimov.
02:07:16 I do not fear computers.
02:07:18 I fear lack of them.
02:07:19 Thank you for listening and hope to see you next time.