Transcript
00:00:00 The following is a conversation with Peter Norvig.
00:00:02 He’s the Director of Research at Google
00:00:05 and the coauthor with Stuart Russell of the book
00:00:07 Artificial Intelligence: A Modern Approach,
00:00:10 that educated and inspired a whole generation
00:00:13 of researchers, including myself,
00:00:15 to get into the field of artificial intelligence.
00:00:18 This is the Artificial Intelligence Podcast.
00:00:21 If you enjoy it, subscribe on YouTube,
00:00:24 give five stars on iTunes, support on Patreon,
00:00:27 or simply connect with me on Twitter.
00:00:29 I’m Lex Fridman, spelled F R I D M A N.
00:00:32 And now, here’s my conversation with Peter Norvig.
00:00:37 Most researchers in the AI community, including myself,
00:00:40 own all three editions, red, green, and blue,
00:00:43 of Artificial Intelligence: A Modern Approach.
00:00:46 It’s a field defining textbook, as many people are aware,
00:00:49 that you wrote with Stuart Russell.
00:00:52 How has the book changed and how have you changed
00:00:55 in relation to it from the first edition
00:00:57 to the second to the third and now fourth edition
00:01:00 as you work on it?
00:01:00 Yeah, so it’s been a lot of years, a lot of changes.
00:01:04 One of the things changing from the first
00:01:05 to maybe the second or third
00:01:09 was just the rise of computing power, right?
00:01:12 So I think in the first edition, we said,
00:01:17 here’s propositional logic, but that only goes so far
00:01:22 because pretty soon you have millions of short little
00:01:27 propositional expressions and they can’t possibly fit in memory.
00:01:31 So we’re gonna use first order logic that’s more concise.
00:01:35 And then we quickly realized,
00:01:38 oh, propositional logic is pretty nice
00:01:40 because there are really fast SAT solvers and other things.
00:01:44 And look, there’s only millions of expressions
00:01:46 and that fits easily into memory,
00:01:48 or maybe even billions fit into memory now.
00:01:51 So that was a change of the type of technology we needed
00:01:54 just because the hardware expanded.
00:01:56 Even by the second edition,
00:01:58 resource constraints had loosened significantly.
00:02:01 And that was the early 2000s, the second edition.
00:02:04 Right, so ’95 was the first and then 2000, 2001 or so.
00:02:10 And then moving on from there,
00:02:12 I think we’re starting to see that again with the GPUs
00:02:17 and then more specific type of machinery
00:02:20 like the TPUs and you’re seeing custom ASICs and so on
00:02:25 for deep learning.
00:02:26 So we’re seeing another advance in terms of the hardware.
00:02:30 Then I think another thing that we especially noticed
00:02:33 this time around is in all three of the first editions,
00:02:37 we kind of said, well, we’re gonna define AI
00:02:40 as maximizing expected utility
00:02:43 and you tell me your utility function.
00:02:45 And now we’ve got 27 chapters worth of cool techniques
00:02:49 for how to optimize that.
00:02:51 I think in this edition, we’re saying more,
00:02:54 you know what, maybe that optimization part
00:02:56 is the easy part and the hard part is deciding
00:02:59 what is my utility function?
00:03:01 What do I want?
00:03:03 And if I’m a collection of agents or a society,
00:03:06 what do we want as a whole?
00:03:08 So you touched that topic in this edition.
00:03:10 You get a little bit more into utility.
00:03:11 Yeah.
00:03:12 That’s really interesting.
00:03:13 On a technical level,
00:03:15 we’re almost pushing the philosophical.
00:03:17 I guess it is philosophical, right?
00:03:19 So we’ve always had a philosophy chapter,
00:03:21 which I was glad that we were supporting.
00:03:27 And now it’s less kind of the Chinese room type argument
00:03:33 and more of these ethical and societal type issues.
00:03:37 So we get into the issues of fairness and bias
00:03:41 and just the issue of aggregating utilities.
00:03:45 So how do you encode human values into a utility function?
00:03:49 Is this something that you can do purely through data
00:03:53 in a learned way, or is there some systematic way?
00:03:56 Obviously, there are no good answers yet.
00:03:58 There are just beginnings to this,
00:04:01 to even opening the doors to these questions.
00:04:02 So there is no one answer.
00:04:04 Yes, there are techniques to try to learn that.
00:04:07 So we talk about inverse reinforcement learning, right?
00:04:10 So reinforcement learning, you take some actions,
00:04:14 you get some rewards and you figure out
00:04:16 what actions you should take.
00:04:18 And inverse reinforcement learning,
00:04:20 you observe somebody taking actions and you figure out,
00:04:24 well, this must be what they were trying to do.
00:04:27 If they did this action, it must be because they want it.
00:04:30 Of course, there’s restrictions to that, right?
00:04:33 So lots of people take actions that are self destructive
00:04:37 or they’re suboptimal in certain ways.
00:04:39 So you don’t wanna learn that.
00:04:40 You wanna somehow learn the perfect actions
00:04:44 rather than the ones they actually take.
00:04:46 So that’s a challenge for that field.
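A minimal sketch of the inverse reinforcement learning idea described here, on a made-up five-state corridor: we watch a demonstrator's actions and score candidate reward functions by how likely they make those actions under a softmax-optimal policy. Every name and number below is illustrative, not Norvig's.

```python
import math

N_STATES, GAMMA, BETA = 5, 0.9, 5.0   # corridor length, discount, rationality

def q_values(reward):
    """Value iteration on the corridor; returns (Q(s, left), Q(s, right))."""
    V = [0.0] * N_STATES
    for _ in range(100):
        V = [reward[s] + GAMMA * max(V[max(s - 1, 0)], V[min(s + 1, N_STATES - 1)])
             for s in range(N_STATES)]
    return [(reward[s] + GAMMA * V[max(s - 1, 0)],             # action 0: left
             reward[s] + GAMMA * V[min(s + 1, N_STATES - 1)])  # action 1: right
            for s in range(N_STATES)]

def log_likelihood(trajectory, reward):
    """How probable the observed actions are under a softmax-optimal policy."""
    Q, ll = q_values(reward), 0.0
    for s, a in trajectory:
        exps = [math.exp(BETA * q) for q in Q[s]]
        ll += math.log(exps[a] / sum(exps))
    return ll

# We observe the demonstrator walking right from the left end of the corridor.
demo = [(0, 1), (1, 1), (2, 1), (3, 1)]

# Candidate rewards: +1 for being in exactly one state; pick the likeliest.
goal = max(range(N_STATES),
           key=lambda g: log_likelihood(demo, [float(s == g)
                                               for s in range(N_STATES)]))
print("inferred goal state:", goal)   # expect 4, the right end
```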
00:04:51 Then another big part of it is just kind of theoretical
00:04:55 of saying, what can we accomplish?
00:04:58 And so you look at like this work on the programs
00:05:04 to predict recidivism and decide who should get parole
00:05:09 or who should get bail or whatever.
00:05:12 And how are you gonna evaluate that?
00:05:13 And one of the big issues is fairness
00:05:16 across protected classes.
00:05:18 Protected classes being things like sex and race and so on.
00:05:23 And so two things you want is you wanna say,
00:05:27 well, if I get a score of say six out of 10,
00:05:32 then I want that to mean the same
00:05:34 no matter what race I’m in, right?
00:05:37 Yes, right, so I wanna have a 60% chance
00:05:39 of reoffending regardless.
00:05:44 And one of the makers of a commercial program to do that
00:05:48 says that’s what we’re trying to optimize
00:05:50 and look, we achieved that.
00:05:51 We’ve reached that kind of balance.
00:05:56 And then on the other side,
00:05:57 you also wanna say, well, if it makes mistakes,
00:06:01 I want that to affect both sides
00:06:04 of the protected class equally.
00:06:07 And it turns out they don’t do that, right?
00:06:09 So they’re twice as likely to make a mistake
00:06:12 that would harm a black person over a white person.
00:06:14 So that seems unfair.
00:06:16 So you’d like to say,
00:06:17 well, I wanna achieve both those goals.
00:06:19 And then it turns out you do the analysis
00:06:21 and it’s theoretically impossible
00:06:22 to achieve both those goals.
00:06:24 So you have to trade them off one against the other.
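To make those two goals concrete, here is a toy sketch with invented numbers: a score can be calibrated, meaning the same score implies the same reoffense rate in every group, while false positive rates still differ whenever base rates differ; that tension is the impossibility result being described.

```python
def metrics(outcomes, threshold=0.5):
    """outcomes: list of (risk_score, actually_reoffended) pairs."""
    flagged = [(s, y) for s, y in outcomes if s >= threshold]
    negatives = [(s, y) for s, y in outcomes if not y]
    calibration = sum(y for _, y in flagged) / len(flagged)           # P(reoffend | flagged)
    fpr = sum(s >= threshold for s, _ in negatives) / len(negatives)  # P(flagged | no reoffense)
    return calibration, fpr

# Group A: higher base rate of reoffense in this synthetic data.
group_a = [(0.8, 1)] * 6 + [(0.8, 0)] * 2 + [(0.2, 1)] * 1 + [(0.2, 0)] * 3
# Group B: lower base rate, but a score of 0.8 means the same thing.
group_b = [(0.8, 1)] * 3 + [(0.8, 0)] * 1 + [(0.2, 1)] * 2 + [(0.2, 0)] * 6

for name, group in [("A", group_a), ("B", group_b)]:
    cal, fpr = metrics(group)
    print(f"group {name}: P(reoffend | flagged) = {cal:.2f}, FPR = {fpr:.2f}")
# Both groups are calibrated at 0.75 among the flagged, yet group A's
# non-reoffenders are flagged far more often: 2/5 versus 1/7.
```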
00:06:27 So that analysis is really helpful
00:06:29 to know what you can aim for and how much you can get.
00:06:32 You can’t have everything.
00:06:33 But the analysis certainly can’t tell you
00:06:35 where should we make that trade off point.
00:06:38 But nevertheless, then we can as humans deliberate
00:06:41 where that trade off should be.
00:06:43 Yeah, so at least now we’re arguing in an informed way.
00:06:45 We’re not asking for something impossible.
00:06:48 We’re saying, here’s where we are
00:06:50 and here’s what we aim for.
00:06:51 And this strategy is better than that strategy.
00:06:55 So that, I would argue, is a really powerful
00:06:58 and really important first step,
00:07:00 and a doable one: sort of removing
00:07:02 undesirable degrees of bias in systems
00:07:07 in terms of protected classes.
00:07:08 And then there’s something, I listened
00:07:10 to your commencement speech,
00:07:12 where there are some fuzzier things, like,
00:07:15 you mentioned Angry Birds.
00:07:17 Do you wanna create systems that feed the dopamine enjoyment,
00:07:23 that optimize for you returning to the system,
00:07:26 enjoying the moment of playing the game, of getting likes
00:07:30 or whatever, this kind of thing,
00:07:32 or some kind of long-term improvement?
00:07:34 Right.
00:07:36 Are you even thinking about that?
00:07:39 That’s really going to the philosophical area.
00:07:43 No, I think that’s a really important issue too.
00:07:45 Certainly thinking about that.
00:07:46 I don’t think about that as an AI issue as much.
00:07:52 But as you say, the point is we’ve built this society
00:07:57 and this infrastructure where we say we have a marketplace
00:08:02 for attention and we’ve decided as a society
00:08:07 that we like things that are free.
00:08:09 And so we want all the apps on our phone to be free.
00:08:13 And that means they’re all competing for your attention.
00:08:15 And then eventually they make some money some way
00:08:17 through ads or in game sales or whatever.
00:08:22 But they can only win by defeating all the other apps
00:08:26 by stealing your attention.
00:08:28 And we build a marketplace where it seems like
00:08:34 they’re working against you rather than working with you.
00:08:38 And I’d like to find a way where we can change
00:08:41 the playing field so you feel more like,
00:08:43 well, these things are on my side.
00:08:46 Yes, they’re letting me have some fun in the short term,
00:08:49 but they’re also helping me in the long term
00:08:52 rather than competing against me.
00:08:54 And those aren’t necessarily conflicting objectives.
00:08:56 It’s just that the incentives, the current incentives,
00:09:00 as we try to figure out this whole new world,
00:09:02 seem to be on the easier part of that,
00:09:06 which is feeding the dopamine, the rush.
00:09:08 Right.
00:09:09 But so maybe taking a quick step back to the beginning
00:09:15 of writing the Artificial Intelligence:
00:09:17 A Modern Approach book.
00:09:19 So here you are in the 90s.
00:09:21 When you first sat down with Stuart to write the book
00:09:25 to cover an entire field,
00:09:27 which is one of the only books that’s successfully done that
00:09:30 for AI and actually in a lot of other computer science
00:09:33 fields, it’s a huge undertaking.
00:09:37 So it must’ve been quite daunting.
00:09:40 What was that process like?
00:09:42 Did you envision that you would be trying to cover
00:09:44 the entire field?
00:09:47 Was there a systematic approach to it
00:09:48 that was more step by step?
00:09:50 How did it feel?
00:09:52 So I guess it came about, we’d go to lunch
00:09:54 with the other AI faculty at Berkeley
00:09:57 and we’d say, the field is changing.
00:10:00 It seems like the current books are a little bit behind.
00:10:03 Nobody’s come out with a new book recently.
00:10:05 We should do that.
00:10:06 And everybody said, yeah, yeah, that’s a great thing to do.
00:10:09 And we never did anything.
00:10:10 Right.
00:10:11 And then I ended up heading off to industry.
00:10:14 I went to Sun Labs.
00:10:16 So I thought, well, that’s the end of my possible
00:10:19 academic publishing career.
00:10:21 But I met Stuart again at a conference like a year later
00:10:25 and said, you know that book we were always talking about,
00:10:28 you guys must be half done with it by now, right?
00:10:30 And he said, well, we keep talking, we never do anything.
00:10:34 So I said, well, you know, we should do it.
00:10:36 And I think the reason is that we all felt
00:10:40 it was a time where the field was changing.
00:10:44 And that was in two ways.
00:10:46 So, you know, the good old fashioned AI
00:10:49 was based primarily on Boolean logic.
00:10:52 And you had a few tricks to deal with uncertainty.
00:10:55 And it was based primarily on knowledge engineering.
00:10:59 That the way you got something done is you went out,
00:11:00 you interviewed an expert and you wrote down by hand
00:11:03 everything they knew.
00:11:05 And we saw in 95 that the field was changing in two ways.
00:11:10 One, we’re moving more towards probability
00:11:13 rather than Boolean logic.
00:11:15 And we’re moving more towards machine learning
00:11:17 rather than knowledge engineering.
00:11:20 And the other books hadn’t caught that wave yet,
00:11:22 they were still more in the old school.
00:11:26 Although certainly they had part of that on the way.
00:11:29 But we said, if we start now completely taking
00:11:33 that point of view, we can have a different kind of book.
00:11:36 And we were able to put that together.
00:11:39 And what was literally the process if you remember,
00:11:44 did you start writing a chapter?
00:11:46 Did you outline?
00:11:48 Yeah, I guess we did an outline
00:11:50 and then we sort of assigned chapters to each person.
00:11:55 At the time I had moved to Boston
00:11:58 and Stuart was in Berkeley.
00:12:00 So basically we did it over the internet.
00:12:04 And, you know, that wasn’t the same as doing it today.
00:12:08 It meant, you know, dial-up lines and telnetting in.
00:12:13 And, you know, you telnetted into one shell
00:12:19 and you typed cat filename
00:12:21 and you hoped it was captured at the other end.
00:12:23 And certainly you’re not sending images
00:12:26 and figures back and forth.
00:12:27 Right, right, that didn’t work.
00:12:29 But, you know, did you anticipate
00:12:31 where the field would go from that day, from the 90s?
00:12:37 Did you see the growth into learning based methods
00:12:42 and to data driven methods
00:12:44 that followed in the future decades?
00:12:47 We certainly thought that learning was important.
00:12:51 I guess we missed it as being as important as it is today.
00:12:58 We missed this idea of big data.
00:13:00 We missed the idea of deep learning;
00:13:02 it hadn’t been invented yet.
00:13:04 We could have taken the book
00:13:07 from a complete machine learning point of view
00:13:11 right from the start.
00:13:12 We chose to do it more from a point of view
00:13:15 of we’re gonna first develop
00:13:16 different types of representations.
00:13:19 And we’re gonna talk about different types of environments.
00:13:24 Is it fully observable or partially observable?
00:13:26 And is it deterministic or stochastic and so on?
00:13:29 And we made it more complex along those axes
00:13:33 rather than focusing on the machine learning axis first.
00:13:38 Do you think, you know, there’s some sense
00:13:40 in which the deep learning craze is extremely successful
00:13:44 for a particular set of problems.
00:13:46 And, you know, eventually it’s going to,
00:13:49 in the general case, hit challenges.
00:13:52 So in terms of the difference between perception systems
00:13:56 and robots that have to act in the world,
00:13:59 do you think we’re gonna return
00:14:01 to AI: A Modern Approach type breadth
00:14:06 in editions five and six?
00:14:08 In future decades, do you think deep learning
00:14:12 will take its place as a chapter
00:14:14 in this bigger view of AI?
00:14:17 Yeah, I think we don’t know yet
00:14:19 how it’s all gonna play out.
00:14:21 So in the new edition, we have a chapter on deep learning.
00:14:26 We got Ian Goodfellow to be the guest author
00:14:29 for that chapter.
00:14:30 So he said he could condense his whole deep learning book
00:14:34 into one chapter.
00:14:35 I think he did a great job.
00:14:38 We were also encouraged because, you know,
00:14:40 we gave him the old neural net chapter
00:14:43 and said, modernize that.
00:14:47 And he said, you know, half of that was okay.
00:14:50 That certainly there’s lots of new things
00:14:52 that have been developed,
00:14:54 but some of the core was still the same.
00:14:58 So I think we’ll gain a better understanding
00:15:02 of what you can do there.
00:15:04 I think we’ll need to incorporate
00:15:07 all the things we can do with the other technologies, right?
00:15:10 So deep learning started out with convolutional networks
00:15:14 and very close to perception.
00:15:18 And it’s since moved to be able to do more
00:15:23 with actions and some degree of longer term planning.
00:15:28 But we need to do a better job
00:15:30 with representation and reasoning
00:15:32 and one shot learning and so on.
00:15:36 And I think we don’t know yet how that’s gonna play out.
00:15:41 So do you think looking at some success,
00:15:45 but certainly eventual demise,
00:15:49 a partial demise, of expert systems,
00:15:51 the symbolic systems of the 80s,
00:15:54 do you think there are kernels of wisdom
00:15:56 in the work that was done there
00:15:59 with logic and reasoning and so on
00:16:01 that will rise again in your view?
00:16:05 So certainly I think the idea of representation
00:16:08 and reasoning is crucial
00:16:10 that sometimes you just don’t have enough data
00:16:13 about the world to learn de novo.
00:16:17 So you’ve got to have some idea of representation,
00:16:22 whether that was programmed in or told or whatever,
00:16:24 and then be able to take steps of reasoning.
00:16:28 I think the problem with the good old fashioned AI
00:16:33 was one, we tried to base everything on these symbols
00:16:39 that were atomic.
00:16:42 And that’s great if you’re like trying to define
00:16:45 the properties of a triangle, right?
00:16:47 Because they have necessary and sufficient conditions.
00:16:50 But things in the real world don’t.
00:16:52 The real world is messy and doesn’t have sharp edges
00:16:55 and atomic symbols do.
00:16:57 So that was a poor match.
00:16:59 And then the other aspect was that the reasoning
00:17:05 was universal and applied anywhere,
00:17:09 which in some sense is good,
00:17:11 but it also means there’s no guidance
00:17:13 as to where to apply.
00:17:15 And so you started getting these paradoxes
00:17:17 like, well, if I have a mountain
00:17:20 and I remove one grain of sand,
00:17:22 then it’s still a mountain.
00:17:25 But if I do that repeatedly, at some point it’s not, right?
00:17:28 And with logic, there’s nothing to stop you
00:17:32 from applying things repeatedly.
00:17:37 But maybe with something like deep learning,
00:17:42 and I don’t really know what the right name for it is,
00:17:44 we could separate out those ideas.
00:17:46 So one, we could say a mountain isn’t just an atomic notion.
00:17:52 It’s some sort of something like a word embedding
00:17:56 that has a more complex representation.
00:18:02 And secondly, we could somehow learn,
00:18:05 yeah, there’s this rule that you can remove
00:18:06 one grain of sand and you can do that a bunch of times,
00:18:09 but you can’t do it a near infinite amount of times.
00:18:12 But on the other hand, when you’re doing induction
00:18:15 on the integers, sure, then it’s fine to do it
00:18:17 an infinite number of times.
00:18:18 And if we could, somehow we have to learn
00:18:22 when these strategies are applicable
00:18:24 rather than having the strategies be completely neutral
00:18:28 and available everywhere.
00:18:31 Anytime you use neural networks,
00:18:32 anytime you learn from data,
00:18:34 form representation from data in an automated way,
00:18:36 it’s not very explainable as to,
00:18:41 or it’s not introspective to us humans
00:18:45 in terms of how this neural network sees the world,
00:18:48 where, why does it succeed so brilliantly in so many cases
00:18:53 and fail so miserably in surprising ways in small cases.
00:18:56 So what do you think is the future there?
00:19:00 Can simply more data, better data,
00:19:03 more organized data solve that problem?
00:19:06 Or is there elements of symbolic systems
00:19:09 that need to be brought in
00:19:10 which are a little bit more explainable?
00:19:12 Yeah, so I prefer to talk about trust
00:19:16 and validation and verification
00:19:20 rather than just about explainability.
00:19:22 And then I think explanations are one tool
00:19:25 that you use towards those goals.
00:19:28 And I think it is an important issue
00:19:30 that we don’t wanna use these systems unless we trust them
00:19:33 and we wanna understand where they work
00:19:35 and where they don’t work.
00:19:37 And an explanation can be part of that, right?
00:19:40 So I apply for a loan and I get denied,
00:19:44 I want some explanation of why.
00:19:46 And in Europe, we have the GDPR
00:19:50 that says you’re required to be able to get that.
00:19:53 But on the other hand,
00:19:54 the explanation alone is not enough, right?
00:19:57 So we are used to dealing with people
00:20:01 and with organizations and corporations and so on,
00:20:04 and they can give you an explanation
00:20:06 and you have no guarantee
00:20:07 that that explanation relates to reality, right?
00:20:11 So the bank can tell me, well, you didn’t get the loan
00:20:13 because you didn’t have enough collateral.
00:20:16 And that may be true, or it may be true
00:20:18 that they just didn’t like my religion or something else.
00:20:22 I can’t tell from the explanation,
00:20:24 and that’s true whether the decision was made
00:20:27 by a computer or by a person.
00:20:30 So I want more.
00:20:33 I do wanna have the explanations
00:20:35 and I wanna be able to have a conversation
00:20:37 to go back and forth and say,
00:20:39 well, you gave this explanation, but what about this?
00:20:41 And what would have happened if this had happened?
00:20:44 And what would I need to do to change that?
00:20:48 So I think a conversation is a better way to think about it
00:20:50 than just an explanation as a single output.
00:20:55 And I think we need testing of various kinds, right?
00:20:58 So in order to know,
00:21:00 was the decision really based on my collateral
00:21:03 or was it based on my religion or skin color or whatever?
00:21:08 I can’t tell if I’m only looking at my case,
00:21:10 but if I look across all the cases,
00:21:12 then I can detect the pattern, right?
00:21:15 So you wanna have that kind of capability.
00:21:18 You wanna have this adversarial testing, right?
00:21:21 So we thought we were doing pretty good
00:21:23 at object recognition in images.
00:21:25 We said, look, we’re at sort of pretty close
00:21:28 to human level performance on ImageNet and so on.
00:21:32 And then you start seeing these adversarial images
00:21:34 and you say, wait a minute,
00:21:36 that part is nothing like human performance.
00:21:39 You can mess with it really easily.
00:21:40 You can mess with it really easily, right?
00:21:42 And yeah, you can do that to humans too, right?
00:21:45 So we.
00:21:46 In a different way perhaps.
00:21:47 Right, humans don’t know what color the dress was.
00:21:49 Right.
00:21:50 And so they’re vulnerable to certain attacks
00:21:52 that are different than the attacks on the machines,
00:21:55 but the attacks on the machines are so striking.
00:21:59 They really change the way you think
00:22:00 about what we’ve done, right?
00:22:03 And the way I think about it is,
00:22:05 I think part of the problem is we’re seduced
00:22:08 by our low dimensional metaphors, right?
00:22:13 Yeah.
00:22:14 I like that phrase.
00:22:15 You look in a textbook and you say,
00:22:18 okay, now we’ve mapped out the space
00:22:20 and a cat is here and dog is here
00:22:24 and maybe there’s a tiny little spot in the middle
00:22:27 where you can’t tell the difference,
00:22:28 but mostly we’ve got it all covered.
00:22:30 And if you believe that metaphor,
00:22:33 then you say, well, we’re nearly there.
00:22:35 And there’s only gonna be a couple adversarial images.
00:22:39 But I think that’s the wrong metaphor
00:22:40 and what you should really say is,
00:22:42 it’s not a 2D flat space that we’ve got mostly covered.
00:22:45 It’s a million dimension space
00:22:47 and a cat is this string that goes out in this crazy path.
00:22:52 And if you step a little bit off the path in any direction,
00:22:55 you’re in nowhere’s land
00:22:57 and you don’t know what’s gonna happen.
00:22:59 And so I think that’s where we are
00:23:01 and now we’ve got to deal with that.
00:23:03 So it wasn’t so much an explanation,
00:23:06 but it was an understanding of what the models are
00:23:09 and what they’re doing
00:23:10 and now we can start exploring, how do you fix that?
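A hedged sketch of how easy that messing-with is: the fast gradient sign method, shown here on a toy linear classifier rather than a real image model. The weights and input are synthetic; the mechanics carry over to deep networks, where the gradient comes from backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                  # "pixels" in our toy image
w = rng.normal(size=d)                    # weights of a trained linear classifier

x = rng.normal(size=d)                    # a random input...
x += (2.0 - w @ x) / (w @ w) * w          # ...shifted so the model is confident

def predict(x):
    return 1 / (1 + np.exp(-(w @ x)))     # P("cat"), sigmoid of the logit

eps = 0.01                                # ~1% nudge per pixel, imperceptible
x_adv = x - eps * np.sign(w)              # every pixel moves against the gradient

print(f"before: {predict(x):.3f}, after: {predict(x_adv):.3f}")
# A thousand tiny coordinated steps swing the logit by eps * sum(|w|), about 8,
# flipping a confident "cat" into a confident "not cat".
```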
00:23:12 Yeah, validating the robustness of the system and so on,
00:23:15 but take it back to this word trust.
00:23:20 Do you think we’re a little too hard on our robots
00:23:22 in terms of the standards we apply?
00:23:25 So, you know,
00:23:30 there’s a dance of nonverbal
00:23:34 and verbal communication between humans.
00:23:37 If we apply the same kind of standard in terms of humans,
00:23:40 we trust each other pretty quickly.
00:23:43 You know, you and I haven’t met before
00:23:45 and there’s some degree of trust, right?
00:23:48 That nothing’s gonna go crazy wrong
00:23:50 and yet with AI, when we look at AI systems,
00:23:53 we seem to approach with skepticism always, always.
00:23:58 And it’s like they have to prove through a lot of hard work
00:24:03 that they’re even worthy of an inkling of our trust.
00:24:06 What do you think about that?
00:24:08 How do we break that barrier, close that gap?
00:24:11 I think that’s right.
00:24:12 I think that’s a big issue.
00:24:13 Just listening, my friend Mark Moffett is a naturalist
00:24:18 and he says, the most amazing thing about humans
00:24:22 is that you can walk into a coffee shop
00:24:25 or a busy street in a city
00:24:28 and there’s lots of people around you
00:24:30 that you’ve never met before and you don’t kill each other.
00:24:34 Yeah.
00:24:34 He says, chimpanzees cannot do that.
00:24:36 Yeah, right.
00:24:37 Right?
00:24:38 If a chimpanzee’s in a situation where here are some
00:24:42 that aren’t from my tribe, bad things happen.
00:24:46 Especially in a coffee shop,
00:24:47 there’s delicious food around, you know.
00:24:48 Yeah, yeah.
00:24:49 But we humans have figured that out, right?
00:24:53 And you know.
00:24:54 For the most part.
00:24:55 For the most part.
00:24:55 We still go to war, we still do terrible things
00:24:58 but for the most part, we’ve learned to trust each other
00:25:01 and live together.
00:25:02 So that’s gonna be important for our AI systems as well.
00:25:08 And also I think a lot of the emphasis is on AI
00:25:13 but in many cases, AI is part of the technology
00:25:18 but isn’t really the main thing.
00:25:19 So a lot of what we’ve seen is more due
00:25:22 to communications technology than AI technology.
00:25:27 Yeah, you wanna make these good decisions
00:25:30 but the reason we’re able to have any kind of system at all
00:25:33 is we’ve got the communication
00:25:35 so that we’re collecting the data
00:25:37 and so that we can reach lots of people around the world.
00:25:41 I think that’s a bigger change that we’re dealing with.
00:25:45 Speaking of reaching a lot of people around the world,
00:25:47 on the side of education,
00:25:51 one of the many things in terms of education you’ve done,
00:25:53 you’ve taught the Intro to Artificial Intelligence course
00:25:56 that signed up 160,000 students.
00:26:00 It’s one of the first successful examples
00:26:02 of a MOOC, Massive Open Online Course.
00:26:06 What did you learn from that experience?
00:26:09 What do you think is the future of MOOCs,
00:26:11 of education online?
00:26:12 Yeah, it was great fun doing it,
00:26:15 particularly being right at the start
00:26:19 just because it was exciting and new
00:26:21 but it also meant that we had less competition, right?
00:26:24 So one of the things you hear about,
00:26:27 well, the problem with MOOCs is the completion rates
00:26:31 are so low, so it must be a failure,
00:26:33 and I gotta admit, I’m a prime contributor, right?
00:26:37 I probably started 50 different courses
00:26:40 that I haven’t finished
00:26:42 but I got exactly what I wanted out of them
00:26:44 because I had never intended to finish them.
00:26:46 I just wanted to dabble in a little bit
00:26:48 either to see the topic matter
00:26:50 or just to see the pedagogy of how they were doing this class.
00:26:53 So I guess the main thing I learned is when I came in,
00:26:58 I thought the challenge was information,
00:27:03 saying, if I just take the stuff I want you to know
00:27:07 and I’m very clear and explain it well,
00:27:10 then my job is done and good things are gonna happen.
00:27:14 And then in doing the course, I learned,
00:27:17 well, yeah, you gotta have the information
00:27:19 but really the motivation is the most important thing
00:27:23 that if students don’t stick with it,
00:27:26 it doesn’t matter how good the content is.
00:27:29 And I think being one of the first classes,
00:27:32 we were helped by sort of exterior motivation.
00:27:36 So we tried to do a good job of making it enticing
00:27:39 and setting up ways for the community
00:27:44 to work with each other to make it more motivating
00:27:46 but really a lot of it was, hey, this is a new thing
00:27:49 and I’m really excited to be part of a new thing.
00:27:51 And so the students brought their own motivation.
00:27:54 And so I think this is great
00:27:56 because there’s lots of people around the world
00:27:58 who have never had this before,
00:28:03 would never have the opportunity to go to Stanford
00:28:07 and take a class or go to MIT
00:28:08 or go to one of the other schools
00:28:10 but now we can bring that to them
00:28:12 and if they bring their own motivation,
00:28:15 they can be successful in a way they couldn’t before.
00:28:18 But that’s really just the top tier of people
00:28:21 that are ready to do that.
00:28:22 The rest of the people just don’t see
00:28:26 or don’t have the motivation
00:28:29 and don’t see how if they push through
00:28:31 and were able to do it, what advantage that would get them.
00:28:34 So I think we got a long way to go
00:28:36 before we were able to do that.
00:28:37 And I think some of it is based on technology
00:28:40 but more of it’s based on the idea of community.
00:28:43 You gotta actually get people together.
00:28:46 Some of the getting together can be done online.
00:28:49 I think some of it really has to be done in person
00:28:52 in order to build that type of community and trust.
00:28:56 You know, there’s an intentional mechanism,
00:28:59 we’ve developed a short attention span,
00:29:02 especially younger people,
00:29:04 because of sort of shorter and shorter videos online.
00:29:08 Whatever way the brain is developing now,
00:29:13 with people that have grown up with the internet,
00:29:16 they have quite a short attention span.
00:29:18 And I would say I had the same
00:29:21 when I was growing up too, probably for different reasons.
00:29:23 So I probably wouldn’t have learned as much as I have
00:29:28 if I wasn’t forced to sit in a physical classroom,
00:29:31 sort of bored, sometimes falling asleep,
00:29:33 but sort of forcing myself through that process,
00:29:36 sometimes through extremely difficult computer science courses.
00:29:39 What’s the difference in your view
00:29:42 between in person education experience,
00:29:46 which you, first of all, yourself had
00:29:48 and you yourself taught and online education
00:29:52 and how do we close that gap if it’s even possible?
00:29:54 Yeah, so I think there’s two issues.
00:29:56 One is whether it’s in person or online.
00:30:00 So it’s sort of the physical location
00:30:03 and then the other is kind of the affiliation, right?
00:30:07 So you stuck with it in part
00:30:10 because you were in the classroom
00:30:12 and you saw everybody else was suffering
00:30:15 the same way you were,
00:30:17 but also because you were enrolled,
00:30:20 you had paid tuition,
00:30:22 sort of everybody was expecting you to stick with it.
00:30:25 Society, parents, peers.
00:30:29 And so those are two separate things.
00:30:31 I mean, you could certainly imagine
00:30:32 I pay a huge amount of tuition
00:30:35 and everybody signed up and says, yes, you’re doing this,
00:30:38 but then I’m in my room
00:30:40 and my classmates are in different rooms, right?
00:30:43 We could have things set up that way.
00:30:45 So it’s not just the online versus offline.
00:30:48 I think what’s more important
00:30:50 is the commitment that you’ve made.
00:30:53 And certainly it is important
00:30:56 to have that kind of informal,
00:30:59 you know, I meet people outside of class,
00:31:01 we talk together because we’re all in it together.
00:31:05 I think that’s really important,
00:31:07 both in keeping your motivation
00:31:10 and also that’s where
00:31:11 some of the most important learning goes on.
00:31:13 So you wanna have that.
00:31:15 Maybe, you know, especially now
00:31:17 we start getting into higher bandwidths
00:31:19 and augmented reality and virtual reality,
00:31:22 you might be able to get that
00:31:23 without being in the same physical place.
00:31:25 Do you think it’s possible we’ll see a course at Stanford,
00:31:30 for example, that for students,
00:31:33 enrolled students is only online in the near future
00:31:37 or literally sort of it’s part of the curriculum
00:31:39 and there is no…
00:31:41 Yeah, so you’re starting to see that.
00:31:42 I know Georgia Tech has a master’s that’s done that way.
00:31:46 Oftentimes it’s sort of,
00:31:48 they’re creeping in in terms of a master’s program
00:31:50 or sort of further education,
00:31:54 considering the constraints of students and so on.
00:31:56 But I mean, literally, is it possible that we,
00:32:00 you know, Stanford, MIT, Berkeley,
00:32:02 all these places go online only in the next few decades?
00:32:07 Yeah, probably not,
00:32:08 because, you know, they’ve got a big commitment
00:32:11 to a physical campus.
00:32:13 Sure, so there’s a momentum
00:32:16 that’s both financial and cultural.
00:32:18 Right, and then there are certain things
00:32:21 that are just hard to do virtually, right?
00:32:25 So, you know, we’re in a field where,
00:32:29 if you have your own computer and your own paper,
00:32:32 and so on, you can do the work anywhere.
00:32:36 But if you’re in a biology lab or something,
00:32:39 you know, you don’t have all the right stuff at home.
00:32:42 Right, so our field, programming,
00:32:45 you’ve also done a lot of programming yourself.
00:32:50 In 2001, you wrote a great article about programming
00:32:54 called Teach Yourself Programming in 10 Years,
00:32:57 sort of a response to all the books
00:32:59 that say teach yourself programming in 21 days.
00:33:01 So if you were giving advice to someone
00:33:02 getting into programming today,
00:33:04 this is a few years since you’ve written that article,
00:33:07 what’s the best way to undertake that journey?
00:33:10 I think there’s lots of different ways,
00:33:12 and I think programming means more things now.
00:33:17 And I guess, you know, when I wrote that article,
00:33:20 I was thinking more about
00:33:23 becoming a professional software engineer,
00:33:25 and I thought that’s a, you know,
00:33:27 sort of a career long field of study.
00:33:31 But I think there’s lots of things now
00:33:33 that people can do where programming is a part
00:33:37 of solving what they wanna solve
00:33:40 without achieving that professional level status, right?
00:33:44 So I’m not gonna be going
00:33:45 and writing a million lines of code,
00:33:47 but, you know, I’m a biologist or a physicist or something,
00:33:51 or even a historian, and I’ve got some data,
00:33:55 and I wanna ask a question of that data.
00:33:58 And I think for that, you don’t need 10 years, right?
00:34:02 So there are many shortcuts
00:34:04 to being able to answer those kinds of questions.
00:34:08 And, you know, you see today a lot of emphasis
00:34:11 on learning to code, teaching kids how to code.
00:34:16 I think that’s great,
00:34:18 but I wish they would change the message a little bit,
00:34:21 right, so I think code isn’t the main thing.
00:34:24 I don’t really care if you know the syntax of JavaScript
00:34:28 or if you can connect these blocks together
00:34:31 in this visual language.
00:34:33 But what I do care about is that you can analyze a problem,
00:34:38 you can think of a solution, you can carry it out,
00:34:43 you know, make a model, run that model,
00:34:46 test the model, see the results,
00:34:50 verify that they’re reasonable,
00:34:53 ask questions and answer them, right?
00:34:55 So it’s more modeling and problem solving,
00:34:58 and you use coding in order to do that,
00:35:01 but it’s not just learning coding for its own sake.
00:35:04 That’s really interesting.
00:35:05 So it’s actually almost, in many cases,
00:35:08 it’s learning to work with data,
00:35:10 to extract something useful out of data.
00:35:11 So when you say problem solving,
00:35:13 you really mean taking some kind of,
00:35:15 maybe collecting some kind of data set,
00:35:17 cleaning it up, and saying something interesting about it,
00:35:20 which is useful in all kinds of domains.
00:35:23 And, you know, I see myself being stuck sometimes
00:35:28 in kind of the old ways, right?
00:35:30 So, you know, I’ll be working on a project,
00:35:34 maybe with a younger employee, and we say,
00:35:37 oh, well, here’s this new package
00:35:39 that could help solve this problem.
00:35:42 And I’ll go and I’ll start reading the manuals,
00:35:44 and, you know, I’ll be two hours into reading the manuals,
00:35:48 and then my colleague comes back and says, I’m done.
00:35:51 You know, I downloaded the package, I installed it,
00:35:53 I tried calling some things, the first one didn’t work,
00:35:56 the second one worked, now I’m done.
00:35:58 And I say, but I have a hundred questions
00:36:00 about how does this work and how does that work?
00:36:02 And they say, who cares, right?
00:36:04 I don’t need to understand the whole thing.
00:36:05 I answered my question, it’s a big, complicated package,
00:36:09 I don’t understand the rest of it,
00:36:10 but I got the right answer.
00:36:12 And I’m just, it’s hard for me to get into that mindset.
00:36:15 I want to understand the whole thing.
00:36:17 And, you know, if they wrote a manual,
00:36:19 I should probably read it.
00:36:21 And, but that’s not necessarily the right way.
00:36:23 I think I have to get used to dealing with more,
00:36:28 being more comfortable with uncertainty
00:36:30 and not knowing everything.
00:36:32 Yeah, so I struggle with the same,
00:36:33 I’m sort of on the spectrum toward Don Knuth.
00:36:37 Yeah.
00:36:38 It’s kind of the very, you know,
00:36:39 before he can say anything about a problem,
00:36:42 he really has to get down to the machine code assembly.
00:36:45 Yeah.
00:36:46 And that’s in contrast to exactly what you said, of several students
00:36:50 in my group that are, you know, 20 years old,
00:36:53 and they can solve almost any problem within a few hours.
00:36:56 That would take me probably weeks
00:36:58 because I would try to, as you said, read the manual.
00:37:00 So do you think the nature of mastery,
00:37:04 you’re mentioning biology,
00:37:06 sort of outside disciplines applying programming,
00:37:11 but also for computer scientists.
00:37:13 So over time, there’s higher and higher levels
00:37:16 of abstraction available now.
00:37:18 So this week, there’s the TensorFlow Summit, right?
00:37:23 So if you’re not particularly into deep learning,
00:37:27 but you’re still a computer scientist,
00:37:29 you can accomplish an incredible amount with TensorFlow
00:37:33 without really knowing any fundamental internals
00:37:35 of machine learning.
00:37:37 Do you think the nature of mastery is changing,
00:37:40 even for computer scientists,
00:37:42 like what it means to be an expert programmer?
00:37:45 Yeah, I think that’s true.
00:37:47 You know, we never really should have focused on programming,
00:37:51 right, because it’s still, it’s the skill,
00:37:53 and what we really want to focus on is the result.
00:37:56 So we built this ecosystem
00:37:59 where the way you can get stuff done
00:38:01 is by programming it yourself.
00:38:04 At least when I started, you know,
00:38:06 library functions meant you had square root,
00:38:09 and that was about it, right?
00:38:10 Everything else you built from scratch.
00:38:13 And then we built up an ecosystem where a lot of times,
00:38:16 well, you can download a lot of stuff
00:38:17 that does a big part of what you need.
00:38:20 And so now it’s more a question of assembly
00:38:23 rather than manufacturing.
00:38:28 And that’s a different way of looking at problems.
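A small illustration of that assembly-over-manufacturing point, and of the earlier TensorFlow remark: a few lines of the high-level Keras API in TensorFlow assemble and train a digit classifier from stock parts, with no hand-built internals. Exact APIs and defaults may vary across versions.

```python
import tensorflow as tf

# Download the data and assemble a small network from off-the-shelf pieces.
(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train / 255.0, y_train, epochs=1)   # scale pixels to [0, 1]
```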
00:38:32 From another perspective in terms of mastery
00:38:34 and looking at programmers or people that reason
00:38:37 about problems in a computational way.
00:38:39 So Google, you know, from the hiring perspective,
00:38:44 from the perspective of hiring
00:38:45 or building a team of programmers,
00:38:47 how do you determine if someone’s a good programmer?
00:38:50 Or if somebody, again, so I want to deviate from,
00:38:53 I want to move away from the word programmer,
00:38:55 but somebody who could solve problems
00:38:57 of large scale data and so on.
00:38:59 How do you build a team like that
00:39:02 through the interviewing process?
00:39:03 Yeah, and I think as a company grows,
00:39:08 you get more expansive in the types
00:39:12 of people you’re looking for, right?
00:39:14 So I think, you know, in the early days,
00:39:16 we’d interview people and the question we were trying
00:39:19 to ask is how close are they to Jeff Dean?
00:39:22 And most people were pretty far away,
00:39:26 but we take the ones that were not that far away.
00:39:29 And so we got kind of a homogeneous group
00:39:31 of people who were really great programmers.
00:39:34 Then as a company grows, you say,
00:39:37 well, we don’t want everybody to be the same,
00:39:39 to have the same skill set.
00:39:40 And so now we’re hiring biologists in our health areas
00:39:47 and we’re hiring physicists,
00:39:48 we’re hiring mechanical engineers,
00:39:51 we’re hiring, you know, social scientists and ethnographers
00:39:56 and people with different backgrounds
00:39:59 who bring different skills.
00:40:01 So you have mentioned that you still may partake
00:40:06 in code reviews, given that you have a wealth of experience,
00:40:10 as you’ve also mentioned.
00:40:13 What errors do you often see and tend to highlight
00:40:16 in the code of junior developers, of people coming up now,
00:40:20 given your background, from Lisp
00:40:23 to a couple of decades of programming?
00:40:26 Yeah, that’s a great question.
00:40:28 You know, sometimes I try to look at the flexibility
00:40:31 of the design of, yes, you know, this API solves this problem,
00:40:37 but where is it gonna go in the future?
00:40:39 Who else is gonna wanna call this?
00:40:41 And, you know, are you making it easier for them to do that?
00:40:46 Is that a matter of design? Is it documentation?
00:40:50 Is it sort of an amorphous thing
00:40:53 you can’t really put into words?
00:40:55 It’s just how it feels.
00:40:56 If you put yourself in the shoes of a developer,
00:40:58 would you use this kind of thing?
00:40:59 I think it is how you feel, right?
00:41:01 And so yeah, documentation is good,
00:41:03 but it’s more a design question, right?
00:41:06 If you get the design right,
00:41:07 then people will figure it out,
00:41:10 whether the documentation is good or not.
00:41:12 And if the design’s wrong, then it’d be harder to use.
00:41:16 How have you yourself changed as a programmer over the years?
00:41:22 In a way, you already started to say sort of,
00:41:26 you want to read the manual,
00:41:28 you want to understand the core of the syntax
00:41:30 to how the language is supposed to be used and so on.
00:41:33 But what’s the evolution been like
00:41:36 from the 80s, 90s to today?
00:41:40 I guess one thing is you don’t have to worry
00:41:42 about the small details of efficiency
00:41:46 as much as you used to, right?
00:41:48 So like I remember I did my Lisp book in the 90s,
00:41:53 and one of the things I wanted to do was say,
00:41:56 here’s how you do an object system.
00:41:58 And basically, we’re going to make it
00:42:01 so each object is a hash table,
00:42:03 and you look up the methods, and here’s how it works.
00:42:05 And then I said, of course,
00:42:07 the real Common Lisp object system is much more complicated.
00:42:12 It’s got all these efficiency type issues,
00:42:15 and this is just a toy,
00:42:16 and nobody would do this in real life.
00:42:18 And it turns out Python pretty much did exactly
00:42:22 what I said and said objects are just dictionaries.
00:42:27 And yeah, they have a few little tricks as well.
00:42:30 But mostly, the thing that would have been
00:42:34 100 times too slow in the 80s
00:42:36 is now plenty fast for most everything.
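A toy version of that object-system-as-hash-table idea, a sketch in the spirit of the Lisp book example rather than how Python actually implements classes, though Python's attribute lookup through instance and class dictionaries is close in spirit.

```python
def make_class(methods, parent=None):
    """A 'class' is just a dict of methods plus an optional parent."""
    return {"methods": methods, "parent": parent}

def make_instance(cls, **attrs):
    """An 'object' is just a hash table with a pointer to its class."""
    return {"class": cls, **attrs}

def lookup(obj, name):
    """Attribute lookup: the instance dict first, then up the class chain."""
    if name in obj:
        return obj[name]
    cls = obj["class"]
    while cls is not None:
        if name in cls["methods"]:
            return cls["methods"][name]
        cls = cls["parent"]
    raise AttributeError(name)

def send(obj, name, *args):
    return lookup(obj, name)(obj, *args)   # a method call passes self first

Animal = make_class({"speak": lambda self: f"{self['name']} makes a sound"})
Dog = make_class({"speak": lambda self: f"{self['name']} says woof"}, Animal)

rex = make_instance(Dog, name="Rex")
print(send(rex, "speak"))   # Rex says woof
```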
00:42:39 So you had to, as a programmer,
00:42:40 let go of perhaps an obsession
00:42:44 that I remember coming up with
00:42:45 of trying to write efficient code.
00:42:48 Yeah, to say what really matters
00:42:51 is the total time it takes to get the project done.
00:42:56 And most of that’s gonna be the programmer time.
00:42:59 So if you’re a little bit less efficient,
00:43:00 but it makes it easier to understand and modify,
00:43:04 then that’s the right trade off.
00:43:05 So you’ve written quite a bit about Lisp.
00:43:07 Your book on programming is in Lisp.
00:43:10 You have a lot of code out there that’s in Lisp.
00:43:12 So myself and people who don’t know what Lisp is
00:43:16 should look it up.
00:43:18 It’s my favorite language, and for many AI researchers,
00:43:20 it is a favorite language.
00:43:22 The favorite language they never use these days.
00:43:25 So what part of Lisp do you find most beautiful and powerful?
00:43:28 So I think the beautiful part is the simplicity
00:43:31 that in half a page, you can define the whole language.
00:43:36 And other languages don’t have that.
00:43:38 So you feel like you can hold everything in your head.
00:43:42 And then a lot of people say,
00:43:46 well, then that’s too simple.
00:43:48 Here’s all these things I wanna do.
00:43:50 And my Java or Python or whatever
00:43:54 has 100 or 200 or 300 different syntax rules
00:43:58 and don’t I need all those?
00:44:00 And Lisp’s answer was, no, we’re only gonna give you
00:44:03 eight or so syntax rules,
00:44:06 but we’re gonna allow you to define your own.
00:44:09 And so that was a very powerful idea.
00:44:11 And I think this idea of saying,
00:44:15 I can start with my problem and with my data,
00:44:20 and then I can build the language I want for that problem
00:44:24 and for that data.
00:44:25 And then I can make Lisp define that language.
00:44:28 So you’re sort of mixing levels and saying,
00:44:32 I’m simultaneously a programmer in a language
00:44:36 and a language designer.
00:44:38 And that allows a better match between your problem
00:44:41 and your eventual code.
00:44:43 And I think Lisp had done that better than other languages.
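In the spirit of that half-a-page claim, and of Norvig's own lis.py essay, here is a hedged miniature: a few dozen lines of Python that read and evaluate a small but usable subset of Lisp. It is a sketch, not a complete Scheme.

```python
import math, operator as op

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def read(tokens):
    """Build a nested list (the syntax tree) from a flat token stream."""
    t = tokens.pop(0)
    if t == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(read(tokens))
        tokens.pop(0)                      # discard the closing ")"
        return expr
    try:
        return float(t)                    # a number...
    except ValueError:
        return t                           # ...or a symbol

GLOBAL = {"+": op.add, "-": op.sub, "*": op.mul, "/": op.truediv,
          "<": op.lt, "sqrt": math.sqrt}

def evaluate(x, env=GLOBAL):
    if isinstance(x, str):   return env[x]         # symbol lookup
    if isinstance(x, float): return x              # literal number
    head, *rest = x                                # otherwise a form
    if head == "if":
        test, yes, no = rest
        return evaluate(yes if evaluate(test, env) else no, env)
    if head == "define":
        env[rest[0]] = evaluate(rest[1], env)
        return None
    if head == "lambda":
        params, body = rest
        return lambda *args: evaluate(body, {**env, **dict(zip(params, args))})
    f = evaluate(head, env)                        # a function call
    return f(*(evaluate(a, env) for a in rest))

evaluate(read(tokenize("(define square (lambda (x) (* x x)))")))
print(evaluate(read(tokenize("(square 7)"))))      # 49.0
```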
00:44:47 Yeah, it’s a very elegant implementation
00:44:49 of functional programming.
00:44:51 But why do you think Lisp has not had the mass adoption
00:44:55 and success of languages like Python?
00:44:57 Is it the parentheses?
00:44:59 Is it all the parentheses?
00:45:02 Yeah, so I think a couple things.
00:45:05 So one was, I think it was designed for a single programmer
00:45:10 or a small team and a skilled programmer
00:45:14 who had the good taste to say,
00:45:17 well, I am doing language design
00:45:19 and I have to make good choices.
00:45:21 And if you make good choices, that’s great.
00:45:23 If you make bad choices, you can hurt yourself
00:45:28 and it can be hard for other people on the team
00:45:30 to understand it.
00:45:31 So I think there was a limit to the scale
00:45:34 of the size of a project in terms of number of people
00:45:37 that Lisp was good for.
00:45:38 And as an industry, we kind of grew beyond that.
00:45:43 I think it is in part the parentheses.
00:45:46 You know, one of the jokes is the acronym for Lisp
00:45:49 is lots of irritating, silly parentheses.
00:45:53 My acronym was Lisp is syntactically pure,
00:45:58 saying all you need is parentheses and atoms.
00:46:01 But I remember, you know, as we had the AI textbook
00:46:05 and because we did it in the nineties,
00:46:08 we had pseudocode in the book,
00:46:11 but then we said, well, we’ll have Lisp online
00:46:13 because that’s the language of AI at the time.
00:46:16 And I remember some of the students complaining
00:46:18 because they hadn’t had Lisp before
00:46:20 and they didn’t quite understand what was going on.
00:46:22 And I remember one student complained,
00:46:24 I don’t understand how this pseudocode
00:46:26 corresponds to this Lisp.
00:46:29 And there was a one to one correspondence
00:46:31 between the symbols in the code and the pseudocode.
00:46:35 And the only difference was the parentheses.
00:46:39 So I said, it must be that for some people,
00:46:41 a certain number of left parentheses shuts off their brain.
00:46:45 Yeah, it’s very possible in that sense
00:46:47 and Python just goes the other way.
00:46:49 So that was the point at which I said,
00:46:51 okay, can’t have only Lisp as a language.
00:46:54 Cause I don’t wanna, you know,
00:46:56 you’ve only got 10 or 12 or 15 weeks or whatever it is
00:46:59 to teach AI and I don’t want to waste two weeks
00:47:01 of that teaching Lisp.
00:47:03 So I say, I gotta have another language.
00:47:04 Java was the most popular language at the time.
00:47:06 I started doing that.
00:47:08 And then I said, it’s really hard to have a one to one
00:47:12 correspondence between the pseudocode and the Java
00:47:14 because Java is so verbose.
00:47:16 So then I said, I’m gonna do a survey
00:47:18 and find the language that’s most like my pseudocode.
00:47:22 And it turned out Python basically was my pseudocode.
00:47:26 Somehow I had channeled Guido
00:47:30 and designed a pseudocode that was the same as Python,
00:47:32 although I hadn’t heard of Python at that point.
00:47:36 And from then on, that’s what I’ve been using
00:47:38 cause it’s been a good match.
00:47:41 So what’s the story in Python behind PyTudes?
00:47:45 Your GitHub repository with puzzles and exercises
00:47:48 in Python is pretty fun.
00:47:49 Yeah, it just seems like fun, you know,
00:47:53 I like doing puzzles and I like being an educator.
00:47:57 I did a class with Udacity, CS212, I think it was.
00:48:02 It was basically problem solving using Python
00:48:07 and looking at different problems.
00:48:08 Does PyTudes feed that class in terms of the exercises?
00:48:11 I was wondering what the…
00:48:12 Yeah, so the class came first.
00:48:15 Some of the stuff that’s in PyTudes was write ups
00:48:17 of what was in the class and then some of it
00:48:19 was just continuing to work on new problems.
00:48:24 So what’s the organizing madness of PyTudes?
00:48:26 Is it just a collection of cool exercises?
00:48:30 Just whatever I thought was fun.
00:48:31 Okay, awesome.
00:48:32 So you were the director of search quality at Google
00:48:35 from 2001 to 2005 in the early days
00:48:40 when there’s just a few employees
00:48:41 and when the company was growing like crazy, right?
00:48:46 So, I mean, Google revolutionized the way we discover,
00:48:52 share and aggregate knowledge.
00:48:55 So just, this is one of the fundamental aspects
00:49:00 of civilization, right, is information being shared
00:49:03 and there’s different mechanisms throughout history
00:49:04 but Google has just 10x improved that, right?
00:49:08 And you’re a part of that, right?
00:49:10 People discovering that information.
00:49:11 So what were some of the challenges on a philosophical
00:49:15 or the technical level in those early days?
00:49:18 It definitely was an exciting time
00:49:20 and as you say, we were doubling in size every year
00:49:24 and the challenges were we wanted
00:49:26 to get the right answers, right?
00:49:29 And we had to figure out what that meant.
00:49:32 We had to implement that and we had to make it all efficient
00:49:36 and we had to keep on testing
00:49:41 and seeing if we were delivering good answers.
00:49:44 And now when you say good answers,
00:49:45 it means whatever people are typing in
00:49:47 in terms of keywords, in terms of that kind of thing
00:49:50 that the results they get are ordered
00:49:53 by the desirability for them of those results.
00:49:56 Like the first thing they click on
00:49:58 will likely be the thing that they were actually looking for.
00:50:01 Right, one of the metrics we had
00:50:03 was focused on the first thing.
00:50:05 Some of it was focused on the whole page.
00:50:07 Some of it was focused on top three or so.
00:50:11 So we looked at a lot of different metrics
00:50:13 for how well we were doing
00:50:15 and we broke it down into subclasses of,
00:50:19 maybe here’s a type of query that we’re not doing well on
00:50:23 and we try to fix that.
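The kinds of metrics being described, one focused on the first result, one on the top three, one on the whole page, might look roughly like the sketches below; the relevance judgments are invented.

```python
import math

def reciprocal_rank(rels):            # focused on the first relevant result
    return next((1 / (i + 1) for i, r in enumerate(rels) if r), 0.0)

def precision_at_k(rels, k=3):        # focused on the top three or so
    return sum(rels[:k]) / k

def dcg(rels):                        # the whole page, discounting lower slots
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

results = [0, 1, 1, 0, 1]             # invented relevance of the top results
print(reciprocal_rank(results),       # 0.5: first hit is in position two
      precision_at_k(results),        # 0.67: two of the top three are good
      round(dcg(results), 3))
```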
00:50:25 Early on we started to realize that we were in an adversarial
00:50:29 position, right, so we started thinking,
00:50:32 well, we’re kind of like the card catalog in the library,
00:50:35 right, so the books are here and we’re off to the side
00:50:39 and we’re just reflecting what’s there.
00:50:42 And then we realized every time we make a change,
00:50:45 the webmasters make a change and it’s game theoretic.
00:50:50 And so we had to think not only of is this the right move
00:50:54 for us to make now, but also if we make this move,
00:50:57 what’s the counter move gonna be?
00:50:59 Is that gonna get us into a worse place,
00:51:02 in which case we won’t make that move,
00:51:03 we’ll make a different move.
00:51:05 And did you find, I mean, I assume with the popularity
00:51:08 and the growth of the internet
00:51:09 that people were creating new content,
00:51:11 so you’re almost helping guide the creation of new content.
00:51:14 Yeah, so that’s certainly true, right,
00:51:15 so we definitely changed the structure of the network.
00:51:20 So if you think back in the very early days,
00:51:24 Larry and Sergey had the PageRank paper
00:51:28 and John Kleinberg had this hubs and authorities model,
00:51:33 which says the web is made out of these hubs,
00:51:38 which will be my page of cool links about dogs or whatever,
00:51:44 and people would just list links.
00:51:46 And then there’d be authorities,
00:51:47 which were the page about dogs that most people linked to.
00:51:53 That doesn’t happen anymore.
00:51:54 People don’t bother to say my page of cool links,
00:51:57 because we took over that function, right,
00:52:00 so we changed the way that worked.
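For reference, the PageRank idea mentioned here fits in a few lines: a page's score is the stationary probability of a random surfer who mostly follows links and occasionally jumps to a random page. The toy link graph is made up.

```python
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
pages = list(links)
damping = 0.85                               # chance the surfer follows a link
rank = {p: 1 / len(pages) for p in pages}    # start from a uniform distribution

for _ in range(50):                          # power iteration to convergence
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            new[q] += damping * rank[p] / len(outs)
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))   # "c" collects the most rank
```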
00:52:03 Did you imagine back then that the internet
00:52:05 would be as massively vibrant as it is today?
00:52:08 I mean, it was already growing quickly,
00:52:10 but it’s just, I don’t know if you ever,
00:52:14 today, sit back and just look at the internet
00:52:18 with wonder at the amount of content
00:52:20 that’s just constantly being created,
00:52:22 constantly being shared and deployed.
00:52:24 Yeah, it’s always been surprising to me.
00:52:27 I guess I’m not very good at predicting the future.
00:52:31 And I remember being a graduate student in 1980 or so,
00:52:35 and we had the ARPANET,
00:52:39 and then there was this proposal to commercialize it,
00:52:44 and have this internet, and this crazy Senator Gore
00:52:49 thought that might be a good idea.
00:52:51 And I remember thinking, oh, come on,
00:52:53 you can’t expect a commercial company
00:52:55 to understand this technology.
00:52:58 They’ll never be able to do it.
00:52:59 Yeah, okay, we can have this .com domain,
00:53:01 but it won’t go anywhere.
00:53:03 So I was wrong, Al Gore was right.
00:53:05 At the same time, the nature of what it means
00:53:07 to be a commercial company has changed, too.
00:53:09 So Google, in many ways, at its founding
00:53:12 is different than what companies were before, I think.
00:53:16 Right, so there’s all these business models
00:53:19 that are so different than what was possible back then.
00:53:23 So in terms of predicting the future,
00:53:25 what do you think it takes to build a system
00:53:27 that approaches human level intelligence?
00:53:29 You’ve talked about, of course,
00:53:31 that we shouldn’t be so obsessed
00:53:34 about creating human level intelligence.
00:53:36 We just create systems that are very useful for humans.
00:53:39 But what do you think it takes
00:53:40 to approach that level?
00:53:44 Right, so certainly I don’t think
00:53:47 human level intelligence is one thing, right?
00:53:49 So I think there’s lots of different tasks,
00:53:51 lots of different capabilities.
00:53:54 I also don’t think that should be the goal, right?
00:53:56 So I wouldn’t wanna create a calculator
00:54:01 that could do multiplication at human level, right?
00:54:04 That would be a step backwards.
00:54:06 And so for many things,
00:54:07 we should be aiming far beyond human level.
00:54:09 For other things,
00:54:12 maybe human level is a good level to aim at.
00:54:15 And for others, we’d say,
00:54:16 well, let’s not bother doing this
00:54:18 because we already have humans who can take on those tasks.
00:54:21 So as you say, I like to focus on what’s a useful tool.
00:54:26 And in some cases, being at human level
00:54:30 is an important part of crossing that threshold
00:54:32 to make the tool useful.
00:54:34 So we see in things like these personal assistants now
00:54:39 that you get either on your phone
00:54:41 or on a speaker that sits on the table,
00:54:44 you wanna be able to have a conversation with those.
00:54:47 And I think as an industry,
00:54:49 we haven’t quite figured out what the right model is
00:54:51 for what these things can do.
00:54:55 And we’re aiming towards,
00:54:56 well, you just have a conversation with them
00:54:57 the way you can with a person.
00:55:00 But we haven’t delivered on that model yet, right?
00:55:02 So you can ask it, what’s the weather?
00:55:04 You can ask it, play some nice songs.
00:55:08 And five or six other things,
00:55:11 and then you run out of stuff that it can do.
00:55:14 In terms of a deep, meaningful connection.
00:55:16 So you’ve mentioned the movie Her
00:55:18 as one of your favorite AI movies.
00:55:20 Do you think it’s possible for a human being
00:55:22 to fall in love with an AI assistant, as you mentioned?
00:55:25 So taking this big leap from what’s the weather
00:55:28 to having a deep connection.
00:55:31 Yeah, I think as people, that’s what we love to do.
00:55:35 And I was at a showing of Her
00:55:39 where we had a panel discussion and somebody asked me,
00:55:43 what other movie do you think Her is similar to?
00:55:46 And my answer was Life of Brian,
00:55:50 which is not a science fiction movie,
00:55:53 but both movies are about wanting to believe
00:55:57 in something that’s not necessarily real.
00:56:00 Yeah, by the way, for people that don’t know,
00:56:01 it’s Monty Python.
00:56:03 Yeah, that’s brilliantly put.
00:56:05 Right, so I think that’s just the way we are.
00:56:07 We want to trust, we want to believe,
00:56:11 we want to fall in love,
00:56:12 and it doesn’t necessarily take that much, right?
00:56:15 So my kids fell in love with their teddy bear,
00:56:20 and the teddy bear was not very interactive.
00:56:23 So that’s all us pushing our feelings
00:56:26 onto our devices and our things,
00:56:29 and I think that that’s what we like to do,
00:56:31 so we’ll continue to do that.
00:56:33 So yeah, as human beings, we long for that connection,
00:56:36 and AI just has to do a little bit of work
00:56:39 to catch us on the other end.
00:56:41 Yeah, and certainly, if you can get to dog level,
00:56:46 a lot of people have invested a lot of love in their pets.
00:56:49 In their pets.
00:56:50 Some people, as I’ve been told,
00:56:52 in working with autonomous vehicles,
00:56:54 have invested a lot of love into their inanimate cars,
00:56:58 so it really doesn’t take much.
00:57:00 So, to linger on a topic
00:57:05 that may be silly or a little bit philosophical,
00:57:07 what is a good test of intelligence in your view?
00:57:12 Is natural conversation like in the Turing test
00:57:14 a good test?
00:57:16 Put another way, what would impress you
00:57:20 if you saw a computer do it these days?
00:57:22 Yeah, I mean, I get impressed all the time.
00:57:24 Go playing, StarCraft playing, those are all pretty cool.
00:57:35 And I think, sure, conversation is important.
00:57:39 I think we sometimes have these tests
00:57:44 where it’s easy to fool the system, where
00:57:46 you can have a chat bot that can have a conversation,
00:57:51 but it never gets into a situation
00:57:54 where it has to be deep enough that it really reveals itself
00:57:58 as being intelligent or not.
00:58:00 I think Turing suggested that, but I think if he were alive,
00:58:07 he’d say, you know, I didn’t really mean that seriously.
00:58:11 And I think, this is just my opinion,
00:58:15 but I think Turing’s point was not
00:58:17 that this test of conversation is a good test.
00:58:21 I think his point was having a test is the right thing.
00:58:25 So rather than having the philosophers say, oh, no,
00:58:28 AI is impossible, you should say, well,
00:58:31 we’ll just have a test, and then the result of that
00:58:33 will tell us the answer.
00:58:34 And it doesn’t necessarily have to be a conversation test.
00:58:37 That’s right.
00:58:37 And coming up with a new, better test as the technology evolves
00:58:40 is probably the right way.
00:58:42 Do you worry, as a lot of the general public does,
00:58:46 or at least as some vocal part of the general public does,
00:58:51 about the existential threat of artificial intelligence?
00:58:53 So looking farther into the future, as you said,
00:58:56 most of us are not able to predict much.
00:58:59 So when shrouded in such mystery, there’s a concern of,
00:59:02 well, you start thinking about worst case.
00:59:05 Is that something that occupies your mind space much?
00:59:09 So I certainly think about threats.
00:59:11 I think about dangers.
00:59:13 And I think any new technology has positives and negatives.
00:59:19 And if it’s a powerful technology,
00:59:21 it can be used for bad as well as for good.
00:59:24 So I’m certainly not worried about the robot
00:59:27 apocalypse and the Terminator type scenarios.
00:59:32 I am worried about change in employment.
00:59:37 And are we going to be able to react fast enough
00:59:41 to deal with that?
00:59:41 I think we’re already seeing it today, where
00:59:44 a lot of people are disgruntled about the way
00:59:48 income inequality is working.
00:59:50 And automation could help accelerate
00:59:53 those kinds of problems.
00:59:55 I see powerful technologies can always be used as weapons,
00:59:59 whether they’re robots or drones or whatever.
01:00:03 Some of that we’re seeing due to AI.
01:00:06 A lot of it, you don’t need AI.
01:00:09 And I don’t know what’s a worse threat,
01:00:12 if it’s an autonomous drone or it’s CRISPR technology
01:00:17 becoming available.
01:00:18 We have lots of threats to face.
01:00:21 And some of them involve AI, and some of them don’t.
01:00:24 So the threats that technology presents,
01:00:27 are you, for the most part, optimistic about technology
01:00:31 also alleviating those threats or creating new opportunities
01:00:34 or protecting us from the more detrimental effects
01:00:38 of these new technologies?
01:00:38 I don’t know.
01:00:39 Again, it’s hard to predict the future.
01:00:41 And as a society so far, we’ve survived
01:00:47 nuclear bombs and other things.
01:00:50 Of course, only societies that have survived
01:00:53 are having this conversation.
01:00:54 So maybe that’s survivorship bias there.
01:00:59 What problem stands out to you as exciting, challenging,
01:01:02 impactful to work on in the near future for yourself,
01:01:06 for the community, and broadly?
01:01:09 So we talked about these assistants and conversation.
01:01:13 I think that’s a great area.
01:01:14 I think combining common sense reasoning
01:01:20 with the power of data is a great area.
01:01:26 In which application?
01:01:27 In conversation, or just broadly speaking?
01:01:29 Just in general, yeah.
01:01:31 As a programmer, I’m interested in programming tools,
01:01:35 both in terms of the current systems
01:01:38 we have today with TensorFlow and so on.
01:01:41 Can we make them much easier to use
01:01:43 for a broader class of people?
01:01:45 And also, can we apply machine learning
01:01:49 to the more traditional type of programming?
01:01:52 So when you go to Google and you type in a query
01:01:57 and you spell something wrong, it says, did you mean?
01:02:00 And the reason we’re able to do that
01:02:01 is because lots of other people made a similar error,
01:02:04 and then they corrected it.
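A minimal sketch of the log-driven “did you mean” idea just described: if many users typed one query and immediately retyped another, suggest the popular rewrite. The log data here is invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical session log: (what someone typed, what they retyped right after).
# A real system mines these pairs at enormous scale; this data is invented.
reformulations = [
    ("speling", "spelling"),
    ("speling", "spelling"),
    ("speling", "spieling"),
    ("recieve", "receive"),
]

corrections = defaultdict(Counter)
for typed, retyped in reformulations:
    corrections[typed][retyped] += 1

def did_you_mean(query):
    """Suggest the rewrite other users most often made after this query, if any."""
    if query in corrections:
        suggestion, _count = corrections[query].most_common(1)[0]
        return suggestion
    return None

print(did_you_mean("speling"))  # "spelling"
```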
01:02:06 We should be able to go into our code bases and our bug fix
01:02:10 bases.
01:02:10 And when I type a line of code, it should be able to say,
01:02:13 did you mean such and such?
01:02:15 If you type this today, you’re probably going to type
01:02:17 in this bug fix tomorrow.
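And a sketch of the same idea pointed at code, assuming a table of (line as first committed, line as later fixed) pairs mined from version-control history; the patterns here are invented for illustration:

```python
# Hypothetical pairs mined from bug-fix history; a real system would mine
# millions of commits. These examples are invented for illustration.
bug_fixes = {
    "if x = None:":          "if x is None:",
    "except ValueError, e:": "except ValueError as e:",
    "return xs.sort()":      "xs.sort(); return xs",  # list.sort() returns None
}

def did_you_mean(line_of_code):
    """If this exact line matches a known pre-fix pattern, suggest the fix."""
    fix = bug_fixes.get(line_of_code.strip())
    return f"Did you mean: {fix}" if fix else None

print(did_you_mean("if x = None:"))  # "Did you mean: if x is None:"
```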
01:02:20 Yeah, that’s a really exciting application
01:02:22 of almost an assistant for the coding programming experience
01:02:27 at every level.
01:02:29 So I think I could safely speak for the entire AI community,
01:02:35 first of all, for thanking you for the amazing work you’ve
01:02:37 done, certainly for the amazing work you’ve done
01:02:40 with the AI: A Modern Approach book.
01:02:43 I think we’re all looking forward very much
01:02:45 to the fourth edition, and then the fifth edition, and so on.
01:02:48 So Peter, thank you so much for talking today.
01:02:51 Yeah, thank you.
01:02:51 My pleasure.