Rodney Brooks: Robotics #217

Transcript

00:00:00 The following is a conversation with Rodney Brooks, one of the greatest roboticists in history.

00:00:06 He led the Computer Science and Artificial Intelligence Laboratory at MIT,

00:00:10 then cofounded iRobot, which is one of the most successful robotics companies ever.

00:00:16 Then he cofounded Rethink Robotics that created some amazing collaborative robots like Baxter

00:00:22 and Sawyer. Finally, he cofounded Robust.ai, whose mission is to teach robots common sense,

00:00:30 which is a lot harder than it sounds. To support this podcast,

00:00:35 please check out our sponsors in the description.

00:00:38 As a side note, let me say that Rodney is someone I’ve looked up to for many years in my now over

00:00:43 two decade journey in robotics because, one, he’s a legit great engineer of real world systems,

00:00:52 and two, he’s not afraid to state controversial opinions that challenge the way we see the AI

00:00:57 world. But of course, while I agree with him on some of his critical views of AI, I don’t agree

00:01:04 with some others, and he’s fully supportive of such disagreement. Nobody ever built anything great

00:01:10 by being fully agreeable. There’s always respect and love behind our interactions, and when a

00:01:16 conversation is recorded like it was for this podcast, I think a little bit of disagreement is

00:01:22 fun. This is the Lex Fridman Podcast, and here is my conversation with Rodney Brooks.

00:01:31 What is the most amazing or beautiful robot that you’ve ever had the chance to work with?

00:01:37 I think it was Domo, which was made by one of my grad students, Aaron Edsinger. It now sits in

00:01:43 Daniela Russo’s office, director of CSAIL, and it was just a beautiful robot. Aaron was really

00:01:50 clever. He didn’t give me a budget ahead of time. He didn’t tell me what he was going to do.

00:01:56 He just started spending money. He spent a lot of money. He and Jeff Weber, who is a mechanical

00:02:02 engineer who Aaron insisted he bring with him when he became a grad student, built this beautiful,

00:02:08 gorgeous robot, Domo, which is an upper torso humanoid: two arms with three fingered

00:02:17 hands, and a face with eyeballs. Everything except the eyeballs uses series elastic actuators.

00:02:26 You can interact with it. Cable driven. All the motors are inside, and it's just gorgeous.

00:02:33 The eyeballs are actuated too, or no?

00:02:35 Oh yeah, the eyeballs are actuated with cameras, so it had a visual attention mechanism,

00:02:41 looking when people came in and looking in their face and talking with them.

00:02:46 Wow, was it amazing?

00:02:48 The beauty of it.

00:02:49 You said what was the most beautiful?

00:02:51 What is the most beautiful?

00:02:52 It’s just mechanically gorgeous. As everything Aaron builds,

00:02:55 there’s always been mechanically gorgeous. It’s just exquisite in the detail.

00:03:00 We’re talking about mechanically, like literally the amount of actuators.

00:03:04 The actuators, the cables, he anodizes different parts, different colors,

00:03:10 and it just looks like a work of art.

00:03:13 What about the face? Do you find the face beautiful in robots?

00:03:17 When you make a robot, it’s making a promise for how well it will be able to interact,

00:03:23 so I always encourage my students not to overpromise.

00:03:27 Even with its essence, like the thing it presents, it should not overpromise.

00:03:31 Yeah, so the joke I make, which I think you’ll get, is if your robot looks like Albert Einstein,

00:03:37 it should be as smart as Albert Einstein.

00:03:39 So the only thing in Domo’s face is the eyeballs, because that’s all it can do.

00:03:47 It can look at you and pay attention.

00:03:52 It’s not like one of those Japanese robots that looks exactly like a person at all.

00:03:58 But see, the thing is, us humans and dogs, too, don’t just use eyes as attentional mechanisms.

00:04:06 They also use them to communicate, as part of the communication.

00:04:09 Like a dog can look at you, look at another thing, and look back at you,

00:04:12 and that designates that we’re going to be looking at that thing together.

00:04:15 Yeah, or intent, you know, on both Baxter and Sawyer at Rethink Robotics,

00:04:21 they had a screen with, you know, graphic eyes,

00:04:25 so it wasn’t actually where the cameras were pointing, but the eyes would look in the direction

00:04:31 it was about to move its arm, so people in the factory nearby were not surprised by its motions,

00:04:36 because it gave that intent away.

00:04:39 Before we talk about Baxter, which I think is a beautiful robot, let’s go back to the beginning.

00:04:45 When did you first fall in love with robotics?

00:04:48 We’re talking about beauty and love to open the conversation.

00:04:50 This is great.

00:04:51 I was born in the end of 1954, and I grew up in Adelaide, South Australia,

00:04:57 and I have these two books that are dated 1961, so I’m guessing my mother found them in a store

00:05:05 in 62 or 63, How and Why Wonder Books.

00:05:09 How and Why Wonder Book of Electricity, and a How and Why Wonder Book of Giant Brains and Robots.

00:05:15 And I learned how to build circuits, you know, when I was eight or nine, simple circuits,

00:05:23 and I read, you know, learned the binary system, and saw all these drawings, mostly, of robots,

00:05:31 and then I tried to build them for the rest of my childhood.

00:05:36 Wait, 61, you said?

00:05:38 This was when the two books, I’ve still got them at home.

00:05:41 What does the robot mean in that context?

00:05:43 Some of the robots that they had were arms, you know, big arms to move nuclear material around,

00:05:51 but they had pictures of welding robots that looked like humans under the sea, welding stuff

00:05:57 underwater.

00:05:59 So they weren’t real robots, but they were, you know, what people were thinking about for robots.

00:06:05 What were you thinking about?

00:06:06 Were you thinking about humanoids?

00:06:07 Were you thinking about arms with fingers?

00:06:09 Were you thinking about faces or colors?

00:06:12 Were you thinking about faces or cars?

00:06:14 No, actually, to be honest, I realized my limitation on building mechanical stuff.

00:06:19 So I just built the brains, mostly, out of different technologies as I got older.

00:06:28 I built a learning system which was chemical based, and I had this ice cube tray.

00:06:35 Each well was a cell, and by applying voltage to the two electrodes, it would build up a

00:06:42 copper bridge.

00:06:43 So over time, it would learn a simple network so I could teach it stuff.

00:06:50 And mostly, things were driven by my budget, and nails as electrodes and an ice cube tray

00:07:00 was about my budget at that stage.

00:07:02 Later, I managed to buy transistors, and I could build gates and flip flops and stuff.

00:07:07 So one of your first robots was an ice cube tray?

00:07:11 Yeah, it was very cerebral because it learned to add.
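
As an illustration of the idea, here is a toy software analogue of that electrochemical learner; the growth constant, the threshold, and the half-adder task are all invented for this sketch, not details of the actual tray:

```python
# A toy analogue of the ice-cube-tray learner: each well is a cell whose
# "copper bridge" conductance grows a little every time a training voltage
# is applied. All names and constants here are invented for illustration.

GROWTH_PER_PULSE = 0.2   # conductance added by one training pulse (made up)
THRESHOLD = 0.5          # conductance above which a cell conducts (made up)

class Cell:
    def __init__(self):
        self.conductance = 0.0

    def pulse(self):
        """Apply a training voltage: the copper bridge grows slightly."""
        self.conductance += GROWTH_PER_PULSE

    def conducts(self):
        return self.conductance >= THRESHOLD

# One cell per (input pair, output wire): a tray learning one-bit addition.
# Outputs: sum = a XOR b, carry = a AND b.
tray = {(a, b, wire): Cell() for a in (0, 1) for b in (0, 1)
        for wire in ("sum", "carry")}

def train(a, b, sum_bit, carry_bit, pulses=3):
    """Reinforce the cells that should conduct for this input pair."""
    for _ in range(pulses):
        if sum_bit:
            tray[(a, b, "sum")].pulse()
        if carry_bit:
            tray[(a, b, "carry")].pulse()

for a in (0, 1):
    for b in (0, 1):
        train(a, b, sum_bit=a ^ b, carry_bit=a & b)

# Read the learned behaviour back out.
for a in (0, 1):
    for b in (0, 1):
        s = int(tray[(a, b, "sum")].conducts())
        c = int(tray[(a, b, "carry")].conducts())
        print(f"{a} + {b} -> sum={s} carry={c}")
```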

00:07:16 Very nice.

00:07:17 Well, just a decade or so before, in 1950, Alan Turing wrote a paper that formulated

00:07:26 the Turing Test, and he opened that paper with the question, can machines think?

00:07:32 So let me ask you this question.

00:07:34 Can machines think?

00:07:36 Can your ice cube tray one day think?

00:07:40 Certainly, machines can think because I believe you’re a machine, and I’m a machine, and I

00:07:44 believe we both think.

00:07:46 I think any other philosophical position is sort of a little ludicrous.

00:07:51 What does think mean if it’s not something that we do?

00:07:53 And we are machines.

00:07:56 So yes, machines can, but do we have a clue how to build such machines?

00:08:00 That’s a very different question.

00:08:02 Are we capable of building such machines?

00:08:05 Are we smart enough?

00:08:06 We think we’re smart enough to do anything, but maybe we’re not.

00:08:10 Maybe we’re just not smart enough to build stuff like us.

00:08:14 The kind of computer that Alan Turing was thinking about, do you think there is something

00:08:18 fundamentally or significantly different between the computer between our ears, the biological

00:08:25 computer that humans use, and the computer that he was thinking about from a sort of

00:08:31 high level philosophical?

00:08:33 Yeah, I believe that it’s very wrong.

00:08:36 In fact, I’m halfway through a, I think it’ll be about a 480 page book, the working title

00:08:44 is Not Even Wrong.

00:08:45 And if I may, I’ll tell you a bit about that book.

00:08:48 Yes, please.

00:08:48 So there’s two, well, three thrusts to it.

00:08:52 One is the history of computation, what we call computation.

00:08:56 It goes all the way back to some manuscripts in Latin from 1614 and 1620 by Napier and

00:09:03 Kepler through Babbage and Lovelace.

00:09:06 And then Turing’s 1936 paper is what we think of as the invention of modern computation.

00:09:17 And that paper, by the way, did not set out to invent computation.

00:09:23 It set out to negatively answer one of Hilbert's later set of three problems.

00:09:29 Hilbert had called for an effective way of getting answers.

00:09:38 And Hilbert really worked with rewriting rules, as did Church, who also, at the same time,

00:09:49 a month earlier than Turing, disproved one of these three hypotheses of Hilbert's.

00:09:54 The other two had already been disproved by Gödel.

00:09:57 Turing set out to disprove it, because it’s always easier to disprove these things than

00:10:01 to prove that there is an answer.

00:10:04 And so he needed, and it really came from his professor when he was an undergrad at

00:10:12 Cambridge, who turned it into: is there a mechanical process?

00:10:16 So he wanted to show a mechanical process that could calculate numbers, because that

00:10:23 was a mechanical process that people used to generate tables.

00:10:27 They were called computers, the people at the time.

00:10:30 And they followed a set of rules where they had paper, and they would write numbers down,

00:10:35 and based on the numbers, they’d keep writing other numbers.

00:10:39 And they would produce numbers for these tables, engineering tables, that the more iterations

00:10:46 they did, the more significant digits came out.

00:10:48 And so Turing, in that paper, set out to define what sort of machine could do that, mechanical

00:10:56 machine, where it could produce an arbitrary number of digits in the same way a human computer

00:11:04 did.

00:11:06 And he came up with a very simple set of constraints where there was an infinite supply

00:11:13 of paper.

00:11:14 This is the tape of the Turing machine, and each Turing machine came with a set of instructions

00:11:22 that, as a person, could do with pencil and paper, write down things on the tape and erase

00:11:27 them and put new things there.

00:11:30 And he was able to show that that system was not able to do something that Hilbert had

00:11:36 hypothesized, so he disproved it.

00:11:38 But he had to show that this system was good enough to do whatever could be done, but couldn’t

00:11:47 do this other thing.

00:11:48 And there he said, and he says in the paper, I don’t have any real arguments for this,

00:11:53 but based on intuition.

00:11:55 So that’s how he defined computation.

00:11:58 And then if you look over the next, from 1936 up until really around 1975, you see people

00:12:05 struggling with, is this really what computation is?

00:12:10 And so Marvin Minsky, very well known in AI, but also a fantastic mathematician, in his

00:12:17 book Finite and Infinite Machines from the mid-'60s, which is a beautiful, beautiful mathematical

00:12:22 book, says at the start of the book, well, what is computation?

00:12:26 Turing says it’s this, and yeah, I sort of think it’s that.

00:12:29 It doesn’t really matter whether the stuff’s made of wood or plastic.

00:12:32 It’s just that relatively cheap stuff can do this stuff.

00:12:36 And so yeah, seems like computation.

00:12:40 And Donald Knuth, in his first volume of his Art of Computer Programming in around 1968,

00:12:49 says, well, what’s computation?

00:12:52 It’s this stuff, like Turing says, that a person could do each step without too much

00:12:57 trouble.

00:12:57 And so one of his examples of what would be too much trouble was a step which required

00:13:03 knowing whether Fermat’s Last Theorem was true or not, because it was not known at the

00:13:08 time.

00:13:08 And that’s too much trouble for a person to do as a step.

00:13:12 And Hopcroft and Ullman sort of said a similar thing later that year.

00:13:18 And by 1975, in the Aho,

00:13:20 Hopcroft, and Ullman book, they're saying, well, you know, we don't really know what

00:13:24 computation is, but intuition says this is sort of about right, and this is what it is.

00:13:31 That’s computation.

00:13:32 It’s a sort of agreed upon thing which happens to be really easy to implement in silicon.

00:13:39 And then we had Moore’s Law, which took off, and it’s been an incredibly powerful tool.

00:13:44 I certainly wouldn’t argue with that.

00:13:46 The version we have of computation, incredibly powerful.

00:13:49 Can we just take a pause?

00:13:51 So what we’re talking about is there’s an infinite tape with some simple rules of how

00:13:55 to write on that tape, and that’s what we’re kind of thinking about.

00:13:59 This is computation.

00:14:00 Yeah, and it’s modeled after humans, how humans do stuff.

00:14:03 And I think it’s, Turing says in the 36th paper, one of the critical facts here is that

00:14:09 a human has a limited amount of memory.

00:14:11 So that’s what we’re going to put onto our mechanical computers.

00:14:15 So, you know, I’m like mass.

00:14:19 I’m like mass or charge or, you know, it’s not given by the universe.

00:14:26 It was, this is what we’re going to call computation.

00:14:29 And then it has this really, you know, it had this really good implementation, which

00:14:33 has completely changed our technological world.

00:14:36 That’s computation.

00:14:40 Second part of the book, or argument in the book, I have this two by two matrix with science.

00:14:48 In the top row, engineering in the bottom row, left column is intelligence, right column

00:14:56 is life.

00:14:58 So in the bottom row, the engineering, there’s artificial intelligence and artificial life.

00:15:03 In the top row, there’s neuroscience and abiogenesis.

00:15:07 How does nonliving matter become living matter?

00:15:12 Four disciplines.

00:15:14 These four disciplines all came into the current form in the period 1945 to 1965.

00:15:24 That’s interesting.

00:15:24 There was neuroscience before, but it wasn’t effective neuroscience.

00:15:28 It was, you know, there were these ganglia and there’s electrical charges, but no one

00:15:32 knows what to do with it.

00:15:33 And furthermore, there are a lot of players who are common across them.

00:15:38 I’ve identified common players except for artificial intelligence and abiogenesis.

00:15:43 I don’t have, but for any other pair, I can point to people who work them.

00:15:47 And a whole bunch of them, by the way, were at the research lab for electronics at MIT

00:15:53 where Warren McCulloch held forth.

00:15:58 In fact, McCulloch, Pitts, Lettvin, and Maturana wrote the first paper on functional neuroscience

00:16:06 called What the Frog’s Eye Tells the Frog’s Brain, where instead of it just being this

00:16:10 bunch of nerves, they sort of showed what different anatomical components were doing

00:16:17 and telling other anatomical components and, you know, generating behavior in the frog.

00:16:23 Would you put them as basically the fathers or one of the early pioneers of what are now

00:16:29 called artificial neural networks?

00:16:33 Yeah, I mean, McCulloch and Pitts.

00:16:36 Pitts was much younger than him.

00:16:38 In 1943, they had written a paper, inspired by Bertrand Russell, on a calculus for the ideas immanent

00:16:48 in nervous systems where, without any real proof, they had tried to

00:16:56 give a formalism for neurons basically in terms of logic, AND gates, OR gates, and NOT

00:17:03 gates, with no real evidence that that was what was going on, but they talked about it,

00:17:09 and that was picked up by Minsky for his 1953 dissertation, which was on a neural

00:17:16 network, as we'd call it today.

00:17:18 It was picked up by John von Neumann when he was designing the EDVAC computer in 1945.

00:17:26 He talked about its components being neurons, and in the references, he's only got

00:17:31 three references, and one of them is the McCulloch-Pitts paper.
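
A sketch of what that 1943 formalism amounts to, with invented helper names: threshold units with excitatory and inhibitory inputs, from which AND, OR, and NOT can be built:

```python
# A McCulloch-Pitts unit: fire (output 1) if the sum of excitatory inputs
# reaches a threshold and no inhibitory input is active. The helper names
# and encodings are mine, but logic built from threshold units is the
# construction the 1943 paper describes.

def mp_unit(excitatory, inhibitory, threshold):
    if any(inhibitory):          # absolute inhibition, as in the paper
        return 0
    return int(sum(excitatory) >= threshold)

def AND(a, b):  return mp_unit([a, b], [], threshold=2)
def OR(a, b):   return mp_unit([a, b], [], threshold=1)
def NOT(a):     return mp_unit([1], [a], threshold=1)  # constant drive, inhibited by a

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```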

00:17:35 So all these people and then the AI people and the artificial life people, which was

00:17:40 John von Neumann originally, there’s like overlap between all, they’re all going around

00:17:44 the same time.

00:17:45 And three of these four disciplines turned to computation as their primary metaphor.

00:17:51 So I’ve got a couple of chapters in the book.

00:17:54 One is titled, wait, computers are people?

00:17:58 Because that’s where our computers came from.

00:18:00 Yeah.

00:18:01 And, you know, from people who were computing stuff.

00:18:05 And then I’ve got another chapter, wait, people are computers?

00:18:08 Which is about computational neuroscience.

00:18:10 Yeah.

00:18:11 So there’s this whole circle here.

00:18:14 And that computation is it.

00:18:16 And, you know, I have talked to people about, well, maybe it’s not computation that goes

00:18:21 on in the head.

00:18:22 Of course it is.

00:18:24 Yeah.

00:18:24 Okay, well, when Elon Musk’s rocket goes up, is it computing?

00:18:31 Is that how it gets into orbit?

00:18:32 By computing?

00:18:34 But we’ve got this idea, if you want to build an AI system, you write a computer program.

00:18:39 Yeah, so the word computation very quickly starts doing a lot of work that it was not

00:18:46 initially intended to do.

00:18:48 It’s the second and same if you talk about the universe as essentially performing a

00:18:53 computation.

00:18:53 Yeah, right.

00:18:54 Wolfram does this.

00:18:55 He turns it into computation.

00:18:57 You don’t turn rockets into computation.

00:18:59 Yeah.

00:18:59 By the way, when you say computation in our conversation, do you tend to think of computation

00:19:04 narrowly in the way Turing thought of computation?

00:19:08 It’s gotten very, you know, squishy.

00:19:14 Yeah.

00:19:14 Squishy.

00:19:17 But computation in the way Turing thinks about it and the way most people think about it

00:19:22 actually fits very well with thinking like a hunter gatherer.

00:19:29 There are places and there can be stuff in places and the stuff in places can change

00:19:34 and it stays there until someone changes it.

00:19:37 And it’s this metaphor of place and container, which, you know, is a combination of our place

00:19:44 cells in our hippocampus and our cortex.

00:19:48 But this is what we mostly use metaphors for, to think about things.

00:19:52 And when we get outside of our metaphor range, we have to invent tools which we can sort

00:19:57 of switch on to use.

00:19:58 So calculus is an example of a tool.

00:20:01 It can do stuff that our raw reasoning can’t do, and we’ve got conventions of when you

00:20:06 can use it or not.

00:20:08 But, you know, people try all the time, we always try to get physical metaphors

00:20:15 for things, which is why quantum mechanics has been such a problem for a hundred years.

00:20:21 Because it’s a particle.

00:20:22 No, it’s a wave.

00:20:22 It’s got to be something we understand.

00:20:24 And I say, no, it’s some weird mathematical logic that’s different from those, but we

00:20:29 want that metaphor.

00:20:30 Well, you know, I suspect that, you know, a hundred years or 200 years from now, neither

00:20:35 quantum mechanics nor dark matter will be talked about in the same terms, you know,

00:20:39 in the same way that phlogiston theory eventually went away.

00:20:44 Because it just wasn’t an adequate explanatory metaphor, you know.

00:20:49 That metaphor was the stuff, there is stuff in the burning, the burning is in the matter.

00:20:56 As it turns out, the burning was outside the matter, it was the oxygen.

00:20:59 So our desire for metaphor and combined with our limited cognitive capabilities gets us

00:21:05 into trouble.

00:21:06 That’s my argument in this book.

00:21:08 Now, and people say, well, what is it then?

00:21:10 And I say, well, I wish I knew that; I'd write the book about that.

00:21:12 But I, you know, I give some ideas.

00:21:14 But so there’s the three things.

00:21:17 Computation is sort of a particular thing we use.

00:21:22 Oh, can I tell you one beautiful thing, one beautiful thing I found?

00:21:26 So, you know, I used an example of a thing that’s different from computation.

00:21:30 You hit a drum and it vibrates, and there are some stationary points on the drum surface,

00:21:35 you know, because the waves are going up and down around the stationary points.

00:21:37 Now, you could compute them to arbitrary precision, but the drum just knows them.

00:21:45 The drum doesn’t have to compute.

00:21:47 What was the very first computer program ever written by Ada Lovelace?

00:21:51 To compute Bernoulli numbers, and the Bernoulli numbers are exactly what you need to find those

00:21:56 stable points in the drum surface.

00:21:58 Wow.

00:21:59 And there was a bug in the program.

00:22:06 The arguments to divide were reversed in one place.

00:22:10 And it still worked?

00:22:11 Well, no, she’s never got to run it.

00:22:12 They never built the analytical engine.

00:22:14 She wrote the program without it, you know.
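
For reference, the same mathematics in modern form. This is the standard Bernoulli-number recurrence, not Lovelace's actual operation sequence for the Analytical Engine:

```python
# Bernoulli numbers via the standard recurrence:
#   B_0 = 1,   B_m = -1/(m+1) * sum_{k=0}^{m-1} C(m+1, k) * B_k.
# Exact rational arithmetic, so the values come out cleanly.

from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]
    for m in range(1, n + 1):
        acc = sum(comb(m + 1, k) * B[k] for k in range(m))
        # The bug described above was of this flavour: reversing a division,
        # i.e. writing (m + 1) / acc instead of acc / (m + 1).
        B.append(-acc / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# -> ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```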

00:22:19 So the computation?

00:22:21 Computation is sort of, you know, a thing that’s become dominant as a metaphor, but

00:22:27 is it the right metaphor?

00:22:29 All three of these four fields adopted computation.

00:22:33 And, you know, a lot of it swirls around Warren McCulloch and all his students, and he funded

00:22:40 a lot of people.

00:22:45 And our human metaphors, our limitations to human thinking, all play into this.

00:22:50 Those are the three themes of the book.

00:22:52 So I have a little to say about computation.

00:22:54 So you’re saying that there is a gap between the computer or the machine that performs

00:23:05 computation and this machine that appears to have consciousness and intelligence.

00:23:13 Yeah, that piece of meat in your head.

00:23:16 Piece of meat.

00:23:16 And maybe it’s not just the meat in your head, it’s the rest of you too.

00:23:20 I mean, you actually have a neural system in your gut.

00:23:24 I tend to also believe, not believe, but we’re now dancing around things we don’t know, but

00:23:31 I tend to believe other humans are important.

00:23:36 Like, so we’re almost like, I just don’t think we would ever have achieved the level

00:23:42 of intelligence we have with other humans.

00:23:44 I’m not saying so confidently, but I have an intuition that some of the intelligence

00:23:49 is in the interaction.

00:23:51 Yeah, and I think it seems to be very likely, again, this is speculation, but we, our species,

00:24:00 and probably Neanderthals to some extent, because you can find old Neanderthal bones

00:24:06 that seem to have been used for counting, with notches put in them, we are able to put

00:24:15 some of our stuff outside our body into the world.

00:24:18 And then other people can share it.

00:24:20 And then we get these tools that become shared tools.

00:24:22 And so there’s a whole coupling that would not occur in the single deep learning network,

00:24:30 which was fed all of literature or something.

00:24:33 Yeah, the neural network can’t step outside of itself.

00:24:38 But is there some, can we explore this dark room a little bit and try to get at something?

00:24:46 What is the magic?

00:24:47 Where does the magic come from in the human brain that creates the mind?

00:24:52 What’s your sense as scientists that try to understand it and try to build it?

00:24:58 What are the directions it followed might be productive?

00:25:04 Is it creative, interactive robots?

00:25:07 Is it creating large deep neural networks that do like self supervised learning and

00:25:13 just like we’ll discover that when you make something large enough, some interesting things

00:25:18 will emerge?

00:25:19 Is it through physics and chemistry, biology, like artificial life angle?

00:25:23 Like we’ll sneak up in this four quadrant matrix that you mentioned.

00:25:28 Is there anything you’re most, if you had to bet all your money, financial?

00:25:33 I wouldn’t.

00:25:35 So every intelligence we know, animal intelligence, dog intelligence,

00:25:40 octopus intelligence, which is a very different sort of architecture from us.

00:25:49 All the intelligences we know perceive the world in some way and then have action in

00:25:59 the world, but they’re able to perceive objects in a way which is actually pretty damn phenomenal

00:26:11 and surprising.

00:26:13 We tend to think that the box over here between us, which is a sound box, I think is a blue

00:26:22 box, but blueness is something that we construct with color constancy.

00:26:32 The blueness is not a direct function of the photons we’re receiving.

00:26:37 It’s actually context, which is why you can turn, maybe seeing the examples where someone

00:26:47 turns a stop sign into some other sort of sign by just putting a couple of marks on

00:26:53 them and the deep learning system gets it wrong.

00:26:55 And everyone says, but the stop sign’s red.

00:26:58 Why is it thinking it’s the other sort of sign?

00:26:59 Because redness is not intrinsic in just the photons.

00:27:02 It’s actually a construction of an understanding of the whole world and the relationship between

00:27:07 objects to get color constancy.
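
A minimal numeric illustration of the point, with invented reflectance and illuminant values. Gray-world correction is one crude classical color-constancy heuristic, shown here only to make the idea concrete, not a model of what brains do:

```python
# The same surface produces different raw RGB under different lights, so
# color is not "in the photons". Gray-world correction rescales channels
# so the scene's mean becomes neutral, partially undoing the illuminant.

import numpy as np

def gray_world(image):
    """Rescale channels so the image's mean color becomes neutral gray."""
    means = image.reshape(-1, 3).mean(axis=0)
    return image * (means.mean() / means)

# A few surface reflectances (rows are surfaces, columns are R, G, B).
reflectance = np.array([[0.2, 0.3, 0.8],    # a blue-ish box
                        [0.7, 0.7, 0.7],    # a gray wall
                        [0.8, 0.2, 0.2]])   # a red sign
white_light = np.array([1.0, 1.0, 1.0])
warm_light = np.array([1.0, 0.8, 0.4])     # strongly tinted illuminant

for light in (white_light, warm_light):
    raw = reflectance * light               # what the camera measures
    print("raw:\n", raw.round(2))
    print("gray-world corrected:\n", gray_world(raw).round(2), "\n")
```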

00:27:11 But our tendency, in order to get an arXiv paper out really quickly, is you just

00:27:15 show a lot of data and give the labels and hope it figures it out.

00:27:18 But it’s not figuring it out in the same way we do.

00:27:21 We have a very complex perceptual understanding of the world.

00:27:24 Dogs have a very different perceptual understanding based on smell.

00:27:28 They go smell a post, they can tell how many different dogs have visited it in the last

00:27:34 10 hours and how long ago.

00:27:36 There’s all sorts of stuff that we just don’t perceive about the world.

00:27:39 And just taking a single snapshot is not perceiving the world.

00:27:42 It’s not seeing the registration between us and the object.

00:27:48 And registration is a philosophical concept.

00:27:52 Brian Cantwell Smith talks about it a lot.

00:27:54 Very difficult, squirmy thing to understand.

00:27:59 But I think none of our systems do that.

00:28:02 We’ve always talked in AI about the symbol grounding problem, how our symbols that we

00:28:06 talk about are grounded in the world.

00:28:08 And when deep learning came along and started labeling images, people said, ah, the grounding

00:28:12 problem has been solved.

00:28:13 No, the labeling problem was solved with some percentage accuracy, which is different from

00:28:18 the grounding problem.

00:28:20 So you agree with Hans Moravec and what's called Moravec's paradox, which highlights

00:28:28 this counterintuitive notion that reasoning is easy, but perception and mobility are hard.

00:28:39 Yeah.

00:28:39 We shared an office when I was working on computer vision and he was working on his

00:28:45 first mobile robot.

00:28:46 What were those conversations like?

00:28:48 They were great.

00:28:50 So do you still kind of, maybe you can elaborate, do you still believe this kind of notion that

00:28:56 perception is really hard?

00:28:59 Like, can you make sense of why we humans have this poor intuition about what’s hard

00:29:04 and not?

00:29:04 Well, let me give you sort of another story.

00:29:10 Sure.

00:29:11 If you go back to the original teams working on AI from the late 50s into the 60s, and

00:29:21 you go to the AI lab at MIT, who was it that was doing that?

00:29:27 It was a bunch of really smart kids who got into MIT and they were intelligent.

00:29:32 So what’s intelligence about?

00:29:34 Well, the stuff they were good at, playing chess, doing integrals, that was hard stuff.

00:29:40 But, you know, a baby could see stuff, that wasn’t intelligent, anyone could do that,

00:29:45 that’s not intelligence.

00:29:47 And so, you know, there was this intuition that the hard stuff is the things they were

00:29:52 good at and the easy stuff was the stuff that everyone could do.

00:29:57 Yeah.

00:29:57 And maybe I’m overplaying it a little bit, but I think there’s an element of that.

00:30:00 Yeah, I mean, I don’t know how much truth there is to, like chess, for example, was

00:30:08 for the longest time seen as the highest level of intellect, right?

00:30:14 Until we got computers that were better at it than people.

00:30:17 And then we realized, you know, if you go back to the 90s, you’ll see, you know, the

00:30:21 stories in the press around when Kasparov was beaten by Deep Blue.

00:30:26 Oh, this is the end of all sorts of things.

00:30:28 Computers are going to be able to do anything from now on.

00:30:30 And we saw exactly the same stories with AlphaGo, the Go playing program.

00:30:36 Yeah.

00:30:37 But still, to me, reasoning is a special thing.

00:30:41 And perhaps…

00:30:41 No, actually, we’re really bad at reasoning.

00:30:44 We just use these analogies based on our hunter gatherer intuitions.

00:30:48 But why is that not, don’t you think the ability to construct metaphor is a really powerful

00:30:53 thing?

00:30:53 Oh, yeah, it is.

00:30:54 Tell stories.

00:30:55 It is.

00:30:55 It’s the constructing the metaphor and registering that something constant in our brains.

00:31:00 Like, isn’t that what we’re doing with vision too?

00:31:04 And we’re telling our stories.

00:31:06 We’re constructing good models of the world.

00:31:08 Yeah, yeah.

00:31:09 But I think we jumped between what we’re capable of and how we’re doing it right there.

00:31:16 It was a little confusion that went on as we were telling each other stories.

00:31:21 Yes, exactly.

00:31:23 Trying to delude each other.

00:31:24 No, I just think, I'm not exactly sure.

00:31:27 I’m trying to pull apart this Moravec’s paradox.

00:31:30 I don’t view it as a paradox.

00:31:33 What did evolution spend its time on?

00:31:36 Yes.

00:31:36 It spent its time on getting us to perceive and move in the world.

00:31:39 That was 600 million years as multicellular creatures doing that.

00:31:43 And then it was relatively recent that we were able to hunt or gather or even animals hunting.

00:31:53 That’s much more recent.

00:31:54 And then anything that we, speech, language, those things are a couple of hundred thousand

00:32:02 years probably, if that long.

00:32:05 And then agriculture, 10,000 years.

00:32:09 All that stuff was built on top of those earlier things, which took a long time to develop.

00:32:14 So if you then look at the engineering of these things, so building it into robots,

00:32:20 what’s the hardest part of robotics?

00:32:22 Do you think as the decades that you worked on robots in the context of what we’re talking

00:32:29 about, vision, perception, the actual sort of the biomechanics of movement, I’m kind

00:32:37 of drawing parallels here between humans and machines always.

00:32:40 Like what do you think is the hardest part of robotics?

00:32:44 I just want to say, all of them.

00:32:49 There are no easy parts to do well.

00:32:53 We sort of go reductionist and we reduce it.

00:32:55 If only we had all the location of all the points in 3D, things would be great.

00:33:02 If only we had labels on the images, things would be great.

00:33:07 But as we see, that’s not good enough.

00:33:10 Some deeper understanding.

00:33:13 But if I came to you and I could solve one category of problems in robotics instantly,

00:33:21 what would give you the greatest pleasure?

00:33:28 I mean, you look at robots that manipulate objects, what’s hard about that?

00:33:36 You know, is it the perception, is it the reasoning about the world, that common sense

00:33:43 reasoning, is it the actual building a robot that’s able to interact with the world?

00:33:49 Is it like human aspects of a robot that’s interacting with humans in that game theory

00:33:54 of how they work well together?

00:33:56 Well, let’s talk about manipulation for a second because I had this really blinding

00:34:00 moment, you know, I’m a grandfather, so grandfathers have blinding moments.

00:34:05 Just three or four miles from here, last year, my 16 month old grandson was in his new house

00:34:16 for the first time, right?

00:34:18 First time in this house.

00:34:19 And he’d never been able to get to a window before, but this had some low windows.

00:34:25 And he goes up to this window with a handle on it that he’s never seen before.

00:34:29 And he’s got one hand pushing the window and the other hand turning the handle to open

00:34:34 the window.

00:34:36 He knew two different hands, two different things he knew how to put together.

00:34:44 And he’s 16 months old.

00:34:45 And there you are watching in awe.

00:34:51 In an environment he’d never seen before, a mechanism he’d never seen.

00:34:55 How did he do that?

00:34:56 Yes, that’s a good question.

00:34:57 How did he do that?

00:34:58 That’s why.

00:34:59 It’s like, okay, like you could see the leap of genius from using one hand to perform a

00:35:05 task to combining, doing, I mean, first of all, in manipulation, that’s really difficult.

00:35:11 It’s like two hands, both necessary to complete the action.

00:35:15 And completely different.

00:35:16 And he’d never seen a window open before, but he inferred somehow handle open something.

00:35:25 Yeah, there may have been a lot of slightly different failure cases that you didn’t see.

00:35:32 Not with a window, but with other objects of turning and twisting and handles.

00:35:37 There’s a great counter to reinforcement learning.

00:35:42 We’ll just give the robot plenty of time to try everything.

00:35:50 Can I tell a little side story here?

00:35:52 Yeah, so I’m in DeepMind in London, this is three, four years ago, where there’s a big

00:36:01 Google building, and then you go inside and you go through this more security, and then

00:36:06 you get to DeepMind where the other Google employees can’t go.

00:36:09 And I’m in a conference room, a conference room with some of the people, and they tell

00:36:15 me about their reinforcement learning experiment with robots, which are just trying stuff out.

00:36:23 And they’re my robots.

00:36:25 They’re Sawyer’s.

00:36:26 We sold them.

00:36:29 And they really like them because Sawyers are compliant and can sense forces, so they

00:36:33 don’t break when they’re bashing into walls.

00:36:36 They stop and they do all this stuff.

00:36:38 So you just let the robot do stuff, and eventually it figures stuff out.

00:36:42 By the way, Sawyer, we’re talking about robot manipulation, so robot arms and so on.

00:36:47 Yeah, Sawyer’s a robot.

00:36:50 What’s Sawyer?

00:36:51 Sawyer’s a robot arm that my company Rethink Robotics built.

00:36:55 Thank you for the context.

00:36:56 Sorry.

00:36:57 Okay, cool.

00:36:57 So we’re in DeepMind.

00:36:59 And it’s in the next room, these robots are just bashing around to try and use reinforcement

00:37:04 learning to learn how to act.

00:37:05 Can I go see them?

00:37:06 Oh no, they’re secret.

00:37:08 They were my robots.

00:37:09 They were secret.

00:37:10 That’s hilarious.

00:37:11 Okay.

00:37:12 Anyway, the point is, you know, this idea that you just let reinforcement learning figure

00:37:17 everything out is so counter to how a kid does stuff.

00:37:21 So again, story about my grandson.

00:37:24 I gave him this box that had lots of different lock mechanisms.

00:37:29 He didn’t randomly, you know, and he was 18 months old, he didn’t randomly try to touch

00:37:34 every surface or push everything.

00:37:35 He found he could see where the mechanism was, and he started exploring the mechanism

00:37:42 for each of these different lock mechanisms.

00:37:44 And there was reinforcement, no doubt, of some sort going on there.

00:37:48 But he applied a pre-filter, which cut down the search space dramatically.
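
A toy illustration of how much such a pre-filter buys you; all the numbers here are invented, and only the ratio matters:

```python
# Random search over everything you could try versus search restricted to
# the visible mechanism: the expected number of trials scales with the
# size of the candidate set, so the pre-filter wins by orders of magnitude.

import random

random.seed(0)
ALL_ACTIONS = [f"poke_{i}" for i in range(1000)]   # everything you could try
MECHANISM = sorted(random.sample(ALL_ACTIONS, 10)) # actions near the lock
SOLUTION = MECHANISM[0]                            # the one that opens it

def trials_until_success(candidates):
    count = 0
    while True:
        count += 1
        if random.choice(candidates) == SOLUTION:
            return count

print("unfiltered trials:", trials_until_success(ALL_ACTIONS))
print("pre-filtered trials:", trials_until_success(MECHANISM))
```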

00:37:55 I wonder to what level we’re able to introspect what’s going on.

00:37:59 Because what’s also possible is you have something like reinforcement learning going

00:38:03 on in the mind in the space of imagination.

00:38:05 So like you have a good model of the world you’re predicting and you may be running those

00:38:10 tens of thousands of like loops, but you’re like, as a human, you’re just looking at yourself

00:38:16 trying to tell a story of what happened.

00:38:18 And it might seem simple, but maybe there’s a lot of computation going on.

00:38:24 Whatever it is, but there’s also a mechanism that’s being built up.

00:38:28 It’s not just random search.

00:38:30 Yeah, that mechanism prunes it dramatically.

00:38:33 Yeah, that pruning, that pruning stuff, but is it possible that that's, so

00:38:40 you don't think that's akin to a neural network inside a reinforcement learning algorithm?

00:38:46 Is it possible?

00:38:49 It’s, yeah, until it’s possible.

00:38:52 It’s possible, but I’ll be incredibly surprised if that happens.

00:39:01 I’ll also be incredibly surprised that after all the decades that I’ve been doing this,

00:39:06 where every few years someone thinks, now we’ve got it.

00:39:10 Now we’ve got it.

00:39:12 Four or five years ago, I was saying, I don’t think we’ve got it yet.

00:39:15 And everyone was saying, you don’t understand how powerful AI is.

00:39:18 I had people tell me, you don’t understand how powerful it is.

00:39:22 I sort of had a track record of what the world had done to think, well, this is no different

00:39:30 from before.

00:39:31 Or we have bigger computers.

00:39:33 We had bigger computers in the 90s and we could do more stuff.

00:39:37 But okay, so let me push back because I’m generally sort of optimistic and try to find

00:39:43 the beauty in things.

00:39:44 I think there’s a lot of surprising and beautiful things that neural networks, this new generation

00:39:51 of deep learning revolution has revealed to me, has continually been very surprising

00:39:57 the kind of things it’s able to do.

00:39:59 Now, generalizing that over saying like this, we’ve solved intelligence.

00:40:03 That’s another big leap.

00:40:05 But is there something surprising and beautiful to you about neural networks where you actually

00:40:10 sat back and said, I did not expect this?

00:40:16 Oh, I think their performance on ImageNet was shocking.

00:40:22 The computer vision in those early days was just very like, wow, okay.

00:40:26 That doesn’t mean that they’re solving everything in computer vision we need to solve or in

00:40:32 vision for robots.

00:40:33 What about AlphaZero and self play mechanisms and reinforcement learning?

00:40:37 Yeah, that was all in the 90s.

00:40:39 Yeah, that was all in Donald Michie's 1961 paper.

00:40:44 Everything was there; that paper introduced reinforcement learning.

00:40:48 No, but come on.

00:40:49 So no, you’re talking about the actual techniques.

00:40:52 But isn’t it surprising to you the level it’s able to achieve with no human supervision

00:40:58 of chess play?

00:40:59 Like, to me, there’s a big, big difference between Deep Blue and…

00:41:05 Maybe what that’s saying is how overblown our view of ourselves is.

00:41:13 You know, the chess is easy.

00:41:16 Yeah, I mean, I came across this 1946 report that, and I’d seen this as a kid in one of

00:41:28 those books that my mother had given me actually.

00:41:30 The 1946 report, which pitted someone with an abacus against an electronic calculator,

00:41:39 and he beat the electronic calculator.

00:41:42 You know, so there at that point was, well, humans are still better than machines at calculating.

00:41:48 Are you surprised today that a machine can, you know, do a billion floating point operations

00:41:54 a second and, you know, you’re puzzling for minutes through one?

00:41:58 I mean, I don’t know, but I am certainly surprised there’s something, to me, different about

00:42:07 learning, so a system that’s able to learn.

00:42:10 Learning.

00:42:10 See, now you’re getting into one of the deadly sins.

00:42:15 Because of using terms overly broadly.

00:42:19 Yeah, I mean, there’s so many different forms of learning.

00:42:21 Yeah.

00:42:22 So many different forms.

00:42:23 You know, I learned my way around the city.

00:42:24 I learned to play chess.

00:42:26 I learned Latin.

00:42:28 I learned to ride a bicycle.

00:42:30 All of those are, you know, very different capabilities.

00:42:33 Yeah.

00:42:34 And if someone, you know, has a, you know, in the old days, people would write a paper

00:42:41 about learning something.

00:42:43 Now the corporate press office puts out a press release about how Company X is leading

00:42:52 the world because they have a system that can…

00:42:56 Yeah, but here’s the thing.

00:42:58 Okay.

00:42:58 So what is learning?

00:43:00 When I refer to…

00:43:00 Learning is many things.

00:43:02 But…

00:43:02 It’s a suitcase word.

00:43:04 It’s a suitcase word, but loosely, there’s a dumb system, and over time, it becomes smart.

00:43:13 Well, it becomes less dumb at the thing that it’s doing.

00:43:16 Smart is a loaded word.

00:43:19 Yes, less dumb at the thing it’s doing.

00:43:21 It gets better performance under some measure, under some set of conditions at that thing.

00:43:27 And most of these learning algorithms, learning systems, fail when you change the conditions

00:43:35 just a little bit in a way that humans don’t.

00:43:37 So I was at DeepMind when AlphaGo had just come out, and I said, what would have happened

00:43:45 if you’d given it a 21 by 21 board instead of a 19 by 19 board?

00:43:49 They said, fail totally.

00:43:51 But a human player would actually be able to play.

00:43:55 And actually, funny enough, if you look at DeepMind’s work since then, they’re presenting

00:44:02 a lot of algorithms that would do well at the bigger board.

00:44:07 So they’re slowly expanding this generalization.

00:44:10 I mean, to me, there’s a core element there.

00:44:12 I think it is very surprising to me that even in a constrained game of chess or Go, that

00:44:20 through self play, by a system playing itself, that it can achieve superhuman level performance

00:44:28 through learning alone.

00:44:29 Okay, so you didn’t like it when I referred to Donald Mickey’s 1961 paper.

00:44:38 There, in the second part of it, which came a year later, they had self play on an electronic

00:44:46 computer at tic tac toe, okay, but it learned to play tic tac toe through self play.

00:44:52 And it learned to play optimally.
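
A sketch in the spirit of that experiment: tabular self-play for tic-tac-toe with bead-style reinforcement, as in Michie's MENACE. The reward sizes, initial bead counts, and game count are arbitrary choices for this sketch, not Michie's:

```python
# Each position holds a "matchbox" of beads, one count per legal move.
# Moves are drawn proportionally to bead counts; after each game the
# winner's moves gain beads and the loser's lose beads.

import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

boxes = defaultdict(lambda: None)   # state string -> {move: bead count}

def pick_move(board):
    state = "".join(board)
    if boxes[state] is None:
        boxes[state] = {i: 3 for i, s in enumerate(board) if s == "."}
    moves, beads = zip(*boxes[state].items())
    return state, random.choices(moves, weights=beads)[0]

def play_game():
    board, history = ["."] * 9, {"X": [], "O": []}
    for turn in range(9):
        player = "XO"[turn % 2]
        state, move = pick_move(board)
        history[player].append((state, move))
        board[move] = player
        if winner(board):
            return winner(board), history
    return None, history            # draw

def reinforce(history, player, delta):
    for state, move in history[player]:
        boxes[state][move] = max(1, boxes[state][move] + delta)

for _ in range(50_000):             # pure self-play, no human games
    result, history = play_game()
    if result is None:
        reinforce(history, "X", +1); reinforce(history, "O", +1)
    else:
        loser = "O" if result == "X" else "X"
        reinforce(history, result, +3)
        reinforce(history, loser, -1)
```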

00:44:54 What I’m saying is, okay, I have a little bit of a bias, but I find ideas beautiful,

00:45:02 but only when they actually realize the promise.

00:45:06 That’s another level of beauty.

00:45:08 For example, what Bezos and Elon Musk are doing with rockets.

00:45:13 We had rockets for a long time, but doing reusable cheap rockets, it’s very impressive.

00:45:18 In the same way, I would have not predicted.

00:45:22 First of all, when I started and fell in love with AI, the game of Go was seen to be impossible

00:45:30 to solve.

00:45:31 Okay, so I thought maybe, you know, maybe it'd be possible to have big leaps in

00:45:38 a Moore's law style of way, in computation, and we'd be able to solve it.

00:45:42 But I would never have guessed that you can learn your way, however, I mean, in the narrow

00:45:50 sense of learning, learn your way to beat the best people in the world at the game of

00:45:55 Go without human supervision, not studying the games of experts.

00:45:59 Okay, so using a different learning technique, Arthur Samuel in the early 60s, and he was

00:46:08 the first person to use the term machine learning, had a program that could beat the world champion

00:46:14 at checkers.

00:46:16 And that at the time was considered amazing.

00:46:19 By the way, Arthur Samuel had some fantastic advantages.

00:46:23 Do you want to hear Arthur Samuel’s advantages?

00:46:25 Two things.

00:46:26 One, he was at the 1956 AI conference.

00:46:30 I knew Arthur later in life.

00:46:32 He was at Stanford when I was a graduate student there.

00:46:34 He wore a tie and a jacket every day, the rest of us didn’t.

00:46:38 Delightful man, delightful man.

00:46:42 It turns out Claude Shannon, in a 1950 Scientific American article, on chess playing, outlined

00:46:51 the learning mechanism that Arthur Samuel used, and they had met in 1956.

00:46:57 I assume there was some communication, but I don’t know that for sure.

00:47:00 But Arthur Samuel had been a vacuum tube engineer, getting reliability of vacuum tubes, and then

00:47:07 had overseen the first transistorized computers at IBM.

00:47:11 And in those days, before you shipped a computer, you ran it for a week to get early failures.

00:47:18 So he had this whole farm of computers running random code for hours and hours for each computer.

00:47:28 He had a whole bunch of them.

00:47:29 So he ran his checkers learning program with self play on IBM's production line.

00:47:38 He had more computation available to him than anyone else in the world, and then he was

00:47:43 able to produce a chess playing program, I mean a checkers playing program, that could

00:47:48 beat the world champion.
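
The shape of Samuel's method, as a hedged sketch: a linear evaluation over hand-designed features whose weights are nudged toward a better score estimate. The feature names, learning rate, and toy data are invented; this is the idea, not his checkers program:

```python
# Score positions with a weighted sum of features, then adjust the weights
# so the static score agrees better with what a deeper search over the
# self-play game (or the final outcome) later says.

FEATURES = ["piece_advantage", "king_advantage", "center_control", "mobility"]
weights = {f: 0.0 for f in FEATURES}
LEARNING_RATE = 0.01

def evaluate(feature_values):
    """Static evaluation: weighted sum of features for one position."""
    return sum(weights[f] * feature_values[f] for f in FEATURES)

def update(feature_values, target_score):
    """Move the static evaluation toward a better estimate of the score."""
    error = target_score - evaluate(feature_values)
    for f in FEATURES:
        weights[f] += LEARNING_RATE * error * feature_values[f]

# Toy usage: a position two pieces up eventually led to a win (+1), so the
# weight on piece advantage should drift upward over many repetitions.
position = {"piece_advantage": 2, "king_advantage": 0,
            "center_control": 1, "mobility": 3}
for _ in range(100):
    update(position, target_score=1.0)
print(weights)
```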

00:47:49 So that’s amazing.

00:47:51 The question is, what I mean by surprised, I don't just mean it's nice to have that accomplishment,

00:47:58 it's that there is a stepping towards something that feels more intelligent than before.

00:48:06 Yeah, but that’s in your view of the world.

00:48:08 Okay, well let me then, it doesn’t mean I’m wrong.

00:48:11 No, no it doesn’t.

00:48:13 So the question is, if we keep taking steps like that, how far that takes us?

00:48:18 Are we going to build a better recommender systems?

00:48:21 Are we going to build a better robot?

00:48:23 Or will we solve intelligence?

00:48:25 So, you know, I’m putting my bet on, but still missing a whole lot.

00:48:33 A lot.

00:48:34 And why would I say that?

00:48:36 Well, in these games, they’re all, you know, 100% information games, but again, but each

00:48:43 of these systems is a very short description of the current state, which is different from

00:48:50 registering and perception in the world, which gets back to Moravec's paradox.

00:48:55 I’m definitely not saying that chess is somehow harder than perception or any kind of, even

00:49:05 any kind of robotics in the physical world, I definitely think is way harder than the

00:49:10 game of chess.

00:49:10 So I was always much more impressed by the workings of the human mind.

00:49:15 It’s incredible.

00:49:15 The human mind is incredible.

00:49:17 I believe that from the very beginning, I wanted to be a psychiatrist for the longest

00:49:20 time.

00:49:20 I always thought that's way more incredible than the game of chess.

00:49:23 I think the game of chess is, I love the Olympics.

00:49:26 It’s just another example of us humans picking a task and then agreeing that a million humans

00:49:31 will dedicate their whole life to that task.

00:49:33 And that’s the cool thing that the human mind is able to focus on one task and then compete

00:49:39 against each other and achieve like weirdly incredible levels of performance.

00:49:44 That’s the aspect of chess that’s super cool.

00:49:46 Not that chess in itself is really difficult.

00:49:49 It’s like the Fermat’s last theorem is not in itself to me that interesting.

00:49:53 The fact that thousands of people have been struggling to solve that particular problem

00:49:57 is fascinating.

00:49:58 So can I tell you my disease in this way?

00:50:00 Sure.

00:50:01 Which actually is closer to what you’re saying.

00:50:03 So as a child, I was building various, I called them computers.

00:50:07 They weren’t general purpose computers.

00:50:09 Ice cube tray.

00:50:10 The ice cube tray was one.

00:50:11 But I built other machines.

00:50:12 And what I liked to build was machines that could beat adults at a game and the adults

00:50:18 couldn’t beat my machine.

00:50:19 Yeah.

00:50:19 So you were like, that’s powerful.

00:50:22 That’s a way to rebel.

00:50:24 Oh, by the way, when was the first time you built something that outperformed you?

00:50:33 Do you remember?

00:50:34 Well, I knew how it worked.

00:50:36 I was probably nine years old and I built a thing that was a game where you take turns

00:50:42 in taking matches from a pile and either the one who takes the last one or the one who

00:50:47 doesn’t take the last one wins.

00:50:48 I forget.

00:50:49 And so it was pretty easy to build that out of wires and nails and little coils that were

00:50:54 like plugging in the number and a few light bulbs.
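
That game is single-pile Nim, and the optimal strategy really is simple enough for wires and light bulbs. Assuming a take-1-to-3 rule (the transcript doesn't say how many matches a turn allowed), it is just modular arithmetic:

```python
# Leave the opponent a multiple of (max_take + 1) if taking the last match
# wins, or one more than a multiple of it if taking the last match loses.

def best_take(pile, last_match_wins=True, max_take=3):
    target = 0 if last_match_wins else 1
    take = (pile - target) % (max_take + 1)
    return take if take else 1   # losing position: take 1 and hope

# Demo: the machine plays both sides from a pile of 17.
pile = 17
while pile > 0:
    take = best_take(pile, last_match_wins=True)
    print(f"pile={pile}, take {take}")
    pile -= take
```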

00:50:59 The one I was proud of, I was 12 when I built a thing out of old telephone switchboard switches

00:51:07 that could always win at tic tac toe.

00:51:11 And that was a much harder circuit to design.

00:51:14 But again, it was no active components.

00:51:17 It was just three position switches: empty, X, or O.

00:51:23 And nine of them and a light bulb on which move it wanted next.

00:51:29 And then the human would go and move that.
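
A modern way to get the same never-losing behaviour is exhaustive minimax, since tic-tac-toe's game tree is tiny. This is a generic sketch of that strategy, not a reconstruction of the switch circuit:

```python
# Negamax over the full tic-tac-toe tree: returns (score, move) for the
# player to move, with score +1 for a win, 0 for a draw, -1 for a loss.

from functools import lru_cache

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

@lru_cache(maxsize=None)
def minimax(board, player):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return (1 if board[a] == player else -1), None
    if "." not in board:
        return 0, None                       # draw
    other = "O" if player == "X" else "X"
    best = (-2, None)
    for i, cell in enumerate(board):
        if cell == ".":
            child = board[:i] + player + board[i+1:]
            score = -minimax(child, other)[0]
            if score > best[0]:
                best = (score, i)
    return best

score, move = minimax("X........", "O")
print(move)   # 4: taking the center is the only reply to a corner
              # opening that doesn't lose against perfect play
```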

00:51:31 See, there’s magic in that creation.

00:51:33 There was.

00:51:33 Yeah, yeah.

00:51:34 I tend to see magic in robots. Like, I also think that intelligence is a little bit

00:51:43 overrated.

00:51:44 I think we can have deep connections with robots very soon.

00:51:49 And well, we’ll come back to connections for sure.

00:51:52 But I do want to say, I think too many people make the mistake of seeing that magic and

00:52:00 thinking, well, we’ll just continue.

00:52:02 But each one of those is a hard fought battle for the next step, the next step.

00:52:07 Yes.

00:52:08 The open question here is, and this is why I’m playing devil’s advocate, but I often

00:52:11 do when I read your blog post in my mind because I have like this eternal optimism, is it’s

00:52:18 not clear to me.

00:52:19 So I don’t do what obviously the journalists do or they give into the hype, but it’s not

00:52:23 obvious to me how many steps away we are from a truly transformational understanding of

00:52:34 what it means to build intelligent systems or how to build intelligent systems.

00:52:40 I’m also aware of the whole history of artificial intelligence, which is where your deep grounding

00:52:45 of this is, is there has been an optimism for decades and that optimism, just like reading

00:52:51 old optimism is absurd because people were saying things are

00:52:57 trivial for decades; since the sixties, they were saying everything is trivial.

00:53:00 Computer vision is trivial, but I think my mind is working crisply enough to where, I

00:53:07 mean, we can dig into if you want.

00:53:09 I’m really surprised by the things DeepMind has done.

00:53:12 I don’t think they’re so, they’re yet close to solving intelligence, but I’m not sure

00:53:19 it’s not 10 to 10 years away.

00:53:22 What I’m referring to is interesting to see when the engineering, it takes that idea to

00:53:30 scale and the idea works.

00:53:32 And no, it fools people.

00:53:34 Okay.

00:53:35 Honestly, Rodney, if it was you, me and Demis inside a room, forget the press, forget all

00:53:40 those things, just as a scientist, as a roboticist, that wasn’t surprising to you that at scale.

00:53:47 So we’re talking about very large now, okay, let’s pick one.

00:53:50 That’s the most surprising to you.

00:53:52 Okay.

00:53:52 Please don’t yell at me.

00:53:53 GPT-3, okay.

00:53:56 Hold on, hold on, I was going to say, okay, AlphaGo, AlphaGo Zero, Alpha

00:54:03 Zero, and then AlphaFold 1 and 2.

00:54:06 So do any of these kind of have this core of, forget usefulness or application and so

00:54:13 on, which you could argue for alpha fold, like, as a scientist, was those surprising

00:54:19 to you that it worked as well as it did?

00:54:23 Okay, so if we’re going to make the distinction between surprise and usefulness, and I have

00:54:30 to explain this, I would say AlphaFold, and one of the problems at the moment with Alpha

00:54:40 Fold is, you know, it gets a lot of them right, which is a surprise to me, because they're

00:54:44 a really complex thing, but you don’t know which ones it gets right, which then is a

00:54:51 bit of a problem.

00:54:52 Now they’ve come out with a recent…

00:54:53 You mean the structure of the proteins, it gets a lot of those right.

00:54:56 Yeah, it’s a surprising number of them right, it’s been a really hard problem.

00:55:00 So that was a surprise how many it gets right.

00:55:03 So far, the usefulness is limited, because you don’t know which ones are right or not,

00:55:07 and now they’ve come out with a thing in the last few weeks, which is trying to get a useful

00:55:12 tool out of it, and they may well do it.

00:55:15 In that sense, at least AlphaFold is different, because the AlphaFold tool is different,

00:55:21 because now it's producing data sets that are actually, you know, potentially revolutionizing

00:55:27 computational biology, like they will actually help a lot of people, but…

00:55:31 You would say potentially revolutionizing, we don’t know yet, but yeah.

00:55:36 That’s true, yeah.

00:55:36 But they’re, you know, but I got you.

00:55:39 I mean, this is…

00:55:40 Okay, so you know what, this is gonna be so fun, so let’s go right into it.

00:55:45 Speaking of robots that operate in the real world, let’s talk about self driving cars.

00:55:52 Oh, okay.

00:55:54 Okay, because you have built robotics companies, you’re one of the greatest roboticists in

00:56:00 history, and that’s not just in the space of ideas, we’ll also probably talk about that,

00:56:06 but in the actual building and execution of businesses that make robots that are useful

00:56:13 for people and that actually work in the real world and make money.

00:56:18 You also sometimes are critical of Mr. Elon Musk, or let’s more specifically focus on

00:56:24 this particular technology, which is autopilot inside Teslas.

00:56:29 What are your thoughts about Tesla autopilot, or more generally vision based machine learning

00:56:33 approach to semi autonomous driving?

00:56:38 These are robots, they’re being used in the real world by hundreds of thousands of people,

00:56:43 and if you want to go there, I can go there, but that’s not too much, which they’re…

00:56:49 Let’s say they’re on par safety wise as humans currently, meaning human alone versus human

00:56:57 plus robot.

00:56:58 Okay, so first let me say I really like the car I came in here today.

00:57:03 Which is?

00:57:06 2021 model, Mercedes E450.

00:57:12 I am impressed by the machine vision, sonar, other things.

00:57:19 I’m impressed by what it can do.

00:57:21 I’m really impressed with many aspects of it.

00:57:29 It’s able to stay in lane, is it?

00:57:31 Oh yeah, it does the lane stuff.

00:57:35 It’s looking on either side of me, it’s telling me about nearby cars.

00:57:40 For blind spots and so on.

00:57:41 Yeah, when I’m going in close to something in the park, I get this beautiful, gorgeous,

00:57:48 top down view of the world.

00:57:49 I am impressed up the wazoo at how registered and metrical it is.

00:57:56 So it’s like multiple cameras and it’s all ready to go to produce the 360 view kind of

00:58:00 thing?

00:58:00 360 view, it’s synthesized so it’s above the car, and it is unbelievable.

00:58:06 I got this car in January, it's the longest I've ever owned a car without dinging it.

00:58:11 So it’s better than me.

00:58:13 Me and it together are better.

00:58:15 So I’m not saying technology’s bad or not useful, but here’s my point.

00:58:24 Yes, it’s a replay of the same movie.

00:58:31 Okay, so maybe you’ve seen me ask this question before.

00:58:34 But when did the first car go over 55 miles an hour for over 10 miles on a public freeway

00:58:54 with other traffic around driving completely autonomously?

00:58:56 When did that happen?

00:58:59 Was it CMU in the 80s or something?

00:59:01 It was a long time ago.

00:59:02 It was actually in 1987 in Munich, at the Bundeswehr University.

00:59:09 So they had it running in 1987.

00:59:12 When do you think, and Elon has said he’s going to do this, when do you think we’ll

00:59:16 have the first car drive coast to coast in the US, hands off the wheel, feet off the

00:59:23 pedals, coast to coast?

00:59:25 As far as I know, a few people have claimed to do it.

00:59:28 1995, that was Carnegie Mellon.

00:59:30 I didn’t know, but oh, that was the, they didn’t claim, did they claim 100%?

00:59:35 Not 100%, not 100%.

00:59:37 And then there’s a few marketing people who have claimed 100% since then.

00:59:41 My point is that, you know, what I see happening again is someone sees a demo and they overgeneralize

00:59:50 and say, we must be almost there.

00:59:52 But we’ve been working on it for 35 years.

00:59:54 So that’s demos.

00:59:56 But this is going to take us back to the same conversation with AlphaZero.

00:59:59 Are you not, okay, I’ll just say what I am because I thought, okay, when I first started

01:00:06 interacting with the Mobileye implementation of Tesla Autopilot, I've driven a lot of cars,

01:00:12 you know, I’ve been in Google self driving car since the beginning.

01:00:18 I thought there was no way. Before I sat down and used Mobileye, I thought, just knowing

01:00:23 computer vision,

01:00:24 there's no way it could work as well as it was working.

01:00:26 So my model of the limits of computer vision was way more limited than the actual implementation

01:00:35 of Mobileye.

01:00:35 So that's one example.

01:00:37 I was really surprised.

01:00:39 It’s like, wow, that was that was incredible.

01:00:41 The second surprise came when Tesla threw away Mobileye and started from scratch.

01:00:50 I thought there’s no way they can catch up to Mobileye.

01:00:52 I thought what Mobileye was doing was kind of incredible, like the amount of work and

01:00:56 the annotation.

01:00:56 Yeah, well, Mobileye was started by Amnon Shashua and used a lot of traditional, you

01:01:01 know, hard fought computer vision techniques.

01:01:04 But they also did a lot of good sort of like non research stuff, like actual like just

01:01:11 good, like what you do to make a successful product, right?

01:01:14 Scale, all that kind of stuff.

01:01:16 And so I was very surprised when they from scratch were able to catch up to that.

01:01:20 That’s very impressive.

01:01:21 And I’ve talked to a lot of engineers that was involved.

01:01:23 This is that was impressive.

01:01:25 That was impressive.

01:01:27 And the recent progress, especially with the involvement of Andrej Karpathy, what they

01:01:34 were doing, what they're doing with the data engine, which is converting the driving task

01:01:40 into these multiple tasks and then doing this edge case discovery where they're pulling back

01:01:45 data, like the level of engineering made me rethink what's possible.
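
[Aside: a toy, self-contained sketch of the data engine loop being described here. Every name and number is a stand-in for the general idea, deploy, flag hard cases, mine for more like them, retrain; none of it is Tesla's actual pipeline.]

```python
def find_edge_cases(predictions, ground_truth, threshold=0.5):
    """Flag samples where the model disagrees badly with the driver/labels."""
    return [i for i, (p, g) in enumerate(zip(predictions, ground_truth))
            if abs(p - g) > threshold]

def data_engine_round(model_predict, dataset, labels):
    # 1. Run the current model over logged fleet data.
    preds = [model_predict(x) for x in dataset]
    # 2. Flag the cases it gets wrong (disagreements, interventions, ...).
    hard = find_edge_cases(preds, labels)
    # 3. In a real pipeline: mine the fleet for similar cases, label them,
    #    oversample them in retraining, redeploy, and repeat.
    return hard

# Toy usage: a "model" that underestimates everything by half.
data = [0.1, 0.4, 0.9, 0.95]
labels = [0.0, 0.0, 1.0, 1.0]
model = lambda x: x * 0.5
print(data_engine_round(model, data, labels))  # -> [2, 3], the hard cases
```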

01:01:49 I don’t I still, you know, I don’t know to that intensity, but I always thought it was

01:01:55 very difficult to solve autonomous driving with all the sensors, with all the computation.

01:02:00 I just thought it’s a very difficult problem.

01:02:02 But I’ve been continuously surprised how much you can engineer.

01:02:07 First of all, the data acquisition problem, because, you know, just because

01:02:12 I worked with a lot of car companies, and they're a little bit old school,

01:02:20 I didn't think they could do this at scale, like AWS style data collection.

01:02:25 So when Tesla was able to do that, I started to think, OK, so what are the limits of this?

01:02:33 I still believe that driver sensing and the interaction with the driver and studying

01:02:40 the human factors psychology problem is essential.

01:02:43 It's always going to be there.

01:02:45 It’s always going to be there, even with fully autonomous driving.

01:02:48 But I’ve been surprised what is the limit, especially a vision based alone, how far that

01:02:55 can take us.

01:02:57 So that’s my levels of surprise now.

01:03:00 OK, can you explain, in the same way you said, like, AlphaZero, that's a homework problem

01:03:07 that's scaled large, it's chess, Go, who cares?

01:03:10 Whereas here's actual people using an actual car and driving.

01:03:15 Many of them drive more than half their miles using the system.

01:03:19 Right.

01:03:20 So, yeah, they're doing well with pure vision, pure vision.

01:03:24 Yeah.

01:03:25 And, you know, now no radar, which, I suspect, can't go all the way.

01:03:30 And one reason is, without new cameras that have a dynamic range closer to the human

01:03:36 eye, because the human eye has incredible dynamic range.

01:03:39 And we make use of that dynamic range; it's 11 orders of magnitude or some crazy number

01:03:46 like that.

01:03:47 The cameras don't have that, which is why you see the bad cases where the sun is

01:03:53 on a white thing and it blinds the camera in a way it wouldn't blind a person.
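
[Aside: a quick back-of-envelope on that dynamic range gap, converting orders of magnitude into photographic stops. The camera figure below is an assumed ballpark, not a measured spec for any sensor.]

```python
import math

# "11 orders of magnitude" of usable range for the (adapting) human eye:
eye_stops = 11 * math.log2(10)   # ~36.5 stops (factors of 2)

# A typical automotive/consumer image sensor: assume ~13 stops.
camera_stops = 13

print(f"eye: ~{eye_stops:.1f} stops, camera: ~{camera_stops} stops")
print(f"shortfall: ~{eye_stops - camera_stops:.1f} stops")
# That shortfall is why low sun on a white surface can saturate the
# sensor in a way it wouldn't blind a human driver.
```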

01:03:59 I think there’s a bunch of things to think about before you say this is so good, it’s

01:04:06 just going to work.

01:04:06 OK, and I’ll come at it from multiple angles.

01:04:12 And I know you’ve got a lot of time.

01:04:13 Yeah.

01:04:14 OK, let's… I have thought about these things.

01:04:17 Yeah, I know.

01:04:18 You’ve been writing a lot of great blog posts about it for a while before Tesla had autopilot.

01:04:24 Right.

01:04:25 So you’ve been thinking about autonomous driving for a while from every angle.

01:04:29 So a few things. You know, in the US, I think that the death rate,

01:04:36 the death rate from motor vehicle accidents, is about thirty five thousand a year,

01:04:44 which is an outrageous number, not outrageous compared to COVID deaths.

01:04:49 But, you know, there is no rationality.

01:04:52 And that’s part of the thing people have said.

01:04:54 Engineers say to me, well, if we cut down the number of deaths by 10 percent by having

01:04:58 autonomous driving, that’s going to be great.

01:05:01 Everyone will love it.

01:05:02 And my prediction is that if autonomous vehicles kill more than 10 people a year, they’ll be

01:05:09 screaming and hollering, even though thirty five thousand people a year have been killed

01:05:14 by human drivers.

01:05:16 It’s not rational.

01:05:17 It’s a different set of expectations.

01:05:20 And that will probably continue.

01:05:23 So there’s that aspect of it.

01:05:25 The other aspect of it is that when we introduce new technology, we often change the rules

01:05:34 of the game.

01:05:36 So when we introduced cars first into our daily lives, we completely rebuilt our cities

01:05:45 and we changed all the laws.

01:05:46 Yeah, jaywalking was not an offense; that was pushed by the car companies so that people

01:05:52 would stay off the road, so there wouldn't be deaths from pedestrians getting hit.

01:05:57 We completely changed the structure of our cities and had these foul smelling things

01:06:02 everywhere around us.

01:06:04 And now you see pushback in cities; Barcelona is really trying to exclude cars, et cetera.

01:06:11 So I think that to get to self driving, to large adoption, it's not going to be

01:06:21 just take the current situation, take out the driver, and have the same car doing the

01:06:27 same stuff, because the edge cases are too many.

01:06:31 Here’s an interesting question.

01:06:33 How many fully autonomous train systems do we have in the U.S.?

01:06:41 I mean, do you count them as fully autonomous?

01:06:43 I don’t know because they’re usually as a driver, but they’re kind of autonomous, right?

01:06:47 No, let’s get rid of the driver.

01:06:51 Okay.

01:06:51 I don’t know.

01:06:52 It’s either 15 or 16.

01:06:54 Most of them are in airports.

01:06:56 There’s a few that are fully autonomous.

01:06:59 Seven are in airports, and there's two that go about five kilometers

01:07:06 out of airports.

01:07:11 When is the first fully autonomous train system for mass transit expected to operate fully

01:07:17 autonomously with no driver in a U.S.

01:07:22 city?

01:07:23 It’s expected to operate in 2017 in Honolulu.

01:07:27 Oh, wow.

01:07:29 It’s delayed, but they will get there.

01:07:32 BART, by the way, was originally going to be autonomous here in the Bay Area.

01:07:35 I mean, they’re all very close to fully autonomous, right?

01:07:38 Yeah, but getting that close is the thing.

01:07:41 And I’ve often gone on a fully autonomous train in Japan, one that goes out to that

01:07:48 fake island in the middle of Tokyo Bay.

01:07:50 I forget the name of that.

01:07:53 And what do you see when you look at that?

01:07:55 What do you see when you go to a fully autonomous train in an airport?

01:08:03 It’s not like regular trains.

01:08:07 At every station, there’s a double set of doors so that there’s a door of the train

01:08:12 and there’s a door off the platform.

01:08:18 And this is really visible in this Japanese one because it goes out in amongst buildings.

01:08:23 The whole track is built so that people can’t climb onto it.

01:08:27 Yeah.

01:08:27 So there’s an engineering that then makes the system safe and makes them acceptable.

01:08:32 I think we’ll see similar sorts of things happen in the U.S.

01:08:37 What surprised me, I thought, wrongly, that we would have special purpose lanes on 101

01:08:46 in the Bay Area, the leftmost lane, so that it would be normal for Teslas or other cars

01:08:55 to move into that lane and then say, okay, now it’s autonomous and have that dedicated lane.

01:09:00 I was expecting movement to that.

01:09:03 Five years ago, I was expecting we’d have a lot more movement towards that.

01:09:06 We haven’t.

01:09:07 And it may be because Tesla's been overpromising by calling their system full

01:09:12 self driving. I think they may have gotten there quicker by collaborating to change the

01:09:21 infrastructure.

01:09:23 This is one of the problems with long haul trucking being autonomous.

01:09:30 I think it makes sense on freeways at night for the trucks to go autonomously, but then

01:09:38 the question is, how do you get onto and off of the freeway?

01:09:40 What sort of infrastructure do you need for that?

01:09:43 Do you need to have the human in there to do that or can you get rid of the human?

01:09:48 So I think there’s ways to get there, but it’s an infrastructure argument because the

01:09:55 long tail of cases is very long and the acceptance of it will not be at the same level as human

01:10:02 drivers.

01:10:02 So I’m with you still, and I was with you for a long time, but I am surprised how well

01:10:09 how many edge cases of machine learning and vision based methods can cover.

01:10:15 This is what I’m trying to get at is I think there’s something fundamentally different

01:10:22 with vision based methods and Tesla Autopilot and any company that’s trying to do the same.

01:10:27 Okay, well, I’m not going to argue with you because, you know, we’re speculating.

01:10:34 Yes, but, you know, my gut feeling tells me things will speed up when

01:10:43 there is engineering of the environment because that’s what happened with every other technology.

01:10:48 I’m a bit, I don’t know about you, but I’m a bit cynical that infrastructure is going

01:10:53 to rely on government to help out in these cases.

01:11:00 If you just look at infrastructure in all domains, it's just that government always drags

01:11:05 behind on infrastructure.

01:11:07 There’s like there’s so many just well in this country in the future.

01:11:11 Sorry.

01:11:12 Yes, in this country.

01:11:13 And of course, there’s many, many countries that are actually much worse on infrastructure.

01:11:17 Oh, yes, many of them are much worse, and there's some that are much worse.

01:11:21 You know, like high speed rail, the other countries are much better.

01:11:25 I guess my question is, like, which is at the core of what I was trying to think through

01:11:31 here and ask is like, how hard is the driving problem as it currently stands?

01:11:37 So you mentioned, like, we don’t want to just take the human out and duplicate whatever

01:11:41 the human was doing.

01:11:42 But if we were to try to do that, what, how hard is that problem?

01:11:48 Because I used to think it's way harder.

01:11:52 Like, I used to think, with vision alone, it would be three decades, four decades.

01:11:59 Okay, so I don’t know the answer to this thing I’m about to pose, but I do notice that on

01:12:06 Highway 280 here in the Bay Area, which largely has concrete surface rather than blacktop

01:12:13 surface, the white lines that are painted there now have black boundaries around them.

01:12:20 And my lane drift system in my car would not work without those black boundaries.

01:12:27 Interesting.

01:12:28 So I don’t know whether they started doing it to help the lane drift, whether it is an

01:12:32 instance of infrastructure following the technology, but my car's

01:12:41 lane keeping would not perform as well without that change in the way they paint the lines.

01:12:45 Unfortunately, really good lane keeping is not as valuable.

01:12:50 Like, it’s orders of magnitude more valuable to have a fully autonomous system.

01:12:54 Like, yeah, but for me, lane keeping is really helpful because I’m more healthy at it.

01:13:00 But you wouldn’t pay 10 times.

01:13:03 Like, the problem is, there's no financial case, it doesn't make sense to revamp the

01:13:11 infrastructure to make lane keeping easier.

01:13:14 It does make sense to revamp the infrastructure.

01:13:17 If you have a large fleet of autonomous vehicles, now you change what it means to own cars,

01:13:22 you change the nature of transportation.

01:13:24 But for that, you need autonomous vehicles.

01:13:29 Let me ask you about Waymo then.

01:13:31 I’ve gotten a bunch of chances to ride in a Waymo self driving car.

01:13:37 And they’re, I don’t know if you’d call them self driving, but.

01:13:40 Well, I mean, I rode in one before they were called Waymo when I was still at X.

01:13:45 So there’s currently, there’s a big leap, another surprising leap I didn’t think would

01:13:50 happen, which is they have no driver currently.

01:13:53 Yeah, in Chandler.

01:13:55 In Chandler, Arizona.

01:13:56 And I think they’re thinking of doing that in Austin as well.

01:13:58 But they’re expanding.

01:14:01 Although, you know, and I do an annual checkup on this.

01:14:06 So as of late last year, they were aiming for hundreds of rides a week, not thousands.

01:14:14 And there is no one in the car, but there’s certainly safety people in the loop.

01:14:22 And it’s not clear how many, you know, what the ratio of cars to safety people is.

01:14:26 It wasn’t, obviously, they’re not 100% transparent about this.

01:14:31 None of them are 100% transparent.

01:14:33 They’re very untransparent.

01:14:34 But at least, I don't want to say it definitively, but they're saying

01:14:39 there’s no teleoperation.

01:14:42 So like, they’re, I mean, okay.

01:14:45 And that sort of fits with YouTube videos I’ve seen of people being trapped in the car

01:14:52 by a red cone on the street.

01:14:55 And they do have rescue vehicles that come, and then a person gets in and drives it.

01:15:01 Yeah.

01:15:02 But isn’t it incredible to you, it was to me, to get in a car with no driver and watch

01:15:09 the steering wheel turn, like for somebody who has been studying, at least certainly

01:15:15 the human side of autonomous vehicles for many years, and you’ve been doing it for way

01:15:18 longer, like it was incredible to me that this could actually happen.

01:15:22 I don’t care if that scale is 100 cars.

01:15:24 This is not a demo.

01:15:25 This is not, this is me as a regular human.

01:15:28 The argument I have is that people make interpolations from that.

01:15:33 Interpolations.

01:15:33 That, you know, it’s here, it’s done.

01:15:37 You know, it’s just, you know, we’ve solved it.

01:15:39 No, we haven’t yet.

01:15:40 And that’s my argument.

01:15:42 Okay.

01:15:42 So I’d like to go to, you keep a list of predictions on your amazing blog post.

01:15:48 It’d be fun to go through them.

01:15:49 But before then, let me ask you about this.

01:15:51 You have a harshness to you sometimes in your criticisms of what is perceived as hype.

01:16:05 And so like, because people extrapolate, like you said, and they kind of buy into the hype

01:16:10 and then they kind of start to think that the technology is way better than it is.

01:16:18 But let me ask you maybe a difficult question.

01:16:22 Sure.

01:16:23 Do you think if you look at history of progress, don’t you think to achieve the quote impossible,

01:16:30 you have to believe that it’s possible?

01:16:32 Oh, absolutely.

01:16:34 Yeah.

01:16:34 Look, here's two great ones, great, unbelievable: 1903, first human, you know,

01:16:46 heavier than air flight.

01:16:49 Yeah.

01:16:50 1969, we land on the moon.

01:16:52 That’s 66 years.

01:16:53 I’m 66 years old in my lifetime, that span of my lifetime, barely, you know, flying,

01:17:00 I don’t know what it was, 50 feet, the length of the first flight or something to landing

01:17:05 on the moon.

01:17:06 Unbelievable.

01:17:08 Fantastic.

01:17:08 But that requires, by the way, one of the Wright brothers, both of them, but one of

01:17:13 them didn’t believe it’s even possible like a year before.

01:17:16 Right.

01:17:16 So, like, not just possible soon, but like ever.

01:17:20 So, you know.

01:17:21 How important is it to believe and be optimistic is what I guess.

01:17:24 Oh, yeah, it is important.

01:17:26 It’s when it goes crazy, when I, you know, you said that, what was the word you used

01:17:32 for my bad?

01:17:33 Harshness.

01:17:33 Harshness.

01:17:34 Yes.

01:17:40 I just get so frustrated.

01:17:41 Yes.

01:17:42 When people make these leaps and tell me that I don't understand, you know, yeah.

01:17:53 There’s just from iRobot, which I was co founder of.

01:17:57 Yeah.

01:17:57 I don’t know the exact numbers now because I haven’t, it’s 10 years since I stepped

01:18:00 off the board, but I believe it’s well over 30 million robots cleaning houses from that

01:18:06 one company.

01:18:06 And now there’s lots of other companies.

01:18:08 Yes.

01:18:08 Was that a crazy idea that we had to believe in 2002 when we released it?

01:18:14 Yeah, that was, we had to, you know, believe that it could be done.

01:18:20 Let me ask you about this.

01:18:21 So iRobot, one of the greatest robotics companies ever in terms of creating a robot that actually

01:18:28 works in the real world, probably the greatest robotics company ever.

01:18:31 You were the co founder of it.

01:18:33 If, if the Rodney Brooks of today talked to the Rodney of back then, what would you tell

01:18:40 him?

01:18:41 Cause I have a sense that you would pat him on the back and say, well, what you're doing is

01:18:47 going to fail, but go at it anyway.

01:18:50 That’s what I’m referring to with the harshness.

01:18:54 You’ve accomplished an incredible thing there.

01:18:56 One of the several things, you know, you've

01:19:01 done several things we'll talk about.

01:19:03 Well, like that’s what I’m trying to get at that line.

01:19:06 No, it’s, it’s when my harshness is reserved for people who are not doing it, who claim

01:19:14 it’s just, well, this shows that it’s just going to happen.

01:19:16 But here, here’s the thing.

01:19:18 This shows.

01:19:19 But you have that harshness for Elon too.

01:19:24 And no, no, it’s a different harshness.

01:19:26 No, it’s, it’s a different argument with Elon.

01:19:30 I think SpaceX is an amazing company.

01:19:34 On the other hand, you know, I, in one of my blog posts, I said, what’s easy and what’s

01:19:40 hard.

01:19:40 I said, yeah, SpaceX vertical landing rockets.

01:19:44 It had been done before.

01:19:46 Grid fins had been done since the sixties.

01:19:48 Every Soyuz has them.

01:19:52 Reusable spacecraft, the DC-X reused rockets that landed vertically.

01:19:58 There’s a whole insurance industry in place for rocket launches.

01:20:02 There are all sorts of infrastructure that was doable.

01:20:07 It took a great entrepreneur, at great personal expense.

01:20:11 He almost drove himself, you know, bankrupt doing it, and a great belief to do it.

01:20:18 Whereas Hyperloop, there’s a whole bunch more stuff that’s never been thought about and

01:20:25 never been demonstrated.

01:20:28 So my estimation is Hyperloop is a long, long, long, a lot further off.

01:20:33 But, and if I’ve got a criticism of, of, of Elon, it’s that he doesn’t make distinctions

01:20:39 between when the technology’s coming along and ready.

01:20:44 And then he’ll go off and mouth off about other things, which then people go and compete

01:20:50 about and try and do.

01:20:51 And so this is where, I understand what you're saying.

01:20:57 I tend to draw a different distinction.

01:21:00 I have a similar kind of harshness towards people who are not telling the truth, who

01:21:06 are basically fabricating stuff to make money or to… Well, he believes what he says.

01:21:11 I just think that’s a very important difference because I think in order to fly, in order

01:21:18 to get to the moon, you have to believe even when most people tell you you’re wrong and

01:21:24 most likely you’re wrong, but sometimes you’re right.

01:21:26 I mean, that’s the same thing I have with Tesla autopilot.

01:21:29 I think that’s an interesting one.

01:21:31 I was, especially when I was at MIT and just the entire human factors in the robotics community

01:21:38 were very negative towards Elon.

01:21:40 It was very interesting for me to observe colleagues at MIT.

01:21:45 I wasn’t sure what to make of that.

01:21:46 That was very upsetting to me because I understood where that, where that’s coming from.

01:21:51 And I agreed with them and I kind of almost felt the same thing in the beginning until

01:21:56 I kind of opened my eyes and realized there’s a lot of interesting ideas here that might

01:22:01 be overhyped.

01:22:02 You know, if you focus yourself on the idea that you shouldn’t call a system full self

01:22:09 driving when it’s obviously not autonomous, fully autonomous, you’re going to miss the

01:22:16 magic.

01:22:16 Oh, yeah, you are going to miss the magic.

01:22:18 But at the same time, there are people who buy it, literally pay money for it and take

01:22:25 those words as given.

01:22:27 So it’s, but I haven’t.

01:22:30 So that I take words as given is one thing.

01:22:33 I haven’t actually seen people that use autopilot that believe that the behavior is really important,

01:22:39 like the actual action.

01:22:40 So like, this is to push back on the very thing that you’re frustrated about, which

01:22:45 is like journalists and general people buying all the hype and going out in the same way.

01:22:52 I think there’s a lot of hype about the negatives of this, too, that people are buying without

01:22:57 using it. The way people use it, this is what, this was…

01:23:01 This opened my eyes.

01:23:02 Actually, the way people use a product is very different than the way they talk about

01:23:07 it.

01:23:07 This is true with robotics, with everything.

01:23:09 Everybody has dreams of how a particular product might be used or so on.

01:23:13 And then when it meets reality, there’s a lot of fear of robotics, for example, that

01:23:17 robots are somehow dangerous and all those kinds of things.

01:23:20 But when you actually have robots in your life, whether it’s in the factory or in the

01:23:23 home, making your life better, that's going to be way different.

01:23:28 Your perceptions of it are going to be way different.

01:23:30 And so my tension was just like, here's an innovator.

01:23:34 Super Cruise from Cadillac was super interesting, too.

01:23:41 That’s a really interesting system.

01:23:43 We should be excited by those innovations.

01:23:45 OK, so can I tell you something that’s really annoyed me recently?

01:23:49 It’s really annoyed me that the press and friends of mine on Facebook are going, these

01:23:56 billionaires and their space games, why are they doing that?

01:23:59 And that really, really pisses me off.

01:24:02 I must say, I applaud that.

01:24:05 I applaud it.

01:24:06 It’s the taking and not necessarily the people who are doing the things, but, you know, that

01:24:13 I keep having to push back against unrealistic expectations when these things can become

01:24:19 real.

01:24:20 Yeah, this was interesting, because a particular focus for me is autonomous

01:24:26 driving, Elon's predictions of when certain milestones will be hit.

01:24:30 There's several things to be said there that I always thought about, because whenever

01:24:37 he said them, it was obvious, to me as a person kind of not inside

01:24:44 the system it was obvious,

01:24:46 that it's unlikely to hit those.

01:24:48 There’s two comments I want to make.

01:24:50 One, he legitimately believes it.

01:24:54 And two, much more importantly, I think that having ambitious deadlines drives people to

01:25:04 do the best work of their life, even when the odds of those deadlines are very low.

01:25:09 To a point, and I’m not talking about anyone here, I’m just saying.

01:25:12 So there’s a line there, right?

01:25:14 You have to have a line because you overextend and it’s demoralizing.

01:25:20 It’s demoralizing, but I will say that there’s an additional thing here that those words

01:25:28 also drive the stock market.

01:25:34 And because of the way that rich people in the past have manipulated the rubes through

01:25:42 investment, we have developed laws about what you’re allowed to say.

01:25:49 And you know, there's an area here which is, I tend to be, maybe I'm naive, but I tend to

01:25:58 believe that, like, engineers, innovators, people like that, they don't

01:26:06 think like that, like manipulating the stock price.

01:26:09 But it’s possible that I’m I’m certain it’s possible that I’m wrong.

01:26:13 It’s a very cynical view of the world because I think most people that run companies, especially

01:26:21 original founders, they… Yeah, I'm not saying that's the intent.

01:26:27 I’m saying it’s eventually it’s kind of you you you you fall into that kind of behavior

01:26:33 pattern.

01:26:33 I don’t know.

01:26:33 I tend to I wasn’t saying I wasn’t saying it’s falling into that intent.

01:26:37 It’s just you also have to protect investors in this environment.

01:26:43 In this market.

01:26:44 Yeah.

01:26:45 OK, so you have first of all, you have an amazing blog that people should check out.

01:26:50 But you also have this in that blog, a set of predictions.

01:26:54 Such a cool idea.

01:26:55 I don’t know how long ago you started, like three, four years ago.

01:26:58 It was January 1st, 2018.

01:27:01 18.

01:27:02 And I made these predictions and I said that every January 1st, I was going to check back

01:27:07 on how my predictions.

01:27:09 That’s such a great thought experiment.

01:27:10 For 32 years.

01:27:11 Oh, you said 32 years.

01:27:13 I said 32 years because that'll be January 1st, 2050.

01:27:16 I'll be, I will have just turned ninety

01:27:21 five. You know, and so people know, your predictions, at least for now, are in the

01:27:31 space of artificial intelligence.

01:27:33 Yeah, I didn’t say I was going to make new predictions.

01:27:34 I was just going to measure this set of predictions that I made because I was sort

01:27:38 of annoyed that everyone could make predictions.

01:27:40 They didn’t come true and everyone forgot.

01:27:42 So I should hold myself to a high standard.

01:27:44 Yeah, but also just putting years and like date ranges on things.

01:27:48 It’s a good thought exercise.

01:27:50 Yeah, and like reasoning your thoughts out.

01:27:52 And so the topics are artificial intelligence, autonomous vehicles and space.

01:27:58 Yeah.

01:28:00 I was wondering if we could just go through some that stand out maybe from memory.

01:28:04 I can just mention to you some.

01:28:06 Let’s talk about self driving cars, like some predictions that you’re particularly proud

01:28:10 of or are particularly interesting, from flying cars to… The other element here is like how

01:28:20 widespread the locations where the autonomous vehicles are deployed are.

01:28:25 And there’s also just a few fun ones.

01:28:27 Is there something that jumps to mind that you remember from the predictions?

01:28:31 Well, I think I did put in there that there would be a dedicated self driving lane on

01:28:37 101 by some year, and I think I was over optimistic on that one.

01:28:42 Yeah, actually.

01:28:42 Yeah, I actually do remember that.

01:28:44 But you I think you were mentioning like difficulties at different cities.

01:28:48 Yeah.

01:28:50 Cambridge, Massachusetts, I think was an example.

01:28:52 Yeah, like in Cambridgeport, you know, I lived in Cambridgeport for a number of years,

01:28:56 and you know, the roads are narrow and getting anywhere as a human driver is incredibly

01:29:02 frustrating, and people drive the wrong way on one way streets there.

01:29:07 It’s just your prediction was driverless taxi services operating on all streets in

01:29:14 Cambridgeport, Massachusetts, in 2035.

01:29:21 Yeah.

01:29:21 And that may have been too optimistic.

01:29:25 You think so?

01:29:26 You know, I’ve gotten a little more pessimistic since I made these internally on some of these

01:29:31 things.

01:29:31 So can you put a year to a major milestone of deployment of a taxi service in a few

01:29:42 major cities, like something where you feel like autonomous vehicles are here?

01:29:47 So let’s let’s take the grid streets of San Francisco north of market.

01:29:55 Okay.

01:29:56 Okay.

01:29:57 Relatively benign environment, the streets are wide, the major problem is delivery trucks

01:30:07 stopping everywhere, which made things more complicated.

01:30:12 Taxi system there with somewhat designated pickup and drop offs, unlike with Uber and

01:30:21 Lyft, where you can sort of get to any place and the drivers will figure out how to get

01:30:28 in there.

01:30:30 We’re still a few years away.

01:30:32 I, you know, I live in that area.

01:30:35 So I see, you know, the self driving car companies' cars, multiple ones, every day

01:30:42 now. There's Cruise, Zoox less often, Waymo all the time, and different ones

01:30:52 come and go.

01:30:53 And there’s always a driver.

01:30:55 There’s always a driver at the moment, although I have noticed that sometimes the driver does

01:31:02 not have the authority to take over without talking to the home office, because they will

01:31:08 sit there waiting for a long time, and clearly something’s going on where the home office

01:31:14 is making a decision.

01:31:16 So they’re, you know, and, and so you can see whether they’ve got their hands on the

01:31:21 wheel or not.

01:31:22 And, and it’s the incident resolution time that tells you, gives you some clues.

01:31:28 So what year do you think, what’s your intuition?

01:31:30 What date range are you currently thinking?

01:31:34 Are you thinking San Francisco would have autonomous taxi service from any point

01:31:42 A to any point B without a driver?

01:31:47 Are you still, are you thinking 10 years from now, 20 years from now, 30 years from now?

01:31:53 Certainly not 10 years from now.

01:31:55 It’s going to be longer.

01:31:56 If you’re allowed to go south of market way longer.

01:31:59 And unless it’s reengineering of roads.

01:32:03 By the way, what’s the biggest challenge?

01:32:05 You mentioned a few.

01:32:06 Is it, is it the delivery trucks?

01:32:09 Is it the edge cases, the computer perception? Well, here's a case that I saw outside my

01:32:15 house a few weeks ago, about 8pm on a Friday night, it was getting dark, it was before

01:32:20 the solstice.

01:32:23 It was a Cruise vehicle; it came down the hill, turned right, and stopped dead, covering the

01:32:32 crosswalk.

01:32:33 Why did it stop dead?

01:32:35 Because there was a human just two feet from it.

01:32:38 Now, I just glanced, I knew what was happening.

01:32:41 The human was a woman was at the door of her car trying to unlock it with one of those

01:32:47 things that, you know, when you don’t have a key.

01:32:50 That car thought, oh, she could jump out in front of me any second.

01:32:55 As a human, I could tell, no, she’s not going to jump out.

01:32:57 She’s busy trying to unlock her.

01:32:59 She’s lost her keys.

01:33:00 She’s trying to get in the car.

01:33:01 And it stayed there until I got bored.

01:33:05 And so the human driver in there did not take over.

01:33:11 But here’s the kicker to me.

01:33:14 A guy comes down the hill with a stroller, I assume there’s a baby in there, and now

01:33:22 the crosswalk’s blocked by this cruise vehicle.

01:33:25 What’s he going to do?

01:33:27 Cleverly, I think, he decided not to go in front of the car.

01:33:30 But he had to go behind it.

01:33:34 He had to get off the crosswalk, out into the intersection, to push his baby around

01:33:39 this car, which was stopped there.

01:33:41 And no human driver would have stopped there for that length of time.

01:33:44 They would have got out and out of the way.

01:33:46 And that’s another one of my pet peeves, that safety is being compromised for individuals

01:33:56 who didn’t sign up for having this happen in their neighborhood.

01:33:59 Now you can say that’s an edge case, but…

01:34:03 Yeah, well, I’m in general not a fan of anecdotal evidence for stuff like this is one of my

01:34:13 biggest problems with the discussion of autonomous vehicles in general, people that criticize

01:34:17 them or support them are using edge cases, are using anecdotal evidence, but I got you.

01:34:24 Your question is, when is it going to happen in San Francisco?

01:34:26 I say not soon, but it’s going to be one of them.

01:34:29 But where it is going to happen is in limited domains, campuses of various sorts, gated

01:34:38 communities where the other drivers are not arbitrary people.

01:34:46 They’re people who know about these things, they’ve been warned about them, and at velocities

01:34:52 where it’s always safe to stop dead.

01:34:57 You can’t do that on the freeway.

01:34:58 That I think we’re going to start to see, and they may not be shaped like current cars,

01:35:06 they may be things like May Mobility has those things and various companies have these.

01:35:12 Yeah, I wonder if that’s a compelling experience.

01:35:14 To me, it’s not just about automation, it’s about creating a product that makes your…

01:35:20 It’s not just cheaper, but it’s fun to ride.

01:35:23 One of the least fun things is for a car that stops and waits.

01:35:29 There’s something deeply frustrating for us humans for the rest of the world to take advantage

01:35:34 of us as we wait.

01:35:35 But think about not you as the customer, but someone who’s in their 80s in a retirement

01:35:47 village whose kids have said, you’re not driving anymore, and this gives you the freedom to

01:35:53 go to the market.

01:35:54 That’s a hugely beneficial thing, but it’s a very few orders of magnitude less impact

01:35:59 on the world.

01:36:00 It’s just a few people in a small community using cars as opposed to the entirety of the

01:36:05 world.

01:36:07 I like that the first time that a car equipped with some version of a solution to the trolley

01:36:13 problem is…

01:36:14 What’s NIML stand for?

01:36:16 Not in my lifetime.

01:36:17 Not in my lifetime.

01:36:17 I define my lifetime as up to 2050.

01:36:20 You know, I ask you, when have you had to decide which person shall I kill?

01:36:29 No, you put the brakes on and you brake as hard as you can.

01:36:31 You’re not making that decision.

01:36:35 I do think autonomous vehicles or semi autonomous vehicles do need to solve the whole pedestrian

01:36:41 problem that has elements of the trolley problem within it, but it’s not…

01:36:45 Yeah, well, and I talk about it in one of the articles or blog posts that I wrote, and

01:36:51 people have told me, one of my coworkers has told me he does this.

01:36:56 He tortures autonomously driven vehicles and pedestrians will torture them.

01:37:01 Now, once they realize that putting one foot off the curb makes the car think that they

01:37:07 might walk into the road, teenagers will be doing that all the time.

01:37:10 I, by the way, and this is a whole other discussion, because my main interest

01:37:15 with robotics is HRI, human robot interaction.

01:37:19 I believe that robots that interact with humans will have to push back.

01:37:25 Like they can’t just be bullied because that creates a very uncompelling experience for

01:37:30 the humans.

01:37:31 Yeah, well, you know, Waymo, before it was called Waymo, discovered that, you know, they

01:37:35 had to do that at four way intersections.

01:37:38 They had to nudge forward to give the cue that they were going to go, because otherwise

01:37:42 the other drivers would just beat them all the time.
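
[Aside: a toy sketch of that nudge forward behavior, signaling intent by creeping instead of waiting passively. The thresholds and function are invented for illustration, not Waymo's actual policy.]

```python
def intersection_policy(waiting_time_s, gap_is_clear, crept_distance_m):
    """Decide what to do at a four-way stop."""
    if gap_is_clear:
        return "GO"
    if waiting_time_s > 3.0 and crept_distance_m < 0.5:
        # Assert intent: inch forward so other drivers read us as "going".
        return "CREEP"
    return "WAIT"

# A purely passive policy (WAIT until gap_is_clear) can deadlock, because
# human drivers yield to signaled intent, not to right-of-way.
print(intersection_policy(4.2, gap_is_clear=False, crept_distance_m=0.2))  # CREEP
```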

01:37:46 So you cofounded iRobot, as we mentioned, one of the most successful robotics companies

01:37:52 ever.

01:37:53 What are you most proud of with that company and the approach you took to robotics?

01:38:00 Well, there’s something I’m quite proud of there, which may be a surprise, but, you know,

01:38:07 I was still on the board when this happened, it was March 2011, and we sent robots to Japan

01:38:17 and they were used to help shut down the Fukushima Daiichi nuclear power plant.

01:38:27 I've been there since, I was there in 2014, and some of the robots were

01:38:32 still there.

01:38:33 I was proud that we were able to do that.

01:38:35 Why were we able to do that?

01:38:38 And, you know, people have said, well, you know, Japan is so good at robotics.

01:38:42 It was because we had had about 6,500 robots deployed in Iraq and Afghanistan, teleoperated,

01:38:51 but with intelligence, dealing with roadside bombs.

01:38:56 So we had, it was at that time, nine years of in field experience with the robots in

01:39:03 harsh conditions, whereas the Japanese robots, which were, you know, getting, this goes back

01:39:09 to what annoys me so much, getting all the hype, look at that, look at that Honda robot,

01:39:14 it can walk, wow, the future’s here, couldn’t do a thing because they weren’t deployed,

01:39:20 but we had deployed in really harsh conditions for a long time, and so we’re able to do

01:39:26 something very positive in a very bad situation.

01:39:30 What about just the simple, and for people who don’t know, one of the things that iRobot

01:39:36 has created is the Roomba vacuum cleaner.

01:39:42 What about the simple robot that is the Roomba, quote unquote simple, that's

01:39:47 deployed in tens of millions of homes?

01:39:53 What do you think about that?

01:39:54 Well, I make the joke that I started out life as a pure mathematician and turned into a

01:39:59 vacuum cleaner salesman, so if you're going to be an entrepreneur, be ready

01:40:05 to do anything, but, you know, there was a wacky lawsuit that I got

01:40:15 deposed for not too many years ago, and I was the only one who had email from the

01:40:20 1990s, no one else in the company had it, so I went through my email, and it

01:40:27 reminded me of, you know, the joy of what we were doing, and what was I doing?

01:40:34 What was I doing at the time we were building the Roomba?

01:40:41 One of the things was we had this, you know, incredibly tight budget because we wanted

01:40:46 to put it on the shelves at $200.

01:40:50 There was another home cleaning robot at the time, it was the Electrolux Trilobite, which

01:40:59 sold for 2,000 euros, and to us that was not going to be a consumer product, so we had

01:41:05 reason to believe that $200 was a price that people would buy at.

01:41:10 That was our aim, but that meant, you know, that's on the shelf, making a profit.

01:41:19 That means the cost of goods has to be minimal, so I find all these emails of me going, you

01:41:26 know, I’d be in Taipei for a MIT meeting, and I’d stay a few extra days and go down

01:41:32 to Hsinchu and talk to these little tiny companies, lots of little tiny companies outside of TSMC,

01:41:38 Taiwan Semiconductor Manufacturing Company, which let all these little companies be fabless.

01:41:45 They didn’t have to have their own fab so they could innovate, and they were building,

01:41:51 their innovations were to build stripped down 6502s, the 6502 was what was in an Apple I, get

01:41:57 rid of half the silicon and still have it be viable, and I’d previously got some of

01:42:03 those for some earlier failed products of iRobot, and that was in Hong Kong going to

01:42:11 all these companies that built, you know, they weren’t gaming in the current sense,

01:42:16 there were these handheld games that you would play, or birthday cards, because we had about

01:42:23 a 50 cent budget for computation, so I’m trekking from place to place looking at their chips,

01:42:30 looking at what they’d removed, ah, their interrupt handling is too weak for a general

01:42:38 purpose, so I was going into deep technical detail, and then I found this one from a company called

01:42:43 Winbond, which had, and I’d forgotten it had this much RAM, it had 512 bytes of RAM,

01:42:50 and it was in our budget, and it had all the capabilities we needed.
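
[Aside: rough consumer hardware arithmetic behind "the cost of goods has to be minimal" at a $200 shelf price. The margin figures are textbook assumptions, not iRobot's actual numbers.]

```python
shelf_price = 200.00       # target retail price
retailer_share = 0.40      # assumed retail + distribution margin
mfr_margin = 0.35          # assumed manufacturer gross margin

wholesale = shelf_price * (1 - retailer_share)   # ~$120 to the maker
bom_budget = wholesale * (1 - mfr_margin)        # ~$78 cost of goods

print(f"wholesale ~${wholesale:.0f}, BOM budget ~${bom_budget:.0f}")
# With the whole robot built for tens of dollars, a ~$0.50 budget for
# computation, and hence a stripped-down 512-byte-RAM part, follows.
```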

01:42:54 Yeah, and you were excited.

01:42:57 Yeah, and I was reading all these emails, 'Colin, I found this,' so.

01:43:02 Did you think, did you ever think that you guys could be so successful?

01:43:07 Like, eventually this company would be so successful, could you possibly have imagined?

01:43:12 No, we never did think that.

01:43:13 We’d had 14 failed business models up to 2002, and then we had two winners the same year.

01:43:19 No, and then, you know, we, I remember the board, because by this time we had some venture

01:43:27 capital in, the board went along with us building some robots for, you know, aiming at the Christmas

01:43:36 2002 market, and we went three times over what they authorized and built 70,000 of them,

01:43:44 and sold them all in that first season, because we released on September 18th, and they were

01:43:51 all sold by Christmas.

01:43:52 So we were gutsy, but…

01:43:57 But yeah, you didn’t think this will take over the world.

01:44:00 Well, this is, so a lot of amazing robotics companies have gone under over the past few

01:44:09 decades.

01:44:10 Why do you think it’s so damn hard to run a successful robotics company?

01:44:17 There’s a few things.

01:44:20 One is expectations of capabilities by the founders that are off base.

01:44:29 The founders, not the consumer, the founders.

01:44:31 Yeah, expectations of what can be delivered.

01:44:34 Sure.

01:44:34 Mispricing, and what a customer thinks is a valid price, is not rational, necessarily.

01:44:42 Yeah.

01:44:43 And expectations of customers, and just the sheer hardness of getting people to adopt a

01:44:56 new technology.

01:44:57 And I’ve suffered from all three of these, you know.

01:44:59 I’ve had more failures than successes, in terms of companies.

01:45:04 I’ve suffered from all three.

01:45:07 So, do you think one day there will be a robotics company, and by robotics company, I mean, where

01:45:18 your primary source of income is from robots, that will be a trillion plus dollar company?

01:45:24 And if so, what would that company do?

01:45:31 I can’t, you know, because I’m still starting robot companies.

01:45:35 Yeah.

01:45:38 I’m not making any such predictions in my own mind.

01:45:41 I’m not thinking about a trillion dollar company.

01:45:43 And by the way, I don’t think, you know, in the 90s, anyone was thinking that Apple would

01:45:47 ever be a trillion dollar company.

01:45:48 So, these are, you know, these

01:45:52 are very hard to predict.

01:45:57 But, sorry to interrupt, don't you… because I kind of have a vision, and

01:46:03 it's a big vision in a small way, that there would be robots in the home,

01:46:10 at scale, like Roomba, but more.

01:46:13 And that’s trillion dollar.

01:46:15 Right.

01:46:16 And I think there’s a real market pull for them because of the demographic inversion,

01:46:22 you know, who’s going to do all the stuff for the older people?

01:46:26 There’s too many, you know, I’m leading here.

01:46:31 There’s going to be too many of us.

01:46:36 But we don’t have capable enough robots to make that economic argument at this point.

01:46:42 Do I expect that that will happen?

01:46:44 Yes, I expect it will happen.

01:46:45 But I got to tell you, we introduced the Roomba in 2002, and I stayed another

01:46:50 nine years.

01:46:51 We were always trying to find what the next home robot would be, and still today, the

01:46:57 primary product, almost 20 years later, 19 years later, the primary product

01:47:02 is still the Roomba.

01:47:03 So iRobot hasn’t found the next one.

01:47:07 Do you think it’s possible for one person in the garage to build it versus, like, Google

01:47:12 launching Google self driving car that turns into Waymo?

01:47:16 Do you think this is almost like what it takes to build a successful robotics company?

01:47:20 Do you think it’s possible to go from the ground up, or is it just too much capital

01:47:24 investment?

01:47:25 Yeah, so it’s very hard to get there without a lot of capital.

01:47:31 And we’re starting to see, you know, fair chunks of capital for some robotics companies.

01:47:38 You know, Series B’s, I saw one yesterday for $80 million, I think it was, for Covariant.

01:47:45 But it can take real money to get into these things, and you may fail along the way.

01:47:54 I’ve certainly failed at Rethink Robotics, and we lost $150 million in capital there.

01:48:00 So, okay, so Rethink Robotics is another amazing robotics company you cofounded.

01:48:06 So what was the vision there?

01:48:09 What was the dream?

01:48:11 And what are you most proud of with Rethink Robotics?

01:48:15 I’m most proud of the fact that we got robots out of the cage in factories that were safe,

01:48:23 absolutely safe, for people and robots to be next to each other.

01:48:26 So these are robotic arms.

01:48:27 Robotic arms.

01:48:28 Able to pick up stuff and interact with humans.

01:48:31 Yeah, and that humans could retask them without writing code.

01:48:35 And now that’s sort of become an expectation for a lot of other little companies and big

01:48:40 companies, our advertising they’re doing.

01:48:42 That’s both an interface problem and also a safety problem.

01:48:45 Yeah, yeah.

01:48:47 So I’m most proud of that.

01:48:51 I completely, I let myself be talked out of what I wanted to do.

01:48:59 And, you know, you always got, you know, I can’t replay the tape.

01:49:02 I can’t replay it.

01:49:05 Maybe, you know, if I’d been stronger on, and I remember the day, I remember the exact

01:49:12 meeting.

01:49:13 Can you take me through that meeting?

01:49:16 Yeah.

01:49:18 So I’d said that I’d set as a target for the company that we were going to build $3,000

01:49:23 robots with force feedback that were safe for people to be around.

01:49:29 Wow.

01:49:30 That was my goal.

01:49:31 And we built, so we started in 2008, and we had prototypes built of plastic, plastic

01:49:38 gearboxes, and at $3,000, you know, at $3,000, I was saying, we're going to go

01:49:48 after not the people who already have robot arms in factories, but the people who would never

01:49:52 have a robot arm.

01:49:53 We’re going to go after a different market.

01:49:55 So we don’t have to meet their expectations.

01:49:57 And so we’re going to build it out of plastic.

01:49:59 It doesn’t have to have a $35,000 lifetime.

01:50:02 It’s going to be so cheap that it’s OpEx, not CapEx.

01:50:09 And so we had a prototype that worked reasonably well, but the control engineers were complaining

01:50:16 about these plastic gearboxes. We had a beautiful little planetary gearbox, and we could use

01:50:24 something called series elastic actuators.

01:50:29 We embedded them in there.

01:50:30 We could measure forces.

01:50:32 We knew when we hit something, et cetera.
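
[Aside: a minimal sketch of the series elastic actuator idea: a known spring sits between motor and load, so spring deflection, measured by two cheap position encoders, becomes a torque sensor, which is what lets the arm feel contact. The constants are illustrative, not Baxter's real parameters.]

```python
K_SPRING = 150.0        # N*m/rad, assumed spring stiffness
CONTACT_TORQUE = 2.0    # N*m, assumed contact-detection threshold

def joint_torque(theta_motor, theta_joint):
    """Transmitted torque, estimated from spring deflection."""
    return K_SPRING * (theta_motor - theta_joint)

def hit_something(theta_motor, theta_joint, expected_torque=0.0):
    """Contact shows up as an unexpected jump in measured torque."""
    measured = joint_torque(theta_motor, theta_joint)
    return abs(measured - expected_torque) > CONTACT_TORQUE

print(joint_torque(0.52, 0.50))   # ~3.0 N*m across the spring
print(hit_something(0.52, 0.50))  # True: we've hit something
```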

01:50:35 The control engineers were saying, yeah, but there’s this torque ripple because these plastic

01:50:40 gears, they’re not great gears, and there’s this ripple, and trying to do force control

01:50:44 around this ripple is so hard.

01:50:47 And I’m not going to name names, but I remember one of the mechanical engineers saying, we’ll

01:50:55 just build a metal gearbox with spur gears, and it’ll take six weeks.

01:50:59 We’ll be done.

01:51:01 Problem solved.

01:51:03 Two years later, we got the spur gearbox working.

01:51:08 We cost reduced it every possible way we could, but now the price went up too.

01:51:15 And then the CEO at the time said, well, we have to have two arms, not one arm.

01:51:19 So our first robot product, Baxter, now cost $25,000, and the only people who were going

01:51:27 to look at that were people who had arms in factories, because that was somewhat cheaper

01:51:31 for two arms than the arms they had in factories.

01:51:34 But they were used to 0.1 millimeter reproducibility of motion and certain velocities, and I kept

01:51:43 thinking, but that’s not what we’re giving you.

01:51:45 You don’t need position repeatability.

01:51:47 Use force control like a human does.

01:51:49 No, no, but we want that repeatability.

01:51:53 We want that repeatability.

01:51:54 All the other robots have that repeatability.

01:51:56 Why don’t you have that repeatability?

01:51:58 So can you clarify?

01:51:59 Force control is you can grab the arm and you can move it.

01:52:02 You can move it around, but suppose you…

01:52:06 Can you see that?

01:52:06 Yes.

01:52:07 Suppose you want to…

01:52:09 Yes.

01:52:10 Suppose this thing is a precise thing that’s got to fit here in this right angle.

01:52:16 Under position control, you have fixtured where this is.

01:52:20 You know where this is precisely, and you just move it, and it goes there.

01:52:25 In force control, you would do something like slide over here till we feel that and slide

01:52:30 it in there, and that’s how a human gets precision.

01:52:34 They use force feedback and get the things to mate rather than just go straight to it.
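
[Aside: the same contrast in pseudocode; the `arm` interface below is hypothetical, standing in for any force controlled arm API rather than Rethink's actual software.]

```python
def insert_by_position(arm, hole_pose):
    # Position control: the part and hole must be fixtured precisely;
    # any error in hole_pose and the insertion jams or misses.
    arm.move_to(hole_pose)

def insert_by_force(arm, down, sideways):
    # Force control: how a human does it without sub-millimeter knowledge.
    # 1. A "guarded move": descend until we feel the surface.
    arm.move_until_contact(direction=down, force_limit=5.0)
    # 2. Slide along the surface, keeping light contact, until the part
    #    drops into the hole (felt as a change in contact force).
    arm.slide_until_feature(direction=sideways, contact_force=2.0)
    # 3. Push in, complying sideways so the part self-aligns.
    arm.push(direction=down, force=8.0, compliant_axes="xy")

# The trade: give up 0.1 mm repeatability, and get precision back from
# the geometry of the parts themselves.
```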

01:52:42 Couldn’t convince our customers who were in factories and were used to thinking about

01:52:48 things a certain way, and they wanted it, wanted it, wanted it.

01:52:51 So then we said, okay, we’re going to build an arm that gives you that.

01:52:56 So now we ended up building a $35,000 robot with one arm with…

01:52:59 Oh, what are they called?

01:53:04 A certain sort of gearbox made by a company whose name I can’t remember right now, but

01:53:08 it’s the name of the gearbox.

01:53:11 But it’s got torque ripple in it.

01:53:15 So now there was an extra two years of solving the problem of doing the force with the torque

01:53:19 ripple.

01:53:20 So we had to do the thing we had avoided for the plastic gearboxes; what we'd avoided

01:53:28 for the plastic gearboxes we ended up having to do anyway.

01:53:31 The robot was now overpriced and they…

01:53:35 And that was your intuition from the very beginning kind of that this is not…

01:53:40 You’re opening a door to solve a lot of problems that you’re eventually going to have to solve

01:53:44 this problem anyway.

01:53:45 Yeah.

01:53:46 And also I was aiming at a low price to go into a different market.

01:53:49 Low price.

01:53:50 That didn’t have robots.

01:53:51 $3,000 would be amazing.

01:53:52 Yeah.

01:53:52 I think we could have done it for five.

01:53:54 But, you know, you talked about setting the goal a little too far for the engineers.

01:53:58 Yeah, exactly.

01:54:02 So why would you say that company not failed, but went under?

01:54:09 We had buyers and there’s this thing called the Committee on Foreign Investment in the

01:54:15 U.S., CFIUS.

01:54:18 And that had previously been invoked twice.

01:54:21 Around where the government could stop foreign money coming into a U.S. company based on

01:54:29 defense requirements.

01:54:32 We went through due diligence multiple times.

01:54:34 We were going to get acquired, but every consortium had Chinese money in it, and all the bankers

01:54:42 would say at the last minute, you know, this isn’t going to get past CFIUS, and the investors

01:54:47 would go away.

01:54:47 And then we had two buyers, once we were about to run out of money, two buyers, and one used

01:54:54 heavy handed legal stuff with the other one, said they were going to take it and pay more,

01:55:02 dropped out when we were out of cash, and then bought the assets at 1/30th of the price

01:55:08 they had offered a week before.

01:55:10 It was a tough week.

01:55:12 Do you, does it hurt to think about an amazing company that, you know, unlike

01:55:21 iRobot, didn't find a way?

01:55:24 Yeah, it was tough.

01:55:25 I said I was never going to start another company.

01:55:27 I was pleased that everyone liked what we did so much that the team was hired by three

01:55:36 companies, and I was very happy that we were able to do that.

01:55:40 Three companies within a week.

01:55:42 Everyone had a job in one of these three companies.

01:55:44 Some stayed in their same desks because another company came in and rented the space.

01:55:50 So I felt good about people not being out on the street.

01:55:55 So Baxter has a screen with a face.

01:55:59 What, that’s a revolutionary idea for a robot manipulation, like for a robotic arm.

01:56:07 How much opposition did you get?

01:56:08 Well, first the screen was also used during codeless programming.

01:56:12 We taught by demonstration.

01:56:14 It showed you what its understanding of the task was.

01:56:17 So it had two roles.

01:56:21 Some customers hated it, and so we made it so that when the robot was running it could

01:56:26 be showing graphs of what was happening and not show the eyes.

01:56:30 Other people, and some of them surprised me who they were, said, well, this one doesn't

01:56:36 look as human as the old one.

01:56:37 We liked the human looking one.

01:56:39 Yeah.

01:56:40 So there was a mixed bag.

01:56:43 But do you think that’s, I don’t know, I’m kind of disappointed whenever I talk to

01:56:50 roboticists, like the best robotics people in the world, they seem to not want to do

01:56:55 the eyes type of thing.

01:56:56 Like they seem to see it as a machine as opposed to a machine that can also have a human connection.

01:57:02 I’m not sure what to do with that.

01:57:03 It seems like a lost opportunity.

01:57:05 I think the trillion dollar company will have to do the human connection very well no matter

01:57:10 what it does.

01:57:11 Yeah, I agree.

01:57:13 Can I ask you a ridiculous question?

01:57:15 Sure.

01:57:17 I might give a ridiculous answer.

01:57:19 Do you think, well maybe by way of asking the question, let me first mention that you’re

01:57:25 kind of critical of the idea of the Turing test as a test of intelligence.

01:57:32 Let me first ask this question.

01:57:33 Do you think we’ll be able to build an AI system that humans fall in love with and it

01:57:40 falls in love with the human, like romantic love?

01:57:46 Well, we’ve had that with humans falling in love with cars even back in the 50s.

01:57:51 It’s a different love, right?

01:57:52 Well, yeah.

01:57:53 I think there’s a lifelong partnership where you can communicate and grow like…

01:57:59 I think we’re a long way from that.

01:58:01 I think we’re a long, long way.

01:58:03 I think Blade Runner had the time scale totally wrong.

01:58:10 Yeah, but so to me, honestly, the most difficult part is the thing that you said with Moravec’s

01:58:16 paradox: to create a human form that interacts with and perceives the world.

01:58:21 But if we just look at a voice, like the movie Her or just like an Alexa type voice, I tend

01:58:28 to think we’re not that far away.

01:58:29 Well, for some people, maybe not, but as humans, as we think about the future, we always try

01:58:43 to…

01:58:44 And this is the premise of most science fiction movies.

01:58:46 You’ve got the world just as it is today and you change one thing.

01:58:50 But that’s not how…

01:58:51 And it’s the same with a self driving car.

01:58:53 You change one thing.

01:58:55 No, everything changes.

01:58:56 Everything grows together.

01:58:59 So, and it might be surprising to you or might not, I think the best movie about

01:59:04 this stuff was Bicentennial Man.

01:59:09 And what was happening there?

01:59:11 It was schmaltzy and, you know, but what was happening there?

01:59:15 As the robot was trying to become more human, the humans were adopting the technology of

01:59:21 the robot and changing their bodies.

01:59:23 So there was a convergence happening in a sense.

01:59:27 So we will not be the same.

01:59:28 You know, we’re already talking about genetically modifying our babies.

01:59:32 You know, there’s more and more stuff happening around that.

01:59:36 We will want to modify ourselves even more for all sorts of things.

01:59:43 We put all sorts of technology in our bodies to improve it.

01:59:48 You know, I’ve got things in my ears so that I can sort of hear you.

01:59:53 Yeah.

01:59:56 So we’re always modifying our bodies.

01:59:57 So, you know, I think it’s hard to imagine exactly what it will be like in the future.

02:00:03 But on the Turing test side, do you think, so forget about love for a second, let’s talk

02:00:09 about just like the Alexa Prize.

02:00:12 Actually, I was invited to be a part of the Alexa Prize,

02:00:16 to be, what is it, the interviewer for the Alexa Prize or whatever,

02:00:23 and that’s in two days.

02:00:25 Their idea is success looks like a person wanting to talk to an AI system for a prolonged

02:00:32 period of time, like 20 minutes.

02:00:35 How far away are we and why is it difficult to build an AI system with which you’d want

02:00:41 to have a beer and talk for an hour or two hours?

02:00:45 Like not for to check the weather or to check music, but just like to talk as friends.

02:00:53 Yeah, well, you know, we saw Weizenbaum back in the 60s, with his program ELIZA, being

02:01:00 shocked at how much people would talk to ELIZA.

02:01:03 And I remember, you know, in the 70s, typing stuff to ELIZA to see what it would

02:01:08 come back with.
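
For context, ELIZA ran on almost no machinery: keyword-triggered templates plus first/second-person "reflection" of the user's own words. A tiny sketch in that spirit follows; the real 1966 program was much richer, and every rule here is an illustrative assumption, not Weizenbaum's script.

```python
# A tiny ELIZA-style responder (illustrative rules, not Weizenbaum's script):
# match a keyword pattern, reflect pronouns, and echo a canned template.
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "Tell me more about feeling {0}."]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see.", "What does that suggest to you?"]),
]

def reflect(fragment):
    """Swap first/second person so 'my job' comes back as 'your job'."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    text = sentence.lower().rstrip(".!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*reflected)

print(respond("I feel lonely at work."))  # e.g. "Why do you feel lonely at work?"
```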

02:01:09 You know, I think right now, and this is a thing that Amazon’s been trying to improve

02:01:17 with Alexa, there is no continuity of topic.

02:01:22 There’s not, you can’t refer to what we talked about yesterday.

02:01:27 It’s not the same as talking to a person where there seems to be an ongoing existence, which

02:01:32 changes.

02:01:33 We share moments together and they last in our memory together.

02:01:37 Yeah, there’s none of that.

02:01:39 And there’s no sort of intention in these systems, no sense that they have any goal in life, even

02:01:46 if it’s to be happy, you know, they don’t even have a semblance of that.

02:01:51 Now, I’m not saying this can’t be done.

02:01:53 I’m just saying, I think this is why we don’t feel that way about them.

02:01:57 That’s a sort of a minimal requirement.

02:02:01 If you want the sort of interaction you’re talking about, it’s a minimal requirement.

02:02:06 Whether it’s going to be sufficient, I don’t know.

02:02:10 We haven’t seen it yet.

02:02:11 We don’t know what it feels like.
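
As a sketch of what that missing continuity could look like mechanically, here is a minimal, hypothetical episodic memory in Python: each exchange is persisted, so a session tomorrow can surface what was said today. The file name and the crude keyword-overlap retrieval are illustrative assumptions, not how Alexa or any product actually works.

```python
# A minimal, hypothetical episodic memory: persist each exchange so a later
# session can refer back to it. Illustrative only; no product works this way.
import json
import time
from pathlib import Path

MEMORY_FILE = Path("episodes.json")  # hypothetical long-term store

def remember(user_text, reply):
    """Append one exchange, with a timestamp, to long-term memory on disk."""
    episodes = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []
    episodes.append({"t": time.time(), "user": user_text, "reply": reply})
    MEMORY_FILE.write_text(json.dumps(episodes))

def recall(user_text, max_hits=3):
    """Return past exchanges that share words with the current utterance."""
    if not MEMORY_FILE.exists():
        return []
    words = set(user_text.lower().split())
    episodes = json.loads(MEMORY_FILE.read_text())
    scored = [(len(words & set(e["user"].lower().split())), e) for e in episodes]
    return [e for score, e in sorted(scored, key=lambda s: -s[0]) if score > 0][:max_hits]

# Yesterday's session: remember("I went sailing this weekend", "How was the wind?")
# Today: recall("any more thoughts on sailing?") surfaces that episode, giving
# the system the shared, lasting past that current assistants lack.
```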

02:02:14 I tend to think it’s not as difficult as solving intelligence, for example, and I think it’s

02:02:23 achievable in the near term.

02:02:26 But on the Turing test, why don’t you think the Turing test is a good test of intelligence?

02:02:32 Oh, because, you know, again, if you read the paper, Turing wasn’t saying

02:02:39 this is a good test.

02:02:40 He was using it as a rhetorical device to argue that if you can’t tell the difference

02:02:46 between a computer and a person, you must say that the computer is thinking, because you

02:02:52 can’t tell the difference from when a person is thinking.

02:02:56 You can’t say something different.

02:02:58 What it has become is this sort of weird game of fooling people. So back at the AI Lab in

02:03:08 the late 80s, we had this thing that still goes on called the AI Olympics, and one of

02:03:14 the events we had one year was the original imitation game, as Turing talked about, because

02:03:21 he starts by saying, can you tell whether it’s a man or a woman?

02:03:25 So we did that at the Lab.

02:03:28 You’d go and type, and the thing would come back, and you had to tell whether it was a

02:03:33 man or a woman, and one man came up with a question that he could ask, which was always

02:03:50 a dead giveaway of whether the other person was really a man or a woman.

02:03:56 He would ask them, did you have green plastic toy soldiers as a kid?

02:04:01 Yeah.

02:04:01 What did you do with them?

02:04:03 And a woman trying to be a man would say, oh, I lined them up.

02:04:07 We had wars.

02:04:07 We had battles.

02:04:08 And the man, just being a man, would say, I stomped on them.

02:04:11 I burned them.

02:04:11 So that’s what the Turing test with computers has become.

02:04:21 What’s the trick question?

02:04:23 That’s why I say it’s sort of devolved into this weirdness.

02:04:29 Nevertheless, conversation not formulated as a test is a fascinatingly challenging dance.

02:04:36 That’s a really hard problem.

02:04:38 To me, conversation, when not posed as a test, is a more intuitive illustration of how far away

02:04:45 we are from solving intelligence than computer vision is.

02:04:48 It’s hard.

02:04:49 Computer vision is harder for me to pull apart.

02:04:53 But with language, with conversation, you could see.

02:04:55 Because language is so human.

02:04:56 It’s so human.

02:04:58 We can so clearly see it.

02:05:04 Shit, you mentioned something I was going to go off on.

02:05:06 OK.

02:05:08 I mean, I have to ask you, because you were the head of CSAIL, AI Lab, for a long time.

02:05:17 I don’t know.

02:05:18 To me, when I came to MIT, you were one of the greats at MIT.

02:05:22 So what was that time like?

02:05:25 And plus, you knew Minsky and all the folks there, all the legendary

02:05:34 AI people, of which you’re one.

02:05:37 So what was that time like?

02:05:39 What are memories that stand out to you from that time, from your time at MIT, from the

02:05:46 AI Lab, from the dreams that the AI Lab represented, to the actual revolutionary work?

02:05:53 Well, let me tell you first the disappointment in myself.

02:05:56 As I’ve been researching this book, and so many of the players were active in the 50s

02:06:03 and 60s, I knew many of them when they were older, and I didn’t ask them all the questions

02:06:08 now I wish I had asked.

02:06:11 I’d sit with them at our Thursday faculty lunches, and I didn’t

02:06:16 ask them so many questions that now I wish I had.

02:06:19 Can I ask you that question?

02:06:20 Because you wrote that.

02:06:22 You wrote that you were fortunate to know and rub shoulders with many of the greats,

02:06:26 those who founded AI, robotics, and computer science, and the World Wide Web.

02:06:30 And you wrote that your big regret nowadays is that often I have questions for those who

02:06:34 have passed on, and I didn’t think to ask them any of these questions, even as I saw

02:06:41 them and said hello to them on a daily basis.

02:06:44 So maybe also another question I want to ask, if you could talk to them today, what question

02:06:51 would you ask?

02:06:51 What questions would you ask?

02:06:53 Well, Licklider, I would ask him.

02:06:56 You know, he had the vision for humans and computers working together, and he really

02:07:02 founded that at DARPA, and he gave the money to MIT, which started Project MAC in 1963.

02:07:12 And I would have talked to him about what the successes were, what the failures were,

02:07:16 what he saw as progress, etc.

02:07:18 I would have asked him more questions about that, because now I could use it in my book,

02:07:24 you know, but I think it’s lost.

02:07:25 It’s lost forever.

02:07:26 A lot of the motivations are lost.

02:07:33 I should have asked Marvin why he and Seymour Papert came down so hard on neural networks

02:07:40 in 1969 in their book Perceptrons, because Marvin’s PhD thesis was all about neural networks.

02:07:48 And how do you make sense of that?

02:07:50 That book destroyed the field.

02:07:52 He probably, do you think he knew the effect that book would have?

02:07:59 All the theorems are negative theorems.

02:08:02 Yeah.

02:08:03 Yeah.

02:08:04 So, yeah.

02:08:05 That’s just the way of life.

02:08:10 But still, it’s kind of tragic that he was both the proponent and the destroyer of neural

02:08:15 networks.

02:08:16 Yeah.
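
The canonical instance of those negative theorems is that a single-layer perceptron cannot compute XOR, because the four input/output constraints are not linearly separable. Here is a short, self-contained check, offered as an illustration rather than the book's own argument:

```python
# XOR is not linearly separable, so no single linear threshold unit computes
# it. We verify by exhaustive search over a coarse grid of weights and bias.
import itertools

def unit(w1, w2, b, x1, x2):
    """Single-layer perceptron: fires iff w1*x1 + w2*x2 + b > 0."""
    return int(w1 * x1 + w2 * x2 + b > 0)

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

grid = [i / 4 for i in range(-20, 21)]  # weights and bias in [-5, 5]
solutions = [params for params in itertools.product(grid, repeat=3)
             if all(unit(*params, x1, x2) == y for (x1, x2), y in XOR.items())]
print(solutions)  # [] -- no setting of (w1, w2, b) gets all four cases right

# The algebraic reason: the constraints b <= 0, w2 + b > 0, w1 + b > 0, and
# w1 + w2 + b <= 0 are jointly contradictory -- adding the two strict ones
# gives w1 + w2 + 2b > 0, which with b <= 0 forces w1 + w2 + b > 0.
```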

02:08:19 Are there other memories that stand out from the robotics and the AI work at MIT?

02:08:28 Well, yeah, but you gotta be more specific.

02:08:31 Well, I mean, like, it’s such a magical place.

02:08:33 I mean, to me, it’s a little bit also heartbreaking that, you know, with Google and Facebook,

02:08:40 like DeepMind and so on, so much of the talent, you know, it doesn’t stay necessarily

02:08:46 for prolonged periods of time in these universities.

02:08:50 Oh, yeah.

02:08:50 I mean, some of the companies are more guilty than others of paying fabulous salaries to

02:08:57 some of the highest, you know, producers.

02:09:00 And then just, you never hear from them again.

02:09:02 They’re not allowed to give public talks.

02:09:04 They’re sort of locked away.

02:09:06 And it’s sort of like collecting, you know, Hollywood stars or something.

02:09:12 And they’re not allowed to make movies anymore.

02:09:13 I own them.

02:09:14 Yeah.

02:09:15 That’s tragic because, I mean, there’s an openness to the university setting where you

02:09:20 do research, both in the space of ideas and, like, publication, all those kinds of things.

02:09:25 Yeah, you know, there’s the publication and all that.

02:09:28 And often, you know, although these places say they publish.

02:09:32 There’s pressure.

02:09:33 But I think, for instance, you know, net-net, I think Google buying those eight or

02:09:41 nine robotics companies was bad for the field because it locked those people away.

02:09:46 They didn’t have to make the company succeed anymore, locked them away for years, and then

02:09:53 sort of frittered it all away.

02:09:55 Yeah.

02:09:56 So do you have hope for MIT?

02:10:02 Yeah.

02:10:03 Why shouldn’t I?

02:10:04 Well, I could be harsh and say that I’m not sure I would say MIT is leading the world

02:10:11 in AI or even Stanford or Berkeley.

02:10:15 I would say DeepMind, Google AI, Facebook AI, all of those.

02:10:23 I would take a slightly different approach, a different answer.

02:10:30 I’ll come back to Facebook in a minute.

02:10:32 But I think those other places are following a dream of one of the founders.

02:10:42 And I’m not sure that it’s well founded, the dream.

02:10:46 And I’m not sure that it’s going to have the impact that he believes it is.

02:10:54 You’re talking about Facebook and Google and so on.

02:10:56 I’m talking about Google.

02:10:57 Google.

02:10:58 But the thing is, in those research labs, there’s the big dream.

02:11:03 And I’m usually a fan of no matter what the dream is, a big dream is a unifier.

02:11:08 Because what happens is you have a lot of bright minds working together on a dream.

02:11:15 What results is a lot of adjacent ideas, and that’s how so much progress is made.

02:11:20 Yeah.

02:11:21 So I’m not saying they’re actually leading.

02:11:22 I’m not saying that the universities are leading.

02:11:25 Yeah.

02:11:25 But I don’t think those companies are leading in general.

02:11:28 We saw this incredible spike in attendees at NeurIPS.

02:11:36 And as I said in my January 1st review this year for 2020, 2020 will not be

02:11:44 remembered as a watershed year for machine learning or AI.

02:11:48 Nothing surprising happened, anyway.

02:11:52 Unlike when deep learning hit ImageNet.

02:11:57 That was a shake.

02:12:02 And there’s a lot more people writing papers, but the papers are fundamentally

02:12:06 boring and uninteresting.

02:12:08 Incremental work.

02:12:09 Yeah.

02:12:10 Are there particular memories you have of Minsky or somebody else at

02:12:13 MIT that stand out, funny stories?

02:12:16 I mean, unfortunately, he’s another one that’s passed away.

02:12:21 You’ve known some of the biggest minds in AI.

02:12:24 Yeah.

02:12:25 And you know, they did amazing things, and sometimes they were grumpy.

02:12:31 Well, he was interesting, because he was very grumpy, but that

02:12:35 was his way. I remember him saying in an interview that the key to success,

02:12:41 or to keep being productive, is to hate everything you’ve ever done in the past.

02:12:45 Maybe that explains the Perceptrons book.

02:12:49 There it was.

02:12:50 He told you exactly.

02:12:53 But, meaning, I mean, maybe that’s the way to not

02:12:58 treat yourself too seriously,

02:13:03 to not take your own past work too seriously.

02:13:05 Just always be moving forward.

02:13:09 Uh, that was the idea.

02:13:10 I mean, that crankiness, that’s the scary part.

02:13:14 So let me tell you, you know, what the real

02:13:21 joy memories are: having access to technology before anyone else has seen

02:13:27 it.

02:13:27 You know, I got to Stanford in 1977, and we had terminals

02:13:34 that could show live video on them.

02:13:37 A digital sound system.

02:13:40 We had a Xerox graphics printer.

02:13:45 We could print, and it wasn’t like a typewriter

02:13:50 ball hitting characters.

02:13:51 It could print arbitrary things.

02:13:53 I mean, one bit, black or white, but you could get arbitrary pictures.

02:13:58 This was science fiction sort of stuff.

02:14:00 At MIT, the Lisp machines, which, you know, were the

02:14:07 first personal computers and cost a hundred thousand dollars each.

02:14:12 And if I got there early enough in the day,

02:14:14 I got one for the day.

02:14:15 I couldn’t stand up.

02:14:17 I had to keep working.

02:14:18 So, having that direct glimpse into the future.

02:14:25 Yeah.

02:14:25 And, you know, I’ve had email every day since 1977.

02:14:29 And, you know, the host field was only eight bits, so there were only that many

02:14:36 places, 256 at most, but I could send email to other people at a few places.

02:14:39 So that was pretty exciting, to be in that world so different from what

02:14:45 the rest of the world knew.

02:14:46 Let me ask you, I’ll probably edit this out, but just in case you have a

02:14:53 story, I’m hanging out with Don Knuth for a while tomorrow.

02:15:00 Did you ever get a chance to interact with him? Such a different world than yours.

02:15:03 He’s very much theoretical computer science, the puzzles of computer

02:15:08 science and mathematics.

02:15:09 And you’re so much about the magic of robotics, the practice of it.

02:15:13 You mentioned him earlier, you know, talking about computation.

02:15:17 Did your worlds cross?

02:15:19 They did enough.

02:15:20 You know, I know him now, we talk, you know, but let me tell you my Donald

02:15:25 Knuth story.

02:15:26 So, you know, besides analysis of algorithms, he’s well known for

02:15:32 writing TeX, which underlies LaTeX, the academic publishing system.

02:15:37 So he did that at the AI Lab, and he would do it,

02:15:41 he would work overnight at the AI Lab.

02:15:45 And one night, the mainframe computer went down, and

02:15:55 a guy named Robert Poor was there.

02:15:57 He did his PhD at the Media Lab at MIT, and he was, you know, an engineer.

02:16:04 And so he and I, you know, tracked down what the problem was.

02:16:08 One of these big refrigerator-size or washing-machine-size disk drives had

02:16:13 failed.

02:16:13 And that’s what brought the whole system down.

02:16:15 So we’ve got panels pulled off and we’re pulling, you know, circuit cards out.

02:16:20 And Donald Knuth, who’s a really tall guy, walks in, and he’s looking down and says,

02:16:25 when will it be fixed?

02:16:26 You know, cause he wanted to get back to writing his tech system.

02:16:31 And so we figured out, you know, it was a particular chip, a 7400-series chip,

02:16:37 which was socketed.

02:16:38 We popped it out.

02:16:40 We put a replacement in, put it back in.

02:16:43 Smoke comes out cause we put it in backwards.

02:16:45 Cause we were so nervous that Donald Knuth was standing over us.

02:16:49 Anyway, we eventually got it fixed and got the mainframe running again.

02:16:53 So that was your little, when was that again?

02:16:56 Well, that must have been before October ’79,

02:16:58 because we moved out of that building then.

02:17:00 So probably sometime in ’78 or early ’79.

02:17:03 Yeah, all those figures are just fascinating.

02:17:06 All the people who passed through MIT, it’s really fascinating.

02:17:10 Is there, let me ask you to put on your big wise man hat.

02:17:18 Is there advice that you can give to young people today,

02:17:20 whether in high school or college who are thinking about their career

02:17:24 or thinking about life, how to live a life they’re proud of, a successful life?

02:17:32 Yeah. So many people ask me for advice, and have asked,

02:17:36 and I talk to a lot of people all the time, and there is no one way.

02:17:44 You know, there’s a lot of pressure to produce papers

02:17:51 that will be acceptable and be published.

02:17:56 Maybe I can’t do it.

02:17:58 Maybe I come from an age where

02:18:03 I could be a rebel against that and still succeed.

02:18:07 Maybe it’s harder today, but I think it’s important not to get too caught up

02:18:14 with what everyone else is doing.

02:18:18 Well, it depends on what you want out of life.

02:18:22 If you want to have real impact, you have to be ready to fail a lot of times.

02:18:31 So you have to make a lot of unsafe decisions.

02:18:34 And the only way to make that work is to keep doing it for a long time.

02:18:38 And then one of them will work out.

02:18:40 And so that will make something successful.

02:18:43 Or not.

02:18:45 Or, yeah, you just may, you know, end up

02:18:48 having a lousy career.

02:18:50 I mean, it’s certainly possible.

02:18:52 Taking the risk is the thing.

02:18:53 Yeah.

02:18:56 But there’s no way to make all safe decisions and actually really contribute.

02:19:06 Do you think about your death, about your mortality?

02:19:12 I’ve got to say, when COVID hit, I did.

02:19:15 Because, you know, in the early days, we didn’t know how bad it was going to be.

02:19:18 And that made me work on my book harder for a while,

02:19:22 but then I’d started this company and now I’m doing full time,

02:19:25 more than full time of the company.

02:19:27 So the book’s on hold, but I do want to finish this book.

02:19:30 When you think about it, are you afraid of it?

02:19:35 I’m afraid of dribbling, you know, of losing it.

02:19:42 The details of, okay.

02:19:43 Yeah.

02:19:45 Yeah.

02:19:45 But the fact that the ride ends, I’ve known that for a long time.

02:19:51 So it’s, yeah, but there’s knowing and knowing.

02:19:55 It’s such a, yeah.

02:19:57 And it really sucks.

02:19:58 It feels, it feels a lot closer.

02:20:01 So in my blog with my predictions, my sort of pushback against that was that I said

02:20:08 I’m going to review these every year for 32 years, and that puts me into my mid-nineties.

02:20:14 So, you know, every time you write the blog post,

02:20:18 you’re getting closer and closer to your own prediction of your death.

02:20:23 Yeah.

02:20:24 What do you hope your legacy is?

02:20:28 You’re one of the greatest roboticist AI researchers of all time.

02:20:34 What I hope is that I actually finish writing this book,

02:20:38 and that there’s one person who reads it and sees something that changes the way they’re thinking.

02:20:48 And that leads to the next big thing.

02:20:54 And then they’ll be on a podcast a hundred years from now, saying, I once read that book,

02:21:01 and that changed everything.

02:21:04 What do you think is the meaning of life?

02:21:06 This whole thing, the existence, all the hurried things we do

02:21:10 on this planet, what do you think is the meaning of it all?

02:21:13 Yeah. Well, you know, I think we’re all really bad at it.

02:21:17 Life or finding meaning or both.

02:21:19 Yeah. We get caught up. It’s easier to do the stuff that’s immediate

02:21:24 and not do the stuff that’s not immediate.

02:21:27 So the big picture we’re bad at.

02:21:29 Yeah. Yeah.

02:21:31 Do you have a sense of what that big picture is?

02:21:33 Like, do you ever look up to the stars and ask, why the hell are we here?

02:21:41 You know, my atheism tells me it’s just random, but, you know, I want to understand the

02:21:50 way randomness works. That’s what I talk about in this book, how order comes from disorder.

02:21:55 Yeah.

02:21:58 But it kind of sprung up. Most of the whole thing is random,

02:22:02 but this little pocket of complexity we call Earth,

02:22:07 like, why the hell does that happen?

02:22:10 And what we don’t know is how common those pockets of complexity are, or how long,

02:22:18 because they may not last forever.

02:22:22 Which is more exciting slash sad to you, if we’re alone or if there’s an infinite number of us?

02:22:30 Oh, I think, I think it’s impossible for me to believe that we’re alone.

02:22:36 Um, that would just be too horrible, too cruel.

02:22:41 It could be like the sad thing.

02:22:43 It could be like a graveyard of intelligent civilizations.

02:22:46 Oh, everywhere.

02:22:46 Yeah.

02:22:47 That might be the most likely outcome.

02:22:50 And for us too.

02:22:51 Yeah, exactly.

02:22:52 Yeah.

02:22:52 And all of this will be forgotten.

02:22:54 Yeah.

02:22:54 Yeah, including all the robots you build, everything forgotten.

02:23:01 Well, on average, everyone has been forgotten in history.

02:23:05 Yeah.

02:23:06 Right.

02:23:06 Yeah.

02:23:07 Most people are not remembered beyond a generation or two.

02:23:11 Um, I mean, yeah.

02:23:12 Well, not just on average, basically very close to a hundred percent of people who’ve ever lived

02:23:17 are forgotten.

02:23:18 Yeah.

02:23:19 I mean, you know, over the long arc, I don’t know anyone alive who remembers my great-grandparents,

02:23:24 because we didn’t meet them.

02:23:26 So still, this life is pretty fun somehow.

02:23:32 Yeah.

02:23:33 Even with the immense absurdity and, at times, meaninglessness of it all.

02:23:39 It’s pretty fun.

02:23:40 And one of the, for me, one of the most fun things is robots.

02:23:43 And I’ve looked up to your work.

02:23:45 I’ve looked up to you for a long time.

02:23:46 That’s right.

02:23:47 God.

02:23:47 Rod, it’s an honor that you would spend your valuable time with me today, talking.

02:23:53 It was an amazing conversation.

02:23:54 Thank you so much for being here.

02:23:55 Well, thanks for talking with me.

02:23:57 I’ve enjoyed it.

02:24:00 Thanks for listening to this conversation with Rodney Brooks.

02:24:02 To support this podcast, please check out our sponsors in the description.

02:24:06 And now let me leave you with the three laws of robotics from Isaac Asimov.

02:24:12 One, a robot may not injure a human being or, through inaction, allow a human being to come to

02:24:19 harm. Two, a robot must obey the orders given to it by human beings, except when such orders

02:24:25 would conflict with the first law. And three, a robot must protect its own existence as long

02:24:32 as such protection does not conflict with the first or the second laws.

02:24:38 Thank you for listening.

02:24:39 I hope to see you next time.