Transcript
00:00:00 The following is a conversation with Chris Urmson.
00:00:03 He was the CTO of the Google self driving car team,
00:00:06 a key engineer and leader behind the Carnegie Mellon
00:00:08 University autonomous vehicle entries in the DARPA Grand
00:00:12 Challenges and the winner of the DARPA Urban Challenge.
00:00:16 Today, he’s the CEO of Aurora Innovation, an autonomous
00:00:20 vehicle software company.
00:00:21 He started with Sterling Anderson,
00:00:23 who was the former director of Tesla Autopilot,
00:00:25 and Drew Bagnell, Uber’s former autonomy and perception lead.
00:00:30 Chris is one of the top roboticists and autonomous
00:00:32 vehicle experts in the world, and a longtime voice
00:00:36 of reason in a space that is shrouded
00:00:38 in both mystery and hype.
00:00:41 He both acknowledges the incredible challenges
00:00:43 involved in solving the problem of autonomous driving
00:00:46 and is working hard to solve it.
00:00:49 This is the Artificial Intelligence podcast.
00:00:52 If you enjoy it, subscribe on YouTube,
00:00:54 give it five stars on iTunes, support it on Patreon,
00:00:57 or simply connect with me on Twitter
00:00:59 at Lex Fridman, spelled F R I D M A N.
00:01:03 And now, here’s my conversation with Chris Urmson.
00:01:09 You were part of both the DARPA Grand Challenge
00:01:11 and the DARPA Urban Challenge teams
00:01:13 at CMU with Red Whittaker.
00:01:17 What technical or philosophical things
00:01:19 have you learned from these races?
00:01:22 I think the high order bit was that it could be done.
00:01:26 I think that was the thing that was
00:01:30 incredible about the first of the Grand Challenges,
00:01:34 that I remember I was a grad student at Carnegie Mellon,
00:01:38 and there was kind of this dichotomy of it
00:01:45 seemed really hard, so that would
00:01:46 be cool and interesting.
00:01:48 But at the time, we were the only robotics institute around,
00:01:52 and so if we went into it and fell on our faces,
00:01:55 that would be embarrassing.
00:01:58 So I think just having the will to go do it,
00:02:01 to try to do this thing that at the time
00:02:02 was marked as darn near impossible,
00:02:05 and then after a couple of tries,
00:02:06 be able to actually make it happen,
00:02:08 I think that was really exciting.
00:02:12 But at which point did you believe it was possible?
00:02:15 Did you from the very beginning?
00:02:16 Did you personally?
00:02:18 Because you’re one of the lead engineers.
00:02:19 You actually had to do a lot of the work.
00:02:21 Yeah, I was the technical director there,
00:02:23 and did a lot of the work, along with a bunch
00:02:26 of other really good people.
00:02:28 Did I believe it could be done?
00:02:29 Yeah, of course.
00:02:31 Why would you go do something you thought
00:02:32 was completely impossible?
00:02:34 We thought it was going to be hard.
00:02:36 We didn’t know how we were going to be able to do it.
00:02:37 We didn’t know if we’d be able to do it the first time.
00:02:42 Turns out we couldn’t.
00:02:45 That, yeah, I guess you have to.
00:02:48 I think there’s a certain benefit to naivete, right?
00:02:52 That if you don’t know how hard something really is,
00:02:55 you try different things, and it gives you an opportunity
00:02:59 that others who are wiser maybe don’t have.
00:03:04 What were the biggest pain points?
00:03:05 Mechanical, sensors, hardware, software,
00:03:08 algorithms for mapping, localization,
00:03:11 just general perception, control?
00:03:13 Like hardware, software, first of all?
00:03:15 I think that’s the joy of this field, is that it’s all hard
00:03:20 and that you have to be good at each part of it.
00:03:25 So for the Grand Challenge, if I look back at it from today,
00:03:32 it should be easy today, that it was a static world.
00:03:38 There weren’t other actors moving through it,
00:03:40 is what that means.
00:03:42 It was out in the desert, so you get really good GPS.
00:03:47 So that went, and we could map it roughly.
00:03:51 And so in retrospect now, it’s within the realm of things
00:03:55 we could do back then.
00:03:57 Just actually getting the vehicle and the,
00:03:59 there’s a bunch of engineering work
00:04:00 to get the vehicle so that we could control it and drive it.
00:04:04 That’s still a pain today, but it was even more so back then.
00:04:09 And then the uncertainty of exactly what they wanted us to do
00:04:14 was part of the challenge as well.
00:04:17 Right, you didn’t actually know the track heading in here.
00:04:19 You knew approximately, but you didn’t actually
00:04:21 know the route that was going to be taken.
00:04:23 That’s right, we didn’t know the route.
00:04:24 We didn’t even really, the way the rules had been described,
00:04:28 you had to kind of guess.
00:04:29 So if you think back to that challenge,
00:04:33 the idea was that the government would give us,
00:04:36 the DARPA would give us a set of waypoints
00:04:40 and kind of the width that you had to stay within
00:04:43 between the line that went between each of those waypoints.
00:04:46 And so the most devious thing they could have done
00:04:49 is set a kilometer wide corridor across a field
00:04:53 of scrub brush and rocks and said, go figure it out.
00:04:58 Fortunately, it really, it turned into basically driving
00:05:01 along a set of trails, which is much more relevant
00:05:05 to the application they were looking for.
00:05:08 But no, it was a hell of a thing back in the day.
00:05:12 So the legend, Red, was kind of leading that effort
00:05:16 in terms of just broadly speaking.
00:05:19 So you’re a leader now.
00:05:22 What have you learned from Red about leadership?
00:05:25 I think there’s a couple things.
00:05:26 One is go and try those really hard things.
00:05:31 That’s where there is an incredible opportunity.
00:05:34 I think the other big one, though,
00:05:36 is to see people for who they can be, not who they are.
00:05:41 It’s one of the things that I actually,
00:05:43 one of the deepest lessons I learned from Red
00:05:46 was that he would look at undergraduates
00:05:50 or graduate students and empower them to be leaders,
00:05:56 to have responsibility, to do great things
00:06:00 that I think another person might look at them
00:06:04 and think, oh, well, that’s just an undergraduate student.
00:06:06 What could they know?
00:06:08 And so I think that kind of trust but verify,
00:06:12 have confidence in what people can become,
00:06:14 I think is a really powerful thing.
00:06:16 So through that, let’s just fast forward through the history.
00:06:20 Can you maybe talk through the technical evolution
00:06:24 of autonomous vehicle systems
00:06:26 from the first two Grand Challenges to the Urban Challenge
00:06:29 to today, are there major shifts in your mind
00:06:33 or is it the same kind of technology just made more robust?
00:06:37 I think there’s been some big, big steps.
00:06:40 So for the Grand Challenge,
00:06:43 the real technology that unlocked that was HD mapping.
00:06:51 Prior to that, a lot of the off road robotics work
00:06:55 had been done without any real prior model
00:06:58 of what the vehicle was going to encounter.
00:07:01 And so that innovation, the fact that we could get
00:07:05 decimeter resolution models, was really a big deal.
00:07:13 And that allowed us to kind of bound the complexity
00:07:18 of the driving problem the vehicle had
00:07:19 and allowed it to operate at speed
00:07:21 because we could assume things about the environment
00:07:23 that it was going to encounter.
00:07:25 So that was the big step there.
00:07:31 For the Urban Challenge,
00:07:37 one of the big technological innovations there
00:07:39 was the multi beam LIDAR
00:07:41 and being able to generate high resolution,
00:07:45 mid to long range 3D models of the world
00:07:48 and use that for understanding the world around the vehicle.
00:07:53 And that was really kind of a game changing technology.
00:07:58 In parallel with that,
00:08:00 we saw a bunch of other technologies
00:08:04 that had been kind of converging
00:08:06 have their day in the sun.
00:08:08 So Bayesian estimation had been,
00:08:12 SLAM had been a big field in robotics.
00:08:17 You would go to a conference a couple of years before that
00:08:20 and every paper would effectively have SLAM somewhere in it.
00:08:24 And so seeing that the Bayesian estimation techniques
00:08:30 play out on a very visible stage,
00:08:33 I thought that was pretty exciting to see.
00:08:38 And mostly SLAM was done based on LIDAR at that time.
00:08:41 Yeah, and in fact, we weren’t really doing SLAM per se
00:08:45 in real time because we had a model ahead of time,
00:08:47 we had a roadmap, but we were doing localization.
00:08:51 And we were using the LIDAR or the cameras
00:08:53 depending on who exactly was doing it
00:08:55 to localize to a model of the world.
00:08:57 And I thought that was a big step
00:09:00 from kind of naively trusting GPS and INS before that.
00:09:06 And again, lots of work had been going on in this field.
00:09:09 Certainly this was not doing anything
00:09:13 particularly innovative in SLAM or in localization,
00:09:16 but it was seeing that technology necessary
00:09:20 in a real application on a big stage,
00:09:21 I thought was very cool.
00:09:23 So for the urban challenge,
00:09:24 those are already maps constructed offline in general.
00:09:28 And did people do that individually,
00:09:30 did individual teams do it individually
00:09:33 so they had their own different approaches there
00:09:36 or did everybody kind of share that information
00:09:41 at least intuitively?
00:09:42 So DARPA gave all the teams a model of the world, a map.
00:09:49 And then one of the things that we had to figure out
00:09:53 back then was, and it’s still one of these things
00:09:56 that trips people up today
00:09:57 is actually the coordinate system.
00:10:00 So you get a latitude longitude
00:10:03 and to so many decimal places,
00:10:05 you don’t really care about kind of the ellipsoid
00:10:07 of the earth that’s being used.
00:10:09 But when you want to get to 10 centimeter
00:10:12 or centimeter resolution,
00:10:14 you care whether the coordinate system is NAD 83
00:10:18 or WGS 84 or these are different ways to describe
00:10:24 both the kind of non-sphericalness of the earth,
00:10:26 but also kind of the, I think,
00:10:31 I can’t remember which one,
00:10:32 the tectonic shifts that are happening
00:10:33 and how to transform the global datum as a function of that.
00:10:37 So getting a map and then actually matching it to reality
00:10:41 to centimeter resolution, that was kind of interesting
00:10:42 and fun back then.
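To put some rough numbers on the datum point above, here is a toy sketch (my own illustrative figures, not anything from the conversation) of why both the decimal precision of a latitude/longitude and the choice of datum start to matter at centimeter resolution:

```python
# Rough illustration (invented numbers, not from the interview) of why
# datum choice and lat/lon precision matter at centimeter scale.

METERS_PER_DEG_LAT = 111_320.0  # approximate meters per degree of latitude

def lat_error_meters(decimal_places: int) -> float:
    """Worst-case position error from rounding latitude to N decimals."""
    return 0.5 * 10 ** (-decimal_places) * METERS_PER_DEG_LAT

# Five decimals of latitude is only about half-meter accuracy;
# seven decimals gets you to roughly half a centimeter.
print(f"{lat_error_meters(5):.4f} m")  # 0.5566 m
print(f"{lat_error_meters(7):.4f} m")  # 0.0056 m

# NAD 83 and WGS 84 differ by on the order of a meter in North America,
# and the offset drifts with tectonic motion, so at centimeter
# resolution the datum choice dominates the rounding error above.
assumed_datum_offset_m = 1.5  # illustrative figure, not surveyed data
print(assumed_datum_offset_m > lat_error_meters(7))  # True
```

At consumer-GPS accuracy the datum offset is invisible, which is why, as he says, it only trips people up once they try to match a map to reality at centimeter resolution.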
00:10:44 So how much work was the perception doing there?
00:10:46 So how much were you relying on localization based on maps
00:10:52 without using perception to register to the maps?
00:10:55 And I guess the question is how advanced
00:10:58 was perception at that point?
00:10:59 It’s certainly behind where we are today, right?
00:11:01 We’re more than a decade since the urban challenge.
00:11:05 But the core of it was there.
00:11:08 That we were tracking vehicles.
00:11:13 We had to do that at 100 plus meter range
00:11:15 because we had to merge with other traffic.
00:11:18 We were using, again, Bayesian estimates
00:11:21 for state of these vehicles.
00:11:23 We had to deal with a bunch of the problems
00:11:25 that you think of today,
00:11:26 of predicting where that vehicle’s going to be
00:11:29 a few seconds into the future.
00:11:31 We had to deal with the fact
00:11:32 that there were multiple hypotheses for that
00:11:35 because a vehicle at an intersection might be going right
00:11:37 or it might be going straight
00:11:38 or it might be making a left turn.
00:11:41 And we had to deal with the challenge of the fact
00:11:44 that our behavior was going to impact the behavior
00:11:47 of that other operator.
00:11:48 And we did a lot of that in relatively naive ways,
00:11:53 but it kind of worked.
00:11:54 Still had to have some kind of solution.
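The multiple-hypothesis tracking described above can be sketched with a discrete Bayesian update over maneuver hypotheses. This is a toy illustration in that spirit (hypothetical likelihood numbers, not the CMU team's actual code):

```python
# Toy sketch of tracking multiple maneuver hypotheses for a vehicle at
# an intersection with a discrete Bayesian update. All likelihood
# values are invented for illustration.

HYPOTHESES = ["left", "straight", "right"]

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """One Bayesian update: posterior is proportional to likelihood * prior."""
    unnorm = {h: prior[h] * likelihood[h] for h in HYPOTHESES}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Start uninformed: each maneuver equally likely.
belief = {h: 1.0 / 3.0 for h in HYPOTHESES}

# Hypothetical sensor evidence: the tracked car drifts toward the right
# lane edge, which is most consistent with a right turn.
drift_right = {"left": 0.1, "straight": 0.3, "right": 0.6}

for _ in range(3):  # three consecutive observations of the same drift
    belief = bayes_update(belief, drift_right)

print(max(belief, key=belief.get))  # "right" dominates after repeated evidence
```

The real systems coupled this with motion models and the feedback loop he mentions, where the robot's own behavior changes what the other driver does, but the core idea of carrying several weighted futures forward is the same.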
00:11:57 And so where does that, 10 years later,
00:11:59 where does that take us today
00:12:01 from that artificial city construction
00:12:04 to real cities to the urban environment?
00:12:07 Yeah, I think the biggest thing
00:12:09 is that the actors are truly unpredictable.
00:12:15 That most of the time, the drivers on the road,
00:12:18 the other road users are out there behaving well,
00:12:24 but every once in a while they’re not.
00:12:27 The variety of other vehicles is, you have all of them.
00:12:32 In terms of behavior, in terms of perception, or both?
00:12:35 Both.
00:12:38 Back then we didn’t have to deal with cyclists,
00:12:40 we didn’t have to deal with pedestrians,
00:12:42 didn’t have to deal with traffic lights.
00:12:46 The scale over which you have to operate now
00:12:49 is much larger than the air base
00:12:51 that we were thinking about back then.
00:12:52 So what, easy question,
00:12:56 what do you think is the hardest part about driving?
00:12:59 Easy question.
00:13:00 Yeah, no, I’m joking.
00:13:02 I’m sure nothing really jumps out at you as one thing,
00:13:07 but in the jump from the urban challenge to the real world,
00:13:12 is there something that’s a particular,
00:13:15 you foresee as very serious, difficult challenge?
00:13:18 I think the most fundamental difference
00:13:21 is that we’re doing it for real.
00:13:26 That in that environment,
00:13:28 it was both a limited complexity environment
00:13:31 because certain actors weren’t there,
00:13:33 because the roads were maintained,
00:13:35 there were barriers keeping people separate
00:13:37 from robots at the time,
00:13:40 and it only had to work for 60 miles.
00:13:43 Which, looking at it from 2006,
00:13:46 it had to work for 60 miles, right?
00:13:48 Looking at it from now,
00:13:51 we want things that will go and drive
00:13:53 for half a million miles,
00:13:57 and it’s just a different game.
00:14:00 So how important,
00:14:03 you said LiDAR came into the game early on,
00:14:06 and it’s really the primary driver
00:14:07 of autonomous vehicles today as a sensor.
00:14:10 So how important is the role of LiDAR
00:14:11 in the sensor suite in the near term?
00:14:14 So I think it’s essential.
00:14:17 I believe, but I also believe that cameras are essential,
00:14:20 and I believe the radar is essential.
00:14:22 I think that you really need to use
00:14:26 the composition of data from these different sensors
00:14:28 if you want the thing to really be robust.
00:14:32 The question I wanna ask,
00:14:34 let’s see if we can untangle it,
00:14:35 is what are your thoughts on the Elon Musk
00:14:39 provocative statement that LiDAR is a crutch,
00:14:42 that it’s a kind of, I guess, growing pains,
00:14:47 and that much of the perception task
00:14:49 can be done with cameras?
00:14:52 So I think it is undeniable
00:14:55 that people walk around without lasers in their foreheads,
00:14:59 and they can get into vehicles and drive them,
00:15:01 and so there’s an existence proof
00:15:05 that you can drive using passive vision.
00:15:10 No doubt, can’t argue with that.
00:15:12 In terms of sensors, yeah, so there’s proof.
00:15:14 Yeah, in terms of sensors, right?
00:15:16 So there’s an example that we all go do it,
00:15:20 many of us every day.
00:15:21 In terms of LiDAR being a crutch, sure.
00:15:28 But in the same way that the combustion engine
00:15:33 was a crutch on the path to an electric vehicle,
00:15:35 in the same way that any technology ultimately gets
00:15:40 replaced by some superior technology in the future,
00:15:44 and really the way that I look at this
00:15:47 is that the way we get around on the ground,
00:15:51 the way that we use transportation is broken,
00:15:55 and that we have this, I think the number I saw this morning,
00:15:59 37,000 Americans killed last year on our roads,
00:16:04 and that’s just not acceptable.
00:16:05 And so any technology that we can bring to bear
00:16:09 that accelerates this self driving technology
00:16:12 coming to market and saving lives
00:16:14 is technology we should be using.
00:16:18 And it feels just arbitrary to say,
00:16:20 well, I’m not okay with using lasers
00:16:26 because that’s whatever,
00:16:27 but I am okay with using an eight megapixel camera
00:16:30 or a 16 megapixel camera.
00:16:32 These are just bits of technology,
00:16:34 and we should be taking the best technology
00:16:36 from the tool bin that allows us to go and solve a problem.
00:16:41 The question I often talk to, well, obviously you do as well,
00:16:45 to sort of automotive companies,
00:16:48 and if there’s one word that comes up more often
00:16:51 than anything, it’s cost, and trying to drive costs down.
00:16:55 So while it’s true that it’s a tragic number, the 37,000,
00:17:01 the question is, and I’m not the one asking this question
00:17:04 because I hate this question,
00:17:05 but we want to find the cheapest sensor suite
00:17:09 that creates a safe vehicle.
00:17:13 So in that uncomfortable trade off,
00:17:18 do you foresee LiDAR coming down in cost in the future,
00:17:23 or do you see a day where level four autonomy
00:17:26 is possible without LiDAR?
00:17:29 I see both of those, but it’s really a matter of time.
00:17:32 And I think really, maybe I would talk to the question
00:17:36 you asked about the cheapest sensor.
00:17:37 I don’t think that’s actually what you want.
00:17:40 What you want is a sensor suite that is economically viable.
00:17:45 And then after that, everything is about margin
00:17:49 and driving costs out of the system.
00:17:52 What you also want is a sensor suite that works.
00:17:55 And so it’s great to tell a story about
00:17:59 how it would be better to have a self driving system
00:18:03 with a $50 sensor instead of a $500 sensor.
00:18:08 But if the $500 sensor makes it work
00:18:10 and the $50 sensor doesn’t work, who cares?
00:18:15 So long as you can actually have an economic opportunity,
00:18:20 there’s an economic opportunity there.
00:18:21 And the economic opportunity is important
00:18:23 because that’s how you actually have a sustainable business
00:18:27 and that’s how you can actually see this come to scale
00:18:31 and be out in the world.
00:18:32 And so when I look at LiDAR,
00:18:35 I see a technology that has no underlying
00:18:38 fundamental expense to it.
00:18:42 It’s going to be more expensive than an imager
00:18:46 because CMOS processes or fab processes
00:18:51 are dramatically more scalable than mechanical processes.
00:18:56 But we still should be able to drive costs down
00:18:58 substantially on that side.
00:19:00 And then I also do think that with the right business model
00:19:05 you can absorb more,
00:19:07 certainly more cost on the bill of materials.
00:19:09 Yeah, if the sensor suite works, extra value is provided,
00:19:12 thereby you don’t need to drive costs down to zero.
00:19:15 It’s the basic economics.
00:19:17 You’ve talked about your intuition
00:19:18 that level two autonomy is problematic
00:19:22 because of the human factors of vigilance
00:19:25 decrement, complacency, over trust and so on,
00:19:28 just us being human.
00:19:29 We over trust the system,
00:19:31 we start doing even more so partaking
00:19:34 in the secondary activities like smartphones and so on.
00:19:38 Have your views evolved on this point in either direction?
00:19:43 Can you speak to it?
00:19:44 So, and I want to be really careful
00:19:47 because sometimes this gets twisted in a way
00:19:50 that I certainly didn’t intend.
00:19:53 So active safety systems are a really important technology
00:19:58 that we should be pursuing and integrating into vehicles.
00:20:02 And there’s an opportunity in the near term
00:20:04 to reduce accidents, reduce fatalities,
00:20:06 and we should be pushing on that.
00:20:11 Level two systems are systems
00:20:14 where the vehicle is controlling two axes.
00:20:18 So braking slash throttle, and steering.
00:20:23 And I think there are variants of level two systems
00:20:25 that are supporting the driver.
00:20:27 That absolutely we should encourage to be out there.
00:20:31 Where I think there’s a real challenge
00:20:32 is in the human factors part around this
00:20:37 and the misconception from the public
00:20:41 around the capability set that that enables
00:20:43 and the trust that they should have in it.
00:20:46 And that is where I kind of,
00:20:50 I’m actually incrementally more concerned
00:20:52 around level three systems
00:20:54 and how exactly a level two system is marketed and delivered
00:20:58 and how much effort people have put into those human factors.
00:21:01 So I still believe several things around this.
00:21:05 One is people will overtrust the technology.
00:21:09 We’ve seen over the last few weeks
00:21:11 a spate of people sleeping in their Tesla.
00:21:14 I watched an episode last night of Trevor Noah
00:21:19 talking about this and him,
00:21:23 this is a smart guy who has a lot of resources
00:21:26 at his disposal describing a Tesla as a self driving car
00:21:30 and that why shouldn’t people be sleeping in their Tesla?
00:21:33 And it’s like, well, because it’s not a self driving car
00:21:36 and it is not intended to be
00:21:38 and these people will almost certainly die at some point
00:21:46 or hurt other people.
00:21:48 And so we need to really be thoughtful
00:21:50 about how that technology is described
00:21:51 and brought to market.
00:21:54 I also think that because of the economic challenges
00:21:59 we were just talking about,
00:22:01 that these level two driver assistance systems,
00:22:05 that technology path will diverge
00:22:07 from the technology path that we need to be on
00:22:10 to actually deliver truly self driving vehicles,
00:22:14 ones where you can get in it and drive it.
00:22:16 Can get in it and sleep and have the equivalent
00:22:20 or better safety than a human driver behind the wheel.
00:22:24 Because again, the economics are very different
00:22:28 in those two worlds and so that leads
00:22:30 to divergent technology.
00:22:32 So you just don’t see the economics
00:22:34 of gradually increasing from level two
00:22:38 and doing so quickly enough
00:22:41 to where it doesn’t cause safety, critical safety concerns.
00:22:44 You believe that it needs to diverge at this point
00:22:48 into basically different routes.
00:22:50 And really that comes back to what are those L2
00:22:55 and L1 systems doing?
00:22:57 And they are driver assistance functions
00:22:59 where the people that are marketing that responsibly
00:23:04 are being very clear and putting human factors in place
00:23:08 such that the driver is actually responsible for the vehicle
00:23:12 and that the technology is there to support the driver.
00:23:15 And the safety cases that are built around those
00:23:19 are dependent on that driver attention and attentiveness.
00:23:24 And at that point, you can kind of give up
00:23:29 to some degree for economic reasons,
00:23:31 you can give up on say false negatives.
00:23:34 And the way to think about this
00:23:36 is for a forward collision mitigation braking system,
00:23:39 if half the time the driver missed a vehicle
00:23:43 in front of it, it hit the brakes
00:23:46 and brought the vehicle to a stop,
00:23:47 that would be an incredible, incredible advance
00:23:51 in safety on our roads, right?
00:23:53 That would be equivalent to seat belts.
00:23:55 But it would mean that if that vehicle
00:23:56 wasn’t being monitored, it would hit one out of two cars.
00:24:00 And so economically, that’s a perfectly good solution
00:24:05 for a driver assistance system.
00:24:06 What you should do at that point,
00:24:07 if you can get it to work 50% of the time,
00:24:09 is drive the cost out of that
00:24:10 so you can get it on as many vehicles as possible.
00:24:13 But driving the cost out of it
00:24:14 doesn’t drive up performance on the false negative case.
00:24:18 And so you’ll continue to not have a technology
00:24:21 that could really be available for a self driven vehicle.
00:24:25 So clearly the communication,
00:24:28 and this probably applies to level four vehicles as well,
00:24:31 the marketing and communication
00:24:34 of what the technology is actually capable of,
00:24:37 how hard it is, how easy it is,
00:24:38 all that kind of stuff is highly problematic.
00:24:41 So say everybody in the world was perfectly communicated
00:24:45 and were made to be completely aware
00:24:48 of every single technology out there,
00:24:50 what it’s able to do.
00:24:52 What’s your intuition?
00:24:54 And now we’re maybe getting into philosophical ground.
00:24:56 Is it possible to have a level two vehicle
00:25:00 where we don’t over trust it?
00:25:04 I don’t think so.
00:25:05 If people truly understood the risks and internalized it,
00:25:11 then sure, you could do that safely.
00:25:14 But that’s a world that doesn’t exist.
00:25:16 The people are going to,
00:25:18 if the facts are put in front of them,
00:25:20 they’re gonna then combine that with their experience.
00:25:24 And let’s say they’re using an L2 system
00:25:28 and they go up and down the 101 every day
00:25:30 and they do that for a month.
00:25:32 And it just worked every day for a month.
00:25:36 Like that’s pretty compelling at that point,
00:25:39 just even if you know the statistics,
00:25:41 you’re like, well, I don’t know,
00:25:43 maybe there’s something funny about those.
00:25:44 Maybe they’re driving in difficult places.
00:25:46 Like I’ve seen it with my own eyes, it works.
00:25:49 And the problem is that that sample size that they have,
00:25:52 so it’s 30 miles up and down,
00:25:53 so 60 miles times 30 days,
00:25:56 so 1,800 miles.
00:25:58 Like that’s a drop in the bucket
00:26:03 compared to the, what, 85 million miles between fatalities.
00:26:07 And so they don’t really have a true estimate
00:26:11 based on their personal experience of the real risks,
00:26:14 but they’re gonna trust it anyway,
00:26:15 because it’s hard not to.
00:26:16 It worked for a month, what’s gonna change?
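The back-of-the-envelope arithmetic above can be made explicit. As a quick sketch (using the figures from the conversation, and modeling fatal crashes as a simple Poisson process, which is my assumption, not his):

```python
import math

# A month of commuting is a tiny sample relative to the miles driven
# between fatalities, so "it worked for a month" carries almost no
# statistical information about the true risk.

miles_per_day = 60          # 30 miles each way, as in the conversation
days = 30
personal_miles = miles_per_day * days   # 1,800 miles
miles_per_fatality = 85_000_000         # figure quoted above

print(personal_miles)  # 1800

# Assuming fatal events arrive as a Poisson process at the human-driver
# rate, the chance of seeing even one in that personal sample:
expected_events = personal_miles / miles_per_fatality
p_at_least_one = 1 - math.exp(-expected_events)
print(f"{p_at_least_one:.2e}")  # about 2e-05
```

So a flawless month is exactly what you would expect even from a system far worse than a human driver, which is why personal experience drifts toward overtrust.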
00:26:18 So even if you start with a perfect understanding of the system,
00:26:21 your own experience will make it drift.
00:26:24 I mean, that’s a big concern.
00:26:25 Over a year, over two years even,
00:26:28 it doesn’t have to be months.
00:26:29 And I think that as this technology moves
00:26:32 from what I would say is kind of the more technology savvy
00:26:37 ownership group to the mass market,
00:26:42 you may be able to have some of those folks
00:26:44 who are really familiar with technology,
00:26:46 they may be able to internalize it better.
00:26:48 And your kind of immunization
00:26:50 against this kind of false risk assessment
00:26:53 might last longer,
00:26:54 but as folks who aren’t as savvy about that
00:26:58 read the material and they compare that
00:27:00 to their personal experience,
00:27:02 I think there it’s going to move more quickly.
00:27:08 So your work, the program that you’ve created at Google
00:27:11 and now at Aurora is focused more on the second path
00:27:16 of creating full autonomy.
00:27:18 So it’s such a fascinating,
00:27:20 I think it’s one of the most interesting AI problems
00:27:24 of the century, right?
00:27:25 It’s, I just talked to a lot of people,
00:27:28 just regular people, I don’t know,
00:27:29 my mom, about autonomous vehicles,
00:27:31 and you begin to grapple with ideas
00:27:34 of giving control of your life over to a machine.
00:27:38 It’s philosophically interesting,
00:27:40 it’s practically interesting.
00:27:41 So let’s talk about safety.
00:27:43 How do you think we demonstrate,
00:27:46 you’ve spoken about metrics in the past,
00:27:47 how do you think we demonstrate to the world
00:27:51 that an autonomous vehicle, an Aurora system is safe?
00:27:56 This is one where it’s difficult
00:27:57 because there isn’t a soundbite answer.
00:27:59 That we have to show a combination of work
00:28:05 that was done diligently and thoughtfully,
00:28:08 and this is where something like a functional safety process
00:28:10 is part of that.
00:28:11 It’s like here’s the way we did the work,
00:28:15 that means that we were very thorough.
00:28:17 So if you believe that what we said
00:28:20 about this is the way we did it,
00:28:21 then you can have some confidence
00:28:22 that we were thorough in the engineering work
00:28:25 we put into the system.
00:28:26 And then on top of that,
00:28:28 to kind of demonstrate that we weren’t just thorough,
00:28:32 we were actually good at what we did,
00:28:35 there’ll be a kind of a collection of evidence
00:28:38 in terms of demonstrating that the capabilities
00:28:40 worked the way we thought they did,
00:28:42 statistically and to whatever degree
00:28:45 we can demonstrate that,
00:28:48 both in some combination of simulations,
00:28:50 some combination of unit testing
00:28:53 and decomposition testing,
00:28:54 and then some part of it will be on road data.
00:28:58 And I think the way we’ll ultimately
00:29:02 convey this to the public
00:29:04 is there’ll be clearly some conversation
00:29:06 with the public about it,
00:29:08 but we’ll kind of invoke the trusted nodes
00:29:12 and that we’ll spend more time
00:29:13 being able to go into more depth with folks like NHTSA
00:29:17 and other federal and state regulatory bodies
00:29:19 and kind of given that they are
00:29:22 operating in the public interest and they’re trusted,
00:29:26 that if we can show enough work to them
00:29:28 that they’re convinced,
00:29:30 then I think we’re in a pretty good place.
00:29:33 That means you work with people
00:29:35 that are essentially experts at safety
00:29:36 to try to discuss and show.
00:29:39 Do you think, the answer’s probably no,
00:29:41 but just in case,
00:29:42 do you think there exists a metric?
00:29:44 So currently people have been using
00:29:46 number of disengagements.
00:29:48 And it quickly turns into a marketing scheme
00:29:50 where you sort of alter the experiments you run to adjust it.
00:29:54 I think you’ve spoken that you don’t like.
00:29:56 Don’t love it.
00:29:57 No, in fact, I was on the record telling DMV
00:29:59 that I thought this was not a great metric.
00:30:01 Do you think it’s possible to create a metric,
00:30:05 a number that could demonstrate safety
00:30:09 outside of fatalities?
00:30:12 So I do.
00:30:13 And I think that it won’t be just one number.
00:30:17 So as we are internally grappling with this,
00:30:21 and at some point we’ll be able to talk
00:30:23 more publicly about it,
00:30:25 is how do we think about human performance
00:30:28 in different tasks,
00:30:29 say detecting traffic lights
00:30:32 or safely making a left turn across traffic?
00:30:37 And what do we think the failure rates are
00:30:40 for those different capabilities for people?
00:30:42 And then demonstrating to ourselves
00:30:44 and then ultimately folks in the regulatory role
00:30:48 and then ultimately the public
00:30:50 that we have confidence that our system
00:30:52 will work better than that.
00:30:54 And so these individual metrics
00:30:57 will kind of tell a compelling story ultimately.
00:31:01 I do think at the end of the day
00:31:03 what we care about in terms of safety
00:31:06 is life saved and injuries reduced.
00:31:12 And then ultimately kind of casualty dollars
00:31:16 that people aren’t having to pay to get their car fixed.
00:31:19 And I do think that in aviation
00:31:22 they look at a kind of an event pyramid
00:31:25 where a crash is at the top of that
00:31:28 and that’s the worst event obviously
00:31:30 and then there’s injuries and near miss events and whatnot
00:31:34 and violation of operating procedures
00:31:37 and you kind of build a statistical model
00:31:40 of the relevance of the low severity things
00:31:44 or the high severity things.
00:31:45 And I think that’s something
00:31:46 where we’ll be able to look at as well
00:31:48 because an event per 85 million miles
00:31:51 is statistically a difficult thing
00:31:54 even at the scale of the U.S.
00:31:56 to kind of compare directly.
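The statistical difficulty of the one-event-per-85-million-miles figure can be made concrete with a back-of-the-envelope calculation. The 85-million number is taken from the conversation; the "rule of three" is a standard rough bound for rare events:

```python
# If a fleet drives N miles with zero fatalities, the "rule of three"
# gives an approximate 95% upper confidence bound on the fatality
# rate of 3 / N events per mile.
human_rate = 1 / 85_000_000  # roughly one fatality per 85M miles, as discussed

def miles_needed_to_beat(rate: float) -> float:
    """Miles of fatality-free driving needed before the 95% upper
    bound on the fleet's own rate drops below the given rate."""
    return 3 / rate

miles = miles_needed_to_beat(human_rate)
print(f"{miles:,.0f} fatality-free miles")  # 255,000,000 miles
```

That is why the event pyramid matters: near misses and procedure violations happen orders of magnitude more often than fatalities, so a statistical model tying low-severity events to high-severity ones lets you accumulate evidence far faster than waiting hundreds of millions of miles.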
00:31:59 And an event, a fatality, that’s connected
00:32:02 to an autonomous vehicle is, at least currently,
00:32:07 significantly magnified
00:32:09 in the amount of attention it gets.
00:32:12 So that speaks to public perception.
00:32:15 I think the most popular topic
00:32:16 about autonomous vehicles in the public
00:32:19 is the trolley problem formulation, right?
00:32:23 Which has, let’s not get into that too much
00:32:27 but is misguided in many ways.
00:32:29 But it speaks to the fact that people are grappling
00:32:32 with this idea of giving control over to a machine.
00:32:36 So how do you win the hearts and minds of the people
00:32:41 that autonomy is something that could be a part
00:32:44 of their lives?
00:32:45 I think you let them experience it, right?
00:32:47 I think that’s right.
00:32:50 I think people should be skeptical.
00:32:52 I think people should ask questions.
00:32:55 I think they should doubt
00:32:57 because this is something new and different.
00:33:00 They haven’t touched it yet.
00:33:01 And I think that’s perfectly reasonable.
00:33:03 And, but at the same time,
00:33:07 it’s clear there’s an opportunity to make the road safer.
00:33:09 It’s clear that we can improve access to mobility.
00:33:12 It’s clear that we can reduce the cost of mobility.
00:33:16 And that once people try that
00:33:19 and understand that it’s safe
00:33:22 and are able to use it in their daily lives,
00:33:24 I think it’s one of these things
00:33:25 that will just be obvious.
00:33:28 And I’ve seen this practically in demonstrations
00:33:32 that I’ve given where I’ve had people come in
00:33:35 and they’re very skeptical.
00:33:38 Again, in a vehicle, my favorite one
00:33:40 is taking somebody out on the freeway
00:33:42 and we’re on the 101 driving at 65 miles an hour.
00:33:46 And after 10 minutes, they kind of turn and ask,
00:33:48 is that all it does?
00:33:49 And you’re like, it’s a self driving car.
00:33:52 I’m not sure exactly what you thought it would do, right?
00:33:54 But it becomes mundane,
00:33:58 which is exactly what you want a technology
00:34:01 like this to be, right?
00:34:02 We don’t really, when I turn the light switch on in here,
00:34:07 I don’t think about the complexity of those electrons
00:34:12 being pushed down a wire
00:34:14 from wherever it was generated.
00:34:15 It’s like, I just get annoyed if it doesn’t work, right?
00:34:19 And what I value is the fact
00:34:21 that I can do other things in this space.
00:34:23 I can see my colleagues.
00:34:24 I can read stuff on paper.
00:34:26 I can not be afraid of the dark.
00:34:30 And I think that’s what we want this technology to be like
00:34:33 is it’s in the background
00:34:34 and people get to have those life experiences
00:34:37 and do so safely.
00:34:38 So putting this technology in the hands of people
00:34:42 speaks to scale of deployment, right?
00:34:46 So, the dreaded question about the future,
00:34:50 because nobody can predict the future,
00:34:53 but just maybe speak poetically
00:34:57 about when do you think we’ll see a large scale deployment
00:35:00 of autonomous vehicles, 10,000, those kinds of numbers?
00:35:06 We’ll see that within 10 years.
00:35:09 I’m pretty confident.
00:35:14 What’s an impressive scale?
00:35:16 What moment, so you’ve done the DARPA challenge
00:35:19 where there’s one vehicle.
00:35:20 At which moment does it become, wow, this is serious scale?
00:35:23 So I think the moment it gets serious
00:35:26 is when we really do have a driverless vehicle
00:35:32 operating on public roads
00:35:35 and that we can do that kind of continuously.
00:35:37 Without a safety driver.
00:35:38 Without a safety driver in the vehicle.
00:35:40 I think at that moment,
00:35:41 we’ve kind of crossed the zero to one threshold.
00:35:45 And then it is about how do we continue to scale that?
00:35:50 How do we build the right business models?
00:35:53 How do we build the right customer experience around it
00:35:56 so that it is actually a useful product out in the world?
00:36:00 And I think that is really,
00:36:03 at that point it moves from
00:36:05 what is this kind of mixed science engineering project
00:36:09 into engineering and commercialization
00:36:12 and really starting to deliver on the value
00:36:15 that we all see here and actually making that real in the world.
00:36:20 What do you think that deployment looks like?
00:36:22 Where do we first see the inkling of no safety driver,
00:36:26 one or two cars here and there?
00:36:28 Is it on the highway?
00:36:29 Is it in specific routes in the urban environment?
00:36:33 I think it’s gonna be urban, suburban type environments.
00:36:37 Yeah, with Aurora, when we thought about how to tackle this,
00:36:41 it was kind of in vogue to think about trucking
00:36:46 as opposed to urban driving.
00:36:47 And again, the human intuition around this
00:36:51 is that freeways are easier to drive on
00:36:57 because everybody’s kind of going in the same direction
00:36:59 and lanes are a little wider, et cetera.
00:37:01 And I think that that intuition is pretty good,
00:37:03 except we don’t really care about most of the time.
00:37:06 We care about all of the time.
00:37:08 And when you’re driving on a freeway with a truck,
00:37:10 say 70 miles an hour,
00:37:14 and you’ve got 70,000 pound load with you,
00:37:16 that’s just an incredible amount of kinetic energy.
00:37:18 And so when that goes wrong, it goes really wrong.
00:37:22 And those challenges that you see occur more rarely,
00:37:27 so you don’t get to learn as quickly.
00:37:31 And they’re incrementally more difficult than urban driving,
00:37:34 but they’re not easier than urban driving.
00:37:37 And so I think this happens in moderate speed
00:37:41 urban environments because if two vehicles crash
00:37:45 at 25 miles per hour, it’s not good,
00:37:48 but probably everybody walks away.
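The kinetic-energy point can be checked with the standard KE = ½mv² formula. The truck weight and speeds come from the conversation; the car weight and the unit conversions are my own assumptions:

```python
# Kinetic energy KE = 0.5 * m * v^2, converting the figures
# from the conversation into SI units.
LB_TO_KG = 0.453592
MPH_TO_MS = 0.44704

def kinetic_energy_joules(weight_lb: float, speed_mph: float) -> float:
    m = weight_lb * LB_TO_KG   # mass in kilograms
    v = speed_mph * MPH_TO_MS  # speed in meters per second
    return 0.5 * m * v ** 2

truck = kinetic_energy_joules(70_000, 70)  # loaded truck at freeway speed
car = kinetic_energy_joules(3_300, 25)     # passenger car, hypothetical weight
print(f"truck: {truck/1e6:.1f} MJ, car: {car/1e6:.3f} MJ, ratio ~{truck/car:.0f}x")
```

With these numbers the truck carries roughly 15.5 MJ versus under 0.1 MJ for the car at 25 mph — over 160 times the energy, which is the "when it goes wrong, it goes really wrong" asymmetry in a single ratio.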
00:37:51 And those events where there’s the possibility
00:37:53 for that occurring happen frequently.
00:37:55 So we get to learn more rapidly.
00:37:58 We get to do that with lower risk for everyone.
00:38:02 And then we can deliver value to people
00:38:04 that need to get from one place to another.
00:38:05 And once we’ve got that solved,
00:38:08 then the freeway driving part of this just falls out.
00:38:11 But we’re able to learn more safely,
00:38:13 more quickly in the urban environment.
00:38:15 So 10 years, and then scale 20, 30 years,
00:38:18 who knows, if a sufficiently compelling experience
00:38:22 is created, it could be faster or slower.
00:38:24 Do you think there could be breakthroughs
00:38:27 and what kind of breakthroughs might there be
00:38:29 that completely change that timeline?
00:38:32 Again, not only am I asking you to predict the future,
00:38:35 I’m asking you to predict breakthroughs
00:38:37 that haven’t happened yet.
00:38:38 So what’s the, I think another way to ask that
00:38:41 would be if I could wave a magic wand,
00:38:44 what part of the system would I make work today
00:38:46 to accelerate it as quickly as possible?
00:38:52 Don’t say infrastructure, please don’t say infrastructure.
00:38:54 No, it’s definitely not infrastructure.
00:38:56 It’s really that perception forecasting capability.
00:39:00 So if tomorrow you could give me a perfect model
00:39:04 of what’s happened, what is happening
00:39:06 and what will happen for the next five seconds
00:39:10 around a vehicle on the roadway,
00:39:13 that would accelerate things pretty dramatically.
00:39:15 Are you, in terms of staying up at night,
00:39:17 are you mostly bothered by cars, pedestrians or cyclists?
00:39:21 So I worry most about the vulnerable road users
00:39:25 about the combination of cyclists and cars, right?
00:39:28 Or cyclists and pedestrians because they’re not in armor.
00:39:31 The cars, they’re bigger, they’ve got protection
00:39:36 for the people and so the ultimate risk is lower there.
00:39:41 Whereas a pedestrian or a cyclist,
00:39:43 they’re out on the road and they don’t have any protection
00:39:46 and so we need to pay extra attention to that.
00:39:49 Do you think about a very difficult technical challenge
00:39:55 of the fact that pedestrians,
00:39:58 if you try to protect pedestrians
00:40:00 by being careful and slow, they’ll take advantage of that.
00:40:04 So the game theoretic dance, does that worry you
00:40:09 of how, from a technical perspective, how we solve that?
00:40:12 Because as humans, the way we solve that
00:40:14 is kind of nudge our way through the pedestrians
00:40:17 which doesn’t feel, from a technical perspective,
00:40:20 as an appropriate algorithm.
00:40:23 But do you think about how we solve that problem?
00:40:25 Yeah, I think there’s two different concepts there.
00:40:31 So one is, am I worried that because these vehicles
00:40:35 are self driving, people will kind of step in the road
00:40:37 and take advantage of them?
00:40:38 And I’ve heard this and I don’t really believe it
00:40:43 because if I’m driving down the road
00:40:45 and somebody steps in front of me, I’m going to stop.
00:40:50 Even if I’m annoyed, I’m not gonna just drive
00:40:53 through a person standing in the road.
00:40:56 And so I think today people can take advantage of this
00:41:00 and you do see some people do it.
00:41:02 I guess there’s an incremental risk
00:41:04 because maybe they have lower confidence
00:41:05 that I’m gonna see them than they might have
00:41:07 for an automated vehicle and so maybe that shifts
00:41:10 it a little bit.
00:41:12 But I think people don’t wanna get hit by cars.
00:41:14 And so I think that I’m not that worried
00:41:17 about people walking out onto the 101
00:41:18 and creating chaos more than they would today.
00:41:24 Regarding kind of the nudging through a big stream
00:41:27 of pedestrians leaving a concert or something,
00:41:30 I think that is further down the technology pipeline.
00:41:33 I think that you’re right, that’s tricky.
00:41:36 I don’t think it’s necessarily,
00:41:40 I think the algorithm people use for this is pretty simple.
00:41:43 It’s kind of just move forward slowly
00:41:44 and if somebody’s really close then stop.
00:41:46 And I think that that probably can be replicated
00:41:50 pretty easily and particularly given that
00:41:54 you don’t do this at 30 miles an hour,
00:41:55 you do it at one, that even in those situations
00:41:59 the risk is relatively minimal.
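The "move forward slowly, and if somebody's really close then stop" behavior Chris describes can be sketched as a trivial proximity-gated speed controller. The distances and speeds here are illustrative values I've chosen, not anyone's production parameters:

```python
def creep_speed(nearest_pedestrian_m: float,
                stop_dist_m: float = 1.0,
                slow_dist_m: float = 4.0,
                creep_mps: float = 0.5) -> float:
    """Commanded speed while nudging through a crowd.

    Stop if anyone is within stop_dist_m; creep at full speed if
    everyone is beyond slow_dist_m; scale linearly in between.
    """
    if nearest_pedestrian_m <= stop_dist_m:
        return 0.0
    if nearest_pedestrian_m >= slow_dist_m:
        return creep_mps
    frac = (nearest_pedestrian_m - stop_dist_m) / (slow_dist_m - stop_dist_m)
    return creep_mps * frac

# Nobody close: full creep speed. Someone at arm's length: stop.
print(creep_speed(10.0), creep_speed(0.8))  # 0.5 0.0
```

As the conversation notes, the safety margin comes less from the controller's sophistication than from the speed regime: at roughly walking pace, even the worst case is low-energy.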
00:42:01 But it’s not something we’re thinking about
00:42:03 in any serious way.
00:42:04 And probably that’s less an algorithm problem
00:42:07 and more creating a human experience.
00:42:10 So the HCI people that create a visual display
00:42:14 so that you, as a pedestrian,
00:42:16 are pleasantly nudged out of the way, that’s an experience problem,
00:42:20 not an algorithm problem.
00:42:22 Who’s the main competitor to Aurora today?
00:42:25 And how do you outcompete them in the long run?
00:42:28 So we really focus a lot on what we’re doing here.
00:42:31 I think that, I’ve said this a few times,
00:42:34 that this is a huge difficult problem
00:42:37 and it’s great that a bunch of companies are tackling it
00:42:40 because I think it’s so important for society
00:42:42 that somebody gets there.
00:42:43 So we don’t spend a whole lot of time
00:42:49 thinking tactically about who’s out there
00:42:51 and how do we beat that person individually.
00:42:55 What are we trying to do to go faster ultimately?
00:42:59 Well part of it is the leadership team we have
00:43:02 has got pretty tremendous experience.
00:43:04 And so we kind of understand the landscape
00:43:06 and understand where the cul de sacs are to some degree
00:43:09 and we try and avoid those.
00:43:10 I think there’s a part of it,
00:43:14 just this great team we’ve built.
00:43:16 People, this is a technology and a company
00:43:19 that people believe in the mission of
00:43:22 and so it allows us to attract
00:43:23 just awesome people to go work.
00:43:26 We’ve got a culture I think that people appreciate
00:43:29 that allows them to focus,
00:43:30 allows them to really spend time solving problems.
00:43:33 And I think that keeps them energized.
00:43:35 And then we’ve invested heavily
00:43:38 in the infrastructure
00:43:43 and architectures that we think will ultimately accelerate us.
00:43:46 So because of the folks we’re able to bring in early on,
00:43:50 because of the great investors we have,
00:43:53 we don’t spend all of our time doing demos
00:43:56 and kind of leaping from one demo to the next.
00:43:58 We’ve been given the freedom to invest in
00:44:03 infrastructure to do machine learning,
00:44:05 infrastructure to pull data from our on road testing,
00:44:08 infrastructure to use that to accelerate engineering.
00:44:11 And I think that early investment
00:44:14 and continuing investment in those kind of tools
00:44:17 will ultimately allow us to accelerate
00:44:19 and do something pretty incredible.
00:44:21 Chris, beautifully put.
00:44:23 It’s a good place to end.
00:44:24 Thank you so much for talking today.
00:44:26 Thank you very much. Really enjoyed it.