Sebastian Thrun: Flying Cars, Autonomous Vehicles, and Education #59

Transcript

00:00:00 The following is a conversation with Sebastian Thrun.

00:00:03 He’s one of the greatest roboticists, computer scientists, and educators of our time.

00:00:08 He led the development of the autonomous vehicles at Stanford

00:00:11 that won the 2005 DARPA Grand Challenge and placed second in the 2007 DARPA Urban Challenge.

00:00:18 He then led the Google self driving car program, which launched the self driving car revolution.

00:00:24 He taught the popular Stanford course on artificial intelligence in 2011,

00:00:29 which was one of the first massive open online courses, or MOOCs as they’re commonly called.

00:00:35 That experience led him to co-found Udacity, an online education platform.

00:00:39 If you haven’t taken courses on it yet, I highly recommend it.

00:00:43 Their self driving car program, for example, is excellent.

00:00:47 He’s also the CEO of Kitty Hawk, a company working on building flying cars,

00:00:52 or more technically, eVTOLs, which stands for electric vertical takeoff and landing aircraft.

00:00:58 He has launched several revolutions and inspired millions of people.

00:01:02 But also, as many know, he’s just a really nice guy.

00:01:06 It was an honor and a pleasure to talk with him.

00:01:10 This is the Artificial Intelligence Podcast.

00:01:12 If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast,

00:01:17 follow it on Spotify, support it on Patreon, or simply connect with me on Twitter

00:01:21 at lexfridman, spelled F R I D M A N.

00:01:25 If you leave a review on Apple Podcast or YouTube or Twitter,

00:01:29 consider mentioning ideas, people, topics you find interesting.

00:01:32 It helps guide the future of this podcast.

00:01:35 But in general, I just love comments with kindness and thoughtfulness in them.

00:01:40 This podcast is a side project for me, as many people know,

00:01:43 but I still put a lot of effort into it.

00:01:45 So the positive words of support from an amazing community, from you, really help.

00:01:52 I recently started doing ads at the end of the introduction.

00:01:55 I’ll do one or two minutes after introducing the episode

00:01:58 and never any ads in the middle that can break the flow of the conversation.

00:02:01 I hope that works for you and doesn’t hurt the listening experience.

00:02:05 I provide timestamps for the start of the conversation that you can skip to,

00:02:09 but it helps if you listen to the ad and support this podcast

00:02:12 by trying out the product or service being advertised.

00:02:16 This show is presented by Cash App, the number one finance app in the App Store.

00:02:21 I personally use Cash App to send money to friends,

00:02:24 but you can also use it to buy, sell, and deposit Bitcoin in just seconds.

00:02:28 Cash App also has a new investing feature.

00:02:31 You can buy fractions of a stock, say $1 worth, no matter what the stock price is.

00:02:36 Brokerage services are provided by Cash App Investing,

00:02:39 a subsidiary of Square, and member SIPC.

00:02:42 I’m excited to be working with Cash App

00:02:44 to support one of my favorite organizations called FIRST,

00:02:47 best known for their FIRST Robotics and LEGO competitions.

00:02:51 They educate and inspire hundreds of thousands of students

00:02:54 in over 110 countries and have a perfect rating on Charity Navigator,

00:02:59 which means the donated money is used to maximum effectiveness.

00:03:03 When you get Cash App from the App Store or Google Play

00:03:06 and use code LEXPODCAST, you’ll get $10,

00:03:09 and Cash App will also donate $10 to FIRST,

00:03:12 which again is an organization that I’ve personally seen inspire girls and boys

00:03:16 to dream of engineering a better world.

00:03:19 And now, here’s my conversation with Sebastian Thrun.

00:03:24 You mentioned that The Matrix may be your favorite movie.

00:03:28 So let’s start with a crazy philosophical question.

00:03:32 Do you think we’re living in a simulation?

00:03:34 And in general, do you find the thought experiment interesting?

00:03:40 Define simulation, I would say.

00:03:42 Maybe we are, maybe we are not,

00:03:43 but it’s completely irrelevant to the way we should act.

00:03:47 Putting aside, for a moment,

00:03:49 the fact that it might not have any impact on how we should act as human beings,

00:03:55 for people studying theoretical physics,

00:03:57 these kinds of questions might be kind of interesting,

00:03:59 looking at the universe as an information processing system.

00:04:03 The universe is an information processing system.

00:04:05 It’s a huge physical, biological, chemical computer, there’s no question.

00:04:10 But I live here and now.

00:04:12 I care about people, I care about us.

00:04:15 What do you think it’s trying to compute?

00:04:17 I don’t think there’s an intention.

00:04:18 I think the world evolves the way it evolves.

00:04:22 And it’s beautiful, it’s unpredictable.

00:04:25 And I’m really, really grateful to be alive.

00:04:28 Spoken like a true human.

00:04:30 Which last time I checked, I was.

00:04:33 Or that, in fact, this whole conversation is just a Turing test

00:04:36 to see if indeed you are.

00:04:40 You’ve also said that one of the first programs,

00:04:42 or the first few programs you wrote were for, wait for it, a TI-57 calculator.

00:04:49 Yeah.

00:04:50 Maybe that’s early 80s.

00:04:52 We don’t want to date calculators or anything.

00:04:54 That’s early 80s, correct.

00:04:55 Yeah.

00:04:56 So if you were to place yourself back into that time, into the mindset you were in,

00:05:02 could you have predicted the evolution of computing, AI,

00:05:06 the internet technology in the decades that followed?

00:05:10 I was super fascinated by Silicon Valley, which I’d seen on television once

00:05:14 and thought, my god, this is so cool.

00:05:16 They build like DRAMs there and CPUs.

00:05:19 How cool is that?

00:05:20 And as a college student a few years later, I decided to really study

00:05:25 intelligence and study human beings.

00:05:26 And found that even back then in the 80s and 90s,

00:05:30 artificial intelligence was what fascinated me the most.

00:05:33 What was missing is that back in the day, the computers were really small.

00:05:38 The brains we could build were no bigger than a cockroach’s.

00:05:41 And cockroaches aren’t very smart.

00:05:43 So we weren’t at the scale yet where we are today.

00:05:46 Did you dream at that time to achieve the kind of scale we have today?

00:05:51 Or did that seem possible?

00:05:52 I always wanted to make robots smart.

00:05:54 And I felt it was super cool to build an artificial human.

00:05:57 And the best way to build an artificial human was to build a robot,

00:06:00 because that’s kind of the closest we could do.

00:06:03 Unfortunately, we aren’t there yet.

00:06:04 The robots today are still very brittle.

00:06:07 But it’s fascinating to study intelligence from a constructive

00:06:10 perspective when you build something.

00:06:12 To understand, you build. What do you think it takes to build an intelligent

00:06:18 system, an intelligent robot?

00:06:20 I think the biggest innovation that we’ve seen is machine learning.

00:06:23 And it’s the idea that the computers can basically teach themselves.

00:06:28 Let’s give an example.

00:06:29 I’d say everybody pretty much knows how to walk.

00:06:33 And we learn how to walk in the first year or two of our lives.

00:06:36 But no scientist has ever been able to write down the rules of human gait.

00:06:41 We don’t understand it.

00:06:42 We have it in our brains somehow.

00:06:43 We can practice it.

00:06:45 We understand it.

00:06:46 But we can’t articulate it.

00:06:47 We can’t pass it on by language.

00:06:50 And that, to me, is kind of the deficiency of today’s computer programming.

00:06:53 When you program a computer, they’re so insanely dumb that you have to give them

00:06:57 rules for every contingency.

00:06:59 Very unlike the way people learn from data and experience,

00:07:03 computers are being instructed.

00:07:05 And because it’s so hard to get this instruction set right,

00:07:07 we pay software engineers $200,000 a year.

00:07:11 Now, the most recent innovation, which has been in the making for 30,

00:07:14 40 years, is an idea that computers can find their own rules.

00:07:18 So they can learn from falling down and getting up the same way children can

00:07:21 learn from falling down and getting up.

00:07:23 And that revolution has led to a capability that’s completely unmatched.
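
The idea Thrun describes, a machine finding its own rule from examples and correcting itself after each mistake rather than being handed the rule, can be sketched with a toy perceptron. This is purely an illustrative example, not code from any of the systems discussed:

```python
# Illustrative sketch: a tiny perceptron that discovers a decision rule
# from labeled examples, updating itself after every mistake, instead of
# having a programmer write the rule down.

def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """Learn weights for a linear rule sign(w.x + b) from examples."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # learn from the mistake, like falling and getting up
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy data: an AND-like rule the program was never explicitly told.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # [-1, -1, -1, 1]
```

The program is never given the rule; it recovers it from the examples alone, which is the shift Thrun contrasts with hand-written instruction sets.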

00:07:28 Today’s computers can watch experts do their jobs, whether you’re

00:07:32 a doctor or a lawyer, pick up the regularities, learn those rules,

00:07:36 and then become as good as the best experts.

00:07:39 So the dream in the 80s of expert systems, for example, had at its core

00:07:44 the idea that humans could boil down their expertise on a sheet of paper,

00:07:49 to sort of be able to explain to machines

00:07:53 how to do something explicitly.

00:07:55 So what’s the role of human expertise in this whole picture?

00:08:00 Do you think most of the intelligence will come from machines learning

00:08:03 from experience without human expertise input?

00:08:06 So the question for me is much more how do you express expertise?

00:08:10 You can express expertise by writing a book.

00:08:12 You can express expertise by showing someone what you’re doing.

00:08:16 You can express expertise by applying it, in many different ways.

00:08:20 And I think expert systems were our best attempt in AI

00:08:23 to capture expertise and rules.

00:08:25 But someone sat down and said, here are the rules of human gait.

00:08:28 Here’s when you put your big toe forward and your heel backwards

00:08:32 so that you never stumble.

00:08:34 And as we now know, the set of rules, the language that we can command,

00:08:39 is incredibly limited.

00:08:41 The majority of the human brain doesn’t deal with language.

00:08:43 It deals with subconscious, numerical, perceptual things

00:08:48 that we’re not even aware of.

00:08:51 Now, when an AI system watches an expert do their job and practice their job,

00:08:57 it can pick up things that people can’t even put into writing,

00:09:01 into books or rules.

00:09:03 And that’s where the real power is.

00:09:04 We now have AI systems that, for example, look over the shoulders

00:09:08 of highly paid human doctors like dermatologists or radiologists,

00:09:12 and they can somehow pick up those skills that no one can express in words.

00:09:18 So you were a key person in launching three revolutions,

00:09:22 online education, autonomous vehicles, and flying cars, or eVTOLs.

00:09:28 So high level, and I apologize for all the philosophical questions.

00:09:34 There’s no apology necessary.

00:09:37 How do you choose what problems to try and solve?

00:09:40 What drives you to make those solutions a reality?

00:09:43 I have two desires in life.

00:09:44 I want to literally make the lives of others better.

00:09:48 Or as we often say, maybe jokingly, make the world a better place.

00:09:52 I actually believe in this.

00:09:54 It’s as funny as it sounds.

00:09:57 And second, I want to learn.

00:09:59 I want to get new skills.

00:10:00 I don’t want to be in a job I’m good at, because if I’m in a job

00:10:02 that I’m good at, the chances for me to learn something interesting

00:10:05 is actually minimized.

00:10:06 So I want to be in a job I’m bad at.

00:10:09 That’s really important to me.

00:10:10 So in aviation, for example, what people often

00:10:12 call flying cars, these are electric vertical takeoff

00:10:15 and landing vehicles.

00:10:17 I’m just no expert in any of this.

00:10:19 And it’s so much fun to learn on the job what it actually means

00:10:23 to build something like this.

00:10:24 Now, I’d say the stuff that I’ve done lately

00:10:27 after I finished my professorship at Stanford,

00:10:31 has really focused on what has the maximum impact on society.

00:10:35 Transportation is something that has transformed the 20th

00:10:38 and 21st centuries more than any other invention,

00:10:40 in my opinion, even more than communication.

00:10:42 And cities are different.

00:10:43 Workers are different.

00:10:45 Women’s rights are different because of transportation.

00:10:47 And yet, we still have a very suboptimal transportation

00:10:51 solution where we kill 1.2 or so million people every year

00:10:56 in traffic.

00:10:57 It’s like the leading cause of death for young people

00:10:59 in many countries, where we are extremely inefficient

00:11:02 resource wise.

00:11:03 Just go to your average neighborhood city

00:11:06 and look at the number of parked cars.

00:11:08 That’s a travesty, in my opinion.

00:11:10 Or where we spend endless hours in traffic jams.

00:11:13 And very, very simple innovations,

00:11:15 like a self driving car or what people call a flying car,

00:11:18 could completely change this.

00:11:20 And it’s there.

00:11:21 I mean, the technology is basically there.

00:11:23 You have to close your eyes not to see it.

00:11:26 So lingering on autonomous vehicles, a fascinating space,

00:11:30 some incredible work you’ve done throughout your career there.

00:11:33 So let’s start with DARPA, I think, the DARPA challenge,

00:11:39 through the desert and then urban to the streets.

00:11:42 I think that inspired an entire generation of roboticists

00:11:45 and obviously sprung this whole excitement

00:11:49 about this particular kind of four wheeled robots

00:11:52 we called autonomous cars, self driving cars.

00:11:55 So you led the development of Stanley, the autonomous car

00:11:58 that won the race through the desert, the DARPA Grand Challenge, in 2005.

00:12:03 And Junior, the car that finished second

00:12:07 in the DARPA Urban Challenge, also did incredibly well

00:12:11 in 2007, I think.

00:12:14 What are some painful, inspiring, or enlightening

00:12:17 experiences from that time that stand out to you?

00:12:20 Oh my god.

00:12:22 Painful were all these incredibly complicated,

00:12:28 stupid bugs that had to be found.

00:12:30 We had a phase where Stanley, our car that eventually

00:12:35 won the DARPA grand challenge, would every 30 miles

00:12:38 just commit suicide.

00:12:39 And we didn’t know why.

00:12:40 And it turned out that in the syncing of two computer

00:12:44 clocks, occasionally a clock went backwards

00:12:47 and that negative elapsed time screwed up

00:12:50 the entire internal logic.

00:12:51 But it took ages to find this.
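
The class of bug described here can be illustrated with a short sketch. This is hypothetical code, not Stanley's: elapsed time computed from wall-clock stamps can go negative when a clock is stepped backwards during synchronization, while a monotonic clock never runs backwards:

```python
# Hedged sketch of the kind of bug described: elapsed time from two
# wall-clock readings can go negative if a clock is adjusted backwards,
# which can corrupt any control logic that assumes time moves forward.
import time

def elapsed_wall(t_prev, t_now):
    """Naive elapsed time from wall-clock stamps: can be negative
    if one clock is stepped backwards during synchronization."""
    return t_now - t_prev

def elapsed_safe(t_prev, t_now):
    """Defensive variant: clamp to zero so downstream logic
    never sees negative elapsed time."""
    return max(0.0, t_now - t_prev)

# A clock adjustment makes the second stamp earlier than the first.
t0, t1 = 100.0, 99.5
print(elapsed_wall(t0, t1))  # -0.5, can corrupt internal state
print(elapsed_safe(t0, t1))  # 0.0

# The robust fix: use a monotonic clock for durations.
a = time.monotonic()
b = time.monotonic()
assert b - a >= 0.0  # guaranteed never negative
```

Clamping is the band-aid; measuring durations with a monotonic clock removes the failure mode entirely.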

00:12:54 There were bugs like that.

00:12:56 I’d say enlightening was that the Stanford team immediately

00:12:59 focused on machine learning and on software,

00:13:02 whereas everybody else seemed to focus on building better hardware.

00:13:05 Our analysis had been that a human being with an existing rental

00:13:08 car can perfectly drive the course,

00:13:10 so why do I have to build a better rental car?

00:13:12 I should just replace the human being.

00:13:15 And the human being, to me, was a conjunction of three steps.

00:13:18 We had sensors, eyes and ears, mostly eyes.

00:13:22 We had brains in the middle.

00:13:23 And then we had actuators, our hands and our feet.

00:13:26 Now, the actuators are easy to build.

00:13:28 The sensors are actually also easy to build.

00:13:29 What was missing was the brain.

00:13:30 So we had to build a human brain.

00:13:32 And nothing was clearer to me than that the human brain

00:13:36 is a learning machine.

00:13:37 So why not just train our robot?

00:13:38 So we would build massive machine learning

00:13:40 into our machine.

00:13:42 And with that, we were able to not just learn

00:13:44 from human drivers,

00:13:45 where the entire speed control of the vehicle

00:13:47 was copied from human driving,

00:13:49 but also to have the robot learn from experience,

00:13:51 where it made a mistake, recovered from it,

00:13:53 and learned from it.
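
Learning a speed policy from human demonstrations, as described here, can be sketched as a simple regression over logged driving data. Everything in this example is an assumption for illustration: the roughness feature, the numbers, and the linear form are invented, not Stanley's actual model:

```python
# Illustrative sketch (hypothetical data): learning a speed policy from
# logged human driving by fitting speed = a * roughness + b with
# ordinary least squares.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Hypothetical log: terrain roughness vs. the speed the human chose.
roughness = [0.0, 0.2, 0.4, 0.6, 0.8]
human_speed_mph = [45.0, 40.0, 35.0, 30.0, 25.0]  # slower on rough ground

a, b = fit_line(roughness, human_speed_mph)

def target_speed(r):
    """Speed the robot commands, copied from the human's behavior."""
    return a * r + b

print(round(target_speed(0.5), 1))  # 32.5, interpolating the human's choices
```

The robot never receives a speed rule; it imitates the regularities in the human's logged behavior, which is the "copied from human driving" idea in miniature.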

00:13:55 You mentioned the pain point of software and clocks.

00:14:00 Synchronization seems to be a problem that

00:14:04 continues with robotics.

00:14:06 It’s a tricky one with drones and so on.

00:14:09 What does it take to build a thing, a system

00:14:14 with so many constraints?

00:14:16 You have a deadline, no time.

00:14:20 You’re unsure about anything really.

00:14:22 It’s the first time that people are really even exploring this.

00:14:24 It’s not even clear that anybody can finish.

00:14:26 When we’re talking about the race through the desert,

00:14:28 the year before, nobody finished.

00:14:30 What does it take to scramble and finish

00:14:32 a product that actually, a system that actually works?

00:14:35 We were very lucky.

00:14:36 We were a really small team.

00:14:38 The core of the team were four people.

00:14:40 It was four because five couldn’t comfortably sit

00:14:43 inside a car, but four could.

00:14:45 And I, as a team leader, my job was

00:14:47 to get pizza for everybody and wash the car and stuff

00:14:50 like this and repair the radiator when it broke

00:14:52 and debug the system.

00:14:55 And we were very open minded.

00:14:56 We had no egos involved.

00:14:58 We just wanted to see how far we can get.

00:15:00 What we did really, really well was time management.

00:15:03 We were done with everything a month before the race.

00:15:06 And we froze the entire software a month before the race.

00:15:08 And it turned out, looking at other teams,

00:15:11 every other team complained if they had just one more week,

00:15:14 they would have won.

00:15:15 And we decided we’re not going to fall into that mistake.

00:15:18 We’re going to be early.

00:15:19 And we had an entire month to shake the system.

00:15:22 And we actually found two or three minor bugs

00:15:24 in the last month that we had to fix.

00:15:27 And we were completely prepared when the race occurred.

00:15:30 Okay, so first of all, that’s such an incredibly rare

00:15:33 achievement in terms of being able to be done on time

00:15:37 or ahead of time.

00:15:43 How do you do that in your future work?

00:15:43 What advice do you have in general?

00:15:44 Because it seems to be so rare,

00:15:46 especially in highly innovative projects like this.

00:15:49 People work till the last second.

00:15:50 Well, the nice thing about the DARPA Grand Challenge

00:15:52 is that the problem was incredibly well defined.

00:15:55 We were able for a while to drive

00:15:57 the old DARPA Grand Challenge course,

00:15:58 which had been used the year before.

00:16:00 And then for some reason we were kicked out of the region.

00:16:04 So we had to go to a different desert, the Sonoran Desert,

00:16:06 and we were able to drive desert trails

00:16:08 just of the same type.

00:16:10 So there was never any debate about like,

00:16:12 what is actually the problem?

00:16:13 We didn’t sit down and say,

00:16:14 hey, should we build a car or a plane?

00:16:16 We had to build a car.

00:16:18 That made it very, very easy.

00:16:20 Then I studied my own life and the lives of others.

00:16:23 And we realized that the typical mistake that people make

00:16:26 is that there’s this kind of crazy bug left

00:16:29 that they haven’t found yet.

00:16:32 And they regret it,

00:16:34 because that bug would have been trivial to fix.

00:16:36 They just hadn’t fixed it yet.

00:16:37 We didn’t want to fall into that trap.

00:16:39 So I built a testing team.

00:16:41 We had a testing team that built a testing booklet

00:16:43 of 160 pages of tests we had to go through

00:16:46 just to make sure we shake out the system appropriately.

00:16:49 And the testing team was with us all the time

00:16:51 and dictated to us: today, we do railroad crossings.

00:16:55 Tomorrow, we practice the start of the event.

00:16:58 And in all of these, we thought,

00:17:00 oh my God, this is long solved, trivial.

00:17:02 And then we tested it out.

00:17:03 Oh my God, it doesn’t do a railroad crossing.

00:17:04 Why not?

00:17:05 Oh my God, it mistakes the rails for metal barriers.

00:17:09 We have to fix this.

00:17:11 So it was really a continuous focus

00:17:14 on improving the weakest part of the system.

00:17:16 And as long as you focus on improving

00:17:19 the weakest part of the system,

00:17:20 you eventually build a really great system.
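
The "improve the weakest part" loop Thrun describes can be sketched in a few lines. This is a hypothetical illustration; the scenario names and pass/fail data are invented, not from the actual 160-page test booklet:

```python
# Hypothetical sketch of the loop described above: run a checklist of
# scenario tests, then rank scenarios by failure rate so effort goes
# to the weakest part of the system first.

def weakest_scenarios(results, top_n=2):
    """results maps scenario name -> list of pass/fail booleans."""
    failure_rate = {
        name: 1.0 - sum(runs) / len(runs)
        for name, runs in results.items()
    }
    # Highest failure rate first: that's where to focus next.
    return sorted(failure_rate, key=failure_rate.get, reverse=True)[:top_n]

# Invented test-booklet results (True = pass).
results = {
    "railroad_crossing": [False, False, True],
    "start_of_event":    [True, True, True],
    "tunnel":            [True, False, True],
}
print(weakest_scenarios(results))  # ['railroad_crossing', 'tunnel']
```

Ranking by failure rate is one simple way to operationalize "always work on the weakest part"; the testing team's daily dictated scenario plays the same role.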

00:17:23 Let me just pause on that. To me as an engineer,

00:17:25 it’s just super exciting that you were thinking like that,

00:17:28 especially at that early stage. It’s brilliant

00:17:30 that testing was such a core part of it.

00:17:33 Maybe to linger on the point of leadership.

00:17:36 I think it’s one of the first times

00:17:39 you were really a leader

00:17:41 and you’ve led many very successful teams since then.

00:17:46 What does it take to be a good leader?

00:17:48 I would say, most of all, I just take credit

00:17:51 for the work of others, right?

00:17:55 That’s very convenient turns out

00:17:57 because I can’t do all these things myself.

00:18:00 I’m an engineer at heart.

00:18:01 So I care about engineering.

00:18:03 So I don’t know what the chicken and the egg is,

00:18:06 but as a kid, I loved computers

00:18:07 because you could tell them to do something

00:18:09 and they actually did it.

00:18:10 It was very cool.

00:18:11 And you could like in the middle of the night,

00:18:12 wake up at one in the morning and switch on your computer.

00:18:15 And what you told it to do yesterday, it would still do.

00:18:18 That was really cool.

00:18:19 Unfortunately, that didn’t quite work with people.

00:18:21 So you go to people and tell them what to do

00:18:22 and they don’t do it.

00:18:24 And they hate you for it, or you do it today

00:18:26 and then you go a day later and they stop doing it.

00:18:29 So you have to…

00:18:30 So then the question really became,

00:18:31 how can you put yourself in the brain of people

00:18:34 as opposed to computers?

00:18:35 And in comparison to people, computers are super dumb.

00:18:37 So dumb.

00:18:38 If people were as dumb as computers,

00:18:39 I wouldn’t want to work with them.

00:18:41 But people are smart and people are emotional

00:18:43 and people have pride and people have aspirations.

00:18:45 So how can I connect to that?

00:18:49 And that’s where most leadership just fails,

00:18:52 because many, many engineers turned managers

00:18:56 believe they can treat their team just the same way

00:18:58 they can treat their computer.

00:18:59 And it just doesn’t work this way.

00:19:00 It’s just really bad.

00:19:02 So how can I connect to people?

00:19:05 And it turns out as a college professor,

00:19:07 the wonderful thing you do all the time

00:19:10 is to empower other people.

00:19:11 Like your job is to make your students look great.

00:19:14 That’s all you do.

00:19:15 You’re the best coach.

00:19:16 And it turns out if you do a fantastic job with making

00:19:19 your students look great, they actually love you

00:19:21 and their parents love you.

00:19:22 And they give you all the credit for stuff you don’t deserve.

00:19:25 All my students were smarter than me.

00:19:27 All the great stuff invented at Stanford

00:19:28 was their stuff, not my stuff.

00:19:30 And they give me credit and say, oh, Sebastian.

00:19:32 We’re just making them feel good about themselves.

00:19:35 So the question really is, can you take a team of people,

00:19:38 and what does it take to get them

00:19:40 to connect to what they actually want in life

00:19:43 and turn this into productive action?

00:19:45 It turns out every human being that I know

00:19:48 has incredibly good intentions.

00:19:50 I’ve really rarely met a person with bad intentions.

00:19:54 I believe every person wants to contribute.

00:19:55 I think every person I’ve met wants to help others.

00:19:59 It’s amazing how much of an urge we have

00:20:01 not to just help ourselves, but to help others.

00:20:04 So how can we empower people and give them

00:20:06 the right framework that they can accomplish this?

00:20:10 In moments when it works, it’s magical.

00:20:12 Because you’d see the confluence of people

00:20:17 being able to make the world a better place

00:20:19 and deriving enormous confidence and pride out of this.

00:20:22 And that’s when my environment works the best.

00:20:27 These are moments where I can disappear for a month

00:20:29 and come back and things still work.

00:20:31 It’s very hard to accomplish.

00:20:32 But when it works, it’s amazing.

00:20:35 So I agree with you very much.

00:20:37 It’s not often heard that most people in the world

00:20:42 have good intentions.

00:20:43 At the core, their intentions are good

00:20:45 and they’re good people.

00:20:47 That’s a beautiful message, it’s not often heard.

00:20:50 We make this mistake, as a friend of mine,

00:20:52 Alex Werder, once told us: we judge ourselves

00:20:56 by our intentions and others by their actions.

00:21:00 And I think that the biggest skill,

00:21:01 I mean, here in Silicon Valley, we’re full of engineers

00:21:03 who have very little empathy and are kind of befuddled

00:21:06 by why it doesn’t work for them.

00:21:09 The biggest skill, I think, that people should acquire

00:21:13 is to put themselves into the position of the other

00:21:16 and listen, and listen to what the other has to say.

00:21:20 And they’d be shocked how similar they are to themselves.

00:21:23 And they might even be shocked how their own actions

00:21:26 don’t reflect their intentions.

00:21:28 I often have conversations with engineers

00:21:30 where I say, look, hey, I love you, you’re doing a great job.

00:21:33 And by the way, what you just did has the following effect.

00:21:37 Are you aware of that?

00:21:38 And then people would say, oh my God, no, I wasn’t,

00:21:41 because my intention was something else.

00:21:43 And I say, yeah, I trust your intention.

00:21:45 You’re a good human being.

00:21:46 But just to help you in the future,

00:21:48 if you keep expressing it that way,

00:21:51 then people will just hate you.

00:21:53 And I’ve had many instances where people say,

00:21:55 oh my God, thank you for telling me this,

00:21:56 because it wasn’t my intention to look like an idiot.

00:21:59 It was my intention to help other people.

00:22:00 I just didn’t know how to do it.

00:22:02 Very simple, by the way.

00:22:04 There’s a book by Dale Carnegie from 1936,

00:22:07 How to Win Friends and Influence People.

00:22:10 It has the entire Bible; you just read it, you’re done,

00:22:12 and you apply it every day.

00:22:13 And I wish I was good enough to apply it every day.

00:22:16 But it’s just simple things, right?

00:22:18 Like be positive, remember people’s name, smile,

00:22:22 and, essentially, have empathy.

00:22:24 Really think that the person that you hate

00:22:27 and you think is an idiot,

00:22:28 is actually just like yourself.

00:22:30 It’s a person who’s struggling, who means well,

00:22:33 and who might need help, and guess what, you need help.

00:22:36 I’ve recently spoken with Stephen Schwarzman.

00:22:39 I’m not sure if you know who that is, but.

00:22:41 I do.

00:22:42 So, and he said.

00:22:44 It’s on my list.

00:22:45 On the list.

00:22:47 But he said, sort of to expand on what you’re saying,

00:22:52 that one of the biggest things you can do

00:22:56 is hear people when they tell you what their problem is

00:23:00 and then help them with that problem.

00:23:02 He says, it’s surprising how few people

00:23:06 actually listen to what troubles others.

00:23:09 And because it’s right there in front of you

00:23:12 and you can benefit the world the most.

00:23:15 And in fact, yourself and everybody around you

00:23:18 by just hearing the problems and solving them.

00:23:20 I mean, that’s my little history of engineering.

00:23:23 That is, while I was engineering with computers,

00:23:28 I didn’t care at all what the computer’s problems were.

00:23:32 I just told them what to do, and they did it.

00:23:34 And it just doesn’t work this way with people.

00:23:37 It doesn’t work with me.

00:23:38 If you come to me and say, do A, I do the opposite.

00:23:43 But let’s return to the comfortable world of engineering.

00:23:47 And can you tell me in broad strokes in how you see it?

00:23:52 Because you’re the core of starting it,

00:23:53 the core of driving it,

00:23:55 the technical evolution of autonomous vehicles

00:23:58 from the first DARPA Grand Challenge

00:24:00 to the incredible success we see with the program

00:24:03 you started with Google self driving car

00:24:05 and Waymo and the entire industry that sprung up

00:24:08 of different kinds of approaches, debates and so on.

00:24:11 Well, the idea of the self driving car goes back to the 80s.

00:24:14 There was a team in Germany and another team

00:24:15 at Carnegie Mellon that did some very pioneering work.

00:24:18 But back in the day, I’d say the computers were so deficient

00:24:21 that even the best professors and engineers in the world

00:24:25 basically stood no chance.

00:24:28 It then folded into a phase where the US government

00:24:31 spent at least half a billion dollars

00:24:33 that I could count on research projects.

00:24:36 But the way the procurement worked,

00:24:38 a stack of paper describing lots of stuff

00:24:42 that no one’s ever gonna read

00:24:43 was a successful product of a research project.

00:24:47 So we trained our researchers to produce lots of paper.

00:24:52 That all changed with the DARPA Grand Challenge.

00:24:54 And I really gotta credit the ingenious people at DARPA

00:24:58 and the US government and Congress

00:25:00 that took a complete new funding model where they said,

00:25:03 let’s not fund effort, let’s fund outcomes.

00:25:05 And it sounds very trivial,

00:25:06 but there was no tax code that allowed

00:25:09 the use of congressional tax money for a prize.

00:25:13 It was all effort based.

00:25:15 So if you put a hundred hours in,

00:25:16 you could charge a hundred hours.

00:25:17 If you put a thousand hours in,

00:25:18 you could bill a thousand hours.

00:25:20 By changing the focus and making it a prize,

00:25:22 saying we don’t pay you for development,

00:25:24 we pay for the accomplishment,

00:25:26 they automatically drove out

00:25:28 all these contractors who were used to the drug

00:25:31 of getting money per hour.

00:25:33 And they drew in a whole bunch of new people.

00:25:35 And these people are mostly crazy people.

00:25:37 They were people who had a car and a computer

00:25:40 and they wanted to make a million bucks.

00:25:42 The million bucks was the official prize money,

00:25:43 which was later doubled.

00:25:45 And they felt if I put my computer in my car

00:25:48 and program it, I can be rich.

00:25:50 And that was so awesome.

00:25:52 Like half the teams, there was a team that was surfer dudes

00:25:55 and they had like two surfboards on their vehicle

00:25:58 and brought like these fashion girls, super cute girls,

00:26:01 like twin sisters.

00:26:03 And you could tell these guys were not your common

00:26:06 beltway bandit who gets all these big multimillion

00:26:10 and billion dollar contracts from the US government.

00:26:13 And there was a great reset.

00:26:16 Universities moved in.

00:26:18 I was very fortunate at Stanford that I just received tenure

00:26:21 so I couldn’t get fired no matter what I did,

00:26:23 otherwise I wouldn’t have done it.

00:26:25 And I had enough money to finance this thing

00:26:28 and I was able to attract a lot of money from third parties.

00:26:31 And even car companies moved in.

00:26:32 They kind of moved in very quietly

00:26:34 because they were super scared to be embarrassed

00:26:36 that their car would flip over.

00:26:38 But Ford was there and Volkswagen was there

00:26:40 and a few others and GM was there.

00:26:43 So it kind of reset the entire landscape of people.

00:26:46 And if you look at who’s a big name

00:26:48 in self driving cars today,

00:26:49 these were mostly people who participated

00:26:51 in those challenges.

00:26:53 Okay, that’s incredible.

00:26:54 Can you just comment quickly on your sense of lessons learned

00:26:59 from that kind of funding model

00:27:01 and the research that’s going on in academia

00:27:04 in terms of producing papers,

00:27:06 is there something to be learned and scaled up bigger,

00:27:10 having these kinds of grand challenges

00:27:11 that could improve outcomes?

00:27:14 So I’m a big believer in focusing

00:27:16 on kind of an end to end system.

00:27:19 I’m a really big believer in systems building.

00:27:21 I’ve always built systems in my academic career,

00:27:23 even though I do a lot of math and abstract stuff,

00:27:27 but it’s all derived from the idea

00:27:28 of let’s solve a real problem.

00:27:29 And it’s very hard for me to be an academic

00:27:33 and say, let me solve a component of a problem.

00:27:35 There are fields like nonmonotonic logic

00:27:38 or AI planning systems where people believe

00:27:41 that a certain style of problem solving

00:27:44 is the ultimate end objective.

00:27:47 And I would always turn it around and say,

00:27:49 hey, what problem would my grandmother,

00:27:52 who doesn’t understand computer technology

00:27:54 and doesn’t wanna understand it, care about?

00:27:56 And how could I make her love what I do?

00:27:58 Because only then do I have an impact on the world.

00:28:01 I can easily impress my colleagues.

00:28:02 That is much easier,

00:28:04 but impressing my grandmother is very, very hard.

00:28:07 So I always thought, if I can build a self driving car

00:28:10 and my grandmother can use it

00:28:12 even after she loses her driving privileges

00:28:14 or children can use it,

00:28:16 or we save maybe a million lives a year,

00:28:20 that would be very impressive.

00:28:22 And then there’s so many problems like these,

00:28:23 like there’s a problem with curing cancer,

00:28:25 or whatever it is, live twice as long.

00:28:27 Once a problem is defined,

00:28:29 of course I can’t solve it in its entirety.

00:28:31 Like it takes sometimes tens of thousands of people

00:28:34 to find a solution.

00:28:35 There’s no way you can fund an army of 10,000 at Stanford.

00:28:39 So you gotta build a prototype.

00:28:41 Let’s build a meaningful prototype.

00:28:42 And the DARPA Grand Challenge was beautiful

00:28:43 because it told me what this prototype had to do.

00:28:46 I didn’t have to think about what it had to do,

00:28:47 I just had to read the rules.

00:28:48 And that was really beautiful.

00:28:51 And at its most beautiful,

00:28:52 you think what academia could aspire to

00:28:54 is to build a prototype at the systems level,

00:28:58 one that solves, or gives you an inkling

00:29:01 that this problem could be solved, with this prototype.

00:29:03 First of all, I wanna emphasize what academia really is.

00:29:06 And I think people misunderstand it.

00:29:08 First and foremost, academia is a way

00:29:11 to educate young people.

00:29:13 First and foremost, a professor is an educator.

00:29:15 No matter where you are at,

00:29:17 a small suburban college,

00:29:18 or whether you are a Harvard or Stanford professor,

00:29:21 that’s not the way most people think of themselves

00:29:25 in academia because we have this kind of competition

00:29:28 going on for citations and publications.

00:29:31 That’s a measurable thing,

00:29:32 but that is secondary to the primary purpose

00:29:35 of educating people to think.

00:29:37 Now, in terms of research,

00:29:39 most of the great science,

00:29:42 the great research comes out of universities.

00:29:45 You can trace almost everything back,

00:29:46 including Google, to universities.

00:29:48 So there’s nothing really fundamentally broken here.

00:29:52 It’s a good system.

00:29:53 And I think America has the finest university system

00:29:55 on the planet.

00:29:57 We can talk about reach

00:29:59 and how to reach people outside the system.

00:30:01 It’s a different topic,

00:30:02 but the system itself is a good system.

00:30:04 If I had one wish, I would say it’d be really great

00:30:08 if there was more debate about

00:30:11 what the great big problems are in society

00:30:15 and focus on those.

00:30:18 And most of them are interdisciplinary.

00:30:21 Unfortunately, it’s very easy to fall

00:30:24 into a narrow disciplinary viewpoint

00:30:28 where your problem is dictated

00:30:30 by what your closest colleagues believe the problem is.

00:30:33 It’s very hard to break out and say,

00:30:35 well, there’s an entire new field of problems.

00:30:37 So to give an example,

00:30:39 prior to me working on self driving cars,

00:30:41 I was a roboticist and a machine learning expert.

00:30:44 And I wrote books on robotics,

00:30:46 something called probabilistic robotics.

00:30:48 It’s a very methods driven kind of viewpoint of the world.

00:30:51 I built robots that acted in museums as tour guides,

00:30:54 that led children around.

00:30:55 It is something that at the time was moderately challenging.

00:31:00 When I started working on cars,

00:31:02 several colleagues told me,

00:31:03 Sebastian, you’re destroying your career

00:31:06 because in our field of robotics,

00:31:08 cars are looked at as a gimmick

00:31:10 and they’re not expressive enough.

00:31:11 They can only push the throttle and the brakes.

00:31:15 There’s no dexterity.

00:31:16 There’s no complexity.

00:31:18 It’s just too simple.

00:31:19 And no one came to me and said,

00:31:21 wow, if you solve that problem,

00:31:22 you can save a million lives, right?

00:31:25 Among all robotic problems that I’ve seen in my life,

00:31:27 I would say the self driving car, transportation,

00:31:29 is the one that has the most hope for society.

00:31:32 So how come the robotics community wasn’t all over the place?

00:31:35 And it was because we focused on methods and solutions

00:31:37 and not on problems.

00:31:39 Like if you go around today and ask your grandmother,

00:31:42 what bugs you?

00:31:43 What really makes you upset?

00:31:45 I challenge any academic to do this

00:31:48 and then realize how far your research

00:31:51 is probably away from that today.

00:31:54 At the very least, that’s a good thing

00:31:56 for academics to deliberate on.

00:31:59 The other thing that’s really nice in Silicon Valley is,

00:32:01 Silicon Valley is full of smart people outside academia.

00:32:04 So there’s the Larry Pages and Mark Zuckerbergs in the world

00:32:06 who are every bit as smart or smarter

00:32:09 than the best academics I’ve met in my life.

00:32:11 And what they do is they are at a different level.

00:32:15 They build the systems,

00:32:16 they build the customer facing systems,

00:32:19 they build things that people can use

00:32:21 without technical education.

00:32:23 And they are inspired by research.

00:32:25 They’re inspired by scientists.

00:32:27 They hire the best PhDs from the best universities

00:32:30 for a reason.

00:32:31 So I think this kind of vertical integration

00:32:35 between the real product, the real impact

00:32:37 and the real thought, the real ideas,

00:32:39 that’s actually working surprisingly well in Silicon Valley.

00:32:42 It did not work as well in other places in this nation.

00:32:44 So when I worked at Carnegie Mellon,

00:32:46 we had the world’s finest computer science university,

00:32:49 but there weren’t the people in Pittsburgh

00:32:52 that would be able to take these

00:32:54 very fine computer science ideas

00:32:56 and turn them into massive, impactful products.

00:33:00 That symbiosis seemed to exist

00:33:02 pretty much only in Silicon Valley

00:33:04 and maybe a bit in Boston and Austin.

00:33:06 Yeah, with Stanford, that’s really interesting.

00:33:11 So if we look a little bit further on

00:33:14 from the DARPA Grand Challenge

00:33:17 and the launch of the Google self driving car,

00:33:20 what do you see as the state,

00:33:22 the challenges of autonomous vehicles as they are now

00:33:25 is actually achieving that huge scale

00:33:29 and having a huge impact on society?

00:33:31 I’m extremely proud of what has been accomplished.

00:33:35 And again, I’m taking a lot of credit for the work of others.

00:33:38 And I’m actually very optimistic.

00:33:40 And people have been kind of worrying,

00:33:42 is it too fast? Is it too slow?

00:33:43 Why is it not there yet? And so on.

00:33:45 It is actually quite an interesting, hard problem.

00:33:48 And in that, to build a self driving car

00:33:51 that manages 90% of the problems

00:33:55 encountered in everyday driving is easy.

00:33:57 We can literally do this over a weekend.

00:33:59 To do 99% might take a month.

00:34:02 Then there’s 1% left.

00:34:03 So 1% would mean that you still have a fatal accident

00:34:06 every week, very unacceptable.

00:34:08 So now you work on this 1%

00:34:10 and 99% of that remaining 1%

00:34:13 is actually still relatively easy,

00:34:15 but now you’re down to like a hundredth of 1%.

00:34:18 And it’s still completely unacceptable in terms of safety.
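The nines-of-reliability point above can be made concrete with back-of-the-envelope arithmetic. The sketch below is purely illustrative and uses assumed numbers (a rough figure for US annual vehicle-miles, and a toy model that treats every mile as one "situation"); none of it is from the conversation, but it shows why each extra nine still leaves an enormous absolute number of failures.

```python
import math

# Assumption: rough US annual vehicle-miles; only the scale matters here.
# Toy model: one "situation" per mile driven.
US_VEHICLE_MILES_PER_YEAR = 3.2e12

def unhandled_miles_per_year(handled_fraction, total_miles=US_VEHICLE_MILES_PER_YEAR):
    """Miles per year the system would fail to handle in this toy model."""
    return total_miles * (1.0 - handled_fraction)

# 90% -> 99% -> 99.99% handled: each step cuts failures by orders of
# magnitude, yet even four nines leaves hundreds of millions of miles.
for handled in (0.90, 0.99, 0.9999):
    print(f"handles {handled:.2%}: {unhandled_miles_per_year(handled):.1e} unhandled miles/year")
```

Even at four nines, this toy model leaves on the order of 3e8 unhandled miles a year, which is why the long tail dominates the engineering effort.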

00:34:21 So the variety of things you encounter are just enormous.

00:34:24 And that gives me enormous respect for human beings,

00:34:26 that we’re able to deal with the couch on the highway,

00:34:30 or the deer in the headlights, or the blown tire

00:34:33 that we’ve never been trained for.

00:34:34 And all of a sudden have to handle it

00:34:35 in an emergency situation

00:34:37 and often do very, very successfully.

00:34:38 It’s amazing from that perspective,

00:34:40 how safe driving actually is given how many millions

00:34:43 of miles we drive every year in this country.

00:34:47 We are now at a point where I believe the technology

00:34:49 is there and I’ve seen it.

00:34:51 I’ve seen it in Waymo, I’ve seen it in Aptiv,

00:34:53 I’ve seen it in Cruise and in a number of companies

00:34:56 and in Voyage, where vehicles are now driving around

00:35:00 and basically flawlessly are able to drive people around

00:35:04 in limited scenarios.

00:35:06 In fact, you can go to Vegas today

00:35:07 and summon a Lyft.

00:35:09 And if you get the right setting of your app,

00:35:13 you’ll be picked up by a driverless car.

00:35:15 Now there’s still safety drivers in there,

00:35:18 but that’s a fantastic way to kind of learn

00:35:21 what the limits are of technology today.

00:35:22 And there’s still some glitches,

00:35:24 but the glitches have become very, very rare.

00:35:26 I think the next step is gonna be to down cost it,

00:35:29 to harden it, the equipment, the sensors

00:35:33 are not quite at automotive grade standards yet.

00:35:36 And then to really build the business models,

00:35:37 to really kind of go somewhere and make the business case.

00:35:40 And the business case is hard work.

00:35:42 It’s not just, oh my God, we have this capability,

00:35:44 people are just gonna buy it.

00:35:45 You have to make it affordable.

00:35:46 You have to find the social acceptance of people.

00:35:52 None of the teams yet has been able, or gutsy enough,

00:35:55 to drive around without a person inside the car.

00:35:59 And that’s the next magical hurdle.

00:36:01 We’ll be able to send these vehicles around

00:36:03 completely empty in traffic.

00:36:05 And I think, I mean, I wait every day,

00:36:08 wait for the news that Waymo has just done this.

00:36:11 So, interesting you mentioned gutsy.

00:36:15 Let me ask some maybe unanswerable,

00:36:20 maybe edgy questions.

00:36:21 But in terms of how much risk is required,

00:36:26 some guts in terms of leadership style,

00:36:30 it would be good to contrast approaches.

00:36:32 And I don’t think anyone knows what’s right.

00:36:34 But if we compare Tesla and Waymo, for example,

00:36:38 Elon Musk and the Waymo team,

00:36:43 there’s slight differences in approach.

00:36:45 So on the Elon side, there’s more,

00:36:49 I don’t know what the right word to use,

00:36:50 but aggression in terms of innovation.

00:36:53 And on Waymo side, there’s more sort of cautious,

00:36:59 safety focused approach to the problem.

00:37:03 What do you think it takes?

00:37:06 What leadership at which moment is right?

00:37:09 Which approach is right?

00:37:11 Look, I don’t sit in either of those teams.

00:37:13 So I’m unable to even verify whether what somebody says is correct.

00:37:18 At the end of the day, every innovator in that space

00:37:21 will face a fundamental dilemma.

00:37:23 And I would say you could put aerospace titans

00:37:27 into the same bucket,

00:37:28 which is you have to balance public safety

00:37:31 with your drive to innovate.

00:37:34 And this country in particular, the States,

00:37:36 has a hundred plus year history

00:37:38 of doing this very successfully.

00:37:40 Air travel is, what, a hundred times as safe per mile

00:37:43 as ground travel, as cars.

00:37:46 And there’s a reason for it because people have found ways

00:37:50 to be very methodical about ensuring public safety

00:37:55 while still being able to make progress

00:37:56 on important aspects, for example,

00:37:59 like air and noise and fuel consumption.

00:38:03 So I think that those practices are proven

00:38:06 and they actually work.

00:38:07 We live in a world safer than ever before.

00:38:09 And yes, there will always be the provision

00:38:11 that something goes wrong.

00:38:12 There’s always the possibility

00:38:14 that someone makes a mistake

00:38:15 or there’s an unexpected failure.

00:38:17 We can never guarantee to a hundred percent

00:38:19 absolute safety other than just not doing it.

00:38:23 But I think I’m very proud of the history of the United States.

00:38:27 I mean, we’ve dealt with much more dangerous technology

00:38:30 like nuclear energy and kept that safe too.

00:38:33 We have nuclear weapons and we keep those safe.

00:38:36 So we have methods and procedures

00:38:39 that really balance these two things very, very successfully.

00:38:42 You’ve mentioned a lot of great autonomous vehicle companies

00:38:46 that are taking sort of the level four, level five,

00:38:48 they jump in full autonomy with a safety driver

00:38:51 and take that kind of approach

00:38:53 and also through simulation and so on.

00:38:55 There’s also the approach that Tesla Autopilot is doing,

00:38:59 which is kind of incrementally taking a level two vehicle

00:39:03 and using machine learning

00:39:04 and learning from the driving of human beings

00:39:08 and trying to creep up,

00:39:10 trying to incrementally improve the system

00:39:12 until it’s able to achieve level four autonomy.

00:39:15 So perfect autonomy in certain kind of geographical regions.

00:39:19 What are your thoughts on these contrasting approaches?

00:39:23 Well, so first of all, I’m a very proud Tesla owner

00:39:25 and I literally use the Autopilot every day

00:39:27 and it literally has kept me safe.

00:39:30 It is a beautiful technology specifically

00:39:33 for highway driving when I’m slightly tired

00:39:37 because then it turns me into a much safer driver.

00:39:42 And I’m 100% confident that’s the case.

00:39:46 In terms of the right approach,

00:39:47 I think the biggest change I’ve seen

00:47:49 since I went to the Waymo team is this thing called deep learning.

00:39:54 I think deep learning was not a hot topic

00:39:56 when I started Waymo or Google self driving cars.

00:39:59 It was there, in fact, we started Google Brain

00:40:01 at the same time in Google X.

00:40:02 So I invested in deep learning,

00:40:04 but people didn’t talk about it, it wasn’t a hot topic.

00:40:07 And now it is, there’s a shift of emphasis

00:40:10 from a more geometric perspective

00:40:12 where you use geometric sensors

00:40:14 that give you a full 3D view

00:40:15 and you do geometric reasoning about,

00:40:17 oh, this box over here might be a car

00:40:19 towards a more human like, oh, let’s just learn about it.

00:40:24 This looks like the thing I’ve seen 10,000 times before.

00:40:26 So maybe it’s the same thing, machine learning perspective.

00:40:30 And that has really put, I think,

00:40:32 all these approaches on steroids.

00:40:36 At Udacity, we teach a course in self driving cars.

00:40:38 In fact, I think we’ve graduated over 20,000 or so people

00:40:43 on self driving car skills.

00:40:45 So every self driving car team in the world

00:40:47 now uses our engineers.

00:40:49 And in this course, the very first homework assignment

00:40:51 is to do lane finding on images.

00:40:54 And lane finding on images, for a layman,

00:40:56 what this means is you put a camera into your car

00:40:59 or you open your eyes and you would know where the lane is.

00:41:02 So you can stay inside the lane with your car.

00:41:05 Humans can do this super easily.

00:41:06 You just look and you know where the lane is,

00:41:08 just intuitively.

00:41:10 For machines, for a long time, it was super hard

00:41:12 because people would write these kind of crazy rules.

00:41:14 If there’s like white lane markers

00:41:16 and here’s what white really means,

00:41:17 this is not quite white enough.

00:41:19 So let’s, oh, it’s not white.

00:41:20 Or maybe the sun is shining.

00:41:21 So when the sun shines and this is white

00:41:23 and this is a straight line,

00:41:24 I mean, it’s not quite a straight line

00:41:25 because the road is curved.

00:41:27 And do we know that there’s really six feet

00:41:29 between lane markings or not or 12 feet, whatever it is.

00:41:34 And now what the students are doing,

00:41:36 they would take machine learning.

00:41:37 So instead of like writing these crazy rules

00:41:39 for the lane marker,

00:41:40 they’ll say, hey, let’s take an hour of driving

00:41:42 and label it and tell the vehicle,

00:41:44 this is actually the lane by hand.

00:41:45 And then these are examples

00:41:47 and have the machine find its own rules,

00:41:49 what lane markings are.

00:41:51 And within 24 hours, now every student

00:41:53 that’s never done any programming before in this space

00:41:56 can write a perfect lane finder

00:41:58 as good as the best commercial lane finders.

00:42:00 And that’s completely amazing to me.

00:42:02 We’ve seen progress using machine learning

00:42:05 that completely dwarfs anything

00:42:08 that I saw 10 years ago.
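The learn-from-labeled-examples workflow described above can be sketched in miniature. This is a toy stand-in, not the course assignment: synthetic "pixel" features replace camera frames, and a from-scratch logistic regression replaces the deep networks students actually use, but the idea is the same — hand-label examples and let the machine find its own rule for what a lane marking is.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for hand-labeled camera pixels (assumption, not real
# data): feature 0 ~ brightness, feature 1 ~ yellowness. Lane-marking
# pixels are bright; road pixels are darker.
lane = rng.normal(loc=[0.9, 0.2], scale=0.08, size=(500, 2))
road = rng.normal(loc=[0.4, 0.1], scale=0.08, size=(500, 2))
X = np.vstack([lane, road])
y = np.hstack([np.ones(500), np.zeros(500)])

# Logistic regression by gradient descent: instead of hand-writing rules
# for "what white really means", fit a decision rule to the labels.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(lane)
    w -= 0.5 * (X.T @ (p - y)) / len(y)      # gradient step on weights
    b -= 0.5 * np.mean(p - y)                # gradient step on bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = float(np.mean(pred == y))
print(f"training accuracy: {accuracy:.3f}")
```

On these well-separated synthetic classes the learned rule is near-perfect; the real assignment swaps in labeled video frames and a convolutional network, but the workflow — label, fit, predict — is the one described.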

00:42:10 Yeah, and just as a side note,

00:42:12 the self driving car nanodegree,

00:42:15 the fact that you launched that many years ago now,

00:42:18 maybe four years ago, three years ago, is incredible.

00:42:22 That’s a great example of system level thinking,

00:42:24 sort of just taking an entire course

00:42:27 that teaches you how to solve the entire problem.

00:42:29 I definitely recommend it to people.

00:42:31 It’s become super popular

00:42:32 and it’s become actually incredibly high quality

00:42:34 really with Mercedes and various other companies

00:42:37 in that space.

00:42:38 And we find that engineers from Tesla and Waymo

00:42:40 are taking it today.

00:42:43 The insight was that two things,

00:42:45 one is existing universities will be very slow to move

00:42:49 because they’re departmentalized

00:42:50 and there’s no department for self driving cars.

00:42:52 So between MechE and EE and computer science,

00:42:56 getting those folks together

00:42:57 into one room is really, really hard.

00:42:59 And every professor listening here will know,

00:43:01 they’ll probably agree to that.

00:43:02 And secondly, even if all the great universities

00:43:06 just did this, which none so far has developed

00:43:09 a curriculum in this field,

00:43:11 it is just a few thousand students that can partake

00:43:13 because all the great universities are super selective.

00:43:16 So how about people in India?

00:43:18 How about people in China or in the Middle East

00:43:20 or Indonesia or Africa?

00:43:23 Why should those be excluded

00:43:25 from the skill of building self driving cars?

00:43:27 Are they any dumber than we are?

00:43:28 Are we any less privileged?

00:43:30 And the answer is we should just give everybody the skill

00:43:34 to build a self driving car.

00:43:35 Because if we do this,

00:43:37 then we have like a thousand self driving car startups.

00:43:40 And if 10% succeed, that’s like a hundred,

00:43:42 that means hundred countries now

00:43:44 will have self driving cars and be safer.

00:43:46 It’s kind of interesting to imagine impossible to quantify,

00:43:50 but the number, the, you know,

00:43:53 over a period of several decades,

00:43:55 the impact that has like a single course,

00:43:57 like a ripple effect of society.

00:44:00 I just recently talked to Ann Druyan,

00:44:03 who was a creator of the Cosmos show.

00:44:06 It’s interesting to think about

00:44:08 how many scientists that show launched.

00:44:10 And so it’s really, in terms of impact,

00:44:15 I can’t imagine a better course

00:44:17 than the self driving car course.

00:44:18 That’s, you know, there’s other more specific disciplines

00:44:21 like deep learning and so on that Udacity is also teaching,

00:44:24 but self driving cars,

00:44:25 it’s really, really interesting course.

00:44:26 And then it came at the right moment.

00:44:28 It came at a time when there were a bunch of Acqui hires.

00:44:31 An acqui hire is an acquisition of a company,

00:44:34 not for its technology or its products or business,

00:44:36 but for its people.

00:44:38 So Acqui hire means maybe that a company of 70 people,

00:44:40 they have no product yet, but they’re super smart people

00:44:43 and they pay a certain amount of money.

00:44:44 So I looked at acqui hires like GM Cruise and Uber and others,

00:44:48 and did the math and said,

00:44:50 hey, how many people are there and how much money was paid?

00:44:53 And as a lower bound,

00:44:55 I estimated the value of a self driving car engineer

00:44:58 in these acquisitions to be at least $10 million, right?

00:45:02 So think about this, you get yourself a skill

00:45:05 and you team up and build a company

00:45:06 and your worth now is $10 million.

00:45:09 I mean, that’s kind of cool.

00:45:10 I mean, what other thing could you do in life

00:45:13 to be worth $10 million within a year?

00:45:15 Yeah, amazing.

00:45:17 But to come back for a moment on to deep learning

00:45:21 and its application in autonomous vehicles,

00:45:23 what are your thoughts on Elon Musk’s statement,

00:45:28 provocative statement perhaps, that LIDAR is a crutch.

00:45:31 So this geometric way of thinking about the world

00:45:34 may be holding us back if what we should instead be doing

00:45:38 in this robotic space,

00:45:39 in this particular space of autonomous vehicles

00:45:42 is using camera as a primary sensor

00:45:46 and using computer vision and machine learning

00:45:48 as the primary way to…

00:45:49 Look, I have two comments.

00:45:50 I think first of all, we all know

00:45:52 that people can drive cars without LIDARs in their heads

00:45:56 because we only have eyes

00:45:59 and we mostly just use eyes for driving.

00:46:02 Maybe we use some other perception about our bodies,

00:46:04 accelerations, occasionally our ears,

00:46:08 certainly not our noses.

00:46:10 So the existence proof is there,

00:46:12 that eyes must be sufficient.

00:46:15 In fact, we could even drive a car

00:46:17 if someone put a camera out

00:46:19 and then gave us the camera image with no latency,

00:46:23 we would be able to drive a car the same way.

00:46:26 So a camera is also sufficient.

00:46:28 Secondly, I really love the idea that in the Western world,

00:46:31 we have many, many different people

00:46:33 trying different hypotheses.

00:46:35 It’s almost like an anthill,

00:46:36 like if an anthill tries to forage for food,

00:46:39 you can sit there as two ants

00:46:41 and agree what the perfect path is

00:46:42 and then every single ant marches

00:46:44 to where the most likely location of food is,

00:46:46 or you can even just spread out.

00:46:47 And I promise you the spread out solution will be better

00:46:50 because if the discussing philosophical,

00:46:53 intellectual ants get it wrong

00:46:55 and they’re all moving the wrong direction,

00:46:56 they’re going to waste a day

00:46:58 and then they’re going to discuss again for another week.

00:47:00 Whereas if all these ants go in a random direction,

00:47:02 someone’s going to succeed

00:47:03 and they’re going to come back and claim victory

00:47:05 and get the Nobel prize or whatever the ant equivalent is.

00:47:08 And then they all march in the same direction.

00:47:10 And that’s great about society.

00:47:11 That’s great about the Western society.

00:47:13 We’re not plan based, we’re not central based.

00:47:15 We don’t have a Soviet Union style central government

00:47:19 that tells us where to forage.

00:47:20 We just forage.

00:47:21 We start a C corp.

00:47:24 We get investor money, go out and try it out.

00:47:25 And who knows who’s going to win.

00:47:28 I like it.

00:47:30 In your, when you look at the longterm vision

00:47:33 of autonomous vehicles,

00:47:35 do you see machine learning

00:47:36 as fundamentally being able to solve most of the problems?

00:47:39 So learning from experience.

00:47:42 I’d say we should be very clear

00:47:44 about what machine learning is and is not.

00:47:46 And I think there’s a lot of confusion.

00:47:48 What it is today is a technology

00:47:50 that can go through large databases

00:47:54 of repetitive patterns and find those patterns.

00:48:00 So for example, we did a study at Stanford two years ago

00:48:03 where we applied machine learning

00:48:05 to detecting skin cancer in images.

00:48:07 And we harvested or built a data set

00:48:10 of 129,000 skin photos

00:48:15 that had all been biopsied

00:48:17 for what the actual situation was.

00:48:19 And those included melanomas and carcinomas,

00:48:22 also included rashes and other skin conditions, lesions.

00:48:27 And then we had a network find those patterns.

00:48:30 And it was by and large able to then detect skin cancer

00:48:34 with an iPhone as accurately

00:48:36 as the best board certified Stanford level dermatologist.

00:48:41 We proved that.
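The pattern-finding idea in this passage — classify a new case by its resemblance to many labeled examples — can be illustrated with the simplest possible learner, a k-nearest-neighbor vote. Everything below is synthetic and hypothetical (toy feature vectors, not dermatology images; the actual study used a deep convolutional network on biopsy-labeled photos), but it shows the mechanic of deciding by similarity to a large labeled set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for image feature vectors -- NOT real medical data.
# Two clusters play the roles of "benign" (label 0) and "malignant" (label 1).
benign = rng.normal(loc=0.0, scale=1.0, size=(200, 8))
malignant = rng.normal(loc=2.0, scale=1.0, size=(200, 8))
X_train = np.vstack([benign, malignant])
y_train = np.hstack([np.zeros(200, dtype=int), np.ones(200, dtype=int)])

def knn_predict(x, k=5):
    """Label x by majority vote among its k nearest labeled examples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every example
    nearest = np.argsort(dists)[:k]               # indices of the k closest
    return int(y_train[nearest].sum() > k // 2)   # majority vote

# Queries near each cluster center get that cluster's label.
print(knn_predict(np.full(8, 2.1)))  # near the "malignant" cluster -> 1
print(knn_predict(np.zeros(8)))      # near the "benign" cluster -> 0
```

A deep network replaces the raw distance with a learned feature space, which is what makes the approach work on photographs, but the "compare against many labeled examples" core is the same.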

00:48:42 Now this thing was great in this one thing

00:48:45 and finding skin cancer, but it couldn’t drive a car.

00:48:49 So the difference to human intelligence

00:48:51 is we do all these many, many things

00:48:53 and we can often learn from a very small data set

00:48:56 of experiences.

00:48:58 Whereas machines still need very large data sets

00:49:01 and things that will be very repetitive.

00:49:03 Now that’s still super impactful

00:49:04 because almost everything we do is repetitive.

00:49:06 So that’s gonna really transform human labor

00:49:10 but it’s not this almighty general intelligence.

00:49:13 We’re really far away from a system

00:49:15 that will exhibit general intelligence.

00:49:18 To that end, I actually commiserate the naming a little bit

00:49:21 because artificial intelligence, if you believe Hollywood

00:49:24 is immediately mixed into the idea of human suppression

00:49:27 and machine superiority.

00:49:30 I don’t think that we’re gonna see this in my lifetime.

00:49:32 I don’t think human suppression is a good idea.

00:49:36 I don’t see it coming.

00:49:37 I don’t see the technology being there.

00:49:39 What I see instead is a very pointed, focused

00:49:42 pattern recognition technology that’s able to

00:49:45 extract patterns from large data sets.

00:49:48 And in doing so, it can be super impactful.

00:49:51 Super impactful.

00:49:53 Let’s take the impact of artificial intelligence

00:49:55 on human work.

00:49:57 We all know that it takes something like 10,000 hours

00:50:00 to become an expert.

00:50:01 If you’re gonna be a doctor or a lawyer

00:50:03 or even a really good driver,

00:50:05 it takes a certain amount of time to become experts.

00:50:08 Machines now are able and have been shown

00:50:11 to observe people become experts and observe experts

00:50:15 and then extract those rules from experts

00:50:17 in some interesting way.

00:50:18 They could go from law to sales to driving cars

00:50:25 to diagnosing cancer.

00:50:28 And then giving that capability to people who are

00:50:30 completely new in their job.

00:50:32 We now can, and that’s been done.

00:50:34 It’s been done commercially in many, many instantiations.

00:50:37 So that means we can use machine learning

00:50:40 to make people expert on the very first day of their work.

00:50:44 Like think about the impact.

00:50:45 If your doctor is still in their first 10,000 hours,

00:50:50 you have a doctor who is not quite an expert yet.

00:50:53 Who would not want a doctor who is the world’s best expert?

00:50:56 And now we can leverage machines to really eradicate

00:51:00 the error in decision making,

00:51:02 the error from lack of expertise, for human doctors.

00:51:06 That could save your life.

00:51:08 If we can linger on that for a little bit,

00:51:10 in which way do you hope machines in the medical field

00:51:14 could help assist doctors?

00:51:16 You mentioned this sort of accelerating the learning curve

00:51:21 for people, if they start a job, or in the first 10,000 hours

00:51:26 can be assisted by machines.

00:51:27 How do you envision that assistance looking?

00:51:29 So we built this app for an iPhone that can detect

00:51:33 and classify and diagnose skin cancer.

00:51:36 And we proved two years ago that it does pretty much

00:51:40 as good or better than the best human doctors.

00:51:42 So let me tell you a story.

00:51:43 So there’s a friend of mine, let’s call him Ben.

00:51:45 Ben is a very famous venture capitalist.

00:51:47 He goes to his doctor and the doctor looks at a mole

00:51:50 and says, hey, that mole is probably harmless.

00:51:55 And for some very funny reason, he pulls out that phone

00:51:59 with our app.

00:52:00 He’s a collaborator in our study.

00:52:02 And the app says, no, no, no, no, this is a melanoma.

00:52:06 And for background, melanomas are,

00:52:08 and skin cancer is the most common cancer in this country.

00:52:12 Melanomas can go from stage zero to stage four

00:52:16 within less than a year.

00:52:18 Stage zero means you can basically cut it out yourself

00:52:20 with a kitchen knife and be safe.

00:52:23 And stage four means your chances of living

00:52:25 five more years is less than 20%.

00:52:28 So it’s a very serious, serious, serious condition.

00:52:31 So this doctor who took out the iPhone,

00:52:36 looked at the iPhone and was a little bit puzzled.

00:52:37 He said, I mean, but just to be safe,

00:52:39 let’s cut it out and biopsy it.

00:52:41 That’s the technical term for let’s get

00:52:43 an in depth diagnostic that is more than just looking at it.

00:52:47 And it came back as cancerous, as a melanoma.

00:52:50 And it was then removed.

00:52:52 And my friend, Ben, I was hiking with him

00:52:54 and we were talking about AI.

00:52:56 And I told him I do this work on skin cancer.

00:52:58 And he said, oh, funny.

00:53:00 My doctor just had an iPhone that found my cancer.

00:53:05 So I was like completely intrigued.

00:53:06 I didn’t even know about this.

00:53:08 So here’s a person, I mean, this is a real human life, right?

00:53:11 Like who doesn’t know somebody

00:53:12 who has been affected by cancer.

00:53:14 Cancer is the number two cause of death.

00:53:16 Cancer is this kind of disease that is mean

00:53:19 in the following way.

00:53:21 Most cancers can actually be cured relatively easily

00:53:24 if we catch them early.

00:53:25 And the reason why we don’t tend to catch them early

00:53:28 is because they have no symptoms.

00:53:30 Like your very first symptom of a gallbladder cancer

00:53:33 or a pancreas cancer might be a headache.

00:53:37 And when you finally go to your doctor

00:53:38 because of these headaches or your back pain

00:53:41 and you’re being imaged, it’s usually stage four plus.

00:53:45 And that’s the time when the occurring chances

00:53:48 might be dropped to a single digit percentage.

00:53:50 So if we could leverage AI to inspect your body

00:53:54 on a regular basis without even a doctor in the room,

00:53:58 maybe when you take a shower or what have you,

00:54:00 I know this sounds creepy,

00:54:01 but then we might be able to save millions

00:54:03 and millions of lives.

00:54:06 You’ve mentioned there’s a concern that people have

00:54:09 about near term impacts of AI in terms of job loss.

00:54:12 So you’ve mentioned being able to assist doctors,

00:54:15 being able to assist people in their jobs.

00:54:17 Do you have a worry of people losing their jobs

00:54:22 or the economy being affected by the improvements in AI?

00:54:25 Yeah, anybody concerned about job losses,

00:54:27 please come to Udacity.com.

00:54:30 We teach contemporary tech skills

00:54:32 and we have a kind of implicit job promise.

00:54:36 We often, when we measure,

00:54:38 we see way over 50% of our graduates land in new jobs

00:54:41 and they're very satisfied with them.

00:54:43 And it costs almost nothing,

00:54:44 costs like $1,500 max or something like that.

00:54:47 And so there’s a cool new program

00:54:48 that you agree with the U.S. government,

00:54:51 guaranteeing that you will help us give scholarships

00:54:54 that educate people in this kind of situation.

00:54:57 Yeah, we’re working with the U.S. government

00:54:59 on the idea of basically rebuilding the American dream.

00:55:03 So Udacity has just dedicated 100,000 scholarships

00:55:07 for citizens of America for various levels of courses

00:55:12 that eventually will get you a job.

00:55:15 And those courses are all somewhat related

00:55:18 to the tech sector because the tech sector

00:55:20 is kind of the hottest sector right now.

00:55:22 And they range from entry level digital marketing

00:55:24 to very advanced self driving car engineering.

00:55:28 And we’re doing this with the White House

00:55:29 because we think it’s bipartisan.

00:55:30 It’s an issue that if you wanna really make America great,

00:55:36 being able to be a part of the solution

00:55:40 and live the American dream requires us to be proactive

00:55:43 about our education and our skillset.

00:55:45 It’s just the way it is today.

00:55:47 And it’s always been this way.

00:55:48 And we always had this American dream

00:55:49 to send our kids to college.

00:55:51 And now the American dream has to be

00:55:53 to send ourselves to college.

00:55:54 We can do this very, very efficiently.

00:55:58 We can squeeze it in in the evenings

00:56:00 and so on, online.

00:56:01 So at all ages.

00:56:03 All ages.

00:56:03 So our learners go from age 11 to age 80.

00:56:08 I just traveled to Germany and the guy in the train compartment

00:56:15 next to me was one of my students.

00:56:17 It’s like, wow, that’s amazing.

00:56:19 Think about impact.

00:56:21 We’ve become the educator of choice for now,

00:56:24 I believe officially six countries or five countries.

00:56:26 Most in the Middle East, like Saudi Arabia and Egypt.

00:56:30 In Egypt, we just had a cohort graduate

00:56:33 where we had 1100 high school students

00:56:37 who went through programming training,

00:56:39 becoming proficient at the level of a computer science undergrad.

00:56:42 And we had a 95% graduation rate,

00:56:45 even though everything’s online, it’s kind of tough,

00:56:46 but we keep trying to figure out

00:56:48 how to make this effective.

00:56:50 The vision is very, very simple.

00:56:52 The vision is education ought to be a basic human right.

00:56:58 It cannot be locked up behind ivory tower walls

00:57:02 only for the rich people, for the parents

00:57:04 who might bribe themselves into the system.

00:57:06 And only for young people and only for people

00:57:09 from the right demographics and the right geography

00:57:11 and possibly even the right race.

00:57:14 It has to be opened up to everybody.

00:57:15 If we are truthful to the human mission,

00:57:18 if we are truthful to our values,

00:57:20 we’re gonna open up education to everybody in the world.

00:57:23 So Udacity’s pledge of 100,000 scholarships,

00:57:27 I think is the biggest pledge of scholarships ever

00:57:29 in terms of numbers.

00:57:30 And we’re working, as I said, with the White House

00:57:33 and with very accomplished CEOs like Tim Cook

00:57:36 from Apple and others to really bring education

00:57:39 to everywhere in the world.

00:57:40 Not to ask you to pick the favorite of your children,

00:57:44 but at this point.

00:57:45 Oh, that’s Jasper.

00:57:46 I only have one that I know of.

00:57:49 Okay, good.

00:57:52 In this particular moment, what nano degree,

00:57:55 what set of courses are you most excited about at Udacity

00:58:00 or is that too impossible to pick?

00:58:02 I’ve been super excited about something

00:58:03 we haven’t launched yet in the building,

00:58:05 which is when we talk to our partner companies,

00:58:09 we have now a very strong footing in the enterprise world.

00:58:12 And also to our students,

00:58:14 we’ve kind of always focused on these hard skills,

00:58:17 like the programming skills or math skills

00:58:19 or building skills or design skills.

00:58:22 And a very common ask is soft skills.

00:58:25 Like how do you behave in your work?

00:58:26 How do you develop empathy?

00:58:28 How do you work on a team?

00:58:30 What are the very basics of management?

00:58:32 How do you do time management?

00:58:33 How do you advance your career

00:58:36 in the context of a broader community?

00:58:39 And that’s something that we haven’t done very well

00:58:41 at Udacity and I would say most universities

00:58:43 are doing very poorly as well

00:58:45 because we are so obsessed with individual test scores

00:58:47 and pay so little attention to teamwork in education.

00:58:52 So that’s something I see us moving into as a company

00:58:55 because I’m excited about this.

00:58:56 And I think, look, we can teach people tech skills

00:59:00 and they’re gonna be great.

00:59:00 But if you teach people empathy,

00:59:02 that’s gonna have the same impact.

00:59:04 Maybe harder than self driving cars, but.

00:59:08 I don’t think so.

00:59:08 I think the rules are really simple.

00:59:11 You just have to, you have to want to engage.

00:59:14 It’s, we literally went in school and in K through 12,

00:59:18 we teach kids like get the highest math score.

00:59:20 And if you are a rational human being,

00:59:22 you might conclude from this education, say,

00:59:25 that having the best math score and the best English scores

00:59:28 makes me the best leader.

00:59:29 And it turns out not to be the case.

00:59:31 It’s actually really wrong because making the,

00:59:34 first of all, in terms of math scores,

00:59:35 I think it’s perfectly fine to hire somebody

00:59:37 with great math skills.

00:59:38 You don’t have to do it yourself.

00:59:40 Can you hire someone with good empathy for you?

00:59:42 That’s much harder,

00:59:43 but you can always hire someone with great math skills.

00:59:46 But we live in an affluent world

00:59:48 where we constantly deal with other people.

00:59:51 And that’s a beauty.

00:59:51 It’s not a nuisance.

00:59:52 It’s a beauty.

00:59:53 So if we somehow develop that muscle

00:59:55 that we can do that well and empower others

00:59:59 in the workplace, I think we’re gonna be super successful.

01:00:02 And I know many fellow roboticists and computer scientists

01:00:07 whom I will insist take this course.

01:00:09 Not to be named here.

01:00:12 Not to be named.

01:00:13 Many, many years ago, 1903,

01:00:17 the Wright brothers flew in Kitty Hawk for the first time.

01:00:22 And you’ve launched a company of the same name, Kitty Hawk,

01:00:26 with the dream of building flying cars, eVTOLs.

01:00:32 So at the big picture,

01:00:34 what are the big challenges of making this thing

01:00:36 that actually have inspired generations of people

01:00:39 about what the future looks like?

01:00:41 What does it take?

01:00:42 What are the biggest challenges?

01:00:43 So flying cars has always been a dream.

01:00:47 Every boy, every girl wants to fly.

01:00:49 Let’s be honest.

01:00:50 Yes.

01:00:51 And let’s go back in our history

01:00:52 of your dreaming of flying.

01:00:53 I think honestly, my single most remembered childhood dream

01:00:57 has been a dream where I was sitting on a pillow

01:00:59 and I could fly.

01:01:00 I was like five years old.

01:01:02 I remember like maybe three dreams of my childhood,

01:01:04 but that’s the one I remember most vividly.

01:01:07 And then Peter Thiel famously said,

01:01:09 they promised us flying cars

01:01:10 and they gave us 140 characters, pointing at Twitter,

01:01:14 which at the time limited message size to 140 characters.

01:01:18 So if you’re coming back now to really go

01:01:20 for these super impactful stuff like flying cars

01:01:23 and to be precise, they’re not really cars.

01:01:25 They don’t have wheels.

01:01:27 They’re actually much closer to a helicopter

01:01:28 than anything else.

01:01:29 They take off vertically and they fly horizontally,

01:01:32 but they have important differences.

01:01:34 One difference is that they are much quieter.

01:01:37 We just released a vehicle called Project Heaviside

01:01:41 that can fly over you as low as a helicopter

01:01:43 and you basically can't hear it.

01:01:45 It’s like 38 decibels.

01:01:46 It’s like, if you were inside the library,

01:01:49 you might be able to hear it,

01:01:50 but anywhere outdoors, your ambient noise is higher.

01:01:53 Secondly, they’re much more affordable.

01:01:57 They’re much more affordable than helicopters.

01:01:58 And the reason is helicopters are expensive

01:02:01 for many reasons.

01:02:04 There’s lots of single point of figures in a helicopter.

01:02:06 There’s a bolt between the blades

01:02:09 that’s caused Jesus bolt.

01:02:10 And the reason why it’s called Jesus bolt

01:02:12 is that if this bolt breaks, you will die.

01:02:16 There is no second chance in helicopter flight.

01:02:19 Whereas we have these distributed mechanisms.

01:02:21 When you go from gasoline to electric,

01:02:23 you can now have many, many, many small motors

01:02:25 as opposed to one big motor.

01:02:27 And that means if you lose one of those motors,

01:02:28 not a big deal.

01:02:29 Heaviside has eight of those motors.

01:02:32 If it loses one of those eight motors,

01:02:34 so there are seven left, it can take off just like before

01:02:37 and land just like before.

01:02:40 We are now also moving into a technology

01:02:42 that doesn’t require a commercial pilot

01:02:44 because in some level,

01:02:45 flight is actually easier than ground transportation

01:02:48 like in self driving cars.

01:02:51 The world is full of like children and bicycles

01:02:54 and other cars and mailboxes and curbs and shrubs

01:02:57 and what have you.

01:02:58 All these things you have to avoid.

01:03:00 When you go above the buildings and tree lines,

01:03:03 there’s nothing there.

01:03:04 I mean, you can do the test right now,

01:03:06 look outside and count the number of things you see flying.

01:03:09 I’d be shocked if you could see more than two things.

01:03:11 It’s probably just zero.

01:03:13 In the Bay Area, the most I’ve ever seen was six.

01:03:16 And maybe it’s 15 or 20,

01:03:18 but not 10,000.

01:03:20 So the sky is very ample and very empty and very free.

01:03:24 So the vision is, can we build a socially acceptable

01:03:27 mass transit solution for daily transportation

01:03:32 that is affordable?

01:03:34 And we have an existence proof.

01:03:36 Heaviside can fly 100 miles in range

01:03:39 with still 30% electric reserves.

01:03:43 It can fly up to like 180 miles an hour.

01:03:46 We know that that solution at scale

01:03:48 would make your ground transportation

01:03:51 10 times as fast as a car

01:03:53 based on U.S. census statistics data,

01:03:57 which means you would take your 300 hours

01:04:00 of yearly commute down to 30 hours

01:04:03 and give you 270 hours back.
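The commute arithmetic above is easy to check; here is a quick sketch, where both the 300 hours per year and the 10x speedup are the figures quoted in the conversation, not independently verified data:

```python
# Checking the commute arithmetic quoted above.
# Both inputs are the speaker's claimed figures, not verified data.
yearly_commute_hours = 300   # claimed yearly commute time in hours
speedup = 10                 # claimed speed advantage of flying over driving

new_commute_hours = yearly_commute_hours / speedup
hours_saved = yearly_commute_hours - new_commute_hours

print(new_commute_hours)  # 30.0
print(hours_saved)        # 270.0
```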

01:04:05 Who wouldn’t want, I mean, who doesn’t hate traffic?

01:04:07 Like I hate, give me the person that doesn’t hate traffic.

01:04:10 I hate traffic.

01:04:11 Every time I’m in traffic, I hate it.

01:04:13 And if we could free the world from traffic,

01:04:17 we have technology.

01:04:18 We can free the world from traffic.

01:04:20 We have the technology.

01:04:21 It’s there.

01:04:22 We have an existence proof.

01:04:23 It’s not a technological problem anymore.

01:04:25 Do you think there is a future where tens of thousands,

01:04:29 maybe hundreds of thousands of both delivery drones

01:04:34 and flying cars of this kind, eVTOLs, fill the sky?

01:04:39 I absolutely believe this.

01:04:40 And there’s obviously the societal acceptance

01:04:43 is a major question.

01:04:45 And of course, safety is.

01:04:46 I believe in safety,

01:04:48 we’re gonna exceed ground transportation safety

01:04:50 as has happened for aviation already, commercial aviation.

01:04:54 And in terms of acceptance,

01:04:56 I think one of the key things is noise.

01:04:58 That’s why we are focusing relentlessly on noise

01:05:00 and we build perhaps the quietest electric vehicle

01:05:05 ever built.

01:05:07 The nice thing about the sky is it’s three dimensional.

01:05:09 So any mathematician will immediately recognize

01:05:12 the difference between 1D of like a regular highway

01:05:14 to 3D of a sky.

01:05:17 But to make it clear for the layman,

01:05:20 say you wanna make 100 vertical lanes

01:05:22 of highway 101 in San Francisco,

01:05:25 because you believe building 100 vertical lanes

01:05:27 is the right solution.

01:05:28 Imagine how much it would cost to stack 100 vertical lanes

01:05:31 physically onto 101.

01:05:33 That would be prohibitive.

01:05:34 That would be consuming the world’s GDP for an entire year

01:05:37 just for one highway.

01:05:39 It’s amazingly expensive.

01:05:41 In the sky, it would just be a recompilation

01:05:43 of a piece of software because all these lanes are virtual.

01:05:46 That means any vehicle that is in conflict

01:05:49 with another vehicle would just go to different altitudes

01:05:51 and then the conflict is gone.

01:05:53 And if you don’t believe this,

01:05:55 that’s exactly how commercial aviation works.

01:05:58 When you fly from New York to San Francisco,

01:06:01 another plane flies from San Francisco to New York,

01:06:04 they are different altitudes.

01:06:05 So they don’t hit each other.

01:06:06 It’s a solved problem for the jet space

01:06:10 and it will be a solved problem for the urban space.

01:06:12 There’s companies like Google Wing and Amazon

01:06:15 working on very innovative solutions.

01:06:17 How do we have space management?

01:06:18 They use exactly the same principles as we use today

01:06:21 to route today’s jets.

01:06:23 There’s nothing hard about this.

01:06:25 Do you envision autonomy being a key part of it

01:06:29 so that the flying vehicles are either semi autonomous

01:06:34 or fully autonomous?

01:06:36 100% autonomous.

01:06:37 You don’t want idiots like me flying in the sky,

01:06:40 I promise you.

01:06:41 And if you have 10,000,

01:06:44 watch the movie, The Fifth Element

01:06:46 to get a feel for what will happen if it’s not autonomous.

01:06:49 And a centralized, that’s a really interesting idea

01:06:51 of a centralized sort of management system

01:06:55 for lanes and so on.

01:06:56 So actually just being able to have

01:07:00 something similar to what we have in current commercial aviation,

01:07:03 but scale it up to much, much more vehicles.

01:07:05 That’s a really interesting optimization problem.

01:07:07 Mathematically, it is very, very straightforward.

01:07:11 Like the gap we leave between jets is gargantuan.

01:07:13 And part of the reason is there isn’t that many jets.

01:07:16 So it just feels like a good solution.

01:07:18 Today, when you get vectored by air traffic control,

01:07:22 someone talks to you, right?

01:07:23 So any ATC controller might have up to maybe 20 planes

01:07:26 on the same frequency.

01:07:28 And then they talk to you, you have to talk back.

01:07:30 And it feels right because there isn’t more than 20 planes

01:07:32 around anyhow, so you can talk to everybody.

01:07:34 But if there’s 20,000 things around,

01:07:36 you can’t talk to everybody anymore.

01:07:37 So we have to do something that’s called digital,

01:07:40 like text messaging.

01:07:41 Like we do have solutions.

01:07:43 Like we have what, four or five billion smartphones

01:07:45 in the world now, right?

01:07:46 And they’re all connected.

01:07:47 And somehow we solve the scale problem for smartphones.

01:07:50 We know where they all are.

01:07:51 They can talk to somebody and they’re very reliable.

01:07:54 They’re amazingly reliable.

01:07:56 We could use the same system,

01:07:58 the same scale for air traffic control.

01:08:01 So instead of me as a pilot talking to a human being

01:08:04 and in the middle of the conversation

01:08:06 receiving a new frequency, like how ancient is that?

01:08:09 We could digitize this stuff

01:08:11 and digitally transmit the right flight coordinates.

01:08:15 And that solution will automatically scale

01:08:18 to 10,000 vehicles.

01:08:20 We talked about empathy a little bit.

01:08:22 Do you think we will one day build an AI system

01:08:25 that a human being can love

01:08:27 and that loves that human back, like in the movie, Her?

01:08:31 Look, I’m a pragmatist.

01:08:33 For me, AI is a tool.

01:08:35 It’s like a shovel.

01:08:36 And the ethics of using the shovel are always

01:08:40 with us, the people.

01:08:41 And it has to be this way.

01:08:44 In terms of emotions,

01:08:47 I would hate to come into my kitchen

01:08:49 and see that my refrigerator spoiled all my food,

01:08:54 then have it explained to me

01:08:55 that it fell in love with the dishwasher

01:08:57 and it wasn’t as nice as the dishwasher.

01:08:59 So as a result, it neglected me.

01:09:02 That would just be a bad experience

01:09:05 and it would be a bad product.

01:09:07 I would probably not recommend this refrigerator

01:09:09 to my friends.

01:09:11 And that’s where I draw the line.

01:09:12 I think to me, technology has to be reliable

01:09:16 and has to be predictable.

01:09:17 I want my car to work.

01:09:19 I don’t want to fall in love with my car.

01:09:22 I just want it to work.

01:09:24 I want it to complement me, not to replace me.

01:09:27 I have very unique human properties

01:09:30 and I want the machines to make me,

01:09:33 turn me into a superhuman.

01:09:35 Like I’m already a superhuman today,

01:09:37 thanks to the machines that surround me.

01:09:39 And I give you examples.

01:09:40 I can run across the Atlantic

01:09:44 at near the speed of sound at 36,000 feet today.

01:09:48 That’s kind of amazing.

01:09:49 I can, my voice now carries me all the way to Australia

01:09:54 using a smartphone today.

01:09:56 And it’s not the speed of sound, which would take hours.

01:10:00 It’s the speed of light.

01:10:01 My voice travels at the speed of light.

01:10:03 How cool is that?

01:10:04 That makes me superhuman.

01:10:06 I would even argue my flushing toilet makes me superhuman.

01:10:10 Just think of the time before flushing toilets.

01:10:13 And maybe you have a very old person in your family

01:10:16 that you can ask about this

01:10:18 or take a trip to rural India to experience it.

01:10:23 It makes me superhuman.

01:10:25 So to me, what technology does, it complements me.

01:10:28 It makes me stronger.

01:10:30 Therefore, words like love and compassion

01:10:33 hold very little interest for me when it comes to machines.

01:10:38 I have interest in people.

01:10:40 You don’t think, first of all, beautifully put,

01:10:44 beautifully argued,

01:10:45 but do you think love has use in our tools?

01:10:49 Compassion.

01:10:50 I think love is a beautiful human concept.

01:10:53 And if you think of what love really is,

01:10:55 love is a means to convey safety, to convey trust.

01:11:03 I think trust has a huge need in technology as well,

01:11:07 not just people.

01:11:09 We want to trust our technology the same way,

01:11:12 in a similar way we trust people.

01:11:15 In human interaction, standards have emerged

01:11:19 and feelings, emotions have emerged,

01:11:21 maybe genetically, maybe biologically,

01:11:23 that are able to convey sense of trust, sense of safety,

01:11:26 sense of passion, of love, of dedication

01:11:28 that makes the human fabric.

01:11:30 And I’m a big slacker for love.

01:11:33 I want to be loved.

01:11:34 I want to be trusted.

01:11:35 I want to be admired.

01:11:36 All these wonderful things.

01:11:38 And because all of us, we have this beautiful system,

01:11:42 I wouldn’t just blindly copy this to the machines.

01:11:44 Here’s why.

01:11:46 When you look at, say, transportation,

01:11:49 you could have observed that up to the end

01:11:53 of the 19th century, almost all transportation used

01:11:57 any number of legs, from one leg to two legs

01:11:59 to a thousand legs.

01:12:01 And you could have concluded that is the right way

01:12:03 to move about the environment.

01:12:06 We’ve been made the exception of birds

01:12:08 who use flapping wings.

01:12:08 In fact, there were many people in early aviation

01:12:10 who strapped wings to their arms and jumped from cliffs.

01:12:13 Most of them didn’t survive.

01:12:16 Then the interesting thing is that the technology solutions

01:12:19 are very different.

01:12:21 Like in technology, it’s really easy to build a wheel.

01:12:23 In biology, it’s super hard to build a wheel.

01:12:25 There’s very few perpetually rotating things in biology

01:12:30 and they're usually inside cells and such.

01:12:34 In engineering, we can build wheels.

01:12:37 And those wheels gave rise to cars.

01:12:41 Similar wheels gave rise to aviation.

01:12:44 Like there’s no thing that flies

01:12:46 that wouldn’t have something that rotates,

01:12:48 like a jet engine or helicopter blades.

01:12:52 So the solutions have used very different physical laws

01:12:55 than nature, and that’s great.

01:12:58 So for me to be too much focused on,

01:13:00 oh, this is how nature does it, let’s just replicate it.

01:13:03 If you really believed that the solution

01:13:05 to the agricultural revolution was a humanoid robot,

01:13:08 you would still be waiting today.

01:13:10 Again, beautifully put.

01:13:12 You said that you don’t take yourself too seriously.

01:13:15 Did I say that?

01:13:18 You want me to say that?

01:13:19 Maybe.

01:13:20 You’re not taking me seriously.

01:13:20 I’m not, that’s right.

01:13:22 Good, you’re right, I don’t wanna.

01:13:24 I just made that up.

01:13:25 But you have a humor and a lightness about life

01:13:29 that I think is beautiful and inspiring to a lot of people.

01:13:33 Where does that come from?

01:13:35 The smile, the humor, the lightness

01:13:38 amidst all the chaos of the hard work that you’re in,

01:13:42 where does that come from?

01:13:43 I just love my life.

01:13:44 I love the people around me.

01:13:47 I’m just so glad to be alive.

01:13:49 Like I’m, what, 52, hard to believe.

01:13:53 People say 52 is the new 51, so now I feel better.

01:13:56 But looking around the world,

01:14:01 just go back 200, 300 years.

01:14:06 Humanity is, what, 300,000 years old?

01:14:09 But for the first 300,000 years minus the last 100,

01:14:13 our life expectancy would have been

01:14:17 plus or minus 30 years roughly, give or take.

01:14:20 So I would be long dead now.

01:14:24 That makes me just enjoy every single day of my life

01:14:26 because I don’t deserve this.

01:14:28 Why am I born today when so many of my ancestors

01:14:32 died of horrible deaths, like famines, massive wars

01:14:38 that ravaged Europe for the last 1,000 years

01:14:41 and mysteriously disappeared after World War II

01:14:44 when the Americans and the Allies

01:14:46 did something amazing to my country

01:14:48 that didn’t deserve it, the country of Germany.

01:14:51 This is so amazing.

01:14:52 And then when you’re alive and feel this every day,

01:14:56 then it’s just so amazing what we can accomplish,

01:15:02 what we can do.

01:15:03 We live in a world that is so incredibly,

01:15:06 vastly changing every day.

01:15:08 Almost everything that we cherish from your smartphone

01:15:12 to your flushing toilet, to all these basic inventions,

01:15:16 your new clothes you’re wearing, your watch, your plane,

01:15:19 penicillin, I don’t know, anesthesia for surgery,

01:15:24 all have been invented in the last 150 years.

01:15:29 So in the last 150 years, something magical happened.

01:15:31 And I would trace it back to Gutenberg

01:15:33 and the printing press that has been able

01:15:34 to disseminate information more efficiently than before

01:15:37 that all of a sudden we were able to invent agriculture

01:15:41 and nitrogen fertilization that made agriculture

01:15:44 so much more potent that we didn’t have to work

01:15:47 in the farms anymore and we could start reading and writing

01:15:49 and we could become all these wonderful things

01:15:51 we are today, from airline pilot to massage therapist

01:15:53 to software engineer.

01:15:56 It’s just amazing.

01:15:57 Like living in that time is such a blessing.

01:16:00 We should sometimes really think about this, right?

01:16:03 Steven Pinker, who is a very famous author and philosopher

01:16:06 whom I really adore, wrote a great book called

01:16:08 Enlightenment Now.

01:16:09 And that’s maybe the one book I would recommend.

01:16:11 And he asks the question,

01:16:13 if there was only a single article written

01:16:15 in the 20th century, it’s only one article, what would it be?

01:16:18 What’s the most important innovation,

01:16:20 the most important thing that happened?

01:16:22 And he would say this article would credit

01:16:24 a guy named Karl Bosch.

01:16:27 And I challenge anybody, have you ever heard

01:16:29 of the name Karl Bosch?

01:16:31 I hadn’t, okay.

01:16:32 There’s a Bosch Corporation in Germany,

01:16:35 but it’s not associated with Karl Bosch.

01:16:38 So I looked it up.

01:16:39 Karl Bosch invented nitrogen fertilization.

01:16:42 And in doing so, together with an older invention

01:16:45 of irrigation, was able to increase the yields

01:16:49 per agricultural land by a factor of 26.

01:16:52 So a 2,500% increase in fertility of land.
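The conversion between the factor and the percentage quoted above holds up:

```python
# A multiplicative factor of 26 corresponds to a 2,500% increase:
# percentage increase = (factor - 1) * 100.
factor = 26
percent_increase = (factor - 1) * 100
print(percent_increase)  # 2500
```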

01:16:57 And that, so Steven Pinker argues,

01:17:00 saved over 2 billion lives today.

01:17:03 2 billion people who would be dead

01:17:05 if this man hadn’t done what he had done, okay?

01:17:08 Think about that impact and what that means to society.

01:17:12 That’s the way I look at the world.

01:17:14 I mean, it’s so amazing to be alive and to be part of this.

01:17:16 And I’m so glad I lived after Karl Bosch and not before.

01:17:21 I don’t think there’s a better way to end this, Sebastian.

01:17:23 It’s an honor to talk to you,

01:17:25 to have had the chance to learn from you.

01:17:27 Thank you so much for talking to me.

01:17:28 Thanks for coming out.

01:17:29 It’s been a real pleasure.

01:17:30 Thank you for listening to this conversation

01:17:32 with Sebastian Thrun.

01:17:34 And thank you to our presenting sponsor, Cash App.

01:17:37 Download it, use code LexPodcast,

01:17:40 you’ll get $10 and $10 will go to FIRST,

01:17:43 a STEM education nonprofit that inspires

01:17:45 hundreds of thousands of young minds

01:17:47 to learn and to dream of engineering our future.

01:17:50 If you enjoy this podcast, subscribe on YouTube,

01:17:53 give it five stars on Apple Podcast, support it on Patreon,

01:17:56 or connect with me on Twitter.

01:17:58 And now, let me leave you with some words of wisdom

01:18:01 from Sebastian Thrun.

01:18:03 It’s important to celebrate your failures

01:18:05 as much as your successes.

01:18:07 If you celebrate your failures really well,

01:18:09 if you say, wow, I failed, I tried, I was wrong,

01:18:13 but I learned something, then you realize you have no fear.

01:18:18 And when your fear goes away, you can move the world.

01:18:22 Thank you for listening and hope to see you next time.