Eric Schmidt: Google #8

Transcript

00:00:00 The following is a conversation with Eric Schmidt.

00:00:03 He was the CEO of Google for 10 years

00:00:05 and a chairman for six more,

00:00:06 guiding the company through an incredible period of growth

00:00:10 and a series of world changing innovations.

00:00:12 He is one of the most impactful leaders

00:00:15 in the era of the internet and a powerful voice

00:00:19 for the promise of technology in our society.

00:00:22 It was truly an honor to speak with him

00:00:24 as part of the MIT course

00:00:26 on artificial general intelligence

00:00:28 and the artificial intelligence podcast.

00:00:31 And now here’s my conversation with Eric Schmidt.

00:00:37 What was the first moment

00:00:38 when you fell in love with technology?

00:00:40 I grew up in the 1960s as a boy

00:00:44 where every boy wanted to be an astronaut

00:00:46 and part of the space program.

00:00:48 So like everyone else of my age,

00:00:51 we would go out to the cow pasture behind my house,

00:00:54 which was literally a cow pasture

00:00:56 and we would shoot model rockets off.

00:00:58 And that I think is the beginning.

00:01:00 And of course, generationally today,

00:01:03 it would be video games and all the amazing things

00:01:05 that you can do online with computers.

00:01:09 There’s a transformative, inspiring aspect of science

00:01:12 and math that maybe rockets

00:01:15 would instill in individuals.

00:01:17 You’ve mentioned yesterday that eighth grade math

00:01:20 is where the journey through the mathematical universe

00:01:22 diverges from many people.

00:01:23 It’s this fork in the roadway.

00:01:26 There’s a professor of math at Berkeley, Edward Frenkel.

00:01:30 He, I’m not sure if you’re familiar with him.

00:01:32 I am.

00:01:33 He has written this amazing book

00:01:35 I recommend to everybody called Love and Math.

00:01:37 Two of my favorite words.

00:01:41 He says that if painting was taught like math,

00:01:46 then the students would be asked to paint a fence,

00:01:49 which is his analogy of essentially how math is taught.

00:01:52 And so you never get a chance to discover the beauty

00:01:55 of the art of painting or the beauty of the art of math.

00:01:59 So how, when, and where did you discover that beauty?

00:02:05 I think what happens with people like myself

00:02:08 is that you’re math enabled pretty early

00:02:11 and all of a sudden you discover that you can use that

00:02:14 to discover new insights.

00:02:16 The great scientists will all tell a story,

00:02:19 the men and women who are fantastic today,

00:02:22 that somewhere when they were in high school or in college,

00:02:24 they discovered that they could discover

00:02:26 something themselves.

00:02:27 And that sense of building something,

00:02:29 of having an impact that you own,

00:02:32 drives knowledge acquisition and learning.

00:02:35 In my case, it was programming.

00:02:37 And the notion that I could build things

00:02:39 that had not existed that I had built,

00:02:42 that it had my name on it.

00:02:44 And this was before open source,

00:02:46 but you could think of it as open source contributions.

00:02:49 So today, if I were a 16 or 17 year old boy,

00:02:51 I’m sure that I would aspire as a computer scientist

00:02:54 to make a contribution like the open source heroes

00:02:58 of the world today.

00:02:58 That would be what would be driving me.

00:03:00 And I’d be trying and learning and making mistakes

00:03:03 and so forth in the ways that it works.

00:03:06 The repository that GitHub represents

00:03:09 and that open source libraries represent

00:03:12 is an enormous bank of knowledge

00:03:14 of all of the people who are doing that.

00:03:17 And one of the lessons that I learned at Google

00:03:19 was that the world is a very big place

00:03:21 and there’s an awful lot of smart people.

00:03:23 And an awful lot of them are underutilized.

00:03:26 So here’s an opportunity, for example,

00:03:28 building parts of programs, building new ideas

00:03:31 to contribute to the greater of society.

00:03:36 So in that moment in the 70s,

00:03:38 the inspiring moment where there was nothing

00:03:40 and then you created something through programming,

00:03:42 that magical moment.

00:03:44 So in 1975, I think you created a program called Lex,

00:03:49 which I especially like because my name is Lex.

00:03:51 So thank you, thank you for creating a brand

00:03:54 that established a reputation that’s long lasting, reliable

00:03:58 and has a big impact on the world and still used today.

00:04:01 So thank you for that.

00:04:02 But more seriously, in that time, in the 70s,

00:04:08 as an engineer, personal computers were being born.

00:04:12 Do you think you’d be able to predict the 80s, 90s

00:04:15 and the aughts of where computers would go?

00:04:18 I’m sure I could not and would not have gotten it right.

00:04:23 I was the beneficiary of the great work

00:04:25 of many, many people who saw it clearer than I did.

00:04:29 With Lex, I worked with a fellow named Michael Lesk,

00:04:32 who was my supervisor.

00:04:33 And he essentially helped me architect

00:04:36 and deliver a system that’s still in use today.

00:04:39 After that, I worked at Xerox Palo Alto Research Center,

00:04:42 where the Alto was invented.

00:04:43 And the Alto is the predecessor

00:04:46 of the modern personal computer or Macintosh and so forth.

00:04:50 And the Altos were very rare.

00:04:52 And I had to drive an hour from Berkeley to go use them.

00:04:55 But I made a point of skipping classes

00:04:57 and doing whatever it took to have access

00:05:00 to this extraordinary achievement.

00:05:02 I knew that they were consequential.

00:05:04 What I did not understand was scaling.

00:05:08 I did not understand what would happen

00:05:09 when you had 100 million as opposed to 100.

00:05:12 And so since then,

00:05:14 having learned the benefit of scale,

00:05:16 I always look for things

00:05:17 which are going to scale to platforms, right?

00:05:19 So mobile phones, Android, all those things.

00:05:23 The world is enormous,

00:05:25 there are many, many people in the world,

00:05:27 and people really have needs.

00:05:28 They really will use these platforms

00:05:29 and you can build big businesses on top of them.

00:05:32 So it’s interesting.

00:05:33 So when you see a piece of technology,

00:05:34 now you think, what will this technology look like

00:05:37 when it’s in the hands of a billion people?

00:05:39 That’s right.

00:05:39 So an example would be that the market is so competitive now

00:05:44 that if you can’t figure out a way

00:05:46 for something to have a million users or a billion users,

00:05:50 it probably is not going to be successful

00:05:53 because something else will become the general platform

00:05:56 and your idea will become a lost idea

00:06:01 or a specialized service with relatively few users.

00:06:04 So it’s a path to generality.

00:06:05 It’s a path to general platform use.

00:06:07 It’s a path to broad applicability.

00:06:10 Now there are plenty of good businesses that are tiny.

00:06:12 So luxury goods, for example.

00:06:14 But if you want to have an impact at scale,

00:06:18 you have to look for things which are of common value,

00:06:21 common pricing, common distribution

00:06:23 and solve common problems.

00:06:24 They’re problems that everyone has.

00:06:26 And by the way, people have lots of problems.

00:06:28 Information, medicine, health, education and so forth.

00:06:31 Work on those problems.

00:06:32 Like you said, you’re a big fan of the middle class.

00:06:36 Because there’s so many of them.

00:06:37 There’s so many of them.

00:06:38 By definition.

00:06:40 So any product, any thing that has a huge impact

00:06:44 and improves their lives is a great business decision

00:06:47 and it’s just good for society.

00:06:48 And there’s nothing wrong with starting off in the high end

00:06:52 as long as you have a plan to get to the middle class.

00:06:55 There’s nothing wrong with starting with a specialized

00:06:57 market in order to learn and to build and to fund things.

00:07:01 So you start with a luxury market

00:07:02 to build a general purpose market.

00:07:04 But if you define yourself as only a narrow market,

00:07:07 someone else can come along with a general purpose market

00:07:10 that can push you to the corner,

00:07:12 can restrict the scale of operation,

00:07:14 can force you to be a lesser impact than you might be.

00:07:17 So it’s very important to think in terms of broad businesses

00:07:21 and broad impact.

00:07:22 Even if you start in a little corner somewhere.

00:07:26 So as you look to the 70s but also in the decades to come

00:07:30 and you saw computers, did you see them as tools

00:07:34 or was there a little element of another entity?

00:07:40 I remember a quote saying AI began with our dream

00:07:44 to create the gods.

00:07:46 Is there a feeling when you wrote that program

00:07:48 that you were creating another entity,

00:07:51 giving life to something?

00:07:52 I wish I could say otherwise,

00:07:54 but I simply found the technology platforms so exciting.

00:07:58 That’s what I was focused on.

00:08:00 I think the majority of the people that I’ve worked with,

00:08:03 and there are a few exceptions, Steve Jobs being an example,

00:08:06 really saw this as a great technological play.

00:08:09 I think relatively few of the technical people understood

00:08:13 the scale of its impact.

00:08:15 So I used NCP, which is a predecessor to TCP/IP.

00:08:19 It just made sense to connect things.

00:08:21 We didn’t think of it in terms of the internet

00:08:23 and then companies and then Facebook and then Twitter

00:08:27 and then politics and so forth.

00:08:29 We never did that build.

00:08:30 We didn’t have that vision.

00:08:32 And I think most people, it’s a rare person

00:08:35 who can see compounding at scale.

00:08:38 Most people can see,

00:08:39 if you ask people to predict the future,

00:08:40 they’ll give you an answer of six to nine months

00:08:43 or 12 months,

00:08:44 because that’s about as far as people can imagine.

00:08:47 But there’s an old saying,

00:08:48 which actually was attributed to a professor at MIT

00:08:50 a long time ago,

00:08:52 that we overestimate what can be done in one year

00:08:56 and we underestimate what can be done in a decade.

00:09:00 And there’s a great deal of evidence

00:09:02 that these core platforms at hardware and software

00:09:05 take a decade, right?

00:09:07 So think about self driving cars.

00:09:09 Self driving cars were thought about in the 90s.

00:09:12 There were projects around them.

00:09:13 The first DARPA Grand Challenge was roughly 2004.

00:09:17 So that’s roughly 15 years ago.

00:09:19 And today we have self driving cars operating

00:09:22 in a city in Arizona, right?

00:09:23 It’s 15 years and we still have a ways to go

00:09:26 before they’re more generally available.

00:09:31 So you’ve spoken about the importance,

00:09:33 you just talked about predicting into the future.

00:09:37 You’ve spoken about the importance of thinking

00:09:39 five years ahead and having a plan for those five years.

00:09:42 The way to say it is that almost everybody

00:09:45 has a one year plan.

00:09:47 Almost no one has a proper five year plan.

00:09:50 And the key thing to having a five year plan

00:09:52 is to having a model for what’s going to happen

00:09:55 under the underlying platforms.

00:09:56 So here’s an example.

00:09:59 Moore’s Law as we know it,

00:10:01 the thing that powered improvements in CPUs

00:10:04 has largely halted in its traditional shrinking mechanism

00:10:07 because the costs have just gotten so high.

00:10:10 It’s getting harder and harder.

00:10:12 But there’s plenty of algorithmic improvements

00:10:14 and specialized hardware improvements.

00:10:16 So you need to understand the nature of those improvements

00:10:19 and where they’ll go in order to understand

00:10:21 how it will change the platform.

00:10:24 In the area of network connectivity,

00:10:26 what are the gains that are gonna be possible in wireless?

00:10:29 It looks like there’s an enormous expansion

00:10:33 of wireless connectivity at many different bands.

00:10:36 Historically,

00:10:38 I’ve always thought

00:10:39 that we were primarily gonna be using fiber,

00:10:42 but now it looks like we’re gonna be using fiber

00:10:43 plus very powerful high bandwidth

00:10:47 sort of short distance connectivity

00:10:49 to bridge the last mile.

00:10:51 That’s an amazing achievement.

00:10:53 If you know that,

00:10:54 then you’re gonna build your systems differently.

00:10:56 By the way, those networks

00:10:57 have different latency properties, right?

00:10:59 Because they’re more symmetric,

00:11:01 the algorithms feel faster for that reason.

00:11:04 And so when you think about whether it’s fiber

00:11:07 or just technologies in general,

00:11:09 there’s this Barbara Wootton quote

00:11:14 that I really like.

00:11:15 It’s from the champions of the impossible

00:11:18 rather than the slaves of the possible

00:11:20 that evolution draws its creative force.

00:11:23 So in predicting the next five years,

00:11:25 I’d like to talk about the impossible and the possible.

00:11:29 Well, and again, one of the great things about humanity

00:11:32 is that we produce dreamers, right?

00:11:34 We literally have people who have a vision and a dream.

00:11:37 They are, if you will, disagreeable

00:11:40 in the sense that they disagree

00:11:42 with what the sort of zeitgeist is.

00:11:45 They say there is another way.

00:11:48 They have a belief, they have a vision.

00:11:50 If you look at science, science is always marked

00:11:54 by such people who went against some conventional wisdom,

00:11:58 collected the knowledge at the time

00:12:00 and assembled it in a way that produced a powerful platform.

00:12:03 And you’ve been amazingly honest,

00:12:08 in an inspiring way, about things you’ve been wrong

00:12:11 about predicting and you’ve obviously been right

00:12:13 about a lot of things, but in this kind of tension,

00:12:18 how do you balance, as a company,

00:12:21 in predicting the next five years,

00:12:23 the impossible, planning for the impossible,

00:12:26 so listening to those crazy dreamers,

00:12:30 letting them run away and make the impossible real,

00:12:34 make it happen, and, you know,

00:12:36 that’s how programmers often think,

00:12:38 slowing things down and saying,

00:12:41 well, this is the rational, this is the possible,

00:12:44 the pragmatic, the dreamer versus the pragmatist,

00:12:48 so it’s helpful to have a model

00:12:51 which encourages a predictable revenue stream

00:12:56 as well as the ability to do new things.

00:12:58 So in Google’s case, we’re big enough

00:13:00 and well enough managed and so forth

00:13:02 that we have a pretty good sense of what our revenue will be

00:13:05 for the next year or two, at least for a while.

00:13:07 And so we have enough cash generation

00:13:11 that we can make bets, and indeed,

00:13:14 Google has become alphabet,

00:13:16 so the corporation is organized around these bets,

00:13:19 and these bets are in areas of fundamental importance

00:13:22 to the world, whether it’s artificial intelligence,

00:13:26 medical technology, self driving cars,

00:13:29 connectivity through balloons, on and on and on.

00:13:33 And there’s more coming and more coming.

00:13:35 So one way you could express this

00:13:38 is that the current business is successful enough

00:13:41 that we have the luxury of making bets.

00:13:44 And another one that you could say

00:13:45 is that we have the wisdom of being able to see

00:13:49 that a corporate structure needs to be created

00:13:51 to enhance the likelihood of the success of those bets.

00:13:55 So we essentially turned ourselves into a conglomerate

00:13:58 of bets and then this underlying corporation, Google,

00:14:02 which is itself innovative.

00:14:04 So in order to pull this off,

00:14:05 you have to have a bunch of belief systems,

00:14:08 and one of them is that you have to have

00:14:09 bottoms up and tops down.

00:14:11 The bottoms up we call 20% time,

00:14:13 and the idea is that people can spend 20% of their time

00:14:15 on whatever they want, and the top down

00:14:17 is that our founders in particular

00:14:19 have a keen eye on technology

00:14:21 and they’re reviewing things constantly.

00:14:23 So an example would be they’ll hear about an idea

00:14:26 or I’ll hear about something and it sounds interesting,

00:14:28 let’s go visit them.

00:14:30 And then let’s begin to assemble the pieces

00:14:33 to see if that’s possible.

00:14:34 And if you do this long enough,

00:14:35 you get pretty good at predicting what’s likely to work.

00:14:39 So that’s a beautiful balance that’s struck.

00:14:42 Is this something that applies at all scale?

00:14:44 It seems to be that Sergey, again, 15 years ago,

00:14:53 came up with a concept called 10% of the budget

00:14:56 should be on things that are unrelated.

00:14:58 It was called 70, 20, 10.

00:15:00 70% of our time on core business,

00:15:03 20% on adjacent business, and 10% on other.

00:15:06 And he proved mathematically,

00:15:08 of course he’s a brilliant mathematician,

00:15:10 that you needed that 10% to make the sum

00:15:13 of the growth work.

00:15:14 And it turns out he was right.

00:15:18 So getting into the world of artificial intelligence,

00:15:20 you’ve talked quite extensively and effectively

00:15:25 about the impact in the near term,

00:15:28 the positive impact of artificial intelligence,

00:15:32 whether it’s especially machine learning

00:15:34 in medical applications and education,

00:15:38 and just making information more accessible, right?

00:15:41 In the AI community, there is a kind of debate.

00:15:45 There’s this shroud of uncertainty

00:15:47 as we face this new world

00:15:49 with artificial intelligence in it.

00:15:50 And there’s some people, like Elon Musk,

00:15:54 you’ve disagreed, at least on the degree of emphasis

00:15:57 he places on the existential threat of AI.

00:16:00 So I’ve spoken with Stuart Russell,

00:16:02 Max Tegmark, who share Elon Musk’s view,

00:16:05 and Yoshua Bengio, Steven Pinker, who do not.

00:16:09 And so there’s a lot of very smart people

00:16:11 who are thinking about this stuff, disagreeing,

00:16:14 which is really healthy, of course.

00:16:17 So what do you think is the healthiest way

00:16:19 for the AI community to,

00:16:22 and really for the general public,

00:16:23 to think about AI and the concern

00:16:27 of the technology being mismanaged in some kind of way?

00:16:32 So the source of education for the general public

00:16:35 has been robot killer movies.

00:16:37 Right.

00:16:38 And Terminator, et cetera.

00:16:40 And the one thing I can assure you we’re not building

00:16:44 are those kinds of solutions.

00:16:46 Furthermore, if they were to show up,

00:16:48 someone would notice and unplug them, right?

00:16:51 So as exciting as those movies are,

00:16:53 and they’re great movies,

00:16:54 were the killer robots to start,

00:16:57 we would find a way to stop them, right?

00:17:00 So I’m not concerned about that.

00:17:04 And much of this has to do

00:17:05 with the timeframe of conversation.

00:17:08 So you can imagine a situation 100 years from now

00:17:13 when the human brain is fully understood

00:17:15 and the next generation and next generation

00:17:18 of brilliant MIT scientists have figured all this out,

00:17:20 we’re gonna have a large number of ethics questions, right?

00:17:25 Around science and thinking and robots and computers

00:17:28 and so forth and so on.

00:17:29 So it depends on the question of the timeframe.

00:17:32 In the next five to 10 years,

00:17:34 we’re not facing those questions.

00:17:37 What we’re facing in the next five to 10 years

00:17:39 is how do we spread this disruptive technology

00:17:42 as broadly as possible to gain the maximum benefit of it?

00:17:46 The primary benefits should be in healthcare

00:17:48 and in education.

00:17:50 Healthcare because it’s obvious.

00:17:52 We’re all the same even though we somehow believe we’re not.

00:17:55 As a medical matter,

00:17:57 the fact that we have big data about our health

00:17:59 will save lives, allow us to deal with skin cancer

00:18:02 and other cancers, ophthalmological problems.

00:18:05 There’s people working on psychological diseases

00:18:08 and so forth using these techniques.

00:18:10 I can go on and on.

00:18:11 The promise of AI in medicine is extraordinary.

00:18:15 There are many, many companies and startups

00:18:17 and funds and solutions

00:18:19 and we will all live much better for that.

00:18:22 The same argument in education.

00:18:25 Can you imagine that for each generation of child

00:18:28 and even adult, you have a tutor educator that’s AI based,

00:18:33 that’s not a human but is properly trained,

00:18:35 that helps you get smarter,

00:18:37 helps you address your language difficulties

00:18:39 or your math difficulties or what have you.

00:18:41 Why don’t we focus on those two?

00:18:43 The gains societally of making humans smarter and healthier

00:18:47 are enormous and those translate for decades and decades

00:18:51 and we’ll all benefit from them.

00:18:53 There are people who are working on AI safety,

00:18:56 which is the issue that you’re describing

00:18:58 and there are conversations in the community

00:19:00 that should there be such problems,

00:19:02 what should the rules be like?

00:19:04 Google, for example, has announced its policies

00:19:07 with respect to AI safety, which I certainly support

00:19:10 and I think most everybody would support

00:19:12 and they make sense, right?

00:19:14 So it helps guide the research

00:19:16 but the killer robots are not arriving this year

00:19:19 and they’re not even being built.

00:19:22 And on that line of thinking, you said the time scale.

00:19:26 In this topic or other topics,

00:19:30 have you found it useful on the business side

00:19:34 or the intellectual side to think beyond five, 10 years,

00:19:37 to think 50 years out?

00:19:39 Has it ever been useful or productive?

00:19:41 In our industry, there are essentially no examples

00:19:45 of 50 year predictions that have been correct.

00:19:48 Let’s review AI, right?

00:19:50 AI, which was largely invented here at MIT

00:19:53 and a couple of other universities in 1956, 1957,

00:19:56 1958, the original claims were a decade or two.

00:20:01 And when I was a PhD student, I studied AI a bit

00:20:05 and it entered, during my time looking at it,

00:20:07 a period which is known as the AI winter,

00:20:10 which went on for about 30 years,

00:20:12 which is a whole generation

00:20:14 of scientists and a whole group of people

00:20:16 who didn’t make a lot of progress

00:20:18 because the algorithms had not improved

00:20:20 and the computers had not improved.

00:20:22 It took some brilliant mathematicians

00:20:23 starting with a fellow named Geoff Hinton

00:20:25 at Toronto and Montreal who basically invented

00:20:29 this deep learning model which empowers us today.

00:20:33 The seminal work there was 20 years ago

00:20:36 and in the last 10 years, it’s become popularized.

00:20:39 So think about the timeframes for that level of discovery.

00:20:43 It’s very hard to predict.

00:20:45 Many people think that we’ll be flying around

00:20:47 in the equivalent of flying cars, who knows?

00:20:51 My own view, if I wanna go out on a limb,

00:20:54 is to say that we know a couple of things

00:20:56 about 50 years from now.

00:20:57 We know that there’ll be more people alive.

00:21:00 We know that we’ll have to have platforms

00:21:02 that are more sustainable because the earth is limited

00:21:05 in the ways we all know and that the kind of platforms

00:21:09 that are gonna get built will be consistent

00:21:11 with the principles that I’ve described.

00:21:13 They will be much more empowering of individuals.

00:21:15 They’ll be much more sensitive to the ecology

00:21:17 because they have to be, they just have to be.

00:21:20 I also think that humans are gonna be a great deal smarter

00:21:23 and I think they’re gonna be a lot smarter

00:21:25 because of the tools that I’ve discussed with you

00:21:27 and of course, people will live longer.

00:21:29 Life extension is continuing apace.

00:21:32 A baby born today has a reasonable chance

00:21:34 of living to 100, which is pretty exciting.

00:21:37 It’s well past the 21st century,

00:21:38 so we better take care of them.

00:21:40 And you mentioned an interesting statistic

00:21:42 on some very large percentage, 60, 70% of people

00:21:46 may live in cities.

00:21:48 Today, more than half the world lives in cities

00:21:50 and one of the great stories of humanity

00:21:53 in the last 20 years has been the rural to urban migration.

00:21:57 This has occurred in the United States,

00:21:59 it’s occurred in Europe, it’s occurring in Asia

00:22:02 and it’s occurring in Africa.

00:22:04 When people move to cities, the cities get more crowded,

00:22:07 but believe it or not, their health gets better,

00:22:10 their productivity gets better,

00:22:12 their IQ and educational capabilities improve.

00:22:15 So it’s good news that people are moving to cities,

00:22:18 but we have to make them livable and safe.

00:22:20 So you, first of all, you are,

00:22:25 but you’ve also worked with some of the greatest leaders

00:22:28 in the history of tech.

00:22:29 What insights do you draw from the difference

00:22:32 in leadership styles of yourself,

00:22:35 Steve Jobs, Elon Musk, Larry Page,

00:22:39 now the new CEO, Sundar Pichai, and others?

00:22:42 From the, I would say, calm sages to the mad geniuses.

00:22:47 One of the things that I learned as a young executive

00:22:50 is that there’s no single formula for leadership.

00:22:54 They try to teach one, but that’s not how it really works.

00:22:58 There are people who just understand what they need to do

00:23:01 and they need to do it quickly.

00:23:02 Those people are often entrepreneurs.

00:23:05 They just know and they move fast.

00:23:07 There are other people who are systems thinkers

00:23:09 and planners, that’s more who I am,

00:23:11 somewhat more conservative, more thorough in execution,

00:23:15 a little bit more risk averse.

00:23:18 There’s also people who are sort of slightly insane,

00:23:22 in the sense that they are emphatic and charismatic

00:23:26 and they feel it and they drive it and so forth.

00:23:28 There’s no single formula to success.

00:23:31 There is one thing that unifies all of the people

00:23:33 that you named, which is very high intelligence.

00:23:36 At the end of the day, the thing that characterizes

00:23:40 all of them is that they saw the world quicker, faster,

00:23:43 they processed information faster.

00:23:45 They didn’t necessarily make the right decisions

00:23:47 all the time, but they were on top of it.

00:23:49 And the other thing that’s interesting

00:23:51 about all those people is they all started young.

00:23:54 So think about Steve Jobs starting Apple

00:23:56 roughly at 18 or 19.

00:23:58 Think about Bill Gates starting at roughly 20, 21.

00:24:01 Think about by the time they were 30,

00:24:03 Mark Zuckerberg, a good example, at 19, 20.

00:24:06 By the time they were 30, they had 10 years.

00:24:10 At 30 years old, they had 10 years of experience

00:24:13 of dealing with people and products and shipments

00:24:16 and the press and business and so forth.

00:24:19 It’s incredible how much experience they had

00:24:22 compared to the rest of us who were busy getting our PhDs.

00:24:25 Yes, exactly.

00:24:26 So we should celebrate these people

00:24:28 because they’ve just had more life experience, right?

00:24:32 And that helps inform the judgment.

00:24:34 At the end of the day, when you’re at the top

00:24:38 of these organizations, all the easy questions

00:24:41 have been dealt with, right?

00:24:43 How should we design the buildings?

00:24:45 Where should we put the colors on our product?

00:24:48 What should the box look like, right?

00:24:51 The problems, that’s why it’s so interesting

00:24:53 to be in these rooms, the problems that they face, right,

00:24:56 in terms of the way they operate,

00:24:58 the way they deal with their employees,

00:25:00 their customers, their innovation,

00:25:01 are profoundly challenging.

00:25:03 Each of the companies is demonstrably different culturally.

00:25:09 They are not, in fact, cut from the same cloth.

00:25:11 They behave differently based on input.

00:25:14 Their internal cultures are different.

00:25:15 Their compensation schemes are different.

00:25:17 Their values are different.

00:25:19 So there’s proof that diversity works.

00:25:24 So, so when faced with a tough decision,

00:25:29 in need of advice, it’s been said that the best thing

00:25:33 one can do is to find the best person in the world

00:25:36 who can give that advice and find a way to be

00:25:40 in a room with them, one on one and ask.

00:25:44 So here we are, and let me ask in a long winded way,

00:25:48 I wrote this down.

00:25:50 In 1998, there were many good search engines,

00:25:53 Lycos, Excite, AltaVista, Infoseek, Ask Jeeves maybe,

00:25:59 Yahoo even.

00:26:01 So Google stepped in and disrupted everything.

00:26:04 They disrupted the nature of search,

00:26:06 the nature of our access to information,

00:26:08 the way we discover new knowledge.

00:26:11 So now it’s 2018, actually 20 years later.

00:26:16 There are many good personal AI assistants,

00:26:18 including, of course, the best from Google.

00:26:22 So you’ve spoken about, in medicine and education,

00:26:25 the impact such an AI assistant could bring.

00:26:28 So we arrive at this question.

00:26:30 So it’s a personal one for me,

00:26:32 but I hope my situation represents that of many other,

00:26:36 as we said, dreamers and the crazy engineers.

00:26:40 So my whole life, I’ve dreamed of creating

00:26:43 such an AI assistant.

00:26:45 Every step I’ve taken has been towards that goal.

00:26:48 Now I’m a research scientist in human centered AI

00:26:51 here at MIT.

00:26:52 So the next step for me as I sit here,

00:26:54 facing my passion, is to do what Larry and Sergey did

00:26:59 in ’98, this simple startup.

00:27:04 And so here’s my simple question.

00:27:06 Given the low odds of success, the timing and luck required,

00:27:10 the countless other factors that can’t be controlled

00:27:12 or predicted, which is all the things

00:27:14 that Larry and Sergey faced,

00:27:16 is there some calculation, some strategy

00:27:20 to follow in this step?

00:27:21 Or do you simply follow the passion

00:27:23 just because there’s no other choice?

00:27:26 I think the people who are in universities

00:27:29 are always trying to study

00:27:31 the extraordinarily chaotic nature of innovation

00:27:35 and entrepreneurship.

00:27:37 My answer is that they didn’t have that conversation.

00:27:41 They just did it.

00:27:42 They sensed a moment when in the case of Google,

00:27:47 there was all of this data that needed to be organized

00:27:49 and they had a better algorithm.

00:27:51 They had invented a better way.

00:27:53 So today with human centered AI,

00:27:56 which is your area of research,

00:27:58 there must be new approaches.

00:28:00 It’s such a big field.

00:28:02 There must be new approaches,

00:28:04 different from what we and others are doing.

00:28:07 There must be startups to fund.

00:28:09 There must be research projects to try.

00:28:11 There must be graduate students to work on new approaches.

00:28:15 Here at MIT, there are people who are looking at learning

00:28:18 from the standpoint of looking at child learning.

00:28:20 How do children learn starting at age one and two?

00:28:23 And the work is fantastic.

00:28:25 Those approaches are different from the approach

00:28:28 that most people are taking.

00:28:29 Perhaps that’s a bet that you should make

00:28:31 or perhaps there’s another one.

00:28:33 But at the end of the day,

00:28:35 the successful entrepreneurs are not as crazy as they sound.

00:28:40 They see an opportunity based on what’s happened.

00:28:43 Let’s use Uber as an example.

00:28:45 As Travis sells the story,

00:28:46 he and his co founder were sitting in Paris

00:28:48 and they had this idea because they couldn’t get a cab.

00:28:52 And they said, we have smartphones and the rest is history.

00:28:56 So what’s the equivalent of that Travis Eiffel Tower,

00:29:00 “where is a cab?” moment that you could,

00:29:03 as an entrepreneur, take advantage of?

00:29:05 Whether it’s in human centered AI or something else.

00:29:08 That’s the next great startup.

00:29:11 And the psychology of that moment.

00:29:13 So when Sergey and Larry talk about it,

00:29:17 and I’ve listened to a few interviews, it’s very nonchalant.

00:29:20 Well, here’s this very fascinating web data,

00:29:23 and here’s an algorithm we have,

00:29:27 and we just kind of want to play around with that data.

00:29:29 And it seems like that’s a really nice way

00:29:31 to organize this data.

00:29:34 What’s important to remember

00:29:35 is that they were graduate students at Stanford

00:29:38 and they thought this was interesting.

00:29:39 So they built a search engine

00:29:40 and they kept it in their room.

00:29:43 And they had to get power from the room next door

00:29:46 because they were using too much power in the room.

00:29:48 So they ran an extension cord over, right?

00:29:51 And then they went and they found a house

00:29:53 and they had Google world headquarters of five people,

00:29:56 right, to start the company.

00:29:57 And they raised $100,000 from Andy Bechtolsheim,

00:30:00 who was a Sun cofounder, to do this,

00:30:02 and Dave Cheriton and a few others.

00:30:04 The point is their beginnings were very simple

00:30:08 but they were based on a powerful insight.

00:30:11 That is a replicable model for any startup.

00:30:14 It has to be a powerful insight.

00:30:16 The beginnings are simple.

00:30:17 And there has to be an innovation.

00:30:19 In Larry and Sergey’s case, it was PageRank,

00:30:22 which was a brilliant idea,

00:30:23 one of the most cited papers in the world today.

00:30:26 What’s the next one?

00:30:29 So you’re one of, if I may say,

00:30:33 the richest people in the world.

00:30:36 And yet it seems that money is simply a side effect

00:30:38 of your passions and not an inherent goal.

00:30:42 But you’re a fascinating person to ask.

00:30:48 So much of our society at the individual level

00:30:51 and at the company level and as nations

00:30:55 is driven by the desire for wealth.

00:30:58 What do you think about this drive?

00:31:01 And what have you learned about,

00:31:03 if I may romanticize the notion,

00:31:05 the meaning of life,

00:31:06 having achieved success on so many dimensions?

00:31:10 There have been many studies of human happiness

00:31:13 and above some threshold,

00:31:16 which is typically relatively low for this conversation,

00:31:19 there’s no difference in happiness with respect to money.

00:31:23 The happiness is correlated with meaning and purpose,

00:31:27 a sense of family, a sense of impact.

00:31:30 So if you organize your life,

00:31:31 assuming you have enough to get around

00:31:33 and have a nice home and so forth,

00:31:35 you’ll be far happier if you figure out

00:31:38 what you care about and work on that.

00:31:41 It’s often being in service to others.

00:31:44 There’s a great deal of evidence that people are happiest

00:31:46 when they’re serving others and not themselves.

00:31:49 This goes directly against the sort of

00:31:52 press-induced excitement about

00:31:56 powerful and wealthy leaders of one kind or another.

00:31:59 And indeed these are consequential people.

00:32:01 But if you are in a situation

00:32:03 where you’ve been very fortunate as I have,

00:32:06 you also have to take that as a responsibility

00:32:09 and you have to basically work both to educate others

00:32:12 and give them that opportunity,

00:32:13 but also use that wealth to advance human society.

00:32:16 In my case, I’m particularly interested in

00:32:18 using the tools of artificial intelligence

00:32:20 and machine learning to make society better.

00:32:22 I’ve mentioned education, I’ve mentioned inequality

00:32:26 and the middle class and things like this,

00:32:28 all of which are a passion of mine.

00:32:30 It doesn’t matter what you do,

00:32:31 it matters that you believe in it,

00:32:33 that it’s important to you,

00:32:35 and that your life will be far more satisfying

00:32:38 if you spend your life doing that.

00:32:40 I think there’s no better place to end

00:32:43 than a discussion of the meaning of life.

00:32:45 Eric, thank you so much.