Transcript
00:00:00 The following is a conversation with Erik Brynjolfsson.
00:00:03 He’s an economics professor at Stanford
00:00:05 and the director of Stanford’s Digital Economy Lab.
00:00:09 Previously, he was a long, long time professor at MIT
00:00:13 where he did groundbreaking work
00:00:15 on the economics of information.
00:00:17 He’s the author of many books,
00:00:19 including The Second Machine Age
00:00:21 and Machine Platform Crowd,
00:00:24 coauthored with Andrew McAfee.
00:00:27 Quick mention of each sponsor,
00:00:29 followed by some thoughts related to the episode.
00:00:31 Ventura Watches, the maker of classy,
00:00:34 well performing watches.
00:00:35 Four Sigmatic, the maker of delicious mushroom coffee.
00:00:39 ExpressVPN, the VPN I’ve used for many years
00:00:42 to protect my privacy on the internet.
00:00:44 And CashApp, the app I use to send money to friends.
00:00:48 Please check out these sponsors in the description
00:00:50 to get a discount and to support this podcast.
00:00:54 As a side note, let me say that the impact
00:00:56 of artificial intelligence and automation
00:00:59 on our economy and our world
00:01:01 is something worth thinking deeply about.
00:01:04 Like with many topics that are linked
00:01:06 to predicting the future evolution of technology,
00:01:09 it is often too easy to fall into one of two camps.
00:01:12 The fear mongering camp
00:01:14 or the technological utopianism camp.
00:01:18 As always, the future will land us somewhere in between.
00:01:21 I prefer to wear two hats in these discussions
00:01:24 and alternate between them often.
00:01:26 The hat of a pragmatic engineer
00:01:29 and the hat of a futurist.
00:01:31 This is probably a good time to mention Andrew Yang,
00:01:34 the presidential candidate who has been
00:01:37 one of the high profile thinkers on this topic.
00:01:41 And I’m sure I will speak with him
00:01:42 on this podcast eventually.
00:01:44 A conversation with Andrew has been on the table many times.
00:01:48 Our schedules just haven’t aligned,
00:01:50 especially because I have a strongly held preference
00:01:54 for long form, two, three, four hours or more,
00:01:58 and in person.
00:02:00 I work hard to not compromise on this.
00:02:02 Trust me, it’s not easy.
00:02:04 Even more so in the times of COVID,
00:02:07 which requires getting tested nonstop,
00:02:09 staying isolated and doing a lot of costly
00:02:12 and uncomfortable things that minimize risk for the guest.
00:02:15 The reason I do this is because to me,
00:02:17 something is lost in remote conversation.
00:02:20 That something, that magic,
00:02:23 I think is worth the effort,
00:02:25 even if it ultimately leads to a failed conversation.
00:02:29 This is how I approach life,
00:02:31 treasuring the possibility of a rare moment of magic.
00:02:35 I’m willing to go to the ends of the world
00:02:38 for just such a moment.
00:02:40 If you enjoy this thing, subscribe on YouTube,
00:02:43 review it with five stars on Apple Podcast,
00:02:45 follow on Spotify, support on Patreon,
00:02:47 connect with me on Twitter at Lex Fridman.
00:02:51 And now here’s my conversation with Erik Brynjolfsson.
00:02:56 You posted a quote on Twitter by Albert Bartlett
00:02:59 saying that the greatest shortcoming of the human race
00:03:03 is our inability to understand the exponential function.
00:03:07 Why would you say the exponential growth
00:03:09 is important to understand?
00:03:12 Yeah, that quote, I remember posting that.
00:03:15 It’s actually a reprise of something Andy McAfee and I said
00:03:19 in The Second Machine Age,
00:03:19 but I posted it in early March
00:03:21 when COVID was really just beginning to take off
00:03:23 and I was really scared.
00:03:25 There were actually only a couple dozen cases,
00:03:28 maybe less at that time,
00:03:29 but they were doubling every like two or three days
00:03:32 and I could see, oh my God, this is gonna be a catastrophe
00:03:35 and it’s gonna happen soon,
00:03:36 but nobody was taking it very seriously
00:03:38 or not a lot of people were taking it very seriously.
00:03:40 In fact, I remember I did my last in person conference
00:03:45 that week, I was flying back from Las Vegas
00:03:47 and I was the only person on the plane wearing a mask
00:03:50 and the flight attendant came over to me.
00:03:52 She looked very concerned.
00:03:53 She kind of put her hands on my shoulder.
00:03:54 She was touching me all over, which I wasn’t thrilled about
00:03:56 and she goes, do you have some kind of anxiety disorder?
00:03:59 Are you okay?
00:04:00 And I was like, no, it’s because of COVID.
00:04:02 This is early March.
00:04:03 Early March, but I was worried
00:04:06 because I knew I could see or I suspected, I guess,
00:04:10 that that doubling would continue and it did
00:04:13 and pretty soon we had thousands of times more cases.
00:04:17 Most of the time when I use that quote,
00:03:18 it’s motivated by more optimistic things
00:04:21 like Moore’s law and the wonders
00:04:23 of having more computer power,
00:04:25 but in either case, it can be very counterintuitive.
00:04:28 I mean, if you walk for 10 minutes,
00:04:31 you get about 10 times as far away
00:04:32 as if you walk for one minute.
00:04:34 That’s the way our physical world works.
00:04:35 That’s the way our brains are wired,
00:04:38 but if something doubles for 10 times as long,
00:04:41 you don’t get 10 times as much.
00:04:43 You get a thousand times as much
00:04:45 and after 20 doublings, it’s a million.
00:04:50 After 30, it’s a billion.
00:04:53 And pretty soon after that,
00:04:54 it just gets to these numbers that you can barely grasp.
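The arithmetic behind this is easy to check; here is a minimal Python sketch of what repeated doubling does, using the 10, 20, and 30 doublings mentioned above:

```python
# Walking 10x as long gets you ~10x as far (linear), but doubling
# 10x as many times gets you ~1000x as much (exponential).
for doublings in (10, 20, 30):
    print(f"after {doublings} doublings: x{2 ** doublings:,}")
# after 10 doublings: x1,024           (~a thousand)
# after 20 doublings: x1,048,576       (~a million)
# after 30 doublings: x1,073,741,824   (~a billion)
```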
00:04:57 Our world is becoming more and more exponential,
00:05:00 mainly because of digital technologies.
00:05:03 So more and more often our intuitions are out of whack
00:05:06 and that can be good in the case of things creating wonders,
00:05:10 but it can be dangerous in the case of viruses
00:05:13 and other things.
00:05:14 Do you think it generally applies,
00:05:16 like is there spaces where it does apply
00:05:18 and where it doesn’t?
00:05:19 How are we supposed to build an intuition
00:05:21 about in which aspects of our society
00:05:25 does exponential growth apply?
00:05:27 Well, you can learn the math,
00:05:29 but the truth is our brains, I think,
00:05:32 tend to learn more from experiences.
00:05:35 So we just start seeing it more and more often.
00:05:37 So hanging around Silicon Valley,
00:05:39 hanging around AI and computer researchers,
00:05:41 I see this kind of exponential growth a lot more frequently
00:05:44 and I’m getting used to it, but I still make mistakes.
00:05:46 I still underestimate some of the progress
00:05:48 in just talking to someone about GPT-3
00:05:50 and how rapidly natural language has improved.
00:05:54 But I think that as the world becomes more exponential,
00:05:58 we’ll all start experiencing it more frequently.
00:06:01 The danger is that we may make some mistakes in the meantime
00:06:05 using our old kind of caveman intuitions
00:06:07 about how the world works.
00:06:09 Well, the weird thing is it always kind of looks linear
00:06:11 in the moment.
00:06:12 Like it’s hard to feel,
00:06:16 it’s hard to like introspect
00:06:19 and really acknowledge how much has changed
00:06:22 in just a couple of years or five years or 10 years
00:06:26 with the internet.
00:06:27 If we just look at advancements of AI
00:06:29 or even just social media,
00:06:31 all the various technologies
00:06:33 that go into the digital umbrella,
00:06:36 it feels pretty calm and normal and gradual.
00:06:39 Well, a lot of stuff,
00:06:40 I think there are parts of the world,
00:06:42 most of the world that is not exponential.
00:06:45 The way humans learn,
00:06:47 the way organizations change,
00:06:49 the way our whole institutions adapt and evolve,
00:06:52 those don’t improve at exponential paces.
00:06:54 And that leads to a mismatch oftentimes
00:06:56 between these exponentially improving technologies
00:06:58 or let’s say changing technologies
00:07:00 because some of them are exponentially more dangerous
00:07:03 and our intuitions and our human skills
00:07:06 and our institutions that just don’t change very fast at all.
00:07:11 And that mismatch I think is at the root
00:07:13 of a lot of the problems in our society,
00:07:15 the growing inequality
00:07:18 and other dysfunctions in our political
00:07:22 and economic systems.
00:07:24 So one guy that talks about exponential functions
00:07:28 a lot is Elon Musk.
00:07:29 He seems to internalize this kind of way
00:07:32 of exponential thinking.
00:07:34 He calls it first principles thinking,
00:07:36 sort of the kind of going to the basics,
00:07:39 asking the question,
00:07:41 like what were the assumptions of the past?
00:07:43 How can we throw them out the window?
00:07:46 How can we do this 10X much more efficiently
00:07:49 and constantly practicing that process?
00:07:51 And also using that kind of thinking
00:07:54 to estimate sort of when, you know, create deadlines
00:08:01 and estimate when you’ll be able to deliver
00:08:04 on some of these technologies.
00:08:06 Now, it often gets him in trouble
00:08:09 because he overestimates,
00:08:12 like he doesn’t meet the initial estimates of the deadlines,
00:08:17 but he seems to deliver late but deliver.
00:08:22 And which is kind of interesting.
00:08:25 Like, what are your thoughts about this whole thing?
00:08:26 I think we can all learn from Elon.
00:08:28 I think going to first principles,
00:08:30 I talked about two ways of getting more of a grip
00:08:32 on the exponential function.
00:08:34 And one of them just comes from first principles.
00:08:36 You know, if you understand the math of it,
00:08:37 you can see what’s gonna happen.
00:08:39 And even if it seems counterintuitive
00:08:41 that a couple dozen COVID cases
00:08:42 can become thousands or tens or hundreds of thousands
00:08:46 of them in a month,
00:08:48 it makes sense once you just do the math.
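Doing that math takes only a few lines; a quick Python sanity check using roughly the figures he describes earlier (a couple dozen cases doubling every two to three days; the exact numbers are illustrative):

```python
cases, doubling_days = 25, 2.5  # ~a couple dozen cases, doubling every 2-3 days
for day in (10, 20, 30):
    print(f"day {day}: ~{int(cases * 2 ** (day / doubling_days)):,} cases")
# day 10: ~400 cases
# day 20: ~6,400 cases
# day 30: ~102,400 cases -- thousands of times the starting count in a month
```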
00:08:51 And I think Elon tries to do that a lot.
00:08:53 You know, in fairness, I think he also benefits
00:08:55 from hanging out in Silicon Valley
00:08:56 and he’s experienced it in a lot of different applications.
00:09:00 So, you know, it’s not as much of a shock to him anymore,
00:09:04 but that’s something we can all learn from.
00:09:07 In my own life, I remember one of my first experiences
00:09:10 really seeing it was when I was a grad student
00:09:12 and my advisor asked me to plot the growth of computer power
00:09:17 in the US economy in different industries.
00:09:20 And there are all these, you know,
00:09:21 exponentially growing curves.
00:09:23 And I was like, holy shit, look at this.
00:09:24 In each industry, it was just taking off.
00:09:26 And, you know, you didn’t have to be a rocket scientist
00:09:29 to extend that and say, wow,
00:09:30 this means that this was in the late 80s and early 90s
00:09:33 that, you know, if it goes anything like that,
00:09:35 we’re gonna have orders of magnitude more computer power
00:09:38 than we did at that time.
00:09:39 And of course we do.
00:09:41 So, you know, when people look at Moore’s law,
00:09:45 they often talk about it as just,
00:09:46 so the exponential function is actually
00:09:49 a stack of S curves.
00:09:51 So basically you milk, or whatever,
00:09:57 take the most advantage of a particular little revolution
00:10:01 and then you search for another revolution.
00:10:03 And it’s basically revolutions stacked on top of revolutions.
00:10:06 Do you have any intuition about how the heck humans
00:10:08 keep finding ways to revolutionize things?
00:10:12 Well, first, let me just unpack that first point
00:10:14 that I talked about exponential curves,
00:10:17 but no exponential curve continues forever.
00:10:21 It’s been said that if anything can’t go on forever,
00:10:24 eventually it will stop.
00:10:26 It sounds very profound, but it seems
00:10:29 that a lot of people don’t appreciate
00:10:32 that half of it either.
00:10:33 And that’s why all exponential functions eventually turn
00:10:36 into some kind of S curve or stop in some other way,
00:10:39 maybe catastrophically.
00:10:41 And that happened with COVID as well.
00:10:42 I mean, it was, it went up and then it sort of, you know,
00:10:44 at some point it starts saturating the pool of people
00:10:47 to be infected.
00:10:49 There’s a standard epidemiological model
00:10:51 that’s based on that.
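For readers who want the shape of that dynamic, here is a minimal sketch of the classic SIR model in Python; the parameter values are illustrative, not fitted to COVID:

```python
# Classic SIR model: near-exponential growth early on, then the curve
# bends into an S shape as the susceptible pool is depleted.
def sir(n=1_000_000, i0=25, beta=0.4, gamma=0.1, days=180):
    s, i, r = n - i0, i0, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i / n  # contacts between S and I
        new_recoveries = gamma * i         # infected recovering this step
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if day % 30 == 0:
            print(f"day {day:3d}: currently infected ~{int(i):,}")

sir()
```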
00:10:52 And it’s beginning to happen with Moore’s law
00:10:55 or different generations of computer power.
00:10:56 It happens with all exponential curves.
00:10:59 The remarkable thing, as you allude to in
00:11:01 the second part of your question, is that we’ve been able
00:11:03 to come up with a new S curve on top of the previous one
00:11:06 and do that generation after generation with new materials,
00:11:10 new processes, and just extend it further and further.
00:11:15 I don’t think anyone has a really good theory
00:11:17 about why we’ve been so successful in doing that.
00:11:21 It’s great that we have been,
00:11:23 and I hope it continues for some time,
00:11:26 but it’s, you know, one beginning of a theory
00:11:31 is that there’s huge incentives when other parts
00:11:34 of the system are going on that clock speed
00:11:36 of doubling every two to three years.
00:11:39 If there’s one component of it that’s not keeping up,
00:11:42 then the economic incentives become really large
00:11:44 to improve that one part.
00:11:46 It becomes a bottleneck and anyone who can do improvements
00:11:49 in that part can reap huge returns
00:11:51 so that the resources automatically get focused
00:11:54 on whatever part of the system isn’t keeping up.
00:11:56 Do you think some version of the Moore’s law will continue?
00:11:59 Some version, yes, it is.
00:12:01 I mean, one version that has become more important
00:12:04 is something I call Coomey’s law,
00:12:06 which is named after John Coomey,
00:12:08 who I should mention was also my college roommate,
00:12:10 but he identified the fact that the energy needed for a given amount
00:12:14 of computation has been declining by a factor of two roughly every year and a half.
00:12:17 And for most of us, that’s more important.
00:12:18 The new iPhones came out today as we’re recording this.
00:12:21 I’m not sure when you’re gonna make it available.
00:12:23 Very soon after this, yeah.
00:12:24 And for most of us, having the iPhone be twice as fast,
00:12:30 it’s nice, but having the battery life be longer,
00:12:33 that would be much more valuable.
00:12:35 And the fact that a lot of the progress in chips now
00:12:38 is reducing energy consumption is probably more important
00:12:42 for many applications than just the raw speed.
00:12:46 Other dimensions of Moore’s law
00:12:47 are in AI and machine learning.
00:12:51 Those tend to be very parallelizable functions,
00:12:55 especially deep neural nets.
00:12:58 And so instead of having one chip,
00:13:01 you can have multiple chips or you can have a GPU,
00:13:05 a graphics processing unit, that goes faster.
00:13:07 Now there are special chips designed for machine learning,
00:13:09 like tensor processing units, and
00:13:11 each time you switch, there’s another 10X
00:13:13 or 100X improvement above and beyond Moore’s law.
00:13:16 So I think that the raw silicon
00:13:18 isn’t improving as much as it used to,
00:13:20 but these other dimensions are becoming important,
00:13:23 more important, and we’re seeing progress in them.
00:13:26 I don’t know if you’ve seen the work by OpenAI
00:13:28 where they show the exponential improvement
00:13:31 of the training of neural networks
00:13:34 just literally in the techniques used.
00:13:36 So that’s almost like the algorithmic side of it.
00:13:40 It’s fascinating to think, can that actually continue?
00:13:43 Can we keep figuring out more and more tricks
00:13:45 to train networks faster and faster?
00:13:47 The progress has been staggering.
00:13:49 If you look at image recognition, as you mentioned,
00:13:51 I think it’s a function of at least three things
00:13:53 that are coming together.
00:13:54 One, we just talked about faster chips,
00:13:56 not just Moore’s law, but GPUs, TPUs and other technologies.
00:14:00 The second is just a lot more data.
00:14:02 I mean, we are awash in digital data today
00:14:05 in a way we weren’t 20 years ago.
00:14:08 Photography, I’m old enough to remember,
00:14:09 it used to be chemical, and now everything is digital.
00:14:12 I took probably 50 digital photos yesterday.
00:14:16 I wouldn’t have done that if it was chemical.
00:14:17 And we have the internet of things
00:14:20 and all sorts of other types of data.
00:14:22 When we walk around with our phone,
00:14:24 it’s just broadcasting huge amounts of digital data
00:14:27 that can be used as training sets.
00:14:29 And then last but not least, as you mentioned, at OpenAI,
00:14:34 there’ve been significant improvements in the techniques.
00:14:37 The core idea of deep neural nets
00:14:39 has been around for a few decades,
00:14:41 but the advances in making it work more efficiently
00:14:44 have also improved a couple of orders of magnitude or more.
00:14:48 So you multiply together,
00:14:49 a hundred fold improvement in computer power,
00:14:52 a hundred fold or more improvement in data,
00:14:55 a hundred fold improvement in techniques
00:14:59 of software and algorithms,
00:15:00 and soon you’re getting into a million fold improvements.
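The multiplication itself is trivial to verify; the hundred-fold factors below are his rough orders of magnitude, not measurements:

```python
compute, data, algorithms = 100, 100, 100  # rough order-of-magnitude factors
print(f"combined improvement: {compute * data * algorithms:,}x")
# combined improvement: 1,000,000x
```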
00:15:03 So somebody brought this up, this idea with GPT-3,
00:15:09 that it’s trained in a self supervised way
00:15:11 on basically internet data.
00:15:15 And that’s one of the, I’ve seen arguments made
00:15:18 and they seem to be pretty convincing
00:15:21 that the bottleneck there is going to be
00:15:23 how much data there is on the internet,
00:15:25 which is a fascinating idea that it literally
00:15:29 will just run out of human generated data to train on.
00:15:33 Right, once we make it to the point where it’s consumed
00:15:35 basically all of human knowledge
00:15:37 or all digitized human knowledge, yeah.
00:15:39 And that will be the bottleneck.
00:15:40 But the interesting thing with bottlenecks
00:15:44 is people often use bottlenecks
00:15:47 as a way to argue against exponential growth.
00:15:49 They say, well, there’s no way
00:15:51 you can overcome this bottleneck,
00:15:53 but we seem to somehow keep coming up with new ways
00:15:56 to overcome whatever bottlenecks
00:15:59 the critics come up with, which is fascinating.
00:16:01 I don’t know how you overcome the data bottleneck,
00:16:04 but probably more efficient training algorithms.
00:16:07 Yeah, well, you already mentioned that,
00:16:08 that these training algorithms are getting much better
00:16:10 at using smaller amounts of data.
00:16:12 We also are just capturing a lot more data than we used to,
00:16:15 especially in China, but all around us.
00:16:18 So those are both important.
00:16:20 In some applications, you can simulate the data,
00:16:24 video games, some of the self driving car systems
00:16:28 are simulating driving, and of course,
00:16:32 that has some risks and weaknesses,
00:16:34 but you can also, if you want to exhaust
00:16:38 all the different ways you could beat a video game,
00:16:39 you could just simulate all the options.
00:16:42 Can we take a step in that direction of autonomous vehicles next?
00:16:44 I’m talking to the CTO of Waymo tomorrow.
00:16:48 And obviously, I’m talking to Elon again in a couple of weeks.
00:16:53 What are your thoughts on autonomous vehicles?
00:16:57 Like where do we stand as a problem
00:17:01 that has the potential of revolutionizing the world?
00:17:04 Well, I’m really excited about that,
00:17:06 but it’s become much clearer
00:17:10 that the original way that I thought about it,
00:17:11 the way most people thought about it, like,
00:17:11 you know, will we have a self driving car or not,
00:17:13 is way too simple.
00:17:15 The better way to think about it
00:17:17 is that there’s a whole continuum
00:17:19 of how much driving and assisting the car can do.
00:17:22 I noticed that you’re right next door
00:17:24 to the Toyota Research Institute.
00:17:25 That is a total accident.
00:17:27 I love the TRI folks, but yeah.
00:17:29 Have you talked to Gill Pratt?
00:17:30 Yeah, we’re supposed to talk.
00:17:34 It’s kind of hilarious.
00:17:34 So there’s kind of,
00:17:35 I think, a good counterpart there to what Elon is doing.
00:17:38 And hopefully they can be frank
00:17:40 in what they think about each other,
00:17:41 because I’ve heard both of them talk about it.
00:17:43 But they’re much more, you know,
00:17:45 this is an assistive, a guardian angel
00:17:47 that watches over you as opposed to trying to do everything.
00:17:50 I think there’s some things like driving on a highway,
00:17:53 you know, from LA to Phoenix,
00:17:55 where it’s mostly good weather, straight roads.
00:17:58 That’s close to a solved problem, let’s face it.
00:18:01 In other situations, you know,
00:18:02 driving through the snow in Boston
00:18:04 where the roads are kind of crazy.
00:18:06 And most importantly, you have to make a lot of judgments
00:18:08 about what the other driver is gonna do
00:18:09 at these intersections that aren’t really right angles
00:18:11 and aren’t very well described.
00:18:13 It’s more like game theory.
00:18:15 That’s a much harder problem
00:18:17 and requires understanding human motivations.
00:18:22 So there’s a continuum there of some places
00:18:24 where the cars will work very well
00:18:27 and others where it could probably take decades.
00:18:30 What do you think about Waymo?
00:18:33 So you mentioned two companies
00:18:36 that actually have cars on the road.
00:18:38 There’s the Waymo approach that it’s more like
00:18:40 we’re not going to release anything until it’s perfect
00:18:42 and we’re gonna be very strict
00:18:45 about the streets that we travel on,
00:18:47 but it better be perfect.
00:18:49 Yeah.
00:18:50 Well, I’m smart enough to be humble
00:18:53 and not try to get between them.
00:18:55 I know there’s very bright people
00:18:56 on both sides of the argument.
00:18:57 I’ve talked to them and they make convincing arguments to me
00:19:00 about how careful they need to be and the social acceptance.
00:19:04 Some people thought that when the first few people died
00:19:07 from self driving cars, that would shut down the industry,
00:19:09 but it was more of a blip actually.
00:19:11 And, you know, so that was interesting.
00:19:14 Of course, there’s still a concern
00:19:16 that if there could be setbacks, if we do this wrong,
00:19:20 you know, your listeners may be familiar
00:19:22 with the different levels of self driving,
00:19:24 you know, level one, two, three, four, five.
00:19:26 I think Andrew Ng has convinced me that this idea
00:19:29 of really focusing on level four,
00:19:32 where you only go in areas that are well mapped
00:19:35 rather than just going out in the wild
00:19:37 is the way things are gonna evolve.
00:19:39 But you can just keep expanding those areas
00:19:42 where you’ve mapped things really well,
00:19:44 where you really understand them
00:19:45 and eventually all become kind of interconnected.
00:19:47 And that could be a kind of another way of progressing
00:19:51 to make it more feasible over time.
00:19:55 I mean, that’s kind of like the Waymo approach,
00:19:57 which is they just now released,
00:19:59 I think just like a day or two ago,
00:20:01 a public service where, you know,
00:20:05 anyone from the public in Phoenix, Arizona
00:20:12 can get a ride in a Waymo car
00:20:14 with no person, no driver.
00:20:16 Oh, they’ve taken away the safety driver?
00:20:17 Oh yeah, for a while now there’s been no safety driver.
00:20:21 Okay, because I mean, I’ve been following that one
00:20:22 in particular, but I thought it was kind of funny
00:20:24 about a year ago when they had the safety driver
00:20:26 and then they added a second safety driver
00:20:28 because the first safety driver would fall asleep.
00:20:30 It’s like, I’m not sure they’re going
00:20:32 in the right direction with that.
00:20:33 No, Waymo in particular
00:20:38 has done a really good job of that.
00:20:39 They actually have a very interesting infrastructure
00:20:44 of remote observation.
00:20:47 So they’re not controlling the vehicles remotely,
00:20:49 but they’re able to, it’s like a customer service.
00:20:52 They can anytime tune into the car.
00:20:55 I bet they can probably remotely control it as well,
00:20:58 but that’s officially not the function that they use.
00:21:00 Yeah, I can see that,
00:21:02 because I think the thing that’s proven harder
00:21:06 than maybe some of the early people expected
00:21:08 is that there’s a long tail of weird exceptions.
00:21:10 So you can deal with 90%, 99%, 99.99% of the cases,
00:21:15 but then there’s something that’s just never been seen before
00:21:17 in the training data.
00:21:18 And humans more or less can work around that.
00:21:21 Although let me be clear and note,
00:21:22 there are about 30,000 human fatalities from driving every year
00:21:25 just in the United States and maybe a million worldwide.
00:21:28 So they’re far from perfect.
00:21:30 But I think people have higher expectations of machines.
00:21:33 They wouldn’t tolerate that level of death
00:21:36 and damage from a machine.
00:21:40 And so we have to do a lot better
00:21:41 at dealing with those edge cases.
00:21:43 And also, the tricky thing, if I have a criticism
00:21:46 for the Waymo folks, is there’s such a huge focus on safety
00:21:51 that people don’t talk enough about creating products
00:21:55 that customers love,
00:21:57 that human beings love using.
00:22:00 It’s very easy to create a thing that’s safe
00:22:03 at the extremes, but then nobody wants to get into it.
00:22:06 Yeah, well, back to Elon, I think one of,
00:22:09 part of his genius was with the electric cars.
00:22:11 Before he came along, electric cars were all kind of
00:22:13 underpowered, really light,
00:22:15 and they were sort of wimpy cars that weren’t fun.
00:22:20 And the first thing he did was he made a roadster
00:22:23 that went zero to 60 faster than just about any other car
00:22:27 and went to the other end.
00:22:28 And I think that was a really wise marketing move
00:22:30 as well as a wise technology move.
00:22:33 Yeah, it’s difficult to figure out
00:22:34 what the right marketing move is for AI systems.
00:22:37 I think it requires guts and risk taking,
00:22:42 which is what Elon practices.
00:22:46 I mean, to the chagrin of perhaps investors or whatever,
00:22:50 but it also requires rethinking what you’re doing.
00:22:54 I think way too many people are unimaginative,
00:22:57 intellectually lazy, and when they take AI,
00:22:59 they basically say, what are we doing now?
00:23:01 How can we make a machine do the same thing?
00:23:04 Maybe we’ll save some costs, we’ll have less labor.
00:23:06 And yeah, it’s not necessarily the worst thing
00:23:08 in the world to do, but it’s really not leading
00:23:10 to a quantum change in the way you do things.
00:23:12 When Jeff Bezos said, hey, we’re gonna use the internet
00:23:16 to change how bookstores work and we’re gonna use technology,
00:23:19 he didn’t go and say, okay, let’s put a robot cashier
00:23:22 where the human cashier is and leave everything else alone.
00:23:25 That would have been a very lame way to automate a bookstore.
00:23:28 He went from soup to nuts and said, let’s just rethink it.
00:23:31 Let’s get rid of the physical bookstore.
00:23:33 We have a warehouse, we have delivery,
00:23:34 we have people order on a screen
00:23:36 and everything was reinvented.
00:23:38 And that’s been the story
00:23:39 of these general purpose technologies all through history.
00:23:43 And in my books, I write about like electricity
00:23:46 and how for 30 years, there was almost no productivity gain
00:23:50 from the electrification of factories a century ago.
00:23:53 Now it’s not because electricity
00:23:54 is a wimpy useless technology.
00:23:55 We all know how awesome electricity is.
00:23:57 It’s cause at first,
00:23:58 they really didn’t rethink the factories.
00:24:00 It was only after they reinvented them
00:24:02 and we describe how in the book,
00:24:04 then you suddenly got a doubling and tripling
00:24:05 of productivity growth.
00:24:07 But it’s the combination of the technology
00:24:09 with the new business models, new business organization.
00:24:12 That just takes a long time
00:24:14 and it takes more creativity than most people have.
00:24:16 Can you maybe linger on electricity?
00:24:19 Cause that’s a fun one.
00:24:20 Yeah, well, sure, I’ll tell you what happened.
00:24:22 Before electricity, there were basically steam engines
00:24:25 or sometimes water wheels and to power the machinery,
00:24:28 you had to have pulleys and crankshafts
00:24:30 and you really can’t make them too long
00:24:32 cause they’ll break from the torsion.
00:24:34 So all the equipment was kind of clustered
00:24:35 around this one giant steam engine.
00:24:37 You can’t make small steam engines either
00:24:39 cause of thermodynamics.
00:24:40 So you have one giant steam engine,
00:24:42 all the equipment clustered around it, multi story.
00:24:44 They have it vertical to minimize the distance
00:24:46 as well as horizontal.
00:24:47 And then when they did electricity,
00:24:48 they took out the steam engine.
00:24:50 They got the biggest electric motor
00:24:51 they could buy from General Electric or someone like that.
00:24:54 And nothing much else changed.
00:24:57 It took until a generation of managers retired
00:25:03 or died, 30 years later,
00:25:03 that people started thinking,
00:25:04 wait, we don’t have to do it that way.
00:25:05 You can make electric motors, big, small, medium.
00:25:09 You can put one with each piece of equipment.
00:25:11 There’s this big debate
00:25:12 if you read the management literature
00:25:13 between what they call a group drive versus unit drive
00:25:16 where every machine would have its own motor.
00:25:18 Well, once they did that, once they went to unit drive,
00:25:21 those guys won the debate.
00:25:23 Then you started having a new kind of factory
00:25:25 which is sometimes spread out over acres, single story
00:25:29 and each piece of equipment has its own motor.
00:25:31 And most importantly, they weren’t laid out based on
00:25:33 who needed the most power.
00:25:35 They were laid out based on
00:25:37 what is the workflow of materials?
00:25:40 Assembly line, let’s have it go from this machine
00:25:41 to that machine, to that machine.
00:25:43 Once they rethought the factory that way,
00:25:46 huge increases in productivity.
00:25:47 It was just staggering.
00:25:48 People like Paul David have documented this
00:25:50 in their research papers.
00:25:51 And I think that that is a lesson you see over and over.
00:25:55 It happened when the steam engine changed manual production.
00:25:58 It’s happened with the computerization.
00:26:00 People like Michael Hammer said, don’t automate, obliterate.
00:26:03 In each case, the big gains only came once
00:26:08 smart entrepreneurs and managers
00:26:10 basically reinvented their industries.
00:26:13 I mean, one other interesting point about all that
00:26:14 is that during that reinvention period,
00:26:18 you often actually not only don’t see productivity growth,
00:26:22 you can actually see a slipping back.
00:26:24 Measured productivity actually falls.
00:26:26 I just wrote a paper with Chad Syverson and Daniel Rock
00:26:29 called the productivity J curve,
00:26:31 which basically shows that in a lot of these cases,
00:26:33 you have a downward dip before it goes up.
00:26:36 And that downward dip is when everyone’s trying
00:26:38 to like reinvent things.
00:26:40 And you could say that they’re creating knowledge
00:26:43 and intangible assets,
00:26:44 but that doesn’t show up on anyone’s balance sheet.
00:26:46 It doesn’t show up in GDP.
00:26:48 So it’s as if they’re doing nothing.
00:26:50 Like take self driving cars, we were just talking about it.
00:26:52 There have been hundreds of billions of dollars
00:26:55 spent developing self driving cars.
00:26:57 And basically no chauffeur has lost his job, no taxi driver.
00:27:02 I guess I got to check out the ones that.
00:27:03 It’s a big J curve.
00:27:04 Yeah, so there’s a bunch of spending
00:27:06 and no real consumer benefit.
00:27:08 Now they’re doing that in the belief,
00:27:11 I think the justified belief
00:27:13 that they will get the upward part of the J curve
00:27:15 and there will be some big returns,
00:27:16 but in the short run, you’re not seeing it.
00:27:19 That’s happening with a lot of other AI technologies,
00:27:21 just as it happened
00:27:22 with earlier general purpose technologies.
00:27:25 And it’s one of the reasons
00:27:25 we’re having relatively low productivity growth lately.
00:27:29 As an economist, one of the things that disappoints me
00:27:31 is that as eye popping as these technologies are,
00:27:34 you and I are both excited
00:27:35 about some of the things they can do.
00:27:36 The economic productivity statistics are kind of dismal.
00:27:40 We actually, believe it or not,
00:27:42 have had lower productivity growth
00:27:44 over the past 15 years or so
00:27:47 than we did in the previous 15 years,
00:27:48 in the 90s and early 2000s.
00:27:51 And so that’s not what you would have expected
00:27:53 if these technologies were that much better.
00:27:55 But I think we’re in kind of a long J curve there.
00:27:59 Personally, I’m optimistic.
00:28:00 We’ll start seeing the upward tick,
00:28:02 maybe as soon as next year.
00:28:04 But the past decade has been a bit disappointing
00:28:08 if you thought there’s a one to one relationship
00:28:10 between cool technology and higher productivity.
00:28:12 Well, what would you place your biggest hope
00:28:15 for productivity increases on?
00:28:17 Because you kind of said at a high level AI,
00:28:19 but if I were to think about
00:28:22 what has been so revolutionary in the last 10 or 15 years,
00:28:28 thinking about the internet,
00:28:32 I would say things like,
00:28:35 hopefully I’m not saying anything ridiculous,
00:28:37 but everything from Wikipedia to Twitter.
00:28:41 So like these kind of websites,
00:28:43 not so much AI,
00:28:46 but like I would expect to see some kind
00:28:48 of big productivity increases
00:28:50 from just the connectivity between people
00:28:54 and the access to more information.
00:28:58 Yeah, well, so that’s another area
00:29:00 I’ve done quite a bit of research on actually,
00:29:01 is these free goods like Wikipedia, Facebook, Twitter, Zoom.
00:29:06 We’re actually doing this in person,
00:29:08 but almost everything else I do these days is online.
00:29:12 The interesting thing about all those
00:29:13 is most of them have a price of zero.
00:29:18 What do you pay for Wikipedia?
00:29:19 Maybe like a little bit for the electrons
00:29:21 to come to your house.
00:29:22 Basically zero, right?
00:29:25 Let me take a small pause and say, I donate to Wikipedia often.
00:29:28 You should too.
00:29:28 It’s good for you, yeah.
00:29:30 So, but what does that mean for GDP?
00:29:32 GDP is based on the price and quantity
00:29:36 of all the goods, things bought and sold.
00:29:37 If something has zero price,
00:29:39 you know how much it contributes to GDP?
00:29:42 To a first approximation, zero.
00:29:44 So these digital goods that we’re getting more and more of,
00:29:47 we’re spending more and more hours a day
00:29:50 consuming stuff off of screens,
00:29:52 little screens, big screens,
00:29:54 that doesn’t get priced into GDP.
00:29:56 It’s like they don’t exist.
00:29:58 That doesn’t mean they don’t create value.
00:30:00 I get a lot of value from watching cat videos
00:30:03 and reading Wikipedia articles and listening to podcasts,
00:30:06 even if I don’t pay for them.
00:30:08 So we’ve got a mismatch there.
00:30:10 Now, in fairness, economists,
00:30:12 since Simon Kuznets invented GDP and productivity,
00:30:15 all those statistics back in the 1930s,
00:30:17 he recognized, he in fact said,
00:30:19 this is not a measure of wellbeing.
00:30:21 This is not a measure of welfare.
00:30:23 It’s a measure of production.
00:30:25 But almost everybody has kind of forgotten
00:30:28 that he said that and they just use it.
00:30:31 It’s like, how well off are we?
00:30:32 What was GDP last year?
00:30:33 It was 2.3% growth or whatever.
00:30:35 That is how much physical production,
00:30:39 but it’s not the value we’re getting.
00:30:42 We need a new set of statistics
00:30:43 and I’m working with some colleagues.
00:30:45 Avi Collis and others to develop something
00:30:48 we call GDP-B.
00:30:50 GDP-B measures the benefits you get, not the cost.
00:30:55 If you get benefit from Zoom or Wikipedia or Facebook,
00:31:02 then that gets counted in GDP-B,
00:31:02 even if you pay zero for it.
00:31:04 So, you know, back to your original point,
00:31:07 I think there is a lot of gain over the past decade
00:31:10 in these digital goods that doesn’t show up in GDP,
00:31:15 doesn’t show up in productivity.
00:31:16 By the way, productivity is just defined
00:31:17 as GDP divided by hours worked.
00:31:20 So if you mismeasure GDP,
00:31:22 you mismeasure productivity by the exact same amount.
00:31:25 That’s something we need to fix.
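A toy illustration of that pass-through; all of the numbers in this Python sketch are hypothetical, not official statistics:

```python
# Productivity = GDP / hours worked, so a mismeasurement of GDP
# passes through to measured productivity one-for-one.
gdp = 21e12    # hypothetical measured GDP, dollars
hours = 260e9  # hypothetical total hours worked
print(f"measured productivity: ${gdp / hours:.2f}/hour")

unpriced = 0.10 * gdp  # suppose free digital goods are worth 10% of GDP
print(f"with free goods valued: ${(gdp + unpriced) / hours:.2f}/hour")
# productivity is understated by exactly the 10% that GDP missed
```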
00:31:26 I’m working with the statistical agencies
00:31:28 to come up with a new set of metrics.
00:31:30 And, you know, over the coming years,
00:31:32 I think we’ll see, we’re not gonna do away with GDP.
00:31:34 It’s very useful, but we’ll see a parallel set of accounts
00:31:37 that measure the benefits.
00:31:38 How difficult is it to get that B in GDP-B?
00:31:41 It’s pretty hard.
00:31:41 I mean, one of the reasons it hasn’t been done before
00:31:44 is that, you know, you can measure it,
00:31:46 the cash register, what people pay for stuff,
00:31:49 but how do you measure what they would have paid,
00:31:51 like what the value is?
00:31:52 That’s a lot harder, you know?
00:31:54 How much is Wikipedia worth to you?
00:31:56 That’s what we have to answer.
00:31:57 And to do that, what we do is we can use online experiments.
00:32:00 We do massive online choice experiments.
00:32:03 We ask hundreds of thousands, now millions of people
00:32:07 to do lots of sort of A/B tests.
00:32:07 How much would I have to pay you
00:32:09 to give up Wikipedia for a month?
00:32:10 How much would I have to pay you to stop using your phone?
00:32:14 And in some cases, it’s hypothetical.
00:32:15 In other cases, we actually enforce it,
00:32:17 which is kind of expensive.
00:32:18 Like we pay somebody $30 to stop using Facebook
00:32:22 and we see if they’ll do it.
00:32:23 And some people will give it up for $10.
00:32:26 Some people won’t give it up even if you give them $100.
00:32:28 And then you get a whole demand curve.
00:32:31 You get to see what all the different prices are
00:32:33 and how much value different people get.
00:32:36 And not surprisingly,
00:32:36 different people have different values.
00:32:38 We find that women tend to value Facebook more than men.
00:32:41 Old people tend to value it a little bit more
00:32:43 than young people.
00:32:44 That was interesting.
00:32:44 I think young people maybe know about other networks
00:32:46 that I don’t know the name of that are better than Facebook.
00:32:50 And so you get to see these patterns,
00:32:53 but every person’s individual.
00:32:55 And then if you add up all those numbers,
00:32:57 you start getting an estimate of the value.
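A sketch of how such choice-experiment responses might be aggregated into a demand curve and a value estimate; the data below are invented for illustration and are not from the actual study:

```python
# Hypothetical willingness-to-accept data: the monthly payment at which
# each respondent agreed to give up a free good (made-up numbers).
wta = [0, 5, 10, 10, 25, 40, 40, 75, 100, 150]

# This traces out a demand curve: at "price" p, everyone whose value
# exceeds p would keep using the good rather than take the payment.
for p in (10, 50, 100):
    keepers = sum(v > p for v in wta)
    print(f"at ${p}/month, {keepers}/{len(wta)} would keep the good")

# Summing individual values estimates total consumer surplus for the sample.
print(f"total value: ${sum(wta)}/month across {len(wta)} respondents")
```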
00:33:00 Okay, first of all, that’s brilliant.
00:33:01 Is this a work that will soon eventually be published?
00:33:05 Yeah, well, there’s a version of it
00:33:07 in the Proceedings of the National Academy of Sciences
00:33:09 I think we call it massive online choice experiments.
00:33:11 I should remember the title, but it’s on my website.
00:33:14 So yeah, we have some more papers coming out on it,
00:33:17 but the first one is already out.
00:33:20 You know, it’s kind of a fascinating mystery
00:33:22 that Twitter, Facebook,
00:33:24 like all these social networks are free.
00:33:26 And it seems like almost none of them except for YouTube
00:33:31 have experimented with removing ads for money.
00:33:35 Can you like, do you understand that
00:33:37 from both economics and the product perspective?
00:33:39 Yeah, it’s something that, you know,
00:33:41 so I teach a course on digital business models.
00:33:43 So I used to at MIT, at Stanford, I’m not quite sure.
00:33:45 I’m not teaching until next spring.
00:33:47 I’m still thinking what my course is gonna be.
00:33:50 But there are a lot of different business models.
00:33:52 And when you have something that has zero marginal cost,
00:33:54 there’s a lot of forces,
00:33:56 especially if there’s any kind of competition
00:33:57 that push prices down to zero.
00:33:59 But you can have ad supported systems,
00:34:03 you can bundle things together.
00:34:05 You can have volunteer, you mentioned Wikipedia,
00:34:07 there’s donations.
00:34:08 And I think economists underestimate
00:34:11 the power of volunteerism and donations.
00:34:14 Your National Public Radio.
00:34:16 Actually, how do you do this podcast?
00:34:18 What’s the revenue model?
00:34:19 There’s sponsors at the beginning.
00:34:22 And the funny thing is,
00:34:26 I tell people the timestamp,
00:34:27 so if you wanna skip the sponsors, you’re free to.
00:34:33 But when I read the advertisement,
00:34:38 a bunch of people actually enjoy it.
00:34:39 Well, they may learn something from it.
00:34:40 And also from the advertiser’s perspective,
00:34:42 those are people who are actually interested.
00:34:45 I mean, the example I sometimes get is like,
00:34:46 I bought a car recently and all of a sudden,
00:34:49 all the car ads were like interesting to me.
00:34:52 Exactly.
00:34:53 And then like, now that I have the car,
00:34:54 like I sort of zone out on, but that’s fine.
00:34:56 The car companies, they don’t really wanna be advertising
00:34:58 to me if I’m not gonna buy their product.
00:35:01 So there are a lot of these different revenue models
00:35:03 and it’s a little complicated,
00:35:06 but the economic theory has to do
00:35:08 with what the shape of the demand curve is,
00:35:09 when it’s better to monetize it with charging people
00:35:13 versus when you’re better off doing advertising.
00:35:15 I mean, in short, when the demand curve
00:35:18 is relatively flat and wide,
00:35:20 like generic news and things like that,
00:35:22 then you tend to do better with advertising.
00:35:25 If it’s a good that’s only useful to a small number
00:35:28 of people, but they’re willing to pay a lot,
00:35:30 they have a very high value for it,
00:35:32 then advertising isn’t gonna work as well
00:35:34 and you’re better off charging for it.
00:35:36 Both of them have some inefficiencies.
00:35:38 And then when you get into targeting
00:35:39 and you get into these other revenue models,
00:35:40 it gets more complicated,
00:35:41 but there’s some economic theory on it.
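A stylized numerical sketch of that theory; the demand curves and the ad revenue per user below are invented for illustration:

```python
# Two stylized demand curves (each entry is one user's value for the good).
flat_wide = [1] * 1000    # many users, each with a low value
tall_narrow = [300] * 10  # few users, each with a high value
ad_revenue_per_user = 2   # hypothetical advertiser payment per user

def best_single_price(values):
    # Revenue from the best single price: everyone with value >= p buys at p.
    return max(p * sum(v >= p for v in values) for p in set(values))

for name, values in (("flat/wide", flat_wide), ("tall/narrow", tall_narrow)):
    ads = ad_revenue_per_user * len(values)  # free-with-ads monetizes everyone
    paid = best_single_price(values)         # charging excludes low-value users
    print(f"{name}: ads ${ads:,} vs charging ${paid:,}")
# flat/wide: ads $2,000 vs charging $1,000   -> advertising does better
# tall/narrow: ads $20 vs charging $3,000    -> charging does better
```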
00:35:45 I also think to be frank,
00:35:47 there’s just a lot of experimentation that’s needed
00:35:49 because sometimes things are a little counterintuitive,
00:35:53 especially when you get into what are called
00:35:55 two sided networks or platform effects,
00:35:57 where you may grow the market on one side
00:36:01 and harvest the revenue on the other side.
00:36:04 Facebook tries to get more and more users
00:36:06 and then they harvest the revenue from advertising.
00:36:08 So that’s another way of kind of thinking about it.
00:36:12 Is it strange to you that they haven’t experimented?
00:36:14 Well, they are experimenting.
00:36:15 So they are doing some experiments
00:36:17 about what the willingness is for people to pay.
00:36:22 I think that when they do the math,
00:36:23 it’s gonna work out that they still are better off
00:36:26 with an advertising driven model, but…
00:36:29 What about a mix?
00:36:30 Like this is what YouTube is, right?
00:36:32 It’s you allow the person to decide,
00:36:36 the customer to decide exactly which model they prefer.
00:36:39 No, that can work really well.
00:36:40 And newspapers, of course,
00:36:41 have known this for a long time.
00:36:42 The Wall Street Journal, the New York Times,
00:36:44 they have subscription revenue.
00:36:45 They also have advertising revenue.
00:36:48 And that can definitely work.
00:36:52 Online, it’s a lot easier to have a dial
00:36:54 that’s much more personalized
00:36:55 and everybody can kind of roll their own mix.
00:36:57 And I could imagine having a little slider
00:37:00 about how much advertising you want or are willing to take.
00:37:05 And if it’s done right and it’s incentive compatible,
00:37:07 it could be a win win where both the content provider
00:37:10 and the consumer are better off
00:37:12 than they would have been before.
00:37:14 Yeah, the done right part is a really good point.
00:37:17 Like with the Jeff Bezos
00:37:19 and the single click purchase on Amazon,
00:37:22 the frictionless effort there,
00:37:23 if I could just rant for a second
00:37:25 about the Wall Street Journal,
00:37:27 all the newspapers you mentioned,
00:37:29 is I have to click so many times to subscribe to them
00:37:34 that I literally don’t subscribe
00:37:37 just because of the number of times I have to click.
00:37:39 I’m totally with you.
00:37:40 I don’t understand why so many companies make it so hard.
00:37:44 I mean, another example is when you buy a new iPhone
00:37:47 or a new computer, whatever,
00:37:48 I feel like, okay, I’m gonna lose an afternoon
00:37:51 just like loading up and getting all my stuff back.
00:37:53 And for a lot of us,
00:37:56 that’s more of a deterrent than the price.
00:37:58 And if they could make it painless,
00:38:01 we’d give them a lot more money.
00:38:03 So I’m hoping somebody listening is working
00:38:06 on making it more painless for us to buy your products.
00:38:10 If we could just like linger a little bit
00:38:12 on the social network thing,
00:38:13 because there’s this Netflix documentary, The Social Dilemma.
00:38:18 Yeah, no, I saw that.
00:38:19 And Tristan Harris and company, yeah.
00:38:24 And people’s data,
00:38:31 it’s really sensitive, and social networks
00:38:31 are arguably at the core of a lot of societal tension
00:38:37 and some of the most important things happening in society.
00:38:39 So it feels like it’s important to get this right,
00:38:42 both from a business model perspective
00:38:43 and just like a trust perspective.
00:38:46 I still gotta, I mean, it just still feels like,
00:38:49 I know there’s experimentation going on.
00:38:52 It still feels like everyone is afraid
00:38:54 to try different business models, like really try.
00:38:57 Well, I’m worried that people are afraid
00:38:59 to try different business models.
00:39:01 I’m also worried that some of the business models
00:39:03 may lead them to bad choices.
00:39:06 And Danny Kahneman talks about system one and system two,
00:39:10 sort of like a reptilian brain
00:39:12 that reacts quickly to what we see,
00:39:14 see something interesting, we click on it,
00:39:16 we retweet it versus our system two,
00:39:20 our frontal cortex that’s supposed to be more careful
00:39:24 and rational that really doesn’t make
00:39:26 as many decisions as it should.
00:39:28 I think there’s a tendency for a lot of these social networks
00:39:32 to really exploit system one, our quick instant reaction,
00:39:37 make it so we just click on stuff and pass it on
00:39:40 and not really think carefully about it.
00:39:42 And that system, it tends to be driven
00:39:45 by sex, violence, disgust, anger, fear,
00:39:51 these relatively primitive kinds of emotions.
00:39:53 Maybe they’re important for a lot of purposes,
00:39:55 but they’re not a great way to organize a society.
00:39:58 And most importantly, when you think about this huge,
00:40:01 amazing information infrastructure we’ve had
00:40:04 that’s connected billions of brains across the globe,
00:40:08 not just so we can all access information,
00:40:09 but we can all contribute to it and share it.
00:40:12 Arguably the most important thing
00:40:14 that that network should do is favor truth over falsehoods.
00:40:19 And the way it’s been designed,
00:40:21 not necessarily intentionally, is exactly the opposite.
00:42:24 My MIT colleagues Sinan Aral and Deb Roy and others at MIT
00:42:29 did a terrific paper on the cover of Science.
00:40:31 And they documented what we all feared,
00:40:33 which is that lies spread faster than truth
00:40:37 on social networks.
00:40:39 They looked at a bunch of tweets and retweets,
00:40:42 and they found that false information
00:40:44 was more likely to spread further, faster, to more people.
00:40:48 And why was that?
00:40:49 It’s not because people like lies.
00:40:53 It’s because people like things that are shocking,
00:40:55 amazing, can you believe this?
00:40:57 Something that is not mundane,
00:41:00 not something that everybody else already knew.
00:41:02 And what are the most unbelievable things?
00:41:05 Well, lies.
00:41:07 And so if you wanna find something unbelievable,
00:41:09 it’s a lot easier to do that
00:41:10 if you’re not constrained by the truth.
00:41:12 So they found that the emotional valence
00:41:15 of false information was just much higher.
00:41:17 It was more likely to be shocking,
00:41:19 and therefore more likely to be spread.
00:41:22 Another interesting thing was that
00:41:24 that wasn’t necessarily driven by the algorithms.
00:41:27 I know that there is some evidence,
00:41:29 Zeynep Tufekci and others have pointed out on YouTube,
00:41:32 some of the algorithms unintentionally were tuned
00:41:34 to amplify more extremist content.
00:41:37 But in the study of Twitter that Sinan and Deb and others did,
00:41:42 they found that even if you took out all the bots
00:41:44 and all the automated tweets,
00:41:47 you still had lies spreading significantly faster.
00:41:50 It’s just the problems with ourselves
00:41:52 that we just can’t resist passing on the salacious content.
00:41:58 But I also blame the platforms
00:41:59 because there’s different ways you can design a platform.
00:42:03 You can design a platform in a way
00:42:05 that makes it easy to spread lies
00:42:07 and to retweet and spread things on,
00:42:09 or you can kind of put some friction on that
00:42:11 and try to favor truth.
00:42:13 I had dinner with Jimmy Wales once,
00:42:15 the guy who helped found Wikipedia.
00:42:19 And he convinced me that, look,
00:42:22 you can make some design choices,
00:42:24 whether it’s at Facebook, at Twitter,
00:42:26 at Wikipedia, or Reddit, whatever,
00:42:29 and depending on how you make those choices,
00:42:32 you’re more likely or less likely to have false news.
00:42:35 Create a little bit of friction, like you said.
00:42:37 Yeah.
00:42:38 You know, that’s the, and so if I’m…
00:42:39 It could be friction, it could be speeding the truth,
00:42:41 either way, but, and I don’t totally understand…
00:42:44 Speeding the truth, I love it.
00:42:45 Yeah, yeah.
00:42:47 Amplifying it and giving it more credit.
00:42:48 And in academia, which is far, far from perfect,
00:42:52 but when someone has an important discovery,
00:42:55 it tends to get more cited
00:42:56 and people kind of look to it more
00:42:58 and sort of, it tends to get amplified a little bit.
00:43:00 So you could try to do that too.
00:43:03 I don’t know what the silver bullet is,
00:43:04 but the meta point is that if we spend time
00:43:07 thinking about it, we can amplify truth over falsehoods.
00:43:10 And I’m disappointed in the heads of these social networks
00:43:14 that they haven’t been as successful
00:43:16 or maybe haven’t tried as hard to amplify truth.
00:43:19 And part of it, going back to what we said earlier,
00:43:21 is these revenue models may push them
00:43:25 more towards growing fast, spreading information rapidly,
00:43:29 getting lots of users,
00:43:31 which isn’t the same thing as finding truth.
00:43:34 Yeah, I mean, implicit in what you’re saying now
00:43:38 is a hopeful message that with platforms,
00:43:42 we can take a step towards a greater
00:43:47 and greater popularity of truth.
00:43:51 But the more cynical view is that
00:43:54 what the last few years have revealed
00:43:56 is that there’s a lot of money to be made
00:43:59 in dismantling even the idea of truth,
00:44:03 that nothing is true.
00:44:05 And as a thought experiment,
00:44:07 I’ve been thinking about if it’s possible
00:44:09 that our future will have,
00:44:11 like the idea of truth is something we won’t even have.
00:44:14 Do you think it’s possible in the future
00:44:17 that everything is on the table in terms of truth,
00:44:20 and we’re just swimming in this kind of digital economy
00:44:24 where ideas are just little toys
00:44:29 that are not at all connected to reality?
00:44:33 Yeah, I think that’s definitely possible.
00:44:35 I’m not a technological determinist,
00:44:37 so I don’t think that’s inevitable.
00:44:40 I don’t think it’s inevitable that it doesn’t happen.
00:44:42 I mean, the thing that I’ve come away with
00:44:43 every time I do these studies,
00:44:45 and I emphasize it in my books and elsewhere,
00:44:47 is that technology doesn’t shape our destiny,
00:44:50 we shape our destiny.
00:44:51 So just by us having this conversation,
00:44:54 I hope that your audience is gonna take it upon themselves
00:44:58 as they design their products,
00:44:59 and they think about, they use products,
00:45:01 as they manage companies,
00:45:02 how can they make conscious decisions
00:45:05 to favor truth over falsehoods,
00:45:08 favor the better kinds of societies,
00:45:10 and not abdicate and say, well, we just build the tools.
00:45:13 I think there was a saying that,
00:45:16 was it the German scientists
00:45:18 when they were working on the missiles in late World War II?
00:45:23 They said, well, our job is to make the missiles go up.
00:45:25 Where they come down, that’s someone else’s department.
00:45:28 And that’s obviously not the, I think it’s obvious,
00:45:31 that’s not the right attitude
00:45:32 that technologists should have,
00:45:33 that engineers should have.
00:45:35 They should be very conscious
00:45:36 about what the implications are.
00:45:38 And if we think carefully about it,
00:45:40 we can avoid the kind of world that you just described,
00:45:42 where truth is all relative.
00:45:45 There are going to be people who benefit from a world
00:45:47 where people don’t check facts,
00:45:51 and where truth is relative,
00:45:52 and popularity or fame or money is orthogonal to truth.
00:45:59 But one of the reasons I suspect
00:46:01 that we’ve had so much progress over the past few hundred
00:46:04 years is the invention of the scientific method,
00:46:07 which is a really powerful tool or meta tool
00:46:10 for finding truth and favoring things that are true
00:46:15 versus things that are false.
00:46:16 If they don’t pass the scientific method,
00:46:18 they’re less likely to be true.
00:46:20 And the societies and the people
00:46:25 and the organizations that embrace that
00:46:27 have done a lot better than the ones who haven’t.
00:46:30 And so I’m hoping that people keep that in mind
00:46:32 and continue to try to embrace not just the truth,
00:46:35 but methods that lead to the truth.
00:46:37 So maybe on a more personal question,
00:46:41 if one were to try to build a competitor to Twitter,
00:46:45 what would you advise?
00:46:47 Is there, I mean, the bigger, the meta question,
00:46:53 is that the right way to improve systems?
00:46:55 Yeah, no, I think that the underlying premise
00:46:59 behind Twitter and all these networks is amazing,
00:47:01 that we can communicate with each other.
00:47:02 And I use it a lot.
00:47:04 There’s a subpart of Twitter called Econ Twitter,
00:47:05 where we economists tweet to each other
00:47:08 and talk about new papers.
00:47:10 Something came out in the NBER,
00:47:11 the National Bureau of Economic Research,
00:47:13 and we share about it.
00:47:14 People critique it.
00:47:15 I think it’s been a godsend
00:47:16 because it’s really sped up the scientific process,
00:47:20 if you can call economics scientific.
00:47:21 Does it get divisive in that little world?
00:47:23 Sometimes, yeah, sure.
00:47:24 Sometimes it does.
00:47:25 It can also be done in nasty ways and there’s the bad parts.
00:47:28 But the good parts are great
00:47:29 because you just speed up that clock speed
00:47:31 of learning about things.
00:47:33 Instead of like in the old, old days,
00:47:35 waiting to read it in a journal,
00:47:36 or the not so old days when you’d see it posted
00:47:39 on a website and you’d read it.
00:47:41 Now on Twitter, people will distill it down
00:47:44 and it’s a real art to getting to the essence of things.
00:47:47 So that’s been great.
00:47:49 But it certainly, we all know that Twitter
00:47:52 can be a cesspool of misinformation.
00:47:55 And like I just said,
00:47:57 unfortunately misinformation tends to spread faster
00:48:00 on Twitter than truth.
00:48:02 And there are a lot of people
00:48:03 who are very vulnerable to it.
00:48:04 I’m sure I’ve been fooled at times.
00:48:06 There are agents, whether from Russia
00:48:09 or from political groups or others
00:48:11 that explicitly create efforts at misinformation
00:48:15 and efforts at getting people to hate each other.
00:48:17 Or even more important lately I’ve discovered
00:48:19 is nut picking.
00:48:21 You know the idea of nut picking?
00:48:22 No, what’s that?
00:48:23 It’s a good term.
00:48:24 Nut picking is when you find like an extreme nut case
00:48:27 on the other side and then you amplify them
00:48:30 and make it seem like that’s typical of the other side.
00:48:34 So you’re not literally lying.
00:48:35 You’re taking some idiot, you know,
00:48:37 ranting on the subway or just, you know,
00:48:39 whether they’re in the KKK or Antifa or whatever,
00:48:42 and, you know,
00:48:44 normally nobody would pay attention to this guy.
00:48:46 Like 12 people would see him and it’d be the end.
00:48:48 Instead with video or whatever,
00:48:51 you get tens of millions of people seeing it.
00:48:54 And I’ve seen this, you know, I look at it,
00:48:56 I’m like, I get angry.
00:48:57 I’m like, I can’t believe that person
00:48:58 did something so terrible.
00:48:59 Let me tell all my friends about this terrible person.
00:49:02 And it’s a great way to generate division.
00:49:06 I talked to a friend who studied Russian misinformation
00:49:10 campaigns, and they’re very clever about literally
00:49:13 being on both sides of some of these debates.
00:49:15 They would have some people pretend to be part of BLM.
00:49:18 Some people pretend to be white nationalists
00:49:21 and they would be throwing epithets at each other,
00:49:22 saying crazy things at each other.
00:49:25 And they’re literally playing both sides of it,
00:49:26 but their goal wasn’t for one or the other to win.
00:49:28 It was for everybody to end up hating
00:49:30 and distrusting everyone else.
00:49:32 So these tools can definitely be used for that.
00:49:34 And they are being used for that.
00:49:36 It’s been super destructive for our democracy
00:49:39 and our society.
00:49:41 And the people who run these platforms,
00:49:43 I think have a social responsibility,
00:49:46 a moral and ethical, personal responsibility
00:49:48 to do a better job and to shut that stuff down.
00:49:51 Well, I don’t know if you can shut it down,
00:49:52 but to design them in a way that, you know,
00:49:55 as I said earlier, favors truth over falsehoods
00:49:58 and favors positive types of
00:50:03 communication versus destructive ones.
00:50:06 And just like you said, it’s also on us.
00:50:09 I try to be all about love and compassion,
00:50:12 empathy on Twitter.
00:50:13 I mean, one of the things,
00:50:14 nut picking is a fascinating term.
00:50:16 One of the things that people do,
00:50:18 that’s I think even more dangerous
00:50:21 is nut picking applied to individual statements
00:50:26 of good people.
00:50:28 So basically it’s worst case analysis, in computer science terms:
00:50:32 taking, sometimes out of context,
00:50:35 but sometimes in context,
00:50:38 one statement by a person,
00:50:42 like I’ve been, because I’ve been reading
00:50:43 The Rise and Fall of the Third Reich,
00:50:45 I often talk about Hitler on this podcast with folks
00:50:48 and it is so easy.
00:50:50 That’s really dangerous.
00:50:52 But I’m all leaning in, I’m 100%.
00:50:54 Because, well, it’s actually a safer place
00:50:56 than people realize because it’s history
00:50:59 and history in long form is actually very fascinating
00:51:04 to think about,
00:51:06 but I could see how that could be taken
00:51:09 totally out of context and it’s very worrying.
00:51:11 You know, these digital infrastructures,
00:51:12 not just they disseminate things,
00:51:14 but they’re sort of permanent.
00:51:14 So anything you say at some point,
00:51:16 someone can go back and find something you said
00:51:18 three years ago, perhaps jokingly, perhaps not,
00:51:21 maybe you were just wrong and made a mistake,
00:51:22 and that becomes something they can use to define you
00:51:25 if they have ill intent.
00:51:26 And we all need to be a little more forgiving.
00:51:29 I mean, somewhere in my 20s, I told myself,
00:51:32 I was going through all my different friends
00:51:33 and I was like, you know, every one of them
00:51:37 has at least like one nutty opinion.
00:51:39 And I was like, there’s like nobody
00:51:42 who’s like completely, except me, of course,
00:51:44 but I’m sure they thought that about me too.
00:51:45 And so you just kind of like learned
00:51:47 to be a little bit tolerant that like, okay,
00:51:49 there’s just, you know.
00:51:51 Yeah, I wonder where the responsibility lies there.
00:51:55 Like, I think ultimately it’s about leadership.
00:51:59 Like the previous president, Barack Obama,
00:52:02 has been, I think, quite eloquent
00:52:06 at walking this very difficult line
00:52:07 of talking about cancel culture, but it’s difficult,
00:52:10 it takes skill.
00:52:12 Because you say the wrong thing
00:52:13 and you piss off a lot of people.
00:52:15 And so you have to do it well.
00:52:17 But then also the platforms, the technology,
00:52:21 should slow down, create friction
00:52:23 in spreading this kind of nut picking in all its forms.
00:52:26 Absolutely.
00:52:27 No, and your point is right, that we have to learn over time
00:52:29 how to manage it.
00:52:30 I mean, we can’t put it all on the platform
00:52:31 and say, you guys design it.
00:52:33 Because if we’re idiots about using it,
00:52:35 nobody can design a platform that withstands that.
00:52:38 And every new technology people learn its dangers.
00:52:41 You know, when someone invented fire,
00:52:43 it’s great cooking and everything,
00:52:44 but then somebody burned themself.
00:52:46 And then you had to learn how to avoid that,
00:52:48 and maybe somebody invented a fire extinguisher later.
00:52:50 So you kind of like figure out ways
00:52:52 of working around these technologies.
00:52:54 Someone invented seat belts, et cetera.
00:52:57 And that’s certainly true
00:52:58 with all the new digital technologies
00:53:00 that we have to figure out,
00:53:02 not just technologies that protect us,
00:53:05 but ways of using them
00:53:08 that are more likely to be successful than dangerous.
00:53:11 So you’ve written quite a bit
00:53:12 about how artificial intelligence might change our world.
00:53:19 How do you think if we look forward,
00:53:21 again, it’s impossible to predict the future,
00:53:23 but if we look at trends from the past
00:53:26 and we tried to predict what’s gonna happen
00:53:28 in the rest of the 21st century,
00:53:29 how do you think AI will change our world?
00:53:33 That’s a big question.
00:53:34 You know, I’m mostly a techno optimist.
00:53:37 I’m not at the extreme, you know,
00:53:38 the “singularity is near” end of the spectrum,
00:53:41 but I do think that we’re likely in
00:53:44 for some significantly improved living standards,
00:53:47 some really important progress,
00:53:49 even just the technologies that are already kind of like
00:53:51 in the can that haven’t diffused.
00:53:53 You know, when I talked earlier about the J curve,
00:53:54 it could take 10, 20, 30 years for an existing technology
00:53:58 to have the kind of profound effects.
00:54:00 And when I look at whether it’s, you know,
00:54:03 vision systems, voice recognition, problem solving systems,
00:54:07 even if nothing new got invented,
00:54:09 we would have a few decades of progress.
00:54:11 So I’m excited about that.
00:54:13 And I think that’s gonna lead to us being wealthier,
00:54:16 healthier, I mean,
00:54:17 the healthcare is probably one of the applications
00:54:19 that I’m most excited about.
00:54:22 So that’s good news.
00:54:23 I don’t think we’re gonna have the end of work anytime soon.
00:54:26 There’s just too many things that machines still can’t do.
00:54:30 When I look around the world
00:54:32 and think of whether it’s childcare or healthcare,
00:54:34 cleaning the environment, interacting with people,
00:54:37 scientific work, artistic creativity,
00:54:40 these are things that for now,
00:54:42 machines aren’t able to do nearly as well as humans,
00:54:45 even just something as mundane as, you know,
00:54:47 folding laundry or whatever.
00:54:48 And many of these, I think are gonna be years or decades
00:54:52 before machines catch up.
00:54:54 You know, I may be surprised on some of them,
00:54:56 but overall, I think there’s plenty of work
00:54:58 for humans to do.
00:54:59 There’s plenty of problems in society
00:55:01 that need the human touch.
00:55:02 So we’ll have to repurpose.
00:55:04 We’ll have to, as machines are able to do some tasks,
00:55:07 people are gonna have to reskill and move into other areas.
00:55:11 And that’s probably what’s gonna be going on
00:55:12 for the next, you know, 10, 20, 30 years or more,
00:55:16 kind of big restructuring of society.
00:55:18 We’ll get wealthier and people will have to do new skills.
00:55:22 Now, if you turn the dial further, I don’t know,
00:55:24 50 or a hundred years into the future,
00:55:26 then, you know, maybe all bets are off.
00:55:29 Then it’s possible that machines will be able to do
00:55:32 most of what people do.
00:55:34 You know, say 100 or 200 years, I think it’s even likely.
00:55:37 And at that point,
00:55:38 then we’re more in the sort of abundance economy.
00:55:41 Then we’re in a world where there’s really little
00:55:44 that humans can do economically better than machines,
00:55:48 other than be human.
00:55:49 And, you know, that will take a transition as well,
00:55:53 kind of more of a transition of how we get meaning in life
00:55:56 and what our values are.
00:55:58 But shame on us if we screw that up.
00:56:00 I mean, that should be like great, great news.
00:56:02 And it kind of saddens me that some people see that
00:56:04 as like a big problem.
00:56:05 I think that would be, should be wonderful
00:56:07 if people have all the health and material things
00:56:10 that they need and can focus on loving each other
00:56:14 and discussing philosophy and playing
00:56:16 and doing all the other things that don’t require work.
00:56:19 Do you think you’d be surprised to see what it’s like,
00:56:23 if we were to travel in time, 100 years into the future,
00:56:27 do you think you’ll be able to,
00:56:29 like if I gave you a month to like talk to people,
00:56:32 no, like let’s say a week,
00:56:34 would you be able to understand what the hell’s going on?
00:56:37 You mean if I was there for a week?
00:56:39 Yeah, if you were there for a week.
00:56:40 A hundred years in the future?
00:56:42 Yeah.
00:56:43 So like, so I’ll give you one thought experiment is like,
00:56:46 isn’t it possible that we’re all living in virtual reality
00:56:49 by then?
00:56:50 Yeah, no, I think that’s very possible.
00:56:52 I’ve played around with some of those VR headsets
00:56:54 and they’re not great,
00:56:55 but I mean the average person spends many waking hours
00:57:00 staring at screens right now.
00:57:03 They’re kind of low res compared to what they could be
00:57:05 in 30 or 50 years, but certainly games
00:57:10 and why not any other interactions could be done with VR?
00:57:15 And that would be a pretty different world
00:57:16 and we’d all, in some ways be as rich as we wanted.
00:57:19 We could have castles and we could be traveling
00:57:21 anywhere we want and it could obviously be multisensory.
00:57:25 So that would be possible and of course there’s people,
00:57:30 you’ve had Elon Musk on and others, there are people,
00:57:33 Nick Bostrom makes the simulation argument
00:57:35 that maybe we’re already there.
00:57:36 We’re already there.
00:57:37 So, but in general, do you ever think about it
00:57:41 in this kind of way, self critically thinking,
00:57:45 how good are you as an economist at predicting
00:57:48 what the future looks like?
00:57:50 Do you have a?
00:57:51 Well, it starts getting harder. I mean,
00:57:52 I feel reasonably comfortable about the next five, 10, 20 years
00:57:55 in terms of that path.
00:57:58 When you start getting truly superhuman
00:58:01 artificial intelligence, it will, kind of by definition,
00:58:06 be able to think of a lot of things
00:58:07 that I couldn’t have thought of and create a world
00:58:09 that I couldn’t even imagine.
00:58:10 And so I’m not sure I can predict what that world
00:58:15 is going to be like.
00:58:16 One thing that AI researchers, AI safety researchers
00:58:19 worry about is what’s called the alignment problem.
00:58:22 When an AI is that powerful,
00:58:25 then they can do all sorts of things.
00:58:27 And you really hope that their values
00:58:30 are aligned with our values.
00:58:32 And it’s even tricky defining what our values are.
00:58:34 I mean, first off, we all have different values.
00:58:37 And secondly, maybe if we were smarter,
00:58:40 we would have better values.
00:58:41 Like, I like to think that we have better values
00:58:44 than we did in 1860, or in the year 200 BC
00:58:50 on a lot of dimensions,
00:58:51 things that we consider barbaric today.
00:58:53 And it may be that if I thought about it more deeply,
00:58:56 I would also be more morally evolved.
00:58:57 Maybe I’d be a vegetarian or stop doing other things
00:59:00 that I do right now, things my future self
00:59:02 would consider kind of immoral.
00:59:05 So that’s a tricky problem,
00:59:07 getting the AI to do what we want,
00:59:11 assuming it’s even a friendly AI.
00:59:12 I mean, I should probably mention
00:59:14 there’s a nontrivial other branch
00:59:17 where we destroy ourselves, right?
00:59:18 I mean, there’s a lot of exponentially improving
00:59:22 technologies that could be ferociously destructive,
00:59:26 whether it’s in nanotechnology or biotech
00:59:29 and weaponized viruses, AI and other things.
00:59:34 Nuclear weapons.
00:59:35 Nuclear weapons, of course.
00:59:36 The old school technology.
00:59:37 Yeah, good old nuclear weapons that could be devastating
00:59:42 or even existential and new things yet to be invented.
00:59:45 So that’s a branch that I think is pretty significant.
00:59:52 And there are those who think that one of the reasons
00:59:54 we haven’t been contacted by other civilizations, right?
00:59:57 Is that once you get to a certain level of complexity
01:00:01 in technology, there’s just too many ways to go wrong.
01:00:04 There’s a lot of ways to blow yourself up.
01:00:06 And people, or I should say species,
01:00:09 end up falling into one of those traps.
01:00:12 The great filter.
01:00:13 The great filter.
01:00:14 I mean, there’s an optimistic view of that.
01:00:16 If there is literally no intelligent life out there
01:00:19 in the universe, or at least in our galaxy,
01:00:22 that means that we’ve passed and survived at least one
01:00:25 of the great filters,
01:00:27 or some of the great filters.
01:00:30 Yeah, no, I think Robin Hanson has a good way,
01:00:32 maybe others have a good way of thinking about this,
01:00:33 that if there are no other intelligent creatures out there
01:00:38 that we’ve been able to detect,
01:00:40 one possibility is that there’s a filter ahead of us.
01:00:43 And when you get a little more advanced,
01:00:44 maybe in a hundred or a thousand or 10,000 years,
01:00:47 things just get destroyed for some reason.
01:00:50 The other one is that the great filter is behind us.
01:00:53 That would be good: most planets don’t even evolve life,
01:00:57 or if they do evolve life,
01:00:58 they don’t evolve intelligent life.
01:01:00 Maybe we’ve gotten past that.
01:01:02 And so now maybe we’re on the good side
01:01:03 of the great filter.
01:01:05 So if we sort of rewind back and look at the timescale
01:01:10 where we could say something a little bit more comfortably,
01:01:12 at five years and 10 years out,
01:01:15 you’ve written about
01:01:20 the impact artificial intelligence might have
01:01:24 on our economy and on jobs.
01:01:28 It’s a fascinating question of what kind of jobs are safe,
01:01:30 what kind of jobs are not.
01:01:32 Can you maybe speak to your intuition
01:01:34 about how we should think about AI changing
01:01:38 the landscape of work?
01:01:39 Sure, absolutely.
01:01:40 Well, this is a really important question
01:01:42 because I think we’re very far
01:01:43 from artificial general intelligence,
01:01:45 which is AI that can just do the full breadth
01:01:48 of what humans can do.
01:01:49 But we do have human level or superhuman level
01:01:52 narrow intelligence, narrow artificial intelligence.
01:01:56 And obviously my calculator can do math a lot better
01:01:59 than I can.
01:02:00 And there’s a lot of other things
01:02:01 that machines can do better than I can.
01:02:03 So which is which?
01:02:04 We actually set out to address that question.
01:02:06 With Tom Mitchell,
01:02:08 I wrote a paper called What Can Machine Learning Do?
01:02:12 that was in Science.
01:02:13 And we went and interviewed a whole bunch of AI experts
01:02:16 and kind of synthesized what they thought machine learning
01:02:20 was good at and wasn’t good at.
01:02:22 And we came up with what we called a rubric,
01:02:25 basically a set of questions you can ask about any task
01:02:28 that will tell you whether it’s likely to score high or low
01:02:30 on suitability for machine learning.
01:02:33 And then we’ve applied that
01:02:34 to a bunch of tasks in the economy.
01:02:36 In fact, there’s a data set of all the tasks
01:02:39 in the US economy, believe it or not, it’s called O*NET.
01:02:41 The US government put it together,
01:02:43 part of the Bureau of Labor Statistics.
01:02:45 They divide the economy into about 970 occupations
01:02:48 like bus driver, economist, primary school teacher,
01:02:52 radiologist, and then for each one of them,
01:02:54 they describe which tasks need to be done.
01:02:57 Like for radiologists, there are 27 distinct tasks.
01:03:00 So we went through all those tasks
01:03:02 to see whether or not a machine could do them.
01:03:04 And what we found interestingly was…
01:03:06 Brilliant study by the way, that’s so awesome.
01:03:08 Yeah, thank you.
01:03:10 So what we found was that there was no occupation
01:03:13 in our data set where machine learning just ran the table
01:03:16 and did everything.
01:03:17 And there was almost no occupation
01:03:18 where machine learning didn’t have
01:03:19 like a significant ability to do things.
01:03:22 Like take radiology, a lot of people I hear saying,
01:03:24 you know, it’s the end of radiology.
01:03:26 And one of the 27 tasks is read medical images.
01:03:29 Really important one, like it’s kind of a core job.
01:03:31 And machines have basically gotten as good
01:03:34 or better than radiologists.
01:03:35 There was just an article in Nature last week,
01:03:38 but they’ve been publishing them for the past few years
01:03:42 showing that machine learning can do as well as humans
01:03:46 on many kinds of diagnostic imaging tasks.
01:03:49 But other things that radiologists do,
01:03:51 they sometimes administer conscious sedation.
01:03:54 They sometimes do physical exams.
01:03:55 They have to synthesize the results
01:03:57 and explain it to the other doctors or to the patients.
01:04:01 In all those categories,
01:04:02 machine learning isn’t really up to snuff yet.
01:04:05 So that job, we’re gonna see a lot of restructuring.
01:04:09 Parts of the job, they’ll hand over to machines.
01:04:11 Others, humans will do more of.
01:04:13 That’s been more or less the pattern for all of them.
01:04:15 So, you know, to oversimplify a bit,
01:04:17 we’re gonna see a lot of restructuring,
01:04:19 reorganization of work.
01:04:20 And it’s really gonna be,
01:04:22 it is a great time for smart entrepreneurs and managers
01:04:24 to do that reinvention of work.
01:04:27 We’re not gonna see mass unemployment.
01:04:30 To get more specifically to your question,
01:04:33 the kinds of tasks that machines tend to be good at
01:04:36 are a lot of routine problem solving,
01:04:39 mapping inputs X into outputs Y.
01:04:42 If you have a lot of data on the Xs and the Ys,
01:04:44 the inputs and the outputs,
01:04:45 you can do that kind of mapping and find the relationships.
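As a minimal sketch of that X-to-Y mapping, with invented data and a simple linear learner standing in for whatever model a real system would use:

```python
# A minimal sketch of the X -> Y mapping described above: given many
# examples of inputs (X) and outputs (Y), a supervised learner fits
# the relationship and then predicts Y for new inputs.
# The data and the choice of a linear model are illustrative only.
from sklearn.linear_model import LinearRegression

X = [[1.0], [2.0], [3.0], [4.0], [5.0]]  # inputs, e.g. hours of work
Y = [2.1, 3.9, 6.2, 7.8, 10.1]           # outputs, e.g. units produced

model = LinearRegression().fit(X, Y)     # learn the mapping from data
print(model.predict([[6.0]]))            # predict Y for an unseen X
```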
01:04:48 They tend to not be very good at,
01:04:50 even now, fine motor control and dexterity.
01:04:53 Emotional intelligence and human interactions
01:04:58 and thinking outside the box, creative work.
01:05:01 If you give it a well structured task,
01:05:03 machines can be very good at it.
01:05:05 But even asking the right questions, that’s hard.
01:05:08 There’s a quote that Andrew McAfee and I use
01:05:10 in our book, Second Machine Age.
01:05:12 Apparently Pablo Picasso was shown an early computer
01:05:16 and he came away kind of unimpressed.
01:05:18 He goes, well, I don’t see what all the fuss is about.
01:05:20 All that does is answer questions.
01:05:23 And to him, the interesting thing was asking the questions.
01:05:26 Yeah, try to replace me, GPT-3, I dare you.
01:05:31 Although some people think I’m a robot.
01:05:33 You have this cool plot that shows,
01:05:37 I just remember where economists land,
01:05:39 where I think the X axis is the income.
01:05:43 And then the Y axis is, I guess,
01:05:46 aggregating the information of how replaceable the job is.
01:05:49 Or I think there’s an index.
01:05:50 There’s a suitability for machine learning index.
01:05:51 Exactly.
01:05:52 So we have all 970 occupations on that chart.
01:05:55 It’s a cool plot.
01:05:56 And it’s a scatter where all four corners
01:05:59 have some occupations.
01:06:01 But there is a definite pattern,
01:06:02 which is the lower wage occupations tend to have more tasks
01:06:05 that are suitable for machine learning, like cashiers.
01:06:07 I mean, anyone who’s gone to a supermarket or CVS
01:06:10 knows that they not only read barcodes,
01:06:12 but they can recognize an apple and an orange
01:06:14 and a lot of other things that human cashiers used to be needed for.
01:06:19 At the other end of the spectrum,
01:06:21 there are some jobs like airline pilot
01:06:23 that are among the highest paid in our economy,
01:06:26 but also a lot of them are suitable for machine learning.
01:06:28 A lot of those tasks are.
01:06:30 And then, yeah, you mentioned economists.
01:06:32 I couldn’t help peeking at those
01:06:33 and they’re paid a fair amount,
01:06:36 maybe not as much as some of us think they should be.
01:06:39 But they have some tasks that are suitable
01:06:43 for machine learning, but for now at least,
01:06:45 most of the tasks of economists
01:06:47 didn’t end up being in that category.
01:06:48 And I should say, I didn’t like create that data.
01:06:50 We just took the analysis and that’s what came out of it.
01:06:54 And over time, that scatter plot will be updated
01:06:57 as the technology improves.
01:06:59 But it was just interesting to see the pattern there.
01:07:02 And it is a little troubling in so far
01:07:05 as if you just take the technology as it is today,
01:07:08 it’s likely to worsen income inequality
01:07:10 on a lot of dimensions.
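As an illustrative sketch of the task-level analysis described above, with invented occupations, tasks, and rubric scores (the actual paper’s rubric, O*NET data, and aggregation method may differ; a plain average is just the simplest choice):

```python
# Hypothetical sketch: score each occupation's tasks on a
# suitability-for-machine-learning (SML) rubric, then aggregate per
# occupation. All names and scores below are made up for illustration.
from statistics import mean

occupation_tasks = {
    "radiologist": {
        "read medical images": 4.6,            # high SML
        "administer conscious sedation": 1.8,  # low SML
        "explain results to patients": 2.0,
    },
    "cashier": {
        "scan items": 4.8,
        "handle payments": 4.5,
        "greet customers": 2.5,
    },
}

for occupation, tasks in occupation_tasks.items():
    sml_index = mean(tasks.values())  # simple average over the tasks
    print(f"{occupation}: SML index = {sml_index:.2f}")
```

Plotting an index like this against median wages would give the kind of scatter described above.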
01:07:12 So on this topic of the effect of AI
01:07:16 on our landscape of work,
01:07:21 one of the people that have been speaking about it
01:07:23 in the public domain, public discourse
01:07:25 is the presidential candidate, Andrew Yang.
01:07:28 Yeah.
01:07:29 What are your thoughts about Andrew?
01:07:31 What are your thoughts about UBI,
01:07:34 that universal basic income
01:07:36 that he made one of the core ideas,
01:07:39 by the way, he has like hundreds of ideas
01:07:40 about like everything, it’s kind of interesting.
01:07:44 But what are your thoughts about him
01:07:45 and what are your thoughts about UBI?
01:07:46 Let me answer the question about his broader approach first.
01:07:52 I mean, I just love that.
01:07:52 He’s really thoughtful, analytical.
01:07:56 I agree with his values.
01:07:58 So that’s awesome.
01:07:59 And he read my book and mentions it sometimes,
01:08:02 so it makes me even more excited.
01:08:04 And the thing that he really made the centerpiece
01:08:07 of his campaign was UBI.
01:08:09 And I was originally kind of a fan of it.
01:08:13 And then as I studied it more, I became less of a fan,
01:08:15 although I’m beginning to come back a little bit.
01:08:17 So let me tell you a little bit of my evolution.
01:08:19 As economists, we look at the problem
01:08:23 of people not having enough income, and the simplest thing
01:08:25 is, well, why don’t we write them a check?
01:08:26 Problem solved.
01:08:28 But then I talked to my sociologist friends
01:08:30 and they really convinced me that just writing a check
01:08:34 doesn’t really get at the core values.
01:08:36 Voltaire once said that work solves three great ills,
01:08:40 boredom, vice, and need.
01:08:43 And you can deal with the need thing by writing a check,
01:08:46 but people need a sense of meaning,
01:08:49 they need something to do.
01:08:50 And when, say, steel workers or coal miners lost their jobs
01:08:57 and were just given checks, alcoholism, depression, divorce,
01:09:03 all those social indicators, drug use, all went way up.
01:09:06 People just weren’t happy
01:09:08 just sitting around collecting a check.
01:09:11 Maybe it’s part of the way they were raised.
01:09:13 Maybe it’s something innate in people
01:09:14 that they need to feel wanted and needed.
01:09:17 So it’s not as simple as just writing people a check.
01:09:19 You need to also give them a way to have a sense of purpose.
01:09:23 And that was important to me.
01:09:25 And the second thing is that, as I mentioned earlier,
01:09:28 we are far from the end of work.
01:09:31 I don’t buy the idea that there’s just like
01:09:32 not enough work to be done.
01:09:34 I see like our cities need to be cleaned up.
01:09:37 And robots can’t do most of that.
01:09:39 We need to have better childcare.
01:09:40 We need better healthcare.
01:09:41 We need to take care of people who are mentally ill or older.
01:09:44 We need to repair our roads.
01:09:46 There’s so much work that requires at least partly,
01:09:49 maybe entirely, a human component.
01:09:52 So rather than like write all these people off,
01:09:54 let’s find a way to repurpose them and keep them engaged.
01:09:58 Now that said, I would like to see more buying power
01:10:04 for people who are sort of at the bottom end
01:10:06 of the spectrum.
01:10:07 The economy has been designed and evolved in a way
01:10:12 that’s I think very unfair to a lot of hardworking people.
01:10:15 I see super hardworking people who aren’t really seeing
01:10:18 their wages grow over the past 20, 30 years,
01:10:20 while some other people who have been super smart
01:10:24 and or super lucky have made billions
01:10:29 or hundreds of billions.
01:10:30 And I don’t think they need those hundreds of billions
01:10:33 to have the right incentives to invent things.
01:10:35 I think if you talk to almost any of them as I have,
01:10:39 they don’t think that they need an extra $10 billion
01:10:42 to do what they’re doing.
01:10:43 Most of them probably would love to do it for only a billion
01:10:48 or maybe for nothing.
01:10:49 For nothing, many of them, yeah.
01:10:50 I mean, an interesting point to make is,
01:10:54 do we think that Bill Gates would have founded Microsoft
01:10:56 if tax rates were 70%?
01:10:58 Well, we know he would have, because tax rates
01:11:01 were 70% when he founded it.
01:11:03 So I don’t think that’s as big a deterrent
01:11:06 and we could provide more buying power to people.
01:11:09 My own favorite tool is the Earned Income Tax Credit,
01:11:12 which is basically a way of supplementing income
01:11:16 of people who have jobs and giving employers
01:11:18 an incentive to hire even more people.
01:11:20 The minimum wage can discourage employment,
01:11:22 but the Earned Income Tax Credit encourages employment
01:11:25 by supplementing people’s wages.
01:11:27 If the employer can only afford to pay them $10 for a task,
01:11:32 the rest of us kick in another $5 or $10
01:11:35 and bring their wages up to $15 or $20 total.
01:11:37 And then they have more buying power.
01:11:39 Then entrepreneurs are thinking, how can we cater to them?
01:11:42 How can we make products for them?
01:11:44 And it becomes a self reinforcing system
01:11:47 where people are better off.
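As a toy version of the wage-supplement arithmetic just described, using the $10-to-$15 numbers from the example rather than the real EITC schedule, which phases in and out with income and family size:

```python
# Toy wage-supplement calculation: a credit tops up the employer's
# wage toward a target. The numbers mirror the example above; the
# real EITC schedule is considerably more complicated.
def supplemented_wage(employer_wage: float, target_wage: float):
    """Return (credit, total wage) needed to reach the target wage."""
    credit = max(0.0, target_wage - employer_wage)
    return credit, employer_wage + credit

credit, total = supplemented_wage(employer_wage=10.0, target_wage=15.0)
print(f"employer pays $10, credit adds ${credit:.0f}, worker earns ${total:.0f}")
```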
01:11:49 Andrew Ng and I had a good discussion
01:11:51 where he suggested, instead of a universal basic income,
01:11:55 or rather an unconditional basic income,
01:11:59 how about a conditional basic income
01:12:00 where the condition is you learn some new skills,
01:12:03 we need to reskill our workforce.
01:12:05 So let’s make it easier for people to find ways
01:12:09 to get those skills and get rewarded for doing them.
01:12:11 And that’s kind of a neat idea as well.
01:12:13 That’s really interesting.
01:12:13 So, I mean, one of the questions,
01:12:16 one of the dreams of UBI is that you provide
01:12:19 some little safety net while you retrain,
01:12:24 while you learn a new skill.
01:12:26 But like, I think, I guess you’re speaking
01:12:28 to the intuition that that doesn’t always,
01:12:31 like there needs to be some incentive to reskill,
01:12:33 to train, to learn a new thing.
01:12:35 I think it helps.
01:12:36 I mean, there are lots of self motivated people,
01:12:37 but there are also people that maybe need a little guidance
01:12:40 or help and I think it’s a really hard question
01:12:44 for someone who is losing a job in one area to know
01:12:48 what is the new area I should be learning skills in.
01:12:50 And we could provide a much better set of tools
01:12:52 and platforms that map it.
01:12:54 Okay, here’s a set of skills you already have.
01:12:56 Here’s something that’s in demand.
01:12:58 Let’s create a path for you to go from where you are
01:13:00 to where you need to be.
01:13:03 So I’m a total, how do I put it nicely about myself?
01:13:07 I’m totally clueless about the economy.
01:13:09 It’s not totally true, but it’s a pretty good approximation.
01:13:12 If you were to try to fix our tax system
01:13:20 and, or maybe from another side,
01:13:23 if there are fundamental problems in taxation
01:13:26 or some fundamental problems with our economy,
01:13:29 what would you try to fix?
01:13:31 What would you try to speak to?
01:13:33 You know, I definitely think our whole tax system,
01:13:36 our political and economic system has gotten more
01:13:40 and more screwed up over the past 20, 30 years.
01:13:43 I don’t think it’s that hard to make headway
01:13:46 in improving it.
01:13:47 I don’t think we need to totally reinvent stuff.
01:13:49 A lot of it is what Andy and I, and others,
01:13:52 have elsewhere called economics 101.
01:13:54 You know, there’s just some basic principles
01:13:56 that have worked really well in the 20th century
01:14:00 that we sort of forgot, you know,
01:14:01 in terms of investing in education,
01:14:03 investing in infrastructure, welcoming immigrants,
01:14:07 having a tax system that was more progressive and fair.
01:14:13 At one point, tax rates on top incomes
01:14:16 were significantly higher.
01:14:18 And they’ve come down a lot to the point where
01:14:19 in many cases they’re lower now
01:14:21 than they are for poorer people.
01:14:24 And we could do things like the earned income tax credit.
01:14:27 To get a little more wonky,
01:14:29 I’d like to see more Pigouvian taxes.
01:14:31 What that means is you tax things that are bad
01:14:35 instead of things that are good.
01:14:36 So right now we tax labor, we tax capital,
01:14:40 which is unfortunate,
01:14:42 because one of the basic principles of economics is that
01:14:44 if you tax something, you tend to get less of it.
01:14:46 So, you know, right now there’s still work to be done
01:14:48 and still capital to be invested in.
01:14:51 But instead we should be taxing things like pollution
01:14:54 and congestion.
01:14:57 And if we did that, we would have less pollution.
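A back-of-envelope sketch of that “tax something, get less of it” principle, assuming a made-up constant-elasticity demand response; every number here is purely illustrative:

```python
# Rough sketch: with approximately constant-elasticity demand, a tax
# that raises the price reduces the quantity consumed:
#   % change in Q ~= elasticity * % change in P.
# Baseline quantity, price, tax, and elasticity are all invented.
def quantity_after_tax(q0: float, p0: float, tax: float,
                       elasticity: float) -> float:
    pct_price_change = tax / p0
    return q0 * (1 + elasticity * pct_price_change)

# e.g., a 10% tax on a polluting good with demand elasticity -0.5
# cuts consumption by roughly 5% (100 -> 95 units)
print(quantity_after_tax(q0=100.0, p0=50.0, tax=5.0, elasticity=-0.5))
```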
01:15:00 So a carbon tax, you know,
01:15:02 almost every economist would say it’s a no brainer,
01:15:04 whether they’re Republican or Democrat.
01:15:07 Greg Mankiw, who was head of George Bush’s
01:15:09 Council of Economic Advisers, or Dick Schmalensee,
01:15:13 who is another Republican economist, agree.
01:15:16 And of course a lot of Democratic economists agree as well.
01:15:21 If we taxed carbon,
01:15:22 we could raise hundreds of billions of dollars.
01:15:26 We could take that money and redistribute it
01:15:28 through an earned income tax credit or other things
01:15:31 so that overall our tax system would become more progressive.
01:15:35 We could tax congestion.
01:15:36 One of the things that kills me as an economist
01:15:39 is every time I sit in a traffic jam,
01:15:41 I know that it’s completely unnecessary.
01:15:43 This is complete wasted time.
01:15:44 You just visualize the cost and productivity.
01:15:47 Exactly, because they’re creating costs for me
01:15:51 and all the people around me.
01:15:52 And if they charged a congestion tax,
01:15:54 they would take that same amount of money
01:15:57 and it would streamline the roads.
01:15:59 Like when you’re in Singapore, the traffic just flows
01:16:01 because they have a congestion tax.
01:16:02 They listened to economists.
01:16:03 They invited me and others to go talk to them.
01:16:06 And then I’d still be paying,
01:16:09 I’d be paying a congestion tax instead of paying in my time,
01:16:11 but that money would now be available for healthcare,
01:16:14 be available for infrastructure,
01:16:15 or be available just to give to people
01:16:16 so they could buy food or whatever.
01:16:18 So it’s just, it saddens me when you sit,
01:16:22 when you’re sitting in a traffic jam,
01:16:23 it’s like taxing me and then taking that money
01:16:25 and dumping it in the ocean, just like destroying it.
01:16:27 So there are a lot of things like that
01:16:29 that economists, and I’m not,
01:16:32 I’m not like doing anything radical here.
01:16:33 Most, you know, good economists would
01:16:36 probably agree with me point by point on these things.
01:16:39 And we could do those things
01:16:41 and our whole economy would become much more efficient.
01:16:43 It’d become fairer. And we’d invest in R&D and research,
01:16:47 which is as close to a free lunch as we have.
01:16:50 My erstwhile MIT colleague, Bob Solow,
01:16:53 got the Nobel Prize, not yesterday, but 30 years ago,
01:16:57 for describing that most improvements
01:17:00 in living standards come from tech progress.
01:17:02 And Paul Romer later got a Nobel Prize
01:17:04 for noting that investments in R&D and human capital
01:17:08 can speed the rate of tech progress.
01:17:11 So if we do that, then we’ll be healthier and wealthier.
01:17:14 Yeah, from an economics perspective,
01:17:16 I remember taking undergrad econ,
01:17:18 you mentioned econ 101.
01:17:20 It seemed from all the plots I saw
01:17:23 that R&D is, as you say, as close to a free lunch as we have.
01:17:29 It seemed obvious that we should do more research.
01:17:32 It is.
01:17:33 Like, what, there’s no…
01:17:36 Well, we should do basic research.
01:17:38 I mean, so let me just be clear.
01:17:39 It’d be great if everybody did more research
01:17:41 and I would make this issue
01:17:42 and I would make this distinction
01:17:46 So applied development, like, you know,
01:17:48 how do we get this self driving car, you know,
01:17:52 feature to work better in the Tesla?
01:17:53 That’s great for private companies
01:17:55 because they can capture the value from that.
01:17:57 If they make a better self driving car system,
01:17:59 they can sell cars that are more valuable
01:18:02 and then make money.
01:18:03 So there’s an incentive there, there’s not a big problem there,
01:18:05 and smart companies, Amazon, Tesla,
01:18:08 and others are investing in it.
01:18:09 The problem is with basic research,
01:18:11 like coming up with core basic ideas,
01:18:14 whether it’s in nuclear fusion
01:18:16 or artificial intelligence or biotech.
01:18:19 There, if someone invents something,
01:18:21 it’s very hard for them to capture the benefits from it.
01:18:23 It’s shared by everybody, which is great in a way,
01:18:26 but it means that they’re not gonna have the incentives
01:18:28 to put as much effort into it.
01:18:30 It’s a classic public good.
01:18:32 There you need the government to be involved in it.
01:18:35 And the US government used to be investing much more in R&D,
01:18:39 but we have slashed that part of the government
01:18:42 really foolishly and we’re all poorer,
01:18:46 significantly poorer as a result.
01:18:48 Growth rates are down.
01:18:50 We’re not having the kind of scientific progress
01:18:51 we used to have.
01:18:53 It’s been sort of a short term eating the seed corn,
01:18:57 whatever metaphor you wanna use
01:19:00 where people grab some money, put it in their pockets today,
01:19:03 but five, 10, 20 years later,
01:19:07 they’re a lot poorer than they otherwise would have been.
01:19:10 So we’re living through a pandemic right now,
01:19:12 globally in the United States.
01:19:16 From an economics perspective,
01:19:18 how do you think this pandemic will change the world?
01:19:23 It’s been remarkable.
01:19:24 And it’s horrible how many people have suffered,
01:19:27 the amount of death, the economic destruction.
01:19:31 It’s also striking just the amount of change in work
01:19:34 that I’ve seen.
01:19:35 In the last 20 weeks, I’ve seen more change
01:19:38 than there was in the previous 20 years.
01:19:41 There’s been nothing like it
01:19:42 since probably the World War II mobilization
01:19:44 in terms of reorganizing our economy.
01:19:47 The most obvious one is the shift to remote work.
01:19:50 And I, like many other people, stopped going into the office
01:19:54 and stopped teaching my students in person.
01:19:56 I did a study on this with a bunch of colleagues
01:19:57 at MIT and elsewhere.
01:19:59 And what we found was that before the pandemic,
01:20:02 in the beginning of 2020, about one in six,
01:20:05 a little over 15% of Americans were working remotely.
01:20:09 When the pandemic hit, that grew steadily and hit 50%,
01:20:13 roughly half of Americans working at home.
01:20:16 So a complete transformation.
01:20:17 And of course, it wasn’t evenly distributed,
01:20:19 it wasn’t like everybody did it.
01:20:20 If you’re an information worker, professional,
01:20:22 if you work mainly with data,
01:20:24 then you’re much more likely to work at home.
01:20:26 If you’re a manufacturing worker,
01:20:28 working with other people or physical things,
01:20:32 then it wasn’t so easy to work at home.
01:20:34 And instead, those people were much more likely
01:20:36 to become laid off or unemployed.
01:20:39 So it’s been something that’s had very disparate effects
01:20:41 on different parts of the workforce.
01:20:44 Do you think it’s gonna be sticky in a sense
01:20:46 that after a vaccine comes out and the economy reopens,
01:20:51 do you think remote work will continue?
01:20:55 That’s a great question.
01:20:57 My hypothesis is yes, a lot of it will.
01:20:59 Of course, some of it will go back,
01:21:00 but a surprising amount of it will stay.
01:21:03 I personally, for instance, I moved my seminars,
01:21:06 my academic seminars to Zoom,
01:21:08 and I was surprised how well it worked.
01:21:10 So it works?
01:21:11 Yeah, I mean, obviously we were able to reach
01:21:13 a much broader audience.
01:21:14 So we have people tuning in from Europe
01:21:16 and other countries,
01:21:18 just all over the United States for that matter.
01:21:20 I also actually found that it,
01:21:21 in many ways, is more egalitarian.
01:21:23 We use the chat feature and other tools,
01:21:25 and grad students and others who might’ve been
01:21:27 a little shy about speaking up,
01:21:29 we now kind of have more of an ability to hear lots of voices.
01:21:32 And they’re answering each other’s questions,
01:21:34 so you kind of get parallel conversations.
01:21:35 Like if someone had some question about some of the data
01:21:39 or a reference or whatever,
01:21:40 then someone else in the chat would answer it.
01:21:42 And the whole thing just became like a higher bandwidth,
01:21:44 higher quality thing.
01:21:46 So I thought that was kind of interesting.
01:21:48 I think a lot of people are discovering that these tools
01:21:51 that, thanks to technologists, have been developed
01:21:54 over the past decade
01:21:56 are a lot more powerful than we thought.
01:21:57 I mean, with all the terrible things we’ve seen with COVID
01:22:00 and the real failure of many of our institutions
01:22:03 that I thought would work better,
01:22:04 one area that’s been a bright spot is our technologies.
01:22:09 Bandwidth has held up pretty well,
01:22:11 and all of our email and other tools
01:22:14 have just scaled up kind of gracefully.
01:22:18 So that’s been a plus.
01:22:20 Economists call this question
01:22:21 of whether it’ll go back hysteresis.
01:22:23 It’s like when you boil an egg:
01:22:25 after it gets cold again, it stays hard.
01:22:29 And I think that we’re gonna have a fair amount
01:22:30 of hysteresis in the economy.
01:22:32 We’re gonna move to this new,
01:22:33 we have moved to a new remote work system,
01:22:35 and it’s not gonna snap all the way back
01:22:37 to where it was before.
01:22:38 One of the things that worries me is that the people
01:22:44 with lots of followers on Twitter and people with voices,
01:22:51 voices that can be magnified by reporters
01:22:56 and all that kind of stuff, are the people
01:22:57 that fall into this category
01:22:59 that we were referring to just now
01:23:01 where they can still function
01:23:03 and be successful with remote work.
01:23:06 And then there is a kind of quiet suffering
01:23:11 of what feels like millions of people
01:23:14 whose jobs are disturbed profoundly by this pandemic,
01:23:21 but they don’t have many followers on Twitter.
01:23:26 What do we, and again, I apologize,
01:23:31 but I’ve been reading the rise and fall of the Third Reich
01:23:35 and there’s a connection to the depression
01:23:38 on the American side.
01:23:39 There’s a deep, complicated connection
01:23:42 to how suffering can turn into forces
01:23:46 that potentially change the world in destructive ways.
01:23:51 So like it’s something I worry about is like,
01:23:53 what is this suffering going to materialize itself
01:23:56 in five, 10 years?
01:23:58 Is that something you worry about, think about?
01:24:01 It’s like the center of what I worry about.
01:24:03 And let me break it down to two parts.
01:24:05 There’s a moral and ethical aspect to it.
01:24:07 We need to relieve this suffering.
01:24:09 I mean, I’m sure the values of most Americans,
01:24:13 and of most people on the planet,
01:24:15 are that we like to see shared prosperity.
01:24:16 And we would like to see people not falling behind
01:24:20 and they have fallen behind, not just due to COVID,
01:24:23 but in the previous couple of decades,
01:24:25 median income has barely moved,
01:24:27 depending on how you measure it.
01:24:29 And the incomes of the top 1% have skyrocketed.
01:24:33 And part of that is due to the ways technology has been used.
01:24:36 Part of it has been due to, frankly, our political system,
01:24:38 which has continually shifted more wealth to those people
01:24:43 who have the powerful interests.
01:24:45 So there’s just, I think, a moral imperative
01:24:48 to do a better job.
01:24:49 And ultimately, we’re all gonna be wealthier
01:24:51 if more people can contribute,
01:24:53 more people have the wherewithal.
01:24:55 But the second thing is that there’s a real political risk.
01:24:58 I’m not a political scientist,
01:24:59 but you don’t have to be one, I think,
01:25:02 to see how a lot of people are really upset
01:25:05 that they’re getting a raw deal,
01:25:07 and they wanted to smash the system in different ways,
01:25:13 in 2016 and 2018.
01:25:15 And now I think there are a lot of people
01:25:18 who are looking at the political system
01:25:19 and they feel like it’s not working for them
01:25:21 and they just wanna do something radical.
01:25:24 Unfortunately, demagogues have harnessed that
01:25:28 in a way that is pretty destructive to the country.
01:25:33 And an analogy I see is what happened with trade.
01:25:37 Almost every economist thinks that free trade
01:25:39 is a good thing, that when two people voluntarily exchange
01:25:42 almost by definition, they’re both better off
01:25:44 if it’s voluntary.
01:25:47 And so generally, trade is a good thing.
01:25:49 But they also recognize that trade can lead
01:25:52 to uneven effects, that there can be winners and losers,
01:25:56 that some of the people didn’t have the skills
01:25:59 to compete with somebody else or didn’t have other assets.
01:26:02 And so trade can shift prices
01:26:04 in ways that are adverse to some people.
01:26:08 So there’s a formula that economists have,
01:26:11 which is that you have free trade,
01:26:13 but then you compensate the people who are hurt
01:26:15 and free trade makes the pie bigger.
01:26:18 And since the pie is bigger,
01:26:19 it’s possible for everyone to be better off.
01:26:21 You can make the winners better off,
01:26:23 but you can also compensate those who don’t win.
01:26:25 And so they end up being better off as well.
01:26:28 What happened was that we didn’t fulfill that promise.
01:26:33 We did have increased free trade
01:26:36 in the 80s and 90s, but we didn’t compensate the people
01:26:39 who were hurt.
01:26:40 And so they felt like the people in power
01:26:43 reneged on the bargain, and I think they did.
01:26:45 And so then there’s a backlash against trade.
01:26:48 And now both political parties,
01:26:50 but especially Trump and company,
01:26:53 have really pushed back against free trade.
01:26:58 Ultimately, that’s bad for the country.
01:27:00 Ultimately, that’s bad for living standards.
01:27:02 But in a way I can understand
01:27:04 that people felt they were betrayed.
01:27:07 Technology has a lot of similar characteristics.
01:27:10 Technology can make us all better off.
01:27:14 It makes the pie bigger.
01:27:16 It creates wealth and health, but it can also be uneven.
01:27:18 Not everyone automatically benefits.
01:27:21 It’s possible for some people,
01:27:22 even a majority of people to get left behind
01:27:25 while a small group benefits.
01:27:28 What most economists would say is,
01:27:29 well, let’s make the pie bigger,
01:27:30 but let’s make sure we adjust the system
01:27:33 so we compensate the people who are hurt.
01:27:35 And since the pie is bigger,
01:27:36 we can make the rich richer,
01:27:38 we can make the middle class richer,
01:27:39 we can make the poor richer.
01:27:40 Mathematically, everyone could be better off.
01:27:43 But again, we’re not doing that.
01:27:45 And again, people are saying this isn’t working for us.
01:27:48 And again, instead of fixing the distribution,
01:27:52 a lot of people are beginning to say,
01:27:54 hey, technology sucks, we’ve got to stop it.
01:27:57 Let’s throw rocks at the Google bus.
01:27:59 Let’s blow it up.
01:27:59 Let’s blow it up.
01:28:01 And there were the Luddites almost exactly 200 years ago
01:28:04 who smashed the looms and the spinning machines
01:28:08 because they felt like those machines weren’t helping them.
01:28:11 We have a real imperative,
01:28:12 not just to do the morally right thing,
01:28:14 but to do the thing that is gonna save the country,
01:28:17 which is make sure that we create
01:28:19 not just prosperity, but shared prosperity.
01:28:22 So you’ve been at MIT for over 30 years, I think.
01:28:27 Don’t tell anyone how old I am.
01:28:28 Yeah, no, that’s true, that’s true.
01:28:30 And you’re now moved to Stanford.
01:28:34 I’m gonna try not to say anything
01:28:37 about how great MIT is.
01:28:39 What’s that move been like?
01:28:41 What, it’s East Coast to West Coast?
01:28:44 Well, MIT is great.
01:28:46 MIT has been very good to me.
01:28:48 It continues to be very good to me.
01:28:49 It’s an amazing place.
01:28:51 I continue to have so many amazing friends
01:28:53 and colleagues there.
01:28:54 I’m very fortunate to have been able
01:28:56 to spend a lot of time at MIT.
01:28:58 Stanford’s also amazing.
01:29:00 And part of what attracted me out here
01:29:01 was not just the weather, but also Silicon Valley,
01:29:04 let’s face it, is really more of the epicenter
01:29:07 of the technological revolution.
01:29:09 And I wanna be close to the people
01:29:10 who are inventing AI and elsewhere.
01:29:12 A lot of it is being invented at MIT, for that matter,
01:29:14 and in Europe and China and elsewhere.
01:29:18 But being a little closer to some of the key technologists
01:29:23 was something that was important to me.
01:29:25 And it may be shallow,
01:29:28 but I also do enjoy the good weather.
01:29:30 And I felt a little ripped off
01:29:33 when I came here a couple of months ago.
01:29:35 And immediately there are the fires
01:29:36 and my eyes were burning, the sky was orange
01:29:39 and there’s the heat waves.
01:29:41 And so it wasn’t exactly what I’ve been promised,
01:29:44 but fingers crossed it’ll get back to better.
01:29:47 But maybe on a brief aside,
01:29:50 there’s been some criticism of academia
01:29:52 and universities from different avenues.
01:29:55 And I, as a person who’s gotten to enjoy universities
01:30:00 as the pure playground of ideas they can be,
01:30:06 always kind of try to find the words
01:30:08 to tell people that these are magical places.
01:30:13 Is there something that you can speak to
01:30:17 that is beautiful or powerful about universities?
01:30:22 Well, sure.
01:30:23 I mean, first off, I mean,
01:30:24 economists have this concept called revealed preference.
01:30:26 You can ask people what they say
01:30:28 or you can watch what they do.
01:30:29 And so obviously by revealed preference, I love academia.
01:30:33 I could be doing lots of other things,
01:30:35 but it’s something I enjoy a lot.
01:30:37 And I think the word magical is exactly right.
01:30:39 At least it is for me.
01:30:41 I do what I love, you know,
01:30:43 hopefully my Dean won’t be listening,
01:30:44 but I would do this for free.
01:30:45 You know, it’s just what I like to do.
01:30:49 I like to do research.
01:30:50 I love to have conversations like this with you
01:30:51 and with my students, with my fellow colleagues.
01:30:53 I love being around the smartest people I can find
01:30:55 and learning something from them
01:30:57 and having them challenge me.
01:30:58 And that just gives me joy.
01:31:02 And every day I find something new and exciting to work on.
01:31:05 And a university environment is really filled
01:31:08 with other people who feel that way.
01:31:09 And so I feel very fortunate to be part of it.
01:31:12 And I’m lucky that I’m in a society
01:31:14 where I can actually get paid for it
01:31:16 and put food on the table
01:31:17 while doing the stuff that I really love.
01:31:19 And I hope someday everybody can have jobs
01:31:21 that are like that.
01:31:22 And I appreciate that it’s not necessarily easy
01:31:25 for everybody to have a job that they both love
01:31:27 and also they get paid for.
01:31:30 So there are things that don’t go well in academia,
01:31:34 but by and large, I think it’s a kind of, you know,
01:31:36 kinder, gentler version of a lot of the world.
01:31:37 You know, we sort of cut each other a little slack
01:31:41 on things like, you know, on just a lot of things.
01:31:45 You know, of course there’s harsh debates
01:31:48 and discussions about things
01:31:49 and some petty politics here and there.
01:31:52 I personally, I try to stay away
01:31:53 from most of that sort of politics.
01:31:55 It’s not my thing.
01:31:56 And so it doesn’t affect me most of the time,
01:31:58 sometimes a little bit, maybe.
01:32:00 But, you know, being able to pull together something,
01:32:03 we have the digital economy lab.
01:32:04 We’ve got all these brilliant grad students
01:32:07 and undergraduates and postdocs
01:32:09 that are just doing stuff that I learned from.
01:32:12 And every one of them has some aspect
01:32:14 of what they’re doing that’s just,
01:32:16 I couldn’t even understand.
01:32:17 It’s like way, way more brilliant.
01:32:19 And that’s really, to me, actually I really enjoy that,
01:32:23 being in a room with lots of other smart people.
01:32:25 And Stanford has made it very easy to attract,
01:32:29 you know, those people.
01:32:31 I just, you know, say I’m gonna do a seminar, whatever,
01:32:33 and the people come, they come and wanna work with me.
01:32:36 We get funding, we get data sets,
01:32:38 and it’s come together real nicely.
01:32:41 And the rest is just fun.
01:32:44 It’s fun, yeah.
01:32:45 And we feel like we’re working on important problems,
01:32:47 you know, and we’re doing things that, you know,
01:32:50 I think are first order in terms of what’s important
01:32:53 in the world, and that’s very satisfying to me.
01:32:56 Maybe a bit of a fun question.
01:32:58 What three books, technical, fiction, philosophical,
01:33:02 you’ve enjoyed, had a big, big impact in your life?
01:33:07 Well, I guess I go back to like my teen years,
01:33:09 and, you know, I read Siddhartha,
01:33:12 which is a philosophical book,
01:33:13 and kind of helps keep me centered.
01:33:15 By Herman Hesse.
01:33:16 Yeah, by Herman Hesse, exactly.
01:33:17 Don’t get too wrapped up in material things
01:33:20 or other things, and just sort of, you know,
01:33:21 try to find peace on things.
01:33:24 A book that actually influenced me a lot
01:33:26 in terms of my career was called
01:33:27 The Worldly Philosophers by Robert Heilbroner.
01:33:30 It’s actually about economists.
01:33:31 It goes through a series of different thinkers.
01:33:33 It’s written in a very lively form,
01:33:34 and it probably sounds boring,
01:33:36 but it did describe whether it’s Adam Smith
01:33:38 or Karl Marx or John Maynard Keynes,
01:33:40 and each of them sort of what their key insights were,
01:33:43 but also kind of their personalities,
01:33:45 and I think that’s one of the reasons
01:33:46 I became an economist was just understanding
01:33:50 how they grapple with the big questions of the world.
01:33:53 So would you recommend it as a good whirlwind overview
01:33:56 of the history of economics?
01:33:57 Yeah, yeah, I think that’s exactly right.
01:33:59 It kind of takes you through the different things,
01:34:00 and so you can understand how they reached their thinking,
01:34:04 some of the strengths and weaknesses.
01:34:06 I mean, it probably is a little out of date now.
01:34:07 It needs to be updated a bit,
01:34:08 but you could at least look through
01:34:10 the first couple hundred years of economics,
01:34:12 which is not a bad place to start.
01:34:15 More recently, I mean, a book I really enjoyed
01:34:17 is by my friend and colleague, Max Tegmark,
01:34:20 called Life 3.0.
01:34:21 You should have him on your podcast if you haven’t already.
01:34:23 He was episode number one.
01:34:25 Oh my God.
01:34:26 And he’s back, he’ll be back, he’ll be back soon.
01:34:30 Yeah, no, he’s terrific.
01:34:31 I love the way his brain works,
01:34:33 and he makes you think about profound things.
01:34:35 He’s got such a joyful approach to life,
01:34:38 and so that’s been a great book,
01:34:41 and I learned a lot from it, I think everybody would,
01:34:43 but he explains it in a way, even though he’s so brilliant,
01:34:45 that everyone can understand, that I can understand.
01:34:50 That’s three, but let me mention maybe one or two others.
01:34:52 I mean, I recently read More From Less
01:34:55 by my sometimes coauthor, Andrew McAfee.
01:34:58 It made me optimistic about how we can continue
01:35:01 to have rising living standards
01:35:04 while living more lightly on the planet.
01:35:06 In fact, because of higher living standards,
01:35:07 because of technology,
01:35:09 because of digitization that I mentioned,
01:35:11 we don’t have to have as big an impact on the planet,
01:35:13 and that’s a great story to tell,
01:35:15 and he documents it very carefully.
01:35:19 You know, a personal kind of self-help book
01:35:21 that I found kind of useful is Atomic Habits.
01:35:24 I think it’s, what’s his name, James Clear.
01:35:26 Yeah, James Clear.
01:35:27 He’s just, yeah, it’s a good name,
01:35:29 because he writes very clearly,
01:35:30 and you know, most of the sentences I read in that book,
01:35:33 I was like, yeah, I know that,
01:35:34 but it just really helps to have somebody like remind you
01:35:37 and tell you and kind of just reinforce it, and it’s helpful.
01:35:40 So build habits in your life that you hope to have,
01:35:45 ones that have a positive impact,
01:35:46 and they don’t have to be big things.
01:35:48 It could be just tiny little ones.
01:35:49 Exactly, I mean, the word atomic,
01:35:50 it’s a little bit of a pun, I think he says.
01:35:52 You know, one, atomic means they’re really small.
01:35:54 You take these little things, but also, like atomic power,
01:35:56 they can have, you know, a big impact.
01:35:59 That’s funny, yeah.
01:36:01 The biggest ridiculous question,
01:36:04 especially to ask an economist, but also a human being,
01:36:06 what’s the meaning of life?
01:36:08 I hope you’ve gotten the answer to that from somebody else.
01:36:11 I think we’re all still working on that one, but what is it?
01:36:14 You know, I actually learned a lot from my son, Luke,
01:36:18 and he’s 19 now, but he’s always loved philosophy,
01:36:22 and he reads way more sophisticated philosophy than I do.
01:36:24 I went and took him to Oxford,
01:36:25 and he spent the whole time like pulling
01:36:27 all these obscure books down and reading them.
01:36:29 And a couple of years ago, we had this argument,
01:36:32 and he was trying to convince me that hedonism
01:36:34 was the ultimate, you know, meaning of life,
01:36:37 just pleasure seeking, and…
01:36:40 Well, how old was he at the time?
01:36:41 17, so…
01:36:42 Okay.
01:36:43 But he made a really good like intellectual argument
01:36:46 for it too,
01:36:47 but you know, it just didn’t strike me as right.
01:36:50 And I think that, you know, while I am kind of a utilitarian,
01:36:54 like, you know, I do think we should do the greatest
01:36:55 good for the greatest number, that’s just too shallow.
01:36:58 And I think I’ve convinced myself that real happiness
01:37:02 doesn’t come from seeking pleasure.
01:37:04 It’s kind of a little, it’s ironic.
01:37:05 Like if you really focus on being happy,
01:37:07 I think it doesn’t work.
01:37:09 You gotta like be doing something bigger.
01:37:12 I think the analogy I sometimes use is, you know,
01:37:14 when you look at a dim star in the sky,
01:37:17 if you look right at it, it kind of disappears,
01:37:19 but you have to look a little to the side,
01:37:20 and then the parts of your retina
01:37:23 that are better at absorbing light,
01:37:24 you know, can pick it up better.
01:37:26 It’s the same thing with happiness.
01:37:27 I think you need to sort of find some other goal,
01:37:32 some meaning in life,
01:37:33 and that ultimately makes you happier
01:37:36 than if you go squarely at just pleasure.
01:37:39 And so for me, you know, the kind of research I do
01:37:42 that I think is trying to change the world,
01:37:44 make the world a better place,
01:37:46 and I’m not like an evolutionary psychologist,
01:37:47 but my guess is that our brains are wired,
01:37:50 not just for pleasure, but we’re social animals,
01:37:53 and we’re wired to like help others.
01:37:57 And ultimately, you know,
01:37:58 that’s something that’s really deeply rooted in our psyche.
01:38:02 And if we do help others, if we do,
01:38:04 or at least feel like we’re helping others,
01:38:06 you know, our reward systems kick in,
01:38:08 and we end up being more deeply satisfied
01:38:10 than if we just do something selfish and shallow.
01:38:13 Beautifully put.
01:38:14 I don’t think there’s a better way to end it, Erik.
01:38:16 You were one of the people when I first showed up at MIT,
01:38:20 that made me proud to be at MIT.
01:38:22 So it’s so sad that you’re now at Stanford,
01:38:24 but I’m sure you’ll do wonderful things at Stanford as well.
01:38:28 I can’t wait till future books,
01:38:30 and people should definitely read your other books.
01:38:32 Well, thank you so much.
01:38:33 And I think we’re all part of the invisible college,
01:38:35 as we call it.
01:38:36 You know, we’re all part of this intellectual
01:38:38 and human community where we all can learn from each other.
01:38:41 It doesn’t really matter physically
01:38:43 where we are so much anymore.
01:38:44 Beautiful.
01:38:45 Thanks for talking today.
01:38:46 My pleasure.
01:38:48 Thanks for listening to this conversation
01:38:49 with Erik Brynjolfsson.
01:38:50 And thank you to our sponsors.
01:38:52 Vincero Watches, the maker of classy,
01:38:55 well performing watches.
01:38:56 Four Sigmatic, the maker of delicious mushroom coffee.
01:39:00 ExpressVPN, the VPN I’ve used for many years
01:39:03 to protect my privacy on the internet.
01:39:05 And CashApp, the app I use to send money to friends.
01:39:09 Please check out these sponsors in the description
01:39:11 to get a discount and to support this podcast.
01:39:14 If you enjoy this thing, subscribe on YouTube.
01:39:17 Review it with five stars on Apple Podcast,
01:39:19 follow on Spotify, support on Patreon,
01:39:22 or connect with me on Twitter at Lex Friedman.
01:39:25 And now, let me leave you with some words
01:39:27 from Albert Einstein.
01:39:29 It has become appallingly obvious
01:39:32 that our technology has exceeded our humanity.
01:39:36 Thank you for listening and hope to see you next time.