Transcript
00:00:00 You’ve studied the human mind, cognition, language, vision, evolution, psychology,
00:00:05 from child to adult, from the level of individual to the level of our entire civilization.
00:00:11 So I feel like I can start with a simple multiple choice question.
00:00:16 What is the meaning of life? Is it A. to attain knowledge as Plato said,
00:00:22 B. to attain power as Nietzsche said, C. to escape death as Ernest Becker said,
00:00:27 D. to propagate our genes as Darwin and others have said,
00:00:33 E. there is no meaning as the nihilists have said,
00:00:37 F. knowing the meaning of life is beyond our cognitive capabilities as Steven Pinker said,
00:00:42 based on my interpretation 20 years ago, and G. none of the above.
00:00:48 I’d say A. comes closest, but I would amend that to
00:00:51 attaining not only knowledge but fulfillment more generally, that is life, health, stimulation,
00:01:00 access to the living cultural and social world.
00:01:06 Now this is our meaning of life. It’s not the meaning of life if you were to ask our genes.
00:01:12 Their meaning is to propagate copies of themselves, but that is distinct from the
00:01:17 meaning that the brain that they lead to sets for itself.
00:01:22 So to you knowledge is a small subset or a large subset?
00:01:27 It’s a large subset, but it’s not the entirety of human striving because we also want to
00:01:35 interact with people. We want to experience beauty. We want to experience the richness
00:01:39 of the natural world, but understanding what makes the universe tick is way up there.
00:01:47 For some of us more than others, certainly for me that’s one of the top five.
00:01:54 So is that a fundamental aspect? Are you just describing your own preference or is this a
00:02:00 fundamental aspect of human nature is to seek knowledge? In your latest book you talk about
00:02:05 the power, the usefulness of rationality and reason and so on. Is that a fundamental
00:02:11 nature of human beings or is it something we should just strive for?
00:02:17 Both. We’re capable of striving for it because it is one of the things that make us what we are,
00:02:23 homo sapiens, wise men. We are unusual among animals in the degree to which we acquire
00:02:32 knowledge and use it to survive. We make tools. We strike agreements via language. We extract
00:02:41 poisons. We predict the behavior of animals. We try to get at the workings of plants.
00:02:47 And when I say we, I don’t just mean we in the modern West, but we as a species everywhere,
00:02:52 which is how we’ve managed to occupy every niche on the planet, how we’ve managed to drive other
00:02:58 animals to extinction. And the refinement of reason in pursuit of human wellbeing, of health,
00:03:06 happiness, social richness, cultural richness is our main challenge in the present. That is
00:03:14 using our intellect, using our knowledge to figure out how the world works, how we work
00:03:19 in order to make discoveries and strike agreements that make us all better off in the long run.
00:03:25 Right. And you do that almost undeniably and in a data driven way in your recent book,
00:03:31 but I’d like to focus on the artificial intelligence aspect of things and not just
00:03:36 artificial intelligence, but natural intelligence too. So 20 years ago in a book you’ve written on
00:03:41 How the Mind Works, you conjecture, again, am I right to interpret things? You can correct me
00:03:49 if I’m wrong, but you conjecture that human thought in the brain may be a result of a
00:03:54 massive network of highly interconnected neurons, so that from this interconnectivity emerges thought.
00:04:01 Compared to artificial neural networks, which we use for machine learning today,
00:04:06 is there something fundamentally more complex, mysterious, even magical about the biological
00:04:12 neural networks versus the ones we’ve been starting to use over the past 60 years, which
00:04:19 have come to success in the past 10? There is something a little bit mysterious
00:04:24 about the human neural networks, which is that each one of us who is a neural network knows that
00:04:31 we ourselves are conscious. Conscious not in the sense of registering our surroundings or even
00:04:36 registering our internal state, but in having subjective first person, present tense experience.
00:04:42 That is when I see red, it’s not just different from green, but there’s a redness to it that I
00:04:49 feel. Whether an artificial system would experience that or not, I don’t know and I don’t think I can
00:04:54 know. That’s why it’s mysterious. If we had a perfectly lifelike robot that was behaviorally
00:05:00 indistinguishable from a human, would we attribute consciousness to it or ought we to attribute
00:05:06 consciousness to it? And that’s something that’s very hard to know. But putting that aside,
00:05:12 putting aside that largely philosophical question, the question is, is there some difference between
00:05:19 the human neural network and the ones that we’re building in artificial intelligence that will mean
00:05:23 that we’re, on the current trajectory, not going to reach the point where we’ve got a lifelike
00:05:30 robot indistinguishable from a human, because the way their so called neural networks are organized
00:05:35 is different from the way ours are organized? I think there’s overlap, but I think there are
00:05:39 some big differences that current neural networks, current so called deep learning systems are in
00:05:48 reality not all that deep. That is, they are very good at extracting high order statistical
00:05:53 regularities, but most of the systems don’t have a semantic level, a level of actual understanding
00:06:00 of who did what to whom, why, where, how things work, what causes what else. Do you think that
00:06:07 kind of thing can emerge as it does? So artificial neural networks are much smaller in the number of
00:06:11 connections and so on than the current human biological networks, but do you think, sort of,
00:06:18 to go to consciousness or to go to this higher level semantic reasoning about things, do you
00:06:22 think that can emerge with just a larger network, with a more richly, weirdly interconnected network?
00:06:29 Let’s separate out consciousness, because consciousness isn’t even a matter of complexity.
00:06:33 A really weird one.
00:06:34 Yeah, you could sensibly ask the question of whether shrimp are conscious, for example,
00:06:38 they’re not terribly complex, but maybe they feel pain. So let’s just put that part of it aside.
00:06:44 But I think sheer size of a neural network is not enough to give it structure and knowledge,
00:06:52 but if it’s suitably engineered, then why not? That is, we’re neural networks, natural selection
00:06:59 did a kind of equivalent of engineering of our brains. So I don’t think there’s anything mysterious
00:07:04 in the sense that no system made out of silicon could ever do what a human brain can do. I think
00:07:11 it’s possible in principle. Whether it’ll ever happen depends not only on how clever we are
00:07:17 in engineering these systems, but whether we even want to, whether that’s even a sensible goal.
00:07:22 That is, you can ask the question, is there any locomotion system that is as good as a human?
00:07:29 Well, we kind of want to do better than a human ultimately in terms of legged locomotion.
00:07:35 There’s no reason that humans should be our benchmark. They’re tools that might be better
00:07:39 in some ways. It may be that we can’t duplicate a natural system because at some point it’s so much
00:07:49 cheaper to use a natural system that we’re not going to invest more brainpower and resources.
00:07:54 So for example, we don’t really have an exact substitute for wood. We still build houses out
00:08:00 of wood. We still build furniture out of wood. We like the look. We like the feel. It has certain
00:08:05 properties that synthetics don’t. It’s not that there’s anything magical or mysterious about wood.
00:08:11 It’s just that the extra steps of duplicating everything about wood is something we just haven’t
00:08:17 bothered with, because we have wood. Likewise, say cotton. I’m wearing cotton clothing now. It feels
00:08:21 much better than polyester. It’s not that cotton has something magic in it. It’s not that we couldn’t
00:08:29 ever synthesize something exactly like cotton, but at some point it’s just not worth it. We’ve got
00:08:35 cotton. Likewise, in the case of human intelligence, the goal of making an artificial system that is
00:08:42 exactly like the human brain is a goal that probably no one is going to pursue to the bitter
00:08:47 end, I suspect, because if you want tools that do things better than humans, you’re not going to
00:08:53 care whether it does something like humans. So for example, diagnosing cancer or predicting the
00:08:58 weather, why set humans as your benchmark? But in general, I suspect you also believe
00:09:05 that even if the human should not be a benchmark and we don’t want to imitate humans in their
00:09:10 system, there’s a lot to be learned about how to create an artificial intelligence system by
00:09:15 studying the human. Yeah, I think that’s right. In the same way that to build flying machines,
00:09:22 we want to understand the laws of aerodynamics, including birds, but not mimic the birds,
00:09:27 but they’re the same laws. You have a view on AI, artificial intelligence, and safety
00:09:35 that, from my perspective, is refreshingly rational or perhaps more importantly, has elements
00:09:47 of positivity to it, which I think can be inspiring and empowering as opposed to paralyzing.
00:09:53 For many people, including AI researchers, the eventual existential threat of AI is obvious,
00:09:59 not only possible, but obvious. And for many others, including AI researchers, the threat
00:10:05 is not obvious. So Elon Musk is famously in the highly concerned about AI camp, saying things like
00:10:14 AI is far more dangerous than nuclear weapons, and that AI will likely destroy human civilization.
00:10:21 So in February, you said that if Elon was really serious about AI, the threat
00:10:29 of AI, he would stop building self driving cars that he’s doing very successfully as part of Tesla.
00:10:35 Then he said, wow, if even Pinker doesn’t understand the difference between narrow AI,
00:10:40 like a car and general AI, when the latter literally has a million times more compute power
00:10:47 and an open ended utility function, humanity is in deep trouble. So first, what did you mean by
00:10:54 the statement about Elon Musk should stop building self driving cars if he’s deeply concerned?
00:11:00 Not the first time that Elon Musk has fired off an intemperate tweet.
00:11:04 Well, we live in a world where Twitter has power.
00:11:07 Yes. Yeah, I think there are two kinds of existential threat that have been discussed
00:11:16 in connection with artificial intelligence, and I think that they’re both incoherent.
00:11:20 One of them is a vague fear of AI takeover, that just as we subjugated animals and less technologically
00:11:29 advanced peoples, so if we build something that’s more advanced than us, it will inevitably turn us
00:11:34 into pets or slaves or domesticated animal equivalents. I think this confuses intelligence
00:11:42 with a will to power, that it so happens that in the intelligence system we are most familiar with,
00:11:49 namely homo sapiens, we are products of natural selection, which is a competitive process,
00:11:54 and so bundled together with our problem solving capacity are a number of nasty traits like
00:12:00 dominance and exploitation and maximization of power and glory and resources and influence.
00:12:08 There’s no reason to think that sheer problem solving capability will set that as one of its
00:12:13 goals. Its goals will be whatever we set its goals as, and as long as someone isn’t building a
00:12:18 megalomaniacal artificial intelligence, then there’s no reason to think that it would naturally
00:12:24 evolve in that direction. Now, you might say, well, what if we gave it the goal of maximizing
00:12:28 its own power source? That’s a pretty stupid goal to give an autonomous system. You don’t give it
00:12:34 that goal. I mean, that’s just self evidently idiotic. So if you look at the history of the
00:12:40 world, there’s been a lot of opportunities where engineers could instill in a system
00:12:45 destructive power and they choose not to because that’s the natural process of engineering.
00:12:49 Well, except for weapons. I mean, if you’re building a weapon, its goal is to destroy people,
00:12:53 and so I think there are good reasons to not build certain kinds of weapons. I think building
00:12:58 nuclear weapons was a massive mistake. You do. So maybe pause on that because that is one of
00:13:06 the serious threats. Do you think that it was a mistake in a sense that it should have been
00:13:12 stopped early on? Or do you think it’s just an unfortunate event of invention that this was
00:13:19 invented? Do you think it’s possible to stop? I guess is the question. It’s hard to rewind the
00:13:23 clock because of course it was invented in the context of World War II and the fear that the
00:13:28 Nazis might develop one first. Then once it was initiated for that reason, it was hard to turn
00:13:35 off, especially since winning the war against the Japanese and the Nazis was such an overwhelming
00:13:42 goal of every responsible person that there’s just nothing that people wouldn’t have done then
00:13:47 to ensure victory. It’s quite possible if World War II hadn’t happened that nuclear weapons
00:13:52 wouldn’t have been invented. We can’t know, but I don’t think it was by any means a necessity,
00:13:57 any more than some of the other weapon systems that were envisioned but never implemented,
00:14:02 like planes that would disperse poison gas over cities like crop dusters or systems to try to
00:14:10 create earthquakes and tsunamis in enemy countries, to weaponize the weather,
00:14:16 weaponize solar flares, all kinds of crazy schemes that we thought the better of.
00:14:21 I think analogies between nuclear weapons and artificial intelligence are fundamentally
00:14:25 misguided because the whole point of nuclear weapons is to destroy things. The point of
00:14:30 artificial intelligence is not to destroy things. So the analogy is misleading.
00:14:36 So there’s two fears about artificial intelligence you mentioned. The first one I guess is the highly
00:14:39 intelligent, power hungry one.
00:14:42 Yeah, it’s a system that we design ourselves where we give it the goals. Goals are external to
00:14:46 the means to attain the goals. If we don’t design an artificially intelligent system to
00:14:55 maximize dominance, then it won’t maximize dominance. It’s just that we’re so familiar
00:15:00 with homo sapiens where these two traits come bundled together, particularly in men,
00:15:06 that we are apt to confuse high intelligence with a will to power, but that’s just an error.
00:15:15 The other fear is that we’ll be collateral damage, that we’ll give artificial intelligence a goal
00:15:21 like make paper clips and it will pursue that goal so brilliantly that before we can stop it,
00:15:27 it turns us into paper clips. We’ll give it the goal of curing cancer and it will turn us into
00:15:32 guinea pigs for lethal experiments or give it the goal of world peace and its conception of world
00:15:38 peace is no people, therefore no fighting and so it will kill us all. Now I think these are utterly
00:15:43 fanciful. In fact, I think they’re actually self defeating. They first of all assume that we’re
00:15:49 going to be so brilliant that we can design an artificial intelligence that can cure cancer,
00:15:53 but so stupid that we don’t specify what we mean by curing cancer in enough detail that it won’t
00:15:59 kill us in the process and it assumes that the system will be so smart that it can cure cancer,
00:16:06 but so idiotic that it can’t figure out that what we mean by curing cancer is not killing everyone.
00:16:12 I think that the collateral damage scenario, the value alignment problem is also based on
00:16:18 a misconception. So one of the challenges, of course, we don’t know how to build either system
00:16:23 currently or are we even close to knowing? Of course, those things can change overnight,
00:16:27 but at this time, theorizing about it is very challenging in either direction. So that’s
00:16:33 probably at the core of the problem: without that ability to reason about the real engineering
00:16:39 things here at hand, your imagination runs away with things. Exactly. But let me sort of ask,
00:16:45 what do you think was the motivation, the thought process of Elon Musk? I build autonomous vehicles,
00:16:52 I study autonomous vehicles, I study Tesla autopilot. I think it is one of the greatest
00:16:57 currently large scale application of artificial intelligence in the world. It has potentially a
00:17:04 very positive impact on society. So how does a person who’s creating this very good quote unquote
00:17:10 narrow AI system also seem to be so concerned about this other general AI? What do you think
00:17:19 is the motivation there? What do you think is the thing? Well, you probably have to ask him,
00:17:23 but he is notoriously flamboyant, impulsive, as we have just seen,
00:17:31 to the detriment of his own goals of the health of the company. So I don’t know what’s going on
00:17:37 in his mind. You probably have to ask him, but I don’t think the distinction
00:17:42 between special purpose AI and so called general AI is relevant. In the same way that special
00:17:50 purpose AI is not going to do anything conceivable in order to attain a goal, all engineering systems
00:17:57 are designed to trade off across multiple goals. When we build cars in the first place,
00:18:02 we didn’t forget to install brakes because the goal of a car is to go fast. It occurred to people,
00:18:08 yes, you want it to go fast, but not always. So you would build in brakes too. Likewise,
00:18:13 if a car is going to be autonomous and you program it to take the shortest route to the airport,
00:18:20 it’s not going to take the diagonal and mow down people and trees and fences because that’s the
00:18:24 shortest route. That’s not what we mean by the shortest route when we program it. And that’s just
00:18:29 what an intelligence system is by definition. It takes into account multiple constraints.
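The multi-constraint point here can be sketched in code: a planner whose cost function encodes both distance and penalties for harmful moves, so "shortest route" never means plowing through obstacles. This is a minimal illustrative sketch, assuming a toy grid world; the `plan` function, grid, and penalty value are hypothetical, not anything from the conversation.

```python
import heapq

def plan(grid, start, goal, obstacle_penalty=1000.0):
    """Dijkstra over a grid where grid[r][c] == 1 marks an obstacle
    (a person, tree, or fence). The objective is not raw distance but
    distance plus a heavy penalty for entering obstacle cells, so the
    planner trades off multiple goals at once."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            # Reconstruct the chosen route from the predecessor map.
            path = [node]
            while node in prev:
                node = prev[node]
                path.append(node)
            return d, path[::-1]
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                step = 1.0 + (obstacle_penalty if grid[nr][nc] else 0.0)
                nd = d + step
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    return float("inf"), []

# A column of obstacles sits between start and goal.
grid = [
    [0, 1, 0],
    [0, 1, 0],
    [0, 0, 0],
]
cost, path = plan(grid, (0, 0), (0, 2))
# The planner detours around the obstacle column instead of crossing it.
```

Raising or lowering `obstacle_penalty` is the trade-off dial: the geometric shortcut is still representable, it is just never chosen because the cost function, like the brakes on a car, was built in from the start.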
00:18:36 The same is true, in fact, even more true of so called general intelligence. That is,
00:18:41 if it’s genuinely intelligent, it’s not going to pursue some goal singlemindedly, omitting every
00:18:48 other consideration and collateral effect. That’s not artificial general intelligence. That’s
00:18:54 artificial stupidity. I agree with you, by the way, on the promise of autonomous vehicles for
00:19:01 improving human welfare. I think it’s spectacular. And I’m surprised at how little press coverage
00:19:06 notes that in the United States alone, something like 40,000 people die every year on the highways,
00:19:11 vastly more than are killed by terrorists. And we spent a trillion dollars on a war to combat
00:19:18 deaths by terrorism, about half a dozen a year. Whereas year in, year out, 40,000 people are
00:19:24 massacred on the highways, which could be brought down to very close to zero. So I’m with you on
00:19:29 the humanitarian benefit. Let me just mention that as a person who’s building these cars,
00:19:34 it is a little bit offensive to me to say that engineers would be clueless enough not to engineer
00:19:39 safety into systems. I often stay up at night thinking about those 40,000 people that are dying.
00:19:45 And everything I tried to engineer is to save those people’s lives. So every new invention that
00:19:50 I’m super excited about, in all the deep learning literature and CVPR conferences and NIPS, everything
00:19:59 I’m super excited about is all grounded in making it safe and helping people. So I just don’t see how
00:20:08 that trajectory can all of a sudden slip into a situation where intelligence will be highly
00:20:13 negative. You and I certainly agree on that. And I think that’s only the beginning of the
00:20:17 potential humanitarian benefits of artificial intelligence. There’s been enormous attention to
00:20:24 what are we going to do with the people whose jobs are made obsolete by artificial intelligence,
00:20:28 but very little attention given to the fact that the jobs that are going to be made obsolete are
00:20:32 horrible jobs. The fact that people aren’t going to be picking crops and making beds and driving
00:20:38 trucks and mining coal, these are soul deadening jobs. And we have a whole literature sympathizing
00:20:45 with the people stuck in these menial, mind deadening, dangerous jobs. If we can eliminate
00:20:53 them, this is a fantastic boon to humanity. Now granted, you solve one problem and there’s another
00:20:58 one, namely, how do we get these people a decent income? But if we’re smart enough to invent machines
00:21:05 that can make beds and put away dishes and handle hospital patients, I think we’re smart enough to
00:21:12 figure out how to redistribute income to apportion some of the vast economic savings to the human
00:21:19 beings who will no longer be needed to make beds. Okay. Sam Harris says that it’s obvious that
00:21:26 eventually AI will be an existential risk. He’s one of the people who says it’s obvious.
00:21:31 We don’t know when, the claim goes, but eventually it’s obvious. And because we don’t know when,
00:21:38 we should worry about it now. This is a very interesting argument in my eyes. So how do we
00:21:45 think about timescale? How do we think about existential threats when we don’t really, we know
00:21:51 so little about the threat, unlike nuclear weapons perhaps, about this particular threat, that it
00:21:58 could happen tomorrow, right? But very likely it won’t. Very likely it’d be a hundred years away.
00:22:04 So how do we ignore it? How do we talk about it? Do we worry about it? How do we think about those?
00:22:12 What is it?
00:22:13 A threat that we can imagine. It’s within the limits of our imagination,
00:22:18 but not within our limits of understanding to accurately predict it.
00:22:24 But what is the it that we’re afraid of?
00:22:26 Sorry. AI being the existential threat.
00:22:30 AI. How? Like enslaving us or turning us into paperclips?
00:22:35 I think the most compelling from the Sam Harris perspective would be the paperclip situation.
00:22:39 Yeah. I mean, I just think it’s totally fanciful. I mean, that is, don’t build a system like that.
00:22:43 First of all, the code of engineering is you don’t implement a system with
00:22:50 massive control before testing it. Now, perhaps the culture of engineering will radically change.
00:22:55 Then I would worry, but I don’t see any signs that engineers will suddenly do idiotic things,
00:23:00 like put an electric power plant in control of a system that they haven’t tested first.
00:23:07 Or, all of these scenarios not only imagine an almost magically powered intelligence,
00:23:15 including things like cure cancer, which is probably an incoherent goal because there’s
00:23:20 so many different kinds of cancer or bring about world peace. I mean, how do you even specify that
00:23:25 as a goal? But the scenarios also imagine some degree of control of every molecule in the
00:23:31 universe, which not only is itself unlikely, but we would not start to connect these systems to
00:23:39 infrastructure without testing as we would any kind of engineering system.
00:23:45 Now, maybe some engineers will be irresponsible and we need legal and regulatory
00:23:53 responsibility implemented so that engineers don’t do things that are stupid by their own standards.
00:24:00 But I’ve never seen enough of a plausible scenario of existential threat to devote large
00:24:08 amounts of brain power to forestall it. So you believe in the sort of power, en masse,
00:24:14 of the engineering of reason, as you argue in your latest book, of reason and science, to sort of
00:24:20 be the very thing that guides the development of new technology so it’s safe and also keeps us safe.
00:24:28 You know, granted the same culture of safety that currently is part of the engineering mindset for
00:24:34 airplanes, for example. So yeah, I don’t think that that should be thrown out the window and
00:24:40 that untested all powerful systems should be suddenly implemented, but there’s no reason to
00:24:45 think they are. And in fact, if you look at the progress of artificial intelligence, it’s been,
00:24:50 you know, it’s been impressive, especially in the last 10 years or so, but the idea that suddenly
00:24:54 there’ll be a step function that all of a sudden before we know it, it will be all powerful,
00:25:00 that there’ll be some kind of recursive self improvement, some kind of foom, is also fanciful.
00:25:06 Certainly not by the technology that now impresses us, such as deep learning,
00:25:13 where you train something on hundreds of thousands or millions of examples.
00:25:18 There aren’t hundreds of thousands of problems of which curing cancer is a typical example.
00:25:26 And so the kind of techniques that have allowed AI to increase in the last five years are not the
00:25:31 kind that are going to lead to this fantasy of exponential sudden self improvement. I think it’s
00:25:40 kind of a magical thinking. It’s not based on our understanding of how AI actually works.
00:25:45 Now give me a chance here. So you said fanciful, magical thinking. In his TED talk,
00:25:51 Sam Harris says that thinking about AI killing all human civilization is somehow fun,
00:25:55 intellectually. Now I have to say, as a scientist and engineer, I don’t find it fun,
00:26:00 but when I’m having beer with my non AI friends, there is indeed something fun and appealing about
00:26:08 it. Like talking about an episode of Black Mirror, considering if a large meteor is headed towards
00:26:14 Earth, we were just told a large meteor is headed towards Earth, something like this. And can you
00:26:20 relate to this sense of fun? And do you understand the psychology of it?
00:26:24 Yes. Good question. I personally don’t find it fun. I find it kind of actually a waste of time
00:26:32 because there are genuine threats that we ought to be thinking about like pandemics, like cyber
00:26:39 security vulnerabilities, like the possibility of nuclear war and certainly climate change.
00:26:46 You know, this is enough to fill many conversations. And I think Sam did put his
00:26:54 finger on something, namely that there is a community, sometimes called the rationality
00:27:00 community, that delights in using its brainpower to come up with scenarios that would not occur
00:27:07 to mere mortals, to less cerebral people. So there is a kind of intellectual thrill in finding new
00:27:14 things to worry about that no one has worried about yet. I actually think, though, that
00:27:19 not only is it a kind of fun that doesn’t give me particular pleasure, but I think there can be a
00:27:25 pernicious side to it, namely that you overcome people with such dread, such fatalism, that there
00:27:32 are so many ways to die, to annihilate our civilization, that we may as well enjoy life
00:27:39 while we can. There’s nothing we can do about it. If climate change doesn’t do us in, then runaway
00:27:42 robots will. So let’s enjoy ourselves now. We’ve got to prioritize. We have to look at threats that
00:27:52 are close to certainty, such as climate change, and distinguish those from ones that are merely
00:27:58 imaginable but with infinitesimal probabilities. And we have to take into account people’s worry
00:28:05 budget. You can’t worry about everything. And if you sow dread and fear and terror and fatalism,
00:28:12 it can lead to a kind of numbness. Well, these problems are overwhelming, and the engineers are
00:28:17 just going to kill us all. So let’s either destroy the entire infrastructure of science, technology,
00:28:26 or let’s just enjoy life while we can. So there’s a certain line of worry, and I’m worried about
00:28:32 a lot of things in engineering, but there’s a certain line of worry that, when you
00:28:36 cross it, becomes paralyzing fear as opposed to productive fear. And that’s kind of what
00:28:44 you’re highlighting. Exactly right. And we know that human effort is not
00:28:50 well calibrated against risk, because a basic tenet of cognitive psychology is that
00:28:58 perception of risk and hence perception of fear is driven by imaginability, not by data. And so we
00:29:05 misallocate vast amounts of resources to avoiding terrorism, which kills on average about six
00:29:11 Americans a year, with the one exception of 9/11. We invade countries, we invent entire new departments
00:29:18 of government with massive, massive expenditure of resources and lives to defend ourselves against
00:29:25 a trivial risk. Whereas guaranteed risks, one of which you mentioned, traffic fatalities, and even
00:29:34 risks that are not here, but are plausible enough to worry about like pandemics, like nuclear war,
00:29:45 receive far too little attention. In presidential debates, there’s no discussion of how to minimize
00:29:51 the risk of nuclear war. Lots of discussion of terrorism, for example. And so I think it’s
00:29:58 essential to calibrate our budget of fear, worry, concern, planning to the actual probability of
00:30:08 harm. Yep. So let me ask this question. So speaking of imaginability, you said it’s important to think
00:30:15 about reason and one of my favorite people who likes to dip into the outskirts of reason through
00:30:23 fascinating exploration of his imagination is Joe Rogan. Oh yes. So he used to
00:30:32 believe a lot of conspiracies and through reason has stripped away a lot of those beliefs.
00:30:37 So it’s fascinating actually to watch him through rationality kind of throw away the ideas
00:30:43 of Bigfoot and 9/11. I’m not sure exactly. Chemtrails. I don’t know what he believes in. Yes.
00:30:50 Okay. But he no longer believes in them. No, that’s right. No, he’s become a real force for good.
00:30:55 Yep. So you were on the Joe Rogan podcast in February and had a fascinating conversation,
00:31:00 but as far as I remember, didn’t talk much about artificial intelligence. I will be on his podcast
00:31:05 in a couple of weeks. Joe is very much concerned about existential threat of AI. I’m not sure if
00:31:11 you’re, this is why I was hoping that you would get into that topic. And in this way,
00:31:17 he represents quite a lot of people who look at the topic of AI from 10,000 foot level.
00:31:22 So as an exercise of communication, you said it’s important to be rational and reason
00:31:29 about these things. Let me ask, if you were to coach me as an AI researcher about how to speak
00:31:34 to Joe and the general public about AI, what would you advise? Well, the short answer would be to
00:31:40 read the sections that I wrote in Enlightenment Now about AI, but a longer answer would be, I
00:31:45 think to emphasize, and I think you’re very well positioned as an engineer to remind people about
00:31:50 the culture of engineering, that it really is safety oriented. In another discussion in
00:31:57 Enlightenment Now, I plot rates of accidental death from various causes, plane crashes, car
00:32:04 crashes, occupational accidents, even death by lightning strikes. And they all plummet because
00:32:12 the culture of engineering is how do you squeeze out the lethal risks, death by fire, death by
00:32:18 drowning, death by asphyxiation, all of them drastically declined because of advances in
00:32:24 engineering that, I’ve got to say, I did not appreciate until I saw those graphs. And it is because
00:32:29 exactly, people like you who stay up at night thinking, oh my God, is what I’m inventing likely
00:32:37 to hurt people and to deploy ingenuity to prevent that from happening. Now, I’m not an engineer,
00:32:43 although I spent 22 years at MIT, so I know something about the culture of engineering.
00:32:48 My understanding is that this is the way you think if you’re an engineer. And it’s essential
00:32:53 that that culture not be suddenly switched off when it comes to artificial intelligence. So,
00:32:59 I mean, that could be a problem, but is there any reason to think it would be switched off?
00:33:02 I don’t think so. Though for one, there are not enough engineers speaking up in this
00:33:06 way, for the excitement, for the positive view of human nature. What we’re trying to create
00:33:13 is positivity. Everything we try to invent is meant to do good for the world.
00:33:18 But let me ask you about the psychology of negativity. It seems just objectively,
00:33:23 not considering the topic, it seems that being negative about the future makes you sound smarter
00:33:28 than being positive about the future, regardless of topic. Am I correct in this observation? And
00:33:34 if so, why do you think that is? Yeah, I think there is that phenomenon that,
00:33:40 as Tom Lehrer, the satirist said, always predict the worst and you’ll be hailed as a prophet.
00:33:45 It may be part of our overall negativity bias. We are as a species more attuned to the negative
00:33:52 than the positive. We dread losses more than we enjoy gains. And that might open up a space for
00:34:02 prophets to remind us of harms and risks and losses that we may have overlooked.
00:34:07 So I think there is that asymmetry. So you’ve written some of my favorite books
00:34:16 all over the place. So starting from Enlightenment Now to The Better Angels of Our Nature,
00:34:21 The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates,
00:34:29 big fan too, said of your most recent book that it’s my new favorite book of all time.
00:34:37 So for you as an author, what was a book early on in your life that had a profound impact on the
00:34:43 way you saw the world? Certainly this book, Enlightenment Now, was influenced by David
00:34:49 Deutsch’s The Beginning of Infinity, a rather deep reflection on knowledge and the power of
00:34:55 knowledge to improve the human condition. And with bits of wisdom such as that problems are
00:35:02 inevitable but problems are solvable given the right knowledge and that solutions create new
00:35:07 problems that have to be solved in their turn. That’s I think a kind of wisdom about the human
00:35:11 condition that influenced the writing of this book. There are some books that are excellent
00:35:16 but obscure, some of which I have on a page on my website. I read a book called The History of Force,
00:35:22 self published by a political scientist named James Payne on the historical decline of violence
00:35:27 and that was one of the inspirations for The Better Angels of Our Nature.
00:35:33 What about early on? If you look back when you were maybe a teenager?
00:35:38 I loved a book called One, Two, Three... Infinity. When I was a young adult I read that book by
00:35:43 George Gamow, the physicist, which had very accessible and humorous explanations of
00:35:48 relativity, of number theory, of dimensionality, of higher-dimensional spaces in a way that I
00:35:59 think is still delightful 70 years after it was published. I liked the Time-Life Science series.
00:36:06 These are books that would arrive every month that my mother subscribed to, each one on a different
00:36:11 topic. One would be on electricity, one would be on forests, one would be on evolution and then one
00:36:17 was on the mind. I was just intrigued that there could be a science of mind and that book I would
00:36:24 cite as an influence as well. Then later on… That’s when you fell in love with the idea of
00:36:28 studying the mind? Was that the thing that grabbed you? It was one of the things I would say. I read
00:36:35 as a college student the book Reflections on Language by Noam Chomsky, who spent most of his
00:36:41 career here at MIT. Richard Dawkins, two books, The Blind Watchmaker and The Selfish Gene,
00:36:47 were enormously influential, mainly for the content but also for the writing style, the
00:36:55 ability to explain abstract concepts in lively prose. Stephen Jay Gould’s first collection,
00:37:02 Ever Since Darwin, also an excellent example of lively writing. George Miller, a psychologist that
00:37:10 most psychologists are familiar with, came up with the idea that human memory has a capacity of
00:37:16 seven plus or minus two chunks. That’s probably his biggest claim to fame. But he wrote a couple
00:37:20 of books on language and communication that I read as an undergraduate. Again, beautifully written
00:37:25 and intellectually deep. Wonderful. Stephen, thank you so much for taking the time today.
00:37:31 My pleasure. Thanks a lot, Lex.