Transcript
00:00:00 The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics
00:00:05 for his integration of economic science with the psychology of human behavior,
00:00:10 judgment, and decision making. He’s the author of the popular book Thinking Fast and Slow that
00:00:16 summarizes in an accessible way his research of several decades, often in collaboration with
00:00:22 Amos Tversky on cognitive biases, prospect theory, and happiness. The central thesis of this work
00:00:29 is the dichotomy between two modes of thought. What he calls system one is fast, instinctive,
00:00:35 and emotional. System two is slower, more deliberative, and more logical. The book
00:00:41 delineates cognitive biases associated with each of these two types of thinking.
00:00:46 His study of the human mind and its peculiar and fascinating limitations are both instructive and
00:00:53 inspiring for those of us seeking to engineer intelligent systems. This is the Artificial
00:00:59 Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast,
00:01:05 follow on Spotify, support it on Patreon, or simply connect with me on Twitter at
00:01:10 Lex Fridman spelled F R I D M A N. I recently started doing ads at the end of the introduction.
00:01:16 I’ll do one or two minutes after introducing the episode and never any ads in the middle
00:01:21 that can break the flow of the conversation. I hope that works for you and doesn’t hurt the
00:01:25 listening experience. This show is presented by Cash App, the number one finance app in the App
00:01:32 Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell,
00:01:37 and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy
00:01:43 fractions of a stock, say one dollar’s worth, no matter what the stock price is. Broker services
00:01:48 are provided by Cash App Investing, a subsidiary of Square and member SIPC. I’m excited to be
00:01:55 working with Cash App to support one of my favorite organizations called FIRST, best known
00:02:00 for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands
00:02:05 of students in over 110 countries and have a perfect rating at Charity Navigator, which means
00:02:11 that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google
00:02:17 Play and use code LEXPODCAST, you’ll get $10 and Cash App will also donate $10 to FIRST,
00:02:24 which again is an organization that I’ve personally seen inspire girls and boys to dream
00:02:29 of engineering a better world. And now here’s my conversation with Daniel Kahneman.
00:02:36 You tell a story of an SS soldier early in the war, World War II, in Nazi occupied France in
00:02:43 Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy,
00:02:50 maybe not realizing that you were Jewish.
00:02:53 Not maybe, certainly not.
00:02:56 So I told you I’m from the Soviet Union that was significantly impacted by the war as well,
00:03:01 and I’m Jewish as well. What do you think World War II taught us about human psychology broadly?
00:03:09 Well, I think the only big surprise is the extermination policy, genocide,
00:03:17 by the German people. That’s when you look back on it, and I think that’s a major surprise.
00:03:27 It’s a surprise because…
00:03:28 It’s a surprise that they could do it. It’s a surprise that enough people
00:03:34 willingly participated in that. This is a surprise. Now it’s no longer a surprise,
00:03:41 but it’s changed many people’s views, I think, about human beings. Certainly for me,
00:03:50 the Eichmann trial, that teaches you something because it’s very clear that if it could happen
00:03:58 in Germany, it could happen anywhere. It’s not that the Germans were special.
00:04:04 This could happen anywhere.
00:04:05 So what do you think that is? Do you think we’re all capable of evil? We’re all capable of cruelty?
00:04:13 I don’t think in those terms. I think that what is certainly possible is you can dehumanize people
00:04:23 so that you treat them not as people anymore, but as animals. And the same way that you can slaughter
00:04:32 animals without feeling much of anything, it can be the same. And when you feel that,
00:04:41 I think, the combination of dehumanizing the other side and having uncontrolled power over
00:04:49 other people, I think that doesn’t bring out the most generous aspect of human nature.
00:04:54 So that Nazi soldier, he was a good man. And he was perfectly capable of killing a lot of people,
00:05:08 and I’m sure he did.
00:05:10 But what did the Jewish people mean to the Nazis? What explains the dismissal of Jewish people as unworthy?
00:05:20 Again, it is surprising that it was so extreme,
00:05:25 but there is one thing in human nature, I don’t want to call it evil, the distinction between
00:05:32 the in group and the out group, that is very basic. So that’s built in. The loyalty and
00:05:40 affection towards in group and the willingness to dehumanize the out group, that is in human nature.
00:05:50 I think we probably didn’t need the Holocaust to teach us that. But the Holocaust is
00:05:57 a very sharp lesson of what can happen to people and what people can do.
00:06:05 So the effect of the in group and the out group. It’s clear. Those were people,
00:06:13 you could shoot them. They were not human. There was no empathy, or very, very little empathy left.
00:06:23 So occasionally, there might have been. And very quickly, by the way, the empathy disappeared,
00:06:32 if there was initially. And the fact that everybody around you was doing it,
00:06:39 that completely, the group doing it, and everybody shooting Jews, I think that makes it permissible.
00:06:51 Now, how much, whether it could happen in every culture, or whether the Germans were just
00:07:01 particularly efficient and disciplined, so they could get away with it. It’s an interesting
00:07:10 question. Are these artifacts of history or is it human nature? I think that’s really human
00:07:15 nature. You put some people in a position of power relative to other people, and then they become
00:07:24 less human, they become different. But in general, in war, outside of concentration camps
00:07:32 in World War Two, it seems that war brings out darker sides of human nature, but also the beautiful
00:07:39 things about human nature. Well, I mean, what it brings out is the loyalty among soldiers. I mean,
00:07:49 it brings out the bonding, male bonding, I think is a very real thing that happens. And there is
00:07:57 a certain thrill to friendship, and there is certainly a certain thrill to friendship under
00:08:03 risk and to shared risk. And so people have very profound emotions, up to the point where it gets
00:08:12 so traumatic that little is left. So let’s talk about psychology a little bit. In your book,
00:08:23 Thinking Fast and Slow, you describe two modes of thought, system one, the fast and instinctive,
00:08:31 and emotional one, and system two, the slower, deliberate, logical one. At the risk of asking
00:08:37 Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics
00:08:46 of the two systems for people who have not read your book? Well, I mean, the word system is a bit
00:08:52 misleading, but at the same time it’s also very useful. But what I call system one,
00:09:01 it’s easier to think of it as a family of activities. And primarily, the way I describe it
00:09:09 is there are different ways for ideas to come to mind. And some ideas come to mind automatically,
00:09:17 and the standard example is two plus two, and then something happens to you. And in other cases,
00:09:26 you’ve got to do something, you’ve got to work in order to produce the idea. And my example,
00:09:32 I always give the same pair of numbers, 27 times 14, I think. You have to perform some
00:09:38 algorithm in your head, some steps. Yes, and it takes time. It’s very different. Nothing
00:09:44 comes to mind except something comes to mind, which is the algorithm, I mean, that you’ve got
00:09:50 to perform. And then it’s work, and it engages short term memory, it engages executive function,
00:09:58 and it makes you incapable of doing other things at the same time. So the main characteristic of
00:10:04 system two is that there is mental effort involved, and there is a limited capacity for mental effort,
00:10:10 whereas system one is effortless, essentially. That’s the major distinction.
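[For concreteness, the stepwise computation Kahneman has in mind: 27 × 14 = 27 × 10 + 27 × 4 = 270 + 108 = 378, a sequence of learned steps whose intermediate results must be held in working memory, whereas the answer to 2 + 2 simply appears.]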
00:10:15 So you talk about, you know, how it’s really convenient to talk about two systems,
00:10:21 but you also mentioned just now and in general that there’s no distinct two systems in the brain
00:10:29 from a neurobiological, even from a psychology perspective. But why, from the
00:10:36 experiments you’ve conducted, does there seem to be two emergent modes of thinking? So
00:10:47 at some point, these kinds of systems came into a brain architecture. Maybe mammals share it.
00:10:57 Or do you not think of it at all in those terms that it’s all a mush and these two things just
00:11:01 emerge? Evolutionary theorizing about this is cheap and easy. So the way I think about it
00:11:12 is that it’s very clear that animals have a perceptual system, and that includes an ability
00:11:20 to understand the world, at least to the extent that they can predict, they can’t explain anything,
00:11:27 but they can anticipate what’s going to happen. And that’s a key form of understanding the world.
00:11:34 And my crude idea is that what I call system two, well, system two grew out of this.
00:11:45 And, you know, there is language and there is the capacity of manipulating ideas and the capacity
00:11:51 of imagining futures and of imagining counterfactual things that haven’t happened
00:11:58 and to do conditional thinking. And there are really a lot of abilities that without language
00:12:06 and without the very large brain that we have compared to others would be impossible. Now,
00:12:13 system one is more like what the animals are, but system one also can talk. I mean,
00:12:20 it has language. It understands language. Indeed, it speaks for us. I mean, you know,
00:12:26 I’m not choosing every word as a deliberate process. The words, I have some idea and then
00:12:32 the words come out and that’s automatic and effortless. And many of the experiments you’ve
00:12:39 done show that, listen, system one exists and it does speak for us and we should be careful
00:12:44 about the voice it provides. Well, I mean, you know, we have to trust it because it’s
00:12:55 the speed at which it acts. System two, if we’re dependent on system two for survival,
00:13:01 we wouldn’t survive very long because it’s very slow. Yeah. Crossing the street.
00:13:06 Crossing the street. I mean, many things depend on their being automatic. One very important aspect
00:13:12 of system one is that it’s not instinctive. You use the word instinctive. It contains skills that
00:13:20 clearly have been learned. So that skilled behavior like driving a car or speaking, in fact,
00:13:28 skilled behavior has to be learned. And so it doesn’t, you know, you don’t come equipped with
00:13:35 driving. You have to learn how to drive and you have to go through a period where driving is not
00:13:41 automatic before it becomes automatic. So. Yeah. You construct, I mean, this is where you talk
00:13:48 about heuristic and biases is you, to make it automatic, you create a pattern and then system
00:13:57 one essentially matches a new experience against the previously seen pattern. And when that match
00:14:02 is not a good one, that’s when the cognitive biases, all the mess, happens, but most of the time
00:14:08 it works. Most of the time, the anticipation of what’s going to happen next
00:14:13 is correct. And most of the time the plan about what you have to do is correct. And so most of
00:14:22 the time everything works just fine. What’s interesting actually is that in some sense,
00:14:29 system one is much better at what it does than system two is at what it does. That is there is
00:14:36 that quality of effortlessly solving enormously complicated problems, which clearly exists so
00:14:44 that the chess player, a very good chess player, all the moves that come to their mind are strong
00:14:52 moves. So all the selection of strong moves happens unconsciously and automatically and
00:14:58 very, very fast. And all that is in system one. So system two verifies.
00:15:07 So along this line of thinking, really what we are are machines that construct
00:15:12 a pretty effective system one. You could think of it that way. So we’re not talking about humans,
00:15:19 but if we think about building artificial intelligence systems, robots, do you think
00:15:26 all the features and bugs that you have highlighted in human beings are useful
00:15:32 for constructing AI systems? So both systems are useful for perhaps instilling in robots?
00:15:39 What is happening these days is that actually what is happening in deep learning is more like
00:15:50 a system one product than like a system two product. I mean, deep learning matches patterns
00:15:57 and anticipate what’s going to happen. So it’s highly predictive. What deep learning
00:16:05 doesn’t have and many people think that this is the critical, it doesn’t have the ability to
00:16:12 reason. So there is no system two there. But I think very importantly, it doesn’t have any
00:16:19 causality or any way to represent meaning and to represent real interactions. So until that is
00:16:27 solved, what can be accomplished is marvelous and very exciting, but limited.
00:16:35 That’s actually really nice to think of current advances in machine learning as essentially
00:16:40 system one advances. So how far can we get with just system one? If we think of deep learning
00:16:46 in artificial intelligence systems? I mean, you know, it’s very clear that DeepMind has already
00:16:52 gone way beyond what people thought was possible. I think the thing that has impressed me most about
00:17:00 the developments in AI is the speed. It’s that things, at least in the context of deep learning,
00:17:07 and maybe this is about to slow down, but things moved a lot faster than anticipated.
00:17:14 The transition from solving chess to solving Go, that’s bewildering how quickly it went.
00:17:25 The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished
00:17:31 that. Now, clearly, there are many problems that you can solve that way, but there are some problems
00:17:41 for which you need something else. Something like reasoning.
00:17:45 Well, reasoning and also, you know, one of the real mysteries, psychologist Gary Marcus, who is
00:17:54 also a critic of AI. I mean, what he points out, and I think he has a point, is that humans learn
00:18:05 quickly. Children don’t need a million examples, they need two or three examples. So, clearly,
00:18:16 there is a fundamental difference. And what enables a machine to learn quickly, what you have
00:18:25 to build into the machine, because it’s clear that you have to build some expectations or
00:18:30 or something in the machine to make it ready to learn quickly. That at the moment seems to be
00:18:38 unsolved. I’m pretty sure that DeepMind is working on it, but if they have solved it, I haven’t heard
00:18:47 yet. They’re trying to actually, them and OpenAI are trying to start to get to use neural networks
00:18:54 to reason. So, assemble knowledge. Of course, causality is, temporal causality, is out of
00:19:02 reach to most everybody. You mentioned the benefit of System 1 is essentially that it’s
00:19:09 fast, allows us to function in the world.
00:19:10 Fast and skilled, yeah.
00:19:13 It’s skill.
00:19:13 And it has a model of the world. You know, in a sense, I mean, the early phase of
00:19:19 AI attempted to model reasoning. And they were moderately successful, but, you know, reasoning
00:19:29 by itself doesn’t get you much. Deep learning has been much more successful in terms of, you know,
00:19:37 what they can do. But now, it’s an interesting question, whether it’s approaching its limits.
00:19:43 What do you think?
00:19:44 I think absolutely. So, I just talked to Yann LeCun. He mentioned, you know, so he thinks
00:19:51 that the limits, we’re not going to hit the limits with neural networks, that ultimately,
00:19:57 this kind of System 1 pattern matching will start to look like System 2 without significant
00:20:06 transformation of the architecture. So, I’m more with the majority of the people who think that,
00:20:12 yes, neural networks will hit a limit in their capability.
00:20:16 On the one hand, I have heard him say essentially that, you know,
00:20:22 what they have accomplished is not a big deal, that they have just touched the surface, that basically,
00:20:28 you know, they can’t do unsupervised learning in an effective way. But you’re telling me that he
00:20:35 thinks that the current, within the current architecture, you can do causality and reasoning?
00:20:41 So, he’s very much a pragmatist in a sense that’s saying that we’re very far away,
00:20:47 that there’s still, I think, this idea that he describes, that we can only see
00:20:54 one or two mountain peaks ahead and there might be either a few more after or
00:20:59 thousands more after. Yeah, so that kind of idea.
00:21:01 I heard that metaphor.
00:21:03 Yeah, right. But nevertheless, he doesn’t see the final answer looking fundamentally different from what
00:21:13 we currently have. So, neural networks being a huge part of that.
00:21:18 Yeah, I mean, that’s very likely because pattern matching is so much of what’s going on.
00:21:26 And you can think of neural networks as processing information sequentially.
00:21:30 Yeah, I mean, you know, there is an important aspect to, for example, you get systems that
00:21:39 translate and they do a very good job, but they really don’t know what they’re talking about.
00:21:45 And for that, I’m really quite surprised. For that, you would need an AI that has sensation,
00:21:55 an AI that is in touch with the world.
00:21:58 Yes, self awareness and maybe even something that resembles consciousness, those kinds of ideas.
00:22:04 Certainly awareness of, you know, awareness of what’s going on so that the words have meaning
00:22:10 or can get, are in touch with some perception or some action.
00:22:16 Yeah, so that’s a big thing for Yann, what he refers to as grounding to the physical space.
00:22:23 So we’re talking about the same thing.
00:22:26 Yeah, so how do you ground?
00:22:29 I mean, the grounding, without grounding, then you get a machine that doesn’t know what
00:22:35 it’s talking about because it is talking about the world ultimately.
00:22:40 The question, the open question is what it means to ground. I mean, we’re very
00:22:44 human centric in our thinking, but what does it mean for a machine to understand what it means
00:22:50 to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans
00:22:57 have all of these elements? It’s a very, it’s an open question.
00:23:02 You know, I’m not sure about having a body, but having a perceptual system,
00:23:05 having a body would be very helpful too. I mean, if you think about human, mimicking human,
00:23:12 you know, but having a perception that seems to be essential so that you can build,
00:23:20 you can accumulate knowledge about the world. So if you can imagine a human completely paralyzed,
00:23:28 and there’s a lot that the human brain could learn, you know, with a paralyzed body.
00:23:33 So if we got a machine that could do that, that would be a big deal.
00:23:38 And then the flip side of that, something you see in children and something that in the machine
00:23:44 learning world is called active learning, is being able to play with the world.
00:23:52 How important for developing System 1 or System 2 do you think it is to play with the world?
00:23:59 To be able to interact with the world?
00:24:00 A lot of what you learn is you learn to anticipate the outcomes of your actions. I mean,
00:24:08 you can see that how babies learn it, you know, with their hands, how they learn, you know,
00:24:15 to connect, you know, the movements of their hands with something that clearly is something
00:24:20 that happens in the brain and the ability of the brain to learn new patterns. So, you know,
00:24:28 it’s the kind of thing that you get with artificial limbs, that you connect it and then people learn
00:24:34 to operate the artificial limb, you know, really impressively quickly, at least from what I hear.
00:24:44 So we have a system that is ready to learn the world through action.
00:24:49 At the risk of going into way too mysterious of a land,
00:24:52 what do you think it takes to build a system like that? Obviously, we’re very far from understanding
00:25:00 how the brain works, but how difficult is it to build this mind of ours?
00:25:08 You know, I mean, I think that Yann LeCun’s answer, that we don’t know how many mountains
00:25:13 there are, I think that’s a very good answer. I think that, you know, if you look at what Ray
00:25:20 Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic
00:25:28 than that, and actually Demis Hassabis is, and Yann is, and so the people who are actually doing the
00:25:35 work are fairly realistic, I think. To maybe phrase it another way,
00:25:41 from a perspective not of building it, but from understanding it,
00:25:44 how complicated are human beings in the following sense? You know, I work with autonomous vehicles
00:25:52 and pedestrians, so we tried to model pedestrians. How difficult is it to model a human being,
00:26:00 their perception of the world, the two systems they operate under, sufficiently to be able to
00:26:06 predict whether the pedestrian is going to cross the road or not?
00:26:09 I’m, you know, I’m fairly optimistic about that, actually, because what we’re talking about
00:26:18 is a huge amount of information that every vehicle has, and that feeds into one system,
00:26:26 into one gigantic system. And so anything that any vehicle learns becomes part of what the whole
00:26:33 system knows. And with a system multiplier like that, there is a lot that you can do.
00:26:41 So human beings are very complicated, and the system is going to make mistakes, but human
00:26:48 makes mistakes. I think that they’ll be able to, I think they are able to anticipate pedestrians,
00:26:56 otherwise a lot would happen. They’re able to, you know, they’re able to get into a roundabout
00:27:04 and into traffic, so they must know both to expect or to anticipate how people will react
00:27:14 when they’re sneaking in. And there’s a lot of learning that’s involved in that.
00:27:18 Currently, the pedestrians are treated as things that cannot be hit, and they’re not
00:27:28 treated as agents with whom you interact in a game theoretic way. So, I mean, it’s not,
00:27:37 it’s a totally open problem, and every time somebody tries to solve it, it seems to be harder
00:27:41 than we think. And nobody’s really tried to seriously solve the problem of that dance,
00:27:46 because I’m not sure if you’ve thought about the problem of pedestrians, but you’re really
00:27:52 putting your life in the hands of the driver.
00:27:54 You know, there is a dance, there’s part of the dance that would be quite complicated,
00:28:00 but for example, when I cross the street and there is a vehicle approaching, I look the driver
00:28:05 in the eye, and I think many people do that. And, you know, that’s a signal that I’m sending,
00:28:13 and I would be sending that signal to an autonomous vehicle, and it had better understand
00:28:18 it, because it means I’m crossing.
00:28:20 So, and there’s another thing you do, that actually, so I’ll tell you what you do,
00:28:26 because we watched, I’ve watched hundreds of hours of video on this, is when you step
00:28:31 in the street, you do that before you step in the street, and when you step in the street,
00:28:35 you actually look away.
00:28:36 Look away.
00:28:36 Yeah. Now, what is that? What that’s saying is, I mean, you’re trusting that the car that
00:28:45 hasn’t slowed down yet will slow down.
00:28:48 Yeah. And you’re telling him, I’m committed. I mean, this is like in a game of chicken,
00:28:53 so I’m committed, and if I’m committed, I’m looking away. So, there is, you just have
00:28:59 to stop.
00:29:00 So, the question is whether a machine that observes that needs to understand mortality.
00:29:06 Here, I’m not sure that it’s got to understand so much as it’s got to anticipate. So, and
00:29:17 here, but you know, you’re surprising me, because here I would think that maybe you
00:29:24 can anticipate without understanding, because I think this is clearly what’s happening in
00:29:30 playing go or in playing chess. There’s a lot of anticipation, and there is zero understanding.
00:29:35 Exactly.
00:29:36 So, I thought that you didn’t need a model of the human and a model of the human mind
00:29:46 to avoid hitting pedestrians, but you are suggesting that actually…
00:29:50 There you go, yeah.
00:29:51 You do. Then it’s a lot harder than I thought.
00:29:56 And I have a follow up question to see where your intuition lies. It seems that almost
00:30:02 every robot human collaboration system is a lot harder than people realize. So, do you
00:30:10 think it’s possible for robots and humans to collaborate successfully? We talked a little
00:30:17 bit about semi autonomous vehicles, like in the Tesla autopilot, but just in tasks in
00:30:23 general. If you think we talked about current neural networks being kind of system one,
00:30:30 do you think those same systems can borrow humans for system two type tasks and collaborate
00:30:40 successfully?
00:30:40 Well, I think that in any system where humans and the machine interact, the human
00:30:49 will be superfluous within a fairly short time. That is, if the machine is advanced
00:30:55 enough so that it can really help the human, then it may not need the human for a long
00:31:01 time. Now, it would be very interesting if there are problems that for some reason the
00:31:08 machine cannot solve, but that people could solve. Then you would have to build into the
00:31:14 machine an ability to recognize that it is in that kind of problematic situation and
00:31:22 to call the human. That cannot be easy without understanding. That is, it must be very difficult
00:31:30 to program a recognition that you are in a problematic situation without understanding
00:31:38 the problem.
00:31:39 That’s very true. In order to understand the full scope of situations that are problematic,
00:31:47 you almost need to be smart enough to solve all those problems.
00:31:51 It’s not clear to me how much the machine will need the human. I think the example of
00:32:01 chess is very instructive. I mean, there was a time at which Kasparov was saying that human
00:32:06 machine combinations will beat everybody. Even Stockfish doesn’t need people and Alpha
00:32:13 Zero certainly doesn’t need people.
00:32:15 The question is, just like you said, how many problems are like chess and how many
00:32:20 problems are not like chess? Every problem probably in the end is like chess. The question
00:32:27 is, how long is that transition period?
00:32:29 That’s a question I would ask you. Autonomous vehicle, just driving, is probably a lot more
00:32:38 complicated than Go to solve that problem. Because it’s open. That’s not surprising to
00:32:47 me because there is a hierarchical aspect to this, which is recognizing a situation
00:32:58 and then within the situation bringing up the relevant knowledge. For that hierarchical
00:33:09 type of system to work, you need a more complicated system than we currently have.
00:33:15 A lot of people think, because as human beings, this is probably one of the cognitive biases,
00:33:22 they think of driving as pretty simple because they think of their own experience. This is
00:33:28 actually a big problem for AI researchers or people thinking about AI because they evaluate
00:33:36 how hard a particular problem is based on very limited knowledge, based on how hard
00:33:43 it is for them to do the task. And then they take for granted, maybe you can speak to that
00:33:49 because most people tell me driving is trivial and humans in fact are terrible at driving
00:33:56 is what people tell me. And I see humans and humans are actually incredible at driving
00:34:02 and driving is really terribly difficult. Is that just another element of the effects
00:34:08 that you’ve described in your work on the psychology side?
00:34:13 No, I mean, I haven’t really, I would say that my research has contributed nothing to
00:34:22 understanding the ecology and to understanding the structure of situations and the complexity
00:34:27 of problems. So all we know, it’s very clear that Go is endlessly complicated,
00:34:38 but it’s very constrained. And in the real world, there are far fewer constraints and
00:34:46 many more potential surprises.
00:34:49 So you say that’s obvious, but it’s not always obvious to people, right? So when you think
00:34:54 about…
00:34:55 Well, I mean, you know, people thought that reasoning was hard and perceiving was easy,
00:35:02 but you know, they quickly learned that actually modeling vision was tremendously complicated
00:35:09 and modeling, even proving theorems was relatively straightforward.
00:35:15 To push back on that a little bit on the quickly part, it took several decades to learn that
00:35:22 and most people still haven’t learned that. I mean, our intuition, of course, AI researchers
00:35:28 have, but if you drift a little bit outside the specific AI field, the intuition is still
00:35:34 that perception is easy to solve.
00:35:36 No, I mean, that’s true. Intuitions, the intuitions of the public haven’t changed
00:35:41 radically. And they are, as you said, they’re evaluating the complexity of problems by how
00:35:48 difficult it is for them to solve the problems. And that’s got very little to do with the
00:35:55 complexities of solving them in AI.
00:35:58 How do you think, from the perspective of an AI researcher, we deal with the intuitions
00:36:06 of the public? So in trying to think, arguably, the combination of hype, investment, and the
00:36:15 public intuition is what led to the AI winters. I’m sure that same could be applied to tech
00:36:21 or that the intuition of the public leads to media hype, leads to companies investing
00:36:29 in the tech, and then the tech doesn’t make the companies money. And then there’s a crash.
00:36:36 Is there a way to educate people to fight the, let’s call it system one thinking?
00:36:43 In general, no. I think that’s the simple answer. And it’s going to take a long time
00:36:54 before the understanding of what those systems can do becomes public knowledge. And then
00:37:09 the expectations, there are several aspects that are going to be very complicated. The
00:37:20 fact that you have a device that cannot explain itself is a major, major difficulty. And we’re
00:37:29 already seeing that. I mean, this is really something that is happening. So it’s happening
00:37:35 in the judicial system. So you have systems that are clearly better at predicting parole
00:37:43 violations than judges, but they can’t explain their reasoning. And so people don’t want
00:37:54 to trust them.
00:37:56 We seem to, in system one, even use cues to make judgments about our environment. So on
00:38:05 this explainability point, do you think humans can explain stuff?
00:38:11 No, but I mean, there is a very interesting aspect of that. Humans think they can explain
00:38:20 themselves. So when you say something and I ask you, why do you believe that? Then reasons
00:38:28 will occur to you. But actually, my own belief is that in most cases, the reasons have very
00:38:35 little to do with why you believe what you believe. So that the reasons are a story that
00:38:41 comes to your mind when you need to explain yourself. But people traffic in those explanations.
00:38:50 I mean, human interaction depends on those shared fictions and the stories that
00:38:56 people tell themselves.
00:38:58 You just made me actually realize and we’ll talk about stories in a second. That not to
00:39:05 be cynical about it, but perhaps there’s a whole movement of people trying to do explainable
00:39:11 AI. And really, we don’t necessarily need that; AI doesn’t need to explain itself.
00:39:19 It just needs to tell a convincing story.
00:39:21 Yeah, absolutely.
00:39:23 It doesn’t necessarily, the story doesn’t necessarily need to reflect the truth as it
00:39:29 might, it just needs to be convincing. There’s something to that.
00:39:32 You can say exactly the same thing in a way that sounds cynical or doesn’t sound cynical.
00:39:38 Right.
00:39:39 But the objective of having an explanation is to tell a story that will be acceptable
00:39:48 to people. And for it to be acceptable, and to be robustly acceptable, it has to have
00:39:56 some elements of truth. But the objective is for people to accept it.
00:40:04 It’s quite brilliant, actually. But so, on the stories that we tell, sorry to
00:40:11 ask you the question that most people know the answer to, but you talk about two
00:40:18 selves in terms of how life is lived, the experiencing self and the remembering self. Can
00:40:24 you describe the distinction between the two?
00:40:26 Well, sure. I mean, the, there is an aspect of, of life that occasionally, you know, most
00:40:33 of the time we just live and we have experiences and they’re better and they’re worse and it
00:40:38 goes on over time. And mostly we forget everything that happens or we forget most of what happens.
00:40:45 Then occasionally you, when something ends or at different points, you evaluate the past
00:40:56 and you form a memory and the memory is schematic. It’s not that you can roll a film of an interaction.
00:41:03 You construct, in effect, the elements of a story about an, about an episode. So there
00:41:12 is the experience and there is the story that is created about the experience. And that’s
00:41:18 what I call the remembering. So I had the image of two selves. So there is a self that
00:41:24 lives and there is a self that evaluates life. Now the paradox and the deep paradox in that
00:41:32 is that we have one system or one self that does the living, but the other system, the
00:41:41 remembering self is all we get to keep. And basically decision making and, and everything
00:41:49 that we do is governed by our memories, not by what actually happened. It’s, it’s governed
00:41:55 by, by the story that we told ourselves or by the story that we’re keeping. So that’s,
00:42:02 that’s the distinction.
00:42:03 I mean, there’s a lot of brilliant ideas about the pursuit of happiness that come out of
00:42:08 that. What are the properties of happiness which emerge from a remembering self?
00:42:14 There are, there are properties of how we construct stories that are really important.
00:42:19 So that I studied a few, but, but a couple are really very striking. And one is that
00:42:29 in stories, time doesn’t matter. There’s a sequence of events or there are highlights
00:42:37 or not. And, and how long it took, you know, they lived happily ever after or three years
00:42:45 later or something. It, time really doesn’t matter. And in stories, events matter, but
00:42:53 time doesn’t. That, that leads to a very interesting set of problems because time is all we got
00:43:03 to live. I mean, you know, time is the currency of life. And yet time is not represented basically
00:43:11 in evaluated memories. So that, that creates a lot of paradoxes that I’ve thought about.
00:43:18 Yeah. They’re fascinating. But if you were to give advice on how one lives a happy life
00:43:27 based on such properties, what’s the optimal?
00:43:33 You know, I gave up, I abandoned happiness research because I couldn’t solve that problem.
00:43:38 I couldn’t, I couldn’t see. And in the first place, it’s very clear that if you do talk
00:43:46 in terms of those two selves, then that what makes the remembering self happy and what
00:43:51 makes the experiencing self happy are different things. And I, I asked the question of, suppose
00:43:59 you’re planning a vacation and you’re just told that at the end of the vacation, you’ll
00:44:04 get an amnesic drug, so you remember nothing. And they’ll also destroy all your photos.
00:44:10 So there’ll be nothing. Would you still go to the same vacation? And, and it’s, it turns
00:44:20 out we go to vacations in large part to construct memories, not to have experiences, but to
00:44:26 construct memories. And it turns out that the vacation that you would want for yourself,
00:44:32 if you knew, you will not remember is probably not the same vacation that you will want for
00:44:38 yourself if you will remember. So I have no solution to these problems, but clearly those
00:44:46 are big issues.
00:44:47 And you’ve talked about, you’ve talked about sort of how many minutes or hours you spend
00:44:53 thinking about the vacation. It’s an interesting way to think about it because that’s how you really
00:44:58 experience the vacation outside of being in it. But there’s also a modern, I don’t
00:45:03 know if you think about this or interact with it. There’s a modern way to, um, magnify the
00:45:11 remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people
00:45:17 live life for the picture that you take, that you post somewhere. And now thousands of people
00:45:24 share it, and potentially millions. And then you can relive it even much more
00:45:29 than just those minutes. Do you think about that magnification much?
00:45:34 You know, I’m too old for social networks. I, you know, I’ve never seen Instagram, so
00:45:41 I cannot really speak intelligently about those things. I’m just too old.
00:45:46 But it’s interesting to watch the exact effects you’ve described.
00:45:49 It will make a very big difference. I mean, it will also make a difference.
00:45:55 And that I don’t know whether, uh, it’s clear that in some ways the devices that serve us
00:46:06 supplant functions. So you don’t have to remember phone numbers. You don’t have,
00:46:12 you really don’t have to know facts. I mean, the number of conversations I’m involved with,
00:46:19 somebody says, well, let’s look it up. Uh, so it’s, in a way, it’s made conversations…
00:46:27 Well, it means that it’s much less important to know things. You know, it used to be very
00:46:33 important to know things. This is changing. So the requirements of that, that we have
00:46:43 for ourselves and for other people are changing because of all those supports and because,
00:46:50 and I have no idea what Instagram does, but it’s, uh, well, I’ll tell you, I wish I could
00:46:57 just have my remembering self enjoy this conversation, but I’ll get to enjoy it
00:47:03 even more by watching it and then talking to others. It’ll be about
00:47:08 a hundred thousand people, as scary as this is to say, who will listen to or watch this, right?
00:47:14 It changes things. It changes the experience of the world that you seek out experiences
00:47:20 which could be shared in that way. It’s in, and I haven’t seen, it’s, it’s the same effects
00:47:25 that you described. And I don’t think the psychology of that magnification has been
00:47:30 described yet because it’s a new world.
00:47:33 But the sharing, there was a, there was a time when people read books and, uh, and,
00:47:43 and you could assume that your friends had read the same books that you read. So there
00:47:51 was kind of invisible sharing. There was a lot of sharing going on and there was a lot
00:47:57 of assumed common knowledge and, you know, that was built in. I mean, it was obvious
00:48:03 that you had read the New York Times. It was obvious that you had read the reviews. I mean,
00:48:09 so a lot was taken for granted that was shared. And, you know, when there were, when there
00:48:17 were three television channels, it was obvious that you’d seen one of them, probably the same one.
00:48:26 So sharing was always there. It was just different.
00:48:32 At the risk of, uh, inviting mockery from you, let me say that I’m also a fan of Sartre
00:48:40 and Camus and existentialist philosophers. And, um, I’m joking of course about mockery,
00:48:47 but from the perspective of the two selves, what do you think of the existentialist philosophy
00:48:54 of life? So trying to really emphasize the experiencing self as the proper way to, or
00:49:03 the best way to live life.
00:49:05 I don’t know enough philosophy to answer that, but it’s not, uh, you know, the emphasis on,
00:49:13 on experience is also the emphasis in Buddhism.
00:49:16 Yeah, right. That’s right.
00:49:18 So, uh, that’s, you just have got to experience things and not to evaluate and not
00:49:27 to pass judgment and not to score, not to keep score. So, uh,
00:49:33 If, when you look at the grand picture of experience, you think there’s something to
00:49:37 that, that one, one of the ways to achieve contentment and maybe even happiness is letting
00:49:44 go of any of the things, any of the procedures of the remembering self.
00:49:51 Well, yeah, I mean, I think, you know, if one could imagine a life in which people don’t
00:49:58 score themselves, uh, it feels as if that would be a better life, as if the self scoring
00:50:05 and, you know, the how am I doing kind of question, uh, is not a very happy thing to have.
00:50:18 But I got out of that field because I couldn’t solve that problem and, and that was because
00:50:25 my intuition was that the experiencing self, that’s reality.
00:50:31 But then it turns out that what people want for themselves is not experiences. They want
00:50:36 memories and they want a good story about their life. And so you cannot have a theory
00:50:41 of happiness that doesn’t correspond to what people want for themselves. And when I, when
00:50:47 I realized that this, this was where things were going, I really sort of left the field
00:50:53 of research.
00:50:54 Do you think there’s something instructive about this emphasis of reliving memories in
00:51:01 building AI systems. So currently artificial intelligence systems are more like experiencing
00:51:09 self in that they react to the environment. There’s some pattern formation like a learning
00:51:16 so on, but you really don’t construct memories, uh, except in reinforcement learning every
00:51:23 once in a while that you replay over and over.
00:51:25 Yeah, but you know, that in principle would not be…
00:51:30 Do you think that’s useful? Do you think it’s a feature or a bug of human beings that we,
00:51:36 that we look back?
00:51:37 Oh, I think that’s definitely a feature. That’s not a bug. I mean, you, you have to look back
00:51:43 in order to look forward. So, uh, without, without looking back, you couldn’t, you couldn’t
00:51:50 really intelligently look forward.
00:51:53 You’re looking for the echoes of the same kind of experience in order to predict what
00:51:57 the future holds.
00:51:58 Yeah.
00:51:59 So Viktor Frankl in his book, Man’s Search for Meaning, I’m not sure if you’ve
00:52:05 read it, describes his experience at the concentration camps during World War II as
00:52:10 a way to describe that finding identifying a purpose in life, a positive purpose in life
00:52:18 can save one from suffering. First of all, do you connect with the philosophy that he
00:52:23 describes there?
00:52:28 Not really. I mean, the, so I can, I can really see that somebody who has that feeling of
00:52:37 purpose and meaning and so on, that, that could sustain you. Uh, I in general don’t
00:52:44 have that feeling and I’m pretty sure that if I were in a concentration camp, I’d give
00:52:50 up and die, you know? So he talks, he is, he is a survivor.
00:52:56 Yeah.
00:52:57 And, you know, he survived with that. And I’m, and I’m not sure how essential to survival
00:53:04 this sense is, but I do know when I think about myself that I would have given up. Oh,
00:53:12 this isn’t going anywhere. And there is, there is a sort of character that, that, that manages
00:53:20 to survive in conditions like that. And then because they survive, they tell stories and
00:53:26 it sounds as if they survive because of what they were doing. We have no idea. They survived
00:53:31 because of the kind of people that they are, and the kind of people who survive
00:53:36 tell themselves stories of a particular kind. So I’m not, uh,
00:53:41 So you don’t think seeking purpose is a significant driver in our being?
00:53:46 Oh, I mean, it’s, it’s a very interesting question because when you ask people whether
00:53:52 it’s very important to have meaning in their life, they say, oh yes, that’s the most important
00:53:56 thing. But when you ask people, what kind of a day did you have? And, and you know,
00:54:03 what were the experiences that you remember? You don’t get much meaning. You get social
00:54:10 experiences. Then, uh, and some people say that, for example, in taking care of children, you
00:54:21 know, the fact that they are your children and you’re taking
00:54:25 care of them, uh, makes a very big difference. I think that’s entirely true. Uh, but it’s
00:54:34 more because of a story that we’re telling ourselves, which is a very different story
00:54:40 when we’re taking care of our children or when we’re taking care of other things.
00:54:45 Jumping around a little bit in doing a lot of experiments, let me ask a question. Most
00:54:50 of the work I do, for example, is in the, in the real world, but most of the clean good
00:54:56 science that you can do is in the lab. So that distinction, do you think we can understand
00:55:04 the fundamentals of human behavior through controlled experiments in the lab? If we talk
00:55:12 about pupil diameter, for example, it’s much easier to do when you can control lighting
00:55:18 conditions, right? So when we look at driving, lighting variation destroys almost completely
00:55:27 your ability to use pupil diameter. But in the lab for, as I mentioned, semi autonomous
00:55:34 or autonomous vehicles in driving simulators, we can’t, we don’t capture true, honest, uh,
00:55:43 human behavior in that particular domain. So what’s your intuition? How much of human
00:55:49 behavior can we study in this controlled environment of the lab? A lot, but you’d have to verify
00:55:56 it, you know, that your, your conclusions are basically limited to the situation, to
00:56:03 the experimental situation. Then you have to jump the big inductive leap to the real
00:56:09 world. Uh, so, and that’s the flair. That’s where the difference, I think, between
00:56:17 the good psychologists and others that are mediocre is, in the sense that your experiment
00:56:25 captures something that’s important and something that’s real and others are just running experiments.
00:56:33 So what is that like, the birth of an idea to its development in your mind to something
00:56:39 that leads to an experiment? Is that similar to maybe what Einstein or a good physicist
00:56:44 does? Is it your intuition? You basically use your intuition to build up.
00:56:48 Yeah, but I mean, you know, it’s, it’s very skilled intuition. I mean, I just had that
00:57:00 experience actually. I had an idea that turns out to be a very good idea a couple of days
00:57:00 ago and, and you, and you have a sense of that building up. So I’m working with a collaborator
00:57:08 and he essentially was saying, you know, what, what are you doing? What’s, what’s going on?
00:57:14 And I was, I really, I couldn’t exactly explain it, but I knew this is going somewhere, but
00:57:21 you know, I’ve been around that game for a very long time. And so I can, you, you develop
00:57:26 that anticipation that yes, this, this is worth following up. That’s part of the skill.
00:57:34 Is that something you can reduce to words in describing a process in the form of advice
00:57:41 to others?
00:57:42 No.
00:57:43 Follow your heart, essentially.
00:57:45 I mean, you know, it’s, it’s like trying to explain what it’s like to drive. It’s not,
00:57:51 you’ve got to break it apart and it’s not.
00:57:54 And then you lose.
00:57:55 And then you lose the experience.
00:57:58 You mentioned collaboration. You’ve written about your collaboration with Amos Tversky
00:58:05 that this is you writing, the 12 or 13 years in which most of our work was joint were years
00:58:10 of interpersonal and intellectual bliss. Everything was interesting. Almost everything
00:58:16 was funny. And there was the recurrent joy of seeing an idea take shape. So many times in
00:58:22 those years, we shared the magical experience of one of us saying something, which the other
00:58:27 one would understand more deeply than the speaker had done. Contrary to the old laws
00:58:32 of information theory, it was common for us to find that more information was received
00:58:38 than had been sent. I have almost never had the experience with anyone else. If you have
00:58:43 not had it, you don’t know how marvelous collaboration can be.
00:58:49 So let me ask a perhaps a silly question. How does one find and create such a collaboration?
00:58:58 That may be asking like, how does one find love?
00:59:01 Yeah, you have to be lucky. And I think you have to have the character for that because
00:59:10 I’ve had many collaborations. I mean, none were as exciting as with Amos, but I’ve had
00:59:17 and I’m having just very good ones. So it’s a skill. I think I’m good at it. Not everybody is good
00:59:27 at it. And then it’s the luck of finding people who are also good at it.
00:59:32 Is there advice in a form for a young scientist who also seeks to violate this law of information
00:59:39 theory?
00:59:48 I really think it’s so much luck is involved. And in those really serious collaborations,
00:59:59 at least in my experience, are a very personal experience. And I have to like the person
01:00:06 I’m working with. Otherwise, I mean, there is that kind of collaboration, which is like
01:00:13 an exchange, a commercial exchange of giving this, you give me that. But the real ones
01:00:21 are interpersonal. They’re between people who like each other and who like making each
01:00:28 other think and who like the way that the other person responds to your thoughts. You
01:00:34 have to be lucky.
00:59:37 But I already noticed that even just me showing up here, you’ve quickly started digging
01:00:43 in on a particular problem I’m working on and already new information started to emerge.
01:00:49 Is that a process, just the process of curiosity of talking to people about problems and seeing?
01:00:56 I’m curious about anything to do with AI and robotics. And I knew you were dealing with
01:01:03 that. So I was curious.
01:01:05 Just follow your curiosity. Jumping around on the psychology front, the dramatic sounding
01:01:13 terminology of replication crisis, but really just this effect that, at times,
01:01:24 studies are not fully generalizable. They don’t…
01:01:29 You are being polite. It’s worse than that.
01:01:33 Is it? So I’m actually not fully familiar with how bad it is, right? So what
01:01:39 do you think is the source? Where do you think?
01:01:41 I think I know what’s going on actually. I mean, I have a theory about what’s going on
01:01:47 and what’s going on is that there is, first of all, a very important distinction between
01:01:55 two types of experiments. And one type is within subject. So it’s the same person has
01:02:03 two experimental conditions. And the other type is between subjects where some people
01:02:09 are this condition, other people are that condition. They’re different worlds. And between
01:02:14 subject experiments are much harder to predict and much harder to anticipate. And the reason,
01:02:25 and they’re also more expensive because you need more people. And it’s just, so between
01:02:31 subject experiments is where the problem is. It’s not so much in within subject experiments,
01:02:38 it’s really between. And there is a very good reason why the intuitions of researchers about
01:02:46 between subject experiments are wrong. And that’s because when you are a researcher,
01:02:54 you’re in a within subject situation. That is you are imagining the two conditions and
01:03:00 you see the causality and you feel it. But in the between subject condition, they live
01:03:09 in one condition and the other one is just nowhere. So our intuitions are very weak about
01:03:18 between subject experiments. And that I think is something that people haven’t realized.
01:03:26 And in addition, because of that, we have no idea about the power of manipulations of
01:03:34 experimental manipulations because the same manipulation is much more powerful when you
01:03:42 are in the two conditions than when you live in only one condition. And so the experimenters
01:03:48 have very poor intuitions about between subject experiments. And there is something else which
01:03:56 is very important, I think, which is that almost all psychological hypotheses are true.
01:04:04 That is in the sense that, you know, directionally, if you have a hypothesis that A really causes
01:04:13 B, that it’s not true that A causes the opposite of B. Maybe A just has very little effect,
01:04:21 but hypotheses are true mostly, except mostly they’re very weak. They’re much weaker than
01:04:28 you think when you are having images. So the reason I’m excited about that is that I recently
01:04:38 heard about some friends of mine who essentially funded 53 studies of behavioral
01:04:50 change by 20 different teams of people with a very precise objective of changing the number
01:04:59 of times that people go to the gym. And the success rate was zero. Not one of the 53 studies
01:05:12 worked. Now, what’s interesting about that is those are the best people in the field
01:05:18 and they have no idea what’s going on. So they’re not calibrated. They think that it’s
01:05:24 going to be powerful because they can imagine it, but actually it’s just weak because you
01:05:30 are focusing on your manipulation and it feels powerful to you. There’s a thing that I’ve
01:05:37 written about that’s called the focusing illusion. That is that when you think about something,
01:05:43 it looks very important, more important than it really is.
01:05:48 More important than it really is. But if you don’t see that effect, the 53 studies, doesn’t
01:05:53 that mean you just report that? So what was, I guess, the solution to that?
01:05:59 Well, I mean, the solution is for people to trust their intuitions less or to try out
01:06:07 their intuitions before. I mean, experiments have to be pre registered and by the time
01:06:14 you run an experiment, you have to be committed to it and you have to run the experiment seriously
01:06:20 enough and in a public. And so this is happening. The interesting thing is what happens before
01:06:32 and how do people prepare themselves and how they run pilot experiments. It’s going to
01:06:37 train the way psychology is done and it’s already happening.
01:06:41 Do you have a hope for, this might connect to the study sample size.
01:06:48 Yeah.
01:06:49 Do you have a hope for the internet?
01:06:51 Well, I mean, you know, this is really happening. MTurk, everybody’s running experiments on
01:06:59 MTurk and it’s very cheap and very effective.
01:07:03 Do you think that changes psychology essentially? Because you’re thinking you cannot run 10,000
01:07:09 subjects.
01:07:10 Eventually it will. I mean, I, you know, I can’t put my finger on how exactly, but it’s,
01:07:18 that’s been true in psychology with whenever an important new method came in, it changes
01:07:24 the field. So, and MTurk is really a method because it makes it very much easier to do
01:07:33 something, to do some things.
01:07:35 Undergrad students will ask me, you know, how big a neural network should
01:07:40 be for a particular problem. So let me ask you an equivalent question: how many
01:07:49 subjects does a study have to have for it to have a conclusive result?
01:07:53 Well, it depends on the strength of the effect. So if you’re studying visual perception or
01:08:00 the perception of color, many of the classic results in color perception were
01:08:08 done on three or four people, and I think one of them was colorblind, or at least partly colorblind.
01:08:14 But in vision, you know, it’s highly reliable. You don’t need many people or a lot of replications
01:08:24 for some types of neurological experiments. When you’re studying weaker phenomena, and
01:08:35 especially when you’re studying them between subjects, then you need a lot more subjects
01:08:41 than people have been running. That’s one of the things that is happening
01:08:47 in psychology now: the statistical power of experiments is increasing rapidly.
01:08:54 Does the between subject problem go away as the number of subjects goes to infinity?
01:08:59 Well, I mean, you know, goes to infinity is exaggerated, but the standard number
01:09:06 of subjects for an experiment in psychology was 30 or 40. And for a weak effect, that’s
01:09:15 simply not enough. And you may need a couple of hundred. I mean, it’s that sort of order
01:09:25 of magnitude.
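(Editor’s note: a back-of-the-envelope check, not from the conversation, of why 30 or 40 subjects is simply not enough for a weak between-subject effect while a couple of hundred may be: a standard normal-approximation power calculation for a two-sided, two-sample comparison. The effect sizes are assumed illustrative values, and n_per_group is a hypothetical helper.)

```python
# Rough power calculation; illustrative only, standard library (Python 3.8+).
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate subjects needed per group for a two-sided, two-sample test.

    effect_size is Cohen's d (mean difference in standard-deviation units).
    Normal approximation: n ~ 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2.
    """
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2

# Assumed effect sizes: a strong perceptual effect vs. progressively weaker effects.
for d in (0.8, 0.5, 0.3, 0.2):
    print(f"d = {d}: about {round(n_per_group(d))} subjects per group")
# d = 0.8 -> ~25 per group, on the order of the traditional small sample;
# d = 0.3 -> ~174 and d = 0.2 -> ~392 per group, i.e. a couple of hundred or more for weak effects.
```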
01:09:28 What are the major disagreements in theories and effects that you’ve observed throughout
01:09:35 your career that still stand today? You’ve worked in several fields, but what still is
01:09:42 out there as a major disagreement that pops into your mind?
01:09:47 I’ve had one extreme experience of, you know, controversy with somebody who really doesn’t
01:09:54 like the work that Amos Tversky and I did. And he’s been after us for 30 years or more,
01:10:01 at least.
01:10:02 Do you want to talk about it?
01:10:03 Well, I mean, his name is Gerd Gigerenzer. He’s a well-known German psychologist. And
01:10:10 that’s the one controversy, and it’s been unpleasant. And no, I don’t particularly
01:10:18 want to talk about it.
01:10:21 But are there open questions, even in your own mind, every once in a while? You
01:10:25 know, we talked about semi-autonomous vehicles. In my own mind, I see what the data says,
01:10:31 but I’m also constantly torn. Do you have things where you or your studies have found something,
01:10:38 but you’re also intellectually torn about what it means? And there are maybe disagreements
01:10:44 within your own mind about particular things.
01:10:47 I mean, you know, one of the things that is interesting is how difficult it is
01:10:52 for people to change their mind. Essentially, you know, once they are committed, people
01:11:00 just don’t change their mind about anything that matters. And that is surprising, but
01:11:05 it’s true about scientists. So the controversy that I described, you know, that’s been going
01:11:12 on for like 30 years and it’s never going to be resolved. And you build a system and you live
01:11:19 within that system, and other systems of ideas look foreign to you, and there is
01:11:27 very little contact and very little mutual influence. That happens a fair amount.
01:11:33 Do you have hopeful advice or a message on that? Thinking about science, thinking about
01:11:41 politics, thinking about things that have impact on this world, how can we change our
01:11:47 mind?
01:11:49 I think that, I mean, on things that matter, which are really political or
01:11:56 religious, people just don’t change their mind, by and large, and there’s
01:12:04 very little that you can do about it. What does happen is when leaders change
01:12:13 their minds. So, for example, the American public doesn’t really believe in
01:12:19 climate change, doesn’t take it very seriously. But if some religious leaders decided this
01:12:26 is a major threat to humanity, that would have a big effect. So that we have the opinions
01:12:34 that we have, not because we know why we have them, but because we trust some people and
01:12:39 we don’t trust other people. And so it’s much less about evidence than it is about stories.
01:12:49 So one way to change your mind isn’t at the individual level: it’s that the leaders
01:12:55 of the communities you look up to change, the stories change, and therefore your mind changes with
01:12:59 them. So there’s a guy named Alan Turing who came up with the Turing test. What do you think
01:13:08 is a good test of intelligence? Perhaps we’re drifting into a topic that we’re maybe philosophizing
01:13:18 about, but what do you think is a good test for intelligence, for an artificial intelligence
01:13:22 system?
01:13:23 Well, the standard definition of artificial general intelligence is that it can do anything
01:13:32 that people can do, and it can do it better. What we are seeing is that in many domains,
01:13:39 you have domain specific devices or programs or software, and they beat people easily in
01:13:51 a specified way. What we are very far from is that general ability, general purpose intelligence.
01:14:04 In machine learning, people are approaching something more general. I mean, AlphaZero
01:14:08 was much more general than AlphaGo, but it’s still extraordinarily narrow and
01:14:18 specific in what it can do. So we’re quite far from something that can, in every domain,
01:14:28 think like a human except better.
01:14:30 So the Turing test has been criticized: it’s natural language conversation that is
01:14:36 too simplistic, easy to quote-unquote pass under the constraints specified. What aspect
01:14:44 of conversation would impress you if you heard it? Is it humor? What would impress the heck
01:14:52 out of you if you saw it in conversation?
01:14:55 Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just
01:15:06 factual conversation, which I think is easy. And allusions would be interesting and metaphors
01:15:17 would be interesting. I mean, but new metaphors, not practiced metaphors. So there is a lot
01:15:25 that would be sort of impressive that is completely natural in conversation, but that you really
01:15:33 wouldn’t expect.
01:15:34 Does the possibility of creating a human level intelligence or superhuman level intelligence
01:15:40 system excite you, scare you? How does it make you feel?
01:15:47 I find the whole thing fascinating. Absolutely fascinating.
01:15:51 So exciting.
01:15:52 I think. And exciting. It’s also terrifying, you know, but I’m not going to be around
01:16:00 to see it. And so I’m curious about what is happening now, but I also know that predictions
01:16:09 about it are silly. We really have no idea what it will look like 30 years from now.
01:16:16 No idea.
01:16:18 Speaking of silly, bordering on the profound, let me ask the question of, in your view,
01:16:26 what is the meaning of it all? The meaning of life, for us descendants of great apes that
01:16:32 we are. Why, what drives us as a civilization, as human beings, as a force behind everything
01:16:40 that you’ve observed and studied? Is there any answer, or is it all just a beautiful mess?
01:16:49 There is no answer that I can understand, and I’m not actively looking for
01:16:58 one.
01:16:59 Do you think an answer exists?
01:17:02 No. There is no answer that we can understand. I’m not qualified to speak about what we cannot
01:17:08 understand, but I know that we cannot understand reality, you know. I mean, there
01:17:17 are a lot of things that we can do. I mean, you know, gravitational waves, that’s a
01:17:22 big moment for humanity. And when you imagine that ape, you know, being able to go back
01:17:29 to the Big Bang, that’s… but…
01:17:34 But the why.
01:17:35 Yeah, the why.
01:17:36 It’s bigger than us.
01:17:37 The why is hopeless, really.
01:17:40 Danny, thank you so much. It was an honor. Thank you for speaking today.
01:17:43 Thank you.
01:17:44 Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash
01:17:49 App. Download it, use code LexPodcast, you’ll get $10 and $10 will go to FIRST, a STEM education
01:17:56 nonprofit that inspires hundreds of thousands of young minds to become future leaders and
01:18:01 innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast,
01:18:08 follow on Spotify, support it on Patreon, or simply connect with me on Twitter.
01:18:13 And now, let me leave you with some words of wisdom from Daniel Kahneman.
01:18:19 Intelligence is not only the ability to reason, it is also the ability to find relevant material
01:18:24 in memory and to deploy attention when needed.
01:18:29 Thank you for listening and hope to see you next time.